Speaking at PSConf EU 2020

I’m proud to announce that I will be speaking at PSConf EU 2020 in Hannover, Germany. The conference runs from 2 June 2020 to 5 June 2020 and brings together some of the titans of the PowerShell community and members of the PowerShell team from Microsoft. 

This is an incredible event packed with fantastic, deep dive content. Check out the amazing schedule! Head on over to the site and register now!

This year I have two sessions!

On Thursday, 2 June at 13:00 – I’m presenting “Linux OS Fundamentals for the PowerShell Pro”.

Here’s the abstract:

PowerShell and SQL Server are now available on Linux, and management wants you to leverage this shift in technology to more effectively manage your systems. But you’re a Windows admin? Don’t fear! It’s just an operating system. It has all the same components Windows has, and in this session, we’ll show you that. We will look at the Linux operating system architecture and show you how to interact with and manage a Linux system. By the end of this session, you’ll be ready to go back to the office and get started working with Linux.

In this session, we’ll cover the following:
– Service control
– Package installation
– System resource management (CPU, disk and memory)
– Using PowerShell to interact with Linux systems 
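To give a flavor of what that looks like in practice, here’s a hedged sketch of the equivalent Linux commands on a Debian/Ubuntu-style system (the service and package names are just examples):

```shell
# Service control (systemd) -- roughly the services.msc / Get-Service equivalent
systemctl status ssh --no-pager || true   # query a service; restart with: sudo systemctl restart ssh

# Package installation (apt on Debian/Ubuntu; yum/dnf on RHEL-family)
apt-cache policy openssh-server || true   # check a package; install with: sudo apt-get install -y <package>

# System resource management (CPU, disk, and memory)
df -h             # disk usage per filesystem
free -h || true   # memory usage
uptime || true    # load averages -- CPU pressure at a glance
```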

On Friday, 3 June at 11:00 – I’m presenting “Using PowerShell Core Remoting in Cross-Platform Environments”.

Here’s the abstract:

PowerShell Core is about choice and the transport layer for Remoting is one of those choices. In this session, we’ll look at Remoting in cross-platform environments, installing and configuring OpenSSH and how we can leverage Remoting to really scale up our administrative capabilities.

In this session, we’ll cover the following:
– Cross-platform Remoting use cases
– Configuring SSH based Remoting
– Troubleshooting Remoting


Speaking at Data Grillen 2020

I’m proud to announce that I will be speaking at Data Grillen 2020. The conference runs from 28 May 2020 through 29 May 2020.

This is an incredible event packed with fantastic content, speakers, bratwurst, and beer!

Check out the amazing schedule (and when I say check out the amazing schedule, I really mean it. Some of the world’s best Data Platform speakers are going to be there).

On Thursday, May 28th at 15:00 – I’m presenting “Containers – Day 2” in the Handschuh room.

Here’s the abstract:

You’ve been working with containers in development for a while, benefiting from the ease and speed of the deployments. Now it’s time to extend your container-based data platform’s capabilities for your production scenarios.

In this session, we’ll look at how to build custom containers, enabling you to craft a container image for your production system’s needs. We’ll also dive deeper into operationalizing your container-based data platform and learn how to provision advanced disk topologies, seed larger databases, implement resource control and understand performance concepts.

By the end of this session, you will learn what it takes to build containers and make them production ready for your environment.

My good friend, and container expert, Andrew Pruski (@dbafromthecold) will be presenting “SQL Server and Kubernetes” in the same room just before me at 13:30, be sure to come to both sessions for a deep dive into running SQL Server in Containers and Kubernetes.

Prost! 

Speaking at PowerShell Summit 2020!

I’m proud to announce that I will be speaking at PowerShell + DevOps Global Summit 2020. The conference runs from April 27th through April 30th. This is an incredible event packed with fantastic content and speakers. Check out the amazing schedule! Everything you need to know about attending is in this excellent brochure right here!

This year I have two sessions!

On Wednesday, April 29th at 09:00AM – I’m presenting “Inside Kubernetes – An Architectural Deep Dive”.

Here’s the abstract:

In this session we will introduce Kubernetes and deep dive into each component and its responsibility in a cluster. We will also look at and demonstrate higher-level abstractions such as Services, Controllers, Deployments, and Jobs and how they can be used to ensure the desired state of an application deployed in Kubernetes. By the end of this session, you will understand what’s needed to put your applications in production in a Kubernetes cluster.

Session Objectives

  • Understand Kubernetes cluster architecture
  • Understand Services, Controllers, and Deployments
  • Designing Production-Ready Kubernetes Clusters
  • Learn to run PowerShell in Kubernetes Jobs.

I look forward to seeing you there.

Speaking at SQLBits 2020

I’m proud to announce that I will be speaking at SQLBits! I had the absolute pleasure of speaking at SQLBits last year for the first time and saw firsthand how great this event is, and I cannot wait to get back and speak again! And this year, I have two sessions!!! One on building and deploying container-based applications in Kubernetes and the other on deploying SQL Server in Kubernetes.

If you haven’t been to SQLBits before, what are you waiting for? Sign up now!

 

SQL Bits Excel London

Here are the details for my sessions!

Practical Container Scenarios in Azure – April 2 2020 – 12:40PM

You’ve heard the buzz about containers and Kubernetes, now let’s start your journey towards rapidly deploying and scaling your container-based applications in Azure. In this session, we will introduce containers and the container orchestrator Kubernetes. Then we’ll dive into how to build a container image, push it into our Azure Container Registry and deploy it to our Azure Kubernetes Services cluster. Once deployed, we’ll learn how to keep our applications available and how to scale them using Kubernetes.

Key topics introduced

  • Building a container based application
  • Publishing containers to Azure Container Registry
  • Deploying Azure Kubernetes Services Clusters
  • Scaling our container-based applications in Azure Kubernetes Services
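As a taste of that workflow, here’s a hedged sketch of the build, push, deploy, and scale flow using the Azure CLI and kubectl; the registry, resource group, cluster, and image names are all placeholders:

```shell
# Sketch of the build -> push -> deploy -> scale flow (all names are examples).
# Wrapped in a function so nothing runs on paste; invoke with: deploy_webapp
deploy_webapp() {
  # Build the image and push it to Azure Container Registry
  az acr login --name myregistry
  docker build -t myregistry.azurecr.io/webapp:v1 .
  docker push myregistry.azurecr.io/webapp:v1

  # Create an AKS cluster that can pull from that registry, then deploy and scale
  az aks create --resource-group demo-rg --name demo-aks --node-count 2 --attach-acr myregistry
  az aks get-credentials --resource-group demo-rg --name demo-aks
  kubectl create deployment webapp --image=myregistry.azurecr.io/webapp:v1
  kubectl scale deployment webapp --replicas=3
}
```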

Deploying SQL Server in Kubernetes – April 3 2020 – 4:50PM

Are you thinking about running SQL Server in Kubernetes and don’t know where to start…are you wondering what you really need to know? If so, then this is the session for you! When deploying SQL Server In Kubernetes key considerations include data persistency, Pod configuration, resource management, and high availability/disaster recovery scenarios. In this session, we’ll look closely at each of these elements and learn how to run SQL Server in Kubernetes.

Learning Objectives

  • Deploying SQL Server in Kubernetes
  • Allocating Persistent Data Storage and configuring advanced disk topologies
  • SQL Server Specific Pod Configuration
  • Near zero-downtime upgrades
  • High availability and Disaster Recovery Scenarios 

Be sure to come to both sessions to learn how to build and deploy container-based applications in Kubernetes and also how to deploy SQL Server in Kubernetes!

Speaking at SQLIntersection Orlando 2020

I’m very pleased to announce that I will be speaking at SQL Intersection April 2020!  This is my first time speaking at SQL Intersection and I’m very excited to be doing so!

Speaking at SQL Intersection means so much to me because in 2014 I got my first exposure to the SQL Server community via SQLskills and their training. Then, to follow up on their training workshops, I attended my very first IT conference, SQL Intersection, and now I get to come back as a speaker. Let’s just say, I’m a little excited!!!

Now as for the sessions…lots of content here on SQL Server on Linux, Containers and Kubernetes…check them out! Click here to register!

Full Day Workshop

Kubernetes Zero to Hero: Installation, Configuration and Application Deployment

Modern application deployment needs to be fast and consistent to keep up with business objectives, and Kubernetes is quickly becoming the standard for deploying container-based applications, fast. In this day-long session, we will start with an architectural overview of a Kubernetes cluster and how it manages application state. Then we will learn how to build a production-ready cluster. With our cluster up and running, we will learn how to interact with our cluster and perform common administrative tasks, then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and how to keep everything up and running.

Workshop Objectives:

  • Introduce Kubernetes Cluster Components
  • Introduce Kubernetes API Objects and Controllers
  • Installing Kubernetes
  • Interacting with your cluster
  • Storing persistent data in Kubernetes
  • Deploying Applications in Kubernetes
  • Deploying SQL Server in Kubernetes
  • High Availability SQL Server scenarios in Kubernetes

General Sessions

Containers – It’s Time to Get on Board

Containers are taking over, changing the way systems are developed and deployed…and that’s not hyperbole. Just imagine if you could deploy SQL Server or even your whole application stack in just minutes. You can do that, leveraging containers! In this session, we’ll get you started on your container journey, learn some common container scenarios and introduce container orchestration with Kubernetes.

In this session we’ll look at

  • Container Fundamentals
  • Common Container Scenarios
  • Running SQL Server in a Container
  • Container Orchestration with Kubernetes

Containers – Continued!

You’ve been working with containers in development for a while, benefiting from the ease and speed of the deployments. Now it’s time to extend your container-based data platform’s capabilities for your production scenarios.
In this session, we’ll look at how to build custom containers, enabling you to craft a container image for your production system’s needs. We’ll also dive deeper into operationalizing your container-based data platform and learn how to provision advanced disk topologies, seed larger databases, implement resource control and understand performance concepts.

By the end of this session, you will learn what it takes to build containers and make them production ready for your environment.

  • Custom container builds with Features
  • Advanced disk configurations
  • Backups/restores
  • Seeding larger databases
  • Backup restore into the container from a mounted volume
  • Resource control
  • Container Restart Policy
  • Container based performance concepts

Linux OS Fundamentals for the SQL Admin

Do you manage SQL Server but have developers using Linux? It’s time to take the leap to understand and communicate better with your Linux peers! You might be a Windows / SQL Server Admin but both SQL Server and PowerShell are now available on Linux. You can manage ALL of these technologies more effectively now. Don’t fear! Linux is just an operating system! While it feels different, it still has all the same components as Windows! In this session, I’ll show you that. We will look at the Linux operating system architecture and show you how to interact with and manage a Linux system. By the end of this session, you’ll be ready to go back to the office and get started working with Linux with a fundamental understanding of how it works.

Monitoring Linux Performance for the SQL Server Admin

Taking what you learned in our Fundamentals session one step further, we will continue and focus on the performance data you’re used to collecting on Windows! We’ll dive into SQLPAL and how the Linux architecture and internals enable high performance for your SQL Server. We’ll look at the core system components of CPU, disk, memory, and networking, monitoring techniques for each, and some of the new tools available, from DMVs to DBFS. By the end of this session you’ll be ready to go back to the office with a solid understanding of performance monitoring for Linux systems and SQL Server on Linux.

In this session we’ll cover the following

  • System resource management concepts, CPU, disk, memory and networking
  • Introduce SQLPAL architecture and internals and how its design enables high performance for SQL Server on Linux
  • Baselining and benchmarking 

 


New Pluralsight Course – Configuring and Managing Kubernetes Storage and Scheduling

My new course “Configuring and Managing Kubernetes Storage and Scheduling” is now available on Pluralsight here! Check out the trailer here, or if you want to dive right in, go here! This course offers practical tips from my experiences managing Kubernetes Clusters and workloads for Centino Systems clients.

This course targets IT professionals that design and maintain Kubernetes and container-based solutions. The course can be used both by the IT pro learning new skills and by the system administrator or developer preparing to use Kubernetes on premises and in the cloud, and it is the fourth course in my Kubernetes Administration Learning Path.

Let’s take your Kubernetes administration and configuration skills to the next level and get you started now!

The modules of the course are:

  • Configuring and Managing Storage in Kubernetes – In this module, we will introduce the need for persistent storage in container based applications and then introduce Kubernetes storage objects that provide those services. We’ll dive into the storage lifecycle and how Pods use Persistent Volumes and Persistent Volume Claims to consume storage. We’ll look closely at the types of PVs available and controlling access to PVs with access modes. Once we have the fundamentals down we will learn how to use both Static and Dynamic Provisioning to map Pods to their underlying storage. 
  • Configuration as Data – Environment Variables, Secrets and ConfigMaps – In this demo-heavy module, we’ll look at how to configure Pods using environment variables, secrets and ConfigMaps. We’ll begin with Pod configuration using environment variables and learn how to leverage secrets to securely configure Pod/container-based applications. Next we’ll see how we can use ConfigMaps to decouple application and Pod configurations in our Pods.
  • Managing and Controlling the Kubernetes Scheduler – In Kubernetes, the Scheduler is responsible for scheduling Pods to worker Nodes in the cluster. In this module, we will learn how scheduling works and how we can influence the scheduler to help meet application requirements, including how to place Pods on specific nodes (or subsets of nodes) in the cluster.
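One handy trick related to the ConfigMaps material: kubectl can generate a manifest locally without touching a cluster using a client-side dry run. A hedged sketch (the ConfigMap name and keys are examples):

```shell
# Generate (not apply) a ConfigMap manifest locally with client-side dry-run;
# pipe the output to `kubectl apply -f -` when you're ready to create it.
if command -v kubectl >/dev/null 2>&1; then
  kubectl create configmap app-settings \
    --from-literal=LOG_LEVEL=info \
    --from-literal=CACHE_TTL=300 \
    --dry-run=client -o yaml
fi
```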


Check out the course at Pluralsight!

My Desktop Setup

Every once in a while when I’m recording a Pluralsight course, I’ll take a photo of my desk to let people see the behind-the-scenes of the process. Well, my friend Steve Jones (@way0utwest) encouraged me to write a desk setup post…so here we go!


Desk

Autonomous SmartDesk 2 – Home Office

Most standing desks come at a much higher price point; this one lands somewhere between $379 and $500 depending on the features. While it’s pretty minimalist, it gets the job done. I have several presets for various heights depending on the current task I’m performing. One tip for those who record audio: I always stand when recording, as it helps me with enunciation and with controlling the tone of my voice. A standing desk is a must if you’re going to be recording production quality audio. I think there are some health benefits to standing desks too. :)

Compute Power

Main Laptop – 2018 MacBook Pro – 2.9 GHz i9 – 32GB RAM – 1TB SSD

This is my primary computing device. I don’t have a workstation; I do almost everything on this computer, so the specs are pretty strong. The only upgrade I didn’t get in this laptop was the 2TB hard drive. I offload archive content to a 2012 Mac Mini that has 2TB of disk space.

Backup Laptop – 2018 MacBook Air – 1.6 GHz i5 – 16GB RAM – 512GB SSD

As a consultant and trainer, downtime isn’t acceptable for my business. So I need to be able to reach into my laptop bag, plug in and go, and that’s the intent of this machine. It has enough horsepower to run all of my critical functions and training workshops in the event my MacBook Pro dies. It’s a touch slower, but it gets the job done. I keep all of my content synced between the two laptops with OneDrive.

File Server – 2012 MacMini – 2.5GHz i7 – 16GB, 2TB SSD

This computer is ancient as the sea but has served me well. It has 2TB of SSD storage and serves as a local backup target and the place where I archive data.

Monitors

Monitors – Philips 288P6LJEB 28″ Monitor, 4K UHD

I have two 28” monitors, which honestly for me isn’t the best solution. First, when I run the monitors at full 4K resolution I can’t read anything; the font is too small. I didn’t take that into account when I made the purchase :) so I usually operate them at 2560 x 1440…which I can actually read. Further, I generally only use one monitor at a time for day-to-day work; there’s enough real estate at that resolution to get things done. When recording or presenting, as you can see in my setup in the photo above, I’ll set the external monitor to 1280 x 720 and drive demos on that monitor, using my MacBook Pro’s display for my presenter’s view.

When it comes to connectivity, we’re in a transition in the Mac universe where everything is going USB-C. So I have the monitors plugged into my laptop via USB-C for video using this cable. The monitors have a USB 3.0 hub and I plug in my USB 3.0 devices into that, so my recording rig and the desktop charging gear all plug into the monitor’s hub…then the monitor’s hub plugs into my laptop via a USB 3.0 to USB-C converter cable. There really isn’t a need to buy one of those expensive hubs. As devices get swapped out I opt for USB-C or Bluetooth.

Desk Arms –  Loctek D5D Dual Monitor Arm Desk Monitor Mounts Fits 10″-27″ Monitors, Gas Spring LCD Arm

These work well and give me a ton of desktop real estate back when compared with monitor stands. If you notice, the supported range for the arms is 10” to 27”…yea, I messed that up, as my monitors are 28”, so there’s a little overlap on the left monitor there. But it works out ok in practice.

Recording Gear

OK, for the recording stuff: my main goal is to achieve the highest audio quality possible while recording, without being an audio engineer. You can fix a lot of issues in post-production, but it’s always best to never let those issues get into your recorded audio in the first place. The main reason is that good editing is expensive…in both time and money. So with the rig below, I’m able to achieve my goal of good quality audio with a simple setup. Background noise is literally non-existent.

Microphone – Shure SM7B

I switched to this microphone in April of 2018 and have never looked back. My first microphone was a Blue Snowball ICE. This was a great microphone for getting started. But as recording became a bigger part of the business…I wanted to step up the audio quality and also reduce my editing time, so I switched to the Shure. A pop filter is included with this microphone.

USB Interface – Scarlett Solo USB

The Shure microphone is a professional device requiring inline power and has an XLR interface. The Scarlett Solo is a pre-amp device that boosts the audio signal and then connects to my monitor via USB 3.0. This device is simple and effective. I only have to remember to turn it on. 

Mic Activator – Cloud Microphones Cloudlifter CL-1

This device boosts the audio signal from the microphone into the pre-amp enabling you to have a cleaner signal going into your pre-amp without having to crank up the gain a bunch.

Putting this all together, the cabling looks like this:

Shure -> XLR -> Cloudlifter -> XLR -> Scarlett -> USB 3.0 (monitor) -> MacBook Pro

Boom Arm – RODE PSA 1 Swivel Mount Studio Microphone Boom

My main thing about a boom arm is to buy a quality one that doesn’t have springs. If your arm has springs, then when you bump the mic or your desk, the springs will vibrate and your mic will pick that up. Remember, my goal is to record quality audio the first time…a good boom arm actually contributes to that audio quality. It mounts solidly to the desk, and when bumped or moved it is silent. This boom arm is great, highly recommended.

XLR Cables – Tainston XLR Microphone Cable Male to Female-3 Feet

The Shure requires XLR cables. Don’t skimp on cables, buy good ones. I might revisit this one and get shielded cables as every once in a while if I have my cell phone too close to the recording rig I get a little background noise in the recording. 

Recording Software – Camtasia

I use Camtasia for all recording. It works great for simple recording and editing. I try to keep each project file less than 1 hour in recording length, as it starts to struggle from a performance standpoint when I go longer than that. I don’t do any post-production in Camtasia. I use a professional editor, and he uses Adobe Premiere.

Headphones – Sony Noise Cancelling Headphones WH1000XM3

When recording, wireless headphones are great. Not having a wire means you’re not constantly moving it out of the way or getting caught up in it while recording or while listening to recently recorded audio. Much has been said about the quality of these headphones…they’re great and I highly recommend them.

Input Devices

Keyboard – Microsoft Sculpt

I’ve been using various Microsoft ergonomic keyboards for years. As for this one, I want to ensure my keyboard has the shortest keystroke possible when pressing on the key and this keyboard has that. I also use the native MacBook Pro keyboard which has a similar shallow keystroke.

Mouse – Logitech MX Master 2S Wireless Mouse

When I switched to this MacBook Pro, everything went USB-C. My previous Logitech mouse used a dongle that was USB 3.0. So I got essentially the same mouse, but the Bluetooth version. The mouse can be paired with multiple computers at the same time. There’s some software that enables you to move between the computers seamlessly…well, let’s just say that doesn’t work so well. There’s a little button on the bottom of the mouse that will swap computers. That actually works.

Upgrading SQL Server 2017 Containers to 2019 non-root Containers with Data Volumes – Another Method

Yesterday in this post I described a method to correct permissions when upgrading a SQL Server 2017 container using Data Volumes to 2019’s non-root container on implementations that use the Moby or HyperKit VM. My friend Steve Jones wondered on Twitter if you could do this in one step by attaching a shell (bash) in the 2017 container prior to shutdown. Absolutely…let’s walk through that here in this post. I opted to use an intermediate container in the prior post out of an abundance of caution so that I was not changing permissions on the SQL Server instance directory and all of the data files while they were in use. Technically this is a-ok, but again…just being paranoid there.

Start Up a Container with a Data Volume

Start up a container with a Data Volume (sqldata1) using the 2017 image. This will create the directories and files with root as the owner and group.

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2017-latest
597652b61b22b27ff6d765b48196621a79dd2ffd7798328868d2296c7e953950 

Create a Database

Let’s create a database and confirm it’s there.

sqlcmd -S localhost,1433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD
sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD -W

name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)

Get a Shell into the Container

Now, let’s get a shell into our running container. Logging in as root is great, isn’t it? :) 

docker exec -it sql1 /bin/bash
root@ed9051c6b5f3:/# 

Adjust the Permissions

Now while we’re in the running 2017 container we can adjust the permissions on the instance directory. The user mssql (uid 10001) doesn’t have to exist in the 2017 container. The key to the permissions is using the uid directly.

ls -laR /var/opt/mssql
chgrp -R 0 /var/opt/mssql
chmod -R g=u /var/opt/mssql
chown -R 10001:0 /var/opt/mssql
ls -laR /var/opt/mssql
exit
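If you’d rather not attach an interactive shell at all, the same permission fix can be issued non-interactively with docker exec; this is a hedged sketch assuming the container name sql1 used above:

```shell
# Run the same chgrp/chmod/chown sequence against the running 2017 container
# without an interactive shell; wrapped in a function so nothing runs on paste.
# Invoke with: fix_mssql_perms
fix_mssql_perms() {
  docker exec sql1 chgrp -R 0 /var/opt/mssql
  docker exec sql1 chmod -R g=u /var/opt/mssql
  docker exec sql1 chown -R 10001:0 /var/opt/mssql
}
```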

Stop our Container

Now to start the process of upgrading from 2017 to 2019, we’ll stop and remove the existing container.

docker stop sql1
docker rm sql1
sql1 

Start up a 2019 non-root Container

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2019-GDR1-ubuntu-16.04 

Is Everything OK?

Is our database there? Yep!

sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD
name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)


Upgrading SQL Server 2017 Containers to 2019 non-root Containers with Data Volumes

Recently Microsoft released a non-root SQL Server 2019 container, and that’s now the default if you’re pulling a new container image. But what if you’re using a 2017 container running as root and want to upgrade your system to the SQL Server 2019 container? Well, something’s going to break. As you can see here, my friend Grant Fritchey came across this issue recently and asked for some help on Twitter’s #sqlhelp. This article describes a solution to getting things sorted and running again. The scenario below applies if you’re using a Linux-based SQL Server container on a Windows or Mac host where the container volumes are backed by a Docker Moby or HyperKit virtual machine. If you’re using Linux containers on Linux, you’ll adjust the file system permissions directly.

What’s the issue?

When you start up the 2017 container, the SQL Server (sqlservr) process is running as root (uid 0). Any files created by this process will have the user and group ownership of the root user. Now when we come along later and start up a 2019 container, the sqlservr process is running as the user mssql (uid 10001 by default). This new user doesn’t have permission to open the database files and other files used by SQL Server.

How do we fix this?

The way I fixed this issue is by stopping the SQL Server 2017 container, using another container and attaching the data volumes used by the 2017 container to it, then recursively adjusting the permissions to allow a user with the uid 10001 access to the files in the instance directory /var/opt/mssql. If your databases and log files are in other paths, you’ll have to take that into account when using this process. Once we adjust the permissions, we stop that ubuntu container and start up SQL Server’s 2019 non-root container, and everything should be happy happy. Let’s do it together…

Start Up a Container with a Data Volume

Start up a container with a Data Volume (sqldata1) using the 2017 image. This will create the files with root as the owner and group.

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2017-latest
597652b61b22b27ff6d765b48196621a79dd2ffd7798328868d2296c7e953950 

Create a Database

Let’s create a database and confirm it’s there.

sqlcmd -S localhost,1433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD
sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD -W

name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)

Stop our Container

Now to start the process of upgrading from 2017 to 2019, we’ll stop and remove the existing container.

docker stop sql1
docker rm sql1
sql1 

Start a 2019 non-root Container

Create a new container pointing to that existing Data Volume (sqldata1), this time I’m not using -d so we can attach to stdout and see the error messages on the terminal. Here you can see that the sqlservr process is unable to open a file instance_id.

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
     mcr.microsoft.com/mssql/server:2019-GDR1-ubuntu-16.04

SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
Your master database file is owned by root.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
sqlservr: Unable to open /var/opt/mssql/.system/instance_id: Permission denied (13)
/opt/mssql/bin/sqlservr: Unable to open /var/opt/mssql/.system//instance_id: Permission denied (13)

Since that was a bust, let’s go ahead and delete that container since it’s not usable. 

docker rm sql1
sql1 

Changing Permissions on the Files

Let’s create an intermediate container, in this case using an Ubuntu image, mount that data volume (sqldata1), and then change the permissions on the files SQL Server needs to work with.

docker run \
    --name 'permissionsarehard' \
    -v sqldata1:/var/opt/mssql \
    -it ubuntu:latest

If we look at the permissions of the instance directory (/var/opt/mssql/) we can see the files’ user and group owner are root. This is just a peek at the instance directory; we’ll need to adjust permissions recursively on all of the files SQL Server needs to work with within this directory.

ls -la /var/opt/mssql
/var/opt/mssql:
total 24
drwxr-xr-x 6 root root 4096 Nov 20 13:43 .
drwxr-xr-x 1 root root 4096 Nov 20 13:46 ..
drwxr-xr-x 5 root root 4096 Nov 20 13:43 .system
drwxr-xr-x 2 root root 4096 Nov 20 13:43 data
drwxr-xr-x 2 root root 4096 Nov 20 13:43 log
drwxr-xr-x 2 root root 4096 Nov 20 13:43 secrets

Let’s adjust the permissions on the directories and files sqlservr needs access to…again I want to point out that this is against the default instance directory, which is /var/opt/mssql. If you have files in other locations, they will need their permissions updated too. Check out the Microsoft Docs article here for more information on this.

ls -laR /var/opt/mssql
chgrp -R 0 /var/opt/mssql
chmod -R g=u /var/opt/mssql
chown -R 10001:0 /var/opt/mssql
ls -laR /var/opt/mssql
exit
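A note on the least obvious line above: `chmod g=u` copies the user (owner) permission bits onto the group, which is what lets a member of the root group (gid 0) work with these files. A quick local illustration on a scratch file (GNU stat assumed), rather than the container volume:

```shell
tmp="$(mktemp)"
chmod 640 "$tmp"       # user rw-, group r--, other ---
chmod g=u "$tmp"       # group bits now match the user bits
stat -c '%a' "$tmp"    # prints 660
rm -f "$tmp"
```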

Here’s some output from a directory listing of our instance directory after we’ve made the permissions changes…now the directories have an owner of 10001 and a group owner of root.

ls -la /var/opt/mssql
/var/opt/mssql:
total 24
drwxrwxr-x 6 10001 root 4096 Nov 20 13:43 .
drwxr-xr-x 1 root  root 4096 Nov 20 13:46 ..
drwxrwxr-x 5 10001 root 4096 Nov 20 13:43 .system
drwxrwxr-x 2 10001 root 4096 Nov 20 13:43 data
drwxrwxr-x 2 10001 root 4096 Nov 20 13:43 log
drwxrwxr-x 2 10001 root 4096 Nov 20 13:43 secrets

Let’s start up a 2019 non-root container now

Start up our 2019 container now…should work eh? Woot!

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2019-GDR1-ubuntu-16.04 

Why UID 10001?

Let’s hop into the container now that it’s up and running…and we’ll see sqlservr is running as mssql, which has a uid of 10001. This is the default uid used inside the non-root container. If you’re using a system that doesn’t have this user defined, like the intermediate ubuntu container, you’ll need to adjust permissions using the uid directly. That permission information is written into the directories and files, and when we start up the 2019 container the correct permissions are in place, since the uid of the mssql user matches the uid on the files and directories.

docker exec -it sql1 /bin/bash

ps -aux
USER   PID %CPU %MEM     VSZ    RSS TTY   STAT START TIME COMMAND
mssql    1  8.4  0.3  148820  22768 ?     Ssl  13:49 0:00 /opt/mssql/bin/
mssql    9 96.5  9.3 7470104 570680 ?     Sl   13:49 0:03 /opt/mssql/bin/
mssql  140  2.0  0.0   18220   3060 pts/0 Ss   13:49 0:00 /bin/bash
mssql  148  0.0  0.0   34420   2792 pts/0 R+   13:49 0:00 ps -aux

id mssql
uid=10001(mssql) gid=0(root) groups=0(root)
exit

Is Everything OK?

Is our database there? Yep!

sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD
name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)

Another Method

If you like living on the edge, you can correct the permissions by logging into the running 2017 container prior to shutdown, without using an intermediate container. Check out this post here.


Speaking at PASS Summit 2019!

I’m very pleased to announce that I will be speaking at PASS Summit 2019!  This is my second time speaking at PASS Summit and I’m very excited to be doing so! What’s more, is I get to help blaze new ground with an emerging technology, Kubernetes and how to run SQL Server in Kubernetes!

My session is “Inside Kubernetes – An Architectural Deep Dive”. If you’re just getting started in the container space and want to learn how Kubernetes works and how to deploy SQL Server in Kubernetes, this is the session for you. I hope to see you there!

Inside Kubernetes – An Architectural Deep Dive

Abstract

In this session we will introduce Kubernetes, we’ll deep dive into each component and its responsibility in a cluster. We will also look at and demonstrate higher-level abstractions such as Services, Controllers, and Deployments, and how they can be used to ensure the desired state of an application and data platform deployed in Kubernetes. Next, we’ll look at Kubernetes networking and intercluster communication patterns. With that foundation, we will then introduce various cluster scenarios and high availability designs. By the end of this session, you will understand what’s needed to put your applications and data platform in production in a Kubernetes cluster. 

In addition to my session, be sure to check out the sessions on Kubernetes by my friends Bob Ward and Hamish Watson. I’m certainly going to be at both of these sessions!
