Persisting SQL Server Data in Docker Containers – Part 2

In my previous post, we discussed Docker Volumes and how they have a lifecycle independent of the container, enabling us to service the container image separately from the data inside the container. Now let’s dig into Volumes a little more and learn where Docker actually stores that data on the underlying operating system.

This is the second post in a three part series on Persisting SQL Server Data in Docker Containers. The first post introducing Docker Volumes is here. And the third post on mapping base OS directories directly into containers is here.

Docker Volumes

When working with Docker Volumes, we can use the -v option to create the Volume at the time the container is created, or we can create the Volume ahead of time with the docker volume create command. In the -v parameter below, the left side, sqldata1, is the Volume name and the right side, /var/opt/mssql, is the directory the Volume will be mapped to inside the container.

We can specify the location of the Volume on the underlying OS, but if we don’t, it will be created in the default location defined in your Docker preferences, which is /var/lib/docker/volumes. Let’s go with the default location for now…we’ll cover specifying locations in an upcoming post.
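
If you’d rather create the Volume ahead of time, here’s a minimal sketch; the name sqldata1 matches what we’ll pass to -v in the docker run command below.

docker volume create sqldata1
docker volume ls
DRIVER              VOLUME NAME
local               sqldata1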

docker run \
    --name 'sql19' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2019-latest

On a system where Linux is the base OS running Docker, the container runtime will create Volumes on the base OS’s file system and map them into the container. So as we create Volumes with code like the above, which doesn’t specify a file location on the base OS, they will land in /var/lib/docker/volumes. Well, /var/lib/docker/volumes is quite a Linux’y sounding path, and if I try to browse to that path on my Mac it doesn’t exist. OK, I don’t have a Windows system, but I think it’s a safe bet it’s not there either :)

Let’s use the inspect command to get more information about a Docker resource; here we’re using docker volume inspect on sqldata1.

docker volume inspect sqldata1
[
    {
        "CreatedAt": "2019-09-01T11:51:25Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/sqldata1/_data",
        "Name": "sqldata1",
        "Options": null,
        "Scope": "local"
    }
]

In the output we can see the Mountpoint, which is where the data lives on the underlying OS: /var/lib/docker/volumes/sqldata1/_data. Since I didn’t define a path, Docker put the Volume data in the default location of /var/lib/docker/volumes…but like we said a second ago, that path doesn’t exist on my local filesystem…what’s going on?
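
As an aside, if you want just the Mountpoint and not the whole JSON document, docker volume inspect supports a --format option that takes a Go template; a quick sketch:

docker volume inspect sqldata1 --format '{{ .Mountpoint }}'
/var/lib/docker/volumes/sqldata1/_data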

Docker Volumes on non-Linux Operating Systems

Docker on Mac (and Windows uses the same technique, for now) runs a small Linux VM to provide kernel services to Linux containers. When we create a Docker Volume without specifying a file location, it is created *inside* this VM. So that’s where our Volume data is stored. Let’s keep digging…

Using the technique defined here we can get a shell into that Virtual Machine and browse the file system to find the actual data files. Here’s how you attach a shell to the Docker VM.

screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty

Now let’s decompose this command a bit…screen is a command used to attach a shell to a process; the path (~/Library/Containers/com.docker.docker/Data/vms/0/tty) is under the default disk image location defined in your Docker preferences. The tty file is exposed by a running process that provides a shell interface to the VM supporting our Linux containers. Everything in Linux is a file, so that tty is actually a terminal inside the VM that’s exposed outside the VM on your base operating system’s file system. Screen will ‘open’ that file and start to read from it…and since it exposes a terminal, you get a shell into the VM.
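
One practical note: when you’re done exploring, detach from the screen session rather than killing it, since that tty belongs to the running VM.

# Press Ctrl-a then d to detach from the screen session, leaving the VM running.
# To reattach to the existing session later:
screen -r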

Now that we’re inside the VM, let’s look around and see where our Volume data lives. There we’ll find the actual SQL Server files supporting the instance: the databases and log files.

ls /var/lib/docker/volumes/
metadata.db sqldata1

In the directory listing above, we see a directory for each named Docker Volume; in this case, sqldata1.

ls /var/lib/docker/volumes/sqldata1/_data/
data     log      secrets

Inside of sqldata1, we start to see the directories and files created by SQL Server in the Docker Volume. Remember, SQL Server thinks this is /var/opt/mssql because this path is mapped into the container at that location.

ls /var/lib/docker/volumes/sqldata1/_data/data/
Entropy.bin                     model.mdf
MS_AgentSigningCertificate.cer  modellog.ldf
TestDB1.mdf                     msdbdata.mdf
TestDB1_log.ldf                 msdblog.ldf
master.mdf                      tempdb.mdf
mastlog.ldf                     templog.ldf

And if we look inside the data directory, we’ll see system and user databases.
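
You can confirm the two views line up by listing the same directory from inside the container, where the Volume is mounted at /var/opt/mssql; here’s a quick check using the sql19 container we started above.

docker exec -it sql19 ls /var/opt/mssql/data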

What’s Next? 

So now that we know where Docker Volume data lives when running Linux Containers on a Mac (this same technique applies to Windows too), in our next post we’re going to go a little bit further into this and define a specific location for Docker to store our data on the base OS.

Persisting SQL Server Data in Docker Containers – Part 1

What’s the number one thing a data professional wants to do with their data…keep it around. Let’s talk about running SQL Server in containers using Docker Volumes on a Mac.

This is the first post in a three part series on Persisting SQL Server Data in Docker Containers. The second post on where Docker actually stores your data is here. And the third post on mapping base OS directories directly into containers is here.

The Need for Data Persistency in Containers

A container image is read-only. When an application changes data inside a running container, those writes go to a writable layer. The writable layer plus the read-only container image are brought together by the container runtime and presented to the processes running inside the container as a single file system.

The primary issue with this is that the writeable layer has the lifecycle of the container. If you delete the container, you delete the writeable layer and any data in it. Luckily, Docker gives us a way to decouple the container from its data.

In Figure 1, you can see a container image and its writeable layer. The application inside the container sees this as a single file system. If we delete this container, any data written to the writeable layer will be deleted too.


Figure 1: A container and its writable layer

Docker Volumes

A Docker Volume is a Docker managed resource that is mapped into a defined point in the filesystem inside the container. The primary benefit of using Docker Volumes is that they have a lifecycle that’s independent of a container. This enables you to decouple your application from its state to the point where you can simply throw away the container, replace it with a new container image, start up your application, and point it at your data.

In Figure 2, we have a container, a writeable layer and a Volume. A container will always have a writeable layer, even when a Volume is defined. A Volume is mounted at a specific location in the file system inside the container, and writes to that location go to the Volume. Writes to other parts of the file system go to the writable layer.


Figure 2: A container, its writable layer and a Volume

SQL Server using Docker Volumes

Let’s talk about how we can use Docker Volumes and SQL Server to persist data. If we want to run SQL Server in a container, we will want to decouple our data from the container itself. Doing so enables us to delete the container, replace it, and start up a new one pointing at our existing data. When running in a container, SQL Server stores its data in /var/opt/mssql by default. When the container starts up for the first time, it puts the system databases in that location, and any user databases created will also be placed there by default.

Now, if we don’t use a Volume, that data will be written into the writeable layer of the container, and if we delete the container…we delete our data. We don’t want that, so let’s start up a container with a Volume. To do so, we use the -v option with the docker run command.

docker run \
    --name 'sql17' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2017-latest


In the code above you can see -v sqldata1:/var/opt/mssql specified as part of the docker run command. This creates a Docker Volume named sqldata1 and maps it to /var/opt/mssql inside the container. Now, during this container’s startup, SQL Server will write its data to /var/opt/mssql, which means it’s actually written to the Volume. If we delete this container and replace it, when SQL Server starts up it will see the master database and proceed to initialize the system as defined in master. If there are any user databases defined in master and they’re accessible, they will be brought online too. Let’s try it out…first up, let’s create a user database and query the file information about the databases in this container.

sqlcmd -S localhost,1433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD

sqlcmd -S localhost,1433 -U sa -Q 'SELECT name, physical_name from sys.master_files' -P $PASSWORD -W

name physical_name
---- -------------
master /var/opt/mssql/data/master.mdf
mastlog /var/opt/mssql/data/mastlog.ldf
tempdev /var/opt/mssql/data/tempdb.mdf
templog /var/opt/mssql/data/templog.ldf
modeldev /var/opt/mssql/data/model.mdf
modellog /var/opt/mssql/data/modellog.ldf
MSDBData /var/opt/mssql/data/MSDBData.mdf
MSDBLog /var/opt/mssql/data/MSDBLog.ldf
TestDB1 /var/opt/mssql/data/TestDB1.mdf
TestDB1_log /var/opt/mssql/data/TestDB1_log.ldf

In the code above, we create a database, then query master for information about the databases inside this container. You can see all of the paths are under /var/opt/mssql, which is our Volume.

Container and Data Independence

The -v option created a Docker managed Volume that is independent of the container, so our data lives in the Volume and we can service the container independently of it. So let’s do that…let’s delete the container and start up a new one, and let’s go so far as to use a 2019 image to upgrade SQL Server…that’s cool!

docker stop sql17
docker rm sql17


The code above will stop and then delete the container. When the container is deleted, so is its writeable layer. But we are storing our data in the Volume, and that still exists.

docker volume ls
DRIVER              VOLUME NAME
local               sqldata1

Above we can see there is a Volume using the local driver named sqldata1…this still exists and can be mounted by new containers. The local driver is used to map directories from the base OS into the container. There are other drivers that expose other types of storage into the container. More on this later.
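
As a quick preview, here’s a hedged sketch of creating a Volume with the local driver and explicit mount options; the type, device and o options below use the local driver’s bind-mount support, and /docker/sqldata2 is just a hypothetical path that must already exist on the host.

# /docker/sqldata2 is a hypothetical host path; create it before running this
docker volume create sqldata2 \
    --driver local \
    --opt type=none \
    --opt device=/docker/sqldata2 \
    --opt o=bind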

docker run \
    --name 'sql19' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2019-latest


With this code, we start up a new container and tell it to use the same Volume, mounted at /var/opt/mssql. So when SQL Server starts, it finds the master database; master has the metadata about the instance configuration and any user databases, and we get back to the state we were in previously. Let’s ask SQL Server for a list of databases.

sqlcmd -S localhost,1433 -U sa -Q 'SELECT name, physical_name from sys.master_files' -P $PASSWORD -W

name physical_name
---- -------------
master /var/opt/mssql/data/master.mdf
mastlog /var/opt/mssql/data/mastlog.ldf
tempdev /var/opt/mssql/data/tempdb.mdf
templog /var/opt/mssql/data/templog.ldf
modeldev /var/opt/mssql/data/model.mdf
modellog /var/opt/mssql/data/modellog.ldf
MSDBData /var/opt/mssql/data/MSDBData.mdf
MSDBLog /var/opt/mssql/data/MSDBLog.ldf
TestDB1 /var/opt/mssql/data/TestDB1.mdf
TestDB1_log /var/opt/mssql/data/TestDB1_log.ldf

…and there in the output above, you can see SQL Server is in the same state it was in when the container first ran on the 2017 image. Now we’re on the 2019 image, with access to all of our persisted data independent of the container image.

sqlcmd -S localhost,1433 -U sa -Q 'SELECT @@VERSION' -P $PASSWORD
                                                                                                      
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Microsoft SQL Server 2019 (RC1) - 15.0.1900.25 (X64)
        Aug 16 2019 14:20:53
        Copyright (C) 2019 Microsoft Corporation
        Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)                                                                                                                       
 

Containers have replaced virtual machines for me, and the decoupling of data and computation will have a significant impact on how we design data platforms and systems going forward. In this post, I wanted to highlight how you can use a container with a persistent-state system like SQL Server. In the next post, I’m going to show you where that data actually lives on the underlying operating system.

Microsoft Most Valuable Professional – Data Platform for 2019-2020

Today, I’m proud to announce that I have been renewed as a Microsoft MVP – Data Platform for the 2019-2020 award year, my third MVP award. This is truly an honor and I’m humbled to be included in this group of exceptional data professionals. I really look forward to continuing to work with everyone in the MVP community and continuing to contribute to our unmatched Data Platform community!


What is an MVP?

Here’s the definition according to Microsoft

Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community. They are always on the “bleeding edge” and have an unstoppable urge to get their hands on new, exciting technologies. They have very deep knowledge of Microsoft products and services, while also being able to bring together diverse platforms, products and solutions, to solve real world problems. MVPs make up a global community of over 4,000 technical experts and community leaders across 90 countries and are driven by their passion, community spirit, and quest for knowledge. Above all and in addition to their amazing technical abilities, MVPs are always willing to help others – that’s what sets them apart.

For 2019-2020, I have been named a Data Platform MVP, which means my technical specialization is on data products like SQL Server. The group of people that have received this award is quite small…by my count 332 worldwide and less than 75 in the US. I’m honored to be in this group of talented professionals.

Why am I excited to be an MVP?

Honestly, the primary reason I’m excited to be an MVP is to give back (more). I’ve learned so much from other MVPs, and receiving this award will help me continue to build relationships with other MVPs and Microsoft employees to further help develop the Data Platform itself and the community that surrounds it.

At the start of 2016, I set a goal of becoming an MVP within 5 years. I don’t know why I picked that number, but what I figured was…MVP would be validation of consistent, quality work for our community and recognition of the work I’ve contributed: things like blogging, social media, public speaking and more. You learn a ton by teaching!

The past year has been great, focusing on Containers and Kubernetes and how we can leverage that tech in our modern Data Platform.

To the people that have helped along the way, I thank you all!

Speaking at SQL Saturday Dallas

Speaking at SQLSaturday Dallas!

I’m proud to announce that I will be speaking at SQL Saturday Dallas on May 17th 2018! This one won’t let you down! Check out the amazing schedule!

If you don’t know what SQLSaturday is, it’s a whole day of SQL Server training available to you at no cost!

If you haven’t been to a SQLSaturday, what are you waiting for? Sign up now!

My presentation is “Practical Container Scenarios in Azure”.


Here’s the abstract for the talk

You’ve heard the buzz about containers and Kubernetes, now let’s start your journey towards rapidly deploying and scaling your container-based applications in Azure. In this session, we will introduce containers and the container orchestrator Kubernetes. Then we’ll dive into how to build a container image, push it into our Azure Container Registry and deploy it our Azure Kubernetes Services cluster. Once deployed, we’ll learn how to keep our applications available and how to scale them using Kubernetes.

Key topics introduced:

  • Building container-based applications
  • Publishing containers to Azure Container Registry
  • Deploying Azure Kubernetes Services clusters
  • Scaling our container-based applications in Azure Kubernetes Services

Workshop – Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment

Pre-conference Workshop at SQLSaturday Baton Rouge

I’m proud to announce that I will be presenting an all-day pre-conference workshop at SQL Saturday Baton Rouge on August 16th 2019! This one won’t let you down!

The workshop is “Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment”.


Here’s the abstract for the workshop

Modern application deployment needs to be fast and consistent to keep up with business objectives and Kubernetes is quickly becoming the standard for deploying container-based applications, fast. In this day-long session, we will start with an architectural overview of a Kubernetes cluster and how it manages application state. Then we will learn how to build a production-ready cluster. With our cluster up and running, we will learn how to interact with our cluster, common administrative tasks, then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and how to keep everything up and running.

Session Objectives

  • Introduce Kubernetes Cluster Components
  • Introduce Kubernetes API Objects and Controllers
  • Installing Kubernetes
  • Interacting with your cluster
  • Storing persistent data in Kubernetes
  • Deploying Applications in Kubernetes
  • Deploying SQL Server in Kubernetes
  • High Availability scenarios in Kubernetes

FAQs

How much does it cost?

The full day training event is $125 per attendee.

What can I bring into the event?
WiFi at the location is limited. The workshop will be primarily demonstration based. Code will be made available for download prior to the event if you would like to follow along during the session.

How can I contact the organizer with any questions?
Please feel free to email me with any questions: aen@centinosystems.com

What’s the refund policy?
7 days: Attendees can receive refunds up to 7 days before your event start date.

Do I need to know SQL Server or Kubernetes to attend this workshop?
No, while we will be focusing on deploying SQL Server in Kubernetes, no prior knowledge of SQL Server or Kubernetes is needed. We will build up our Kubernetes skills using SQL Server as the primary application we will deploy.

What are the prerequisites for the workshop?
All examples will be executed at the command line, so proficiency at a command line is required. Platform dependent (Linux/Windows, Cloud CLIs) configurations and commands will be introduced and discussed in the workshop.

New Pluralsight Course – Managing the Kubernetes API Server and Pods

New Pluralsight Course – Managing the Kubernetes API Server and Pods

My new course “Managing the Kubernetes API Server and Pods” is now available on Pluralsight here! Check out the trailer here or if you want to dive right in go here! This course offers practical tips from my experiences managing Kubernetes clusters and workloads for Centino Systems clients.

This course targets IT professionals that design and maintain Kubernetes and container-based solutions. The course can be used both by the IT pro learning new skills and by the system administrator or developer preparing to use Kubernetes on premises and in the Cloud.

Let’s take your Kubernetes administration and configuration skills to the next level and get you started now!

The modules of the course are:

  • Using the Kubernetes API – In this module we will dive into the Kubernetes API, looking closely at the architecture of the API Server and how it exposes and manages Kubernetes API Objects. Then we will learn about API versioning and object maturity. Next, we’ll look at the anatomy of an API request, leading us up to debugging interactions with the API Server.
  • Managing Objects with Labels, Annotations and Namespaces – In this demo-heavy module, we will learn how to organize and interact with resources in Kubernetes using Labels, Annotations, and Namespaces. We will also learn how to use labels to influence Kubernetes operations in Controllers and Pod scheduling.
  • Running and Managing Pods – In this module, we will look at the fundamental unit of work in Kubernetes, the Pod: why the Pod abstraction is needed, and design principles for placing your applications in Pods and running those Pods in your cluster. We’ll examine the Pod lifecycle and how its state impacts application health and availability. We wrap up with how Controllers interact with Pods and how Pods report their health status with readiness probes and liveness probes.


Check out the course at Pluralsight!

 

Speaking at SQLSaturday Atlanta – 845

Speaking at SQLSaturday Atlanta!

I’m proud to announce that I will be speaking at SQL Saturday Atlanta on May 17th 2019! This one won’t let you down! Check out the amazing schedule!

If you don’t know what SQLSaturday is, it’s a whole day of SQL Server training available to you at no cost!

If you haven’t been to a SQLSaturday, what are you waiting for? Sign up now!

My presentation is “Containers – You Better Get on Board!”

SQLSaturday #845 - Atlanta 2019

Here’s the abstract for the talk

Containers are taking over, changing the way systems are developed and deployed…and that’s NOT hyperbole. Just imagine if you could deploy SQL Server or even your whole application stack in just minutes. You can do that, leveraging containers! In this session, we’ll get you started on your container journey learning container fundamentals in Docker, then look at some common container scenarios and introduce deployment automation with Kubernetes.

In this session we’ll look at:

  • Container Fundamentals with Docker
  • Common Container Scenarios
  • Automation with Kubernetes

Prerequisites: Operating system concepts such as command line use and basic networking skills.

Data Persistency and Advanced SQL Server Disk Topologies in Kubernetes

When working with SQL Server in containers and Kubernetes, storage is a key concept. In this post, we’re going to walk through how to deploy SQL Server in Kubernetes with Persistent Volumes for the system and user databases.

One of the key principles of Kubernetes is the ephemerality of Pods. No Pod is ever redeployed; a completely new Pod is created. If a Pod dies, for whatever reason, a new Pod is created in its place, and there is no continuity in the state of that Pod. The newly created Pod goes back to the initial state of the container image defined in the Pod’s spec. This is very valuable for stateless workloads, not so much for stateful workloads like SQL Server.

This means that for a stateful workload like SQL Server, we need to store both configuration and data externally to the Pod to maintain state through the recreation of a Pod. Kubernetes gives us two constructs to do that: environment variables and Persistent Volumes.

Using Environment Variables for Container Configuration

Container-based applications use environment variables for configuration at startup. The SQL Server container has a collection of environment variables that can be used to configure it at container startup. We will leverage two of those in this configuration: MSSQL_DATA_DIR and MSSQL_LOG_DIR, which allow us to define file system locations for user database data and log files. When the SQL Server container is started inside the Pod, it reads the environment variables at runtime and sets its configuration based on those values. We define these variables as part of the Pod Spec; we will cover that configuration below.
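
To see these two variables in action outside of Kubernetes first, here’s a hedged sketch of the same configuration expressed with a plain docker run; the /data and /log paths match what we’ll use in the Pod Spec below.

# Sketch: user database data and log files will land in /data and /log inside the container
docker run \
    -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD='$PASSWORD \
    -e 'MSSQL_DATA_DIR=/data' -e 'MSSQL_LOG_DIR=/log' \
    -d mcr.microsoft.com/mssql/server:2017-latest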

Using Persistent Volumes to Maintain Database State

To persist the state of our SQL Server container, we will configure SQL Server to store its data and log files for both user and system databases on Persistent Volumes.

First, let’s review how SQL Server in a container starts up. During the initial startup, the SQL Server process checks to see if there are any system databases in the default system file location, which is /var/opt/mssql/data. If there are none, the system databases are copied there; if they are already there, no action is taken.

To add persistency to the system databases, and really all of the other components of SQL Server such as the Error Log and other system files, we will configure /var/opt/mssql so that it is backed by a Persistent Volume.

With the system databases on a Persistent Volume, when a Pod is recreated the Persistent Volumes are attached and mounted in the same locations, so when the SQL Server process starts up it sees the system databases and has what it needs to maintain state between Pod creations.

If there are records for user databases in the system databases, SQL Server will start the process of bringing those databases online as well. Certainly, the default location for user databases is /var/opt/mssql/data, but we are going to override that with environment variables for both the data and log directories, placing each on a dedicated Persistent Volume.

Let’s walk through that configuration together. 

Persistent Volume Claims

In this configuration, we will use dynamic storage provisioning. In dynamic provisioning, a Persistent Volume Claim (PVC) is used to request a Persistent Volume (PV) from a Storage Class. In this case, we’ll be using AKS’s managed-premium Storage Class. 
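
You can confirm the Storage Class is available in your AKS cluster before creating the claims:

kubectl get storageclass managed-premium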

Here we define three PVCs, one for each place we want a Persistent Volume: the system files and databases, the user database data files, and the user database log files.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-data"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-system"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-log"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi

Deployment

In the Pod spec for our Deployment, we want to define several elements to support this configuration. 

  • Volumes – define volumes that can be mounted by this Pod. In this case, we’re creating and naming three volumes, backed by the PVCs defined above.
  • volumeMounts – volumes mounted into the container and their mountPath, the location in the container’s file system. This maps the names of the volumes defined above to locations in the file system inside the container.
  • env – due to the ephemerality of the container in the Pod, we need to tell SQL Server at startup that the data and log files will be stored in the specified directories. We are leaving the system databases and files in the default location, which is /var/opt/mssql.

The net effect of this storage configuration is that we map the Persistent Volumes to particular locations in the file system inside the container.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2017-latest'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: 'Y'
        - name: MSSQL_DATA_DIR
          value: '/data'
        - name: MSSQL_LOG_DIR
          value: '/log'
        - name: SA_PASSWORD
          value: 'S0methingS@Str0ng!'
        volumeMounts:
        - name: mssql-system
          mountPath: /var/opt/mssql
        - name: mssql-data
          mountPath: /data
        - name: mssql-log
          mountPath: /log
      volumes:
      - name: mssql-system
        persistentVolumeClaim:
          claimName: pvc-sql-system
      - name: mssql-data
        persistentVolumeClaim:
          claimName: pvc-sql-data
      - name: mssql-log
        persistentVolumeClaim:
          claimName: pvc-sql-log

Service

We’ll front end our SQL Server with a public IP address and a load balancer. 

apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 31433
      targetPort: 1433
  type: LoadBalancer
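
Once the Service has an external address, one way to pull just the public IP out of its status is with kubectl’s jsonpath output; this is a sketch, and later in this post we’ll use a grep/awk variant to do the same thing.

kubectl get service mssql-deployment -o jsonpath='{.status.loadBalancer.ingress[0].ip}'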

Apply the Configuration

Save the code above into a YAML file and deploy it to your Kubernetes cluster with kubectl apply.

kubectl apply -f deployment-advanced-disk.yaml

You’ll get this output

persistentvolumeclaim/pvc-sql-data created
persistentvolumeclaim/pvc-sql-system created
persistentvolumeclaim/pvc-sql-log created
deployment.apps/mssql-deployment created
service/mssql-deployment created

Confirm the Configuration

We can use kubectl get pv to list the Persistent Volumes (PVs) dynamically allocated by our cluster. Here there are three Persistent Volumes. The key here is that the status is Bound, which means they are bound to a PVC. I also want to point out the Reclaim Policy is Delete. This means if the PVC is deleted, the PV will be deleted at a cleanup interval sometime in the future.

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-data     managed-premium            11m
pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-system   managed-premium            11m
pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-log      managed-premium            11m
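
If you’d rather a PV survive the deletion of its PVC, you can flip the Reclaim Policy on a bound PV from Delete to Retain with kubectl patch; a sketch, using one of the generated PV names from the output above:

kubectl patch pv pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5 \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'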

With kubectl get pvc, we get a list of the PVCs in our configuration, one for each we defined above. The key here is that the status is Bound, meaning each is bound to a PV.

kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc-sql-data     Bound    pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m
pvc-sql-log      Bound    pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m
pvc-sql-system   Bound    pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m 

Now let’s use kubectl describe pods to get the deep dive info about our storage configuration and how it’s mapped into the Pod. 

There are a few key places in the output below I want to point you to:

  • Containers: mssql: Environment: you’ll find the two environment variables set for the data and log directories. Configured as /data and /log
  • Mounts: we see the file system location inside the container and the name of the Volumes defined in the Pod Spec
  • Volumes: we see the name of the Volumes, their type, claim name and the read/write status.
  • Events: this is a log of the events for the creation of this Pod. Key here is that sometimes the container will come up prior to the storage being available to the Pod. That’s what the error below is, but it clears itself up and the container is able to start.

kubectl describe pods
Name:               mssql-deployment-df4cf5c4c-nf8lf
Namespace:          default
Priority:           0
PriorityClassName:
Node:               aks-nodepool1-89481420-2/10.240.0.6
Start Time:         Sat, 04 May 2019 07:41:59 -0500
Labels:             app=mssql
                    pod-template-hash=df4cf5c4c
Annotations:
Status:             Running
IP:                 10.244.1.51
Controlled By:      ReplicaSet/mssql-deployment-df4cf5c4c
Containers:
  mssql:
    Container ID:   docker://f2320ae8f94c24fbb04214b903b4a218b82e9548f8d88a95daa7e207eeaa42b4
    Image:          mcr.microsoft.com/mssql/server:2017-latest
    Image ID:       docker-pullable://mcr.microsoft.com/mssql/server@sha256:39554141d307f2d40d2abfc54e3a0eea3aa527e58f616496c6f3ed3245a2e2b1
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 04 May 2019 07:44:21 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      ACCEPT_EULA:                   Y
      MSSQL_DATA_DIR:                /data
      MSSQL_LOG_DIR:                 /log
      SA_PASSWORD:                   S0methingS@Str0ng!
      KUBERNETES_PORT_443_TCP_ADDR:  cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io
      KUBERNETES_PORT:               tcp://cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io
    Mounts:
      /data from mssql-data (rw)
      /log from mssql-log (rw)
      /var/opt/mssql from mssql-system (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z9sbf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  mssql-system:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-system
    ReadOnly:   false
  mssql-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-data
    ReadOnly:   false
  mssql-log:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-log
    ReadOnly:   false
  default-token-z9sbf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z9sbf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age   From                               Message
  ----     ------                  ----  ----                               -------
  Normal   Scheduled               13m   default-scheduler                  Successfully assigned default/mssql-deployment-df4cf5c4c-nf8lf to aks-nodepool1-89481420-2
  Normal   SuccessfulAttachVolume  13m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5"
  Normal   SuccessfulAttachVolume  12m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5"
  Normal   SuccessfulAttachVolume  12m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5"
  Warning  FailedMount             11m   kubelet, aks-nodepool1-89481420-2  Unable to mount volumes for pod "mssql-deployment-df4cf5c4c-nf8lf_default(027c46f7-6e6a-11e9-a433-f659caf6a6f5)": timeout expired waiting for volumes to attach or mount for pod "default"/"mssql-deployment-df4cf5c4c-nf8lf". list of unmounted volumes=[mssql-system mssql-data]. list of unattached volumes=[mssql-system mssql-data mssql-log default-token-z9sbf]
  Normal   Pulled                  11m   kubelet, aks-nodepool1-89481420-2  Container image "mcr.microsoft.com/mssql/server:2017-latest" already present on machine
  Normal   Created                 11m   kubelet, aks-nodepool1-89481420-2  Created container
  Normal   Started                 11m   kubelet, aks-nodepool1-89481420-2  Started container

Creating a Database and Verifying File Location

With this code, we’ll get the IP address for our SQL Server Service, then we’ll create a database and query master_files for a list of data files. Notice I’m defining my service port as 31433, which is what we defined when creating our Service in the earlier step.

SVCIP=$(kubectl get svc mssql-deployment | grep mssql-deployment |  awk '{print $4}')
sqlcmd -S $SVCIP,31433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD
sqlcmd -S $SVCIP,31433 -U sa -Q 'SELECT name,physical_name from sys.master_files' -P $PASSWORD


And we’ll get this output: you can see all of the system databases are backed by /var/opt/mssql, our user database data file is on /data, and its log is on /log. All backed by Persistent Volumes.

master        /var/opt/mssql/data/master.mdf
mastlog       /var/opt/mssql/data/mastlog.ldf
tempdev       /var/opt/mssql/data/tempdb.mdf
templog       /var/opt/mssql/data/templog.ldf
modeldev      /var/opt/mssql/data/model.mdf
modellog      /var/opt/mssql/data/modellog.ldf
MSDBData      /var/opt/mssql/data/MSDBData.mdf
MSDBLog       /var/opt/mssql/data/MSDBLog.ldf
TestDB1       /data/TestDB1.mdf
TestDB1_log   /log/TestDB1_log.ldf

Confirming Persistency

Let’s go ahead and delete our Pod to confirm that when it’s recreated by our Deployment our data is still there. 

kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
mssql-deployment-df4cf5c4c-nf8lf   1/1     Running   0          4d2h

kubectl delete pod mssql-deployment-df4cf5c4c-nf8lf 
pod "mssql-deployment-df4cf5c4c-nf8lf" deleted
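
If you want to watch the Deployment’s ReplicaSet stand up the replacement Pod in real time, add the watch flag; the new Pod will come up with a newly generated name.

kubectl get pods --watch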

Once the Pod is recreated, let’s query master_files to see where our databases are located. You’ll find that the database created in the previous step persisted between Pod creations.

sqlcmd -S $SVCIP,31433 -U sa -Q 'SELECT name,physical_name from sys.master_files' -P $PASSWORD

master        /var/opt/mssql/data/master.mdf
mastlog       /var/opt/mssql/data/mastlog.ldf
tempdev       /var/opt/mssql/data/tempdb.mdf
templog       /var/opt/mssql/data/templog.ldf
modeldev      /var/opt/mssql/data/model.mdf
modellog      /var/opt/mssql/data/modellog.ldf
MSDBData      /var/opt/mssql/data/MSDBData.mdf
MSDBLog       /var/opt/mssql/data/MSDBLog.ldf
TestDB1       /data/TestDB1.mdf
TestDB1_log   /log/TestDB1_log.ldf

OpenSSH Resources for Windows and PowerShell

Had a conversation with a good friend in the SQL Community about OpenSSH and how it fits as a transport layer for PowerShell Remoting. I pointed him towards several resources I have online. So here’s a post aggregating those resources.

If you’re looking to get started with OpenSSH on Linux and Windows systems, check out this PowerShell Summit presentation I did in 2018. This covers OpenSSH in theory and practice.

Session: OpenSSH Internals for PowerShell Pros

  • Remote Access Concepts
  • OpenSSH Architecture
  • Authentication Methods (including Key Based Authentication)
  • OpenSSH Server and Client Configuration

If you’re looking to get started with OpenSSH based PowerShell Remoting check out this session from PowerShell Summit in 2018 co-presented with Richard Siddaway.

Session: PowerShell Remoting: Installing and TroubleShooting in a Multiplatform Environment

  • Installing OpenSSH on Windows and Linux
  • Authentication Users (Including AD Authentication)
  • Setting Up PowerShell Remoting on Windows and Linux
  • Troubleshooting OpenSSH
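
For reference, the key server-side piece for PowerShell Remoting over SSH is registering PowerShell as an SSH subsystem; a minimal sketch of the sshd_config entry on a Linux host, assuming pwsh is installed at /usr/bin/pwsh:

# /etc/ssh/sshd_config (the pwsh path may differ on your system)
Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile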

Shortly after I did the sessions above, OpenSSH was released as a Windows Capability; check out this blog post on how to install OpenSSH at the command line using Add-WindowsCapability. This also applies to Windows Server 2016.

Session: Firewall Evasion and Remote Access with OpenSSH

  • OpenSSH is much more than just remote terminal access to servers; it provides a full suite of remote connectivity methods to your network and its services. In this session, we will look at how to use OpenSSH and its forwarding, tunneling and VPN capabilities so that we can securely reach network services that are behind firewalls and other security boundaries. Common use cases for these techniques are cloud jump boxes, secure access into segmented networks and being able to get remote access and move data around in poorly secured networks…these tips are things that will likely get you some extra attention from your security team.

Blog Post: Configuring Passwordless PowerShell Remoting over SSH

Blog Post: Distributing SSH User Keys via PowerShell

Blog Post: Installing OpenSSH Server on Windows 10

Blog Post: PowerShell Remoting in Multi-Platform Environments using OpenSSH

Using PowerShell in Containers

The vision for PowerShell Core is to be able to run PowerShell anywhere. In this article, I’m going to discuss how you can use Docker Containers to enable just that. We’ll look at running PowerShell in a container, running cmdlets, running different versions of PowerShell at the same time, and also how to build our own “serverless” computing platform.

Let’s address a few reasons why you would want to run PowerShell in a container.

  • Speed and agility – this for me is probably the number one reason to run PowerShell in a container. The PowerShell container images are coming in at around 375MB, which means with a modern Internet connection you’ll be able to pull a PowerShell container image and be up and running in a very short amount of time.
  • Version – there are container images available for every release of PowerShell Core, including preview/release candidate code. With containers, you can run multiple versions of PowerShell Core in a way where they will not conflict with each other.
  • Platform independence – there are container images for Ubuntu, Fedora, Windows Server Core, Nano Server and more. This allows you to be able to consume PowerShell Core regardless of your underlying platform. You can select whichever image you want, pull the container and go. 
  • Testing – if you need to test your scripts across various versions of PowerShell Core, you can pull the container and run the script on the exact version you need. You can have multiple containers on your system running multiple versions of PowerShell and be able to run them all at the same time.
  • Isolation – containers will allow you to have self-contained environments for execution, security, environment, and configuration settings. You can also use this idea to isolate conflicting modules from each other. This is particularly valuable when developing modules and/or cmdlets.

Getting Up and Running

Let’s get started with using PowerShell Core in a container. First up, we will want to pull the Docker container image to our local machine. This will pull the image with the latest tag, which at the time of this post is 6.2.0-ubuntu-18.04.

docker pull mcr.microsoft.com/powershell:latest

With the container image local, let’s go ahead and start up the container. In this first go, I’m going to start up the container with the docker run command and with the --interactive and --tty flags. What these flags do is attach my terminal to the container’s terminal when it starts, so I can use PowerShell Core interactively at the command line.

docker run                    \
        --name "pwsh-latest"  \
        --interactive --tty   \
        mcr.microsoft.com/powershell:latest 

This will get you a PowerShell prompt. I told you this was going to be fast.

PowerShell 6.2.0
Copyright (c) Microsoft Corporation. All rights reserved.
 
https://aka.ms/pscore6-docs
Type 'help' to get help.
 
PS /> 

From that prompt, we can do the normal PowerShell things we need to do. Let’s start our journey like all good PowerShell demos do and run Get-Process. You’ll notice that there is only one process running in the container, and that’s your pwsh session. This is due to the isolation concepts of containers. With this isolation, problems like conflicting modules and settings go away. The container gives your script an isolated execution environment. If you need two conflicting versions of a module, DLL or library to run your workloads or scripts…you can use containers to isolate their execution, giving them the ability to co-exist on the same system.

PS /> Get-Process
 
 NPM(K)    PM(M)      WS(M)     CPU(s)      Id  SI ProcessName
 ------    -----      -----     ------      --  -- -----------
      0     0.00     110.03       2.01       1   1 pwsh

We can use exit to get out of PowerShell. When you exit PowerShell, the container will stop. You can see the status of your container with docker ps -a.

CONTAINER ID        IMAGE                                 COMMAND             CREATED             STATUS                     PORTS               NAMES
8c9160fea43f        mcr.microsoft.com/powershell:latest   "pwsh"              6 minutes ago       Exited (0) 8 seconds ago                       pwsh-latest
 
If you’d like to get back into your container, you can use docker start pwsh-latest -i, where pwsh-latest is the container name we just created and -i is for interactive (we used --interactive earlier). Run that and you’ll land right back at a PowerShell prompt.

Running a cmdlet When Starting a Container

Now, let’s say we want to start our container and non-interactively run a cmdlet right away. We can do that: with the docker run command, we can tell the container to start pwsh and pass a cmdlet into pwsh with the -c parameter, and that cmdlet will be executed. Let’s check out how.

docker run mcr.microsoft.com/powershell:latest pwsh -c "&{Get-Process}"
 
 NPM(K)    PM(M)      WS(M)     CPU(s)      Id  SI ProcessName
 ------    -----      -----     ------      --  -- -----------
      0     0.00      81.35       0.54       1   1 pwsh
 
From a performance standpoint, I want to point out the time it takes to do this work; we can use the time command to help us with that. Less than two seconds to start the container, start pwsh, execute our cmdlet and shut down the container.

time docker run mcr.microsoft.com/powershell:latest pwsh -c "&{Get-Process}"
 
 NPM(K)    PM(M)      WS(M)     CPU(s)      Id  SI ProcessName
 ------    -----      -----     ------      --  -- -----------
      0     0.00      81.61       0.54       1   1 pwsh
 
real 0m1.901s
user 0m0.038s
sys  0m0.086s
 
Now let’s say I want to test a cmdlet against a specific version of PowerShell Core, perhaps even a Release Candidate. Let’s change the tag from latest to preview and Docker will pull that container, start it up, and we immediately have an environment for testing. This could be leveraged for script testing, cmdlet testing, module testing and so on. In the output below, you can see the preview tag points to the 6.2.0-rc.1 version of PowerShell Core.

docker run mcr.microsoft.com/powershell:preview pwsh -c "&{Get-Host}"
 
Name             : ConsoleHost
Version          : 6.2.0-rc.1
…output omitted...
 
Each time we’ve started a container so far in this post and then exited pwsh, the container shut down but was still on our system. We can see those containers with docker ps -a, and we can restart any of them by using the docker start command mentioned previously.

docker ps -a
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS                     PORTS               NAMES
d8d8d27ec7be        mcr.microsoft.com/powershell:preview  "pwsh -c &{Get-Host}"    4 seconds ago       Exited (0) 2 seconds ago                       pensive_poincare
5eace290b47c        mcr.microsoft.com/powershell:latest   "pwsh -c &{Get-Proce…"   4 minutes ago       Exited (0) 4 minutes ago                       dreamy_haibt
c8361b9e0a76        mcr.microsoft.com/powershell:latest   "pwsh -c &{Get-Proce…"   6 minutes ago       Exited (0) 6 minutes ago                       boring_shirley
8c9160fea43f        mcr.microsoft.com/powershell:latest   "pwsh"                   15 minutes ago      Exited (0) 8 minutes ago                       pwsh-latest
 
We can delete each container by name, using docker rm then specifying the name as a parameter. For example, docker rm pwsh-latest would delete that container.

Running a Script When Starting a Container

When a container is deleted, the data “inside” the container is deleted too. So if we create a script inside a container and then delete the container, the script goes away too. In Docker, we can use a volume to help us with this. A volume allows us to store our data externally to the container; we can mount the volume inside the container and it looks like it’s part of the container’s file system.
 
With volumes, when we delete the container, the data stays inside the volume. We can then create a new container and attach the volume to that new container and the data will be there for us to work with.
 
Let’s start a container and attach a volume at the /scripts location of the container’s file system. Let’s also add the --detach parameter. This is going to start the container, start pwsh and then stop the container (pwsh exits right away since there’s no terminal attached). Then I’m going to copy a script from my local file system into the container. The container does not need to be running for the copy operation to succeed.

docker run                       \
     --name "pwsh-script"        \
     --detach                    \
     --volume PSScripts:/scripts \
       mcr.microsoft.com/powershell:latest
 
Here’s the code to copy the script from my local file system into the container, where pwsh-script is the container name and /scripts is the location inside the container we want to copy the script to. This is the volume we attached to the container. The script is a simple hello-world script.

docker cp Get-Containers.ps1 pwsh-script:/scripts
 
With that, let’s go ahead and remove the container. We used it just to copy the script into the volume. I kind of feel bad, but we’ll keep moving on.

docker rm pwsh-script
 
Now let’s create a new container in interactive mode, with the volume attached. This will put us at a pwsh prompt.

docker run                       \
     --name "pwsh-script"        \
     --interactive --tty         \
     --volume PSScripts:/scripts \
       mcr.microsoft.com/powershell:latest
 
Now, since our script is in the volume and we attached that volume when we created this new container, it’s available for us inside the container. Let’s go ahead and run that script inside the container and then delete the container with docker rm when it’s finished. 

PS /> ls -la /scripts/
total 12
drwxr-xr-x 2 root root    4096 May  2 18:30 .
drwxr-xr-x 1 root root    4096 May  2 18:33 ..
-rw-r--r-- 1  502 dialout   73 Apr 28 21:43 Get-Containers.ps1
PS /> /scripts/Get-Containers.ps1
Hello, world!
PS /> exit
docker rm pwsh-script

Sounds Like…Serverless?

Now let’s take the technique we just stepped through, where we started the container, ran a script and deleted the container, and combine all of that into one step. To do so, we’ll use the following command options for docker run. We specify the --rm option, which will delete the container when it exits, add the /scripts volume, and tell pwsh to run the script that’s in our volume by specifying its location with the parameter -F /scripts/Get-Containers.ps1.

docker run                       \
     --rm                        \
     --volume PSScripts:/scripts \
       mcr.microsoft.com/powershell:latest pwsh -F /scripts/Get-Containers.ps1
Hello, world!
 
Now, with that last technique, we’ve encapsulated the entire lifecycle of the execution of that script into one line of code. It’s like this script execution never happened…or did it ;) All kidding aside, we effectively have a serverless computing platform now. Using this technique in our data centers, we can spin up a container on any version of PowerShell on any platform, run some workload or script, and when the workload finishes, the container just goes away. For this to work well, we will need something to drive that process. In an upcoming blog post, we’ll talk more about how we can automate the running of PowerShell containers in Kubernetes.
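
As a teaser, here’s a hedged sketch of what driving that process could look like in Kubernetes, running the same one-shot pattern as a Pod with kubectl run; the pwsh-date name is just an example.

# pwsh-date is a hypothetical Pod name; --restart=Never gives us a one-shot run
kubectl run pwsh-date \
    --image=mcr.microsoft.com/powershell:latest \
    --restart=Never \
    -- pwsh -c "Get-Date"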
 
In this post, we covered a lot: we looked at how you can interactively run PowerShell Core in a container, pass cmdlets into a container at runtime, run different versions of PowerShell Core, and persistently store scripts outside of containers in volumes and run those scripts in your containers. We also looked at how you can encapsulate the whole execution of a script and the container’s life cycle into one line of code, really giving you the ability to run PowerShell Core anywhere on any platform.
 
I hope you enjoyed this and are as excited as I am about how we can leverage this technology to solve new and unique problems in your data center and IT operations.