Category Archives: Kubernetes

Memory Settings for Running SQL Server in Kubernetes

People often ask me what’s the number one thing to look out for when running SQL Server on Kubernetes…the answer is memory settings. In this post, we’re going to dig into why you need to configure resource limits in your SQL Server’s Pod Spec when running SQL Server workloads in Kubernetes. I’m running these demos in Azure Kubernetes Service (AKS), but these concepts apply to any SQL Server environment running in Kubernetes. 

Let’s deploy SQL Server in a Pod without any resource limits. In the YAML below, we’re using a Deployment to run one SQL Server Pod with a PersistentVolumeClaim for our instance directory, and we’re fronting the Pod with a Service for access. 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment-2017
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: mssql-2017
  template:
    metadata:
      labels:
        app: mssql-2017
    spec:
      hostname: sql3
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2017-CU16-ubuntu'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-sql-2017
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sql-2017
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: managed-premium
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-svc-2017
spec:
  selector:
    app: mssql-2017
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
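 
The Pod Spec above pulls the SA password from a Secret named mssql. If you’re following along, a minimal sketch for creating that Secret and applying the manifest looks something like this (the password and the file name are placeholders, use your own):

kubectl create secret generic mssql --from-literal=SA_PASSWORD='S0methingS@Str0ng!'   # placeholder password
kubectl apply -f mssql-deployment-2017.yaml                                           # hypothetical file name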

Running a Workload Against our Pod…then BOOM!

With that Pod deployed, I loaded up a HammerDB TPC-C test with about 10GB of data and drove a workload against our SQL Server. Then while monitoring the workload…boom, HammerDB throws connection errors and crashes. Let’s look at why.

First things first, let’s check the Pods’ status with kubectl get pods. Well, that’s interesting, I have 13 Pods: 1 has a Status of Running and the remainder are Evicted.

kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
mssql-deployment-2017-8698fb8bf5-2pw2z   0/1     Evicted   0          8m24s
mssql-deployment-2017-8698fb8bf5-4bn6c   0/1     Evicted   0          8m23s
mssql-deployment-2017-8698fb8bf5-4pw7d   0/1     Evicted   0          8m25s
mssql-deployment-2017-8698fb8bf5-54k6k   0/1     Evicted   0          8m27s
mssql-deployment-2017-8698fb8bf5-96lzf   0/1     Evicted   0          8m26s
mssql-deployment-2017-8698fb8bf5-clrbx   0/1     Evicted   0          8m27s
mssql-deployment-2017-8698fb8bf5-cp6ml   0/1     Evicted   0          8m27s
mssql-deployment-2017-8698fb8bf5-ln8zt   0/1     Evicted   0          8m27s
mssql-deployment-2017-8698fb8bf5-nmq65   0/1     Evicted   0          8m21s
mssql-deployment-2017-8698fb8bf5-p2mvm   0/1     Evicted   0          25h
mssql-deployment-2017-8698fb8bf5-stzfw   0/1     Evicted   0          8m23s
mssql-deployment-2017-8698fb8bf5-td24w   1/1     Running   0          8m20s
mssql-deployment-2017-8698fb8bf5-wpgcx   0/1     Evicted   0          8m22s
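 
Notice the Evicted Pods hang around until something cleans them up; Evicted Pods land in the Failed phase. If you want to clear them out while troubleshooting, a field selector makes quick work of it (this deletes every Failed Pod in the namespace, so use it with care):

kubectl delete pods --field-selector status.phase=Failed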

What Just Happened?

Let’s keep digging and look at kubectl get events to see if that can help us sort out what’s happening…reading through these events, there is a lot going on. Let’s start at the top: we can see that our original Pod mssql-deployment-2017-8698fb8bf5-p2mvm is Killed, and the line below that tells us why: the Node had a MemoryPressure condition. A few lines below that we see that our mssql container was using 4461532Ki, which exceeded its request of 0 (more on why it’s 0 in a bit). So our Deployment Controller sees that our Pod is no longer up and running and does what it’s supposed to do: start a new Pod in place of the failed Pod.
 
The scheduler in Kubernetes will try to place a Pod back onto the same Node if the Node is still available, in our case aks-agentpool-43452558-0. Each time the scheduler places the Pod back onto the same Node it finds that the MemoryPressure condition is still true, so after the 10th try the scheduler selects a new Node, aks-agentpool-43452558-3, to run our Pod. And in the last line of the output below we can see that once the workload is moved to aks-agentpool-43452558-3, the MemoryPressure condition goes away on aks-agentpool-43452558-0 since it’s no longer running our workload. 
 
kubectl get events --sort-by=.metadata.creationTimestamp
LAST SEEN   TYPE      REASON                      OBJECT                                        MESSAGE
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-clrbx    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-clrbx to aks-agentpool-43452558-0
17m         Warning   EvictionThresholdMet        node/aks-agentpool-43452558-0                 Attempting to reclaim memory
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-clrbx
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-ln8zt
17m         Normal    Killing                     pod/mssql-deployment-2017-8698fb8bf5-p2mvm    Stopping container mssql
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-54k6k    The node had condition: [MemoryPressure].
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-p2mvm    The node was low on resource: memory. Container mssql was using 4461532Ki, which exceeds its request of 0.
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-cp6ml    The node had condition: [MemoryPressure].
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-cp6ml    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-cp6ml to aks-agentpool-43452558-0
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-54k6k    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-54k6k to aks-agentpool-43452558-0
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-clrbx    The node had condition: [MemoryPressure].
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-cp6ml
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-54k6k
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-ln8zt    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-ln8zt to aks-agentpool-43452558-0
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-96lzf    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-96lzf to aks-agentpool-43452558-0
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-96lzf
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-ln8zt    The node had condition: [MemoryPressure].
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-96lzf    The node had condition: [MemoryPressure].
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-4pw7d    The node had condition: [MemoryPressure].
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-4pw7d    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-4pw7d to aks-agentpool-43452558-0
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-4pw7d
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-2pw2z    The node had condition: [MemoryPressure].
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-2pw2z    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-2pw2z to aks-agentpool-43452558-0
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-2pw2z
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-4bn6c    The node had condition: [MemoryPressure].
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-4bn6c
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-stzfw
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-4bn6c    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-4bn6c to aks-agentpool-43452558-0
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-stzfw    The node had condition: [MemoryPressure].
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   (combined from similar events): Created pod: mssql-deployment-2017-8698fb8bf5-td24w
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-wpgcx    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-wpgcx to aks-agentpool-43452558-0
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-wpgcx    The node had condition: [MemoryPressure].
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-stzfw    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-stzfw to aks-agentpool-43452558-3
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-nmq65    The node had condition: [MemoryPressure].
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-nmq65    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-nmq65 to aks-agentpool-43452558-0
17m         Normal    NodeHasInsufficientMemory   node/aks-agentpool-43452558-0                 Node aks-agentpool-43452558-0 status is now: NodeHasInsufficientMemory
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-td24w    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-td24w to aks-agentpool-43452558-3
16m         Normal    SuccessfulAttachVolume      pod/mssql-deployment-2017-8698fb8bf5-td24w    AttachVolume.Attach succeeded for volume "pvc-f35b270a-e063-11e9-9b6d-ee8baa4f9319"
15m         Normal    Pulling                     pod/mssql-deployment-2017-8698fb8bf5-td24w    Pulling image "mcr.microsoft.com/mssql/server:2017-CU16-ubuntu"
15m         Normal    Pulled                      pod/mssql-deployment-2017-8698fb8bf5-td24w    Successfully pulled image "mcr.microsoft.com/mssql/server:2017-CU16-ubuntu"
15m         Normal    Started                     pod/mssql-deployment-2017-8698fb8bf5-td24w    Started container mssql
15m         Normal    Created                     pod/mssql-deployment-2017-8698fb8bf5-td24w    Created container mssql
12m         Normal    NodeHasSufficientMemory     node/aks-agentpool-43452558-0                 Node aks-agentpool-43452558-0 status is now: NodeHasSufficientMemory
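 
Reading the full event stream can be a lot to take in. If you only care about the evictions, you can filter the events by reason, or describe one of the evicted Pods directly to see its eviction message:

kubectl get events --field-selector reason=Evicted
kubectl describe pod mssql-deployment-2017-8698fb8bf5-p2mvm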
 
But guess what…we’re going to have the same problem on this new Node. If we run our workload again, our memory allocation will grow and Kubernetes will kill the Pod again once the MemoryPressure condition is met. So what do we do…how can we prevent our nodes from going into a MemoryPressure condition? 

Understanding Allocatable Memory in Kubernetes 

Using kubectl describe node, in the output below there’s a section called Allocatable. In there we can see the amount of allocatable resources on this Node in terms of CPU, disk, RAM, and Pods. These are the resources available to run user Pods on this Node. There we see the amount of allocatable memory is 4667840Ki (~4.45GB), so we have about that much memory to run our workloads. The amount here is a function of the amount of memory in the Node and the reservations made by Kubernetes for system functions, more on that here. Our AKS cluster VMs are Standard DS2 v2, which have 2 vCPU and 7GB of RAM, so about 2.55GB is reserved for other uses. The output below is from after our Pod was evicted, so the LastTransitionTime shows the last time a condition occurred, and for MemoryPressure we can see an event at 7:53 AM. The other LastTransitionTimes are from when the Node was started. Another key point is the Events section, where we can see the conditions change state.
 
kubectl describe nodes aks-agentpool-43452558-0
Name:               aks-agentpool-43452558-0
...output omitted...
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 10 Sep 2019 16:20:00 -0500   Tue, 10 Sep 2019 16:20:00 -0500   RouteCreated                 RouteController created a route
  MemoryPressure       False   Sat, 28 Sep 2019 07:58:56 -0500   Sat, 28 Sep 2019 07:53:55 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 28 Sep 2019 07:58:56 -0500   Tue, 10 Sep 2019 16:18:27 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 28 Sep 2019 07:58:56 -0500   Tue, 10 Sep 2019 16:18:27 -0500   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 28 Sep 2019 07:58:56 -0500   Tue, 10 Sep 2019 16:18:27 -0500   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  Hostname:    aks-agentpool-43452558-0
  InternalIP:  10.240.0.6
Capacity:
 attachable-volumes-azure-disk:  8
 cpu:                            2
 ephemeral-storage:              101584140Ki
 hugepages-1Gi:                  0
 hugepages-2Mi:                  0
 memory:                         7113152Ki
 pods:                           110
Allocatable:
 attachable-volumes-azure-disk:  8
 cpu:                            1931m
 ephemeral-storage:              93619943269
 hugepages-1Gi:                  0
 hugepages-2Mi:                  0
 memory:                         4667840Ki
 pods:                           110
...output omitted...
Events:
Type     Reason                     Age                  From                               Message
  ----     ------                     ----                 ----                               -------
  Warning  EvictionThresholdMet       10m                  kubelet, aks-agentpool-43452558-0  Attempting to reclaim memory
  Normal   NodeHasInsufficientMemory  10m                  kubelet, aks-agentpool-43452558-0  Node aks-agentpool-43452558-0 status is now: NodeHasInsufficientMemory
  Normal   NodeHasSufficientMemory    5m15s (x2 over 14d)  kubelet, aks-agentpool-43452558-0  Node aks-agentpool-43452558-0 status is now: NodeHasSufficientMemory
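 
If you just want the Allocatable values without the rest of the describe output, you can pull them straight off the Node object with a jsonpath query. A quick sketch, using the Node name from this cluster (swap in your own):

kubectl get node aks-agentpool-43452558-0 -o jsonpath='{.status.allocatable.memory}'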

SQL Server’s View of Memory on Kubernetes Nodes

When using a Pod with no memory limits defined in the Pod Spec (which is why we saw 0 for the request in the Event entry), SQL Server sees 5557MB (~5.4GB) of memory available and thinks it has all of that to use. Why is that? Well, SQL Server on Linux looks at the base OS to see how much memory is available on the system and by default uses approximately 80% of that memory due to its architecture (SQLPAL).
2019-09-28 14:46:16.23 Server      Detected 5557 MB of RAM. This is an informational message; no user action is required. 
This is bad news in our situation. Kubernetes has only 4667840Ki (~4.45GB) to allocate before setting the MemoryPressure condition, which will cause our Pod to be Evicted and Terminated. As our workload runs, SQL Server allocates memory, primarily to the buffer pool, and when that allocation exceeds the Allocatable amount of memory for the Node, Kubernetes kills our Pod to protect the Node and the cluster as a whole. 
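 
You can also check SQL Server’s view of memory from inside the instance. Here’s a small sketch querying the sys.dm_os_sys_memory DMV over sqlcmd, assuming you’ve grabbed the Service’s external IP into $SVCIP and have the sa password in $PASSWORD (both are placeholders here):

sqlcmd -S $SVCIP -U sa -P $PASSWORD -Q 'SELECT total_physical_memory_kb / 1024 AS total_mb, available_physical_memory_kb / 1024 AS available_mb FROM sys.dm_os_sys_memory'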

Configuring Pod Limits for SQL Server

So how do we fix all of this? We need to set a resource limit in our Pod Spec. Limits allow us to control the amount of a particular resource exposed to a Pod, and in our case we want to limit the amount of memory SQL Server sees. In our environment we know we have 4667840Ki (~4.45GB) of Allocatable memory for user Pods on the Node, so let’s set a value lower than that…and to be super safe I’m going to use 3GB. In the code below you can see in the Pod Spec for our mssql container we have a section for resources, limits and a value of memory: “3Gi”.

    spec:
      hostname: sql3
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2017-CU16-ubuntu'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        resources:
          limits:
            memory: "3Gi"
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-sql-2017

With this configured, we limit the amount of memory SQL Server sees to 3GB. Given that the container is running SQL Server on Linux, SQL Server will actually see about 80% of that, 2458MB:
2019-09-28 14:01:46.16 Server      Detected 2458 MB of RAM. This is an informational message; no user action is required.

Summary

With that, I hope you can see why I consider memory settings the number one thing to look out for when deploying SQL Server in Kubernetes. Setting appropriate values will ensure that your SQL Server instance on Kubernetes stays up and running, coexisting happily with the other workloads you have running in your cluster. What’s the best value to set? We need to take into account the amount of memory on the Node, the amount of memory we need to run our workload in SQL Server, and the reservations needed by both Kubernetes and SQLPAL. Additionally, we should set the max server memory instance-level setting inside of SQL Server to limit the amount of memory SQL Server can allocate. My suggestion to you is to configure both a resource limit in the Pod Spec and max server memory at the instance level.
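 
If you want to pair the Pod limit with max server memory, here’s a minimal sketch of doing that with sp_configure over sqlcmd. The 2048 MB value is just an example; size it below the Pod limit you chose and for the workload you’re running:

sqlcmd -S $SVCIP -U sa -P $PASSWORD -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;"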

If you want to read more about resource management and Pod eviction, check out these resources:


Using kubectl logs to read the SQL Server Error Log in Kubernetes

When working with SQL Server running in containers, the Error Log is written to standard out, and Kubernetes will expose that information to you via kubectl. Let’s check out how it works.

If we start up a Pod running SQL Server, we can grab the Pod name:

kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
mssql-deployment-56d8dbb7b7-hrqwj   1/1     Running   0          22m

We can use the --follow flag, which will continuously write the error log to your console, similar to using tail with the -f option. If you remove the --follow flag it will write the current log to your console. This can be useful in debugging failed startups or, in the case below, monitoring the status of a database restore. When finished you can use CTRL+C to break out and return to your prompt.

kubectl logs mssql-deployment-56d8dbb7b7-hrqwj --follow

This will yield the following output

SQL Server 2019 will run as non-root by default.
This container is running as user root.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
This is an evaluation version.  There are [157] days left in the evaluation period.
2019-09-12 18:11:06.74 Server      Setup step is copying system data file 'C:\templatedata\master.mdf' to '/var/opt/mssql/data/master.mdf'.
2019-09-12 18:11:06.82 Server      Did not find an existing master data file /var/opt/mssql/data/master.mdf, copying the missing default master and other system database files. If you have moved the database location, but not moved the database files, startup may fail. To repair: shutdown SQL Server, move the master database to configured location, and restart.
2019-09-12 18:11:06.83 Server      Setup step is copying system data file 'C:\templatedata\mastlog.ldf' to '/var/opt/mssql/data/mastlog.ldf'.
2019-09-12 18:11:06.85 Server      Setup step is copying system data file 'C:\templatedata\model.mdf' to '/var/opt/mssql/data/model.mdf'.
2019-09-12 18:11:06.87 Server      Setup step is copying system data file 'C:\templatedata\modellog.ldf' to '/var/opt/mssql/data/modellog.ldf'.
2019-09-12 18:11:06.89 Server      Setup step is copying system data file 'C:\templatedata\msdbdata.mdf' to '/var/opt/mssql/data/msdbdata.mdf'.
...output omitted...
2019-09-12 18:11:12.37 spid9s      Database 'msdb' running the upgrade step from version 903 to version 904.
2019-09-12 18:11:12.52 spid9s      Recovery is complete. This is an informational message only. No user action is required.
2019-09-12 18:11:12.55 spid20s     The default language (LCID 0) has been set for engine and full-text services.
2019-09-12 18:11:12.87 spid20s     The tempdb database has 2 data file(s).
2019-09-12 18:14:29.78 spid56      Attempting to load library 'xpstar.dll' into memory. This is an informational message only. No user action is required.
2019-09-12 18:14:29.84 spid56      Using 'xpstar.dll' version '2019.150.1900' to execute extended stored procedure 'xp_instance_regread'. This is an informational message only; no user action is required.
2019-09-12 18:14:30.00 spid56      Attempting to load library 'xplog70.dll' into memory. This is an informational message only. No user action is required.
 
2019-09-12 18:14:30.05 spid56      Using 'xplog70.dll' version '2019.150.1900' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.
...output omitted...
2019-09-12 18:32:32.40 spid66      [5]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 1.
2019-09-12 18:32:32.41 spid66      Starting up database 'DB1'.
2019-09-12 18:32:32.72 spid66      The database 'DB1' is marked RESTORING and is in a state that does not allow recovery to be run.
2019-09-12 18:32:37.44 Backup      Database was restored: Database: DB1  creation date(time): 2019/05/11(13:32:05), first LSN: 148853:1000384:1, last LSN: 148853:1067344:1, number of dump devices: 1, device information: (FILE=1, TYPE=URL: {'https://yourenotallowtoknow.blob.core.windows.net/servername/DB1_FULL_20190912_020000.bak'}). Informational message. No user action required.
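 
A couple of other kubectl logs options come in handy here too: --tail limits how much of the log you pull back, and --previous shows the log from the prior container instance, which is useful when a container has crashed and restarted:

kubectl logs mssql-deployment-56d8dbb7b7-hrqwj --tail=50
kubectl logs mssql-deployment-56d8dbb7b7-hrqwj --previous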

New Pluralsight Course – Managing Kubernetes Controllers and Deployments

My new course “Managing Kubernetes Controllers and Deployments” is now available on Pluralsight here! Check out the trailer here, or if you want to dive right in, go here! This course offers practical tips from my experiences managing Kubernetes clusters and workloads for Centino Systems clients.
 

This course targets IT professionals that design and maintain Kubernetes and container-based solutions. The course can be used by both the IT pro learning new skills and the system administrator or developer preparing for using Kubernetes both on premises and in the Cloud. 

Let’s take your Kubernetes administration and configuration skills to the next level and get you started now!

The modules of the course are:

  • Using Controllers to Deploy Applications and Deployment Basics – In this module we dive into what Controllers are and how they can be used to deploy applications in Kubernetes. We’ll introduce several core controller types and look at the fundamentals of using the Deployment Controller to deploy applications and take a deep dive into the Controller operations of ReplicaSets.
  • Maintaining Applications with Deployments – In this demo-heavy module, we look closer at Deployments and learn how we can maintain our container based applications. We look at updating Deployments, controlling rollouts and using updateStrategy and readinessProbes to ensure successful rollouts. We’ll also cover what to do when things go wrong and learn how to pause and rollback rollouts.
  • Deploying and Maintaining Applications with DaemonSets and Jobs – In this module, we introduce the DaemonSet controller and how it’s used to deploy applications to all Nodes or a subset of Nodes in our cluster; we’ll also cover DaemonSet operations such as updating and controlling rollouts. We wrap up the course with a look at how we can use Jobs and CronJobs to ensure work completes in our cluster. 


Check out the course at Pluralsight!

 

Workshop – Kubernetes Zero to Hero at SQL Saturday Denver!

Pre-conference Workshop at SQLSaturday Denver

I’m proud to announce that I will be presenting an all day pre-conference workshop at SQL Saturday Denver on October 11th 2019! This one won’t let you down! 

The workshop is “Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment” 


Here’s the abstract for the workshop

Modern application deployment needs to be fast and consistent to keep up with business objectives and Kubernetes is quickly becoming the standard for deploying container-based applications, fast. In this day-long session, we will start with an architectural overview of a Kubernetes cluster and how it manages application state. Then we will learn how to build a production-ready cluster. With our cluster up and running, we will learn how to interact with our cluster, common administrative tasks, then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and how to keep everything up and running.

Session Objectives

  • Introduce Kubernetes Cluster Components
  • Introduce Kubernetes API Objects and Controllers
  • Installing Kubernetes
  • Interacting with your cluster
  • Storing persistent data in Kubernetes
  • Deploying Applications in Kubernetes
  • Deploying SQL Server in Kubernetes
  • High Availability scenarios in Kubernetes

FAQs

How much does it cost?

The full day training event is $150 per attendee.

What can I bring into the event?
WiFi at the location is limited. The workshop will be primarily demonstration based. Code will be made available for download prior to the event if you would like to follow along during the session.

How can I contact the organizer with any questions?
Please feel free to email me with any questions: aen@centinosystems.com

What’s the refund policy?
7 days: Attendees can receive refunds up to 7 days before the event start date.

Do I need to know SQL Server or Kubernetes to attend this workshop?
No, while we will be focusing on deploying SQL Server in Kubernetes, no prior knowledge of SQL Server or Kubernetes is needed. We will build up our Kubernetes skills using SQL Server as the primary application we will deploy.

What are the prerequisites for the workshop?
All examples will be executed at the command line, so proficiency at a command line is required. Platform-dependent (Linux/Windows, Cloud CLIs) configurations and commands will be introduced and discussed in the workshop.

Workshop – Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment

Pre-conference Workshop at SQLSaturday Baton Rouge

I’m proud to announce that I will be presenting an all day pre-conference workshop at SQL Saturday Baton Rouge on August 16th 2019! This one won’t let you down! 

The workshop is “Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment” 


Here’s the abstract for the workshop

Modern application deployment needs to be fast and consistent to keep up with business objectives and Kubernetes is quickly becoming the standard for deploying container-based applications, fast. In this day-long session, we will start with an architectural overview of a Kubernetes cluster and how it manages application state. Then we will learn how to build a production-ready cluster. With our cluster up and running, we will learn how to interact with our cluster, common administrative tasks, then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and how to keep everything up and running.

Session Objectives

  • Introduce Kubernetes Cluster Components
  • Introduce Kubernetes API Objects and Controllers
  • Installing Kubernetes
  • Interacting with your cluster
  • Storing persistent data in Kubernetes
  • Deploying Applications in Kubernetes
  • Deploying SQL Server in Kubernetes
  • High Availability scenarios in Kubernetes

FAQs

How much does it cost?

The full day training event is $125 per attendee.

What can I bring into the event?
WiFi at the location is limited. The workshop will be primarily demonstration based. Code will be made available for download prior to the event if you would like to follow along during the session.

How can I contact the organizer with any questions?
Please feel free to email me with any questions: aen@centinosystems.com

What’s the refund policy?
7 days: Attendees can receive refunds up to 7 days before your event start date.

Do I need to know SQL Server or Kubernetes to attend this workshop?
No, while we will be focusing on deploying SQL Server in Kubernetes, no prior knowledge of SQL Server or Kubernetes is needed. We will build up our Kubernetes skills using SQL Server as the primary application we will deploy.

What are the prerequisites for the workshop?
All examples will be executed at the command line, so proficiency at a command line is required. Platform-dependent (Linux/Windows, Cloud CLIs) configurations and commands will be introduced and discussed in the workshop.

New Pluralsight Course – Managing the Kubernetes API Server and Pods

My new course “Managing the Kubernetes API Server and Pods” is now available on Pluralsight here! Check out the trailer here, or if you want to dive right in, go here! This course offers practical tips from my experiences managing Kubernetes clusters and workloads for Centino Systems clients.

This course targets IT professionals that design and maintain Kubernetes and container-based solutions. The course can be used by both the IT pro learning new skills and the system administrator or developer preparing for using Kubernetes both on premises and in the Cloud.

Let’s take your Kubernetes administration and configuration skills to the next level and get you started now!

The modules of the course are:

  • Using the Kubernetes API – In this module we will dive into the Kubernetes API, looking closely at the architecture of the API Server and how it exposes and manages Kubernetes API Objects. Then we will learn about API versioning and object maturity. Next, we’ll look at the anatomy of an API request, leading us up to debugging interactions with the API Server.
  • Managing Objects with Labels, Annotations and Namespaces – In this demo-heavy module, we will learn how to organize and interact with resources in Kubernetes using Labels, Annotations, and Namespaces. We will also learn how to use labels to influence Kubernetes operations in Controllers and Pod scheduling.
  • Running and Managing Pods – In this module, we will look at the fundamental unit of work in Kubernetes, the Pod, examining why the Pod abstraction is needed and design principles for placing your applications in Pods and running those Pods in your cluster. We’ll examine Pod lifecycle and how its state impacts application health and availability. We wrap up with how Controllers interact with Pods and how Pods report their health status with readiness probes and liveness probes. 


Check out the course at Pluralsight!

 

Data Persistency and Advanced SQL Server Disk Topologies in Kubernetes

When working with SQL Server in containers and Kubernetes, storage is a key concept. In this post, we’re going to walk through how to deploy SQL Server in Kubernetes with Persistent Volumes for the system and user databases.

One of the key principles of Kubernetes is the ephemerality of Pods. No Pod is ever redeployed; a completely new Pod is created. If a Pod dies, for whatever reason, a new Pod is created in its place, and there is no continuity in the state of that Pod. The newly created Pod will go back to the initial state of the container image defined in the Pod’s spec. This is very valuable for stateless workloads, not so much for stateful workloads like SQL Server.

This means that for a stateful workload like SQL Server we need to store both configuration and data externally from the Pod to maintain state through the recreation of a Pod. Kubernetes gives us two constructs to do that: environment variables and Persistent Volumes. 

Using Environment Variables for Container Configuration

Container-based applications use environment variables for configuration at startup. The SQL Server container has a collection of environment variables that can be used to configure it at container startup. We will leverage two of those in this configuration: MSSQL_DATA_DIR and MSSQL_LOG_DIR, which allow us to define file system locations for user database data and log files. When the SQL Server container is started inside the Pod, it reads the environment variables at runtime and sets its configuration based on those values. We define these variables as part of the Pod Spec; we will cover that configuration below.

Using Persistent Volumes to Maintain Database State

To persist the state of our SQL Server container, we will configure SQL Server to store its data and log files for both user and system databases on Persistent Volumes.

First, let’s review how SQL Server in a container starts up. During the initial startup, the SQL Server process checks to see if there are any system databases in the default system file location, which is /var/opt/mssql/data. If there are none, the system databases are copied there; if they are already there, no action is taken. 

To add persistence to the system databases, and really all of the other components of SQL Server such as the Error Log and other system files, we will configure /var/opt/mssql so that it is backed by a Persistent Volume.

By placing the system databases on a Persistent Volume, when a Pod is recreated and the Persistent Volumes are attached and mounted in the same location, the SQL Server process sees the system databases at startup and has what it needs to maintain state between Pod creations.

If there are records for user databases in the system databases, SQL Server will start the process of bringing those databases online as well. Certainly, the default location for user databases is /var/opt/mssql/data, but we are going to override that with environment variables for both the data and log directories, placing each on a dedicated Persistent Volume.

Let’s walk through that configuration together. 

Persistent Volume Claims

In this configuration, we will use dynamic storage provisioning. In dynamic provisioning, a Persistent Volume Claim (PVC) is used to request a Persistent Volume (PV) from a Storage Class. In this case, we’ll be using AKS’s managed-premium Storage Class. 
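 
If you want to see which Storage Classes your cluster offers before picking one, kubectl will list them; on AKS you’ll typically see default and managed-premium:

kubectl get storageclass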

Here we define three PVCs, one for each place we want a Persistent Volume: the system files and databases, the user database data files, and the user database log files.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-data"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-system"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-log"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi

Deployment

In the Pod spec for our Deployment, we want to define several elements to support this configuration. 

  • Volumes – define volumes that can be mounted by this Pod. In this case, we’re creating and naming three volumes, backed by the PVCs defined above.
  • volumeMounts – volumes mounted into the container and their mountPath, location. This maps the names from the named Volumes to a location in the filesystem in the container.
  • env – due to the ephemerality of the container in the Pod, we need to tell SQL Server at startup that the data and log files will be stored in a specified directory. We are leaving the system databases and files in the default location, which is /var/opt/mssql.

The net effect of this storage configuration is that we are mapping the Persistent Volumes into particular locations in the filesystem inside the container. 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2017-latest'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: 'Y'
        - name: MSSQL_DATA_DIR
          value: '/data'
        - name: MSSQL_LOG_DIR
          value: '/log'
        - name: SA_PASSWORD
          value: 'S0methingS@Str0ng!'
        volumeMounts:
        - name: mssql-system
          mountPath: /var/opt/mssql
        - name: mssql-data
          mountPath: /data
        - name: mssql-log
          mountPath: /log
      volumes:
      - name: mssql-system
        persistentVolumeClaim:
          claimName: pvc-sql-system
      - name: mssql-data
        persistentVolumeClaim:
          claimName: pvc-sql-data
      - name: mssql-log
        persistentVolumeClaim:
          claimName: pvc-sql-log

Service

We’ll front end our SQL Server with a public IP address and a load balancer. 

apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 31433
      targetPort: 1433
  type: LoadBalancer

Apply the Configuration

Save the code above into a YAML file and deploy it to your Kubernetes cluster.

kubectl apply -f deployment-advanced-disk.yaml

You’ll get this output

persistentvolumeclaim/pvc-sql-data created
persistentvolumeclaim/pvc-sql-system created
persistentvolumeclaim/pvc-sql-log created
deployment.apps/mssql-deployment created
service/mssql-deployment created

Confirm the configuration

We can use kubectl get pv to list the Persistent Volumes (PV) dynamically allocated by our cluster. Here there are three Persistent Volumes. The key here is the status is Bound, which means they are bound to a PVC. I also want to point out the Reclaim Policy is Delete. This means if the PVC is deleted, the PV will be deleted at a cleanup interval sometime in the future. 

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-data     managed-premium            11m
pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-system   managed-premium            11m
pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-log      managed-premium            11m
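 
If Delete isn’t the behavior you want for a database volume, you can switch an individual PV’s reclaim policy to Retain after it’s provisioned. A quick sketch using one of the volume names from the output above:

kubectl patch pv pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'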

With kubectl get pvc we get a list of the PVCs in our configuration, one for each we defined above. The key here is the status is Bound, meaning they are bound to a PV.

kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc-sql-data     Bound    pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m
pvc-sql-log      Bound    pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m
pvc-sql-system   Bound    pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m 

Now let’s use kubectl describe pods to get the deep dive info about our storage configuration and how it’s mapped into the Pod. 

There are a few key places in the output below I want to point you to:

  • Containers: mssql: Environment: you’ll find the two environment variables set for the data and log directories. Configured as /data and /log
  • Mounts: we see the file system location inside the container and the name of the Volumes defined in the Pod Spec
  • Volumes: we see the name of the Volumes, their type, claim name and the read/write status.
  • Events: this is a log of the events for the creation of this Pod. Key here is that sometimes the Pod will be scheduled before the storage is available to it. That’s what the FailedMount warning below is, but it clears itself up and the container is able to start.

kubectl describe pods
Name:               mssql-deployment-df4cf5c4c-nf8lf
Namespace:          default
Priority:           0
PriorityClassName:
Node:               aks-nodepool1-89481420-2/10.240.0.6
Start Time:         Sat, 04 May 2019 07:41:59 -0500
Labels:             app=mssql
                    pod-template-hash=df4cf5c4c
Annotations:
Status:             Running
IP:                 10.244.1.51
Controlled By:      ReplicaSet/mssql-deployment-df4cf5c4c
Containers:
  mssql:
    Container ID:   docker://f2320ae8f94c24fbb04214b903b4a218b82e9548f8d88a95daa7e207eeaa42b4
    Image:          mcr.microsoft.com/mssql/server:2017-latest
    Image ID:       docker-pullable://mcr.microsoft.com/mssql/server@sha256:39554141d307f2d40d2abfc54e3a0eea3aa527e58f616496c6f3ed3245a2e2b1
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 04 May 2019 07:44:21 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      ACCEPT_EULA:                   Y
      MSSQL_DATA_DIR:                /data
      MSSQL_LOG_DIR:                 /log
      SA_PASSWORD:                   S0methingS@Str0ng!
      KUBERNETES_PORT_443_TCP_ADDR:  cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io
      KUBERNETES_PORT:               tcp://cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io
    Mounts:
      /data from mssql-data (rw)
      /log from mssql-log (rw)
      /var/opt/mssql from mssql-system (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z9sbf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  mssql-system:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-system
    ReadOnly:   false
  mssql-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-data
    ReadOnly:   false
  mssql-log:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-log
    ReadOnly:   false
  default-token-z9sbf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z9sbf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age   From                               Message
  ----     ------                  ----  ----                               -------
  Normal   Scheduled               13m   default-scheduler                  Successfully assigned default/mssql-deployment-df4cf5c4c-nf8lf to aks-nodepool1-89481420-2
  Normal   SuccessfulAttachVolume  13m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5"
  Normal   SuccessfulAttachVolume  12m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5"
  Normal   SuccessfulAttachVolume  12m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5"
  Warning  FailedMount             11m   kubelet, aks-nodepool1-89481420-2  Unable to mount volumes for pod "mssql-deployment-df4cf5c4c-nf8lf_default(027c46f7-6e6a-11e9-a433-f659caf6a6f5)": timeout expired waiting for volumes to attach or mount for pod "default"/"mssql-deployment-df4cf5c4c-nf8lf". list of unmounted volumes=[mssql-system mssql-data]. list of unattached volumes=[mssql-system mssql-data mssql-log default-token-z9sbf]
  Normal   Pulled                  11m   kubelet, aks-nodepool1-89481420-2  Container image "mcr.microsoft.com/mssql/server:2017-latest" already present on machine
  Normal   Created                 11m   kubelet, aks-nodepool1-89481420-2  Created container
  Normal   Started                 11m   kubelet, aks-nodepool1-89481420-2  Started container

Creating a Database and Verifying File Location

With this code, we’ll get the IP address for our SQL Server Service, then we’ll create a database and query sys.master_files for a list of data files. Notice I’m defining my Service port as 31433, which is what we defined when creating our Service in the earlier step.

SVCIP=$(kubectl get svc mssql-deployment | grep mssql-deployment |  awk '{print $4}')
sqlcmd -S $SVCIP,31433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD
sqlcmd -S $SVCIP,31433 -U sa -Q 'SELECT name,physical_name from sys.master_files' -P $PASSWORD


And we’ll get this output. You can see all of the system databases backed by /var/opt/mssql, our user database on /data, and its log on /log, all backed by Persistent Volumes.

master        /var/opt/mssql/data/master.mdf
mastlog       /var/opt/mssql/data/mastlog.ldf
tempdev       /var/opt/mssql/data/tempdb.mdf
templog       /var/opt/mssql/data/templog.ldf
modeldev      /var/opt/mssql/data/model.mdf
modellog      /var/opt/mssql/data/modellog.ldf
MSDBData      /var/opt/mssql/data/MSDBData.mdf
MSDBLog       /var/opt/mssql/data/MSDBLog.ldf
TestDB1       /data/TestDB1.mdf
TestDB1_log   /log/TestDB1_log.ldf

Confirming Persistency

Let’s go ahead and delete our Pod to confirm that when it’s recreated by our Deployment our data is still there. 

kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
mssql-deployment-df4cf5c4c-nf8lf   1/1     Running   0          4d2h

kubectl delete pod mssql-deployment-df4cf5c4c-nf8lf 
pod "mssql-deployment-df4cf5c4c-nf8lf" deleted

Once the Pod is recreated, let’s query sys.master_files to see where our databases are located. You’ll find that the database created in the previous step persisted between Pod creations.

sqlcmd -S $SVCIP,31433 -U sa -Q 'SELECT name,physical_name from sys.master_files' -P $PASSWORD

master        /var/opt/mssql/data/master.mdf
mastlog       /var/opt/mssql/data/mastlog.ldf
tempdev       /var/opt/mssql/data/tempdb.mdf
templog       /var/opt/mssql/data/templog.ldf
modeldev      /var/opt/mssql/data/model.mdf
modellog      /var/opt/mssql/data/modellog.ldf
MSDBData      /var/opt/mssql/data/MSDBData.mdf
MSDBLog       /var/opt/mssql/data/MSDBLog.ldf
TestDB1       /data/TestDB1.mdf
TestDB1_log   /log/TestDB1_log.ldf

Using PowerShell in Containers

The vision for PowerShell Core is to be able to run PowerShell anywhere. In this article, I’m going to discuss how you can use Docker Containers to enable just that. We’ll look at running PowerShell in a container, running cmdlets, running different versions of PowerShell at the same time, and also how to build our own “serverless” computing platform.

Let’s address a few reasons why you would want to run PowerShell in a container.

  • Speed and agility – this for me is probably the number one reason to run PowerShell in a container. The PowerShell container images are coming in at around 375MB, which means with a modern Internet connection you’ll be able to pull a PowerShell container image and be up and running in a very small amount of time.
  • Version – there are container images available for every release of PowerShell Core, including preview/release candidate code. With containers, you can run multiple versions of PowerShell Core in a way where they will not conflict with each other.
  • Platform independence – there are container images for Ubuntu, Fedora, Windows Server Core, Nano Server and more. This allows you to be able to consume PowerShell Core regardless of your underlying platform. You can select whichever image you want, pull the container and go. 
  • Testing – if you need to test your scripts across various versions of PowerShell Core you can pull the container, run the script on the exact version you need. You can have multiple containers on your system running multiple versions of PowerShell and be able to run them all at the same time.  
  • Isolation – containers will allow you to have self-contained environments for execution, security, environment, and configuration settings. You can also use this idea to isolate conflicting modules from each other. This is particularly valuable when developing modules and/or cmdlets.

Getting Up and Running

Let’s get started with using PowerShell Core in a container. First up, we will want to pull the Docker container image to our local machine. This will pull the image with the latest tag, which at the time of this post is 6.2.0-ubuntu-18.04.

docker pull mcr.microsoft.com/powershell:latest

With the container image local, let’s go ahead and start up the container. In this first go, I’m going to start up the container with the docker run command and the --interactive and --tty flags. What these flags do is, when the container starts, attach to the terminal of the container so I can use PowerShell Core interactively at the command line.

docker run                    \
        --name "pwsh-latest"  \
        --interactive --tty   \
        mcr.microsoft.com/powershell:latest 

This will get you a PowerShell prompt. I told you this was going to be fast.

PowerShell 6.2.0
Copyright (c) Microsoft Corporation. All rights reserved.
 
https://aka.ms/pscore6-docs
Type 'help' to get help.
 
PS /> 

From that prompt, we can do the normal PowerShell things we need to do. Let’s start our journey like all good PowerShell demos do and run Get-Process. You’ll notice that there is only one process running in the container, and that’s your pwsh session. This is due to the isolation concepts of containers. With this isolation, problems like conflicting modules and settings go away. The container gives your script an isolated execution environment. If you need two conflicting versions of a module, DLL, or library to run your workload or script…you can use a container to isolate their execution, giving them the ability to co-exist on the same system.

PS /> Get-Process
 
 NPM(K)    PM(M)      WS(M)     CPU(s)      Id  SI ProcessName
 ------    -----      -----     ------      --  -- -----------
      0     0.00     110.03       2.01       1   1 pwsh

We can use exit to get out of PowerShell. When you exit PowerShell the container will stop. You can see the status of your container with docker ps -a.

CONTAINER ID        IMAGE                                 COMMAND             CREATED             STATUS                     PORTS               NAMES
8c9160fea43f        mcr.microsoft.com/powershell:latest   "pwsh"              6 minutes ago       Exited (0) 8 seconds ago                       pwsh-latest
 
If you’d like to get back into your container you can use docker start pwsh-latest -i, where pwsh-latest is the container name we just created and -i is for interactive (we used --interactive earlier). Run that and you’ll land right back at a PowerShell prompt again. 

Running a cmdlet When Starting a Container

Now, let’s say we wanted to start our container up and non-interactively run a cmdlet right away, we can do that. With the docker run command, we can tell the container that we want it to start pwsh and pass in a cmdlet as a parameter into pwsh, with the -c parameter and that cmdlet will be executed. Let’s check out how.
docker run mcr.microsoft.com/powershell:latest pwsh -c "&{Get-Process}"
 
 NPM(K)    PM(M)      WS(M)     CPU(s)      Id  SI ProcessName
 ------    -----      -----     ------      --  -- -----------
      0     0.00      81.35       0.54       1   1 pwsh
 
From a performance standpoint, I want to point out the time it takes to do this work; we can use the time command to help us with that. It takes less than two seconds to start the container, start pwsh, execute our cmdlet, and shut down the container.
time docker run mcr.microsoft.com/powershell:latest pwsh -c "&{Get-Process}"
 
 NPM(K)    PM(M)      WS(M)     CPU(s)      Id  SI ProcessName
 ------    -----      -----     ------      --  -- -----------
      0     0.00      81.61       0.54       1   1 pwsh
 
real 0m1.901s
user 0m0.038s
sys  0m0.086s
 
Now let’s say I wanted to test a cmdlet execution against a specific version of PowerShell Core, perhaps even a Release Candidate. Let’s change the tag from latest to preview and docker will pull that container, start it up and we immediately have an environment for testing. This could be leveraged for script testing, cmdlet testing, module testing and so on. In the output below, you can see the preview tag points to the 6.2.0-rc1 version of PowerShell Core.
docker run mcr.microsoft.com/powershell:preview pwsh -c "&{Get-Host}"
 
Name             : ConsoleHost
Version          : 6.2.0-rc.1
…output omitted...
 
Now, each time we started a container so far in this post and then exited pwsh, the container shut down and was still on our system. We can see the containers with a docker ps -a. We can restart any of these containers and get them back by using the command mentioned previously.
docker ps -a
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS                     PORTS               NAMES
d8d8d27ec7be        mcr.microsoft.com/powershell:preview  "pwsh -c &{Get-Host}"    4 seconds ago       Exited (0) 2 seconds ago                       pensive_poincare
5eace290b47c        mcr.microsoft.com/powershell:latest   "pwsh -c &{Get-Proce…"   4 minutes ago       Exited (0) 4 minutes ago                       dreamy_haibt
c8361b9e0a76        mcr.microsoft.com/powershell:latest   "pwsh -c &{Get-Proce…"   6 minutes ago       Exited (0) 6 minutes ago                       boring_shirley
8c9160fea43f        mcr.microsoft.com/powershell:latest   "pwsh"                   15 minutes ago      Exited (0) 8 minutes ago                       pwsh-latest
 
We can delete each container by name, using docker rm then specifying the name as a parameter. For example, docker rm pwsh-latest would delete that container.
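 
If you’ve built up a pile of exited containers from experimenting, you can clean them all up at once rather than removing them one at a time. Either of these will do it:

docker container prune
docker rm $(docker ps -aq --filter status=exited)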

Running a Script When Starting a Container

When a container is deleted, the data “inside” the container is deleted too. So if we create a script inside a container and then delete the container, the script goes away too. In Docker, we can use a volume to help us with this. A volume allows us to store our data externally to the container; we can mount the volume inside the container and it looks like it’s part of the container’s file system.
 
With volumes, when we delete the container, the data stays inside the volume. We can then create a new container and attach the volume to that new container and the data will be there for us to work with.
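 
Named volumes are managed by Docker itself, and the PSScripts volume used below is created automatically the first time docker run references it. If you want to see which volumes exist on your system, or where a volume’s data actually lives, these commands will show you:

docker volume ls
docker volume inspect PSScripts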
 
Let’s start a container and attach a volume at the /scripts location of the container’s file system. Let’s also add the --detach parameter. This is going to start the container, start pwsh, and then the container will stop. Then I’m going to copy a script from my local file system into the container. The container does not need to be running for the copy operation to succeed.
docker run                       \
     --name "pwsh-script"        \
     --detach                    \
     --volume PSScripts:/scripts \
       mcr.microsoft.com/powershell:latest
 
Here’s the code to copy the script from my local file system into the container, where pwsh-script is the container name and /scripts is the location inside the container we want to copy the script to. That location is the volume we attached to the container. The script itself is a simple hello-world script.
docker cp Get-Containers.ps1 pwsh-script:/scripts
 
With that, let’s go ahead and remove the container. We used it just to copy the script into the volume. I kind of feel bad, but we’ll keep moving on.
docker rm pwsh-script
 
Now let’s create a new container in interactive mode, with the volume attached. This will put us at a pwsh prompt.
docker run                       \
     --name "pwsh-script"        \
     --interactive --tty         \
     --volume PSScripts:/scripts \
       mcr.microsoft.com/powershell:latest
 
Now, since our script is in the volume and we attached that volume when we created this new container, it’s available for us inside the container. Let’s go ahead and run that script inside the container and then delete the container with docker rm when it’s finished. 
PS /> ls -la /scripts/
total 12
drwxr-xr-x 2 root root    4096 May  2 18:30 .
drwxr-xr-x 1 root root    4096 May  2 18:33 ..
-rw-r--r-- 1  502 dialout   73 Apr 28 21:43 Get-Containers.ps1
PS /> /scripts/Get-Containers.ps1
Hello, world!
PS /> exit
docker rm pwsh-script

Sounds Like…Serverless?

Now let’s take the technique we just stepped through, where we started the container, ran a script, and deleted the container, and combine all of that into one step. To do so, we’ll use the following options for docker run. We specify the --rm option, which deletes the container when it exits, add the /scripts volume, and tell pwsh to run the script in our volume by specifying its location with the parameter -F /scripts/Get-Containers.ps1.
docker run                       \
     --rm                        \
     --volume PSScripts:/scripts \
       mcr.microsoft.com/powershell:latest pwsh -F /scripts/Get-Containers.ps1
Hello, world!
 
Now, with that last technique, we’ve encapsulated the entire lifecycle of the execution of that script into one line of code. It’s like this script execution never happened…or did it ;) All kidding aside, we effectively have a serverless computing platform now. Using this technique in our data centers, we can spin up a container on any version of PowerShell on any platform, run some workload or script, and when the workload finishes, the container just goes away. For this to work well, we will need something to drive that process. In an upcoming blog post, we’ll talk more about how we can automate the running of PowerShell containers in Kubernetes.
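As a small preview of that idea, and purely as a sketch (the Job name here is made up), Kubernetes can run this same image to completion as a Job and then leave the logs behind for us to inspect:

# Run the PowerShell image once as a Kubernetes Job, executing a single cmdlet
kubectl create job pwsh-job --image=mcr.microsoft.com/powershell:latest -- pwsh -c "Get-Process"
# Read the output once the Job's Pod has completed
kubectl logs job/pwsh-job
# Clean up the Job when we're done
kubectl delete job pwsh-job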
 
In this post, we covered a lot: we looked at how you can interactively run PowerShell Core in a container, how you can pass cmdlets into a container at runtime, how to run different versions of PowerShell Core, and how you can persistently store scripts outside of containers in volumes and run those scripts in your containers. We also looked at how you can encapsulate the whole execution of a script and the container’s life cycle into one line of code, really giving you the ability to run PowerShell Core anywhere on any platform.
 
I hope you enjoyed this and are as excited as I am about how we can leverage this technology to solve new and unique problems in your data center and IT operations.
 

Using Kubernetes Deployments for Updating SQL Server

In Kubernetes, we can leverage Controllers to manage our applications and keep them in their desired state. In this blog post, we’re going to look at how to use a Deployment Controller to manage the application state of SQL Server in Kubernetes. We’ll look at deploying SQL Server in a Deployment and using that Deployment to upgrade SQL Server and roll back the upgrade.

Deploying SQL Server in a Deployment

Let’s start off with deploying SQL Server in Kubernetes. We can do that with the following YAML file to describe our Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2017-CU11-ubuntu'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-sql-data
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 31433
      targetPort: 1433
  type: NodePort

Listing 1: deployment-sql.yaml

There are a few things I want to point out in our YAML file. First, we’re using a Deployment Controller. This will implement a Replica Set of the desired number of replicas using the container image defined; in this case, we’ll have 1 replica using the SQL Server 2017 CU11 image. A Replica Set guarantees that a defined set of Pods is running at any given time; here we’ll have exactly one Pod. We’re using a Deployment Controller because it gives us the ability to move between versions of Replica Sets based on different container images in a controlled fashion…more on that in a second. I’d also like to point out the volume described in this manifest. Our container’s data directory, /var/opt/mssql, is mounted on a PersistentVolumeClaim. This means our data lives externally to our Pod; if our Pod is redeployed, our databases will still be in this directory, the volume will be mounted, and our databases will be made available. We’re also using a Service to provide a fixed IP and port for access to the SQL Server in this Deployment.

Let’s go ahead and apply the code in Listing 1: deployment-sql.yaml

kubectl create secret generic mssql --from-literal=SA_PASSWORD='OurR&4llyStr0ngP4ssw0rd!'
kubectl apply -f deployment-sql.yaml --record

With that applied, our SQL Server Deployment will schedule one Pod, start up the container, and expose it as a NodePort Service, and our SQL Server is up and running on the 2017 CU11 container image. The --record flag records the operation as an annotation on the resource, basically giving us some human-readable information about what we did with that command that we can use later.
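Before we move on, it’s worth sanity-checking that everything came up. Here’s a quick sketch of what I’d look at; the sqlcmd line assumes you have the client tools installed, and the node IP and node port are placeholders you’d fill in from your cluster:

# Confirm the Pod is Running and see which node port Kubernetes assigned to the Service
kubectl get pods
kubectl get service mssql-deployment
# Connect to the instance through the NodePort Service (values are placeholders)
sqlcmd -S <node-ip>,<node-port> -U sa -P 'OurR&4llyStr0ngP4ssw0rd!' -Q "SELECT @@VERSION"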

Deployments and Replica Sets

In Kubernetes, Deployments are made of Replica Sets. With our SQL Server Pod up and running from our Deployment, let’s start our investigation with kubectl describe deployment mssql-deployment. In the output below, we can see the deployment mssql-deployment started a Replica Set based off of the SQL Server 2017 CU11 image, and the Replica Set started for that container image is mssql-deployment-55bd89b84d.

kubectl describe deployment mssql-deployment

Name: mssql-deployment …output omitted
Pod Template:
Containers:
mssql:
Image: mcr.microsoft.com/mssql/server:2017-CU11-ubuntu
…output omitted
NewReplicaSet:   mssql-deployment-55bd89b84d (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 7m10s deployment-controller Scaled up replica set mssql-deployment-55bd89b84d to 1


Figure 1: SQL Server Deployment

Updating the Deployment with a New Container Image

Now we can use Deployments to easily move between versions of container images. So let’s update this 2017 CU11 container image to a 2017 CU12 container image. We can do that with this code:

kubectl set image deployment mssql-deployment mssql=mcr.microsoft.com/mssql/server:2017-CU12-ubuntu --record

With this command, we’re recording the update of the container image with --record, and we’re setting the container image for the mssql container in our Pod Template to 2017-CU12-ubuntu.

Now our container image is being updated using our defined update strategy…we defined that strategy way back in deployment-sql.yaml with the attribute strategy: type: Recreate. The Recreate update strategy shuts down the existing Pod(s) in the old Replica Set before starting the new Pod(s), with the new container image, in the new Replica Set we’re updating to. This makes sense for an RDBMS since we want only one Pod to have access to the data files at any point in time. The entire process takes only a few seconds, though you may have to wait while SQL Server runs update scripts on the databases. We can check the status with kubectl rollout status deployment mssql-deployment.

kubectl rollout status deployment mssql-deployment

Waiting for deployment "mssql-deployment" rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for deployment "mssql-deployment" rollout to finish: 0 of 1 updated replicas are available...
deployment "mssql-deployment" successfully rolled out
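A quick way to confirm which image the Deployment’s Pod template is now pointing at is to pull it straight out of the Deployment spec with jsonpath; a small sketch:

# Print just the container image currently defined in the Deployment's Pod template
kubectl get deployment mssql-deployment -o jsonpath='{.spec.template.spec.containers[0].image}'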

Now let’s look more closely at our Deployment again with kubectl describe deployment mssql-deployment. In the output below, we see the original Replica Set (mssql-deployment-55bd89b84d) scaled from 1 to 0 and our new Replica Set (mssql-deployment-6776c966b7), based off of the CU12 image, scaled from 0 to 1. I also want to point out that Kubernetes keeps the original Replica Set’s metadata around for us, which we can use to roll back if needed.

kubectl describe deployment mssql-deployment

Name: mssql-deployment
…output omitted
Pod Template:
Containers:
mssql:
Image: mcr.microsoft.com/mssql/server:2017-CU12-ubuntu
…output omitted
NewReplicaSet: mssql-deployment-6776c966b7 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 15m deployment-controller Scaled up replica set mssql-deployment-55bd89b84d to 1
Normal ScalingReplicaSet 114s deployment-controller Scaled down replica set mssql-deployment-55bd89b84d to 0
Normal ScalingReplicaSet 109s deployment-controller Scaled up replica set mssql-deployment-6776c966b7 to 1


Figure 2: SQL Server Deployment with updated container image

Check out the Revision History

If you want to check the history of the rollouts for your Deployment, along with the recorded changes you’ve made, you can use kubectl rollout history deployment mssql-deployment.

kubectl rollout history deployment mssql-deployment

REVISION  CHANGE-CAUSE
1         kubectl apply --filename=deployment-sql.yaml --record=true
2         kubectl set image deployment mssql-deployment mssql=mcr.microsoft.com/mssql/server:2017-CU12-ubuntu --record=true

With this we can see the history of changes to our Deployment: Revision 1, when we created the Deployment, and Revision 2, when we changed the image from CU11 to CU12.
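If you want to see the full Pod template that was recorded for a particular revision, rollout history can show a single revision in detail; for example, revision 2 is our CU12 change:

# Show the Pod template captured for revision 2 (the CU12 image)
kubectl rollout history deployment mssql-deployment --revision=2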

Rolling Back our SQL Server Deployment to the Previous Container Image

Now, if we need to roll back from CU12 to CU11, that’s quite easy in Kubernetes; we can do that with kubectl rollout undo deployment mssql-deployment --to-revision=1.

kubectl rollout undo deployment mssql-deployment --to-revision=1 

Then we can use kubectl describe deployment mssql-deployment to check the status of our Deployment rollback.

kubectl describe deployment mssql-deployment

Name: mssql-deployment
…output omitted
Pod Template:
Containers:
mssql:
Image: mcr.microsoft.com/mssql/server:2017-CU11-ubuntu
…output omitted
NewReplicaSet: mssql-deployment-55bd89b84d (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 7m55s deployment-controller Scaled down replica set mssql-deployment-55bd89b84d to 0
Normal ScalingReplicaSet 7m50s deployment-controller Scaled up replica set mssql-deployment-6776c966b7 to 1
Normal ScalingReplicaSet 18s deployment-controller Scaled down replica set mssql-deployment-6776c966b7 to 0 
Normal ScalingReplicaSet 12s (x2 over 21m) deployment-controller Scaled up replica set mssql-deployment-55bd89b84d to 1 

In the output above you can see our updated Replica Set (mssql-deployment-6776c966b7) is scaled from 1 to 0 and the original Replica Set (mssql-deployment-55bd89b84d) is scaled from 0 to 1, bringing the Replica Set backed by the CU11 image back online. As with the image update above, this entire process takes only a few seconds, and again, you may have to wait while SQL Server runs update scripts on the databases.
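As a side note, if you simply want to step back to the immediately previous revision, you can omit the --to-revision parameter and Kubernetes will roll back one revision for you:

# Roll back to the previous revision of the Deployment
kubectl rollout undo deployment mssql-deployment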

Summary

Kubernetes offers us many ways to manage our application state. Deployment Controllers give us the ability to easily move between versions of our application and roll back if needed. For SQL Server, this gives us a controlled way to move between Cumulative Updates, with a very quick and controlled way to roll back if needed. However, with SQL Server we also have to deal with upgrades that can’t easily be rolled back, as is the case when we change the database version. We can still use this technique to upgrade SQL Server between database versions, but we lose the ability to roll back; in those scenarios, testing is the best way to ensure you are compatible in the upgraded state. You’ll find this rollout method is amazingly simple and fast when you try it out.

Please feel free to contact me with any questions regarding Linux, Kubernetes or other SQL Server related issues at : aen@centinosystems.com

Speaking at SQLBits 2019

Speaking at SQLBits 2019!

I’m proud to announce that I will be speaking at SQLBits on March 2nd 2019! It’s been a goal of mine to speak at SQLBits for a few years now and I’m VERY excited for the opportunity! This year’s conference won’t let you down. Check out the amazing schedule of Experts and Microsoft MVPs!

If you haven’t been to SQLBits before, what are you waiting for? Sign up now!


Here are the details on my session!

Inside Kubernetes – An Architectural Deep Dive – March 2 2019 – 15:10 – Room 12

In this session we will introduce Kubernetes, and we’ll deep dive into each component and its responsibility in a cluster. We will also look at and demonstrate higher-level abstractions such as Services, Controllers, and Deployments and how they can be used to ensure the desired state of an application and data platform deployed in Kubernetes. Next, we’ll look at Kubernetes networking and intercluster communication patterns. With that foundation, we will then introduce various cluster scenarios such as single node, single head, and high availability designs. By the end of this session, you will understand what’s needed to put your applications and data platform into production in a Kubernetes cluster.

Session Objectives:
Understand Kubernetes cluster architecture
Understand Services, Controllers, and Deployments
Designing Production Ready Kubernetes Clusters

Right before my session, in the same room at 13:45, my good friend Andrew Pruski will be delivering a session on Kubernetes as well! His session is: SQL Server and Kubernetes!

Be sure to come to both sessions, Andrew will get you started on your Kubernetes journey and I’ll dive deep into how Kubernetes works!