Tag Archives: Containers

Upgrading SQL Server 2017 Containers to 2019 non-root Containers with Data Volumes – Another Method

Yesterday in this post I described a method to correct permissions when upgrading a SQL Server 2017 container using Data Volumes to 2019’s non-root container on implementations that use the Moby or HyperKit VM. My friend Steve Jones wondered on Twitter if you could do this in one step by attaching a shell (bash) in the 2017 container prior to shutdown. Absolutely…let’s walk through that here in this post. I opted to use an intermediate container in the prior post out of an abundance of caution so that I was not changing permissions on the SQL Server instance directory and all of the data files while they were in use. Technically this is a-ok, but again…just being paranoid there.

Start Up a Container with a Data Volume

Start up a container with a Data Volume (sqldata1) using the 2017 image. This will create the directories and files with root as the owner and group.

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2017-latest
597652b61b22b27ff6d765b48196621a79dd2ffd7798328868d2296c7e953950 

Create a Database

Let’s create a database and confirm it’s there.

sqlcmd -S localhost,1433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD
sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD -W

name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)

Get a Shell into the Container

Now, let’s get a shell into our running container. Logging in as root is great, isn’t it? :) 

docker exec -it sql1 /bin/bash
root@ed9051c6b5f3:/# 

Adjust the Permissions

Now, while we’re in the running 2017 container, we can adjust the permissions on the instance directory. The user mssql (uid 10001) doesn’t have to exist in the 2017 container; the key is setting the permissions using the uid directly.

ls -laR /var/opt/mssql          # review the current, root-owned permissions
chgrp -R 0 /var/opt/mssql       # set the group owner to root (gid 0)
chmod -R g=u /var/opt/mssql     # give the group the same permissions as the owner
chown -R 10001:0 /var/opt/mssql # set the owner to uid 10001, the mssql user in the 2019 image
ls -laR /var/opt/mssql          # confirm the new ownership
exit

Stop our Container

Now to start the process of upgrading from 2017 to 2019, we’ll stop and remove the existing container.

docker stop sql1
docker rm sql1
sql1 

Start up a 2019 non-root Container

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2019-GDR1-ubuntu-16.04 

Is Everything OK?

Are our databases there? Yep! 

sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD
name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)
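If you want one more sanity check that you’re now running the 2019 engine against the same data, a quick query (assuming the same sa credentials used above) does the trick:

sqlcmd -S localhost,1433 -U sa -P $PASSWORD -Q 'SELECT @@VERSION'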


Upgrading SQL Server 2017 Containers to 2019 non-root Containers with Data Volumes

Recently Microsoft released a Non-Root SQL Server 2019 container, and that’s the default if you’re pulling a new container image. But what if you’re using a 2017 container running as root and want to upgrade your system to the SQL Server 2019 container…well, something’s going to break. As you can see here, my friend Grant Fritchey came across this issue recently and asked for some help on Twitter’s #sqlhelp. This article describes a solution to getting things sorted and running again. The scenario below is for a Linux-based SQL Server container on a Windows or Mac host where the container volumes are backed by a Docker Moby or HyperKit virtual machine. If you’re using Linux containers on Linux, you’ll adjust the file system permissions directly.

What’s the issue?

When you start up the 2017 container, the SQL Server (sqlservr) process is running as root (uid 0). Any files created by this process will have the user and group ownership of the root user. Now, when we come along later and start up a 2019 container, the sqlservr process is running as the user mssql (uid 10001 by default). This new user doesn’t have permission to open the database files and other files used by SQL Server.
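If you want to see the mismatch for yourself before upgrading, a quick check from the host (assuming the 2017 container is still running and named sql1) is to list the ownership numerically; root-owned files show uid and gid 0:

docker exec sql1 ls -ln /var/opt/mssql/data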

How do we fix this?

The way I fixed this issue is by stopping the SQL Server 2017 container, attaching the data volume used by the 2017 container to another container, and then recursively adjusting the permissions to allow a user with the uid 10001 access to the files in the instance directory /var/opt/mssql. If your databases and log files are in other paths, you’ll have to take that into account if using this process. Once we adjust the permissions, we stop that Ubuntu container and start up SQL Server’s 2019 non-root container and everything should be happy happy. Let’s do it together…

Start Up a Container with a Data Volume

Start up a container with a Data Volume (sqldata1) using the 2017 image. This will create the files with root as the owner and group.

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2017-latest
597652b61b22b27ff6d765b48196621a79dd2ffd7798328868d2296c7e953950 

Create a Database

Let’s create a database and confirm it’s there.

sqlcmd -S localhost,1433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD
sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD -W

name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)

Stop our Container

Now to start the process of upgrading from 2017 to 2019, we’ll stop and remove the existing container.

docker stop sql1
docker rm sql1
sql1 

Start a 2019 non-root Container

Create a new container pointing to that existing Data Volume (sqldata1). This time I’m not using -d, so we stay attached to stdout and can see the error messages on the terminal. Here you can see that the sqlservr process is unable to open the file instance_id.

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
     mcr.microsoft.com/mssql/server:2019-GDR1-ubuntu-16.04

SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
Your master database file is owned by root.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
sqlservr: Unable to open /var/opt/mssql/.system/instance_id: Permission denied (13)
/opt/mssql/bin/sqlservr: Unable to open /var/opt/mssql/.system//instance_id: Permission denied (13)

Since that was a bust, let’s go ahead and delete that container; it’s not usable in this state. 

docker rm sql1
sql1 

Changing Permissions on the Files

Let’s create an intermediate container, in this case using an Ubuntu image, and mount that data volume (sqldata1), and then change the permissions on the files SQL Server needs to work with. 

docker run \
    --name 'permissionsarehard' \
    -v sqldata1:/var/opt/mssql \
    -it ubuntu:latest

If we look at the permissions of the instance directory (/var/opt/mssql/), we can see the files’ user and group owner are root. This is just a peek at the instance directory; we’ll need to adjust permissions recursively on all of the files SQL Server needs to work with within this directory.

ls -la /var/opt/mssql
/var/opt/mssql:
total 24
drwxr-xr-x 6 root root 4096 Nov 20 13:43 .
drwxr-xr-x 1 root root 4096 Nov 20 13:46 ..
drwxr-xr-x 5 root root 4096 Nov 20 13:43 .system
drwxr-xr-x 2 root root 4096 Nov 20 13:43 data
drwxr-xr-x 2 root root 4096 Nov 20 13:43 log
drwxr-xr-x 2 root root 4096 Nov 20 13:43 secrets

Let’s adjust the permissions on the directories and files sqlservr needs access to…again I want to point out that this is against the default instance directory, /var/opt/mssql…if you have files in other locations they will need their permissions updated too. Check out the Microsoft Docs article here for more information on this.

ls -laR /var/opt/mssql
chgrp -R 0 /var/opt/mssql
chmod -R g=u /var/opt/mssql
chown -R 10001:0 /var/opt/mssql
ls -laR /var/opt/mssql
exit

Here’s some output from a directory listing of our instance directory after we’ve made the permission changes…now the directories have an owner of 10001 and a group owner of root.

ls -la /var/opt/mssql
/var/opt/mssql:
total 24
drwxrwxr-x 6 10001 root 4096 Nov 20 13:43 .
drwxr-xr-x 1 root  root 4096 Nov 20 13:46 ..
drwxrwxr-x 5 10001 root 4096 Nov 20 13:43 .system
drwxrwxr-x 2 10001 root 4096 Nov 20 13:43 data
drwxrwxr-x 2 10001 root 4096 Nov 20 13:43 log
drwxrwxr-x 2 10001 root 4096 Nov 20 13:43 secrets

Let’s start up a 2019 non-root container now

Start up our 2019 container now…should work eh? Woot!

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2019-GDR1-ubuntu-16.04 

Why UID 10001?

Let’s hop into the container now that it’s up and running…and we’ll see sqlservr is running as mssql, which has a uid of 10001. This is the default uid used inside the non-root container. If you’re using a system that doesn’t have this user defined, like the intermediate Ubuntu container, you’ll need to adjust permissions using the uid directly. That ownership information is written onto the directories and files, so when we start up the 2019 container the correct permissions are already in place, since the uid of the mssql user matches the uid that owns the files and directories.

docker exec -it sql1 /bin/bash

ps -aux
USER   PID %CPU %MEM     VSZ    RSS TTY   STAT START TIME COMMAND
mssql    1  8.4  0.3  148820  22768 ?     Ssl  13:49 0:00 /opt/mssql/bin/
mssql    9 96.5  9.3 7470104 570680 ?     Sl   13:49 0:03 /opt/mssql/bin/
mssql  140  2.0  0.0   18220   3060 pts/0 Ss   13:49 0:00 /bin/bash
mssql  148  0.0  0.0   34420   2792 pts/0 R+   13:49 0:00 ps -aux

id mssql
uid=10001(mssql) gid=0(root) groups=0(root)
exit
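As an aside, Docker’s --user flag is the general mechanism for running a container’s process under a specific uid. The sketch below is purely illustrative (the 2019 image already defaults to 10001:0, and the name and port are hypothetical to avoid clashing with the running sql1); whatever uid you choose has to match the ownership you set on the files:

docker run --user 10001:0 \
    --name 'sql2' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1434:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2019-GDR1-ubuntu-16.04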

Is Everything OK?

Are our databases there? Yep! 

sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD
name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)

Another Method

If you like living on the edge, you can correct the permissions by logging into the running 2017 container prior to shutdown, without using an intermediate container. Check out this post here.

Speaking at PASS Summit 2019!

I’m very pleased to announce that I will be speaking at PASS Summit 2019! This is my second time speaking at PASS Summit and I’m very excited to be doing so! What’s more, I get to help blaze new ground with an emerging technology, Kubernetes, and how to run SQL Server in Kubernetes!

My session is Inside Kubernetes – An Architectural Deep Dive. If you’re just getting started in the container space and want to learn how Kubernetes works and dive into how to deploy SQL Server in Kubernetes, this is the session for you. I hope to see you there!

Inside Kubernetes – An Architectural Deep Dive

Abstract

In this session we will introduce Kubernetes and deep dive into each component and its responsibility in a cluster. We will also look at and demonstrate higher-level abstractions such as Services, Controllers, and Deployments, and how they can be used to ensure the desired state of an application and data platform deployed in Kubernetes. Next, we’ll look at Kubernetes networking and intercluster communication patterns. With that foundation, we will then introduce various cluster scenarios and high availability designs. By the end of this session, you will understand what’s needed to put your applications and data platform in production in a Kubernetes cluster.

In addition to my session, be sure to check out the following sessions on Kubernetes by my friends Bob Ward and Hamish Watson; I’m certainly going to be at both of these sessions!

PASS Summit 2019

Updated: Getting Started with Installing Kubernetes

Let’s get you started on your Kubernetes journey with installing Kubernetes and creating a cluster in virtual machines.

Kubernetes is a distributed system: you will be creating a cluster which will have a master node that is in charge of all operations in your cluster. In this walkthrough we’ll create three workers which will run our applications. This cluster topology is, by no means, production ready. If you’re looking for production cluster builds, check out the Kubernetes documentation here and here. The primary components that need high availability in a Kubernetes cluster are the API Server, which controls the state of the cluster, and the etcd database, which persists the state of the cluster. You can learn more about Kubernetes cluster components here. If you want to dive into Kubernetes more, check out my Pluralsight Courses here, where I have a dedicated course on Installation and Configuration.

In our demonstration here, the master is where the API Server, etcd, and the other control plane functions will live. The workers/nodes will be joined to the cluster and run our application workloads. 

Get your infrastructure sorted

I’m using 4 Ubuntu virtual machines in VMware Fusion on my Mac, each with 2 vCPUs and 2GB of RAM, running Ubuntu 16.04.5. Ubuntu 18 requires a slightly different install, documented here; in that process you add the Docker repository, then install Docker from there. The instructions below get Docker from Ubuntu’s repository. You will also need to disable swap on any system on which you will run the kubelet, which in our case is all systems. To do so, turn swap off with sudo swapoff -a and edit /etc/fstab, removing or commenting out the swap volume entry. 

  • c1-master1 – 172.16.94.15
  • c1-node1 – DHCP
  • c1-node2 – DHCP
  • c1-node3 – DHCP

Ensure that each host has a unique name and that all nodes have network reachability to each other. Take note of the IPs, because you will need to log into each node with SSH. If you need assistance getting your environment ready, check out my training on Pluralsight to get you started here! I have courses on installation and command line basics all the way up through advanced topics on networking and performance.
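As a reminder of the swap step above, here’s a minimal sketch of what that looks like on each node (assuming your /etc/fstab uses a conventional swap entry; review the file after the sed before relying on it):

sudo swapoff -a                                   # turn swap off for the running system
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab    # comment out the swap entry so it stays off after a reboot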

Overview of the cluster creation process

  • Install Kubernetes packages on all nodes
    • Add Kubernetes’ apt repositories
    • Install the required software packages for Kubernetes
  • Download deployment files for your Pod Network
  • Create a Kubernetes cluster on the Master
    • We’re going to use a utility called kubeadm to create our cluster with a basic configuration
  • Install a Pod Network
  • Join our three worker nodes to our cluster

Install Kubernetes Packages

Let’s start off by installing the required Kubernetes packages onto all of the nodes in our system. This is going to require logging into each server via SSH (or console), adding the Kubernetes apt repositories, and installing the required packages. Perform the following tasks on ALL nodes in your cluster, the master and the three workers. If you add more nodes, you will need to install these packages on those nodes too.

Add the gpg key for the Kubernetes apt repository to your local system

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Add the Kubernetes apt repository to your local repository locations
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
Next, we’ll update our apt package lists
sudo apt-get update
Install the required packages
sudo apt-get install -y docker.io kubelet kubeadm kubectl
Then we need to tell apt to not update these packages. 
sudo apt-mark hold docker.io kubelet kubeadm kubectl
With Docker installed, we need to make one adjustment to its configuration, changing the cgroup driver to systemd.
sudo bash -c 'cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF'
With that file created, go ahead and reload the systemd configuration and restart the docker daemon.
sudo systemctl daemon-reload
sudo systemctl restart docker
Here’s what you just installed
  • kubelet – On each node in the cluster, this is in charge of starting and stopping pods in response to the state defined on the API Server on the master 
  • kubeadm – Primary command line utility for creating your cluster
  • kubectl – Primary command line utility for working with your cluster
  • docker – Remember that Kubernetes is a container orchestrator, so we’ll need a container runtime to run your containers. We’re using Docker. You can use other container runtimes if required
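Once the packages are installed and held, a quick way to confirm what landed is to check the versions (your output will vary depending on when you install):

kubeadm version
kubectl version --client
kubelet --version
docker --version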

Download the YAML files for your Pod Network

Now, only on the Master, let’s download the YAML deployment file for your Pod network and get our cluster created. Networking in Kubernetes is different than what you’d expect. For Pods on different nodes to be able to communicate with each other on the same IP network, you’ll want to create a Pod network, which is essentially an overlay network that gives you a uniform address space for Pods to operate in. The decision of which Pod network to use, or even if you need one, is very dependent on your local or cloud infrastructure. For this demo, I’m going to use the Calico Pod network overlay. The code below will download the Pod network manifest in YAML and we’ll deploy it into our cluster. This creates a DaemonSet. A DaemonSet is a Kubernetes Controller that will start the specified Pod on all or some of the nodes in the cluster. In this case, the Calico network Pod will be deployed on all nodes in our cluster. So as we join nodes, you might see some delay in nodes becoming Ready…this is because the container image is being pulled and started on the node.
 
Download the YAML for the Pod network
wget https://docs.projectcalico.org/master/manifests/calico.yaml
If you need to change the address range of your Pod network, edit calico.yaml: look for the name: CALICO_IPV4POOL_CIDR and set the value: to your desired CIDR range. It’s 192.168.0.0/16 by default. 
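For example, a hypothetical one-liner to switch the pool to 10.244.0.0/16, assuming the default value appears literally (and uncommented) in your copy of calico.yaml; double-check the file afterward:

sed -i 's|192.168.0.0/16|10.244.0.0/16|' calico.yaml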

Creating a Kubernetes Cluster

Now we’re ready to create our Kubernetes cluster. We’re going to use kubeadm to help us get this done; it’s a community-based tool that does a lot of the heavy lifting for you.
 
To create a cluster, run the command below; here we’re specifying a CIDR range to match the one in our calico.yaml file.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
What’s happening behind the scenes with kubeadm init:
  • Creates a certificate authority – Kubernetes uses certificates to secure communication between components, verify the identity of Nodes in the cluster and authenticate users.
  • Creates kubeconfig files – On the Master, this will create configuration files for various Kubernetes cluster components
  • Pulls Control Plane container images – the services implementing the cluster components are deployed into the cluster as containers. Very cool! You can, of course, run these as local system daemons on the hosts, but Kubernetes suggests keeping them inside containers
  • Bootstraps the Control Plane Pods – starts up the control plane Pods and creates static Pod manifests on the master so they start automatically when the master node starts up
  • Taints the Master to just system pods – this means the master will run (schedule) only system Pods, not user Pods. This is ideal for production. In testing you may want to untaint the master; you’ll really want to do this if you’re running a single-node cluster. See this link for details on that.
  • Generates a bootstrap token – used to join worker nodes to the cluster
  • Starts any add-ons – the most common add-ons are the DNS pod and the master’s kube-proxy
If you see this output, you’re good to go! Keep that join command handy. We’ll need it in a second.
[init] Using Kubernetes version: v1.16.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
…output omitted…
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:  /docs/concepts/cluster-administration/addons/ 
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.94.20:6443 --token czpkcj.ncl6p005orlie95h \
    --discovery-token-ca-cert-hash sha256:3e21bb225c0986330ba11dd37c51fcd6542928964832705e13b84354872270bd

The output from your cluster creation is very important: it gives you the commands needed to configure access to your cluster, to create your Pod network, and to join worker nodes to your cluster (just go ahead and copy this into a text file right now). Let’s go through each of those together.

Configuring your cluster for access from the Master node as a non-privileged user

This will allow you to log into your system with a regular account and administer your cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
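With the kubeconfig in place, a quick sanity check that you can reach the API Server as that regular user:

kubectl cluster-info
kubectl get nodes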

Create your Pod network

Now that your cluster is created, you can deploy the YAML files for your Pod network. You must do this prior to adding more nodes to your cluster and certainly before starting any Pods on those nodes. We are going to use kubectl apply -f calico.yaml to deploy the Pod network from the YAML manifest we downloaded earlier. 

kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Before moving forward, check for the creation of the Calico Pods and also the DNS Pods; once these are created and their STATUS is Running, you can proceed. In this output you can also see the other components of your Kubernetes cluster: the Pods running etcd, the API Server, the Controller Manager, kube-proxy, and the Scheduler.
kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7594bb948-4mgqd   1/1     Running   0          2m58s
kube-system   calico-node-qpcv7                         1/1     Running   0          2m58s
kube-system   coredns-5644d7b6d9-2lxgt                  1/1     Running   0          3m42s
kube-system   coredns-5644d7b6d9-g5tfc                  1/1     Running   0          3m42s
kube-system   etcd-c2-master1                           1/1     Running   0          2m50s
kube-system   kube-apiserver-c2-master1                 1/1     Running   0          2m41s
kube-system   kube-controller-manager-c2-master1        1/1     Running   0          3m5s
kube-system   kube-proxy-d2c6s                          1/1     Running   0          3m42s
kube-system   kube-scheduler-c2-master1                 1/1     Running   0          2m44s

Joining worker nodes to your cluster

Now, on each of the worker nodes, let’s use kubeadm join to join the worker nodes to the cluster. Go back to the output of kubeadm init and copy the join command from that output, and be sure to put sudo on the front before you run it on each node. The process below is called a TLS bootstrap. This securely joins the node to the cluster over TLS and authenticates the host with server certificates.
sudo kubeadm join 172.16.94.20:6443 \
>     --token czpkcj.ncl6p005orlie95h \
>     --discovery-token-ca-cert-hash sha256:3e21bb225c0986330ba11dd37c51fcd6542928964832705e13b84354872270bd
[sudo] password for aen:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster. 
If you didn’t keep the token or the CA Cert Hash from the earlier steps, go back to the master and run these commands. Also note that the join token is only valid for 24 hours. 
 
To get the current join token
kubeadm token list
To get the CA Cert Hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
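On newer versions of kubeadm you can also have it do both in one shot; run this on the master and it will create a fresh token and print the full join command for you:

kubeadm token create --print-join-command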
Back on the master, check on the status of your nodes joining the cluster. These nodes are currently NotReady; behind the scenes they’re pulling the Calico Pod images and setting up the Pod network.
kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
c1-master1   Ready      master   11m   v1.16.1
c1-node1     NotReady   <none>   63s   v1.16.1
c1-node2     NotReady   <none>   57s   v1.16.1
c1-node3     NotReady   <none>   33s   v1.16.1
And here we are with a fully functional Kubernetes cluster! All nodes joined and Ready.
kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
c1-master1   Ready    master   12m     v1.16.1
c1-node1     Ready    <none>   3m04s   v1.16.1
c1-node2     Ready    <none>   2m31s   v1.16.1
c1-node3     Ready    <none>   1m28s   v1.16.1
Please feel free to contact me with any questions regarding Kubernetes, Linux and other SQL Server related issues at: aen@centinosystems.com 

Memory Settings for Running SQL Server in Kubernetes

People often ask me what’s the number one thing to look out for when running SQL Server on Kubernetes…the answer is memory settings. In this post, we’re going to dig into why you need to configure resource limits in your SQL Server’s Pod Spec when running SQL Server workloads in Kubernetes. I’m running these demos in Azure Kubernetes Service (AKS), but these concepts apply to any SQL Server environment running in Kubernetes. 

Let’s deploy SQL Server in a Pod without any resource limits.  In the yaml below, we’re using a Deployment to run one SQL Server Pod with a PersistentVolumeClaim for our instance directory and also frontending the Pod with a Service for access. 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment-2017
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
        app: mssql-2017
  template:
    metadata:
      labels:
        app: mssql-2017
    spec:
      hostname: sql3
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2017-CU16-ubuntu'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-sql-2017
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sql-2017
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: managed-premium
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-svc-2017
spec:
  selector:
    app: mssql-2017
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer

Running a Workload Against our Pod…then BOOM!

With that Pod deployed, I loaded up a HammerDB TPC-C test with about 10GB of data and drove a workload against our SQL Server. Then while monitoring the workload…boom HammerDB throws connection errors and crashes. Let’s look at why.

First things first, let’s check the Pods’ status with kubectl get pods. Well, that’s interesting…I have 13 Pods: 1 has a Status of Running and the remainder are Evicted.

kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
mssql-deployment-2017-8698fb8bf5-2pw2z   0/1     Evicted   0          8m24s
mssql-deployment-2017-8698fb8bf5-4bn6c   0/1     Evicted   0          8m23s
mssql-deployment-2017-8698fb8bf5-4pw7d   0/1     Evicted   0          8m25s
mssql-deployment-2017-8698fb8bf5-54k6k   0/1     Evicted   0          8m27s
mssql-deployment-2017-8698fb8bf5-96lzf   0/1     Evicted   0          8m26s
mssql-deployment-2017-8698fb8bf5-clrbx   0/1     Evicted   0          8m27s
mssql-deployment-2017-8698fb8bf5-cp6ml   0/1     Evicted   0          8m27s
mssql-deployment-2017-8698fb8bf5-ln8zt   0/1     Evicted   0          8m27s
mssql-deployment-2017-8698fb8bf5-nmq65   0/1     Evicted   0          8m21s
mssql-deployment-2017-8698fb8bf5-p2mvm   0/1     Evicted   0          25h
mssql-deployment-2017-8698fb8bf5-stzfw   0/1     Evicted   0          8m23s
mssql-deployment-2017-8698fb8bf5-td24w   1/1     Running   0          8m20s
mssql-deployment-2017-8698fb8bf5-wpgcx   0/1     Evicted   0          8m22s

What Just Happened?

Let’s keep digging and look at kubectl get events to see if that can help us sort out what’s happening…reading through these events, a lot is going on. Let’s start at the top: we can see that our original Pod mssql-deployment-2017-8698fb8bf5-p2mvm is Killed and the line below that tells us why, the Node had a MemoryPressure condition. A few lines below that we see that our mssql container was using 4461532Ki, which exceeded its request of 0 (more on why it’s 0 in a bit). So our Deployment Controller sees that our Pod is no longer up and running and does what it’s supposed to do: start a new Pod in place of the failed Pod.
 
The scheduler in Kubernetes will try to place a Pod back onto the same Node if the Node is still available, in our case aks-agentpool-43452558-0. Each time the scheduler places the Pod back onto the same Node it finds that the MemoryPressure condition is still true, so after the 10th try the scheduler selects a new Node, aks-agentpool-43452558-3, to run our Pod. And in the last line of the output below we can see that once the workload is moved to aks-agentpool-43452558-3, the MemoryPressure condition goes away on aks-agentpool-43452558-0 since it’s no longer running our workload. 
 
kubectl get events --sort-by=.metadata.creationTimestamp
LAST SEEN   TYPE      REASON                      OBJECT                                        MESSAGE
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-clrbx    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-clrbx to aks-agentpool-43452558-0
17m         Warning   EvictionThresholdMet        node/aks-agentpool-43452558-0                 Attempting to reclaim memory
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-clrbx
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-ln8zt
17m         Normal    Killing                     pod/mssql-deployment-2017-8698fb8bf5-p2mvm    Stopping container mssql
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-54k6k    The node had condition: [MemoryPressure].
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-p2mvm    The node was low on resource: memory. Container mssql was using 4461532Ki, which exceeds its request of 0.
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-cp6ml    The node had condition: [MemoryPressure].
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-cp6ml    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-cp6ml to aks-agentpool-43452558-0
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-54k6k    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-54k6k to aks-agentpool-43452558-0
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-clrbx    The node had condition: [MemoryPressure].
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-cp6ml
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-54k6k
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-ln8zt    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-ln8zt to aks-agentpool-43452558-0
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-96lzf    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-96lzf to aks-agentpool-43452558-0
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-96lzf
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-ln8zt    The node had condition: [MemoryPressure].
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-96lzf    The node had condition: [MemoryPressure].
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-4pw7d    The node had condition: [MemoryPressure].
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-4pw7d    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-4pw7d to aks-agentpool-43452558-0
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-4pw7d
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-2pw2z    The node had condition: [MemoryPressure].
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-2pw2z    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-2pw2z to aks-agentpool-43452558-0
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-2pw2z
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-4bn6c    The node had condition: [MemoryPressure].
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-4bn6c
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   Created pod: mssql-deployment-2017-8698fb8bf5-stzfw
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-4bn6c    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-4bn6c to aks-agentpool-43452558-0
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-stzfw    The node had condition: [MemoryPressure].
17m         Normal    SuccessfulCreate            replicaset/mssql-deployment-2017-8698fb8bf5   (combined from similar events): Created pod: mssql-deployment-2017-8698fb8bf5-td24w
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-wpgcx    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-wpgcx to aks-agentpool-43452558-0
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-wpgcx    The node had condition: [MemoryPressure].
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-stzfw    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-stzfw to aks-agentpool-43452558-3
17m         Warning   Evicted                     pod/mssql-deployment-2017-8698fb8bf5-nmq65    The node had condition: [MemoryPressure].
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-nmq65    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-nmq65 to aks-agentpool-43452558-0
17m         Normal    NodeHasInsufficientMemory   node/aks-agentpool-43452558-0                 Node aks-agentpool-43452558-0 status is now: NodeHasInsufficientMemory
17m         Normal    Scheduled                   pod/mssql-deployment-2017-8698fb8bf5-td24w    Successfully assigned default/mssql-deployment-2017-8698fb8bf5-td24w to aks-agentpool-43452558-3
16m         Normal    SuccessfulAttachVolume      pod/mssql-deployment-2017-8698fb8bf5-td24w    AttachVolume.Attach succeeded for volume "pvc-f35b270a-e063-11e9-9b6d-ee8baa4f9319"
15m         Normal    Pulling                     pod/mssql-deployment-2017-8698fb8bf5-td24w    Pulling image "mcr.microsoft.com/mssql/server:2017-CU16-ubuntu"
15m         Normal    Pulled                      pod/mssql-deployment-2017-8698fb8bf5-td24w    Successfully pulled image "mcr.microsoft.com/mssql/server:2017-CU16-ubuntu"
15m         Normal    Started                     pod/mssql-deployment-2017-8698fb8bf5-td24w    Started container mssql
15m         Normal    Created                     pod/mssql-deployment-2017-8698fb8bf5-td24w    Created container mssql
12m         Normal    NodeHasSufficientMemory     node/aks-agentpool-43452558-0                 Node aks-agentpool-43452558-0 status is now: NodeHasSufficientMemory
 
But guess what…we’re going to have the same problem on this new Node. If we run our workload again, our memory allocation will grow and Kubernetes will kill the Pod again once the MemoryPressure condition is met. So what do we do…how can we prevent our nodes from going into a MemoryPressure condition? 

Understanding Allocatable Memory in Kubernetes 

Using kubectl describe node, in the output below there’s a section Allocatable. In there we can see the amount of allocatable resources on this Node in terms of CPU, disk, RAM, and Pods. These are the resources available to run user Pods on this Node. And there we see the amount of allocatable memory is 4667840Ki (~4.45GB), so we have about that much memory to run our workloads. The amount here is a function of the amount of memory in the Node and reservations made by Kubernetes for system functions, more on that here. Our AKS cluster VMs are Standard DS2 v2, which have 2 vCPUs and 7GB of RAM, so about 2.55GB is reserved for other uses. The output below is from after our Pod was evicted, so the LastTransitionTime shows the last time a condition occurred, and for MemoryPressure we can see an event at 7:53 AM. The other LastTransitionTimes are from when the Node was started. Another key point is the Events section, where we can see the conditions change state.
 
kubectl describe nodes aks-agentpool-43452558-0
Name:               aks-agentpool-43452558-0
...output omitted...
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 10 Sep 2019 16:20:00 -0500   Tue, 10 Sep 2019 16:20:00 -0500   RouteCreated                 RouteController created a route
  MemoryPressure       False   Sat, 28 Sep 2019 07:58:56 -0500   Sat, 28 Sep 2019 07:53:55 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 28 Sep 2019 07:58:56 -0500   Tue, 10 Sep 2019 16:18:27 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 28 Sep 2019 07:58:56 -0500   Tue, 10 Sep 2019 16:18:27 -0500   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 28 Sep 2019 07:58:56 -0500   Tue, 10 Sep 2019 16:18:27 -0500   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  Hostname:    aks-agentpool-43452558-0
  InternalIP:  10.240.0.6
Capacity:
 attachable-volumes-azure-disk:  8
 cpu:                            2
 ephemeral-storage:              101584140Ki
 hugepages-1Gi:                  0
 hugepages-2Mi:                  0
 memory:                         7113152Ki
 pods:                           110
Allocatable:
 attachable-volumes-azure-disk:  8
 cpu:                            1931m
 ephemeral-storage:              93619943269
 hugepages-1Gi:                  0
 hugepages-2Mi:                  0
 memory:                         4667840Ki
 pods:                           110
...output omitted...
Events:
Type     Reason                     Age                  From                               Message
  ----     ------                     ----                 ----                               -------
  Warning  EvictionThresholdMet       10m                  kubelet, aks-agentpool-43452558-0  Attempting to reclaim memory
  Normal   NodeHasInsufficientMemory  10m                  kubelet, aks-agentpool-43452558-0  Node aks-agentpool-43452558-0 status is now: NodeHasInsufficientMemory
  Normal   NodeHasSufficientMemory    5m15s (x2 over 14d)  kubelet, aks-agentpool-43452558-0  Node aks-agentpool-43452558-0 status is now: NodeHasSufficientMemory
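If you just want the allocatable memory value without wading through the full describe output, a jsonpath query against the same node pulls it out directly:

kubectl get node aks-agentpool-43452558-0 -o jsonpath='{.status.allocatable.memory}'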

SQL Server’s View of Memory on Kubernetes Nodes

When using a Pod with no memory limits defined in the Pod Spec (which is why we saw 0 for the request in the Event entry), SQL Server sees 5557MB (~5.4GB) of memory available and thinks it has that to use. Why is that? Well, SQL Server on Linux looks at the base OS to see how much memory is available on the system and by default uses approximately 80% of that memory due to its architecture (SQLPAL).
2019-09-28 14:46:16.23 Server      Detected 5557 MB of RAM. This is an informational message; no user action is required. 
This is bad news in our situation: Kubernetes has only 4667840Ki (~4.45GB) to allocate before setting the MemoryPressure condition, which will cause our Pod to be Evicted and Terminated. As our workload runs, SQL Server allocates memory, primarily to the buffer pool, and when that exceeds the Allocatable amount of memory for the Node, Kubernetes kills our Pod to protect the Node and the cluster as a whole. 
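To make those numbers concrete, here’s the arithmetic implied by the error log line and the describe output above:

Node capacity:    7113152Ki ≈ 6946MB
SQLPAL default:   ~80% of 6946MB ≈ 5557MB   (what SQL Server detects)
Node allocatable: 4667840Ki ≈ 4558MB ≈ 4.45GB   (what Kubernetes will tolerate before eviction)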

Configuring Pod Limits for SQL Server

So how do we fix all of this? We need to set a resource limit in our Pod Spec. Limits allow us to control the amount of a particular resource exposed to a Pod, and in our case we want to limit the amount of memory SQL Server sees. In our environment we know we have 4667840Ki (~4.45GB) of Allocatable memory for user Pods on Nodes, so let’s set a value lower than that…and to be super safe I’m going to use 3GB. In the code below you can see that in the Pod Spec for our mssql container we have a resources section with a limits entry and a value of memory: “3Gi”.

    spec:
      hostname: sql3
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2017-CU16-ubuntu'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        resources:
          limits:
            memory: "3Gi"
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-sql-system-2017
With this configured, we limit the amount of memory SQL Server sees to 3GB. Given that the container is running SQL Server on Linux, SQL Server will actually see about 80% of that, 2458MB:
2019-09-28 14:01:46.16 Server      Detected 2458 MB of RAM. This is an informational message; no user action is required.

Summary

With that, I hope you can see why I consider memory settings the number one thing to look out for when deploying SQL Server in Kubernetes. Setting appropriate values will ensure that your SQL Server instance on Kubernetes stays up and running happily alongside the other workloads you have running in your cluster. What’s the best value to set? We need to take into account the amount of memory on the Node, the amount of memory we need to run our workload in SQL Server, and the reservations needed by both Kubernetes and SQLPAL. Additionally, we should set the max server memory instance-level setting inside of SQL Server to limit the amount of memory SQL Server can allocate. My suggestion to you is to configure both a resource limit in the Pod Spec and max server memory at the instance level.
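As an illustration of that instance-level setting, here’s what configuring max server memory could look like in T-SQL; 2048MB is just an example value, pick one that fits under your Pod’s limit after accounting for SQLPAL’s overhead:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;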

If you want to read more about resource management and Pod eviction, check out the Kubernetes documentation on those topics.


Using kubectl logs to read the SQL Server Error Log in Kubernetes

When working with SQL Server running in containers, the Error Log is written to standard out, and Kubernetes will expose that information to you via kubectl. Let’s check out how it works.

First, let’s start up a Pod running SQL Server and grab the Pod name.

kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
mssql-deployment-56d8dbb7b7-hrqwj   1/1     Running   0          22m

We can use the --follow flag and that will continuously write the error log to your console, similar to using tail with the -f option. If you remove the --follow flag, it will write the current log to your console. This can be useful in debugging failed startups or, in the case below, monitoring the status of a database restore. When finished, you can use CTRL+C to break out and return to your prompt.

kubectl logs mssql-deployment-56d8dbb7b7-hrqwj --follow

Will yield the following output

SQL Server 2019 will run as non-root by default.
This container is running as user root.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
This is an evaluation version.  There are [157] days left in the evaluation period.
2019-09-12 18:11:06.74 Server      Setup step is copying system data file 'C:\templatedata\master.mdf' to '/var/opt/mssql/data/master.mdf'.
2019-09-12 18:11:06.82 Server      Did not find an existing master data file /var/opt/mssql/data/master.mdf, copying the missing default master and other system database files. If you have moved the database location, but not moved the database files, startup may fail. To repair: shutdown SQL Server, move the master database to configured location, and restart.
2019-09-12 18:11:06.83 Server      Setup step is copying system data file 'C:\templatedata\mastlog.ldf' to '/var/opt/mssql/data/mastlog.ldf'.
2019-09-12 18:11:06.85 Server      Setup step is copying system data file 'C:\templatedata\model.mdf' to '/var/opt/mssql/data/model.mdf'.
2019-09-12 18:11:06.87 Server      Setup step is copying system data file 'C:\templatedata\modellog.ldf' to '/var/opt/mssql/data/modellog.ldf'.
2019-09-12 18:11:06.89 Server      Setup step is copying system data file 'C:\templatedata\msdbdata.mdf' to '/var/opt/mssql/data/msdbdata.mdf'.
...output omitted...
2019-09-12 18:11:12.37 spid9s      Database 'msdb' running the upgrade step from version 903 to version 904.
2019-09-12 18:11:12.52 spid9s      Recovery is complete. This is an informational message only. No user action is required.
2019-09-12 18:11:12.55 spid20s     The default language (LCID 0) has been set for engine and full-text services.
2019-09-12 18:11:12.87 spid20s     The tempdb database has 2 data file(s).
2019-09-12 18:14:29.78 spid56      Attempting to load library 'xpstar.dll' into memory. This is an informational message only. No user action is required.
2019-09-12 18:14:29.84 spid56      Using 'xpstar.dll' version '2019.150.1900' to execute extended stored procedure 'xp_instance_regread'. This is an informational message only; no user action is required.
2019-09-12 18:14:30.00 spid56      Attempting to load library 'xplog70.dll' into memory. This is an informational message only. No user action is required.
 
2019-09-12 18:14:30.05 spid56      Using 'xplog70.dll' version '2019.150.1900' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.
...output omitted...
2019-09-12 18:32:32.40 spid66      [5]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 1.
2019-09-12 18:32:32.41 spid66      Starting up database 'DB1'.
2019-09-12 18:32:32.72 spid66      The database 'DB1' is marked RESTORING and is in a state that does not allow recovery to be run.
2019-09-12 18:32:37.44 Backup      Database was restored: Database: DB1  creation date(time): 2019/05/11(13:32:05), first LSN: 148853:1000384:1, last LSN: 148853:1067344:1, number of dump devices: 1, device information: (FILE=1, TYPE=URL: {'https://yourenotallowtoknow.blob.core.windows.net/servername/DB1_FULL_20190912_020000.bak'}). Informational message. No user action required.
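A couple of related kubectl logs flags worth knowing: --tail limits how much of the log you pull back, and --previous shows the log from the prior container instance, which is handy when a Pod has crashed and restarted:

kubectl logs mssql-deployment-56d8dbb7b7-hrqwj --tail=50
kubectl logs mssql-deployment-56d8dbb7b7-hrqwj --previous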

New Pluralsight Course – Managing Kubernetes Controllers and Deployments

My new course “Managing Kubernetes Controllers and Deployments” is now available on Pluralsight here! Check out the trailer here or, if you want to dive right in, go here! This course offers practical tips from my experiences managing Kubernetes clusters and workloads for Centino Systems clients.
 

This course targets IT professionals who design and maintain Kubernetes and container-based solutions. The course can be used by the IT pro learning new skills as well as the system administrator or developer preparing to use Kubernetes both on premises and in the Cloud. 

Let’s take your Kubernetes administration and configuration skills to the next level and get you started now!

The modules of the course are:

  • Using Controllers to Deploy Applications and Deployment Basics – In this module we dive into what Controllers are and how they can be used to deploy applications in Kubernetes. We’ll introduce several core controller types and look at the fundamentals of using the Deployment Controller to deploy applications and take a deep dive into the Controller operations of ReplicaSets.
  • Maintaining Applications with Deployments – In this demo-heavy module, we look closer at Deployments and learn how we can maintain our container based applications. We look at updating Deployments, controlling rollouts and using updateStrategy and readinessProbes to ensure successful rollouts. We’ll also cover what to do when things go wrong and learn how to pause and rollback rollouts.
  • Deploying and Maintaining Applications with DaemonSets and Jobs – In this module, we introduce the DaemonSet controller and how it’s used to deploy applications to all Nodes or a subset of Nodes in our cluster, we’ll also cover DaemonSet operations such as updating and controlling rollouts. We wrap up the course with a look at how we can use Jobs and CronJobs to ensure work completes in our cluster. 


Check out the course at Pluralsight!

 

Workshop – Kubernetes Zero to Hero at SQL Saturday Denver!

Pre-conference Workshop at SQLSaturday Denver

I’m proud to announce that I will be presenting an all-day pre-conference workshop at SQLSaturday Denver on October 11th, 2019! This one won’t let you down! 

The workshop is “Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment” 


Here’s the abstract for the workshop

Modern application deployment needs to be fast and consistent to keep up with business objectives and Kubernetes is quickly becoming the standard for deploying container-based applications, fast. In this day-long session, we will start with an architectural overview of a Kubernetes cluster and how it manages application state. Then we will learn how to build a production-ready cluster. With our cluster up and running, we will learn how to interact with our cluster, common administrative tasks, then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and how to keep everything up and running.

Session Objectives

  • Introduce Kubernetes Cluster Components
  • Introduce Kubernetes API Objects and Controllers
  • Installing Kubernetes
  • Interacting with your cluster
  • Storing persistent data in Kubernetes
  • Deploying Applications in Kubernetes
  • Deploying SQL Server in Kubernetes
  • High Availability scenarios in Kubernetes

FAQs

How much does it cost?

The full day training event is $150 per attendee.

What can I bring into the event?
WiFi at the location is limited. The workshop will be primarily demonstration based. Code will be made available for download prior to the event if you would like to follow along during the session.

How can I contact the organizer with any questions?
Please feel free to email me with any questions: aen@centinosystems.com

What’s the refund policy?
7 days: Attendees can receive refunds up to 1 day before your event start date.

Do I need to know SQL Server or Kubernetes to attend this workshop?
No, while we will be focusing on deploying SQL Server in Kubernetes, no prior knowledge of SQL Server or Kubernetes is needed. We will build up our Kubernetes skills using SQL Server as the primary application we will deploy.

What are the prerequisites for the workshop?
All examples will be executed at the command line, so proficiency at a command line is required. Platform-dependent (Linux/Windows, Cloud CLIs) configurations and commands will be introduced and discussed in the workshop.

Using strace inside a SQL Server Container

So, if you’ve been following my blog you know my love for internals. Well, I needed to find out exactly how something worked at the startup of a SQL Server process running inside a docker container, and my primary tool for this is strace…well, how do you run strace against processes running in a container? I hadn’t done this before and needed to figure it out…so let’s go through how I pulled this off.

The First (not so successful) Attempt

My initial attempt involved creating a second container image with strace installed and then starting that container in the same PID namespace as the SQL Server container. The benefit here is that I don’t need to do anything special to the SQL Server container…I can use an unmodified SQL Server image and create a separate container for running strace.

Create a dockerfile for a container and install strace inside the container

FROM ubuntu:16.04

RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -yq curl gnupg apt-transport-https && \
    apt-get install -y strace && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists

CMD /bin/bash

Then build the container with docker build -t strace .

docker build -t strace .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM ubuntu:16.04
 ---> a3551444fc85
Step 2/3 : RUN export DEBIAN_FRONTEND=noninteractive &&     apt-get update &&     apt-get install -yq curl gnupg apt-transport-https &&     apt-get install -y strace &&     apt-get clean &&     rm -rf /var/lib/apt/lists
 ---> Running in 2832df1c4921
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
...output omitted...
Fetched 179 kB in 0s (218 kB/s)
Selecting previously unselected package strace.
(Reading database ... 5300 files and directories currently installed.)
Preparing to unpack .../strace_4.11-1ubuntu3_amd64.deb ...
Unpacking strace (4.11-1ubuntu3) ...
Setting up strace (4.11-1ubuntu3) ...
Removing intermediate container 2832df1c4921
 ---> 686bc74ddd24
Step 3/3 : CMD /bin/bash
 ---> Running in 1b1ca2bb04d7
Removing intermediate container 1b1ca2bb04d7
 ---> d89cfe1231c1
Successfully built d89cfe1231c1
Successfully tagged strace:latest

With the container built, let’s use it to run strace against our SQL Server process running in another container.

Startup a container running SQL Server

docker run \
    --name 'sql19' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1434:1433 \
    -d mcr.microsoft.com/mssql/server:2019-latest
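
Before attaching anything to it, a quick sanity check that the container actually came up never hurts. This check is my addition, not part of the original walkthrough.

# Confirm the sql19 container is running and peek at the tail of its startup log
docker ps --filter name=sql19
docker logs sql19 | tail -n 20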

Then start up our strace container and attach it to the PID namespace of the sql19 container. 

docker run -it \
    --cap-add=SYS_PTRACE \
    --pid=container:sql19 strace /bin/bash -c '/usr/bin/strace -f -p 1' 

A lot is going on in this command, so let’s expand out each of the parameters:

  • -it – this will attach the standard out of our container to our current shell. Basically, we'll see the output of strace on our active console and can redirect it to a file if needed.
  • --cap-add=SYS_PTRACE – this adds the SYS_PTRACE capability to the container, which gives ptrace (the system call behind strace) the ability to attach to a process. If this is not specified you will get an error saying 'Operation not permitted'.
  • --pid=container:sql19 – specifies the container whose PID namespace we want to attach to. This will start up our strace container in the same PID namespace as the sql19 container. With this there is one process namespace shared between the two containers, so they can effectively see each other's processes, which is what we want: the strace process needs to be able to see the sqlservr process (there's a quick check of this right after the list).
  • strace – this is the name of the container image we built above.
  • /bin/bash -c '/usr/bin/strace -p 1 -f' – this is the command (CMD) we want to run inside the strace container. In this case, we're starting a bash shell with the parameters to launch strace.
  • strace -p 1 -f – the option -p 1 will attach strace to PID 1, which is sqlservr (more on that below), and the -f option will attach to any processes forked from the traced process.
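
To convince yourself the PID namespace really is shared, you can run something other than strace in the attached container. This quick check is my addition; it just prints the command name behind each PID visible from inside the strace container, and sqlservr should show up in the list.

# Every PID listed here lives in the sql19 container's namespace, including sqlservr
docker run -it \
    --pid=container:sql19 strace \
    /bin/bash -c 'for p in /proc/[0-9]*; do echo "$p -> $(cat $p/comm)"; done'
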
When we run this docker command we get this output:

docker run -it    --cap-add=SYS_PTRACE    --pid=container:sql19 strace sh -c '/usr/bin/strace -p 1 -f'
/usr/bin/strace: Process 1 attached with 2 threads
[pid     9] ppoll([{fd=14, events=POLLIN}], 1, NULL, NULL, 8
[pid     1] wait4(10, 
 
We’re attaching to an already running docker container running SQL Server. But what we get is an idle SQL Server process. That's great if we have a running workload we want to analyze, but my goal for all of this is to see how SQL Server starts up, and this isn't going to cut it.
 
My next attempt was to restart the sql19 container and quickly start the strace container, but the strace container still missed events at the startup of the sql19 container. So I needed a better way.
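
For reference, that race looked roughly like this (a sketch of the idea, not the exact commands from my history); by the time the second container is up and strace is attached, the interesting early syscalls have already happened.

# Restart SQL Server and immediately try to attach strace from the second container
docker stop sql19 && docker start sql19 && \
docker run -it \
    --cap-add=SYS_PTRACE \
    --pid=container:sql19 strace /bin/bash -c '/usr/bin/strace -f -p 1'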
 
UPDATE: David Barbarin, fellow Data Platform MVP and SQL Server and container expert, pursued the idea of using a second container and came up with a very elegant solution! He uses the sleep command at the launch of the SQL Server container and then attaches a second strace container to the PID namespace. Using this technique he's able to catch the startup events without having to build a custom SQL Server container…check out the details here! Exactly what I was looking for!
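
Here's a rough sketch of that general idea, pieced together from the description above rather than copied from David's post. It assumes the stock image lets you override the startup command and that the overridden shell ends up as PID 1; the container name and sleep length are placeholders.

# Sketch: delay sqlservr behind a sleep so strace can attach before it starts.
# 1. Start the container, but sleep before exec'ing sqlservr (exec keeps it at the same PID).
docker run \
    --name 'sql19delay' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -d mcr.microsoft.com/mssql/server:2019-latest \
    /bin/bash -c 'sleep 20 && exec /opt/mssql/bin/sqlservr'

# 2. During the sleep window, attach strace to PID 1 with -f; ptrace survives the exec,
#    so when the shell becomes sqlservr the startup syscalls are captured.
docker run -it \
    --cap-add=SYS_PTRACE \
    --pid=container:sql19delay strace /bin/bash -c '/usr/bin/strace -f -p 1'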
 
Also, as David points out in his post, PID 1 is the watchdog process. I totally forgot about that in the code above. So when running the code above, swap -p 1 for the actual PID of the sqlservr process that is the child of PID 1. An even better way is to use pgrep -P 1 to dynamically get the child process ID of PID 1.
 
So let’s use this technique to connect to the correct PID inside a running SQL Server container. This will attach to the child of PID 1, which will be the base sqlservr process that is the database engine.

docker run -it \
    --cap-add=SYS_PTRACE \
    --pid=container:sql19 strace /bin/bash -c '/usr/bin/strace -f -p $(pgrep -P 1)'
 
The Second (and more successful) Attempt

I want to attach strace to the SQL Server process at startup, and the way I can achieve that is by creating a custom container image with both SQL Server and strace installed, then starting that container with a command that tells strace to start the SQL Server process.

So let’s start by creating our custom SQL Server container with strace installed. Here’s the dockerfile for that:

FROM ubuntu:16.04

RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -yq curl gnupg apt-transport-https && \
    curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
    curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-preview.list | tee /etc/apt/sources.list.d/mssql-server.list && \
    apt-get update && \
    apt-get install -y mssql-server && \
    apt-get install -y strace && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists

CMD /opt/mssql/bin/sqlservr

This is pretty standard for creating a SQL Server container; the key difference here is that we’re installing the strace package in addition to the mssql-server package. The good news is we can leave the CMD of the container as sqlservr…which means we can use this as a general-purpose database container as well as for strace use cases. We’re going to use another technique to override CMD when we start the container so that it starts a strace’d sqlservr process for us.

Let’s go ahead and build that container with docker build -t sqlstrace .

Sending build context to Docker daemon  127.1MB
Step 1/3 : FROM ubuntu:16.04
 ---> a3551444fc85
Step 2/3 : RUN export DEBIAN_FRONTEND=noninteractive &&     apt-get update &&     apt-get install -yq curl gnupg apt-transport-https &&     curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - &&     curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-preview.list | tee /etc/apt/sources.list.d/mssql-server.list &&     apt-get update &&     apt-get install -y mssql-server &&     apt-get install -y strace &&     apt-get clean &&     rm -rf /var/lib/apt/lists
 ---> Running in 806a3b4b9345
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
...output omitted...
Setting up mssql-server (15.0.1900.25-1) …
...output omitted...
 ---> 42a1ca28ae72
Step 3/3 : CMD /opt/mssql/bin/sqlservr
 ---> Running in 1e57d6759df6
Removing intermediate container 1e57d6759df6
 ---> 6e3f5e82a177
Successfully built 6e3f5e82a177
Successfully tagged sqlstrace:latest

Once that container is built, we can override the CMD defined in the dockerfile with another executable inside the container…you guessed it, strace.

docker run \
    --name 'sql19strace' -it \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
     sqlstrace /bin/bash -c "/usr/bin/strace -f /opt/mssql/bin/sqlservr"

The first four lines of the docker run command are standard for starting a SQL Server container. But that last line is a bit different: we're starting our sqlstrace container, and inside that container image we're starting a bash shell and passing in the command (-c "/usr/bin/strace -f /opt/mssql/bin/sqlservr"), which will start strace, follow any forked processes (-f), and then start SQL Server (sqlservr). From there SQL Server will start up and strace will have full visibility into the process execution. The cool thing about this technique is we can adjust our strace parameters as needed at the time we create the container.
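
For example, here's a sketch (with a placeholder container name and output path) that narrows the trace to file-related syscalls, timestamps each line, and writes the trace to a file inside the container instead of the console, then copies it back out with docker cp:

# Trace only file-related syscalls (-e trace=file) with timestamps (-tt),
# writing the output to /tmp inside the container instead of stdout (-o)
docker run \
    --name 'sql19stracefile' -it \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    sqlstrace /bin/bash -c "/usr/bin/strace -f -tt -e trace=file -o /tmp/sqlservr.strace /opt/mssql/bin/sqlservr"

# Later, pull the trace file back to the host for analysis
docker cp sql19stracefile:/tmp/sqlservr.strace .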

execve("/opt/mssql/bin/sqlservr", ["/opt/mssql/bin/sqlservr"], [/* 9 vars */]) = 0
brk(NULL)                               = 0x55b7bc77c000
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
readlink("/proc/self/exe", "/opt/mssql/bin/sqlservr", 4096) = 23
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/opt/mssql/bin/tls/x86_64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/opt/mssql/bin/tls/x86_64", 0x7fffe9bc9510) = -1 ENOENT (No such file or directory)
open("/opt/mssql/bin/tls/libpthread.so.0", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/opt/mssql/bin/tls", 0x7fffe9bc9510) = -1 ENOENT (No such file or directory)
open("/opt/mssql/bin/x86_64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
...output omitted... 

Above is the output of strace on SQL Server, kicking off with an execve, which is the system call used after a fork to load the new program into the new process's address space.

Hopefully this can help you get into those deep-dive debugging, troubleshooting, and discovery scenarios you may find yourself in when working with SQL Server inside a container.

Docker Image Tags are Case Sensitive

A quick post about pulling docker container images (this applies to docker run too)…when specifying a container image, the image name and tag are case sensitive. We’re not going to discuss how much troubleshooting time it took me to figure this out…but let’s just say it’s more than I care to admit publicly.

In this code you can see I’m specifying the image and tag server:2019-rc1-ubuntu (notice the lowercase rc in the tag).

docker pull mcr.microsoft.com/mssql/server:2019-rc1-ubuntu 

Docker responds that it cannot find that image manifest

Error response from daemon: manifest for mcr.microsoft.com/mssql/server:2019-rc1-ubuntu not found: manifest unknown: manifest unknown 

If we specify server:2019-RC1-ubuntu (notice the uppercase RC in the tag)

docker pull mcr.microsoft.com/mssql/server:2019-RC1-ubuntu

Then Docker is able to find that image and download it to my local machine.

2019-RC1-ubuntu: Pulling from mssql/server
59ab41dd721a: Already exists
57da90bec92c: Already exists
06fe57530625: Already exists
5a6315cba1ff: Already exists
739f58768b3f: Already exists
e39f945bda21: Pull complete
6689ce95f395: Pull complete
ec004dcfdfb5: Pull complete
e44708601d04: Pull complete
Digest: sha256:a11facbda1b1cc299d4a37499ef79cd18e38bfb8e5dd7e45cc73670cc07772e5
Status: Downloaded newer image for mcr.microsoft.com/mssql/server:2019-RC1-ubuntu
mcr.microsoft.com/mssql/server:2019-RC1-ubuntu
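
If you want to confirm what actually landed locally (and see the tag with its exact casing), you can list the cached images for the repository; this check is my addition, not from the original post.

# The TAG column shows 2019-RC1-ubuntu exactly as it was pulled
docker images mcr.microsoft.com/mssql/server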

Want to get a list of tags for a container image so you know what image and tags to specify? Here’s how you do that with curl.

curl -L https://mcr.microsoft.com/v2/mssql/server/tags/list
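
The registry replies with a JSON document that has a tags array; if you have jq installed (my addition, not in the original post), you can pull out just the tag names:

# Print one tag per line from the registry's JSON response
curl -sL https://mcr.microsoft.com/v2/mssql/server/tags/list | jq -r '.tags[]'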

If you’re of the PowerShell persuasion (shout out to Andrew Pruski for this gem), here’s how you can generate a list of tags with Invoke-Webrequest:

(Invoke-Webrequest https://mcr.microsoft.com/v2/mssql/server/tags/list).content