Tag Archives: SQL

Microsoft Most Valuable Professional – Data Platform for 2019-2020

Today, I’m proud to announce that I have been renewed as a Microsoft MVP – Data Platform for the 2019-2020 award year, my third MVP award. This is truly an honor, and I’m humbled to be included in this group of exceptional data professionals. I really look forward to continuing to work with everyone in the MVP community and continuing to contribute to our unmatched Data Platform Community!


What is an MVP?

Here’s the definition according to Microsoft:

Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community. They are always on the “bleeding edge” and have an unstoppable urge to get their hands on new, exciting technologies. They have very deep knowledge of Microsoft products and services, while also being able to bring together diverse platforms, products and solutions, to solve real world problems. MVPs make up a global community of over 4,000 technical experts and community leaders across 90 countries and are driven by their passion, community spirit, and quest for knowledge. Above all and in addition to their amazing technical abilities, MVPs are always willing to help others – that’s what sets them apart.

For 2019-2020, I have been named a Data Platform MVP, which means my technical specialization is in data products like SQL Server. The group of people that have received this award is quite small…by my count 332 worldwide and fewer than 75 in the US. I’m honored to be in this group of talented professionals.

Why I’m excited to be an MVP

Honestly, the primary reason I’m excited to be an MVP is to give back (more). I’ve learned so much from other MVPs, and receiving this award will help me continue to build relationships with other MVPs and Microsoft employees to further help develop the Data Platform itself and the community that surrounds that platform.

At the start of 2016, I set a goal of becoming an MVP in 5 years. I don’t know why I picked that number, but what I figured was…the MVP award would be validation of consistent, quality work for our community and recognition for the work that I’ve contributed: things like blogging, social media, public speaking and more. You learn a ton by teaching!

The past year has been great, focusing on Containers and Kubernetes and how we can leverage that tech in our modern Data Platform.

To the people that have helped along the way, I thank you all!

Speaking at SQL Saturday Dallas

Speaking at SQLSaturday Dallas!

I’m proud to announce that I will be speaking at SQL Saturday Dallas on May 17th 2019! This one won’t let you down! Check out the amazing schedule!

If you don’t know what SQLSaturday is, it’s a full day of free SQL Server training!

If you haven’t been to a SQLSaturday, what are you waiting for? Sign up now!

My presentation is “Practical Container Scenarios in Azure”


Here’s the abstract for the talk:

You’ve heard the buzz about containers and Kubernetes; now let’s start your journey towards rapidly deploying and scaling your container-based applications in Azure. In this session, we will introduce containers and the container orchestrator Kubernetes. Then we’ll dive into how to build a container image, push it into our Azure Container Registry, and deploy it to our Azure Kubernetes Service cluster. Once deployed, we’ll learn how to keep our applications available and how to scale them using Kubernetes.

Key topics introduced:

  • Building container-based applications
  • Publishing containers to Azure Container Registry
  • Deploying Azure Kubernetes Service clusters
  • Scaling our container-based applications in Azure Kubernetes Service
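
If you want a feel for that workflow before the session, here’s a minimal sketch of the build, push, and deploy loop, assuming placeholder names for the registry, resource group, cluster, and image:

# Build a container image and push it to Azure Container Registry
# (myregistry, myrg, myakscluster, and myapp are placeholder names)
az acr login --name myregistry
docker build -t myregistry.azurecr.io/myapp:v1 .
docker push myregistry.azurecr.io/myapp:v1

# Get credentials for the AKS cluster, deploy the image, and scale it out
az aks get-credentials --resource-group myrg --name myakscluster
kubectl create deployment myapp --image=myregistry.azurecr.io/myapp:v1
kubectl scale deployment myapp --replicas=3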

Workshop – Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment

Pre-conference Workshop at SQLSaturday Baton Rouge

I’m proud to announce that I will be presenting an all-day pre-conference workshop at SQL Saturday Baton Rouge on August 16th 2019! This one won’t let you down!

The workshop is “Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment”


Here’s the abstract for the workshop:

Modern application deployment needs to be fast and consistent to keep up with business objectives, and Kubernetes is quickly becoming the standard for deploying container-based applications, fast. In this day-long session, we will start with an architectural overview of a Kubernetes cluster and how it manages application state. Then we will learn how to build a production-ready cluster. With our cluster up and running, we will learn how to interact with our cluster, cover common administrative tasks, then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and how to keep everything up and running.

Session Objectives

  • Introduce Kubernetes Cluster Components
  • Introduce Kubernetes API Objects and Controllers
  • Installing Kubernetes
  • Interacting with your cluster
  • Storing persistent data in Kubernetes
  • Deploying Applications in Kubernetes
  • Deploying SQL Server in Kubernetes
  • High Availability scenarios in Kubernetes
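
To give you a taste of the installation portion, here’s a minimal sketch of standing up a cluster, assuming a kubeadm-based install (the pod network CIDR shown is an assumption that depends on your chosen network plugin):

# On the machine that will become the control plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Set up kubectl access for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on, then on each worker node run the
# 'kubeadm join ...' command that kubeadm init printed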

FAQs

How much does it cost?

The full day training event is $125 per attendee.

What can I bring into the event?
WiFi at the location is limited. The workshop will be primarily demonstration based. Code will be made available for download prior to the event if you would like to follow along during the session.

How can I contact the organizer with any questions?
Please feel free to email me with any questions: aen@centinosystems.com

What’s the refund policy?
7 days: Attendees can receive refunds up to 7 days before the event start date.

Do I need to know SQL Server or Kubernetes to attend this workshop?
No, while we will be focusing on deploying SQL Server in Kubernetes, no prior knowledge of SQL Server or Kubernetes is needed. We will build up our Kubernetes skills using SQL Server as the primary application we will deploy.

What are the prerequisites for the workshop?
All examples will be executed at the command line, so proficiency at a command line is required. Platform-dependent (Linux/Windows, cloud CLIs) configurations and commands will be introduced and discussed in the workshop.

Speaking at SQLSaturday Atlanta – 845

Speaking at SQLSaturday Atlanta!

I’m proud to announce that I will be speaking at SQL Saturday Atlanta on May 17th 2019! This one won’t let you down! Check out the amazing schedule!

If you don’t know what SQLSaturday is, it’s a full day of free SQL Server training!

If you haven’t been to a SQLSaturday, what are you waiting for? Sign up now!

My presentation is “Containers – You Better Get on Board!”

SQLSaturday #845 - Atlanta 2019

Here’s the abstract for the talk:

Containers are taking over, changing the way systems are developed and deployed…and that’s NOT hyperbole. Just imagine if you could deploy SQL Server or even your whole application stack in just minutes. You can do that, leveraging containers! In this session, we’ll get you started on your container journey learning container fundamentals in Docker, then look at some common container scenarios and introduce deployment automation with Kubernetes.

In this session we’ll look at:

  • Container Fundamentals with Docker
  • Common Container Scenarios
  • Automation with Kubernetes

Prerequisites: Operating system concepts such as command line use and basic networking skills.
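
If you’d like a head start on the Docker fundamentals, here’s a minimal sketch of running SQL Server in a container, assuming placeholder names for the container and the SA password:

# Pull and run SQL Server 2017 in a container
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -d --name sql1 \
  -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=S0methingS@Str0ng!' \
  -p 1433:1433 mcr.microsoft.com/mssql/server:2017-latest

docker ps          # confirm the container is running
docker logs sql1   # watch SQL Server start up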

Data Persistency and Advanced SQL Server Disk Topologies in Kubernetes

When working with SQL Server in containers and Kubernetes, storage is a key concept. In this post, we’re going to walk through how to deploy SQL Server in Kubernetes with Persistent Volumes for the system and user databases.

One of the key principles of Kubernetes is the ephemerality of Pods. No Pod is ever redeployed; a completely new Pod is created. If a Pod dies, for whatever reason, a new Pod is created in its place, and there is no continuity in the state of that Pod. The newly created Pod will go back to the initial state of the container image defined in the Pod’s spec. This is very valuable for stateless workloads, not so much for stateful workloads like SQL Server.

This means that for a stateful workload like SQL Server, we need to store both configuration and data externally from the Pod to maintain state through the recreation of a Pod. Kubernetes gives us two constructs to do that: environment variables and Persistent Volumes.

Using Environment Variables for Container Configuration

Container-based applications use environment variables for configuration at startup. The SQL Server container has a collection of environment variables that can be used to configure it at container startup. We will leverage two of those in this configuration, MSSQL_DATA_DIR and MSSQL_LOG_DIR, which allow us to define file system locations for the user database data and log files. When the SQL Server container is started inside the Pod, it reads the environment variables at runtime and sets its configuration based on those values. We define these variables as part of the Pod Spec. We will cover that configuration below.

Using Persistent Volumes to Maintain Database State

To persist the state of our SQL Server container, we will configure SQL Server to store its data and log files for both user and system databases on Persistent Volumes.

First, let’s review how SQL Server in a container starts up. During the initial startup, the SQL Server process checks to see if there are any system databases in the default system file location, which is /var/opt/mssql/data. If there are none, the system databases are copied there; if they are already there, no action is taken.

To add persistency to the system databases, and really all of the other components of SQL Server such as the Error Log and other system files, we will configure /var/opt/mssql so that it is backed by a Persistent Volume.

By placing the system databases on a Persistent Volume, when a Pod is recreated the Persistent Volumes are attached and mounted in the same location, so when the SQL Server process starts up it sees the system databases and has what it needs to maintain state between Pod creations.

If there are records for user databases in the system databases, SQL Server will start the process of bringing those databases online as well. Certainly, the default location for user databases is /var/opt/mssql/data, but we are going to override that with environment variables for both the data and log directories, placing each on a dedicated Persistent Volume.

Let’s walk through that configuration together. 

Persistent Volume Claims

In this configuration, we will use dynamic storage provisioning. In dynamic provisioning, a Persistent Volume Claim (PVC) is used to request a Persistent Volume (PV) from a Storage Class. In this case, we’ll be using AKS’s managed-premium Storage Class. 

Here we define three PVCs, one for each place we want a Persistent Volume: the system files and databases, the user database data files, and the user database log files.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-data"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-system"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-log"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi

Deployment

In the Pod spec for our Deployment, we want to define several elements to support this configuration. 

  • volumes – define volumes that can be mounted by this Pod. In this case, we’re creating and naming three volumes, backed by the PVCs defined above.
  • volumeMounts – the volumes mounted into the container and their mountPath, the location in the container’s filesystem. This maps the named volumes to locations in the filesystem in the container.
  • env – due to the ephemerality of the container in the Pod, we need to tell SQL Server at startup that the data and log files will be stored in specified directories. We are leaving the system databases and files in the default location, which is /var/opt/mssql.

The net effect of this storage configuration is that we are mapping the Persistent Volumes into particular locations in the filesystem inside the container.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2017-latest'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: 'Y'
        - name: MSSQL_DATA_DIR
          value: '/data'
        - name: MSSQL_LOG_DIR
          value: '/log'
        - name: SA_PASSWORD
          value: 'S0methingS@Str0ng!'
        volumeMounts:
        - name: mssql-system
          mountPath: /var/opt/mssql
        - name: mssql-data
          mountPath: /data
        - name: mssql-log
          mountPath: /log
      volumes:
      - name: mssql-system
        persistentVolumeClaim:
          claimName: pvc-sql-system
      - name: mssql-data
        persistentVolumeClaim:
          claimName: pvc-sql-data
      - name: mssql-log
        persistentVolumeClaim:
          claimName: pvc-sql-log

Service

We’ll front end our SQL Server with a public IP address and a load balancer. 

apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 31433
      targetPort: 1433
  type: LoadBalancer

Apply the Configuration

Save the code above into a YAML file and deploy it to your Kubernetes cluster.

kubectl apply -f deployment-advanced-disk.yaml

You’ll get this output

persistentvolumeclaim/pvc-sql-data created
persistentvolumeclaim/pvc-sql-system created
persistentvolumeclaim/pvc-sql-log created
deployment.apps/mssql-deployment created
service/mssql-deployment created

Confirm the configuration

We can use kubectl get pv to list the Persistent Volumes (PV) dynamically allocated by our cluster. Here there are three Persistent Volumes. The key here is the status is Bound, which means they are bound to a PVC. I also want to point out the Reclaim Policy is Delete. This means if the PVC is deleted, the PV will be deleted at a cleanup interval sometime in the future. 

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-data     managed-premium            11m
pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-system   managed-premium            11m
pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-log      managed-premium            11m
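
As an aside, if you want a PV to survive the deletion of its PVC, you can switch its reclaim policy from Delete to Retain. A minimal sketch, using one of the PV names from the output above:

kubectl patch pv pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'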

With kubectl get pvc we get a list of the PVCs in our configuration, one for each we defined above. The key here is the status is Bound, meaning they are bound to a PV.

kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc-sql-data     Bound    pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m
pvc-sql-log      Bound    pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m
pvc-sql-system   Bound    pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m 

Now let’s use kubectl describe pods to get the deep dive info about our storage configuration and how it’s mapped into the Pod. 

There are four key places in the output below I want to point you to:

  • Containers: mssql: Environment: you’ll find the two environment variables set for the data and log directories. Configured as /data and /log
  • Mounts: we see the file system location inside the container and the name of the Volumes defined in the Pod Spec
  • Volumes: we see the name of the Volumes, their type, claim name and the read/write status.
  • Events: this is a log of the events for the creation of this Pod. Key here is that sometimes the container will come up prior to the storage being available to the Pod. That’s what the error below is, but it clears itself up and the container is able to start.

kubectl describe pods
Name:               mssql-deployment-df4cf5c4c-nf8lf
Namespace:          default
Priority:           0
PriorityClassName:
Node:               aks-nodepool1-89481420-2/10.240.0.6
Start Time:         Sat, 04 May 2019 07:41:59 -0500
Labels:             app=mssql
                    pod-template-hash=df4cf5c4c
Annotations:
Status:             Running
IP:                 10.244.1.51
Controlled By:      ReplicaSet/mssql-deployment-df4cf5c4c
Containers:
  mssql:
    Container ID:   docker://f2320ae8f94c24fbb04214b903b4a218b82e9548f8d88a95daa7e207eeaa42b4
    Image:          mcr.microsoft.com/mssql/server:2017-latest
    Image ID:       docker-pullable://mcr.microsoft.com/mssql/server@sha256:39554141d307f2d40d2abfc54e3a0eea3aa527e58f616496c6f3ed3245a2e2b1
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 04 May 2019 07:44:21 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      ACCEPT_EULA:                   Y
      MSSQL_DATA_DIR:                /data
      MSSQL_LOG_DIR:                 /log
      SA_PASSWORD:                   S0methingS@Str0ng!
      KUBERNETES_PORT_443_TCP_ADDR:  cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io
      KUBERNETES_PORT:               tcp://cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       cscluster-kubernetes-cloud-fd0c5e-8bca8b54.hcp.centralus.azmk8s.io
    Mounts:
      /data from mssql-data (rw)
      /log from mssql-log (rw)
      /var/opt/mssql from mssql-system (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z9sbf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  mssql-system:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-system
    ReadOnly:   false
  mssql-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-data
    ReadOnly:   false
  mssql-log:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-log
    ReadOnly:   false
  default-token-z9sbf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z9sbf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age   From                               Message
  ----     ------                  ----  ----                               -------
  Normal   Scheduled               13m   default-scheduler                  Successfully assigned default/mssql-deployment-df4cf5c4c-nf8lf to aks-nodepool1-89481420-2
  Normal   SuccessfulAttachVolume  13m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5"
  Normal   SuccessfulAttachVolume  12m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5"
  Normal   SuccessfulAttachVolume  12m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5"
  Warning  FailedMount             11m   kubelet, aks-nodepool1-89481420-2  Unable to mount volumes for pod "mssql-deployment-df4cf5c4c-nf8lf_default(027c46f7-6e6a-11e9-a433-f659caf6a6f5)": timeout expired waiting for volumes to attach or mount for pod "default"/"mssql-deployment-df4cf5c4c-nf8lf". list of unmounted volumes=[mssql-system mssql-data]. list of unattached volumes=[mssql-system mssql-data mssql-log default-token-z9sbf]
  Normal   Pulled                  11m   kubelet, aks-nodepool1-89481420-2  Container image "mcr.microsoft.com/mssql/server:2017-latest" already present on machine
  Normal   Created                 11m   kubelet, aks-nodepool1-89481420-2  Created container
  Normal   Started                 11m   kubelet, aks-nodepool1-89481420-2  Started container

Creating a Database and Verifying File Location

With this code, we’ll get the IP address for our SQL Server service, then we’ll create a database and query sys.master_files for a list of data files. Notice I’m defining my service port as 31433, which is what we defined when creating our Service in the earlier step.

PASSWORD='S0methingS@Str0ng!'   # the sa password set in the Pod spec above
SVCIP=$(kubectl get svc mssql-deployment | grep mssql-deployment |  awk '{print $4}')
sqlcmd -S $SVCIP,31433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD
sqlcmd -S $SVCIP,31433 -U sa -Q 'SELECT name,physical_name from sys.master_files' -P $PASSWORD


And we’ll get this output. You can see all of the system databases are backed by /var/opt/mssql, our user database is on /data, and its log is on /log, all backed by Persistent Volumes.

master        /var/opt/mssql/data/master.mdf
mastlog       /var/opt/mssql/data/mastlog.ldf
tempdev       /var/opt/mssql/data/tempdb.mdf
templog       /var/opt/mssql/data/templog.ldf
modeldev      /var/opt/mssql/data/model.mdf
modellog      /var/opt/mssql/data/modellog.ldf
MSDBData      /var/opt/mssql/data/MSDBData.mdf
MSDBLog       /var/opt/mssql/data/MSDBLog.ldf
TestDB1       /data/TestDB1.mdf
TestDB1_log   /log/TestDB1_log.ldf

Confirming Persistency

Let’s go ahead and delete our Pod to confirm that when it’s recreated by our Deployment our data is still there. 

kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
mssql-deployment-df4cf5c4c-nf8lf   1/1     Running   0          4d2h

kubectl delete pod mssql-deployment-df4cf5c4c-nf8lf 
pod "mssql-deployment-df4cf5c4c-nf8lf" deleted
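
If you want to watch the Deployment controller replace the Pod in real time, a quick sketch:

kubectl get pods --watch   # a new Pod with a different name suffix will appear shortly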

Once the Pod is recreated, let’s query sys.master_files to see where our databases are located. You’ll find that the database created in the previous step persisted between Pod creations.

sqlcmd -S $SVCIP,31433 -U sa -Q 'SELECT name,physical_name from sys.master_files' -P $PASSWORD

master        /var/opt/mssql/data/master.mdf
mastlog       /var/opt/mssql/data/mastlog.ldf
tempdev       /var/opt/mssql/data/tempdb.mdf
templog       /var/opt/mssql/data/templog.ldf
modeldev      /var/opt/mssql/data/model.mdf
modellog      /var/opt/mssql/data/modellog.ldf
MSDBData      /var/opt/mssql/data/MSDBData.mdf
MSDBLog       /var/opt/mssql/data/MSDBLog.ldf
TestDB1       /data/TestDB1.mdf
TestDB1_log   /log/TestDB1_log.ldf

Presenting with VS Code and Running Commands on Remote Hosts

In this blog post, I’m going to show you how to use VS Code and Azure Data Studio for presentations where you need to execute local code on remote systems over SSH.

The Challenge

Tools like VS Code and PowerShell ISE give you the ability to highlight code and execute it, which generally means the highlighted code will execute against the local machine. When presenting, this technique is very useful and impactful for your audience because both the code being executed and its output will be on the screen at the same time as you step through your demos. I do a lot of presenting on topics like Linux, SQL Server on Linux, Kubernetes, and OpenSSH where I’m logged into remote hosts over SSH. I’d like to have the same functionality, so I can highlight the code in VS Code locally and execute that code on a remote host over SSH. Let’s explore how I do that.

The Solution

You can create a custom keyboard shortcut in VS Code (and Azure Data Studio too) that gives you this functionality: highlight code, press a button, and execute that code in the active terminal, which just so happens to be SSH’d into a remote host.

Head over to Preferences -> Keyboard Shortcuts (Picture 1) and in there you’ll find a shortcut called “Terminal: Run Selected Text In Active Terminal” (Picture 2). This is exactly what I want. Now, when I’m presenting…I can highlight the code…and what I highlighted gets copied into the terminal below and executed on whatever system is active in that terminal. This could be either my local computer or a remote system over SSH.


Picture 1: Preferences -> Keyboard Shortcuts. This is where you’ll find the Shortcut.


Picture 2: The Run Selected Text in Active Terminal shortcut is bound to the keys SHIFT+F8.
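
If the shortcut isn’t bound in your installation, you can bind it yourself in keybindings.json. Here’s a minimal sketch; the command ID is workbench.action.terminal.runSelectedText, and shift+f8 simply matches the binding shown in Picture 2:

// keybindings.json (Preferences -> Keyboard Shortcuts, then open the JSON view)
[
  {
    "key": "shift+f8",
    "command": "workbench.action.terminal.runSelectedText"
  }
]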

The Output

So, let’s see this in action. In Picture 3, you can see I have VS Code open. The first thing I do is SSH into a remote host, in this case c1-master1, which is a Master Node in a Kubernetes Cluster. Once logged in, I highlight the command “kubectl get nodes” and hit SHIFT+F8, which copies my selected text into the active terminal, the SSH session on the remote host, and executes the code. This makes me so happy. Being able to do this when presenting provides a lot of value to the attendee/viewer, since they can see both the code and its output on the screen at the same time. Another quick tip: VS Code and Azure Data Studio have excellent font scaling. Crank that font up…your attendees/viewers will be happier with you.


Picture 3:  Here you can see the shortcut in action in an SSH session to a Kubernetes Cluster Master.

Using Kubernetes Deployments for Updating SQL Server

In Kubernetes, we can leverage Controllers to help manage our application state, keeping it in the desired state. In this blog post, we’re going to look at how to use a Deployment Controller to manage the application state of SQL Server in Kubernetes. We’ll look at deploying SQL Server in a Deployment and using that Deployment to upgrade SQL Server and roll back our upgrade.

Deploying SQL Server in a Deployment

Let’s start off with deploying SQL Server in Kubernetes. We can do that with the following YAML file to describe our Deployment.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2017-CU11-ubuntu'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-sql-data
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 31433
      targetPort: 1433
  type: NodePort

Listing 1: deployment-sql.yaml

There are a few things I want to point out in our YAML file. First, we’re using a Deployment Controller. This will implement a Replica Set of the desired number of replicas using the container image defined. In this case, we’ll have 1 replica using the SQL Server 2017 CU11 image. A Replica Set will guarantee that a defined set of Pods is running at any given time; here we’ll have exactly one Pod. Using a Deployment Controller gives us the ability to move between versions of Replica Sets based off different container images in a controlled fashion…more on that in a second. I would also like to point out the volume described in this manifest. Our container’s data directory, /var/opt/mssql, is mounted on a PersistentVolumeClaim. This means our data is external to our Pod; if our Pod is redeployed, our databases will be in this directory, they will be mounted, and our databases will be made available. We’re also using a Service to provide a fixed IP and port for access to our SQL Server in this Deployment.

Let’s go ahead and apply the code in Listing 1: deployment-sql.yaml

kubectl create secret generic mssql --from-literal=SA_PASSWORD='OurR&4llyStr0ngP4ssw0rd!'
kubectl apply -f deployment-sql.yaml --record

With that applied, our SQL Server Deployment will schedule one Pod, start up the container, and expose it as a NodePort Service, and our SQL Server is up and running on the 2017 CU11 container image. That --record flag will record the operation as an annotation on the resource, basically giving us some human-readable information about what we’re doing with that command that we can use later.
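
If you’re curious where that annotation lands, you can read it back from the Deployment; kubectl stores the change cause in the kubernetes.io/change-cause annotation. A quick sketch:

kubectl get deployment mssql-deployment \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/change-cause}'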

Deployments and Replica Sets

In Kubernetes, Deployments are made of Replica Sets. With our SQL Server Pod up and running from our Deployment, let’s start our investigation with kubectl describe deployment mssql-deployment. In the output below, we can see the deployment mssql-deployment started a Replica Set based off of the SQL Server 2017 CU11 image, and the Replica Set started for that container image is mssql-deployment-55bd89b84d.

kubectl describe deployment mssql-deployment

Name: mssql-deployment …output omitted
Pod Template:
Containers:
mssql:
Image: mcr.microsoft.com/mssql/server:2017-CU11-ubuntu
…output omitted
NewReplicaSet:   mssql-deployment-55bd89b84d (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 7m10s deployment-controller Scaled up replica set mssql-deployment-55bd89b84d to 1


Figure 1: SQL Server Deployment

Updating the Deployment with a New Container Image

Now we can use Deployments to easily move between versions of container images. So let’s update this 2017 CU11 container image with a 2017 CU12 container image. We can do that with this code:

kubectl set image deployment mssql-deployment mssql=mcr.microsoft.com/mssql/server:2017-CU12-ubuntu --record

With this block of code, we’re recording the update of the container image with --record, and we’re setting the container image for the mssql container in our Pod Template to 2017-CU12-ubuntu.

Now our container image is being updated using our defined update strategy…we defined our update strategy way back in deployment-sql.yaml with the attribute strategy: type: Recreate. The Recreate update strategy will shut down the existing Pod(s) in the Replica Set before starting the new Pod(s) with the new container image in the new Replica Set we’re updating to. This makes sense for an RDBMS, since we want only one Pod to have access to the data files at any point in time. This entire process takes only a few seconds, though you may have to wait while SQL Server runs update scripts on the databases. We can check the status with kubectl rollout status deployment mssql-deployment.

kubectl rollout status deployment mssql-deployment

Waiting for deployment "mssql-deployment" rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for deployment "mssql-deployment" rollout to finish: 0 of 1 updated replicas are available...
deployment "mssql-deployment" successfully rolled out
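
For contrast, a stateless application would typically use the default RollingUpdate strategy, which replaces Pods gradually instead of all at once. A minimal sketch of that spec attribute (not what we want for SQL Server, for the reasons above):

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0   # keep the full desired replica count available
    maxSurge: 1         # allow one extra Pod during the rollout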

Now let’s look more closely at our Deployment again with kubectl describe deployment mssql-deployment. In the output below, we see the original Replica Set (mssql-deployment-55bd89b84d) scaled from 1 to 0 and our new Replica Set (mssql-deployment-6776c966b7), based off of the CU12 image, scaled from 0 to 1. I also want to point out that Kubernetes keeps the original Replica Set metadata around for us, which we can use to roll back if needed.

kubectl describe deployment mssql-deployment

Name: mssql-deployment
…output omitted
Pod Template:
Containers:
mssql:
Image: mcr.microsoft.com/mssql/server:2017-CU12-ubuntu
…output omitted
NewReplicaSet: mssql-deployment-6776c966b7 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 15m deployment-controller Scaled up replica set mssql-deployment-55bd89b84d to 1
Normal ScalingReplicaSet 114s deployment-controller Scaled down replica set mssql-deployment-55bd89b84d to 0
Normal ScalingReplicaSet 109s deployment-controller Scaled up replica set mssql-deployment-6776c966b7 to 1


Figure 2: SQL Server Deployment with updated container image

Check out the Revision History

If you want to check the history of your rollouts, with the recorded changes you’ve made to your Deployment, you can use kubectl rollout history deployment mssql-deployment

kubectl rollout history deployment mssql-deployment

REVISION  CHANGE-CAUSE
1         kubectl apply --filename=deployment-sql.yaml --record=true
2         kubectl set image deployment mssql-deployment mssql=mcr.microsoft.com/mssql/server:2017-CU12-ubuntu --record=true

With this we can see the history of our changes to our Deployment: Revision number 1, when we created our Deployment, and Revision number 2, when we changed the image from CU11 to CU12.

Rolling Back our SQL Server Deployment to the Previous Container Image

Now, if we needed to roll back from CU12 to CU11, that’s quite easy in Kubernetes. We can do that with kubectl rollout undo deployment mssql-deployment --to-revision=1

kubectl rollout undo deployment mssql-deployment --to-revision=1 

Then we can use kubectl describe deployment mssql-deployment to check the status of our Deployment rollback.

kubectl describe deployment mssql-deployment

Name: mssql-deployment
…output omitted
Pod Template:
Containers:
mssql:
Image: mcr.microsoft.com/mssql/server:2017-CU11-ubuntu
…output omitted
NewReplicaSet: mssql-deployment-55bd89b84d (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 7m55s deployment-controller Scaled down replica set mssql-deployment-55bd89b84d to 0
Normal ScalingReplicaSet 7m50s deployment-controller Scaled up replica set mssql-deployment-6776c966b7 to 1
Normal ScalingReplicaSet 18s deployment-controller Scaled down replica set mssql-deployment-6776c966b7 to 0 
Normal ScalingReplicaSet 12s (x2 over 21m) deployment-controller Scaled up replica set mssql-deployment-55bd89b84d to 1 

In the output above you can see our updated Replica Set (mssql-deployment-6776c966b7) is scaled from 1 to 0 and the original Replica Set (mssql-deployment-55bd89b84d) is scaled from 0 to 1, bringing the Replica Set backed by the CU11 image back online. Similar to the rollout of the image update above, this entire process takes only a few seconds. Again, you may have to wait while SQL Server runs update scripts on the databases.

Summary

Kubernetes offers us many ways to manage our application state. Deployment Controllers give us the ability to easily move between versions of our application and roll back if needed. For SQL Server, this method offers us a way to move between Cumulative Updates in a controlled fashion, with a very quick and controlled way to roll back if needed. However, in SQL Server we also have upgrades we can’t easily roll back from, as is the case when an update changes the internal database version. We can still use this technique to upgrade SQL Server between database versions, but we lose the ability to roll back. In those scenarios, testing is the best way to ensure you are compatible in the upgraded state. You’ll find this rollout method is amazingly simple and fast when you try it out.

Please feel free to contact me with any questions regarding Linux, Kubernetes, or other SQL Server related issues at: aen@centinosystems.com

Speaking at SQLBits 2019

Speaking at SQLBits 2019!

I’m proud to announce that I will be speaking at SQLBits on March 2nd 2019! It’s been a goal of mine to speak at SQLBits for a few years now and I’m VERY excited for the opportunity! This year’s conference won’t let you down. Check out the amazing schedule of Experts and Microsoft MVPs!

If you haven’t been to SQLBits before, what are you waiting for? Sign up now!


Here are the details on my session!

Inside Kubernetes – An Architectural Deep Dive – March 2 2019 – 15:10 – Room 12

In this session, we will introduce Kubernetes and deep dive into each component and its responsibility in a cluster. We will also look at and demonstrate higher-level abstractions such as Services, Controllers, and Deployments and how they can be used to ensure the desired state of an application and data platform deployed in Kubernetes. Next, we’ll look at Kubernetes networking and intercluster communication patterns. With that foundation, we will then introduce various cluster scenarios such as single node, single head, and high availability designs. By the end of this session, you will understand what’s needed to put your applications and data platform in production in a Kubernetes cluster.

Session Objectives:
Understand Kubernetes cluster architecture
Understand Services, Controllers, and Deployments
Designing Production Ready Kubernetes Clusters

Right before my session, in the same room at 13:45, my good friend Andrew Pruski will be delivering a session on Kubernetes as well! His session is: SQL Server and Kubernetes!

Be sure to come to both sessions, Andrew will get you started on your Kubernetes journey and I’ll dive deep into how Kubernetes works!

Awarded Friend of Redgate – 2019

Friend of Redgate – 2019

I’m excited to announce that I have been named a Friend of Redgate for 2019, my fourth year in a row! The program targets influential people in their respective technical communities, such as SQL, .NET and DevOps, and enables us to participate in the conversation around product and community development.

As a multi-year awardee in the program, I get to see firsthand the continuing dedication Redgate has to the SQL community and to making great software. I’ve met a ton of really cool, very dedicated people along the way. Thanks for the recognition, and I look forward to another great year!

Redgate makes outstanding products! While I focus mainly on the DBA side of things, such as SQL Monitor, SQL Backup and SQL Prompt, there are many more. I’ve used these tools for years and let’s just say they’re awesome.

Redgate isn’t just software; they’re committed to community and education. Here are some of the things they do to support technical communities:

  • Online resources – Simple Talk, SQL Server Central, books, and free eBooks. These resources aren’t marketing fluff; it’s killer content written by real experts
  • Events – hosting events, exhibiting at events, and supporting user groups across the world. One word can describe this: engaged

Thank you to Redgate for this opportunity! I look forward to participating in this program, sharing my thoughts and learning as much as I can from all involved.

If you’d like to talk about Redgate’s products and where they fit into your SQL Server system, please feel free to contact me.
Follow me on Twitter: @nocentino

Speaking at SQLSaturday Nashville – 815!

Speaking at SQLSaturday Nashville – 815!

I’m proud to announce that I will be speaking at SQL Saturday Nashville on January 12th 2019! And wow, 815 SQL Saturdays! This one won’t let you down. Check out the amazing schedule!

If you don’t know what SQLSaturday is, it’s a full day of free SQL Server training!

If you haven’t been to a SQLSaturday, what are you waiting for? Sign up now!


This year I have TWO sessions!

1. Inside Kubernetes – An Architectural Deep Dive

In this session, we will introduce Kubernetes and deep dive into each component and its responsibility in a cluster. We will also look at and demonstrate higher-level abstractions such as Services, Controllers, and Deployments and how they can be used to ensure the desired state of an application and data platform deployed in Kubernetes. Next, we’ll look at Kubernetes networking and intercluster communication patterns. With that foundation, we will then introduce various cluster scenarios such as single node, single head, and high availability designs. By the end of this session, you will understand what’s needed to put your applications and data platform in production in a Kubernetes cluster.

Session Objectives:
Understand Kubernetes cluster architecture
Understand Services, Controllers, and Deployments
Designing Production Ready Kubernetes Clusters

 

2. Containers – You Better Get on Board

Containers are taking over, changing the way systems are developed and deployed…and that’s NOT hyperbole. Just imagine if you could deploy SQL Server or even your whole application stack in just minutes. You can do that, leveraging containers! In this session, we’ll get you started on your container journey learning container fundamentals in Docker, then look at some common container scenarios and introduce deployment automation with Kubernetes. In this session we’ll look at:

  • Container Fundamentals with Docker
  • Common Container Scenarios
  • Orchestration with Kubernetes