Tag Archives: SQL

T-SQL Tuesday #140 Wrap up: What have you been up to with containers?

I want to start by saying thank you to all who submitted; an amazing collection of people shared some fantastic content. Also, thanks to Steve for asking me to host and for being patient with me when I mixed up the dates and the hashtag. It’s #tsql2sday and it’s on Tuesday, not Wednesday :P

T SQL Tuesday Logo

Now, onto the posts in submission order.

Rob Farley – On containers

Rob discusses how he uses containers to quickly spin up SQL Server instances without installing them on his local OS, replacing the virtual machine-based environments he used in the past. And I can’t agree with this more. I’ve used Macs for 20 years and have used VMs to do SQL Server-based work. That’s no longer the case. I can run SQL Server in containers without VMs*. And you can do the same: you can run a container in minutes anywhere you have a container runtime like Docker. Thanks for sharing, Rob!

Jeff Hill – Container Convenience

Jeff introduces us to some non-SQL Server container images he uses at home, like Pi-hole, a media server, a personal CRM, and more. The big idea in this post is that due to the isolation principles of containers, you can spin up containers from container images super easily via a container repository like Docker Hub…which enables you to test out new software or even new versions of existing software, and if you don’t want that container anymore, delete it. There’s no leftover crud on your system like config files etc. Great post, Jeff! And as Jeff suggests, head over to Docker Hub and see what software you can find that’s useful for you!

PS: If you aren’t using Pi-Hole, oh, you really should. Check it out…I run it in a container on my laptop for when I’m traveling, and I have one in Azure that my home network is using.
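
If you want to kick the tires on something like Pi-hole yourself, here’s a minimal, hedged sketch of the spin-up-and-throw-away workflow Jeff describes. I’m assuming the pihole/pihole image from Docker Hub and mapping its default DNS and web ports; adjust to taste.

docker run -d \
    --name pihole \
    -e TZ='America/New_York' \
    -p 53:53/tcp -p 53:53/udp \
    -p 8080:80 \
    pihole/pihole

# done testing? stop and remove it...no leftover crud on the host
docker stop pihole
docker rm pihole

If there is any configuration you do want to keep between runs, map it out to a volume with -v, just like the SQL Server examples further down this page.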

Kevin Chant – Easy demos using containers

Kevin shows us how he’s been using containers in demos, specifically for sessions based on DevOps. Using tools like Azure Pipelines in Azure DevOps or GitHub Actions, you can build automated testing and deployments of SQL Server instances and databases for the various environments needed in your deployment, such as prod, stage, etc. This is truly one of the superpowers of containers…building deployment automation in code so that you can roll out in a defined, tested way…every time you roll out a new instance or database or change to an instance or database. Kevin also introduces us to the idea that many of us are already using containers and might not even know that we are. He points out that several Azure services like Databricks and Synapse Analytics use containers behind the scenes. Excellent post, Kevin!

Aaron Bertrand – What have you been up to with containers?

Aaron shows us what he’s been up to with containers, specifically spinning up containers to test out application compatibility when making changes around case-sensitivity and binary collation settings at the instance level. He describes how he can quickly spin up the container, run the test and remove the container quickly…and like we discussed earlier…this used to be something that would require provisioning a whole VM and installing SQL Server. Such a huge time saver. In addition to describing how he uses containers, Aaron also gives us some example code to start up a container with some of the unique settings he wanted to test. Thanks, Aaron, great stuff!
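
Aaron’s post has his actual code, but as a hedged sketch of the idea, you can stand up a disposable instance with a binary, case-sensitive collation by setting the MSSQL_COLLATION environment variable when you start the container (the collation and $PASSWORD here are just example values):

# start a throwaway SQL Server with an instance-level binary collation
docker run -d \
    --name collation-test \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -e 'MSSQL_COLLATION=Latin1_General_100_BIN2' \
    -p 1433:1433 \
    mcr.microsoft.com/mssql/server:2019-latest

# run the application compatibility tests, then throw the instance away
docker stop collation-test && docker rm collation-test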

Tom – Containers and me

Tom discusses how he’s using containers to deploy RabbitMQ, a monitoring stack using Grafana, and a SQL Server environment via a build pipeline. Excellent stuff. Thanks for sharing, Tom! Tom also mentions my container-mate Andrew Pruski’s SQL Server and Containers Guide; this is fantastic stuff, check it out! Everything from getting started to deep dives is available there.
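
As a hedged example of the RabbitMQ piece, the official rabbitmq image on Docker Hub has a management-UI variant you can have running with a single command (5672 is AMQP, 15672 is the management web UI):

# RabbitMQ with the management plugin enabled
docker run -d \
    --name rabbitmq \
    -p 5672:5672 -p 15672:15672 \
    rabbitmq:3-management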

Todd Kleinhans – RAPIDS and SQL Server Containers

Todd shows us how you can use containers to enable data science scenarios. In his post, he shows you how to start up a RAPIDS container with access to your system’s GPU. There are two cool things to unpack there. First, RAPIDS, as Todd points out, is a suite of open-source software libraries and APIs, giving you an end-to-end data science and analytics pipeline all in one container image. Spin that container up with access to your GPU, and you’re off to the races performing GPU-accelerated data science without having to struggle with downloading software and setting it up…just grab the container and go. Of course, to do data science, you need data, so he also dives into how to spin up a SQL Server container and access that data from the RAPIDS application suite. Super awesome stuff, Todd!
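
To give a flavor of what Todd describes, here’s a hedged sketch of starting a RAPIDS container with GPU access. The image and tag below are placeholders based on the RAPIDS Docker Hub repository and change between releases, and --gpus requires the NVIDIA container toolkit on the host; check Todd’s post and the RAPIDS getting-started page for the current image.

# start a RAPIDS container with access to all host GPUs, exposing Jupyter (8888) and the Dask dashboard (8787)
docker run --rm -it \
    --gpus all \
    -p 8888:8888 -p 8787:8787 \
    rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.8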

Kendra Little – Create a Disposable SQL Server Database Container in an Azure DevOps Pipeline With Spawn

Kendra shows us how to create a disposable SQL Server Database Container in an Azure Pipeline using Spawn in this post. As discussed in the post, Spawn is a tool that addresses two key challenges when working with data in development processes, testing against realistic datasets and resetting that data after changes. Spawn brings the power of containers to help instantiate datasets rather than just applications. Combine that with Azure DevOps Pipelines, and you have a super slick way of building automated workflows and testing code changes against realistic datasets. Outstanding post, Kendra!

PS: I saw Spawn a few years back at a SQLSaturday…watch this space. I think they’re building something special here!

Mark Wilkinson – Baselining SQL Server with the TIG Stack

Next up, my fellow EightKB organizer Mark Wilkinson shows us how to stand up a TIG (Telegraf, InfluxDB, Grafana) monitoring stack using Docker Compose. Grab the code here! The TIG monitoring stack enables you to collect baseline metrics for your SQL Server instances and visualize them via Grafana dashboards. This post hits home for me. I will use this code to stand up performance monitoring environments for testing and spot troubleshooting. As a consultant, this would have been SO super valuable since I could quickly spin up the whole monitoring stack and collect metrics on instances…I’ve had many clients over the years that have no monitoring. This project would have been a HUGE time saver for me. Banging post, Mark!

Barney Lawrence – Containers for Business and Pleasure

In Barney’s post, he mixes business and pleasure…showing us how to deploy SQL Server in a container on Docker backed by Windows Subsystem for Linux (WSL) and how to deploy a Minecraft Bedrock Server using Docker Compose. There are a couple of cool things to point out here: first, how data is managed in both scenarios using Docker Volumes, and second, leveraging Docker Compose to manage the configuration and state of the Minecraft Server, defining environment variables and volumes in code. Well done, Barney!

Cathrine Wilhelmsen – Developing in Containers using Visual Studio Code

In this post, Cathrine walks us through setting up a development environment using containers in Visual Studio Code. She highlights some core reasons for using containers. First, containers give organizations the ability to control which libraries and tools developers are using. Second, containers enable organizations to quickly onboard new developers and consultants like herself with the proper tooling so they can be productive as quickly as possible. Thank you for sharing this super valuable content, Cathrine!

Deborah Melkin – What have I been doing with Containers

Deborah highlights the various use cases for running SQL Server in containers, things like quick deployment and code and upgrade testing. She also introduces the term ‘virtual instance’, which is a fantastic way to describe to DBAs what you get when you run SQL Server in a container. Deborah also links to some other posts where she describes her experiences getting started with Docker and setting up ports for SQL Server in containers. Awesome post(s) :) thanks for this, Deborah!

And Last But Not Least, the Rule Breakers!!!

They didn’t comment on my invitation post, but I found them on that pesky #tsql2sday hashtag on Twitter.

Shane O’Neill – What have you been up to with containers?

In this post, Shane shares with us what happens when you run a Kubernetes cluster on your laptop…things get hot…fast. (Y’all remember when my laptop caught fire 🔥???) But anyway, Shane isolates the problem, cleans things up, and gets back to running SQL Server in containers in Docker on his laptop to keep things cool 😎 . Thanks for sharing this, Shane!

KUBERNETES…drink!

Rob Sewell – TSql2sday video – Azure Arc Enabled Data Services in AKS Cluster

The Beard brings it all together…in his “post” (which is a YouTube video because Rob is incredible and also a rule breaker), Rob shows you Azure Arc enabled Data Services (something near and dear to my heart)…he deploys an Azure Kubernetes Service cluster, an Azure Arc enabled Data Services deployment, a couple of SQL Server Managed Instances, and a complete monitoring and logging stack using Grafana and Kibana. And this, my friend, is the magic of containers and Kubernetes…Rob does all of this in code in a repeatable fashion in just about 30 minutes.

Summary

Summing this all up, there are a couple of primary themes here: speed and consistency. Containers enable you to develop, deploy, and maintain applications quickly and consistently in code. And as we discussed in the Invitation post, containers are the foundation for the next generation of the Microsoft Data Platform: Azure Arc enabled Data Services! Thank you all again for your fantastic posts!

T-SQL Tuesday #140: What have you been up to with containers?

In recent years containers have come into the data platform world, exposing new technologies to data professionals. Microsoft put SQL Server on Linux, and shortly after that, SQL Server made its way into containers. SQL Server in containers has become the foundation for things like Big Data Clusters and Azure Arc-enabled Data Services.

My invitation to you for this month’s #tsql2sday is…

I want to invite you to share your experiences using containers and not just SQL Server in containers…

  • What are the cool things you’ve done with containers in your environment, test lab, or even presentation demos?
  • Are you using containers in production? If so, what are the tips or tricks you can share to help others?

If you haven’t tried containers yet…here’s a video showing you how to do the following…

  • Deploy a SQL Server in just a few minutes!
  • Connect to your container-based SQL Server.
  • Upgrade a container-based SQL Server to a new version.

So, if you haven’t used containers before, go ahead and try out the demos from this video, which you can get here, and write about your experience!
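
If you’d like a sketch to follow along with, here’s roughly what those three steps look like with Docker on your laptop. This is a minimal, hedged example, assuming Docker and the sqlcmd utility are installed, $PASSWORD holds a strong sa password, and the image tags are just examples; it mirrors the volume-based upgrade approach covered elsewhere on this page.

# 1. Deploy a SQL Server in just a few minutes
docker run -d --name sql1 \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    mcr.microsoft.com/mssql/server:2019-CU8-ubuntu-18.04

# 2. Connect to your container-based SQL Server
sqlcmd -S localhost,1433 -U sa -P $PASSWORD -Q 'SELECT @@VERSION'

# 3. Upgrade: stop and remove the container, keep the data volume, start from a newer image
docker stop sql1 && docker rm sql1
docker run -d --name sql1 \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    mcr.microsoft.com/mssql/server:2019-latest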

*** The Rules ***

I’d love to see some new contributors to #tsql2sday – if you’re not familiar with how this works, here are the rules in a nutshell:

  • Your post must be published on Tuesday, July 13th 2021 (in any timezone).
  • Include the T-SQL Tuesday Logo and make it link to this invitation post.
  • Please add a comment to this post with a link to your own so I know where to find it.
  • Tweet about your post using the #tsql2sday hashtag.

Getting SQL Agent Jobs and Job Steps Configuration

Recently I needed to take a look at all of the SQL Server Agent Jobs and their Jobs Steps for a customer. Specifically, I needed to review all of the Jobs and Job Steps for Ola Hallengren’s Maintenance Solution and look at the Backup, Index Maintenance and Integrity Jobs to ensure they’re configured properly and also account for any customizations and one-offs in the Job definitions. This customer has dozens of SQL Server instances and well, I wasn’t about to click through everything in SSMS…and writing this in TSQL would have been a good candidate for a Ph.D. dissertation. So let’s check out how I solved this problem using dbatools.

Enter dbatools…

In my first attempt at doing this I tried getting all the Jobs using Get-DbaAgentJob and exporting the Jobs to TSQL using Export-DbaScript. This did give me the code for all of the Jobs I was interested in. But that left me trying to decipher SQL Agent Job and Schedule syntax and encodings and I got all twisted up in the TSQL-ness of that. I needed this to be more readable.

So I thought…there has to be a better way…there is! So, I wrote the following. This code gets each SQL Agent Job and prints the Job’s Name, NextRunDate, whether it has a Schedule, and Operator information, and then for each JobStep it prints the Step’s Name, Subsystem, and finally the Command. Using this I can quickly get a feel for the configurations across the environment.

Get a listing of all SQL Instances

    $Servers = Get-DbaRegisteredServer

Get all of the SQL Agent Jobs across all SQL Instances

    $jobs = Get-DbaAgentJob -SqlInstance $Servers.Name

Filter that list down to the SQL Agent Jobs that are in the Database Maintenance category

    $MaintenanceJobs = $jobs | Where-Object { $_.Category -eq 'Database Maintenance' } 

For each SQL Agent Job, collect the Job’s Name, NextRunDate, whether it has a Schedule, and Operator information, and then for each JobStep its Name, Agent Subsystem, and finally the Command.

    $JobsAndSteps = foreach ($MaintenanceJob in $MaintenanceJobs){
        foreach ($JobStep in $MaintenanceJob.JobSteps) {
            $obj = [PSCustomObject]@{
                SqlInstance = $MaintenanceJob.SqlInstance
                Name = $MaintenanceJob.Name
                NextRunDate = $MaintenanceJob.NextRunDate
                HasSchedule = $MaintenanceJob.HasSchedule
                OperatorToEmail = $MaintenanceJob.OperatorToEmail
                JobStepName = $JobStep.Name
                SubSystem = $JobStep.SubSystem
                Command = $JobStep.Command
                }
            $obj  
        }
    }

Here’s some sample output using Format-Table. From there I can quickly scan and analyze all the Jobs on all of the Instances in an environment.

$JobsAndSteps | Format-Table

SqlInstance     Name                                    NextRunDate           HasSchedule OperatorToEmail JobStepName                                           SubSystem Command
-----------     ----                                    -----------           ----------- --------------- -----------                                           --------- -------
PRODSQL1        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Backup         CmdExec sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'T:\Backup', @Ba...
PRODSQL1        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Sync           CmdExec ROBOCOPY SOME STUFF
PRODSQL1        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Cleanup     PowerShell RUN SOME POWERSHELL TO DO COOL STUFF
PRODSQL2        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Backup         CmdExec sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'T:\Backup', @Ba...
PRODSQL2        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Sync           CmdExec ROBOCOPY SOME STUFF
PRODSQL2        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Cleanup     PowerShell RUN SOME POWERSHELL TO DO COOL STUFF

You can also take that output, convert it to CSV, and then open it in Excel for analysis.

$JobsAndSteps | ConvertTo-Csv -NoTypeInformation | Out-File JobSteps.csv

Persistent Server Name Metadata When Deploying SQL Server in Kubernetes

In this post, we will explore how a Pod name is generated, the Pod name lifecycle, how it’s used inside a Pod to set the system hostname, and how the system hostname is used by SQL Server to set its server name metadata.

Pod Naming in Deployments

When deploying SQL Server in Kubernetes using a Deployment, the Pod created by the Deployment Controller will have a name with a structure of <DeploymentName>-<PodTemplateHash>-<PodID> for example, mssql-deployment-8cbdc8ddd-9n7jh.

Let’s break that example Pod name down a bit more:

  • mssql-deployment – this is the name of the Deployment specified at metadata.name. This is stable for the lifecycle of the Deployment.
  • 8cbdc8ddd – this is a hash of the Pod Template Spec in the Deployment object at spec.template. Changing the Pod Template Spec changes this value and also triggers a rollout of the new Pod configuration.
  • 9n7jh – this is a random string assigned to help identify the Pod uniquely. This changes with the lifecycle of the Pod itself.

In a default Deployment configuration, the Pod’s name is used as the system hostname inside the Pod. In a Deployment, when a Pod is deleted for whatever reason (Pod/Node failure, the Pod being administratively deleted, or an update to the Pod Template Spec triggering a rollout), the new Pod created will have a new Pod name and a matching hostname inside the Pod. It is a new Pod after all. :) This can lead to an interesting scenario inside SQL Server since the Pod name can change. Let’s dig deeper…
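
You can see these pieces on a live Deployment. For example, assuming the app=mssql label from Listing 1 below, the pod-template-hash label the Deployment controller adds to the Pod matches the middle portion of the Pod name (output trimmed and illustrative):

kubectl get pods -l app=mssql --show-labels
NAME                               READY   STATUS    RESTARTS   AGE   LABELS
mssql-deployment-8cbdc8ddd-9n7jh   1/1     Running   0          2m    app=mssql,pod-template-hash=8cbdc8ddd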

Server name metadata inside SQL Server running in a Pod

To ensure SQL Server’s data has a lifecycle independent of the Pod’s lifecycle, in a basic configuration, a PersistentVolume is used for the instance directory /var/opt/mssql. The first time SQL Server starts up, it copies a set of system databases into the directory /var/opt/mssql. During the initial startup, the current hostname of the Pod is used to set SQL Server system metadata for the server name. Specifically @@SERVERNAME, SERVERPROPERTY('ServerName') and the Name column from sys.servers.

Listing 1 shows an example Deployment for SQL Server. In this configuration, the hostname inside the Pod will match the current Pod name. But what happens when the Pod is deleted and a new Pod is created with a new name? Let’s walk through that together in the next section.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:  
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
        app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      securityContext:
        fsGroup: 10001
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2019-CU8-ubuntu-18.04'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD 
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-nfs-instance

Listing 1 – Example SQL Server Manifest using a Deployment Controller

Examining Server Name Metadata When Deploying SQL Server in a Deployment

Initial Deployment

When the Deployment is created, a Pod is created. In the output below, you can see the name of the Pod is mssql-deployment-bb44b7bf7-nzkmt, and the hostname set inside the Pod is the same, mssql-deployment-bb44b7bf7-nzkmt.

kubectl get pods 
NAME                               READY   STATUS    RESTARTS   AGE
mssql-deployment-bb44b7bf7-nzkmt   1/1     Running   0          7s

kubectl exec -it mssql-deployment-bb44b7bf7-nzkmt -- /bin/hostname
mssql-deployment-bb44b7bf7-nzkmt

Check Server Name Metadata

Since this is the initial deployment of this SQL Server instance, system databases are copied into /var/opt/mssql, and the server name metadata is set. Let’s query SQL Server for @@SERVERNAME, SERVERPROPERTY('ServerName') and the Name column from sys.servers. In the output below you can see all three values match.

sqlcmd -S $SERVICEIP,$PORT -U sa -Q "SELECT @@SERVERNAME AS SERVERNAME, SERVERPROPERTY('ServerName') AS SERVERPROPERTY, name FROM sys.servers" -P $PASSWORD -W
SERVERNAME                          SERVERPROPERTY                   name
----------                          --------------                   ----
mssql-deployment-bb44b7bf7-nzkmt    mssql-deployment-bb44b7bf7-nzkmt mssql-deployment-bb44b7bf7-nzkmt

Delete the Currently Running Pod

Next, let’s delete the Pod and see what happens to the Pod’s name, the Pod’s hostname, and the SQL Server server name metadata.

kubectl delete pod mssql-deployment-bb44b7bf7-nzkmt
pod "mssql-deployment-bb44b7bf7-nzkmt" deleted

I’ve deleted the Pod, and since this is controlled by a Deployment controller, it immediately creates a new Pod in its place. This Pod gets a new name. The existing databases and configuration are persisted in the attached PersistentVolume at /var/opt/mssql. These databases are all brought online. In the output below, you can see the new Pod name and hostname are both mssql-deployment-bb44b7bf7-6gm6v.

kubectl get pods 
NAME                               READY   STATUS    RESTARTS   AGE
mssql-deployment-bb44b7bf7-6gm6v   1/1     Running   0          20s

kubectl exec -it mssql-deployment-bb44b7bf7-6gm6v -- hostname
mssql-deployment-bb44b7bf7-6gm6v

What’s in a name?

Now let’s query the server name metadata again. In the output below, you can see there are some inconsistencies. We saw above that the Pod has a new name and hostname (mssql-deployment-bb44b7bf7-6gm6v), but this change isn’t reflected in all the server name metadata inside our instance. The only place it is updated is SERVERPROPERTY('ServerName'); the other values still have the initial Pod name mssql-deployment-bb44b7bf7-nzkmt.

sqlcmd -S $SERVICEIP,$PORT -U sa -Q "SELECT @@SERVERNAME AS SERVERNAME, SERVERPROPERTY('ServerName') AS SERVERPROPERTY, name FROM sys.servers" -P $PASSWORD -W
SERVERNAME                          SERVERPROPERTY                   name
----------                          --------------                   ----
mssql-deployment-bb44b7bf7-nzkmt mssql-deployment-bb44b7bf7-6gm6v mssql-deployment-bb44b7bf7-nzkmt

Setting a Pod’s Hostname

So what do we do about this? Having instability in the server name metadata can break Replication, mess up our server monitoring systems, and even break code. To give the Pod’s hostname a persistent value, you need to set the hostname field in the Pod Template Spec (spec.template.spec.hostname) in the Deployment. This sets the system hostname inside the Pod to this value.

In the code below, you can see I’ve set spec.template.spec.hostname to sql01. On the initial deployment of a SQL instance, this is the value that is stored in the instance’s server name metadata.

If you already have a SQL Server up and running in Kubernetes and did not set the hostname value in the Pod Template Spec, the server name metadata will need to be updated using standard SQL Server methods with sp_dropserver and sp_addserver.
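
As a hedged sketch of that cleanup (the old and new names here come from this walkthrough; substitute your own), it looks something like this, followed by a restart of SQL Server so @@SERVERNAME picks up the change:

# rename the instance metadata from the old Pod-generated name to the stable hostname
sqlcmd -S $SERVICEIP,$PORT -U sa -P $PASSWORD -Q "EXEC sp_dropserver 'mssql-deployment-bb44b7bf7-nzkmt'; EXEC sp_addserver 'sql01', 'local';"

# restart SQL Server, for example by deleting the Pod after setting spec.template.spec.hostname
# so the new Pod comes up with the hostname sql01, then verify
sqlcmd -S $SERVICEIP,$PORT -U sa -P $PASSWORD -Q "SELECT @@SERVERNAME AS SERVERNAME, SERVERPROPERTY('ServerName') AS SERVERPROPERTY, name FROM sys.servers" -W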

But for demonstration purposes, I’m going to start over as if this is an initial deployment. And deploy the manifest in Listing 2 into my cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:  
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
        app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      securityContext:
        fsGroup: 10001
      hostname: sql01
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2019-CU8-ubuntu-18.04'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD 
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-nfs-instance

Listing 2 – Example SQL Server Manifest using a Deployment Controller, setting the Pod’s hostname

In the output below, the Pod name is mssql-deployment-8cbdc8ddd-nv8j4, but inside the Pod the hostname is sql01, and now all three values for our server name metadata match. If this Pod is deleted, the new Pod gets a new name, the hostname inside the Pod will still be sql01, and the server name metadata will still be set to sql01.

kubectl get pods 
NAME                               READY   STATUS    RESTARTS   AGE
mssql-deployment-8cbdc8ddd-nv8j4   1/1     Running   0          43s

kubectl exec -it mssql-deployment-8cbdc8ddd-nv8j4  -- hostname
sql01

sqlcmd -S $SERVICEIP,$PORT -U sa -Q "SELECT @@SERVERNAME AS SERVERNAME, SERVERPROPERTY('ServerName') AS SERVERPROPERTY, name FROM sys.servers" -P $PASSWORD -W
SERVERNAME  SERVERPROPERTY name
----------  -------------- ----
sql01       sql01           sql01

Setting the hostname in the Pod Template Spec gives you the ability to persist the hostname and thus the server name metadata inside SQL Server. This is crucial for services and code that depend on a static hostname. A StatefulSet is a Controller in Kubernetes that does give you persistent, stable naming independent of the lifecycle of a Pod. I will explore those in an upcoming blog post.

Pre-Conference Workshop and Sessions at PASS Summit

I’m pleased to announce that I will be presenting at PASS Summit. This year I have a pre-conference workshop and a regular session. Let’s dive into each.

Pre-Conference Workshop: The Future of Deployment for Modern Data Platform Applications

Ben Weissman and I are teaching a pre-conference workshop called “The Future of Deployment for Modern Data Platform Applications”. In this workshop, we’re going to cover how you will be deploying data platform applications in the near future. Here’s a listing of the topics we’re going to cover.

  • Kubernetes Fundamentals – building a cluster and deploying applications
  • Deploying SQL Server in Kubernetes – diving deep into what it takes to run a stateful application in Kubernetes
  • Deploying Big Data Clusters – showcasing how you can deploy a complex stateful application in Kubernetes.
  • Azure Arc Enabled Data Services Fundamentals – learn how to run any Azure Data Service anywhere you have Kubernetes, in any cloud or on-premises.
  • Deploying Azure Arc Enabled Data Services – tons of demos and code samples highlighting how to deploy SQL Managed Instance and PostgreSQL HyperScale in any cloud or on-premises.

You will leave this session with the knowledge, scripts, and tools to get started with Kubernetes and Kubernetes-based applications.

Sign up for our workshop here: https://www.pass.org/summit/2020/Register-Now

Regular Session: Deploying and Managing SQL Server with dbatools

Well, if you’ve been following my blog and work over the last few years, it’s been all containers and Kubernetes. But I still have clients that run SQL Server on Windows. And for those clients, there’s only one way that I install SQL Server…with dbatools. So I wrote a session describing how I did it for my client, and I’m going to share all that knowledge with you! Check out the deets…

Abstract

The dbatools project brings automation to the forefront of the SQL Server configuration, operations, and deployment tasks. This session will look at how to install and configure multiple SQL Servers quickly and consistently using dbatools deployment tools. Once those systems are up and running, we will look at how to configure and manage multiple systems using PowerShell automation techniques. By the end of this session, you will have the tools, techniques, and code to automatically and consistently deploy and configure SQL Server in your environment.

Hope to see you at PASS Summit this year!

Sign up PASS Summit here: https://www.pass.org/summit/2020/Register-Now

PASS Summit 2020

Speaking at Data Grillen 2020

I’m proud to announce that I will be speaking at Data Grillen 2020. The conference runs from 28 May 2020 through 29 May 2020.

This is an incredible event packed with fantastic content, speakers, bratwurst and Beer! 

Check out the amazing schedule (and when I say check out the amazing schedule, I really mean it. Some of the world’s best Data Platform speakers are going to be there)

On Thursday, May 28th at 15:00 – I’m presenting “Containers –  Day 2” in the Handschuh room.

Here’s the abstract

You’ve been working with containers in development for a while, benefiting from the ease and speed of the deployments. Now it’s time to extend your container-based data platform’s capabilities for your production scenarios.

In this session, we’ll look at how to build custom containers, enabling you to craft a container image for your production system’s needs. We’ll also dive deeper into operationalizing your container-based data platform and learn how to provision advanced disk topologies, seed larger databases, implement resource control and understand performance concepts.

By the end of this session, you will learn what it takes to build containers and make them production ready for your environment.

My good friend, and container expert, Andrew Pruski (@dbafromthecold) will be presenting “SQL Server and Kubernetes” in the same room just before me at 13:30. Be sure to come to both sessions for a deep dive into running SQL Server in Containers and Kubernetes.

Prost! 

Speaking at SQLBits 2020

I’m proud to announce that I will be speaking at SQLBits! I had the absolute pleasure of speaking at SQLBits last year for the first time and saw firsthand how great this event is, and I cannot wait to get back and speak again! And this year, I have two sessions!!! One on building and deploying container-based applications in Kubernetes and the other on deploying SQL Server in Kubernetes.

If you haven’t been to SQLBits before, what are you waiting for? Sign up now!

 

SQL Bits Excel London

Here are the details for my sessions!

Practical Container Scenarios in Azure – April 2 2020 – 12:40PM

You’ve heard the buzz about containers and Kubernetes, now let’s start your journey towards rapidly deploying and scaling your container-based applications in Azure. In this session, we will introduce containers and the container orchestrator Kubernetes. Then we’ll dive into how to build a container image, push it into our Azure Container Registry and deploy it to our Azure Kubernetes Services cluster. Once deployed, we’ll learn how to keep our applications available and how to scale them using Kubernetes.

Key topics introduced

  • Building a container based application
  • Publishing containers to Azure Container Registry
  • Deploying Azure Kubernetes Services Clusters
  • Scaling our container-based applications in Azure Kubernetes Services

Deploying SQL Server in Kubernetes – April 3 2020 – 4:50PM

Are you thinking about running SQL Server in Kubernetes and don’t know where to start…are you wondering what you really need to know? If so, then this is the session for you! When deploying SQL Server In Kubernetes key considerations include data persistency, Pod configuration, resource management, and high availability/disaster recovery scenarios. In this session, we’ll look closely at each of these elements and learn how to run SQL Server in Kubernetes.

Learning Objectives

  • Deploying SQL Server in Kubernetes
  • Allocating Persistent Data Storage and configuring advanced disk topologies
  • SQL Server Specific Pod Configuration
  • Near zero-downtime upgrades
  • High availability and Disaster Recovery Scenarios 

Be sure to come to both sessions to learn how to build and deploy container-based applications in Kubernetes and also how to deploy SQL Server in Kubernetes!

Speaking at SQLIntersection Orlando 2020

I’m very pleased to announce that I will be speaking at SQL Intersection April 2020!  This is my first time speaking at SQL Intersection and I’m very excited to be doing so!

Speaking at SQL Intersection means so much to me because in 2014 I got my first exposure to the SQL Server community via SQLskills and their training. Then to follow up on their training workshops I attended my very first IT conference, SQL Intersection and now I get to come back as a speaker. Let’s just say, I’m a little excited!!!

Now as for the sessions…lots of content here on SQL Server on Linux, Containers and Kubernetes…check them out! Click here to register!

Full Day Workshop

Kubernetes Zero to Hero: Installation, Configuration and Application Deployment

Modern application deployment needs to be fast and consistent to keep up with business objectives, and Kubernetes is quickly becoming the standard for deploying container-based applications, fast. In this day-long session, we will start with an architectural overview of a Kubernetes cluster and how it manages application state. Then we will learn how to build a production-ready cluster. With our cluster up and running, we will learn how to interact with it and perform common administrative tasks, then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and how to keep everything up and running.

Workshop Objectives:

  • Introduce Kubernetes Cluster Components
  • Introduce Kubernetes API Objects and Controllers
  • Installing Kubernetes
  • Interacting with your cluster
  • Storing persistent data in Kubernetes
  • Deploying Applications in Kubernetes
  • Deploying SQL Server in Kubernetes
  • High Availability SQL Server scenarios in Kubernetes

General Sessions

Containers – It’s Time to Get on Board

Containers are taking over, changing the way systems are developed and deployed…and that’s not hyperbole. Just imagine if you could deploy SQL Server or even your whole application stack in just minutes. You can do that by leveraging containers! In this session, we’ll get you started on your container journey, learn some common container scenarios, and introduce container orchestration with Kubernetes.

In this session we’ll look at

  • Container Fundamentals
  • Common Container Scenarios
  • Running SQL Server in a Container
  • Container Orchestration with Kubernetes

Containers – Continued!

You’ve been working with containers in development for a while, benefiting from the ease and speed of the deployments. Now it’s time to extend your container-based data platform’s capabilities for your production scenarios.
In this session, we’ll look at how to build custom containers, enabling you to craft a container image for your production system’s needs. We’ll also dive deeper into operationalizing your container-based data platform and learn how to provision advanced disk topologies, seed larger databases, implement resource control and understand performance concepts.

By the end of this session, you will learn what it takes to build containers and make them production ready for your environment.

  • Custom container builds with Features
  • Advanced disk configurations
  • Backups/restores
  • Seeding larger databases
  • Backup restore into the container from a mounted volume
  • Resource control
  • Container Restart Policy
  • Container based performance concepts

Linux OS Fundamentals for the SQL Admin

Do you manage SQL Server but have developers using Linux? It’s time to take the leap to understand and communicate better with your Linux peers! You might be a Windows / SQL Server Admin but both SQL Server and PowerShell are now available on Linux. You can manage ALL of these technologies more effectively now. Don’t fear! Linux is just an operating system! While it feels different, it still has all the same components as Windows! In this session, I’ll show you that. We will look at the Linux operating system architecture and show you how to interact with and manage a Linux system. By the end of this session, you’ll be ready to go back to the office and get started working with Linux with a fundamental understanding of how it works.

Monitoring Linux Performance for the SQL Server Admin

Taking what you learned in our Fundamentals session one step further, we will continue and focus on the performance data you’re used to collecting on Windows! We’ll dive into SQLPAL and how the Linux architecture and internals enable high performance for your SQL Server. By the end of this session you’ll be ready to go back to the office with a solid understanding of performance monitoring for Linux systems and SQL Server on Linux. We’ll look at the core system components of CPU, disk, memory, and networking, cover monitoring techniques for each, and look at some of the new tools available, from DMVs to DBFS.

In this session we’ll cover the following

  • System resource management concepts, CPU, disk, memory and networking
  • Introduce SQLPAL architecture and internals and how its design enables high performance for SQL Server on Linux
  • Baselining and benchmarking 

 

SQLintersection 2020

Upgrading SQL Server 2017 Containers to 2019 non-root Containers with Data Volumes – Another Method

Yesterday in this post I described a method to correct permissions when upgrading a SQL Server 2017 container using Data Volumes to 2019’s non-root container on implementations that use the Moby or HyperKit VM. My friend Steve Jones on Twitter wondered if you could do this in one step by attaching a shell (bash) in the 2017 container prior to shutdown. Absolutely…let’s walk through that here in this post. I opted to use an intermediate container in the prior post out of an abundance of caution so that I was not changing permissions on the SQL Server instance directory and all of the data files while they were in use. Technically this is a-ok, but again…just being paranoid there.

Start Up a Container with a Data Volume

Start up a container with a Data Volume (sqldata1) using the 2017 image. This will create the directories and files with root as the owner and group.

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2017-latest
597652b61b22b27ff6d765b48196621a79dd2ffd7798328868d2296c7e953950 

Create a Database

Let’s create a database and confirm it’s there.

sqlcmd -S localhost,1433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD
sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD -W

name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)

Get a Shell into the Container

Now, let’s get a shell into our running container. Logging in as root is great, isn’t it? :) 

docker exec -it sql1 /bin/bash
root@ed9051c6b5f3:/# 

Adjust the Permissions

Now while we’re in the running 2017 container we can adjust the permissions on the instance directory. The user mssql (uid 10001) doesn’t exist in the 2017 container, so the key to the permissions is using the uid directly.

ls -laR /var/opt/mssql
chgrp -R 0 /var/opt/mssql        # set the group owner to root (gid 0)
chmod -R g=u /var/opt/mssql      # give the group the same permissions as the owner
chown -R 10001:0 /var/opt/mssql  # set the owner to uid 10001, the mssql user in the 2019 non-root image
ls -laR /var/opt/mssql
exit

Stop our Container

Now to start the process of upgrading from 2017 to 2019, we’ll stop and remove the existing container.

docker stop sql1
docker rm sql1
sql1 

Start up a 2019 non-root Container

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2019-GDR1-ubuntu-16.04 

Is Everything OK?

Is our database there? Yep!

sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD
name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)

 

 

Upgrading SQL Server 2017 Containers to 2019 non-root Containers with Data Volumes

Recently Microsoft released a non-root SQL Server 2019 container, and that’s the default if you’re pulling a new container image. But what if you’re using a 2017 container running as root and want to upgrade your system to the SQL Server 2019 container…well, something’s going to break. As you can see here, my friend Grant Fritchey came across this issue recently and asked for some help on Twitter’s #sqlhelp. This article describes a solution to getting things sorted and running again. The scenario below applies if you’re using a Linux-based SQL Server container on a Windows or Mac host where the container volumes are backed by a Docker Moby or HyperKit virtual machine. If you’re using Linux containers on Linux, you’ll adjust the file system permissions directly.

What’s the issue?

When you start up the 2017 container, the SQL Server (sqlservr) process is running as root (uid 0). Any files created by this process will have the user and group ownership of the root user. Now when we come along later and start up a 2019 container, the sqlservr process is running as the user mssql (uid 10001 by default). This new user doesn’t have permission to open the database files and other files used by SQL Server.

How do we fix this?

The way I fixed this issue is by stopping the SQL Server 2017 container, using another container and attaching the data volume used by the 2017 container to it, then recursively adjusting the permissions to allow a user with the uid 10001 access to the files in the instance directory /var/opt/mssql. If your databases and log files are in other paths, you’ll have to take that into account when using this process. Once we adjust the permissions, we stop that Ubuntu container and start up SQL Server’s 2019 non-root container, and everything should be happy happy. Let’s do it together…

Start Up a Container with a Data Volume

Start up a container with a Data Volume (sqldata1) using the 2017 image. This will create the files with root as the owner and group.

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2017-latest
597652b61b22b27ff6d765b48196621a79dd2ffd7798328868d2296c7e953950 

Create a Database

Let’s create a database and confirm it’s there.

sqlcmd -S localhost,1433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD
sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD -W

name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)

Stop our Container

Now to start the process of upgrading from 2017 to 2019, we’ll stop and remove the existing container.

docker stop sql1
docker rm sql1
sql1 

Start a 2019 non-root Container

Create a new container pointing to that existing Data Volume (sqldata1). This time I’m not using -d, so we stay attached to stdout and can see the error messages on the terminal. Here you can see that the sqlservr process is unable to open the file instance_id.

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
     mcr.microsoft.com/mssql/server:2019-GDR1-ubuntu-16.04

SQL Server 2019 will run as non-root by default. This container is running as user mssql.
Your master database file is owned by root. To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
sqlservr: Unable to open /var/opt/mssql/.system/instance_id: Permission denied (13)
/opt/mssql/bin/sqlservr: Unable to open /var/opt/mssql/.system//instance_id: Permission denied (13)

Since that was a bust, let’s go ahead and delete that container since it’s not usable. 

docker rm sql1
sql1 

Changing Permissions on the Files

Let’s create an intermediate container, in this case using an Ubuntu image, and mount that data volume (sqldata1), and then change the permissions on the files SQL Server needs to work with. 

docker run \
    --name 'permissionsarehard' \
    -v sqldata1:/var/opt/mssql \
    -it ubuntu:latest

If we look at the permissions of the instance directory (/var/opt/mssql) we can see the files’ user and group owner are root. This is just a peek at the instance directory; we’ll need to adjust permissions on all of the files SQL Server needs to work with, recursively within this directory.

ls -la /var/opt/mssql
/var/opt/mssql:
total 24
drwxr-xr-x 6 root root 4096 Nov 20 13:43 .
drwxr-xr-x 1 root root 4096 Nov 20 13:46 ..
drwxr-xr-x 5 root root 4096 Nov 20 13:43 .system
drwxr-xr-x 2 root root 4096 Nov 20 13:43 data
drwxr-xr-x 2 root root 4096 Nov 20 13:43 log
drwxr-xr-x 2 root root 4096 Nov 20 13:43 secrets

Let’s adjust the permissions on the directories and files sqlservr needs access to…again I want to point out, that this is against the default instance directory which is /var/opt/mssql…if you have files in other locations they will need their permissions updated too. Check out the Microsoft Docs article here for more information on this.

ls -laR /var/opt/mssql
chgrp -R 0 /var/opt/mssql
chmod -R g=u /var/opt/mssql
chown -R 10001:0 /var/opt/mssql
ls -laR /var/opt/mssql
exit

Here’s some output from a directory listing of our instance directory after we’ve made the permissions changes…now they have an owner of 10001 and a group owner of root.

ls -la /var/opt/mssql
/var/opt/mssql:
total 24
drwxrwxr-x 6 10001 root 4096 Nov 20 13:43 .
drwxr-xr-x 1 root  root 4096 Nov 20 13:46 ..
drwxrwxr-x 5 10001 root 4096 Nov 20 13:43 .system
drwxrwxr-x 2 10001 root 4096 Nov 20 13:43 data
drwxrwxr-x 2 10001 root 4096 Nov 20 13:43 log
drwxrwxr-x 2 10001 root 4096 Nov 20 13:43 secrets

Let’s start up a 2019 non-root container now

Start up our 2019 container now…should work eh? Woot!

docker run \
    --name 'sql1' \
    -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD='$PASSWORD \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2019-GDR1-ubuntu-16.04 

Why UID 10001?

Let’s hop into the container now that it’s up and running…and we’ll see sqlservr is running as mssql, which has a uid of 10001. This is the default uid used inside the non-root container. If you’re using a system that doesn’t have this user defined, like the intermediate Ubuntu container, you’ll need to adjust permissions using the uid directly. That permission information is written into the directories and files, and when we start up the 2019 container again the correct permissions are in place, since the uid of the mssql user matches the uid of the permissions on the files and directories.

docker exec -it sql1 /bin/bash

ps -aux
USER   PID %CPU %MEM     VSZ    RSS TTY   STAT START TIME COMMAND
mssql    1  8.4  0.3  148820  22768 ?     Ssl  13:49 0:00 /opt/mssql/bin/
mssql    9 96.5  9.3 7470104 570680 ?     Sl   13:49 0:03 /opt/mssql/bin/
mssql  140  2.0  0.0   18220   3060 pts/0 Ss   13:49 0:00 /bin/bash
mssql  148  0.0  0.0   34420   2792 pts/0 R+   13:49 0:00 ps -aux

id mssql
uid=10001(mssql) gid=0(root) groups=0(root)
exit

Is Everything OK?

Is our database there? Yep!

sqlcmd -S localhost,1433 -U sa -Q 'SELECT name from sys.databases' -P $PASSWORD
name
----
master
tempdb
model
msdb
TestDB1

(5 rows affected)

Another Method

If you like living on the edge, you can correct the permissions by logging into the running 2017 container prior to shutdown rather than using an intermediate container; check out this post here.