Author Archives: Anthony Nocentino

T-SQL Tuesday #140 Wrap up: What have you been up to with containers?

I want to start by saying thank you to all who submitted, and an amazing collection of people submitted some fantastic content. Also, thanks to Steve for asking me to host and being patient with me for mixing up the dates and the hashtag. It’s #tsql2sday and it’s on Tuesday not Wednesday :P

T SQL Tuesday Logo

Now, onto the posts in submission order.

Rob Farley – On containers

Rob discusses how he uses containers to quickly spin up SQL Server instances without installing them on his local OS, replacing the virtual machine-based environments he used in the past. And I can’t agree with this more. I’ve used Macs for 20 years and have used VMs to do SQL Server-based work. That’s no longer the case. I can run SQL Server in containers without VMs*. And you can do the same: spin up a container in minutes anywhere you have a container runtime like Docker. Thanks for sharing, Rob!
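
If you’ve never done it, here’s a minimal sketch of what that looks like with Docker (the container name, SA password, and image tag are just placeholders, swap in your own):

docker run \
    --name sql1 \
    -e 'ACCEPT_EULA=Y' \
    -e 'MSSQL_SA_PASSWORD=S0methingS@Str0ng!' \
    -p 1433:1433 \
    -d mcr.microsoft.com/mssql/server:2019-latest

A short image pull later, you have a SQL Server instance listening on port 1433, no VM or local install required.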

Jeff Hill – Container Convenience

Jeff introduces us to some non-SQL Server container images he uses at home, like Pi-hole, a media server, a personal CRM, and more. The big idea in this post is that due to the isolation principles of containers, you can spin up containers from container images super easily via a container repository like Docker Hub…which enables you to test out new software or even new versions of existing software, and if you don’t want that container anymore, delete it. There’s no leftover crud on your system like config files. Great post, Jeff! And as Jeff suggests, head over to Docker Hub and see what software you can find that’s useful for you!

PS: If you aren’t using Pi-hole, oh, you really should. Check it out…I run it in a container on my laptop for when I’m traveling, and I have one in Azure that my home network is using.
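
If you want to try Pi-hole the way Jeff describes, a container gets you going quickly. Here’s a minimal sketch, assuming Docker and the pihole/pihole image from Docker Hub (the timezone and web password values are placeholders; check the image’s Docker Hub page for the current configuration options):

docker run \
    --name pihole \
    -e TZ='America/Chicago' \
    -e WEBPASSWORD='ChangeMe!' \
    -p 53:53/tcp -p 53:53/udp \
    -p 80:80 \
    -d pihole/pihole:latest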

Kevin Chant – Easy demos using containers

Kevin shows us how he’s been using containers in demos, specifically for sessions focused on DevOps. Using tools like Azure Pipelines in Azure DevOps or GitHub Actions, you can build automated testing and deployment of SQL Server instances and databases for the various environments needed in your deployment, such as prod, stage, etc. This is truly one of the superpowers of containers…building deployment automation in code so that you can roll out in a defined, tested way…every time you deploy a new instance, a database, or a change to either. Kevin also introduces us to the idea that many of us are already using containers and might not even know it. He points out that several Azure services like Databricks and Synapse Analytics use containers behind the scenes. Excellent post, Kevin!

Aaron Bertrand – What have you been up to with containers?

Aaron shows us what he’s been up to with containers, specifically spinning up containers to test application compatibility when making changes around case-sensitivity and binary collation settings at the instance level. He describes how he can quickly spin up the container, run the test, and remove the container…and as we discussed earlier, this used to be something that would require provisioning a whole VM and installing SQL Server. Such a huge time saver. In addition to describing how he uses containers, Aaron also gives us some example code to start up a container with some of the unique settings he wanted to test. Thanks, Aaron, great stuff!
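
This isn’t Aaron’s exact code (go read his post for that), but a sketch of the idea: the SQL Server container images let you set the instance-level collation with an environment variable at startup, so each test gets its own throwaway instance (name, password, and port here are placeholders):

docker run \
    --name sqlcollationtest \
    -e 'ACCEPT_EULA=Y' \
    -e 'MSSQL_SA_PASSWORD=S0methingS@Str0ng!' \
    -e 'MSSQL_COLLATION=Latin1_General_100_BIN2' \
    -p 1434:1433 \
    -d mcr.microsoft.com/mssql/server:2019-latest

Run your test, then docker rm -f sqlcollationtest and it’s gone.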

Tom – Containers and me

Tom discusses how he’s used containers to deploy RabbitMQ, a monitoring stack using Grafana, and a SQL Server environment via a build pipeline. Excellent stuff. Thanks for sharing, Tom! Tom also mentions my container-mate Andrew Pruski’s SQL Server and Containers Guide. It’s fantastic stuff, check it out! Everything from getting started to deep dives is available there.

Todd Kleinhans – RAPIDS and SQL Server Containers

Todd shows us how you can use containers to enable data science scenarios. In his post, he shows you how to start up a RAPIDS container with access to your system’s GPU. There are two cool things to unpack there. First, RAPIDS, as Todd points out, is a suite of open-source software libraries and APIs, giving you an end-to-end data science and analytics pipeline all in one container image. Spin that container up with access to your GPU, and you’re off to the races performing GPU-accelerated data science without having to struggle with downloading and setting up software…just grab the container and go. Of course, to do data science, you need data, so he also dives into how to spin up a SQL Server container and access that data from the RAPIDS application suite. Super awesome stuff, Todd!
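
As a rough sketch of what starting that RAPIDS container can look like, assuming Docker with the NVIDIA container toolkit installed and the rapidsai/rapidsai image from Docker Hub (the tag and port are assumptions; check the RAPIDS getting-started docs for the current image):

docker run --gpus all --rm -it \
    -p 8888:8888 \
    rapidsai/rapidsai:latest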

Kendra Little – Create a Disposable SQL Server Database Container in an Azure DevOps Pipeline With Spawn

Kendra shows us how to create a disposable SQL Server database container in an Azure Pipeline using Spawn. As discussed in the post, Spawn is a tool that addresses two key challenges when working with data in development processes: testing against realistic datasets and resetting that data after changes. Spawn brings the power of containers to help instantiate datasets rather than just applications. Combine that with Azure DevOps Pipelines, and you have a super slick way of building automated workflows and testing code changes against realistic datasets. Outstanding post, Kendra!

PS: I saw Spawn a few years back at a SQLSaturday…watch this space. I think they’re building something special here!

Mark Wilkinson – Baselining SQL Server with the TIG Stack

Next up, my fellow EightKB organizer Mark Wilkinson shows us how to stand up a TIG (Telegraf, InfluxDB, Grafana) monitoring stack using Docker Compose. Grab the code here! The TIG monitoring stack enables you to collect baseline metrics for your SQL Server instances and visualize them in Grafana dashboards. This post hits home for me. I will use this code to stand up performance monitoring environments for testing and spot troubleshooting. As a consultant, this would have been SO super valuable since I can quickly spin up the whole monitoring stack and collect metrics on instances…I’ve had many clients over the years that have no monitoring. This project would have been a HUGE time saver for me. Banging post, Mark!
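
And once you have Mark’s docker-compose.yml in hand, the Compose workflow is just a couple of commands (on older Docker installs the command is docker-compose rather than docker compose):

docker compose up -d    # pull the images and start the whole stack in the background
docker compose down     # tear the stack down when you're finished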

Barney Lawrence – Containers for Business and Pleasure

In Barney’s post, he mixes business and pleasure…showing us how to deploy SQL Server in a container on Docker backed by Windows Subsystem for Linux (WSL) and how to deploy a Minecraft Bedrock server using Docker Compose. There are a couple of cool things to point out here: first, how data is managed in both scenarios using Docker Volumes, and second, leveraging Docker Compose to manage the configuration and state of the Minecraft server, defining environment variables and volumes in code. Well done, Barney!

Cathrine Wilhelmsen – Developing in Containers using Visual Studio Code

In this post, Cathrine walks us through setting up a development environment using containers in Visual Studio Code. She highlights some core reasons for using containers. First, containers give organizations the ability to control which libraries and tools developers are using. Second, containers enable organizations to quickly onboard new developers and consultants like herself, getting them the proper tooling so they can be productive as quickly as possible. Thank you for sharing this super valuable content, Cathrine!

Deborah Melkin – What have I been doing with Containers

Deborah highlights the various use cases for running SQL Server in containers, things like quick deployment and code and upgrade testing. She also introduces the term ‘virtual instance’, which is a fantastic way to describe to DBAs what you get when you run SQL Server in a container. Deborah also links to some other posts where she describes her experiences getting started with Docker and setting up ports for SQL Server in containers. Awesome post(s) :) thanks for this, Deborah!

And Last But Not Least, the Rule Breakers!!!

They didn’t comment on my invitation post, but I found them via that pesky #tsql2sday hashtag on Twitter.

Shane O’Neill – What have you been up to with containers?

In this post, Shane shares with us what happens when you run a Kubernetes cluster on your laptop…things get hot…fast. (Y’all remember when my laptop caught fire 🔥???) But anyway, Shane isolates the problem, cleans things up, and gets back to running SQL Server in containers in Docker on his laptop to keep things cool 😎. Thanks for sharing this, Shane!

KUBERNETES…drink!

Rob Sewell – TSql2sday video – Azure Arc Enabled Data Services in AKS Cluster

The Beard brings it all together…in his “post” (which is a YouTube video because Rob is incredible and also a rule breaker), Rob shows you Azure Arc enabled Data Services (something near and dear to my heart)…he deploys an Azure Kubernetes Service cluster, an Azure Arc enabled Data Services deployment, a couple of SQL Server Managed Instances, and a complete monitoring and logging stack using Grafana and Kibana. And this, my friend, is the magic of containers and Kubernetes…Rob does all of this in code in a repeatable fashion in just about 30 minutes.
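
If you want to follow along with the first step of that workflow, creating the AKS cluster itself is a couple of Azure CLI calls, something like the sketch below (the resource group, cluster name, and region are placeholders; the Arc-enabled Data Services deployment on top of the cluster has more steps, so watch Rob’s video for those):

az group create --name arc-demo-rg --location eastus
az aks create --resource-group arc-demo-rg --name arc-demo-aks --node-count 2 --generate-ssh-keys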

Summary

Summing this all up, there are a couple of primary themes here: speed and consistency. Containers enable you to develop, deploy, and maintain applications quickly and consistently in code. And as we discussed in the Invitation post, containers are the foundation for the next generation of the Microsoft Data Platform: Azure Arc-enabled Data Services! Thank you all again for your fantastic posts!

T-SQL Tuesday #140: What have you been up to with containers?

In recent years, containers have come into the data platform world, exposing new technologies to data professionals. Microsoft brought SQL Server to Linux, and shortly after that, SQL Server made its way into containers. SQL Server in containers has become the foundation for things like Big Data Clusters and Azure Arc-enabled Data Services.

My invitation to you for this month’s #tsql2sday is…

I want to invite you to share your experiences using containers and not just SQL Server in containers…

  • What are the cool things you’ve done with containers in your environment, test lab, or even presentation demos?
  • Are you using containers in production? If so, what are the tips or tricks you can share to help others?

If you haven’t tried containers yet…here’s a video showing you how to do the following…

  • Deploy a SQL Server in just a few minutes!
  • Connect to your container-based SQL Server.
  • Upgrade a container-based SQL Server to a new version.

So, if you haven’t used containers before, go ahead and try out the demos from this video, which you can get here, and write about your experience!
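
If you want a head start on the upgrade demo, the pattern is: keep the databases on a Docker volume, remove the old container, and start a new container from a newer image pointed at the same volume. A minimal sketch (the image tags and password are placeholders, use the versions you’re actually testing):

docker run --name sql1 -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=S0methingS@Str0ng!' \
    -v sqldata:/var/opt/mssql -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-CU10-ubuntu-18.04

docker stop sql1 && docker rm sql1

docker run --name sql1 -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=S0methingS@Str0ng!' \
    -v sqldata:/var/opt/mssql -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-CU12-ubuntu-18.04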

*** The Rules ***

I’d love to see some new contributors to #tsql2sday – if you’re not familiar with how this works, here are the rules in a nutshell:

  • Your post must be published on Tuesday, July 13th 2021 (in any timezone).
  • Include the T-SQL Tuesday Logo and make it link to this invitation post.
  • Please add a comment to this post with a link to your own so I know where to find it.
  • Tweet about your post using the #tsql2sday hashtag.

A New Road Ahead…

Where I’ve Been

Since January 1, 2012, I’ve been the principal consultant at Centino Systems. Jokingly, I refer to myself as The Centino of Systems. I learned a lot of lessons running my own business, such as how to be a consultant and how to scale the business even as the only employee/consultant. There have been ups and downs, successes and failures, and I couldn’t be happier with how things went. In the first phase of Centino Systems, I learned how to build a consulting practice. Then, in the second phase, I learned how to scale Centino Systems by focusing on training. I blogged a bunch, produced 21 courses at Pluralsight, co-authored three books (with one more on the way), and delivered numerous corporate and conference sessions and workshops focusing on Linux, SQL Server, and of course Kubernetes.

What’s Next

Over the past few years I kept my eye on Pure Storage. Pure builds rocket-fast storage systems that I’ve used as the backbone of many SQL Server systems that I’ve built and supported in my consulting practice. Using Pure solutions in my consulting practice exposed me to the technology and the people, both of which are incredible.

One thing led to another, and starting in July, I am joining Pure Storage as a Principal Field Solution Architect focusing on SQL Server and emerging technologies like Azure Arc-enabled Data Services and deploying SQL Server on Kubernetes. I’m going to get to work on bigger and harder challenges, helping customers and Pure Engineering build solutions to solve those challenges. I will remain active in the SQL and PowerShell communities, talking about the technologies I enjoy working with. Further, I will continue to produce courses at Pluralsight, again focusing on Azure, SQL, PowerShell, and of course Kubernetes.

The People

In addition to the technology and challenges ahead, the next reason I want to join Pure is to work with a collection of very talented people. I get to work with some of the best people in our industry. I’ll be fortunate to work with SQL community leaders Argenis Fernandez, Chris Adkin, Marsha Pierce, and Melody Zacharias, among many others…I have been able to call you friends…and now co-workers. I’m really looking forward to the next phase of my career.

Testing for Specific Versions of TLS Protocols Using curl

Ever need to restrict your web server to a specific version of the TLS protocol and want a quick way to confirm the configuration? Let’s check out how to use curl to do just that.

The code here uses curl with the parameters --tlsv1.1 --tls-max 1.1, which together pin the TLS protocol version to 1.1 (--tlsv1.1 sets the minimum version and --tls-max 1.1 caps the maximum). The --verbose parameter lets you see the TLS handshake, with the output sent to standard out.

The web server here has a policy that allows only TLS version 1.2+. So in the output, when forcing curl to use TLS version 1.1, SSL_connect fails since the web server only permits 1.2+.

curl https://www.notarealurl.com --verbose  --tlsv1.1 --tls-max 1.1
*   Trying 52.173.202.109...
* TCP_NODELAY set
* Connected to www.notarealurl.com (1.2.3.4) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.1 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to www.notarealurl.com:443 
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to www.notarealurl.com:443 

Now, let’s tell curl to use TLS protocol version 1.2 with the parameters --tlsv1.2 --tls-max 1.2 and see if we can successfully access the web server. The output below shows a successful TLS 1.2 handshake and some output from the web server.

curl https://www.notarealurl.com --verbose  --tlsv1.2 --tls-max 1.2
*   Trying 52.173.202.109...
* TCP_NODELAY set
* Connected to www.notarealurl.com (1.2.3.4) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: C=US; ST=ILLINOIS; L=CHICAGO; O=IT; CN=www.notarealurl.com
*  start date: May 14 00:00:00 2020 GMT
*  expire date: Jul  6 12:00:00 2022 GMT
*  subjectAltName: host "www.notarealurl.com" matched cert's "www.notarealurl.com"
*  issuer: C=US; O=DigiCert Inc; CN=DigiCert SHA2 Secure Server CA
*  SSL certificate verify ok.
> GET / HTTP/1.1
> Host: www.notarealurl.com
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 301 Moved Permanently
< Content-Type: text/html; charset=UTF-8
< Location: https://notarealurl.com/
< Server: Microsoft-IIS/10.0
< Set-Cookie: ApplicationGatewayAffinity=ca74a2f7c1dea41a8e5010ecf6deda4f944f5539661e08399d8fae0062592401;Path=/;Domain=www.notarealurl.com
< Set-Cookie: ApplicationGatewayAffinityCORS=ca74a2f7c1dea41a8e5010ecf6deda4f944f5539661e08399d8fae0062592401;Path=/;Domain=www.notarealurl.com;SameSite=None;Secure
< Date: Thu, 20 May 2021 13:48:14 GMT
< Content-Length: 148
< 
<head><title>Document Moved</title></head>
* Connection #0 to host www.notarealurl.com left intact
<body><h1>Object Moved</h1>This document may be found <a HREF="https://notarealurl.com/">here</a></body>* 
Closing connection 0

Updated Pluralsight Course – Managing the Kubernetes API Server and Pods

My updated course “Managing the Kubernetes API Server and Pods” is now available on Pluralsight here! If you want to learn about the course, check out the trailer here, or if you want to dive right in, check it out here!

This course targets IT professionals who design and maintain Kubernetes and container-based solutions. The course can be used both by the IT pro learning new skills and by the system administrator or developer preparing to use Kubernetes, on premises and in the cloud.

Let’s take your Kubernetes administration and configuration skills to the next level and get you started now!

Key updates to the course include:

  • Using kubectl command options to create workloads and build YAML manifest templates fast, such as --dry-run (see the sketch after this list)

  • Working with Static Pods

  • Working with Init Containers

  • Managing Pod health with Container Probes
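
Here’s a quick sketch of the --dry-run trick mentioned above: it scaffolds a Deployment manifest without creating anything in the cluster (the image is just a sample app):

kubectl create deployment hello-world \
    --image=gcr.io/google-samples/hello-app:1.0 \
    --dry-run=client -o yaml > deployment.yaml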

The modules of the course are:

  • Using the Kubernetes API – In this module we dive into the Kubernetes API and the API server. We take a closer look at the API itself, API objects, and the internals of the API server. Next up, we look at working with Kubernetes objects: the types of objects available, how to use them, how we define them, Kubernetes API groups, and how the API server itself is versioned. Then we wrap up the module with a deep dive into the anatomy of an API request, where we look closely at what happens when we submit a request to the API server.

  • Managing Objects with Labels, Annotations, and Namespaces – In this module, we discuss organizing objects in Kubernetes, and the techniques to organize objects such as namespaces, labels, and annotations. Once we have those principles behind us, we learn how Kubernetes uses labels to manage critical system functions such as managing Services, controlling Deployments, and workload scheduling in our cluster.

  • Running and Managing Pods – Dig into the fundamental workload element and learn how to run and manage Pods. In this module, we start the conversation off by understanding Pods and why we need this abstraction of a Pod around our container-based application. Then we look at the interoperation between controllers like Deployments and ReplicaSets and the Pods themselves, and learn why we need such a construct. We look at multi-container Pods, where multiple containers live inside a single Pod, and why we would use something like that in our container-based application deployments. And then we wrap up the conversation with managing Pod health with probes, where we can give Kubernetes a little more information about the health of our application so that it can make good decisions about how to react in certain scenarios with regard to the applications we’re deploying in Pods.

Check out the course at Pluralsight!


Updated Pluralsight Course – Kubernetes Installation and Configuration Fundamentals

My updated course “Kubernetes Installation and Configuration Fundamentals” is now available on Pluralsight here! If you want to learn about the course, check out the trailer here, or if you want to dive right in, check it out here!

This course targets IT professionals who design and maintain Kubernetes and container-based solutions. The course can be used both by the IT pro learning new skills and by the system administrator or developer preparing to use Kubernetes, on premises and in the cloud.

Let’s take your Kubernetes administration and configuration skills to the next level and get you started now!

Key updates to the course include:

  • Using containerd as a container runtime

  • Building clusters with kubeadm and Cluster Configuration Files (see the sketch after this list)

  • Using kubectl command options to create workloads and build YAML manifest templates fast, such as --dry-run
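
As a quick sketch of the kubeadm Cluster Configuration File workflow mentioned above: generate the default configuration, edit it to suit your cluster, then feed it to kubeadm init (this assumes kubeadm is already installed on the node):

kubeadm config print init-defaults | tee ClusterConfiguration.yaml
sudo kubeadm init --config=ClusterConfiguration.yaml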

The modules of the course are:

  • Exploring the Kubernetes Architecture – In this module we introduce Kubernetes and deep dive into each component and its responsibility in a cluster. We also look at higher-level abstractions such as Services, Controllers, and Deployments and how they can be used to ensure the desired state of an application deployed in Kubernetes.

  • Installing and Configuring Kubernetes – In this module, we learn several ways to install a Kubernetes cluster. We start off simple with an installation using kubeadm with containerd. Then we head off to the cloud, looking at the current state of the managed Kubernetes services and the installation methods for each of the major cloud providers (Google, AWS, and Azure), and perform a cluster deployment using Azure Kubernetes Service (AKS).

  • Working with Your Kubernetes Cluster – In this module, we learn how to interact with our cluster. We learn how to use and configure the primary tool for communicating with Kubernetes clusters, kubectl. We then learn how to perform a simple application Deployment, both imperatively and declaratively, in our Kubernetes cluster. And we also learn how to use kubectl to generate YAML manifests for cluster resources quickly and correctly.

Check out the course at Pluralsight!


Installing and Configuring containerd as a Kubernetes Container Runtime

In this post, I’m going to show you how to install containerd as the container runtime in a Kubernetes cluster. I will also cover setting the cgroup driver for containerd to systemd, which is the preferred cgroup driver for Kubernetes. In Kubernetes version 1.20, Docker was deprecated as a container runtime, and it will be removed after 1.22. containerd is a CRI-compatible container runtime and is one of the supported options you have in this post-Docker Kubernetes world. I do want to call out that you can still run container images built with Docker in containerd.

Configure required modules

First, load two modules in the currently running environment and configure them to load on boot.

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Configure the required sysctl parameters so they persist across system reboots.

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply the sysctl parameters to the current running environment without a reboot.

sudo sysctl --system

Install containerd packages

sudo apt-get update 
sudo apt-get install -y containerd

Create a containerd configuration file

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

Set the cgroup driver for runc to systemd

Set the cgroup driver for runc to systemd, which is required for the kubelet.
For more information on this config file see the containerd configuration docs here and also here.

At the end of this section in /etc/containerd/config.toml

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        ...

Around line 86, add these two lines; indentation matters.

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true

Restart containerd with the new configuration

sudo systemctl restart containerd
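
To confirm the service came back up and the new setting is in place, you can check the service status and grep for the cgroup setting:

sudo systemctl status containerd --no-pager
sudo grep SystemdCgroup /etc/containerd/config.toml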

And that’s it, from here you can install and configure Kubernetes on top of this container runtime. In an upcoming post, I will bootstrap a cluster using containerd as the container runtime.

Published Azure Arc-Enabled Data Services Revealed

I’m super proud to announce that Ben E. Weissman and I have published Azure Arc-Enabled Data Services Revealed available now at Apress and your favorite online book sellers! Buy the book now…or keep reading below if you need to be more convinced :)

A couple of notes about the book: first, I really enjoyed getting to work with this bleeding-edge tech and collaborate with the SQL Server Engineering Team at Microsoft on it. I want to call out the support from our tech reviewer and Program Manager for Azure Arc-enabled Data Services, Travis Wright. Thanks for your help and support. Be sure to read the foreword from Travis…it tells the story of why and how: from getting SQL Server on Linux, into containers, into Kubernetes, Big Data Clusters, and now Arc-enabled Data Services. Awesome stuff. I also want to call out my co-author and friend: Ben, you are an awesome writer, thank you for including me in this adventure!

About the Book

Get introduced to Azure Arc-enabled data services and the powerful capabilities they provide to deploy and manage local, on-premises, and hybrid cloud data resources using the same centralized management and tooling you get from the Azure cloud. This book shows how you can deploy and manage databases running on SQL Server and Postgres in your corporate data center as if they were part of the Azure platform. You will learn how to benefit from the centralized management that Azure provides, the automated rollout of patches and updates, and more.

This book is the perfect choice for anyone looking for a hybrid or multi-vendor cloud strategy for their data estate. The authors walk you through the possibilities and requirements to get services such as Azure SQL Managed Instance and PostgreSQL Hyperscale deployed outside of Azure, so the services are accessible to companies that cannot move to the cloud or do not want to use the Microsoft cloud exclusively. The technology described in this book will be especially useful to those required to keep sensitive services, such as medical databases, away from the public cloud, but who still want to benefit from the Azure cloud and the centralized management and tooling that it supports.

What You Will Learn

  • The core concepts of Kubernetes
  • The fundamentals and architecture of Azure Arc-enabled data services
  • Build a multi-cloud strategy based on Azure data services
  • Deploy Azure Arc-enabled data services on premises or in any cloud
  • Deploy Azure Arc-enabled SQL Managed Instance on premises or in any cloud
  • Deploy Azure Arc-enabled PostgreSQL Hyperscale on premises or in any cloud
  • Manage Azure Arc-enabled data services running outside of Azure
  • Monitor Azure Arc-enabled data services running outside of Azure through the Azure Portal

Who This Book Is For

Database administrators and architects who want to manage on-premises or hybrid cloud data resources from the Microsoft Azure cloud. Especially for those wishing to take advantage of cloud technologies while keeping sensitive data on premises and under physical control.


Getting SQL Agent Jobs and Job Steps Configuration

Recently I needed to take a look at all of the SQL Server Agent Jobs and their Job Steps for a customer. Specifically, I needed to review all of the Jobs and Job Steps for Ola Hallengren’s Maintenance Solution and look at the Backup, Index Maintenance, and Integrity Jobs to ensure they’re configured properly and also account for any customizations and one-offs in the Job definitions. This customer has dozens of SQL Server instances, and well, I wasn’t about to click through everything in SSMS…and writing this in TSQL would have been a good candidate for a Ph.D. dissertation. So let’s check out how I solved this problem using dbatools.

Enter dbatools…

In my first attempt at doing this I tried getting all the Jobs using Get-DbaAgentJob and exporting the Jobs to TSQL using Export-DbaScript. This did give me the code for all of the Jobs I was interested in. But that left me trying to decipher SQL Agent Job and Schedule syntax and encodings and I got all twisted up in the TSQL-ness of that. I needed this to be more readable.

So I thought…there has to be a better way…there is! So, I wrote the following. This code gets each SQL Agent Job and prints the Job’s Name, NextRunDate, whether it has a Schedule, and Operator information; then for each JobStep it prints the Step’s Name, Subsystem, and finally the Command. Using this I can quickly get a feel for the configurations across the environment.

Get a listing of all SQL Instances

    $Servers = Get-DbaRegisteredServer

Get all of the SQL Agent Jobs across all SQL Instances

    $jobs = Get-DbaAgentJob -SqlInstance $Servers.Name

Filter that list down to the SQL Agent Jobs that are in the Database Maintenance category

    $MaintenanceJobs = $jobs | Where-Object { $_.Category -eq 'Database Maintenance' } 

For each SQL Agent Job, print the Job’s Name, NextRunDate, whether it has a Schedule, and Operator information; then for each JobStep print its Name, Agent Subsystem, and finally the Command.

    $JobsAndSteps = foreach ($MaintenanceJob in $MaintenanceJobs){
        foreach ($JobStep in $MaintenanceJob.JobSteps) {
            $obj = [PSCustomObject]@{
                SqlInstance = $MaintenanceJob.SqlInstance
                Name = $MaintenanceJob.Name
                NextRunDate = $MaintenanceJob.NextRunDate
                HasSchedule = $MaintenanceJob.HasSchedule
                OperatorToEmail = $MaintenanceJob.OperatorToEmail
                JobStepName = $JobStep.Name
                SubSystem = $JobStep.SubSystem
                Command = $JobStep.Command
                }
            $obj  
        }
    }

Here’s some sample output using Format-Table. From there I can quickly scan and analyze all the Jobs on all of the Instances in an environment.

$JobsAndSteps | Format-Table

SqlInstance     Name                                    NextRunDate           HasSchedule OperatorToEmail JobStepName                                           SubSystem Command
-----------     ----                                    -----------           ----------- --------------- -----------                                           --------- -------
PRODSQL1        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Backup         CmdExec sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'T:\Backup', @Ba...
PRODSQL1        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Sync           CmdExec ROBOCOPY SOME STUFF
PRODSQL1        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Cleanup     PowerShell RUN SOME POWERSHELL TO DO COOL STUFF
PRODSQL2        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Backup         CmdExec sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'T:\Backup', @Ba...
PRODSQL2        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Sync           CmdExec ROBOCOPY SOME STUFF
PRODSQL2        DatabaseBackup - USER_DATABASES - FULL  2/3/2021 1:00:00 AM          True DbaTeam         DatabaseBackup - USER_DATABASES - FULL - Cleanup     PowerShell RUN SOME POWERSHELL TO DO COOL STUFF

You can also take that output, convert it to CSV, and then pull it into Excel for analysis:

$JobsAndSteps | ConvertTo-Csv -NoTypeInformation | Out-File JobSteps.csv

Kubernetes Precon at DPS

Pre-conference Workshop at Data Platform Virtual Summit 2020



I’m proud to announce that I will be presenting a pre-conference workshop at Data Platform Virtual Summit 2020, split into two four-hour sessions on 30 November and 1 December! This one won’t let you down!

Here are the start and stop times in various time zones:

Time Zone    Start           Stop
EST          5:00 PM         9:00 PM
CET          11:00 PM        3:00 AM (+1)
IST          3:30 AM (+1)    7:30 AM (+1)
AEDT         9:00 AM (+1)    1:00 PM (+1)

The workshop is “Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment”

Abstract: Modern application deployment needs to be fast and consistent to keep up with business objectives, and Kubernetes is quickly becoming the standard for deploying container-based applications fast. In this day-long session, we will start with container fundamentals and then get into Kubernetes with an architectural overview of how it manages application state. Then you will learn how to build a cluster. With our cluster up and running, you will learn how to interact with it and perform common administrative tasks, then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and keep everything up and running.

PS: This class will be recorded, and the registered attendee will get 12 months streaming access to the recorded class. The recordings will be available within 30 days of class completion.

Workshop Objectives

  • Introduce Kubernetes Cluster Components
  • Introduce Kubernetes API Objects and Controllers
  • Installing Kubernetes
  • Interacting with your cluster
  • Storing persistent data in Kubernetes
  • Deploying Applications in Kubernetes
  • Deploying SQL Server in Kubernetes
  • High Availability scenarios in Kubernetes

Click here to register now!

