Tag Archives: Kubernetes

Installing and Configuring containerd as a Kubernetes Container Runtime

In this post, I’m going to show you how to install containerd as the container runtime in a Kubernetes cluster. I will also cover setting the cgroup driver for containerd to systemd, which is the preferred cgroup driver for Kubernetes. In Kubernetes 1.20, Docker was deprecated as a container runtime and will be removed after 1.22. containerd is a CRI-compatible container runtime and is one of the supported options you have in this post-Docker Kubernetes world. I do want to call out that container images built with Docker will continue to work in containerd.

Configure required modules

First, load two modules in the current running environment and configure them to load on boot.

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Configure required sysctl to persist across system reboots

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply the sysctl parameters to the current running environment without a reboot

sudo sysctl --system
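Before installing containerd, it can be worth verifying that the modules actually loaded and the sysctl values took effect; all three settings should report 1. This is an optional sanity check, not a required step:

```shell
# Confirm the kernel modules are loaded
lsmod | grep -E 'overlay|br_netfilter'

# Confirm the sysctl values are applied (each should print "= 1")
sysctl net.bridge.bridge-nf-call-iptables \
       net.ipv4.ip_forward \
       net.bridge.bridge-nf-call-ip6tables
```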

Install containerd packages

sudo apt-get update 
sudo apt-get install -y containerd

Create a containerd configuration file

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

Set the cgroup driver for runc to systemd

Set the cgroup driver for runc to systemd which is required for the kubelet.
For more information on this config file, see the containerd configuration docs.

Locate this section in /etc/containerd/config.toml

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        ...

Around line 86, add these two lines; indentation matters.

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
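If you prefer scripting the change, a sed one-liner like the following can insert the options block. This is a hypothetical alternative to the manual edit above; it assumes GNU sed and that the runc options section is not already present in your config (if it is, just set its SystemdCgroup value to true instead):

```shell
# Append the runc options block directly under the runc runtime section
sudo sed -i 's/\(\[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc\]\)/\1\n          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]\n            SystemdCgroup = true/' /etc/containerd/config.toml

# Verify the edit landed where expected
grep -A 1 'runtimes.runc.options' /etc/containerd/config.toml
```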

Restart containerd with the new configuration

sudo systemctl restart containerd

And that’s it, from here you can install and configure Kubernetes on top of this container runtime. In an upcoming post, I will bootstrap a cluster using containerd as the container runtime.

Published Azure Arc-Enabled Data Services Revealed

I’m super proud to announce that Ben E. Weissman and I have published Azure Arc-Enabled Data Services Revealed available now at Apress and your favorite online book sellers! Buy the book now…or keep reading below if you need to be more convinced :)

A couple of notes about the book. First, I really enjoyed getting to work with this bleeding-edge tech and collaborating with the SQL Server Engineering Team at Microsoft on it. I want to call out the support from our tech reviewer and the Program Manager for Azure Arc-enabled Data Services, Travis Wright. Thanks for your help and support. Be sure to read the foreword from Travis…it tells the story of why and how: from getting SQL Server onto Linux, into containers, into Kubernetes, to Big Data Clusters, and now Arc-enabled Data Services. Awesome stuff. I also want to call out my co-author and friend, Ben: you are an awesome writer, thank you for including me in this adventure!

About the Book

Get introduced to Azure Arc-enabled data services and the powerful capabilities they provide to deploy and manage local, on-premises, and hybrid cloud data resources using the same centralized management and tooling you get from the Azure cloud. This book shows how you can deploy and manage databases running on SQL Server and Postgres in your corporate data center as if they were part of the Azure platform. You will learn how to benefit from the centralized management that Azure provides, the automated rollout of patches and updates, and more.

This book is the perfect choice for anyone looking for a hybrid or multi-vendor cloud strategy for their data estate. The authors walk you through the possibilities and requirements to get services such as Azure SQL Managed Instance and PostgreSQL HyperScale deployed outside of Azure, so the services are accessible to companies that cannot move to the cloud or do not want to use the Microsoft cloud exclusively. The technology described in this book will be especially useful to those required to keep sensitive services, such as medical databases, away from the public cloud, but who still want to benefit from the Azure cloud and the centralized management and tooling that it supports.

What You Will Learn

  • Understand the core concepts of Kubernetes
  • Understand the fundamentals and architecture of Azure Arc-enabled data services
  • Build a multi-cloud strategy based on Azure data services
  • Deploy Azure Arc-enabled data services on premises or in any cloud
  • Deploy Azure Arc-enabled SQL Managed Instance on premises or in any cloud
  • Deploy Azure Arc-enabled PostgreSQL HyperScale on premises or in any cloud
  • Manage Azure-enabled data services running outside of Azure
  • Monitor Azure-enabled data services running outside of Azure through the Azure Portal

Who This Book Is For

Database administrators and architects who want to manage on-premises or hybrid cloud data resources from the Microsoft Azure cloud. Especially for those wishing to take advantage of cloud technologies while keeping sensitive data on premises and under physical control.


Kubernetes Precon at DPS

Pre-conference Workshop at Data Platform Virtual Summit 2020



I’m proud to announce that I will be presenting a pre-conference workshop at Data Platform Virtual Summit 2020, split into two four-hour sessions on 30 November and 1 December! This one won’t let you down!

Here are the start and stop times in various time zones:

Time Zone   Start          Stop
EST         5.00 PM        9.00 PM
CET         11.00 PM       3.00 AM (+1)
IST         3.30 AM (+1)   7.30 AM (+1)
AEDT        9.00 AM (+1)   1.00 PM (+1)

The workshop is “Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment”

Abstract: Modern application deployment needs to be fast and consistent to keep up with business objectives, and Kubernetes is quickly becoming the standard for deploying container-based applications fast. In this day-long session, we will start with container fundamentals and then get into Kubernetes with an architectural overview of how it manages application state. Then you will learn how to build a cluster. With our cluster up and running, you will learn how to interact with the cluster and perform common administrative tasks, then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and keep everything up and running.

PS: This class will be recorded, and the registered attendee will get 12 months streaming access to the recorded class. The recordings will be available within 30 days of class completion.

Workshop Objectives

  • Introducing Kubernetes Cluster Components
  • Introducing Kubernetes API Objects and Controllers
  • Installing Kubernetes
  • Interacting with your cluster
  • Storing persistent data in Kubernetes
  • Deploying Applications in Kubernetes
  • Deploying SQL Server in Kubernetes
  • Exploring High Availability scenarios in Kubernetes

Click here to register now!



Persistent Server Name Metadata When Deploying SQL Server in Kubernetes

In this post, we will explore how a Pod name is generated, Pod Name lifecycle, how it’s used inside a Pod to set the system hostname, and how the system hostname is used by SQL Server to set its server name metadata.

Pod Naming in Deployments

When deploying SQL Server in Kubernetes using a Deployment, the Pod created by the Deployment Controller will have a name with a structure of <DeploymentName>-<PodTemplateHash>-<PodID> for example, mssql-deployment-8cbdc8ddd-9n7jh.

Let’s break that example Pod name down a bit more:

  • mssql-deployment – this is the name of the Deployment, specified at metadata.name. This is stable for the lifecycle of the Deployment.
  • 8cbdc8ddd – this is a hash of the Pod Template Spec in the Deployment object template.spec. Changing the Pod Template Spec changes this value and also triggers a rollout of the new Pod configuration.
  • 9n7jh – this is a random string assigned to help identify the Pod uniquely. This changes with the lifecycle of the Pod itself.
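As an illustration of that structure, the three parts of the example Pod name above can be pulled back out with plain shell parameter expansion (the name and its parts are just the example values from this post):

```shell
# Split an example Pod name into Deployment name, template hash, and Pod ID
POD=mssql-deployment-8cbdc8ddd-9n7jh

SUFFIX=${POD##*-}       # everything after the last dash: the random Pod ID
REST=${POD%-*}          # drop the random suffix: mssql-deployment-8cbdc8ddd
HASH=${REST##*-}        # the Pod Template Spec hash
DEPLOYMENT=${REST%-*}   # what remains is the Deployment name

echo "deployment=$DEPLOYMENT hash=$HASH podid=$SUFFIX"
```

This prints deployment=mssql-deployment hash=8cbdc8ddd podid=9n7jh, the three components described in the list above.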

In a default Deployment configuration, the Pod’s name is used as the system hostname inside the Pod. In a Deployment, when a Pod is deleted for whatever reason (Pod/Node failure, an administrative delete, or an update to the Pod Template Spec triggering a rollout), the new Pod created will have a new Pod name and a matching hostname inside the Pod. It is a new Pod after all. :) This can lead to an interesting scenario inside SQL Server since the Pod name can change. Let’s dig deeper…

Server name metadata inside SQL Server running in a Pod

To ensure SQL Server’s data has a lifecycle independent of the Pod’s lifecycle, in a basic configuration, a PersistentVolume is used for the instance directory /var/opt/mssql. The first time SQL Server starts up, it copies a set of system databases into the directory /var/opt/mssql. During the initial startup, the current hostname of the Pod is used to set SQL Server system metadata for the server name. Specifically @@SERVERNAME, SERVERPROPERTY('ServerName') and the Name column from sys.servers.

Listing 1 shows an example Deployment for SQL Server. In this configuration, the hostname inside the Pod will match the current Pod name. But what happens when the Pod name changes because a Pod is deleted and a new Pod is created with a new name? Let’s walk through that together in the next section.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:  
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
        app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      securityContext:
        fsGroup: 10001
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2019-CU8-ubuntu-18.04'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD 
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-nfs-instance

Listing 1 – Example SQL Server Manifest using a Deployment Controller

Examining Server Name Metadata When Deploying SQL Server in a Deployment

Initial Deployment

When the Deployment is created, a Pod is created. In the output below, you can see the name of the Pod is mssql-deployment-bb44b7bf7-nzkmt, and the hostname set inside the Pod is the same, mssql-deployment-bb44b7bf7-nzkmt.

kubectl get pods 
NAME                               READY   STATUS    RESTARTS   AGE
mssql-deployment-bb44b7bf7-nzkmt   1/1     Running   0          7s

kubectl exec -it mssql-deployment-bb44b7bf7-nzkmt -- /bin/hostname
mssql-deployment-bb44b7bf7-nzkmt

Check Server Name Metadata

Since this is the initial deployment of this SQL Server instance, system databases are copied into /var/opt/mssql, and the server name metadata is set. Let’s query SQL Server for @@SERVERNAME, SERVERPROPERTY('ServerName') and the Name column from sys.servers. In the output below you can see all three values match.

sqlcmd -S $SERVICEIP,$PORT -U sa -Q "SELECT @@SERVERNAME AS SERVERNAME, SERVERPROPERTY('ServerName') AS SERVERPROPERTY, name FROM sys.servers" -P $PASSWORD -W
SERVERNAME                          SERVERPROPERTY                   name
----------                          --------------                   ----
mssql-deployment-bb44b7bf7-nzkmt    mssql-deployment-bb44b7bf7-nzkmt mssql-deployment-bb44b7bf7-nzkmt

Delete the Currently Running Pod

Next, let’s delete a Pod and see what happens to the Pod’s name, the Pod’s hostname, and the SQL Server server name metadata.

kubectl delete pod mssql-deployment-bb44b7bf7-nzkmt
pod "mssql-deployment-bb44b7bf7-nzkmt" deleted

I’ve deleted the Pod, and since it is controlled by a Deployment controller, a new Pod is immediately created in its place. This Pod gets a new name. The existing databases and configuration are persisted in the attached PersistentVolume at /var/opt/mssql, and those databases are all brought online. In the output below, you can see the new Pod name and hostname are both mssql-deployment-bb44b7bf7-6gm6v.

kubectl get pods 
NAME                               READY   STATUS    RESTARTS   AGE
mssql-deployment-bb44b7bf7-6gm6v   1/1     Running   0          20s

kubectl exec -it mssql-deployment-bb44b7bf7-6gm6v -- hostname
mssql-deployment-bb44b7bf7-6gm6v

What’s in a name?

Now let’s query the server name metadata again. In the output below, you can see there are some inconsistencies. We saw above that the Pod has a new name and hostname (mssql-deployment-bb44b7bf7-6gm6v), but this change doesn’t update all of the server name metadata inside our instance. The only value updated is SERVERPROPERTY('ServerName'); the other values still hold the initial Pod name mssql-deployment-bb44b7bf7-nzkmt.

sqlcmd -S $SERVICEIP,$PORT -U sa -Q "SELECT @@SERVERNAME AS SERVERNAME, SERVERPROPERTY('ServerName') AS SERVERPROPERTY, name FROM sys.servers" -P $PASSWORD -W
SERVERNAME                          SERVERPROPERTY                   name
----------                          --------------                   ----
mssql-deployment-bb44b7bf7-nzkmt    mssql-deployment-bb44b7bf7-6gm6v mssql-deployment-bb44b7bf7-nzkmt

Setting a Pod’s Hostname

So what do we do about this? Instability in the server name metadata can break replication, confuse server monitoring systems, and even break code. To give the Pod’s hostname a persistent value, you need to set the spec.template.spec.hostname field in the Deployment. This sets the system hostname inside the Pod to that value.

In the code below, you can see I’ve set spec.template.spec.hostname to sql01. On the initial deployment of a SQL Server instance, this is the value stored in the instance’s server name metadata.

If you already have a SQL Server instance up and running in Kubernetes and did not set the spec.template.spec.hostname value, the server name metadata will need to be updated using the standard SQL Server method of sp_dropserver and sp_addserver.
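A sketch of that repair might look like the following. This is hypothetical: it assumes the same connection variables used in the queries above, the old name from this post’s example, and sql01 as the desired name. A restart of the SQL Server Pod is needed afterwards for @@SERVERNAME to reflect the change.

```shell
# Rename the server metadata on an already-running instance
sqlcmd -S $SERVICEIP,$PORT -U sa -P $PASSWORD -Q "
EXEC sp_dropserver 'mssql-deployment-bb44b7bf7-nzkmt';
EXEC sp_addserver 'sql01', 'local';
"
# Then delete the Pod (the Deployment recreates it) so the instance restarts:
# kubectl delete pod -l app=mssql
```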

But for demonstration purposes, I’m going to start over as if this is an initial deployment. And deploy the manifest in Listing 2 into my cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:  
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
        app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      securityContext:
        fsGroup: 10001
      hostname: sql01
      containers:
      - name: mssql
        image: 'mcr.microsoft.com/mssql/server:2019-CU8-ubuntu-18.04'
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD 
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-nfs-instance

Listing 2 – Example SQL Server Manifest using a Deployment Controller, setting the Pod’s hostname

In the output below, the Pod name is mssql-deployment-8cbdc8ddd-nv8j4, but inside the Pod the hostname is sql01, and now all three values of our server name metadata match. If this Pod is deleted, the new Pod gets a new name, but the hostname inside the Pod will still be sql01, and the SQL Server server name metadata will still be set to sql01.

kubectl get pods 
NAME                               READY   STATUS    RESTARTS   AGE
mssql-deployment-8cbdc8ddd-nv8j4   1/1     Running   0          43s

kubectl exec -it mssql-deployment-8cbdc8ddd-nv8j4  -- hostname
sql01

sqlcmd -S $SERVICEIP,$PORT -U sa -Q "SELECT @@SERVERNAME AS SERVERNAME, SERVERPROPERTY('ServerName') AS SERVERPROPERTY, name FROM sys.servers" -P $PASSWORD -W
SERVERNAME  SERVERPROPERTY  name
----------  --------------  ----
sql01       sql01           sql01

Setting the hostname in the Pod Template Spec gives you the ability to persist the hostname, and thus the server name metadata inside SQL Server. This is crucial for services and code that depend on a static server name. A StatefulSet is a Controller in Kubernetes that gives you stable, persistent naming independent of the lifecycle of a Pod. I will explore StatefulSets in an upcoming blog post.

Pre-Conference Workshop and Sessions at PASS Summit

I’m pleased to announce that I will be presenting at PASS Summit. This year I have a pre-conference workshop and a regular session. Let’s dive into each.

Pre-Conference Workshop: The Future of Deployment for Modern Data Platform Applications

Ben Weissman and I are teaching a pre-conference workshop called “The Future of Deployment for Modern Data Platform Applications”. In this workshop, we’re going to cover how you will be deploying data platform applications in the near future. Here’s a listing of the topics we’re going to cover.

  • Kubernetes Fundamentals – building a cluster and deploying applications
  • Deploying SQL Server in Kubernetes – diving deep into what it takes to run a stateful application in Kubernetes
  • Deploying Big Data Clusters – showcasing how you can deploy a complex stateful application in Kubernetes.
  • Azure Arc Enabled Data Services Fundamentals – learn how to run any Azure Data Service anywhere you have Kubernetes, in any cloud or on-premises.
  • Deploying Azure Arc Enabled Data Services – tons of demos and code samples highlighting how to deploy SQL Managed Instance and PostgreSQL HyperScale in any cloud or on-premises.

You will leave this session with the knowledge, scripts, and tools to get started with Kubernetes and Kubernetes-based applications.

Sign up for our workshop here: https://www.pass.org/summit/2020/Register-Now

Regular Session: Deploying and Managing SQL Server with dbatools

Well, if you’ve been following my blog and work over the last few years, it’s been all containers and Kubernetes. But I still have clients that run SQL Server on Windows. And for those clients, there’s only one way that I install SQL Server…with dbatools. So I wrote a session describing how I did it for my clients, and I’m going to share all that knowledge with you! Check out the deets…

Abstract

The dbatools project brings automation to the forefront of the SQL Server configuration, operations, and deployment tasks. This session will look at how to install and configure multiple SQL Servers quickly and consistently using dbatools deployment tools. Once those systems are up and running, we will look at how to configure and manage multiple systems using PowerShell automation techniques. By the end of this session, you will have the tools, techniques, and code to automatically and consistently deploy and configure SQL Server in your environment.

Hope to see you at PASS Summit this year!

Sign up PASS Summit here: https://www.pass.org/summit/2020/Register-Now


New Pluralsight Course – Configuring and Managing Kubernetes Security

My new course “Configuring and Managing Kubernetes Security” is now available on Pluralsight here! Check out the trailer here or if you want to dive right in head over to Pluralsight!
 
This course will teach you to configure and manage security in Kubernetes clusters.  

This course targets IT professionals that design and maintain Kubernetes and container-based solutions. The course can be used by both the IT pro learning new skills and the system administrator or developer preparing for using Kubernetes both on-premises and in the Cloud. 

This course is part of my Learning Path covering the content needed to prepare for the Certified Kubernetes Administrator exam.

Let’s take your Kubernetes administration and configuration skills to the next level and get you started now!

The modules of the course are:

  • Kubernetes Security Fundamentals – First, you’ll explore Kubernetes security fundamentals, learning how authentication and authorization work to control access to the Kubernetes API.
  • Managing Certificates and kubeconfig Files – Next, you’ll learn how certificates are used in Kubernetes and how to create and manage certificates in your cluster. Then, you’ll learn how to create and manage kubeconfig files for accessing clusters and then configure cluster access for a new user.
  • Managing Role Based Access Controls – In the last module, you’ll learn how to control access to the Kubernetes API with role based access controls.

When you’re finished with this course you will have the skills needed to operate and manage security in Kubernetes clusters.


Check out the course at Pluralsight!

New Pluralsight Course – Maintaining, Monitoring and Troubleshooting Kubernetes

My new course “Maintaining, Monitoring, and Troubleshooting Kubernetes” is now available on Pluralsight here! Check out the trailer here or if you want to dive right in head over to Pluralsight!
 
This course will teach you to maintain, monitor, and troubleshoot production Kubernetes clusters.  

This course targets IT professionals that design and maintain Kubernetes and container-based solutions. The course can be used by both the IT pro learning new skills and the system administrator or developer preparing for using Kubernetes both on-premises and in the Cloud. 

This course is part of my Learning Path covering the content needed to prepare for the Certified Kubernetes Administrator exam.

Let’s take your Kubernetes administration and configuration skills to the next level and get you started now!

The modules of the course are:

  • Maintaining Kubernetes Clusters – In this module you will learn core Kubernetes cluster maintenance tasks. We will start off with a closer look at what etcd is, the services it provides, and learn its backup and restore operations. Next, you will then learn the cluster upgrade process, enabling you to take advantage of new Kubernetes features. Then finally, you will learn how to facilitate Worker Node maintenance such as operating system upgrades with draining and cordoning.
  • Logging and Monitoring in Kubernetes Clusters – Monitoring and logging enable you to understand what’s happening inside your Kubernetes cluster and can tell you how things are performing and when things go wrong. In this module we will look at the Kubernetes logging architecture, learning where logs are stored for the Control Plane, Nodes, and Pods and how to access and review those logs. Then next, we’ll dive into how to monitor performance in your cluster with the Kubernetes Metrics Server and access performance data for Nodes and Pods running in your cluster.
  • Troubleshooting Kubernetes – It is inevitable, something will go wrong in your cluster. In this module, you will learn the tools and techniques needed to troubleshoot your Kubernetes cluster. We will start by introducing common troubleshooting methodologies and pain points in Kubernetes. Then you will learn how to debug and fix issues with your cluster, focusing on the control plane and worker nodes.


Check out the course at Pluralsight!

New Pluralsight Course – Configuring and Managing Kubernetes Networking, Services, and Ingress

My new course “Configuring and Managing Kubernetes Networking, Services, and Ingress” is now available on Pluralsight here! Check out the trailer here or if you want to dive right in go here!
 
In this course you will learn Kubernetes cluster networking fundamentals and configuring and accessing applications in a Kubernetes Cluster with Services and Ingress.  

This course targets IT professionals that design and maintain Kubernetes and container-based solutions. The course can be used by both the IT pro learning new skills and the system administrator or developer preparing for using Kubernetes both on-premises and in the Cloud. 

Let’s take your Kubernetes administration and configuration skills to the next level and get you started now!

The modules of the course are:

  • Kubernetes Networking Fundamentals – In this module, you will learn Kubernetes networking fundamentals. We will start with the Kubernetes networking model and the motivation behind it, providing developers consistent and robust networking. You will learn cluster network topology, Pod networking internals and how CNI and network plugins implement the Kubernetes network model. Finally, we will learn how DNS is integrated into our cluster and how to configure the DNS Server and Pod DNS clients.
  • Configuring and Managing Application Access with Services – Services are the core abstraction to access applications deployed in Kubernetes. In this module, you will learn the motivation for Services and how Services work. You will learn the types of Services available and when to choose which type for your application. We’ll dive deep and look at how Services are implemented in the cluster. You will then learn the key concepts of Service Discovery in a cluster, enabling applications you deploy to work together seamlessly.
  • Configuring and Managing Application Access with Ingress – In this demo-heavy module you will learn how to expose applications outside of a Kubernetes cluster using Ingress. Starting with the core constructs Ingress and Ingress Controllers. You will learn how traffic flows from outside your cluster through the Ingress controller and to your Pod-based applications. We will learn how to define rules to access applications in several scenarios including single and multi-service access, name-based virtual hosts, and securing access to applications with TLS.


Check out the course at Pluralsight!

Speaking at Data Grillen 2020

I’m proud to announce that I will be speaking at Data Grillen 2020! The conference runs from 28 May 2020 through 29 May 2020.

This is an incredible event packed with fantastic content, speakers, bratwurst, and beer!

Check out the amazing schedule (and when I say check out the amazing schedule, I really mean it. Some of the world’s best Data Platform speakers are going to be there)

On Thursday, May 28th at 15:00 – I’m presenting “Containers – Day 2” in the Handschuh room.

Here’s the abstract

You’ve been working with containers in development for a while, benefiting from the ease and speed of the deployments. Now it’s time to extend your container-based data platform’s capabilities for your production scenarios.

In this session, we’ll look at how to build custom containers, enabling you to craft a container image for your production system’s needs. We’ll also dive deeper into operationalizing your container-based data platform and learn how to provision advanced disk topologies, seed larger databases, implement resource control and understand performance concepts.

By the end of this session, you will learn what it takes to build containers and make them production ready for your environment.

My good friend, and container expert, Andrew Pruski (@dbafromthecold) will be presenting “SQL Server and Kubernetes” in the same room just before me at 13:30, be sure to come to both sessions for a deep dive into running SQL Server in Containers and Kubernetes.

Prost! 

Speaking at PowerShell Summit 2020!

I’m proud to announce that I will be speaking at PowerShell + DevOps Global Summit 2020! The conference runs from April 27th through April 30th. This is an incredible event packed with fantastic content and speakers. Check out the amazing schedule! All the data you need on going is in this excellent brochure right here!

This year I have two sessions!

On Wednesday, April 29th at 09:00 AM – I’m presenting “Inside Kubernetes – An Architectural Deep Dive”

Here’s the abstract

In this session, we will introduce Kubernetes and deep dive into cluster architecture, looking at each component and its responsibility in a cluster. We will also look at and demonstrate higher-level abstractions such as Services, Controllers, Deployments, and Jobs and how they can be used to ensure the desired state of an application deployed in Kubernetes. By the end of this session, you will understand what’s needed to put your applications into production in a Kubernetes cluster.

Session Objectives

  • Understand Kubernetes cluster architecture
  • Understand Services, Controllers, and Deployments
  • Design production-ready Kubernetes clusters
  • Learn to run PowerShell in Kubernetes Jobs

I look forward to seeing you there.