Category Archives: SQL

Using strace inside a SQL Server Container

So, if you’ve been following my blog you know my love for internals. Well, I needed to find out exactly how something worked at the startup of a SQL Server process running inside a Docker container, and my primary tool for this is strace. Well, how do you run strace against processes running in a container? I hadn’t done this before and needed to figure it out…so let’s go through how I pulled this off.

The First (not so successful) Attempt

My initial attempt involved creating a second container image with strace installed and then starting that container in the same PID namespace as the SQL Server container. The benefit here is that I don’t need to do anything special to the SQL Server container…I can use an unmodified SQL Server image and create a container for running strace.

Create a dockerfile for a container and install strace inside the container

FROM ubuntu:16.04

RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -yq curl gnupg apt-transport-https && \
    apt-get install -y strace && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists

CMD /bin/bash

Then build the container with docker build -t strace .

docker build -t strace .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM ubuntu:16.04
 ---> a3551444fc85
Step 2/3 : RUN export DEBIAN_FRONTEND=noninteractive &&     apt-get update &&     apt-get install -yq curl gnupg apt-transport-https &&     apt-get install -y strace &&     apt-get clean &&     rm -rf /var/lib/apt/lists
 ---> Running in 2832df1c4921
Get:1 xenial-security InRelease [109 kB]
Get:2 xenial InRelease [247 kB]
...output omitted...
Fetched 179 kB in 0s (218 kB/s)
Selecting previously unselected package strace.
(Reading database ... 5300 files and directories currently installed.)
Preparing to unpack .../strace_4.11-1ubuntu3_amd64.deb ...
Unpacking strace (4.11-1ubuntu3) ...
Setting up strace (4.11-1ubuntu3) ...
Removing intermediate container 2832df1c4921
 ---> 686bc74ddd24
Step 3/3 : CMD /bin/bash
 ---> Running in 1b1ca2bb04d7
Removing intermediate container 1b1ca2bb04d7
 ---> d89cfe1231c1
Successfully built d89cfe1231c1
Successfully tagged strace:latest

With the container built let’s use it to run strace against our SQL Server process running in another container. 

Start up a container running SQL Server

docker run \
    --name 'sql19' \
    -p 1434:1433 \

Then start up our strace container and attach it to the PID namespace of the sql19 container. 

docker run -it \
    --cap-add=SYS_PTRACE \
    --pid=container:sql19 strace /bin/bash -c '/usr/bin/strace -f -p 1' 

A lot is going on in this command, so let’s expand out each of the parameters:

  • -it – this attaches the standard out of our container to our current shell. Basically, we’ll see the output of strace on our active console and can redirect it to a file if needed.
  • --cap-add=SYS_PTRACE – this adds the SYS_PTRACE capability to the container. This allows ptrace (the system call behind strace) to attach to processes. If this is not specified you will get an error saying ‘Operation not permitted’.
  • --pid=container:sql19 – specifies the container whose PID namespace we want to attach to. This will start up our strace container in the same PID namespace as the sql19 container. With this there is one process namespace shared between the two containers, so they will be able to see each other’s processes, which is what we want. We want the strace process to be able to see the sqlservr process.
  • strace – is the name of the container image we built above.
  • /bin/bash -c '/usr/bin/strace -p 1 -f' – this is the command (CMD) we want to run inside the strace container. In this case, we’re starting a bash shell with the parameters to launch strace.
  • strace -p 1 -f – the option -p 1 will attach strace to PID 1, which is sqlservr, and the -f option will attach to any forked processes from the traced process.
When we run this docker command we get this output:
docker run -it    --cap-add=SYS_PTRACE    --pid=container:sql19 strace sh -c '/usr/bin/strace -p 1 -f'
/usr/bin/strace: Process 1 attached with 2 threads
[pid     9] ppoll([{fd=14, events=POLLIN}], 1, NULL, NULL, 8
[pid     1] wait4(10, 
We’re attaching to an already running container running SQL Server. But what we get is an idle SQL Server process. This is great if we have a running workload we want to analyze, but my goal for all of this is to see how SQL Server starts up, and this isn’t going to cut it.
My next attempt was to stop the sql19 container and quickly start the strace container but the strace container still missed events at the startup of the sql19 container. So I needed a better way.
UPDATE: David Barbarin, fellow Data Platform MVP and SQL Server and Container expert, pursued the idea of using a second container and came up with a very elegant solution to this! He is using the sleep command at the launch of the SQL Server container then attaching a second strace container to the PID namespace. Using this technique he’s able to catch the startup events and not have to build a custom SQL Server container…check out the details here! Exactly what I’m looking for!
Also, as David points out in his post, PID 1 is the watchdog process. I totally forgot about that in the code above. So when running the code above, swap -p 1 for the actual PID of the sqlservr process that is the child of PID 1. But a better way is to use pgrep -P 1 to dynamically get the child process ID of PID 1.
So let’s use this technique to connect to the correct PID inside a running SQL Server container. This will attach to the child of PID 1, which will be the base sqlservr process that’s the database engine.
docker run -it \
    --cap-add=SYS_PTRACE \
    --pid=container:sql19 strace /bin/bash -c '/usr/bin/strace -f -p $(pgrep -P 1)' 
The Second (and more successful) Attempt

I want to attach strace to the SQL Server process at startup, and the way I can achieve that is by creating a custom container with both SQL Server and strace installed, then starting that container with strace launching the SQL Server process.

So let’s start by creating our custom SQL Server container with strace installed. Here’s the dockerfile for that

FROM ubuntu:16.04

RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -yq curl gnupg apt-transport-https && \
    curl | apt-key add - && \
    curl | tee /etc/apt/sources.list.d/mssql-server.list && \
    apt-get update && \
    apt-get install -y mssql-server && \
    apt-get install -y strace && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists

CMD /opt/mssql/bin/sqlservr

This is pretty standard for creating a SQL Server container; the key difference here is that we’re installing the strace package in addition to the mssql-server package. The good news is we can leave the CMD of the container as sqlservr…which means we can use this as a general purpose database container as well as for strace use cases. We’re going to use another technique to override CMD when we start the container so that it will start a strace’d sqlservr process for us.

Let’s go ahead and build that container with docker build -t sqlstrace .

Sending build context to Docker daemon  127.1MB
Step 1/3 : FROM ubuntu:16.04
 ---> a3551444fc85
Step 2/3 : RUN export DEBIAN_FRONTEND=noninteractive &&     apt-get update &&     apt-get install -yq curl gnupg apt-transport-https &&     curl | apt-key add - &&     curl | tee /etc/apt/sources.list.d/mssql-server.list &&     apt-get update &&     apt-get install -y mssql-server &&     apt-get install -y strace &&     apt-get clean &&     rm -rf /var/lib/apt/lists
 ---> Running in 806a3b4b9345
Get:1 xenial-security InRelease [109 kB]
...output omitted...
Setting up mssql-server (15.0.1900.25-1) …
...output omitted...
 ---> 42a1ca28ae72
Step 3/3 : CMD /opt/mssql/bin/sqlservr
 ---> Running in 1e57d6759df6
Removing intermediate container 1e57d6759df6
 ---> 6e3f5e82a177
Successfully built 6e3f5e82a177
Successfully tagged sqlstrace:latest

Once that container is built, we can override the CMD defined in the dockerfile with another executable inside the container…you guessed it, strace.

docker run \
    --name 'sql19strace'  -it  \
    -p 1433:1433 \
     sqlstrace /bin/bash -c "/usr/bin/strace -f /opt/mssql/bin/sqlservr"

The first few lines of the docker run command are standard for starting a SQL Server container. But that last line is a bit different: we’re starting our sqlstrace container, and inside that container image we’re starting a bash shell and passing in the command (-c "/usr/bin/strace -f /opt/mssql/bin/sqlservr") which will start strace, follow any forked processes (-f), and then start SQL Server (sqlservr). From there SQL Server will start up and strace will have full visibility into the process execution. The cool thing about this technique is we can adjust our strace parameters as needed at the time we create the container. 

execve("/opt/mssql/bin/sqlservr", ["/opt/mssql/bin/sqlservr"], [/* 9 vars */]) = 0
brk(NULL)                               = 0x55b7bc77c000
access("/etc/", F_OK)      = -1 ENOENT (No such file or directory)
readlink("/proc/self/exe", "/opt/mssql/bin/sqlservr", 4096) = 23
access("/etc/", R_OK)      = -1 ENOENT (No such file or directory)
open("/opt/mssql/bin/tls/x86_64/", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/opt/mssql/bin/tls/x86_64", 0x7fffe9bc9510) = -1 ENOENT (No such file or directory)
open("/opt/mssql/bin/tls/", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/opt/mssql/bin/tls", 0x7fffe9bc9510) = -1 ENOENT (No such file or directory)
open("/opt/mssql/bin/x86_64/", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
...output omitted... 

Above is the output of strace on SQL Server, kicking off with an execve, which is the system call used after a fork to load the new program into the new process’s address space. 
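As mentioned above, we can adjust the strace parameters at container-create time. For example, if I only care about file-related system calls, I can narrow the capture and write the trace to a file instead of the console. This is just a sketch: the -e trace=file filter and the -o output path are illustrative strace options I’m choosing here, not something required by the SQL Server image.

# trace only file-related syscalls (-e trace=file) and write the output to a file (-o)
docker run \
    --name 'sql19strace-files' -it \
    -p 1433:1433 \
     sqlstrace /bin/bash -c "/usr/bin/strace -f -e trace=file -o /tmp/sqlservr.trace /opt/mssql/bin/sqlservr"

With -o the trace lands in /tmp/sqlservr.trace inside the container, which you can copy out later with docker cp if you want to analyze it on the host.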

Hopefully, this can help you get into those deep dive debugging/troubleshooting/discovery scenarios you may find yourself in when working with SQL Server inside a container.

Docker Image Tags are Case Sensitive

A quick post about pulling docker container images (this applies to docker run too)…when specifying the container image, the container image name and tag are case sensitive. We’re not going to discuss how much troubleshooting time it took me to figure this out…but let’s just say it’s more than I care to admit publicly. 

In this code you can see I’m specifying the following image and tag server:2019-rc1-ubuntu (notice the lowercase rc in the tag)

docker pull 

Docker responds that it cannot find that image manifest

Error response from daemon: manifest for not found: manifest unknown: manifest unknown 

If we specify server:2019-RC1-ubuntu (notice the uppercase RC in the tag)

docker pull

Then docker is able to find that image and downloads it to my local machine

2019-RC1-ubuntu: Pulling from mssql/server
59ab41dd721a: Already exists
57da90bec92c: Already exists
06fe57530625: Already exists
5a6315cba1ff: Already exists
739f58768b3f: Already exists
e39f945bda21: Pull complete
6689ce95f395: Pull complete
ec004dcfdfb5: Pull complete
e44708601d04: Pull complete
Digest: sha256:a11facbda1b1cc299d4a37499ef79cd18e38bfb8e5dd7e45cc73670cc07772e5
Status: Downloaded newer image for

Want to get a list of tags for a container image so you know what image and tags to specify? Here’s how you do that with curl.

 curl -L 
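For example, assuming the SQL Server image is hosted on Microsoft’s registry under mssql/server, the standard Docker Registry V2 tags endpoint looks something like this (the registry path is my assumption here; jq is only there to pretty-print the JSON):

# /v2/<name>/tags/list is the standard Docker Registry V2 API for listing tags
curl -sL https://mcr.microsoft.com/v2/mssql/server/tags/list | jq '.tags'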

If you’re of the PowerShell persuasion (shout out to Andrew Pruski for this gem) here’s how you can generate a list of tags with Invoke-WebRequest


Persisting SQL Server Data in Docker Containers – Part 3

In the first two posts in this series we discussed the need for data persistency in containers, then we discussed where the data actually lives on our systems. Now let’s look at specifying the location of the data on the underlying file system of the base OS. 

This is the third post in a three part series on Persisting SQL Server Data in Docker Containers. The first post introducing Docker Volumes is here. The second post on where Docker actually stores your data is here.

Exposing Directories on the Base OS into a Container

Now, what if I wanted to expose a directory from my base OS (macOS) directly into the container, avoiding placing my data inside the Docker Linux VM? Let’s try it and see what happens…let’s start up a container with a Docker Volume mapping /Users/demo/demos/data on the base OS into the container at /var/opt/mssql.

docker run \
    --name 'sql19dv' \
    -p 1433:1433 \
    -v /Users/demo/demos/data:/var/opt/mssql \

If we do a docker ps -a we’ll find our container exited with a non-zero exit code. That’s bad.

CONTAINER ID        IMAGE                                        COMMAND                  CREATED             STATUS                        PORTS                    NAMES
2f4a9efc2f89   "/opt/mssql/bin/perm…"   21 seconds ago      Exited (1) 11 seconds ago                              sql19dv

The first thing you should do when this happens is to examine the container’s logs. We can do that with docker logs sql19dv (where sql19dv is our container name) and we’ll get this output.

This program has encountered a fatal error and cannot continue running at Sun Sep  1 14:19:06 2019
The following diagnostic information is available:

         Reason: 0x00000006
        Message: Kernel bug check
        Address: 0x6b047d50
     Parameters: 0x10861f590
    Stack Trace:
        Process: 9 - sqlservr
         Thread: 13 (application thread 0x4)
    Instance Id: 866b1bc1-211d-4390-aa43-a48b32d6f78e
       Crash Id: 07c2a35d-5ddf-4e5d-ad69-a91ef0f5d0e9
    Build stamp: 228a531f7a324b94dd3127e706f889b081f5677bd368be8b30485d8edda4d02b
   Distribution: Ubuntu 16.04.6 LTS
     Processors: 6
   Total Memory: 6246559744 bytes
      Timestamp: Sun Sep  1 14:19:06 2019
     Last errno: 2
Last errno text: No such file or directory

Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log...
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Sun Sep  1 14:19:07 UTC 2019 Capturing program information
Sun Sep  1 14:19:08 UTC 2019 Attempting to capture a dump with paldumper
Captured a dump with paldumper
Sun Sep  1 14:19:11 UTC 2019 Capturing program binaries
Sun Sep  1 14:19:12 UTC 2019 Compressing the dump files

In the output above, SQL Server crashes trying to access a file inside the container that isn’t there…see the ‘Last errno text’. Using strace inside the container yields the following information. 

[pid    11] lstat("/opt/mssql/lib/system", 0x7f8f15b63350) = -1 ENOENT (No such file or directory)
[pid    11] <... lstat resumed> 0x7f8f15b63350) = -1 ENOENT (No such file or directory)

Let’s Map a Different Path and See What Happens

docker run \
    --name 'sql19dv2' \
    -p 1432:1433 \
    -v /Users/demo/demos/data:/var/opt/mssql/data \

Here we’re mapping to /var/opt/mssql/data (where above we mapped /var/opt/mssql). This mapping will fail and the container won’t start but this time for a different reason.

If we look at docker logs you’ll find the following error from the SQL Server Error Log.

2019-09-02 18:10:15.08 Server      Error 87(The parameter is incorrect.) occurred while opening file '/var/opt/mssql/data/master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.

Makes sense…we changed where SQL Server is reading/writing data. macOS doesn’t support a file mode called O_DIRECT, which allows for unbuffered read/write access to a file opened with the open system call. O_DIRECT is used by systems that manage their own file caching, like relational database management systems (RDBMS). So as SQL Server starts up and tries to open the master database with O_DIRECT, the file can’t be opened because the macOS kernel doesn’t support this mode. And this is the reason why we have to have that Linux VM around: the Linux VM supports the O_DIRECT option on opened files. See more about this at the GitHub issue here.

An strace of the thread shows the following:

open("/var/opt/mssql/data/master.mdf", O_RDONLY|O_DIRECT
<... open resumed> )        = -1 EINVAL (Invalid argument)
...output omitted...
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_ACCERR, si_addr=0xb804b1fe8} ---
...the remainder of the strace shows the creation of the SQL Server Error Log record above...

This issue is specific to macOS. On a Linux machine the base OS directory is mapped directly into the container and the file operations work because the kernel supports the correct file modes on the open system call. On a Windows machine this works too, as confirmed by my friend and fellow MVP Andrew Pruski.
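If you want to see this O_DIRECT behavior without involving SQL Server at all, GNU dd can request O_DIRECT with iflag=direct. Here’s a quick sketch, not from the original setup: it spins up a throwaway Ubuntu container with the macOS directory bind mounted at /bindmount and a named Docker Volume (dvtest, a name I made up) at /volume, writes a small file to each, then reads each back with O_DIRECT. I’d expect the bind-mounted read to fail with an ‘Invalid argument’ style error while the Volume-backed read succeeds.

docker run --rm -it \
    -v /Users/demo/demos/data:/bindmount \
    -v dvtest:/volume \
    ubuntu:16.04 /bin/bash -c '
      # write a small test file to each location
      dd if=/dev/zero of=/bindmount/test.dat bs=4k count=1
      dd if=/dev/zero of=/volume/test.dat bs=4k count=1
      # read each back requesting O_DIRECT; the bind mount should fail with EINVAL
      dd if=/bindmount/test.dat of=/dev/null iflag=direct
      dd if=/volume/test.dat of=/dev/null iflag=direct'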

But we still can use our base OS directories…really!

All isn’t lost if you’re running Linux containers on a Mac and need SQL Server to access the base OS’s file system. We can still use Docker Volumes for other parts of the container. Let’s create a container using TWO Docker Volumes. We’ll define sqldata1 as using the file system inside the Docker VM, and we’ll define a second Docker Volume that we can use to read/write other information…like backups. 

 docker run \
    --name 'sql19dv1' \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \
    -v /Users/demo/demos/backup:/backup \

In this configuration our SQL instance will use sqldata1 mapped to /var/opt/mssql for its instance data directories inside the Docker Linux VM so SQL is able to open the files with the appropriate file modes. But we can still read/write information directly to our base OS in the directory /Users/demo/demos/backup which is mapped into the container at the location /backup. Backup files do not use the O_DIRECT flag.

Now let’s run a backup of our database to that location.

sqlcmd -S localhost,1433 -U sa -Q "BACKUP DATABASE [TestDB1] TO DISK = '/backup/TestDB1.bak'" -P $PASSWORD -W

And if we look at that directory on the base operating system we’ll see the database’s backup outside the container. That’s cool. Now the automatic backups of your workstation can pick up that file and back it up into the cloud for you…right?

ls -la /Users/demo/demos/backup
total 6504
drwxr-xr-x  3 demo  staff       96 Sep  1 10:04 .
drwxr-xr-x  5 demo  staff      160 Sep  1 09:48 ..
-rw-r-----  1 demo  staff  3330048 Sep  1 10:03 TestDB1.bak

Let’s do something cool…

We can share that /backup volume with other containers on our system. With the container sql19dv1 still running we can start up another container, sql19dv2. We’ll need to ensure this container has a unique name, a unique port to listen on, and a unique Volume for the instance’s files. The only thing it’s going to share is the backup volume.

 docker run \
    --name 'sql19dv2' \
    -p 1432:1433 \
    -v sqldata2:/var/opt/mssql \
    -v /Users/demo/demos/backup:/backup \

With this container running we can execute a RESTORE statement on the backups that are on the base OS at /Users/demo/demos/backup and mapped into the container at /backup. This technique can be effective with larger data sets, avoiding having to copy the backup into the container with docker cp. 

sqlcmd -S localhost,1432 -U sa -Q "RESTORE DATABASE [TestDB1] FROM DISK = '/backup/TestDB1.bak'" -P $PASSWORD -W
Processed 392 pages for database 'TestDB1', file 'TestDB1' on file 1.
Processed 2 pages for database 'TestDB1', file 'TestDB1_log' on file 1.
RESTORE DATABASE successfully processed 394 pages in 0.026 seconds (118.239 MB/sec).

Wrapping things up 

In this post, we introduced being able to map a file location from the base OS into a container and use it for reading and writing data, in our examples backup files. This could be any type of data. We also learned that for SQL Server data files we still need to use the Docker Volume that’s serviced by the Linux VM. We also learned how we can share a Docker Volume between containers, a quick way to move backups and other data between containers without having to use docker cp.

This technique isn’t specific to containers on macOS. This will work on Windows and Linux as well.

Persisting SQL Server Data in Docker Containers – Part 2

So in my previous post, we discussed Docker Volumes and how they have a lifecycle independent of the container enabling us to service the container image independent of the data inside the container. Now let’s dig into Volumes a little bit more and learn where Docker actually stores that data on the underlying operating system.  

This is the second post in a three part series on Persisting SQL Server Data in Docker Containers. The first post introducing Docker Volumes is here. And the third post on mapping base OS directories directly into containers is here.

Docker Volumes

When we’re working with Docker Volumes we can use the -v option to create the Volume at the time the container is created, or we can create the Volume ahead of time with the docker volume create command. The left side, sqldata1, is the Volume name. The right side, /var/opt/mssql, is the directory the Volume will be mapped to inside the container. 

We can specify the location of the Volume on the underlying OS but if we don’t it will be created in the default location defined in your Docker preferences which is /var/lib/docker/volumes. Let’s go with the default location for now…we’re going to cover specifying locations in an upcoming post.
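If you prefer to create the Volume ahead of time rather than letting docker run create it, a quick sketch looks like this (using the same sqldata1 name):

# create the named Volume up front, then confirm it exists
docker volume create sqldata1
docker volume ls --filter name=sqldata1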

docker run \
    --name 'sql19' \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \

On a system where Linux is the base OS running Docker, the container runtime will create Volumes on the base OS’s file system and map them into the container. So as we create Volumes, they will land in /var/lib/docker/volumes when using code like the above that doesn’t specify a file location on the base OS. Well, /var/lib/docker/volumes is quite a Linux’y sounding path, and if I try to browse to that path on my Mac or Windows machine it doesn’t exist. OK, I don’t have a Windows system but I think it’s a safe bet that’s not there either :)
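You can check for yourself on the macOS side; that path simply isn’t there:

ls /var/lib/docker/volumes
ls: /var/lib/docker/volumes: No such file or directory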

Let’s use the inspect command to get more information on a Docker resource and here we’re using docker volume inspect on sqldata1.

docker volume inspect sqldata1
[
    {
        "CreatedAt": "2019-09-01T11:51:25Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/sqldata1/_data",
        "Name": "sqldata1",
        "Options": null,
        "Scope": "local"
    }
]

In the output here we can see the Mountpoint, which is where the data lives on the underlying OS: /var/lib/docker/volumes/sqldata1/_data. That’s where Docker put the Volume; since I didn’t define a path, it put the Volume data in the default location of /var/lib/docker/volumes…but like we said a second ago, that path doesn’t exist on my local filesystem…what’s going on?

Docker Volumes on non-Linux Operating Systems

Docker on Mac (and Windows uses the same technique, for now) runs a small Linux VM to provide kernel services to Linux containers. When we create a Docker Volume without specifying a file location for the Volume it will be created *inside* this VM. So that’s where our Volume data is stored. Let’s keep digging…

Using the technique defined here we can get a shell into that Virtual Machine and browse the file system to find the actual data files. Here’s how you attach a shell to the Docker VM.

screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty

Now let’s decompose this command a bit…screen is a command used to attach a shell to a process, and the path (~/Library/Containers/com.docker.docker/Data/vms/0/tty) is under the default disk image path defined in your Docker preferences. The file tty is a running process that exposes a shell interface to the VM supporting our Linux containers. Everything in Linux is a file, so that tty is actually a terminal process inside the VM that’s exposed outside the VM on your base operating system’s file system. screen will ‘open’ that file and start to read from it…since it exposes a terminal you get a shell into the VM. 

Now that we’re inside the VM let’s look around and see where our Volume data lives. Here we’ll find the actual SQL Server files supporting the instance, the databases, and the log files.

ls /var/lib/docker/volumes/
metadata.db sqldata1

In the directory listing above we see a directory for each named Docker Volume; in this case we see sqldata1.

ls /var/lib/docker/volumes/sqldata1/_data/
data     log      secrets

Inside of sqldata1, we will start to see the directories and files created by SQL Server in the Docker Volume. Remember SQL Server thinks this is /var/opt/mssql because this path is being mapped into the container at that location.  
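If you want to confirm that from the container’s side, you can exec into the running container and list the same location (assuming the sql19 container from earlier is still running); you should see the same directories as above:

# inside the container, the Volume shows up at /var/opt/mssql
docker exec -it sql19 ls /var/opt/mssql
data     log      secrets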

ls /var/lib/docker/volumes/sqldata1/_data/data/
Entropy.bin                     model.mdf
MS_AgentSigningCertificate.cer  modellog.ldf
TestDB1.mdf                     msdbdata.mdf
TestDB1_log.ldf                 msdblog.ldf
master.mdf                      tempdb.mdf
mastlog.ldf                     templog.ldf

And if we look inside the data directory we’ll see system and user databases. 

What’s Next? 

So now that we know where Docker Volume data lives when running Linux Containers on a Mac (this same technique applies to Windows too), in our next post we’re going to go a little bit further into this and define a specific location for Docker to store our data on the base OS.

Persisting SQL Server Data in Docker Containers – Part 1

What’s the number one thing a data professional wants to do with their data…keep it around. Let’s talk about running SQL Server in containers using Docker Volumes on a Mac.

This is the first post in a three part series on Persisting SQL Server Data in Docker Containers. The second post on where Docker actually stores your data is here. And the third post on mapping base OS directories directly into containers is here.

The Need for Data Persistency in Containers

A container image is read-only. When an application changes data inside a running container, the writes go to a writable layer. The writable layer plus the read-only container image are brought together by the container runtime and presented to the processes running inside the container as a single file system. 

The primary issue with this is that the writeable layer has the lifecycle of the container. If you delete the container, you delete the writeable layer and any data that was in there. Luckily Docker containers give us a way to decouple the container and its data.

In Figure 1, you can see a container image and its writeable layer. The application inside the container sees this as a single file system. If we delete this container, any data written to the writeable layer will be deleted too. 


Figure 1: A container and its writable layer

Docker Volumes

A Docker Volume is a Docker managed resource that is mapped into a defined point in the filesystem inside the container. The primary benefit of using Docker Volumes is that they have a lifecycle that’s independent of the container. This enables you to decouple your application from its state to the point where you can simply throw away the container, replace it with a new container image, start up your application, and point it to your data.

In Figure 2, we have a container, a writeable layer and a volume. A container will always have a writeable layer even when a Volume is defined. A Volume will be mounted at a specific location in the file system inside the container and writes to that location will be written to the Volume. Writes to other parts of the file system will be written to the writable layer. 


Figure 2: A container, its writable layer and a Volume

SQL Server using Docker Volumes

Let’s talk about how we can use Docker Volumes and SQL Server to persist data. If we want to run SQL Server in a container we will want to decouple our data from the container itself. Doing so will enable us to delete the container, replace it, and start up a new one pointing at our existing data. When running in a container, SQL Server will store data in /var/opt/mssql by default. When the container starts up for the first time it will put the system databases in that location, and any user databases created will also be placed at this location by default. 

Now, if we don’t use a Volume that data will be written into the writeable layer of the container and if we delete the container…we delete our data. We don’t want that so let’s start up a container with a Volume. To do so we use the -v option when we use the docker run command. 

docker run \
    --name 'sql17' \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \

In the code above you can see -v sqldata1:/var/opt/mssql specified as part of the docker run command. This creates a Docker Volume sqldata1 and maps it inside the container to /var/opt/mssql. Now during this container’s start up, SQL Server will write its data to /var/opt/mssql, which means it is actually written to the Volume. If we delete this container and replace it…when SQL Server starts up it will see the master database and proceed initializing the system as defined in master. If there are any user databases defined in master and they’re accessible they will be brought online too. Let’s try it out…first up let’s create a database and query the file information about the databases in this container.

sqlcmd -S localhost,1433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD

sqlcmd -S localhost,1433 -U sa -Q 'SELECT name, physical_name from sys.master_files' -P $PASSWORD -W

name physical_name
---- -------------
master /var/opt/mssql/data/master.mdf
mastlog /var/opt/mssql/data/mastlog.ldf
tempdev /var/opt/mssql/data/tempdb.mdf
templog /var/opt/mssql/data/templog.ldf
modeldev /var/opt/mssql/data/model.mdf
modellog /var/opt/mssql/data/modellog.ldf
MSDBData /var/opt/mssql/data/MSDBData.mdf
MSDBLog /var/opt/mssql/data/MSDBLog.ldf
TestDB1 /var/opt/mssql/data/TestDB1.mdf
TestDB1_log /var/opt/mssql/data/TestDB1_log.ldf

In the code above we create a database, then query master for information about the databases running inside this container. You can see all of the paths are under /var/opt/mssql, which is our Volume. 

Container and Data Independence

The -v option created a Docker managed Volume that is independent of the container, so our data will live in there and we can service the container independent of the Volume. So let’s do that…let’s delete the container and start up a new container, and let’s go so far as to use a 2019 container to upgrade SQL Server…that’s cool!

docker stop sql17
docker rm sql17

The code above will stop and then delete the container. When the container is deleted, so is its writeable layer. But we are storing our data in the Volume and that still exists.

docker volume ls
DRIVER              VOLUME NAME
local               sqldata1

Above we can see there is a Volume using the local driver and its name is sqldata1…this still exists and can be mounted by new containers. The local driver is used to map directories from the base OS inside the container. There are other types of drivers that expose other types of storage into the container. More on this later.
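As a quick taste, even the built-in local driver can mount other kinds of storage via mount options. Here’s a sketch of a Volume backed by an NFS share; the server address and export path are made up purely for illustration:

# illustrative only: a Volume backed by an NFS export via the local driver
docker volume create \
    --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.50,rw \
    --opt device=:/exports/sqldata \
    sqldata-nfs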

docker run \
    --name 'sql19' \
    -p 1433:1433 \
    -v sqldata1:/var/opt/mssql \

With this code, we start up a new container and tell it to use the same Volume, mounting it at /var/opt/mssql. So when SQL Server starts it finds the master database, and master has the metadata about configuration and user databases, so we get back into the state we were previously in. Let’s ask SQL Server for a list of databases.

sqlcmd -S localhost,1433 -U sa -Q 'SELECT name, physical_name from sys.master_files' -P $PASSWORD -W

name physical_name
---- -------------
master /var/opt/mssql/data/master.mdf
mastlog /var/opt/mssql/data/mastlog.ldf
tempdev /var/opt/mssql/data/tempdb.mdf
templog /var/opt/mssql/data/templog.ldf
modeldev /var/opt/mssql/data/model.mdf
modellog /var/opt/mssql/data/modellog.ldf
MSDBData /var/opt/mssql/data/MSDBData.mdf
MSDBLog /var/opt/mssql/data/MSDBLog.ldf
TestDB1 /var/opt/mssql/data/TestDB1.mdf
TestDB1_log /var/opt/mssql/data/TestDB1_log.ldf

…and there you can see in the output above, SQL Server is in the state it was in from the initial run of the container on the 2017 image. Now we’re on the 2019 image and have access to all of our persisted data, independent of the container image. 

sqlcmd -S localhost,1433 -U sa -Q 'SELECT @@VERSION' -P $PASSWORD
Microsoft SQL Server 2019 (RC1) - 15.0.1900.25 (X64)
        Aug 16 2019 14:20:53
        Copyright (C) 2019 Microsoft Corporation
        Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)                                                                                                                       

Containers have replaced virtual machines for me and the decoupling of data and computation will have a significant impact on how we design data platforms and systems going forward. In this post, I wanted to highlight how you can use a container with persistent state systems like SQL Server. In the next post, I’m going to show you where that data actually lives on the underlying Operating System.

Microsoft Most Valuable Professional – Data Platform for 2019-2020

Today, I’m proud to announce that I have been renewed as a Microsoft MVP – Data Platform for the 2019-2020 award year, my third MVP award. This is truly an honor and I’m humbled to be included in this group of exceptional data professionals. I really look forward to continuing to work with everyone in the MVP community and continuing to contribute to our unmatched Data Platform Community!


What is an MVP?

Here’s the definition according to Microsoft

Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community. They are always on the “bleeding edge” and have an unstoppable urge to get their hands on new, exciting technologies. They have very deep knowledge of Microsoft products and services, while also being able to bring together diverse platforms, products and solutions, to solve real world problems. MVPs make up a global community of over 4,000 technical experts and community leaders across 90 countries and are driven by their passion, community spirit, and quest for knowledge. Above all and in addition to their amazing technical abilities, MVPs are always willing to help others – that’s what sets them apart.

For 2019-2020, I have been named a Data Platform MVP, which means my technical specialization is on data products like SQL Server. The group of people that have received this award is quite small…by my count 332 worldwide and less than 75 in the US. I’m honored to be in this group of talented professionals.

Why I’m excited to be an MVP

Honestly, the primary reason I’m excited to be an MVP is to give back (more). I’ve learned so much from other MVPs, and receiving this award will help me continue to build relationships with other MVPs and Microsoft employees to further help develop the Data Platform itself and the community that surrounds it.

At the start of 2016 I had set a goal of being an MVP in 5 years. I don’t know why I picked that number, but what I figured was…MVP would be validation of consistent, quality work for our community and being recognized for the work that I’ve contributed. Things like blogging, social media, public speaking and more. You learn a ton by teaching! 

The past year has been great, focusing on Containers and Kubernetes and how we can leverage that tech in our modern Data Platform.

To the people that have helped along the way, I thank you all!

Speaking at SQL Saturday Dallas

Speaking at SQLSaturday Dallas!

I’m proud to announce that I will be speaking at SQL Saturday Dallas on May 17th 2018! This one won’t let you down! Check out the amazing schedule!

If you don’t know what SQLSaturday is, it’s a whole day of free SQL Server training available to you at no cost!

If you haven’t been to a SQLSaturday, what are you waiting for! Sign up now!

My presentation is “Practical Container Scenarios in Azure”


Here’s the abstract for the talk

You’ve heard the buzz about containers and Kubernetes, now let’s start your journey towards rapidly deploying and scaling your container-based applications in Azure. In this session, we will introduce containers and the container orchestrator Kubernetes. Then we’ll dive into how to build a container image, push it into our Azure Container Registry and deploy it our Azure Kubernetes Services cluster. Once deployed, we’ll learn how to keep our applications available and how to scale them using Kubernetes.

Key topics introduced:

  • Building Container based applications
  • Publishing containers to Azure Container Registry
  • Deploying Azure Kubernetes Services Clusters
  • Scaling our container-based applications in Azure Kubernetes Services

Workshop – Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment

Pre-conference Workshop at SQLSaturday Baton Rouge

I’m proud to announce that I will be presenting an all-day pre-conference workshop at SQL Saturday Baton Rouge on August 16th 2019! This one won’t let you down! 

The workshop is “Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment”


Here’s the abstract for the workshop

Modern application deployment needs to be fast and consistent to keep up with business objectives and Kubernetes is quickly becoming the standard for deploying container-based applications, fast. In this day-long session, we will start with an architectural overview of a Kubernetes cluster and how it manages application state. Then we will learn how to build a production-ready cluster. With our cluster up and running, we will learn how to interact with our cluster, common administrative tasks, then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and how to keep everything up and running.

Session Objectives

  • Introduce Kubernetes Cluster Components
  • Introduce Kubernetes API Objects and Controllers
  • Installing Kubernetes
  • Interacting with your cluster
  • Storing persistent data in Kubernetes
  • Deploying Applications in Kubernetes
  • Deploying SQL Server in Kubernetes
  • High Availability scenarios in Kubernetes


How much does it cost?

The full day training event is $125 per attendee.

What can I bring into the event?
WiFi at the location is limited. The workshop will be primarily demonstration based. Code will be made available for download prior to the event if you would like to follow along during the session.

How can I contact the organizer with any questions?
Please feel free to email me with any questions:

What’s the refund policy?
7 days: Attendees can receive refunds up to 7 days before your event start date.

Do I need to know SQL Server or Kubernetes to attend this workshop?
No, while we will be focusing on deploying SQL Server in Kubernetes, no prior knowledge of SQL Server or Kubernetes is needed. We will build up our Kubernetes skills using SQL Server as the primary application we will deploy.

What are the prerequisites for the workshop?
All examples will be executed at the command line, so proficiency at a command line is required. Platform dependent (Linux/Windows, Cloud CLIs) configurations and commands will be introduced and discussed in the workshop.

Speaking at SQLSaturday Atlanta – 845

Speaking at SQLSaturday Atlanta!

I’m proud to announce that I will be speaking at SQL Saturday Atlanta on May 17th 2018! This one won’t let you down! Check out the amazing schedule!

If you don’t know what SQLSaturday is, it’s a whole day of free SQL Server training available to you at no cost!

If you haven’t been to a SQLSaturday, what are you waiting for! Sign up now!

My presentation is “Containers – You Better Get on Board!”


Here’s the abstract for the talk

Containers are taking over, changing the way systems are developed and deployed…and that’s NOT hyperbole. Just imagine if you could deploy SQL Server or even your whole application stack in just minutes. You can do that, leveraging containers! In this session, we’ll get you started on your container journey learning container fundamentals in Docker, then look at some common container scenarios and introduce deployment automation with Kubernetes.

In this session we’ll look at:

  • Container Fundamentals with Docker
  • Common Container Scenarios
  • Automation with Kubernetes

Prerequisites: Operating system concepts such as command line use and basic networking skills.

Data Persistency and Advanced SQL Server Disk Topologies in Kubernetes

When working with SQL Server in containers and Kubernetes, storage is a key concept. In this post, we’re going to walk through how to deploy SQL Server in Kubernetes with Persistent Volumes for the system and user databases.

One of the key principles of Kubernetes is the ephemerality of Pods. No Pod is ever redeployed; a completely new Pod is created. If a Pod dies, for whatever reason, a new Pod is created in its place, and there is no continuity in the state of that Pod. The newly created Pod will go back to the initial state of the container image defined in the Pod’s spec. This is very valuable for stateless workloads, not so much for stateful workloads like SQL Server.

This means that for a stateful workload like SQL Server we need to store both configuration and data externally from the Pod to maintain state through the recreation of a Pod. Kubernetes gives us two constructs to do that: environment variables and Persistent Volumes. 

Using Environment Variables for Container Configuration

Container-based applications use environment variables for configuration at startup. The SQL Server container has a collection of environment variables that can be used to configure it at container startup. We will leverage two of those in this configuration, MSSQL_DATA_DIR and MSSQL_LOG_DIR, which allow us to define file system locations for the user database and log files. When the SQL Server container is started inside the Pod, it reads the environment variables at runtime and sets its configuration based on those values. We define these variables as part of the Pod Spec. We will cover that configuration below.

Using Persistent Volumes to Maintain Database State

To persist the state of our SQL Server container, we will configure SQL Server to store its data and log files for both user and system databases on Persistent Volumes.

First, let’s review how SQL Server in a container starts up. During the initial startup, the SQL Server process checks to see if there are any system databases in the default system file location, which is /var/opt/mssql/data. If there are none, the system databases are copied there; if they are there, no action is taken. 

To add persistency to the system databases, and really all of the other components of SQL Server such as the Error Log and other system files, we will configure /var/opt/mssql so that it is backed by a Persistent Volume.

By placing the system databases on a Persistent Volume, when a Pod is recreated and the Persistent Volumes are attached and mounted in the same location, the SQL Server process starts up, sees the system databases, and has what it needs to maintain state between Pod creations.

If there are records for user databases in the system databases, SQL Server will start the process of bringing those databases online as well. Certainly, the default location for user databases is /var/opt/mssql/data, but we are going to override that with environment variables for both the data and log directories, placing each on a dedicated Persistent Volume.

Let’s walk through that configuration together. 

Persistent Volume Claims

In this configuration, we will use dynamic storage provisioning. In dynamic provisioning, a Persistent Volume Claim (PVC) is used to request a Persistent Volume (PV) from a Storage Class. In this case, we’ll be using AKS’s managed-premium Storage Class. 
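If you want to confirm which Storage Classes your cluster offers (and that managed-premium is one of them on AKS), kubectl will list them for you:

# list the Storage Classes available in this cluster
kubectl get storageclass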

Here we define three PVCs, one for each place we want a Persistent Volume: the system files and databases, the user database files, and the log files.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-data"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-system"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pvc-sql-log"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 10Gi


In the Pod spec for our Deployment, we want to define several elements to support this configuration. 

  • Volumes – define volumes that can be mounted by this Pod. In this case, we’re creating and naming three volumes, backed by the PVCs defined above.
  • volumeMounts – volumes mounted into the container and their mountPath, location. This maps the names from the named Volumes to a location in the filesystem in the container.
  • env – due to the ephemerality of the container in the Pod, we need to tell SQL Server at start up that the data and log files will be stored in a specified directory. We are leaving the system databases and files in the default location which is /var/opt/mssql
The net effect of this storage configuration is that we are mapping the Persistent Volumes into a particular location in the filesystem inside the container. 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: ''
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: 'Y'
        - name: MSSQL_DATA_DIR
          value: '/data'
        - name: MSSQL_LOG_DIR
          value: '/log'
        - name: SA_PASSWORD
          value: 'S0methingS@Str0ng!'
        volumeMounts:
        - name: mssql-system
          mountPath: /var/opt/mssql
        - name: mssql-data
          mountPath: /data
        - name: mssql-log
          mountPath: /log
      volumes:
      - name: mssql-system
        persistentVolumeClaim:
          claimName: pvc-sql-system
      - name: mssql-data
        persistentVolumeClaim:
          claimName: pvc-sql-data
      - name: mssql-log
        persistentVolumeClaim:
          claimName: pvc-sql-log


We’ll front end our SQL Server with a public IP address and a load balancer. 

apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 31433
      targetPort: 1433
  type: LoadBalancer

Apply the Configuration

Save the code above into a YAML file and deploy it to your Kubernetes cluster.

kubectl apply -f deployment-advanced-disk.yaml

You’ll get this output

persistentvolumeclaim/pvc-sql-data created
persistentvolumeclaim/pvc-sql-system created
persistentvolumeclaim/pvc-sql-log created
deployment.apps/mssql-deployment created
service/mssql-deployment created

Confirm the configuration

We can use kubectl get pv to list the Persistent Volumes (PV) dynamically allocated by our cluster. Here there are three Persistent Volumes. The key here is the status is Bound, which means they are bound to a PVC. I also want to point out the Reclaim Policy is Delete. This means if the PVC is deleted, the PV will be deleted at a cleanup interval sometime in the future. 

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-data     managed-premium            11m
pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-system   managed-premium            11m
pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            Delete           Bound    default/pvc-sql-log      managed-premium            11m
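Since the Reclaim Policy on these dynamically provisioned PVs is Delete, if you want a PV (and its underlying disk) to stick around after its PVC is removed you can switch the policy to Retain. Here’s a sketch using one of the PV names from the output above:

# switch the PV's reclaim policy so the disk outlives its PVC
kubectl patch pv pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5 \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'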

With kubectl get pvc we get a list of the PVCs in our configuration, one for each we defined above. The key here is the status is Bound, meaning they are bound to a PV.

kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc-sql-data     Bound    pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m
pvc-sql-log      Bound    pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m
pvc-sql-system   Bound    pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5   10Gi       RWO            managed-premium   12m 

Now let’s use kubectl describe pods to get the deep dive info about our storage configuration and how it’s mapped into the Pod. 

There are a few key places in the output below I want to point you to:

  • Containers: mssql: Environment: you’ll find the two environment variables set for the data and log directories. Configured as /data and /log
  • Mounts: we see the file system location inside the container and the name of the Volumes defined in the Pod Spec
  • Volumes: we see the name of the Volumes, their type, claim name and the read/write status.
  • Events: this is a log of the events for the creation of this Pod. Key here is that sometimes the container will come up prior to the storage being available to the Pod. That’s what the error below is, but it clears itself up and the container is able to start.
kubectl describe pods
Name:               mssql-deployment-df4cf5c4c-nf8lf
Namespace:          default
Priority:           0
Node:               aks-nodepool1-89481420-2/
Start Time:         Sat, 04 May 2019 07:41:59 -0500
Labels:             app=mssql
Status:             Running
Controlled By:      ReplicaSet/mssql-deployment-df4cf5c4c
    Container ID:   docker://f2320ae8f94c24fbb04214b903b4a218b82e9548f8d88a95daa7e207eeaa42b4
    Image ID:       docker-pullable://
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 04 May 2019 07:44:21 -0500
    Ready:          True
    Restart Count:  0
      ACCEPT_EULA:                   Y
      MSSQL_DATA_DIR:                /data
      MSSQL_LOG_DIR:                 /log
      SA_PASSWORD:                   S0methingS@Str0ng!
      KUBERNETES_PORT:               tcp://
      KUBERNETES_PORT_443_TCP:       tcp://
      /data from mssql-data (rw)
      /log from mssql-log (rw)
      /var/opt/mssql from mssql-system (rw)
      /var/run/secrets/ from default-token-z9sbf (ro)
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-system
    ReadOnly:   false
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-data
    ReadOnly:   false
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-sql-log
    ReadOnly:   false
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z9sbf
    Optional:    false
QoS Class:       BestEffort
Tolerations: for 300s
        for 300s
  Type     Reason                  Age   From                               Message
  ----     ------                  ----  ----                               -------
  Normal   Scheduled               13m   default-scheduler                  Successfully assigned default/mssql-deployment-df4cf5c4c-nf8lf to aks-nodepool1-89481420-2
  Normal   SuccessfulAttachVolume  13m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0ea01a8-6e69-11e9-a433-f659caf6a6f5"
  Normal   SuccessfulAttachVolume  12m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0cf2345-6e69-11e9-a433-f659caf6a6f5"
  Normal   SuccessfulAttachVolume  12m   attachdetach-controller            AttachVolume.Attach succeeded for volume "pvc-e0b418ef-6e69-11e9-a433-f659caf6a6f5"
  Warning  FailedMount             11m   kubelet, aks-nodepool1-89481420-2  Unable to mount volumes for pod "mssql-deployment-df4cf5c4c-nf8lf_default(027c46f7-6e6a-11e9-a433-f659caf6a6f5)": timeout expired waiting for volumes to attach or mount for pod "default"/"mssql-deployment-df4cf5c4c-nf8lf". list of unmounted volumes=[mssql-system mssql-data]. list of unattached volumes=[mssql-system mssql-data mssql-log default-token-z9sbf]
  Normal   Pulled                  11m   kubelet, aks-nodepool1-89481420-2  Container image "" already present on machine
  Normal   Created                 11m   kubelet, aks-nodepool1-89481420-2  Created container
  Normal   Started                 11m   kubelet, aks-nodepool1-89481420-2  Started container

Creating a Database and Verifying File Location

With this code, we’ll get the IP address for our SQL Server service, then we’ll create a database and query master_files for a list of data files. Notice I’m specifying port 31433, which is what we defined when creating our service in the earlier step.

SVCIP=$(kubectl get svc mssql-deployment | grep mssql-deployment |  awk '{print $4}')
sqlcmd -S $SVCIP,31433 -U sa -Q 'CREATE DATABASE TestDB1' -P $PASSWORD
sqlcmd -S $SVCIP,31433 -U sa -Q 'SELECT name,physical_name from sys.master_files' -P $PASSWORD

And we’ll get this output, you can see all of the system databases backed by /var/opt/mssql and our user database is on /data and the log is on /log. All backed by Persistent Volumes.

master        /var/opt/mssql/data/master.mdf
mastlog       /var/opt/mssql/data/mastlog.ldf
tempdev       /var/opt/mssql/data/tempdb.mdf
templog       /var/opt/mssql/data/templog.ldf
modeldev      /var/opt/mssql/data/model.mdf
modellog      /var/opt/mssql/data/modellog.ldf
MSDBData      /var/opt/mssql/data/MSDBData.mdf
MSDBLog       /var/opt/mssql/data/MSDBLog.ldf
TestDB1       /data/TestDB1.mdf
TestDB1_log   /log/TestDB1_log.ldf
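
By the way, if you’d rather not parse the service output with grep and awk, jsonpath will hand you the LoadBalancer’s external IP directly:

# pull just the external IP of the LoadBalancer service
SVCIP=$(kubectl get svc mssql-deployment -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $SVCIP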

Confirming Persistency

Let’s go ahead and delete our Pod to confirm that when it’s recreated by our Deployment our data is still there. 

kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
mssql-deployment-df4cf5c4c-nf8lf   1/1     Running   0          4d2h

kubectl delete pod mssql-deployment-df4cf5c4c-nf8lf 
pod "mssql-deployment-df4cf5c4c-nf8lf" deleted

Once the Pod is recreated, let’s query master files to see where our databases are located. And you’ll find that the database created in the previous step persisted between Pod creations.

sqlcmd -S $SVCIP,31433 -U sa -Q 'SELECT name,physical_name from sys.master_files' -P $PASSWORD

master        /var/opt/mssql/data/master.mdf
mastlog       /var/opt/mssql/data/mastlog.ldf
tempdev       /var/opt/mssql/data/tempdb.mdf
templog       /var/opt/mssql/data/templog.ldf
modeldev      /var/opt/mssql/data/model.mdf
modellog      /var/opt/mssql/data/modellog.ldf
MSDBData      /var/opt/mssql/data/MSDBData.mdf
MSDBLog       /var/opt/mssql/data/MSDBLog.ldf
TestDB1       /data/TestDB1.mdf
TestDB1_log   /log/TestDB1_log.ldf