Speaking at PowerShell Virtual Group of PASS

This month I’ll be speaking to the PowerShell Virtual Chapter of PASS. The session is on Linux OS Fundamentals for the SQL Admin. At the core of the session we will introduce you to OS concepts like managing files and file systems, installing packages, using PowerShell on Linux, managing system services, commands and processes, and system resource management. This session is intended for those who have never seen or have very little exposure to Linux but are seasoned Windows or SQL administrators. Things like processes, memory utilization and writing scripts should be familiar to you but are not required.

Sign up now! https://attendee.gotowebinar.com/register/4762712017177605123

Wednesday, February 1, 12:00PM-1:00PM Eastern (GMT-5)


Abstract

PowerShell and SQL Server are now available on Linux and management wants you to leverage this shift in technology to more effectively manage your systems, but you’re a Windows admin! Don’t fear! It’s just an operating system! It has all the same components Windows has, and in this session we’ll show you that. We will look at the Linux operating system architecture and show you how to interact with and manage a Linux system. By the end of this session you’ll be ready to go back to the office and get started working with Linux with a fundamental understanding of how it works.

Interested in growing your knowledge about database systems? Sign up for our newsletter today!

Weekly Newsletter

This week we started our Centino Systems weekly newsletter. Check out the first edition here!

The newsletter is going to include the latest in SQL Server and other things in technology that I think are important or interesting…and maybe you will too!

So if you’d like to subscribe to the newsletter go ahead and sign up here!

New Pluralsight Course – LFCE: Advanced Network and System Administration

My new course “LFCE: Advanced Network and System Administration” is now available on Pluralsight here! If you want to learn about the course, check out the trailer here, or if you want to dive right in, check it out here!

This course targets IT professionals who design and maintain RHEL/CentOS-based enterprises. It aligns with the Linux Foundation Certified System Administrator (LFCS) and Linux Foundation Certified Engineer (LFCE) certifications, as well as Red Hat’s RHCSA and RHCE certifications. The course can be used both by the IT pro learning new skills and by the senior system administrator preparing for the certification exam.

Let’s take your Linux sysadmin skills to the next level and get you started on your LFCS/LFCE learning path.

If you’re in the SQL Server community and want to learn how Linux manages system services and performance, this course is for you too! You have heard that Microsoft is going to release a version of SQL Server for Linux, right? If not…read this!

The modules of the course are:

  • Managing Network Services – Dive deep into how systemd manages services and its other components
  • Monitoring System Performance – We look at core OS performance attributes for CPU, Disk IO and Memory utilization and how to monitor those
  • Advanced Package Management – Learn how to manage software on your systems and packaging your own RPMs for deployment in your data centers
  • Configuring and Managing Network File System – If you haven’t used NFS before watching this, you will after this module!
  • Configuring and Managing Samba – Get Linux to talk Windows and both share and access Samba resources.


Check out the course at Pluralsight!

Understanding Network Latency and Impact on Availability Group Replication

When designing Availability Group systems, one of the first pieces of information I ask clients for is how much transaction log their databases generate. *Roughly*, this is going to account for how much data needs to move between their Availability Group replicas. With that number we can start working towards the infrastructure requirements for their Availability Group system. I do this because I want to ensure the network has a sufficient amount of bandwidth to move the transaction log generated between all the replicas. Basically, are the pipes big enough to handle the generated workload? But bandwidth is only part of the story; we also need to ensure latency is low. Why? Well, we’re going to explore that together in this post!

Network Latency Defined

First, let’s define network latency. It’s how long it takes for a piece of information to move from source to destination. In computer networks, latency is often measured in milliseconds, sometimes microseconds on really fast networks. The most common way we measure network latency is with ping. The measurement ping provides is the time from when the ICMP request is sent until the time it is replied to. This is how long it takes to move a piece of information from source to destination. But the size of the data sent by default is only 64 bytes…that’s not very much data. So really, ping isn’t a good way to measure latency for data intensive applications. As you add more data to the transmission, your latency will increase due to the fact that the payload being transmitted is larger and those bytes have to be placed on the wire and read from the wire on the other side. This all contributes to your network latency. So we really want to measure what our network latency is with our data transmission size.

server0:~ demo$ ping 192.168.1.1

PING 192.168.1.1 (192.168.1.1): 56 data bytes

64 bytes from 192.168.1.1: icmp_seq=0 ttl=64 time=0.072 ms
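
To get a measurement closer to your application’s actual transmission size, you can tell ping to send a larger payload. Here’s a sketch using the -c and -s flags of the Linux and macOS ping implementations (flag names and output format vary by platform):

# Send 10 ICMP echo requests carrying a 1400-byte payload instead of the default 56 bytes
ping -c 10 -s 1400 192.168.1.1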


Network Latency and Availability Groups

Now let’s talk about Availability Group replication and network latency. Availability Groups replicate data over your network using Database Mirroring Endpoints, which are TCP sockets used to move data between the primary and its replicas. When designing Availability Groups, we often think about things in terms of bandwidth…how much data do I need to move between my replicas? But there’s another design factor you need to consider: network latency. Why? Hint, it’s not going to have anything to do with synchronous availability mode and HADR_SYNC_COMMIT waits. Let’s talk about some fundamentals of TCP for a second.

How TCP Moves Data

The unit of transfer in TCP is called a TCP segment. TCP requires positive acknowledgement. This means each segment sent must be acknowledged by the receiver by sending an acknowledgement back to the sender confirming receipt of the segment. From that acknowledgement, TCP tracks how long it takes for a receiver to acknowledge each segment. How long? That’s where network latency comes in…acknowledgements will take at least as long as your network’s latency.

Now if TCP waited for each and every TCP segment to be acknowledged before sending the next segment, our system would never fully realize the network link’s capacity. We’d only consume the bandwidth of one segment at a time, which isn’t very much. To overcome this, TCP has a concept called the “Congestion Window”, which means TCP can have several unacknowledged segments in flight at a point in time and thus consume more of our network connection’s capacity. The number of unacknowledged segments in flight depends on our network’s conditions, specifically latency and reliability. If our network latency is high, TCP will reduce the number of segments in the Congestion Window, which means our effective bandwidth utilization will be low. If the network latency is low, TCP will increase the number of unacknowledged segments in our Congestion Window and our effective bandwidth utilization will be high. If the link isn’t very reliable, TCP will decrease the Congestion Window’s size in response to unacknowledged segments, i.e. “dropped packets”, so if your network link is dropping packets for some reason…expect a reduction in throughput. The Congestion Window is also variable; TCP will increase and decrease its size as network conditions change.

Determining your maximum throughput is pretty easy, check out this calculator here. Enter your physical link bandwidth, your round trip latency time, and the size of your TCP window and you’ll get the maximum throughput of a single TCP stream, in other words…how fast a single TCP connection can transmit data.

Here’s a practical example: if you have a 1Gb link to your DR site and it has 40ms of latency, a single TCP stream will only use 13Mb/sec. Wow…huh? CIOs everywhere just cringed reading that.
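
If you want to see where that number comes from, here’s a quick back-of-the-napkin version of the same math in PowerShell, assuming the classic 64KB default TCP window (your actual window size will vary with window scaling and OS settings):

$windowBytes = 64KB                                  # assumed 64KB TCP window
$rttSeconds  = 0.040                                 # 40ms round-trip latency
$throughputMbps = ($windowBytes * 8) / $rttSeconds / 1000000
"{0:N1} Mb/sec" -f $throughputMbps                   # ~13.1 Mb/sec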

Now this is just the maximum for a single TCP stream; we can of course potentially have multiple streams sending. In AGs, each database mirroring endpoint uses a TCP stream between the primary and each secondary replica in a standard Availability Group (not Distributed).

How Does Network Latency Impact My Availability Group’s Availability?

  1. Effective bandwidth can decrease – As network latency increases, effective bandwidth decreases…this means our transaction log throughput is reduced.
  2. Availability Group Flow Control – AGs track the number of unacknowledged messages sent from primary to secondary and if this gets too high the primary will enter flow control mode and will slow down or stop sending messages to the secondary. What can cause the number of unacknowledged AG messages to increase…network latency. A secondary can initiate flow control too; if it’s experiencing resource contention, it will message the primary and say slow down.

In both of these cases our ability to move transaction log between replicas is reduced due to network latency and this can impact availability. If you want to know more about how AGs move data check out this post and this one. Also it’s important to note, these issues are present regardless of AG availability mode. Synchronous-commit or asynchronous-commit, this will impact your replication latency. In fact, this doesn’t just apply to AGs, it’s ANY single TCP stream.

Where do we go from here?

Well, if you’re designing or operating high availability systems using Availability Groups, make sure you understand your network infrastructure and its performance characteristics. If you have AG replicas in a remote data center and it’s across a high latency link…test your workload to ensure you’re getting the throughput you need for your transaction log volume. You may have to tune your TCP settings on your operating system to better handle high latency connections or even have to make network changes to support your workload. 
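
For example, on Linux the relevant knobs are TCP window scaling and the maximum socket buffer sizes. Here’s a sketch of where to look first (inspect only; don’t change anything without testing against your workload):

# Inspect the current window scaling setting and send/receive buffer limits on a Linux host
sysctl net.ipv4.tcp_window_scaling net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem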

If you want to dive deeper into how TCP works check out my Pluralsight course – LFCE: Advanced Linux Networking. While it’s not a SQL Server course, the TCP concepts are the same.

Want more SQL goodness like this sent straight to your inbox? Sign up for our newsletter here!

Microsoft Most Valuable Professional – Data Platform

Today, I’m proud to announce that I have been named a Microsoft MVP – Data Platform. This is an exceptional honor and I’m humbled to be included in this group of exceptional data professionals. I really look forward to working with everyone in the MVP community and continuing to contribute to our unmatched SQL Community!


What is an MVP?

Here’s the definition according to Microsoft:

Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community. They are always on the “bleeding edge” and have an unstoppable urge to get their hands on new, exciting technologies. They have very deep knowledge of Microsoft products and services, while also being able to bring together diverse platforms, products and solutions, to solve real world problems. MVPs are driven by their passion, community spirit and their quest for knowledge. Above all and in addition to their amazing technical abilities, MVPs are always willing to help others – that’s what sets them apart.

For 2017, I have been named a Data Platform MVP, which means my technical specialization is on data products like SQL Server. The group of people that have received this award is quite small…by my count 403 worldwide and 100 in the US. I’m honored to be in this group of extremely talented professionals.

Why I’m excited to be an MVP

Honestly, the primary reason I’m excited to be an MVP is to give back (more). I’ve learned so much from other MVPs, and receiving this award will help me build relationships with other MVPs and Microsoft employees to further help develop the Data Platform itself and the community that surrounds that platform.

At the start of 2016 I had set a goal of being an MVP in 5 years. I don’t know why I picked that number, but what I figured was…MVP would be validation of consistent, quality work for our community and being recognized for the work that I’ve contributed. Things like blogging, social media, public speaking and more. You learn a ton by teaching!

People that have helped along the way

I’d like to thank some folks that have helped me along the way…

  • My wife and family – I certainly couldn’t have done this without their support.
  • Other MVPs – you folks give your time freely and people like me consume what you produce to enrich ourselves. Thank you!
  • Paul Randal – I was in Paul’s 2015 mentoring class, he helped me set the direction of my community involvement. Invaluable guidance!
  • Brent Ozar – without his career blog I’d have had to figure out a lot of stuff on my own. Thanks bud!
  • Steve Jones – he and SQLServerCentral.com have really helped give my blog a larger audience. I’ll never forget the first time I got an email about being on the front page of his site :)
  • Microsoft – thanks to you for this recognition!

Speaking at SQLSaturday Nashville!

I’m proud to announce that I will be speaking at SQL Saturday Nashville on January 14th 2017! This will be my first speaking event this year and I look forward to seeing you there! 

If you don’t know what SQLSaturday is, it’s a whole day of SQL Server training available to you at no cost!

If you haven’t been to a SQLSaturday, what are you waiting for? Sign up now!

My presentation is Performance Monitoring AlwaysOn Availability Groups (which is one of my favorite sessions).

This is an updated session including new Availability Group Monitoring Extended Events and SQL 2016!


Here’s the abstract for the talk:

Have you deployed Availability Groups in your data center? Are you monitoring your Availability Groups to ensure you can meet your recovery objectives? If you haven’t, this is the session for you. We will discuss the importance of monitoring and trending Availability Group replication, how AGs move data between replicas and the impact replication latency can have on the availability of your systems. We’ll also give you the tools and techniques to go back to the office and get started monitoring and trending right away!

SQL Server on Linux – How I think they did it!

OK, so everyone wants to know how Microsoft did it…how they got SQL Server running on Linux. In this article, I’m going to try to figure out how.

Update: Since the publication of this post, Microsoft has published a blog post detailing the implementation here

There’s a couple of approaches they could take…a direct port or some abstraction layer. A direct port would have been hard; basically any OS interaction would have had to be looked at, and that would have been time consuming and risk prone. Who comes along to save the day? Abstraction. The word you hear about a million times when you take Operating Systems classes in undergrad and grad computer science courses. :)

Well, things are finally starting to come to light on how it was done. I had a Twitter conversation this weekend with Slava Oks, who is a leader on the project team, and several other very active people in the SQL Community: Klaus Aschenbrenner, Ewald Cress, and Lonny Niederstadt. This got my gears turning…to find out…how they did it!

What do we know so far?

So here’s what we know: there’s some level of abstraction going on using a layer called the SQL Platform Abstraction Layer (SQLPAL) and also some directly ported code via SQLOSv2. From a design standpoint this is a pretty good compromise. Check out Figure 1; here you can see SQLPAL sits between the Database Engine and the underlying operating system, whichever one it may be: Windows, Linux and oh yeah, “other OS in Future” :)


Figure 1 – SQL Server on Linux – source @SQLRockstar

Background information

So to understand how we got here, it’s worth looking at the Drawbridge project from Microsoft Research. Drawbridge is basically application, or more specifically, process virtualization with a contained OS inside that process space. This is called a picoprocess. Since the process is abstracted away from the underlying operating system, the process will need some part of an OS inside its address space. This is called the Library OS. With that abstracted away, each process has a consistent view of its own operating environment. In Figure 2, you can see the Library OS and its roots into ntoskrnl.dll, which is an NT user-mode kernel. This provides a consistent OS interface for the application. Essentially program code doesn’t need to change.

Now it’s up to the picoprocess as a whole to provide some abstraction back to the actual operating system and that’s where the Platform Abstraction Layer (PAL) comes in. All that’s left is to provide an application binary interface for the picoprocess and you have a completely self-contained process without the need to interact directly with the host operating system. This is amazing stuff!


Figure 2 – Drawbridge Architecture – Source MS Research

 

SQLPAL – SQL Server Platform Abstraction Layer

So, I wanted to see this in action. In the Windows world, hard core SQL people are familiar with attaching a debugger to a SQL process and loading debug symbols to get a view into what’s going on inside of SQL Server. Well, in Linux we can do the same, and it’s a LOT easier. On Linux, there’s a tool called strace, which will give you a view into your program’s execution and any interactions it has with the OS. So I launched SQL Server with strace and here’s what I found.

So to launch strace and SQL Server, we add the SQL Server binary as a parameter to strace. Caution: do not do this as root, as it may cause a permission issue with log files generated by the sqlservr process. Use sudo to change to the mssql user.
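
For example, something like this will get you a shell as that account before you start tracing (assuming the mssql user created by the package install):

# Switch to the mssql service account (use sudo -u mssql <command> instead if the account has no login shell)
sudo su - mssql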

[mssql@rhel1 ~]$ strace /opt/mssql/bin/sqlservr


The first thing you’ll see is a call to execve, which is a Linux system call to start a new process. A regular old Linux process. So that means that sqlservr is a program binary compiled for Linux.

execve(“/opt/mssql/bin/sqlservr”, [“/opt/mssql/bin/sqlservr”], [/* 24 vars */]) = 0


At this point we see it loading all the local natively compiled libraries required for the program. Here’s one example: open is a system call to open a file; subsequent reads will occur when needed. There are many more libraries loaded.

open(“/lib64/libstdc++.so.6”, O_RDONLY|O_CLOEXEC) = 3


Now we see something interesting: a load of a library called libc++abi.so.1. This file is in the /opt/mssql/lib/ directory and is shipped in the SQL Server package. So my guess is that this is the application binary interface for SQL Server’s picoprocess.

open(“/opt/mssql/bin/../lib/libc++abi.so.1”, O_RDONLY|O_CLOEXEC) = 3


Now we see a transition into Drawbridge-like functionality with the opening of system.sfp. This looks like it’s responsible for setting up the OS-like substrate for the application’s execution environment.

open(“/opt/mssql/lib/system.sfp”, O_RDONLY) = 3


During the load of system.sfp, we see several library, registry and DLL loads that look like they’re responsible for setting up the kernel-level abstraction.

pread(3, “Win8.dbmanifest\0”, 16, 4704) = 16


Reading in the registry? Man that’s never going away :)
 

pread(3, “windows.hiv\0”, 12, 4753)     = 12


Reading in NtOsKrnl.dll, the NT user-mode kernel

pread(3, “NtOsKrnl.dll\0”, 13, 5123)    = 13


The next SFP we see load is system.common.sfp. This looks to be a second stage boot process, perhaps Drawbridge’s Library OS?

open(“/opt/mssql/lib/system.common.sfp”, O_RDONLY) = 4


During this phase we see many other DLLs loading. Looks like we’re setting up an environment…here’s an example of something loaded at this time. Clearly higher level OS provided functionality.
 

pread(4, “kerberos.dll\0”, 13, 15055)   = 13

 
After a few more SFP files are opened for certificates and NetFX4, we end up at sqlservr.sfp. Inside here, it loads things familiar to deep-dive SQL Server pros…first we see the program binary sqlservr.exe load, then SqlDK.dll, sqllang.dll, SQLOS.dll, and sqlmin.dll. I omitted some output for readability.

open(“/opt/mssql/lib/sqlservr.sfp”, O_RDONLY) = 7

…omitted

pread(7, “sqlservr.exe\0”, 13, 13398)   = 13

…omitted

pread(7, “SqlDK.dll\0”, 10, 14079)      = 10

…omitted

pread(7, “sqllang.dll\0”, 12, 14382)    = 12

…omitted

pread(7, “SQLOS.dll\0”, 10, 14418)      = 10

…omitted

pread(7, “sqlmin.dll\0”, 11, 14511)     = 11


And finally, we end up with application output, something we’ve all seen…SQL Server starting up.

nanosleep({999999999, 0}, 2016-11-17 14:11:37.53 Server      Microsoft SQL Server vNext (CTP1) – 14.0.1.246 (X64) 

Nov  1 2016 23:24:39 

Copyright (c) Microsoft Corporation

on Linux (Red Hat Enterprise Linux)


Oh, and now it makes much more sense why SQL Server on Linux uses Windows-like file paths inside the application, right? Well, think it through: SQL Server is interacting with an operating system that it thinks is still Windows, via the platform abstraction layer.
 

2016-11-17 14:11:37.53 Server      Logging SQL Server messages in file ‘C:\var\opt\mssql\log\errorlog’.

SQLOSv2

So in that Twitter conversation I had with Slava and others, we learned it’s not a straight PAL, but a SQL Server specific PAL. This allows the product team to provide another path to the underlying OS for performance sensitive code. Look back at Figure 1 and you’ll see two paths from SQL Server into SQLPAL. One uses the Win32 APIs, likely provided by Drawbridge (or some variant), and the other is perhaps natively compiled code…really that’s just a guess on my part.

Final thoughts

All in all, this is a pretty awesome time we’re getting into…Microsoft embracing Linux, SQL on Linux, PowerShell on Linux. I’ve said this many times…Windows, Linux…it’s just an OS. I would like to thank Slava for his insight and also the product team for a fantastic preview release. It’s amazing how seamless this really is.

In a sidebar conversation with Ewald, he made the point that as SQL Server professionals our investment in the understanding of SQL Server’s internals will persist with this implementation. Which I think is a huge relief for those that have invested years into understanding its internals!

Please leave some comments on what your thoughts are on how this works. If you want to contact me directly, you can reach me at aen@centinosystems.com or @nocentino

 

Disclaimer

Well, if you made it this far…awesome! I want you to know, I don’t have any inside knowledge of how this was developed. I basically sat down and traced the code with the techniques I showed here.  

References 

https://www.microsoft.com/en-us/research/project/drawbridge/

https://blogs.msdn.microsoft.com/wsl/2016/05/23/pico-process-overview/

Building Open Source PowerShell

Open Source PowerShell is available on several operating systems, and that’s really what’s special about the whole project! To get PowerShell to function on these various systems we need to build (compile) the software in that environment. This is what produces the actual executable program that is powershell.

To facilitate the build process the PowerShell team has documented how to do this for the currently available platforms: Linux, macOS and Windows. In this post I want to talk about why this is important, point you to the resources available online to help you build Open Source PowerShell, and tell you about my experiences building PowerShell on Windows, macOS and Linux!

Why would one want to build PowerShell?

Well, for me, I’m an internals geek and I want to be able to debug running PowerShell code so I can follow the flow of control during program execution. This will enable me to learn the internals of certain commands. A great way to see what’s happening on the inside.

Another reason is perhaps you want to contribute: you can download the code, make a change and submit it to the PowerShell team for review. Pretty cool stuff at the “New Microsoft”. Following these steps you’ll be able to have a functioning environment to develop in.

Getting Started with building PowerShell

In general building complex software projects is not a trivial task, but the PowerShell team has done an exceptional job making this as easy as possible for everyone. The documented build processes leverage PowerShell scripts for installing the appropriate dependencies on your system and then managing the build process itself. At a high level, it’s really five easy steps. For more details on building for your platform, check out the links at the bottom of this post.

  1. Download the code from GitHub
  2. Install PowerShell
  3. Import the build module – build.psm1
  4. Install the build dependencies (toolchain setup) with Start-PSBootstrap
  5. Build PowerShell with Start-PSBuild (once this is finished, you’ll have a powershell executable)
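
Here’s roughly what those five steps look like end to end, as a sketch of the documented flow run from an existing PowerShell session (cmdlet names come from the repository’s build.psm1 and may change over time):

git clone https://github.com/PowerShell/PowerShell.git
cd ./PowerShell
Import-Module ./build.psm1      # load the build helper module
Start-PSBootstrap               # install the toolchain and build dependencies
Start-PSBuild                   # compile; this produces the powershell executable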

My notes on building PowerShell

  • On the Linux side of things, it was VERY easy. The PowerShell team includes installation of all build dependencies and package installation for things like make and g++ inside Start-PSBootstrap. Then I built PowerShell with Start-PSBuild.
     
  • Well, Windows was pretty easy too, but I had to install a few things manually. First I installed Visual Studio 2015, added the required C++ components, installed Chocolatey, installed cmake, downloaded the PowerShell source, ran Start-PSBootstrap to get the build dependencies, then built PowerShell with Start-PSBuild.
     
  • macOS was a little tougher. I updated my installation of Xcode, installed Homebrew, installed cmake, downloaded the source, ran Start-PSBootstrap to get the build dependencies, then built PowerShell with Start-PSBuild. It failed with this error (which has since been corrected):
     
      deprecated in macOS 10.12 – syscall(2) is unsupported; please switch to a supported interface. For SYS_kdebug_trace use kdebug_signpost(). [-Werror,-Wdeprecated-declarations]

            tid = syscall(SYS_thread_selfid);
                  ^

      /usr/include/unistd.h:733:6: note: ‘syscall’ has been explicitly marked deprecated here

      int      syscall(int, …);
               ^

    • Basically what’s going on here is there’s a deprecated system call on macOS 10.12 which causes the compilation to fail. To get the build to work, I changed the function to just return 0. Doing this will likely break something, so I’m not suggesting you do this. I just did this to get the build to work. I’ve submitted this issue to the PowerShell team via GitHub here

What’s next?

Well, now that I’m able to get Open Source PowerShell built on three major operating systems I’m going to take some time using debugging techniques on each to see what’s going on under the hood inside of PowerShell when I execute certain commands. And of course, up first…Get-Process ;)

Resources for building Open Source PowerShell 

Here are some resources for you to get started working with the PowerShell Projects. 

Configuring Passwordless PowerShell Remoting over SSH

Open Source PowerShell has been on fire, getting tons of community support and really making people think about what’s to come with a single language to manage a heterogeneous data center.

To highlight this point, in my recent Pluralsight Play by Play, Microsoft Open Source PowerShell on Linux and Mac with Jason Helmick and Jeffrey Snover, I did a demo on using PowerShell remoting where I connected from a Linux machine to three other machines and retrieved lists of top processes from each…two Linux and one Windows. I used one script to accomplish this and no passwords. A simple implementation highlighting a very big idea. Afterwards, some people asked…how did I do this without passwords?

Open Source PowerShell Remoting uses SSH as its communication protocol, so when we connect to a remote system using PowerShell Remoting we’ll need to enter a password. Normally SSH requires passwords to log into remote systems but it also allows for what’s called passwordless authentication, which means users can log into remote systems without having to key in a password. It does this, securely, by using a key pair to authenticate the user to the server. Basically you generate a key pair, copy the public key to the remote server and there you have it…you no longer have to enter a password when you SSH into the remote system. Let’s see how this is done.

You need a couple things to set up this demo:

  1. A user account with the same name on each computer – create a user on each machine, Linux and Windows, with the same username.
  2. OpenSSH configured on all hosts – easy on Linux. It’s there by default. On Windows check out this link. Once you complete the installation of OpenSSH on your Windows system, test logging into that system from a remote computer with SSH. This will use the password for a user on that Windows system (likely the one you just created in step 1). If that doesn’t work, you won’t be able to proceed.
  3. Open Source PowerShell installed on all hosts – check out this link here
  4. Enable PowerShell Remoting over SSH – check out this link here. Once you have this configured, be certain to test PowerShell remoting, using passwords. Test Linux to Linux and also Linux to Windows. 

Now once we have the ability to connect to our hosts with SSH and we’ve confirmed we can use PowerShell SSH Remoting, we can move on to configuring passwordless authentication. 

First, on your Linux machine (I’m using a Mac, but there literally is no difference here) you can use your existing public key if you have one, which is stored in your home directory in .ssh/id_rsa.pub or you can generate a new one. 

To generate a new SSH key pair on your Linux machine:

  1. Type ssh-keygen
  2. The program will ask you for a file name, just press enter
  3. It will then ask you for a passphrase, press enter again and once more to confirm

You should get output that looks like this:

Demo-MacBook-Pro:.ssh demo$ ssh-keygen 

Generating public/private rsa key pair.

Enter file in which to save the key (/Users/demo/.ssh/id_rsa): 

Enter passphrase (empty for no passphrase): 

Enter same passphrase again: 

Your identification has been saved in /Users/demo/.ssh/id_rsa.

Your public key has been saved in /Users/demo/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:g5SyXmke+OAmYSl4nxc4wcRnsyeDO6RE9/Q9FKlcpKY demo@demo-MacBook-Pro.local

The key’s randomart image is:

+---[RSA 2048]----+

|   ..    .oo     |

|  .oo =. .+      |

| . .+*o=o=       |

|. ..oB=== o      |

|o.=o*.E+S  .     |

| +.=oO o .       |

|  . *.+          |

|   o .           |

|                 |

+----[SHA256]-----+

 
Copy the public key from the Linux machine to the Windows Server

Now copy the contents of the id_rsa.pub file you just created to C:\Users\username\.ssh\authorized_keys on your Windows machine, where username is the user you want to use for Remoting. You’ll likely copy and paste the contents of this file to the remote computer…if you do this, ensure the contents are all on one line. I’m not going to go into how to configure this on Linux as there are plenty of blogs about that – check one out here.
 
You’ll want to make sure you copy the same public key to all the hosts you’d like to authenticate with from this private key. In our case, the two Linux machines and the Windows machine have the same public key in the authorized_keys file on each server, inside user accounts with the same name.
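
For the Linux hosts, ssh-copy-id will do the copy for you if your client has it; it appends your public key to the remote user’s ~/.ssh/authorized_keys and sorts out the file permissions (host names here are hypothetical):

ssh-copy-id demo@linux1
ssh-copy-id demo@linux2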
 
Confirm you have Authorized Keys configured on your Windows SSH server
 
Now on the Windows machine, in C:\Program Files\OpenSSH\sshd_config, verify that this line is uncommented, which it should be by default. If not, uncomment it and restart the ssh service. This is the place SSH will look for keys when a user logs into the system via…SSH.

AuthorizedKeysFile .ssh/authorized_keys

Confirm SSH passwordless access from Linux (or Mac) to Windows

With that you should be able to connect from your Linux (or Mac) to your Windows machine from the machine where you generated your SSH key without any password. Likewise for your Linux machines.

Demo-MacBook-Pro:~ demo$ ssh demo@172.16.94.9

Microsoft Windows [Version 10.0.14393]

(c) 2016 Microsoft Corporation. All rights reserved.

 

demo@DESKTOP C:\Users\demo>

Let that sink in for a second, I just SSH’d into a Windows machine…

…and finally connect via PowerShell remoting over SSH with passwordless authentication

OK now we’re in the home stretch…we can now create a PowerShell remoting session over SSH with passwordless authentication. 

PS /Users/demo> Enter-PSSession -HostName 172.16.94.9 -UserName demo

[172.16.94.9]: PS C:\Users\demo\Documents>

And there we have it, we’re able to connect using PowerShell Remoting over SSH without a password.
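
And once the keys are in place, scaling this up to the multi-machine demo I described earlier only takes a few lines. Here’s a minimal sketch, assuming hypothetical host names and a demo user whose public key is installed on every target:

# Query the top CPU consumers on several machines over SSH-based PowerShell Remoting
$sessions = New-PSSession -HostName linux1, linux2, windows1 -UserName demo
Invoke-Command -Session $sessions -ScriptBlock {
    Get-Process | Sort-Object CPU -Descending | Select-Object -Property Name, CPU -First 5
}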

Questions about Linux? PowerShell? Please feel free to ask aen@centinosystems.com or on Twitter @nocentino

 

5 Must Haves Before You Start Consulting

Please join me at IT/Dev Connections on Oct. 12 at 8:00AM* where I’ll be hosting a Birds of a Feather session, “Moving to Independent Consulting”. Bring your questions!

*Yes, an 8:00AM session in Las Vegas, but if you’re serious about going out on your own…you’ll already be up :)

The most common questions I’m asked during networking sessions at technical conferences and events aren’t technical! People want to know what it’s like being an independent consultant. Things like how to get started and what to look out for are common themes. So I wanted to share some of the discussion points I bring up when I’m having these conversations. In this post I’m going to boil it down to the top 5 “must haves” before you start consulting; there’s certainly more…many books have been written about it!

  1. Defining Your Niche 

    This is what you’re going to sell, the thing that your client wants or needs. It’s crucial that you specialize in an area. For me, I have a very wide breadth of knowledge but I also have extraordinary depth in many areas. This is due to the excessive :) amount of education and training I’ve put myself through and also my career experiences. That all makes me an exceptional problem solver. The domain of the problem doesn’t matter that much. Give me the information and I’ll work out a solution. But guess what, “problem solver” doesn’t sell! Why? Because when people are looking for consultants, they’re looking for someone to make their problems go away. These are usually very well defined problems. So define what you’re exceptional at doing; that’s what you’re going to sell. Write it down. Try to build a paragraph out of those ideas. That will be your pitch to your client. This is such a crucial step. It defines who you are to your client. For me, I’ve used marketing consultants and mentors to help define my niche. The consultants I’ve worked with are worth every penny and the mentors are invaluable. The funny thing is I’m still fine tuning this.

  2. Finding the Right Client 

    Once you know what your niche is, you need to identify who you’re marketing to, the consumer of your services. I’d like to be able to say that this “must have” is the most important, but they’re all so crucial to success. Who purchases your services and what does that client look like? For me, the people that want my services are Chief (CIO) or Director level people that have a well defined problem to solve that they can’t solve with their internal resources. This can be a system performance issue, high availability design related or an overall system scalability issue. These are the people that make the decisions and sign the contracts.

    Now the people I work with are the individual contributors on the teams. The architects, engineers and administrators; we develop the solutions and solve the problems, together. What I’ve learned through the years is I like working in smaller teams that have big, interesting problems. So in this sense, size matters. Smaller teams are more agile, and as an individual consultant I can affect more positive change in a smaller amount of time. This isn’t entirely going to exclude a potential client, but it is something I look at closely when onboarding a new client. Because…personality matters! You need to find a group that you sync up with well. Would you want to go out after work with your team? For me that’s a big facet of finding the right client. Because when you’re in a conference room for hours working out a solution, if you get along with your client, everything will work better.

    What this all boils down to is…don’t just take any work. This idea is core to your success. You need to be happy with the work you’re performing and who you’re performing it for. If you’re enjoying it, you’ll produce better results and your client will be happy. Simple enough.

  3. Pricing Your Services 

    You’re worth more than you think; for whatever reason it’s human nature not to set your value accurately. It’s also our nature as consultants to want to make our clients happy. But when it comes to setting your rate…you both need to be happy. Think about it this way, if you give a client a huge discount today and later a perfect client comes along at your normal rate, who are you going to want to spend most of your time with? Your focus shifts and your original client isn’t getting the attention they deserve and their satisfaction decreases. Remember, we’re in the business of keeping clients happy! There’s tons of empirical data on the Internet for setting the actual dollar amount based on your skills so I won’t go into that. The key here is setting a value that you and your client are pleased with. After a while, your client will care less about your rate because you’re providing value. Solving problems, making their lives easier.

  4. Time Management

    I’m going to be honest, this is my Achilles heel. It’s hard. In fact, scheduling is proven to be NP-hard :) Again, there’s tons of data on the web about this, and here’s what I do.

    Time blocking – most of my clients have me on a retainer. I work for them for a fixed amount of time each month (this ties in with pricing; longer term contracts mean better rates for clients and more consistent work for me). But we’re in IT and some things will take longer than you expected, or sometimes something will blow up for one client when you’ve allocated that day to another client. So I allocate my calendar based on my commitments and leave a whole day, each week, for that potential skew. If a client loses time during their scheduled allocation because of a fire, I allocate time out of that extra day.

    Every day make a list – every morning I sit down and literally write down in a notebook what I need to get done that day. If it’s a big project, break it down into smaller tasks and do those. Doing this provides you a mental boost, a sense of accomplishment. It motivates you to keep moving.

    Get up early – I wake up around 4:30AM. Yeah, don’t laugh. I use this time to wade through the sea of email I get and make that list I just told you about. I also read blogs and do the social media thing during this time. It’s my time; the rest of the working day will be my clients’ time.

    Outsource everything you don’t like doing – Find things you can get rid of and give them to someone else to do for you.

    Billing – in theory this is not completely outsourced as I do my own time and billing. I use Freshbooks for my accounting package, which makes this insanely easy. Freshbooks does all my timekeeping for billable hours, invoicing and expenses. It literally takes me 10 minutes to send bills to clients that include line item details of hours worked and expenses with receipts attached. 

    Get an accountant – taxes are hard and time consuming. I used to like doing them myself, but I found I spent three to four days a year working on this. Not an effective use of my time. 

     

  5. Protecting You and Your Client

    Find an attorney you trust – Have him/her write a general contract for your services with your terms. This will be the base for your negotiations with your client. You’ll send it over to them and if they have a legal team, which many clients do, they’ll send back a version with revisions, and you’ll send that right back to your attorney. I have my attorney review every contract; my eyes literally cross when I read them (Disclaimer: I am not an attorney, but I offer my experiences to you as a consultant).

    Insurance – Be certain to have some sort of protection for yourself; there are many types of insurance for businesses. Some I’ve seen are general liability, professional liability and even cyber liability. In the grand scheme of things these don’t cost a lot of money and can really help you out if something goes south!

I hope this post gets you started on your road to independent consulting. Take the time to sit down and think about what your motivations are, set some goals, and like any technical project you’ve ever worked on, build a plan and do all the thinking up front!

Check out these references I used in this post – 

The Secrets of Consulting  – Gerald Weinberg

Brent Ozar’s Personal Blog