Tag Archives: PowerShell

OpenSSH Resources for Windows and PowerShell

Had a conversation with a good friend in the SQL Community about OpenSSH and how it fits as a transport layer for PowerShell Remoting. I pointed him towards several resources I have online. So here’s a post aggregating those resources.

If you’re looking to get started with OpenSSH on Linux and Windows systems, check out this PowerShell Summit presentation I did in 2018. This covers OpenSSH in theory and practice.

Session: OpenSSH Internals for PowerShell Pros

  • Remote Access Concepts
  • OpenSSH Architecture
  • Authentication Methods (including Key Based Authentication)
  • OpenSSH Server and Client Configuration

If you’re looking to get started with OpenSSH based PowerShell Remoting check out this session from PowerShell Summit in 2018 co-presented with Richard Siddaway.

Session: PowerShell Remoting: Installing and Troubleshooting in a Multiplatform Environment

  • Installing OpenSSH on Windows and Linux
  • Authenticating Users (Including AD Authentication)
  • Setting Up PowerShell Remoting on Windows and Linux
  • Troubleshooting OpenSSH

Shortly after I did the sessions above, OpenSSH was released as a Windows Capability; check out this blog post on how to install OpenSSH at the command line using Add-WindowsCapability. This also applies to Windows Server 2016.

Session: Firewall Evasion and Remote Access with OpenSSH

  • OpenSSH is much more than just remote terminal access to servers; it provides a full suite of remote connectivity methods to your network and its services. In this session, we will look at how to use OpenSSH and its forwarding, tunneling and VPN capabilities so that we can securely reach network services that are behind firewalls and other security boundaries. Common use cases for these techniques are cloud jump boxes, secure access into segmented networks and being able to get remote access and move data around in poorly secured networks…these tips are things that will likely get you some extra attention from your security team

Blog Post: Configuring Passwordless PowerShell Remoting over SSH

Blog Post: Distributing SSH User Keys via PowerShell

Blog Post: Installing OpenSSH Server on Windows 10

Blog Post: PowerShell Remoting in Multi-Platform Environments using OpenSSH

Using PowerShell in Containers

The vision for PowerShell Core is to be able to run PowerShell anywhere. In this article, I’m going to discuss how you can use Docker Containers to enable just that. We’ll look at running PowerShell in a container, running cmdlets, running different versions of PowerShell at the same time, and also how to build our own “serverless” computing platform.

Let’s address a few reasons why you would want to run PowerShell in a container.

  • Speed and agility – this for me is probably the number one reason to run PowerShell in a container. The PowerShell container images come in at around 375MB, which means with a modern Internet connection you’ll be able to pull a PowerShell container image and be up and running in very little time.
  • Version – there are container images available for every release of PowerShell Core, including preview/release candidate code. With containers, you can run multiple versions of PowerShell Core in a way where they will not conflict with each other.
  • Platform independence – there are container images for Ubuntu, Fedora, Windows Server Core, Nano Server and more. This allows you to be able to consume PowerShell Core regardless of your underlying platform. You can select whichever image you want, pull the container and go. 
  • Testing – if you need to test your scripts across various versions of PowerShell Core, you can pull the container and run the script on the exact version you need. You can have multiple containers on your system running multiple versions of PowerShell, all at the same time.
  • Isolation – containers will allow you to have self-contained environments for execution, security, environment, and configuration settings. You can also use this idea to isolate conflicting modules from each other. This is particularly valuable when developing modules and/or cmdlets.

Getting Up and Running

Let’s get started with using PowerShell Core in a container. First up, we will want to pull the Docker container image to our local machine. This will pull the image with the latest tag, which at the time of this post is 6.2.0-ubuntu-18.04.
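Assuming the official PowerShell Core image on Microsoft’s container registry (mcr.microsoft.com/powershell), the pull looks like this:

docker pull mcr.microsoft.com/powershell:latest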

With the container image local, let’s go ahead and start up the container. In this first go, I’m going to start up the container with the docker run command and with the --interactive and --tty flags. What these flags do is attach to the terminal of the container when it starts, so I can use PowerShell Core interactively at the command line.
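Something like this, where pwsh-latest is the container name we’ll reference later in this post:

docker run --name pwsh-latest --interactive --tty mcr.microsoft.com/powershell:latest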

This will get you a PowerShell prompt. I told you this was going to be fast.

From that prompt, we can do the normal PowerShell things we need to do. Let’s start our journey like all good PowerShell demos do and run Get-Process. You’ll notice that there is only one process running in the container, and that’s your pwsh session. This is due to the isolation concepts of containers. With this isolation, problems like conflicting modules and settings go away. The container gives your script an isolated execution environment. If you need two conflicting versions of a module, DLL or library to run your workload or script…you can use a container to isolate their execution, giving them the ability to co-exist on the same system.

We can use exit to get out of PowerShell. When you exit PowerShell the container will stop. You can see the status of your container with docker ps.

 
If you’d like to get back into your container you can use docker start pwsh-latest -i where pwsh-latest is the container name we just created and -i is for interactive (we used --interactive earlier). Run that and you’ll land right back at a PowerShell prompt again.

Running a cmdlet When Starting a Container

Now, let’s say we wanted to start our container up and non-interactively run a cmdlet right away; we can do that. With the docker run command, we can tell the container that we want it to start pwsh and pass a cmdlet into pwsh with the -c parameter, and that cmdlet will be executed. Let’s check out how.
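A sketch of that, using Get-Process as the example cmdlet:

docker run mcr.microsoft.com/powershell:latest pwsh -c "Get-Process"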
 
From a performance standpoint, I want to point out the time it takes to do this work; we can use the time command to help us with that. Less than two seconds to start the container, start pwsh, execute our cmdlet and shut down the container.
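From a Linux or macOS shell, that measurement looks something like this:

time docker run mcr.microsoft.com/powershell:latest pwsh -c "Get-Process"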
 
Now let’s say I wanted to test a cmdlet execution against a specific version of PowerShell Core, perhaps even a Release Candidate. Let’s change the tag from latest to preview and docker will pull that container, start it up and we immediately have an environment for testing. This could be leveraged for script testing, cmdlet testing, module testing and so on. In the output below, you can see the preview tag points to the 6.2.0-rc1 version of PowerShell Core.
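A sketch of that, using $PSVersionTable to confirm which build we’re on:

docker run --interactive --tty mcr.microsoft.com/powershell:preview pwsh -c '$PSVersionTable'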
 
Now, each time we started a container so far in this post and then exited pwsh, the container shut down and was still on our system. We can see the containers with docker ps -a. We can restart any of these containers and get them back by using the command mentioned previously.
 
We can delete each container by name, using docker rm then specifying the name as a parameter. For example, docker rm pwsh-latest would delete that container.

Running a Script When Starting a Container

When a container is deleted, the data “inside” the container is deleted too. So if we created a script inside a container and then deleted the container, the script would go away too. In Docker, we can use a volume to help us with this. A volume allows us to store our data externally to the container; we can mount the volume inside the container and it looks like it’s part of the container’s file system.
 
With volumes, when we delete the container, the data stays inside the volume. We can then create a new container and attach the volume to that new container and the data will be there for us to work with.
 
Let’s start a container and attach a volume at the /scripts location of the container’s file system. Let’s also add the --detach parameter. This is going to start the container, start pwsh and then stop the container. Then I’m going to copy a script from my local file system into the container. The container does not need to be running for the copy operation to succeed.
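Here’s a sketch of those two steps, assuming a named volume called scripts:

docker volume create scripts
docker run --name pwsh-script --volume scripts:/scripts --detach mcr.microsoft.com/powershell:latest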
 
Here’s the code to copy the script from my local file system into the container where pwsh-script is the container name and /scripts is the location we want to copy the script to inside the container. This is the volume we attached to the container. The script is a simple hello-world script.
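Assuming the script file name used later in this post, the copy looks like this:

docker cp ./Get-Containers.ps1 pwsh-script:/scripts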
 
With that, let’s go ahead and remove the container. We used it just to copy the script into the volume. I kind of feel bad, but we’ll keep moving on.
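Removing it is a one-liner:

docker rm pwsh-script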
 
With that, let’s create a new container in interactive mode, with the volume attached. This will put us at a pwsh prompt.
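Something like this, with a hypothetical name for the new container:

docker run --name pwsh-script2 --interactive --tty --volume scripts:/scripts mcr.microsoft.com/powershell:latest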
 
Now, since our script is in the volume and we attached that volume when we created this new container, it’s available for us inside the container. Let’s go ahead and run that script inside the container and then delete the container with docker rm when it’s finished. 
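From the pwsh prompt inside the container, that sequence looks roughly like this (the last command runs back on the host):

/scripts/Get-Containers.ps1   # run the script from the attached volume
exit                          # exits pwsh, which stops the container
docker rm pwsh-script2        # back on the host, delete the container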

Sounds Like…Serverless?

Now let’s take that technique we just stepped through, where we started the container, ran a script and deleted the container, and combine all of that into one step. To do so, we’ll use the following command options for docker run. We specify the --rm option, which will delete the container when it exits, add the /scripts volume and tell pwsh to run the script that’s in our volume by specifying its location with the parameter -F /scripts/Get-Containers.ps1.
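Putting that all together looks like this:

docker run --rm --volume scripts:/scripts mcr.microsoft.com/powershell:latest pwsh -F /scripts/Get-Containers.ps1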
 
Now, with that last technique, we’ve encapsulated the entire lifecycle of the execution of that script into one line of code. It’s like this script execution never happened…or did it ;) All kidding aside, we effectively have a serverless computing platform now. Using this technique in our data centers, we can spin up a container, on any version of PowerShell on any platform, run some workload/script and when the workload finishes, the container just goes away. For this to work well, we will need something to drive that process. In an upcoming blog post, we’ll talk more about how we can automate the running of PowerShell containers in Kubernetes.
 
In this post, we covered a lot. We looked at how you can interactively run PowerShell Core in a container, how you can pass cmdlets into a container at runtime, how to run different versions of PowerShell Core, and how you can persistently store scripts outside of containers in volumes and run those scripts in your containers. We also looked at how you can encapsulate the whole execution of a script and the container’s life cycle into one line of code, really giving you the ability to run PowerShell Core anywhere on any platform.
 
I hope you enjoyed this and are as excited as I am about how we can leverage this technology to solve new and unique problems in your data center and IT operations.
 

Speaking at PowerShell Summit 2019


I’m proud to announce that I will be speaking at PowerShell + DevOps Global Summit 2019! The conference runs from April 29th 2019 through May 3rd 2019. This is an incredible event packed with fantastic content and speakers. Check out the amazing schedule! All the data you need on going is in this excellent brochure right here!

This year I have two sessions!

On Tuesday, April 30th at 11:00AM – I’m presenting “Firewall Evasion and Remote Access with OpenSSH”

Here’s the abstract

OpenSSH is much more than just remote terminal access to servers; it provides a full suite of remote connectivity methods to your network and its services. In this session, we will look at how to use OpenSSH and its forwarding, tunneling and VPN capabilities so that we can securely reach network services that are behind firewalls and other security boundaries.

Common use cases for these techniques are cloud jump boxes, secure access into segmented networks, and being able to get remote access and move data around in poorly secured networks…these tips are things that will likely get you some extra attention from your security team.

We will look at the following techniques:
– Accessing remote services with SSH Tunneling
– Building SSH connections for multi-hop remote access using ProxyJump
– Proxying HTTP/HTTPS connections with a SOCKS proxy
– Using aliases to store these advanced configurations for easy use
– Controlling and preventing TCP tunneling 

On Thursday, May 1st at 1:00PM – I’m presenting “Containers – You Better Get on Board!”

Here’s the abstract

Containers are taking over, changing the way systems are developed and deployed…and that’s not hyperbole. Just imagine if you could deploy SQL Server or even your whole application stack in just minutes. You can do that, leveraging containers! In this session, we’ll get you started on your container journey, learn some common container scenarios and introduce deployment automation with Kubernetes.

In this session we’ll look at
– Container Fundamentals
– Common Container Scenarios
– Automation with Kubernetes
– Using PowerShell Containers 

I look forward to seeing you there.

Testing if a Port is Open With PowerShell

Ever want to confirm that a port is accessible from one computer to another? There’s a PowerShell cmdlet for that: Test-NetConnection. With the -Port option, this cmdlet will do a quick TCP three-way handshake with the remote system to confirm that the service is available and report back whether it succeeded. Check out that last line of output, TcpTestSucceeded: False. That indicates that this port is not accessible. You can see, however, that the system is reachable via ICMP (Ping), PingSucceeded: True, so we know that the remote system is alive, just not listening on the port we want to access.

This cmdlet is from Windows PowerShell 5.1

PS C:\Users\demo\Documents> Test-NetConnection server1.centinosytems.lab -Port 3389
WARNING: TCP connect to server1.centinosytems.lab:3389 failed

ComputerName : server1.centinosytems.lab
RemoteAddress : 172.16.94.10
RemotePort : 3389
InterfaceAlias : Ethernet0
SourceAddress : 172.16.94.11
PingSucceeded : True
PingReplyDetails (RTT) : 1 ms
TcpTestSucceeded : False 

In PowerShell Core, there’s a Test-Connection which in version 6.0 supports basic ICMP ECHO functionality.  In version 6.1, a -TcpPort option was added for testing connectivity to a port. The current implementation of the v6 cmdlet returns a Boolean based on the reachability of the port.
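A quick sketch against the same host as above, on PowerShell Core 6.1 or later:

Test-Connection server1.centinosytems.lab -TcpPort 3389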

If you really want to get under the hood and learn how networking…works. Check out my LFCE: Advanced Linux Networking course. In this course, we cover IP addressing, routing, TCP internals and troubleshooting. We use the Linux command line, not PowerShell, but it really shows you how things move around a network. It’s one of my most popular courses.

Installing OpenSSH Server on Windows 10

So in yesterday’s post we learned that the OpenSSH client is included with Windows 10, Update 1803! Guess what else is included in this release: an OpenSSH server! Yes, that’s right…you can now run an OpenSSH server on your Windows 10 system and get a remote terminal! So in this post, let’s check out what we need to do to get OpenSSH Server up and running.

First, we’ll need to ensure we update the system to Windows 10, Update 1803. Do that using your normal update mechanisms.

With that installed, let’s check out the new Windows Capabilities (Features) available in this update; we can use PowerShell to search through them.
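Something like this will list the OpenSSH capabilities:

Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'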

Now to install OpenSSH Server, we can use the Add-WindowsCapability cmdlet:
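The capability name below is what shipped with the 1803 update; the version suffix may differ on your system:

Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0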

To confirm its installation, we can use the Get-WindowsCapability cmdlet again; this time its state is “Installed”.

With that installed, let’s take a look at where sshd lives on our Windows system and that’s in C:\Windows\System32\OpenSSH\

On Windows systems, network daemons run as “Services”. We can see with the Get-Service cmdlet that the installer added sshd and also ssh-agent!

As you can see, the state is Stopped, so let’s start the services and also set them to start on boot.
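A minimal sketch of that:

Start-Service -Name sshd, ssh-agent
Set-Service -Name sshd -StartupType Automatic
Set-Service -Name ssh-agent -StartupType Automatic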

We can use netstat to see if we’re up and running
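sshd listens on TCP port 22 by default:

netstat -ano | findstr :22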

So now that it’s up and running, you should know that the configuration files and host keys live in C:\ProgramData\ssh\, so if you need to change the behavior of SSH you’ll head for the sshd_config file; when finished, restart your service with Restart-Service -Name sshd

You’ll likely need to open your Windows firewall, which can be done with the following cmdlet on PowerShell 5.1
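A sketch of the rule, opening the default SSH port:

New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22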

So let’s test it out; I’m going to ssh from my Mac into my Windows 10 laptop.
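With a hypothetical user and host name, that’s just:

ssh demo@<windows-10-host>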

And that’s it, you can now install OpenSSH server on your Windows 10 system. I can only imagine it’s a matter of time before this hits the server side of things! Bravo PowerShell Team, bravo!

OpenSSH is now Part of Windows!

Today is a big day! The OpenSSH client version 7.6p1 is now part of the Windows 10 operating system! Microsoft released Windows 10 Update 1803 and included in that release is the OpenSSH client, which is installed as part of the update.

That’s right, an SSH client as part of the Windows operating system by default! Also included with this update is the OpenSSH server, which is included as a Windows Feature on Demand.

Let’s take a look at what this is all made of!

Start off by updating your system to Windows 10, version 1803. You can do this via your normal Windows Update mechanism.

Here you see I have installed Windows 10, version 1803.

[Image: Windows 10, version 1803]

With that, let’s look at what we got in the update! We’ll search our Windows Capabilities (Features)
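Something like:

Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'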

Cool, so we know OpenSSH is installed, but where? Let’s check out C:\Windows\System32\OpenSSH\
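A quick directory listing:

Get-ChildItem -Path C:\Windows\System32\OpenSSH\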

Let’s look a little closer at ssh.exe
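One way to peek at the binary’s version details:

(Get-Item -Path C:\Windows\System32\OpenSSH\ssh.exe).VersionInfo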

So this looks like all of the usual suspects in an OpenSSH installation. But it does look like sshd.exe and ssh_config_default came along for the ride during the update even though we didn’t install the OpenSSH.Server Feature!  More on that in my next blog post…

A big shoutout goes out to the PowerShell team for making this happen, check out the project on GitHub. The code is here and the issues and releases are here!

Distributing SSH User Keys via PowerShell

Folks in the Linux world are used to moving SSH keys to and from systems enabling password-less authentication. Let’s take a minute to look at what it takes to use PowerShell to distribute SSH user keys to remote systems.

In the OpenSSH package there’s a command, ssh-copy-id, which is a bash script that copies a user’s public key to a remote system. There’s a little intelligence in the script to set things up properly on the remote system for password-less key based authentication. If the appropriate directory and key file aren’t set up, ssh-copy-id will create the directory and key file with the correct permissions on the remote system. As far as I can tell, ssh-copy-id has not been implemented in the Win32-OpenSSH port. So that leaves us with implementing this functionality ourselves, in PowerShell.

Since ssh-copy-id isn’t implemented in the OpenSSH port for Windows (because it’s a bash script), I wanted to replicate that functionality so that I could copy SSH user keys to systems consistently and easily. So I implemented this functionality in PowerShell.

Let’s walk through this…and first up, let’s discuss what’s needed for password-less, key based authentication.

The Components of Password-less Key Based Authentication

For password-less key based authentication to work, you need to copy the user’s public key from the local system you want to authenticate from to the remote system. On the remote system, this key file has to live in a place where the SSH daemon expects it, and that’s in the file ~/.ssh/authorized_keys by default.

Let’s take a second to look at the details of how this needs to be configured on a remote system.

  • authorized_keys – this is the default file in which user public keys are stored. The permissions on this file should be 600, which is read/write for the owner and no access for group or other/world.

-rw-------. 1 demo demo 412 Feb 18 08:53 .ssh/authorized_keys


  • ~/.ssh – the authorized_keys file lives in a hidden directory in your home directory. That’s what that syntax means: the ~ (tilde) is short for the current user’s home directory and the . (dot) prefix indicates that the directory is hidden. Now, the permissions on this directory should be 700, which is read/write/execute for the owner and no access for group or other/world. The execute bit on a directory gives you the ability to list the contents of the directory and enter it.

drwx------. 2 demo  demo         29 Feb 18 08:53 .ssh

It’s kinda like ssh-copy-id, but in PowerShell

First up, I’m assuming that you have SSH remoting already configured, have generated your SSH user key, and that you’re on a Windows, Linux or Mac system and want to copy an SSH user key to a Linux/Mac system. I plan on covering copying keys to Windows systems in an upcoming post. The only real difference between the two is how you set permissions on the .ssh directory and the authorized_keys file.
 
The first thing that we want to do is to create a PSSession to our host. We’ll reuse this session a few times to execute the required commands on the remote host. The demo user is the user we want to set up key based authentication for. This session creation will ask for our password. Hopefully this is the last time you have to type it ;)
 

$s = New-PSSession -HostName "172.16.94.10" -UserName demo


Then, we’ll read our public key from our local system into a variable. It’s imperative that you read the public key, id_rsa.pub. The other file, id_rsa, is your private key. That needs to stay on the system you want to authenticate from and needs to stay secure.

$key = Get-Content -Path ~/.ssh/id_rsa.pub


Next, we’ll want to check to see if the .ssh directory exists in the home directory of our user on the remote system. If not, create the .ssh directory.

Invoke-Command -Session $s -ScriptBlock { If(!(Test-Path -Path ~/.ssh)) { New-Item ~/.ssh -ItemType Directory } }

 
Now, with the directory in place, let’s be sure the permissions are set properly, and that’s 700 in octal notation.
 

Invoke-Command -Session $s -ScriptBlock { chmod 700 ~/.ssh  }  

 
After that, we can copy our key to the remote system’s authorized_keys file. We’ll take advantage of the Out-File cmdlet and use the -Append switch to handle file existence on the remote system and append our key to an existing file or create a new file if it doesn’t exist yet. All that fancy syntax around Invoke-Command is so we can pass a local variable into the Out-File cmdlet over our remoting session.
 

Invoke-Command -Session $s -ScriptBlock { param([string] $key) Out-File -FilePath ~/.ssh/authorized_keys -Append -InputObject $key } -Args $key

 
Now, with the file on the remote system, let’s ensure the permissions are set properly.
 

Invoke-Command -Session $s -ScriptBlock { chmod 600 ~/.ssh/authorized_keys  }

 
…and with that, let’s take it for a test run and see if we can open a PSSession without a password using Enter-PSSession
 

PS /Users/demo> Enter-PSSession -HostName server1 -UserName demo
[server1]: PS /home/demo> 

 
Now, there are a few things I want to point out. This code here is to highlight the steps needed to configure key based authentication. I certainly could (and should) make this code more production ready…but I’ll leave that up to you as the reader. What I really want to highlight here are the steps required for proper key distribution to remote systems: the directories, the files and the required permissions. Oh, and if you’re wondering why you wouldn’t just use ssh-copy-id…fan-out remoting. We can use this technique to easily distribute our keys to many systems at once.

I hope this helps you get an understanding of how key based authentication works, how to configure it and also how to get those keys out to your remote systems!

Speaking at PowerShell Summit 2018!

I’m proud to announce that I will be speaking at PowerShell + DevOps Global Summit 2018! The conference runs from April 9th 2018 through April 12th 2018. This is an incredible event packed with fantastic content and speakers. Check out the amazing schedule! All the data you need on going is in this excellent brochure right here!

This year I have two sessions!

On Tuesday, April 10th at 2:00PM – I’m presenting “OpenSSH Internals for PowerShell Pros”

Here’s the abstract

In PowerShell Core we can use OpenSSH as the transport layer to carry our remoting sessions between our systems. In this session we’ll look at OpenSSH architecture, authentication methods, including key authentication, sshd configuration, and troubleshooting methods when things go wrong!

In this session we’ll cover the following:
– OpenSSH Architecture
– Authentication methods
– Key based authentication
– sshd Configuration
– Troubleshooting OpenSSH

On Wednesday, April 11th at 9:00AM – I’m presenting a workshop with none other than Richard Siddaway on PowerShell Remoting – Installing and troubleshooting in a Multiplatform environment

Here’s the abstract

PowerShell Core is about choice and the transport layer for remoting is one of those choices. In this session we’ll look at remoting in Multiplatform environments, configuring both OpenSSH and WinRM based remoting and how we can leverage remoting to really scale up our administrative capabilities.

I look forward to seeing you there. If you’re on the fence about registering, don’t wait! Click here and do so now. It’s selling out fast!


Warning Handling in dbatools Automation Tasks

So I’ve been using dbatools for automated restore tasks and came across a SQL Server Agent job I wrote that was reporting success, but the job was actually failing.

What I found was the function I used, Restore-DbaDatabase, was not able to access the path that I was trying to restore databases from. The Restore-DbaDatabase function, and all dbatools functions according to the dbatools team on Slack, will throw a Warning rather than an Error by design.

When scheduling PowerShell scripts using dbatools in SQL Server’s Agent, we need to use the SQL Agent Subsystem CmdExec so we can load in additional modules. So we’ll have a SQL Agent job step that looks like this.

[Image: SQL Agent job step configured with the CmdExec subsystem]

 

Now, you see that line “Process exit code of a successful command” and it’s set to 0? Well, that’s the first thing that I tested. I wanted to see if the warning generated by Restore-DbaDatabase returned a non-zero value…it didn’t; it returns 0. You can check this by checking %ERRORLEVEL% when running the PowerShell script defined in this job step’s command box at the command line.

These scripts are very small; most only do one thing…restore a database. I want them to report failure when something goes wrong, so how can we get that warning to cause the SQL Agent job to report failure?

We have two options here.

Our first option is to adjust how our session handles warnings; we can do that by setting $WarningPreference to Stop.
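At the top of the script, that’s just:

$WarningPreference = 'Stop'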

Doing this will cause the script to stop executing when it hits the warning and then the job will report failure.  

Our next option is to use the -Silent parameter on our Restore-DbaDatabase function call. The -Silent parameter causes the warnings in our script to report as errors.
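A sketch of the call, with a hypothetical instance and backup path:

Restore-DbaDatabase -SqlInstance server1 -Path \\fileserver\backups -Silent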

Both of these options cause our CmdExec subsystem’s call to powershell.exe to return 1…which will cause our Agent job to report failure. This is exactly what I want!

One other thing I tested: both of these options cause the script to stop at the point of the error. When using -Silent, the function returns what it tried to do to standard output. When using $WarningPreference, I did not get that output.

Thanks to Friedrich Weinmann and Shawn Melton for helping me sort this all out!

T-SQL Tuesday

Thanks to SQL DBA with A Beard for this event – https://sqldbawithabeard.com/2017/09/05/tsql2sday-94-lets-get-all-posh/

Reflecting on the Last Year of Microsoft’s Open Source Technologies

This past year has certainly been interesting in the world of Linux. Microsoft has taken a new strategy and is embracing the open source model, releasing its key software products with versions for Linux. It’s truly a remarkable time. In this post I want to highlight some of the bigger events, cover what this means to you, and point out where you can get some training on these topics.

Here are some of the highlights from the last year

Microsoft becomes a Platinum Member in the Linux Foundation – this means Microsoft is committing itself to a long term investment in the Open Source community and continuing to develop open source software. Don’t believe me on the Open Source thing…well check out their GitHub repo. Who would have seen this coming? 

Now, let’s look at the new tools you have to build cross platform applications and develop your systems

  • .NET Core – Literally you can build native .NET applications to run on any platform, Windows, Linux, Mac…Docker!
  • Bash on Ubuntu on Windows – One of the primary reasons I bought my first Mac years ago was that I wanted a bash shell; well, now I’m not tied to this hardware anymore.
  • Visual Studio Code – With all this cross platform stuff, you’ll need a consistent development environment, VS Code runs on Windows, Linux and Mac. And it’s darn nice too. Very extensible with many languages available. 
  • SQL Server on Linux – This is the real deal, it’s fast and consistent with your existing SQL Server experience. I’ve blogged about it a bit :)
  • PowerShell Core – Microsoft adds another management tool to your tool belt with this. Windows, Linux and Mac can all be managed with one language. For me, this was mind blowing; I got to do a training video with Jeffrey Snover, literally the inventor of PowerShell, and MVP Jason Helmick! I blogged about PowerShell a bit too.

What does this mean to you?

So what does this mean to you? Get out there and start learning about this stuff and discover how it can impact you. In the coming years new solutions are going to be developed using these components, and it’s up to you to train yourself and learn how to leverage these tools to solve problems.

I’ve spent the last year developing some fun training at Pluralsight that I think you should check out. The training is based on the Linux Foundation Certified Engineer curriculum and takes you from installation up to a running Linux system.

  • Understanding and Using Essential Tools for Enterprise Linux 7 – If you’re new to Linux, start here! This course will help you install Linux and get oriented with the operating system and the command line interface.
  • LFCE: Advanced Network and System Administration – Next, you’ll need to learn how to control your system’s services, install packages, manage performance and share data between systems. Check this course out to make your Linux system really work for you.
  • LFCE: Advanced Linux Networking – Your systems don’t stand alone; in this course you’ll dive deep into how data moves between Linux systems. Protip: these concepts apply to Windows systems too.
  • LFCE: Network and Host Security – My newest course. Let’s learn how to secure our Linux systems from both the networking and host perspectives. We’ll cover security concepts and architectures, securing Linux services, and take a deep dive into OpenSSH and remote access.
  • LFCE: Linux Service Management – HTTP Services – I’m currently developing a course on HTTP Services – you’ll learn how to install, configure and manage Apache.
  • More to follow – announcements coming up soon! I can’t wait to tell you what’s next.

I’ve got tons of blog posts on these topics.

So go ahead, get digging in there: download Linux (yes, I prefer CentOS), install SQL Server and PowerShell, and start moving your skills toward where the technology is going to take you!