The Goal
Sometimes package management solutions aren't the right tool for the job; say, for example, you want to push and install packages as part of a single, one-time effort. This has been the case for me on more than a few contracts: we have a piece of software we intend to distribute across a class of machines that generally aren't managed by SCCM or a similar tool. For example, you might want to install something like Splunk on every server in an organization.
To accomplish this, my tool of choice is PowerShell. As a control mechanism it has come a long way in the past few years. Jobs can be used for huge deployments to asynchronously process multiple steps on many machines simultaneously. One of the trickier things to do, however, has been installing MSI packages remotely as jobs over WinRM.
Concepts
Here's what we'll be working with in this article:
- PowerShell
- SSH (1)(2), if managing *nix as well (for installing .deb packages, etc.)
- PowerShell Jobs
- WinRM
- MSI Packages
- Billions and billions of endpoints
Approach
As is the case for all scripts that manage massive numbers of endpoints, we need to make sure we can scale our approach. Most often this means splitting all tasks out into jobs and moving on; this includes determining platform specifics, distributing files, and triggering installations. To accommodate this strategy, per-machine information is generally stored in hash tables where it can be quickly referenced by downstream tasks. Take the following bare-bones example; this is a subset of a script I commonly use to distribute files. Note there is quite a bit that can be done to enhance the functionality here; my only purpose with this is to illustrate how to trigger and then track many jobs:
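A minimal sketch of that pattern follows; the server list file, source path, and destination share here are placeholders for your environment, not values from the original script.

```powershell
# Sketch: copy an installer to many endpoints as parallel jobs, then poll for results.
# $Server_List, $Source_File, and the destination path are placeholders.
$Server_List  = Get-Content "C:\deploy\servers.txt"
$Source_File  = "C:\deploy\installer.msi"
$Copy_Success = @()
$Copy_Failed  = @()

# Launch one copy job per server and move on; per-machine details (paths, platform)
# could be stored in hash tables here for downstream tasks.
foreach ($server in $Server_List) {
    Start-Job -Name "copy_$server" -ScriptBlock {
        param($src, $target)
        Copy-Item -Path $src -Destination "\\$target\c$\temp\" -Force -ErrorAction Stop
    } -ArgumentList $Source_File, $server | Out-Null
}

# Circle back and check the status of each job.
foreach ($server in $Server_List) {
    $job = Wait-Job -Name "copy_$server"
    if ($job.State -eq 'Completed') { $Copy_Success += $server }
    else                            { $Copy_Failed  += $server }
    Remove-Job -Job $job -Force
}
```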
Note that in the example above, the system running the PowerShell commands launches as many jobs as possible (limiting this would only take a few extra lines of code) and then circles back to check the status of each. This basic framework works for nearly any remote operation. Obvious enhancements to the code above include:
- Logging
- Error handling of each condition
- Throttling the entire operation to x outstanding jobs (a quick sketch of this follows the list)
- Using round-robin or geographically associated file copy sources to distribute load (specific to this file-copy step)
- ... and more!
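On the throttling point, one simple option is to block before launching each new job until the number of running jobs drops below a limit. A sketch, with an arbitrary limit of 25:

```powershell
# Sketch: cap outstanding jobs at an arbitrary limit before launching more.
$Max_Jobs = 25
foreach ($server in $Server_List) {
    # Block here until we drop below the limit of running jobs.
    while (@(Get-Job -State Running).Count -ge $Max_Jobs) {
        Start-Sleep -Seconds 5
    }
    Start-Job -Name "copy_$server" -ScriptBlock {
        param($src, $target)
        Copy-Item -Path $src -Destination "\\$target\c$\temp\" -Force -ErrorAction Stop
    } -ArgumentList $Source_File, $server | Out-Null
}
```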
The Reason for This Article
This framework is the basis for my "major operations" using PowerShell and works well in many situations; however, I ran into a serious problem using this strategy to install MSI packages. While it should be easy to use Invoke-Command -AsJob or something similar to launch an install as a job remotely, I found that the tracking mechanism and the session created for the command were often broken by the behavior of msiexec.exe.
As it turns out, due to their layout, some MSI packages quickly terminate the calling msiexec.exe and spawn additional instances afterward. Since the launched instances aren't tracked as child processes of the calling .exe, PowerShell considers the job "done" and terminates the remote session, killing the sub-processes before the install is finished. The following solution is a modular (i.e., reusable) approach to addressing this issue.
Solving the Problem
To solve the problem, we need to launch our own session manually and track success with an external criterion that we devise. This can be as complex as a specific line in a specific log file or as simple as a timer. I won't cover every possible criterion here because that's for you to decide. We will cover the base strategy and give an example of a timer-based session.
Before I go into the code that does work, let's cover what doesn't. The following remote execution strategies will not work with an MSI that branches:
- Invoke-Command (-asjob)
- Invoke-Command (-asjob) ; {Start-Process}
- New-PSSession ; Invoke-Command (-asjob) ; {Start-Process} ; Remove-PSSession
Here is the code that does work:
- New-PSSession ; Invoke-Command (-asjob) ; {Start-Process} ; wait based on external criteria ; Remove-PSSession
We'll get to the real code example shortly, but before I do, let me note a feature of the preceding and following code: you'll see I use the line [System.Collections.ArrayList]$Needs_Install=$Copy_Success followed by foreach ($server in $($Needs_Install)). The reason is that this .NET collection type, unlike the standard PowerShell array, allows easy removal of elements. In the foreach line I wrap the array variable in an extra set of parentheses to create a copy for the iteration, avoiding errors when I remove an element from the original. This lets me use my original array as a dynamically sized list of servers still left to operate on.
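A quick, contrived illustration of that copy-versus-reference point (the server names here are made up):

```powershell
# $($list) enumerates a copy, so removing from the original ArrayList mid-loop
# doesn't break the iteration.
[System.Collections.ArrayList]$list = @('server01','server02','server03')
foreach ($item in $($list)) {
    if ($item -eq 'server02') { $list.Remove($item) }
}
$list   # server01, server03
```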
That said, here's a code example:
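This is a condensed sketch of the install loop rather than the full production script; $Mins_Per_Job, $argumentList, and the $tmpVar path are example values, and the real script would derive $tmpVar per machine.

```powershell
# Sketch: launch msiexec in a manually created session per server, then expire each
# session on a timer. $Copy_Success is the list of servers that received the file.
$Mins_Per_Job = 15
$argumentList = "/qn /norestart"

[System.Collections.ArrayList]$Needs_Install = $Copy_Success
$server_Sessions              = @{}
$server_Install_Jobs          = @{}
$server_Install_Session_Start = @{}

Do {
    foreach ($server in $($Needs_Install)) {
        if (-not $server_Sessions.ContainsKey($server)) {
            # First pass for this server: open a session we control and kick off msiexec as a job.
            $session = New-PSSession -ComputerName $server
            $tmpVar  = "C:\temp\installer.msi"   # machine-specific location of the copied MSI
            $script  = [ScriptBlock]::Create("msiexec.exe /i $tmpVar $argumentList")
            $job     = Invoke-Command -Session $session -ScriptBlock $script -AsJob
            $server_Sessions.Set_Item($server, $session)
            $server_Install_Jobs.Set_Item($server, $job)
            # Track the install start time for this endpoint (used for timer-based expiry).
            $server_Install_Session_Start.Set_Item($server, (Get-Date))
        }
        else {
            if ($Mins_Per_Job) {
                # Timer-based completion: once the session is older than $Mins_Per_Job
                # minutes, tear everything down and stop tracking this server.
                if (($server_Install_Session_Start.Get_Item($server)) -le [datetime]::Now.AddMinutes(-$Mins_Per_Job)) {
                    Remove-Job -Job ($server_Install_Jobs.Get_Item($server)) -Force
                    Remove-PSSession -Session ($server_Sessions.Get_Item($server))
                    $Needs_Install.Remove($server)
                }
            }
        }
    }
    Start-Sleep -Seconds 30
} until ($Needs_Install.count -eq 0)
```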
Code Discussion
(Note: some of my variable names clearly won't make sense for your adaptation.) Examining the code, we see a few key lines:
- [System.Collections.ArrayList]$Needs_Install=$Copy_Success : There's that .NET array type we're talking about
- Do { ... } until ($Needs_Install.count -eq 0) : And this is why. The whole process repeats until the to-be-processed array is empty. Note you could easily wrap all parts of a given install script in a larger array for tracking the entire process.
- foreach ($server in $($Needs_Install)) : The extra parentheses make removal of items within the loop possible, since they cause the loop to enumerate a copy rather than a reference
- $session=New-PSSession -ComputerName $server : Here's the start of the session we're talking about. You could, if desired, use a hash table to track session names per server (New-PSSession -Name ($hashtable.Get_Item($server)))
- $script=[ScriptBlock]::Create("msiexec.exe /i $tmpVar $argumentList") : Creates the script block to be executed remotely. Note that $tmpVar contains the machine-specific location of the MSI to execute.
- $server_Install_Session_Start.Set_Item($server,(Get-Date)) : Tracks the install start time for this endpoint. This line is only needed if you're using time-based tracking for session expiry.
- if ($Mins_Per_Job) : If we specify this as a variable at the top of the script, then we're using it. This allows easy code reuse, adding more specific completion-detection routines as necessary (one such alternative is sketched after this list). Note that minutes-per-job will probably work in most cases where you're processing few enough endpoints that a single machine can handle all connections simultaneously. Once you surpass the outgoing session capacity you'll need to be more aggressive.
- if (($server_Install_Session_Start.Get_Item($server)) -le [datetime]::Now.AddMinutes(-$Mins_Per_Job)) {....} : A bit of date logic to test the session age. If it is past the configured limit, we terminate the session, remove the job, and take any other end-of-job steps!
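For a more specific completion check than the raw timer, one option is polling the MSI log on the target, assuming the install was launched with verbose logging (e.g. adding /l*v C:\temp\install.log to $argumentList) and reusing the tracking variables from above:

```powershell
# Sketch: poll the remote MSI verbose log instead of relying purely on elapsed time.
# Assumes msiexec was invoked with /l*v C:\temp\install.log on the target.
$done = Invoke-Command -ComputerName $server -ScriptBlock {
    Select-String -Path 'C:\temp\install.log' `
                  -Pattern 'Installation success or error status' -Quiet -ErrorAction SilentlyContinue
}
if ($done) {
    Remove-Job -Job ($server_Install_Jobs.Get_Item($server)) -Force
    Remove-PSSession -Session ($server_Sessions.Get_Item($server))
    $Needs_Install.Remove($server)
}
```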
In Closing
Using this methodology you can easily scale up to a more complex solution with full error tracking, verification, and so on. It's amazing how far we've come on the automation front in the last ten years, and I can't wait to see what the future holds. For example, think of the possibilities when combined with things like Desired State Configuration.