Monday, September 15, 2014

Using PowerShell Jobs to Trigger Remote MSI Installs

So you want to deploy an MSI package to potentially thousands of machines using PowerShell? Odd, me too.

The Goal

Sometimes package management solutions aren't the right tool for the job; say, for example, you want to push and install packages as part of a single, one-time effort. This has been the case for me on more than a few contracts: we have a piece of software we intend to distribute across a class of machines generally not managed by SCCM or a similar tool. For example, one may want to install something like Splunk on all servers in an organization.

To accomplish this, my tool of choice is PowerShell. As a control mechanism it has come a long way in the past few years. Jobs can be used for huge deployments to asynchronously process multiple steps on many machines simultaneously. One of the trickier things to do, however, has been to install MSI packages as jobs using WinRM (remotely).

As is the case for all scripts that manage massive numbers of endpoints, we need to make sure our approach scales. Most often this means splitting every task out into jobs and moving on; that includes determining platform specifics, distributing files, and triggering installations. To accommodate this strategy, per-machine information is generally stored in hash tables where it can be quickly referenced by downstream tasks. Take the following bare-bones example, a subset of a script I commonly use to distribute files. There is quite a bit that could be done to enhance the functionality here; my only purpose is to illustrate how to trigger and then track many jobs:

Copy Jobs Example:
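A minimal sketch of that pattern might look like the following; the server list, package path, and destination share are illustrative placeholders, not the exact script from my toolkit:

```powershell
# Illustrative sketch: launch one copy job per server, then poll for completion.
# $servers, the package path, and the destination share are assumptions.
$servers = Get-Content .\servers.txt
$Copy_Success = @()

foreach ($server in $servers) {
    # One background job per endpoint; the job name doubles as our tracking key
    Start-Job -Name $server -ScriptBlock {
        param($target)
        Copy-Item -Path "D:\Packages\MyApp.msi" -Destination "\\$target\c$\temp\" -Force
    } -ArgumentList $server | Out-Null
}

# Circle back and check the status of each job
do {
    foreach ($job in (Get-Job | Where-Object { $_.State -ne 'Running' })) {
        if ($job.State -eq 'Completed') { $Copy_Success += $job.Name }
        else { Write-Warning "Copy to $($job.Name) ended in state $($job.State)" }
        Remove-Job $job
    }
    Start-Sleep -Seconds 5
} until ((Get-Job).Count -eq 0)
```

The `$Copy_Success` list built here is what the install phase later iterates over.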

Note that in the example above the system running the PowerShell commands will launch as many threads as possible (limiting would be easy with a few lines of code) and then circle back to check the status of each. This basic framework can work for nearly any remote operation. Obvious enhancements to the code above would be:
  • Logging
  • Error handling of each condition
  • Throttling the entire operation to x outstanding jobs
  • Using a round-robin or geo associated file copy sources to distribute load (specific to this file copy)
  • ... and more!

The Reason for This Article

This framework is the basis for my "major operations" using PowerShell and works well in many situations. However, I ran into a serious problem using this strategy to install MSI packages. While it should be easy to use Invoke-Command -AsJob or something similar to launch an install remotely as a job, I found that the tracking mechanism and the session created for the command were often broken by the behavior of msiexec.exe.

As it turns out, some MSI packages, due to their layout, quickly terminate the calling msiexec.exe and launch a few more instances thereafter. Since the launched instances aren't tracked as child processes of the calling .exe, PowerShell considers the job "done" and terminates the remote session, killing the sub-processes before the install is finished. The following solution is a modular (i.e. re-usable code) approach to addressing this issue.

Solving the Problem

To solve the problem, we need to launch our own session manually and track success with an external criteria that we devise. This can be as complex as a specific line in a specific log file or as simple as a timer. I won't cover all that external criteria here because that's for you to decide. We will cover the base strategy and give an example of a timer-based session execution.

Before I go into the code that does work, let's cover what doesn't: strategies that launch msiexec remotely and rely on PowerShell's own job tracking (for example, Invoke-Command -AsJob calling msiexec directly) will not work with an MSI that branches.

Here is the strategy that does work:

  • New-PSSession ; Invoke-Command (-AsJob) ; {Start-Process} ; wait based on external criteria ; Remove-PSSession

We'll get on with the real code example here, but first let me note a feature of the preceding and following code. You'll see I use the line [System.Collections.ArrayList]$Needs_Install=$Copy_Success followed by foreach ($server in $($Needs_Install)). The reason is that this .NET array type, unlike the standard PowerShell array, allows for easy removal of elements. In the "foreach" line I enclose the array variable name in an extra set of parens to render it a copy for each iteration, avoiding errors when I remove an element. This lets me use my original array as a dynamically sized list of servers to operate on.
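A tiny self-contained illustration of that copy-on-enumeration trick (server names are made up):

```powershell
# An ArrayList supports cheap element removal, unlike a fixed-size PowerShell array
[System.Collections.ArrayList]$Needs_Install = @('server1','server2','server3')

# $($Needs_Install) evaluates to a copy of the collection, so removing from the
# original ArrayList inside the loop does not invalidate the enumeration
foreach ($server in $($Needs_Install)) {
    if ($server -eq 'server2') {
        $Needs_Install.Remove($server)   # safe: we're iterating over a copy
    }
}

$Needs_Install   # server1 and server3 remain
```

Try removing an element while iterating the ArrayList directly and you'll see the enumeration error this avoids.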

That said, here's a code example:

MSI Install Job Example:
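A condensed sketch of the approach, using the same variable names examined in the discussion below; $Mins_Per_Job, $argumentList, and the staged package path are assumptions you would set at the top of your own script:

```powershell
# Sketch: manually managed sessions + an external (timer-based) completion criteria
[System.Collections.ArrayList]$Needs_Install = $Copy_Success   # servers with the MSI staged
$server_Install_Session_Start = @{}
$sessions       = @{}
$Mins_Per_Job   = 15                    # assumed: timer-based session expiry
$argumentList   = '/qn /norestart'      # assumed: your msiexec switches

do {
    foreach ($server in $($Needs_Install)) {
        if (-not $sessions.ContainsKey($server)) {
            # Launch the install inside a session we control ourselves
            $session = New-PSSession -ComputerName $server
            $sessions.Set_Item($server, $session)
            $tmpVar = 'C:\temp\MyApp.msi'   # machine-specific package location
            $script = [ScriptBlock]::Create("msiexec.exe /i $tmpVar $argumentList")
            Invoke-Command -Session $session -ScriptBlock $script -AsJob | Out-Null
            $server_Install_Session_Start.Set_Item($server, (Get-Date))
        }
        elseif ($Mins_Per_Job) {
            # Timer-based expiry: consider the install done once the session is old enough
            if (($server_Install_Session_Start.Get_Item($server)) -le [datetime]::Now.AddMinutes(-$Mins_Per_Job)) {
                Remove-PSSession $sessions.Get_Item($server)
                $Needs_Install.Remove($server)
            }
        }
    }
    Start-Sleep -Seconds 30
} until ($Needs_Install.Count -eq 0)
```

Swap the timer check for your own external criteria (a log line, a registry key, an installed-product query) as your package allows.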

Code Discussion

(Note: some of my variable names clearly won't make sense for your adaptation.) Examining the code, we see a couple of key lines:
  • [System.Collections.ArrayList]$Needs_Install=$Copy_Success : There's that .NET array type we're talking about
  • Do { ... }until ($Needs_Install.count -eq 0) : And this is why. This whole process takes place until the to-be-processed array is empty. Note you could easily wrap all parts of a given install script in a larger array for tracking all parts of the process.
  • foreach ($server in $($Needs_Install)) : Double parens makes removal of items within the loop possible since it creates the list as a copy rather than a reference
  • $session=New-PSSession -ComputerName $server : Here's the start of the session we're talking about. If desired, you could use a hash table to track session names per server (New-PSSession -name ($hashtable.get_item($server)))
  • $script=[ScriptBlock]::Create("msiexec.exe /i $tmpVar $argumentList") : create the script to be executed remotely. Note that $tmpVar includes the machine specific location for execution.
  • $server_Install_Session_Start.Set_Item($server,(Get-Date)) : track the install time for this endpoint. Only need this line if using time tracking for session expiry
  • if ($Mins_Per_Job) : If we specify this at the top of the script as a variable then we're using it. This allows easy code reuse, adding more specific completion-detection routines as necessary. Note that minutes-per-job would probably work in most cases where you're processing few enough endpoints that a single machine can handle all connections simultaneously. Once you surpass the outgoing session capacity you'll need to be more aggressive.
  • if (($server_Install_Session_Start.Get_Item($server)) -le [datetime]::Now.AddMinutes(-$Mins_Per_Job)) {....} : A bit of date logic to test the session age. If it is past the configured limit, we terminate the session, remove the job, and take other end-of-job steps!

In Closing

Using this methodology you can easily scale up a more complex solution with full error tracking, verification, etc. It's amazing how far we've come in the automation front in the last ten years, and I can't wait to see what the future holds. For example, think of the possibilities when combined with things like Desired State Configuration.

Wednesday, July 30, 2014

Why Schannel EventID 36888 / 36874 Occurs and How to Fix It

Having trouble talking to your webserver? Seeing the aforementioned errors? Are you hungry?

I can fix two of those. I ran into this error at a large, highly distributed client site. Because of the nature of the problem (sporadic) it took longer to solve than I would have liked. Hopefully this article will save you that time.

What Components are Involved?

This error involves two sides: a "client" and a server. Client is in quotes because it can be, and often is, an application consuming a web service or similar. On the server side this problem generally occurs on Windows 2008 or newer. The "client" can be any platform.

What Errors Again?

Generally, but not always, these errors manifest as the following events:
  • System Log, Schannel source, EventID 36888
  • System Log, Schannel source, EventID 36874

These errors can occur on either side, provided obviously that side is Windows. What errors you receive on the other side depends entirely on the platform.

What is Happening?

At a high level, the client and server are failing to agree on a way to talk to each other securely. To communicate securely, the server and client must agree on a methodology to communicate involving 4 main components. Those are:
  • How to authenticate each other (Key Exchange)
  • How to encrypt data to be exchanged (Encryption Cipher)
  • How to verify the message hasn't been tampered with (Message Authentication Code)
  • How to determine random numbers for seeding keys (Pseudorandom Function)

The client and server must agree to the same implementation of each of these items. Bundled together, these are referred to as a cipher suite.

The client and server each have preferences as to which portions of the cipher suite hold which priority. Based on this prioritization, a set of supported cipher suites is compiled and proposed at the beginning of any SSL/TLS connection. The client first proposes what it would like, then the server compares the client list to its own list and selects the first matching suite.

So therein lies the problem: Your server doesn't like any of the proposals from the client. 


This is why I decided to write this article. While there are several hits on the internet regarding this problem, I have yet to see one that nails it. Initially (and as originally published in this article) I suspected the problem was due to an incorrect cryptographic service provider, but thanks to some insights from one of my colleagues I took another look. It turns out that, due to the nature of this problem, it can appear sporadically and be difficult to troubleshoot.

Here are the details: if your CSR requests a certificate that is valid for signing only, rather than signing and encryption, and your CA has a policy that allows for encryption even when the request was signing only, then you will likely see this problem... sometimes. Clearly a certificate requested for signature only shouldn't work at all when used for encryption, but if your CA overrides the request to allow encryption, that creates a situation where encryption will work, but only when the client supports a couple of specific cipher suites. Identifying certificates causing this problem is complicated, since the CA's override means the issued certificate looks valid for encryption even though it was requested for signing only. We'll cover the specifics further in the next two sections...

Detecting The Problem

Feel free to skip this section if you want to jump to the fix. Detection is pretty easy using a tool like Wireshark. Fire up the tool on either the client or the server with the proper capture filters to reduce noise, and then attempt the failing connection. You will see only a handful of packets (5 or so), as the rejection happens quickly. To see the detail appropriately, you'll need to tell Wireshark this is SSL/TLS by right-clicking a packet -> Decode As -> SSL.

If a protocol negotiation is the issue, you'll see the connection reset by the server immediately after the client suggests a list of cipher suites. This packet from the client will have the info of "client hello" followed immediately with a TCP RST (reset) from the server.

If you drill into the details of the "client hello" packet you will be able to see the suites the client is proposing.

You can then attempt a successful TLS connection, if you are able to produce one (if not, jump to the fix and try it), using the same methodology. I found that while using the affected cert type listed above, my server only supported TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA and TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, clearly a very limited subset.
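As a complement to the packet capture, on Windows 8.1 / Server 2012 R2 and later you can list the cipher suites a box is configured to offer directly from PowerShell, which makes comparing the two sides quick:

```powershell
# List every cipher suite this Windows machine is configured to offer/accept
Get-TlsCipherSuite | Select-Object -ExpandProperty Name

# Narrow to, e.g., the ECDHE_RSA suites like the ones observed above
Get-TlsCipherSuite -Name "ECDHE_RSA" | Select-Object -ExpandProperty Name
```

Note this shows the machine's configured suites, not the subset a particular certificate can actually service, so use it alongside the capture rather than instead of it.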

The Fix

To remediate this issue you'll need to make sure the certificate ordered is for the correct purpose. Rather than recreate that article I'll direct you to my favorite one here; note, however, that the [Strings], [Extensions], and [RequestAttributes] sections may not be needed depending on your situation. The main takeaway from that article is that, at the very least, the KeySpec and KeyUsage settings need to be specified (see link under references for more info). Request, retrieve, and install this certificate.
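As a rough illustration, a minimal certreq INF that pins the key to both signing and encryption might look like this; the subject and key length are placeholders, so verify every value against the certreq documentation linked below:

```
[Version]
Signature = "$Windows NT$"

[NewRequest]
Subject = "CN=server.example.com"   ; placeholder subject
; KeySpec 1 = AT_KEYEXCHANGE: key may be used for encryption as well as signing
KeySpec = 1
; 0xA0 = Digital Signature (0x80) + Key Encipherment (0x20)
KeyUsage = 0xA0
KeyLength = 2048
MachineKeySet = TRUE
Exportable = FALSE
RequestType = PKCS10
```

Submit it with certreq -new request.inf request.req and send the resulting .req to your CA.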

You can use any other method you would like to obtain a certificate (perhaps you do), but it's critical to ensure your request has the correct parameters including the certificate usage. If you are using Windows PKI with AD integrated templates, you can "hard code" this in the templates if you like.

If this fix didn't work for you, wait for the "Wait There's More" section because it's likely due to a misconfigured set of cipher suites. Speaking of that...

Wait There's More

As a security best practice, you should also control (restrict) your available cipher suites on Windows/IIS. This is pretty easy to do; it can be done via Group Policy for large sets of servers and one-by-one with registry settings or better yet with this easy tool from Nartac. For more guidance check out these three links.
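For the registry route, the suite order lives under the SSL configuration policy key (the same value Group Policy and the Nartac tool manipulate). The two suites below are purely illustrative; build and order your own list for production:

```powershell
# Policy key holding the SSL cipher suite order (same value a GPO sets)
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002'

# Illustrative, deliberately trimmed list -- not a recommendation
$suites = @(
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384',
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256'
) -join ','

New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'Functions' -Value $suites
# Reboot for the new order to take effect
```

A too-short list is a quick way to reproduce the exact handshake failures this article describes, so test before deploying broadly.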

Thanks for reading and feel free to add your own experience below!


Microsoft Support: How to Determine the Cipher Suite for the Server and Client

Microsoft Support: How to restrict the use of certain cryptographic algorithms and protocols in Schannel.dll

MSDN: Cipher Suites in Schannel

Technet: Certreq.exe syntax

Sunday, July 6, 2014

Set Up Your Own Chocolatey/NuGet Repository

In this article we'll examine setting up a NuGet/Chocolatey repository in your enterprise to distribute software. This will allow you to easily distribute development and software packages throughout your network.

NuGet? I don't need any more candy.

NuGet started life as NuPack (renamed to avoid confusion with an existing product of the same name), an open source solution for managing .NET packages. Since then it has evolved into a mature platform with numerous interfaces including a Visual Studio plugin, command line, and Mono support. Chocolatey and PowerShellGet are built on that framework. Speaking of chocolate...

Chocolatey? I told you already, no more candy.

Where NuGet was meant for .NET packages, Chocolatey, which is built on the same infrastructure, is meant for machine (Windows) packages. Think of it like apt or yum for Windows. Microsoft has also already shipped a preview of PowerShell OneGet, which can use Chocolatey repositories.

But There are Already Repositories for These and I'm Hungry! Why Would I Make My Own?

Yup! There are great public repositories for Chocolatey and NuGet, but those are geared toward freely available public software that wasn't built for an explicit purpose. By hosting your own you can serve custom .NET packages specific to your business unit, or even package commercially available software for distribution with Chocolatey, provided your licensing is up to snuff.

Hosting Options

There are several options to get going that vary in terms of hosting location, ease of installation, and scalability. Some of the more popular options include:
  • NuGet Server : A basic server that runs on-premises and is easy to set up. It doesn't have granular security features and will only scale so far before it slows down.
  • NuGet Gallery: More complex NuGet server package that includes advanced security features and will scale for larger implementations (this is what the main public NuGet repo uses). 
  • MyGet: A commercial NuGet repo service hosted in Azure. Has a limited free tier and reasonably priced paid tiers. Worth consideration if you don't want to host your own infrastructure. 

The assumption in this article is that you're hosting your first NuGet/Chocolatey repo for enterprise or team use. Since NuGet Server includes most of the needed functionality for that purpose and can be replaced with the NuGet Gallery as you grow, we'll set up a NuGet Server (the first option) in this article.

Let's Get Started!


  • We'll be setting up on a Windows Server 2012 R2. This will work on Windows Server 2008 and up. I'm assuming you have one set up and ready to go.
  • You will need Visual Studio Express 2010 or newer on your workstation. (Preferably not on the server)
  • You will need Admin rights on both the server and your workstation.
  • In enterprise environments you often have to make do with the resources you have available, so we'll be setting up the repo as a virtual application in IIS rather than as its own site, so it can share port 80 for the sake of simplicity.

IIS Setup on Server

We'll walk through installing the minimum IIS requirements to run the NuGet Server package. Everything here could be very easily done with PowerShell but we'll use the GUI to make for a more visual tutorial.

  1. On the server where you will host the application, start the "Add Roles and Features Wizard"
  2. Click "Next" until you advance to the "Server Roles" section. If you're executing remotely make sure you select the correct server.
  3. Of the Roles listed, select "Web Server (IIS)" and select "Add Features" when prompted. Click "Next". 

  4. On the "Features" page, expand ".NET Framework 4.5 Features" and ensure ".NET Framework 4.5" and "ASP.NET 4.5" are checked. Click "Next".

  5. Click "Next" to advance to the "Role Services" section under "Web Server Role (IIS)" and select the following (only the most granular required, not the headings):
    • Web Server
      • Common HTTP Features
        • Default Document
        • Static Content
      • Health and Diagnostics
        • HTTP Logging
      • Performance
        • Static Content Compression
      • Security
        • Request Filtering
      • Application Development
        • .NET Extensibility 4.5
        • ASP.NET 4.5
        • ISAPI Extensions
        • ISAPI Filters
    • Management Tools
      • IIS Management Console

  6. Click "Next" and then click "Install".
  7. You shouldn't need to reboot, but check the installation status and do so if requested.
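As mentioned, the GUI steps above can be scripted; one plausible PowerShell equivalent follows. The feature names map to the role services selected above, but confirm them against Get-WindowsFeature on your server first:

```powershell
# Install the minimum IIS + ASP.NET 4.5 components for NuGet Server
Install-WindowsFeature -Name Web-Server, Web-Default-Doc, Web-Static-Content, `
    Web-Http-Logging, Web-Stat-Compression, Web-Filtering, `
    Web-Net-Ext45, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter, `
    Web-Mgmt-Console
```

Run it in an elevated PowerShell session on the target server (or add -ComputerName on 2012+ to target a remote one).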

IIS Site Setup on Server

Now we'll set up the site that the NuGet Server will be served from. As mentioned earlier, I will walk you through setting it up as a virtual application off of the default web site. This configuration would allow you to share port 80 with an existing site as well as show you how to configure this application below the root, which does need a bit of special consideration worth mentioning.
  1. Create the directory structure for your site. I always put my IIS sites on a non-system drive with the permissions locked down. In this example, I'll be using D:\Sites\NuGetRepo .
  2. Create the directory for the NuGet/Chocolatey package repo. We'll configure this below. This directory uses a different permissions structure and could potentially be shared out over your LAN, so it may be beneficial to place it separate from the site. In my example I'll be using D:\NuGetRepo .
  3. (Optional/Best Practice) At the D:\Sites and D:\NuGetRepo levels, disable inheritance and ensure only Administrators and SYSTEM have write access. Do not allow any other access at this time; we'll get to that below.
  4. On the server to host NuGet Server, open the IIS management tool.
  5. Right click the Default Web Site (note this could be any web site) and select "Add Application..." (A virtual directory will not work!)

  6. Set the alias to "NuGet", leave the "Application Pool" set to "DefaultAppPool" (again, you could change this if desired) and set the physical path to what you created for the site. For our example we're using "D:\Sites\NuGetRepo" .

  7. The default out-of-box settings should work for the site, but in case the Default Web Site settings have been changed you may want to refresh your view and ensure IIS authentication is set to "Anonymous". If desired, change the logging location as well (D:\logfiles\IIS\NuGetRepo for example).
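Steps 5 and 6 can also be scripted with the WebAdministration module; the site, alias, pool, and path below match the example values above:

```powershell
Import-Module WebAdministration

# Create the virtual application off the Default Web Site
New-WebApplication -Site 'Default Web Site' -Name 'NuGet' `
    -PhysicalPath 'D:\Sites\NuGetRepo' -ApplicationPool 'DefaultAppPool'
```

Substitute your own site name and paths if you deviated from the walkthrough.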

Setting File System Permissions on IIS Server

For clients to access the site and repo successfully we need to set file system level permissions. If you have used different directory names above substitute them here accordingly.

Note: We're assuming you have administrative access to this server from your Workstation as well to deploy the code in the steps below. If not, you'll need to grant whoever will be deploying the site access to the site folder. If you are admin, don't worry about it.

  1. The directory containing the website needs to be read by the AppPool account and the Anon user account. Right click D:\Sites\NuGetRepo and select "Properties".
  2. Click "Security", "Edit", and then "Add". Change the "Location" to the local system name.
  3. Give the default web site application pool virtual service account and anonymous account permissions by typing "IIS APPPOOL\DefaultAppPool;IUSR", clicking "Check Names" and then "OK". Again, if you have elected to use a different site/pool/acct you will need to take that into account. This should resolve to two accounts, "DefaultAppPool" and IUSR".

  4. Give each of the added users "Read & Execute", "List folder contents", and "Read" permissions and then click "OK".

  5. The directory containing the actual repo only needs to be read by the AppPool account. Right click D:\NuGetRepo and select "Properties".
  6. Click "Security", "Edit", and then "Add". Change the "Location" to the local system name.
  7. Give the default web site application pool virtual service account permissions by typing "IIS APPPOOL\DefaultAppPool", clicking "Check Names" and then "OK". Again, if you have elected to use a different site/pool/acct you will need to take that into account. This should resolve to one account, "DefaultAppPool".
  8. Grant it "Read & Execute", "List folder contents", and "Read" permissions and then click "OK".
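The same grants can be applied from an elevated prompt with icacls; adjust the paths and accounts if yours differ from the example:

```powershell
# Site directory: AppPool identity + anonymous user, read & execute (inherited)
icacls 'D:\Sites\NuGetRepo' /grant 'IIS APPPOOL\DefaultAppPool:(OI)(CI)RX' /grant 'IUSR:(OI)(CI)RX'

# Repo directory: AppPool identity only
icacls 'D:\NuGetRepo' /grant 'IIS APPPOOL\DefaultAppPool:(OI)(CI)RX'
```

The (OI)(CI) flags make the grant inherit to child files and folders, matching what the GUI steps produce.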

NuGet Server Config on Workstation

Now we'll grab the NuGet Server package and configure it accordingly. Note some of these options will vary slightly depending on which version of Visual Studio you are using. I'm using 2012 Premium but everything is possible in 2010 Express and up.

Note: We are assuming your IIS server is accessible to you and has file sharing turned on to push the site. If you are unable to get to the filesystem of the server from this machine you will need to use a different deployment mechanism when we get to that step.

  1. On your workstation, open Visual Studio and start a new Project by selecting "File"->"New"->"Project"

  2. Navigate to "Installed"->"Templates"->"Visual C#"->"Web" and select "ASP.NET Empty Web Application"

  3. Right click on your newly created application under the solution and select "Manage NuGet Packages"

  4. Assuming the defaults of the feed and "Stable Only" are selected, type "nuget.server" in the search box and hit Enter
  5. Select the "NuGet.Server" package and click "Install". This will install the NuGet server package and any dependencies. Accept license agreements associated with the other packages to continue and then close the package management window.

  6. The only thing we need to customize is the web.config  file for our installation. In the Solution Explorer click "Web.config" under the web application. Note: This file is also where you can control API Key behavior, but that is outside the scope of this article.
  7. Look for the add key="packagesPath" entry in the web.config file under the "<appSettings>" heading. We need to set this to the location of our repository. Change <add key="packagesPath" value=""/> to <add key="packagesPath" value="D:\NuGetRepo"/> (or other directory if appropriate). Note that there is no trailing slash. Save your project.

  8. Now we need to publish. Click "Build"->"Publish WebApplication..."

  9. If you already have a working publishing profile for the web server, select it and skip to step 12. Otherwise, select <new profile> from the drop-down box, enter a name, and click "OK". 

  10. Change "Publish method" to "File System" and enter the full path to the web server site location, e.g. "\\<server>\d$\Sites\NuGetRepo\" . Click "Next".
  11. Accept the default publishing settings and click "Next". 
  12. Review the settings and click "Publish". 
  13. Review the Output window to ensure there weren't any errors.

  14. Test your NuGet Server by navigating to http://<servername>/NuGet/ . If you encounter errors be sure to browse to it locally on the server to get the full error information.

That's it! Now all you need to do is configure the source in your clients, make packages, and enjoy! For instructions on those steps see below, and stay tuned for more. Thanks for reading!
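Pointing clients at the new feed looks roughly like this; the server name is a placeholder, and note that NuGet Server exposes its OData feed under the application's /nuget path:

```powershell
# Chocolatey client: register the internal feed as a source
choco source add -n=internal -s="http://myserver/NuGet/nuget"

# NuGet command line: same idea
nuget sources Add -Name "Internal" -Source "http://myserver/NuGet/nuget"
```

See the source-configuration links below for the Visual Studio equivalent.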

Creating NuGet/Chocolatey Packages

Chocolatey Docs: Create Packages Quick Start
NuGet Docs: Nuspec Reference
Chocolatey Docs: Chocolately Templates
NuGet Docs: Creating and Publishing a Package
Scott Hanselman: Creating a NuGet Package in 7 Easy Steps!
Chocolatey Docs: Creating Chocolatey Packages
Hong Xu: Create and Publish Chocolatey Packages
NuGet Docs: Configuration File and Source Code Transformations

Configuring Sources

Chocolatey Docs: Source command
NuGet Docs: Visual Studio Package Sources


NuGet Docs: Hosting Your Own Feeds
Scott Hanselman: Is the Windows User Ready for Apt-Get?
MBrownNYC: Create Your Own NuGet Server to Serve Packages for Chocolatey
NuGet Docs: An Overview of the NuGet Ecosystem

Tuesday, May 27, 2014

Cloudy I/O Performance - Increasing Azure IOPS (Part 2 of 2)

Note: This is part 2 of a 2 part post. You can find part 1 here.


In the last article we discussed a repeatable testing methodology to quantify storage performance in the cloud, and in this article we'll put that methodology into practice. I've done substantial testing in Azure and aim to illustrate what your options are for scaling performance at this point in time.


I undertook this project to see what could be done to increase disk I/O in Windows Azure IaaS. Upon researching the topic I found several interesting articles, but there seems to be little consensus regarding disk striping in Windows Azure IaaS. Some blogs recommend it while some of Microsoft's own writing seems to discourage it. After combing through the options, the following points stand out:

  • Disk Striping (Software RAID 0) may or may not increase performance based on your workload.
  • Striping will increase I/O capacity to a degree (which we'll test here).
  • What software striping solution works better: legacy (Windows software RAID from 2000 to present) or Storage Spaces (new software "RAID" in Windows 2012 and up)?
  • How does NTFS cluster size impact performance?
  • If striping, disable geo-replication as Microsoft explicitly warns against the use of geo-replication with this solution.
  • If possible, use native application load distribution rather than disk striping to split I/O. (For example, split DB files in SQL across disks.)
  • Some articles reference needing to use multiple storage accounts to get maximum performance. This is not true; as of 6/7/2012, storage account targets are 20,000 IOPS per account. Unless you will exceed 20,000 IOPS, keep all your disks on one account for the sake of simplicity. We will show this has no impact on performance.

With that said, I want to quantify the solution for my given scaling problem with the notion that if the tests are simple enough to run, this approach can be used for any future scaling problem as well.

Putting it All Together

We'll use the testing methodology outlined in part 1 of this article to collect the results. In this case we need to first add disks and set up stripes in Azure Windows VMs.

Note: To jump straight to Azure disk performance tests, scroll to the bottom of this article.

Create New Disks and Attach to Designated VM

In order to run all the tests listed below, you need to know how to create new disks and attach them to your virtual machine. My favorite solution to this is to use a locally created dynamic VHD and upload it to the location you would like using PowerShell. Let's go through the process of attaching one disk as a primer:
  1. Decide which storage account you will use for these disks. If you plan on doing striping of any kind, ensure the storage account is set to "Locally Redundant" replication (Storage->Desired Storage Account->Configure), as "Geo Redundant" is not supported. Since the replication setting applies to all blobs (Azure's terminology; disks) in that account you may want to have a dedicated account for these disks to keep your others Geo Redundant.

  2. Determine which container you would like to store your Azure disk blob in by opening the Azure management portal, navigating to Storage->Desired Storage Account->Containers, and copying the URL to your clipboard. To keep things simple you may want to create a new storage container, so do so now and use that URL if desired.
  3. Using Hyper-V (On Windows 2008 or higher including Windows 8) create an empty dynamically expanding VHD disk of your desired size. For my testing I have been using 10GB disks. Note 1: Do not create a VHDX; Azure uses the older VHD format. Note 2: You'll need to re-create the VHD for each disk if you intend on using Storage Spaces as each disk must have a unique ID. 
  4. #create a dynamically expanding 10GB VHD; change size as appropriate
    New-VHD -Path $sourceVHD -SizeBytes 10GB -Dynamic
  5. This disk will be uploaded to the container we selected in step 1. Determine the name you want the disk to be referenced by in Azure and execute the following script:
  6. #import Azure cmdlets
    import-module azure.psd1
    #specify your subscription so PS knows what account to upload the data to
    select-azuresubscription "mysubscriptionname"
    #$sourceVHD should be the location of your empty vhd file
    $sourceVHD = "D:\Skydrive\Projects\Azure\AzureEmpty10G_Disk.vhd"
    #$destinationVHD should be the URL of the container and the name of the vhd you want created in your account. Obviously for subsequent disks you need to change the VHD name. 
    $destinationVHD = ""
    #now upload it. 
    Add-AzureVhd -LocalFilePath $sourceVHD -Destination $destinationVHD

  7. Add this new disk as available to VMs by navigating to Virtual Machines->Disks->Create

  8. Enter the desired management name for this disk and input or browse to the URL of the VHD you just uploaded and click the check box.
  9. Attach the disk to your VM by navigating to Virtual Machines->Ensure your desired VM is highlighted->Attach->Attach Disk

  10. Select the disk we just added. Your cache preference will depend on the application. In my case this is off but you will want to use the methodology outlined in the first part of this article to test caching impact for your application. Note a change of cache status requires a VM reboot.
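Steps 7-10 can also be collapsed into the (classic) Azure PowerShell cmdlets, which import the uploaded VHD and attach it in one pipeline; service and VM names are placeholders:

```powershell
# Attach the uploaded VHD to a VM as a data disk (classic Azure cmdlets),
# skipping the portal's separate "Create disk" step
Get-AzureVM -ServiceName "myservice" -Name "myvm" |
    Add-AzureDataDisk -ImportFrom -MediaLocation $destinationVHD `
        -DiskLabel "data1" -LUN 0 -HostCaching None |
    Update-AzureVM
```

$destinationVHD is the blob URL from the upload script earlier; bump -LUN for each additional disk and set -HostCaching per your testing.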

Now for a brief tutorial on how to set up our two types of striped disks; you'll likely only be using one of the two but I'll cover both just in case. Performance results of each are outlined later in this article.

Set Up a Traditional Software Stripe in Windows

Setting up a traditional software stripe is easy. I've tested this on Windows 2003 and higher.

  1. Logon to your VM as an admin and open the Disk Management tool.
  2. If prompted, allow the initialization of the disks.
  3. Right-click on one of the newly created empty volumes and select New Striped Volume.

  4. Select the desired disks and continue.

  5. Create and format a new NTFS disk using your striped volume. Make sure to pay attention to the cluster size (results below).

Set Up a Storage Spaces Software Stripe in Windows 2012 or Higher

Microsoft introduced a new approach to disk pooling in Windows Server 2012 and Windows 8 called Storage Spaces. This interesting new tech allows for a myriad of different configuration options including disk tiering which can be useful for on-premise servers. In this case we'll be using the "simple" pool type which is similar to disk striping.

  1. Open Server Manager and navigate to File and Storage Services -> Volumes -> Storage Pools
  2. Under Storage Pools you should see "Primordial". (As opposed to "Unused Disks". I'm guessing someone was pretty proud of that.) Right click it and select "New Storage Pool".

  3. Walk through the Wizard selecting each disk you would like to be part of the pool.

  4. On the results page, ensure "Create a virtual disk when this wizard closes" is selected and click "Close".

  5. Walk through the Virtual Disk Wizard, specifying a meaningful name and selecting simple storage layout and fixed provisioning.

  6. On the results page, ensure "Create a volume when this wizard closes" is selected and click "Close".
  7. Complete the New Volume Wizard specifying your desired drive letter and desired NTFS cluster size.
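Steps 1-7 can also be scripted with the Storage cmdlets that ship with Server 2012 and later. A sketch, assuming every poolable disk should join the pool and that the pool/disk names are placeholders:

```powershell
# Gather all disks eligible for pooling (these make up the "Primordial" pool).
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool from those disks.
New-StoragePool -FriendlyName "IoTestPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Simple (striped) layout, fixed provisioning, all available capacity,
# formatted with the 32k cluster size used in the tests below.
New-VirtualDisk -StoragePoolFriendlyName "IoTestPool" -FriendlyName "IoTestDisk" `
    -ResiliencySettingName Simple -ProvisioningType Fixed -UseMaximumSize |
    Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 32768
```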

Run Tests/Collect Results!

Now that we have our disks configured, we need to run our tests. For instructions on how to do so, see part 1 of this topic here.

When analyzing the IOMeter output you will want to pay special attention to the following metrics:
  • IOPS (Cumulative, Read, Write; higher is better)
  • MBps (Cumulative, Read, Write; higher is better)
  • Response Time (Avg, Avg Read, Avg Write; lower is better)

If putting the data together for a report, Excel works nicely as I'll display below.


Now for the most important part, the findings. Tests performed:

Sector Size Tests:

  • 1 Disk, 4k Sector Size (default)
  • 1 Disk, 8k Sector Size
  • 1 Disk, 16k Sector Size
  • 1 Disk, 32k Sector Size
  • 1 Disk, 64k Sector Size
  • 3 Disks, 4k Sector Size (results confirmation test)
  • 3 Disks, 32k Sector Size (results confirmation test)

Table 1-Cluster Size Tests
Table 2-Cluster Size Verification

Sector size tests echo what others have observed with Azure: since IOPS are capped at 500 (or 300 for basic VMs), larger sector sizes can result in higher throughput. In my case 32k was the sweet spot; depending on your workload, your results will vary slightly. I have seen consistently (albeit slightly) higher performance with larger sector sizes in Azure.

Legacy Disk Striping Tests:

  • 1 Disk, 32k Sector Size
  • 2 Disks, Striped Volume, 32k Sector Size
  • 2 Disks in 2 Storage Accounts, Striped Volume, 32k Sector Size (Multiple Storage Account Test)
  • 3 Disks, Striped Volume, 32k Sector Size
  • 4 Disks, Striped Volume, 32k Sector Size

<See Bar Charts Below Under Disk Striping Methodology>

Table 3-Legacy Striping and Storage Account Tests

You can see that with one disk we get 500 IOPS as expected. From there the scaling trend is most definitely not linear: two disks deliver 33% higher performance, while three disks add another 23% (64% above a single disk). Adding a fourth disk actually results in a drop from three, coming in at 5% lower than three disks and 56% higher than one.

We also see that splitting disks across storage accounts makes no appreciable difference. Note: Bar charts for this results section have been combined into the graphs below.
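As a quick sanity check, the scaling percentages above are internally consistent with the 500 IOPS single-disk cap. A sketch of the throughput each stripe width implies (these are estimates derived from the rounded percentages, not additional measurements):

```powershell
$oneDisk = 500                        # Azure standard-tier per-disk IOPS cap

$twoDisk   = $oneDisk * 1.33          # +33% over one disk  -> ~665 IOPS
$threeDisk = $twoDisk * 1.23          # +23% over two disks -> ~818 IOPS (~64% over one)
$fourDisk  = $threeDisk * 0.95        # -5% from three disks -> ~777 IOPS (~56% over one)

"{0} / {1} / {2} / {3} IOPS" -f $oneDisk,
    [math]::Round($twoDisk), [math]::Round($threeDisk), [math]::Round($fourDisk)
```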

Disk Striping Methodology Tests:

  • 2 Disks, Striped Volume 32k Sector Size
  • 2 Disks, Storage Spaces Simple, 32k Sector Size
  • 3 Disks, Striped Volume, 32k Sector Size
  • 3 Disks, Storage Spaces Simple, 32k Sector Size
  • 4 Disks, Striped Volume, 32k Sector Size
  • 4 Disks, Storage Spaces Simple, 32k Sector Size

Table 4-Legacy Striping vs. Storage Spaces Test

Now we compare legacy striping to the newly introduced Storage Spaces. Two-disk scaling is a definitive win for Storage Spaces, while beyond that legacy striping generally performs better (save max latency). In my opinion the two-disk Storage Spaces stripe is the sweet spot here (a 56% IOPS improvement!), considering that more disks add complexity that doesn't pan out on the performance side.


I hope you have found these results interesting; I certainly have. Even if you choose not to run these tests yourself I hope my results prove helpful when sizing your machines. Since the access pattern I used is relatively universal it should be applicable in most scenarios.

Software-level disk striping works relatively well in Microsoft Azure to increase per-disk performance in lieu of a provider-level solution similar to Amazon EBS provisioned IOPS. Splitting the workload across logical disks or VMs is preferred but not applicable to all workloads. When employing this solution, make sure you select only locally redundant replication, because Microsoft warns that geo-redundant replication may cause data consistency issues on the replication target.

For additional information see the links near the top of this article. Thanks for reading!

Tuesday, May 20, 2014

Cloudy I/O Performance - Deciphering IOPS in IaaS (Part 1 of 2)

Note: This is part 1 of a 2 part post. Part 2 can be found here.


Disk performance scaling options in the public cloud seem limited (particularly in Azure as of this writing), but there are ways to increase your IOPS in IaaS solutions. To add to the problem, running application-level performance tests can be not only time consuming but, since storage is billed per transaction, expensive. To tune your storage performance reliably you will need a fast, consistent way to test different configurations. This article covers that methodology and leads into a results/guidance article on Azure IaaS storage performance (applicable to other platforms as well).

We'll be doing this testing on Windows, but you could also easily do this on Linux and the results that I'll be sharing are just as applicable there. To accomplish this testing we'll be using the following tools:

Let's begin!


We will proceed in the following order:
  1. Analyze Workload
  2. Create Test Scenarios
  3. Collect and Analyze Results (Mainly in Part 2)
  4. Findings (In Part 2)


If you plan on emulating my tests you'll need to have access to the following:
  • Microsoft Windows Azure account (note this methodology will work with EC2 or any other platform, including standard hardware/on-prem VMs)
  • IaaS VM Configured. A medium size is recommended for testing 4 disks or fewer to limit the available memory. More on that below.
  • Administrator access to your VM.
  • Your workload is in fact disk I/O bound. If you're not sure of that you may want to start with this article.
  • Awareness that you will incur additional storage transaction costs by running these tests.

Analysis/Create Workload

Note: If you're just trying to get a general sense for your VM I/O performance capability, you don't need to collect data for a custom access specification. IOMeter includes several tests you can use so skip to the "Install IOMeter..." section below.

The first thing we need to do is create our workload. By using IOMeter we can develop custom access patterns that model common workloads and have the tool and workloads installed and configured in minutes on any machine. There is nearly endless information on this topic, so I won't attempt to create a definitive source here. For details on how to configure and use IOMeter, see the following videos/articles:

 To create an accurate workload you will need a good understanding of the access pattern of your application. If you don't have that information you can use a tool like Perfmon to do analysis on a fully configured platform. The following counters will be of interest when creating your access specification:

  • Physical or Logical Disk: Average Disk Bytes per Read
  • Physical or Logical Disk: Average Disk Bytes per Write
  • Physical or Logical Disk: Disk Read Bytes/sec
  • Physical or Logical Disk: Disk Write Bytes/sec
  • Physical or Logical Disk: Disk Reads/sec
  • Physical or Logical Disk: Disk Writes/sec

For further information, see this excellent Technet Article.

By collecting this data during the access pattern you wish to emulate you can accurately estimate (with one caveat) the information needed to create the IOMeter access specification. That caveat is determining the sequential vs. random access pattern of the platform since Perfmon analysis will reveal the rest. To determine that, you'll need an understanding of how the platform stores and accesses/writes data. In my case I'm tuning my VM for Splunk, which uses a Map/Reduce functionality that has a highly sequential read/write pattern. If you are unsure of your access pattern then err on the side of configuring for mostly random access (90% or so) since it is generally more common and demanding of the underlying storage subsystem. 
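Turning those Perfmon averages into IOMeter access-spec numbers is simple arithmetic. A sketch using hypothetical counter values (substitute your own measurements; these happen to reproduce the "_Splunk" spec below):

```powershell
# Hypothetical Perfmon averages collected during the workload window.
$readsPerSec   = 47.0     # Disk Reads/sec
$writesPerSec  = 53.0     # Disk Writes/sec
$bytesPerRead  = 32768    # Average Disk Bytes per Read
$bytesPerWrite = 32768    # Average Disk Bytes per Write

$totalIops = $readsPerSec + $writesPerSec
$readPct   = [math]::Round($readsPerSec  / $totalIops * 100)   # percent read
$writePct  = [math]::Round($writesPerSec / $totalIops * 100)   # percent write

# Weighted average transfer size -> the spec's Transfer Request Size.
$avgTransfer = ($readsPerSec * $bytesPerRead + $writesPerSec * $bytesPerWrite) / $totalIops

"Read {0}% / Write {1}%, {2}KB transfer size" -f $readPct, $writePct, ($avgTransfer / 1KB)
```

Only the random/sequential split can't be derived this way; as noted above, that comes from knowing how the platform accesses its data.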

Install IOMeter and Config Access Specification

The following actions can be done on your target testing platform or a different machine to stage settings. We'll be saving our settings for quick use later.

  1. Download and install IOMeter on your server. There are a series of ways to stage files on any VM, but if you're looking for a quick way in the Microsoft ecosystem check out my Onedrive/Azure post.

  2. Open IOMeter as administrator.

  3. Under "Topology" configure your workers. Each worker represents one thread generating I/O. By default it will create one per CPU thread available, but in most cases you will only want one worker per process you are emulating. In my case I'm assuming one large query at a time (and we'll scale from there), so I'll be testing with one worker. If you are unsure stick to one worker and you can move up from there when you become more familiar.

  4. Under "Disk Targets" select the disk you wish to test. This can change in later runs so if the disk you want to test isn't present here select a placeholder.
  5. Under "Disk Targets" configure your "Maximum Disk Size". This sets the size of your test file in sectors, which IOMeter treats as 512 bytes each. To lessen the impact of OS caching, ensure this value exceeds the amount of RAM present on the machine to be tested. In my case I'll be testing on a 6GB RAM machine with an (approx.) 7.5GB file, so I've configured it for 15,000,000 sectors (15,000,000 sectors * 512 bytes per sector = 7,680,000,000 bytes). To calculate this quickly, take your total desired size (in bytes!) and divide it by 512. (If you aren't certain you got it right, check the size of the iobw.tst file created at the root of your target drive after the first test completes.)

  (Screenshot: testing drive T: with a 4.5GB test file.)

  6. Under "Disk Targets" configure your maximum outstanding I/O. This varies depending on access spec and OS, but I've had results consistent with real application access when testing with 16 maximum outstanding I/Os on Windows.
  7. Under "Test Setup" configure your "Ramp Up Time" and "Run Time". Ramp up need only be about 20 seconds for most scenarios, and run time is best kept between 1 and 10 minutes. My results are based on many 5-minute tests per configuration.
  8. Under "Access Specification" select your access spec. There is far too much to get into here; either select one or more existing access specifications that suit your needs ("4k 75% read" is a good start if you don't care) or create your own based on your findings from the Analysis/Create Workload section above. For the purposes of my test I made a "_Splunk" access spec with the following characteristics, ascertained from my earlier performance testing:
    1. Transfer Request Size: 32kB (NOTE: My access spec may not reflect yours. Most won't be this large.)
    2. Percent Read/Write Distribution: 53% Write/47% Read (NOTE: My access spec may not reflect yours. Most specs won't be this write heavy.)
    3. Percent Random/Sequential Distribution: 75% Sequential/25% Random (NOTE: My access spec may not reflect yours. Most specs won't be this sequential.)

  9. Add your access specification to the list of queued tests if you haven't done so already (removing all others).

  10. Click the disk icon to save the settings to an ICF file. This file will save all your settings, including custom access specifications if applicable. Since this file is what you'll use to shortcut future testing, save it somewhere easy to transfer to other VMs such as OneDrive, Dropbox, SpiderOak, etc.
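The Maximum Disk Size arithmetic from step 5 is easy to get wrong, so here's a quick sketch that converts a desired test-file size into sectors (the sizes shown match my example; use a file size larger than your own VM's RAM):

```powershell
# IOMeter measures "Maximum Disk Size" in 512-byte sectors.
$bytesPerSector = 512
$vmRamBytes     = 6GB              # RAM on the VM under test (example)
$testFileBytes  = 7680000000       # desired test file size; must exceed RAM

if ($testFileBytes -le $vmRamBytes) {
    throw "Test file must exceed RAM to defeat OS caching"
}

$maxDiskSize = [math]::Floor($testFileBytes / $bytesPerSector)
$maxDiskSize                       # value to enter in IOMeter -> 15000000
```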

Run the Test

After setting up or loading your test settings, all you need do is click the green flag to start the test and then select where you would like to save the results. Make sure you don't overwrite any previous results, and give the file a meaningful name so you remember what the test represents later, e.g. "results_3disk_1_StorAcct_Striped_32k_sectors_noCache_run1.csv" or similar.

The test will run for the configured time, after which you can run additional tests or analyze results. Since the output is in CSV format, the natural place to look at this data is Excel. When IOMeter starts for the first time on a given disk it needs to create the test file. This will take quite a while in both Amazon EC2 and Azure (15 minutes for my 7.5GB file, for example); I believe this is due to the way space is allocated on the backend storage. Once the file is created, however, you can run subsequent tests on the same volume without waiting for it to be recreated. Once the run is done I recommend running several more to ensure your tests aren't subject to wild performance swings. More on analysis in part 2 of this article.

How Much Will This Cost?

Since you're charged by transaction, I'm sure you're wondering how much this will cost. Let's break down the cost above the baseline (system simply running) in Azure:

IOPS are currently capped at 500 for standard tier machines (300 for basic). Storage transactions are currently $0.01 per 100,000 (halved on 3/14/14). For every 5-minute test, each disk you access will therefore execute a maximum of 150,000 transactions (500 IOPS * 300 seconds). As a one-time per-configuration cost, you will need to build the test file, which takes (test file size / volume sector size) transactions. For example, a 7.5GB test file will be approximately 1,875,000 transactions assuming a default 4kb sector size (7,500,000,000 / 4,000).

Test transactions + creation transactions = roughly 2 million total, or about $0.20 at $0.01 per 100,000. So... not much. The amount is generally trivial on Amazon EC2 as well. While this methodology will save you some in transaction costs, the main savings will be in time and labor (which is usually our real cost anyhow!).
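The estimate above can be sketched in a few lines, assuming the 2014 prices quoted in this post, the default ~4kb sector size, and a single disk under test:

```powershell
$iopsCap          = 500                        # standard-tier per-disk cap
$runSeconds       = 5 * 60
$testTransactions = $iopsCap * $runSeconds     # 150,000 per 5-minute run

$fileBytes          = 7500000000
$volumeSectorBytes  = 4000                     # the ~4kb approximation used above
$createTransactions = $fileBytes / $volumeSectorBytes   # 1,875,000 one-time cost

$dollarsPer100k = 0.01
$total = $testTransactions + $createTransactions
"{0:N0} transactions ~= `${1:N2}" -f $total, ($total / 100000 * $dollarsPer100k)
```

Multiply the per-run term by the number of runs and disks to project a full test matrix.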

Further Optimization

Once you are comfortable with this process I would advise doing the following to optimize it further. After doing so you may be able to automate the whole routine!

  • Create standard Perfmon counter sets for disk access and save/import them as a template
  • Script the Perfmon analysis with PowerShell
  • Create or download IOMeter templates for common access routines and include them with your set.
  • Script the installation and running of IOMeter, including multiple runs and uploading results to a common location. This is easy to do with PowerShell; refer to the IOMeter manual for command-line options (page 75 or so).
  • Package up all your assets with a custom installer and put it in an easy to get location. (mmmm... Chocolatey)
  • If you want angry followers and think digital bits are out there to be wasted, auto tweet your results! (maybe not this)
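As a starting point for scripting the runs themselves, IOMeter's command-line switches (documented in the manual mentioned above) accept a saved configuration and a results file. A sketch, assuming IOmeter.exe is on the PATH, the ICF was saved per the earlier section, and the UNC share is a placeholder:

```powershell
# Run the saved configuration several times, keeping each result set
# so outliers are easy to spot during analysis.
$runs = 3
for ($i = 1; $i -le $runs; $i++) {
    & IOmeter.exe /c "config.icf" /r ("results_run{0}.csv" -f $i)
}

# Optionally copy everything to a share for aggregation in Excel.
Copy-Item "results_run*.csv" -Destination "\\fileserver\iometer-results\"
```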

In Closing

I/O testing in the cloud is certainly feasible but requires a little extra discipline. With several access specifications in your toolkit you can conquer most performance problems quickly. What to do if your cloud platform doesn't provide your desired IOPS? Coming up in part 2!