Sunday, July 6, 2014

Set Up Your Own Chocolatey/NuGet Repository

In this article we'll examine setting up a NuGet/Chocolatey repository in your enterprise, allowing you to easily distribute development and software packages throughout your network.

NuGet? I don't need any more candy.

NuGet started life as NuPack (quickly renamed because another open source project already used that name), an open source solution for managing .NET packages. Since then it has evolved into a mature platform with numerous interfaces, including a Visual Studio plugin, a command line client, and Mono support. Chocolatey and PowerShellGet are built on that framework. Speaking of chocolate...

Chocolatey? I told you already, no more candy.

Where NuGet was meant for .NET packages, Chocolatey, which is built on the same infrastructure, is meant for machine-level (Windows) packages. Think of it as apt or yum for Windows. Microsoft has also shipped a preview of PowerShell OneGet, which can use Chocolatey repositories.
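To make that concrete, here's roughly what client-side use looks like on a machine with Chocolatey and the OneGet preview installed (the package name is just an example):

```powershell
# Install a package from a Chocolatey feed, much like apt-get install:
choco install notepadplusplus

# The OneGet preview exposes the same idea through PowerShell cmdlets:
Import-Module OneGet
Find-Package -Name notepadplusplus
```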

But There are Already Repositories for These and I'm Hungry! Why Would I Make My Own?

Yup! There are great public repositories for Chocolatey and NuGet, but those are geared toward freely available public software. By hosting your own you can serve custom .NET packages specific to your business unit, or even package commercially available software for distribution with Chocolatey, provided your licensing is up to snuff.

Hosting Options

There are several options to get going that vary in terms of hosting location, ease of installation, and scalability. Some of the more popular options include:
  • NuGet Server: A basic server that runs on-premises and is easy to set up. It doesn't have granular security features and will only scale so far before it slows down.
  • NuGet Gallery: More complex NuGet server package that includes advanced security features and will scale for larger implementations (this is what the main public NuGet repo uses). 
  • MyGet: A commercial NuGet repo service hosted in Azure. Has a limited free tier and reasonably priced paid tiers. Worth consideration if you don't want to host your own infrastructure. 

The assumption in this article is that you're hosting your first NuGet/Chocolatey repo for enterprise or team use. Since NuGet Server includes most of the functionality needed for that purpose and can be replaced with the NuGet Gallery as you grow, we'll set up a NuGet Server (the first option) in this article.

Let's Get Started!


  • We'll be setting up on a Windows Server 2012 R2. This will work on Windows Server 2008 and up. I'm assuming you have one set up and ready to go.
  • You will need Visual Studio Express 2010 or newer on your workstation. (Preferably not on the server)
  • You will need Admin rights on both the server and your workstation.
  • In enterprise environments you often have to make do with the resources you have available, so for the sake of simplicity we'll be setting up the repo as a virtual application in IIS rather than as its own site so that it can share port 80.

IIS Setup on Server

We'll walk through installing the minimum IIS requirements to run the NuGet Server package. Everything here could be very easily done with PowerShell but we'll use the GUI to make for a more visual tutorial.

  1. On the server where you will host the application, start the "Add Roles and Features Wizard"
  2. Click "Next" until you advance to the "Server Roles" section. If you're executing remotely make sure you select the correct server.
  3. Of the Roles listed, select "Web Server (IIS)" and select "Add Features" when prompted. Click "Next". 

  4. On the "Features" page, expand ".NET Framework 4.5 Features" and ensure ".NET Framework 4.5" and "ASP.NET 4.5" are checked. Click "Next".

  5. Click "Next" to advance to the "Role Services" section under "Web Server Role (IIS)" and select the following (only the most granular required, not the headings):
    • Web Server
      • Common HTTP Features
        • Default Document
        • Static Content
      • Health and Diagnostics
        • HTTP Logging
      • Performance
        • Static Content Compression
      • Security
        • Request Filtering
      • Application Development
        • .NET Extensibility 4.5
        • ASP.NET 4.5
        • ISAPI Extensions
        • ISAPI Filters
    • Management Tools
      • IIS Management Console

  6. Click "Next" and then click "Install".
  7. You shouldn't need to reboot, but check the installation status and do so if requested.
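For reference, the GUI walkthrough above can be collapsed into a single PowerShell command on Server 2012 and up; feature names can be confirmed with Get-WindowsFeature:

```powershell
# Install the minimum IIS roles/features for NuGet Server in one shot.
# Run from an elevated PowerShell prompt on the target server.
Install-WindowsFeature Web-Server, Web-Default-Doc, Web-Static-Content, `
    Web-Http-Logging, Web-Stat-Compression, Web-Filtering, `
    Web-Net-Ext45, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter, `
    Web-Mgmt-Console, NET-Framework-45-ASPNET
```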

Site Setup in IIS

Now we'll set up the site that the NuGet Server will be served from. As mentioned earlier, I'll walk you through setting it up as a virtual application off of the default web site. This configuration lets you share port 80 with an existing site and shows you how to configure the application below the site root, which needs a bit of special consideration.
  1. Create the directory structure for your site. I always put my IIS sites on a non-system drive with the permissions locked down. In this example, I'll be using D:\Sites\NuGetRepo .
  2. Create the directory for the NuGet/Chocolatey package repo (we'll configure this below). This directory uses a different permissions structure and could potentially be shared out over your LAN, so it may be beneficial to place it separate from the site. In my example I'll be using D:\NuGetRepo .
  3. (Optional/Best Practice) At the D:\Sites and D:\NuGetRepo levels, disable inheritance and ensure only Administrators and SYSTEM have write access. Do not allow any other access at this time; we'll get to that below.
  4. On the server to host NuGet Server, open the IIS management tool.
  5. Right click the Default Web Site (note this could be any web site) and select "Add Application..." (A virtual directory will not work!)

  6. Set the alias to "NuGet", leave the "Application Pool" on "DefaultAppPool" (again, you could change this if desired) and set the physical path to what you created for the site. For our example we're using "D:\Sites\NuGetRepo" . 

  7. The default out-of-box settings should work for the site, but in case the Default Web Site settings have been changed you may want to refresh your view and ensure IIS authentication is set to "Anonymous". If desired, change the logging location as well (D:\logfiles\IIS\NuGetRepo for example).
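If you'd rather script this step, here's a sketch of the same setup with the WebAdministration module, using the paths and names from this walkthrough:

```powershell
# Create the site directory and the virtual application under the Default Web Site
Import-Module WebAdministration
New-Item -Path "D:\Sites\NuGetRepo" -ItemType Directory -Force
New-WebApplication -Name "NuGet" -Site "Default Web Site" `
    -PhysicalPath "D:\Sites\NuGetRepo" -ApplicationPool "DefaultAppPool"
```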

Setting File System Permissions on IIS Server

For clients to access the site and repo successfully we need to set file system level permissions. If you have used different directory names above substitute them here accordingly.

Note: We're assuming you have administrative access to this server from your Workstation as well to deploy the code in the steps below. If not, you'll need to grant whoever will be deploying the site access to the site folder. If you are admin, don't worry about it.

  1. The directory containing the website needs to be read by the AppPool account and the Anon user account. Right click D:\Sites\NuGetRepo and select "Properties".
  2. Click "Security", "Edit", and then "Add". Change the "Location" to the local system name.
  3. Give the default web site application pool virtual service account and anonymous account permissions by typing "IIS APPPOOL\DefaultAppPool;IUSR", clicking "Check Names" and then "OK". Again, if you have elected to use a different site/pool/acct you will need to take that into account. This should resolve to two accounts, "DefaultAppPool" and "IUSR".

  4. Give each of the added users "Read & Execute", "List folder contents", and "Read" permissions and then click "OK".

  5. The directory containing the actual repo only needs to be read by the AppPool account. Right click D:\NuGetRepo and select "Properties".
  6. Click "Security", "Edit", and then "Add". Change the "Location" to the local system name.
  7. Give the default web site application pool virtual service account permissions by typing "IIS APPPOOL\DefaultAppPool", clicking "Check Names" and then "OK". Again, if you have elected to use a different site/pool/acct you will need to take that into account. This should resolve to one account, "DefaultAppPool".
  8. Grant it "Read & Execute", "List folder contents", and "Read" permissions and then click "OK".

NuGet Server Config on Workstation

Now we'll grab the NuGet Server package and configure it accordingly. Note some of these options will vary slightly depending on which version of Visual Studio you are using. I'm using 2012 Premium but everything is possible in 2010 Express and up.

Note: We are assuming your IIS server is accessible to you and has file sharing turned on to push the site. If you are unable to get to the filesystem of the server from this machine you will need to use a different deployment mechanism when we get to that step.

  1. On your workstation, open Visual Studio and start a new Project by selecting "File"->"New"->"Project"

  2. Navigate to "Installed"->"Templates"->"Visual C#"->"Web" and select "ASP.NET Empty Web Application"

  3. Right click on your newly created application under the solution and select "Manage NuGet Packages"

  4. Assuming the defaults of the feed and "Stable Only" are selected, type "nuget.server" in the search box and hit Enter
  5. Select the "NuGet.Server" package and click "Install". This will install the NuGet server package and any dependencies. Accept license agreements associated with the other packages to continue and then close the package management window.

  6. The only thing we need to customize is the web.config  file for our installation. In the Solution Explorer click "Web.config" under the web application. Note: This file is also where you can control API Key behavior, but that is outside the scope of this article.
  7. Look for the add key="packagesPath" entry in the web.config file under the "<appSettings>" heading. We need to set this to the location of our repository. Change <add key="packagesPath" value=""/> to <add key="packagesPath" value="D:\NuGetRepo"/> (or other directory if appropriate). Note that there is no trailing slash. Save your project.

  8. Now we need to publish. Click "Build"->"Publish WebApplication..."

  9. If you already have a working publishing profile for the web server, select it and skip to step 12. Otherwise, select <new profile> from the drop-down box, enter a name, and click "OK". 

  10. Change "Publish method" to "File System" and enter the full path to the web server site location, I.E. "\\<server>\d$\sites\NuGetRepo\" . Click "Next".
  11. Accept the default publishing settings and click "Next". 
  12. Review the settings and click "Publish". 
  13. Review the Output window to ensure there weren't any errors.

  14. Test your NuGet Server by navigating to http://<servername>/NuGet/ . If you encounter errors be sure to browse to it locally on the server to get the full error information.

That's it! Now all you need to do is configure the source in your clients, make packages, and enjoy! For instructions on those steps see below, and stay tuned for more. Thanks for reading!
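Configuring the source on a client can be sketched as follows; "InternalRepo" and the package names are placeholders, and note that a NuGet.Server feed lives at /nuget under the application URL:

```powershell
# Register the feed with the NuGet command line client:
nuget sources add -Name "InternalRepo" -Source "http://<servername>/NuGet/nuget"

# Chocolatey can point at the same feed explicitly per install:
choco install mypackage -source "http://<servername>/NuGet/nuget"

# Pushing a package uses the API key configured in web.config:
nuget push MyPackage.1.0.0.nupkg -Source "http://<servername>/NuGet/" -ApiKey <key>
```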

Creating NuGet/Chocolatey Packages

Chocolatey Docs: Create Packages Quick Start
NuGet Docs: Nuspec Reference
Chocolatey Docs: Chocolately Templates
NuGet Docs: Creating and Publishing a Package
Scott Hanselman: Creating a NuGet Package in 7 Easy Steps!
Chocolatey Docs: Creating Chocolatey Packages
Hong Xu: Create and Publish Chocolatey Packages
NuGet Docs: Configuration File and Source Code Transformations

Configuring Sources

Chocolatey Docs: Source command
NuGet Docs: Visual Studio Package Sources


NuGet Docs: Hosting Your Own Feeds
Scott Hanselman: Is the Windows User Ready for Apt-Get?
MBrownNYC: Create Your Own NuGet Server to Serve Packages for Chocolatey
NuGet Docs: An Overview of the NuGet Ecosystem

Tuesday, May 27, 2014

Cloudy I/O Performance - Increasing Azure IOPS (Part 2 of 2)

Note: This is part 2 of a 2 part post. You can find part 1 here.


In the last article we discussed a repeatable testing methodology to quantify storage performance in the cloud, and in this article we'll put that methodology into practice. I've done substantial testing in Azure and aim to illustrate what your options are for scaling performance at this point in time.


I undertook this project to see what can be done to increase disk I/O in Windows Azure IaaS. Upon researching the topic I found several interesting articles. Among those are:

There seems to be little consensus regarding disk striping in Windows Azure IaaS. Some blogs recommend this while some of Microsoft's own writing seems to discourage it. After combing through the options the following points stand out:

  • Disk Striping (Software RAID 0) may or may not increase performance based on your workload.
  • Striping will increase I/O capacity to a degree (which we'll test here).
  • What software striping solution works better: legacy (Windows software RAID from 2000 to present) or Storage Spaces (new software "RAID" in Windows 2012 and up)?
  • How does NTFS cluster size impact performance?
  • If striping, disable geo-replication as Microsoft explicitly warns against the use of geo-replication with this solution.
  • If possible, use native application load distribution rather than disk striping to split I/O. (For example, split DB files in SQL across disks.)
  • Some articles reference needing multiple storage accounts to get maximum performance. This is not true; as of 6/7/2012 storage account targets are 20,000 IOPS per account. Unless you will exceed 20,000, keep all your disks in one account for the sake of simplicity. We will show that this has no impact on performance.

With that said, I want to quantify the solution for my given scaling problem with the notion that if the tests are simple enough to run, this approach can be used for any future scaling problem as well.

Putting it All Together

We'll use the testing methodology outlined in part 1 of this article to collect the results. In this case we need to first add disks and set up stripes in Azure Windows VMs.

Note: To jump straight to Azure disk performance tests, scroll to the bottom of this article.

Create New Disks and Attach to Designated VM

In order to run all the tests listed below, you need to know how to create new disks and attach them to your virtual machine. My favorite solution to this is to use a locally created dynamic VHD and upload it to the location you would like using PowerShell. Let's go through the process of attaching one disk as a primer:
  1. Decide which storage account you will use for these disks. If you plan on doing striping of any kind, ensure the storage account is set to "Locally Redundant" replication (Storage->Desired Storage Account->Configure), as "Geo Redundant" is not supported. Since the replication setting applies to all blobs (the Azure term under which disks are stored) in that account, you may want a dedicated account for these disks so your other accounts can stay Geo Redundant.

  2. Determine which container you would like to store your Azure disk blob in by opening the Azure management portal, navigating to Storage->Desired Storage Account->Containers, and copying the URL to your clipboard. To keep things simple you may want to create a new storage container now and use its URL.
  3. Using Hyper-V (On Windows 2008 or higher including Windows 8) create an empty dynamically expanding VHD disk of your desired size. For my testing I have been using 10GB disks. Note 1: Do not create a VHDX; Azure uses the older VHD format. Note 2: You'll need to re-create the VHD for each disk if you intend on using Storage Spaces as each disk must have a unique ID. 
  4. #create a dynamically expanding 10GB VHD; change size as appropriate
    New-VHD -Path $sourceVHD -SizeBytes 10GB -Dynamic
  5. This disk will be uploaded to the container we selected in step 1. Determine the name you want the disk to be referenced by in Azure and execute the following script:
  6. #import the Azure cmdlets
    Import-Module Azure
    #specify your subscription so PowerShell knows which account to upload the data to
    Select-AzureSubscription "mysubscriptionname"
    #$sourceVHD should be the location of your empty vhd file
    $sourceVHD = "D:\Skydrive\Projects\Azure\AzureEmpty10G_Disk.vhd"
    #$destinationVHD should be the URL of the container plus the name of the vhd you want created in your account. For subsequent disks change the VHD name.
    $destinationVHD = ""
    #now upload it
    Add-AzureVhd -LocalFilePath $sourceVHD -Destination $destinationVHD

  7. Add this new disk as available to VMs by navigating to Virtual Machines->Disks->Create

  8. Enter the desired management name for this disk and input or browse to the URL of the VHD you just uploaded and click the check box.
  9. Attach the disk to your VM by navigating to Virtual Machines->Ensure your desired VM is highlighted->Attach->Attach Disk

  10. Select the disk we just added. Your cache preference will depend on the application. In my case this is off but you will want to use the methodology outlined in the first part of this article to test caching impact for your application. Note a change of cache status requires a VM reboot.
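If you're attaching several disks, the attach step is also easy to script; here's a sketch with the (then-current) Azure PowerShell module, using placeholder service, VM, and disk names:

```powershell
# Attach an already-registered data disk to a running VM at LUN 0
Get-AzureVM -ServiceName "myservice" -Name "myvm" |
    Add-AzureDataDisk -Import -DiskName "MyTestDisk10GB" -LUN 0 |
    Update-AzureVM
```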

Now for a brief tutorial on how to set up our two types of striped disks; you'll likely only be using one of the two but I'll cover both just in case. Performance results of each are outlined later in this article.

Set Up a Traditional Software Stripe in Windows

Setting up a traditional software stripe is easy. I've tested this on Windows 2003 and higher.

  1. Logon to your VM as an admin and open the Disk Management tool.
  2. If prompted, allow the initialization of the disks.
  3. Right-click on one of the newly created empty volumes and select New Striped Volume.

  4. Select the desired disks and continue.

  5. Create and format a new NTFS disk using your striped volume. Make sure to pay attention to the cluster size (results below).

Setup a Storage Spaces Software Stripe in Windows 2012 or Higher

Microsoft introduced a new approach to disk pooling in Windows Server 2012 and Windows 8 called Storage Spaces. This interesting new tech allows for a myriad of different configuration options including disk tiering which can be useful for on-premise servers. In this case we'll be using the "simple" pool type which is similar to disk striping.

  1. Open Server Manager and navigate to File and Storage Services -> Volumes -> Storage Pools
  2. Under Storage Pools you should see "Primordial". (As opposed to "Unused Disks". I'm guessing someone was pretty proud of that.) Right click it and select "New Storage Pool".

  3. Walk through the Wizard selecting each disk you would like to be part of the pool.

  4. On the results page, ensure "Create a virtual disk when this wizard closes" is selected and click "Close".

  5. Walk through the Virtual Disk Wizard, specifying a meaningful name and selecting simple storage layout and fixed provisioning.

  6. On the results page, ensure "Create a volume when this wizard closes" is selected and click "Close".
  7. Complete the New Volume Wizard specifying your desired drive letter and desired NTFS cluster size.
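The Storage Spaces wizard steps above can also be scripted on Server 2012+; a sketch, assuming the pool/disk names are yours to choose and a 32k cluster size:

```powershell
# Pool all poolable disks, carve a simple (striped) fixed virtual disk,
# then initialize, partition, and format it with a 32k NTFS cluster size
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "StripePool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "StripePool" -FriendlyName "StripeDisk" `
    -ResiliencySettingName Simple -ProvisioningType Fixed -UseMaximumSize
Get-VirtualDisk -FriendlyName "StripeDisk" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 32KB
```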

Run Tests/Collect Results!

Now that we have our disks configured, we need to run our tests. For instructions on how to do so, see part 1 of this topic here.

When analyzing the IOMeter output you will want to pay special attention to the following metrics:
  • IOPS (Cumulative, Read, Write; higher is better)
  • MBps (Cumulative, Read, Write; higher is better)
  • Response Time (Avg, Avg Read, Avg Write; lower is better)

If putting the data together for a report, Excel works nicely as I'll display below.


Now for the most important part, the findings. Tests performed:

Cluster Size Tests:

  • 1 Disk, 4k Cluster Size (default)
  • 1 Disk, 8k Cluster Size
  • 1 Disk, 16k Cluster Size
  • 1 Disk, 32k Cluster Size
  • 1 Disk, 64k Cluster Size
  • 3 Disks, 4k Cluster Size (results confirmation test)
  • 3 Disks, 32k Cluster Size (results confirmation test)

Table 1-Cluster Size Tests
Table 2-Cluster Size Verification

Cluster size tests echo what others have observed with Azure: since IOPS are capped at 500 (or 300 for basic VMs), larger cluster sizes can result in higher throughput. In my case 32k was the sweet spot; your results will vary slightly depending on workload. I have seen consistently (albeit slightly) higher performance with larger cluster sizes in Azure.

Legacy Disk Striping Tests:

  • 1 Disk, 32k Cluster Size
  • 2 Disks, Striped Volume, 32k Cluster Size
  • 2 Disks in 2 Storage Accounts, Striped Volume, 32k Cluster Size (Multiple Storage Account Test)
  • 3 Disks, Striped Volume, 32k Cluster Size
  • 4 Disks, Striped Volume, 32k Cluster Size

<See Bar Charts Below Under Disk Striping Methodology>

Table 3-Legacy Striping and Storage Account Tests

You can see with one disk we get 500 IOPS as expected. From there we can see a scaling trend that is most definitely not linear. Two disks result in 33% higher performance, while three disks add an additional 23% (64% over one disk). Adding a fourth disk actually results in a drop from three, coming in at 5% lower than three disks and 56% higher than one.

Additionally, we also see that splitting disks across storage accounts makes no appreciable difference.  Note: Bar charts for this results section have been combined into the graphs below.

Disk Striping Methodology Tests:

  • 2 Disks, Striped Volume, 32k Cluster Size
  • 2 Disks, Storage Spaces Simple, 32k Cluster Size
  • 3 Disks, Striped Volume, 32k Cluster Size
  • 3 Disks, Storage Spaces Simple, 32k Cluster Size
  • 4 Disks, Striped Volume, 32k Cluster Size
  • 4 Disks, Storage Spaces Simple, 32k Cluster Size

Table 4-Legacy Striping vs. Storage Spaces Test

Now we compare legacy striping to the newly introduced Storage Spaces. Two-disk scaling is a definitive win for Storage Spaces, while beyond that legacy striping generally performs better (save max latency). In my opinion the two-disk Storage Spaces stripe is the sweet spot here (a 56% IOPS improvement!) considering that with more disks we add complexity that doesn't pan out on the performance side.


I hope you have found these results interesting; I certainly have. Even if you choose not to run these tests yourself I hope my results prove helpful when sizing your machines. Since the access pattern I used is relatively universal it should be applicable in most scenarios.

Software-level disk striping works relatively well in Microsoft Azure to scale performance beyond the per-disk cap, in lieu of a provider-level solution similar to Amazon EBS provisioned IOPS. Splitting the workload across logical disks or VMs is preferred but not applicable to all workloads. When employing this solution make sure you select only locally redundant replication, because Microsoft warns that geo-redundant replication may cause data consistency issues on the replication target.

For additional information see the links near the top of this article. Thanks for reading!

Tuesday, May 20, 2014

Cloudy I/O Performance - Deciphering IOPS in IaaS (Part 1 of 2)

Note: This is part 1 of a 2 part post. Part 2 can be found here.


Disk performance scaling options in the public cloud seem limited (particularly in Azure as of this writing), but there are ways to increase your IOPS in IaaS solutions. To add to the problem, running full application tests is not only time consuming but expensive due to transaction costs. To tune your storage performance reliably you will need a fast, consistent way to test different configurations. This article covers that methodology and leads into a results/guidance article for Azure (but applicable to other platforms) IaaS storage performance.

We'll be doing this testing on Windows, but you could also easily do this on Linux and the results that I'll be sharing are just as applicable there. To accomplish this testing we'll be using the following tools:

Let's begin!


We will proceed in the following order:
  1. Analyze Workload
  2. Create Test Scenarios
  3. Collect and Analyze Results (Mainly in Part 2)
  4. Findings (In Part 2)


If you plan on emulating my tests you'll need to have access to the following:
  • Microsoft Windows Azure account (note this methodology will work with EC2 or any other platform, including standard hardware/on-prem VMs)
  • IaaS VM Configured. A medium size is recommended for testing 4 disks or fewer to limit the available memory. More on that below.
  • Administrator access to your VM.
  • Confirmation that your workload is in fact disk I/O bound. If you're not sure, you may want to start with this article.
  • Awareness that you will incur additional storage transaction costs by running these tests.

Analysis/Create Workload

Note: If you're just trying to get a general sense for your VM I/O performance capability, you don't need to collect data for a custom access specification. IOMeter includes several tests you can use so skip to the "Install IOMeter..." section below.

The first thing we need to do is create our workload. By using IOMeter we can develop custom access patterns that model common workloads and have the tool and workloads installed and configured in minutes on any machine. There is nearly endless information on this topic, so I won't attempt to create a definitive source here. For details on how to configure and use IOMeter, see the following videos/articles:

 To create an accurate workload you will need a good understanding of the access pattern of your application. If you don't have that information you can use a tool like Perfmon to do analysis on a fully configured platform. The following counters will be of interest when creating your access specification:

  • Physical or Logical Disk: Average Disk Bytes per Read
  • Physical or Logical Disk: Average Disk Bytes per Write
  • Physical or Logical Disk: Disk Read Bytes/sec
  • Physical or Logical Disk: Disk Write Bytes/sec
  • Physical or Logical Disk: Disk Reads/sec
  • Physical or Logical Disk: Disk Writes/sec

For further information, see this excellent Technet Article.
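The counters above can be captured for a baseline with Get-Counter; a sketch sampling every 5 seconds for 5 minutes (the output path is an example):

```powershell
Get-Counter -Counter @(
    '\LogicalDisk(*)\Avg. Disk Bytes/Read',
    '\LogicalDisk(*)\Avg. Disk Bytes/Write',
    '\LogicalDisk(*)\Disk Read Bytes/sec',
    '\LogicalDisk(*)\Disk Write Bytes/sec',
    '\LogicalDisk(*)\Disk Reads/sec',
    '\LogicalDisk(*)\Disk Writes/sec'
) -SampleInterval 5 -MaxSamples 60 |
    Export-Counter -Path "C:\Temp\disk_baseline.blg"
```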

By collecting this data during the access pattern you wish to emulate you can accurately estimate (with one caveat) the information needed to create the IOMeter access specification. That caveat is determining the sequential vs. random access pattern of the platform since Perfmon analysis will reveal the rest. To determine that, you'll need an understanding of how the platform stores and accesses/writes data. In my case I'm tuning my VM for Splunk, which uses a Map/Reduce functionality that has a highly sequential read/write pattern. If you are unsure of your access pattern then err on the side of configuring for mostly random access (90% or so) since it is generally more common and demanding of the underlying storage subsystem. 

Install IOMeter and Config Access Specification

The following actions can be done on your target testing platform or a different machine to stage settings. We'll be saving our settings for quick use later.

  1. Download and install IOMeter on your server. There are a series of ways to stage files on any VM, but if you're looking for a quick way in the Microsoft ecosystem check out my Onedrive/Azure post.

  2. Open IOMeter as administrator.

  3. Under "Topology" configure your workers. Each worker represents one thread generating I/O. By default it will create one per CPU thread available, but in most cases you will only want one worker per process you are emulating. In my case I'm assuming one large query at a time (and we'll scale from there), so I'll be testing with one worker. If you are unsure stick to one worker and you can move up from there when you become more familiar.

  4. Under "Disk Targets" select the disk you wish to test. This can change in later runs so if the disk you want to test isn't present here select a placeholder.
  5. Under "Disk Targets" configure your "Maximum Disk Size". This configures the size of your test file in sectors, which are considered to be 512 bytes each. To lessen the impact of OS caching you need to ensure this value exceeds the amount of RAM present on the machine to be tested. In my case I'll be testing on a 6GB RAM machine with a (approx) 7.5GB file, so I've configured it for 15000000 sectors. (15000000 sectors * 512 bytes per sector=7,680,000,000 bytes)  To do this quickly take your total desired size (in bytes!) and divide it by 512. (If you aren't certain you got it right, check the size iobw.tst file created at the root of your target drive after the first test is complete)

  6. Testing T: With a 4.5GB Test File

  7. Under "Disk Targets" configure your maximum outstanding I/O. This varies depending on access spec and OS, but I've had consistent (with real application access) results testing with 16 maximum outstanding I/O on windows. 
  8. Under "Test Setup" configure your "Ramp Up Time" and "Run Time". Ramp up need only be about 20 seconds for most scenarios and run time is best between 1 and 10 minutes. My results are based on (many per config) 5 minute tests. 
  9. Under "Access Specification" select your access spec. There is far too much to get into here; either select one or many existing access specifications that suit you needs ("4k 75% read" is a good start if you don't care) or create your own based on your findings from the Analysis/Create workload section. For the purposes of my test I made a "_Splunk" access spec with the following characteristics ascertained from my earlier performance testing:
    1. Transfer Request Size: 32kB (NOTE: My access spec may not reflect yours. Most won't be this large)
    2. Percent Read/Write Distribution: 53% Write/47% Read (NOTE: My access spec may not reflect yours. Most specs won't be this write heavy)
    3. Percent Random/Sequential Distribution: 75% Sequential/25% Random (NOTE: My access spec may not reflect yours. Most specs won't be this sequential)

  10. Add your access specification to the list of queued tests if you haven't done so already (removing all others).

  11. Click the disk icon to save the settings to an ICF file. This file will save all your settings including custom access specifications if applicable. Since this file is what you'll use to shortcut future testing, save it somewhere easy to transfer to other VMs such as OneDrive, Dropbox, SpiderOak, etc.

Run the Test

After setting up or loading your test settings, all you need to do is click the green flag to start the test and then select where you would like to save the results. Make sure you don't overwrite any previous results, and give the file a meaningful name so you remember what the test represents later, e.g. "results_3disk_1_StorAcct_Striped_32k_sectors_noCache_run1.csv" or similar.

The test will run for the configured time and then you will be able to run additional tests or analyze results. Since the output is in CSV format, the natural place to look at this data is Excel. When IOMeter starts for the first time on a given disk it needs to create the test file. This can take quite a while in both Amazon EC2 and Azure (15 minutes for my 7.5GB file, for example); I believe this is due to the way space is allocated on the backend storage. Once the file is created, however, you can run subsequent tests on the same volume without waiting for it to be created again. Once a run is done I recommend running several more to ensure your tests aren't subject to wild performance swings. More on analysis in part 2 of this article.

How Much Will This Cost?

Since you're charged by transaction I'm sure you will be wondering how much this will cost. Let's break down your above-baseline (system running) cost in Azure:

IOPS are currently capped at 500 for standard tier machines (300 for basic). Storage transactions are currently $0.01 per 100,000 (halved on 3/14/14). For every 5 minute test, each disk you access will then execute a maximum of 150,000 transactions. As a one-time per-configuration cost, you will need to build the test file, which will take (test file size / volume cluster size) transactions. For example, a 7.5GB test file will be approximately 1,875,000 transactions assuming a default 4kb cluster size. (7,500,000,000/4,000)

Test transactions + creation transactions come to 2 million or so, or about $0.20 at $0.01 per 100,000. So... not much. The amount is generally trivial on Amazon EC2 as well. While this methodology will save you a little in transaction costs, the main savings will be in time and labor (which is usually our real cost anyhow!).
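As a sketch, the arithmetic above works out like this in PowerShell (the pricing and IOPS figures are the ones quoted above and will drift over time):

```powershell
# Back-of-envelope Azure transaction cost for one test configuration.
$iopsCap       = 500            # standard tier cap, as of this writing
$testSeconds   = 5 * 60         # one 5-minute run
$testFileBytes = 7500000000     # 7.5 GB test file
$sectorBytes   = 4000           # sector size used in the estimate above

$runTransactions  = $iopsCap * $testSeconds        # 150,000 max per run
$fileTransactions = $testFileBytes / $sectorBytes  # one-time file creation

$total = $runTransactions + $fileTransactions      # ~2 million
$cost  = [math]::Round(($total / 100000) * 0.01, 2)
"{0:N0} transactions, roughly `${1}" -f $total, $cost
```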

Further Optimization

Once you are comfortable with this process, I would advise the following optimizations. After doing so you may be able to automate the whole routine!

  • Create standard Perfmon counter sets for disk access and save/import them as a template
  • Script the Perfmon analysis with PowerShell
  • Create or download IOMeter templates for common access routines and include them with your set.
  • Script the installation and running of IOMeter, including multiple runs and uploading results to a common location. This is easy to do with PowerShell; refer to the IOMeter manual for command-line options (page 75 or so).
  • Package up all your assets with a custom installer and put it in an easy-to-reach location. (mmmm... Chocolatey)
  • If you want angry followers and think digital bits are out there to be wasted, auto tweet your results! (maybe not this)
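As a starting point for the scripting bullet, here is a minimal sketch of an unattended multi-run IOMeter session. The install path, config file name, and results share are all hypothetical; /c and /r are the config and results switches described in the IOMeter manual.

```powershell
# Sketch: run a saved ICF configuration several times, then stage the results.
$iometer = 'C:\Program Files\IOMeter\IOmeter.exe'   # hypothetical install path
$config  = 'C:\Tests\3disk_striped_32k.icf'         # saved test configuration
$runs    = 3

for ($i = 1; $i -le $runs; $i++) {
    $result = "C:\Tests\results_run$i.csv"
    # /c = config file, /r = results file (per the IOMeter manual)
    Start-Process -FilePath $iometer `
        -ArgumentList "/c `"$config`" /r `"$result`"" -Wait
}

# Upload everything to a common location for later analysis
Copy-Item 'C:\Tests\results_run*.csv' '\\fileserver\ioresults\'
```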

In Closing

I/O testing in the cloud is certainly feasible but requires a little extra discipline. With several access specifications in your toolkit you can conquer most performance problems quickly. What to do if your cloud platform doesn't provide your desired IOPS? Coming up in part 2!

Monday, April 21, 2014

Web Application Proxy Server in 2012 R2

When Microsoft discontinued Threat Management Gateway (which began life as Proxy Server and later became ISA Server), I must admit I was disappointed; it was a relatively inexpensive authenticated reverse proxy that worked with Exchange Server as well as many other complicated products. In the interim we were told that Unified Access Gateway would be the replacement, but that product isn't as well suited to the task.

Several alternatives are out there, including Kemp, F5, Nginx, and Squid, but either the price or the relative difficulty of setup isn't in line with TMG. Fortunately, starting in Windows Server 2012 R2, Microsoft introduced Web Application Proxy, which largely fills the gap.

Web Application Proxy/Server 2012 R2 release party. Trust me, I paid big bucks for this insider photo.

What is Web Application Proxy?

Web Application Proxy (WAP from henceforth) is based on and replaces Active Directory Federation Services Proxy 2.0. In addition to the ADFS Proxy functionality it also introduces the ability to expose internal resources to external users. These users can be pre-authenticated (and then impersonated for SSO) against your Active Directory infrastructure using ADFS prior to being allowed access to resources. 

Wait, This is ADFS Proxy 3.0?

Yup! That and more. Here's what you can do with it:

  • Authorize external users for access to other claims-aware external or internal resources (Generally SaaS).
  • Allow access (by "reverse" proxy) to multiple internal applications on the same port.
  • Pre-Authenticate users against Active Directory via Kerberos or NTLM to facilitate SSO and access to internal applications (if desired)
  • Expose multiple internal resources on a single IP address/port (generally 443) differentiated by hostname
  • Loadbalance using a session affinity based solution in front of WAP

Let's Go!

This article will cover the following:

  • WAP requirements
  • Set up
  • Forwarding a couple of sample applications
  • Troubleshooting

Software Requirements

Web Application Proxy is available on Windows Server 2012 R2 and higher, and it requires ADFS 3.0 to be available on the back end. For assistance in setting up ADFS 3.0, see my article here. If you would like to proxy authentication for non-claims-aware applications, e.g. Exchange OWA pre-2013 SP1 (SP1 added claims support) or Kerberos/NTLM apps, you will need the WAP server joined to your domain.

Additionally, you'll need the certificate (private and public key) from your ADFS server and one certificate (again, private and public) for each application you intend to proxy. These certificates must be trusted by your clients, so generally external globally trusted (Digicert for example) certificate authorities are preferred. The certificates need to be installed under the "Personal" portion of the "Local Machine" store on the machine you intend to use as your WAP proxy. If you only intend to host internal resources to domain-joined computers connecting remotely you can use an internal PKI provided your clients trust your issuing CA(s). For information on how to setup an internal CA, see my article here. If you need help exporting your public and private key from your ADFS server and other services, see this article. Note that if these certificates are marked as non-exportable you will need new certificates for those services, so make sure you plan accordingly.

Connectivity and Hardware/VM Requirements

Preferably, your WAP server should be placed in a demilitarized zone (DMZ) with a firewall on either side of it. The machine can operate with either one or two network interface cards, but for proper security I recommend two NICs: one internal and one external. Other connectivity options will work, including branching into your internal network on the inside interface, but I won't be covering those scenarios in detail. For all connectivity options see the following diagram:

As for the hardware, you can use either physical hardware or a VM, assuming you have a proper DMZ NIC setup on your Hyper-V/ESX/Xen/whatever host(s). WAP is not a particularly demanding application and uses very little I/O. It is also horizontally scalable with a network-level load balancer (an F5, for example), so I won't give direct guidance on specifications since it would likely have little relevance to your configuration. As in most cases, performance evaluation and configuration change is the way to go.

After deciding on your hardware and installing the OS, you'll need to configure the NICs. We'll cover that in the next section...


Now that the hardware and OS are ready to go, let's configure the NICs:

Network Configuration

  1. First open the "Network and Sharing Center" and click "Change Adapter Settings". Re-name the NICs "External" and "Internal" according to how they are connected to avoid confusion during set up and troubleshooting.

  2. Give each NIC appropriate IP address settings. The subnet for each will depend on your firewall/switch configuration. Some firewall configurations may require that communication stay on a single subnet, but if given a choice it is generally better to have the two NICs on different subnets. Leave the default gateway on the internal NIC blank. If your WAP server is not domain joined because you intend to use only claims auth or pass-through (not delegation), leave the DNS servers blank on the internal NIC as well and be sure to execute step 4.
  3. If the WAP server needs to access resources (ADFS, DC, App) on a subnet other than the one the internal NIC is connected to, you will need to add a static route so the server knows how to reach that network. For example, if your WAP server is on, your ADFS server is, and your gateway is, you would issue the following command from an elevated command prompt: route ADD MASK IF -p . For more information, see this article.
  4. <Only if you haven't specified DNS servers on the internal NIC>To look up the ADFS server for claims verification you will need to add each internal ADFS server address to your %SYSTEMROOT%\system32\drivers\etc\hosts file. Do this now; if you need further instructions see this article.
  5. Now we'll secure the external NIC. Open the properties of that NIC and on the "Networking" tab unbind everything except for "QoS Packet Scheduler" and the protocol you intend on using (IPv4 or IPv6).
  6. If using IPv4, drill into the properties of that protocol and select "Disable NetBIOS over TCP/IP" under the "WINS" tab. Also ensure you disable "Register this connection's address in DNS" on the "DNS" tab.

  7. On your external firewall, open the ports for the services you wish to forward. (443 would be common)
  8. On your internal firewall, open ports necessary for AD/other communication. Here is an excellent guide.
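To make steps 3 and 4 concrete, here is a sketch using entirely hypothetical values: WAP internal NIC on 10.0.2.10 with interface index 12, ADFS on the 10.0.1.0/24 subnet, internal gateway 10.0.2.1, and a placeholder ADFS service name. Substitute your own addressing, and find the interface index with route print or Get-NetAdapter.

```powershell
# Persistent static route to the ADFS/DC subnet via the internal NIC.
# 10.0.1.0/24 = destination subnet, 10.0.2.1 = internal gateway,
# IF 12 = interface index of the internal NIC (all hypothetical).
route ADD 10.0.1.0 MASK 255.255.255.0 10.0.2.1 IF 12 -p

# Step 4 only (no internal DNS configured): pin the ADFS service name in the
# hosts file. Name and address are placeholders.
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" `
            -Value "10.0.1.20`tadfs.contoso.com"
```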

WAP Installation

  1. In server manager, click "Manage->Add Roles and Features".
  2. Click "Next" on the "Before you begin" screen.
  3. For "Installation Type" select "Role-based or feature-based installation" & click "Next".

  4. Select your desired WAP server and click "Next".
  5. On "Add Roles and Features Wizard", select the "Remote Access" role and click "Next".

  6. You do not need to select any features; click "Next" on the "Select features" page.
  7. Read the dialog presented on the "Remote Access" screen and click "Next".
  8. Leave "Include management tools" checked and click "Add Features".

  9. On the "Select role services" page select "Web Application Proxy" and click "Next".

  10. When presented with the confirmation screen, click "Install".

WAP Configuration

Prerequisite Note: For this step you will need the public and private key for your internal ADFS server(s) installed to the "Personal" section of the "Local Computer" store on your WAP server. For more information, refer to "Software Requirements" above.

  1. After installation, server manager will notify you that configuration is required. Click the notification flag and select "Open the Web Application Proxy Wizard".

  2. On the "Welcome" screen of the "Web Application Proxy Wizard" click "Next".
  3. On the "Federation Server" screen, enter the external fully qualified domain name of your federation service. This needs to be registered in external DNS (i.e. resolvable from the internet).  For more information, see my article linked under "Software Requirements". Insert the username/password of a domain administrator account to properly register this as a proxy server. This account will not be used after this point, so a service account is not necessary. Click "Next".

  4. Select the ADFS certificate you installed earlier from the dropdown and click "Next".

  5. You'll be presented with the configuration details. If you intend on setting up another WAP server for load balancing copy the powershell command down for later use. Click "Configure" to continue.

  6. You should see the message "Web Application Proxy was configured successfully".
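If you copied down the wizard's PowerShell in step 5 for a second load-balanced node, it should look roughly like the sketch below; the federation service name and certificate thumbprint here are placeholders for your own values.

```powershell
# Stand up an additional WAP node with the same settings as the wizard.
# The credential is a domain admin used only to establish the initial trust.
$cred = Get-Credential

Install-WebApplicationProxy `
    -FederationServiceName 'adfs.contoso.com' `
    -CertificateThumbprint '1A2B3C...' `
    -FederationServiceTrustCredential $cred
```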

Setup Verification

To verify basic functionality:

  1. On the WAP server, open up Tools->Remote Access Management Console
  2. On the left-hand navigation pane, select "Operations Status"
  3. The status of the WAP server will be displayed in the middle pane. Do not be surprised to see the server listed twice, once for the FQDN and once for NetBIOS; this is normal. 
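You can also verify state from PowerShell; a quick sketch:

```powershell
# Show the proxy's trust/configuration settings (federation service name, etc.)
Get-WebApplicationProxyConfiguration

# List any published applications (will be empty on a fresh install)
Get-WebApplicationProxyApplication
```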

Now that setup is complete, let's move on to publishing!

Example A: Proxying Exchange 2010 OWA (Pre-auth/Non-Claims/Delegated Authentication)

Now that we've completed the ADFS/WAP setup, let's walk through the setup of a non-claims aware application using Kerberos/NTLM delegation. A popular example would be Exchange Outlook Web Access; I'll be using version 2010 SP3.

Prerequisite Note: For this step you will need the public and private key for the services you wish to host (Exchange OWA in this case) installed to the "Personal" section of the "Local Computer" store on your WAP server. Requests destined for your back-end service are decrypted and re-encrypted at the WAP server. For more information, refer to "Software Requirements" above.

Trust Setup

First, we must set up the new trust on the ADFS server. On your back-end ADFS server (not the WAP server) do the following: 

  1. Open the AD FS management tool and click the "Trust Relationships" folder on the left hand navigation pane. 
  2. In the right hand action pane, click "Add Non-Claims-Aware Relying Party Trust".

  3. A wizard will pop up; click "Start" on the welcome screen.

  4. Give this trust a meaningful (human-readable) name, e.g. "<Servername> Exchange OWA", along with a description if desired, and click "Next".

  5. Now we'll add the non-claims-aware relying party trust identifier (which in this case is simply a URL). Enter the external fully qualified domain name of the server complete with URL path ending in a trailing forward slash, and click "Next". Note: WAP identifiers must end in a trailing slash even though the MSFT example doesn't look that way.

  6. On the next screen, "Configure Multi-Factor Authentication Now?", you can set up multi-factor authentication should you desire. I will not be configuring multi-factor for this demonstration, but note you can always set it up later if desired. Leave "I do not want to configure..." selected and click "Next".

  7. Review your configuration on the "Ready to Add Trust" screen and click "Next".
  8. The "Finish" screen will have a checkbox starting with "Open the Edit Authorization Rules dialog..." that is checked by default. Leave it checked because we will want to specify who is allowed access through to the back-end via this rule right away. Click "Finish".

  9. A dialog box titled "Edit Claim Rules for <Rule Name>" will come up, allowing us to define who should be allowed access via this rule. Click "Add Rule".

  10. You will be asked to select a rule template. What you select here will depend on what is reasonable for your environment. You should create rules that grant the least access possible, as anyone getting past this point will be able to attempt to authenticate directly against the target internal resource. You may, for example, want to use a specific Active Directory group containing only the users that need access to this resource. For the purposes of testing and this article, however, I will use a simple "Permit All Users" rule. This allows anyone in AD through and is suitable for testing or in addition to other rules. Select the rule template and click "Next".

  11. Click "Finish" to close the "Add Issuance Authorization Claim Rule Wizard"
  12. So long as you do not want any additional rules, click "OK" to close the Edit Claim Rules dialog box.

Back-end Service Configuration

Now we need to configure our back-end service to accept the authentication coming from the WAP server. In our case we will need to change the authentication mechanism allowed by Exchange from forms-based to Integrated Windows authentication. Your steps here will differ depending on what service you are offering up.

  1. Open the Exchange management console and Click on "Server Configuration"->"Client Access"
  2. For each server in your Exchange farm, click the "Outlook Web App" tab, then right click "owa (Default Web Site)" and click "properties".

  3. Select the "Authentication" tab and click "Use one or more standard authentication methods:" then select only "Integrated Windows authentication".

  4. Click "OK" on the warning.
  5. Repeat steps 2 and 3 for the "ecp (Default Web Site)" under "Exchange Control Panel" on each server
  6. Using an elevated command prompt or PowerShell, execute "iisreset -noforce" to restart IIS. (This should be done in a maintenance window)
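The same changes can be made from the Exchange Management Shell; here is a sketch, with the server name as a placeholder (run once per CAS server, during a maintenance window):

```powershell
# Switch OWA and ECP from forms-based to Integrated Windows authentication.
# 'CAS01' is a placeholder for each Client Access server in your farm.
Set-OwaVirtualDirectory -Identity 'CAS01\owa (Default Web Site)' `
    -FormsAuthentication $false -WindowsAuthentication $true

Set-EcpVirtualDirectory -Identity 'CAS01\ecp (Default Web Site)' `
    -FormsAuthentication $false -WindowsAuthentication $true

# Restart IIS so the new authentication settings take effect
iisreset /noforce
```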

Configure Delegation

Now we'll configure the WAP server's AD computer object so that it can pass authentication to your back-end server(s). Note that the SPNs referenced do not need to be manually registered at the domain level.
  1. With domain administrator privileges, open the Active Directory Administrative Center. (Active Directory Users and Computers if you prefer)
  2. Navigate to and open the properties of the WAP server computer object.

  3. Click or scroll down to the "Delegation" section of the object.

  4. Select "Trust this computer for delegation to specified servers only" and "Use any authentication protocol" (since we'll be using NTLM here; select Kerberos only for applications that support it) then click "Add..."
  5. When presented with the "Add Services" dialog, click "Add Users or Computers...".

  6. Type the name of the back-end Exchange server(s) and click "Check Names" and then "OK"
  7. Scroll to http/SERVERNAME.domain.ext (since we're serving up the HTTP protocol; change this if your app differs), select it, then click "OK". Note: If using the Active Directory Administrative Center you need to add both the FQDN name and the NETBIOS name; if using Active Directory Users and Computers you need only add the FQDN and both will be added.
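For reference, the same delegation settings can be applied with the ActiveDirectory PowerShell module. This is a sketch with placeholder computer and server names:

```powershell
Import-Module ActiveDirectory

# 'WAP01' is a placeholder for your WAP server's computer object
$wap = Get-ADComputer 'WAP01'

# "Use any authentication protocol" in the GUI = protocol transition
Set-ADAccountControl -Identity $wap -TrustedToAuthForDelegation $true

# Allow delegation of the http SPN for the back-end Exchange CAS server
Set-ADObject -Identity $wap.DistinguishedName `
    -Add @{'msDS-AllowedToDelegateTo' = 'http/cas01.contoso.com'}
```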

Configure Application Publishing on WAP Server

Finally we'll configure WAP publishing for this application.
  1. On the WAP server, open the Remote Access Management Console (can be found in admin tools or tools from Server Manager)
  2. In the left hand navigation pane, select "Configuration"->"Web Application Proxy"
  3. On the right hand action pane, click "Publish"

  4. A wizard will come up. Click "Next" on the welcome screen.
  5. When prompted for preauthentication type, select "Active Directory Federation Services (AD FS)". This ensures requests are authenticated by ADFS prior to being passed onto the back-end server. Click "Next".

  6. For "Relying Party", select the trust rule we created earlier under the "Trust Setup" section above and click "Next".

  7. Now the meat of the settings; on the "Publishing Settings" step enter a meaningful name for this connection (i.e. Exchange 2010 OWA), the external URL it will be accessed by (i.e., select the external certificate for that service (see "Software Requirements" above), the internal URL (preferably this should match the external, but it doesn't have to in all cases), and the server SPN we specified under "Configure Delegation" above, then click "Next".

  8. You will be shown the confirmation screen with the appropriate PowerShell command line for the options you have configured. Should you want to repeat a similar publishing step, copy and retain this command line for use later. Click "Publish".

  9. The results screen will display the publishing status. Assuming all is well, click "Close" to close the wizard.
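The wizard's PowerShell equivalent for this publishing rule looks roughly like the following; the URLs, relying party name, SPN, and certificate thumbprint are all placeholders for your own values:

```powershell
# Publish a non-claims-aware app with ADFS pre-authentication and
# Kerberos/NTLM delegation to the back-end server.
Add-WebApplicationProxyApplication `
    -Name 'Exchange 2010 OWA' `
    -ExternalPreauthentication ADFS `
    -ADFSRelyingPartyName 'CAS01 Exchange OWA' `
    -ExternalUrl 'https://mail.contoso.com/owa/' `
    -InternalUrl 'https://mail.contoso.com/owa/' `
    -ExternalCertificateThumbprint '1A2B3C...' `
    -BackendServerAuthenticationSpn 'http/cas01.contoso.com'
```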

Example B: RDP Proxy (No Pre-auth/Passthrough)

Pass-through applications are substantially easier (and less secure) to publish because they require no setup in ADFS and do not subject the user's connection attempt to any authentication before passing it on. That isn't to say the back-end service won't require authentication, but this mode is still less secure because you are exposing your back-end service to logon requests directly from the internet. 

Publish RDP Proxy on WAP Server

In this example I will publish RDP direct to the internet, proxied through the WAP server. This allows me to serve up the application on the same IP address and port as other services, provided the hostname requested is unique. Again, this section assumes the public and private keys associated with the URL you intend to use are installed in the WAP server's "Personal" store. In my example I use a hostname of ""
  1. On the WAP server, open the Remote Access Management Console (can be found in admin tools or tools from Server Manager)
  2. In the left hand navigation pane, select "Configuration"->"Web Application Proxy"
  3. On the right hand action pane, click "Publish"
  4. A wizard will come up. Click "Next" on the welcome screen.
  5. When prompted for preauthentication type, select "Pass-through" and click "Next".

  6. On the "Publishing Settings" step enter a meaningful name for this connection (i.e. RDProxy), the external URL it will be accessed by (i.e., select the external certificate for that service (see "Software Requirements" above), and the internal URL (preferably should match the external but doesn't have to in all cases). Click "Next".

  7. You will be given a summary of the publishing rule about to be created along with its PowerShell equivalent. If you are satisfied with the details, click "Publish".
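Again, the PowerShell equivalent (with placeholder URLs and thumbprint) looks roughly like:

```powershell
# Publish a pass-through application: no ADFS pre-authentication,
# requests are forwarded straight to the back-end service.
Add-WebApplicationProxyApplication `
    -Name 'RDProxy' `
    -ExternalPreauthentication PassThrough `
    -ExternalUrl 'https://rdp.contoso.com/' `
    -InternalUrl 'https://rdp.contoso.com/' `
    -ExternalCertificateThumbprint '1A2B3C...'
```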


Troubleshooting

Something not working? Check out the following locations:

Event Logs

Applications and Services Logs->AD FS/Admin
Applications and Services Logs->Microsoft->Windows->WebApplicationProxy/Admin


Should you need to enable debug logging, there is an excellent article here demonstrating how to do so. One word of caution, however: should you edit the C:\Windows\ADFS\Config\microsoft.identityServer.proxyservice.exe.config file referenced therein, I recommend backing it up first. If it is not formatted correctly, WAP will start up successfully with the values listed in the file, but when it comes time to rotate the ADFS Proxy Trust certificate (an automatic action that happens once every 3 weeks) the configuration of the new certificate will fail. In that case you will see Event ID 422 logged to AD FS/Admin stating "Unable to retrieve proxy configuration data from the Federation Service."

(Excellent!) References

Want more? Here are some wonderful resources!

Technet: Web Application Proxy Overview
Technet: Install and Configure the Web Application Proxy Server
Technet: Installing and Configuring Web Application Proxy for Publishing Internal Applications
Technet Overview Guide: Connect to Applications and Services from Anywhere with Web Application Proxy
Technet Social: On WAP and IPv6
Technet Social: ADFS, WAP, and Logging
Technet Ask PFE: FAQ on ADFS Part 1, Excellent coverage of SQL vs. Internal DB and certificates for AD FS (Not WAP per se)
Marc Terblanche: Windows 2012 R2 Preview Web Application Proxy - Exchange 2013 Publishing Tests
Ask the DS Team: Understanding the ADFS 2.0 Proxy (Not about WAP but excellent coverage of AD FS proxy functionality)
Rob Sanders: Troubleshooting ADFS 2.0 (Not about 3.0/WAP but too good not to be mentioned)
Technet: Configure Event Logging on a Federation Server Proxy (Still partially relevant)
Technet: Things to check before troubleshooting ADFS 2.0 (Still partially relevant)
Technet: Configuring Computers for Troubleshooting AD FS 2.0 (Still partially relevant)

Thanks for reading, if you have questions or comments leave them below!