
Thursday, May 29, 2014

Azure Hosted Services offers several really awesome features over using physical servers or standard VM infrastructure. Two of these are the staged deployment model and the management SDK, which includes a PowerShell module. Using these two features, we are going to build a deployment script that deploys a new set of services (servers) in Azure, using a Virtual IP swap to replace the existing production instances only after the new deployment is fully running.
The goal of this post is to build a PowerShell script that will:
  • Upload a compiled Package to Azure Storage
  • Create a new Staging deployment
  • Wait for all of the instances of the new deployment to be running
  • Promote the new deployment to Production
  • Suspend the instances of the old production deployment and keep them handy in the Staging slot
The sample project and script are available on GitHub: tarwn/AzureHostedServiceDeploymentSample
This script is not intended to be production ready. I have spent no time refactoring it into readily re-usable methods and do not use it in a production environment myself. It will show you how to use the individual methods and give you the pieces you need to build one that fits your processes.

Initial Steps

If you would like to build a sample project of your own and follow along, here are the steps you will need to perform first:
  1. Create an Azure project in Visual Studio – Create/attach one or more web or worker roles
  2. Remove the Diagnostics entry in the web.config or add storage settings
  3. In the Project References, select “Microsoft.Web.Infrastructure” and set “Copy Local” to “True”
  4. Create a Hosted Service in the Azure Dashboard
  5. Create a Storage Account in the Azure Dashboard (pick the same region as prior step)
  6. Install the latest Azure SDK + Azure Powershell Module (available in Web Platform Installer)
  7. Download your publish settings from https://windows.azure.com/download/publishprofile.aspx
If you know your way around Azure, steps 4-7 are mostly reading xkcd while the installers run.

Create the Deployment Script

Now that we have a project and all the prerequisites out of the way, let’s start building the script. As a reminder, these are the steps we intend to follow:
  • Upload a compiled Package to Azure Storage
  • Create a new Staging deployment
  • Wait for all of the instances of the new deployment to be running
  • Promote the new deployment to Production
  • Suspend the instances of the old production deployment and keep them handy in the Staging slot
Let’s go!
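Throughout the snippets below I will reference a handful of script parameters. As a rough sketch, the top of the script might declare them like this (the names match the variables used later; the defaults are my own placeholder values, not from the sample repository):
PowerShell
param(
    [string]$publishSettingsPath,  # path to the downloaded *.publishsettings file
    [string]$subscriptionName,     # Azure subscription to deploy under
    [string]$storageAccountName,   # storage account that will hold the package
    [string]$serviceName,          # target Hosted Service name
    [string]$containerName,        # blob container for uploaded packages
    [string]$packagePath,          # local path to the compiled *.cspkg
    [string]$configPath,           # local path to the *.cscfg configuration
    [int]$instancePollRate = 15,   # seconds between instance status checks
    [int]$instancePollLimit = 600  # seconds to wait before giving up
)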

Connect to Azure

The first thing we need to do is import the Powershell module and use the publish settings to set our subscription.
PowerShell
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\ServiceManagement\Azure\Azure.psd1"
 
Import-AzurePublishSettingsFile $publishSettingsPath
 
Set-AzureSubscription $subscriptionName -CurrentStorageAccount $storageAccountName
 
Select-AzureSubscription $subscriptionName -Current
$publishSettingsPath, $subscriptionName, and $storageAccountName are parameters I have passed into my script
We load the Azure module from the Microsoft SDKs folder (this is where the Web Platform Installer puts it). We then use the *.publishsettings file to “log in” to the Azure subscription, set the storage account we will be using by default, and set this subscription as the default one for our current PowerShell session.
Import-AzurePublishSettingsFile basically logs into your Azure account using the supplied publishsettings file, storing a management certificate and a subscription data file. Once we’re “logged in”, we can use the rest of the Azure cmdlets to interact with our Azure resources.
Set-AzureSubscription sets the “current” storage account for the subscription, basically defining a default so we don’t have to specify it throughout the script. Another option would be to use New-AzureStorageContext to create context for the Storage Account and pass this to the calls that interact with Storage.
Select-AzureSubscription does exactly what you would expect: it updates the subscription data in our PowerShell context. By specifying -Current, we only update the subscription for our current session.
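If you would rather not rely on a “current” storage account, a minimal sketch of the New-AzureStorageContext alternative might look like this (assuming you have the storage account key available, e.g. in a $storageAccountKey parameter):
PowerShell
# Build an explicit storage context instead of depending on the
# subscription's "current" storage account setting.
$ctx = New-AzureStorageContext -StorageAccountName $storageAccountName `
                               -StorageAccountKey $storageAccountKey

# Each storage call then receives the context explicitly.
Set-AzureStorageBlobContent -File $packagePath -Container $containerName `
                            -Blob $fullTargetPackageName -Context $ctx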

Upload a compiled Package to Azure Storage

Now that we have access to Azure, we can move on to upload the package. This package can be generated from Visual Studio by right clicking on the Cloud Project and choosing “Package”. In an automated process, we can use MSBuild to create this package before calling this script to upload and deploy it.
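As a sketch of what that build step could look like (the project name and MSBuild path here are my own assumptions, not from the sample project), the Publish target on a cloud project produces the *.cspkg under bin\<Configuration>\app.publish:
PowerShell
# Hypothetical packaging step run before the deployment script.
& "C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" `
    .\MyCloudProject.ccproj /t:Publish /p:Configuration=Release
With a package in hand, the upload itself looks like this: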
PowerShell
$container = Get-AzureStorageContainer -Name $containerName -ErrorAction SilentlyContinue
 
if(!$container){
    New-AzureStorageContainer -Name $containerName
}
 
Set-AzureStorageBlobContent -File $packagePath -Container $containerName `
                            -Blob $fullTargetPackageName -Force
 
$blobInfo = Get-AzureStorageBlob  -Container $containerName -blob $fullTargetPackageName
 
$packageUri = $blobInfo.ICloudBlob.Uri
$packagePath and $containerName are parameters passed to the script, $fullTargetPackageName is generated with a timestamp.
First we create the container if it doesn’t already exist, then we upload the package (without prompting), and once that is complete we capture the blob information and extract the URL for later use in the deployment.
Get-AzureStorageContainer attempts to retrieve a container with the given name. In this case I’ve used the ErrorAction of SilentlyContinue so that if it doesn’t exist I can create it.
New-AzureStorageContainer creates a container with the given name. Since I haven’t specified permissions, the container will be created with the most restrictive rights.
Set-AzureStorageBlobContent uploads the contents of a file specified by -File to the given -Container value with a final name specified by the -Blob property. The -Force overrides any questions the command might have, like “are you sure you want to do that”.
Get-AzureStorageBlob retrieves the information about a given Blob, allowing us to extract the Uri property for later use.
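The snippets above assume $fullTargetPackageName was generated earlier with a timestamp; a minimal sketch of doing that (the naming format is my own choice, not from the original script):
PowerShell
# Derive a unique blob name from the package file plus a timestamp so each
# deployment's package is stored separately in the container.
$packageName = [System.IO.Path]::GetFileNameWithoutExtension($packagePath)
$fullTargetPackageName = "{0}_{1:yyyyMMdd-HHmmss}.cspkg" -f $packageName, (Get-Date)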

Create a new Staging Deployment

Once we have the package uploaded to blob storage, we are ready to create the new Staging deployment.
PowerShell
$deployment = Get-AzureDeployment -ServiceName $serviceName -Slot Staging `
                                  -ErrorAction SilentlyContinue 
 
if($deployment.name -ne $null){
    Remove-AzureDeployment -ServiceName $serviceName -Slot Staging -Force
}
 
New-AzureDeployment -ServiceName $serviceName -Slot Staging -Package $packageUri `
                    -Configuration $configPath -Name $fullTargetDeploymentName `
                    -TreatWarningsAsError
$serviceName, $fullTargetDeploymentName, and $configPath are assumed to have been provided, while $packageUri was defined in the previous step
Before we can create the new deployment, we check to see if there is already a deployment present in the Staging slot and delete it. We then create the new deployment, using the package we just uploaded and a local configuration (*.cscfg) file.
Get-AzureDeployment retrieves details on the current deployment in the specified slot. I’ve used ErrorAction SilentlyContinue here because I am only making this call to determine if something is already there and don’t want to exit out if the slot turns out to be empty.
Remove-AzureDeployment removes the deployment we have detected in the Staging slot, using -Force to again suppress any interactive questions the command might have.
New-AzureDeployment creates a new deployment in the specified slot, using the supplied package URI and the configuration file path (unlike -Package, the -Configuration parameter does not support URLs, so the *.cscfg has to be a local file). I opted to treat warnings as errors because I’d rather clean up warnings immediately. By default the deployment will be started, though there is a -DoNotStart parameter if you do not want this behavior.
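If you did want the deployment created stopped, a sketch of that variation (using the same Set-AzureDeployment -Status mechanism we will use later to suspend the old deployment) might be:
PowerShell
# Create the deployment without starting it...
New-AzureDeployment -ServiceName $serviceName -Slot Staging -Package $packageUri `
                    -Configuration $configPath -Name $fullTargetDeploymentName `
                    -DoNotStart

# ...then start it explicitly when ready.
Set-AzureDeployment -Status -ServiceName $serviceName -Slot Staging `
                    -NewStatus Running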

Wait for all of the instances…

The new deployment has been created and told to start, but it takes time for the individual instances to be provisioned and to go through their start-up sequence.
PowerShell
$statusReady = "ReadyRole"
$statusStopped = "StoppedVM"
 
function Get-AllInstancesAreStatus($instances, $targetStatus){
    foreach ($instance in $instances)
    {
        if ($instance.InstanceStatus -ne $targetStatus)
        {
            return $false
        }
    }
    return $true
}
 
# ... ... ...
 
$deployment = Get-AzureDeployment -ServiceName $serviceName -Slot Staging
 
$waitTime = [System.Diagnostics.Stopwatch]::StartNew()
while ((Get-AllInstancesAreStatus $deployment.RoleInstanceList $statusReady) -eq $false)
{
    if($waitTime.Elapsed.TotalSeconds -gt $instancePollLimit){
        Throw "$instancePollLimit seconds elapsed without all the instances reaching 'ReadyRun'"
    }
 
    Start-Sleep -Seconds $instancePollRate
 
    $deployment = Get-AzureDeployment -ServiceName $serviceName -Slot Staging
}
$serviceName is supplied as a script parameter.
While any instances are not yet in ‘ReadyRole’ status, we sleep for $instancePollRate seconds and then check again. If more than $instancePollLimit seconds go by while waiting, we’ll throw an error that will cause our script to exit.
This poll limit is necessary. In the real world you can have Azure instances that do not boot for long periods of time. Additional logic has been added in Azure that is supposed to detect VMs that are not booting and replace them, but no one writes perfect code, and I have experienced deployments hung for hours or more due to non-booting instances. We can also break our own code, resulting in rapidly re-booting instances that we would not want to deploy to production.
Get-AzureDeployment gets the Azure deployment details, including the list of instances with their names, current statuses, size, etc.

Promote the new deployment to Production, Suspend the old one

Once the staging deployment is up and running, we can promote it to the Production slot.
PowerShell
Move-AzureDeployment -ServiceName $serviceName
 
$deployment = Get-AzureDeployment -ServiceName $serviceName -Slot Staging `
                                  -ErrorAction SilentlyContinue
 
if($deployment.DeploymentName -ne $null){
    Set-AzureDeployment -Status -ServiceName $serviceName -Slot Staging `
                        -NewStatus Suspended
}
 
Remove-AzureAccount -Name $subscriptionName -Force
$serviceName is a parameter passed to the script
Performing the VIP swap is a simple command, and the PowerShell cmdlet turns that asynchronous method into a synchronous call for us, like so many of the others. Once the swap is complete, if we have a deployment in the Staging slot (the old Production one), we go ahead and tell it to suspend, but we don’t wait for the individual instances to stop before exiting.
Move-AzureDeployment performs a VIP swap to swap the Staging and Production deployments.
Get-AzureDeployment gets the Azure deployment details, including the list of instances with their names, current statuses, size, etc.
Set-AzureDeployment with the -Status parameter is used to change the status of a given deployment, in this case Suspending the deployment in the Staging slot.
Remove-AzureAccount is used to remove the Azure subscription data from the PowerShell session, basically the “logout” equivalent of Import-AzurePublishSettingsFile’s “login”.

And we’re deployed…

There is a full script available in the github repository here: /scripts/deployHostedService.ps1. It is not clean and pretty, but it does have more output and error handling than the snippets above. Among other things, it does not clean out all those packages it uploads to blob storage and it most definitely should not be blindly pasted and used for your production environment.
While this may not be a production-ready script, it’s not far off (and I’ve used worse). The few cmdlets above should start to show the pattern that Microsoft used with this PowerShell library. There are plenty of additional cmdlets to interact with storage services, VMs, affinity groups, HDInsight, Media Services…you name it, it’s probably in there.
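An easy way to see just how much is in there is to ask the module itself:
PowerShell
# List every cmdlet the Azure module exposes.
Get-Command -Module Azure

# Then read the built-in help for any of them, e.g.:
Get-Help Move-AzureDeployment -Detailed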
Writing this post, I am reminded how magical this all is. That sample project was only configured to ask for a single server, but I could just as easily have asked for 4 16-core servers and then added in additional web or worker roles, each with their own servers. And I could have done all of that without changing anything at all about this script and I would have had tons of servers deployed, load balanced, and ready to go with just a minor blip as I swapped them into production. I can remember projects with multi-hour manual deployment processes (and month or more system provisioning times), and we just replaced them with a one page script.
The best part is that, unlike some Microsoft frameworks/packages, this magic doesn’t just make a great demo, it also works in real production environments.

Wednesday, May 28, 2014

Optimizing website performance includes setting long expiration dates on our static resources, such as images, stylesheets, and JavaScript files. Doing that tells the browser to cache our files so it doesn’t have to request them every time the user loads a page. This is one of the most important things to do when optimizing websites.
In ASP.NET on IIS7+ it’s really easy. Just add this chunk of XML to the web.config’s <system.webServer> element:

<staticContent>
  <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
</staticContent>
The above code tells the browsers to automatically cache all static resources for 365 days. That’s good and you should do this right now.
The issue becomes clear the first time you make a change to any static file. How is the browser going to know that you made a change, so it can download the latest version of the file? The answer is that it can’t. It will keep serving the same cached version of the file for the next 365 days regardless of any changes you are making to the files.

Fingerprinting

The good news is that it is fairly trivial to make a change to our code that changes the URL pointing to the static files, thereby tricking the browser into believing it’s a brand new resource that needs to be downloaded.
Here’s a little class that I use on several websites, that adds a fingerprint, or timestamp, to the URL of the static file.
using System;
using System.IO;
using System.Web;
using System.Web.Caching;
using System.Web.Hosting;

public class Fingerprint
{
  public static string Tag(string rootRelativePath)
  {
    if (HttpRuntime.Cache[rootRelativePath] == null)
    {
      string absolute = HostingEnvironment.MapPath("~" + rootRelativePath);
      DateTime date = File.GetLastWriteTime(absolute);
      int index = rootRelativePath.LastIndexOf('/');

      string result = rootRelativePath.Insert(index, "/v-" + date.Ticks);
      HttpRuntime.Cache.Insert(rootRelativePath, result, new CacheDependency(absolute));
    }

    return HttpRuntime.Cache[rootRelativePath] as string;
  }
}
All you need to do in order to use this class is to modify the references to the static files.

Modify references

Here’s what it looks like in Razor for the stylesheet reference:
<link rel="stylesheet" href="@Fingerprint.Tag("/content/site.css")" />
…and in WebForms:
<link rel="stylesheet" href="<%=Fingerprint.Tag("/content/site.css") %>" />
The result of using the Fingerprint.Tag method will in this case be:
<link rel="stylesheet" href="/content/v-634933238684083941/site.css" />
Since the URL now has a reference to a non-existing folder (v-634933238684083941), we need to make the web server pretend it exists. We do that with URL rewriting.

URL rewrite

By adding this snippet of XML to the web.config’s <system.webServer> section, we instruct IIS 7+ to intercept all URLs with a folder name containing “v-[numbers]” and rewrite the URL to the original file path.

<rewrite>
  <rules>
    <rule name="fingerprint">
      <match url="([\S]+)(/v-[0-9]+/)([\S]+)" />
      <action type="Rewrite" url="{R:1}/{R:3}" />
    </rule>
  </rules>
</rewrite>
You can use this technique for all your JavaScript and image files as well.
The beauty is, that every time you change one of the referenced static files, the fingerprint will change as well. This creates a brand new URL every time so the browsers will download the updated files.
FYI, you need to run the AppPool in Integrated Pipeline mode for the <system.webServer> section to have any effect.

Tuesday, May 13, 2014

Recently a customer asked me if it’s possible to download a site using msdeploy.exe. It is, and it’s pretty easy. I’ll demonstrate this with Microsoft Azure Web Sites, but you can use this with any hosting provider that supports Web Deploy (aka MSDeploy).
To perform a sync with msdeploy.exe, the structure of the command that we need to execute is as follows.
msdeploy.exe -verb:sync -source:<source-provider> -dest:<dest-provider>
For the source property we will use the remote Azure Web Site, and for the dest property we will write to a folder on the local file system. You can get the Web Deploy publishing settings in the Azure Web Site by clicking on the Download the publish profile link in the Configure page.

This will download an XML file that has all the publish settings you’ll need. For example, below is an abbreviated publish settings file for my demo site (less relevant attributes elided).

<publishData>
  <publishProfile profileName="sayeddemo2 - Web Deploy" publishMethod="MSDeploy"
      publishUrl="waws-prod-bay-001.publish.azurewebsites.windows.net:443"
      msdeploySite="sayeddemo2" userName="$sayeddemo2" userPWD="***removed***" ... />
  <publishProfile profileName="sayeddemo2 - FTP" publishMethod="FTP" ... />
</publishData>
The publish settings file provided by Azure Web Sites has two profiles: an MSDeploy profile and an FTP profile. We can ignore the FTP profile and just use the MSDeploy one. The relevant settings from the profile that we will use are the following values.
  • publishUrl
  • msdeploySite
  • userName
  • userPWD
We will use the contentPath MSDeploy provider to download the files. On the source parameter we will need to include the relevant details of the remote machine. The full command to execute is below. I’ll break it down a bit after the snippet.

"C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe" 
    -verb:sync 
    -source:contentPath=sayeddemo2,
        ComputerName="https://waws-prod-bay-001.publish.azurewebsites.windows.net/msdeploy.axd?site=sayeddemo2",
        UserName=$sayeddemo2,Password=***removed***,AuthType='Basic'  
    -dest:contentPath=c:\temp\pubfromazure -disablerule:BackupRule
 

Thursday, May 8, 2014


On behalf of the Kendo UI team, I am tremendously excited to announce Kendo UI Core, a free and open-source distribution of Kendo UI (released under the Apache License, version 2.0). We have published the kendo-ui-core repository to GitHub and are accepting contributions from the community. A pre-built version is also available for download. Finally, our commercial version of Kendo UI, Kendo UI Complete, has been renamed to Kendo UI Professional.

A Brief History of Kendo UI

986 days ago, we announced Kendo UI to the world. Our team's original goal was to solve the core problem of "framework assembly" that frequently plagued jQuery-based development at the time. Since then, the product's focus and features have expanded to include Kendo UI Mobile, MVVM support, AngularJS bindings, adaptive widgets, and much more. We've incorporated Kendo UI into products and frameworks like the Telerik Platform, Telerik Analytics, Sitefinity, TeamPulse, Test Studio, Google Chrome, Visual Studio, NuGet, Sublime Text, ASP.NET, JSP, PHP, and many others. We've built countless demos, tackled assumptions about HTML5, written whitepapers, and iterated like crazy throughout this entire journey. Looking back, it's humbling when you review just how much has happened: