
Wednesday, August 22, 2012

The recent preview release of OData support in Web API is very exciting (see the new NuGet package and CodePlex project). For the most part it is compatible with the previous [Queryable] support because it supports the same OData query options. That said, there has been a little confusion about how [Queryable] works, what it works with, and what its limitations are, both temporary and long term.

The rest of this post will outline what is currently supported, what limitations currently exist and which limitations are hopefully just temporary.

Current Support

Support for different ElementTypes

In the preview the [Queryable] attribute works with any IQueryable<> or IEnumerable<> data source (Entity Framework or otherwise), for which a model has been configured or can be inferred automatically.

Today this means that the element type (i.e. the T in IQueryable<T>) must be viewed as an EDM entity. This implies a few constraints:

  • All properties you wish to expose must be declared as CLR properties on your class.
  • A key property (or properties) must be available.
  • The type of every property must be either:
    • a CLR type that maps to an EDM primitive (e.g. System.String maps to Edm.String),
    • or a CLR type that maps to another type in your model, be that a complex type or an entity type.

NOTE: using IEnumerable<> is recommended only for small amounts of data, because the options are only applied after everything has been pulled into memory.
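For example, a hypothetical pair of classes like the following satisfies those constraints: every exposed member is a CLR property, the Id property can be picked up as the key (by convention, or configured explicitly in your model), and each property type is either an EDM primitive or another type in the model:

public class Category
{
    public int Id { get; set; }            // key property, maps to Edm.Int32
    public string Name { get; set; }       // maps to Edm.String
}

public class Product
{
    public int Id { get; set; }            // key property
    public string Name { get; set; }       // maps to Edm.String
    public decimal Price { get; set; }     // maps to Edm.Decimal
    public Category Category { get; set; } // another entity type in the model
}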

Null Propagation

This feature takes a little explaining, so please bear with me. Imagine you have an action that looks like this:

[Queryable]
public IQueryable<Product> Get()
{
    …
}

Now imagine someone issues this request:

GET ~/Products?$filter=startswith(Category/Name,'A')

You might think the [Queryable] attribute will translate the request to something like this:
Get().Where(p => p.Category.Name.StartsWith("A"));

But that might be very bad…
If your Get() method body looks like this:

return _db.Products; // i.e. Entity Framework.

It will work just fine. But if your Get() method looks like this:

return products.AsQueryable();

It means the LINQ provider being used is LINQ to Objects (L2O). L2O evaluates the where predicate in memory simply by calling it, which could easily throw a NullReferenceException if either p.Category or p.Category.Name is null.
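To be safe, the translated query needs null guards. Conceptually a guarded version of the predicate looks something like this (a sketch of the idea, not the exact expression tree the framework generates):

Get().Where(p =>
    p.Category != null &&
    p.Category.Name != null &&
    p.Category.Name.StartsWith("A"));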

The [Queryable] attribute handles this automatically by injecting null guards into the code for certain IQueryable Providers. If you dig into the code for ODataQueryOptions you’ll see this code:


string queryProviderAssemblyName = query.Provider.GetType().Assembly.GetName().Name;
switch (queryProviderAssemblyName)
{
    case EntityFrameworkQueryProviderAssemblyName:
        handleNullPropagation = false;
        break;
    case Linq2SqlQueryProviderAssemblyName:
        handleNullPropagation = false;
        break;
    case Linq2ObjectsQueryProviderAssemblyName:
        handleNullPropagation = true;
        break;
    default:
        handleNullPropagation = true;
        break;
}
return ApplyTo(query, handleNullPropagation);

As you can see, for Entity Framework and LINQ to SQL we don't inject null guards (because SQL takes care of null propagation automatically), but for L2O and all other query providers we inject null guards and propagate nulls.
If you don't like this behavior you can override it by dropping down and calling ODataQueryOptions.Filter.ApplyTo(..) directly.
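For example, the sketch below applies only the $filter and turns null propagation off. It assumes a Filter.ApplyTo overload that takes the null-propagation flag, mirroring the internal call shown above; check the actual signature in the preview bits before relying on it:

public IEnumerable<Product> Get(ODataQueryOptions options)
{
    IQueryable results = _db.Products;

    if (options.Filter != null)
    {
        // false = do not inject null guards, even for LINQ to Objects
        results = options.Filter.ApplyTo(results, false);
    }

    return results as IEnumerable<Product>;
}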

Supported Query Options

In the preview the [Queryable] attribute supports only 4 of OData’s 8 built-in query options, namely $filter, $orderby, $skip and $top.

What about the other 4 query options, i.e. $select, $expand, $inlinecount and $skiptoken? Today you need to use ODataQueryOptions rather than [Queryable]; hopefully that will change over time.

Dropping down to ODataQueryOptions

The first thing to understand is that this code:

[Queryable]
public IQueryable<Product> Get()
{
    return _db.Products;
}
Is roughly equivalent to:

public IEnumerable<Product> Get(ODataQueryOptions options)
{
    // TODO: we should add an override of ApplyTo that avoids all these casts!
    return options.ApplyTo(_db.Products as IQueryable) as IEnumerable<Product>;
}

Which in turn is roughly equivalent to:

public IEnumerable<Product> Get(ODataQueryOptions options)
{
    IQueryable results = _db.Products;

    if (options.Filter != null)
        results = options.Filter.ApplyTo(results);
    if (options.OrderBy != null) // this is a slight over-simplification
        results = options.OrderBy.ApplyTo(results);
    if (options.Skip != null)
        results = options.Skip.ApplyTo(results);
    if (options.Top != null)
        results = options.Top.ApplyTo(results);

    return results as IEnumerable<Product>;
}

This means you can easily pick and choose which options to support. For example, if your service doesn't support $orderby you can assert that ODataQueryOptions.OrderBy is null.
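A minimal sketch of that check, rejecting the request with a 400 when $orderby is present:

public IEnumerable<Product> Get(ODataQueryOptions options)
{
    if (options.OrderBy != null)
    {
        // This service does not support $orderby, so fail fast.
        throw new HttpResponseException(HttpStatusCode.BadRequest);
    }

    return options.ApplyTo(_db.Products as IQueryable) as IEnumerable<Product>;
}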

ODataQueryOptions.RawValues

Once you've dropped down to ODataQueryOptions you also get access to the RawValues property, which gives you the raw string values of all 8 OData query options… so in theory you can handle more query options.

ODataQueryOptions.Filter.QueryNode

The ApplyTo method assumes you have an IQueryable, but what if your backend has no IQueryable implementation?

Creating one from scratch is very hard, mainly because LINQ allows so much more than OData allows, and essentially obfuscates the intent of the query.
To avoid this complexity we provide ODataQueryOptions.Filter.QueryNode, which gives you a parsed, metadata-bound AST representing the $filter. The AST is of course tuned to allow only what OData supports, making it much simpler than a LINQ expression.

For example this test fragment illustrates the API:
var filter = new FilterQueryOption("Name eq 'MSFT'", context);
var node = filter.QueryNode;
Assert.Equal(QueryNodeKind.BinaryOperator, node.Expression.Kind);
var binaryNode = node.Expression as BinaryOperatorQueryNode;
Assert.Equal(BinaryOperatorKind.Equal, binaryNode.OperatorKind);
Assert.Equal(QueryNodeKind.Constant, binaryNode.Right.Kind);
Assert.Equal("MSFT", ((ConstantQueryNode)binaryNode.Right).Value);
Assert.Equal(QueryNodeKind.PropertyAccess, binaryNode.Left.Kind);
var propertyAccessNode = binaryNode.Left as PropertyAccessQueryNode;
Assert.Equal("Name", propertyAccessNode.Property.Name);

If you are interested in an example that converts one of these ASTs into another language, take a look at the FilterBinder class. This class is used under the hood by ODataQueryOptions to convert the $filter AST into a LINQ expression of the form Expression<Func<T,bool>>.

You could do something very similar to convert directly to SQL or whatever query language you need. Let me assure you doing this is MUCH easier than implementing IQueryable!
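As a rough illustration, the sketch below walks the same node kinds used in the test fragment above and emits a SQL-like predicate string. The node and kind type names are taken from that fragment; the method itself, the operator mapping and the inline quoting of values are all hypothetical and nowhere near production quality:

static string ToSql(QueryNode node)
{
    switch (node.Kind)
    {
        case QueryNodeKind.BinaryOperator:
        {
            var binary = (BinaryOperatorQueryNode)node;
            // Only Equal is mapped here; a real translator covers every BinaryOperatorKind.
            string op = binary.OperatorKind == BinaryOperatorKind.Equal ? "=" : "<unsupported>";
            return string.Format("({0} {1} {2})", ToSql(binary.Left), op, ToSql(binary.Right));
        }
        case QueryNodeKind.Constant:
            // A real translator would parameterize values instead of inlining them.
            return string.Format("'{0}'", ((ConstantQueryNode)node).Value);

        case QueryNodeKind.PropertyAccess:
            return ((PropertyAccessQueryNode)node).Property.Name;

        default:
            throw new NotSupportedException(node.Kind.ToString());
    }
}

// e.g. ToSql(filter.QueryNode.Expression) for "Name eq 'MSFT'" yields "(Name = 'MSFT')"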

ODataQueryOptions.OrderBy.QueryNode

Likewise you can interrogate ODataQueryOptions.OrderBy.QueryNode for an AST representing the $orderby query option.

Possible Roadmap?

These are just ideas at this stage; really we want to hear what you want. That said, here is what we've been thinking about:

Support for $select and $expand

We hope to add support for both of these, both as QueryNodes (like Filter and OrderBy) and natively via the [Queryable] attribute.

But first we need to work through some issues:

  • The OData Uri Parser (part of ODataContrib) currently doesn’t support $select / $expand, and we need that first.
  • Both $expand and $select essentially change the shape of the response. For example, you are still returning IQueryable<T> from your action but:
    • Each T might have properties that are not loaded. How would the formatter know which properties are not loaded?
    • Each T might have relationships loaded, but simply touching an unloaded relationship might cause lazy loading, so the formatters can't simply hit a relationship during serialization as this would perform terribly; they need to know what to try to format.
  • There is no guarantee that you can ‘expand’ an IEnumerable or for that matter an IQueryable, so we would need a way to tell [Queryable] which options it is free to try to handle automatically.

Support for $inlinecount and $skiptoken

Again we hope to add support to [Queryable] for both of these.
That said, today you can implement both by returning ODataResult<> from your action.
Implementing $inlinecount is pretty simple:

public ODataResult<Product> Get(ODataQueryOptions options)
{
    var results = (options.ApplyTo(_db.Products) as IQueryable<Product>);

    var count = results.Count();

    var limitedResults = results.Take(100).ToArray();

    return new ODataResult<Product>(limitedResults, null, count);
}

However, implementing server driven paging (i.e. $skiptoken) is more involved and easy to get wrong.
I'll blog about how to do server driven paging pretty soon.

Support for more Element Types

We want to support both complex types (complex types are just like entities, except they have no key and no relationships) and primitive element types. For example, both:

public IQueryable<string> Get(); – maps to say GET ~/Tags

and

public IQueryable<Address> Get(parentId); – maps to say GET ~/Person(6)/Addresses

where no key property has been configured or can be inferred for Address.

You might be asking yourself: how do you query a collection of primitives using OData? Well, in OData you use the $it implicit iteration variable like this:

GET ~/Tags?$filter=startswith($it,'A')

Which gets all the Tags that start with ‘A’.

Virtual Properties and Open Types

Essentially virtual properties are things you want to expose as properties via your service that have no corresponding CLR property. A good example might be where you use methods to get and set a property value. This one is a little further out, but it is clearly useful.

Conclusion

As you can see, [Queryable] is a work in progress layered above ODataQueryOptions. We are planning to improve both over time, and we have a number of ideas. But as always, we'd love to hear what you think!

More

Tuesday, August 7, 2012

Quite a while ago I presented a scrappy little macro I created to update version numbers in multiple Visual Studio projects. At the time I commented that Visual Studio 11 wouldn't be supporting macros so, now that VS2012 has RTM'd, here's a "port" to a C# version, using the Visual Studio Extensibility mechanisms.

The starting point is to use the project wizard to create a Visual Studio Add-in (to be found in the Extensibility section of the template list). I chose C# as my programming language in the first step and Visual Studio 2012 as the application host in the next (the techniques work equally well for Visual Studio 2010). The next page asks for a name and description; on page 4, I specified yes to a tools menu item; and on the fifth page I felt a bit lazy and omitted the about box. The wizard creates a bunch of boilerplate code, the most interesting bit of which is the Exec method near the end of Connect.cs: as the name might suggest, this is the routine that gets run when you click the button, select the menu item or otherwise invoke the new command. Here's mine, with my additions in bold:

public void Exec(string commandName, vsCommandExecOption executeOption, ref object varIn, ref object varOut, ref bool handled)
{
    handled = false;
    if (executeOption == vsCommandExecOption.vsCommandExecOptionDoDefault)
    {
        if (commandName == "VersionUpdate.Connect.VersionUpdate")
        {
            using (var w = new MainForm(_applicationObject))
            {
                w.ShowDialog();
            }
            handled = true;
            return;
        }
    }
}

I'm showing a dialog box, in which the user can specify version information - I'd better describe that MainForm... It's a Windows Forms window which contains the same set of controls I used in the macro from last time: a checked list box in which to display found version information, a text box into which to type a new version number, and apply and cancel buttons. (You'll need to add System.Windows.Forms to project references to be able to use these objects.)
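For completeness, here is a rough sketch of how that form's constructor might be wired up (the form and control names are mine, not anything the wizard generates):

public partial class MainForm : System.Windows.Forms.Form
{
    private readonly EnvDTE80.DTE2 _applicationObject;

    public MainForm(EnvDTE80.DTE2 applicationObject)
    {
        InitializeComponent();
        _applicationObject = applicationObject;

        // Fill the checked list box with the version properties found in the solution.
        PopulateList(_applicationObject);
    }
}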

The form constructor iterates over all projects in the solution, adding version data to the list - pretty much the same as the macro, but with a few minor changes in syntax and where resources are found:

private void PopulateList(EnvDTE80.DTE2 dte)
{
    var sol = dte.Solution;
    foreach (Project proj in sol.Projects)
    {
        dynamic csp = proj;
        foreach (var propertyName in new string[] { "AssemblyVersion", "AssemblyFileVersion" })
        {
            try
            {
                var version = csp.Properties.Item(propertyName).Value as string;
                if (!string.IsNullOrWhiteSpace(version))
                {
                    var versionRef = new VersionReference { Project = proj, Id = propertyName, Version = version };
                    this.listProjects.Items.Add(versionRef, true);
                }
            }
            catch
            {
            }
        }
    } // end foreach proj
}

The argument to this function is the _applicationObject passed into MainForm's constructor and offers access to the Visual Studio object model. Rather than work out what type of project each is to determine if the version information is available, I've been incredibly lazy: I cast the project to "dynamic" and then wrap accesses through that object in a try...catch block (which would be tripped when the object doesn't support property access - other errors will trip it too, of course... did I mention I was lazy?). The VersionReference class mentioned in there is just a holder for the three items indicated, with a ToString override to show the project name, version property name and value, that ends up automatically being used when rendering the list (less effort than providing a custom data template):

internal class VersionReference
{
    public override string ToString()
    {
        return string.Format("{0}, {1}={2}", Project.Name, Id, Version);
    }

    internal EnvDTE.Project Project { get; set; }
    internal string Id { get; set; }
    internal string Version { get; set; }
}

To complete the operation, the OK button invokes:

private void btnOK_Click(object sender, EventArgs e)
{
    foreach (VersionReference v in this.listProjects.CheckedItems)
        v.Project.Properties.Item(v.Id).Value = txtVersion.Text;

    this.Close();
}

Putting all that together gives you a new command which can be invoked via the command window, or via a menu item - once it's been registered. I've not investigated building an installer for the add-in yet, but the wizard will register it on the development machine, which is good enough for me for the time being. The process is explained on MSDN.

I've also not changed the icon from the default. Connect.OnConnection contains calls to AddNamedCommand2 for each command you add: a pair of the parameters identify the icon to be used - the fifth indicates if the icon is internal or custom, and the one after identifies the icon. The default smiley face is number 59, it seems. There are a couple of pages on MSDN explaining how to change the icon - a small job for another day, I'll stick with a smiley face until I get bored with it.

The code here is very very close to that in the macro version I wrote before, which isn't too surprising since most of it involves traversing object models that are the same in both cases. There's a minor syntax translation from VB to C# - if I'd thought about it sooner, I could have written this add-in in VB.NET and copied and pasted chunks of code directly.

Deploying the add-in is a case of copying the binary and the .addin file into an appropriate location on the destination machine as described in MSDN documentation. Maybe I'll get round to creating an installation package and publishing it on the Visual Studio gallery, but don't hold your breath waiting for that to happen!

More

I’ve picked something up where Yao Huang Lin of Microsoft left off. For preliminary material, check out his blog and his posts on generating documentation.

In one of his later posts, he suggested creating a help controller. This is where I’ve picked things up. In Yao’s solution, he’s rendering HTML-based views. While that works well and makes for a nice presentation, I wanted to remain within the mode of just returning data, whether it is JSON or XML. Before continuing with this post, please be sure to read Yao’s posts on the topic, as I will be picking up where he left off in his post about other implementations.

The first thing we need is a help controller.  Here is the one I’ve created:

using System.Collections.Generic;
using System.Net;
using System.Web.Http;
using System.Web.Http.Description;

namespace WebAPI.Controllers
{
    [ApiExplorerSettings(IgnoreApi = true)]
    public class HelpController : ApiController
    {
        public List<APIEndPoint> Get()
        {
            return APIDocumentationRepository.Get();
        }

        public APIEndPoint Get(string api)
        {
            return APIDocumentationRepository.Get(api);
        }
    }
}

Nothing all that complicated here. Like all good controllers, this one is thin – with just enough logic to expose and service the endpoints. I’ve created an APIDocumentationRepository class to handle all of the data-related operations. One point to focus on is the attribute: [ApiExplorerSettings(IgnoreApi = true)]. We don’t want the help controller itself to appear in the documentation. No need to do that, since in order to get to the help documentation, you need to know the help endpoint exists in the first place!

There are two endpoints: one to get all of the endpoints and another to get a specific endpoint. In my earlier posts, I was referencing a simple Products Controller. I’m continuing to use that same controller here. For review, here is the listing for that controller:

using System;
using System.Linq;
using System.Net.Http;
using System.Web.Http;
using WebApi.Models;

namespace WebApi.Controllers
{
    public class ProductsController : ApiController
    {
        /// <summary>
        /// Returns the Product Collection.
        /// </summary>
        /// <returns></returns>
        [Queryable]
        public IQueryable<Product> GetProducts()
        {
            return ProductsRepository.data.AsQueryable();
        }

        /// <summary>
        /// Returns an individual Product.
        /// </summary>
        /// <param name="id">The Product id.</param>
        /// <returns></returns>
        public Product GetProduct(int id)
        {
            try
            {
                return ProductsRepository.get(id);
            }
            catch (NotFoundException)
            {
                throw new HttpResponseException(new HttpResponseMessage()
                {
                    StatusCode = System.Net.HttpStatusCode.NotFound
                });
            }
        }

        /// <summary>
        /// Deletes the Products Collection and reverts back to original state.
        /// </summary>
        /// <returns></returns>
        [HttpDelete]
        public void ResetProducts()
        {
            ProductsRepository.reset();
        }

        /// <summary>
        /// Deletes an individual Product.
        /// </summary>
        /// <param name="id">The Product id.</param>
        /// <returns></returns>
        public void DeleteProduct(int id)
        {
            try
            {
                ProductsRepository.delete(id);
            }
            catch (NotFoundException)
            {
                throw new HttpResponseException(new HttpResponseMessage()
                {
                    StatusCode = System.Net.HttpStatusCode.NotFound
                });
            }
        }

        /// <summary>
        /// Updates an individual Product.
        /// </summary>
        /// <param name="product">The Product object.</param>
        /// <returns></returns>
        public void PutProduct(Product product)
        {
            ProductsRepository.update(product);
        }

        /// <summary>
        /// Creates a new Product.
        /// </summary>
        /// <param name="product">The Product object.</param>
        /// <returns></returns>
        public void PostProduct(Product product)
        {
            ProductsRepository.add(product);
        }
    }
}

There are a few changes from the earlier versions of this controller. As you can see, I’m using the XML Documentation features Yao talks about in his post. I’ve simply employed the technique he describes.
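For the Documentation strings later in this post to be populated, an IDocumentationProvider has to be registered with Web API. Using the XmlCommentDocumentationProvider class from Yao's sample (that class and the XML file path are assumptions from his posts, not part of Web API itself), the registration looks roughly like this:

// e.g. in Application_Start: fill ApiDescription.Documentation from the build's XML documentation file
GlobalConfiguration.Configuration.Services.Replace(
    typeof(System.Web.Http.Description.IDocumentationProvider),
    new XmlCommentDocumentationProvider(
        System.Web.HttpContext.Current.Server.MapPath("~/App_Data/WebAPI.xml")));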

The next thing to cover is the APIDocumentationRepository class. Here is the code for that class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.Web;
using System.Web.Http;
using System.Web.Http.Description;

namespace WebAPI
{
    public class APIDocumentationRepository
    {
        public static APIEndPoint Get(string apiName)
        {
            return getAPIEndPoint(apiName);
        }

        public static List<APIEndPoint> Get()
        {
            var Controllers = GlobalConfiguration
                .Configuration
                .Services
                .GetApiExplorer()
                .ApiDescriptions
                .GroupBy(x => x.ActionDescriptor.ControllerDescriptor.ControllerName)
                .Select(x => x.First().ActionDescriptor.ControllerDescriptor.ControllerName)
                .ToList();

            var apiEndPoints = new List<APIEndPoint>();

            foreach (var controller in Controllers)
            {
                apiEndPoints.Add(getAPIEndPoint(controller));
            }

            return apiEndPoints;
        }

        static APIEndPoint getAPIEndPoint(string controller)
        {
            var apis = GlobalConfiguration
                .Configuration
                .Services
                .GetApiExplorer()
                .ApiDescriptions
                .Where(x => x.ActionDescriptor.ControllerDescriptor.ControllerName == controller);

            List<APIEndPointDetail> apiEndPointDetails = null;

            if (apis.ToList().Count > 0)
            {
                apiEndPointDetails = new List<APIEndPointDetail>();

                foreach (var api in apis)
                {
                    apiEndPointDetails.Add(getAPIEndPointDetail(api));
                }
            }
            else
            {
                controller = string.Format("The {0} api does not exist.", controller);
            }

            return new APIEndPoint(controller, apiEndPointDetails);
        }

        static APIEndPointDetail getAPIEndPointDetail(ApiDescription api)
        {
            if (api.ParameterDescriptions.Count > 0)
            {
                var parameters = new List<APIEndPointParameter>();

                foreach (var parameter in api.ParameterDescriptions)
                {
                    parameters.Add(new APIEndPointParameter(parameter.Name, parameter.Documentation, parameter.Source.ToString()));
                }

                return new APIEndPointDetail(api.RelativePath, api.Documentation, api.HttpMethod.Method, parameters);
            }
            else
            {
                return new APIEndPointDetail(api.RelativePath, api.Documentation, api.HttpMethod.Method);
            }
        }
    }

    [DataContract]
    public class APIEndPoint
    {
        [DataMember] public string Name { get; private set; }
        [DataMember] public List<APIEndPointDetail> APIEndPointDetails { get; private set; }

        public APIEndPoint(string name, List<APIEndPointDetail> apiEndPointDetails)
        {
            Name = name;
            APIEndPointDetails = apiEndPointDetails;
        }
    }

    [DataContract]
    public class APIEndPointDetail
    {
        [DataMember]
        public string RelativePath { get; private set; }

        [DataMember]
        public string Documentation { get; private set; }

        [DataMember]
        public string Method { get; private set; }

        [DataMember]
        public List<APIEndPointParameter> Parameters { get; private set; }

        public APIEndPointDetail(string relativePath, string documentation, string method,
            List<APIEndPointParameter> parameters) : this(relativePath, documentation, method)
        {
            Parameters = parameters;
        }

        public APIEndPointDetail(string relativePath, string documentation, string method)
        {
            RelativePath = relativePath;
            Documentation = documentation;
            Method = method;
        }
    }

    [DataContract]
    public class APIEndPointParameter
    {
        [DataMember]
        public string Name { get; set; }

        [DataMember]
        public string Documentation { get; private set; }

        [DataMember]
        public string Source { get; private set; }

        public APIEndPointParameter(string name, string documentation, string source)
        {
            Name = name;
            Documentation = documentation;
            Source = source;
        }
    }
}

With everything in place, including all of the things outlined in Yao’s posts, this URL:
http://localhost:18950/api/help?api=Products – returns the following API documentation for the Products API:

{
  "Name": "Products",
  "APIEndPointDetails": [
    {
      "RelativePath": "api/Products",
      "Documentation": "Returns the Product Collection.",
      "Method": "GET"
    },
    {
      "RelativePath": "api/Products/{id}",
      "Documentation": "Returns an individual Product.",
      "Method": "GET",
      "Parameters": [
        {
          "Name": "id",
          "Documentation": "The Product id.",
          "Source": "FromUri"
        }
      ]
    },
    {
      "RelativePath": "api/Products",
      "Documentation": "Deletes the Products Collection and reverts back to original state.",
      "Method": "DELETE"
    },
    {
      "RelativePath": "api/Products/{id}",
      "Documentation": "Deletes an individual Product.",
      "Method": "DELETE",
      "Parameters": [
        {
          "Name": "id",
          "Documentation": "The Product id.",
          "Source": "FromUri"
        }
      ]
    },
    {
      "RelativePath": "api/Products",
      "Documentation": "Updates an individual Product.",
      "Method": "PUT",
      "Parameters": [
        {
          "Name": "product",
          "Documentation": "The Product object.",
          "Source": "FromBody"
        }
      ]
    },
    {
      "RelativePath": "api/Products",
      "Documentation": "Creates a new Product.",
      "Method": "POST",
      "Parameters": [
        {
          "Name": "product",
          "Documentation": "The Product object.",
          "Source": "FromBody"
        }
      ]
    }
  ]
}

And if XML is your thing, no problem. Simply set the request's Accept header to application/xml:

<APIEndPoint xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/WebAPI">
  <APIEndPointDetails>
    <APIEndPointDetail>
      <Documentation>Returns the Product Collection.</Documentation>
      <Method>GET</Method>
      <Parameters i:nil="true" />
      <RelativePath>api/Products</RelativePath>
    </APIEndPointDetail>
    <APIEndPointDetail>
      <Documentation>Returns an individual Product.</Documentation>
      <Method>GET</Method>
      <Parameters>
        <APIEndPointParameter>
          <Documentation>The Product id.</Documentation>
          <Name>id</Name>
          <Source>FromUri</Source>
        </APIEndPointParameter>
      </Parameters>
      <RelativePath>api/Products/{id}</RelativePath>
    </APIEndPointDetail>
    <APIEndPointDetail>
      <Documentation>Deletes the Products Collection and reverts back to original state.</Documentation>
      <Method>DELETE</Method>
      <Parameters i:nil="true" />
      <RelativePath>api/Products</RelativePath>
    </APIEndPointDetail>
    <APIEndPointDetail>
      <Documentation>Deletes an individual Product.</Documentation>
      <Method>DELETE</Method>
      <Parameters>
        <APIEndPointParameter>
          <Documentation>The Product id.</Documentation>
          <Name>id</Name>
          <Source>FromUri</Source>
        </APIEndPointParameter>
      </Parameters>
      <RelativePath>api/Products/{id}</RelativePath>
    </APIEndPointDetail>
    <APIEndPointDetail>
      <Documentation>Updates an individual Product.</Documentation>
      <Method>PUT</Method>
      <Parameters>
        <APIEndPointParameter>
          <Documentation>The Product object.</Documentation>
          <Name>product</Name>
          <Source>FromBody</Source>
        </APIEndPointParameter>
      </Parameters>
      <RelativePath>api/Products</RelativePath>
    </APIEndPointDetail>
    <APIEndPointDetail>
      <Documentation>Creates a new Product.</Documentation>
      <Method>POST</Method>
      <Parameters>
        <APIEndPointParameter>
          <Documentation>The Product object.</Documentation>
          <Name>product</Name>
          <Source>FromBody</Source>
        </APIEndPointParameter>
      </Parameters>
      <RelativePath>api/Products</RelativePath>
    </APIEndPointDetail>
  </APIEndPointDetails>
  <Name>Products</Name>
</APIEndPoint>

Enjoy…

 

More

.NET Framework Cleanup Tool User's Guide

Introduction

This .NET Framework cleanup tool is designed to automatically perform a set of steps to remove selected versions of the .NET Framework from a computer.  It will remove files, directories, registry keys and values and Windows Installer product registration information for the .NET Framework.  The tool is intended primarily to return your system to a known (relatively clean) state in case you are encountering .NET Framework installation, uninstallation, repair or patching errors so that you can try to install again.

There are a couple of very important caveats that you should review before using this tool to remove any version of the .NET Framework from your system:

  • This tool is designed as a last resort for cases where install, uninstall, repair or patch installation did not succeed for unusual reasons.  It is not a substitute for the standard uninstall procedure.  You should try the steps listed in this blog post before using this cleanup tool.
  • This cleanup tool will delete shared files and registry keys used by other versions of the .NET Framework.  If you run the cleanup tool, you will need to perform a repair/re-install for all other versions of the .NET Framework that are on your computer or they will not work correctly afterwards.

Download location

The .NET Framework cleanup tool is available for download at the following locations:

The .zip file that contains the tool also contains a file named history.txt that lists when the most recent version of the tool was published and what changes have been made to the tool over time.

Supported products

The .NET Framework cleanup tool supports removing the following products:

  • .NET Framework - All Versions
  • .NET Framework - All Versions (Tablet PC and Media Center)
  • .NET Framework - All Versions (Windows Server 2003)
  • .NET Framework - All Versions (Windows Vista and Windows Server 2008)
  • .NET Framework - All Versions (Windows 7)
  • .NET Framework - All Versions (Windows 8)
  • .NET Framework 1.0
  • .NET Framework 1.1
  • .NET Framework 2.0
  • .NET Framework 3.0
  • .NET Framework 3.5
  • .NET Framework 4
  • .NET Framework 4.5

Not all of the above products will appear in the UI for the .NET Framework cleanup tool on every operating system.  The cleanup tool contains logic so that if it is run on an OS version that includes the .NET Framework as an OS component, it will not offer the option to clean it up.  This means that running the cleanup tool on Windows XP Media Center Edition or Tablet PC Edition will not offer the option to clean up the .NET Framework 1.0; running it on Windows Server 2003 will not offer the option to clean up the .NET Framework 1.1; and running it on Windows Vista or Windows Server 2008 will not offer the option to clean up the .NET Framework 2.0 or the .NET Framework 3.0.

When choosing to remove any of the above versions of the .NET Framework, the cleanup tool will also remove any associated hotfixes and service packs.  You do not need to run any separate steps to remove the service pack(s) for a version of the .NET Framework.

Silent installation mode

The .NET Framework cleanup tool supports running in silent mode.  In this mode, the tool will run without showing any UI, and the user must pass in a version of the .NET Framework to remove as a command line parameter.  To run the cleanup tool in silent mode, you need to download the cleanup tool, extract the file cleanup_tool.exe from the zip file, and then run it using syntax like the following:

cleanup_tool.exe /q:a /c:"cleanup.exe /p <name of product to remove>"

The value that you pass with the /p switch to replace <name of product to remove> in this example must exactly match one of the products listed in the Supported products section above.  For example, if you would like to run the cleanup tool in silent mode and remove the .NET Framework 1.1, you would use a command line like the following:

cleanup_tool.exe /q:a /c:"cleanup.exe /p .NET Framework 1.1"

One important note – as indicated above, the cleanup tool will not allow you to remove a version of the .NET Framework that is installed as part of the OS it is running on.  That means that even if you try this example command line on Windows Server 2003, the tool will exit with a failure return code and not allow you to remove the .NET Framework 1.1 because it is a part of that OS.

Similarly, you cannot use the cleanup tool to remove the .NET Framework 1.0 from Windows XP Media Center Edition or Windows XP Tablet PC Edition or remove the .NET Framework 2.0 or 3.0 from Windows Vista or Windows Server 2008.  In addition, if you run the cleanup tool on an OS that has any edition of the .NET Framework installed as a part of the OS, it will prevent you from using the .NET Framework - All Versions option because there is at least one version that it cannot remove.

If you are planning to run the cleanup tool in silent mode, you need to make sure to detect what OS it is running on and not pass in a version of the .NET Framework with the /p switch that is a part of the OS or make sure that you know how to handle the failure exit code that you will get back from the cleanup tool in that type of scenario.

Unattended installation mode

The .NET Framework cleanup tool supports running in unattended mode.  In this mode, the tool will run and only show a progress dialog during removal, but will require no user interaction.  Unattended mode requires the user to pass in a version of the .NET Framework to remove as a command line parameter.  To run the cleanup tool in unattended mode, you need to download the cleanup tool, extract the file cleanup_tool.exe from the zip file, and then run it using syntax like the following:

cleanup_tool.exe /q:a /c:"cleanup.exe /p <name of product to remove> /u"

For example, if you would like to run the cleanup tool in unattended mode and remove the .NET Framework 1.1, you would use a command line like the following:

cleanup_tool.exe /q:a /c:"cleanup.exe /p .NET Framework 1.1 /u"

Exit codes

The cleanup tool can return the following exit codes:

  • 0 - cleanup completed successfully for the specified product
  • 3010 - cleanup completed successfully for the specified product and a reboot is required to complete the cleanup process
  • 1 - cleanup tool requires administrative privileges on the machine
  • 2 - the required file cleanup.ini was not found in the same path as cleanup.exe
  • 3 - a product name was passed in that cannot be removed because it is a part of the OS on the system that the cleanup tool is running on
  • 4 - a product name was passed in that does not exist in cleanup.ini
  • 100 - cleanup was able to start but failed during the cleanup process
  • 1602 - cleanup was cancelled
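If you script the tool, you can branch on those exit codes. Here is a minimal C# sketch, assuming cleanup_tool.exe sits next to the calling program and that the product name is the one you want to remove:

using System.Diagnostics;

class CleanupRunner
{
    static int Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "cleanup_tool.exe",
            Arguments = "/q:a /c:\"cleanup.exe /p .NET Framework 1.1 /u\"",
            UseShellExecute = false
        };

        using (var process = Process.Start(psi))
        {
            process.WaitForExit();

            // 0 = success, 3010 = success but a reboot is required (see the list above)
            if (process.ExitCode == 0 || process.ExitCode == 3010)
            {
                return 0;
            }

            return process.ExitCode;
        }
    }
}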

Log files

The cleanup tool creates the following log files:

  • %temp%\cleanup_main.log - a log of all activity during each run of the cleanup tool; this is a superset of the logs listed below as well as some additional information
  • %temp%\cleanup_actions.log - a log of actions taken during removal of each product; it will list files that it finds and removes, product codes it tries to remove, registry entries it tries to remove, etc.
  • %temp%\cleanup_errors.log - a log of errors and warnings encountered during each run of the cleanup tool

More

Wednesday, August 1, 2012

It is often a good idea to isolate our domain model from consuming applications by using a service layer and data transfer objects (DTOs) or application-specific models. Using DTOs means that we need two-way mapping between domain classes and DTOs. In this posting I will show you how to use AutoMapper to build a generic base class for your mappers.

AutoMapper

AutoMapper is a powerful object-to-object mapper that is also able to do smart and complex mappings between objects, and you can modify existing mappings and define your own. Although it is possible to write much faster lightweight mappers, AutoMapper still offers very good performance considering all the nice features it provides.

What’s most important – AutoMapper is easy to use and it fits the context of this posting perfectly.

Why mapping?

For those who have no idea about the problem scope, let me explain a little bit why mapping between domain classes and application-specific models or DTOs is needed. Often domain classes have complex dependencies on each other, and they may also have complex dependencies on their technical environment. Domain classes may have cyclical references that make it very hard to serialize them to text-based formats. And domain classes may be hard to create.

For example, if you are using a powerful vendor-offered grid component, then this component may want to serialize its data source so it can use it on the client side to provide quick sorting and filtering of grid data. Moving from server to client means serialization to JSON or XML. If our domain objects have cyclical references (and it is normal that they do) then we are in trouble. We have to use something lighter and less powerful, so we use DTOs and models.

If you go through all the complexities mentioned before, you will find more issues with using domain classes as models. As we have to use lightweight models, we need mappings between domain classes and models.

Base mapper

Instead of writing a separate mapper for each type pair, you can use AutoMapper to avoid writing many lines of repeating code. Here is my base class for mappers.


public abstract class BaseMapper<T, U> where T : BaseEntity where U : BaseDto, new()
{
    protected IMappingExpression<U, T> DtoToDomainMapping { get; private set; }
    protected IMappingExpression<T, U> DomainToDtoMapping { get; private set; }

    public BaseMapper()
    {
        DomainToDtoMapping = Mapper.CreateMap<T, U>();

        // expose the DTO-to-domain mapping so inheriting classes can extend it
        DtoToDomainMapping = Mapper.CreateMap<U, T>()
            .ForMember(m => m.Id, m => m.Ignore());

        var refProperties = from p in typeof(T).GetProperties()
                            where p.PropertyType.BaseType == typeof(BaseEntity)
                            select p;

        foreach (var prop in refProperties)
        {
            DtoToDomainMapping.ForMember(prop.Name, m => m.Ignore());
        }

        Mapper.CreateMap<PagedResult<T>, PagedResult<U>>()
            .ForMember(m => m.Results, m => m.Ignore());
    }

    public U MapToDto(T instance)
    {
        if (instance == null)
            return null;

        var dto = new U();

        Mapper.Map(instance, dto);

        return dto;
    }

    public IList<U> MapToDtoList(IList<T> list)
    {
        if (list == null)
            return new List<U>();

        var dtoList = new List<U>();

        Mapper.Map(list, dtoList);

        return dtoList;
    }

    public PagedResult<U> MapToDtoPagedResult(PagedResult<T> pagedResult)
    {
        if (pagedResult == null)
            return null;

        var dtoResult = new PagedResult<U>();
        Mapper.Map(pagedResult, dtoResult);
        Mapper.Map(pagedResult.Results, dtoResult.Results);

        return dtoResult;
    }

    public void MapFromDto(U dto, T instance)
    {
        Mapper.Map(dto, instance);
    }
}






It does all the dirty work and in most cases it provides all functionality I need for type-pair mapping.

In the constructor I define mappings for domain class to model and model to domain class. I also define a mapping for PagedResult – this is the class I use for paged results. If inheriting classes need to modify the mappings, they can access the protected properties.

Also notice how I play with the domain base class: the code avoids situations where AutoMapper may overwrite IDs and properties that extend the domain base class. When you start using mapping you very soon find out what a mess AutoMapper can create if you don’t use it carefully.

Methods of mapper base:


  • MapToDto – takes a domain object and returns the mapped DTO.
  • MapToDtoList – takes a list of domain objects and returns a list of DTOs.
  • MapToDtoPagedResult – takes a paged result with domain objects and returns a paged result with DTOs.
  • MapFromDto – maps DTO properties to a domain object.

If you need more mapping helpers you can upgrade my class with your own code.
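The base classes the mapper relies on are not shown in this posting. For reference, minimal stand-ins could look like this (these are my assumptions; your real BaseEntity, BaseDto and PagedResult are probably richer):

public abstract class BaseEntity
{
    public int Id { get; set; }
}

public abstract class BaseDto
{
    public int Id { get; set; }
}

public class PagedResult<T>
{
    public IList<T> Results { get; set; }
    public int TotalCount { get; set; }
    public int PageIndex { get; set; }
    public int PageSize { get; set; }

    public PagedResult()
    {
        Results = new List<T>();
    }
}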

Example

To give you a better idea of how to extend my base class, here is an example.



public class FillLevelMapper : BaseMapper<FillLevel, FillLevelDto>
{
    public FillLevelMapper()
    {
        DomainToDtoMapping.ForMember(
            l => l.Grade, m => m.MapFrom(l => l.Grade.GradeNo)
        );
    }
}






Mapper classes extend BaseMapper and add their specific mappings that the base mapper doesn’t provide.
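Using such a mapper is then a one-liner wherever the service layer needs a DTO. A hypothetical usage sketch (fillLevel and levels are assumed domain instances):

var mapper = new FillLevelMapper();

FillLevelDto dto = mapper.MapToDto(fillLevel);          // single object
IList<FillLevelDto> dtos = mapper.MapToDtoList(levels); // list of objects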

Conclusion

Mapping is one of the repeating patterns in many systems. After building some mappers from scratch you start recognizing the parts they have in common. I was able to separate the common operations of my mappers into a base class using generics and AutoMapper. Mapper classes are very thin and therefore also much easier to test. AutoMapper does a lot of dirty work for me that is otherwise time-consuming to code. Of course, for all its power, you must use AutoMapper carefully so it doesn’t do too much work.

More