
Tuesday, July 29, 2008

If you don't know about CTRL+I in Visual Studio, go try it, I'll be waiting...

So? Ain't that cool?

(CTRL+I does incremental search, so it will progressively select the first occurrence of whatever you type. It's a lot less disruptive than CTRL+F as a means to search your files)

 

Thursday, July 24, 2008

I recently added membership, accounts, login, and related features to the AspDotNetMVC site. While doing so, I decided I wanted to support OpenID, too. However, I didn't want to go with OpenID alone, because I needed ASP.NET Membership in place to work in conjunction with another application: a Kigg site used as a service for rating content on the AspDotNetMVC site. I probably could have converted the Kigg code base to use OpenID, but I also wanted people who may not have OpenID to be able to create traditional accounts on the site without signing up for it. Following are the steps I took to implement an OpenID login and integrate it with "traditional" ASP.NET membership. Follow these six steps and you can do the same.

 


In fact in my latest project, I've removed the ScriptManager from my ASP.NET pages entirely.  Originally I was including both jQuery and the ScriptManager on my pages because I just couldn't live without the ease and simplicity of calling page methods with ASP.NET AJAX. 

Well, with a little help from Dave Ward and Rick Strahl, I realized that calling ASP.NET page methods (and web services) from jQuery is really pretty simple.  I haven't seen any one place where the exact code to do this is presented, so here is the code that I use to call page methods with jQuery: 

Step 1: Define your web methods in your code behind:

[WebMethod()]
public static int TestNoParams()
{
    return 1;
}

[WebMethod()]
public static string TestWithParams(string a, int b)
{
    return a + b.ToString();
}

Step 2: Include the jQuery library:

<script type="text/javascript"
        src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js">
</script>

I also use the following (optional) helper function to call page methods:

<script type="text/javascript">
function PageMethod(fn, paramArray, successFn, errorFn)
{
    var pagePath = window.location.pathname;

    // Build the parameter list in the form:
    // {"paramName1":"paramValue1","paramName2":"paramValue2"}
    var paramList = '';
    for (var i = 0; i < paramArray.length; i += 2)
    {
        if (paramList.length > 0) paramList += ',';
        paramList += '"' + paramArray[i] + '":"' + paramArray[i + 1] + '"';
    }
    paramList = '{' + paramList + '}';

    // Call the page method
    $.ajax({
        type: "POST",
        url: pagePath + "/" + fn,
        contentType: "application/json; charset=utf-8",
        data: paramList,
        dataType: "json",
        success: successFn,
        error: errorFn
    });
}
</script>

*NOTE: It's possible to simplify this even more, but I prefer this because it closely emulates the way ASP.NET AJAX page methods are called.


Step 3:
Now in your Javascript code, you can easily call page methods like so:

PageMethod("TestNoParams", [], AjaxSucceeded, AjaxFailed); //No parameters
PageMethod("TestWithParams", ["a", "value", "b", 2], AjaxSucceeded, AjaxFailed); //With parameters

Step 4: Handle the result

function AjaxSucceeded(result)
{
    alert(result.d);
}

Note that the parameter names ("a" and "b" in the above example) must match the variable names on your WebMethod exactly. Also notice in the success function that the actual data is in the 'd' property of the result.


That's it!  You can use the same kind of code to call ASP.NET web services; just change the pagePath variable to the path of the .asmx file.

Happy jQuerying! 

EDIT: I've updated it to show how to handle the result

 


This article is one in a series of articles on ASP.NET's membership, roles, and profile functionality.

·  Part 1 - learn about how the membership features make providing user accounts on your website a breeze. This article covers the basics of membership, including why it is needed, along with a look at the SqlMembershipProvider and the security Web controls.

·  Part 2 - master how to create roles and assign users to roles. This article shows how to set up roles, use role-based authorization, and display output on a page depending upon the visitor's roles.

·  Part 3 - see how to add the membership-related schemas to an existing database using the ASP.NET SQL Server Registration Tool (aspnet_regsql.exe).

·  Part 4 - improve the login experience by showing more informative messages for users who log on with invalid credentials; also, see how to keep a log of invalid login attempts.

·  Part 5 - learn how to customize the Login control. Adjust its appearance using properties and templates; customize the authentication logic to include a CAPTCHA.

·  Part 6 - capture additional user-specific information using the Profile system. Learn about the built-in SqlProfileProvider.

·  Part 7 - the Membership, Roles, and Profile systems are all built using the provider model, which allows for their implementations to be highly customized. Learn how to create a custom Profile provider that persists user-specific settings to XML files.

·  Part 8 - learn how to use the Microsoft Access-based providers for the Membership, Roles, and Profile systems. With these providers, you can use an Access database instead of SQL Server.

·  Part 9 - when working with Membership, you have the option of using .NET's APIs or working directly with the specified provider. This article examines the pros and cons of both approaches and examines the SqlMembershipProvider in more detail.

·  Part 10 - the Membership system includes features that automatically tally the number of users logged onto the site. This article examines and enhances these features.

·  Part 11 - many websites require new users to verify their email address before their account is activated. Learn how to implement such behavior using the CreateUserWizard control.

·  Part 12 - learn how to apply user- and role-based authorization rules to methods and classes.

Tuesday, July 22, 2008

I was reading this particular entry from AlexJ's blog, in which he discussed how to use an anonymous type returned from one function inside another function.  I strongly recommend you read it first before reading any further.

Basically, most people would think that there is no easy way to return a strongly typed anonymous type from a function since you wouldn't know what it is that you are returning beforehand. Thus, if you wish to return an anonymous type from a function, you are forced to return it as object like so:

object CreateAnonymousObject()
{
    return new { Name = "John Doe" };
}

The problem with doing it like this is that you lose the ability to access the Name property, since object does not have a Name property.  So your only option is to cast it back to its original anonymous type, but alas, how could you do that?

It turns out that there is a way to do this using generics.  If you create a generic function that returns a generic type T and also takes an object of type T as a parameter, you can cast whatever object you pass in to T.  In effect, you are using the object of type T as an example to cast the boxed object to.  See the following code:

T CastByExample<T>(object source, T example)
{
    return (T)source;
}

So now, you can do something like this (for the sake of clarity, I am not going to shorthand these examples):

var o = CreateAnonymousObject();
var prototype = new { Name = "" };
var result = CastByExample(o, prototype);
Console.WriteLine(result.Name); //This should work just fine

To make things a little bit easier to read, we can use the new extension method feature of .NET 3.5 like so:

public static class MyExtensions
{
    public static T CastToTypeOf<T>(this object source, T example)
    {
        return (T)source;
    }
}

And rewrite the example to:

var o = CreateAnonymousObject();
var prototype = new { Name = "" };
var result = o.CastToTypeOf(prototype);
Console.WriteLine(result.Name); //This should work just fine.

This is great.  I don't know what I am going to use it for, but it's nice to know that I can do it when I need it.

And then another idea came to mind: what if we need to do a similar thing with a collection of anonymous objects?  How could one create a generic list of anonymous objects?  As it turns out, Kirill Osenkov has already solved this particular problem.  You can see it here.

So, let's come up with something interesting to explore this problem.  Let's say you are querying a Contact database through LINQ, but instead of returning all of the Contact fields, you wish to return only some of them.

To make things simpler, I decided to mock the Contact class like so (too lazy to create a dbml, so I am querying an in-memory object :) ):

public class Contact
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string PhoneNumber { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
}

And here is an example function that will return you a list of Contacts:

IEnumerable<Contact> GetContacts()
{
    return new List<Contact> {
        new Contact { FirstName = "John", LastName = "Doe", PhoneNumber = "555-5501", StreetAddress = "123 Lala Lane", City = "Los Angeles" },
        new Contact { FirstName = "Jane", LastName = "Doe", PhoneNumber = "555-5502", StreetAddress = "567 Lala Lane", City = "New York" }
    };
}

And you only wish to return the FirstName and LastName from your LINQ query like so:

var query = from c in GetContacts()
            select new { Name = c.FirstName + " " + c.LastName };

Again, for whatever reason, you really really want to encapsulate this as a method, so, you do something like this:

object GetContactNamesFrom(IEnumerable<Contact> contacts)
{
    var query = from c in contacts
                select new { Name = c.FirstName + " " + c.LastName };
    return query.AsEnumerable();
}

Great!  Now you have something that can return a generic collection of an anonymous type.  But how can we use this in our own function and allow our function to access the Name property?

Well, combining what we've learnt so far, we can come up with something like this...

First, we know we can now cast something to an anonymous type (sort of) using the generic method CastByExample or CastToTypeOf as seen above.

Second, we need a way to create an anonymous collection.

Let's start with the second one.  We need a way to create an anonymous collection, so let's think about this a bit.  I hope you've read the blog post by Kirill by now.  If not, well, here is how you do it...

IEnumerable<T> CreateEnumerablesByExample<T>(T example)
{
    return new List<T>();
}

Now you can do something like:

var prototype = new { Name = "" };
var prototypeCollection = CreateEnumerablesByExample(prototype); //This is now a generic collection of the anonymous type.

What's left for us to do is cast our LINQ result back as the prototypeCollection so we can get access to the Name property from our code like so:

var contactInfo = GetContactNamesFrom(GetContacts());
var prototype = new { Name = "" };
var prototypeCollection = CreateEnumerablesByExample(prototype);
var result = contactInfo.CastToTypeOf(prototypeCollection);

foreach (var c in result)
{
    Console.WriteLine(c.Name);
}

We can do one better by using an extension method.  Let's refactor CreateEnumerablesByExample as an extension method like so:

public static IEnumerable<T> CreateEnumerablesOfThis<T>(this T prototype)
{
    return new List<T>();
}

The refactored code of our final example will look like this:

var contactInfo = GetContactNamesFrom(GetContacts());
var prototype = new { Name = "" };
var prototypeCollection = prototype.CreateEnumerablesOfThis();
var result = contactInfo.CastToTypeOf(prototypeCollection);

foreach (var c in result)
{
    Console.WriteLine(c.Name);
}

So, why would you go through all this trouble?  Honestly, I have no idea.  I am just exercising my brain to see if I can do such a thing.  I have no real need for this sort of thing yet.  Who knows... maybe someday...


I have worked on a variety of ASP.NET web applications over the past few years, and almost all of them needed to collect address data in some form or another. One thing I have never been able to find out there is a decent ASP.NET-based user control that allows the selection of locale (state/province and country). With this idea in mind, I set out to create my ASP.NET "Locale LINQ" user control.

What It Do

So here are the basic features:
- Displays the corresponding States/Provinces based on the selected Country
- Includes a pretty exhaustive list of Countries & their associated States/Provinces
- Allows for an initial Country or State/Province to be selected via markup or server side code
- Caches the list of States/Provinces & Countries for great performance
- Based on an XML file containing the list of States/Provinces & Countries
- Uses LINQ for all of the data access & queries
- Based on the .NET 3.5 Framework

Markup Properties

The following properties are available in the user control ASPX markup:

<uc1:ucLocale ID="ucLocale1" runat="server"
    EnableProvinceAutoPostback="true"
    InitialCountrySelected="United States"
    InitialProvinceSelected="Hawaii"
    OnSelectedCountryChanged="ucLocale1_SelectedCountryChanged"
    OnSelectedProvinceChanged="ucLocale1_SelectedProvinceChanged" />

EnableProvinceAutoPostback: Sets whether or not a PostBack occurs when a State/Province is selected
InitialCountrySelected: Sets the initial Country that you would like to be selected when the control is displayed
InitialProvinceSelected: Sets the initial State/Province that you would like to be selected when the control is displayed. Setting this value will also ensure that the related Country is also displayed.
OnSelectedCountryChanged: Event handler for the control's OnSelectedCountryChanged event
OnSelectedProvinceChanged: Event handler for the control's OnSelectedProvinceChanged event

Data & Caching

All of the Province/State & Country information is in the App_Data\CountriesAndProvinces.xml file. I use LINQ to XML to load all of the data from the file and then cache the data in memory after the initial load. Caching the data places it in the server's memory, which will greatly improve performance as a read from the data file will only take place when the cache expires.

// Cache timeout length
public static DateTime CacheTimeout
{
    get
    {
        return DateTime.Now.AddHours(24);
    }
}
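The LINQ to XML load itself isn't shown in the post, but it could be sketched roughly like this. Note that the actual schema of CountriesAndProvinces.xml isn't given, so the element and attribute names below (`country`, `province`, `name`) are assumptions, and the cache here is a simple static field rather than the ASP.NET Cache object:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public static class LocaleData
{
    // Cached lookup: country name -> list of province/state names.
    private static Dictionary<string, List<string>> _cache;

    public static Dictionary<string, List<string>> Load(string xml)
    {
        // After the first load, always return the cached copy.
        if (_cache != null) return _cache;

        _cache = XDocument.Parse(xml)
            .Descendants("country")
            .ToDictionary(
                c => (string)c.Attribute("name"),
                c => c.Elements("province")
                      .Select(p => (string)p.Attribute("name"))
                      .ToList());
        return _cache;
    }
}

public class Program
{
    public static void Main()
    {
        const string xml = @"<countries>
            <country name='United States'>
              <province name='Hawaii'/>
              <province name='Oregon'/>
            </country>
            <country name='Canada'>
              <province name='Ontario'/>
            </country>
          </countries>";

        var provinces = LocaleData.Load(xml);
        Console.WriteLine(provinces["United States"].Count); // 2
    }
}
```

In the real control the XML would come from the App_Data file and the dictionary would go into the ASP.NET cache with the CacheTimeout shown above.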

Download

You can download the control & full source here:
http://aiecpg.blu.livefilestore.com/y1p0t40ilR8hsdGGS8U-2uafnrWQ7S3q99apBcJ7aY5vYSal9vKeoaU7LVOx-6hbK7SB9c4suXmCH0/LocaleLinq.zip?download


Monday, July 21, 2008

Use the following setting in the machine.config to reduce the startup time of the application.

<configuration>
    <runtime>
        <generatePublisherEvidence enabled="false"/>
    </runtime>
</configuration>

In one of our recent projects, the home page took ~54 seconds to load after an IIS reset. This came down to ~10 seconds with the above setting in machine.config.

When the generatePublisherEvidence element is set to true, signed assemblies need to be verified by the Certificate Authority. This requires network/internet access, which adds to the cost. In our performance environment, the web server had no internet access and .NET would time out after 45 seconds.

Some references on this

See authenticode verification section in

1) http://msdn.microsoft.com/en-us/magazine/cc337892.aspx

Also see

2) http://msdn.microsoft.com/en-us/library/bb629393.aspx


I ran across a C# file that had been removed from its csproj file, but it hadn't been deleted from version control.  So James wrote a script (Chris Sidi had already written one, though) to find the .cs files that weren't in the "containing" .csproj file.

 

param([string]$csproj = $(throw 'csproj file is required'))
 
$csproj = Resolve-Path $csproj
$dir = Split-Path $csproj
 
# get the files that are included in compilation
$xml = [xml](Get-Content $csproj)
$files_from_csproj = $xml.project.itemgroup | 
        %{ $_.Compile } | 
        %{ $_.Include } |
        ?{ $_ } | 
        %{ Join-Path $dir $_ } |
        Sort-Object
 
# get the files from the dir
$files_from_dir = Get-ChildItem $dir -Recurse -Filter *.cs |
        %{ $_.FullName } |
        Sort-Object
        
Compare-Object $files_from_csproj $files_from_dir

 

Friday, July 18, 2008

Abstract
HttpHandlers are powerful tools of ASP.NET; they are used fairly often and give great power to the developers using them. In this article, Brendan describes the basics of how to use these tools. He explains in easy-to-understand terms how one can implement the IHttpHandler interface, and illustrates the explanation with code snippets and screen shots.

 

Introduction

One of the more interesting pieces of ASP.NET is the concept of HttpHandlers. Sadly, this article does not delve much into what handlers are or what they are used for. Instead, it just gives a quick introduction so you can write your own HttpHandler. The text that follows explains how to create a simple HttpHandler and covers a few of the design decisions you will have to make when creating your own. You can download the sample project for this article here or at the bottom of the article.

There are plenty of ways in which one can benefit from the use of HttpHandlers. They are very versatile and can be applied to a lot of different tasks. Keep in mind there are also HttpModules, which are also extremely useful tools, but they are a bit different from HttpHandlers. I will not be delving into that topic in this article. Please read elsewhere to learn more about the differences between HttpHandlers and HttpModules.

I know plenty of people who have configured IIS to send all requests for gifs, pngs, jpgs, etc. to ASP.NET HttpHandlers so that site owners can clamp down on hotlinking, leeching, or whatever you want to call it. That is where someone else links to your site's content: it uses your bandwidth, you do not even get the traffic, and usually it does not mention where the content came from.

Other people will configure IIS to handle zip files so that an HttpHandler can step in and execute code every time someone wants to download a file from a site. This is much easier than some of the scripted downloads, and certainly gives great server-side control over the download - counting the number of downloads and storing in a database or whatever you want to do.

One other use of HttpHandlers, which I plan to write about soon, is to use them to watermark your images. If you have configured an HttpHandler to prevent people from leeching your images you might also use a handler to watermark your images with a copyright. This makes it less likely that people will simply save your images and display them on their own pages. Plenty of people and companies are quite concerned about this. Keep in mind that if you publish something on the internet it can never be completely safe, but you can at least discourage people.


I was browsing through the usual newsgroups I visit and came up with this post about LINQ to SQL vs EDM and thought I should share it here.

LINQ to SQL and the Entity Framework have a lot in common, but each has features targeting different scenarios.

LINQ to SQL has features targeting "Rapid Development" against a Microsoft SQL Server database. Think of LINQ to SQL as allowing you to have a strongly-typed view of your existing database schema. LINQ to SQL supports a direct, 1:1 mapping of your existing database schema to classes; a single table can be mapped to a single inheritance hierarchy (i.e., a table can contain persons, customers, and employees) and foreign keys can be exposed as strongly-typed relationships.  You can build LINQ queries over tables/views/table valued functions and return results as strongly typed objects, and call stored procedures that return strongly typed results through strongly typed methods.  A key design principle of LINQ to SQL is that it "just work" for the common cases; so, for example, if you access a collection of orders through the Orders property of a customer, and that customer's orders have not previously been retrieved, LINQ to SQL will automatically get them for you.  LINQ to SQL relies on convention, for example default insert, update, and delete logic through generated DML can be overwritten by exposing appropriately named methods (for example, "InsertCustomer", "UpdateCustomer", "DeleteCustomer").  These methods may invoke stored procedures or perform other logic in order to process changes.

The Entity Framework has features targeting "Enterprise Scenarios".  In an enterprise, the database is typically controlled by a DBA, the schema is generally optimized for storage considerations (performance, consistency, partitioning) rather than exposing a good application model, and may change over time as usage data and usage patterns evolve.  With this in mind, the Entity Framework is designed around exposing an application-oriented data model that is loosely coupled, and may differ significantly, from your existing database schema.  For example, you can map a single class (or "entity") to multiple tables/views, or map multiple classes to the same table/view. You can map an inheritance hierarchy to a single table/view (as in LINQ to SQL) or to multiple tables/views (for example, persons, customers, and employees could each be separate tables, where customers and employees contain only the additional columns not present in persons, or repeat the columns from the persons table).  You can group properties into complex (or “composite”) types (for example, a Customer type may have an “Address” property that is an Address type with Street, City, Region, Country and Postal code properties). The Entity Framework lets you optionally represent many:many relationships directly, without representing the join table as an entity in your data model, and has a new feature called "Defining Query" that lets you expose any native query against the store as a "table" that can be mapped just as any other table (except that updates must be performed through stored procedures).  This flexible mapping, including the option to use stored procedures to process changes, is specified declaratively in order to account for the schema of the database evolving over time without having to recompile the application.

The Entity Framework includes LINQ to Entities which exposes many of the same features as LINQ to SQL over your conceptual application data model; you can build queries in LINQ (or in “Entity SQL”, a canonical version of SQL extended to support concepts like strong typing, polymorphism, relationship navigation and complex types), return results as strongly typed CLR objects, execute stored procedures or table valued functions through strongly-typed methods, and process changes by calling a single save method.

However, the Entity Framework is more than LINQ to Entities; it includes a "storage layer" that lets you use the same conceptual application model through low-level ADO.NET Data Provider interfaces using Entity SQL, and efficiently stream results as possibly hierarchical/polymorphic DataReaders, saving the overhead of materializing objects for read-only scenarios where there is no additional business logic. 

The Entity Framework works with Microsoft SQL Server and 3rd party databases through extended ADO.NET Data Providers, providing a common query language against different relational databases through either LINQ to Entities or Entity SQL.

So while there is a lot of overlap, LINQ to SQL is targeted more toward rapidly developing applications against your existing Microsoft SQL Server schema, while the Entity Framework provides object- and storage-layer access to Microsoft SQL Server and 3rd party databases through a loosely coupled, flexible mapping to existing relational schema.

 

We have an old .NET 1.1 web application which I have to support, and a recent change in the login process for a select few customers has been playing havoc with every user's session. The folks at QA have been giving me a really hard time lately over this bug, and I just couldn't figure out what was causing this weird behavior.

The problem was that if we set the forms authentication and session timeouts to 10 minutes, and after the 10th minute the user clicked on any link, the app would redirect the user to the login page, but the session was not abandoned; i.e., the forms authentication ticket had expired but the session state had not. To make matters worse, I was unable to reproduce it on the DEV or QA instance with my automated test script, but I could reproduce it by manually following the steps.

After a lot of googling I finally realized the solution was right there and I had just overlooked it. The problem was in the way timeouts work for authentication tickets vs session state.

A forms authentication ticket can time out in two ways. The first scenario occurs if you use absolute expiration. With absolute expiration, you set an expiration of 20 minutes, and a user visits the site at 2:00 PM. The user will be redirected to the login page on any visit after 2:20 PM, even if the user visited some pages between 2:00 PM and 2:20 PM.

Now if you are using sliding expiration for both forms authentication and session state, the scenario gets a bit more complicated. With sliding expiration, the session state timeout is updated on every visit, but the cookie and the resulting authentication ticket are updated only if the user visits the site after the expiration time is half-expired.

For example, you set an expiration of 20 minutes for forms authentication ticket and session state and you set sliding expiration to true. A user visits the site at 2:00 PM, and the user receives a cookie that is set to expire at 2:20 PM. The authentication ticket expiration is only updated if the user visits the site after 2:10 PM. If the user visits the site at 2:08 PM, the authentication ticket is not updated but the session state timeout is updated and the session now expires at 2:28 PM. If the user then waits 12 minutes, visiting the site at 2:21 PM, the authentication ticket will be expired and the user is redirected to the login page, but guess what, the session timeout has not yet expired.
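To make that timeline concrete, here is a small model of the two sliding timeouts. This is a simplification for illustration only, not the actual FormsAuthentication implementation; the method names are mine:

```csharp
using System;

public class Program
{
    static readonly TimeSpan Timeout = TimeSpan.FromMinutes(20);

    // The session timeout slides forward on every request.
    public static DateTime SlideSession(DateTime requestTime)
    {
        return requestTime + Timeout;
    }

    // The forms ticket is only reissued once more than half the
    // timeout has elapsed since it was last issued.
    public static DateTime SlideTicket(DateTime issued, DateTime requestTime)
    {
        var halfLife = TimeSpan.FromTicks(Timeout.Ticks / 2);
        return (requestTime - issued) > halfLife
            ? requestTime + Timeout
            : issued + Timeout;
    }

    public static void Main()
    {
        var twoPm = new DateTime(2008, 7, 1, 14, 0, 0);

        // Request at 2:08 PM: the session slides to 2:28 PM,
        // but the ticket still expires at 2:20 PM.
        var at208 = twoPm.AddMinutes(8);
        Console.WriteLine(SlideSession(at208));        // 2:28 PM
        Console.WriteLine(SlideTicket(twoPm, at208));  // 2:20 PM

        // Next request at 2:21 PM: the ticket has expired,
        // but the session has not.
        var at221 = twoPm.AddMinutes(21);
        Console.WriteLine(at221 > SlideTicket(twoPm, at208)); // True
        Console.WriteLine(at221 > SlideSession(at208));       // False
    }
}
```

Running through the dates reproduces exactly the mismatch described above: an expired ticket alongside a still-live session.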

Here is the MSDN link which explains this http://support.microsoft.com/kb/910439

So, how do we sync these two timeouts, or force one to time out when the other expires? The workaround we came up with was to set the authentication timeout to double the value of the session timeout and add the following code to global.asax.cs.

protected void Application_AcquireRequestState(object sender, EventArgs e)
{
    if (Session != null && Session.IsNewSession)
    {
        string szCookieHeader = Request.Headers["Cookie"];
        if ((szCookieHeader != null) && (szCookieHeader.IndexOf("ASP.NET_SessionId") >= 0))
        {
            if (User.Identity.IsAuthenticated)
            {
                FormsAuthentication.SignOut();
                Response.Redirect(Request.RawUrl);
            }
        }
    }
}

 


Wednesday, July 16, 2008

So I thought I would put all the information I have been creating over the past few months together in one place.  I'll try to go through all the steps and the different tools you will need to use in order to track down a problem.

Realizing there is a problem

So the first step is finding out that there is some kind of problem in the first place.  There are a number of methods you could use to find this out:

  • A visitor to your web site reports a problem
  • You review the Event logs and see some entries regarding an issue
  • You monitor Perfmon and see values outside acceptable ranges, as described in ASP.NET and Performance
  • Other means (reviewing logged data, receiving an email, etc.)

Determine the problem type

Once you know there is a problem, the next step really depends on what the problem is.  It could be something as simple as checking to make sure the network cable is connected properly.

Once you know the type of problem, then you can follow the steps I have laid out to troubleshoot it.

Gather data

The first thing would be to gather the correct data to see what is happening.  You can follow this link to see the full list of issue types and how to get the correct data for each: ASP.NET Tips- What to gather to troubleshoot.  If you come across a situation that isn't covered there, please let me know.

There are some other ways to get data as well, for example ASP.NET Tips- How to use DebugDiag to track down performance problems.

Debugging the issue

So now that we have the data we need, it is time to analyze it.  This can be a difficult task depending on the scenario you are dealing with.  There is a lot of documentation around about using SOS and Windbg to troubleshoot things.  That is the best approach, as you can always track down what is happening.  For more help on actually tracking down the problem from the data, I would suggest looking through my blog for the particular issue you are having.  I have many posts that help with these situations.

If all else fails, you can always look at the Debugging or SOS tags to find a whole lot of information.

Additional Information

So what if you don’t have a problem and just want to learn about these steps and tools?  Well, one way would be to read through the various information found here.  Another way would be to actually go through some of it.  Tess has some great debugging labs around this stuff that you can find here.  You may also want to look at her 21 most popular blog posts.

More

Tuesday, July 15, 2008


LINQPad supports everything in C# 3.0 and Framework 3.5:

  • LINQ to SQL
  • LINQ to Objects
  • LINQ to XML

LINQPad is also a great way to learn LINQ: it comes preloaded with 200 examples from my book, C# 3.0 in a Nutshell. There's no better way to experience the coolness of LINQ and functional programming.

And LINQPad is more than just a LINQ query tool: it's a code snippet IDE. Instantly execute any C# 3 or VB 9 expression or statement block!

Best of all, LINQPad is free and needs no installation: just download and run. The executable is only 2MB and is self-updating.


Download LINQPad

More

ASP.NET has powerful built-in caching capabilities that you can access through the cache object’s familiar and intuitive "key and item" data access model. Using this model, it’s natural to write code that uses the cache object as you would a collection, searching for cached data by keys and relying on the absence of a key to indicate that you need to retrieve the data from a data store. More advanced developers encapsulate the caching behavior into whatever data retrieval methods they write, but they still rely on the missing key condition to determine whether their code needs to retrieve the data or return the cached copy.

However, there is an elegant technique for data caching that leverages the common asynchronous programming patterns used in the .NET Framework. Using this technique, you’ll be able to make the caching engine tell you when you need to retrieve your expired data.

Delegating your caching authority
To use this technique, you’ll first need to declare a delegate variable of type CacheItemRemovedCallback, as in the following C# fragment:
 
CacheItemRemovedCallback onRemove = null;
 
Delegates are simply type-safe function pointers used to store the address of a method that can later be executed. They’re widely used in asynchronous programming, or in any situation where loosely coupled communication is desirable, such as event-handling code. For more information on delegates, I’d recommend reading “Simplify .NET class communication with delegates.”

Next, create a method with an appropriate signature for the CacheItemRemovedCallback delegate where you’ll place the code to retrieve whatever data you’ve cached:
 
public void RemovedProducts(string k, Object v, CacheItemRemovedReason r)
{
// Go get the data again
}
 
You’ll notice that this callback method accepts the key of the item that was removed, the item itself, and an argument that indicates the reason the item was removed from the cache. These reasons are encapsulated within the CacheItemRemovedReason enum and include Expired, Removed, Underused, and DependencyChanged. While the first two of these reasons need no explanation, the third, Underused, may occur if ASP.NET needs to recover memory and flushes the item from the cache. I will discuss the fourth, DependencyChanged, in the next section.

The final step is to declare an instance of the CacheItemRemovedCallback delegate and pass it to the Insert method when an item is added to the cache. To cache a DataTable named dt with a key value of Products and set a six-hour expiration, you could use the following code:
 
onRemove = new CacheItemRemovedCallback(this.RemovedProducts);

this.Cache.Insert("Products", dt, null,
DateTime.Now.AddHours(6), TimeSpan.Zero,
CacheItemPriority.High, onRemove);

The caching engine will then call the delegate function RemovedProducts whenever the value associated with the Products key is removed for whatever reason. If the DataTable were removed from the cache because it had expired, then RemovedProducts would be called with a removal reason argument of Expired, which would be your cue to refresh the data and recache it.
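The expire-and-refresh pattern above is language-agnostic. Here is a minimal, runnable Python sketch of the same idea (a hypothetical MiniCache class, not the ASP.NET Cache API): an item is inserted along with a removal callback, and when it expires the callback fires with the key, the value, and the reason, which is your cue to refresh the data.

```python
import time

class MiniCache:
    """Minimal sketch of a cache that fires a callback when an item is removed."""

    def __init__(self):
        self._items = {}  # key -> (value, expires_at, on_removed)

    def insert(self, key, value, ttl_seconds, on_removed=None):
        self._items[key] = (value, time.time() + ttl_seconds, on_removed)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        value, expires_at, on_removed = entry
        if time.time() >= expires_at:
            # Evict and notify, mirroring CacheItemRemovedReason.Expired
            del self._items[key]
            if on_removed:
                on_removed(key, value, "Expired")
            return None
        return value

refreshed = []

def removed_products(key, value, reason):
    # In the article, this is where you would re-query and re-cache the data
    refreshed.append((key, reason))

cache = MiniCache()
cache.insert("Products", ["widget"], ttl_seconds=0, on_removed=removed_products)
cache.get("Products")  # already expired, so the callback fires
print(refreshed)       # [('Products', 'Expired')]
```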

Tracking cached dependent objects
As hinted at by the DependencyChanged member of the CacheItemRemovedReason enumeration, the ASP.NET caching engine can track dependencies between cached items and other elements. Dependencies can be formed between two items in the cache or between a cached item and a file system object such as a file, a directory, or an array of either. For example, the following VB.NET snippet sets up a dependency between an item in the cache referred to as Parent and an item called Child:

<code> 
Me.Cache("Parent") = "parent data"
Dim dependencyKey(0) As String
dependencyKey(0) = "Parent"
Dim dep As New CacheDependency(Nothing, dependencyKey)
Me.Cache.Insert("Child", "child data", dep)
</code>


When the Parent item is removed from the cache, the Child item is removed as well. Combine this with the delegate mechanism I introduced above, and you can have a sophisticated caching system consisting of entire groups of related objects that can automatically refresh themselves when they expire from your application’s cache.
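The parent/child cascade can be sketched outside ASP.NET as well. The following Python fragment (a hypothetical DependentCache class, not the real CacheDependency machinery) shows the core bookkeeping: each insert may register a dependency, and removing a key recursively evicts its dependents.

```python
class DependentCache:
    """Sketch of cache dependencies: removing a parent also removes its dependents."""

    def __init__(self):
        self.items = {}
        self._dependents = {}  # parent key -> list of dependent keys

    def insert(self, key, value, depends_on=None):
        self.items[key] = value
        if depends_on is not None:
            self._dependents.setdefault(depends_on, []).append(key)

    def remove(self, key):
        self.items.pop(key, None)
        # Cascade: evict everything that depends on this key
        for child in self._dependents.pop(key, []):
            self.remove(child)

cache = DependentCache()
cache.insert("Parent", "parent data")
cache.insert("Child", "child data", depends_on="Parent")
cache.remove("Parent")
print("Child" in cache.items)  # False: removing Parent evicted Child too
```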

Let me first say at the start of this post that I love ReSharper. It is by far the best refactoring support that can be found for VB.NET. I haven't yet used it for C#, but am told by an esteemed colleague that it rocks.


But... (there is always a but, isn't there?) it messes up the IntelliSense in my Visual Studio. The same colleague (kudos to Jocke) tipped me off on how to solve it, and here it is:

Open the options for ReSharper and choose IntelliSense -> General -> Use Visual Studio. This will not give you as much support for "Smart Completion", but I'll take that over missing IntelliSense every day of the week, and twice on Sundays.


Next - open the Visual Studio options and recheck that you have IntelliSense enabled for all your languages.



Finally, restart Visual Studio - just to be sure that everything is saved properly.

Again - the refactoring support with ReSharper is great compared to everything else out there, for VB.NET. But this is not so good - at least now you know how to solve it.

More

In Visual Studio 2008, many new features were implemented that eliminate the top issues found in the VS 2005 Web Test recorder, as covered in the white paper Web Test Authoring and Debugging Techniques.

While many areas have been addressed, there will still be times when record/playback fails or does not give the desired result. I’ll be writing a series of blog articles on how the recorder works, “knobs” in the registry that will let you control what does and does not get recorded, and problems you may encounter. Finally, I’ll introduce new debugging techniques that will enable you to find and fix your tests.

New Recorder Technologies in VS 2008

The VS 2008 recorder introduced three key improvements that eliminate the majority of the problems encountered in the VS 2005 recorder:

1)      The new recorder now picks up all requests, whereas the 2005 recorder did not record AJAX requests or certain types of popup requests.

2)      The new recorder has a feature to detect dynamic parameters and automatically add the extraction rules and bindings to the test to properly correlate them. A common class of these parameters was dynamic parameters on the query string, such as a session ID.

3)      If a page has a redirect, the Expected Response URL records the page that was redirected to. In VS 2005, a redirect to the error page was not detected as an error. In VS 2008, the recorder adds the Response URL validation rule to catch this and flag it as an error.

Filtering Requests

Even though the recorder now captures all requests, some requests are still filtered when the web test is generated.

First of all, “dependent” requests are filtered. Web Test has a feature to “Parse Dependent Links”, in which resources on a page such as images, JavaScript sources, and CSS references are not recorded; instead, at run time the web test engine parses the response, finds all references to dependents, and then fetches them.  This works the same way in VS 2008. When the parser runs, it looks for all IMG, SCRIPT, and LINK tags to find the dependent resources and fetch them.
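As a rough illustration of what such a parser does, here is a naive Python sketch (not the actual Web Test parser, which is far more robust) that pulls dependent resource URLs out of IMG, SCRIPT, and LINK tags with a regular expression:

```python
import re

def find_dependents(html):
    """Naive sketch: collect src/href targets from IMG, SCRIPT, and LINK tags."""
    pattern = re.compile(
        r'<(?:img|script)[^>]+src="([^"]+)"|<link[^>]+href="([^"]+)"',
        re.IGNORECASE,
    )
    # Each match captures either a src (group 1) or an href (group 2)
    return [src or href for src, href in pattern.findall(html)]

html = ('<img src="/logo.gif">'
        '<script src="/app.js"></script>'
        '<link href="/site.css" rel="stylesheet">')
print(find_dependents(html))  # ['/logo.gif', '/app.js', '/site.css']
```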

However, a web page can also download content using JavaScript. There was a “hole” in the VS 2005 recorder, in that it could not pick up these requests.  A couple of examples of this are mouseover images that are fetched via JavaScript, or images fetched in a mapping program via AJAX calls.

By default, the recorder is configured to filter “static” content such as images and CSS files. You can override what gets recorded using the registry settings below. These settings are the default (set in code; these registry entries aren’t present after install):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\EnterpriseTools\QualityTools\WebLoadTest]
"WebTestRecorderMode"="exclude"
"ExcludeMimeTypes"="image;application/x-javascript;application/x-ns-proxy-autoconfig;text/css"
"ExcludeExtensions"=".js;.vbscript;.gif;.jpg;.jpeg;.jpe;.png;.css;.rss"

You can see that these settings will filter out images, JavaScript source files, and CSS files. If you want to record everything, you can simply set “ExcludeMimeTypes” and “ExcludeExtensions” to the empty string.
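To make the exclude-mode logic concrete, here is a small Python sketch of how such a filter might be applied, assuming MIME types are matched by prefix and extensions by suffix (the real recorder's matching rules may differ):

```python
# Defaults mirrored from the registry settings above
EXCLUDE_MIME_PREFIXES = (
    "image;application/x-javascript;"
    "application/x-ns-proxy-autoconfig;text/css"
).split(";")
EXCLUDE_EXTENSIONS = ".js;.vbscript;.gif;.jpg;.jpeg;.jpe;.png;.css;.rss".split(";")

def should_record(url_path, mime_type):
    """Return True if a request should be kept, per the 'exclude' recorder mode."""
    if any(mime_type.startswith(prefix) for prefix in EXCLUDE_MIME_PREFIXES):
        return False
    if any(url_path.lower().endswith(ext) for ext in EXCLUDE_EXTENSIONS):
        return False
    return True

print(should_record("/default.aspx", "text/html"))  # True: page is recorded
print(should_record("/logo.gif", "image/gif"))      # False: static image filtered
```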

The recorder will also work in an “include” mode, where you specify a list of mime types and extensions to include. For example, the default include list is:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\EnterpriseTools\QualityTools\WebLoadTest]
"WebTestRecorderMode"="include"
"IncludeMimeTypes"="text/html;text/xml;text/xhtml;application/xml;application/xhtml+xml;application/soap+xml;application/json"
"IncludeExtensions"=""

Note that you have to include the XML, SOAP, etc. MIME types in order to pick up AJAX calls.

Folding in Additional Requests

One thing the recorder is not smart about is that requests picked up by the low level recorder are always treated as top-level requests. If you do set the recorder to include images and such, it does not try to figure out which request the dependent came from and store it under the appropriate top-level request. Instead any additional requests are recorded as top-level requests.

Also, asynchronous requests are recorded at the top level of the web test and will be played back synchronously. We hope to add an “Async” property to top-level requests in our next release that will enable you to more accurately simulate the request pattern generated by the browser for AJAX requests.

Note that dependents are fetched in parallel over two connections in the same way the browser fetches them.

Filtering HTTP Headers

HTTP header filtering works in a similar way to request filtering.

Normally in a web test, most HTTP headers are set from the browser template file. Here are the contents of the IE7 browser template file:

<Browser Name="Internet Explorer 7.0">
  <Headers>
    <Header Name="User-Agent" Value="Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)" />
    <Header Name="Accept" Value="*/*" />
    <Header Name="Accept-Language" Value="{{$IEAcceptLanguage}}" />
    <Header Name="Accept-Encoding" Value="GZIP" />
  </Headers>
</Browser>

Notice the MSIE 7.0 in the User-Agent header, which identifies it as IE7.

By default, the recorder only records these additional HTTP headers:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\EnterpriseTools\QualityTools\WebLoadTest]
"RequestHeadersToRecord"="SOAPAction;Pragma;x-microsoftajax"

Your application may send additional custom headers. In that case, you can change the recorder settings to add the headers your app sends.

The recorder is set to never record these headers, which are automatically handled in the HTTP engine:

"Authorization", "Proxy-Connection", "Connection", "Host", "Expect", "Content-Length"

Recorder Settings Summary

You can see the new web test recorder is powerful, but the default settings may not be right for your application.

1.       The default filtering of static content may mask performance problems in your application. Consider recording additional requests.

2.       HTTP Headers that your application depends on may be filtered out. If your application uses custom headers, consider changing your recorder settings to pick them up.

Detecting Dynamic Parameters

A major new feature in VS 2008 is the ability to detect dynamic parameters. A dynamic parameter is a parameter whose value is generated each time a user runs the application, and therefore playback of recorded values won’t work.

The best example of this is a dynamically generated session ID. For apps that support login, each time a user logs in, the server generates a unique session ID to track the user. This session ID may then be passed back to the server via a cookie, form field, or query string parameter. In VS 2005, Web tests handled cookies and hidden fields. Note there were some bugs in hidden field binding in VS 2005, some of which were fixed in SP1 and some of which have been fixed in VS 2008.

VS 2008 adds support for two more types of dynamic parameters: query string parameters and form fields (other than hidden fields).

The way it works is this: at record time, the value of each form post and query string parameter is recorded, and an appropriate extraction rule is computed. Immediately after recording, the web test is played back by the correlation tool. During playback, the tool applies the extraction rule and compares the extracted value with the recorded value. If they differ, it flags the parameter as a dynamic parameter.
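The detection step itself boils down to a per-parameter value comparison. Here is a minimal Python sketch of that comparison (the extraction-rule machinery is elided, and the parameter names are hypothetical):

```python
def detect_dynamic_parameters(recorded, extracted):
    """Flag parameters whose value extracted during playback differs
    from the value captured at record time."""
    return [name for name in recorded if extracted.get(name) != recorded[name]]

# Values captured at record time vs. values extracted from a fresh playback
recorded = {"sessionId": "ABC123", "category": "books"}
extracted = {"sessionId": "XYZ789", "category": "books"}

print(detect_dynamic_parameters(recorded, extracted))  # ['sessionId']
```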

Dynamic Detection Playback Fails

A subtle problem you may encounter is dynamic parameter detection playback failing. As mentioned above, in order to detect dynamic parameters, the web test is played back under the covers. If this playback fails, you may or may not get a complete list of parameters to promote. You can see whether or not it failed by looking in the result window, where the result of the playback is displayed. If it does fail, fix your test per the guidance below and then re-run dynamic parameter detection from the web test toolbar.

When to not Promote a Parameter to a Dynamic Parameter

There may be times when playback thinks a parameter is dynamic when in fact it is not.

One example of this is when cookies in IE are used to populate form fields. For example, our Web test forums provide an option to save your user name for login. If you record a Web test with this setting turned on, a cookie is used to set the value of the user name so it is automatically filled in. When dynamic parameter detection runs, it looks at the user name value and sees that it is different than the recorded value. Aha! A dynamic parameter! If you accept this as a dynamic parameter, the web test engine will scrape the value out of the response rather than playing back what you typed in, clearly not the desired behavior.

To avoid problems like this, clear your browser cookie cache prior to recording.

Playback Doesn’t Work, Now What?

In general, the problem is that the HTTP requests sent by the web test engine are somehow different from the requests sent by IE. The first challenge is figuring out what IE is sending over HTTP. There are different tools for doing this, including Fiddler (http://www.fiddler2.com/) and NetMon. There is also a new feature in the web test recorder for generating a recording log from a recording. Turning this on is probably the easiest way to see what IE is sending.

Common Problems to Look For

Common problems we’ve seen customers encounter:

1)      Missing custom HTTP headers. See the comments on HTTP headers in the recorder section above.

2)      Cookies saved on your local machine.  A good example of this is cookies stored on your machine to automatically fill out the user name. See the section on Detecting Dynamic Parameters above.

Using the Recorder Log

To turn on the recorder log, set these registry entries:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\EnterpriseTools\QualityTools\WebLoadTest]
"CreateLog"=dword:00000001
"RecorderLogFolder"="c:\\RecorderLogs\\"

 

This will result in a log file for each recording session. Open the log file and find the failing request, then carefully compare all parts of the request in the request log to the request in web test playback:

·         URI
·         Query string parameters and values
·         HTTP headers, including custom headers and cookies
·         Post body

Once you identify the difference, you can go back to the web test to fix the problem. Some areas to look out for:

·         Missing custom HTTP headers in the web test
·         Incorrectly handled dynamic parameters, including:
o   Parameters marked as dynamic that aren’t
o   Incorrect extraction rules
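Comparing a logged request against its playback counterpart is mechanical enough to script. Here is a hypothetical Python helper (not a tool that ships with VS) that reports the fields where the two disagree:

```python
def diff_requests(logged, playback):
    """Report the request parts where the recorder log and the
    web test playback disagree."""
    diffs = {}
    for field in ("uri", "query", "headers", "body"):
        if logged.get(field) != playback.get(field):
            diffs[field] = (logged.get(field), playback.get(field))
    return diffs

# Example: playback is missing a custom header that IE sent
logged = {"uri": "/login", "query": "id=1",
          "headers": {"X-Custom": "yes"}, "body": ""}
playback = {"uri": "/login", "query": "id=1",
            "headers": {}, "body": ""}

print(diff_requests(logged, playback))  # {'headers': ({'X-Custom': 'yes'}, {})}
```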

Using Fiddler

Because of the underlying technology, there may be times when either the web test recorder or web test playback viewer does not accurately reflect what is actually getting sent over the wire in subtle ways. There may even be times when the web test recorder interferes with the requests sent by IE. To get a true picture of what is getting sent, you can use a tool like Fiddler or NetMon.

One example of this is in VS2005, web test playback always showed cookies being sent in a single header, when in fact they were sent in multiple http headers.

You can also configure Fiddler to run while the web test is running to capture the http traffic the web test is sending. To do this, you need to create a web test plugin and add this code to the constructor:

            // Route web test traffic through Fiddler's default listening port
            this.Proxy = "http://localhost:8888";
            // Cast so we can reach the BypassProxyOnLocal setting
            WebProxy webProxy = (WebProxy)this.WebProxy;
            // Make sure requests to localhost also go through the proxy
            webProxy.BypassProxyOnLocal = false;

Web Test Logging

Web test playback is a great tool for seeing what is going on in the web test engine. However, there may be times when you just want to dump the entire session to a text file. One limitation in Web test playback is there is no way to search across all the requests and responses in the session. We have developed a web test logging sample plugin that will do just that and plan to release it to CodePlex soon at http://www.codeplex.com/TeamTestPlugins.

Conclusion

Record/playback in VS 2008 has been vastly improved over VS 2005. The recorder now has the capability to pick up all requests sent from IE, the Dynamic Parameter Detection feature catches the most common cases for dynamic parameter correlation, the Response URL validation rule catches redirects to error pages, and a number of bugs have been fixed to make playback more reliable. Even with these new features, there may be times when the recorder by default isn’t capturing the meaningful requests for your application, and you will want to tweak the recorder settings to record additional requests.

There may also be times where record/playback will fail, and you will need to debug the web test to figure out what is going wrong. The addition of the recorder logging feature will make this easier.

More