
Thursday, May 29, 2008

UFrame combines the goodness of UpdatePanel and IFRAME in a cross-browser, cross-platform solution. It allows a DIV to behave like an IFRAME, loading content from any page, either static or dynamic. It can load pages having both inline and external JavaScript and CSS, just like an IFRAME. But unlike an IFRAME, it loads the content within the main document, and you can put any number of UFrames on your page without slowing down the browser. It supports ASP.NET postback nicely, and you can have a DataGrid or any other complex ASP.NET control within a UFrame. UFrame works perfectly with ASP.NET MVC, making it a replacement for UpdatePanel. Best of all, UFrame is implemented 100% in JavaScript, making it a cross-platform solution. As a result, you can use UFrame on ASP.NET, PHP, JSP or any other platform.

<div class="UFrame" id="UFrame1" src="SomePage.aspx?ID=UFrame1" >

  <p>This should get replaced with content from Somepage.aspx</p>

</div>

The response from SomePage.aspx is rendered directly inside the UFrame. Here you see two UFrames used to load the same SomePage.aspx as if it were loaded inside an IFRAME. Another UFrame is used to load AnotherPage.aspx, which shows photos from Flickr.

See it in action!

You can test UFrame from the CodePlex project site linked below.

What is UFrame?

UFrame can load and host a page (ASP.NET, PHP or regular HTML) inside a DIV. Unlike an IFRAME, which loads the content inside a browser frame that has no relation to the main document, UFrame loads the content within the same document. Thus all the JavaScript and CSS of the main document flow through to the loaded content. It's just like an UpdatePanel with an IFRAME's src attribute.

The above UFrames are declared like this:

<div id="UFrame1" src="SomePage.aspx" >
    <p>This should get replaced with content from Somepage.aspx</p>
</div>

The features of UFrame are:

  • You can build regular ASP.NET/PHP/JSP/HTML pages and make them behave as if they are fully AJAX-enabled! Plain postbacks will work as if inside an UpdatePanel, and plain hyperlinks will behave as if their content is being loaded via AJAX.
  • Load any URL inside a DIV. It can be a PHP, ASP.NET, JSP or regular HTML page.
  • Just like IFRAME, you can set the src attribute of DIVs, and they are converted to UFrames when the UFrame library loads.
  • Unlike IFRAME, it loads the content within the main document. So the main document's CSS and JavaScript are available to the loaded content.
  • It allows you to build parts of a page as multiple fully independent pages.
  • Each page is built as a standalone page. You can build, test and debug each small page independently and put them together on the main page using UFrames.
  • It loads and executes both inline and external scripts from the loaded page. You can also render different scripts during a UFrame postback.
  • All external scripts are loaded before the body content is set, and all inline scripts are executed after both the external scripts and the body have loaded. This way the inline scripts execute when the body content is already available.
  • It loads both inline and external CSS.
  • It handles duplicates nicely: it does not load the same external JavaScript or CSS file twice.

Download the code

You can download the latest version of UFrame, along with the VS 2005 and VS 2008 (MVC) example projects, from CodePlex:

www.codeplex.com/uframe

Please go to the "Source Code" tab for the latest version. You are invited to join the project and improve it or fix bugs.

Read the article about UFrame

I have published an article about UFrame at CodeProject:

http://www.codeproject.com/KB/aspnet/uframe.aspx

The article explains in detail how UFrame is built. Be prepared for a big dose of JavaScript code.

If you find UFrame or the article useful, please vote for me at CodeProject.

More

As ASP.NET performance advisors, we are typically brought into a project when it's already in trouble. In many cases, the call doesn't come until after the application has been put into production. What worked great for the developers isn't working well for users. The complaint: the site is too slow. Management wants to know why this wasn't discovered in testing. Development can't reproduce the problem. At least one person is saying that ASP.NET can't scale. Sound familiar?

Some of the busiest Web sites in the world run on ASP.NET. MySpace is a great example; in fact, it was migrated to ASP.NET after running on a number of different platforms. The fact is, performance problems can creep into your app as it scales up, and when they do, you need to determine what the actual problem is and find the best strategies to address it. The biggest challenge you'll face is creating a set of measurements that cover the performance of your application from end to end. Unless you're looking at the whole problem, you won't know where to focus your energy.

 

The Performance Equation

In September 2006, Peter Sevcik and Rebecca Wetzel of NetForecast published a paper called "Field Guide to Application Delivery Systems." The paper focused on improving wide area network (WAN) application performance and included the equation in Figure 1. The equation looks at WAN performance, but with a few minor modifications it can be used to measure Web application performance.

 

More

Wednesday, May 28, 2008

Visual Studio 2008 and .NET 3.5 are already here, but some of us sometimes still need to use .NET 2.0.
Visual Studio 2008 supports this through a new feature called “Multi-Targeting”: you can use Visual Studio 2008 as an IDE for .NET 2.0 and .NET 3.0 too. Simply choose the version you want in the “New Project” dialog.

What all the versions you can target from VS 2008 have in common is that they are all based on the same version of the CLR: CLR 2.0.

.NET 3.5 is based on the same version of the CLR that .NET 2.0 uses. Of course, there are a lot of new features and new C# and VB.NET versions, but under the covers the same CLR is at work.

Why is this so important? Because, theoretically, you can use some of the new features of .NET 3.5 in .NET 2.0 projects when you work from VS 2008.

In this post, I'll show you some of the new C# 3 and .NET 3.5 features and how they work in a .NET 2.0 project (created and developed in Visual Studio 2008). For every feature I'll show you the code that is actually generated: the original code, and the disassembled code from Reflector.

All the code in this post was written in Visual Studio 2008 in a .NET 2.0 Console Application project, without references to any additional assemblies, and was checked on a machine without .NET 3.5.

Auto-Properties
One of the new features in C# 3 is auto-properties. Because a lot of the properties we write are very simple, we don't have to write the full code (declare a private field, create a getter and a setter). We just have to write this:

public string Name { get; set; }

The same auto-property code works in a .NET 2.0 project under Visual Studio 2008. That's because auto-properties are only a compiler trick. Let's see how this code looks in Reflector (in a .NET 2.0 project, the same as in a .NET 3.5 project):

// Fields
[CompilerGenerated]
private string <Name>k__BackingField;

// Properties
public string Name
{
    [CompilerGenerated]
    get
    {
        return this.<Name>k__BackingField;
    }
    [CompilerGenerated]
    set
    {
        this.<Name>k__BackingField = value;
    }
}

As you can see, what actually happens is that the compiler creates a new private field, and a public property with a simple getter and setter. Because it's really just a compiler trick, we can use it in .NET 2.0 projects too.

Object Initializers

List<Book> books = new List<Book>();
books.Add(new Book() { Name = "Enders Game", ISDN = "13456789" });

This simple code is an example of the new object initializers in C# 3. But this code works when you create a .NET 2.0 project too because, again, it's a simple compiler trick. This is the output of Reflector's disassembly for this code (in a .NET 2.0 project under Visual Studio 2008):

internal class Program
{
    // Methods
    private static void Main(string[] args)
    {
        List<Book> books = new List<Book>();
        Book <>g__initLocal0 = new Book();
        <>g__initLocal0.Name = "Enders Game";
        <>g__initLocal0.ISDN = "13456789";
        books.Add(<>g__initLocal0);
    }
}

Behind the scenes, this code creates a new instance of the Book object and simply assigns values to the properties. Because .NET 2.0 and .NET 3.5 actually run on the same CLR, the same code is generated, and it works on .NET 2.0 too.

"var" keyword

The keyword "var" is new in C# 3 and gives us the option to declare a new variable without specifying the type. The type will be inferred from the value we assign to the variable:

var i = 5;

This code will work in a .NET 2.0 project in Visual Studio 2008 too because, again, it's a compiler trick. If I check Reflector's disassembly, I'll see:

Empty!!!

The compiler is smart: when I only declare a variable but never use it, no code is emitted for it at all. But if I change the code a little and print the variable:

var i = 5;
Console.WriteLine(i);

Then we'll see this:
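The original screenshot is missing here; based on the description below, the disassembly would be essentially this (a sketch, not the exact Reflector output):

int i = 5;
Console.WriteLine(i);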

We can use this feature in a .NET 2.0 project because it's only a compiler trick: "var" just tells the compiler to substitute the type name of the value we use. In this example, we see that int replaced the var keyword.

More than that: we can also use anonymous types in a .NET 2.0 project:

var i = new { Name = "Enders Game", ISDN = "123456789" };
Console.WriteLine(i.Name);

The disassembly:

WOW! What is this "var" keyword doing in the disassembly? This is a .NET 2.0 project, and in C# 2 there is no "var" keyword.
Actually, a new class was generated:
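The screenshot is missing here; the generated class has roughly the following shape (a simplified sketch of the compiler-generated anonymous type, not the exact Reflector output):

[CompilerGenerated]
[DebuggerDisplay(@"\{ Name = {Name}, ISDN = {ISDN} \}", Type = "<Anonymous Type>")]
internal sealed class <>f__AnonymousType0<TName, TISDN>
{
    // Read-only backing fields; anonymous types are immutable.
    private readonly TName name;
    private readonly TISDN isdn;

    public <>f__AnonymousType0(TName name, TISDN isdn)
    {
        this.name = name;
        this.isdn = isdn;
    }

    public TName Name { get { return this.name; } }
    public TISDN ISDN { get { return this.isdn; } }
}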

But this class is marked with the DebuggerHidden and DebuggerDisplay attributes, so we can't see it while debugging.

And again, we see how we can use .NET 3.5 features and C# 3 keywords in a .NET 2.0 project under Visual Studio 2008. It's possible only because this (and everything else we've seen in this post) is a compiler trick that doesn't require any additional assemblies or features, and uses CLR 2.0.

Extension Methods
Daniel Moth wrote about using Extension Methods in .NET 2.0 here.
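The essence of the trick Daniel describes is that the C# 3 compiler only needs an ExtensionAttribute type to exist; you can declare it yourself in a .NET 2.0 project. A sketch (the string extension is a hypothetical example):

using System;

namespace System.Runtime.CompilerServices
{
    // Declaring this attribute ourselves satisfies the C# 3 compiler, which
    // emits it on extension methods; no .NET 3.5 assembly is required.
    [AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class | AttributeTargets.Method)]
    public sealed class ExtensionAttribute : Attribute { }
}

public static class StringExtensions
{
    // A C# 3 extension method compiled against .NET 2.0.
    public static bool IsNullOrTrimmedEmpty(this string s)
    {
        return s == null || s.Trim().Length == 0;
    }
}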

Summary

Visual Studio 2008 supports Multi-Targeting, which gives us the option to develop a .NET 2.0 project in VS 2008 and compile it with the new version of the compiler. This compiler produces regular .NET 2.0 code but gives us the option to use the .NET 3.5 and C# 3 features that are really just compiler tricks.
Behind the scenes, the compiler generates regular .NET 2.0 code.

 

Note:  The point of the post is to explain why unit tests can actually save you time in the long run even if you or your boss don't currently use or believe in them.  It's not my goal to go into some silly religious discussion about why unit tests should or should not be used in a project.  There are plenty of forums out there for arguing over various technical concepts and methodologies if you have the time to waste.

Many different philosophies have been proposed that offer solutions for writing quality code.  As a result, two people will generally give two different answers if you question them about the best way to write quality code.  Regardless of your views on writing quality code, testing has to fit into the picture at some point.  We could argue over exactly where testing fits into a project but I don't think anyone would argue that testing can be skipped (and if you're one of those people you can save yourself some time and stop reading now :-)).

I've never been one of those "letter of the law" people when it comes to just about anything and that applies to concepts like testing code as well.  I believe balance has to be reached regardless of what you're doing.  There is such a thing as going "overboard" when it comes to software development, studying for a test at school, training for sports or many other things.  However, everyone's definition of "overboard" differs and I respect that so I'll move on to the heart of the matter which is unit testing and how it can benefit your projects even if you don't agree with the "letter of the law" people out there.

If you're new to unit testing, I'll sum up the general concept quickly: write tests, followed by code to satisfy those tests.  That one statement doesn't do unit testing justice, but hopefully you get the basic idea.  Do I always write my tests first, along with the application's general stub code?  The simple answer is "I try to".  I try to follow best practices, but in reality it doesn't always work out exactly how I want.
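For instance, with the Visual Studio test framework mentioned later in this post, a first test might look like the following minimal sketch (Calculator is a hypothetical class written afterwards to satisfy the test):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        // The test is written first; Calculator.Add is implemented to satisfy it.
        Calculator calc = new Calculator();
        Assert.AreEqual(5, calc.Add(2, 3));
    }
}

// Hypothetical class under test, written after the test above.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}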

Regardless of your view on the overall process, unit tests can still provide significant benefits in my opinion even if you write them later in the development process.  The "letter of the law" people out there probably consider that to be completely wrong, but I try hard to keep in mind that everyone and every project has different skill levels, needs and constraints.  Whether you write your unit tests at the beginning, in the middle or even at the end of a project (pushing them off until the end is NOT recommended and your hand should be slapped with a ruler if you do that :-)), they can still provide you many benefits.

There are numerous Websites dedicated to the topic of unit tests.  Wikipedia provides a nice overview here. A few of the benefits unit tests bring to the table include:

  • Forces more thorough consideration of an application's design
  • Simplifies code integration
  • Catches bugs more quickly in the development process
  • Simplifies maintenance
  • Provides visual test results
  • Provides a testing history
  • Makes you feel better about the stability of your application (assuming you have good code coverage)
  • Many more....

For some people writing unit tests before writing application code (more than just stub code) is absolutely required for writing good software.  I agree with that stance overall since if you build testing into a project from the beginning then you should definitely have higher quality code in the end if you stick with it and ensure that you've achieved good code coverage with your tests.  However, I've also been on projects where writing all of the unit tests up front simply wasn't going to happen due to the time constraints (which are sometimes ridiculously out of touch with reality) placed on a project.  Regardless of your situation, unit tests can still help you in the short-term and long-term. 

Here are a few reasons you should consider using them if you're not already.  I've found that they actually save time in the long run if you spend a little time up front.

  • How many times have you written test harness code to test a particular feature?  A lot of people whip up a quick console project (or multiple console projects) to do this type of thing.  By using unit tests you can save yourself the time of writing test harnesses since unit test frameworks do that for you, plus you have the ability to run the tests anytime and see their status (green light, red light) aggregated together in one nice report.  If you've never worked with unit tests before, here's an example of what test results can look like (this one is generated by Visual Studio 2008):
  • How many times have you or someone else asked, "How will this one quick change affect the application?".  For many applications a "small" or "quick" change is made without knowing the impact on the overall application.  It's kind of a guessing game for some people. You think it's a simple change only to realize that the one "simple" change affected many other things as well that you or someone else may have forgotten about.  If unit tests were in place (assuming good code coverage) you could make the change, run the tests and instantly know how it affects things.  It's like having a crystal ball in some ways since by having unit tests you can more accurately predict future changes and the impact they'll have.
  • How do you know if code contributed from multiple people on a project integrates well?   If unit tests were in place you could test their code along with your code and see if things play nicely together.  This of course assumes that everyone is writing unit tests for the code.
  • How do you get test data into and out of a database?  While you certainly don't need unit tests for this, startup and cleanup unit test methods can be used to automatically populate a DB with test data (with a little work on your part) as tests are run so that your functional tests can be run against real data.  People normally write some code to insert test data anyway (or use 3rd party products to do it like some from RedGate Software) so why not do it as part of your testing process? 
  • How many times have you been asked for a status report on an application?  If things are organized really well you'll probably have a project plan in place that can be updated.  In other cases, sending the results of unit tests (sending an image like the one above) does wonders to help people see where things stand on a project.
  • How many times have you had to provide production support for an application someone else wrote?  Debugging someone else's code is never a fun task especially if documentation is light or non-existent.  Imagine inheriting a project that already has good unit tests in place though!  You can add your tests (if needed), make the updates or bug fixes and then run all of the tests to see how things look.  It's much better than guessing if an application works properly or not especially if you don't know much about the application to start. Plus, unit tests provide a built-in type of documentation since you know what the key methods are in the application that you need to understand and worry about.

To sum things up, I'm a big fan of unit tests simply because I end up writing test harness programs at some point anyway, need to debug a production application, or am forced into a situation where someone wants to make that one "simple" change in an application and I don't know the true impact.  By using test frameworks such as Visual Studio's, nUnit, xUnit.net, mbUnit, etc. (I personally use the features found in Visual Studio), I save myself the time of getting test data into the database, writing test harnesses, maintaining an application, plus much more.  By writing unit tests are you going to catch all of the bugs?  Obviously not...but you'll be well ahead of where you would have been without them.

Don't take my word for it though, start creating a few unit tests for an application you're already working on (even if the tests are created after the fact in this scenario) and you'll see how nice it is to see green lights as your tests pass or red lights when you need to fix something you wouldn't have caught otherwise.  It sure beats pushing off all testing to your end users and wasting their time reporting bugs that you could have dealt with much earlier in the development process. 

If you're interested in getting started with the overall concept of unit tests some nice starter articles can be found here and here.  I'm sure there are other "How many times have you...." type questions where unit tests can help and even save time.  If you have additional suggestions please add a comment.

More

 

Monday, May 26, 2008

Microsoft released a cool new tool called Microsoft Source Analysis for C#.  It's an internal tool that does somewhat what FxCop does.  While FxCop analyses the IL, Source Analysis analyses the source code itself.  It comes with about 200 built-in rules that are, however, not customizable.  These rules cover:

  • Layout of elements, statements, expressions, and query clauses
  • Placement of curly brackets, parentheses, square brackets, etc.
  • Spacing around keywords and operator symbols
  • Line spacing
  • Placement of method parameters within method declarations or method calls
  • Standard ordering of elements within a class
  • Formatting of documentation within element headers and file headers
  • Naming of elements, fields and variables
  • Use of the built-in types
  • Use of access modifiers
  • Allowed contents of files
  • Debugging text

After installation, the tool is integrated into the VS 2005 or 2008 IDE. 

You can read about Source Analysis for C# here:
http://blogs.msdn.com/sourceanalysis/

You can download it here:
http://code.msdn.microsoft.com/sourceanalysis/Release/ProjectReleases.aspx?ReleaseId=1047

 

A long time ago I was watching Joe Stagner's ASP.NET AJAX videos and I saw him purposefully indenting the attributes in his ASP.NET markup so that they lined up neatly underneath each other. I really took to this concept because it's so much easier to read a laundry list of attributes than it is to scroll across your page trying to hunt down your markup. I emailed Joe, asking him what the Visual Studio hotkey was to perform the lineup, and he replied that he did it all by hand. I've done the same thing since watching those videos and while it's time consuming, I really prefer the readability.

 

At DevTeach, Rob Burke gave an excellent talk on building line-of-business applications using WPF and Silverlight. He is another practitioner of lined-up attributes, and watching him painstakingly line them up by hand during the presentation was the last straw for me; I wasn't going to waste another keystroke lining up attributes by hand!

 

More

Friday, May 23, 2008

ASP.NET 2.0 applications on IIS 7.0 are hosted using the ASP.NET Integrated mode by default.  This new mode enables a myriad of exciting scenarios including using super-valuable ASP.NET features like Forms Authentication for your entire Web site, and developing new ASP.NET modules to do things like URL rewriting, authorization, logging, and more at the IIS level.  For more information about the ASP.NET Integration in IIS 7.0, see: ASP.NET Integration with IIS7

 

As you know, with great power comes great responsibility.  Similarly, with making ASP.NET applications more powerful in IIS 7.0 comes the responsibility of making sure that existing ASP.NET applications continue to work.  This has been a major challenge for us as we re-architected the entire core engine of ASP.NET, and in the end we were highly successful in meeting it.  As a result, most ASP.NET applications should work without change.

 

This post lists the changes in behavior that you may encounter when deploying your ASP.NET applications on IIS 7.0 on Windows Vista SP1 and Windows Server 2008.  Unless noted, these breaking changes occur only when using the default ASP.NET Integrated mode.

 

 

Using Classic ASP.NET mode

IIS 7.0 also offers the ability to run ASP.NET applications using the legacy Classic ASP.NET integration mode, which works the same way ASP.NET has worked on previous versions of IIS.  However, we strongly recommend that you use a workaround where available and change your application to work in Integrated mode instead.  Moving to Classic mode will prevent your application from taking advantage of the ASP.NET improvements made possible in Integrated mode, and from leveraging future features from both Microsoft and third parties that may require Integrated mode.  Use Classic mode as a last resort if you cannot apply the specified workaround.  For more information about moving to Classic mode, see: Changing the ASP.NET integration mode.

 

I’ve blogged in detail about some of the breaking changes below.  Those changes include links to the posts that contain additional details and workaround information.  If you require more information on a particular problem, please leave a comment.

 

 

Migration errors

These errors occur due to changes in how some ASP.NET configuration is applied in Integrated mode.  IIS will automatically detect this configuration and issue an error asking you to migrate your application, or move it to classic mode if migration is not acceptable (See breaking change #3 below).

 

1)    ASP.NET applications require migration when specifying configuration in <httpModules> or <httpHandlers>.

You will receive a 500 - Internal Server Error.  This can include HTTP Error 500.22, and HTTP Error 500.23: An ASP.NET setting has been detected that does not apply in Integrated managed pipeline mode.

It occurs because ASP.NET modules and handlers should be specified in the IIS <handlers> and <modules> configuration sections in Integrated mode.

Workaround:

1) You must migrate the application configuration to work properly in Integrated mode.  You can migrate the application configuration with AppCmd:

> %windir%\system32\inetsrv\Appcmd migrate config "<ApplicationPath>"
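For example, for an application named MyApp under the Default Web Site (a hypothetical path):

> %windir%\system32\inetsrv\Appcmd migrate config "Default Web Site/MyApp"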

2) You can migrate manually by moving the custom entries in the <system.web>/<httpModules> and <system.web>/<httpHandlers> configuration sections to the <system.webServer>/<modules> and <system.webServer>/<handlers> configuration sections, and either removing the <httpModules> and <httpHandlers> configuration OR adding the following to your application’s web.config:

<system.webServer>

    <validation validateIntegratedModeConfiguration="false" />

</system.webServer>

 

2)    ASP.NET applications produce a warning when the application enables request impersonation by specifying <identity impersonate=”true”> in configuration

You will receive a 500 - Internal Server Error. This is HTTP Error 500.24: An ASP.NET setting has been detected that does not apply in Integrated managed pipeline mode.

It occurs because ASP.NET Integrated mode is unable to impersonate the request identity in the BeginRequest and AuthenticateRequest pipeline stages.


Workaround:

1) If your application does not rely on impersonating the requesting user in the BeginRequest and AuthenticateRequest stages (the only stages where impersonation is not possible in Integrated mode), ignore this error by adding the following to your application’s web.config:
<system.webServer>

    <validation validateIntegratedModeConfiguration="false" />

</system.webServer>

2) If your application does rely on impersonation in BeginRequest and AuthenticateRequest, or you are not sure, move to classic mode.

 

3)    You receive a configuration error when your application configuration includes an encrypted <identity> section.

You will receive a 500 – Internal Server Error.  This is HTTP Error 500.19: The requested page cannot be accessed because the related configuration data for the page is invalid
The detailed error information indicates that “Configuration section encryption is not supported”.

It occurs because IIS attempts to validate the <identity> section and fails to read section-level encryption.


Workaround:

1) If your application does not have the problem with request impersonation per breaking change #2, migrate your application configuration by using AppCmd as described in breaking change #1:

> %windir%\system32\inetsrv\Appcmd migrate config "<ApplicationPath>"

This will ensure that the rest of the application configuration is migrated, and automatically add the following to your application’s web.config to ignore the <identity> section:

<system.webServer>

    <validation validateIntegratedModeConfiguration="false" />

</system.webServer>

2) If your application does have the problem with request impersonation, move to classic mode.

 

Authentication, Authorization, and Impersonation

In Integrated mode, both IIS and ASP.NET authentication stages have been unified.  Because of this, the results of IIS authentication are not available until the PostAuthenticateRequest stage, when both ASP.NET and IIS authentication methods have completed.  This causes the following changes:

 

 

4)    Applications cannot simultaneously use FormsAuthentication and WindowsAuthentication

Unlike Classic mode, it is not possible to use Forms Authentication in ASP.NET and still require users to authenticate with an IIS authentication method including Windows Authentication, Basic Authentication, etc.  If Forms Authentication is enabled, all other IIS authentication methods except for Anonymous Authentication should be disabled.
In addition, when using Forms Authentication, the following changes are in effect:

-       The LOGON_USER server variable will be set to the name of the Forms Authentication user.

-       It will not be possible to impersonate the authenticated client.  To impersonate the authenticated client, you must use an authentication method that produces a Windows user instead of Forms Authentication.

Workaround:

1) Change your application to use the pattern explained in Implementing a two level authentication scheme using Forms Authentication and another IIS authentication method in IIS 7.0.

 

 

5)    Windows Authentication is performed in the kernel by default.  This may cause HTTP clients that send credentials on the initial request to fail.

Kernel-mode authentication is enabled by default in IIS 7.0.  This improves the performance of Windows Authentication and simplifies the deployment of the Kerberos authentication protocol.  However, it may cause some clients that send the Windows credentials on the initial request to fail due to a design limitation in kernel-mode authentication.  Normal browser clients are not affected because they always send the initial request anonymously.

NOTE: This breaking change applies to both Classic and Integrated modes.


Workaround:

1) Disable kernel-mode authentication by setting useKernelMode to “false” in the system.webServer/security/authentication/windowsAuthentication section.  You can also do it with AppCmd as follows:

> %windir%\system32\inetsrv\appcmd set config /section:windowsAuthentication /useKernelMode:false

 

6)    Passport authentication is not supported.

You will receive an ASP.NET 500 – Server Error: The PassportManager object could not be initialized. Please ensure that Microsoft Passport is correctly installed on the server.

Passport authentication is no longer supported on Windows Vista and Windows Server 2008.  NOTE: This breaking change applies to both Classic and Integrated modes.

 

 

7)    HttpRequest.LogonUserIdentity throws an InvalidOperationException when accessed in a module before PostAuthenticateRequest

You will receive an ASP.NET 500 – Server Error: This method can only be called after the authentication event.

HttpRequest.LogonUserIdentity throws an InvalidOperationException when accessed before PostAuthenticateRequest, because the value of this property is unknown until after the client has been authenticated.


Workaround:

1) Change the code to not access HttpRequest.LogonUserIdentity until at least PostAuthenticateRequest

 

 

8)    Client impersonation is not applied in a module in the BeginRequest and AuthenticateRequest stages.

The authenticated user is not known until the PostAuthenticateRequest stage.  Therefore, ASP.NET does not impersonate the authenticated user for ASP.NET modules that run in the BeginRequest and AuthenticateRequest stages.  This can affect your application if you have custom modules that rely on impersonating the client for validating access to, or accessing, resources in these stages.

Workaround:

1) Change your application to not require client impersonation in BeginRequest and AuthenticateRequest stages.

 

9)    Defining a DefaultAuthentication_OnAuthenticate method in global.asax throws a PlatformNotSupportedException

You will receive an ASP.NET 500 – Server Error: The DefaultAuthentication.Authenticate method is not supported by IIS integrated pipeline mode.

In Integrated mode, the DefaultAuthenticationModule.Authenticate event is not implemented and hence is no longer raised. In Classic mode, this event is raised when no authentication has occurred.


Workaround:

1) Change the application to not rely on the DefaultAuthentication_OnAuthenticate event.  Instead, write an IHttpModule that inspects whether HttpContext.User is null to determine whether an authenticated user is present.
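A minimal sketch of such a module (names are illustrative):

using System;
using System.Web;

public class AuthenticationInspectionModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Runs after both IIS and ASP.NET authentication have completed.
        app.PostAuthenticateRequest += new EventHandler(OnPostAuthenticateRequest);
    }

    private void OnPostAuthenticateRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        if (context.User == null)
        {
            // No authenticated user is present; run the logic that previously
            // lived in the DefaultAuthentication_OnAuthenticate handler here.
        }
    }

    public void Dispose() { }
}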

 

 

10) Applications that implement WindowsAuthentication_OnAuthenticate in global.asax will not be notified when the request is anonymous

If you define the WindowsAuthentication_OnAuthenticate method in global.asax, it will not be invoked for anonymous requests.  This is because anonymous authentication occurs after the WindowsAuthentication module can raise the OnAuthenticate event.

 

Workaround:

1) Change your application to not use the WindowsAuthentication_OnAuthenticate method.  Instead, implement an IHttpModule that runs in PostAuthenticateRequest, and inspects HttpContext.User.

 

Request limits and URL processing

The following changes result from additional restrictions on how IIS processes incoming requests and their URLs.

 

11) Request URLs containing unencoded “+” characters in the path (not the querystring) are rejected by default

You will receive HTTP Error 404.11 – Not Found: The request filtering module is configured to deny a request that contains a double escape sequence.

This error occurs because IIS is configured by default to reject attempts to doubly-encode a URL, which commonly represent an attempt to execute a canonicalization attack.


Workaround:

1) Applications that require the use of the “+” character in the URL path can disable this validation by setting the allowDoubleEscaping attribute in the system.webServer/security/requestFiltering  configuration section in the application’s web.config.  However, this may make your application more vulnerable to malicious URLs:

<system.webServer>

    <security>

            <requestFiltering allowDoubleEscaping="true" />

    </security>

</system.webServer>

 

12) Requests with querystrings larger than 2048 bytes will be rejected by default

You will receive an HTTP Error 404.15 – Not Found: The request filtering module is configured to deny a request where the query string is too long.

IIS by default is configured to reject querystrings longer than 2048 bytes.  This may affect your application if it uses large querystrings or uses cookieless ASP.NET features like Forms Authentication and others that cumulatively exceed the configured limit on the querystring size.

NOTE: This breaking change applies to both Classic and Integrated modes.


Workaround:

1) Increase the maximum querystring size by setting the maxQueryString attribute on the requestLimits element in the system.webServer/security/requestFiltering configuration section in your application’s web.config:

<system.webServer>

    <security>

        <requestFiltering>

            <requestLimits maxQueryString="NEW_VALUE_IN_BYTES" />

        </requestFiltering>

    </security>

</system.webServer>

Changes in response header processing

These changes affect how response headers are generated by the application.

 

13) IIS always rejects new lines in response headers (even if ASP.NET enableHeaderChecking is set to false)

If your application writes headers with line breaks (any combination of \r or \n), you will receive an ASP.NET 500 – Server Error: Value does not fall within the expected range.

IIS will always reject any attempt to produce response headers with line breaks, even if ASP.NET’s enableHeaderChecking behavior is disabled.  This is done to prevent header splitting attacks.

NOTE: This breaking change applies to both Classic and Integrated modes.

 

14) When the response is empty, the Content-Type header is not suppressed

If the application sets a Content-Type header, it will remain present even if the response is cleared.  Requests to ASP.NET content types will typically have the “Content-Type: text/html” present on responses unless overridden by the application.


Workaround:
1) While this should not typically have a breaking effect, you can remove the Content-Type header by explicitly setting the HttpResponse.ContentType property to null when clearing the response.

 

15) When the response headers are cleared with HttpResponse.ClearHeaders, default ASP.NET headers are not generated.  This may result in the lack of the Cache-Control: private header that prevents the caching of the response on the client

HttpResponse.ClearHeaders does not re-generate default ASP.NET response headers, including “Content-Type: text/html” and “Cache-Control: private”, as it does in Classic mode.  This is because ASP.NET modules may call this API for requests to any resource type, and therefore generating ASP.NET-specific headers is not appropriate.  The lack of the “Cache-Control” header may cause some downstream network devices to cache the response.


Workaround:
1) Change the application to manually generate the Cache-Control: private header when clearing the response, if it is desired to prevent caching in downstream network devices.
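A sketch of that workaround (illustrative):

HttpResponse response = HttpContext.Current.Response;
response.ClearHeaders();
// ClearHeaders does not re-generate the default ASP.NET headers in
// Integrated mode, so restore Cache-Control explicitly.
response.AppendHeader("Cache-Control", "private");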

 

Changes in application and module event processing

These changes affect how the application and module event processing takes place.

 

16) It is not possible to access the request through the HttpContext.Current property in Application_Start in global.asax

If your application accesses the current request context in the Application_Start method in global.asax as part of application initialization, you will receive an ASP.NET 500 – Server Error: Request is not available in this context.

This error occurs because ASP.NET application initialization has been decoupled from the request that triggers it.  In Classic mode, it was possible to indirectly access the request context by accessing the HttpContext.Current property.  In Integrated mode, this context no longer represents the actual request and therefore attempts to access the Request and Response objects will generate an exception.


Workaround:

1) See Request is not available in this context exception in Application_Start for a detailed description of this problem and available workarounds.

 

17) The order in which module event handlers execute may be different than in Classic mode

The following differences exist:

-       For each event, event handlers for each module are executed in the order in which modules are configured in the <modules> configuration section.  Global.asax event handlers are executed last.

-       Modules that register for the PreSendRequestHeaders and PreSendRequestContent events are notified in the reverse of the order in which they appear in the <modules> configuration section

-       For each event, synchronous event handlers for each module are executed before asynchronous handlers.  Otherwise, event handlers are executed in the order in which they are registered.

Applications that have multiple modules configured to run in either of these events may be affected by this change if they share a dependency on event ordering.  This is not likely for most applications.  The order in which modules execute can be obtained from a Failed Request Tracing log.


Workaround:
1) Change the order of the modules experiencing an ordering problem in the system.webServer/modules configuration section.

 

18) ASP.NET modules in early request processing stages will see requests that previously may have been rejected by IIS prior to entering ASP.NET.  This includes modules running in BeginRequest seeing anonymous requests for resources that require authentication.

ASP.NET modules can run in any pipeline stages that are available to native IIS modules.  Because of this, requests that previously may have been rejected in the authentication stage (such as anonymous requests for resources that require authentication) or other stages prior to entering ASP.NET may run ASP.NET modules.

This behavior is by design in order to enable ASP.NET modules to extend IIS in all request processing stages. 

 

Workaround:

1) Change application code to avoid any application-specific problems that arise from seeing requests that may be rejected later on during request processing.  This may involve changing modules to subscribe to pipeline events that are raised later during request processing.

 

 

Other application changes

Other changes in the behavior of ASP.NET applications and APIs.

 

 

19) DefaultHttpHandler is not supported.  Applications relying on sub-classes of DefaultHttpHandler will not be able to serve requests.

If your application uses DefaultHttpHandler or handlers that derive from DefaultHttpHandler, it will not function correctly.  In Integrated mode, handlers derived from DefaultHttpHandler will not be able to pass the request back to IIS for processing, and will instead serve the requested resource as a static file.  Integrated mode allows ASP.NET modules to run for all requests without requiring the use of DefaultHttpHandler.

 

Workaround:

1) Change your application to use modules to perform request processing for all requests, instead of using wildcard mapping to map ASP.NET to all requests and then using DefaultHttpHandler derived handlers to pass the request back to IIS.

 

 

20) It is possible to write to the response after an exception has occurred.

In Integrated mode, it is possible to write to the response, and have that additional output displayed, after an exception has occurred, typically in modules that subscribe to the LogRequest and EndRequest events.  This does not occur in Classic mode.  If an error occurs during the request and the application writes to the response in EndRequest after the exception has occurred, the response information written in EndRequest will be shown.  This only affects requests that include unhandled exceptions.  To avoid writing to the response after an exception, an application should check HttpContext.Error or HttpResponse.StatusCode before writing to the response.
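A sketch of that check in an EndRequest handler (illustrative):

void Application_EndRequest(object sender, EventArgs e)
{
    HttpContext context = ((HttpApplication)sender).Context;
    // Skip writing if an unhandled exception occurred earlier in the pipeline.
    if (context.Error == null && context.Response.StatusCode < 400)
    {
        context.Response.Write("<!-- footer written in EndRequest -->");
    }
}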

 

 

21) It is not possible to use the ClearError API to prevent an exception from being written to the response if the exception has occurred in a prior pipeline stage

Calling Server.ClearError during the EndRequest event does not clear the exception if it occurred during an earlier event within the pipeline.  This is because the exception is formatted to the response at the end of each event that raises an exception.

Workaround:

1) Change your application to call Server.ClearError from the Application_OnError event handler, which is raised whenever an exception is thrown.
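A sketch of that workaround in global.asax (illustrative):

void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    // Log or otherwise handle the exception, then clear it right away so it
    // is not formatted into the response at the end of this stage.
    System.Diagnostics.Trace.WriteLine(ex.Message);
    Server.ClearError();
}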

 

 

22) HttpResponse.AppendToLog does not automatically prepend the querystring to the URL.

When using HttpResponse.AppendToLog to append a custom string to the URL logged in the request log file, you will need to manually prepend the querystring to the string you pass to this API.  Otherwise, existing code may lose the querystring from the logged URL when this API is used.


Workaround:

1) Change your application to manually prepend HttpRequest.QueryString.ToString() to the string passed to HttpResponse.AppendToLog.
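A sketch (illustrative):

// In Integrated mode the querystring must be included manually.
Response.AppendToLog(Request.QueryString.ToString() + " custom-log-entry");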

 

Other changes

Other changes.

 

23) ASP.NET threading settings are not used to control the request concurrency in Integrated mode

The minFreeThreads and minLocalRequestFreeThreads settings in the system.web/httpRuntime configuration section and the maxWorkerThreads setting in the processModel configuration section no longer control the threading mechanism used by ASP.NET.  Instead, ASP.NET relies on the IIS thread pool and allows you to control the maximum number of concurrently executing requests by setting the MaxConcurrentRequestsPerCPU DWORD value (default is 12) located in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0 key.  This setting is global and cannot be changed for individual application pools or applications.

 

Workaround:

1) To control the concurrency of your application, set the MaxConcurrentRequestsPerCPU setting.

 

24) ASP.NET application queues are not used in Integrated mode.  Therefore, the “ASP.NET Applications\Requests in Application Queue” performance counter will always have a value of 0

ASP.NET does not use application queues in Integrated mode.

 

25) IIS 7.0 always restarts ASP.NET applications when changes are made to the application’s root web.config file.  Because of this, the waitChangeNotification and maxWaitChangeNotification attributes have no effect.

IIS 7.0 monitors changes to web.config files as well, and causes the ASP.NET application corresponding to the changed file to be restarted, without regard to the ASP.NET change notification settings, including the waitChangeNotification and maxWaitChangeNotification attributes in the system.web/httpRuntime configuration section.

 

 

 

We hope that your move to IIS 7.0’s Integrated mode is as painless as possible, so you can immediately start to take advantage of IIS 7.0’s features and the power of Integrated ASP.NET in your applications. 

 

Let us know if you are having trouble with any of these breaking changes, or if you encounter another behavior change in your application not listed here, by posting on http://forums.iis.net.

 

More

If you have read any of my posts, you have probably noticed that I am very partial to windbg and the Debugging Tools for Windows.  I often get friendly nudges from the developers of DebugDiag when I suggest using adplus and windbg on internal discussion lists, so to be fair I have to beat the drum a bit for DebugDiag as well.

My answer to the question "Should I use DebugDiag 1.1 or Windbg?" is both... it just depends on the scenario.  I often lean towards windbg, but to a large extent that is because it is what I use all the time, so in many cases where both fit the bill equally well I simply haven't invested the time to see how it can be done with DebugDiag, and therefore I suggest what I know works.

Before I start the comparison, I just want to mention that dumps created with DebugDiag can be analysed in windbg and vice versa.  They use the same APIs and create the same types of memory dumps.  Where they differ is largely in how you configure them to gather these dumps and logs, how you analyse them in the different tools, and in the fact that DebugDiag has a nice feature that allows you to monitor memory leaks in the process.

I personally use DebugDiag to gather dumps for native memory leaks, and I use it to analyse ASP and other native issues in conjunction with windbg.  For everything else I use windbg because a) I like the logs that adplus creates and b) I like the fact that I have full control and can execute any commands I want.

Having said this, I know that it is much easier to gather dumps in DebugDiag, and it has excellent automated analysis features, which is great if you don't do post-mortem debugging on a daily basis, so I would strongly recommend trying both and making up your own mind about which one fits your style best.

 

Where do you get the tools?

Debug diag can be downloaded and installed here.

Windbg comes with the debugging tools for windows which you can download here.

More

I started this blog 2.5 years ago today, mostly because I felt that the same types of issues came up over and over and over in our support cases. I figured that if I started writing about them, a lot of people would be able to resolve them on their own, or even better avoid them in the first place.

A lot of water has passed under the bridge since then, but looking back at some of those earlier posts, they are still very applicable today, and they still seem to get a lot of hits.  Here is a list of the 21 most popular ones...

  1. ASP.NET 2.0 Crash case study: Unhandled exceptions
  2. ASP.NET Case Study: Lost session variables and appdomain recycles
  3. ASP.NET Memory: If your application is in production… then why is debug=true
  4. .NET Memory Leak Case Study: The Event Handlers That Made The Memory Baloon
  5. ASP.NET Performance Case Study: Web Service calls taking forever
  6. .NET Memory usage - A restaurant analogy
  7. .NET Garbage Collector PopQuiz - Followup
  8. .Net memory leak: Unblock my finalizer
  9. .NET Hang Debugging Walkthrough
  10. ASP.NET Case Study: Bad perf, high memory usage and high CPU in GC - Death By ViewState
  11. ASP.NET Crash: Bad CacheItemRemovedCallback - Part II
  12. ASP.NET 2.0 - Investigating .NET Exceptions with WinDbg (Compilation and Load Exceptions)
  13. .NET Memory Leak: XmlSerializing your way to a Memory Leak
  14. .NET Garbage Collection PopQuiz
  15. A .NET Crash: How not to write a global exception handler
  16. Things to ignore when debugging an ASP.NET hang
  17. Are you aware that you have thrown over 40,000 exceptions in the last 3 hours?
  18. Short note on some debugging related stuff...
  19. ASP.NET Memory Leak Case Study: Sessions Sessions Sessions…
  20. ASP.NET Case Study: High CPU in GC - Large objects and high allocation rates
  21. A Case of Invalid Viewstate

More