
Monday, April 28, 2008

I do a LOT of interviewing here, and for a while we were hiring ASP.NET people.  Here are some of the questions I asked them.  I came up with these questions because you'd "just know" this stuff if you had spent time working on a REAL WORLD ASP.NET site - through design, development, debugging, production debugging, and deployment.

Do they suck? Did I miss any?  How do you think people did?

  • From constructor to destructor (taking into consideration Dispose() and the concept of non-deterministic finalization), what are the events fired as part of the ASP.NET System.Web.UI.Page lifecycle? Why are they important? What interesting things can you do at each?
  • What are ASHX files?  What are HttpHandlers?  Where can they be configured?
  • What is needed to configure a new extension for use in ASP.NET? For example, what if I wanted my system to serve ASPX files with a *.jsp extension?
  • What events fire when binding data to a data grid? What are they good for?
  • Explain how PostBacks work, on both the client-side and server-side. How do I chain my own JavaScript into the client side without losing PostBack functionality?
  • How does ViewState work and why is it either useful or evil?
  • What is the OO relationship between an ASPX page and its CS/VB code behind file in ASP.NET 1.1? in 2.0?
  • What happens from the point an HTTP request is received on a TCP/IP port up until the Page fires the On_Load event?
  • How does IIS communicate at runtime with ASP.NET?  Where is ASP.NET at runtime in IIS5? IIS6?
  • What is an assembly binding redirect? Where are the places an administrator or developer can affect how assembly binding policy is applied?
  • Compare and contrast LoadLibrary(), CoCreateInstance(), CreateObject() and Assembly.Load().

And more: http://www.techinterviews.com/index.php?cat=9

Compression is a very "cheap and easy" way to improve responsiveness of your site. Here is an excerpt from an excellent white paper: High Performance Web Sites: The Importance of Front-End Performance

There are three main reasons why front-end performance is the place to start.

  1. There is more potential for improvement by focusing on the front-end. Cutting front-end time in half reduces response times by 40% or more, whereas cutting back-end time in half results in less than a 10% reduction.
  2. Front-end improvements typically require less time and resources than back-end projects (redesigning application architecture and code, finding and optimizing critical code paths, adding or modifying hardware, distributing databases, etc.).
  3. Front-end performance tuning has been proven to work. Over fifty teams at Yahoo! have reduced their end-user response times by following our performance best practices, often by 25% or more.

Here is the link to a post that contains PDF and PPT: http://nate.koechley.com/blog/2007/06/12/high-performance-web-sites/, or slideshare link: http://www.slideshare.net/techdude/high-performance-web-sites/

Compression is number 4 out of 14 Rules.

————————————–

Steps below are for enabling compression in IIS6.

I found very good instructions on this subject in Scott Forsyth's post, but I am a visual person and decided to add some screenshots to make this process easier when I have to repeat it in the future.

Please refer to Scott's post for background information about compression and compression in IIS6 specifically.

Steps below are required because IIS6 doesn't have an easy-to-use interface for compression, but it provides all the necessary functionality through its metabase. You can think of the metabase as the registry for IIS6. All the settings that you see in IIS Manager, and all those that you do not see, can be configured through the metabase. The good news is that in IIS6 the metabase is an XML file that can be modified with Notepad. And just like with the registry, you have to be very careful when making changes, and you should definitely BACKUP before starting.

  • Backup the metabase:
    1. Open up IIS Manager
    2. Right-click on the Server node
    3. Choose All Tasks -> Backup/Restore Configuration
    4. Follow the screens
    5. The good news is that IIS appears to back up the metabase automatically
  • Before enabling compression on your site, take some baseline measurements to see how much bandwidth you will be saving (you and your users)
    • If you haven't played with Fiddler, you should spend some time with it.
    • It is an excellent tool for seeing exactly what your application (in most cases the browser) requests, the amount of data coming back, the time it takes, etc.
    • I provide stats towards the bottom of this post
  • Enable Compression within IIS Manager
    • Unfortunately these settings alone are not enough, but they are required nonetheless
    • IIS Manager, right click on "Web Sites" node and choose properties
    • "Service" Tab, check
      • "Compress application files"
      • "Compress static fiels"
    • You should set max size to make sure you don't run out of space. IIS6 allows a max of 1024MB
    • The default location of compressed files is "%windir%\IIS Temporary Compressed Files". Scott mentions that IUSR_{machinename} needs Write rights to this directory. I didn't give those rights to IUSR and compression works for me. By default that directory seems to have the IIS_WPG group with full control.
  • Next step is to add a new Web Service Extension for gzip.dll
    • IIS Manager
    • Click on Web Service Extensions
    • Right click in empty space and choose "Add a new Web service extension.."
    • Call it HTTP Compression
    • Point it to c:\windows\system32\inetsrv\gzip.dll
    • Check the "Set extension status to Allowed" checkbox
  • Enable Direct Metabase Edit
    • IIS Manager, right click on the Server node, choose Properties
    • Check "Enable Direct Metabase Edit"
    • Below is an excerpt from Scott's post

One of the many large improvements with IIS 6 is that the metabase isn't in binary format anymore and can be edited directly using Notepad or any other tool that allows editing an XML file. Personally I prefer to enable Direct Metabase Edit so that I can edit it and the change takes effect immediately. If this isn't enabled, you will need to stop and start the web services for any changes to take effect. Of course, like editing the Windows registry, there is always the chance of something going wrong, so be careful. Unlike the Windows registry though, if you make a mistake and the metabase is saved and doesn't conform to the proper XML schema, it won't take effect, so thanks to the IIS team it's quite difficult to completely mess up the metabase. To enable this, right-click on the server (top level) in the IIS snap-in. There is a single checkbox that needs to be checked. This part couldn't get easier.

  • Metabase edits
    • MAKE A BACKUP (see steps above)
    • Open the metabase located at C:\Windows\system32\inetsrv\metabase.xml in Notepad
    • Search for <IIsCompressionScheme
    • There should be two of them, one for deflate and one for gzip. Basically they are two means of compression that IIS supports.
    • Add your extensions (ASPX, JS, CSS, etc.) to the list of extensions in HcScriptFileExtensions (a sample excerpt is shown after the statistics below). Make sure to follow the existing format carefully; an extra space will keep this from working correctly. Do this for both deflate and gzip.
    • Set HcDynamicCompressionLevel="9" (the default is 0)
      • see Scott's post about the reasons why it should be 9 and not 10
    • Please see my next post about which extensions I added and some surprising results.
  • Last step is to IISRESET
  • Here are some statistics for the home page of www.reviewbasics.com. Images are not included in these numbers:
    • Before compression:

Total: 351,210
text/javascript: 99,693
application/x-javascript: 143,287
text/xml: 649
application/x-shockwave-flash: 50,092
text/x-component: 1,832
text/html: 24,272
~headers: 6,681
text/css: 24,704


    • After compression:

Total: 223,629
text/javascript: 38,878
application/x-javascript: 108,231
text/xml: 649
application/x-shockwave-flash: 50,092
text/x-component: 3,664
text/html: 7,258
~headers: 7,621
text/css: 7,236

This is a ~37% saving. I think that's pretty impressive for about 30 minutes of work. See my next post for some other interesting and impressive results.
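For reference, here is roughly the shape of the gzip entry after the edits above. This is a trimmed sketch, not a complete entry; your metabase.xml will contain more attributes, the extensions shown are only examples, and the deflate scheme gets the same treatment:

<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/gzip"
    HcCompressionDll="%windir%\system32\inetsrv\gzip.dll"
    HcDoDynamicCompression="TRUE"
    HcDoStaticCompression="TRUE"
    HcDynamicCompressionLevel="9"
    HcScriptFileExtensions="asp
        aspx
        js
        css"
/>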

More

In this walkthrough you will learn how to create Web-Sites, Web Applications, Virtual Directories and Application Pools.

Introduction

The IIS PowerShell namespace consists of items like Web-Sites, Apps, Virtual Directories and Application Pools. Creating new namespace items and managing them is very easy using the built-in PowerShell cmdlets.

Creating Web-Sites

If you are familiar with PowerShell, you know that the New-Item cmdlet is used to create new items in the various PowerShell namespaces. The command "New-Item c:\TestDirectory" creates a new file system directory, for example (most people use the "MD" or "MKDIR" alias for New-Item, however). New-Item is also used to create new Web-Sites within the IIS 7.0 PowerShell namespace.

Parameters

Specifying the name of the directory is the only argument needed when you create a new file system directory. Unfortunately this is not enough when you create a Web-Site. Additional parameters like the file system path and network bindings are needed to create a Web-Site. Here is the command to create a new Web-Site followed by a dir command:

PS IIS:\Sites> New-Item iis:\Sites\TestSite -bindings @{protocol="http";bindingInformation=":80:TestSite"} -physicalPath c:\test

PS IIS:\Sites> dir

Name             ID   State      Physical Path                  Bindings
----             --   -----      -------------                  --------
Default Web Site 1    Started    f:\inetpub\wwwroot             http *:80:
TestSite         2    Started    c:\test                        http :80:TestSite 

Using the -physicalPath argument is pretty straightforward. But you might ask yourself why the -bindings argument looks so complex.

The construct used is a hashtable (go here to learn more about PowerShell hash tables). Within the hash table, key=value pairs indicate the settings that reflect the attributes within the IIS site bindings section:

<bindings>
        <binding protocol="http" bindingInformation=":80:TestSite" />
</bindings>

Now here is the reason why we use a hash table: IIS configuration is completely extensible (see here for more details) with additional sections and attributes. You can imagine somebody extending the <binding> element with additional attributes. Key=value pairs within a hash table provide the flexibility to incorporate these new attributes without having to completely rewrite the IIS PowerShell Provider.

Granted, the syntax is a bit complex. We are thinking about wrapping some typical tasks, like creating sites, with additional functions or scripts in a later Tech Preview.
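Until then, nothing stops you from wrapping the call yourself. Here is a minimal sketch (the function name and parameters are invented for illustration; it assumes the IIS provider is loaded, as in the examples above):

# Hypothetical convenience wrapper around New-Item; not part of the IIS provider.
function New-SimpleSite($name, $port, $physicalPath)
{
    New-Item "IIS:\Sites\$name" -physicalPath $physicalPath `
        -bindings @{protocol="http";bindingInformation=":${port}:$name"}
}

# Usage: New-SimpleSite "TestSite2" 8081 c:\test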

Deleting Sites

Here is how you delete the site you just created.

PS IIS:\> Remove-Item IIS:\Sites\TestSite

Creating Web Applications

Creating Web Applications is easier than creating sites. Here we go: 

PS IIS:\> New-Item 'IIS:\Sites\Default Web Site\DemoApp' -physicalPath c:\test -type Application

 

Name                     ApplicationPool          EnabledProtocols         PhysicalPath
----                     ---------------          ----------------         ------------
DemoApp                  DefaultAppPool           http                     c:\test

The only parameter you have to specify is the type (-type), because underneath a Web-Site you might want to create either an Application or a Virtual Directory. By specifying the -type parameter you tell the IIS Provider to create an application.

To delete the application you can also use Remove-Item.  

Creating Virtual Directories

To create a Virtual Directory you also use the New-Item cmdlet. Let's create one Virtual Directory underneath the 'Default Web Site' and a second one underneath the Web Application we created in the previous step.

PS IIS:\> New-Item 'IIS:\Sites\Default Web Site\DemoVirtualDir1' -type VirtualDirectory -physicalPath c:\test\virtualDirectory1

Name                                              PhysicalPath
----                                              ------------
DemoVirtualDir1                                   c:\test\virtualDirectory1


PS IIS:\> New-Item 'IIS:\Sites\Default Web Site\DemoApp\DemoVirtualDir2' -type VirtualDirectory -physicalPath c:\test\virtualDirectory2

Name                                              PhysicalPath
----                                              ------------
DemoVirtualDir2                                   c:\test\virtualDirectory2

Creating Application Pools

But it gets even simpler. Creating a new AppPool only requires the name to be specified.

PS IIS:\> new-item AppPools\DemoAppPool

Name                     State
----                     -----
DemoAppPool              {}

Simple, wasn't it? Now let's put this together into an end-to-end scenario.

Putting it all Together

In the following end-to-end scenario we will execute the following steps:

  1. Create a set of new file system directories for the sites, web applications and virtual directories we will create a little later.
  2. Copy some very simple web content into the newly created directories.
  3. Create a new Application Pool.
  4. Create a new site, a new application and two new virtual directories, and assign them to the newly created Application Pool.
  5. Request the web content via the web browser.

Step 1: Create New Directories

 We use the New-Item cmdlet to create four new file system directories. Execute the following commands (use 'md' instead of New-Item if you don't want to specify the -type parameter):

New-Item C:\DemoSite -type Directory

New-Item C:\DemoSite\DemoApp -type Directory

New-Item C:\DemoSite\DemoVirtualDir1 -type Directory

New-Item C:\DemoSite\DemoVirtualDir2 -type Directory

Step 2: Copy Content

Now let's write some simple html content to these directories:

Set-Content C:\DemoSite\Default.htm "DemoSite Default Page"

Set-Content C:\DemoSite\DemoApp\Default.htm "DemoSite\DemoApp Default Page"

Set-Content C:\DemoSite\DemoVirtualDir1\Default.htm "DemoSite\DemoVirtualDir1 Default Page"

Set-Content C:\DemoSite\DemoVirtualDir2\Default.htm "DemoSite\DemoApp\DemoVirtualDir2 Default Page"

Step 3: Create New Application Pool

Create the new Application Pool 'DemoAppPool' for the new site if you deleted the one we created in the previous sample.  

New-Item IIS:\AppPools\DemoAppPool

Step 4: Create New Sites, Web Applications and Virtual Directories and Assign to Application Pool

Here comes the beef. We create DemoSite, DemoApp and two Virtual Directories: DemoVirtualDir1 is directly underneath DemoSite and DemoVirtualDir2 is underneath DemoApp. We assign DemoSite and DemoApp to the DemoAppPool created in the previous step. DemoSite is assigned to port 8080 so that it does not conflict with the 'Default Web Site'.

New-Item IIS:\Sites\DemoSite -physicalPath C:\DemoSite -bindings @{protocol="http";bindingInformation=":8080:"}

Set-ItemProperty IIS:\Sites\DemoSite -name applicationPool -value DemoAppPool

New-Item IIS:\Sites\DemoSite\DemoApp -physicalPath C:\DemoSite\DemoApp -type Application

Set-ItemProperty IIS:\sites\DemoSite\DemoApp -name applicationPool -value DemoAppPool

New-Item IIS:\Sites\DemoSite\DemoVirtualDir1 -physicalPath C:\DemoSite\DemoVirtualDir1 -type VirtualDirectory

New-Item IIS:\Sites\DemoSite\DemoApp\DemoVirtualDir2 -physicalPath C:\DemoSite\DemoVirtualDir2 -type VirtualDirectory

Voila. All that's left is to request the web content.

Step 5: Request the Web Content

You can of course open the browser and enter http://localhost:8080/ and all the other URLs. But it's a PowerShell walkthrough, so we'll use PowerShell by way of the .NET WebClient class:

$webclient = New-Object Net.WebClient

$webclient.DownloadString("http://localhost:8080/");

$webclient.DownloadString("http://localhost:8080/DemoApp");

$webclient.DownloadString("http://localhost:8080/DemoVirtualDir1");

$webclient.DownloadString("http://localhost:8080/DemoApp/DemoVirtualDir2");

If you're feeling adventurous you can also use the Internet Explorer COM object itself:

$ie = new-object -com InternetExplorer.Application

$ie.Visible = $true

$ie.Navigate("http://localhost:8080/");

Summary

In this walkthrough you learned how to create Web-Sites, Web Applications, Virtual Directories and Application Pools with PowerShell. Additional PowerShell features were used to build a functional end-to-end scenario.

More

Thursday, April 24, 2008

If you have multiple controls in your ASP.NET application, especially third-party ones, which in turn have a dependency on another third-party component, you are in for a world of pain if different controls use different versions of the same third-party component. Since your app has only one "bin" folder where all the assemblies reside, putting in one version of the third-party component will break controls that rely on the other version. Luckily, the .Net developers devised some good solutions. Although I would not call this an elegant solution to DLL hell, at least it's possible to have a solution that works.

Here are some sample web application scenarios and solutions involving assembly version conflicts, different assembly versions and location of assemblies. This is a small portion of a much larger .Net design involving the GAC etc., but I am not going to delve into that.

1) Placing assemblies outside the bin folder.

Sometimes you have an app with many assemblies and you want to keep them more organized. .Net allows this by providing a way to specify the folders where it should search. Modify your web.config as below:

<configuration>
  <runtime>
     <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <probing privatePath="bin;somefolder\bin" />
     </assemblyBinding>
  </runtime>
</configuration>

You can specify as many folders as necessary in a semicolon-separated list, which Fusion (the assembly loader) will use to locate an assembly. Note that the privatePath folders must be subdirectories of the application base.

2) Having multiple versions of the same assembly in the same app space.

In this example, two versions of the ThirdParty assembly are placed in numbered sub-folders of bin. Both can co-exist and will work correctly.

<configuration>
        <runtime>
             <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
                 <dependentAssembly>
                     <assemblyIdentity name="ThirdParty" publicKeyToken="16dcc87a28w7w7b1" />
                     <codeBase version="1.4.0.0" href="bin\14\ThirdParty.dll" />
                     <codeBase version="1.2.0.0" href="bin\12\ThirdParty.dll" />
                </dependentAssembly>
             </assemblyBinding>  
       </runtime>
</configuration>

3) Redirecting an assembly from an older to a newer version.

<configuration>
   <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
         <dependentAssembly>
            <assemblyIdentity name="myAssembly" publicKeyToken="32ab4ba45e0a69a1" culture="neutral" />
            <bindingRedirect oldVersion="1.0.0.0"  newVersion="2.0.0.0"/>
         </dependentAssembly>
      </assemblyBinding>
   </runtime>
</configuration>

You can only use #2 or #3 with strong-named assemblies.
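To find the publicKeyToken of a strong-named assembly for entries like these, you can use the Strong Name tool (sn -T ThirdParty.dll) or a few lines of C#. The console program below is a hypothetical helper, shown only for illustration:

using System;
using System.Reflection;

class ShowToken
{
    // Prints the public key token of the assembly file passed as the
    // first command-line argument, e.g. ShowToken.exe ThirdParty.dll
    static void Main(string[] args)
    {
        byte[] token = AssemblyName.GetAssemblyName(args[0]).GetPublicKeyToken();
        Console.WriteLine(BitConverter.ToString(token).Replace("-", "").ToLower());
    }
}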

 

Did you know there's T4 (Text Template Transformation Toolkit) support inside VS2008 now? Add a file with the .tt or .t4 extension to your project and you get a T4 template which VS2008 will automatically run when you save it. If you're into this and want to try it out, go to http://www.t4editor.net/ and download the T4-editor from Clarius. It gives you proper editing support from within Visual Studio. They have a version for VS2003 as well.

This may not be as attractive as it used to be, now that we have designer tools for Linq to Sql, the Entity Framework and so on to help us generate boilerplate code for database access, but I'm sure I can come up with some good use for this. The ASP3-like syntax is very easy to grasp, so basically there is no excuse for not using this if you know you're going to produce lots of code that looks or behaves the same. As long as you have some metadata to work from, be it a database, XML file or text file, I'm sure this can be of help to you. Ah yes, you can write the templates in either C# or VB.
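If you have never seen a template, here is a tiny sketch of what a .tt file can look like; the generated class names are invented for illustration:

<#@ template language="C#" #>
<#@ output extension=".cs" #>
// Generated code -- regenerated every time the template is saved.
<#
    string[] names = new string[] { "Customer", "Order" };
    foreach (string name in names)
    {
#>
public partial class <#= name #>Repository
{
    // boilerplate members would be generated here
}
<#
    }
#>

Saving this file produces a .cs file containing one class per entry in the names array; in a real template the names would come from your metadata source.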

Some T4 resources for you:

Hilton Giesenow blogs about T4 and you can download a couple of VS T4 templates from him!

Oleg Sych has a large number of very, very good blog posts about T4; Oleg is THE T4 blogger out there.

The T4-editor website by Clarius has a couple of videos.

The Text Template documentation for VS2008 on MSDN.

On MSDN there's also a good intro video on the topic that you might want to look at if this is of interest to you.

 

Wednesday, April 23, 2008

Introduction

Developing a nice custom control is just one part of the story. As a control author you should also pay attention to the experience of other developers who will be using your control. In most real world cases developers use Visual Studio as the IDE for developing .NET applications. You can enhance the experience of developers using your control by providing proper designer support. For example, you can control how your control's properties and events are displayed in the property window and toolbox. A set of attributes, often called Design Time Attributes, allows you to accomplish this.

Common Design Time Attributes

The following sections explain the common design time attributes that allow you to change how the control behaves in the property window and toolbox. Most of these attributes reside in the System.ComponentModel namespace. In the following sections we use the GraphicButton control that we created earlier in this series.

Deciding whether a property will be visible in the property window

By default all the public properties of a custom control are displayed in the property window. Sometimes you may want developers to set a property only via code and not via the property window. The [Browsable] attribute allows you to control just that. The usage of this attribute is as follows:

[Browsable(false)]
public string ImageUrl
{
    get { return ViewState["imgurl"] as string; }
    set { ViewState["imgurl"] = value; }
}

The value of false indicates that the ImageUrl property will not be displayed in the property window.
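A few other attributes from System.ComponentModel work the same way. For example, [Category] controls which group the property appears under in the property window, and [Description] supplies the help text shown beneath it. The property below is hypothetical, added only for illustration:

[Browsable(true)]
[Category("Appearance")]
[Description("URL of the image displayed when the button is disabled.")]
public string DisabledImageUrl
{
    get { return ViewState["disabledimgurl"] as string; }
    set { ViewState["disabledimgurl"] = value; }
}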

Read More

Monday, April 14, 2008

I get this question all the time, and if you happen to attend one of my presentations on using each, you should know the answer by the end of the presentation. I also see this asked often on the ASP.NET forums, and I thought it would be worthwhile to address the question here for a reference point.

The quick answer is that an HttpModule works with the web application as sort of an adjunct to the request pipeline, because modules generally add some processing in response to an event in the pipeline. An HttpHandler actually processes the request and renders the output being sent to the browser. A module can invoke a handler, not the other way around.

Custom modules implement the IHttpModule interface, which consists of Init and Dispose methods. In the Init method you would typically add event handlers to catch ASP.NET pipeline events, like AuthorizeRequest, to perform custom logic. While you can access the Response filter to adjust the actual markup being sent to the client, this is very rare. Typically the module will process things outside of producing actual markup.

Custom handlers implement the IHttpHandler interface, which consists of the IsReusable property and the ProcessRequest method. The IsReusable property indicates to the ASP.NET engine whether an instance of the handler can be reused for simultaneous requests. Typically this will return true for static content and false for dynamic content.

The ProcessRequest method operates much like the Page_Load event handler in the Page class (which, by the way, is an HttpHandler). From the ProcessRequest method you initiate your code to actually build the markup being sent to the client.
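To make the shape of the two interfaces concrete, here is a minimal sketch of each; the class names and logic are invented for illustration:

using System;
using System.Web;

// A module hooks pipeline events; it does not render output itself.
public class TimingModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Stamp each request at the start of the pipeline.
        context.BeginRequest += delegate(object sender, EventArgs e)
        {
            ((HttpApplication)sender).Context.Items["start"] = DateTime.Now;
        };
    }

    public void Dispose() { }
}

// A handler renders the actual response for requests mapped to it.
public class HelloHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; } // stateless, so instances can be reused
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello from a handler.");
    }
}

Both still have to be registered in web.config (under <httpModules> and <httpHandlers> in system.web for the classic IIS6-style pipeline) before ASP.NET will use them.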

I think these two distinctive aspects of ASP.NET get confused by many because they both serve near the metal. Both are responsible for separate tasks, but typically work in harmony to make ASP.NET the great framework it is.

 

Custom errors and early error-logging effort

In the beginning, webs were just "webs", webs of interconnected links to static HTML pages. Now webs resemble communities bustling with activity, with members in different roles, media in all forms and shapes, etc. As a web site grows more complex, the opportunities for errors and exceptions multiply. As it attracts more users, it becomes a bigger target for malicious attacks. It has now become a must for a website to have a systematic approach to logging web events, monitoring the ebbs and flows of user activity, locating traffic bottlenecks, and handling exceptions and errors gracefully.

For ASP pages, programmers generally do not have many choices other than to sprinkle Response.Write statements around to check the inner workings of a segment of code.

With ASP .NET 1.x, programmers are equipped with a set of tools for tracing and error handling, and it became standard to configure custom error pages for whenever an error occurs. Errors are part of programming life; however, it is not acceptable to have an application crash and throw in users' faces an ugly yellow page filled with obtuse technical jargon. It could also be dangerous if critical technical details are exposed to "evil" eyes.

To use custom error pages, we use the customErrors section in the web.config (for .asp pages, the way to do it is to go to the IIS administration console and specify the path to a different custom error page for each error code; however, if you do not have access to IIS, that is not an option).

<system.web>
...
    <customErrors mode="RemoteOnly" defaultRedirect="~/Error.aspx">
        <error statusCode="404" redirect="~/Error.aspx?code=404" />
        <error statusCode="408" redirect="~/Error.aspx?code=408" />
        <error statusCode="505" redirect="~/Error.aspx?code=505" />
    </customErrors>
...
</system.web>

There are three modes for the customErrors section: Off, On and RemoteOnly. Off disables custom errors and shows the yellow page with raw error details, which is recommended only for programmers during development; On shuts off raw error messages completely and displays custom errors only; RemoteOnly displays the designated custom error pages to remote clients but the original error pages to whoever (the debugging programmer, hopefully) is browsing on the machine hosting the website.

While the customErrors section takes care of setting up a friendlier client-side front, the more important step is to log errors for later review and debugging. In ASP .NET 1.x, for logging and trapping errors, we could write some custom code in the page's Error event handler, or in a custom page base class, as in the following:

public class BasePage : System.Web.UI.Page
{
    protected override void OnError(EventArgs e)
    {
        HttpContext currentContext = HttpContext.Current;
        Exception exception = currentContext.Server.GetLastError();
        string errorInfo =
            "<br>URL: " + currentContext.Request.Url.ToString() +
            "<br>Message: " + exception.Message;
        currentContext.Response.Write(errorInfo);
    }
}

We can also trap and log error details on application level by coding the Application_Error event handler, in the Global.asax.cs code behind file:

protected void Application_Error(Object sender, EventArgs e)
{
    // log the last exception by calling Server.GetLastError()
}

Configuring the health monitoring system

All of the above (the customErrors section in the web.config file, and code that logs error details in either the Application_Error or Page_Error event handler) works in ASP.NET 1.x and above. However, with ASP .NET 2.0 there is something much better, more comprehensive and systematic: the health monitoring system.
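As a taste of what is coming, a minimal configuration that logs all errors to the Windows event log can look like the following sketch; it assumes the default EventLogProvider and "All Errors" event mapping that ship in the machine-level configuration:

<system.web>
  <healthMonitoring enabled="true">
    <rules>
      <add name="Log All Errors" eventName="All Errors"
           provider="EventLogProvider" profile="Default" />
    </rules>
  </healthMonitoring>
</system.web>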

Read More

In this article, Steve walks through the steps required to implement a Session Logged Out page that users are automatically sent to in their browser when their ASP.NET session expires. He examines each step with the help of detailed explanations supported by relevant source code.

 

Introduction

In many applications, an authenticated user's session expires after a set amount of time, after which the user must log back into the system to continue using the application. Often, the user may begin entering data into a large form, switch to some other more pressing task, then return to complete the form only to find that his session has expired and he has wasted his time. One way to alleviate this user interface annoyance is to automatically redirect the user to a "session expired" page once his session has expired. The user may still lose some work he was in the middle of on the page, but that would have been lost anyway had he tried to submit it while no longer authenticated. At least with this solution, the user immediately knows his session has ended, and he can re-initiate it and continue his work without any loss of time.

Technique

The simplest way to implement a cross-browser session expiration page is to add a META tag to the HTML headers of any pages that require authentication and/or a valid session. The syntax for the META tag, when used for this purpose, is pretty simple. A typical tag would look like this:

<meta http-equiv='refresh' content='60;url=/SessionExpired.aspx' />

The first attribute, http-equiv, must be set to refresh. The META tag supports a number of other options, such as providing information about page authors, keywords, or descriptions, which are beyond the scope of this article (learn more about them here). The second attribute, content, includes two parts which must be separated by a semicolon. The first piece indicates the number of seconds the page should delay before refreshing its content. A page can be made to automatically refresh itself by simply adding this:

<meta http-equiv='refresh' content='60' />

However, to complete the requirement for the session expiration page, we need to send the user's browser to a new page, in this case /SessionExpired.aspx which is set with the url= string within the content attribute. It should be pretty obvious that this behavior is really stretching the intended purpose of the <meta> tag, which is why there are so many fields being overloaded into the content attribute. It would have made more sense to have a <refresh delay='60' refreshUrl='http://whatever.com' /> tag, but it is no simple task to add a new tag to the HTML specification and then to get it supported in 1.2 million different versions of user agents. So, plan on the existing overloaded use of the <meta> tag for the foreseeable future.

With just this bit of code, you can start hand-coding session expirations into your ASP.NET pages to your heart's content, but it is hardly a scalable solution. It also does not take advantage of ASP.NET's programmability model at all, and so I do not recommend it. The problem that remains is how to include this meta tag in the appropriate pages (the ones that require a session) without adding it to public pages, and how to set up the delay and destination URL so that they do not need to be hand-edited on every ASPX page. But before we show how to do that, let us design our session expired page.

The code for this page is pretty trivial, but is included in Listings 1 and 2.

Listing 1 - Session Expired Page

<%@ Page Language="C#" AutoEventWireup="true" 
CodeBehind="SessionExpired.aspx.cs" 
Inherits="SessionExpirePage.SessionExpired"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" 
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Session Expired</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
    <h1>Session Expired</h1>
    <p>
    Your session has expired.  
    Please <a href="Default.aspx">return to the home page</a> 
    and log in again to continue accessing your account.</p>
    </div>
    </form>
</body>
</html>

Listing 2 - Session Expired Page CodeBehind

using System;
namespace SessionExpirePage
{
    public partial class SessionExpired : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Session.Abandon();
        }
    }
}

Of course the code in Listing 1 is extremely simple and you will want to update it to use your site's design, ideally with a Master Page. Note in Listing 2 the call to Session.Abandon(). This is important and ensures that if the client countdown and the server countdown are not quite in sync, the session is terminated when this page is loaded.

There are several ways we could go about including the session expiration META tag on a large number of secured pages. We could write it by hand - not a good idea. We could use an include file (yes, those still exist in ASP.NET) - even worse idea. We could write a custom control and include it by hand. Slightly better, but still requires touching a lot of ASPX files. We could create a base page class or extend one that is already in use. This is actually a promising technique that would work, but is not the one that I will demonstrate. You could easily implement it using a variation of my sample, though. Or you could use an ASP.NET master page. This is the simplest, most elegant solution, in my opinion, and is the one I will demonstrate.

In most applications I have worked with, it is typical to have a separate master page for the secure, admin portion of the site from the public facing area of the site. This technique works best in such situations. Essentially, the application's secure area will share a single master page file, which for this example will be called Secure.Master. Secure.Master will include some UI, but will also include a ContentPlaceHolder in the HTML <head> section that will be used to render the Session Expiration META tag. Then, in the Master page's codebehind, the actual META tag will be constructed from the Session.Timeout set in the site's web.config and the URL that should be used when the session expires (in this case set as a property of the master page, but ideally this would come from a custom configuration section in web.config). The complete code for the master page is shown in Listings 3 and 4.

Listing 3 - Secure.Master

<%@ Master Language="C#" AutoEventWireup="true" 
CodeBehind="Secure.master.cs" 
Inherits="SessionExpirePage.Secure" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" 
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server" id="PageHead">
    <title>Secure Page</title>
    <asp:ContentPlaceHolder ID="head" runat="server">
    </asp:ContentPlaceHolder>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <h1>
            Your Account [SECURE]</h1>
        <asp:ContentPlaceHolder ID="Main" runat="server">
        </asp:ContentPlaceHolder>
        <p>
            Note: Your session will expire in
            <%=SessionLengthMinutes %>
            minute(s), <%=Session["name"] %> .
        </p>
    </div>
    </form>
</body>
</html>

Listing 4 - Secure.Master CodeBehind

using System;
using System.Web.UI;
 
namespace SessionExpirePage
{
    public partial class Secure : System.Web.UI.MasterPage
    {
        public int SessionLengthMinutes
        {
            get { return Session.Timeout; }
        }
        public string SessionExpireDestinationUrl
        {
            get { return "/SessionExpired.aspx"; }
        }
        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);
            this.PageHead.Controls.Add(new LiteralControl(
                String.Format("<meta http-equiv='refresh' content='{0};url={1}'>",
                SessionLengthMinutes*60, SessionExpireDestinationUrl)));
        }
    }
}

The important work is all done within the OnPreRender event handler, which adds the <meta> tag to the page using String.Format. One important thing to note about this approach is that it follows DRY (Don't Repeat Yourself) by keeping the actual session timeout period defined in only one place. If you were to hardcode your session timeouts in your META tags, and later the application's session timeout changed, you would need to update the META tags everywhere they were specified (and if you did not, you would not get a compiler error, just a site that did not work as expected). Setting the session timeout is easily done within web.config and completes this example. The relevant code is shown in Listing 5.

Listing 5 - Set Session Timeout in web.config

<system.web>
  <sessionState timeout="1" mode="InProc" />
</system.web>

Considerations

One thing to keep in mind with this approach is that it will start counting from the moment the page is first sent to the browser. If the user interacts with that page without loading a new page, such as adding data or even working with the server through AJAX callbacks or UpdatePanels, the session expiration page redirect will still occur when the session would have timed out after the initial page load. In practice, this is not an issue for most pages since if a user is going to work with the page they will do so soon after it first loads, and if they do not use it for some time, they will return to find the Session Expired page and will re-authenticate.  However, if your site makes heavy use of AJAX (or Silverlight or any other client-centric technology), you may need to consider another (more complex) approach, such as using an AJAX Timer for this purpose.

Summary

Providing users with a clear indication that their session has expired within the browser, rather than waiting for them to hit the server, can greatly improve the user experience. This technique demonstrates a fairly simple and robust way to achieve this with a minimum of code and, more importantly, with minimal repetition of code. Adding this to an existing site that limits session length should take less than an hour, and your users will thank you.

 

Friday, April 11, 2008

After almost one year of work and organization, I am very happy to share this project with you:

http://code.msdn.microsoft.com/vlinq

The Visual Linq query builder is a Visual Studio 2008 addin. It's a designer that helps you create Linq to Sql queries in your application. Both C# and VB projects are supported.

As you will read in this post, this project, developed by interns, is a prototype for a new kind of designer.

Please give us your feedback!

Project history

It is an academic project developed during a Microsoft France internship in collaboration with Microsoft Corporation.
I was the 'local' manager and technical lead of the project. I had wanted to create a VS designer using WPF for a long time, and I had the idea of a query builder for Linq to Sql. Then came the opportunity to organize a 6-month internship in collaboration with Ms Corp.

I recruited two French students whom I want to thank again today for their excellent work.

- Simon Ferquel from SupInfo, who is now working at a French company (Winwise). You may know him from his student days as the author of a funny tool for Vista: myExposé.

- Johanna Piou from ISEN Toulon, who is still a student this year and who is well known for her brilliant Imagine Cup participation in the Interface Design category.

You can find the French description of the project here: http://msdn.microsoft.com/fr-fr/vcsharp/default.aspx (coming soon).

The project goal

Linq to Sql, and Linq more generally speaking, is a new technology mainly based on language evolutions. As with any new syntax, you have to take some time to get familiar with it.

The VLinq project, like any designer, helps you build Linq to Sql queries graphically, but we wanted to keep it visually very close to the code. The goal is not to hide the generated code but to make it visible in the designer. It's a kind of mix between a classical designer and graphical IntelliSense.

VLinq also helps you group all queries in the same place, allowing easy management (edit, add, remove), previewing and testing.

Last goal: releasing the whole solution, including source code, to share with you our experience of using WPF with VS2008 extensibility.

What do we release?

The whole project has been developed using Visual Studio 2008 (betas, then RTM) and Expression Blend. We provide the whole solution (binaries + source code). The solution contains a Setup project for quick installation (an msi file).

You can get all the stuff here: http://code.msdn.microsoft.com/vlinq/ under the 'Releases' tab. (msi, quick reference guide, user documentation, webcast).

 

Read More

Tuesday, April 8, 2008

Overview:

Like mathematicians, developers talk about their code in aesthetic and sometimes hygienic terms. The code is “elegant,” or it “looks clean,” and sometimes it “smells bad”. I have even heard developers refer to their code as “sexy.” Talking about your code as sexy is surely a sign that you need to get out more! Achieving elegant code is not easy and so as I deepen my experience with .Net 2.0, I am always pleased to discover when the framework offers a way to do something that I could have done in 1.1, but can now do much more elegantly. Predicates and the related Actions and Converters are just such additions to the framework. They will not revolutionize how you code, but used properly they will reduce the amount of code needed, encourage reuse, and just look sexier.

This article will examine the following questions:

  • What are Predicates?
  • How are they used?
  • How does their performance stack up against similar foreach routines?
  • What are Actions and Converters?

So What are Predicates?

A Predicate is a new delegate type introduced by the .Net 2.0 framework. It has the following signature: public delegate bool Predicate<T>(T obj), and it is used by collections such as List<T> and Array in methods like RemoveAll, Find, FindAll, Exists, etc. Its signature reveals that a Predicate is a Delegate. What that means is that so long as the method signatures are the same (i.e. the methods have the same return type and accept the same arguments), then any method that conforms to that signature can be called in its place. C# Delegates have been compared to C or C++ function pointers, callback functions, etc. They enable you to specify what the method that you want to call looks like without having to specify, at compile time, which actual method will be called.

Also revealed by its signature is that a Predicate is generic. There are a lot of good introductions to Generics, so I will not traverse that ground here. Suffice it to say that in this context generic refers to the fact that the same Predicate can be used for collections of different Types.

One slight twist with Predicates worth mentioning is that since the Predicate class is baked into the .Net framework you do not need to specify the type argument of the generic method or to create the delegate explicitly. Both of these are determined by the compiler from the method arguments you supply. You will see from the examples what this means in practice.

The chief benefit of using Predicates, besides the “coolness factor,” is that they are more expressive, require less code, promote reuse, and surprisingly, are faster than other alternatives. Using a Predicate turns out to have better performance than a similar foreach operation. This is demonstrated below.

How do I Use Them?

The best way to illustrate the use of Predicates is to compare them to foreach routines. For example, say you have a List<string> collection like the one below. You want to find every member of that collection that starts with “a”. One reasonable approach is to use foreach and return a new List<string> which contains only those members of the original list that start with “a”.

The List<string> collection below will be used to illustrate the key concepts in this article.

Sample List Collection

List<string> items = new List<string>();
items.Add("Abel");
items.Add("Adam");
items.Add("Anna");
items.Add("Eve");
items.Add("James");
items.Add("Mark");
items.Add("Saul");

To filter all members of this collection that begin with "a" using foreach, the method (or routine) would look something like this:

Finding All items starting with "a"

public List<string> FindWithA(List<string> collection)
{
   List<string> foundItems = new List<string>() ;
   foreach(string s in collection)
   {
      if ( s.StartsWith("a", 
         StringComparison.InvariantCultureIgnoreCase) )
      {
         foundItems.Add(s) ;
      }
   }
   return foundItems ;
}

In this case the List<string> collection returned would contain the following items: Abel, Adam, and Anna.

Now what if, instead of retrieving every member of the collection that starts with “a”, you just wanted to confirm that at least one member of the collection starts with an "a"? In this case you’d likely create a new method like the one below. You might even be clever and factor out the common code that evaluates whether a particular member of the collection starts with “a” and use that for both the find-all and exists methods.

Checking if an item starting With "a" exists

public bool CheckAExists(List<string> collection)
{
    foreach (string s in collection)
    {
       if (StartsWith(s)) //calls the refactored method
          return true;
    }
    return false;
}
 
// the test to see if the string starts with "a" is now factored 
// out to its own method so that both the existence check and the
// find all method could use it.
public bool StartsWith(string s)
{
   return s.StartsWith("a", 
      StringComparison.InvariantCultureIgnoreCase) ;
}

There is nothing particularly stinky about this approach and prior to Predicates, this was the best way to achieve the desired result. But let’s see how we could achieve the same ends using Predicates.

Using a Predicate to find all and check existence

public void SomeMethod()
{
   // Uses StartsWithA method to check for existence.
   if ( items.Exists(StartsWithA)) 
   {
      // Also uses the StartsWithA method, but now 
      // to find all values
      List<string> foundItems = items.FindAll(StartsWithA); 
   }
}
 
// Method conforms to the Predicate signature. 
// It returns a bool and takes a string. 
// (Note: Even though a Predicate is a generic,
// you do not need to supply the type.  The compiler 
// handles it for you.)
public bool StartsWithA(string s)
{
   return s.StartsWith("a", 
      StringComparison.InvariantCultureIgnoreCase);
}

Using Predicates just smells better --at least to my refined olfactory sensibility. The Predicate code could have also been written using an Anonymous Method. This is a good approach if the logic you are applying to the collection is not going to be used again in the application. If that logic might be reused, then my bias is to put it in a method so you don’t run the risk of code duplication. I have also found that Anonymous Methods decrease clarity as many beginning programmers do not understand the syntax. So if you do use them, make sure everyone on the team understands their use.

Using a Predicate as an Anonymous Method to find all items starting with "a"

public void SomeMethod()
{
   // FindAll now uses an Anonymous Method. 
   List<string> foundItems = items.FindAll(delegate(string s) 
      { 
         return s.StartsWith("a", 
            StringComparison.InvariantCultureIgnoreCase); 
      } ) ;
}

Sweet fancy Moses that is some elegant code! Sure the results are not different than what was achieved using a foreach loop, but using Predicates smells like it was just bathed in rosewater, swaddled, and then pampered with frankincense and myrrh. (If you are like me and wondered just what the f*#! are frankincense and myrrh, here's a link.)

Scent aside, there is a limitation to the standard implementation of Predicates. It is often the case that you will want to pass in a parameter to the method which the Predicate points to. It would be a pain to have to define a method for StartsWithA and another method for StartsWithB, etc. Because Predicates are delegates, however, you cannot change the signature to pass in additional arguments. Fortunately, it is easy enough to wrap the predicate in another class so you have access to additional parameters in your predicate method. There is a good article by Alex Perepletov entitled “Passing parameters to predicates” demonstrating this technique.
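The idea, in a minimal sketch (the class and method names are mine, not from Alex's article):

// Wraps the prefix so that a parameterized test still fits the
// Predicate<string> signature (a bool method taking a string).
public class StartsWithPredicate
{
    private string _prefix;

    public StartsWithPredicate(string prefix)
    {
        _prefix = prefix;
    }

    public bool Matches(string s)
    {
        return s.StartsWith(_prefix,
            StringComparison.InvariantCultureIgnoreCase);
    }
}

With that in place, items.FindAll(new StartsWithPredicate("a").Matches) behaves exactly like the StartsWithA example above, but the prefix is now a parameter.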

Of course, I've only demonstrated a few of the methods in the .Net framework that utilize predicates. I encourage you to review the API for Array and List to view the other methods of those classes that use Predicates. (I am not sure if there are any other classes that have methods which use Predicates, so if there are any outside of Array and List, please let me know so that I can post them too.)

Performance

I would argue that even if the Predicate class is a little slower than similar looping structures like foreach or for, it is still preferable, as Predicates have other more important virtues. I know there are performance Nazis out there who agonize over nanosecond differences, but if a nanosecond difference is that important to your application then it is likely you should not be using C# at all. Still, having an understanding of the performance impact of your programming decisions is a good thing, and I was curious about how Predicates stacked up, so I put together a simple head-to-head comparison of Predicate vs. foreach. The test searched a List of 100,000 string values to see which ones started with "1". For each test iteration, the search was performed 100 times. The results are below.

C# Predicate Performance

As you can see, Predicates are the winner. We are, however, talking milliseconds and, in most situations, nanoseconds so I would not take one approach or the other on performance considerations alone.

Also, a few days after I completed these tests, I found another article that conducts a more thorough comparison between Predicates and foreach. You can find that comparison at Jon Skeet's Coding Blog.

As Long as We're Here: Actions and Converters

In addition to the Predicate class, .Net 2.0 also introduces the Action and Converter classes. Like Predicates, these are generic delegates. The Action class provides a simple way to walk all items of a collection and call a method on each member. Both Action and Converter work in the same way as the Predicate class, including the ability to use either a defined method or an Anonymous Method. For example, if you wanted to display all the members of a List collection, you could use the following:

Using an Action to display all members of a collection

public void SomeMethod()
{
   // I am using an Anonymous Method. As with Predicates,
   // I could have also defined a method and used it.
   items.ForEach(delegate(string s)
   {
      Console.WriteLine("value: {0}", s);
   });
}

The Converter class is used to change all members of a collection from one type to another type. The signature of the Converter delegate is a little different from the Predicate or Action classes: public delegate TOutput Converter<TInput, TOutput>(TInput input), which List<T> consumes through public List<TOutput> ConvertAll<TOutput>(Converter<T, TOutput> converter). It is, however, used in basically the same way. For example, if I wanted to turn a List collection of strings into a List collection of Names (assuming for the moment that I had created a Name class), I could do the following.

Using a Converter to convert a List of strings.

public void SomeMethod()
{
   // The Converter must specify the type to be converted, string, 
   // the type it is being converted to, Name, and the method
   // doing the conversion, StringToNameConverter.
   List<Name> names = 
      items.ConvertAll(new Converter<string, Name>( StringToNameConverter));
}
 
// The method used by the ConvertAll method to do the actual
// conversion of strings to Names.
private Name StringToNameConverter(string s)
{
   return new Name(s);
}
 
// Our Name class might be defined as follows:
public class Name
{
   private string m_value;
   public string Value
   {
      get { return m_value; }
      set { m_value = value; }
   }
 
   public Name(string name)
   {
      Value = name;
   }
}

I did not benchmark the Action or Converter class, but my hunch is that they too would offer slight performance benefits over similar routines using foreach. Like the Predicate class, the chief benefit they offer is that they make it easier to write the same elegant, sweet smelling code and will likely encourage reuse as well. Finally, if you need to pass parameters to either class, you can wrap them in another class as shown in the Predicate example cited earlier.

Conclusion:

So we have arrived at the end of the article. Likely at this point you are so seduced by the sexiness of the code that the Predicate, Action, and Converter classes make possible that you are contemplating leaving your significant other and moving to Massachusetts to marry your application --which is, I believe, legal here in Boston. I wish you both well in that endeavor!

As always, do not hesitate to email me if you have any questions (my email is in the "About the Author" tab above.) If you extend what I've done or have additional information I missed, I would also greatly appreciate your letting me know. (Leaving a comment, positive or incredibly positive, is also always appreciated.)