
Wednesday, December 24, 2008

ASP.NET User Controls are pretty useful.  They allow functional modules of code and markup to be encapsulated in such a way that reuse is convenient and easy, without sacrificing the power or integration of the ASP.NET model.  As we move into an era of AJAX-driven websites, this modularity is still very important. Can the user controls that we all know and (mostly) love still help with this encapsulation, despite being engineered before AJAX techniques emerged?  I think they can.  But at this point in the ASP.NET timeline, user controls are in need of some help.

The Fundamental Problem

With AJAX, more and more content is being dynamically loaded by the client on demand, rather than being included in the original HTTP response.  This fundamental change conflicts with the user control's current usage model of being attached to the control hierarchy during the page lifecycle on the server--either through markup, or using the Page.LoadControl method in code.  For user controls to be useful in the world of AJAX and demand loading, we need a way to load them outside of the normal page lifecycle, and use JavaScript to get the rendered HTML and inject it into our page.  Luckily, this isn't too difficult to accomplish.

The following example illustrates a basic scenario in which we have a page that uses jQuery to load a user control when a button is clicked.  The calling page is pretty simple:
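The code for the page didn't survive the trip into this post, so here is a minimal sketch of what the calling script might look like. The service URL, the element ids, and the `.d` response wrapper are my assumptions for illustration, not the original sample:

```javascript
// Build the options for the AJAX call to the rendering service.
// The endpoint name and payload shape are assumptions for illustration.
function buildRenderRequest(path) {
  return {
    type: "POST",
    url: "UserControlService.svc/RenderControl", // hypothetical WCF AJAX endpoint
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    data: JSON.stringify({ path: path })
  };
}

// The wire-up itself only runs in the browser, where jQuery and a DOM exist.
if (typeof jQuery !== "undefined") {
  jQuery(function ($) {
    $("#btnLoad").click(function () {
      var request = buildRenderRequest("~/Controls/MyControl.ascx");
      request.success = function (result) {
        // WCF AJAX services wrap the return value in a "d" property.
        $("#content").html(result.d);
      };
      $.ajax(request);
    });
  });
}
```

The only moving parts are the button's click handler and the success callback that injects the rendered HTML into the content div.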

As you can see, all I've done in jQuery's ready event handler is wire up the click event of the button to make an ajax call to a web service.  The data result that is returned from the ajax call is then added into the content div on the page.  Let's take a look at the web service that we are calling in that code:

This is a pretty standard WCF Ajax service, which uses a utility class called UserControlUtility by calling its RenderAsString method, which looks like this:

In the helper method above, I'm simply accepting a parameter called path, which allows us to use the LoadControl method in the usual way.  If you are worried about the potential baggage of instantiating a Page object for every User Control that is rendered, don't lose too much sleep over it.  A page object that is instantiated like this is pretty lightweight, and doesn't go through the heavy ASP.NET Page lifecycle that occurs on a normal page load.

This is pretty nifty for simple scenarios, but big challenges arise when the application gets more complicated.  What happens when the user control has JavaScript of its own?  Well, ordinarily you would have a few options.  One option that I defaulted to when starting out with jQuery was to write all the JavaScript in the calling page, and just apply it to the user control's HTML once it has been loaded.  This is not the best solution, because you lose the encapsulation that we were trying to maintain with user controls in the first place.  The second solution is to include the JavaScript within the user control, inside another jQuery ready handler.  This works out much better, because the client functionality gets bundled with the markup for clean encapsulation.  Additionally, the included JavaScript will be executed when the control is rendered on the parent page, thanks to jQuery.  But has this solved all of our problems?  Not quite.

Mo Javascript, Mo Problems

To illustrate how problems can arise with that last solution, let me give an example. Say you are developing a real-time stock-screening application.  In this application, you have a user control called StockItemRow.ascx that has quite a bit of JavaScript associated with it, and also a few nested user controls of its own (with their own JavaScript, of course). You also have a page called Screener.aspx that periodically polls a web service for matching stocks, and adds those stocks to the grid via a rendered instance of StockItemRow.ascx.  What would happen if you dynamically added 50 or 60 rows over a few minutes? You may see what I am getting at here.

The problem is that the JavaScript is loaded over and over on each successful new request for data, simply because it is bundled inside the rendered user control.  As you load more and more data onto the page, this becomes a bigger and bigger waste.  Plus, unless you write your JavaScript very carefully, each newly loaded user control could end up applying its JavaScript to other user controls that have already been loaded.  Yuck!  Solving these problems is going to take a little more work.

The first issue we need to solve is the repetitious loading of unnecessary JavaScript.  To do this, we need to separate it out from the user control into its own .js file.  Some may argue that we are losing encapsulation here, but I disagree.  I think that if an aspx page can have both a file for markup and a codebehind file, then a user control can have both a markup file and a .js file (and its codebehind file, for that matter). Once we have separated it out, we are free to load the JavaScript file once, while still rendering the user control multiple times.

But just separating the javascript out doesn't solve our problems. We need to somehow "register" a single instance of javascript on the page, and have any dynamically loaded user controls use just that instance.  Additionally, we need to make sure that the javascript is capable of being applied to individual user controls, without affecting other user controls that have already been wired up and loaded on the page. 
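As a concrete illustration of that registration idea, here is a tiny sketch in plain JavaScript. This is not the plugin's actual source; every name here is made up:

```javascript
// Sketch of "load the script once, render the control many times".
// key: the control's path; module: the control's script instance.
var scriptRegistry = {};

function registerScript(key, module) {
  // First registration wins; later loads of the same control type
  // reuse the singleton instead of re-registering their own copy.
  if (!scriptRegistry[key]) {
    scriptRegistry[key] = module;
  }
  return scriptRegistry[key];
}

function applyScript(key, context) {
  // Invoke the single registered instance against ONE control's context,
  // so already-wired controls elsewhere on the page are left untouched.
  var module = scriptRegistry[key];
  if (module) {
    module.init(context);
  }
}
```

The important property is that `registerScript` is idempotent per key, while `applyScript` can run once per rendered instance.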

Enter jQuery.DynamicLoader

jQuery.DynamicLoader is a simple jQuery plugin I wrote that allows a parent page to dynamically load user controls and their corresponding script files on demand. Here is the way it works:

  • Reference jQuery.DynamicLoader on your parent page.
  • Create an AJAX service that renders user controls, similar to the example I showed earlier.
  • Any time you want to load a user control on that page, call $.dynamicLoader.loadUC() with the appropriate options.  This will fetch the rendered user control and its corresponding JavaScript file. If the JavaScript is being loaded for the first time, DynamicLoader will register that instance as the singleton for all subsequent user controls of that same type.
  • The JavaScript instance is then invoked with the rendered user control as its UI context.

Let's jump into the sample project I've created as an example:

DynamicLoader.zip (52.77 kb)

The project contains a single page, Default.aspx, and two user controls, TableWidget.ascx and CellWidget.ascx.   The purpose of the project is to demonstrate a page that initially has no content, and how we can dynamically load several tiers of user controls, each with their own scripts.  We start from a single button on Default.aspx that dynamically loads a new TableWidget every time it is clicked.  Inside each TableWidget is a button that gets wired up to load its own user controls, this time CellWidgets.  Each CellWidget has its own JavaScript that needs to execute as well.

 Here is how the first button is wired up with jQuery: 

As you can see, it calls DynamicLoader's loadUC function, which takes a few options: ucName is the path to the user control to be loaded, queryString allows you to pass parameters to your user control to help render it on the server, and eventBindings allows you to handle events that are fired within the user control.
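Sketched as an options object, a call might look like the following. The control path, query string, and handler bodies are invented for illustration; only the option names (ucName, queryString, eventBindings) come from the description above:

```javascript
// Hypothetical loadUC options; concrete values are assumptions.
var loadOptions = {
  ucName: "Controls/TableWidget.ascx",   // path to the user control
  queryString: "widgetTitle=Stocks",     // extra data for server-side rendering
  eventBindings: {
    // The parent attaches the rendered control when the control says it's ready.
    ready: function (renderedHtml) { /* e.g. $('#container').append(renderedHtml); */ },
    finished: function () { /* e.g. hide a loading spinner */ }
  }
};

// The actual call only makes sense in the browser with the plugin present.
if (typeof jQuery !== "undefined" && jQuery.dynamicLoader) {
  jQuery.dynamicLoader.loadUC(loadOptions);
}
```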

As I mentioned earlier, the JavaScript in your user control needs to be registered before it can be used.  Don't get scared off now; it's only two extra lines of code:

We have a standard jQuery ready handler, and inside that we call DynamicLoader's registerUC function.  This will only be loaded once, even if multiple TableWidgets are loaded afterwards.  Also notice the event triggers.  You can create as many different types of events as your heart desires, as long as the parent knows the name of the event (and references it in the eventBindings option).  I've included ready, busy, unbusy, and finished in the default options.  The ready event is one that I consider critical, because it is the event that the parent uses to attach the user control to the page.
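The event side of this can be pictured as a tiny trigger/binding dispatcher. Again, this is only an illustration of the idea, not DynamicLoader's actual code:

```javascript
// The parent supplies handlers keyed by event name; the user control's
// script fires them by name. Event names must simply be agreed upon,
// like the ready/busy/unbusy/finished defaults mentioned above.
function createEventContext(bindings) {
  return {
    trigger: function (name, payload) {
      // Events with no binding from the parent are silently ignored.
      if (bindings && typeof bindings[name] === "function") {
        bindings[name](payload);
      }
    }
  };
}
```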

Live Demo | Download Sample Solution (52.77 kb)

You can see that there are buttons on the CellWidget that do some trivial javascript actions, and also a button that demonstrates an event being monitored by the parent user control.

Room for Improvement

DynamicLoader is more of a proof of concept than a full-fledged plugin, and there are several areas in which it needs to be improved:

  • The event chaining needs some work.  I haven't really tested it with events that bubble more than two layers up.
  • Right now it doesn't look like jQuery's $.getScript is caching the scripts.  I'd like to rewrite a version of getScript that does.
  • The registration system is very rigid at this point.  It expects you to pass in a user control's path, and the script needs to register itself with that exact path as its key (without the extension). 

So there you have it. This technique allows you to treat your User Controls as neatly encapsulated modules that are loaded and configured on demand.  Plus, there is no limit to nesting your user controls, and they will load efficiently and within their own context.   Finally, you don't have to break communication with your user controls.  The event binding allows a separation of concerns, while still being able to act on important things that happen within the user control.

I hope you find this technique useful, and please let me know if you have suggestions or improvements!


Tuesday, December 23, 2008

Note that Visual Studio adds a @Register directive at the top of the page when you drag and drop a user control onto it.

But if in the future you want to move your user controls, you will be forced to change the Register directive in as many pages as you have used the user control in, which could be a pain. This is easily solved by moving the registration of the user control to your Web.config file with the following entry, so that any change in location needs to be updated in only one place.

The Web.config entry would look like:

<configuration>
  <system.web>
    <pages>
      <controls>
        <add tagPrefix="MyControl" src="~/Controls/WebUserControl.ascx"
             tagName="MyButton" />
      </controls>
    </pages>
  </system.web>
</configuration>

Once the Web.config entry is made, the control can be used in any page like this:

<body>
    <form id="form1" runat="server">
        <MyControl:MyButton ID="MyUserControl" runat="server" />
    </form>
</body>

 

It’s been a few months since I started getting acquainted with the System.DateTimeOffset type and I can honestly say I don’t see any reason to use System.DateTime anymore.

I’ve even gone as far as asking whether anyone knew when they would rather use DateTime over DateTimeOffset. The responses I got were along the lines of ‘backwards compatibility’ or ‘when you need an abstract time’. My recommendation is that if you haven’t yet looked at the type, go do it now, and after that, start using it.

So what is this DateTimeOffset? When representing a date/time, especially in an internationally facing system, you have to include a time-zone. DateTime did a very poor job of handling time-zones, such as being insensitive to time-zone changes. DateTimeOffset is essentially the same as DateTime, only it takes heed of time-zones. For instance, comparing two DateTimeOffsets with the same UTC time in different time-zones will result in equality.
Moreover, DateTime has only three modes: Local, UTC and Unspecified, whereas DateTimeOffset has an Offset property, allowing you to create dates in any time-zone you like.

Things to note:

  1. DateTime can be implicitly converted to DateTimeOffset, but not vice-versa. To do that, you would have to use DateTimeOffset’s DateTime property. When converting this way, the DateTime’s Kind will always be Unspecified.

     DateTimeOffset dateTimeOffset = DateTimeOffset.UtcNow;
     DateTime dateTime = dateTimeOffset.DateTime;

  2. When parsing a DateTimeOffset, note that you can specify AssumeUniversal and AssumeLocal using the DateTimeStyles enum. These come in handy when the string you’re parsing has no time-zone data.

     if (!DateTimeOffset.TryParse(myDateString, CultureInfo.InvariantCulture.DateTimeFormat, DateTimeStyles.AssumeUniversal, out dateTimeOffset))
         dateTimeOffset = default(DateTimeOffset);

  3. It is a best practice to store all of your dates as UTC in the database, regardless of the physical location of your users / servers. When doing this, be sure to manually change your DateTimeOffset objects to UTC using ToUniversalTime and only then use the DateTime property as I have previously noted.

     SaveTimeToDatabase(dateTimeOffset.ToUniversalTime().DateTime);

     Note that you do not need to convert DateTimeOffset objects to any time-zone to do calculations / comparisons. The only time you need to convert them to a time-zone is for displaying them to the user.

  4. If you want to store a user’s time-zone (on a database that doesn’t support date/times with time-zones), it would be best to have a translation table and use the TimeZoneInfo class’s Ids (for instance: TimeZoneInfo.Local.Id). Then you could use TimeZoneInfo.FindSystemTimeZoneById to translate that value to a TimeZoneInfo and use that object’s BaseUtcOffset property to get the difference from UTC. This may seem a bit cumbersome, but considering the fact that time-zones change due to daylight savings time, you can’t just store the difference from UTC and would be better suited allowing Windows to take care of these issues for you. Here’s a sample of this method of conversion:

     string id = "Israel Standard Time";
     DateTimeOffset utcnow = DateTimeOffset.UtcNow;
     DateTimeOffset now = utcnow.ToOffset(TimeZoneInfo.FindSystemTimeZoneById(id).BaseUtcOffset);

Side note: When storing these in the database, it would be prudent to use SQL Server 2008’s datetimeoffset type, which is the equivalent of DateTimeOffset and takes care of the time-zones in the same manner.


 

Monday, December 22, 2008

Scenario

A common requirement for enquiry queries on an OLTP database is to have search criteria which are very specific ('get me details for OrderID = NNNN') alongside the occasional report which asks for all the orders ('get me all the orders, no questions asked'). Here is a sample from AdventureWorks which illustrates the problem:

CREATE PROCEDURE RptOrder(@OrderID int)
AS
BEGIN
    SELECT *
    FROM Sales.SalesOrderHeader
    WHERE (SalesOrderID = @OrderID OR @OrderID IS NULL)
END

What is the meaning of the (OR @OrderID IS NULL) predicate in the above WHERE clause? It is actually a 'special case' where the developer intends to get back all the rows, regardless of the OrderID. This 'special case' is triggered by passing in a value of NULL for the @OrderID parameter.

Problem

So while this construct looks good in theory, it lends itself to very poor performance. Take a look at the 2 cases where this procedure is executed.

Case A: with specific OrderID

EXEC RptOrder 43672

Case B: asking for all records

EXEC RptOrder NULL

The plan, it turns out, is the same for both cases, and a scan is used! This is despite a seekable index being present on the SalesOrderID column of the SalesOrderHeader table.

The reason the optimizer chooses to scan the SalesOrderHeader (in this case it chooses a non-clustered index scan) is because it has no way to determine at compile and optimization time, as to what the specific value of @OrderID would be. Hence it has no way to 'fold' the (@OrderID IS NULL) expression and therefore has no option but to look at all the records.

Workarounds

'IF-ELSE' Workaround: The straightforward workaround in simple cases like the one above is to separate out the 2 cases into an IF-ELSE block:

ALTER PROCEDURE RptOrder(@OrderID int)
AS
BEGIN
    IF (@OrderID IS NOT NULL)
    BEGIN
        SELECT *
        FROM Sales.SalesOrderHeader
        WHERE (SalesOrderID = @OrderID)
    END
    ELSE
    BEGIN
        SELECT *
        FROM Sales.SalesOrderHeader
    END
END

Now, the 2 test cases work as expected. Here are the execution plans:

EXEC RptOrder 43672

EXEC RptOrder NULL

Dynamic SQL Workaround: However, as the number of predicates in the WHERE clause increases, and if all those predicates (or most of them) have such 'catch-all' handling, then the IF-ELSE construct becomes unviable. In those cases, a dynamic SQL construct should be considered. Of course, when dealing with dynamic SQL, we must consider security first, including the possibility of SQL injection and also the execution context of the dynamic SQL statement. But that is a topic for another post. Right now, here is how we could handle something like that:

-- NOTE: This code is highly simplified and does not provide for any screening

-- or protection against SQL injection!!! Provided as-is, confers no warranties.

ALTER PROCEDURE RptOrder(@OrderID int)
AS
BEGIN
    DECLARE @sDynamicSQL nvarchar(4000)
    SELECT @sDynamicSQL = 'SELECT * FROM Sales.SalesOrderHeader '

    IF (@OrderID IS NOT NULL)
    BEGIN
        SELECT @sDynamicSQL = @sDynamicSQL + ' WHERE (SalesOrderID = @OrderID)'
    END

    EXEC sp_executesql @sDynamicSQL, N'@OrderID int', @OrderID = @OrderID
END

Different Code Paths: The cleanest way, of course, is to have separate procedures for each kind of query. For example, we can have a procedure called RptSpecificOrder for the case where we are searching by a specific OrderID, and another called RptAllOrders for the 'get-me-everything' case. This has the advantage of clean isolation, but it does not scale easily when the number of predicates is larger. It does, however, have the advantage that if we are querying for specific orders 99% of the time, that code path is simplified and optimized accordingly.

Conclusion

Beware of this T-SQL anti-pattern as it is one of the most common ones we see and it does have a huge (negative) impact on query performance. As you can see, if they are not done with these patterns in mind, application design and reporting requirements can have a detrimental effect on OLTP query execution. Separating reporting and OLTP workloads could be the key to solving these kinds of issues. But if separation is not possible, then clever use of separate code paths and stored procedures could help ensure that the most efficient execution plan is selected for each case. For complex queries, dynamic SQL may offer the simplest way out, but due care has to be taken to ensure that permissions and SQL injection issues are kept in mind when dealing with dynamic SQL statements.


Friday, December 19, 2008

Today I am starting my journey on the MSDN blogs. I hope this will help people who are facing the problem described below.

As a support engineer, I troubleshoot many issues on a daily basis. I am going to address an issue which is very common these days: as people migrate from 32-bit machines to 64-bit machines, they face compatibility issues with applications built for the 32-bit platform. I am going to talk about one such common problem and its different solutions.

Problem description

Let's take a simple scenario which is very common across companies.

You have an ASP.NET application hosted on a 32-bit machine. This web site also references a third-party DLL which is also built for 32-bit. Everything was working fine. Now you decide to host this web site on a 64-bit machine. Suddenly the application breaks and starts throwing the following error:

“Retrieving the COM class factory for component with CLSID {…} failed due to the following error: 80040154.”

Cause

Rule of thumb: you cannot load a 32-bit DLL into a 64-bit process.
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/13f991a5-45eb-496c-8618-2179c3753bb0.mspx?mfr=true

Solutions

This problem can be solved in different ways, depending on your requirements. Each approach has its own pros and cons which need to be examined carefully before adopting it.

The best solution to this problem is:

  • Build the referenced DLLs for the 64-bit platform

Most of the time, either you are using third-party DLLs which you cannot build for the 64-bit platform, or you have other limitations which stop you from building the referenced DLLs for 64-bit. In both scenarios you are not left with many options. Let's see what options you do have.

  • Run the IIS host process (w3wp.exe) in 32-bit mode
  • Create a wrapper for your DLLs and host the wrapper DLLs in COM+
  • Upgrade from IIS 6.0 to IIS 7.0, where you can run a particular app pool (w3wp.exe) in 32-bit mode

As I mentioned before, each of the above approaches has its own pros and cons, and I will not go into detail about them here. Instead, I would like to give you a step-by-step process for executing each approach.

Run the IIS host process (w3wp.exe) in 32-bit mode

This is a very simple approach: you run two commands at a command prompt and your application will start working. The problem with this approach is that after the change, all of your web sites on that server will run under a 32-bit app pool.

Here are the commands you need to run at your command prompt.

ASP.NET 2.0, 32-bit version

To run the 32-bit version of ASP.NET 2.0, follow these steps:

1. Click Start, click Run, type cmd, and then click OK.

2. Type the following command to enable 32-bit mode:

   cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 1

3. Type the following command to install the 32-bit version of ASP.NET 2.0 and the script maps at the IIS root and under:

   %SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i

4. Make sure that the status of ASP.NET version 2.0.50727 (32-bit) is set to Allowed in the Web service extension list in Internet Information Services Manager.

ASP.NET 2.0, 64-bit version

To run the 64-bit version of ASP.NET 2.0, follow these steps:

1. Click Start, click Run, type cmd, and then click OK.

2. Type the following command to disable 32-bit mode:

   cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 0

3. Type the following command to install the 64-bit version of ASP.NET 2.0 and the script maps at the IIS root and under:

   %SYSTEMROOT%\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -i

4. Make sure that the status of ASP.NET version 2.0.50727 is set to Allowed in the Web service extension list in Internet Information Services Manager.

These steps are taken from the following KB. Please check it for more details.
http://support.microsoft.com/kb/894435

 

Create a wrapper for your DLLs and host the wrapper DLLs in COM+

In this solution, we create a wrapper for the 32-bit DLLs and host it in COM+. The ASP.NET application communicates with the 32-bit DLLs via COM+ (dllhost.exe). The main advantage of this approach is that you can still use your 32-bit DLLs from the 64-bit IIS process, but you have to take on the complexity of COM+.

There are basically four steps which you have to perform:

1)    Create the COM+ serviced component.
2)    Give your assembly a strong name.
3)    Host your wrapper under COM+.
4)    Call the COM+ component in your ASP.NET application.

You can find the first three steps in the following KB, which gives a step-by-step process.
http://support.microsoft.com/kb/306296

Once you register the component with COM+, use the following steps to consume it in your ASP.NET application:

  • Add a reference to the COM+ component in your ASP.NET project
  • Add the following lines to your code:

    using System.EnterpriseServices;
    using ServicedCOM;   // ServicedCOM is the name of your COM+ component

  • Now create an object as you would for any other class in .NET and call the respective functions. For example:

    ServicedCOM.SimpleTrans ob = new ServicedCOM.SimpleTrans();  // again, taking ServicedCOM as an example
    Response.Write(ob.DoTrans());

Upgrade from IIS 6.0 to IIS 7.0, where you can run a particular app pool (w3wp.exe) in 32-bit mode

The last option is to upgrade from IIS 6.0 to IIS 7.0. The advantage of this approach is that you can run a specific app pool in 32-bit mode, but to achieve this you have to migrate your operating system to Windows Server 2008 (Longhorn), as IIS 7.0 is available only with Windows Server 2008 or Vista.

If you are ready to upgrade/migrate your OS, here are the steps to resolve the issue.

  • First, install IIS 7.0 from the Windows components/Server Roles
  • Open IIS 7.0 Manager
  • Right-click "Application Pools" and click "Add Application Pool"
  • Give it a name and choose the version of .NET as per your requirement
  • After creating your new app pool, right-click it and open "Advanced Settings"
  • Change the "Enable 32-Bit Applications" setting to True
  • Now open the virtual directory of your application and change its app pool to the new one


I hope this post adds some value if you are facing a similar problem. If you have any questions or doubts, please feel free to write to me. I will try my best to help you.

Disclaimer: I work at Microsoft. Everything here, though, is my personal opinion and is not read or approved by Microsoft before it is posted. No warranties or other guarantees will be offered as to the quality of the opinions or anything else offered here.


Tuesday, December 16, 2008

Caching is an excellent way to improve the performance of a web application, and ASP.NET offers a rich caching API. Caching data typically involves the Cache object, which serves as an in-memory cache available to all requests to a web application. For example, when a visitor reaches the Top Selling Products page you can store the results retrieved from the database in the Cache object. When the next visitor reaches this page the list of top products can be pulled from the cache rather than from the database. This style of caching is referred to as a per-application cache because the data in the Cache object is shared among all requests to a specific web application and its data survives as long as the application.

There is another style of caching known as per-request caching in which the cache is specific to each request and the items in the cache only exist for the duration of the request. This style of caching is useful for sharing information among the many actors that are involved in a particular request to an ASP.NET resource. Consider an ASP.NET page that uses a Master Page and contains a User Control. When this page is requested there are three actors involved in the request: the Master Page, the User Control, and the ASP.NET page. Each of these actors has its own declarative markup and code, and in many cases they may need to work with the same data. For example, in an online message board website the Master Page might display a synopsis of the currently logged on user's information. This particular ASP.NET page might also load information about the currently logged on user, as might the User Control. Without any form of per-request caching, the Master Page, User Control, and ASP.NET page will each need to go to the database to get the currently logged on user's information, whereas in a perfect world this information would have only been retrieved once.

Every time the ASP.NET engine receives a request for a resource it creates an instance of the HttpContext class responsible for handling that request. The core objects that every ASP.NET developer is familiar with - Response, Request, Session, and so on - are actually properties of this HttpContext object. A lesser known property of the HttpContext class is the Items collection, which provides a key/value collection that can be used to share state during the lifetime of the request. The Items collection is an ideal location for a per-request cache store.

The following code snippet shows the HttpContext class's Items collection in action in terms of the messageboard website example described above. While the Items collection could be accessed directly from the code in the Master Page, User Control, and ASP.NET page, ideally it would be moved outside of these actors and located within the architecture. Within the application architecture there might be a MessageboardUser class with properties describing the attributes of a user and methods for retrieving information about all users, about a particular user, about certain subsets of users, and so on. The following code snippet shows the GetUser(userId) method and the use of the Items collection.

public class MessageboardUser
{
    public MessageboardUser GetUser(int userId)
    {
        // See if the user information is in the Items collection
        MessageboardUser u = HttpContext.Current.Items["GetUser" + userId.ToString()]
            as MessageboardUser;
        if (u == null)
        {
            // The user is NOT in the cache... Get the user information from the
            // database
            u = GetUserFromDatabase(userId);

            // Put the user information into the cache
            HttpContext.Current.Items["GetUser" + userId.ToString()] = u;
        }
        return u;
    }
}

With this small addition to the architecture code, the site now takes advantage of a per-request cache and saves unneeded work when more than one actor in a single request uses the same functionality or information.

Note that to access the Items collection from outside of an ASP.NET page you need to use HttpContext.Current.Items. Furthermore, you need to add a using System.Web directive to the class file and reference the System.Web.dll assembly in the Class Library project where your architecture is implemented.

 

Friday, December 12, 2008

There have been quite a few posts lately where people list the tools that make them productive throughout a given day, so I thought I would share some of the tools that I use on a daily basis.

Rockscroll - Extends the scrollbar in Visual Studio to show a syntax highlighted thumbnail view of your source.

Search .NET - Great .NET search engine

C# Search .NET - Great search engine for all things C#.

Console - A Windows console window enhancement. Features include: multiple tabs, text editor-like text selection, etc.

SlickRun - A free floating command line utility for Windows.  Once you create your shortcuts, it is a big time saver.

SyncBack - Great automated backup application.  Heard about this on the Mike Tech Show podcast.

UltraVNC - Powerful, easy-to-use, free software that allows you to remote control another system.

LogMeIn - allows you to access your pc from anywhere (like GoToMyPC). They have pay versions as well as a free version.  This is great for logging into the office from home.  Saves gas :-).

FileZilla - Good free FTP client.  Works well for secure FTP sites.

NotePad++ - A free source code editor (and Notepad replacement)

Foxit Reader 2.0 for Windows - Light-weight PDF viewer

TaskArrange - A simple utility that lets you rearrange the buttons of the Windows taskbar.  This is helpful if you are like me and like to have certain applications show up first on your task bar.  I like to have my email app and my work log app open first then everything else.

FastStone Capture - A powerful, flexible and intuitive screen-capture utility

UltraMon - Utility for dual monitors (30 day trial, then $39 - well worth it)

PingMe - A free, web-based reminder tool.  It is easy to set up one time or recurring reminders to be sent to email or texted to your phone.  I use this religiously.

Jott - A tool that allows you to capture thoughts, create to-dos and set reminders with a simple phone call.  Since I drive 80 miles a day, this is very helpful.  You’d be surprised at how many things come to mind while driving. There are pay versions, but I still use the free plan.

Evernote - Awesome note-taking software that allows you to sync notes from any pc/laptop/mobile device to the web.  That way my notes are with me anywhere I go.

Windows Live Writer - Desktop application that makes it easier to compose compelling blog posts

Twhirl - A great desktop client for social networks such as Twitter, FriendFeed and Seesmic.

Texter - Text substitution application.

Stickies - Computerized version of post-it notes.


I’ve noticed that the developer and designer community appear to be obsessed with lists. Is this for a reason that I have somehow missed? For example, over at dZone, 2 out of the top 3 most popular articles are lists: 15 CSS Tricks That Must be Learned and 10 Dirty Little Web Development Tricks - incidentally I liked both articles so I seem to have this obsession myself. OK this is not that many but hey I’ve seen loads of lists elsewhere. Anyway, I thought I would embrace this culture to feed my own addiction and I have detailed 7 (I was only going to do 5, and 6 was out as it’s not prime ;-)) ways to write beautiful code.

First things first: I'm talking about pure aesthetics here, nothing else. As I have said previously, good code starts by being something other people find easy to read. In fact, Jeff Atwood had a blog post comparing coding to writing, citing several sources. I urge you to take a look.

Anyway on to my list.

  1. Return from if statements as quickly as possible.

For example, consider the following JavaScript function, this just looks horrific:

function findShape(flags, point, attribute, list) {
    if(!findShapePoints(flags, point, attribute)) {
        if(!doFindShapePoints(flags, point, attribute)) {
            if(!findInShape(flags, point, attribute)) {
                if(!findFromGuide(flags, point)) {
                    if(list.count() > 0 && flags == 1) {
                          doSomething();
                    }
                }
            }
       }
    }   
 }

Instead we can change the above to the following:

function findShape(flags, point, attribute, list) {
    if(findShapePoints(flags, point, attribute)) {
        return;
    }
 
    if(doFindShapePoints(flags, point, attribute)) {
        return;
    }
 
    if(findInShape(flags, point, attribute)) { 
        return;
    }
 
    if(findFromGuide(flags, point)) {
        return;
    }
 
    if (!(list.count() > 0 && flags == 1)) {
        return;
    }
 
    doSomething();
 
}

You probably wouldn’t even want a function like the second one - too much going on (see point 7) - but it illustrates returning from if statements as early as possible. The same can be said about avoiding unnecessary else statements.

  2. Don’t use an if statement when all you want to do is return the boolean result of its condition.

Once again an example will better illustrate:

function isStringEmpty(str){
    if(str === "") { 
        return true;
    }
    else {
        return false;
    }
}

Just remove the if statement completely:

function isStringEmpty(str){
    return (str === "");
}
  3. Please use whitespace - it’s free!

You wouldn’t believe the number of people that just don’t use whitespace - you would think there was a tax associated with using it. Again, another example, and I hesitate to say this, but this is from real live code (as was the first example); all I have done is change the programming language and some function names - to protect the guilty:

function getSomeAngle() {
    // Some code here then
    radAngle1 = Math.atan(slope(center, point1));
    radAngle2 = Math.atan(slope(center, point2));
    firstAngle = getStartAngle(radAngle1, point1, center);
    secondAngle = getStartAngle(radAngle2, point2, center);
    radAngle1 = degreesToRadians(firstAngle);
    radAngle2 = degreesToRadians(secondAngle);
    baseRadius = distance(point, center);
    radius = baseRadius + (lines * y);
    p1["x"] = roundValue(radius * Math.cos(radAngle1) + center["x"]);
    p1["y"] = roundValue(radius * Math.sin(radAngle1) + center["y"]);
    pt2["x"] = roundValue(radius * Math.cos(radAngle2) + center["x"]);
    pt2["y"] = roundValue(radius * Math.sin(radAngle2) + center["y"]);
    // Now some more code
}

I won’t bother putting an example of how it should be - it should just be sooo bloody obvious. That said, I see code like this ALL the time, so certain people clearly do not find it that easy to judge how to use whitespace. Screw it - for them, I will inject some whitespace into the example; it’s shown below.

function getSomeAngle() {
    // Some code here then
    radAngle1 = Math.atan(slope(center, point1));
    radAngle2 = Math.atan(slope(center, point2));
 
    firstAngle = getStartAngle(radAngle1, point1, center);
    secondAngle = getStartAngle(radAngle2, point2, center);
 
    radAngle1 = degreesToRadians(firstAngle);
    radAngle2 = degreesToRadians(secondAngle);
 
    baseRadius = distance(point, center);
    radius = baseRadius + (lines * y);
 
    p1["x"] = roundValue(radius * Math.cos(radAngle1) + center["x"]);
    p1["y"] = roundValue(radius * Math.sin(radAngle1) + center["y"]);
 
    pt2["x"] = roundValue(radius * Math.cos(radAngle2) + center["x"]);
    pt2["y"] = roundValue(radius * Math.sin(radAngle2) + center["y"]);
    // Now some more code
}
  4. Don’t have useless comments:

This one can get quite irritating. Don’t point out the obvious in comments. In the example below, everyone can see that we’re getting the student’s id; there is no need to point it out.

function existsStudent(id, list) {
    for(i = 0; i < list.length; i++) {
       student = list[i];
 
       // Get the student's id
       thisId = student.getId();
 
       if(thisId === id) {
           return true;
       }
    }
    return false;   
}
  5. Don’t leave commented-out code in the source file; delete it.

If you are using version control - which hopefully you are; if not, why not! - then you can always get that code back easily by reverting to a previous version. There is nothing more off-putting when looking through code than seeing a large commented-out block like the one below, or even a large comment block within a function itself.

//function thisReallyHandyFunction() {
//      someMagic();
//      someMoreMagic();
//      magicNumber = evenMoreMagic();
//      return magicNumber;
//}
  6. Don’t have overly long lines.

There is nothing worse than looking at code with lines that go on forever - especially with sample code on the internet. The number of times I see this and go ahhhhhhhhhh (I’ll switch to Java for this, as generics make this particularly easy to do):

public static EnumMap<Category, IntPair> getGroupCategoryDistribution(EnumMap<Category, Integer> sizes, int groups) {
        EnumMap<Category, IntPair> categoryGroupCounts = new EnumMap<Category,IntPair>(Category.class);
 
        for(Category cat : Category.values()) {
            categoryGroupCounts.put(cat, getCategoryDistribution(sizes.get(cat), groups));
        }

        return categoryGroupCounts;
}

I’m not suggesting the 70-character width that you had to stick to on old Unix terminals, but a sensible limit, say 120 characters, makes things a little easier. Obviously, if you are putting sample code on the internet within a fixed-width container, make it easier for people to read by actually having it fit in the container.

  7. Don’t have too many lines within a function/method.

Believe it or not, a few years ago an old work colleague exclaimed that Visual C++ was “shit” as it didn’t allow you to have a method with more than 10,000 lines. I kid you not - well, OK, I can’t remember the exact number of lines, but it was huge. I still see this time and time again, where a function/method is at least 50 lines long. Can anyone tell me this is easy to follow? Not only that, but it normally forms part of an if statement whose enclosing block you can never find, because you are scrolling. To me, anything over 30-35 lines is pretty hard to follow and requires scrolling. My recommendation: if it’s more than 10-15 lines, consider splitting it up.
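To illustrate the splitting-up advice, here is a hedged sketch of what extracting helpers from one long function can look like. The order-processing names are invented for this example, not taken from any real codebase:

```javascript
// Instead of one long function that validates, totals and formats,
// split the work so the top-level function reads like a summary and
// each helper fits comfortably on one screen.
function processOrder(order) {
    validateOrder(order);
    var total = calculateTotal(order);
    return formatReceipt(order, total);
}

function validateOrder(order) {
    if (!order.items || order.items.length === 0) {
        throw new Error("order has no items");
    }
}

function calculateTotal(order) {
    var total = 0;
    for (var i = 0; i < order.items.length; i++) {
        total += order.items[i].price * order.items[i].quantity;
    }
    return total;
}

function formatReceipt(order, total) {
    return order.id + ": " + total.toFixed(2);
}
```

Each helper is now short enough to follow without scrolling, and the top-level function names each step.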

This is by no means an exhaustive list, and I could have gone on longer - in fact, my “abolish the switch statement” motto would have been number 8 if I had not already mentioned it before. However, it has to end somewhere, so feel free to state your own annoyances and maybe I can update the post with some more.

 

I love the asp.net validation controls. They're slick and really useful. We sometimes have to show some messages that are not the ErrorMessages of validators. An approach to do this could be to:

a) ScriptManager.RegisterClientScriptBlock to show a popup (YUCK!!)

b) Put in a label and set its Text from code. This could be an option, but the error message will be in the label and not inside the ValidationSummary. This does not look good, as some errors will be shown neatly in a ValidationSummary whereas others will be shown in a separate label.

c) This is my favourite. Here's what you do:

In code behind, when you want to add a message to the ValidationSummary, just do this:

RequiredFieldValidator req = new RequiredFieldValidator();
req.ValidationGroup = "vgInput";
req.ErrorMessage = "Your message goes here.";
req.IsValid = false;
Page.Form.Controls.Add(req);
req.Visible = false;

Notice the last three lines. One causes page validation to fail, the next adds the validator to the page's controls so it can take effect (validators must be inside the form tag). The last hides the validator so the error message is shown only in the ValidationSummary and not at the location of the validator (which is at the end of the form, as it's added dynamically).

Hope that helps.

 

Monday, December 8, 2008

When building web applications, we've had a "defect" reported a number of times where a user presses a button twice in quick succession and causes a postback twice, typically causing whatever server-side code you've written to run twice.

Fellow developers usually dismiss this "defect" and say "can't the user just press the button once and wait?". Short answer, NO.

I've solved this in the past a few different ways.

For long running tasks I use BusyBoxDotNet. This basically uses the same concept as the ModalPopup from the AJAX Control Toolkit, but it hooks into the browser's onbeforeunload event. As it places a modal popup over the page, it prevents the user from pressing the submit button again.

This week, I needed a quick and easy way to prevent a user from submitting twice, and found it thanks to jQuery.

Screenshot

On the screenshot above, you can see that I've got a TextBox (which has a RequiredFieldValidator). In my button press event, I've got a Thread.Sleep(1000) to simulate a long running task.

If the user tries clicking the button again after the page has already been submitted, you get the alert message above.

Show me the code....

The magic happens in the following piece of javascript code...

Breaking it down.

Add a reference to the jQuery library, then call "noConflict()". (I like to call jQuery by its name, not using $)

I keep a javascript variable called "submitted" which initially is set to false.

Using jQuery, I hook up an event handler for the document ready event. This function finds (using jQuery selectors) the Page's Form.ClientID (so you can call the form what you want). When "submit" is called on the form, run the event handler.

I have to wrap "Page_IsValid" in a try..catch as Page_IsValid may NOT be on the page. This happens when there are no validators on the page.

If the form is valid, and it hasn't been submitted already, then all we do is update the "submitted" variable to TRUE.
If the form is valid, but the page HAS been submitted, then we call "preventDefault()" which stops form submission, and alert() to the user.
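The original snippet does not appear to have survived here, so the following is only a sketch reconstructing the logic the breakdown above describes. The shouldAllowSubmit helper and the form id are my own inventions, and the jQuery wiring is shown as comments since it needs a browser:

```javascript
var submitted = false;          // has a valid submit already gone through?

// Decide whether a submit attempt should proceed. Returns true to allow,
// false to block. Kept as pure logic so it can run outside a browser.
function shouldAllowSubmit(isValid) {
    if (!isValid) {
        return true;            // invalid: the validators will stop it anyway
    }
    if (!submitted) {
        submitted = true;       // first valid submit: let it through
        return true;
    }
    return false;               // already submitted: block the duplicate
}

// Browser wiring, roughly as the breakdown describes (requires jQuery;
// "#form1" stands in for the Page's Form.ClientID):
// var jq = jQuery.noConflict();
// jq(document).ready(function () {
//     jq("#form1").submit(function (event) {
//         var isValid = true;
//         try { isValid = Page_IsValid; } catch (e) { } // no validators on page
//         if (!shouldAllowSubmit(isValid)) {
//             event.preventDefault();
//             alert("The page has already been submitted. Please wait...");
//         }
//     });
// });
```

Because jQuery's submit handler is appended after the ASP.NET validation code, Page_IsValid has already been set by the time this check runs.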

Important Notes

The important part here is that when we subscribe to the "submit" event in jQuery, it APPENDS our code to whatever is already there, it DOES NOT replace the code. I've struggled in the past with trying to get this to work, as you need to check if the form is submitted AFTER all the ASP.NET validators have run.

In the past some of the things I've tried are:

Using the Page.RegisterOnSubmitStatement()

This registers the javascript function before the WebForm_OnSubmit() function call.

Creating my own FORM class by overriding HtmlForm

I quickly tried this and tried to override OnPreRender() and RenderAttributes() and could never get my javascript AFTER the WebForm_OnSubmit() code.

Disabling / hiding buttons after they've been pressed

This is probably the easiest method of stopping submission of a page twice, but you have to be careful you don't disable the button BEFORE you check for validation errors, otherwise your user will never be able to submit a page.
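As a minimal sketch of that approach (the function name and the button object are hypothetical), the key is to disable the button only after the validation check has passed:

```javascript
// Disable the submit button only once client-side validation succeeds,
// so a user whose input has errors can still fix them and resubmit.
function onSubmitClick(button, isValid) {
    if (!isValid) {
        return true;            // leave the button enabled; validators complain
    }
    button.disabled = true;     // block any second click
    return true;                // allow this (first) submission to proceed
}
```

Disabling before the validation check would strand the user: a failed validation would leave the button dead with no way to submit again.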

Use BusyBoxDotNet

For really long running postbacks (eg payment gateway transactions), we use BusyBoxDotNet. As all our sites are hosted at WebCentral we've had to roll our own version to ensure it runs under a partial trust environment.


Friday, December 5, 2008

Alachisoft has released TierDeveloper 6.1 as free software (previous version priced at $1495/developer). TierDeveloper lets you develop major chunks of your .NET applications in a matter of hours and days instead of weeks and months. TierDeveloper is one of the most feature-rich ORM code generators in the market.

It provides you the following:

- Map and generate .NET persistence and domain objects in C# and VB.NET

- Design and generate custom ASP.NET and Windows Forms GUI seamlessly

- Generate web services and WCF server layers and proxy client objects

- Powerful Template IDE to let you customize existing or write new code generation templates

- Full support for .NET 2.0/3.5 and Visual Studio 2005/2008

Download TierDeveloper 6.1 Free Software
http://www.alachisoft.com/rp.php?dest=/download.html

TierDeveloper 6.1 Information
http://www.alachisoft.com/rp.php?dest=/tdev/index.html