:::: MENU ::::

Tuesday, June 23, 2009

Firebug is a revolutionary Firefox extension that helps web developers and designers test and inspect front-end code. It provides us with many useful features such as a console panel for logging information, a DOM inspector, detailed information about page elements, and much more.

Though Firebug is already fully packed with features out of the box, several extensions out there can enhance its utility. In this article, you will find the 10 best Firefox extensions for Firebug that will make your life, as a developer or designer, easier.

1. Pixel Perfect

Pixel Perfect allows you to overlay web layouts and other design compositions onto a web page so that you can accurately (and easily) write your CSS and HTML. By being able to toggle a web composition on or off, web developers and designers have a visual guide for pixel-perfect accuracy of the position and dimensions of web page components. Check out the video demonstration to see Pixel Perfect in action.

2. Page Speed

Page Speed is an open source Firebug add-on for evaluating web page performance, giving developers suggestions on front-end performance optimizations they can carry out. Tests and evaluations are based on Google’s Web Performance Best Practices, developed through Steve Souders’s work. Make sure to read the Page Speed user guide for complete documentation of its many features.

3. CodeBurner

CodeBurner, released by SitePoint, extends Firebug to provide a built-in HTML and CSS reference. The extension also presents contextual information based on what is currently in Firebug’s CSS and HTML panels. The references are very helpful, showing you information about browser compatibility and W3C Recommendation compliance of page elements, among many other types of information.

4. FireRainbow

FireRainbow is a simple Firebug extension that fills in a sorely desired function: code syntax highlighting. FireRainbow colorizes JavaScript, CSS, and HTML for improved readability of code being reviewed or inspected in Firebug. There are currently over 20 different FireRainbow themes that you can choose from, giving you some options for customization.

5. Inline Code Finder

Inline Code Finder is great for hunting down inline JavaScript and CSS and is perfect for developers refactoring existing markup to separate structure (HTML) from style (CSS) and function (JavaScript). The usage of the tool is simple: it searches the entire web page for inline code and provides the developer with contextual information about the inline code it finds. The newest version gives you the ability to filter certain groups of inline code.

6. Firecookie

Developing web applications that utilize cookies can be time-consuming. Firecookie, a Firebug extension, gives you a host of options and features strictly for working with cookies. The extension allows you to view, inspect, export, and manage cookies, log cookie events (creation, deletion, etc.), and much more. The latest version of Firecookie adds several improvements such as the ability to list only cookies sourcing from a subdomain.

7. FirebugCodeCoverage

FirebugCodeCoverage is a benchmarking Firebug extension, inspired by Selenium IDE, for determining the percentage of your code that is executed during a given time period, known as code coverage. Code coverage is typically measured during automated testing to see how thoroughly your test cases exercise your code (with higher percentages being your goal).

8. SenSEO

SenSEO is a Firebug extension that analyzes a web page and indicates how well it is doing for single-page whitehat search engine optimization. The extension checks for correct use of meta tags, the presence of a title and headings, and other criteria relevant to search engine optimization.

9. Yahoo! YSlow

YSlow evaluates a web page for performance and suggests potential places for improvement. YSlow is based on YDN’s Best Practices for Speeding Up Your Web Site and gives you letter grades based on one of three predefined (or user-defined) rule sets. It has a handful of useful features, such as displaying information and statistics about web page components, and it integrates optimization tools such as JSLint and Smush.it.

10. Firefinder

Firefinder is for quickly finding web page elements that match the CSS or XPath selectors you input as your search criteria. Firefinder is great for testing which page elements are affected by a CSS style rule, as well as for highlighting and finding elements that match your searches.

More

Monday, June 22, 2009

I’m not used to using finalizers in my everyday job, but I always thought that if I used them more frequently I could achieve some extra performance.

Well, I’m a naturally lazy programmer, and that prevented me from digging deeper and doing some tests to clarify this guess.

Thanks to the community, there is always someone ready to share knowledge and help lazy guys like me.

One of those guys is Andrew Hunter, who posted an article named “Understanding Garbage Collection in .NET”.

In his article I found that:

  • Finalizer execution is non-deterministic – it depends on the GC and the way it’s operating: concurrent or synchronous.
  • It requires two GC cycles to completely remove the object and its memory footprint.
  • Too many objects with finalizers can slow down the GC a lot.

This doesn’t mean that we shouldn’t use finalizers, but we must take some care when we create them. The simplified approach to finalizer usage is:

  1. Implement the System.IDisposable interface.
  2. Move the finalizer code to the Dispose method.
  3. Finish the Dispose method with a call to GC.SuppressFinalize(this); this way the GC knows that this object won’t need its finalizer invoked and can be removed immediately.
  4. Invoke the Dispose method from the finalizer – this way we ensure that external resources are always released.
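A minimal sketch of those four steps might look like this (the class and its fake handle are hypothetical stand-ins for a real unmanaged resource):

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private IntPtr handle = new IntPtr(1); // pretend unmanaged handle
    private bool disposed;

    public bool Disposed { get { return disposed; } }

    // Step 4: the finalizer only delegates to Dispose, so external
    // resources are released even if the caller forgets to call it.
    ~ResourceHolder()
    {
        Dispose();
    }

    // Steps 1 and 2: the cleanup code lives in Dispose (IDisposable),
    // not in the finalizer itself.
    public void Dispose()
    {
        if (!disposed)
        {
            handle = IntPtr.Zero; // release the external resource here
            disposed = true;
        }
        // Step 3: tell the GC this object no longer needs finalization,
        // so it can be reclaimed in a single GC cycle.
        GC.SuppressFinalize(this);
    }
}
```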

A deeper understanding of this procedure, also known as the IDisposable pattern, can be acquired by reading this article (kindly pointed out by Luis Abreu).

Conclusion

Using finalizers doesn’t bring any performance improvement, but their wrong use can be a vector for severe memory management problems.

When we have a class that owns external resources not managed automatically by the runtime, the IDisposable pattern should be used to ensure correct memory cleanup.

 

Friday, June 19, 2009

I’m working on a little application right now that provides some insight into the assemblies in use for a given application.  One of the things that I want to be able to show is whether or not each assembly was built in Debug or Release mode.  As you’re no doubt aware, running applications in production that were built in Debug mode can be a major performance problem (at a minimum – depending on what else you have turned on in Debug mode it could also be a security issue).

So I did some binging (followed by some purging? no, wait, that’s with a hard G sound) and quickly found some sample code that I was able to use in some test code to confirm that it is working.  Fellow MVP Bill McCarthy wrote (like, 5 years ago to the day!) about how to do this in Visual Basic.  Not that I have anything against VB, but my preference since VB6 has been C# (if nothing else, it makes JavaScript much easier to grok), so I had to translate the code into my native tongue, which provides me with something to share with you.  This is literally Bill’s code, translated by me into C#.  Any bugs you can probably assign to me as the translator, rather than Bill as the original author.

Check If An Assembly Was Compiled In Debug Mode

using System.Diagnostics;
using System.IO;
using System.Reflection;

private bool IsAssemblyDebugBuild(string filepath)
{
    return IsAssemblyDebugBuild(Assembly.LoadFile(Path.GetFullPath(filepath)));
}

private bool IsAssemblyDebugBuild(Assembly assembly)
{
    foreach (var attribute in assembly.GetCustomAttributes(false))
    {
        var debuggableAttribute = attribute as DebuggableAttribute;
        if (debuggableAttribute != null)
        {
            // JIT tracking is enabled for Debug builds (and only for
            // Release builds that explicitly request it).
            return debuggableAttribute.IsJITTrackingEnabled;
        }
    }
    return false;
}

 

Thursday, June 18, 2009

CAPTCHA HttpHandler Path

This release of BotDetect includes the ability to change the full CAPTCHA HttpHandler request path. It is recommended to change this path to a unique value for each of your applications, since that can improve the CAPTCHA security.

For example, if a bot tries scanning many websites for the default LanapCaptcha.aspx path, it becomes slightly more difficult to automatically recognize your site as using BotDetect CAPTCHA protection.

To change the BotDetect CAPTCHA HttpHandler path to CustomCaptchaHandler.ashx (for example):

  • Add the following lines to the <appSettings> section of the web.config file:

<appSettings>
  <add key="LBD_RequestPath" value="CustomCaptchaHandler.ashx" />
</appSettings>

  • Update the HttpHandler registration in the <system.web> section of the web.config file:

<httpHandlers>
  <add verb="*" path="CustomCaptchaHandler.ashx"
    type="Lanap.BotDetect.CaptchaHandler, Lanap.BotDetect" />
</httpHandlers>
 

http://captcha.biz/asp.net-captcha.html

Wednesday, June 17, 2009

For loops have been our friend for so many years. I have fond memories of looping through huge lists of items imperatively bobbing and weaving to construct my final masterpiece!

for (int i = 0; i < items.Length; i++)
{
  if (items[i].SomeValue == "Value I'm Looking For!")
  {
    result.Add(items[i]);
  }
}

Look at that beauty. I am looping through a list of items, filtering them on some value, and then BAM! I get a result list with the values in it. Magic I tell you, magic. And then foreach loops came along and I realized how ridiculously ugly it was. Just check this out:

foreach (SomeDummyClass item in items)
{
  if (item.SomeValue == "Value I'm Looking For!")
  {
    result.Add(item);
  }
}

Mmmmm. Beauty, simplicity, less room for error. But I still have to do a lot of declaring and looping and things. Ugh. So then they gave me the magical yield statement, and when used with a method, I could do this:

private static IEnumerable<SomeDummyClass> GetItems(SomeDummyClass[] items)
{
  foreach (SomeDummyClass item in items)
  {
    if (item.SomeValue == "Value I'm Looking For!")
    {
      yield return item;
    }
  }
}

Nice. Still a lot of looping, but now I don't have to declare that stupid result list. And, if the result is never used, nothing even runs! Lazy evaluation rocks your face people! This still just feels gross though. Why am I holding the compiler's hand so much? I just need to say "hello computer, give me crap in this list where SomeValue = some crap I'm looking for". Lo and behold, Anders Hejlsberg and his team came down from on high and delivered Linq to us. Now I say:

var result = items.Where(i => i.SomeValue == "Value I'm Looking For!");

And the compiler figures it all out for me. Better yet I still get lazy evaluation and I get my list filtered. Best of both worlds! And since I am not telling the compiler exactly what to do, then in the future (with .NET 4.0) when my list grows really really large, all I have to do is say:

var result = items.AsParallel().Where(i => i.SomeValue == "Value I'm Looking For!");

And suddenly my list is being filtered by all of the processors on the box! This is possible because at each iteration we began to tell the computer less and less how to perform the individual operations needed in order to get our result, and instead we are now more closely telling the computer the action to take, not the specifics of how to perform the action. This lets the computer best decide how to execute our action, and in the case of Parallel Linq, we are now able to tell the framework that we want our task executed in parallel. (In case you are wondering, there are a few reasons why it can't just run it as parallel by default)

As you can see, we really are moving more and more down the road of declarative development. Over time we will see more "what" and less "how" in our day to day programming adventures. And that, my friends, is life after loops.

More

Typemock have launched a new Unit testing tool, Typemock Racer, and for the launch they are offering a free license for bloggers who will review it.

 

Get the full details on Typemock’s blog.

 

Typemock has released a new deadlock detection unit testing tool – Typemock Racer – and for the product’s launch, each blogger who reviews Racer will get a free Racer license!

As the software world moves towards multi-threaded platforms, more and more developers find themselves facing the problem of testing their code. Due to the nature of multi-threaded code development, the causes of bugs aren't easy to detect even with thorough testing. Companies run the risk of falling behind the competition, as they either don't develop multi-threaded code or deliver buggy software, leading to users abandoning their products.

Typemock Racer, our latest in a long line of unit testing tools, scans the developer’s code for known multi-threading problems. Racer is designed to integrate smoothly into the developer's work flow, enabling the developer to write tests, and get a detailed report to show where the potential problems occur in the code.

Tuesday, June 16, 2009

It feels like forever since SecureString was introduced in the .NET Framework, and yet I keep seeing it mishandled – in particular by code that returns or receives passwords in clear text – so in this post I’ll show what you should be looking for when doing code reviews and finding conversions from SecureString to string.

The main idea with SecureString is that you would never store a password or other textual secret in plain text, unencrypted memory – at least not in a System.String instance. Unfortunately, SecureString was introduced in the framework only after plenty of APIs were built and shipped using passwords stored in String, such as System.Net.NetworkCredential, so an application that must use these APIs has no option but to convert SecureStrings to strings.

However, the SecureString class itself doesn’t provide any methods to get back a plain string with its content, exactly to discourage this type of usage. What a developer has to do is use functions from System.Runtime.InteropServices.Marshal to get a native buffer with the plain string, marshal the value into a managed string then – very importantly – free the native buffer.

And this is where I’ve seen people making mistakes. You can certainly spot the problems in the following functions:

public static string BAD_ConvertToUnsecureString(this SecureString securePassword)
{
    IntPtr unmanagedString = Marshal.SecureStringToGlobalAllocUnicode(securePassword);
    var s = Marshal.PtrToStringUni(unmanagedString);
    Marshal.ZeroFreeGlobalAllocUnicode(unmanagedString);
    return s;
}

 

public static string REALLY_BAD_ConvertToUnsecureString(this SecureString securePassword)
{
    IntPtr unmanagedString = Marshal.SecureStringToGlobalAllocUnicode(securePassword);
    return Marshal.PtrToStringUni(unmanagedString);
}

The first one is almost ok, but if an exception happens after the native buffer is allocated then we leak that password. The second one is just plain wrong; the developer completely forgot to free the native buffer, introducing an unconditional leak of that password.

I’ve seen both mistakes being freely cut-and-paste because the original code was embedded in a larger function that was doing multiple things, so the details of the conversion were easy to overlook. Lacking a simple, reusable function leads to that, so don’t let these bugs creep into your project.

The correct implementation is to use a try/finally block to free the native buffer. Here’s an example, using SecureStringToGlobalAllocUnicode and ZeroFreeGlobalAllocUnicode:

public static string ConvertToUnsecureString(this SecureString securePassword)
{
    if (securePassword == null)
        throw new ArgumentNullException("securePassword");

    IntPtr unmanagedString = IntPtr.Zero;
    try
    {
        unmanagedString = Marshal.SecureStringToGlobalAllocUnicode(securePassword);
        return Marshal.PtrToStringUni(unmanagedString);
    }
    finally
    {
        Marshal.ZeroFreeGlobalAllocUnicode(unmanagedString);
    }
}

Please understand that a little village bursts on fire every time you call such a function, but if you do need it, write it like this, so there’s no excuse for anyone to write that code again.

The extension method syntax makes it easy to call the conversion as if it was part of SecureString itself:

SecureString password = someCodeToGetSecureString();
var credentials = new NetworkCredential(userName, password.ConvertToUnsecureString());

While at it you may want to provide the counterpart that converts a plain String to a SecureString. Following the same pattern as above we’ll have:

public static SecureString ConvertToSecureString(this string password)
{
    if (password == null)
        throw new ArgumentNullException("password");

    unsafe
    {
        fixed (char* passwordChars = password)
        {
            var securePassword = new SecureString(passwordChars, password.Length);
            securePassword.MakeReadOnly();
            return securePassword;
        }
    }
}

The important point here is to mark the secure string as read-only before returning it – it is very unlikely that you’ll need to edit a secure string that was created from an existing string. You’ll also note that in the above example I used unsafe code to initialize the SecureString from its fixed/char* representation, which is about 10x faster than using AppendChar. If you are not allowed to use unsafe code, don’t worry, just write it using AppendChar and it works fine.
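For reference, a safe-code version of the same conversion might be sketched with AppendChar instead of the unsafe fixed/char* block, following the same null check and MakeReadOnly pattern as above (the helper class name is mine):

```csharp
using System;
using System.Security;

public static class SecureStringHelper
{
    // Slower than the unsafe version, but needs no unsafe context.
    public static SecureString ConvertToSecureString(this string password)
    {
        if (password == null)
            throw new ArgumentNullException("password");

        var securePassword = new SecureString();
        foreach (char c in password)
            securePassword.AppendChar(c);
        securePassword.MakeReadOnly();
        return securePassword;
    }
}
```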

I’m not proud to have clear text passwords in my process memory, but we’ll have to live with it until the day somebody updates the framework to use SecureString everywhere, particularly in System.Net.

If you need to learn more about SecureString this post is a great start: http://blogs.msdn.com/shawnfa/archive/2004/05/27/143254.aspx.

 

Friday, June 12, 2009

Microsoft’s Fiddler

Fiddler is a free web performance tool. It is not really a property of Microsoft but rather a side project by Eric Lawrence, a PM with Microsoft. I have used Fiddler for security testing and now use it for performance. I love it a lot. I must mention it requires .NET Fx 2.0 as a prerequisite, so it is limited to the Windows OS. Recently Eric added support for Firefox – Fiddler Hook for Firefox – so the tool is great for both IE and FF. My related posts:

Microsoft’s VRTA

VRTA is a free web performance tool; it stands for Visual Round Trip Analyzer. It was created by Microsoft’s Jim Pierson and used internally for some time, and was made available for public use during PDC 2008. Jim has written a very detailed article about the tool and how it solves performance problems – 12 Steps To Faster Web Pages With Visual Round Trip Analyzer. Under the hood, VRTA installs and uses the free Microsoft Network Monitor (Netmon) to capture and analyze network traffic.

Yahoo’s YSlow

YSlow is a free performance analysis tool created by Steve Souders when he was with Yahoo. Steve also created another good tool called Cuzillion. YSlow comes with an extremely good set of performance guidance that can be found here – rules for high performance web pages. YSlow requires Firebug as a prerequisite, meaning it is restricted to Firefox only.

IBM’s Page Detailer

Page Detailer is a free web performance tool from IBM. I was not able to identify any good articles that cover it – if you know of one, please share it with me, or better yet, publish one. It does not have any prerequisites; consider that an advantage.

Google’s Page Speed

Recently I stumbled on Page Speed from Google. It reminds me a lot of Yahoo’s YSlow, which makes me believe it comes from Steve Souders, who now works for Google. It also requires Firebug as a prerequisite and works with Firefox only. It comes with nice guidance found here – Web Performance Best Practices. I must admit I adore the concept of the tool, although in most cases I cannot use it, as I work for customers whose target browser is IE. Nevertheless, the guidance is tool-agnostic and I recommend bookmarking it for quick reference.

 

Thursday, June 11, 2009

This is one of the most irritating build errors a developer might see, especially while setting up an existing .NET application on a fresh machine. I have found one solution that always works:

  1. Go to "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files" (modify the framework version, root directory, etc. according to your environment).
  2. Right-click the folder and open its Security tab.
  3. Grant "Full control" to the 'Users' group. ("Modify" will probably also work, though I didn't try it out.)

However, if granting full control to all users in the Users group is a security problem for your environment, another solution could be to put the concerned assemblies into the GAC – which I avoided, as it was not logical to put the assemblies into the GAC in my scenario. Hope this helps.

 

Wednesday, June 10, 2009

I'm using Forms Authentication and role-based security for an app I'm working on. I ran into a problem: even though I set the IsPersistent parameter of the FormsAuthenticationTicket to true, the next time I open the app I have to log in again...

authTkt = new FormsAuthenticationTicket(1, userId, DateTime.Now, DateTime.Now.AddYears(1), true, userRoles);

Well, doh, the plain and simple answer is that you need to set the expiry date of the cookie, otherwise it expires when the user closes the browser. Problem solved :-)

authCookie.Expires = authTkt.Expiration;
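Putting both pieces together, the full setup looks roughly like this (a sketch only, for inside an ASP.NET request; the cookie creation and Encrypt call are the standard System.Web way to turn the ticket into a cookie, and userId/userRoles come from the snippet above):

```csharp
// Create the ticket, encrypt it into a cookie, and give the cookie
// an explicit expiry so it survives browser restarts.
var authTkt = new FormsAuthenticationTicket(1, userId, DateTime.Now,
    DateTime.Now.AddYears(1), true /* IsPersistent */, userRoles);

var authCookie = new HttpCookie(FormsAuthentication.FormsCookieName,
    FormsAuthentication.Encrypt(authTkt));

// The crucial bit: without this, the cookie dies with the browser session.
authCookie.Expires = authTkt.Expiration;
Response.Cookies.Add(authCookie);
```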

 

Most of the projects my company works on for clients are deployed internally to an enterprise environment managed by the company, so we rarely have to worry about deploying to a shared hosting environment, which is nice.  One of the client projects we finished up recently, based on ASP.NET MVC, PLINQO and jQuery, was deployed to a shared hosting provider for the client, and nothing worked initially.  Everything worked locally, of course, even when we hit the hosting provider’s database from the staging server code base.  It’s never fun to have development or staging working with the same code failing in production.

After researching the error more, we thought it was related to data access permissions, since we were getting null reference errors as collections were converted to lists (data wasn’t being returned from the database).  Finding the root cause of errors in this situation is like finding a needle in a haystack, since you can’t attach a debugger, can’t run SQL Profiler, and can’t do anything aside from looking at log files, making code changes, FTPing the files to the server, and seeing what happens.

After thinking through it more and running several isolated tests we realized it wasn’t a data access error that we were getting (even though that’s what logs led us to believe).  With a little research I came across the good old <trust> element that can be used in web.config.  It’s not something that I personally have had to use for over 5 years so I completely forgot about it.  By placing the following in web.config you can simulate a medium trust environment:

<system.web>
  <trust level="Medium"/>
</system.web>

Changing the trust level made the staging environment fail with the same error shown on the shared host (which was awesome….rarely am I excited to get an error).  We found out that a PLINQO assembly we were using didn’t allow partially trusted calls so we located the source for that assembly, recompiled it to allow partially trusted callers and everything was fine. 

Bottom Line/Lesson Learned: If you’re deploying to a medium trust hosting environment you’d be wise to set the trust level to Medium up front to save time and energy down the road.

 

Sometimes you want your web page to 'stay alive'. That is, if a user is filling out a complicated form, you do not want the session to time out before they are finished. The user could get very angry and rightfully so: You might even get yelled at!

It's not simply a matter of increasing the session timeout to a very large value. If you do that, the sessions would be left active in the server memory for hours—long after the visitors have left the site. Increasing the session timeout IS a solution… but not necessarily a good solution.

The goal is that the session should stay active as long as the web page is open on the client machine …even if there are no post backs to reset the session timer. When the web page is closed, the session should time out normally.

I implemented a solution for this: The client will "ping" the server at intervals of less than the session timeout which will reset the session timer. This is known as the Heartbeat design pattern (I couldn't find a decent site/page to link to).

Miscellaneous Setup Stuff:

The page must have a ScriptManager installed.

For testing purposes, I set the Session Timeout to two minutes in web.config:

<system.web>
  <sessionState timeout="2">
  </sessionState>
</system.web>

 

To trace what is happening, I used a utility function called ODS (it's in a class called MiscUtilities):

/// ---- ODS ---------------------------------------
/// <summary>
/// Output Debug String with time stamp.
/// </summary>
public static void ODS(string Msg)
{
    String Out = String.Format("{0}  {1}",
                       DateTime.Now.ToString("hh:mm:ss.ff"), Msg);
    System.Diagnostics.Debug.WriteLine(Out);
}

 

To watch the Session State events, I added debugging strings to the global.asax file:

<%@ Application Language="C#" %>

<script RunAt="server">

    void Application_Start(object sender, EventArgs e)
    {
        MiscUtilities.ODS("****ApplicationStart");
    }
    void Session_Start(object sender, EventArgs e)
    {
        MiscUtilities.ODS("Session_Start");
    }
    void Session_End(object sender, EventArgs e)
    {
        MiscUtilities.ODS("Session_End");
    }

</script>

Here are the details:

We need a method at the server for the client to call. We use a WebMethod.

  1. There must be a ScriptManager on the page.
  2. The ScriptManager must have EnablePageMethods set to true.
  3. The WebMethod must be public and static.
  4. The WebMethod must have the EnableSession attribute set to true.

<asp:ScriptManager ID="ScriptManager1" runat="server"
    EnablePageMethods="true">
</asp:ScriptManager>

public partial class _Default : System.Web.UI.Page
{
    [WebMethod(EnableSession=true)]
    public static void PokePage()
    {
        // called by client to refresh session
        MiscUtilities.ODS("Server: I am poked");
    }
}

 

We need JavaScript at the client to call the server function at fixed intervals:

<script type="text/javascript">

    var HeartBeatTimer;

    function StartHeartBeat()
    {
        // pulse every 10 seconds
        if (HeartBeatTimer == null)
            HeartBeatTimer = setInterval("HeartBeat()", 1000 * 10);
    }

    function HeartBeat()
    {
        // note: ScriptManager must have: EnablePageMethods="true"
        Sys.Debug.trace("Client: Poke Server");
        PageMethods.PokePage();
    }

</script>

<body id="MyBody" onload="StartHeartBeat();">

 

Here is what the output looks like without the heartbeat:

10:22:43.03  ****ApplicationStart
10:22:45.13  Session_Start
10:25:00.00  Session_End 

Here is the output with the heartbeat:

10:26:06.10  ****ApplicationStart
10:26:08.05  Session_Start
Client: Poke Server
10:26:18.93  Server: I am poked
Client: Poke Server
10:26:28.95  Server: I am poked
Client: Poke Server
10:26:38.96  Server: I am poked
Client: Poke Server
10:26:48.98  Server: I am poked

    . . . (lines deleted)

Client: Poke Server
10:29:59.45  Server: I am poked
Client: Poke Server
10:30:09.47  Server: I am poked
Client: Poke Server
10:30:19.48  Server: I am poked

    . . . (lines deleted)

It looks like the session is staying alive while the client is idle: Excellent!

I hope someone finds this useful.

 

Tuesday, June 2, 2009

I was recently asked to review a licensing solution – CryptoLicensing for .NET (disclosure: I was paid for the time it took to review it, but had only committed to doing a fair and balanced review). After reading Oren’s post a while ago about how he couldn’t find a decent licensing solution, to the point of making his own, I was sure that this one would fall short as well over some technicality.

Typemock also has a home-grown solution for licensing, and while searching the web about a year ago I was hard pressed to find a decent licensing component for .NET that didn’t suck in terms of usability or features.

So, all the signs pointed to the possibility that this would be yet another one of those licensing schemes that are just not good enough, but I was wrong.

It took me less than an hour to feel pretty comfortable with the CryptoLicensing API, and the feature set it contains is pretty powerful, packaged in a very simple API. It does have some small flaws, but they are shadowed by the very big value it provides.

As one discovers pretty quickly when doing their own homegrown licensing solution, a licensing scheme has at least two main parts: THE API you use from within your software, and the Serial Generator (KeyGen) software. Often, the keygen will also have an API that can be used programmatically so that you can then generate licenses on the fly automatically per an automated request (say CRM).

I was happy to discover that CryptoLicensing supports all that out of the box, and so much more. But let’s start at the beginning:

The API:

The API is very simple to use. There is only one class you use in your project: CryptoLicense. This class allows you to load a license from the local machine, or save one either to the registry or to the file system. The API can be used in two ways:

The simple way is just to check a status property on the CryptoLicense class once you have tried to “load” it. If it’s anything but Status.Valid, then you need to put in a serial number.

The class can then be invoked to show a built-in “Enter Serial” dialog, which tells the user the current license terms or how much time they have left on the current license. The dialog is fully customizable – it comes with full source code, so you can make it your own very simply.
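Based on that description, the simple usage path reads roughly like this C#-flavored pseudocode (CryptoLicense and Status.Valid come from the description above; the load call and dialog method names are my guesses, not the vendor’s documented API):

```
// Hypothetical sketch of the "check status, then prompt" flow.
var license = new CryptoLicense();
license.Load();                        // guessed name: load from registry or file
if (license.Status != LicenseStatus.Valid)
{
    // Show the built-in, fully customizable "Enter Serial" dialog
    // so the user can provide a serial number.
    license.ShowEnterSerialDialog();   // hypothetical method name
}
```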

More

We have discussed Web Deployment and Web Packaging quite a bit earlier; today I wanted to dive into web.config transformation. If you would like to check out the other topics, please read through the earlier blog posts below:

·         Web Deployment with VS 2010 and IIS

·         Web Packaging: Creating a Web Package using VS 2010

·         Web Packaging: Creating web packages using MSBuild

·         How does Web Deployment with VS 10 & MSDeploy Work?

·         Installing Web Packages using Command Line

Usually web applications go through a chain of server deployments before finally being deployed to the production environment. Some of these environments can be the developer box (Debug), a QA server, staging/pre-production, and production (Release). While transitioning between these environments, various settings of the web application residing in the web.config file change; these can be items like application settings, connection strings, debug flags, web service endpoints, etc.

VS10’s new web.config transformation model allows you to modify your web.config file in an automated fashion during deployment of your applications to various server environments. To help command-line-based deployments, web.config transformation is implemented as an MSBuild task behind the scenes, hence you can simply call it even outside the deployment realm.

I will try to go through the steps below to explain web.config transformation in detail:

1. Creating a “Staging” configuration on your developer box
2. Adding a “Staging” web.config transform file to your project
3. Writing simple transforms to change developer box connection string settings into “Staging” environment settings
4. Generating a new transformed web.config file for the “Staging” environment from the command line
5. Generating a new transformed web.config file for the “Staging” environment from the VS UI
6. Understanding the various available web.config Transforms and Locators
7. Using the web.config transformation toolset for config files in sub-folders within the project
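As a taste of what step 3 involves, a staging transform file that swaps out a developer connection string might look something like this (the connection string name and server below are made up for illustration):

```xml
<?xml version="1.0"?>
<!-- Web.Staging.config: applied on top of web.config during deployment.
     The connection string name and value are illustrative only. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MyDB"
         connectionString="Data Source=StagingSQL;Initial Catalog=MyDB;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

The xdt:Locator attribute picks which element to change (here, the one whose name attribute matches), and xdt:Transform says what to do to it (here, overwrite its attributes).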

More