
Sunday, June 29, 2008

I liked the approach the author used, and it clearly works great for scenarios where you want to display a single 'something is happening' message to the user while the page is being updated. However, after the operation completes and the indicator disappears, it is still up to the user to figure out where on the screen (i.e., which control was updated?) to look for the new changes. If your screens are relatively simple, or if you feel the update should be obvious, this is probably not an issue for you. Unfortunately, that isn't the case for the web application I am currently working on. A few of the screens contain a number of different sections, and users would like to see the indicator rendered over the control (most commonly a GridView or DetailsView) that is being updated.

To achieve this, I am using the UpdatePanelAnimationExtender control that is part of the AjaxControlToolkit (you can follow the directions here to install and get started using the toolkit). This control allows you to define the visual effects you want to run before (the OnUpdating animation) and after (the OnUpdated animation) the contents within the UpdatePanel have been refreshed. For the gmail-style progress indicator, the OnUpdating animation runs a piece of JavaScript that calculates the bounds of the GridView contained within the UpdatePanel and renders an HTML DIV in the upper right-hand corner of the GridView control. After the update occurs, the OnUpdated animation fires and runs another piece of JavaScript that hides the DIV.

You can view a live demo here and download the sample web application with all of the code included here. There is no code-behind for the page, so if you are interested in quickly viewing the markup and JavaScript for the page, it is posted below (you will have to change the connection string to point to your copy of the Northwind database ...)

More

For a better user experience, you would want your users to see a 'please wait' message while the browser renders the page completely. Here is one solution to the same problem.

Let’s create a master page called Site.master and a web content form called Demo.aspx.

<%@ Master Language="C#" AutoEventWireup="true" CodeFile="Site.master.cs" Inherits="Site" %>

<html>
<head runat="server">
    <title>Loading Demo</title>
</head>
<body>
    <form id="form1" runat="server">

        <div>
            <!-- Page Header will go here... -->
        </div>

        <div>
            <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server">
                <!-- Page-specific content will go here... -->
            </asp:ContentPlaceHolder>
        </div>

        <div>
            <!-- Page Footer will go here... -->
        </div>

    </form>
</body>
</html>

Demo.aspx is the web content form which fetches data from a database.

<!-- Code to fetch data from a database -->
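
As a rough illustration, the code-behind for Demo.aspx might look something like the sketch below (the "Northwind" connection string name, the query, and the OrdersGrid GridView are assumptions for this sample); the only point is that the data access is slow enough for the early-flushed message from the master page to be worth having.

using System;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Web.UI;

public partial class Demo : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // "Northwind" connection string and the OrdersGrid GridView are hypothetical.
            using (SqlConnection conn = new SqlConnection(
                       ConfigurationManager.ConnectionStrings["Northwind"].ConnectionString))
            using (SqlDataAdapter adapter = new SqlDataAdapter(
                       "SELECT OrderID, CustomerID, OrderDate FROM Orders", conn))
            {
                DataTable orders = new DataTable();
                adapter.Fill(orders);              // the slow part of the request
                OrdersGrid.DataSource = orders;
                OrdersGrid.DataBind();
            }
        }
    }
}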

To show the loading message, add the following code to the code-behind of the master page, Site.master.cs:

protected override void OnLoad(EventArgs e)
{
    if (!IsPostBack)
    {
        // Turn off buffering and push the loading message to the browser
        // immediately, before the content page has finished processing.
        // The div id must match the one the script at the end of the master page looks for.
        Response.Buffer = false;
        Response.Write("<div id='divLoadingMsg' style='top:2px;left:2px;width:83px;height:19px;text-align:right;background-color:orange;'>Please wait...</div>");
        Response.Flush();
    }
    base.OnLoad(e);
}

protected override void Render(HtmlTextWriter writer)
{
    if (!IsPostBack)
    {
        // Discard anything still buffered so the real page markup
        // renders cleanly after the early-flushed message.
        Response.Clear();
        Response.ClearContent();
    }
    base.Render(writer);
}

In the Site.master file, add the following JavaScript at the end of the file (just before the closing </body> tag):

<script type="text/javascript">
    try {
        var divLoadingMessage = document.getElementById("divLoadingMsg");
        if (divLoadingMessage != null && typeof(divLoadingMessage) != 'undefined')
        {
            divLoadingMessage.style.display = "none";
            divLoadingMessage.parentNode.removeChild(divLoadingMessage);
        }
    } catch(e) {}
</script>

That’s it. Now all of your pages that use Site.master will show a 'Please wait...' message while the page is loading. Of course, instead of a plain text message you can put a nice Web 2.0 loading image inside the divLoadingMsg div.
So how does this work?
The OnLoad event of the master page is called before any of the content web form's OnLoad events, so as soon as the master page loads, the div tag becomes visible. After the page has loaded completely, the script written at the end of the master page hides the div tag. Simple, isn't it? Hope this helps!

More

Many times we are faced with heavy or slow processing pages in our projects. If we cannot improve performance, how do we display some sort of information in the browser while the page is processing?

I used to have this problem when accessing slow databases or retrieving a large amount of information from them. Sometimes the queries lasted for 30 seconds or even more, and it's not very user-friendly to just sit there waiting. Many impatient users will often assume that the application has hung and close the browser before the query results are shown.

Let's imagine the following scenario. We have a form where the user specifies some sort of search pattern. He then clicks a button and the results are shown in a datagrid in the same page. Ok, solving this is easy...you just add some client scripting to the button that triggers the round-trip to the database so that on the client side a message will be displayed (something like: please wait...loading the data).

But what if you want to show the results in another aspx? By this I mean you access a page and on the page_load you retrieve the data and populate the datagrid? What if you have a complex form with lots of drop down lists that must be populated and you want to show some sort of info to your users while they are waiting for the page to appear on the browser? And, what if you want to do something like a dynamic bar to show that something is actually happening?

Well, I recently discovered a very useful method of the Response object. Because it functions like a Stream, you are able to say Flush several times during a request. What this means is that you don't have to wait for the entire page to be rendered to send the response down to the client. So, basically this looks like a good way of solving this problem: we create a mixed blend of HTML and JavaScript in the beginning of the Page_Load event, we flush it to the client and we proceed with our server-side processing. When we get the data we don't have to do anything special. The Page_Load event will end and the HTML will be rendered by ASP.NET. All we have to do is possibly hide the information we previously sent in order to display the page correctly.

I encapsulated the entire logic into a single class (C#):

using System;
using System.Web;

namespace WaitPage
{
    /// <summary>
    /// To use this class, simply insert the following line at the beginning of your Page_Load:
    /// Wait.Send_Wait_Info(Response);
    ///
    /// Also, in the HTML part of your web form, add the following line at the end of the head section:
    /// <script>Stop_Wait();</script>
    /// </summary>
    public class Wait
    {
        public Wait()
        {
        }

        const string MAIN_IMAGE = "images/logo.bmp";
        const int PROGRESS_BAR_SIZE = 10;                           // number of steps in your progress bar
        const string PROGRESS_BAR_STEP = "images/pro.bmp";          // image for idle steps
        const string PROGRESS_BAR_ACTIVE_STEP = "images/pro2.bmp";  // image for the active step

        public static void Send_Wait_Info(System.Web.HttpResponse Response)
        {
            // Markup for the logo and the row of progress-step images.
            Response.Write("<div id=\"mydiv\" align=\"center\">");
            Response.Write("<img src=\"" + MAIN_IMAGE + "\">");
            Response.Write("<div id=\"mydiv2\" align=\"center\">");
            for (int i = 1; i <= PROGRESS_BAR_SIZE; i++)
            {
                Response.Write("<img id='pro" + i.ToString() + "' src='" + PROGRESS_BAR_STEP + "'>&nbsp;");
            }
            Response.Write("</div>");
            Response.Write("</div>");

            // Script that animates the bar; the interval handle is kept in waitTimer
            // so that Stop_Wait() can actually clear it.
            Response.Write("<script language=javascript>");
            Response.Write("var counter=1;var countermax = " + PROGRESS_BAR_SIZE + ";var waitTimer;function ShowWait()");
            Response.Write("{ document.getElementById('pro' + counter).setAttribute(\"src\",\"" + PROGRESS_BAR_ACTIVE_STEP + "\"); if (counter == 1) document.getElementById('pro' + countermax).setAttribute(\"src\",\"" + PROGRESS_BAR_STEP + "\");else {var x=counter - 1; document.getElementById('pro' + x).setAttribute(\"src\",\"" + PROGRESS_BAR_STEP + "\");} counter++;if (counter > countermax) counter=1;}");
            Response.Write("function Start_Wait(){mydiv.style.visibility = \"visible\";waitTimer = window.setInterval(\"ShowWait()\",1000);}");
            Response.Write("function Stop_Wait(){ mydiv.style.visibility = \"hidden\";window.clearInterval(waitTimer);}");
            Response.Write("Start_Wait();</script>");

            // Push everything written so far down to the browser immediately.
            Response.Flush();
        }
    }
}

This code will simply send some HTML and JavaScript to the client, creating an animated progress-like bar. You can download the source code of a demo app showing its usage here.

The Stop_Wait script block must be inserted at the end of the head section. When the final response is sent, the bar is cleared and the JavaScript cycle that animates the bar is stopped. Pretty simple.
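
For reference, here is a minimal usage sketch in C# (the page class, ResultsGrid control, and RunSlowQuery method are made up for this example): call Send_Wait_Info at the top of Page_Load, do the slow work, and let the <script>Stop_Wait();</script> line at the end of the head section hide the bar once the finished page arrives.

using System;
using System.Data;
using System.Web.UI;
using WaitPage;

public partial class Results : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Early-flush the logo and progress bar before doing any heavy work.
        Wait.Send_Wait_Info(Response);

        // Hypothetical long-running query; stands in for the 30-second report.
        ResultsGrid.DataSource = RunSlowQuery();
        ResultsGrid.DataBind();
    }

    private DataTable RunSlowQuery()
    {
        System.Threading.Thread.Sleep(10000);   // simulate a slow database call
        return new DataTable();
    }
}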

Also, all the images presented can easily be changed. There are 4 constants defined in the class.

  • The main image
  • The size of the bar (how many steps)
  • The image representing a step
  • The image representing the active step

so you can customize this at will. Also, I believe the code will help you build your own mechanism. I just did it for the exercise so it's not very pretty :P

Hope this proves of some use to you.

More

Thursday, June 26, 2008

A series of articles on ASP.NET's membership, roles, and profile functionality.

·  Part 1 - learn about how the membership features make providing user accounts on your website a breeze. This article covers the basics of membership, including why it is needed, along with a look at the SqlMembershipProvider and the security Web controls.

·  Part 2 - master how to create roles and assign users to roles. This article shows how to set up roles, use role-based authorization, and display output on a page depending upon the visitor's roles.

·  Part 3 - see how to add the membership-related schemas to an existing database using the ASP.NET SQL Server Registration Tool (aspnet_regsql.exe).

·  Part 4 - improve the login experience by showing more informative messages for users who log on with invalid credentials; also, see how to keep a log of invalid login attempts.

·  Part 5 - learn how to customize the Login control. Adjust its appearance using properties and templates; customize the authentication logic to include a CAPTCHA.

·  Part 6 - capture additional user-specific information using the Profile system. Learn about the built-in SqlProfileProvider.

·  Part 7 - the Membership, Roles, and Profile systems are all built using the provider model, which allows for their implementations to be highly customized. Learn how to create a custom Profile provider that persists user-specific settings to XML files.

·  Part 8 - learn how to use the Microsoft Access-based providers for the Membership, Roles, and Profile systems. With these providers, you can use an Access database instead of SQL Server.

·  Part 9 - when working with Membership, you have the option of using .NET's APIs or working directly with the specified provider. This article examines the pros and cons of both approaches and examines the SqlMembershipProvider in more detail.

·  Part 10 - the Membership system includes features that automatically tally the number of users logged onto the site. This article examines and enhances these features.

·  Part 11 - many websites require new users to verify their email address before their account is activated. Learn how to implement such behavior using the CreateUserWizard control.
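
To give a flavor of the APIs the series covers, here is a small sketch in C# (it assumes the default SqlMembershipProvider and SqlRoleProvider are configured and that a "Members" role already exists):

using System.Web.Security;

public static class AccountSetup
{
    public static void CreateMemberAccount(string userName, string password, string email)
    {
        // Create the account through the Membership API (uses the configured provider).
        MembershipCreateStatus status;
        Membership.CreateUser(userName, password, email,
            "Favorite color?", "blue", true, out status);

        // If creation succeeded, place the new user into the hypothetical "Members" role.
        if (status == MembershipCreateStatus.Success && !Roles.IsUserInRole(userName, "Members"))
        {
            Roles.AddUserToRole(userName, "Members");
        }
    }
}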

 

More

These days, the biggest threat to an organization’s network security comes from its public Web site and the Web-based applications found there. Unlike internal-only network services such as databases—which can be sealed off from the outside via firewalls—a public Web site is generally accessible to anyone who wants to view it, making application security an issue. As networks have become more secure, vulnerabilities in Web applications have inevitably attracted the attention of hackers, both criminal and recreational, who have devised techniques to exploit these holes.

In fact, attacks upon the Web application layer now exceed those conducted at the network level, and can have consequences which are just as damaging. Some enlightened software architects and developers are becoming educated on these threats to application security and are designing their Web-based applications with security in mind. By "baking in" application security from the start of the development process, rather than trying to "brush it on" at the end, you are much more likely to create secure applications that will withstand hackers' attacks. However, even the most meticulous and security-aware C# or VB.NET code can still be vulnerable to attack if you neglect to secure the Web.config configuration files of your application. Incorrectly configured Web-based applications can be just as dangerous as those that have been incorrectly coded. To make matters worse, many configuration settings actually default to insecure values.

1. Custom Errors Disabled

When you disable custom errors as shown below, ASP.NET provides a detailed error message to clients by default.

Vulnerable configuration:
<configuration>
  <system.web>
    <customErrors mode="Off">
Secure configuration:
<configuration>
  <system.web>
    <customErrors mode="RemoteOnly">

In itself, knowing the source of an error may not seem like a risk to application security, but consider this: the more information a hacker can gather about a Web site, the more likely it is that he will be able to successfully attack it. An error message can be a gold mine of information to an attacker. A default ASP.NET error message lists the specific versions of ASP.NET and the .NET framework which are being used by the Web server, as well as the type of exception that was thrown. Just knowing which Web-based applications are used (in this case ASP.NET) compromises application security by telling the attacker that the server is running a relatively recent version of Microsoft Windows and that Microsoft Internet Information Server (IIS) 6.0 or later is being used as the Web server. The type of exception thrown may also help the attacker to profile Web-based applications; for example, if a SqlException is thrown, then the attacker knows that the application is using some version of Microsoft SQL Server.

You can build up application security to prevent such information leakage by modifying the mode attribute of the <customErrors> element to On or RemoteOnly. This setting instructs Web-based applications to display a nondescript, generic error message when an unhandled exception is generated. Another way to circumvent this application security issue is to redirect the user to a new page when errors occur by setting the defaultRedirect attribute of the <customErrors> element. This approach can provide even better application security because the default generic error page still gives away too much information about the system (namely, that it's using a Web.config file, which reveals that the server is running ASP.NET).

2. Leaving Tracing Enabled in Web-Based Applications

The trace feature of ASP.NET is one of the most useful tools that you can use to ensure application security by debugging and profiling your Web-based applications. Unfortunately, it is also one of the most useful tools that a hacker can use to attack your Web-based applications if it is left enabled in a production environment.

Vulnerable configuration:
<configuration>
  <system.web>
    <trace enabled="true" localOnly="false">
Secure configuration:
<configuration>
  <system.web>
    <trace enabled="false" localOnly="true">

When the <trace> element is enabled for remote users of Web-based applications ( localOnly="false"), any user can view an incredibly detailed list of recent requests to the application simply by browsing to the page trace.axd. If a detailed exception message is like a gold mine to a hacker looking to circumvent application security, a trace log is like Fort Knox! A trace log presents a wealth of information: the .NET and ASP.NET versions that the server is running; a complete trace of all the page methods that the request caused, including their times of execution; the session state and application state keys; the request and response cookies; the complete set of request headers, form variables, and QueryString variables; and finally the complete set of server variables.

A hacker looking for a way around application security would obviously find the form variable histories useful because these might include email addresses that could be harvested and sold to spammers, IDs and passwords that could be used to impersonate the user, or credit card and bank account numbers. Even the most innocent-looking piece of data in the trace collection can be dangerous in the wrong hands. For example, the APPL_PHYSICAL_PATH server variable, which contains the physical path of Web-based applications on the server, could help an attacker perform directory traversal attacks against the system.

The best way to prevent a hacker from obtaining trace data from Web-based applications is to disable the trace viewer completely by setting the enabled attribute of the <trace> element to false. If you have to have the trace viewer enabled, either to debug or to profile your application, then be sure to set the localOnly attribute of the <trace> element to true. That allows users to access the trace viewer only from the Web server and disables viewing it from any remote machine, increasing your application security.

3. Debugging Enabled

Web servers often are installed with default configurations that may not be secure. These insecurities include unnecessary samples and templates, administrative tools, and predictable locations of utilities used to manage servers. Without appropriate security risk management, this can lead to several types of attacks that allow hackers to gain complete control over the Web server.

Vulnerable configuration:
<configuration>
  <system.web>
    <compilation debug="true">
Secure configuration:
<configuration>
  <system.web>
    <compilation debug="false">

Like the first two application security vulnerabilities described in this list, leaving debugging enabled is dangerous because you are providing inside information to users who shouldn’t have access to it, and who may use it to attack your Web-based applications. For example, if you have enabled debugging and disabled custom errors in your application, then any error message displayed to a user of your Web-based applications will include not only the server information, a detailed exception message, and a stack trace, but also the actual source code of the page where the error occurred. Unfortunately, this configuration setting isn't the only way that source code might be displayed to the user.

Here's a story that illustrates why developers shouldn’t concentrate solely on one type of configuration setting to improve application security. In early versions of Microsoft’s ASP.NET AJAX framework, some controls would return a stack trace with source code to the client browser whenever exceptions occurred. This behavior happened whenever debugging was enabled, regardless of the custom error setting in the configuration. So, even if you properly configured your Web-based applications to display non-descriptive messages when errors occurred, you could still have unexpectedly revealed your source code to your end users if you forgot to disable debugging.

To disable debugging, set the value of the debug attribute of the <compilation> element to false. This is the default value of the setting but it’s safer to explicitly set the desired value rather than relying on the defaults to protect application security.
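
If you want a belt-and-braces check at runtime (not something the article calls for; just a hedged sketch), ASP.NET exposes the effective debug setting through HttpContext.IsDebuggingEnabled, so a health-check page or startup routine could warn when a production box has been left in debug mode:

using System.Web;

public static class DebugConfigCheck
{
    // Returns true when the effective <compilation debug="..."> setting is "true"
    // for the current request; handy in a startup sanity check or health page.
    public static bool IsDebugCompilationEnabled()
    {
        HttpContext context = HttpContext.Current;
        return context != null && context.IsDebuggingEnabled;
    }
}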

4. Cookies Accessible through Client-Side Script

In Internet Explorer 6.0, Microsoft introduced a new cookie property called HttpOnly. While you can set the property programmatically on a per-cookie basis, you also can set it globally in the site configuration.

Vulnerable configuration:
<configuration>
  <system.web>
    <httpCookies httpOnlyCookies="false">
Secure configuration:
<configuration>
  <system.web>
    <httpCookies httpOnlyCookies="true">

Any cookie marked with this property will be accessible only from server-side code, and not to any client-side scripting code like JavaScript or VBScript. This shielding of cookies from the client helps to protect Web-based applications from Cross-Site Scripting attacks. A hacker initiates a Cross-Site Scripting (also called CSS or XSS) attack by attempting to insert his own script code into the Web page to get around any application security in place. Any page that accepts input from a user and echoes that input back is potentially vulnerable. For example, a login page that prompts for a user name and password and then displays “Welcome back, <username>” on a successful login may be susceptible to an XSS attack.

Message boards, forums, and wikis are also often vulnerable to application security issues. In these sites, legitimate users post their thoughts or opinions, which are then visible to all other visitors to the site. But an attacker, rather than posting about the current topic, will instead post a message such as “<script>alert(document.cookie);</script>”. The message board now includes the attacker’s script code in its page code—and the browser then interprets and executes it for future site visitors. Usually attackers use such script code to try to obtain the user’s authentication token (usually stored in a cookie), which they could then use to impersonate the user. When cookies are marked with the HttpOnly property, their values are hidden from the client, so this attack will fail.

As mentioned earlier, it is possible to enable HttpOnly programmatically on any individual cookie by setting the HttpOnly property of the HttpCookie object to true. However, it is easier and more reliable to configure the application to automatically enable HttpOnly for all cookies. To do this, set the httpOnlyCookies attribute of the <httpCookies> element to true.
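
For the per-cookie approach just mentioned, a minimal C# sketch (the cookie name and value are made up) looks like this:

using System;
using System.Web;
using System.Web.UI;

public partial class Preferences : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Hypothetical preferences cookie; HttpOnly hides it from client-side
        // script (document.cookie) while server code can still read it.
        HttpCookie prefs = new HttpCookie("UserPrefs", "theme=blue");
        prefs.HttpOnly = true;
        Response.Cookies.Add(prefs);
    }
}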

5. Cookieless Session State Enabled

In the initial 1.0 release of ASP.NET, you had no choice about how to transmit the session token between requests when your Web application needed to maintain session state: it was always stored in a cookie. Unfortunately, this meant that users who would not accept cookies could not use your application. So, in ASP.NET 1.1, Microsoft added support for cookieless session tokens via use of the “cookieless” setting.

Vulnerable configuration:
<configuration>
  <system.web>
    <sessionState cookieless="UseUri">
Secure configuration:
<configuration>
  <system.web>
    <sessionState cookieless="UseCookies">

Web applications configured to use cookieless session state store the session token in the page URL rather than in a cookie. For example, the page URL might change from http://myserver/MyApplication/default.aspx to http://myserver/MyApplication/(123456789ABCDEFG)/default.aspx. In this case, 123456789ABCDEFG represents the current user's session token. A different user browsing the site at the same time would receive a completely different session token, resulting in a different URL, such as http://myserver/MyApplication/(ZYXWVU987654321)/default.aspx.

While adding support for cookieless session state did improve the usability of ASP.NET Web applications for users who would not accept cookies, it also had the side effect of making those applications much more vulnerable to session hijacking attacks. Session hijacking is basically a form of identity theft wherein a hacker impersonates a legitimate user by stealing his session token. When the session token is transmitted in a cookie, and the request is made on a secure channel (that is, it uses SSL), the token is secure. However, when the session token is included as part of the URL, it is much easier for a hacker to find and steal it. By using a network monitoring tool (also known as a “sniffer”) or by obtaining a recent request log, hijacking the user’s session becomes a simple matter of browsing to the URL containing the stolen unique session token. The Web application has no way of knowing that this new request with session token “123456789ABCDEFG” is not coming from the original, legitimate user. It happily loads the corresponding session state and returns the response back to the hacker, who has now effectively impersonated the user.

The most effective way to prevent these session hijacking attacks is to force your Web application to use cookies to store the session token. This is accomplished by setting the cookieless attribute of the <sessionState> element to UseCookies or false. But what about the users who do not accept cookies? Do you have to choose between making your application available to all users versus ensuring that it operates securely for all users? A compromise between the two is possible in ASP.NET 2.0. By setting the cookieless attribute to AutoDetect, the application will store the session token in a cookie for users who accept them and in the URL for those who won’t. This means that only the users who use cookieless tokens will still be vulnerable to session hijacking. That's often acceptable, given the alternative—that users who deny cookies wouldn't be able to use the application at all. It is ironic that many users disable cookies because of privacy concerns when doing so can actually make them more prone to attack.

6. Cookieless Authentication Enabled

Just as in the “Cookieless Session State Enabled” vulnerability discussed in part one, enabling cookieless authentication in your Web-based applications can lead to session hijacking and problems with application security.

Vulnerable configuration:
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms cookieless="UseUri">

Secure configuration:
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms cookieless="UseCookies">

When a session or authentication token appears in the request URL rather than in a secure cookie, an attacker with a network monitoring tool can get around application security, easily take over that session, and effectively impersonate a legitimate user. However, session hijacking has far more serious consequences for application security after a user has been authenticated. For example, online shopping sites generally utilize Web-based applications that allow users to browse without having to provide an ID and password. But when users are ready to make a purchase, or when they want to view their order status, they have to login and be authenticated by the system. After logging in, sites provide access to more sensitive data, such as a user's order history, billing address, and credit card number. Attackers hijacking this user’s session before authentication can't usually obtain much useful information. But if the attacker hijacks the session after authentication, all that sensitive information could be compromised.

The best way to prevent session hijacking with Web-based applications is to disable cookieless authentication and force the use of cookies for storing authentication tokens. This application security measure is added by changing the cookieless attribute of the <forms> element to the value UseCookies.

7. Failure to Require SSL for Authentication Cookies

Web-based applications use the Secure Sockets Layer (SSL) protocol to encrypt data passed between the Web server and the client. Using SSL for application security means that attackers using network sniffers will not be able to interpret the exchanged data. Rather than seeing plain text requests and responses, they will see only an indecipherable jumble of meaningless characters. You can require the forms authentication cookie of your Web-based applications to be sent over SSL by setting the requireSSL attribute of the <forms> element to true.

Vulnerable configuration:
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms requireSSL="false">

Secure configuration:
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms requireSSL="true">

The previous section discussed the importance of transmitting the authentication token in a cookie, rather than embedding it in the request URL. However, disabling cookieless authentication is just the first step towards securing the authentication token. Unless requests made to the Web server are encrypted, a network sniffer will still be able to read the authentication token from the request cookie. An attacker would still be able to hijack the user’s session.

At this point, you might be wondering why it is necessary with application security to disable cookieless authentication, since it is very inconvenient for users who won’t accept cookies, and seeing as how the request still has to be sent over SSL. The answer is that the request URL is often persisted regardless of whether or not it was sent via SSL. Most major browsers save the complete URL in the browser history cache. If the history cache were to be compromised, the user’s login credentials would be as well. Therefore, to truly secure the authentication token, you must require the authentication token to be stored in a cookie, and use SSL to ensure that the cookie be transmitted securely. By setting the requireSSL attribute of the <forms> element to true, ASP.NET Web-based applications will use a secure connection when transmitting the authentication cookie to the Web server. Note that IIS requires additional configuration steps to support SSL. You can find instructions to configure SSL for IIS on MSDN .

8. Sliding Expiration Used

All authenticated ASP.NET sessions have a timeout interval to protect the application security. The default timeout value is 30 minutes. So, 30 minutes after a user first logs into any of these Web-based applications, he will automatically be logged out and forced to re-authenticate his credentials.

Vulnerable configuration:
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms slidingExpiration="true">

Secure configuration:
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms slidingExpiration="false">

The slidingExpiration setting is an application security measure used to reduce risk to Web-based applications in case the authentication token is stolen. When set to false, the specified timeout interval becomes a fixed period of time from the initial login, rather than a period of inactivity. Attackers using a stolen authentication token have, at maximum, only the specified length of time to impersonate the user before the session times out. Because typical attackers of these Web-based applications have only the token, and don't really know the user's credentials, they can't log back in as the legitimate user, so the stolen authentication token is now useless and the application security threat is mitigated. When sliding expiration is enabled, as long as an attacker makes at least one request to the system every 15 minutes (or half of the timeout interval), the session will remain open indefinitely. This gives attackers more opportunities to steal information and cause other mischief in Web-based applications. To avoid this application security issue altogether, you can disable sliding expiration by setting the slidingExpiration attribute of the <forms> element to false.

9. Non-Unique Authentication Cookie Used

Over the last few sections, I hope I have successfully demonstrated the importance of application security and of storing your application’s authentication token in a secure cookie value. But a cookie is more than just a value; it is a name-value pair. As strange as it seems, an improperly chosen cookie name can create an application security vulnerability just as dangerous as an improperly chosen storage location.

Vulnerable configuration:
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms name=".ASPXAUTH">

Secure configuration:
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms name="{abcd1234…}">

The default value for the name of the authentication cookie is .ASPXAUTH. If you have only one Web-based application on your server, then .ASPXAUTH is a perfectly secure choice for the cookie name. In fact, any choice would be secure. But, when your server runs multiple ASP.NET Web-based applications, it becomes critical to assign a unique authentication cookie name to each application. If the names are not unique, then users logging into any of the Web-based applications might inadvertently gain access to all of them. For example, a user logging into the online shopping site to view his order history might find that he is now able to access the administration application on the same site and change the prices of the items in his shopping cart.

The best way to ensure that all Web-based applications on your server have their own set of authorized users is to change the authentication cookie name to a unique value. Globally Unique Identifiers (GUIDs) are excellent choices for application security since they are guaranteed to be unique. Microsoft Visual Studio helpfully includes a tool that will automatically generate a GUID for you. You can find this tool in the Tools menu with the command name “Create GUID”. Copy the generated GUID into the name attribute of the <forms> element in the configuration file.
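
If you are not in Visual Studio, a GUID can also be generated with a couple of lines of C#. This is a one-off sketch: run it once and paste the output into the name attribute, rather than generating a new name at runtime, which would invalidate existing authentication cookies on every application restart.

using System;

class GenerateCookieName
{
    static void Main()
    {
        // Prints something like 8f4ab3e0-2c1d-4d3e-9c64-0a1b2c3d4e5f; copy the
        // value into the <forms name="..."> attribute of the application.
        Console.WriteLine(Guid.NewGuid().ToString());
    }
}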

10. Hardcoded Credentials Used

Vulnerable configuration:
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms>
        <credentials>

        </credentials>
      </forms>

Secure configuration:
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms>
      </forms>

A fundamental difficulty of creating software is that the environment in which the application will be deployed is usually not the same environment in which it is created. In a production environment, the operating system may be different, the hardware on which the application runs may be more or less powerful, and test databases are replaced with live databases. This is an issue for creating Web-based applications that require authentication because developers and administrators often use test credentials to test the application security. This raises the question: Where do the test credentials come from?

For convenience, and to keep developers from spending time creating a credential store used solely for test purposes (one that would subsequently be discarded when the application went to production), Microsoft added a section to the Web.config file that you can use to quickly add test users to Web-based applications. For each test user, the developer adds an element to the configuration file with the desired user ID and password, as shown below:

<authentication mode="Forms">
  <forms>
    <credentials>
      <user name="bob" password="bob"/>
      <user name="jane" password="Elvis"/>
    </credentials>
  </forms>
</authentication>

While undeniably convenient for development purposes, this was never intended for use in a production environment. Storing login credentials in plaintext in a configuration file is simply not secure. Anyone with read access to the Web.config file could access the authenticated Web application. It is possible to store the SHA-1 or MD5 hash of the password value, rather than storing the password in plaintext. This is somewhat better, but it is still not a secure solution. Using this method, the user name is still not encrypted. First, providing a known user name to a potential attacker makes it easier to perform a brute force attack against the system. Second, there are many reverse-lookup databases of SHA-1 and MD5 hash values available on the Internet. If the password is simple, such as a word found in a dictionary, then it is almost guaranteed to be found in one of these hash dictionaries.
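
If you do keep test credentials around during development, the hashing mentioned above can be produced with the FormsAuthentication helper. This is a sketch for a development-time console utility, not a recommendation for production, and the <credentials> element would also need passwordFormat="SHA1":

using System;
using System.Web.Security;   // requires a reference to System.Web.dll

class HashTestPassword
{
    static void Main()
    {
        // Emits the hex-encoded SHA-1 hash that belongs in the password attribute
        // when <credentials passwordFormat="SHA1"> is used instead of plaintext.
        string hash = FormsAuthentication.HashPasswordForStoringInConfigFile("Elvis", "SHA1");
        Console.WriteLine(hash);
    }
}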

The most secure way to store login credentials is to not store them in the configuration file. Remove the <credentials> element from your Web.config files in production applications.

You’re Not Out of the Woods Yet

Now that you’ve finished reading the top ten list, and you've checked your configuration settings, your applications are secure forever, right? Not just yet. Web.config files operate in a hierarchical inheritance manner. Every Web.config file inherits values from any Web.config file in a parent directory. That Web.config file in turn inherits values from any Web.config file in its parent directory, and so on. All Web.config files on the system inherit from the global configuration file called Machine.config located in the .NET framework directory. The effect of this is that the runtime behavior of your application can be altered simply by modifying a configuration file in a higher directory.

This can sometimes have unexpected consequences. A system administrator might make a change in a configuration file in response to a problem with a completely separate application, but that change might create a security vulnerability in your application. For example, a user might report that he is not able to access the application without enabling cookies in his browser. The administrator, trying to be helpful, modifies the global configuration file to allow cookieless authentication for all applications.

To keep your application-specific settings from being unexpectedly modified, the solution is to never rely on default setting values. For example, debugging is disabled by default in configuration files. If you’re examining the configuration file for your application and you notice that the debug attribute is blank, you might assume that debugging is disabled. But it may or may not be disabled—the applied value depends on the value in parent configuration settings on the system. The safest choice is to always explicitly set security-related values in your application’s configuration.

Ultimately, securing a Web application requires the efforts and diligence of many different groups, from quality assurance to security operations. However, the developer who codes the application itself has an inherent responsibility to instill security into the application from the beginning of the development process. By making security-conscious decisions from the beginning, developers can create applications that users can trust with their confidential information and that are capable of withstanding attacks launched by hackers. Sometimes that process can be as simple as making the right decisions when configuring your application.

One of the most useful registry hacks I use on a regular basis is one Robert McLaws wrote, the “ASP.NET 2.0 Web Server Here” Shell Extension. This shell extension adds a right click menu on any folder that will start WebDev.WebServer.exe (aka Cassini) pointing to that directory.

I recently had to repave my work machine and I couldn’t find the .reg file I created that would recreate this shell extension. When I brought up Robert’s page, I noticed that the settings he has are out of date for Visual Studio 2008.

Here are the updated registry settings for VS2008 (note: edit the registry at your own risk; this only has the "works on my machine" seal of approval).

32 bit (x86)

Windows Registry Editor Version 5.00
 
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2008 WebServer]
@="ASP.NET Web Server Here"
 
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2008 WebServer\command]
@="C:\\Program Files\\Common Files\\microsoft shared\\DevServer
\\9.0\\Webdev.WebServer.exe /port:8080 /path:\"%1\""

64 bit (x64)

Windows Registry Editor Version 5.00
 
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2008 WebServer]
@="ASP.NET Web Server Here"
 
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2008 WebServer\command]
@="C:\\Program Files (x86)\\Common Files\\microsoft shared\\DevServer
\\9.0\\Webdev.WebServer.exe /port:8080 /path:\"%1\""

For convenience, here is a zip file containing the reg files. The x64 one I tested on my machine. The x86 one I did not. If you installed Visual Studio into a non-standard directory, you might have to change the path within the registry file.

 

Wednesday, June 25, 2008

The SQL injection attack that has come up recently is affecting several ASP and ASP.NET applications. Although the only way to prevent an attack is to validate the code, hopefully these posts will provide some direction. I have included some links that discuss this in more detail.
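
Since validating the code is the real fix, here is a minimal C# sketch of the parameterized-query pattern that defuses these attacks (the connection string name, table, and column names are made up for the example); the user-supplied text arrives as a parameter value and is never executed as SQL:

using System.Configuration;
using System.Data;
using System.Data.SqlClient;

public static class ProductSearch
{
    public static DataTable FindByName(string searchTerm)
    {
        DataTable results = new DataTable();
        using (SqlConnection conn = new SqlConnection(
                   ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString))
        using (SqlCommand cmd = new SqlCommand(
                   "SELECT ProductID, ProductName FROM Products WHERE ProductName LIKE @name", conn))
        {
            // The user-supplied value is passed as data, not concatenated into the SQL text,
            // so input such as ';DECLARE ... CAST( ...' cannot change the statement.
            cmd.Parameters.Add("@name", SqlDbType.NVarChar, 40).Value = "%" + searchTerm + "%";
            using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
            {
                adapter.Fill(results);   // Fill opens and closes the connection as needed
            }
        }
        return results;
    }
}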

Here's a list of additional reading:

Building Secure ASP.NET Applications - Authentication, Authorization, and Secure Communication.
http://www.microsoft.com/downloads/details.aspx?FamilyID=055ff772-97fe-41b8-a58c-bf9c6593f25e&DisplayLang=en

Improving Web Application Security - Threats and Countermeasures
http://www.microsoft.com/downloads/details.aspx?FamilyId=E9C4BFAA-AF88-4AA5-88D4-0DEA898C31B9&displaylang=en

This link discusses the issue from an ASP.NET perspective: SQL Injection Attacks:
http://msdn2.microsoft.com/en-us/library/aa302392.aspx#secnetch12_sqlinjectionattacks

Sample code provided by Microsoft to validate SQL statements.
http://blogs.iis.net/nazim/archive/2008/04/28/filtering-sql-injection-from-classic-asp.aspx

Log parser examples
http://weblogs.asp.net/steveschofield/archive/2008/04/26/clarification-on-iis-reported-sql-injection-exploits.aspx

YouTube
http://youtube.com (search for "sql injection"). This will show several videos posted on how people are doing this.

To do a quick find, type the following at a command prompt:

findstr "CAST(" ex080622.log > ss.txt   (change the log file date)

Note that the 'CAST' search string is case sensitive.

More

If you support a Classic ASP or ASP.NET application, you have probably noticed an increase in SQL injection attempts. Microsoft has released an updated URLScan 3.0. Here is the link to download URLScan version 3 beta for 32 bit or 64 bit. You can read about it on the blogs by Wade Hilmo and the Nazim security blog.

I've been kicking the tires on URLScan 3.0. One thing to remember when applying custom rules is to add them to the RuleList option. Search for RuleList in urlscan.ini and put in the name of your rule, for example RuleList=SQL Injection Raw. Double quotes aren't needed around rules with spaces in the name. When you apply a custom rule per the docs, make sure it shows up as started in the URLScan logs in c:\windows\system32\inetsrv\urlscan\logs.

Here is the log output showing the rule has been loaded. Notice that it matches the rule defined in our example below.

[06-23-2008 - 00:35:58] The following extensions will not be allowed: .exe, .bat, .cmd, .com, .htw, .ida, .idq, .htr, .idc, .shtm, .shtml, .stm, .printer, .ini, .log, .pol, .dat, .config
[06-23-2008 - 00:35:58] The following URL sequences will be denied: .., ./, \, :, %%, &
[06-23-2008 - 00:35:58] The following Query String sequences will be denied:
[06-23-2008 - 00:35:58] The following rules are active: SQL Injection Raw

Here is an example SQL injection rule:

[SQL Injection Raw]
AppliesTo=.asp,.aspx
DenyDataSection=SQL Injection Raw Strings
ScanUrl=0
ScanAllRaw=1
ScanQueryString=0
ScanHeaders=

[SQL Injection Raw Strings]
--
@          ; also catches @@
alter
cast
create
declare
delete
drop
exec       ; also catches execute
fetch
insert
kill
select

One last thing to think about is which part of the request you'll choose to have scanned. The example rule chooses ScanAllRaw.

ScanUrl=0
ScanAllRaw=1
ScanQueryString=0
ScanHeaders=

Testing can help determine which characters to add to your custom rule. To see if your rule is active and blocking requests, look in the URLScan logs. Also, if something is rejected, you can look in your IIS logs; the Rejected-By-UrlScan entries will be there. Here are a couple of examples.

URLScan example log entry
[06-24-2008 - 00:35:54] Client at 1.1.1.1: Rule 'SQL Injection Raw' detected string '--' in the header strings. Request will be rejected.  Site Instance='123456', Raw URL='/examplePage.aspx'

Example IIS Log entry 
ex080624.log:2008-06-24 00:00:03 192.168.0.98 GET /Rejected-By-UrlScan ~/examplePage.aspx 80 - 192.168.0.99 Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1;+.NET+CLR+1.1.4322;+.NET+CLR+2.0.50727;+Creative+ZENcast+v2.00.13) http://example.com  404 0 2 1864 571 46

Log Parser query to detect and list rejected URLs - change the FROM clause to point at your log file:
LogParser.exe -i:iisw3c "SELECT count(*) as hitCount, cs-uri-stem,cs-uri-query FROM <example.com> WHERE cs-uri-stem like '%Reject%' GROUP BY cs-uri-stem,cs-uri-query ORDER BY hitCount desc" -o:csv

More