
Friday, July 31, 2009

There is no doubt that there are some drawbacks to Linq to Sql. One of them is that the Sql statement is built dynamically, so it has to be parsed and compiled each time you run it. Fortunately, .Net 3.5 has a solution for this problem. The System.Data.Linq namespace includes a class named CompiledQuery which is responsible for caching the compiled version of a Linq to Sql query. This class has a static method called Compile which takes a Func<T,S,R> delegate. In this signature, T is the type of a DataContext (e.g. HRMDataContext), S is the type of the parameter used to filter the query, and R is the type of the returned result. Needless to say, it must be IQueryable<T>.

In this article we will see how to pre-compile a query, its limitations and how it really improves the speed of a Linq query.

To pre-compile a query we must define a public static field of type Func<T,S,R> . What we assign to this field is the result of CompiledQuery.Compile method:

public static Func<testDataContext , SearchCriteria, IQueryable<Person>> FilteredResult …

In the above line, testDataContext is the type of a DataContext inside the project, and SearchCriteria is the type of a class or struct designed for passing search criteria to the Compile method. For example, suppose that in testDataContext we have a table named Person. We have also defined a class (or struct) named SearchCriteria as below:

public class SearchCriteria
{
    public int id { set; get; }
    public string FirstName { set; get; }
    public string LastName { set; get; }
}

Now, to get these definitions to work with a pre-compiled query, we can write a statement like this:

public static Func<testDataContext, SearchCriteria, IQueryable<Person>> FilteredResult =
    System.Data.Linq.CompiledQuery.Compile(
        (testDataContext dc, SearchCriteria criteria) =>
            from p in dc.Persons
            where (p.id == criteria.id || criteria.id == -1)
                && (p.FirstName == criteria.FirstName || criteria.FirstName == string.Empty)
                && (p.LastName == criteria.LastName || criteria.LastName == string.Empty)
            select p
    );

That’s it. At this point, FilteredResult contains a pre-compiled query and can be used this way:

testDataContext dc = new testDataContext();
SearchCriteria criteria = new SearchCriteria();
criteria.id = -1;
criteria.FirstName = "Bill";
criteria.LastName = "Gates";
List<Person> p = FilteredResult(dc, criteria).ToList();

The above code creates instances of testDataContext (dc) and SearchCriteria (criteria) and passes them to FilteredResult as arguments. The result of FilteredResult is an IQueryable<Person>, so we call the .ToList() extension method to get a List<Person>.

One upsetting point about pre-compiled queries is that you cannot use a stored procedure to build a compiled query. In the above Linq to Sql code, if you write "from C in usp_GetPerson() …" you will get an error indicating that stored procedures are not allowed.

Now let's see how much pre-compilation can help. I have written a small console application that runs two versions of a query (one compiled and one not) against a database 1,000 times each. The time needed to run each query is as follows:

The compiled query takes 0 minutes, 1 second and 62 milliseconds.

The regular query takes 0 minutes, 13 seconds and 328 milliseconds.

As you can see, the compiled query is much faster than the regular query. Notice that in a Linq model nothing really happens until we iterate over the result of the query, so I have written a foreach statement to iterate over the results. I have also written a small query at the beginning of the program to make the Linq manager open a connection to Sql Server. If we do not do this, the compiled query will, surprisingly, take longer!

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.Linq;

namespace CompiledQuery
{
    class Program
    {
        public static Func<testDataContext, SearchCriteria, IQueryable<Person>> FilteredResult =
            System.Data.Linq.CompiledQuery.Compile(
                (testDataContext dc, SearchCriteria criteria) =>
                    from p in dc.Persons
                    where (p.id == criteria.id || criteria.id == -1)
                        && (p.FirstName == criteria.FirstName || criteria.FirstName == string.Empty)
                        && (p.LastName == criteria.LastName || criteria.LastName == string.Empty)
                    select p
            );

        static void Main(string[] args)
        {
            testDataContext dc = new testDataContext();
            SearchCriteria criteria = new SearchCriteria();
            IQueryable<Person> Q = null;

            // The following code makes the Linq manager open a connection to Sql Server
            var init = from p in dc.Persons select p;
            foreach (Person person in init) ;

            criteria.id = -1;
            criteria.FirstName = "Bill";
            criteria.LastName = "Gates";
            DateTime BeginTime = DateTime.Now;

            for (int i = 0; i < 1000; i++)
            {
                Q = FilteredResult(dc, criteria);
                foreach (Person person in Q) ;
            }

            DateTime EndTime = DateTime.Now;
            TimeSpan Diff1 = EndTime - BeginTime;

            BeginTime = DateTime.Now;

            for (int i = 0; i < 1000; i++)
            {
                Q = from p in dc.Persons
                    where (p.id == criteria.id || criteria.id == -1)
                        && (p.FirstName == criteria.FirstName || criteria.FirstName == string.Empty)
                        && (p.LastName == criteria.LastName || criteria.LastName == string.Empty)
                    select p;
                foreach (Person person in Q) ;
            }

            EndTime = DateTime.Now;
            TimeSpan Diff2 = EndTime - BeginTime;

            Console.WriteLine("Compiled query takes : {0}:{1}:{2}", Diff1.Minutes, Diff1.Seconds, Diff1.Milliseconds);
            Console.WriteLine("Regular query takes {0}:{1}:{2}", Diff2.Minutes, Diff2.Seconds, Diff2.Milliseconds);

            Console.ReadKey();
        }
    }
}

You can download the source of a full sample project from here.

 

Over the past few weeks we’ve been showcasing some amazing articles, tools, and videos in our Resources section. Our twitter followers have gotten a taste of these resources and have let us know they are really enjoying them! Today I would like to share some tools with you that focus on accessibility, a very important sector of user experience.

Improve Accessibility

Here are some tools you may find useful for increasing accessibility, a constant battle that UX designers have to face:

WAVE: Web Accessibility Evaluation Tool

WAVE is a free web accessibility evaluation tool provided by WebAIM. It is used to aid humans in the web accessibility evaluation process. Rather than providing a complex technical report, WAVE shows the original web page with embedded icons and indicators that reveal the accessibility of that page.

Image Analyzer

This service examines all images found on a web page and checks for accessibility issues. The width, height, alt, and longdesc attributes are examined for appropriate values. Learning from the errors this service points out can help you fix accessibility issues.

Color Blindness Simulator

Use this Colour Blindness Simulator to reveal how your images or websites may appear to users with a variety of colour blindness conditions. Approximately one in twenty people have some form of color blindness that prevents them from seeing color the same way that people without any color vision deficiencies do.

Improve Readability

Readability is key to accessibility, but it is hard to test. This online tool evaluates text against different reading scales and also suggests which complex sentences to take another look at. Great for writers who tend to make their prose just a little too complex.

Color Contrast

AccessColor tests the color contrast and color brightness between the foreground and background of all elements in the DOM to make sure that the contrast is high enough for people with visual impairments. Assuring that you provide enough color contrast between foreground and background colors takes time and we hope that this tool will help web developers to build accessible websites by visually flagging the section(s) of a page with problematic color combinations. AccessColor will find the relevant colour combinations within your HTML and CSS documents rather than requiring you to find each value to input yourself in order to test the contrast between each colour combination.

User Testing Tools:

It is always fun to try out new user testing tools. These tools can often lead to insight on accessibility issues you may not know that you have. I have gathered a few of the recent ones I’ve found here for you to try out yourself. Be sure to let us know in the comments what you think of them!

User Testing

UserTesting.com offers low-cost usability testing. A fast and cost-efficient way to get your users' feedback is always a great resource to add to your toolbelt as a UX enthusiast.

Usabilla

Usabilla allows you to collect visual feedback from your website in five minutes. Offering a transparent approach to visual feedback, this service is another tool that could be useful in testing.

Open Hallway

Open Hallway offers three things: creation of usability test scenarios, the ability to invite testers through a link, and the option to watch and hear real invited people using your website or application.

 

GhostDoc by SubMain is a great tool for documenting source code. It is able to generate documentation from method and property names: it analyses the names and then offers an appropriate description. GhostDoc is also able to use the description of the interface member that the current property or method implements. And it takes only one simple key press.

Well, GhostDoc is a great tool and I think it makes developers' lives easier. But not always. Don't blame GhostDoc – the real problem is (as always) between the chair and the keyboard!

How to abuse GhostDoc

GhostDoc is like every good invention – one can use it and abuse it. Like a gun – you can use it to protect yourself, your home or your country, or you can use it to kill someone just for fun. Here is one real-life scenario.

One of the most painful things a developer can do is document code with GhostDoc's automatically generated text without worrying whether the documentation helps other developers or not. Imagine a class that is full of auto-generated documentation like this.

/// <summary>
/// Sets the employee phone.
/// </summary>
/// <param name="value">The value.</param>
public void SetEmployeePhone(string value)
{
    phoneLabel.Text = value;
}

Looking at the method name I get exactly the same information the documentation gives me - this method sets the employee phone somewhere. But what is the phone here? Is it a phone number, a phone model or a phone type? We know only that there is a phone and that it has a value. But where does this value go? Who knows…
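
For contrast, here is a sketch of what a more useful comment for this method might look like (the details about the phone number are invented purely for illustration, not taken from the original class):

/// <summary>
/// Displays the given phone number on the employee details view.
/// </summary>
/// <param name="value">The employee's phone number, already formatted for display;
/// it is only shown in the UI and is not validated or persisted here.</param>
public void SetEmployeePhone(string value)
{
    phoneLabel.Text = value;
}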

How does bad documentation affect development?

For this short method I was able to ask three questions. Let's suppose we have 20 methods documented like this – that makes 60 questions. This number of open questions points to a serious problem with the understandability of the documentation. It doesn't matter what is written in the documentation – whether it is notes like the above or some spicy yellow-press news about the stars – it is worthless and useless for other developers.

Developers who actively abuse GhostDoc are a real pain in the ass for those developers who respect their co-workers and want to write quality software. Faulty or crappy code documentation is not only useless but also dangerous, because it slows down the debugging process – if a developer notices that documentation is available, he or she starts reading it. Every such piece of crappy documentation stops the reader and demands attention.

How to avoid bad documentation?

There is no good way unless one writes brilliant GREP or PowerShell scripts to detect weird documentation blocks by pattern. Besides automatic discovery of crappy documentation you can also use the most expensive resource – manpower. To fight against literal anarchy in code documentation I am considering the following process:

  1. Establish rules for code documentation.
  2. Make these rules available to all developers.
  3. Look also at code documentation during code reviews.
  4. Allow developers to remove crappy documentation as soon as they find one.
  5. Make compiler generate warnings or errors when documentation is missing.

 

Monday, July 27, 2009

Ah yes, automatic properties.  Isn't it great that you don't have to do all of that extra typing now?  (Well, you wouldn't be doing it anyway with ReSharper, but that's beside the point.) For some reason, they've never sat well with me; they just seem like a pretty useless feature and, not only that, I think they severely impact readability.

Quick, are these members in a class, an abstract class, or an interface?

int Id { get; set; }
 
string FirstName { get; set; }
 
string LastName { get; set; }

Can't tell!  It's happened to me more than once where I've been working in a file, trying to add some logic to a getter and getting weird errors only to realize that I was working in the interface or abstract class instead of the concrete class.  Of course I don't normally write many non-public properties, but it's easy to make the mistake of missing the access modifier if you're working furiously and tabbing back and forth, especially if the file is long (or your screen resolution is low).

Look again:

public interface IEmployee
{
        int Id { get; set; }
 
        string FirstName { get; set; }
 
        string LastName { get; set; }
}
 
public class Employee
{
        int Id { get; set; }
 
        string FirstName { get; set; }
 
        string LastName { get; set; }
}
 
public abstract class AbstractEmployee
{
        int Id { get; set; }
 
        string FirstName { get; set; }
 
        string LastName { get; set; }
}

It's even more confusing when you're working within an abstract class and there's implementation mixed in.  Not only that, it looks like a complete mess as soon as you have to add custom logic to the getter or setter of a property (and add a backing field); it just looks so...untidy.  I'm also going to stretch a bit and postulate that it may also encourage breaking sound design in scenarios where junior devs don't know any better since they won't think to use a private field when the situation calls for one (just out of laziness).
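
To make that last point concrete, here is a rough sketch (the class and member names are invented for illustration) of what an automatic property turns into once the setter needs custom logic:

public class Employee
{
    // The automatic property has to be replaced with a backing field plus a full property
    private string _lastName;

    public string LastName
    {
        get { return _lastName; }
        set
        {
            // Custom logic in the setter forces the expansion
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("Last name cannot be empty.");
            _lastName = value;
        }
    }
}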

I get that it's a bit more work (yeah, maybe my opinion would be different if I had to type them out all the time, too - but I don't :P), but seriously, if you're worried about productivity, then I really have to ask why you haven't installed ReSharper yet (I've been using it since 2.0 and can't imagine developing without it).  It's easy to mistake one for the other if you're just flipping through tabs really fast.  I've sworn off using them and I've been sticking to my guns on this.

There are three general arguments that I hear, from time to time, from the opposition:

  1. Too many keystrokes, man!  With R#, you simply define all of your private fields and then ALT+INS and in fewer than 5 or 6 keystrokes, you've generated all of your properties.  I would say even fewer keystrokes than using automatic properties, since it's way easier to just write the private field and generate the property using R#.  If you're worried about productivity and keystrokes and you're not using R#, then what the heck are you waiting for? 
  2. Too much clutter, takes up too much space! If that's the case, just surround it in a region and don't look at it.  I mean, if you really think about it, using KNF instead of Allman style bracing throughout your codebase would probably reduce whitespace clutter and raw LOC and yet...
  3. They make the assembly heavier!  Blah!  Not true!  Automatic properties are a compiler trick.  They're still there, just uglier and less readable (in the event that you have to extract code from an assembly (and I have - accidentally deleted some source, but still had the assembly in GAC!)).  In this case, the compiler generates the following fields:

<FirstName>k__BackingField
<Id>k__BackingField
<LastName>k__BackingField

Depending on the project, there may also be unforeseen design scenarios where you may want to get/set a private field by reflection to bypass logic implemented in a property (I dunno, maybe in a serialization scenario?).
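
As a hedged sketch of that reflection scenario (the backing field name follows the compiler-generated pattern listed above, but the surrounding class is invented for illustration):

using System;
using System.Reflection;

public class Employee
{
    public string FirstName { get; set; }
}

class Program
{
    static void Main()
    {
        var employee = new Employee();

        // Automatic properties compile down to fields named "<PropertyName>k__BackingField"
        FieldInfo backingField = typeof(Employee).GetField(
            "<FirstName>k__BackingField",
            BindingFlags.Instance | BindingFlags.NonPublic);

        // Writes the field directly, bypassing whatever logic a hand-written setter might contain
        backingField.SetValue(employee, "Bill");
        Console.WriteLine(employee.FirstName); // Bill
    }
}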

So my take?  Just don't use them, dude!

 

Introduction:

In this post, I will explain how we can get a collection filled with country names using .NET, without using any database.

It is a regular task which we have all done as developers at some point, but the difference is that we used a database table or an XML file to hold the country names. The .NET Framework, however, provides us with all the country information in the Globalization namespace.

So, here is the code for that

Dictionary<string,string> objDic = new Dictionary<string,string>();
 
foreach (CultureInfo ObjCultureInfo in CultureInfo.GetCultures(CultureTypes.SpecificCultures))
{
    RegionInfo objRegionInfo = new RegionInfo(ObjCultureInfo.Name);
    if (!objDic.ContainsKey(objRegionInfo.EnglishName))
    {
        objDic.Add(objRegionInfo.EnglishName, objRegionInfo.TwoLetterISORegionName.ToLower());
    }
}
 
var obj = objDic.OrderBy(p => p.Key );
foreach (KeyValuePair<string,string> val in obj)
{
    ddlCountries.Items.Add(new ListItem(val.Key, val.Value));
}

 

Explanation:

Notice that we have used a typed dictionary object to store the names and values of the countries.

Then, we use CultureInfo.GetCultures to get the culture information for all specific cultures.

Later on, we use RegionInfo to get the regional information for that culture.

Since there can be multiple cultures for the same country, there is a condition which checks whether the country has already been added to the dictionary. If not, we simply add the country name and the two-letter country code. (Note: we are treating the two-letter country code as the value.)

After the loop, I used some Linq to sort the country names, and then iterated through the returned object to add the values to the drop-down list.

That's it. Now you are not only limited to show the English name of the country but you can also show the native name. For example, the name of my country in English is "Islamic Republic of Pakistan" but the native name is پاکستان.

Also, you can get further country information using RegionInfo, such as the currency symbol, the native name, and the three-letter ISO region name.

Some developers are in the habit of using a country ID along with the country name. If they still want some ID to save the country information, they can use the GeoId property of RegionInfo.
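
As a small sketch of that idea, here is the loop from above, storing GeoId (converted to a string, since the dictionary above maps string to string) as the value instead of the two-letter code:

foreach (CultureInfo objCultureInfo in CultureInfo.GetCultures(CultureTypes.SpecificCultures))
{
    RegionInfo objRegionInfo = new RegionInfo(objCultureInfo.Name);
    if (!objDic.ContainsKey(objRegionInfo.EnglishName))
    {
        // GeoId is a numeric identifier for the country/region
        objDic.Add(objRegionInfo.EnglishName, objRegionInfo.GeoId.ToString());
    }
}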

 

Friday, July 24, 2009

In this post, I will explain how we can show a Loading message in ASP.NET AJAX without using Update Progress. Now someone may ask: why do I want to skip Update Progress?

Well, there can be several reasons for this. First of all, you have to work on every single page, and on every update panel, to get the Update Progress working.

There are basically three methods of meeting this requirement.

  1. Using Master Pages: A very smart way, but not all of us are using them... right?
  2. Extending Page Class: A little harder, but to me it is a very elegant way.
  3. Extending Script Manager: Similar to the page class one, but the implementation is comparatively simple.

The Basics:

Before I start exploring the different approaches, let me first lay the groundwork by showing what is involved in creating a loading message.

I want the background to be grayed out with a simple loading text displayed on top. For that we need a style sheet, which will be applied to the loading message div. Create a stylesheet and call it style.css.


.ModalProgressContainer
{
    z-index: 10005;
    position: fixed;
    cursor: wait;
    top: 0%;
    background-color: #ffffff;
    filter: alpha(opacity=50);
    opacity: 0.5;
    -moz-opacity: .5;
    height: 100%;
    width: 100%;
    text-align: center;
}
.ModalProgressContent
{
    padding: 10px;
    border: solid 0px #000040;
    font-weight: bold;
    background-color: #ffffff;
    margin-top: 300px;
}

Now let's read and understand the following script.


var prm = Sys.WebForms.PageRequestManager.getInstance();
prm.add_initializeRequest(InitializeRequest);
prm.add_endRequest(EndRequest);

// ----------------------------- //
// The script below goes in a separate JS file; create a JS file, call it ajaxload.js and save the following script in it

function InitializeRequest(sender, args) {
    if (document.getElementById('ProgressDiv') != null)
        $get('ProgressDiv').style.display = 'block';
    else
        createControl();
}

function EndRequest(sender, args) {
    if (document.getElementById('ProgressDiv') != null)
        $get('ProgressDiv').style.display = 'none';
    else
        createControl();
}

function createControl() {
    var parentDiv = document.createElement("div");
    parentDiv.setAttribute("class", "ModalProgressContainer");
    parentDiv.setAttribute("Id", "ProgressDiv");

    var innerContent = document.createElement("div");
    innerContent.setAttribute("class", "ModalProgressContent");

    var img = document.createElement("img");
    img.setAttribute("src", "/Images/Images/Loading.gif");

    var textDiv = document.createElement("div");
    textDiv.innerHTML = 'Loading....';

    innerContent.appendChild(img);
    innerContent.appendChild(textDiv);

    parentDiv.appendChild(innerContent);
    document.body.appendChild(parentDiv);
}

Notice, in the first three lines, that we are getting the instance of the PageRequestManager and then registering the InitializeRequest and EndRequest functions to display or hide the loading div. In the createControl function we are simply writing DHTML; to be more specific, there is no HTML for the loading div in our markup, so we create it from JavaScript.

Also, note that I have broken this script down into two parts using comments. The first part is the declaration and the second is the definition of the functions.

Note: the function definitions will live in a separate JS file, whereas the declaration needs to be made in the page, inside the body markup. Now we are all set to explore the different approaches.

 

Using Master Pages :

A very simple approach, all you need to do is open your master page and paste the following lines in the head section.


<link href="style.css" rel="stylesheet" type="text/css" />
<script type="text/javascript" src="ajaxload.js"></script>

 

And in the body, after the form tag, create a script section and paste the following JavaScript.


var prm = Sys.WebForms.PageRequestManager.getInstance();
prm.add_initializeRequest(InitializeRequest);
prm.add_endRequest(EndRequest);

 

Notice it is the same declaration section we discussed above, and that's it: you are done. Every content page of your web application should now display the loading div on each partial postback.

 

Extending Page Class  :

For this, create a class file, call it ajaxPage, inherit it from System.Web.UI.Page, and write the following code.


public class ajaxPage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        // Include CSS file
        Page.Header.Controls.Add(new LiteralControl("<link href='style.css' rel='stylesheet' type='text/css' />"));

        // Include JS file on the page
        ClientScript.RegisterClientScriptInclude("ajaxload", ResolveUrl("~/ajaxload.js"));

        // Writing declaration script
        String script = "var prm = Sys.WebForms.PageRequestManager.getInstance();";
        script += "prm.add_initializeRequest(InitializeRequest);";
        script += "prm.add_endRequest(EndRequest);";

        ClientScript.RegisterStartupScript(typeof(string), "body", script, true);

        base.OnLoad(e);
    }
}

 

Well, we have simply extended System.Web.UI.Page with our own class and overridden the OnLoad method to include the JS file and write the declaration script.

Now, in the code-behind of any page where you want to implement the Loading message, change the class it inherits from System.Web.UI.Page to ajaxPage (make sure you include your namespace).
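
As a minimal sketch (the page class name and namespace are placeholders), the code-behind would simply change its base class:

// Default.aspx.cs - inherits from ajaxPage instead of System.Web.UI.Page
public partial class _Default : ajaxPage
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Normal page logic; the loading-message script registration is inherited from ajaxPage
    }
}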

 

Extending Script Manager :

Now, instead of extending the page class we will extend the Script Manager control. For that, create a new class file, call it ScriptManagerExt and write the following code.


public class ScriptManagerExt : ScriptManager
{
    protected override void OnLoad(EventArgs e)
    {
        // Include CSS file
        Page.Header.Controls.Add(new LiteralControl("<link href='style.css' rel='stylesheet' type='text/css' />"));

        RegisterClientScriptInclude(this, typeof(Page), "ajaload", ResolveClientUrl("~/ajaxload.js"));

        String script = "var prm = Sys.WebForms.PageRequestManager.getInstance();";
        script += "prm.add_initializeRequest(InitializeRequest);";
        script += "prm.add_endRequest(EndRequest);";

        RegisterStartupScript(this, typeof(Page), "ajaxtest", script, true);
        base.OnLoad(e);
    }
}

This is almost the same thing we did in the extend-page approach; only the usage changes. Instead of using the standard Script Manager we will use our new one. The register directive and markup will look like this:


<%@ Register Assembly="Assembly" Namespace="namespace" TagPrefix="cc1" %>
<cc1:ScriptManagerExt ID="ScriptManagerExt1" runat="server">
</cc1:ScriptManagerExt>

 

That's it, we are done. I tried to keep it simple and show you every possible way I know of doing this task. Again, which approach to select is up to you and your project type. You can also download the VS 2008 project file.

And wow, I keep forgetting how complicated Microsoft AJAX is. Here is jQuery AJAX. Note the few lines of code. Note how the code is nearly self-documenting.

            // Block UI, pop up an "I am working" message
            $(document).ready(function() {
                $.blockUI({ message: $('#YourDivWithWorkingMessage'), css: { width: '275px' } });
            });

            // build args and call Controller/Action
            var dataToSend = BuildObject();
            var methodAction = 'YourController/YourAction';

            $.ajax({
                data: dataToSend,
                dataType: 'html',
                error: function(XMLHttpRequest, textStatus, errorThrown) {
                    $.unblockUI();
                    var errorObj = JSON.parse(XMLHttpRequest.responseText);
                    $('#LabelInErrorDiv').html(errorObj.message);
                    $.blockUI({ message: $('#ErrorDivForAjaxError'), css: { width: '275px' } });
                },
                success: function(data) {
                    $.unblockUI();
                    $('#YourDivToPutResultIn').html(data);
                },
                url: methodAction
            });

 

 

Thursday, July 23, 2009

Recently a colleague and I were working on diagnosing a performance issue with a customer’s web site (this is one of our favourite engagement types, so if you need some help let me know), and we found that items were being trimmed very regularly from the ASP.NET Cache, causing excessive back-end work and, in turn, reduced scalability.

* a “cache trim” is when ASP.NET looks for unused (based on a LRU algorithm) cache entries and deletes them.

So if you’re using the ASP.NET Cache API (or indeed any cache provider) to store custom application data, the moral of the story is to make sure you size your cache appropriately, and monitor it during test and live to see how it behaves.

It is worth knowing that an unconfigured ASP.NET Cache takes a memory limit of the minimum of either 60% of physical RAM or a fixed amount of RAM. This fixed size differs depending upon your processor architecture - see Thomas Marquardt's post here for some more detail. The idea is that when the limits imposed by configuration are approached, the cache will start trimming.

I’ve attached a very simple ASP.NET application that you can play with to study the cache behaviour while you’re reading the next section. Host it in IIS for the best results.

Setting Cache Limits

It is possible to set limits on how much memory an ASP.NET Application Pool can consume in IIS (see here), and you can also configure behaviour using settings on the cache web.config element. It is important to understand exactly what these mean:

· privateBytesLimit (in web.config, on the cache element): This is the maximum number of bytes that the host process is permitted to consume before the cache is trimmed. In other words, it is not the memory to dedicate to the cache, nor a limit to the memory that the worker process can use... but rather the total memory usage by the w3wp process (in IIS7) at which cache trimming will occur.

· percentagePhysicalMemoryUsedLimit (in web.config, on the cache element): This is the maximum percentage of RAM that the machine can use before the cache is trimmed. So on a 4gb machine with this set to 80%, the cache would be trimmed if Task Manager reported over 3.2gb (i.e. 80% of 4gb) was in use. Note that this is not the percentage memory used by ASP.NET or caching – it is the memory used machine-wide.

· Private Memory Limit (in IIS7 manager, as an advanced property of the App Pool): This is the maximum number of private bytes a worker process can consume before it is recycled (which will of course also empty the cache). If you set this limit lower than privateBytesLimit in web.config, you’ll always get a recycle before a cache trim... which sounds unlikely to be what you would want.

Things to Watch

The remaining content of this post itemises things you should consider monitoring to assess your cache’s performance. Some of it is obvious – but it is surprising how easy it is to overlook the obvious when your boss is breathing down your neck about your slow application.

Overall Application Performance

The best indicator that something might be going wrong is if your application is performing badly. Or maybe it was performing fine, but you’ve changed something or gotten more users, and suddenly it’s not so fast. Make sure you collect timing data in your IIS logs, and use LogParser to parse and analyse them. Also use standard performance counters from ASP.NET, WCF, and whatever other technologies you’re using. Get a feel for where and when things are slow or failing. This isn’t specific to caching, but it is an essential first step!

Hits on the resource you’re caching data from

Do some simple maths on the number of hits your back-end resource is getting. For example, if you have a “GetCities” web service, the results of which are cached for 2 hours, and you have 4 front-end load balanced web servers each with their own cache, you should expect a maximum of 4 hits to that web service every 2 hours. If you’re seeing more than that, alarm bells should be ringing.

Cache Entries

There are some great performance counters for the ASP.NET Cache API, so use those to monitor the state of the cache. Specifically the “ASP.NET Apps\Cache API Entries” counter is the number of items that are in the ASP.NET Cache right now. It should be broadly stable over longer periods with approximately the same load. If you cache an item per user, per region, or anything similar, be aware that this can dramatically affect your cache behaviour and memory consumption... in which case Cache Entries can be quite revealing.
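
As a hedged sketch of reading that counter from code (the category and counter names are the standard ASP.NET ones, but the "__Total__" instance is an assumption; swap in your own application instance if you need per-application numbers):

using System;
using System.Diagnostics;

class CacheEntriesMonitor
{
    static void Main()
    {
        // "ASP.NET Applications" is the performance counter category that exposes "Cache API Entries";
        // "__Total__" aggregates across all applications on the machine.
        using (var entries = new PerformanceCounter("ASP.NET Applications", "Cache API Entries", "__Total__", true))
        {
            Console.WriteLine("Cache API Entries: {0}", entries.NextValue());
        }
    }
}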

Cache Trims

The “ASP.NET Apps\Cache API Trims” counter is the number of cache items that have been removed due to a memory limit being hit (i.e. they were trimmed). Obviously this should ideally be very low or zero under normal operation. Too many trims could indicate you need to revisit your caching strategy, or manually configure cache memory limits.

Cache Turnover Rate

The “ASP.NET Apps\Cache API Turnover Rate” counter shows the number of newly added and removed/expired/trimmed cache items per second. A target for this number depends on the behaviour of your application, but generally it should be fairly low – ideally items should be cached for quite some time before they expire or are removed. A high turnover rate could imply frequent trims, frequent explicit removals in application code, dependencies firing frequently (e.g. a SqlCacheDependency), or a sudden increase in user load (if, for example, you’re caching something per user).

Private Bytes

Monitoring the “Process\Private Bytes” counter for the w3wp process (assuming you’re using IIS7) tells you how much memory IIS is using for your ASP.NET application. A cache trim is likely to show up as a sudden drop in consumed bytes, and equally you should be able to see how close it is to memory limits for your configuration.

Worker Process Restarts

It is initially a little easy to confuse the cause of a drop in consumed Private Bytes between heavy cache trimming and an Application Pool recycle, so you should also watch the “ASP.NET\Worker Process Restarts” performance counter to ensure you know which happened.

Cache Removed Reasons

When you add items to the ASP.NET cache you can optionally specify a CacheItemRemovedCallback. This means that when the item is removed from the cache the call-back you’ve specified is called, passing in a CacheItemRemovedReason. This is great for adding some debugging instrumentation – if your item was trimmed due to memory pressure then the reason will be “underused”. Other reasons are Expired (you specified a time limit), Removed (you called Cache.Remove) and DependencyChanged (a file, SQL table, or other dependency setup with a CacheDependency was fired).

If you decide to add some logging using this approach I’d recommend adding a switch that enables or disables setting a call-back, as there is a small overhead in dealing with it.
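
A minimal sketch of that instrumentation might look like the following; the EnableCacheLogging switch, the two-hour expiration and the Trace logging are placeholders for whatever configuration and logging you already use:

using System;
using System.Diagnostics;
using System.Web;
using System.Web.Caching;

public static class CacheInstrumentation
{
    // Placeholder switch; in practice read this from configuration so the call-back can be turned off
    public static bool EnableCacheLogging = true;

    public static void Insert(string key, object value)
    {
        CacheItemRemovedCallback callback = null;
        if (EnableCacheLogging)
        {
            callback = OnRemoved;
        }

        HttpRuntime.Cache.Insert(
            key,
            value,
            null,                          // no dependencies
            DateTime.UtcNow.AddHours(2),   // absolute expiration (placeholder)
            Cache.NoSlidingExpiration,
            CacheItemPriority.Default,
            callback);
    }

    private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        // "Underused" means the item was trimmed due to memory pressure
        Trace.WriteLine(String.Format("Cache item '{0}' removed: {1}", key, reason));
    }
}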

Cache Memory Percentages

There are two slightly mysterious sounding cache counters too. I must admit, at first it took me a few minutes to get my head around exactly what these counters meant.

The first is “ASP.NET Apps\Cache % Machine Memory Limit Used”. This is literally the current physical memory usage by the whole machine as a percentage of the configured (or default) maximum physical memory usage. What?! Well, if you have edited the “percentagePhysicalMemoryUsedLimit” setting to 60%, this means your machine can use up to 60% of its physical memory before cache trimming occurs (not a very realistic example I know!). This counter reports the current usage as a percentage of the maximum... so if your machine is using 40% of available physical RAM, and the limit is 60%, this counter would report 66% (40 divided by 60, multiplied by 100 to get a percentage). It is important to note that this is memory consumed across the whole machine – not just by ASP.NET.

The second is “ASP.NET Apps\Cache % Process Memory Limit Used”. This is the total Private Bytes in use by the current ASP.NET process as a percentage of the limit specified in “privateBytesLimit”. So if you set the limit at 400mb, and your process is currently using 350mb, that would be reported as 87.5% (350 divided by 400, multiplied by 100 to get a percentage).

If either of these counters hits 100% ASP.NET will almost immediately trim 50% of your cache by picking the least recently used items... so obviously you don’t want to be hitting these limits often.

Conclusion

Phew. Well I hope that’s useful stuff. The best advice I can give is to do some real sums. Many customers cache data items per user, plus huge lists of reference data that can grow over time. The end result can be caches that are crippled over a fixed concurrent user level, or take a long time to reload large reference data sets when they have been trimmed. It is more than possible to do some rough maths to work out how much memory your application is using for the cache, and how this changes according to user numbers, regions, languages, locations, roles, user types, base offices, or other parameters – and the results can be very illuminating.

Finally, remember that you should always test using hardware, configuration, and simulated user loads that are as close to live as you can possibly afford, as this gives you the best possible chance of catching problems early.