23 April, 2009

Asp.Net 2.0's New SQL Server-Based Cache Invalidation Mechanism

With the advent of SQL Server 2005, your application can ask SQL Server to notify it when critical data in a database has changed. Until such a notification arrives, the application can safely continue to retrieve data from the cache. This provides a much more granular level of control and takes the guesswork out of how often the cache should be refreshed. It is made possible by a new feature called Query Notification, used in conjunction with .NET 2.0.

There are various types of caching available in .NET, so what is special about SQL Server query notification? One of the pitfalls of the ASP.NET cache is that it is app-domain specific: in a web farm environment, each web server has its own copy of the cache, so there is a potential for the data to be out of sync. With a SQL cache dependency, you can still cache the data on the web server but have SQL Server notify you when the data changes.

An overview of Query Notification

A multi-user web application should be able to provide the latest critical data to its users. To meet this need, your application may have to retrieve the results from the database every time a user issues the query. However, this unnecessarily increases network round-trips and is an inefficient use of resources. To reduce the number of round-trips, developers have traditionally relied on caching mechanisms: once a query is run, the data is subsequently retrieved from the application's local cache. However, the lifetime of this cache is generally defined so that the application periodically refreshes it with new values from the database, irrespective of whether the data in the back end has actually changed. This is much better, but it is still not precise.

With query notification enabled, when you run a query against the back-end database you not only retrieve the data into the cache, but also tell SQL Server to register a subscription for notification if the underlying data changes in a way that affects the result of the query.

When the notification is received, the event handler in your application invalidates the cache and the next time the application runs the query, it will fetch the data from the back-end server.

All this is done without the need to write any complex application code.

Before you can establish cache dependency with SQL Server 2005, you need to perform the following steps:
  • Configure SQL Server to support SQL Cache invalidation. This is a one-time setup of the tables or databases in the SQL Server database that you want to monitor.
  • Add the necessary configuration information to the web.config file.
Configuration of SQL Server to Support SQL Cache Invalidation

You can configure SQL Server 2005 to support SQL cache invalidation in two ways:
  1. Using the aspnet_regsql utility
  2. Using the EnableTableForNotifications method of the SqlCacheDependencyAdmin class
For this article, consider the first method. Basically, the aspnet_regsql utility creates an extra table named AspNet_SqlCacheTablesForChangeNotification that is used to keep track of the changes to all the monitored tables in the database. It also creates a number of triggers and stored procedures to enable this capability. To run the aspnet_regsql utility, open up the Visual Studio command prompt and enter the command shown below.

Command : aspnet_regsql -S localhost -U sa -P atanu -d TestDB -ed

The command enables the TestDB database (my database; you can use your own) to support SQL cache invalidation:
  • S—Name of the Server
  • U—User ID to use to connect to the SQL Server
  • P—Password to use to connect to the SQL Server
  • d—The name of the database
  • ed—Enables the database for SQL Server-triggered cache invalidation
Once this is done at the database level, you need to enable cache invalidation at the individual table level.

Command :
aspnet_regsql -S localhost -U sa -P atanu -t Products -d TestDB -et

In the above command:
  • t—Specifies the name of the table
  • et—Enables the table for SQL Server-triggered cache invalidation
The preceding command enables SQL cache invalidation for the Products table in the TestDB database. Once you configure the Products table to send notifications, any time data in the table changes it notifies ASP.NET to invalidate the corresponding item in the cache.
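If you prefer the second method mentioned earlier, the SqlCacheDependencyAdmin class can do the same work programmatically. A minimal sketch follows; the connection string values here are placeholders, so substitute your own server name and credentials:

```csharp
using System.Web.Caching;

class EnableNotificationsDemo
{
    static void Main()
    {
        // Placeholder connection string; adjust for your environment
        string connStr = "Server=localhost;Database=TestDB;User Id=sa;Password=yourPassword;";

        // Equivalent of "aspnet_regsql ... -ed": enable the database for notifications
        SqlCacheDependencyAdmin.EnableNotifications(connStr);

        // Equivalent of "aspnet_regsql ... -t Products -et": enable the individual table
        SqlCacheDependencyAdmin.EnableTableForNotifications(connStr, "Products");
    }
}
```

This creates the same change-notification table, triggers, and stored procedures as the aspnet_regsql utility, which can be convenient for setup code or deployment scripts.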

Web Configuration Settings for SQL Cache Invalidation

The next step, before you can use SQL cache invalidation in your ASP.NET application, is to update the Web configuration file. You need to instruct the ASP.NET framework to poll the databases that you have enabled for SQL cache invalidation. The following Web configuration file contains the necessary configuration information to poll the TestDB database at periodic intervals:
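Based on the description that follows, a minimal web.config for this scenario would look something like this (the connection string values are placeholders; adjust the server name and credentials for your environment):

```xml
<configuration>
  <connectionStrings>
    <add name="TestDbConnection"
         connectionString="Server=localhost;Database=TestDB;User Id=sa;Password=yourPassword;" />
  </connectionStrings>
  <system.web>
    <caching>
      <sqlCacheDependency enabled="true" pollTime="60000">
        <databases>
          <!-- Poll the TestDB database for changes once a minute -->
          <add name="TestDB"
               connectionStringName="TestDbConnection"
               pollTime="60000" />
        </databases>
      </sqlCacheDependency>
    </caching>
  </system.web>
</configuration>
```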


The preceding Web configuration file contains two sections, <connectionStrings> and <caching>. The connectionStrings section creates a database connection string to the TestDB database, named "TestDbConnection". The caching section configures SQL cache invalidation polling. Within the databases subsection, you list one or more databases that you want to poll for changes. The add element inside the databases section indicates that the database represented by "TestDB" is polled once a minute (every 60,000 milliseconds). You can specify different polling intervals for different databases. Remember, the server must do a little work every time the database is polled for changes, so if you don't expect the data in the database to change very often, you should increase the polling interval.

Enabling query notification in ASP.NET using SqlCacheDependency

Now that you have performed the required configuration, you are ready to take advantage of the SQL cache invalidation feature in your ASP.NET Web page. Add a new Web form named Default.aspx to your Web site. The following is the code-behind for Default.aspx, where we bind a GridView to the data of the Products table.

using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Web.Caching;

protected void Page_Load(object sender, EventArgs e)
{
    GridView1.DataSource = bindCachedData();
    GridView1.DataBind();
}

private DataSet bindCachedData()
{
    // Try the cache first
    DataSet dsProducts = (DataSet)Cache.Get("Products");

    if (dsProducts == null)
    {
        SqlConnection sqlcon = new SqlConnection(
            ConfigurationManager.ConnectionStrings["TestDbConnection"].ConnectionString);

        SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Products", sqlcon);
        dsProducts = new DataSet();
        adapter.Fill(dsProducts);

        // "TestDB" must match the database name registered in web.config
        SqlCacheDependency dependency = new SqlCacheDependency("TestDB", "Products");
        Cache.Insert("Products", dsProducts, dependency);
    }

    return dsProducts;
}

In the above code, you start by creating an instance of the SqlConnection object, passing in the connection string that is retrieved from the web.config file. Then you create an instance of the SqlDataAdapter object and pass in the SQL statement to be executed and the previously created SqlConnection object as its arguments. Then you execute the SQL query using the Fill method of the SqlDataAdapter object. After that you create an instance of the SqlCacheDependency object and supply the database name (which corresponds to the database name specified in the web.config file) and the table name as its arguments. Then you insert the Products DataSet into the cache using the Insert method of the Cache object. At the time of inserting, you also specify the SqlCacheDependency object so that the DataSet can be invalidated when the data in the Products table changes. Finally, you bind the DataSet to the GridView control.
The first time this application loads, there will be nothing in the application cache and dsProducts will be null. As a result, the query will be called. When this happens, the function will register a notification subscription with SQL Server. It will also load up the cache with the output of our query. When the user runs this query again the data will be retrieved from the cache. You can confirm this by running the app in debug mode.
To test whether the SQL Server-based cache invalidation mechanism works, modify the data in the Products table; the next time you navigate to the page in the browser, you will see the changed data, retrieved fresh from the database.

19 April, 2009

Garbage Collection in .NET - How it really works

Garbage collection is the process of releasing the memory used by objects that are no longer referenced. This is done in different ways and manners on various platforms and languages. We will see how garbage collection is done in .NET.

Garbage Collection basis
  • Almost every program uses resources such as database connections, file system objects, and so on. To make use of them, memory must be available to us.
  • First, we allocate a block of memory on the managed heap by using the new keyword.
  • The constructor of the class then sets the initial state of the object.
  • We use the resource by accessing the type's members.
But how many times have programmers forgotten to release memory? Or tried to access memory that has already been cleaned up?

These two commonly occurring bugs lead to memory leaks and access violations. To overcome them, the concept of automatic memory management was introduced. Automatic memory management, or automatic garbage collection, is a process by which the system automatically releases the memory used by unwanted objects (we call them garbage). Hurrah! Thanks to Microsoft's automatic garbage collection mechanism.

Automatic Garbage Collection in .NET

When Microsoft planned a new-generation platform called .NET, with the new-generation language C#, their first intention was to make a language that is developer friendly to learn and use, with a rich set of APIs to support end users as well. So they put a great deal of thought into garbage collection and came up with this model of automatic garbage collection in .NET.

They implemented the garbage collector as a separate thread, which is always running in the background. Some of us may think that running a separate thread creates extra overhead. Yes, that is right, and that is why the garbage collector thread is given the lowest priority. But when the system finds there is no space left in the managed heap (the managed heap is simply a chunk of memory allocated for the program at run time), the garbage collector thread is raised to REALTIME priority (the highest priority in Windows) and collects all the unwanted objects.

How does Garbage collector locate garbage

When a program is loaded, a chunk of memory is allocated for that particular program alone. This chunk of memory is called the managed heap in the .NET world, and it is used whenever an object needs to be loaded into memory for that particular program.

This memory is separated into three parts:
  • Generation Zero
  • Generation One
  • Generation Two
Ideally, Generation Zero is the smallest, Generation One is medium-sized, and Generation Two is the largest.

When we try to create an object using the new keyword, the system will:
  • Calculate the number of bytes required for the object or type to be loaded into the managed heap.
  • Check that the bytes required to allocate the object are available in the reserved region (committing storage if necessary). If the object fits, it is allocated at the address pointed to by NextObjPtr.
  • Perform all of this at the Generation Zero level.
When Generation Zero is full and does not have enough space for new objects, but the program still wants to allocate memory for further objects, the garbage collector thread is given REALTIME priority and comes into the picture.

Now the garbage collector checks all the objects at the Generation Zero level. If an object has gone out of scope and its lifetime has ended, the system automatically marks it for garbage collection.


At this point in the process the object is only marked, not collected. The garbage collector will later collect the object and free the memory.

The garbage collector then starts examining all the objects in Generation Zero from the beginning. If it finds any object marked for garbage collection, it simply removes that object from memory.

Here comes the important part. Suppose there are three objects, A, B, and C, in the managed heap. A and C are still referenced, but B has lost its scope and lifetime, so B is marked for garbage collection and is then collected, leaving a gap in the heap.
But remember that the system only allocates new objects at the end of the heap; it does not fill gaps in between. So it is the job of the garbage collector to compact the memory after collecting objects, and it does that as well.
The garbage collector is still not finished after doing this. It looks at which objects survived the sweep (collection); those objects are moved to Generation One, and Generation Zero is now empty, ready for new objects.

If Generation One does not have enough space for the objects promoted from Generation Zero, the same process that happened in Generation Zero happens in Generation One as well. The same applies to Generation Two.
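You can watch these promotions happen yourself with the GC.GetGeneration method. A small console sketch:

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        object obj = new object();
        // A freshly allocated object starts in Generation Zero
        Console.WriteLine(GC.GetGeneration(obj));

        // Force a collection; obj is still referenced, so it survives
        // the sweep and is promoted to Generation One
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));

        // Surviving another collection promotes it to Generation Two
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj));

        // Keep obj referenced for the whole demonstration
        GC.KeepAlive(obj);
    }
}
```

Calling GC.Collect() in production code is rarely a good idea; it is used here only to make the generational promotion visible.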

You may wonder: if all the generations are filled with referenced objects and the system or our program still wants to allocate more objects, what happens? In that case, an OutOfMemoryException is thrown.


Instead of declaring a finalizer, exposing a Dispose method is considered good practice.

public void Dispose()
{
    // all clean up source code here..
    GC.SuppressFinalize(this);
}

If we clean up an object using a Dispose or Close method, we should indicate to the runtime that the object no longer requires finalization by calling GC.SuppressFinalize(), as shown above.

If we are creating and using objects that have Dispose or Close methods, we should call these methods when we’ve finished using these objects. It is advisable to place these calls in a finally clause, which guarantees that the objects are properly handled even if an exception is thrown.
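A minimal sketch of this, using SqlConnection as an example disposable type (the connection string is a placeholder):

```csharp
using System.Data.SqlClient;

class DisposeDemo
{
    static void Main()
    {
        // Explicit try/finally: Dispose runs even if an exception is thrown
        SqlConnection conn = new SqlConnection("Server=.;Database=TestDB;Integrated Security=true;");
        try
        {
            conn.Open();
            // work with the connection here
        }
        finally
        {
            conn.Dispose();
        }

        // The using statement is shorthand for exactly the same try/finally
        using (SqlConnection conn2 = new SqlConnection("Server=.;Database=TestDB;Integrated Security=true;"))
        {
            conn2.Open();
        } // conn2.Dispose() is called automatically here
    }
}
```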

15 April, 2009

Exploring the Factory Design Pattern

Software changes. This is a rather obvious statement, but it is a fact that must be ever present in the minds of developers and architects. Although we tend to think of software development as chiefly an engineering exercise, the analogy breaks down very quickly. When was the last time someone asked the designers of the Empire State building to add ten new floors at the bottom, put a pool on the top, and have all of this done before Monday morning?

This is a daunting challenge. We must design systems that are capable of changing at a moment's notice. Be it requirements, technology, or personnel, there will be change and we must be prepared. If we forget this, we do so at our own peril.

Although this can be a frightening proposition, there are a great many tools at our disposal to ensure that systems are designed to change rapidly while reducing the negative impact these alterations can bring. One such tool is design patterns. Design patterns represent a wealth of collective knowledge and experience. The proper use of these patterns will help to ensure that systems are malleable, enabling rapid change.

Over the course of this article, I will briefly examine one of the most commonly used patterns, the Factory pattern.

Factory Pattern

An important facet of system design is the manner in which objects are created. Although far more time is often spent considering the object model and instance interaction, if this simple design aspect is ignored it will adversely impact the entire system. Thus, it is not only important what an object does or what it models, but also in what manner it was created.

Since most object-oriented languages and runtimes provide object instantiation (e.g. new, newobj, etc.) and initialization (e.g. constructors) mechanisms, there may be a tendency to simply use these facilities directly without forethought to future consequences. The overuse of this functionality often introduces a great deal of inflexibility in the system, as the direct use of a language/run-time object instantiation function creates an explicit association between the creator and created classes. While associations are a necessary type of relationship in an object-oriented system, the coupling introduced between classes is extremely difficult to overcome should requirements change (as they always do).

One of the most widely used creational patterns is the Factory. This pattern is aptly named, as it calls for the use of a specialized object solely to create other objects, much like a real-world factory. In the following sections, we will examine the basics of this pattern, which will help you adopt it quickly.

Logical Model

As with other design patterns, there are countless variations of the Factory pattern, although most variants use the same set of primary actors: a client, a factory, and a product. The client is an object that requires an instance of another object (the product) for some purpose. Rather than creating the product instance directly, the client delegates this responsibility to the factory. Once invoked, the factory creates a new instance of the product and passes it back to the client. Put simply, the client uses the factory to create an instance of the product; this is the logical relationship between the elements of the pattern.

The factory completely abstracts the creation and initialization of the product from the client. This indirection enables the client to focus on its discrete role in the application without concerning itself with the details of how the product is created. Thus, as the product implementation changes over time, the client remains unchanged.

While this indirection is a tangible benefit, the most important aspect of this pattern is the fact that the client is abstracted from both the type of product and the type of factory used to create the product. Presuming that the product interface is invariant, this enables the factory to create any product type it deems appropriate. Furthermore, presuming that the factory interface is invariant, the entire factory along with the associated products it creates can be replaced in a wholesale fashion. Both of these radical modifications can occur without any changes to the client.

Consider an application that models the assembly of a variety of personal computers. The application contains a ComputerAssembler class that is responsible for the assembly of the computer, a Computer class that models the computer being built, and a ComputerFactory class that creates instances of the Computer class. In using the Factory pattern, the ComputerAssembler class delegates responsibility for creation of Computer instances to the ComputerFactory. This ensures that if the instantiation and/or initialization process changes (e.g. new constructor, use of an activator, custom pooling, etc.), the ComputerAssembler does not need to change at all. This is the benefit of abstracting the creation of an object from its use.

In addition, suppose that the business requirements changed and a new type of computer needs to be assembled. Rather than modifying the ComputerAssembler class directly, the ComputerFactory class can be modified to create instances of a new computer class (assuming that this new class has the same interface as Computer). Furthermore, it is also possible to address this requirement by creating a new factory class (that has the same interface as ComputerFactory) that creates instances of the new computer class (again, with the same interface as Computer). As before, nothing in the ComputerAssembler class needs to change; all the logic just continues to work as it had previously. This is the benefit of abstracting the types of the product and factory into invariant interfaces.
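A minimal C# sketch of the computer-assembly example described above (the interface names IComputer and IComputerFactory are of my choosing; the article refers to the classes by their concrete names):

```csharp
using System;

// Invariant product interface: clients depend only on this
public interface IComputer
{
    string Assemble();
}

public class DesktopComputer : IComputer
{
    public string Assemble() { return "Assembling a desktop computer"; }
}

// Invariant factory interface: allows whole factories to be swapped
public interface IComputerFactory
{
    IComputer CreateComputer();
}

public class ComputerFactory : IComputerFactory
{
    public IComputer CreateComputer() { return new DesktopComputer(); }
}

// The client: delegates all product creation to the factory
public class ComputerAssembler
{
    private readonly IComputerFactory factory;

    public ComputerAssembler(IComputerFactory factory)
    {
        this.factory = factory;
    }

    public string AssembleComputer()
    {
        IComputer computer = factory.CreateComputer();
        return computer.Assemble();
    }
}

class Program
{
    static void Main()
    {
        // A new computer type only requires a new factory;
        // the assembler never changes
        ComputerAssembler assembler = new ComputerAssembler(new ComputerFactory());
        Console.WriteLine(assembler.AssembleComputer());
    }
}
```

Because ComputerAssembler holds only the IComputerFactory interface, an entirely different factory and product family can be substituted without touching the client.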


As I conclude this examination of the Factory pattern, it should be apparent that this pattern offers considerable benefits. From the abstraction of instance creation to the use of invariant product and factory interfaces, this pattern provides a clear mechanism for designing flexible systems that are highly adaptable. The inclusion of this pattern in the .NET Framework is a clear testament to its usefulness. Regardless of the nature of your application, this pattern can help it weather the winds of change.

02 April, 2009

Native XML Web Services in SQL Server 2005 - Creating HTTP Endpoints

With SQL Server 2005, developers can build Web services directly in the database tier, making SQL Server a hypertext transfer protocol (HTTP) listener and providing a new type of data access capability for Web services-centric applications.

Native XML Web Services in SQL Server 2005

To begin, you need to be running Windows 2003 or Windows XP SP2 to use this feature, and you don't need IIS. In fact, if you are running IIS, it may cause a problem if both SQL Server 2005 and IIS are listening on port 80 when you try to create your HTTP endpoint. For this tutorial, I would just stop IIS if you have it running. If you don't want to do that, you will need to change the default port that either IIS or SQL Server 2005 uses to listen for HTTP requests. The default is 80, and one of them will need to be changed.

Expose Stored Procedure as XML Web Service

My goal is pretty simple. I want to expose a stored procedure, called GetContacts, in my AdventureWorks Database as an XML Web Service. As luck would have it, this is an extremely easy thing to do in SQL Server 2005 without requiring IIS.

CREATE PROCEDURE [dbo].[GetContacts]
AS
SELECT TOP 20 [FirstName], [LastName]
FROM [Person].[Contact]
ORDER BY [LastName];

Create HTTP Endpoint

Fire up SQL Server Management Studio, choose the AdventureWorks Database, and open a New Query Window. Create an HTTP Endpoint with the following statement:
CREATE ENDPOINT AW_Contacts
    STATE = STARTED
AS HTTP
(
    PATH = '/Contacts',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = '*'
)
FOR SOAP
(
    WEBMETHOD 'GetContacts'
        (NAME = 'AdventureWorks.dbo.GetContacts'),
    WSDL = DEFAULT,
    DATABASE = 'AdventureWorks'
);
If you get an error executing this query that mentions another service using the port, make sure you turn off IIS.

A Path = '/Contacts', Site = '*', and Ports = (CLEAR) setting means that your URL will use the default host (localhost), will not require SSL, and will look like the following:

http://localhost/Contacts
The FOR SOAP part of the command relates the GetContacts Web method to the GetContacts stored procedure. Once you execute the command, take a peek at the Server Objects node in Object Explorer to see your HTTP endpoint.
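You can also confirm the endpoint from a query window by checking the catalog views:

```sql
-- List the SOAP endpoints registered on this instance, with their current state
SELECT name, protocol_desc, state_desc
FROM sys.endpoints
WHERE type_desc = 'SOAP';
```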

Fire Up Visual Studio 2005 to Consume the XML Web Service

At this point, you consume the XML Web Service like every other web service. Create a Windows Application, called Native Http, drop a DataGridView on the form, and add a Web Reference specifying the following URL:

http://localhost/Contacts?wsdl
Leave the web reference name as localhost just so you can add the following code to the Form Load Event:

using System.Net;   // for CredentialCache

// The proxy class generated from the AW_Contacts endpoint
localhost.AW_Contacts contacts = new localhost.AW_Contacts();
contacts.Credentials = CredentialCache.DefaultCredentials;
// A native SOAP endpoint returns object[]; the first element is a DataSet
dataGridView1.DataSource = (contacts.GetContacts()[0] as DataSet).Tables[0];

Run the application and magically see the contacts appear in the DataGridView.