Wednesday, June 1, 2011

Using Pivot in Sql Server

The PIVOT operator is used when you want to rotate a table: whenever there are a number of rows and we want to turn the values of a specified column into column names, we can use PIVOT. It helps you change your data rows into columns. It was introduced in SQL Server 2005. We can achieve the same functionality with CASE expressions and GROUP BY, but PIVOT takes less coding. Let us take an example:

Suppose we have the following table Sales:


Day  Items  Amount
Mon  I1     100.00
Tue  I2      50.00
Mon  I1      25.00
Mon  I2     300.00
Tue  I2     500.00

Output desired:

Day  I2      I1
Mon  300.00  125.00
Tue  550.00  NULL

SELECT Day, [I2] AS I2, [I1] AS I1
FROM
(SELECT Day, Items, Amount
FROM Sales ) s
PIVOT
(
SUM (Amount)
FOR Items IN
( [I2], [I1])
) AS pvt


Now there are cases where we don't want to use any aggregate function. Let's take the example below:

Suppose we have a table EMP:

ID  EmpNo  Designation (DID)
1   101    M
1   103    D
1   102    L
3   104    M
3   105    D
3   106    L

Now we need to show the result as follows:

ID  M    D    L
1   101  103  102
3   104  105  106

Using PIVOT, the query will be:

SELECT ID, [M], [D], [L]
FROM (
    SELECT ID, EmpNo, DID
    FROM EMP) up
PIVOT ( MIN(EmpNo) FOR DID IN ([M], [D], [L]) ) AS pvt

MIN suffices here as the aggregate function to get us the desired result, since there is only one EmpNo per (ID, DID) pair.

Limitations of PIVOT compared to CASE :

1) We need to hardcode the columns while using PIVOT, though we can use dynamic SQL if we are not sure of the number of columns we need.

2) We can use only one aggregate in PIVOT, whereas with CASE we can aggregate any number of fields we want.
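For completeness, the hardcoded column list can be built at runtime with dynamic SQL. This is only a sketch against the Sales table above; the variable names are illustrative:

```sql
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build the column list ([I1],[I2],...) from the distinct item codes
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(Items)
                      FROM Sales
                      GROUP BY Items
                      FOR XML PATH('')), 1, 1, '');

SET @sql = N'SELECT Day, ' + @cols + N'
FROM (SELECT Day, Items, Amount FROM Sales) s
PIVOT (SUM(Amount) FOR Items IN (' + @cols + N')) AS pvt;';

EXEC sp_executesql @sql;
```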

Wednesday, February 17, 2010

Sql Server Questions and Answers - Constraints

1) What are constraints?
A constraint is a property assigned to a column or set of columns in a table that prevents certain types of inconsistent data values from being placed in the column(s). Constraints are used to enforce data integrity.

2) What are the different types of data integrity?
The following categories of data integrity exist:
• Entity Integrity
• Domain Integrity
• Referential integrity
• User-Defined Integrity

Entity Integrity ensures that there are no duplicate rows in a table.
Domain Integrity enforces valid entries for a given column by restricting the type, the format, or the range of possible values.
Referential integrity ensures that rows that are referenced by other records cannot be deleted (for example, corresponding data values between tables remain consistent).
User-Defined Integrity enforces some specific business rules that do not fall into entity, domain, or referential integrity categories.

3) What are the different types of constraints available in SQL Server?
❑ PRIMARY KEY constraints
❑ FOREIGN KEY constraints
❑ UNIQUE constraints (also known as alternate keys)
❑ CHECK constraints
❑ DEFAULT constraints
❑ Rules
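All of the constraint types above except rules (which are created separately with CREATE RULE) can be declared in a single table definition. The table and column names below are illustrative only:

```sql
CREATE TABLE dbo.Employee
(
    EmpID    INT          NOT NULL CONSTRAINT PK_Employee PRIMARY KEY,  -- primary key
    DeptID   INT          NOT NULL CONSTRAINT FK_Employee_Dept
                                   REFERENCES dbo.Department (DeptID),  -- foreign key
    Email    VARCHAR(100) NULL     CONSTRAINT UQ_Employee_Email UNIQUE, -- unique (alternate key)
    Salary   MONEY        NOT NULL CONSTRAINT CK_Employee_Salary
                                   CHECK (Salary > 0),                  -- check
    HireDate DATETIME     NOT NULL CONSTRAINT DF_Employee_HireDate
                                   DEFAULT (GETDATE())                  -- default
);
```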

4)What is a Primary Key?
Primary keys are the unique identifiers for each row. They must contain unique values (and hence cannot
be NULL). Because of their importance in relational databases, primary keys are the most fundamental of
all keys and constraints.

5)How many types of primary keys exist?
There are two types of Primary Keys:
1.1) Simple Primary Key ==> Defining primary key on a single column is called Simple Primary Key.
1.2) Composite Primary Key==> Defining Primary Key on more than one column is called Composite Primary Key.

6) What is a unique key? Name the different types of unique keys.
A unique key in a table uniquely identifies each and every row while allowing one NULL per column combination. There are two types of unique keys:
1.1) Simple Unique Key ==> Defining a unique key on a single column is called a Simple Unique Key.
1.2) Composite Unique Key ==> Defining a unique key on more than one column is called a Composite Unique Key.

7) What is Check Constraint?
A check constraint specifies a condition that is enforced for each row of the table on which the constraint is defined. Once the constraint is defined, every insert or update to the data in the table is checked against it.

8) What is Rule?
A rule specifies the acceptable values that can be inserted into a column. This is similar to CHECK constraint.

9)What is foreign Key?
Foreign keys are both a method of ensuring data integrity and a manifestation of the relationships between tables. When you add a foreign key to a table, you are creating a dependency between the table for which you define the foreign key (the referencing table) and the table your foreign key references (the referenced table).

10)What is candidate key?
A Candidate Key can be any column or a combination of columns that can qualify as unique key in database. There can be multiple Candidate Keys in one table. Each Candidate Key can qualify as Primary Key.

11) What is the difference between Primary Key and Unique Key?
1) A primary key by definition cannot be NULL, whereas a unique key can accept NULL values. If the unique key is defined on a NOT NULL column, it can serve as an alternate key to identify unique rows in a table.
2) By definition you can have only one primary key defined on a table, whereas you can have multiple unique keys defined on a table.
3) Also, by default a primary key is created as a clustered index and a unique key is created as a non-clustered index.

12) What is the difference between RULE and CHECK constraint?
The major difference between a rule and a check constraint is reusability. A check constraint is associated with columns in a table, so it can't be re-used. Rules are defined within a database and can be applied to any number of columns.

13) How to create a relationship between two tables?
Using foreign keys: create a foreign key on the child table referencing a unique key or primary key of the parent table.

14) What is the name of the view that contains the Primary Key, Unique Key and Foreign Key information?
INFORMATION_SCHEMA.TABLE_CONSTRAINTS, where the CONSTRAINT_TYPE column stores the constraint type.
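For example, the key constraints of the current database can be listed as follows (a sketch; filter further as needed):

```sql
SELECT TABLE_NAME, CONSTRAINT_NAME, CONSTRAINT_TYPE
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE IN ('PRIMARY KEY', 'UNIQUE', 'FOREIGN KEY');
```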

15) Can we create a Foreign Key without a Primary Key?
Yes. If the table has a unique key then it is possible to create a foreign key constraint referencing it.

16) Can we have a RULE and a CHECK constraint on the same column?
YES

17) Can we apply Integrity Constraints on Computed Columns?
YES

18) Can you drop a Parent Table with out affecting its child tables?
No. First you need to drop all the Foreign Key relationships and then only you can drop Parent Table.

19) How to disable and enable constraints?
You need to use the ALTER TABLE statement, e.g.:
ALTER TABLE ACCOUNT NOCHECK CONSTRAINT CHECK_IN_AMOUNT; -- disable
ALTER TABLE ACCOUNT WITH CHECK CHECK CONSTRAINT CHECK_IN_AMOUNT; -- enable and re-validate existing rows

20) What is the order of constraint execution?
There is no predefined order; all the constraints on the column are checked.

21) Name the data integrity enforced by SQL Server constraints.
1) Primary key constraints are used to enforce entity integrity.
2) Unique key constraints are used to enforce entity integrity, like primary key constraints.
3) Check constraints are used to enforce domain integrity.
4) NOT NULL constraints are used to enforce domain integrity, like check constraints.

Questions on Global.asax

1)What is main difference between Global.asax and Web.Config?
ASP.NET uses global.asax to establish any global objects that your Web application uses. The .asax extension denotes an application file, rather than .aspx for a page file. Each ASP.NET application can contain at most one global.asax file. The file is compiled on the first page hit to your Web application, and ASP.NET is configured so that any attempt to browse to the global.asax page directly is rejected. The web.config file, on the other hand, lets you specify application-wide settings. It is an XML-formatted text file that resides in the Web site's root directory. Through web.config you can specify settings like custom 404 error pages, authentication and authorization settings for the Web site, compilation options for the ASP.NET Web pages, whether tracing should be enabled, etc.

2)What does Global.asax file do?
The Global.asax file is an optional file that contains code for responding to
application-level events raised by ASP.NET or by HTTP modules.

3) Global.asax is code-inside by default. How do you change Global.asax to code-behind?
In global.asax you need to modify the @ Application directive.
<%@ Application Codebehind="Global.asax.cs" Inherits="YourNamespace.Global" %>
Then create your .asax.cs file, and add a class named Global derived from System.Web.HttpApplication.

4) How is the global.asax file different from an ASP global.asa file?
The global.asax file is very similar to the global.asa file in classic ASP. You can add a global.asax file to the root of each Web application on your site. The global.asax supports the familiar Application_Start, Application_End, Session_Start, and Session_End events. In addition, ASP.NET adds support for several new events. Please see Microsoft's .NET documentation for more information.

5)List the event handlers that can be included in Global.asax?
Application_Start,
Application_End,
Application_AcquireRequestState,
Application_AuthenticateRequest,
Application_AuthorizeRequest,
Application_BeginRequest,
Application_Disposed,
Application_EndRequest,
Application_Error,
Application_PostRequestHandlerExecute,
Application_PreRequestHandlerExecute,
Application_PreSendRequestContent,
Application_PreSendRequestHeaders,
Application_ReleaseRequestState,
Application_ResolveRequestCache,
Application_UpdateRequestCache,
Session_Start,
Session_End

You can optionally include "On" in any of the method names. For example, you can name a BeginRequest event handler Application_BeginRequest or Application_OnBeginRequest. You can also include event handlers in Global.asax for events fired by custom HTTP modules. Note that not all of the event handlers make sense for Web Services (they are designed for ASP.NET applications in general, whereas .NET XML Web Services are specialized instances of an ASP.NET app). For example, the Application_AuthenticateRequest and Application_AuthorizeRequest events are designed to be used with ASP.NET Forms authentication.

6) How can you use Global.asax to catch unhandled exceptions?
Unhandled exceptions can be caught using the Application_Error() method of global.asax.
void Application_Error(object sender, EventArgs e)
{
    // Code that runs when an unhandled error occurs
    Exception objErr = Server.GetLastError().GetBaseException();
    string err = "Error in: " + Request.Url.ToString() +
                 ". Error Message: " + objErr.Message;
    // Log err here (for example, to the event log)
}


A complete information on Global.asax can be found at : http://articles.techrepublic.com.com/5100-10878_11-5771721.html

Sql server Optimization techniques and best practices

1) Go for views and stored procedures
Rather than writing heavy-duty ad hoc queries, use stored procedures and views. A stored procedure is compiled only once, so an execution plan is not created every time, and it also reduces network traffic. Views let us restrict users from accessing tables directly, which gives more security to the data.

2) Use table variables
Prefer a table variable over a temporary table wherever possible, because a table variable requires less locking and less logging compared to a temporary table.
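A minimal sketch of a table variable, assuming the Sales table from the PIVOT example above:

```sql
DECLARE @DayTotals TABLE
(
    Day    VARCHAR(3),
    Amount MONEY
);

INSERT INTO @DayTotals (Day, Amount)
SELECT Day, SUM(Amount)
FROM Sales
GROUP BY Day;

SELECT * FROM @DayTotals;
```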

3) DISTINCT clause
This causes additional overhead: it first gets all the data matching the specified condition and then removes the duplicate rows, which degrades performance on a large set of data. Apply DISTINCT only if it's necessary; if you have duplicate rows, first try to remove the duplicates from the table.

4) Use constraints
Instead of working with triggers, the best thing is to use constraints wherever possible; this will boost performance. Proper use of constraints stops unwanted data from entering the database.

5) Use UNION ALL
Performance-wise, UNION ALL is faster: it does not look for duplicate rows, so it fetches the records faster, whereas UNION does a DISTINCT over the result set.

6) SQL Server cursors
A cursor occupies memory that could be used by other resources, and so degrades performance: to fetch 100 rows, it executes 100 individual fetches.
Use a cursor only where it is unavoidable. Instead, use correlated sub-queries or derived tables when performing row-by-row operations.

7) Add the WITH RECOMPILE option to the CREATE PROCEDURE statement if you know that your query will vary each time it is run from the stored procedure.
The WITH RECOMPILE option prevents reusing the stored procedure execution plan, so SQL Server does not cache a plan for this procedure and the procedure is always recompiled at run time. Using the WITH RECOMPILE option can boost performance if your query varies each time it is run from the stored procedure, because in that case a wrong cached execution plan will not be reused.
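As a sketch (the procedure and parameter names are made up), WITH RECOMPILE goes on the CREATE PROCEDURE statement:

```sql
CREATE PROCEDURE dbo.GetSalesByDay
    @Day VARCHAR(3)
WITH RECOMPILE   -- plan is rebuilt on every execution, never cached
AS
BEGIN
    SELECT Items, Amount
    FROM Sales
    WHERE Day = @Day;
END
```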


8) Avoid using the HAVING clause
The HAVING clause is used to restrict the result set returned by the GROUP BY clause. When you use GROUP BY with HAVING, GROUP BY divides the rows into sets of grouped rows and aggregates their values, and then HAVING eliminates the unwanted aggregated groups. In many cases you can write your SELECT statement so that it contains only WHERE and GROUP BY clauses, without HAVING. This will improve the performance of your query.

9) Row count
If you want the total row count of a particular table, the query below is much faster than COUNT(*):

SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('table_name') AND indid < 2

COUNT(*) involves a full table scan, which degrades performance and takes more and more time on larger sets of data. (Note that the row count in sysindexes is not guaranteed to be exact.)
10) SET NOCOUNT ON
Include a SET NOCOUNT ON statement in your stored procedures. This suppresses the message indicating the number of rows affected by a T-SQL statement, which reduces network traffic because the client no longer receives that message for every statement.
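For example (an illustrative procedure, not from the original post):

```sql
CREATE PROCEDURE dbo.UpdateSalesAmount
    @Day VARCHAR(3),
    @Amount MONEY
AS
BEGIN
    SET NOCOUNT ON;   -- suppress the "(n row(s) affected)" messages

    UPDATE Sales
    SET Amount = @Amount
    WHERE Day = @Day;
END
```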

11) Use a WHERE clause
A WHERE clause restricts the result set and increases performance because SQL Server returns only the matching rows to the client instead of all rows from the table; this reduces network traffic and boosts the overall performance of the query.

12) Use TOP
Use the TOP keyword if you want only a particular number of rows in the result. You can also make use of SET ROWCOUNT. This improves performance by reducing the traffic between the client and the server.

SELECT TOP 1 Col FROM Table

This will return only one row as a result.

13) Select only the columns you need
Don't use SELECT *; instead, select only the fields you want to retrieve. The server sends the client only the selected columns, which also reduces traffic and increases the speed of your query.

14) Index optimization
Each index increases the time it takes to perform INSERT, UPDATE and DELETE, so the number of indexes on a table must be kept low for good performance; use at most 4-5 indexes on one table. If your table is read-only, then more indexes are fine.

Try to put indexes on the correct columns: there is no use applying an index to a Gender column, where there are only two values, male or female. An index on an integer field is more useful than one on character columns.
Clustered indexes are preferable to non-clustered indexes. Apply a covering index if your application performs the same query repeatedly.

15) Stored procedures prefixed with 'sp_'
Any stored procedure prefixed with 'sp_' is first searched for in the master database rather than the one it was created in. This causes a delay in the stored procedure being executed.

16) Refer to all stored procedures as dbo.sprocname
When calling a stored procedure you should include the owner name in the call, i.e. use EXEC dbo.spMyStoredProc instead of EXEC spMyStoredProc. Prefixing the stored procedure with the owner when executing it stops SQL Server from placing a COMPILE lock on the procedure while it determines whether all objects referenced in the code have the same owners as the objects in the current cached procedure plan.

17) Keep transactions as short as possible
If you use SQL transactions, try to keep them as short as possible. This helps database performance by reducing the number of locks. Move anything that doesn't specifically need to be within the transaction, like setting variables, SELECT statements, etc., outside it.

Improving performance in Asp.net applications

The following are the some of the guidelines to create a good ASP.NET application.

State Management
1) Disable session state when you are not using it. This can be done at the application level in the machine.config file or at page level. ASP.NET uses a built-in session state mechanism and also supports a custom session state model; however, the cost of session storage becomes heavy when the number of users and stored objects increases significantly. Turn off session state for pages which don't require it. Typically a web application has static as well as dynamic pages; on the static pages, which don't require session, session state can be turned off. Wherever you only need session data as read-only (you won't be updating it), session state can be made read-only.

To turn off Session State at page level,
<%@ Page EnableSessionState="False" %>

To make it ReadOnly at page level,
<%@ Page EnableSessionState="ReadOnly" %>

If your Website has a lot of content pages which are static and only certain pages require Session Data, consider turning off SessionState at the web.config level and enabling it only for those pages which require Session Data. Page-level settings always override the web.config settings. Session State can be turned off in the web.config as follows:

<pages enableSessionState="false"></pages>


2)The in-proc model of session management is the fastest of the three options. SQL Server option has the highest performance hit.

3)Minimize the amount and complexity of data stored in a session state. The larger and more complex the data is, the cost of serializing/deserializing of the data is higher (for SQL Server and State server options).

4)Avoid storing STA COM objects in session state.

5)Use the ReadOnly attribute when you can.

6)Use static properties instead of the Application object to store application state.

7)Use application state to share static, read-only data.

8)Do not store single-threaded apartment (STA) COM objects in application state.

9)Turn off View State
View State is the mechanism which stores the page's and controls' data and state across round trips. However, View State can grow heavy, significantly increasing the page size and load, so it should be turned off for pages which don't require it. ViewState can be turned off at page level as follows:

<%@ Page EnableViewState="False" %>

10)Save server control view state only when necessary.

11)Minimize the number of objects you store in view state.

Server Round Trips
1)Use Server.Transfer for redirecting between pages in the same application. This will avoid unnecessary client-side redirection.

2)Avoid unnecessary round-trips to the server - Code like validating user input can be handled at the client side itself.

3)Use Page.IsPostback to avoid unnecessary processing on a round trip.
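The Page.IsPostBack check can be sketched as follows (BindData is an illustrative method name):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Do the expensive work only on the first request,
    // not on every postback.
    if (!IsPostBack)
    {
        BindData();
    }
}
```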

Use of Server Controls
1) Use server controls only in appropriate circumstances. Even though they are very easy to implement, they are expensive because they consume server resources. Sometimes it is easier to use simple rendering or data-binding.

2)Avoid creating deep hierarchies of controls.

Exception Handling
1) Don't rely on exceptions in your code; exceptions reduce performance. Check for the error condition directly rather than catching an exception.

// Consider changing this...
try { result = 100 / num;}
catch (Exception e) { result = 0;}

// to this...
if (num != 0)
result = 100 / num;
else
result = 0;

2)To guarantee resources are cleaned up when an exception occurs, use a try/finally block. Close the resources in the finally clause. Using a try/finally block ensures that resources are disposed even if an exception occurs. Open your connection just before needing it, and close it as soon as you're done with it. Your motto should always be "get in, get/save data, get out." If you use different objects, make sure you call the Dispose method of the object or the Close method if one is provided. Failing to call Close or Dispose prolongs the life of the object in memory long after the client stops using it. This defers the cleanup and can contribute to memory pressure. Database connection and files are examples of shared resources that should be explicitly closed.
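The try/finally pattern described above can be sketched like this, assuming System.Data.SqlClient and an illustrative connection string:

```csharp
SqlConnection conn = new SqlConnection(connectionString);
try
{
    conn.Open();          // open as late as possible
    // ... get/save data ...
}
finally
{
    conn.Close();         // runs even if an exception occurred
}
```

The C# using statement compiles down to exactly this try/finally-plus-Dispose pattern and is usually the cleaner way to write it.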

Com Components
1)Port call-intensive COM components to managed code. While doing Interop try avoiding lot of calls. The cost of marshalling the data ranges from relatively cheap (i.e. int, bool) to more expensive (i.e. strings). Strings, a common type of data exchanged for web applications, can be expensive because all strings in the CLR are in Unicode, but COM or native methods may require other types of encoding (i.e. ASCII).

2)Release the COM objects or native resources as soon as the usage is over. This will allow other requests to utilize them, as well as reducing performance issues, such as having the GC release them at a later point.

DataBase Related:
1)Use SQL server stored procedures for data access.

2)Use the SQLDataReader class for a fast forward-only data cursor.

3) The DataGrid is a quick way of displaying data, but it slows down the application. The faster alternative is to render the data yourself for simple cases, but that is difficult to maintain. A middle-of-the-road solution is the Repeater control, which is light, efficient, customizable and programmable.

4)Cache data and page output whenever possible.
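A sketch of the forward-only SqlDataReader mentioned in point 2 (the connection string, query and column types are illustrative):

```csharp
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("SELECT Day, Amount FROM Sales", conn))
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())      // forward-only, read-only cursor
        {
            string day = reader.GetString(0);
            decimal amount = reader.GetDecimal(1);
            // ... render the row ...
        }
    }
}
```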

Collections:
1) Avoid the use of ArrayList, because any object added to an ArrayList is stored as System.Object, and when retrieving values back from the ArrayList these objects have to be unboxed to return the actual value type. So it is recommended to use custom typed collections instead of ArrayList. .NET provides a strongly typed collection class for strings in System.Collections.Specialized, namely StringCollection.

2) Reconsider the use of Hashtable; instead, try other dictionaries such as StringDictionary, NameValueCollection or HybridDictionary. A Hashtable pays off when a larger number of values is stored; for small collections the specialized dictionaries are lighter.

3) Enumerating over collections is sometimes more expensive than index access in a loop. This is because the CLR can sometimes optimize away array index and bounds checks in for loops, but cannot do so in foreach-style code.

Others:
1)Disable debug mode before deploying the application.

2) For applications that rely extensively on external resources, consider enabling web gardening on multiprocessor computers. The ASP.NET process model helps enable scalability by distributing work to several processes, one per CPU. If the application uses a slow database server or calls COM objects that access external resources, web gardening could be a solution.

3)Do a "pre-batch" compilation. To achieve this, request a page from the site.

4)Setting the AutoEventWireup attribute to false in the Machine.config file means that the page will not match method names to events and hook them up (for example, Page_Load). If page developers want to use these events, they will need to override the methods in the base class (for example, they will need to override Page.OnLoad for the page load event instead of using a Page_Load method). If you disable AutoEventWireup, your pages will get a slight performance boost by leaving the event wiring to the page author instead of performing it automatically.

5) Buffering is on by default, and turning it off will slow down performance. Don't write your own string buffering - Response.Write automatically buffers any response without the user needing to do it. Use multiple Response.Write calls rather than building strings via concatenation, especially when concatenating long strings.

Tuesday, February 16, 2010

Best Data Access Layer Practices

A Data Access Layer comprises a collection of classes, interfaces and their methods and properties that are used to perform CRUD (Create, Read, Update and Delete) operations in the application. It encapsulates the code used to connect to the database and perform these operations, and it works as a link between the business entities in your application and the actual data storage layer. You typically use the Data Access Layer to create and populate business entities with data from the database, and to update and store business entities in the database.

Features of a Data Access Layer

A Data Access Layer should provide the following features:

■Connect to the database
■Open and Close connections
■Support for CRUD operations
■Transaction management
■Provider independence
■Concurrency management

Some of the points that we should keep in mind are :
•Create all your database access routines as generic, versatile objects rather than as client-side repeated-code methods. All your UI should do is interact with these components, not work out any of the details.

•If you want to page data or need disconnected-data functionality, use a DataSet as the preferred method; otherwise use a DataReader for all your data retrieval. For XML this translates to an XmlReader, and for text files a StreamReader - both the equivalent of a DataReader for their respective file types.


•Use the correct managed data provider for your particular database, ex. System.Data.SqlClient for SQL Server , System.Data.OleDb for Access, System.Data.OracleClient for Oracle , etc.


•Use strongly-typed DataSets over the standard untyped ones when possible, as this yields better performance. A generated typed DataSet is basically an early-bound class inheriting from the DataSet base class, as opposed to the typical late-bound, plain DataSet. It allows for greater flexibility in dealing with your data, and you'll be dealing with easier-to-read code, whereby you can access fields and tables by customizable names instead of the conventional collection-based way.

•Take full advantage of .NET caching, as this will significantly boost performance and greatly diminish persistent database interaction. One important caveat: if you decide to cache your data from within your data-access component, the Page-level caching methods won't apply; within a component you cache by using the HttpContext.Cache property (accessed via HttpContext.Current).


•Use parameterized stored procedures alongside the .NET Command class's Prepare() method, which caches your query for future use:
objCommand.Prepare();

•Make chunky calls to your database rather than smaller, chatty calls. It is better to group all similar, associated calls into one SQL Server access - in other words, use one solid connection to retrieve as much as you can, as opposed to multiple ones.


•Take advantage of connection pooling (which requires all your connection strings to be identical) by storing your connection string in your web.config file. Furthermore, you may find that your application scales a lot better under increased network traffic by doubling or even tripling the default Max Pool Size of 100 to 300, and bumping up the default Connect Timeout to 60 seconds. Additionally, if you're not enlisting any transactional procedures, include enlist=false; in your database's connection string for added performance.
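A sketch of such a connection string in web.config (the server, database and pool values are illustrative; Max Pool Size, Connect Timeout and Enlist are standard SqlClient connection-string keywords):

```xml
<connectionStrings>
  <add name="MainDB"
       connectionString="Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;Max Pool Size=300;Connect Timeout=60;Enlist=false;" />
</connectionStrings>
```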


•Finally, remember to close, clear and dispose of all your data objects no matter what. If you would like to further confirm that your database connection is indeed closed, you could write, for example:

System.Diagnostics.Debug.Assert(conn.State == ConnectionState.Closed);


A good article on building a data access layer is as follows :http://msdn.microsoft.com/en-us/library/aa581776.aspx

Best Exception Handling Practices

Best Practices for handling exceptions

1) Throwing an exception is expensive. For this reason, you should use exceptions only in exceptional situations and not to control regular logic flow.

For example:

void EmpExists(string EmpId)
{
    // ... search for employee
    if (dr.Read(EmpId) == 0)   // no record found
    {
        throw new Exception("Emp not found");
    }
}

The best practice is :

bool EmpExists(string EmpId)
{
    // ... search for employee
    if (dr.Read(EmpId) == 0)   // no record found
    {
        return false;
    }
    return true;
}

2) Avoid exception handling inside loops; if it is really necessary, place the try/catch block around the loop.

3) Adopt the standard way of handling exceptions through try, catch and finally blocks. This is the recommended approach for handling exceptional error conditions in managed code; finally blocks ensure that resources are closed even in the event of an exception.

For example:

SqlConnection conn = new SqlConnection("...");
try
{
    conn.Open();
    // ... some operation
    // ... some additional operations
}
catch (Exception ex)
{
    // handle the exception
}
finally
{
    if (conn.State == ConnectionState.Open)
        conn.Close();   // closing the connection
}

4) Wherever possible, use validation code to avoid unnecessary exceptions. If you know that a specific avoidable condition can happen, write code to check for it. For example, checking an object or variable for null before using it can significantly increase performance by avoiding exceptions.

For example:

double result = 0;      // firstVal and secondVal are integers
try
{
    result = firstVal / secondVal;
}
catch (DivideByZeroException e)
{
    // handle the divide-by-zero exception
}

This is better written as:

double result = 0;
if (secondVal != 0)
    result = firstVal / secondVal;
else
    result = System.Double.NaN;

5) Do not rethrow exceptions unnecessarily: the cost of using throw to rethrow an existing exception is approximately the same as creating a new exception, and rethrowing also makes the code harder to debug.

For example:

try
{
    // Perform some operations that may throw an exception
}
catch (Exception e)
{
    // Try to handle the exception with e
    throw;   // rethrows, preserving the original stack trace
}

6) The recommended way to handle different errors differently is to implement a series of catch statements, ordering your exceptions from most specific to most generic. For example, to handle file-related exceptions it is better to catch FileNotFoundException, DirectoryNotFoundException, SecurityException, IOException and UnauthorizedAccessException, and only then Exception.

7) Database errors should be captured through SqlException or OleDbException.
•Use the ConnectionState property to check connection availability instead of relying on an exception.

•Use try/finally more often; finally provides the opportunity to close the connection (the using statement provides the same functionality).

•Use the specific handler to capture specific exception, in few scenarios if you know that there is possibility for a specific error like database related error it can be catch through SqlException or OleDbException as below.

try
{ ...
}
catch (SqlException sqlexp) // specific exception handler
{ ...
}
catch (Exception ex) // Generic exception handler
{ ...
}
8) When creating user-defined exceptions, you must ensure that the metadata for the exceptions is available to code executing remotely, including when exceptions occur across application domains. For example, suppose Application Domain A creates Application Domain B, which executes code that throws an exception. For Application Domain A to properly catch and handle the exception, it must be able to find the assembly containing the exception thrown by Application Domain B. If Application Domain B throws an exception that is contained in an assembly under its application base, but not under Application Domain A's application base, Application Domain A will not be able to find the exception and the common language runtime will throw a FileNotFoundException. To avoid this situation, you can deploy the assembly containing the exception information in two ways:
a) Put the assembly into a common application base shared by both application domains
- or -
b)If the domains do not share a common application base, sign the assembly containing the exception information with a strong name and deploy the assembly into the global assembly cache.

9) In most cases, use the predefined exceptions types. Define new exception types only for programmatic scenarios. Introduce a new exception class to enable a programmer to take a different action in code based on the exception class.

10) Do not derive user-defined exceptions directly from the Exception base class; for most applications, derive custom exceptions from the ApplicationException class. (Note that later .NET design guidelines reversed this advice and recommend deriving directly from Exception, since ApplicationException adds no value.)

11)Include a localized description string in every exception. When the user sees an error message, it is derived from the description string of the exception that was thrown, rather than from the exception class.

12)Use grammatically correct error messages, including ending punctuation. Each sentence in a description string of an exception should end in a period.

13)Provide Exception properties for programmatic access. Include extra information in an exception (in addition to the description string) only when there is a programmatic scenario where the additional information is useful.

14) Throw an InvalidOperationException if a property set or method call is not appropriate given the object's current state.
Throw an ArgumentException or a class derived from ArgumentException if invalid parameters are passed.
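A sketch of both rules together (the class members are illustrative):

```csharp
public void Withdraw(decimal amount)
{
    // Invalid parameter -> ArgumentException family
    if (amount <= 0)
        throw new ArgumentOutOfRangeException("amount",
            "Amount must be positive.");

    // Invalid object state -> InvalidOperationException
    if (!isAccountOpen)
        throw new InvalidOperationException(
            "Cannot withdraw from a closed account.");

    balance -= amount;
}
```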

It is recommended to use the "Exception Management Application Block" provided by Microsoft. It is a simple and extensible framework for logging exception information to the event log, and it can be customized to write exception information to other data sources without affecting your application code; it implements all of these best practices and has been tested in Microsoft's labs.