
Wednesday, April 27, 2011

Software Development Life Cycle (SDLC)

The Systems Development Life Cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project from an initial feasibility study through maintenance of the completed application. 
Various SDLC methodologies have been developed to guide the processes involved, including the waterfall model (the original SDLC method), rapid application development (RAD), joint application development (JAD), the fountain model and the spiral model. Often, several models are combined into some sort of hybrid methodology. Documentation is crucial regardless of the type of model chosen or devised for any application, and is usually done in parallel with the development process. Some methods work better for specific types of projects, but in the final analysis, the most important factor for the success of a project may be how closely the particular plan was followed.



The different types of SDLC models are:

1. Waterfall model
2. Iterative model
3. Spiral model
4. Prototype model
5. RAD model (Rapid Application Development)
6. COCOMO model (Constructive Cost Model)
7. V-model
8. Fish model
9. Component Assembly Model
 
1. Waterfall Model:
 
This is also known as the Classic Life Cycle Model or the Linear Sequential Model.
This model has the following activities. 

          
A. System/Information Engineering and Modeling

As software is always part of a larger system (or business), work begins by establishing the requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when the software must interface with other elements such as hardware, people and other resources. The system is the basic and very critical requirement for the existence of software in any entity. So if the system is not in place, it should be engineered and put in place. In some cases, to extract the maximum output, the system should be re-engineered and spruced up. Once the ideal system is engineered or tuned, the development team studies the software requirements for the system.

B. Software Requirement Analysis

This process is also known as the feasibility study. In this phase, the development team visits the customer and studies their system. They investigate the need for possible software automation in the given system. By the end of the feasibility study, the team furnishes a document that holds the specific recommendations for the candidate system. It also includes personnel assignments, costs, the project schedule, target dates and so on. The requirement-gathering process is intensified and focused specifically on software. To understand the nature of the program(s) to be built, the system engineer or "analyst" must understand the information domain for the software, as well as the required function, behavior, performance and interfacing. The essential purpose of this phase is to find the need and to define the problem that needs to be solved.



C. System Analysis and Design

In this phase of the software development process, the software's overall structure and its nuances are defined. In terms of client/server technology, the number of tiers needed for the package architecture, the database design, the data structure design and so on are all defined in this phase. A software development model is thus created. Analysis and design are very crucial in the whole development cycle. Any glitch in the design phase could be very expensive to resolve in a later stage of development, so much care is taken during this phase. The logical system of the product is developed in this phase.

D. Code Generation

The design must be translated into a machine-readable form. The code generation step performs this task. If the design has been performed in a detailed manner, code generation can be accomplished without much complication. Programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages like C, C++, Pascal and Java are used for coding, and the right programming language is chosen with respect to the type of application.

E. Testing

Once the code is generated, software program testing begins.
Different testing methodologies are available to uncover the bugs introduced during the previous phases. Different testing tools and methodologies are already available, and some companies build their own testing tools that are tailor-made for their own development operations.

F. Maintenance

The software will definitely undergo change once it is delivered to the customer. There can be many reasons for this change to occur. Change could happen because of some unexpected input values into the system. In addition, the changes in the system could directly affect the software operations. The software should be developed to accommodate changes that could happen during the post implementation period.
 
2. Iterative Model:
An iterative lifecycle model does not attempt to start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed in order to identify further requirements. This process is then repeated, producing a new version of the software for each cycle of the model. Consider an iterative lifecycle model which consists of repeating the following four phases in sequence:

- A Requirements phase, in which the requirements for the software are gathered and analyzed. Iteration should eventually result in a requirements phase that produces a complete and final specification of requirements.

- A Design phase, in which a software solution to meet the requirements is designed. This may be a new design, or an extension of an earlier design.

- An Implementation and Test phase, when the software is coded, integrated and tested.

- A Review phase, in which the software is evaluated, the current requirements are reviewed, and changes and additions to requirements proposed.

For each cycle of the model, a decision has to be made as to whether the software produced by the cycle will be discarded, or kept as a starting point for the next cycle (sometimes referred to as incremental prototyping). Eventually a point will be reached where the requirements are complete and the software can be delivered, or it becomes impossible to enhance the software as required, and a fresh start has to be made.

The iterative lifecycle model can be likened to producing software by successive approximation. Drawing an analogy with mathematical methods that use successive approximation to arrive at a final solution, the benefit of such methods depends on how rapidly they converge on a solution.

3.Spiral model:

This model, proposed by Barry Boehm in 1988, attempts to combine the strengths of various models. It incorporates the elements of the prototype-driven approach along with the classic software life cycle. It also takes into account risk assessment, whose outcome determines whether to take up the next phase of the designing activity.

Unlike all other models which view designing as a linear process, this model views it as a spiral process. This is done by representing iterative designing cycles as an expanding spiral.
Typically the inner cycles represent the early phase of requirement analysis along with prototyping to refine the requirement definition, and the outer spirals are progressively representative of the classic software designing life cycle.
At every spiral there is a risk assessment phase to evaluate the designing efforts and the associated risk involved for that particular iteration. At the end of each spiral there is a review phase so that the current spiral can be reviewed and the next phase can be planned.
The six major activities of each design spiral are represented by six major tasks:
1. Customer Communication
2. Planning
3. Risk Analysis
4. Engineering
5. Construction and Release
6. Customer Evaluation

Advantages
1. It facilitates a high amount of risk analysis.
2. This software designing model is more suitable for designing and managing large software projects.
3. The software is produced early in the software life cycle.

Disadvantages
1. Risk analysis requires high expertise.
2. It is a costly model to use.
3. Not suitable for smaller projects.
4. There is a lack of explicit process guidance in determining objectives, constraints and alternatives.
5. This model is relatively new. It does not have many practitioners unlike the waterfall model or prototyping model.


4. Prototype Model:
Prototyping is a technique that provides a reduced functionality or limited performance version of the eventual software to be delivered to the user in the early stages of the software development process. If used judiciously, this approach helps to solidify user requirements earlier, thereby making the waterfall approach more effective.

Before proceeding with design and coding, a throwaway prototype is built to give the user a feel of the system. The development of the software prototype also involves design and coding, but this is not done in a formal manner. The user interacts with the prototype as he would with the eventual system and would therefore be in a better position to specify his requirements in a more detailed manner. Iterations occur to refine the prototype to satisfy the needs of the user, while at the same time enabling the developer to better understand what needs to be done.

Disadvantages
1. In prototyping, as the prototype has to be discarded, some might argue that the cost involved is higher.
2. At times, while designing a prototype, the approach adopted is “quick and dirty” with the focus on quick development rather than quality.
3. The developer often makes implementation compromises in order to get a prototype working quickly.
5. RAD Model (Rapid Application Development):

The RAD model is a linear sequential software development process that emphasizes an extremely short development cycle. The RAD model is a "high speed" adaptation of the linear sequential model in which rapid development is achieved by using a component-based construction approach. Used primarily for information systems applications, the RAD approach encompasses the following phases:


A. Business modeling

The information flow among business functions is modeled in a way that answers the following questions:

What information drives the business process?
What information is generated?
Who generates it?
Where does the information go?
Who processes it?

B. Data modeling

The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business. The characteristics (called attributes) of each object are identified and the relationships between these objects are defined.

C. Process modeling

The data objects defined in the data-modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.

D. Application generation

The RAD model assumes the use of the RAD tools like VB, VC++, Delphi etc... rather than creating software using conventional third generation programming languages. The RAD model works to reuse existing program components (when possible) or create reusable components (when necessary). In all cases, automated tools are used to facilitate construction of the software.
 
E. Testing and turnover

Since the RAD process emphasizes reuse, many of the program components have already been tested. This minimizes the testing and development time. 

6. COCOMO Model (Constructive Cost Model):


 The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The model uses a basic regression formula, with parameters that are derived from historical project data and current project characteristics.

COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited due to its lack of factors to account for differences in project attributes (cost drivers). Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases.
a. Basic COCOMO:
Basic COCOMO computes software development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of lines of code (KLOC).
COCOMO applies to three classes of software projects:
  • Organic projects - "small" teams with "good" experience working with "less than rigid" requirements
  • Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid and less than rigid requirements
  • Embedded projects - developed within a set of "tight" constraints (hardware, software, operational, ......)
The basic COCOMO equations take the form:

Effort Applied (E) = a_b * (KLOC)^(b_b) [person-months]
Development Time (D) = c_b * (Effort Applied)^(d_b) [months]
People Required (P) = Effort Applied / Development Time [count]

where KLOC is the estimated number of delivered lines of code for the project (expressed in thousands). The coefficients a_b, b_b, c_b and d_b are given in the following table:

Software project   a_b   b_b    c_b   d_b
Organic            2.4   1.05   2.5   0.38
Semi-detached      3.0   1.12   2.5   0.35
Embedded           3.6   1.20   2.5   0.32
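
As a quick worked example with hypothetical numbers, an organic project estimated at 32 KLOC gives:

Effort Applied (E) = 2.4 * (32)^1.05 ≈ 91 person-months
Development Time (D) = 2.5 * (91)^0.38 ≈ 14 months
People Required (P) = 91 / 14 ≈ 6.5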

Basic COCOMO is good for quick estimate of software costs. However it does not account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and so on.

b. Intermediate COCOMO:

Intermediate COCOMO computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel and project attributes. This extension considers four categories of cost drivers, each with a number of subsidiary attributes:
  • Product attributes
    • Required software reliability
    • Size of application database
    • Complexity of the product
  • Hardware attributes
    • Run-time performance constraints
    • Memory constraints
    • Volatility of the virtual machine environment
    • Required turnaround time
  • Personnel attributes
    • Analyst capability
    • Software engineering capability
    • Applications experience
    • Virtual machine experience
    • Programming language experience
  • Project attributes
    • Use of software tools
    • Application of software engineering methods
    • Required development schedule

The Intermediate COCOMO formula takes the form:

E = a_i * (KLoC)^(b_i) * EAF

where E is the effort applied in person-months, KLoC is the estimated number of thousands of delivered lines of code for the project, and EAF is the Effort Adjustment Factor calculated from the cost drivers above. The coefficient a_i and the exponent b_i are given in the following table:

Software project   a_i   b_i
Organic            3.2   1.05
Semi-detached      3.0   1.12
Embedded           2.8   1.20

The Development Time (D) calculation uses E in the same way as in Basic COCOMO.
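
As a rough continuation of the hypothetical 32 KLOC organic project above, suppose the product of the selected cost-driver ratings gives an EAF of 1.10; the estimate then becomes E = 3.2 * (32)^1.05 * 1.10 ≈ 134 person-months, with D derived from E exactly as in Basic COCOMO.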

c. Detailed COCOMO:
Detailed COCOMO incorporates all characteristics of the intermediate version with an assessment of the cost drivers' impact on each step (analysis, design, etc.) of the software engineering process.
The detailed model uses different effort multipliers for each cost driver attribute. These phase-sensitive effort multipliers are used to determine the amount of effort required to complete each phase.
In detailed COCOMO, the effort is calculated as a function of program size and a set of cost drivers given according to each phase of the software life cycle.
The five phases of detailed COCOMO are:

  • plan and requirement.

  • system design.

  • detailed design.

  • module code and test.

  • integration and test.


7. V-Model:

The V-Model is a software development model designed to simplify the understanding of the complexity associated with developing systems.
 

 

The V-model consists of a number of phases. The Verification Phases are on the left hand side of the V, the Coding Phase is at the bottom of the V and the Validation Phases are on the right hand side of the V.
Requirements analysis:
In the Requirements analysis phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform. However, it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated.
The user requirements document will typically describe the system's functional, physical, interface, performance, data and security requirements as expected by the user. It is the document which the business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase.
System Design:
Systems design is the phase where system engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue, a resolution is found, and the user requirements document is edited accordingly.
The software specification document, which serves as a blueprint for the development phase, is generated in this phase. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows and reports for better understanding. Other technical documentation like entity diagrams and the data dictionary will also be produced in this phase. The documents for system testing are prepared in this phase.
Architecture Design:
The phase of the design of computer architecture and software architecture can also be referred to as high-level design. The baseline in selecting the architecture is that it should realize all of the requirements. The high-level design typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.
Module Design:
The module design phase can also be referred to as low-level design. The designed system is broken up into smaller units or modules and each of them is explained so that the programmer can start coding directly. The low-level design document or program specifications will contain a detailed functional logic of the module in pseudocode, and the database tables with all elements, including their type and size. The unit test design is developed in this stage.

Advantages of V-model

  • It saves an ample amount of time, and since the testing team is involved early on, they develop a very good understanding of the project at the very beginning.
  • It reduces the cost of fixing defects, since defects are found in the early stages.
  • It is a fast method

Disadvantages of V-model

  • The biggest disadvantage of the V-model is that it is very rigid and the least flexible. If any changes happen midway, not only the requirements documents but also the test documentation needs to be updated.
  • It can be implemented only by some big companies.
  • It needs an established process to implement.

8. Fish Model:
This is a development model followed by process-oriented companies. Even though it is a time-consuming and expensive model, one can rest assured that both verification and validation are done in parallel by separate teams in each phase of the model. So there are two reports generated by the end of each phase, one for validation and one for verification. Because all the stages except the last delivery and maintenance phase are covered by the two parallel processes, the structure of this model looks like a skeleton between two parallel lines, hence the name fish model.

  Advantages: 

This strict process results in products of exceptional quality, so one of the important objectives is achieved.
 


 Disadvantages: 

 Time consuming and expensive.
 
9. Component Assembly Model:
Object technologies provide the technical framework for a component-based process model for software engineering. The object-oriented paradigm emphasizes the creation of classes that encapsulate both data and the algorithms that are used to manipulate the data. If properly designed and implemented, object-oriented classes are reusable across different applications and computer-based system architectures. The Component Assembly Model leads to software reusability. The integration/assembly of already existing software components accelerates the development process. Nowadays many component libraries are available on the Internet. If the right components are chosen, the integration aspect is made much simpler.

 

Working with XML Data in SQL Server 2005

XML is one of the data types in SQL Server 2005. It is considered one of the significant changes introduced in SQL Server 2005. In the previous versions, storing and retrieving XML data were possible, but it was not as easy and extensible as it is in the current version. The XML data type lets the user store XML data in the SQL Server database. In this article we will be discussing how to store, retrieve and query XML data.
Support for XML is integrated into all the components of SQL Server 2005. SQL Server 2005 supports XML in the following ways:
  • SQL Server 2005 supports XML data type.
  • The ability to specify an XQuery query against XML data stored in columns and variables of the xml type.
  • Enhancements to OPENROWSET to allow bulk loading of XML data.
  • Enhancements to the FOR XML clause and OPENXML function introduced in SQL Server 2000.
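For instance, the FOR XML clause turns an ordinary rowset into XML; a minimal sketch against the dbo.Student table created in the example below would be:

SELECT StudentID, StudentName
FROM dbo.Student
FOR XML AUTO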
Let’s now take a look at an example. This example demonstrates how to create a table that contains an XML column. We will also insert some records into the table.

1. Create table with XML column
Create table dbo.Student(StudentID int,StudentName varchar(50),Studentcontactdetails XML)

2. Insert valid XML data into the table:
Insert into dbo.Student values(1,'ABC','<ROOT><Student>123,XYZ street,London</Student></ROOT>')

If we try to insert invalid XML, it will result in an error message.
Insert into dbo.Student values(1,'ABC','<ROOT><Student>123,XYZ street,London')
Msg 9400, Level 16, State 1, Line 1
XML parsing: line 1, character 36, unexpected end of input

We can also create variables of XML type.
Declare @x xml

Typed vs. Untyped XML
Untyped XML can be stored in any form, but the XML should be well formed. When the user inserts a value into the XML column, a check occurs to see whether the data that is about to be inserted matches the XML standard. The value is not validated against any XML schema. Untyped XML provides a more flexible way to store data.
Typed XML is used when the user wants to validate the XML against an XML schema. The XML schema has to be mentioned when creating the XML data type. This is done by referring to the XML schema, which has to be initially stored and catalogued in the database. Thus, XML that is validated against an XML schema is typed XML. Typed XML is declared in the following way:
Declare @x XML(schema.xmlschemacollection)
The typed XML is suitable in scenarios where the XML data which is stored in the database has to match a strict definition such as, for example, an invoice.
The XML schema can be created as follows:
CREATE XML SCHEMA COLLECTION [ <relational_schema>. ]sql_identifier AS Expression
  • relational_schema: Identifies the relational schema name. If not specified, the default relational schema is assumed.
  • sql_identifier: Is the SQL identifier for the XML schema collection.
  • Expression: Is a string constant or scalar variable. It can be of varchar, varbinary, nvarchar, nvarbinary, or xml type.
Example:

CREATE XML SCHEMA COLLECTION Chemicals AS '
<schema xmlns="http://www.w3.org/2001/XMLSchema">
<element name="root">
<complexType>
<sequence>
<element name="ChemicalName" type="string"/>
<element name="Symbol" type="string"/>
</sequence>
</complexType>
</element>
</schema>'

After the XML schema is catalogued, the information about it can be obtained by using the “XML_schema_namespace” function.
Example:
SELECT xml_schema_namespace(N'dbo',N'Chemicals')


Once the XML schema is defined, we can use it and refer to it in CREATE TABLE or DECLARE XML statement.

Example:
1)
Declare @xml xml(Chemicals)
set @xml='<root><ChemicalName>Sulphuric Acid</ChemicalName><Symbol>H2SO4</Symbol></root>'

2)
Insert into students (studentname,Labid,Experimentid,Experimentchemical)
values ('ABC',1,1,'<root><ChemicalName>Sulphuric Acid</ChemicalName><Symbol>H2SO4</Symbol></root>')


Querying XML Data
The methods which can be used to retrieve data from an xml datatype are as follows:

1) Query method: This method returns a fragment of untyped XML. The following example shows how to return a value from within an XML column.
Example:
select experimentchemical.query('/root/ChemicalName') from students
Result:
This returns only the matching fragment of untyped XML for each row:
<ChemicalName>Sulphuric Acid</ChemicalName>

To retrieve only the values, we can use the data function, as sketched below.
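
For instance, a sketch of such a query (reusing the illustrative students table) wraps the path in the XQuery data() function so that only the text value is returned, without the surrounding tags:

select Experimentchemical.query('data(/root/ChemicalName)') from students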


2) Value Method: The value method is similar to the query method. The only difference is that the value method accepts an additional parameter to determine the resulting scalar data type.
Example:
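Assume, for illustration, that @x is an xml variable holding customer descriptions of the same shape as the document used with the Modify method later in this article:

declare @x xml
set @x='<root>
<CustomerDescription CustID="101" CustomerName="ABCL Industries Limited"/>
<CustomerDescription CustID="102" CustomerName="HAL Industries Limited"/>
</root>'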


If the user wants to see the second customer's customer ID, then the query would be as follows:
select @x.value('(/root/CustomerDescription/@CustID)[2]','int')

3) Exist method
The exist method takes an expression as input. This expression selects a single node within the XML document and returns true (1) if that node exists, or false (0) if it does not.

Example:
select Experimentchemical.exist('/root/ChemicalName') from students
The above query will return true for all items where the student has the chemical details to be used in the experiment.



The exist method can be used in the where clause in the following way:
select * from students where Experimentchemical.exist('/root/ChemicalName')=1

4) Modify Method
The modify method can be used to manipulate the XML data stored in a table. The modify method consists of three substatements:
  • INSERT
  • DELETE
  • REPLACE
Example:
1)
declare @x xml
declare @custid int
set @x='<root>
<CustomerDescription CustID="101" CustomerName="ABCL Industries Limited">
<Phonenumber>
<Work>1234567890</Work>
<Residence>1434546678</Residence>
</Phonenumber>
</CustomerDescription>
<CustomerDescription CustID="102" CustomerName="HAL Industries Limited">
<Phonenumber>
<Work>1234567890</Work>
<Residence>1434546678</Residence>
</Phonenumber>
</CustomerDescription>
</root>'
select @x
set @x.modify('delete /root/CustomerDescription/@CustID')
select @x
Result:
<root>
<CustomerDescription CustomerName="ABCL Industries Limited">
<Phonenumber>
<Work>1234567890</Work>
<Residence>1434546678</Residence>
</Phonenumber>
</CustomerDescription>
<CustomerDescription CustomerName="HAL Industries Limited">
<Phonenumber>
<Work>1234567890</Work>
<Residence>1434546678</Residence>
</Phonenumber>
</CustomerDescription>
</root>


2)
To insert a new node or fragment of XML, we can use the insert statement. The syntax for it is as follows:
Syntax:
insert
Expression1 (
{as first | as last} into | after | before
Expression2
)

Arguments:
Expression1
Identifies one or more nodes to insert.
into
Nodes identified by Expression1 are inserted as direct descendents (child nodes) of the node identified by Expression2. If the node in Expression2 already has one or more child nodes, the user must use either as first or as last to specify where the new node has to be added. For example, at the start or at the end of the child list, respectively.
after
Nodes identified by Expression1 are inserted as siblings directly after the node identified by Expression2.
before
Nodes identified by Expression1 are inserted as siblings directly before the node identified by Expression2.
Expression2
Identifies a node. The nodes identified in Expression1 are inserted relative to the node identified by Expression2.

Example:
USE AdventureWorks;
GO
DECLARE @myDoc xml      
SET @myDoc = '<Root>      
<ProductDescription ProductID="1" ProductName="Road Bike">      
<Features>      
</Features>      
</ProductDescription>      
</Root>'      
SELECT @myDoc      
-- insert first feature child (no need to specify as first or as last)      
SET @myDoc.modify('      
insert <Maintenance>3 year parts and labor extended maintenance is available</Maintenance>
into (/Root/ProductDescription/Features)[1]')
SELECT @myDoc 

Result:
<Root>
<ProductDescription ProductID="1" ProductName="Road Bike">
<Features>
<Maintenance>3 year parts and labor extended maintenance is available</Maintenance>
</Features>
</ProductDescription>
</Root>
For more examples, refer to "insert (XML DML)" in SQL Server 2005 Books Online.

5) Nodes Method
The nodes method can be used to extract data from an XML document and generate subnodes that can be used for various purposes, such as creating new content or inserting content into new tables.
Example:
declare @x xml
set @x='<Root><row id="1"><AirportCode>MUM</AirportCode><AirportName>Mumbai</AirportName></row>
<row id="2"><AirportCode>MAS</AirportCode><AirportName>Madras</AirportName></row>
<row id="3"></row></Root>'

Select T.c.query('.') as result
from @x.nodes('/Root/row') T(c) 
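
For this input, the query returns one row per /Root/row node; the result should look roughly like this:

<row id="1"><AirportCode>MUM</AirportCode><AirportName>Mumbai</AirportName></row>
<row id="2"><AirportCode>MAS</AirportCode><AirportName>Madras</AirportName></row>
<row id="3"/>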



Eventdata function
The EVENTDATA function returns information about server or database events. EVENTDATA is called when an event notification fires, and the results are returned to the specified service broker. EVENTDATA can also be used inside the body of a DDL or logon trigger. More information about DDL triggers can be found in the article "Understanding DDL triggers in SQL Server 2005". EVENTDATA returns a value of type xml. The XML schema includes information about the following:
  • The time of the event.
  • The System Process ID (SPID) of the connection when the trigger executed.
  • The type of event that fired the trigger
Depending on the event type, the schema then includes additional information such as the database in which the event occurred, the object against which the event occurred, and the Transact-SQL statement of the event.
Example:

The following example creates a DDL trigger to prevent new tables from being created in the database. The Transact-SQL statement that fires the trigger is captured by using XQuery against the XML data that is generated by Eventdata.

USE AdventureWorks;
GO
CREATE TRIGGER safety
ON DATABASE
FOR CREATE_TABLE
AS
PRINT 'CREATE TABLE Issued.'
SELECT EVENTDATA().value
('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)')
RAISERROR ('New tables cannot be created in this database.', 16, 1)
ROLLBACK
;
GO
--Test the trigger.
CREATE TABLE NewTable (Column1 int);
GO
--Drop the trigger.
DROP TRIGGER safety
ON DATABASE
GO

XML Indexes:
For faster retrieval of XML data, the user can create indexes on these columns. XML indexes fall into the following categories:
1) Primary XML index
2) Secondary XML index
The first index on the xml type column must be the primary XML index. Using the primary XML index, the following types of secondary indexes are supported: PATH, VALUE, and PROPERTY. Depending on the type of queries, these secondary indexes might help improve query performance.
Primary XML Index:
The primary XML index is a shredded and persisted representation of the XML BLOBs in the xml data type column. For each XML binary large object (BLOB) in the column, the index creates several rows of data. The number of rows in the index is approximately equal to the number of nodes in the XML binary large object.
Each row stores the following node information:
  1. Tag name such as an element or attribute name.
  2. Node value.
  3. Node type such as an element node, attribute node, or text node.
  4. Document order information, represented by an internal node identifier.
  5. Path from each node to the root of the XML tree. This column is searched for path expressions in the query.
  6. Primary key of the base table.
Secondary XML Index:
To enhance search performance, secondary XML indexes can be created. A primary XML index must first exist before the user creates secondary indexes. These are the types of secondary XML index:
  • Path secondary XML index: If queries are based on path expressions, a Path secondary index may be able to speed up the search. The Path secondary index is helpful when you have queries that specify the exist() method in the WHERE clause.
  • Value secondary XML index: If queries are value based and the path is not fully specified or includes a wildcard, you can obtain faster results by building a Value secondary XML index that is built on node values in the primary XML index.
  • Property secondary XML index: Queries that retrieve one or more values from individual XML instances might benefit from a Property index. This scenario occurs when you retrieve object properties by using the value() method of the xml type and when the primary key value of the object is known.
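As a rough sketch against the dbo.Student table created earlier (the index names are arbitrary, and the table is assumed to have a clustered primary key, which XML indexes require), the primary and secondary XML indexes could be created as follows:

-- A primary XML index must exist before any secondary index can be created
CREATE PRIMARY XML INDEX PXML_Student_ContactDetails
ON dbo.Student (Studentcontactdetails)

-- Secondary indexes are built on top of the primary XML index
CREATE XML INDEX IXML_Student_ContactDetails_Path
ON dbo.Student (Studentcontactdetails)
USING XML INDEX PXML_Student_ContactDetails FOR PATH

CREATE XML INDEX IXML_Student_ContactDetails_Value
ON dbo.Student (Studentcontactdetails)
USING XML INDEX PXML_Student_ContactDetails FOR VALUE

CREATE XML INDEX IXML_Student_ContactDetails_Property
ON dbo.Student (Studentcontactdetails)
USING XML INDEX PXML_Student_ContactDetails FOR PROPERTY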

Getting Started with the ADO.NET Entity Framework Part 4

The ADO.NET Entity Framework, the next generation of Microsoft's data access technology, is an extended Object Relational Mapping (ORM) technology that makes it easy to tie together the data in your database with the objects in your applications. This is done by abstracting the object model of an application from its relational or logical model. It is an extended ORM in the sense that it provides many additional features over an ORM. Some of these features are:
  • Entity Inheritance and Composition
  • Identity Resolution and Change Tracking
  • LINQ Support
  • The Object Service Layer
In this article, we will cover the following points:
  • Creating an Entity Data Model
  • Introducing the Entity Data Source Control
  • Implementing our first application using the ADO.NET Entity Framework
We will start this article with a discussion on how we can create an Entity Data Model from a Payroll database.

Creating an Entity Data Model
You can create the ADO.NET Entity Data Model in one of two ways:
  • Use the ADO.NET Entity Data Model Designer
  • Use the command line Entity Data Model Designer called EdmGen.exe
We will first take a look at how we can design an Entity Data Model using the ADO.NET Entity Data Model Designer which is a Visual Studio wizard that is enabled after you install ADO.NET Entity Framework and its tools. It provides a graphical interface that you can use to generate an Entity Data Model.

Creating the Payroll Entity Data Model Using the ADO.NET Entity Data Model Designer
Here again are the tables of our Payroll database that we will use to generate the data model:
  • Employee
  • Designation
  • Department
  • Salary
  • Provident Fund
To create an entity data model using the ADO.NET Entity Data Model Designer, follow these simple steps:
1. Open Visual Studio.NET and create a solution for a new web application project as seen below and save with a name.





2. Switch to the Solution Explorer, right click and click on Add New Item as seen in the following screenshot:



3. Next, select ADO.NET Entity Data Model from the list of the templates displayed as shown in the following screenshot:

 


4. Name the Entity Data Model PayrollModel and click on Add.


5. Select Generate from database from the Entity Data Model Wizard as shown in the following screenshot:



Note that you can also use the Empty model template to create the Entity Data Model yourself.

If you select the Empty Data Model template and click on next, the following screen appears:



As you can see from the above figure, you can use this template to create the Entity Data Model yourself. You can create the Entity Types and their relationships manually by dragging items from the toolbox. We will not use this template in our discussion here. So, let's get to the next step.

6. Click on Next in the Entity Data Model Wizard window shown earlier.

7. A modal dialog box will now appear, prompting you to choose your connection as shown in the following figure:



8. Click on New Connection. You will now need to specify the connection properties and parameters as shown in the following figure:



We will use a dot to specify the database server name. This implies that we will be using the database server of the localhost, which is the current system in use.
9. After you specify the necessary user name, password, and the server name, you can test your connection using the Test Connection button. When you do so, the message Test connection succeeded gets displayed in the message box as shown in the previous figure.

10. When you click on OK on the Test connection dialog box, the following screen appears:



Note the Entity Connection String that is generated automatically. This connection string will be saved in the connectionStrings section of your application's web.config file. This is how it will look:

    <connectionStrings>
      <add name="PayrollEntities" connectionString="metadata=res://
        *;provider=System.Data.SqlClient;provider connection
        string=&quot;Data Source=.;Initial Catalog=Payroll;User
        ID=sa;Password=joydip1@3;MultipleActiveResultSets=True&quot;"
        providerName="System.Data.EntityClient" />
    </connectionStrings>

11. When you click on Next in the previous figure, the following screen appears:



12. Expand the Tables node and specify the database objects that you require in the Entity Data Model to be generated as shown in the following figure:



13. Click on Finish to generate the Entity Data Model.

Your Entity Data Model has been generated and saved in a file named PayrollModel.edmx. We are done creating our first Entity Data Model using the ADO.NET Entity Data Model Designer tool.

Note how the Entity Types in the above model are related to one another. These relationships have been generated automatically by the Entity Data Model Designer based on the relationships between the tables of the Payroll database we created in the previous chapter.

 
We will now take a look at how to create a data model using the Entity Data Model generation tool called EdmGen. The EdmGen.exe command line tool can be used to do one or more of the following:
  • Generate the .csdl, .msl, and .ssdl files as part of the Entity Data Model
  • Generate object classes from a .csdl file
  • Validate an Entity Data Model



The EdmGen.exe command line tool generates the Entity Data Model as a set of three files: .csdl, .msl, and .ssdl. If you have used the ADO.NET Entity Data Model Designer to generate your Entity Data Model, the .edmx file generated will contain the CSDL, MSL, and the SSDL sections. You will have a single .edmx file that bundles all of these sections into it. On the other hand, if you use the EdmGen.exe tool to generate the Entity Data Model, you would find three distinctly separate files with .csdl, .msl or .ssdl extensions.
Here is a list of the major options of the EdmGen.exe command line tool:



Note that you basically need to pass the connection string, specify the mode, and also the project name of the artifact files (.csdl, .msl, and the .ssdl files) to be created. To create the Entity Data Model for our database, open a command window and type in the following:

edmgen /mode:fullgeneration /c:"Data Source=.;Initial Catalog=Payroll;User ID=sa;Password=joydip1@3;" /p:Payroll

This will create a full ADO.NET Entity Data Model for our database. The output is shown in the following figure:


 You can now see the list of the files that have been generated:


You can validate the Payroll Entity Data Model that was just created, using the ValidateArtifacts option of the EdmGen command line tool as shown below:

EdmGen /mode:ValidateArtifacts /inssdl:Payroll.ssdl /inmsl:Payroll.msl /incsdl:Payroll.csdl

When you execute the above command, the output will be similar to what is shown in the following figure:



As you can see in the previous figure, there are no warnings or errors displayed. So, our Entity Data Model is perfect.

The section that follows discusses the new Entity Data Source control, which was introduced as part of the Visual Studio .NET 2008 SP1 release.

Data controls are those that can be bound to data from external data sources. These data sources may include databases, XML files or flat files. ASP.NET 2.0 introduced some data source controls with a powerful data binding technique that
eliminated the need to write lengthy code for binding data to data controls.

In ASP.NET the term Data Binding implies binding the controls to data retrieved from a data source and providing a read or write connectivity between these controls and the data that they are bound to.

The Entity Data Source control is an example of a data control that is included as part of the Visual Studio 2008 SP1 release and can be used to bind data retrieved from an Entity Data Model to the data bound controls of ASP.NET. If Visual Studio 2008 SP1 is installed, the EntityDataSource control can be seen listed in the Data section of the toolbox.

If the EntityDataSource control is not in the toolbox, follow these steps:

1. Right-click on the Toolbox and select the Choose Items option as shown in the following figure:



2. From the list of the components displayed, scroll down to locate the EntityDataSource in the .NET Framework Components tab. This is the first component selected in the below figure:



3. Select the checkbox next to the EntityDataSource component and click on OK. The ADO.NET Entity Data Source control is now added to the toolbox as shown in the following figure:



If the EntityDataSource component is not listed in the list of the components displayed in the Choose Toolbox Items window, it has to be added manually. To do this, click on the Browse button in the Choose Toolbox Items window, locate the System.Web.Entity.dll in the folder where Microsoft .NET Framework 3.5 has been installed and click OK.


This article is the final part of the four-part series on the ADO.NET Entity Framework. It describes how to use the Entity Data Model and the Entity Data Source control to implement a program using the Entity Framework, with a GridView control to display the bound data. Refer to the solution we created earlier using the Entity Data Model Designer and follow the steps below:

1. Drag and drop an Entity Data Source control from the toolbox onto your Default.aspx web form.

2. Click on the Configure Data Source option to specify the data source as shown below.



3. Specify the Connection String and DefaultContainerName and then click on Next.

4. Specify the fields you would want to retrieve from the database table and
when done click Finish.

5. Drag and drop a GridView control from the toolbox onto the Default.aspx web form as seen below.



6. Use the Choose Data Source option of the GridView control to associate its data source with the Entity Data Source control we created earlier. Refer to the following figure:



Below is how the markup code of the GridView control looks with its templates defined. Note how the DataSourceID of the GridView control has been associated with the Entity Data Source control created earlier.

<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns=
"False" DataKeyNames="EmployeeID"
DataSourceID="SqlDataSource1" BorderColor="Black" BorderStyle="Solid" Width="400px">
<Columns>
<asp:BoundField DataField="EmployeeID" HeaderText="Employee ID" ReadOnly="True" SortExpression="EmployeeID" />
<asp:BoundField DataField="FirstName" HeaderText=
"First Name" SortExpression="FirstName" />
<asp:BoundField DataField="LastName" HeaderText=
"Last Name" SortExpression="LastName" />
<asp:BoundField DataField="Address" HeaderText="Address" SortExpression="Address" />
</Columns>
</asp:GridView>

It's now done! When the application is executed, the output should be similar to what is shown below:



Summary
This series of articles has discussed how to get started with the ADO.NET Entity Framework. We learned how to create an Entity Data Model and use it along with the Entity Data Source control to bind data to a GridView data control.


Working with Exceptions in ADO.NET

ADO.NET is a very rich data access technology with plenty of powerful features: improved performance, an optimized SQL provider, seamless support for XML, and the ability to work in connected and disconnected modes, to name a few. Handling exceptions properly is one of the prime concerns when working with any data access technology. The new version of ADO.NET includes powerful support for working with exceptions efficiently. This article throws light on how to handle exceptions efficiently when working with ADO.NET and highlights the best practices that can be followed in this context.


What are Exceptions?

An exception is an error that occurs at runtime. If it is not handled properly, it terminates the normal flow of the program. According to MSDN, “An exception is any error condition or unexpected behavior encountered by an executing program. Exceptions can be raised because of a fault in your code or in code you call (such as a shared library), unavailable operating system resources, unexpected conditions the common language runtime encounters (such as code that cannot be verified), and so on. Your application can recover from some of these conditions, but not others. While you can recover from most application exceptions, you cannot recover from most runtime exceptions.” The .NET runtime creates an exception object to represent an exception when it occurs. It also creates an Exception Information Table for each executable. There are four different types of exception handlers, namely:
  • A finally handler
  • A fault handler
  • A type-filtered handler
  • A user-filtered handler

Exceptions in ADO.NET

In ADO.NET 1.1, there was no common base class for ADO.NET exceptions. According to Frans Bouma, "every provider has its own exception for reporting errors, and they're not derived from a generic ADO.NET exception. This thus makes it hard to catch db specific exceptions in generic code and handle it". With ADO.NET 2.0 however, we have a new common exception base class called the System.Data.Common.DbException class. The generic DbException class in ADO.NET 2.0 exposes all the information a provider specific exception would require. This class enables catching the provider specific exceptions in a generic manner. However, exceptions should be avoided at all cost. Exceptions are expensive; they slow down the application, and use resources, regardless of whether or not they are handled. In addition, "logic by exception" can make debugging very tedious.
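As a minimal sketch (the connection string and query here are only placeholders), provider-specific errors such as SqlException can be caught generically through this common base class:

using System;
using System.Data.Common;
using System.Data.SqlClient;

class DbExceptionExample
{
    static void Main()
    {
        // Placeholder connection string; substitute your own server and database.
        string connectionString = "Data Source=.;Initial Catalog=Payroll;Integrated Security=True";
        try
        {
            using (SqlConnection sqlConnection = new SqlConnection(connectionString))
            {
                sqlConnection.Open();
                SqlCommand sqlCommand = new SqlCommand("SELECT 1", sqlConnection);
                sqlCommand.ExecuteScalar();
            }
        }
        catch (DbException ex)
        {
            // SqlException derives from DbException in ADO.NET 2.0, so database
            // errors from any provider can be handled here in a generic way.
            Console.WriteLine("Database error: " + ex.Message);
        }
    }
}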

Exception Handlers

Exceptions are handled using the try, catch and finally blocks. The try block contains code that raises an exception. The exceptions that are raised in the try block are caught in one or more appropriate catch blocks. Why appropriate? The reason is that if an exception has occurred, only one of the catch blocks would be executed that is most appropriate to the type of the exception that has occurred. The finally block contains any necessary cleanup code. It should be noted that a try block should contain one or more catch blocks or at least a finally block. Why is a finally block required?
A finally block is executed whether or not an exception occurs. Hence, it is particularly useful for cleanup operations. MSDN states, “the purpose of a finally statement is to ensure that the necessary cleanup of objects, usually objects that are holding external resources, happens immediately, even if an exception is thrown”. Let us consider that we have opened a database connection in a try block. If an exception occurs, it is caught in the catch block just after the try block as shown in the code snippet below:
string connectionString = ...; // Some connection string
SqlConnection sqlConnection = null;
try
{
    sqlConnection = new SqlConnection(connectionString);
    sqlConnection.Open();
    //Some code
}
catch (Exception ex)
{
    //Some exception handling code
}
finally
{
    // Close the connection only if it was actually created
    if (sqlConnection != null)
    {
        sqlConnection.Close();
    }
}
Remember that connections should be opened as late as possible and released as early as possible; database connections should be closed as soon as you are done using them for efficient connection pooling and hence enhanced performance. Refer to my article on connection pooling and its efficient usage, Understanding Connection Pooling in .NET. It is advisable to open database connections in the try block and close them in the finally block. This ensures that the connections are properly closed irrespective of whether an exception has occurred or not. In the code example above, the connection is guaranteed to be closed as we have made a call to the Close() method on the SqlConnection instance in the finally block. I would, however, prefer to use the "using" statement when working with unmanaged resources. Let me explain what it is and how it can be used.

Using the “using” statement

The using statement can be used to specify a boundary for the object outside of which, the object is destroyed automatically. The runtime invokes the Dispose method of the objects that are specified within this statement when the control comes out of this block. This is why this is a preferred choice when using exceptions for managing resources in .NET. Refer to the following code that uses the “using” statement:
string connectionString = ...; // Some connection string
using (SqlConnection sqlConnection = new SqlConnection(connectionString))
{
  sqlConnection.Open();
  //Some code
}
Note that when the end of the using block is encountered, the Dispose() method is immediately called on the instance. When Dispose() is called on this connection instance, it checks to see whether the connection is open; if it is, it closes the connection implicitly prior to disposing of the instance. Please refer to my article, When and How to Use Dispose and Finalize in C#, for more information on when and how to use them appropriately.
The above code gets translated implicitly to:
string connectionString = ...; // Some connection string
SqlConnection sqlConnection = new SqlConnection(connectionString);
try
{
    sqlConnection.Open();
    //Some code
}
finally
{
    // The using statement calls Dispose through the IDisposable interface
    ((IDisposable)sqlConnection).Dispose();
}
Remember to keep the try block as short as possible. Note that in the code example above, we have just opened the connection in the try block. Do not use any unnecessary code/logic in the try block that is not supposed to throw any exception. Do not catch any exception that you cannot handle and avoid rethrowing exceptions unnecessarily as it is very expensive.

Prevent unnecessary database hit

One of the most useful of all features of ADO.NET is that you can attach messages to each row of data in a DataSet object. The SqlDataAdapter class attaches error messages to the rows of a DataSet if a specific database action has not been successfully completed. We can then check whether there are any errors in a DataSet prior to sending the same for updating the database using the HasErrors property of the DataSet instance. This, if used judiciously, can prevent an unnecessary database hit. Please refer to the code snippet that follows:
DataSet newDataSet = previousDataSet.GetChanges(DataRowState.Modified);
    if (newDataSet.HasErrors)
    {
      // If there are errors take appropriate action   
    }
    else
    {
      // Necessary code to update the database
    }
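
Continuing the snippet above, a minimal sketch for inspecting the row-level errors (assuming newDataSet is the DataSet shown above) could look like this:

foreach (DataTable table in newDataSet.Tables)
{
    if (table.HasErrors)
    {
        // GetErrors() returns only the rows that have an error message attached
        foreach (DataRow row in table.GetErrors())
        {
            Console.WriteLine("Row error: " + row.RowError);
        }
    }
}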

Conclusion

Exceptions should be used appropriately in ADO.NET and only used when required -- in exceptional cases. Even if they come in handy, exceptions tend to slow down the execution speed of the program and consume more resources. In this article, I have examined how to make proper use of exceptions when working with ADO.NET.

Working with ADO.NET Transactions

A transaction is a group of operations combined into a logical unit of work that is either guaranteed to be executed as a whole or rolled back. Transactions help the database satisfy the ACID properties (Atomicity, Consistency, Isolation, and Durability). Transaction processing is an indispensable part of ADO.NET. It guarantees that a block of statements will either be executed in its entirety or rolled back (i.e., none of the statements will be executed). Transaction processing has improved a lot in ADO.NET 2.0. This article discusses how we can work with transactions in both ADO.NET 1.1 and 2.0.

Implementing Transactions in ADO.NET

Note that in ADO.NET, the transactions are started by calling the BeginTransaction method of the connection class. This method returns an object of type SqlTransaction.
Other ADO.NET connection classes like OleDbConnection, OracleConnection also have similar methods. Once you are done executing the necessary statements within the transaction unit/block, make a call to the Commit method of the given SqlTransaction object, or you can roll back the transaction using the Rollback method, depending on your requirements (if any error occurs when the transaction unit/block was executed).
To work with transactions in ADO.NET, you require an open connection instance and a transaction instance. Then you need to invoke the necessary methods as stated later in this article.  Transactions are supported in ADO.NET by the SqlTransaction class that belongs to the System.Data.SqlClient namespace.
The two main properties of this class are as follows:
  • Connection: This indicates the SqlConnection instance that the transaction instance is associated with
  • IsolationLevel: This specifies the IsolationLevel of the transaction
The following are the methods of this class that are noteworthy:
Commit()   This method is called to commit the transaction
Rollback()  This method can be invoked to roll back a transaction. Note that a transaction can only be rolled back before it has been committed, that is, while it is still pending.
Save()       This method creates a save point in the transaction. This save point can be used to roll back a portion of the transaction at a later point in time.

The following are the steps to implement transaction processing in ADO.NET:
  • Connect to the database
  • Create a SqlCommand instance with the necessary parameters
  • Open the database connection using the connection instance
  • Call the BeginTransaction method of the Connection object to mark the beginning of the transaction
  • Execute the sql statements using the command instance
  • Call the Commit method of the Transaction object to complete the
    transaction, or the Rollback method to cancel or abort the transaction
  • Close the connection to the database
The following code snippet shows how we can implement transaction processing using ADO.NET in our applications.
string connectionString = ...; //Some connection string
SqlConnection sqlConnection = new SqlConnection(connectionString);
sqlConnection.Open();

SqlTransaction sqlTransaction = sqlConnection.BeginTransaction();

SqlCommand sqlCommand = new SqlCommand();
sqlCommand.Connection = sqlConnection;
sqlCommand.Transaction = sqlTransaction;

try
{
sqlCommand.CommandText = "Insert into Employee (EmpCode, EmpName) VALUES (1, 'Joydip')";
sqlCommand.ExecuteNonQuery();
sqlCommand.CommandText = "Insert into Dept (DeptCode, DeptName, EmpCode) VALUES (9, 'Software', 1)";
sqlCommand.ExecuteNonQuery();
sqlTransaction.Commit();
//Usual code
}

catch(Exception e)
{
sqlTransaction.Rollback();
//Usual code
}

finally
{
sqlConnection.Close();
}
The next piece of code illustrates how we can use the “using” statement for the above code. According to MSDN, the “using” statement, “defines a scope, outside of which an object or objects will be disposed.
A using statement can be exited either when the end of the using statement is reached or if an exception is thrown and control leaves the statement block before the end of the statement”.
using (SqlConnection sqlConnection = new SqlConnection(connectionString))
{
    SqlCommand command = sqlConnection.CreateCommand();
    SqlTransaction transaction = null;

    try
    {
        sqlConnection.Open();
        transaction = sqlConnection.BeginTransaction();

        command.Transaction = transaction;

        command.CommandText = "Insert into employee (empID, empName) values (1, 'Joydip')";
        command.ExecuteNonQuery();

        command.CommandText = "Insert into dept (deptID, deptName, empID) values (9, 'Software', 1)";
        command.ExecuteNonQuery();

        transaction.Commit();
    }
    catch (Exception)
    {
        // Roll back only if the transaction was actually started
        if (transaction != null)
        {
            transaction.Rollback();
        }
        throw;
    }
    finally
    {
        sqlConnection.Close();
    }
}
Microsoft's ADO.NET 2.0 added a lot of new features over its predecessor to provide more flexibility and ease of use. As far as transactions are concerned, a new namespace called System.Transactions has been introduced that promises significantly improved support for distributed transactions. It contains a class called TransactionScope that can run a set of statements and determine whether the objects in the scope support transactions. If the transaction completes successfully, the changes are committed to the database; otherwise, they are rolled back. We need to indicate that the transaction block is complete by calling the TransactionScope.Complete method explicitly; otherwise, the transaction is rolled back when the scope instance is disposed.
The following piece of code illustrates what we have learnt so far in this section.
using (System.Transactions.TransactionScope transactionScope = new System.Transactions.TransactionScope())
{
SqlConnection sqlConnection = new SqlConnection(connectionString);
string sqlString = "Update emp set empName = 'Joydip Kanjilal' where empID = 9";
SqlCommand cmd1 = new SqlCommand(sqlString, sqlConnection);
sqlConnection.Open();
cmd1.ExecuteNonQuery();
sqlConnection.Close();
transactionScope.Complete(); // mark the transaction as complete; otherwise it is rolled back on Dispose
}
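If you need to control the isolation level or the timeout of the scope, the TransactionScope constructor also accepts a TransactionOptions instance. A minimal sketch (the isolation level and timeout chosen here are only illustrative):
System.Transactions.TransactionOptions transactionOptions = new System.Transactions.TransactionOptions();
transactionOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
transactionOptions.Timeout = TimeSpan.FromSeconds(30);

using (System.Transactions.TransactionScope transactionScope =
new System.Transactions.TransactionScope(System.Transactions.TransactionScopeOption.Required, transactionOptions))
{
// Execute the database commands that should participate in the transaction here
transactionScope.Complete();
}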

TransactionScope also has support for distributed transactions, so we can use it to implement transactions that span multiple database connections. The following piece of code shows how we can implement transactional support for multiple databases using the TransactionScope class.

using (TransactionScope transactionScope = new TransactionScope())
{
using (SqlConnection codesDatabaseConnection = new SqlConnection(codesDatabaseConnectionString))
{
SqlCommand sqlCommandCodes = codesDatabaseConnection.CreateCommand();
sqlCommandCodes.CommandText = "Insert Into codes (codeID,codeText) values (1,'Test')";
codesDatabaseConnection.Open();
sqlCommandCodes.ExecuteNonQuery();
codesDatabaseConnection.Close();
}

using (SqlConnection statesDatabaseConnection = new SqlConnection(statesDatabaseConnectionString))
{
SqlCommand sqlCommandStates = statesDatabaseConnection.CreateCommand();
sqlCommandStates.CommandText = "Insert into States(stateID,stateName) values (1, 'Test')";
statesDatabaseConnection.Open();
sqlCommandStates.ExecuteNonQuery();
statesDatabaseConnection.Close();
}

transactionScope.Complete();
}

Points to be noted

It should be noted that the SqlTransaction object returned by the BeginTransaction() method has to be assigned to the Transaction property of the Command object; otherwise an InvalidOperationException will be thrown when the first query is executed. Likewise, the Connection instance should be opened by invoking its Open method prior to starting a new transaction; otherwise an InvalidOperationException will be thrown as well. To improve application performance, we should keep transactions (the transaction units/blocks that contain the statements to be executed as a batch) as short as possible. This helps minimize lock contention and hence increases throughput. Further, we should analyze whether or not we actually require a transaction for a batch of statements; do not wrap statements in transactions unnecessarily, as doing so can hurt performance for the reasons stated above.

Conclusion

Using transactions guarantees that a batch of statements is either executed in its entirety or rolled back, thereby preserving database integrity and data consistency. In a nutshell, to work with transactions, invoke the BeginTransaction() method of the appropriate database connection instance and then call either the Commit() or the Rollback() method on the returned transaction object, depending on the outcome. It should be noted, however, that transactions hold locks and may cause contention issues, so they should be kept as short as possible; this is the main performance cost of using transactions in our code. Used properly, though, they facilitate the design and implementation of robust applications with data security and consistency. This article has discussed transactions in detail, in plain language, with code examples to illustrate the concepts.

Implementing a Generic Data Access Layer in ADO.NET

A Data Access Layer (DAL) is an integral part of the design of any application. There are plenty of articles that discuss how we can implement a DAL using ADO.NET. Most of them have a constraint in the sense that they are not generic in nature; in other words, they are not provider independent. This series of articles discusses the implementation of a generic, i.e., provider-independent, Data Access Layer in ADO.NET. The basic prerequisites for this article are a proper understanding of ADO.NET and good coding skills in C#. I present the code examples in C#; however, with a little effort you can port them to VB.NET as well.

The Strategies Involved in Creating a Data Access Layer

Let us first understand why we need such a layer. I will start by discussing how an application designed using ADO.NET actually connects to the database and performs the CRUD (Create, Read, Update and Delete) operations.
First, you need to open the connection using a database provider. Fine, but what is a provider anyway? A provider is responsible for connecting to a specific database. Why specific? Because a provider for an Oracle database cannot be used to connect to a SQL Server database and vice versa. Next, you need a command object that can be used to execute the database commands of your choice. This is followed by the use of a DataReader, a DataSet or a DataTable instance to retrieve data from the database table (if you are performing a Read operation). When you use a DataSet, you need a DataAdapter as a bridge between the actual database and the DataSet instance.
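To make this concrete, here is a minimal sketch of a simple read using the SQL Server provider; the connection string and the Employee table are assumed to exist and are only illustrative:
using (SqlConnection connection = new SqlConnection(connectionString))
{
// The adapter acts as the bridge: it opens and closes the connection as needed while filling the DataSet
SqlDataAdapter adapter = new SqlDataAdapter("Select EmpCode, EmpName from Employee", connection);
DataSet dataSet = new DataSet();
adapter.Fill(dataSet);

foreach (DataRow row in dataSet.Tables[0].Rows)
{
Console.WriteLine(row["EmpName"]);
}
}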

Implementing the DAL Framework

With this in mind, let us design a provider independent Data Access Layer.
Let us first understand the ADO.NET Library. The major classes that constitute the ADO.NET library are:
  • Connection
  • Command
  • Data Reader
  • Data Adapter
The corresponding interfaces that these classes implement are:
  • IDbConnection
  • IDbCommand
  • IDataReader
  • IDbDataAdapter
The Data Providers that make up the library are each specific to the particular database they connect to. The following Data Providers are available in ADO.NET.
  • SQL Server Data Provider
  • Oracle Data Provider
  • ODBC Data Provider
  • OleDB Data Provider
Now we are all set to implement our DAL. The major components that constitute our DAL block are:
  • ProviderType (Enum)
  • DatabaseConnectionState (Enum)
  • StoredProcedureParameterDirection (Enum)
  • DBManager (Class)
  • DatabaseHelper (Class)
We will start our discussion with the enum data type that holds the data provider types. These provider types relate to the databases that we will be connecting to, depending on our requirements. The following code snippet illustrates the ProviderType enum; its values correspond to the supported data providers, along with a ConfigDefined member for a provider picked up from configuration.
public enum ProviderType
{
SqlServer, OleDb, Oracle, ODBC, ConfigDefined
}
Now, there may be situations where you need to either keep the database connection open or close it after a database operation is over. As an example, after you read data into a DataReader instance from the underlying database, you might need to keep the connection open for subsequent operations, or you may want to close it if it is no longer needed. Keeping this in mind, let us have an enum data type that houses two values corresponding to the database connection states we just discussed. The following is the code for the enum called DatabaseConnectionState.
public enum DatabaseConnectionState
{
KeepOpen, CloseOnExit
}
When you are executing the stored procedures, you might want to send data to the database or retrieve the same from the database. Accordingly, we have another enum called StoredProcedureParameterDirection that contains values that correspond to the parameter directions for the stored procedures that we would execute with the help of our DAL. The following is the code for this enum.
public enum StoredProcedureParameterDirection
{
Input, InputOutput, Output, ReturnValue
}
We need a factory class that returns a DbProviderFactory instance or a DbDataAdapter instance depending on the data provider in use. This class contains factory methods, which are typically static methods. What is a static method, anyway? This is an often misunderstood but very important concept. A static method (sometimes called a shared method, because it is shared by all instances of the class it belongs to) belongs to the class itself, while a non-static method belongs to an object of the class. A non-static method can only be called on an instance, whereas in C# a static method is invoked through the class name rather than through an object reference. Further, a static method can directly access only the static members of a class, unlike a non-static method, which can access both static and non-static members. The static methods in our factory class accept a value of the ProviderType enum that denotes the data provider in use.
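To make the distinction concrete, here is a small self-contained sketch (not part of the DAL) showing a static member shared by every instance alongside an instance member:
public class Counter
{
private int instanceCount = 0;     // belongs to each individual object
private static int totalCount = 0; // shared by all objects of the class

public void Increment()            // instance method: requires an object
{
instanceCount++;
totalCount++;
}

public static int GetTotalCount()  // static method: invoked through the class name
{
return totalCount;                 // can directly access only static members
}
}

// Usage:
// Counter counter = new Counter();
// counter.Increment();
// int total = Counter.GetTotalCount();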
The next class in our discussion is the DBFactory class, designed on the factory design pattern. Before we discuss the DBFactory class and its intent, let us understand what a factory design pattern is. What is a factory design pattern? The Factory pattern is responsible for providing an interface for the creation of objects, but allows the inherited classes to decide on the appropriate time of these instantiations.
The following is the source code for our DBFactory class. It contains two static methods called GetProvider and GetDataAdapter both of which accept an instance of the database provider type enum, i.e., ProviderType.
using System.Data.Common;
using System.Data.SqlClient;
using System.Data.OleDb;
using System.Data.Odbc;
using System.Data.OracleClient;
using System.Collections.Generic;
using System.Text;

namespace DataAccessLayer
{
internal class DBFactory
{
private static DbProviderFactory objFactory = null;

public static DbProviderFactory GetProvider(ProviderType provider)
{
switch (provider)
{
case ProviderType.SqlServer:
objFactory = SqlClientFactory.Instance;
break;
case ProviderType.OleDb:
objFactory = OleDbFactory.Instance;
break;
case ProviderType.Oracle:
objFactory = OracleClientFactory.Instance;
break;
case ProviderType.ODBC:
objFactory = OdbcFactory.Instance;
break;
}
return objFactory;
}

public static DbDataAdapter GetDataAdapter(ProviderType providerType)
{
switch (providerType)
{
case ProviderType.SqlServer:
return new SqlDataAdapter();
case ProviderType.OleDb:
return new OleDbDataAdapter();
case ProviderType.ODBC:
return new OdbcDataAdapter();
case ProviderType.Oracle:
return new OracleDataAdapter();
default:
return null;
}
}
}
}
Note that you have different providers for different databases, i.e., the database providers are all database specific. A DataAdapter, as we may recall, is a bridge between the database and the DataSet, a set of disconnected data. Though there is a different data reader for each data provider, there is only one type of DataSet. Why? Because a DataSet is a disconnected, in-memory representation of data; the schema of the database defines the schema of the DataSet.
Refer to the code example shown above. Both methods check the value of the ProviderType argument and accordingly return an appropriate DbProviderFactory or DbDataAdapter instance, respectively. Such methods are called factory methods.
The DatabaseHelper class encapsulates the various calls to the database that perform the CRUD operations. The DBManager class, which we will discuss later, acts as a wrapper on top of this class. The DatabaseHelper class has various methods to add parameters and to execute queries, stored procedures and so on.
Here is the code that illustrates how the connection to the database is established based on the chosen provider type, and how the command object is created.
public DatabaseHelper(string connectionstring, ProviderType provider)
{
this.strConnectionString = connectionstring;
objFactory = DBFactory.GetProvider(provider);
objConnection = objFactory.CreateConnection();
objCommand = objFactory.CreateCommand();
objConnection.ConnectionString = this.strConnectionString;
objCommand.Connection = objConnection;
}
Note: Depending on the data provider used, you need to use the command object that is specific to that provider, and your data reader should also be specific to that provider; this is why the DBFactory class shown above is used. Note that you use command objects to execute the database commands that contain the SQL statements. In addition, we will have overloaded versions of the AddParameter method to add parameters to the command objects so that we can pass parameters to stored procedures or SQL statements. Here is the simplest version of the AddParameter method.
internal int AddParameter(string name, object value)

{
DbParameter dbParameter = objFactory.CreateParameter();
dbParameter.ParameterName = name;
dbParameter.Value = value;
return objCommand.Parameters.Add(dbParameter);
}
While ParameterName identifies the unique name of the parameter to be passed, Value is the value of the parameter. Hence, if the ParameterName is "@EmpName", the parameter's value might be "Dinesh Sabat".
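For example, from within the DataAccessLayer code (the public DBManager wrapper discussed later exposes the same operation), a parameterized statement might be executed like this minimal sketch; the table, column and parameter values are only illustrative:
databaseHelper.AddParameter("@EmpName", "Dinesh Sabat");
int rowsAffected = databaseHelper.ExecuteNonQuery("Update Employee set EmpName = @EmpName where EmpCode = 1");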
In order to ensure that our Data Access Layer supports transactions, we have three methods that enable transaction support. Fine, but what is a transaction? A transaction is a unit of work that is guaranteed to be executed in its entirety or not executed at all. Here are those methods.
internal void BeginTransaction()

{
if (objConnection.State == System.Data.ConnectionState.Closed)
{
objConnection.Open();
}
objCommand.Transaction = objConnection.BeginTransaction();
}

 internal void CommitTransaction()
{
objCommand.Transaction.Commit();
objConnection.Close();
}

 internal void RollbackTransaction()
{
objCommand.Transaction.Rollback();
objConnection.Close();
}
Note that we have methods for beginning, committing and rolling back a transaction so that changes can be reverted when required.
We will have the following four methods for performing the CRUD (Create, Read, Update and Delete) operations against the database. These methods are:
ExecuteScalar()
ExecuteReader()
ExecuteNonQuery()
ExecuteDataSet()
The ExecuteScalar() method is used to read a single value from the database. The ExecuteReader() method returns a DataReader instance populated with rows of data from the database. The ExecuteNonQuery() method is used to insert, update or delete data, either using SQL statements or using stored procedures. The ExecuteDataSet() method returns a disconnected DataSet populated with the results of the query.
The following is the complete code for the DatabaseHelper class.
using System;

using System.Collections.Generic;
using System.Text;
using System.Data;
using System.Configuration;
using System.Data.Common;
using System.Data.SqlClient;
using System.Data.OleDb;
using System.Data.Odbc;
using System.IO;
using ApplicationFramework.Configuration;

 namespace ApplicationFramework.DataAccessLayer

{
public class DatabaseHelper : IDisposable
{
private string strConnectionString;
private DbConnection objConnection;
private DbCommand objCommand;
private DbProviderFactory objFactory = null;
private ParameterCache parameterCache = ParameterCache.GetParameterCache();
public DatabaseHelper(string connectionstring, ProviderType provider)

{

this.strConnectionString = connectionstring;
objFactory = DBFactory.GetProvider(provider);
objConnection = objFactory.CreateConnection();
objCommand = objFactory.CreateCommand();
objConnection.ConnectionString = this.strConnectionString;
objCommand.Connection = objConnection;

}

internal int AddParameter(string name, object value)

{

DbParameter dbParameter = objFactory.CreateParameter();
dbParameter.ParameterName = name;
dbParameter.Value = value;
return objCommand.Parameters.Add(dbParameter);

}

 internal int AddParameter(DbParameter parameter)

{

return objCommand.Parameters.Add(parameter);

}

 internal int AddParameter(string name, StoredProcedureParameterDirection parameterDirection)

{

DbParameter parameter = objFactory.CreateParameter();
parameter.ParameterName = name;
parameter.Value = String.Empty;
parameter.DbType = DbType.String;
parameter.Size = 50;
switch (parameterDirection)

{

case StoredProcedureParameterDirection.Input:
parameter.Direction = System.Data.ParameterDirection.Input;
break;
case StoredProcedureParameterDirection.Output:
parameter.Direction = System.Data.ParameterDirection.Output;
break;
case StoredProcedureParameterDirection.InputOutput:
parameter.Direction = System.Data.ParameterDirection.InputOutput;
break;
case StoredProcedureParameterDirection.ReturnValue:
parameter.Direction = System.Data.ParameterDirection.ReturnValue;
break;

}
 return objCommand.Parameters.Add(parameter);

}

internal int AddParameter(string name, object value, StoredProcedureParameterDirection parameterDirection)

{

DbParameter parameter = objFactory.CreateParameter();
parameter.ParameterName = name;
parameter.Value = value;
parameter.DbType = DbType.String;
parameter.Size = 50;
switch (parameterDirection)

{

case StoredProcedureParameterDirection.Input:
parameter.Direction = System.Data.ParameterDirection.Input;
break;
case StoredProcedureParameterDirection.Output:
parameter.Direction = System.Data.ParameterDirection.Output;
break;
case StoredProcedureParameterDirection.InputOutput:
parameter.Direction = System.Data.ParameterDirection.InputOutput;
break;
case StoredProcedureParameterDirection.ReturnValue:
parameter.Direction = System.Data.ParameterDirection.ReturnValue;
break;

}

 return objCommand.Parameters.Add(parameter);

}

 internal int AddParameter(string name, StoredProcedureParameterDirection parameterDirection, int size, DbType dbType)

{

DbParameter parameter = objFactory.CreateParameter();
parameter.ParameterName = name;
parameter.DbType = dbType;
parameter.Size = size;
switch (parameterDirection)

{

case StoredProcedureParameterDirection.Input:
parameter.Direction = System.Data.ParameterDirection.Input;
break;
case StoredProcedureParameterDirection.Output:
parameter.Direction = System.Data.ParameterDirection.Output;
break;
case StoredProcedureParameterDirection.InputOutput:
parameter.Direction = System.Data.ParameterDirection.InputOutput;
break;
case StoredProcedureParameterDirection.ReturnValue:
parameter.Direction = System.Data.ParameterDirection.ReturnValue;
break;

}

 return objCommand.Parameters.Add(parameter);

}

 internal int AddParameter(string name, object value, StoredProcedureParameterDirection parameterDirection, int size, DbType dbType)

{

DbParameter parameter = objFactory.CreateParameter();
parameter.ParameterName = name;
parameter.Value = value;
parameter.DbType = dbType;
parameter.Size = size;
switch (parameterDirection)

{

case StoredProcedureParameterDirection.Input:
parameter.Direction = System.Data.ParameterDirection.Input;
break;
case StoredProcedureParameterDirection.Output:
parameter.Direction = System.Data.ParameterDirection.Output;
break;
case StoredProcedureParameterDirection.InputOutput:
parameter.Direction = System.Data.ParameterDirection.InputOutput;
break;
case StoredProcedureParameterDirection.ReturnValue:
parameter.Direction = System.Data.ParameterDirection.ReturnValue;
break;

}

 return objCommand.Parameters.Add(parameter);

}

internal DbCommand Command
{
get
{
return objCommand;
}
}

internal DbConnection Connection
{
get
{
return objConnection;
}
}

internal void BeginTransaction()
{
if (objConnection.State == System.Data.ConnectionState.Closed)
{
objConnection.Open();
}
objCommand.Transaction = objConnection.BeginTransaction();
}

internal void CommitTransaction()
{
objCommand.Transaction.Commit();
objConnection.Close();
}

internal void RollbackTransaction()
{
objCommand.Transaction.Rollback();
objConnection.Close();
}

internal int ExecuteNonQuery(string query)

{

return ExecuteNonQuery(query, CommandType.Text, DatabaseConnectionState.CloseOnExit);

}

 internal int ExecuteNonQuery(string query, CommandType commandtype)

{

return ExecuteNonQuery(query, commandtype, DatabaseConnectionState.CloseOnExit);

}

 internal int ExecuteNonQuery(string query, DatabaseConnectionState connectionstate)

{

return ExecuteNonQuery(query, CommandType.Text, connectionstate);

}

 internal int ExecuteNonQuery(string query, CommandType commandtype, DatabaseConnectionState connectionstate)

{

objCommand.CommandText = query;

objCommand.CommandType = commandtype;

 int i = -1;

 try

{

if (objConnection.State == System.Data.ConnectionState.Closed)

{

objConnection.Open();

}

 i = objCommand.ExecuteNonQuery();

}

catch

{

throw;

}

finally

{

if (connectionstate == DatabaseConnectionState.CloseOnExit)

{

objConnection.Close();

}

}

 return i;

}

 internal object ExecuteScalar(string query)

{

return ExecuteScalar(query, CommandType.Text, DatabaseConnectionState.CloseOnExit);

}

 internal object ExecuteScalar(string query, CommandType commandtype)

{

return ExecuteScalar(query, commandtype, DatabaseConnectionState.CloseOnExit);

}

 internal object ExecuteScalar(string query, DatabaseConnectionState connectionstate)

{

return ExecuteScalar(query, CommandType.Text, connectionstate);

}

 internal object ExecuteScalar(string query, CommandType commandtype, DatabaseConnectionState connectionstate)

{

objCommand.CommandText = query;
objCommand.CommandType = commandtype;
object o = null;

try

{
if (objConnection.State == System.Data.ConnectionState.Closed)
{
objConnection.Open();
}
 o = objCommand.ExecuteScalar();
}
catch
{
throw;
}
finally
{
objCommand.Parameters.Clear();
if (connectionstate == DatabaseConnectionState.CloseOnExit)
{
objConnection.Close();
}

}

 return o;
}
 internal DbDataReader ExecuteReader(string query)

{
return ExecuteReader(query, CommandType.Text, DatabaseConnectionState.CloseOnExit);
}

 internal DbDataReader ExecuteReader(string query, CommandType commandtype)
{
return ExecuteReader(query, commandtype, DatabaseConnectionState.CloseOnExit);
}

 internal DbDataReader ExecuteReader(string query, DatabaseConnectionState connectionstate)
{
return ExecuteReader(query, CommandType.Text, connectionstate);
}

internal DbDataReader ExecuteReader(string query, CommandType commandtype, DatabaseConnectionState connectionstate)

{
objCommand.CommandText = query;
objCommand.CommandType = commandtype;
DbDataReader reader = null;

try
{
if (objConnection.State == System.Data.ConnectionState.Closed)
{
objConnection.Open();
}

if (connectionstate == DatabaseConnectionState.CloseOnExit)
{
reader = objCommand.ExecuteReader(CommandBehavior.CloseConnection);
}

else
{
reader = objCommand.ExecuteReader();
}

 }

catch
{
throw; // propagate the error instead of silently returning a null reader
}

finally
{
objCommand.Parameters.Clear();
}

 return reader;
}
 internal DataSet ExecuteDataSet(string query)
{
return ExecuteDataSet(query, CommandType.Text, DatabaseConnectionState.CloseOnExit);
}

 internal DataSet ExecuteDataSet(string query, CommandType commandtype)
{
return ExecuteDataSet(query, commandtype, DatabaseConnectionState.CloseOnExit);
}

 internal DataSet ExecuteDataSet(string query, DatabaseConnectionState connectionstate)
{
return ExecuteDataSet(query, CommandType.Text, connectionstate);
}

 internal DataSet ExecuteDataSet(string query, CommandType commandtype, DatabaseConnectionState connectionstate)
{
DbDataAdapter adapter = objFactory.CreateDataAdapter();
objCommand.CommandText = query;
objCommand.CommandType = commandtype;
adapter.SelectCommand = objCommand;
DataSet ds = new DataSet();

try
{
adapter.Fill(ds);
}

catch
{
throw;
}

finally
{
objCommand.Parameters.Clear();
if (connectionstate == DatabaseConnectionState.CloseOnExit)

{
if (objConnection.State == System.Data.ConnectionState.Open)
{

objConnection.Close();

}

}

}

return ds;
}
public void Dispose()
{
if (objConnection.State == ConnectionState.Open)
{

objConnection.Close();
objConnection.Dispose();

}
objCommand.Dispose();
}

 internal IDataReader ExecuteReader(string storedProcedureName, params object[] parameters)

{

objCommand.CommandText = storedProcedureName;
objCommand.CommandType = CommandType.StoredProcedure;
DbDataReader reader = null;

try
{

RetrieveParameters(objCommand);
SetParameterValues(objCommand, parameters);
if (objConnection.State == System.Data.ConnectionState.Closed)

{
objConnection.Open();
}

reader = objCommand.ExecuteReader();

}

catch
{
throw;
}

finally

{
objCommand.Parameters.Clear();
}

return reader;

}
 internal void SetParameterValues(DbCommand objCommand, object[] parameters)

{
int index = 0;
for (int i = 0; i < parameters.Length; i++)
{

DbParameter parameter = objCommand.Parameters[i + index];
SetParameterValue(objCommand, parameter.ParameterName, parameters[i]);

}

}

internal virtual void SetParameterValue(DbCommand dbCommand, string parameterName, object value)

{

dbCommand.Parameters[parameterName].Value = (value == null) ? DBNull.Value : value;

}

 internal void RetrieveParameters(DbCommand dbCommand)

{

if (parameterCache.ContainsParameters(Connection.ConnectionString, dbCommand.CommandText))
{
DbParameter[] parameters = parameterCache.GetParameters(Connection.ConnectionString, dbCommand.CommandText);
dbCommand.Parameters.AddRange(parameters);
}

else

{
string connectionString = Connection.ConnectionString;
dbCommand.Connection = Connection;
Connection.Open();
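// Note: SqlCommandBuilder.DeriveParameters accepts only a SqlCommand, so this
// fallback path assumes that the SQL Server provider is being used.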
SqlCommandBuilder.DeriveParameters(dbCommand as SqlCommand);
parameterCache.AddParameters(connectionString, dbCommand.CommandText, dbCommand.Parameters);

}

}

 internal object GetParameter(string name)

{
return objCommand.Parameters[name].Value;
}

}

}
 Note that most of the methods of the DatabaseHelper class have been marked as “internal” to prevent them from being called outside of the “ApplicationFramework.DataAccessLayer” namespace.
Now we come to the DBManager class, the wrapper class that encapsulates the calls to the DatabaseHelper class, which actually performs the CRUD operations on the underlying database. The DBManager class extends the DBManagerBase abstract class. The DBManagerBase class contains the definition of the Open() and Close() methods and some other public properties that are generic and can be used by any class that acts as a wrapper. We will have a look at the DBManagerBase class first.
The following code listing shows the DBManagerBase class.
using System;
using System.Collections.Generic;
using System.Text;
using System.Data;
using System.Configuration;
using System.Data.Common;
using System.Data.SqlClient;
using System.Data.OleDb;
using System.Data.Odbc;
using System.IO;

namespace ApplicationFramework.DataAccessLayer

{
public abstract class DBManagerBase

{

protected DatabaseHelper databaseHelper = null;
protected DbDataReader dbDataReader = null;
protected DataSet dataSet = null;
protected ProviderType providerType;
protected String connectionString = String.Empty;
protected bool isOpen = false;

public bool IsOpen

{

get
{
return isOpen;

}

set
{

isOpen = value;

}

}

 public string ConnectionString
{
get

{
return connectionString;
}

set

{
connectionString = value;
}

}

public DbConnection Connection
{
get
{

return databaseHelper.Connection;
}
}

public DbCommand Command

{
get
{

return databaseHelper.Command;
}

}

public ProviderType DBProvider
{
set
{
providerType = value;
}

get
{
return providerType;
}

}

public DataSet DBSet
{
get
{
return dataSet;
}

}

public DbDataReader DBReader
{
get
{
return dbDataReader;
}

}

protected void Open(string connectionString)

{
databaseHelper = new DatabaseHelper(connectionString, DBProvider);
}

protected void Close()
{
if (dbDataReader != null)
if (!dbDataReader.IsClosed)
dbDataReader.Close();
databaseHelper.Dispose();
}

public void BeginTransaction()
{
databaseHelper.BeginTransaction();
}

public void CommitTransaction()
{
databaseHelper.CommitTransaction();
}

 public void RollbackTransaction()
{
databaseHelper.RollbackTransaction();
}
}
}
Note that the DBManagerBase class contains the most common methods that are required: you can open or close a connection, and begin, commit or roll back a transaction. These methods remain the same and are mandatory in this context even if you decide to build another version of the DBManager class with more methods implemented in it.
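Taken together, a transactional update performed through the wrapper might look like the following minimal sketch (the DBManager class itself is listed a little further below; the table and column names are only illustrative):
DBManager dbManager = new DBManager();
dbManager.OpenConnection();
dbManager.BeginTransaction();
try
{
// KeepOpen ensures the connection stays open until the transaction is committed or rolled back
dbManager.ExecuteNonQuery("Update employee set EmpName = 'Joydip K' where EmpCode = 1", CommandType.Text, DatabaseConnectionState.KeepOpen);
dbManager.ExecuteNonQuery("Update dept set DeptName = 'IT' where DeptCode = 9", CommandType.Text, DatabaseConnectionState.KeepOpen);
dbManager.CommitTransaction();   // commits and closes the underlying connection
}
catch
{
dbManager.RollbackTransaction(); // undoes both updates and closes the underlying connection
}
finally
{
dbManager.CloseConnection();
}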

The DBManager class that extends the DBManagerBase abstract class contains a list of methods that can be used to execute stored procedures and queries and to return DataSet or DataReader instances. You can opt to keep your connection open after the ExecuteReader method is called so that you can use the live connection in subsequent operations on your database. The method names in the DBManager class reflect the operations they are meant to perform, so I do not think you will have any problems understanding what each of these methods is supposed to do.
Then, you have the AddParameter method that can be used to add parameters to your stored procedure so that, at the time of invoking the procedure, you can pass the parameters along. The connection string that we need to connect to our database can be set using the ConnectionString public property. The connection string is typically stored in a configuration file, and the DBManager class reads the configuration file to retrieve it.
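For instance, the parameterless OpenConnection method shown below expects an appSettings entry named "ConnectionString"; a minimal app.config/web.config sketch could look like this (the actual connection string will of course differ):
<configuration>
  <appSettings>
    <add key="ConnectionString" value="Data Source=(local);Initial Catalog=Test;Integrated Security=True"/>
  </appSettings>
</configuration>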
The provider type can be set through the DBProvider property, which is of the ProviderType enum type. Fine, but where are these values set, i.e., how and from where do we call the DBManager? Let us have a look at the DBManager class, followed by how we can use it to perform CRUD operations.
The following code listing depicts the DBManager class.
using System;
using System.Collections.Generic;
using System.Text;
using System.Data;
using System.Configuration;
using System.Data.Common;
using System.Data.SqlClient;
using System.Data.OleDb;
using System.Data.Odbc;
using System.IO;

namespace ApplicationFramework.DataAccessLayer

{
public sealed class DBManager : DBManagerBase
{
public void OpenConnection()
{
// ConfigurationManager replaces the obsolete ConfigurationSettings class
connectionString = ConfigurationManager.AppSettings["ConnectionString"];
base.Open(connectionString);
base.IsOpen = true;

}

public void OpenConnection(String connectionString)
{
base.Open(connectionString);
base.IsOpen = true;

}

public void CloseConnection()

{
if (base.isOpen)
base.Close();
base.IsOpen = false;

}

public int AddParameter(string name, object value)
{
return databaseHelper.AddParameter(name, value);
}

public int AddParameter(string name, StoredProcedureParameterDirection parameterDirection)
{
return databaseHelper.AddParameter(name, parameterDirection);

}

public int AddParameter(string name, object value, StoredProcedureParameterDirection parameterDirection)
{
return databaseHelper.AddParameter(name, value, parameterDirection);
}

public int AddParameter(string name, StoredProcedureParameterDirection parameterDirection, int size, DbType dbType)
{
return databaseHelper.AddParameter(name, parameterDirection, size, dbType);
}

public int AddParameter(string name, object value, StoredProcedureParameterDirection parameterDirection, int size, DbType dbType)
{
return databaseHelper.AddParameter(name, value, parameterDirection, size, dbType);
}

public object GetParameter(string name)
{
return databaseHelper.GetParameter(name);
}

public DbDataReader ExecuteReader(string query)
{
this.dbDataReader = databaseHelper.ExecuteReader(query);
return this.dbDataReader;
}

public DbDataReader ExecuteReader(string query, CommandType commandtype)
{
this.dbDataReader = databaseHelper.ExecuteReader(query, commandtype, DatabaseConnectionState.CloseOnExit);
return this.dbDataReader;
}

public IDataReader ExecuteReader(string storedProcedureName, params object[] parameters)
{
this.dbDataReader = (DbDataReader)databaseHelper.ExecuteReader(storedProcedureName, parameters);
return this.dbDataReader;
}

public DbDataReader ExecuteReader(string query, CommandType commandtype, DatabaseConnectionState connectionstate)
{
this.dbDataReader = databaseHelper.ExecuteReader(query, commandtype, connectionstate);
return this.dbDataReader;
}

public DbDataReader ExecuteReader(string query, DatabaseConnectionState connectionstate)
{
this.dbDataReader = databaseHelper.ExecuteReader(query, connectionstate);
return this.dbDataReader;
}

public object ExecuteScalar(string query)
{
return databaseHelper.ExecuteScalar(query);
}

public object ExecuteScalar(string query, CommandType commandtype)
{
return databaseHelper.ExecuteScalar(query, commandtype);
}

public object ExecuteScalar(string query, DatabaseConnectionState connectionstate)
{
return databaseHelper.ExecuteScalar(query, connectionstate);
}

public object ExecuteScalar(string query, CommandType commandtype, DatabaseConnectionState connectionstate)
{
return databaseHelper.ExecuteScalar(query, commandtype, connectionstate);
}

public DataSet ExecuteDataSet(string query)
{
this.dataSet = databaseHelper.ExecuteDataSet(query);
return this.dataSet;
}

public DataSet ExecuteDataSet(string query, CommandType commandtype)
{
this.dataSet = databaseHelper.ExecuteDataSet(query, commandtype);
return this.dataSet;
}

public int ExecuteNonQuery(string query, CommandType commandtype)
{
return databaseHelper.ExecuteNonQuery(query, commandtype);
}

public int ExecuteNonQuery(string query, CommandType commandtype, DatabaseConnectionState databaseConnectionState)
{
return databaseHelper.ExecuteNonQuery(query, commandtype, databaseConnectionState);
}
}
}

Using the DBManager class

You can make use of the DBManager class as shown in the code snippet below.
DBManager dbManager = new DBManager();
dbManager.OpenConnection();
dbManager.ExecuteReader("Select * from employee");

while (dbManager.DBReader.Read())
Response.Write(dbManager.DBReader["EmpName"].ToString());
dbManager.CloseConnection();
Note that the OpenConnection and the CloseConnection methods of the DBManager class invoke the Open and the Close methods of the DBManagerBase class internally.
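If you prefer working with disconnected data, ExecuteDataSet can be used in much the same way; a short sketch, again assuming the employee table used above:
DBManager dbManager = new DBManager();
dbManager.OpenConnection();
DataSet ds = dbManager.ExecuteDataSet("Select * from employee");
dbManager.CloseConnection();

// The DataSet is disconnected, so it can still be read after the connection has been closed
foreach (DataRow row in ds.Tables[0].Rows)
Response.Write(row["EmpName"].ToString());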
Similarly, you can use the DBManager class to insert data as shown in the code snippet below.
DBManager dbManager = new DBManager();
String sql = "insert into employee (EmpCode, EmpName) values ('E001''Joydip')";

try
{
dbManager.OpenConnection();
dbManager.ExecuteNonQuery(sql, CommandType.Text);
HttpContext.Current.Response.Write("<BR>" + "1 record added...");
}
catch (Exception e)
{
HttpContext.Current.Response.Write(e);
}
finally
{
dbManager.CloseConnection();
}
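Finally, stored procedures with parameters can be invoked through the same wrapper. A minimal sketch, assuming a hypothetical stored procedure named GetEmployeeByCode that takes an @EmpCode parameter and returns matching employee rows:
DBManager dbManager = new DBManager();
try
{
dbManager.OpenConnection();
dbManager.AddParameter("@EmpCode", "E001");
DbDataReader reader = dbManager.ExecuteReader("GetEmployeeByCode", CommandType.StoredProcedure);
while (reader.Read())
HttpContext.Current.Response.Write(reader["EmpName"].ToString());
}
finally
{
dbManager.CloseConnection();
}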