Shree Khanal – Architect’s Blog

Ms SQL Server Consultant

Writing Dynamic Stored Procedure

 Introduction

I have seen various developers in my company writing dynamic queries using variables in their stored procedures. They have been using variables to store large and complex queries. Recently I read the article How to Build Dynamic Stored Procedures by Robert Marda, which describes one way to do this. I have observed that most of the time only the WHERE clause needs to be dynamic; the SELECT list and FROM clause remain static. People tend to choose variables so that they can build the WHERE clause dynamically using IF conditions, but putting a query in a variable causes trouble at debugging and maintenance time. I was following this same path until I had to invest a considerable amount of time to identify and fix a syntax error in my query. When I revisited the query, I found that I only needed to build the WHERE clause dynamically; for a three-line WHERE clause I had put the whole query in a variable. After looking at various approaches I came up with a different solution for writing such queries. Here I will explain how we can eliminate the use of a variable while writing queries that produce the same result as a dynamically written query.

Decision

We can use variables to store the SQL query if:

• the columns in the SELECT list will be generated dynamically
• the source table names will be decided at runtime

If the SELECT list is going to be static and we need to take care of only the WHERE clause, then we can eliminate the use of a variable. The benefits are:

• less complexity
• easier syntax checking (the quote character (') often gives novice developers problems when a query is stored in a variable)
• easier maintenance

Process

To understand the different approach we will need some basic data, as explained below. Let's create a simple table, EmployeeDetails.

CREATE TABLE EmployeeDetails
(
Employee_Name VARCHAR(50),
Gender CHAR(1),
Age INT
)
GO

We are done with the empty table creation. Now fill the table with data.

INSERT INTO EmployeeDetails VALUES ('Sunil','M',30)
GO
INSERT INTO EmployeeDetails VALUES ('Jimmy','F',24)
GO
INSERT INTO EmployeeDetails VALUES ('David','M',25)
GO
INSERT INTO EmployeeDetails VALUES ('Ravina','F',21)
GO

 

We are ready with the data. Let's go ahead and write a stored procedure to fetch the details from EmployeeDetails depending on various input parameters.

Let's create a stored procedure named uspGetEmpDetails with two input parameters: @Gender and @Age.

We will default both parameters to NULL. This way the query results can be matched against whichever input values are actually provided.

CREATE PROCEDURE uspGetEmpDetails
@Gender CHAR(1) = NULL,
@Age INT = NULL
AS
BEGIN
SELECT Employee_Name, Gender, Age FROM EmployeeDetails
WHERE...
END

Here comes the trick. During execution our SP can receive an input value for a single parameter or for both parameters. I may want to search for all employees:

• having Gender 'M', OR
• whose age is above 25, OR
• having Gender 'M' and whose age is above 25

Looking at the input conditions, we can see that the input value of a parameter can either be NULL or a valid value. So we can write the WHERE condition as:

...WHERE ( @Gender IS NULL OR Gender = @Gender) AND (@Age IS NULL OR Age > @Age)

Comparing the parameter values in this way, we know that if the value of @Gender is NULL, the second part of that OR condition is effectively eliminated; the result is the same as if Gender were not compared in the WHERE clause at all and only Age were considered. The same applies, vice-versa, to the @Age parameter.
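For contrast, the variable-based approach that this article sets out to avoid might look something like the sketch below. This is illustrative only; the procedure name and details are not part of the original example.

-- A sketch of the dynamic-SQL alternative (illustrative only)
CREATE PROCEDURE uspGetEmpDetails_Dynamic
@Gender CHAR(1) = NULL,
@Age INT = NULL
AS
BEGIN
DECLARE @sql NVARCHAR(MAX)
SET @sql = N'SELECT Employee_Name, Gender, Age FROM EmployeeDetails WHERE 1 = 1'
IF @Gender IS NOT NULL SET @sql = @sql + N' AND Gender = @Gender'
IF @Age IS NOT NULL SET @sql = @sql + N' AND Age > @Age'
-- sp_executesql keeps the values parameterized rather than concatenated into the string
EXEC sp_executesql @sql, N'@Gender CHAR(1), @Age INT', @Gender = @Gender, @Age = @Age
END
GO

Every quote and concatenation in a procedure like this has to be checked by eye, which is exactly the debugging and maintenance burden described above.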

The complete stored procedure, using the approach described above, will look like this:

CREATE PROCEDURE uspGetEmpDetails
@Gender CHAR(1) = NULL,
@Age INT = NULL
AS
BEGIN
SELECT Employee_Name, Gender, Age FROM EmployeeDetails
WHERE (@Gender IS NULL OR Gender = @Gender)
AND (@Age IS NULL OR Age > @Age)
END

Now let's execute the procedure by passing appropriate values to it.

1) Fetch all male employees

EXEC uspGetEmpDetails 'M'

I get these results:

 Employee_Name    Gender           Age 

------------------------------------------------------------- 

Sunil            M                30 

David            M                25

2) Fetch all employees having age more than 22 yrs

EXEC uspGetEmpDetails NULL,22

to get these results

Employee_Name       Gender             Age
------------------------------------------------------
Sunil               M                  30
Jimmy               F                  24
David               M                  25

3) Fetch all female employees having age more than 21 yrs

EXEC uspGetEmpDetails 'F',21

which gives

Employee_Name                     Gender            Age
----------------------------------------------------------
Jimmy                             F                 24

Conclusion

In this article I have shown a way of writing queries where the conditional part of the query tallies the input parameters with the actual data at runtime. The query produces the same result as one written by storing the query in a variable and generating the WHERE clause with IF conditions. Thank you for giving your time to read this article. The above example is primarily intended to show how comparing input parameter values in the WHERE clause generates the desired results while keeping the query easy to write and debug. Any suggestions and improvements are welcome.

June 1, 2009 | Uncategorized

Exchange Server 2010 – The First Public Beta Version

Microsoft has been working for quite some time on a new version of Exchange Server, and has recently released its first public beta. It's time to have a first look.

Exchange Server 2010 will be the next version of Exchange Server and the successor to Exchange Server 2007. It is targeted for release by the end of this year, but since we're only at the first beta this date is, of course, not fixed. We might see a public Beta 2 and a Release Candidate as well. Beta 1 is far from feature complete and can contain bugs and other undocumented behavior. The product will certainly change later this year, so features that are missing now may be added later on and, vice versa, features we see now may be removed in later versions.

A quick look at Exchange Server 2010 might lead you to believe that it’s just another version of Exchange Server 2007 but that’s not entirely true. Of course it builds on top of Exchange Server 2007, but there are major improvements and new technologies in this product.

So, what’s new?

  • One of the first things is that the mailbox replication has changed dramatically, and personally I’m pretty excited about this. Microsoft has taken the Cluster Continuous Replication (CCR) and the Standby Continuous Replication (SCR) features and combined these to create a new feature called “database copies”. 
  • One of the issues with CCR is the added complexity of Windows Clustering, and to make the administrator's life easier Exchange Server 2010 no longer needs a fully fledged Windows Cluster. Under the hood it uses some parts of Windows Clustering, but that's completely taken care of by Exchange Server 2010 itself.
  • To create a highly available mailbox environment, multiple mailbox servers can be configured in a “Database Availability Group” or DAG. In a DAG, multiple copies of a mailbox database exist. If one database fails, another server automatically takes over and users will not notice anything.
  • The concept of multiple databases in a Storage Group is removed in Exchange Server 2010, and the name “Storage Group” is no longer used. The database technology is still based on the Extensible Storage Engine (ESE) and still uses the “mailbox database.edb” file format, the log files (E0000000001.log etc.) and the checkpoint file.
  • Local Continuous Replication and Standby Continuous Replication have been removed in Exchange Server 2010.
  • The database schema has changed, or rather flattened. It is less complex than in previous versions of Exchange Server, making it possible to reduce disk I/O by up to 50% compared to Exchange Server 2007 (although we cannot confirm this with independent testing).
  • Public Folders are still there and Public Folders are still fully supported in Exchange Server 2010. Even better, there are improvements in Public Folders like enhanced reporting possibilities.
  • In Exchange Server 2007 and earlier, MAPI clients connected directly to the mailbox server, while all other clients connected to the Client Access Server. In Exchange Server 2010 MAPI clients now connect to the Client Access Server. No clients connect directly to the Mailbox Server in Exchange Server 2010.
  • Enhanced move-mailbox functionality.
  • A greatly enhanced version of Outlook Web Access. One of the design goals was to create a cross-browser experience. Users on an Apple Macbook with a Safari browser get the same user experience as users on a Windows Vista client with Internet Explorer! A lot of features that were already available in Outlook 2007 are now also available in Outlook Live. Webmail is getting better and better with every release of Exchange Server…
  • Exchange Server 2010 has enhanced disclaimers, which means you can create HTML formatted disclaimers, containing hyperlinks, images and even Active Directory attributes!
  • Exchange Server 2010 runs on PowerShell V2 and Windows Remote Management, making it possible to administer remote Exchange Server 2010 servers.

Furthermore, there are a lot of changes in the administration of Exchange Server 2010, the routing model, compliance features and so on; too many to mention in an article like this.

Installing Exchange Server 2010

Installing Exchange Server 2010 is pretty easy, but only on Windows Server 2008. Support for Windows Server 2008 R2 should follow shortly, but for Beta 1 there are still some challenges. Windows must also be a 64-bit version. It is unclear whether a 32-bit version for testing will be made available; as with Exchange Server 2007, such a version would not be supported in a production environment. Other requirements are .NET Framework 3.5, Windows Remote Management 2.0 and PowerShell 2.0.

When the installation of Exchange Server 2010 has finished, the Management Console is shown, just as in Exchange Server 2007, and it looks familiar:

Figure 1. The Exchange Management Console of Exchange Server 2010

As you can see in Figure 1, the Exchange Management Console looks familiar. But, because of the new high availability features and the flattened database model the database is no longer tied to a particular server but to the Exchange organization. When you want to mount or dismount a database you have to go to the Organization Configuration in the Exchange Management Console and no longer to the Server Configuration. Be aware of this, otherwise it can take you some time before you figure out what’s wrong.

Storage Groups no longer exist in Exchange Server 2010, so all cmdlets regarding Storage Groups have been removed. Exchange Server 2010 still uses the ESE database, with its accompanying log files and checkpoint file, so the Storage Group cmdlet options that are still relevant for log file and checkpoint file configuration have been moved to the Database cmdlets.

Another neat feature in the new Management Console is the “Send Mail” option. When you are working on recipients and need to send a (test) mail to this recipient you can just right click the recipient and select “Send Mail”. No need to send test messages from Outlook or Outlook Live anymore.

As said earlier, Microsoft has introduced a concept called “database copies” in Exchange Server 2010. You can install a second Exchange server into the organization and the Exchange setup program takes care of everything. In Exchange Server 2007 only the mailbox role could be installed on a Windows Failover Cluster; in Exchange Server 2010 this is no longer the case. All server roles (except for the Edge Transport role) can be installed on a high availability cluster.

When you’ve installed a second server holding the Mailbox Server role you can create a copy of the database. Right click on the database and select “Add Mailbox Database Copy”, select the 2nd server and you’re done.

Names of Mailbox Databases must be unique in the organization, so you have to set up a clear naming convention for them. If you do not, you will certainly get confused by the databases and their copies.

But wait, there's more… since there are multiple copies of a Mailbox Database, Microsoft has introduced a “self healing” technique. Exchange knows every copy of the database, and all copies are identical. If a page of a database gets corrupted, Exchange can retrieve that page from one of the copies of the database and insert it into the original database.

In Exchange Server 2010 the move-mailbox functionality is enhanced dramatically. It is now possible to move mailboxes asynchronously. The mailbox is not actually being moved, but is being synchronized with the new location; the user still accesses and uses the mailbox in its old location. The move is performed by a new service called the “Mailbox Replication Service” (MRS), running on the Exchange Server 2010 Client Access Server. As with a traditional move-mailbox, the synchronization can take hours to complete, depending on the amount of data that needs to be synchronized. Once complete, the actual move can take place, but since the data is already in place the move itself takes only seconds. Online mailbox moves are only available between Exchange Server 2010 mailboxes and from Exchange Server 2007 SP2 mailboxes.

From an Outlook perspective… in the past Outlook clients connected directly to the back-end server (Exchange Server 2003 and earlier) or to the Exchange Server 2007 mailbox server. Internet clients connected to the Front-end server or to the Exchange Server 2007 Client Access Server. In Exchange Server 2010 the MAPI access also moved to the Client Access Server. A new service is introduced called “MAPI on the Middle Tier” (MOMT), but this name will change before Exchange Server 2010 is officially released. What is the advantage of MAPI clients connecting to the Client Access Server? Suppose something happens to the mailbox database and a fail-over takes place. In the past the Outlook clients were disconnected, the mailbox database was transferred to the other node of the cluster and the clients reconnected. This can take somewhere between 15 seconds and a couple of minutes, depending on the load of the server.
In Exchange Server 2010, when a database fails the Outlook clients stay connected to the Client Access Server and the mailbox is “moved” to the other server. Not really moved: the Client Access Server just retrieves the information from another copy of the database. This results in a transparent user experience; users never know which mailbox server the data is coming from, nor do they experience any outage of the mail environment!

Clients….

One of the major improvements on the client side is Outlook Live, previously known as Outlook Web Access. A design goal was to create a cross browser experience so that non-IE users get the same user experience. First test: take an Apple computer, start a Safari browser and open Outlook Live. Wow… that works like a charm:

Figure 2. A Safari browser on an Apple Macbook gives a great user experience!

Fairly new in Exchange Server 2010 is the end-user administration option. End users have a lot of extra possibilities for controlling their personal information. They can change (basic) user properties in their personal property set, like Name, Location and Phone Number, and they can also perform some basic administration of Distribution Groups.

 

Figure 3. The options page for end users, a complete HTML management interface

See the “Edit” button? Click here and you can change settings like Contact Location, Contact Numbers and General Information. On the right hand side in the actions pane there are quick links to the most important features for users, like the Out-of-Office assistant or the rules assistant.

Using the Groups option in the navigation pane, users can create their own distribution groups and manage their own group membership. Don't worry: group owners can restrict ownership, and there's a difference between public and private distribution groups.

The default view in Outlook Live is now conversation mode in the results pane (the middle pane). In the right pane a quick view of the last message is visible, and below that some quick notes of earlier messages in the conversation.

Other improvements in Outlook Live are:

  • Search Folders;
  • Message filtering;
  • Side by side view for calendars;
  • Attach messages to messages;
  •  Enhanced right-click capabilities;
  • Integration with Office Communicator;
  • Send and receive text messages (SMS) from Outlook Live;
  • Outlook Live Mailbox policies.

But these are interesting topics for a future article.

So, What Conclusion?

It's way too early to draw a conclusion about Exchange Server 2010. The only thing I can say is that I'm very enthusiastic about what I've seen so far. The database replication resulting in multiple copies of the data, combined with the self-healing possibilities… that's how the “old” Cluster Continuous Replication should have been. The scalability, the high-availability options, the new Outlook Live: it's all very promising. But it is still Beta 1 and, no matter how enthusiastic one is, it's a bad idea to bring this into production; it's not even supported. Before Microsoft hits RTM (Release to Manufacturing) it has a long way to go, and a lot can change. And a lot will change… but it still looks very promising.

May 20, 2009 | Uncategorized

Installing SQL Backup on Multiple Servers using SQL Multi Script

One of the common requests we receive is how to quickly install the SQL Backup 5 server components remotely on a large number of servers. While the SQL Backup 5 User Interface supports remote installation, this can only be performed on a single server at a time, which can be time consuming when working with tens or even hundreds of servers.
Using the script provided in this article, the techniques utilised by the SQL Backup 5 User Interface, and SQL Multi Script, it is possible to perform an ‘unattended’ remote installation. This will allow you to install or upgrade your SQL Backup server components across the network in one go, rather than installing the components manually. Furthermore, these techniques can be used to collate versioning, licensing and installation information about the SQL Backup server component installations into one easy-to-read grid.
The Script
You can download the complete script from the box above, to the right of the article title. The following sections describe what is going on “under the hood” of the script, and how SQL Backup provides the information to automate this task.
1) Information Gathering
To be able to install the SQL Backup server components successfully, we first need to check that the machine meets the necessary criteria, namely that it is running SQL Server 2000 or 2005 in a non-clustered environment. While SQL Backup does support clustering, installation on a cluster is beyond the scope of this script.
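The fragments that follow are excerpts from the full script, and rely on a handful of variables being declared first. A minimal sketch of those declarations (the data types are assumptions; the downloadable script contains its own definitions):

-- Declarations assumed by the fragments below
DECLARE @SqlProductVersion NVARCHAR(128)
DECLARE @SqlMajorVersion INT
DECLARE @SqlIsClustered VARCHAR(1)
DECLARE @MachineName VARCHAR(128), @InstanceName VARCHAR(128), @CombinedName VARCHAR(128)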
The script performs a couple of preliminary checks to ensure that these criteria are met:

-- Establish the current SQL Server major version (e.g. 8, 9, 10).
SET @SqlProductVersion = CAST(SERVERPROPERTY('ProductVersion') AS NVARCHAR);
SET @SqlMajorVersion = CAST(SUBSTRING(@SqlProductVersion, 1, CHARINDEX('.', @SqlProductVersion) - 1) AS INT);

-- Establish the clustering status ('1' means clustered, '0' means non-clustered, NULL means unknown)
SET @SqlIsClustered = CAST(SERVERPROPERTY('IsClustered') AS VARCHAR(1));

IF @SqlMajorVersion >= 8 AND @SqlMajorVersion <=9 AND (@SqlIsClustered = '0')
BEGIN
We also gather the server and instance names, for the purpose of reporting and installation:
SET @MachineName = CAST(SERVERPROPERTY('MachineName') AS VARCHAR(128));
IF @MachineName IS NULL SET @MachineName = '';

SET @InstanceName = CAST(SERVERPROPERTY('InstanceName') AS VARCHAR(128));
IF @InstanceName IS NULL SET @InstanceName = '';

SET @CombinedName = CAST(SERVERPROPERTY('ServerName') AS VARCHAR(128));
IF @CombinedName IS NULL SET @CombinedName = '';

2) Gathering Existing Server Component Status
Once it has been established that the installation is allowed, the script gathers some benchmark information, by which it can ensure that the install or upgrade was successful.
The following three commands use the utility function ‘sqbutility’ to extract version information for xp_sqlbackup.dll (the extended stored procedure) and SQBCoreService.exe (the SQL Backup Agent Service), as well as the current license type and key.
These are inserted into a temporary table called #SqbOutput in order to eliminate the result grid that would otherwise be returned to the caller (and would then clutter up the SQL Multi Script output).
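Again, to run this fragment on its own, the temporary table and output variables need to exist first. A minimal sketch (the sizes are assumptions; the downloadable script contains the real definitions):

-- Assumed setup for the fragment below
CREATE TABLE #SqbOutput (OutputText NVARCHAR(4000) NULL)
DECLARE @OldDllVersion NVARCHAR(128), @OldExeVersion NVARCHAR(128)
DECLARE @OldLicenseVersionId NVARCHAR(10), @SerialNumber NVARCHAR(256)
DECLARE @OldLicenseVersionText NVARCHAR(50)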
INSERT #SqbOutput EXECUTE master..sqbutility 30, @OldDllVersion OUTPUT;
INSERT #SqbOutput EXECUTE master..sqbutility 1030, @OldExeVersion OUTPUT;
INSERT #SqbOutput EXECUTE master..sqbutility 1021, @OldLicenseVersionId OUTPUT, NULL, @SerialNumber OUTPUT;
The value of @OldLicenseVersionId doesn't have any meaning as-is, so, using a CASE statement, it is converted into a human-readable form:
SELECT @OldLicenseVersionText =
CASE WHEN @OldLicenseVersionId = '0' THEN 'Trial: Expired'
WHEN @OldLicenseVersionId = '1' THEN 'Trial'
WHEN @OldLicenseVersionId = '2' THEN 'Standard'
WHEN @OldLicenseVersionId = '3' THEN 'Professional'
WHEN @OldLicenseVersionId = '6' THEN 'Lite'
END

3) Application Installation
The next part of the script attempts the actual installation. To do this we need to use the xp_cmdshell extended stored procedure, which must explicitly be turned on in SQL Server 2005 or later.
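Enabling xp_cmdshell is a standard configuration step; something along these lines, run by a sysadmin, will do it (this snippet is not part of the downloadable script):

-- Enable xp_cmdshell on SQL Server 2005 or later (requires sysadmin rights)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE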
Once this has been done, we can perform a check to see if the installer file exists; if it doesn't, it will be impossible to proceed. This check is performed using the shell syntax “IF EXIST”.
SET @SqbFileExistsExec = 'if exist ' + @DownloadDirectory + '\SqbServerSetup.exe time';
INSERT #SqbOutput EXECUTE master..xp_cmdshell @SqbFileExistsExec;

We return the value of ‘time’ as a single line output, rather than using ‘echo’ which returns a newline character (and hence two lines of output).
The installation needs to be performed silently, using the /VERYSILENT and /SUPPRESSMSGBOXES flags discussed below. We can supply other flags to configure the installer as necessary:
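A hypothetical invocation, using only the two flags mentioned above (the full script supplies additional flags and checks the output):

-- Illustrative only: silent installation via xp_cmdshell, reusing @DownloadDirectory from the script
DECLARE @InstallCommand VARCHAR(512)
SET @InstallCommand = @DownloadDirectory + '\SqbServerSetup.exe /VERYSILENT /SUPPRESSMSGBOXES'
INSERT #SqbOutput EXECUTE master..xp_cmdshell @InstallCommand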

April 14, 2009 | Uncategorized

Database Design: A Point in Time Architecture

Point in Time Architecture (PTA) is a database design that guarantees support for two related but different concepts – History and Audit Trail.

•History – all information, both current and historical, that as of this moment, we believe to be true.
•Audit Trail – all information believed to be true at some previous point in time.
The distinction is that the Audit Trail shows the history of corrections made to the database. Support for History and Audit Trail facilities is notably absent from typical OLTP databases. By “typical”, we mean databases that support the traditional Select, Insert, Delete and Update operations. In many cases, typical OLTP databases are perfectly fine for their requirements, but some databases demand the ability to track History and Audit Trail as core requirements. Without these abilities, the database will fail.

Typical OLTP databases destroy data. This is most obvious with the Delete command, but a moment’s thought reveals that the Update command is equally destructive. When you update a row in a table, you lose the values that were there a moment ago. The core concept in PTA is this: no information is ever physically deleted from or updated in the database.

However, some updates are deemed important while others are not. In all likelihood, the data modeler, DBA, or SQL programmer will not know which updates are important and which unimportant without consultation with the principal stakeholders. A mere spelling error in a person’s surname may be deemed unimportant. Unfortunately, there is no way to distinguish a spelling error from a change in surname. A correction to a telephone number may be deemed trivial, but again there is no way to distinguish it from a changed number. What changes are worth documenting, and what other changes are deemed trivial? There is no pat answer.

The Insert statement can be almost as destructive. Suppose you insert ten rows into some table today. Unless you’ve got a column called DateInserted, or similar, then you have no way to present the table as it existed yesterday.

What is Point In Time Architecture (PTA)?
PTA is a database design that works around these problems. As its name implies, PTA attempts to deliver a transactional database that can be rolled back to any previous point in time. I use the term “rolled back” metaphorically: traditional restores are unacceptable for this purpose, and traditional rollbacks apply only to points declared within a transaction.

A better way to describe the goal of a PTA system is to say that it must be able to present an image of the database as it existed at any previous point in time, without destroying the current image. Think of it this way: a dozen users are simultaneously interrogating the database, each interested in a different point in time. UserA wants the current database image; UserB wants the image as it existed on the last day of the previous month; UserC is interested in the image of the last day of the previous business quarter; and so on.


Requirements of PTA

Most obviously, physical Deletes are forbidden. Also, Inserts must be flagged in such a way that we know when the Insert occurred. Physical Updates are also forbidden; otherwise we lose the image of the rows of interest prior to the Update.


What do we need to know?

•Who inserted a row, and when.
•Who replaced a row, and when.
•What did the replaced row look like prior to its replacement?

We can track which rows were changed when in our PTA system by adding some standard PTA columns to all tables of PTA interest. I suggest the following:

•DateCreated – the actual date on which the given row was inserted.
•DateEffective – the date on which the given row became effective.
•DateEnd – the date on which the given row ceased to be effective.
•DateReplaced – the date on which the given row was replaced by another row.
•OperatorCode – the unique identifier of the person (or system) that created the row.

Notice that we have both a DateCreated column and a DateEffective column, which could be different. This could happen, for example, when a settlement is achieved between a company and a union, which guarantees specific wage increases effective on a series of dates. We might know a year or two in advance that certain wage increases will kick in on specific dates, and therefore we might add the row some time in advance of its DateEffective. By distinguishing DateCreated from DateEffective, we handle this situation cleanly.

Dealing with inserts

The easiest command to deal with is Insert. Here, we simply make use of our DateCreated column, using either a Default value or an Insert trigger to populate it. Thus, to view the data as it stood at a given point in time, you would perform the Select using the following syntax:

SELECT * FROM AdventureWorks.Sales.SalesOrderHeader WHERE DateCreated < [some PTA date of interest]

This scenario is all fine and dandy assuming that you are creating the table in question. But you may be called upon to backfill some existing tables.

If you are retrofitting a database to support PTA, then you won’t be able to use a Default value to populate the existing rows. Instead you will have to update the existing rows to supply some value for them, perhaps the date on which you execute the Update command. To that extent, all these values will be false. But at least it gives you a starting point. Once the DateCreated column has been populated for all existing rows, you can then alter the table and either supply a Default value for the column, or use an Insert trigger instead, so that all new rows acquire their DateCreated values automatically.
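As a rough sketch of that retrofit (the table and constraint names here are hypothetical, and the backfill value is simply the current date, as discussed above):

-- Hypothetical retrofit of a DateCreated column on an existing table
ALTER TABLE dbo.ExistingTable ADD DateCreated DATETIME NULL
GO
-- Backfill existing rows; these values are admittedly 'false', but they give us a starting point
UPDATE dbo.ExistingTable SET DateCreated = GETDATE() WHERE DateCreated IS NULL
GO
-- From now on, new rows pick up the value automatically
ALTER TABLE dbo.ExistingTable ADD CONSTRAINT DF_ExistingTable_DateCreated DEFAULT (GETDATE()) FOR DateCreated
GO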

Dealing with deletes
In a PTA architecture, no rows are physically deleted. We introduce the concept of a “logical delete”. We visit the existing row and flag it as “deleted on date z.” We do this by updating its DateEnd column with the date on which the row was “deleted”. We do not delete the actual row, but merely identify it as having been deleted on a particular date. All Select statements interrogating the table must then observe the value in this column.
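A logical delete is therefore nothing more than an UPDATE of the DateEnd column. A minimal sketch, using the Test_PTA_Table defined later in this article and an example key value:

-- Logically delete one row: mark it as ended rather than removing it
UPDATE dbo.Test_PTA_Table
SET DateEnd = GETDATE()
WHERE TestTablePK = 1   -- 1 is just an example key for the row being 'deleted'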

SELECT * FROM AdventureWorks.Sales.SalesOrderHeader WHERE DateEnd IS NULL OR DateEnd > [PTA_date]

Any row logically deleted after our PTA date of interest is therefore assumed to have logically existed up to our date of interest, and ought to be included in our result set.

Dealing with updates
In PTA, updates are the trickiest operation. No rows are actually updated (in the traditional sense of replacing the current data with new data). Instead, we perform three actions:

1.Flag the existing row as “irrelevant after date x”.
2.Copy the values of the existing row to a temporary buffer.
3.Insert a new row, copying most of its values from the old row (those that were not changed), and using the new values for those columns that were changed. We also supply a new value for the column DateEffective (typically GetDate(), but not always, as described previously).
There are several ways to implement this functionality. I chose the Instead-Of Update trigger. Before investigating the code, let’s describe the requirements:

1.We must update the existing row so that its DateReplaced value reflects GetDate() or GetUTCDate(). Its DateEnd value might be equal to GetDate(), or not. Business logic will decide this question.
2.The Deleted and Inserted tables give us the values of the old and new rows, enabling us to manipulate the values.
Here is the code to create a test table and the Instead-Of trigger we need. Create a test database first, and then run this SQL:

CREATE TABLE [dbo].[Test_PTA_Table](
[TestTablePK] [int] IDENTITY(1,1) NOT NULL,
[TestTableText] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[DateCreated] [datetime] NOT NULL CONSTRAINT [DF_Test_PTA_Table_DateCreated] DEFAULT (getdate()),
[DateEffective] [datetime] NOT NULL,
[DateEnd] [datetime] NULL,
[OperatorCode] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[DateReplaced] [datetime] NULL CONSTRAINT [DF_Test_PTA_Table_DateReplaced] DEFAULT (getdate()),
CONSTRAINT [PK_Test_PTA_Table] PRIMARY KEY CLUSTERED ([TestTablePK] ASC) WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]

Here is the trigger:

CREATE TRIGGER [dbo].[Test_PTA_Table_Update_trg]
-- ALTER TRIGGER [dbo].[Test_PTA_Table_Update_trg]
ON [dbo].[Test_PTA_Table]
INSTEAD OF UPDATE
AS
SET NOCOUNT ON
DECLARE @key int
SET @key = (SELECT TestTablePK FROM Inserted)

UPDATE Test_PTA_Table
SET DateEnd = GetDate(), DateReplaced = GetDate()
WHERE TestTablePK = @key

INSERT INTO dbo.Test_PTA_Table
(TestTableText, DateCreated, DateEffective, OperatorCode, DateReplaced)
SELECT TestTableText, GetDate(), GetDate(), OperatorCode, NULL FROM Inserted

A real-world example would involve more columns, but I kept it simple so the operations would be clear. With our underpinnings in place, open the table and insert a few rows. Then go back and update one or two of those rows.

Dealing with selects
Every Select statement must take into account the dates just described, so that a query which is interested in, say, the state of the database as it appeared on December 24, 2006, would:

•Exclude all data inserted or updated since that day.
•Include only data as it appeared on that day. Deletes that occurred prior to that date would be excluded.
•In the case of updated rows, we would be interested only in the last update that occurred prior to the date of interest.
This may be trickier than it at first appears. Suppose that a given row in a given table has been updated three times prior to the point in time of interest. We’ll need to examine all the remaining rows to determine if any of them have been updated or deleted during this time frame, and if so, exclude the logical deletes, and include the logical updates.

With our standard PTA columns in place this may not be as tricky as it at first sounds. Remember that at any particular point in time, the rows of interest share the following characteristics.

•DateCreated is less than or equal to the PTA date of interest.
•DateEffective is less than or equal to the PTA date.
•DateEnd is either null or greater than the PTA date.
•DateReplaced is either null or greater than the PTA date.
So for our row that has been updated three times prior to the PTA date:

•The first and second rows will have a DateEnd and a DateReplaced that are not null, and both will be less than the PTA date.
•The third row will have a DateEffective less than the PTA date, and a DateReplaced that is either null or greater than the PTA date.
So we can always query out the rows of interest without having to examine columns of different names, but rather always using the same names and the same semantics.
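Putting those characteristics together, a point-in-time query against a table carrying the standard PTA columns might look like the following sketch (the Persons table and the @PTADate variable are assumed for illustration):

DECLARE @PTADate datetime
SET @PTADate = '21-Dec-2005'
SELECT *
FROM Persons
WHERE DateCreated <= @PTADate
  AND DateEffective <= @PTADate
  AND (DateEnd IS NULL OR DateEnd > @PTADate)
  AND (DateReplaced IS NULL OR DateReplaced > @PTADate)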

PTA implementation details
The most important thing to realize is that it may not be necessary to trace the history of every column in a table. First of all, some columns, such as surrogate IDs, assigned dates (e.g. OrderDate), and other columns such as BirthDate will never be changed (other than for corrections). Another example is TelephoneNumber. In most applications, it is not significant that your telephone number changed twice in the past year: what we care about is your current number. Admittedly, some organizations may attach significance to these changes of telephone number. That is why we can only suggest a rule of thumb rather than an iron-clad rule. The stakeholders in the organization will help you decide the columns that are deemed “unimportant”.

What then qualifies as an important column? The rule of thumb is that important columns are those that have changeable attributes, and whose changes have significance.

Columns with changeable attributes are often called Slowly Changing Dimensions (SCDs). However, just because an attribute value changes, that doesn’t imply that the change is significant to the business. There are two types of SCD:

•Type 1 – columns where changes are of little or no interest to the organization
•Type 2 – columns where changes must be tracked and history recorded.
An obvious example of a Type 2 SCD is EmployeeDepartmentID. Typically, we would want to be able to trace the departments for which an employee has worked. But again, this may or may not be important to a given organization. What we can say is this: it is rarely the case that all columns within a table are considered Type 2.

Once you have defined the Type 1 and Type 2 columns, you can then devise the programmatic objects required to handle both types. The Type 1 code won't bother with logical updates; it will perform a simple update, replacing the old value with a new one and not documenting the change in detail. The Type 2 code will follow the rules for logical updates and deletes.

Using domains
Depending on the development tools you use, you may or may not be able to take advantage of domains. (I am a big fan of ERwin and PowerDesigner, and almost never develop a data model without using them, except for the most trivial problems.)

In terms of data-modeling, a domain is like a column definition, except that it is not related to a table. You create a collection of domains, specifying their default values, description, check constraints, nullability and so on, without reference to any given table. Then, when you create individual tables, instead of supplying a built-in data type for a column, you specify its domain, thus “inheriting” all its attributes that you defined earlier. The less obvious gain is that should you need to change the domain definition (for example from int to bigint, or shortdatetime to datetime, or varchar(10) to char(10)), you make the change in exactly one place, and then forward-engineer the database. All instances of the domain in all tables will be updated to correspond to the new domain definition. In a database comprising hundreds of tables, this approach can be a huge time-saver.

Although I love domains, I have found one problem with them. In my opinion, there ought to be two kinds of domain, or rather a double-edged domain. Consider a domain called CustomerID. Clearly, its use in the Customers table as a PK is different from its use in various related tables as an FK. In the Customers table it might be an int, Identity(1,1), whereas in the related tables it will still be an int, but not an identity key. To circumvent this problem, I typically create a pair of domains, one for the PK and another for all instances of the domain as an FK.

Sample transactions
There is no panacea for creating a PTA database. However, using the examples provided here, plus some clear thinking and a standard approach, you can solve the problems and deliver a fully compliant PTA system.

Assume a Hollywood actress who marries frequently, and who always changes her surname to match her husband’s. In a PTA, her transaction record might look like this:

Table 1: Persons table with PTA columns and comment.


Here we have the history of Mary’s three marriages. Mary O’Hara entered the database on 01-Jan-04. In June of the same year she adopted, through marriage, the surname Roberts. This is reflected in our PTA database with the appropriate value inserted into the DateEnd and DateReplaced columns of Mary’s row. We then insert a new row into the Persons table, with a new PersonID value, the updated surname and the correct DateCreated and DateEffective values. This process is repeated for each of Mary’s subsequent marriages, so we end up with four rows in the Persons table, all referring to the same “Mary”.

All of these Primary Keys point to the same woman. Her surname has changed at various points in time. To this point, we have considered History as referring to the history of changes within the tables. However, this example illustrates another concept of history: the history of a given object (in this case, a person) within the database. Some applications may not need to know this history, while others may consider it critical. Medical and police databases come immediately to mind. If all a criminal had to do to evade his history was change his surname, we would have problems in the administration of justice.

One might handle this problem by adding a column to the table called PreviousPK, and inserting in each new row the PK of the row it replaces. This approach complicates queries unnecessarily, in my opinion. It would force us to walk the chain of PreviousPKs to obtain the history of the person of interest. A better approach, I think, would be to add a column called OriginalPK, which may be NULL. A brand-new row would contain a null in this column, while all subsequent rows relating to this person would contain the original PK. This makes it trivial to tie together all instances. We can then order them using our other PTA columns, creating a history of changes to the information on our person of interest.

Table 2: Persons Table with PTA and Original PK tracking column.

Given the Point-In-Time 21-Dec-2005, the row of interest is the penultimate row: the one whose DateEffective value is 12-Dec-2005 and whose DateEnd is 06-June-2006. How do we identify this row?

SELECT * FROM Persons WHERE OriginalPK = 1234 AND DateEffective <= '21-Dec-2005' AND (DateEnd IS NULL OR DateEnd > '21-Dec-2005')

That is, the DateEffective value must be less than or equal to 21-Dec-2005, and the DateEnd must be either NULL or greater than 21-Dec-2005.

Dealing with cascading updates
Let us now suppose that during the course of our history of Mary O'Hara, she changed addresses several times. Her simple changes of address are not in themselves problematic; we just follow the principles outlined above for the PersonAddresses table. If her changes of address correspond to her marriages, however, the waters muddy slightly, because this implies that she has changed both her name and her address. But let's take it one step at a time.

Mary moves from one flat to another, with no other dramatic life changes. We stamp her current row with a DateEnd and DateReplaced (which, again, might differ). We insert a new row in PersonAddresses, marking it with her current PK from the Persons table, and adding the new address data. We mark it with a DateEffective corresponding to the lease date, and leave the DateEnd and DateReplaced null. Should her surname change within the scope of this update then we mark her row in Persons with a DateEnd and a DateReplaced, then insert a new row reflecting her new surname. Then we add a new row to PersonAddresses, identifying it with Mary’s new PK from Persons, and filling in the rest of the data.

Each time Mary's Persons row is logically updated, it gets a new row with a new PK, so we must also logically update the dependent row(s) in PersonAddresses, and in every other related table, inserting new rows that reference the new PK in the Persons table. Fortunately, we can trace the history of Mary's addresses using the Persons table.

In more general terms, the point to realize here is that every time a Type 2 update occurs in our parent table (Persons, in this case), a corresponding Type 2 update must occur in every related table. How complex these operations will be clearly depends on the particular database and its requirements. Again, there is no hard-and-fast rule to decide this.

Dealing with cascading deletes
A logical delete is represented in PTA as a row containing a not-null DateEnd and a null DateReplaced. Suppose we have a table called Employees. As we know, employees come and go. At the same time, their IDs are probably FKs into one or more tables. For example, we might track SalesOrders by EmployeeID, so that we can pay commissions. A given employee departs the organization. That certainly does not mean that we can delete the row. So we logically delete the row in the Employees table, giving it a DateEnd that will exclude this employee from any lists or reports whose PTA date is greater than said date – and thus preserving the accuracy of lists and reports whose PTA date is prior to the employee’s departure.

On the other hand, suppose that our firm sells products from several vendors, one of whom goes out of business. We logically delete the vendor as described above, and perhaps we logically delete all the products we previously purchased from said vendor.

NOTE:
There is a tiny glitch here, beyond the scope of this article, but I mention it because you may have to consider what to do in this event. Suppose that you still have several units on hand that were purchased from this vendor. You may want to postpone those related deletes until the inventory has been sold. That may require code to logically delete those rows whose QuantityOnHand is zero, and later on to revisit the Products table occasionally until all this vendor’s products have been sold. Then you can safely logically delete those Products rows.

Summary
The first time you confront the challenge of implementing Point in Time Architecture, the experience can be quite daunting. But it is not rocket science. I hope that this article has illuminated the steps required to accomplish PTA. As pointed out above, some applications may require the extra step of tracking the history of individual objects (such as Persons), while others may not need this. PTA is a general concept. Domain-specific implementations will necessarily vary in the details. This article, I hope, will serve as a practical guideline. I emphasize that there are rarely hard-and-fast rules for implementing PTA. Different applications demand different rules, and some of those rules will only be discovered through careful interrogation of the stakeholders. You can do it!

April 13, 2009 | Uncategorized

Ten Common Database Design Mistakes

No list of mistakes is ever going to be exhaustive. People (myself included) do a lot of really stupid things, at times, in the name of “getting it done.” This list simply reflects the database design mistakes that are currently on my mind, or in some cases, constantly on my mind.

NOTE:
I have done this topic two times before. If you’re interested in hearing the podcast version, visit Greg Low’s super-excellent SQL Down Under. I also presented a boiled down, ten-minute version at PASS for the Simple-Talk booth. Originally there were ten, then six, and today back to ten. And these aren’t exactly the same ten that I started with; these are ten that stand out to me as of today.

Before I start with the list, let me be honest for a minute. I used to have a preacher who made sure to tell us before some sermons that he was preaching to himself as much as he was to the congregation. When I speak, or when I write an article, I have to listen to that tiny little voice in my head that helps filter out my own bad habits, to make sure that I am teaching only the best practices. Hopefully, after reading this article, the little voice in your head will talk to you when you start to stray from what is right in terms of database design practices.

So, the list:

1.Poor design/planning
2.Ignoring normalization
3.Poor naming standards
4.Lack of documentation
5.One table to hold all domain values
6.Using identity/guid columns as your only key
7.Not using SQL facilities to protect data integrity
8.Not using stored procedures to access data
9.Trying to build generic objects
10.Lack of testing
Poor design/planning
“If you don’t know where you are going, any road will take you there” – George Harrison

Prophetic words for all parts of life and a description of the type of issues that plague many projects these days.

Let me ask you: would you hire a contractor to build a house and then demand that they start pouring a foundation the very next day? Even worse, would you demand that it be done without blueprints or house plans? Hopefully, you answered “no” to both of these. A design is needed to make sure that the house you want gets built, and that the land you are building it on will not sink into some underground cavern. If you answered yes, I am not sure if anything I can say will help you.

Like a house, a good database is built with forethought, and with proper care and attention given to the needs of the data that will inhabit it; it cannot be tossed together in some sort of reverse implosion.

Since the database is the cornerstone of pretty much every business project, if you don’t take the time to map out the needs of the project and how the database is going to meet them, then the chances are that the whole project will veer off course and lose direction. Furthermore, if you don’t take the time at the start to get the database design right, then you’ll find that any substantial changes in the database structures that you need to make further down the line could have a huge impact on the whole project, and greatly increase the likelihood of the project timeline slipping.

Far too often, a proper planning phase is ignored in favor of just “getting it done”. The project heads off in a certain direction and when problems inevitably arise – due to the lack of proper designing and planning – there is “no time” to go back and fix them properly, using proper techniques. That’s when the “hacking” starts, with the veiled promise to go back and fix things later, something that happens very rarely indeed.

Admittedly it is impossible to predict every need that your design will have to fulfill and every issue that is likely to arise, but it is important to mitigate against potential problems as much as possible, by careful planning.

Ignoring Normalization
Normalization defines a set of methods to break down tables to their constituent parts until each table represents one and only one “thing”, and its columns serve to fully describe only the one “thing” that the table represents.

The concept of normalization has been around for 30 years and is the basis on which SQL and relational databases are implemented. In other words, SQL was created to work with normalized data structures. Normalization is not just some plot by database programmers to annoy application programmers (that is merely a satisfying side effect!)

SQL is very additive in nature in that, if you have bits and pieces of data, it is easy to build up a set of values or results. In the FROM clause, you take a set of data (a table) and add (JOIN) it to another table. You can add as many sets of data together as you like, to produce the final set you need.

This additive nature is extremely important, not only for ease of development, but also for performance. Indexes are most effective when they can work with the entire key value. Whenever you have to use SUBSTRING, CHARINDEX, LIKE, and so on, to parse out a value that is combined with other values in a single column (for example, to split the last name of a person out of a full name column), the SQL paradigm starts to break down and data becomes less and less searchable.
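For illustration, consider a hypothetical Person table where the last name is buried inside a combined FullName column, versus one where LastName is stored (and indexed) in its own column:

-- Hypothetical example: parsing a combined column defeats any index on it
SELECT *
FROM Person
WHERE SUBSTRING(FullName, CHARINDEX(' ', FullName) + 1, 50) = 'Smith'   -- forces a scan of every row

-- With the last name in its own column, the query is simple and index-friendly
SELECT *
FROM Person
WHERE LastName = 'Smith'   -- can use an index seek on LastName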

So normalizing your data is essential to good performance, and ease of development, but the question always comes up: “How normalized is normalized enough?” If you have read any books about normalization, then you will have heard many times that 3rd Normal Form is essential, but 4th and 5th Normal Forms are really useful and, once you get a handle on them, quite easy to follow and well worth the time required to implement them.

In reality, however, it is quite common that not even the first Normal Form is implemented correctly.

Whenever I see a table with repeating column names appended with numbers, I cringe in horror. And I cringe in horror quite often. Consider the following example Customer table:

[Figure: an example Customer table with twelve repeating payment columns]
Are there always 12 payments? Is the order of payments significant? Does a NULL value for a payment mean UNKNOWN (not filled in yet), or a missed payment? And when was the payment made?!?

A payment does not describe a Customer and should not be stored in the Customer table. Details of payments should be stored in a Payment table, in which you could also record extra information about the payment, like when the payment was made, and what the payment was for:

[Figure: the normalized design, with payment details moved to a separate Payment table]

In this second design, each column stores a single unit of information about a single “thing” (a payment), and each row represents a specific instance of a payment.
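A minimal sketch of what that Payment table might look like (the column names and types here are illustrative, not taken from the figure, and a Customer table with a CustomerId key is assumed):

-- Illustrative only: one row per payment, instead of twelve payment columns on Customer
CREATE TABLE Payment
(
PaymentId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
CustomerId INT NOT NULL REFERENCES Customer (CustomerId),   -- each payment belongs to one customer
PaymentDate DATETIME NOT NULL,   -- when the payment was made
Amount MONEY NOT NULL,
Purpose VARCHAR(100) NULL   -- what the payment was for
)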

This second design is going to require a bit more code early in the process, but it is far more likely that you will be able to figure out what is going on in the system without having to hunt down the original programmer and kick their butt… sorry… ask what they were thinking.

Poor naming standards
“That which we call a rose, by any other name would smell as sweet”

This quote from Romeo and Juliet by William Shakespeare sounds nice, and it is true from one angle. If everyone agreed that, from now on, a rose was going to be called dung, then we could get over it and it would smell just as sweet. The problem is that if, when building a database for a florist, the designer calls it dung and the client calls it a rose, then you are going to have some meetings that sound far more like an Abbott and Costello routine than a serious conversation about storing information about horticulture products.

Names, while a personal choice, are the first and most important line of documentation for your application. I will not get into all of the details of how best to name things here; it is a large and messy topic. What I want to stress in this article is the need for consistency. The names you choose are not just to enable you to identify the purpose of an object, but to allow all future programmers, users, and so on to quickly and easily understand how a component part of your database was intended to be used, and what data it stores. No future user of your design should need to wade through a 500-page document to determine the meaning of some wacky name.

Consider, for example, a column named X304_DSCR. What the heck does that mean? You might decide, after some head scratching, that it means “X304 description”. Possibly it does, but maybe DSCR means discriminator, or discretizator?

Unless you have established DSCR as a corporate standard abbreviation for description, then X304_DESCRIPTION is a much better name, and one that leaves nothing to the imagination.

That just leaves you to figure out what the X304 part of the name means. On first inspection, to me, X304 sounds more like it should be data in a column than part of a column name. If I subsequently found that, in the organization, there was also an X305 and an X306, then I would flag that as an issue with the database design. For maximum flexibility, data is stored in columns, not in column names.

Along these same lines, resist the temptation to include “metadata” in an object’s name. A name such as tblCustomer or colVarcharAddress might seem useful from a development perspective, but to the end user it is just confusing. As a developer, you should rely on being able to determine that a table name is a table name by context in the code or tool, and present to the users clear, simple, descriptive names, such as Customer and Address.

A practice I strongly advise against is the use of spaces and quoted identifiers in object names. You should avoid column names such as “Part Number” or, in Microsoft style, [Part Number], which require your users to include these spaces and identifiers in their code. It is annoying and simply unnecessary.

Acceptable alternatives would be part_number, partNumber or PartNumber. Again, consistency is key. If you choose PartNumber then that’s fine – as long as the column containing invoice numbers is called InvoiceNumber, and not one of the other possible variations.

Lack of documentation
I hinted in the intro that, in some cases, I am writing for myself as much as you. This is the topic where that is most true. By carefully naming your objects, columns, and so on, you can make it clear to anyone what it is that your database is modeling. However, this is only step one in the documentation battle. The unfortunate reality is, though, that “step one” is all too often the only step.

Not only will a well-designed data model adhere to a solid naming standard, it will also contain definitions on its tables, columns, relationships, and even default and check constraints, so that it is clear to everyone how they are intended to be used. In many cases, you may want to include sample values, the reason the object was needed, and anything else that you may want to know in a year or two when “future you” has to go back and make changes to the code.

NOTE:
Where this documentation is stored is largely a matter of corporate standards and/or convenience to the developer and end users. It could be stored in the database itself, using extended properties. Alternatively, it might be maintained in the data modeling tools. It could even be in a separate data store, such as Excel or another relational database. My company maintains a metadata repository database, which we developed in order to present this data to end users in a searchable, linkable format. Format and usability are important, but the primary battle is to have the information available and up to date.

Your goal should be to provide enough information that when you turn the database over to a support programmer, they can figure out your minor bugs and fix them (yes, we all make bugs in our code!). I know there is an old joke that poorly documented code is a synonym for “job security.” While there is a hint of truth to this, it is also a way to be hated by your coworkers and never get a raise. And no good programmer I know of wants to go back and rework their own code years later. It is best if the bugs in the code can be managed by a junior support programmer while you create the next new thing. Job security along with raises is achieved by being the go-to person for new challenges.

One table to hold all domain values
“One Ring to rule them all and in the darkness bind them”

This is all well and good for fantasy lore, but it’s not so good when applied to database design, in the form of a “ruling” domain table. Relational databases are based on the fundamental idea that every object represents one and only one thing. There should never be any doubt as to what a piece of data refers to. By tracing through the relationships, from column name, to table name, to primary key, it should be easy to examine the relationships and know exactly what a piece of data means.

The big myth perpetrated by architects who don’t really understand relational database architecture (me included early in my career) is that the more tables there are, the more complex the design will be. So, conversely, shouldn’t condensing multiple tables into a single “catch-all” table simplify the design? It does sound like a good idea, but at one time giving Pauly Shore the lead in a movie sounded like a good idea too.

For example, consider the following model snippet where I needed domain values for:

•Customer CreditStatus
•Customer Type
•Invoice Status
•Invoice Line Item BackOrder Status
•Invoice Line Item Ship Via Carrier
On the face of it that would be five domain tables…but why not just use one generic domain table, like this?

[Image: a single GenericDomain table intended to hold every domain value]
This may seem a very clean and natural way to design one table for all of the domain values, but the problem is that it is just not very natural to work with in SQL. Say we just want the domain values for the Customer table:

SELECT *
FROM Customer
  JOIN GenericDomain AS CustomerType
    ON Customer.CustomerTypeId = CustomerType.GenericDomainId
   AND CustomerType.RelatedToTable = 'Customer'
   AND CustomerType.RelatedToColumn = 'CustomerTypeId'
  JOIN GenericDomain AS CreditStatus
    ON Customer.CreditStatusId = CreditStatus.GenericDomainId
   AND CreditStatus.RelatedToTable = 'Customer'
   AND CreditStatus.RelatedToColumn = 'CreditStatusId'

As you can see, this is far from being a natural join. It comes down to the problem of mixing apples with oranges. At first glance, domain tables are just an abstract concept of a container that holds text. From an implementation-centric standpoint this is quite true, but it is not the correct way to build a database. In a database, the process of normalization, as a means of breaking down and isolating data, takes every table to the point where one row represents one thing. And each domain of values is a distinctly different thing from all of the other domains (unless it is not, in which case the one table will suffice). So what you do, in essence, is normalize the data on each usage, spreading the work out over time, rather than doing the task once and getting it over with.

So instead of the single table for all domains, you might model it as:

[Image: separate domain tables (CustomerType, CreditStatus, InvoiceStatus, BackOrderStatus, ShipViaCarrier)]
Looks harder to do, right? Well, it is initially. Frankly it took me longer to flesh out the example tables. But, there are quite a few tremendous gains to be had:

•Using the data in a query is much easier:
SELECT *
FROM Customer
  JOIN CustomerType
    ON Customer.CustomerTypeId = CustomerType.CustomerTypeId
  JOIN CreditStatus
    ON Customer.CreditStatusId = CreditStatus.CreditStatusId

•Data can be validated using foreign key constraints very naturally, something not feasible for the other solution unless you implement ranges of keys for every table – a terrible mess to maintain.
•If it turns out that you need to keep more information about a ShipViaCarrier than just the code, ‘UPS’, and description, ‘United Parcel Service’, then it is as simple as adding a column or two. You could even expand the table to be a full blown representation of the businesses that are carriers for the item.
•All of the smaller domain tables will fit on a single page of disk. This ensures a single read (and likely a single page in cache). In the other case, you might have your domain table spread across many pages, unless you cluster on the referring table name, which could then make a non-clustered index more costly to use if you have many values.
•You can still have one editor for all rows, as most domain tables will likely have the same base structure/usage. And while you would lose the ability to query all domain values in one query easily, why would you want to? (A UNION query over the tables could easily be created if needed, but this would seem an unlikely need.)
I should probably rebut the thought that might be in your mind: "What if I need to add a new column to all domain tables?" For example, you forgot that the customer wants to be able to do custom sorting on domain values and didn't put anything in the tables to allow this. This is a fair question, especially if you have 1000 of these tables in a very large database. First, this rarely happens, and when it does it is going to be a major change to your database either way.

Second, even if this became a task that was required, SQL has a complete set of commands that you can use to add columns to tables, and using the system catalog it is a pretty straightforward task to build a script that adds the same column to hundreds of tables at once. That change will not be as easy as it would be with a single table, but it will not be difficult enough to outweigh the large benefits.
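As a rough sketch of that kind of script, the following query generates ALTER TABLE statements for every table whose name matches a suffix you choose; the SortOrder column and the naming filter are assumptions you would adjust to your own conventions before reviewing and running the generated script:

SELECT 'ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
     + ' ADD SortOrder int NOT NULL DEFAULT 0;'   -- the new column added everywhere
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name LIKE '%Status' OR t.name LIKE '%Type';  -- assumed domain-table naming convention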

The point of this tip is simply that it is better to do the work upfront, making structures solid and maintainable, rather than trying to attempt to do the least amount of work to start out a project. By keeping tables down to representing one “thing” it means that most changes will only affect one table, after which it follows that there will be less rework for you down the road.

Using identity/guid columns as your only key
First Normal Form dictates that all rows in a table must be uniquely identifiable. Hence, every table should have a primary key. SQL Server allows you to define a numeric column as an IDENTITY column, and then automatically generates a unique value for each row. Alternatively, you can use NEWID() to generate a random 16-byte unique value for each row (or NEWSEQUENTIALID() for one generated in sequence). These types of values, when used as keys, are what are known as surrogate keys. The word surrogate means "something that substitutes for" and in this case, a surrogate key should be the stand-in for a natural key.

The problem is that too many designers use a surrogate key column as the only key column on a given table. The surrogate key values have no actual meaning in the real world; they are just there to uniquely identify each row.

Now, consider the following Part table, whereby PartID is an IDENTITY column and is the primary key for the table:

PartID    PartNumber    Description
1         XXXXXXXX      The X part
2         XXXXXXXX      The X part
3         YYYYYYYY      The Y part
How many rows are there in this table? Well, there seem to be three, but are rows with PartIDs 1 and 2 actually the same row, duplicated? Or are they two different rows that should be unique but were keyed in incorrectly?

The rule of thumb I use is simple. If a human being could not pick which row they want from a table without knowledge of the surrogate key, then you need to reconsider your design. This is why there should be a key of some sort on the table to guarantee uniqueness, in this case likely on PartNumber.
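A minimal sketch of what that looks like in DDL, using the example table above (column sizes are assumptions): the surrogate PartID remains the primary key, while a UNIQUE constraint on PartNumber enforces the natural key.

CREATE TABLE Part
(
    PartID      int IDENTITY(1,1) NOT NULL
                CONSTRAINT PK_Part PRIMARY KEY,          -- surrogate key
    PartNumber  varchar(20) NOT NULL
                CONSTRAINT AK_Part_PartNumber UNIQUE,    -- natural key, guarantees uniqueness
    Description varchar(100) NOT NULL
);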

In summary: as a rule, each of your tables should have a natural key that means something to the user, and can uniquely identify each row in your table. In the very rare event that you cannot find a natural key (perhaps, for example, a table that provides a log of events), then use an artificial/surrogate key.

Not using SQL facilities to protect data integrity
All fundamental, non-changing business rules should be implemented by the relational engine. The base rules of nullability, string length, assignment of foreign keys, and so on, should all be defined in the database.

There are many different ways to import data into SQL Server. If your base rules are defined in the database itself, you can guarantee that they will never be bypassed, and you can write your queries without ever having to worry about whether the data you're viewing adheres to the base business rules.

Rules that are optional, on the other hand, are wonderful candidates to go into a business layer of the application. For example, consider a rule such as this: “For the first part of the month, no part can be sold at more than a 20% discount, without a manager’s approval”.

Taken as a whole, this rule smacks of being rather messy, not very well controlled, and subject to frequent change. For example, what happens when next week the maximum discount is 30%? Or when the definition of "first part of the month" changes from 15 days to 20 days? Most likely you won't want to go through the difficulty of implementing these complex temporal business rules in SQL Server code – the business layer is a great place to implement rules like this.

However, consider the rule a little more closely. There are elements of it that will probably never change. E.g.

•The maximum discount it is ever possible to offer
•The fact that the approver must be a manager
These aspects of the business rule very much ought to be enforced by the database and its design. Even if the substance of the rule is implemented in the business layer, you are still going to have a table in the database that records the size of the discount, the date it was offered, the ID of the person who approved it, and so on. On the Discount column, you should have a CHECK constraint that restricts the values allowed in this column to between 0.00 and 0.90 (or whatever the maximum is). Not only will this implement your "maximum discount" rule, but it will also guard against a user entering a 200% or a negative discount by mistake. On the ManagerID column, you should place a foreign key constraint, which references the Managers table and ensures that the ID entered is that of a real manager (or, alternatively, a trigger that selects only EmployeeIds corresponding to managers).
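As a rough sketch of those two constraints (the table and column names here are illustrative, not from a real schema):

ALTER TABLE InvoiceLineItem
    ADD CONSTRAINT CHK_InvoiceLineItem_Discount
        CHECK (Discount BETWEEN 0.00 AND 0.90);   -- "maximum discount" rule

ALTER TABLE InvoiceLineItem
    ADD CONSTRAINT FK_InvoiceLineItem_ApprovedByManager
        FOREIGN KEY (ApprovedByManagerId) REFERENCES Manager (ManagerId);  -- approver must be a manager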

Now, at the very least we can be sure that the data meets the very basic rules that the data must follow, so we never have to code something like this in order to check that the data is good:

SELECT CASE WHEN discount > 1 THEN 1 …

We can feel safe that the data meets the basic criteria, every time.

Not using stored procedures to access data
Stored procedures are your friend. Use them whenever possible as a method to insulate the database layer from the users of the data. Do they take a bit more effort? Sure, initially, but what good thing doesn’t take a bit more time? Stored procedures make database development much cleaner, and encourage collaborative development between your database and functional programmers. A few of the other interesting reasons that stored procedures are important include the following.

Maintainability
Stored procedures provide a known interface to the data, and to me, this is probably the largest draw. When code that accesses the database is compiled into a different layer, performance tweaks cannot be made without a functional programmer’s involvement. Stored procedures give the database professional the power to change characteristics of the database code without additional resource involvement, making small changes, or large upgrades (for example changes to SQL syntax) easier to do.

Encapsulation
Stored procedures allow you to "encapsulate" any structural changes that you need to make to the database so that the knock-on effect on user interfaces is minimized. For example, say you originally modeled one phone number, but now want an unlimited number of phone numbers. You could leave the single phone number in the procedure call, but store it in a different table as a stopgap measure, or even permanently if you have a "primary" number of some sort that you always want to display. Then a stored proc could be built to handle the other phone numbers. In this manner the impact to the user interfaces could be quite small, while the code of the stored procedures might change greatly.
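A minimal sketch of that idea, assuming hypothetical Customer and CustomerPhone tables: the procedure's interface still returns a single phone number even though storage has moved to a separate table.

CREATE PROCEDURE Customer_Get
    @CustomerId int
AS
BEGIN
    SELECT c.CustomerId,
           c.CustomerName,
           p.PhoneNumber                -- still one column from the caller's point of view
    FROM Customer AS c
         LEFT JOIN CustomerPhone AS p
           ON p.CustomerId = c.CustomerId
          AND p.IsPrimary = 1           -- assumed "primary number" flag
    WHERE c.CustomerId = @CustomerId;
END;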

Security
Stored procedures can provide specific and granular access to the system. For example, you may have 10 stored procedures that all update table X in some way. If a user needs to be able to update a particular column in a table and you want to make sure they never update any others, then you can simply grant to that user the permission to execute just the one procedure out of the ten that allows them to perform the required update.
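For example (the procedure and role names here are assumptions), granting execute on just that one procedure, with no direct table permissions, keeps the user limited to exactly that update:

GRANT EXECUTE ON OBJECT::dbo.Customer_UpdateCreditLimit TO CustomerServiceRole;
-- no UPDATE permission on the underlying table is granted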

Performance
There are a couple of reasons that I believe stored procedures enhance performance. First, if a newbie writes ratty code (like using a cursor to go row by row through an entire ten million row table to find one value, instead of using a WHERE clause), the procedure can be rewritten without impact to the system (other than giving back valuable resources). The second reason is plan reuse. Unless you are using dynamic SQL calls in your procedure, SQL Server can store a plan and not need to compile it every time it is executed. It's true that in every version of SQL Server since 7.0 this has become less and less significant, as SQL Server gets better at caching plans for ad hoc SQL calls (see note below). However, stored procedures still make plan reuse and performance tweaks easier. In the case where ad hoc SQL would actually be faster, this can be coded into the stored procedure seamlessly.

In SQL Server 2005, there is a database setting (PARAMETERIZATION FORCED) that, when enabled, will cause all queries to have their plans saved. This does not cover the more complicated situations that procedures would cover, but it can be a big help. There is also a feature known as plan guides, which allows you to override the plan for a known query type. Both of these features are there to help out when stored procedures are not used, but stored procedures do the job with no tricks.
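For reference, the setting is enabled per database; the database name here is just a placeholder:

ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;
-- and back to the default behavior:
ALTER DATABASE YourDatabase SET PARAMETERIZATION SIMPLE;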

And this list could go on and on. There are drawbacks too, because nothing is ever perfect. It can take longer to code stored procedures than it does to just use ad hoc calls. However, the amount of time to design your interface and implement it is well worth it, when all is said and done.

Trying to code generic T-SQL objects
I touched on this subject earlier in the discussion of generic domain tables, but the problem is more prevalent than that. Every new T-SQL programmer, when they first start coding stored procedures, starts to think “I wish I could just pass a table name as a parameter to a procedure.” It does sound quite attractive: one generic stored procedure that can perform its operations on any table you choose. However, this should be avoided as it can be very detrimental to performance and will actually make life more difficult in the long run.

T-SQL objects do not do “generic” easily, largely because lots of design considerations in SQL Server have clearly been made to facilitate reuse of plans, not code. SQL Server works best when you minimize the unknowns so it can produce the best plan possible. The more it has to generalize the plan, the less it can optimize that plan.

Note that I am not specifically talking about dynamic SQL procedures. Dynamic SQL is a great tool to use when you have procedures that are not optimizable / manageable otherwise. A good example is a search procedure with many different choices. A precompiled solution with multiple OR conditions might have to take a worst case scenario approach to the plan and yield weak results, especially if parameter usage is sporadic.
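A minimal sketch of such a search procedure, using sp_executesql so the generated statement stays parameterized; the Customer columns and parameters are illustrative assumptions:

CREATE PROCEDURE Customer_Search
    @Name   varchar(50) = NULL,
    @Status int = NULL
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT CustomerId, CustomerName, CreditStatusId FROM Customer WHERE 1 = 1';

    -- only the predicates that were actually supplied make it into the plan
    IF @Name IS NOT NULL
        SET @sql = @sql + N' AND CustomerName LIKE @Name + ''%''';
    IF @Status IS NOT NULL
        SET @sql = @sql + N' AND CreditStatusId = @Status';

    EXEC sys.sp_executesql @sql,
         N'@Name varchar(50), @Status int',
         @Name = @Name, @Status = @Status;
END;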

However, the main point of this tip is that you should avoid coding very generic objects, such as ones that take a table name and twenty column name/value pairs as parameters and let you update the values in the table. For example, you could write a procedure that started out:

CREATE PROCEDURE updateAnyTable
    @tableName sysname,
    @columnName1 sysname,
    @columnName1Value varchar(max),
    @columnName2 sysname,
    @columnName2Value varchar(max)
    …

The idea would be to dynamically specify the name of a column and the value to pass to a SQL statement. This solution is no better than simply using ad hoc calls with an UPDATE statement. Instead, when building stored procedures, you should build specific, dedicated stored procedures for each task performed on a table (or multiple tables). This gives you several benefits:

•Properly compiled stored procedures can have a single compiled plan attached to them and reused.
•Properly compiled stored procedures are more secure than ad-hoc SQL or even dynamic SQL procedures, reducing the surface area for an injection attack greatly because the only parameters to queries are search arguments or output values.
•Testing and maintenance of compiled stored procedures is far easier, since you generally have to test only the search arguments, rather than verifying that the tables/columns/etc. exist and handling the cases where they do not.
A nice technique is to build a code generation tool in your favorite programming language (even T-SQL) using SQL metadata to build very specific stored procedures for every table in your system. Generate all of the boring, straightforward objects, including all of the tedious code to perform error handling that is so essential, but painful to write more than once or twice.
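As a toy illustration of metadata-driven generation (a real template would add parameters, error handling and so on), this query emits a trivial "get all rows" procedure per table; the _GetAll naming is an assumption:

SELECT 'CREATE PROCEDURE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name + '_GetAll')
     + ' AS BEGIN SELECT * FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + '; END;'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;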

In my Apress book, Pro SQL Server 2005 Database Design and Optimization, I provide several such "templates" (mainly for triggers, but also stored procedures) that have all of the error handling built in. I would suggest you consider building your own (possibly based on mine) to use when you need to manually build a trigger/procedure or whatever.

Lack of testing
When the dial in your car says that your engine is overheating, what is the first thing you blame? The engine. Why don’t you immediately assume that the dial is broken? Or something else minor? Two reasons:

•The engine is the most important component of the car and it is common to blame the most important part of the system first.
•It is all too often true.
As database professionals know, the first thing to get blamed when a business system is running slow is the database. Why? First because it is the central piece of most any business system, and second because it also is all too often true.

We can play our part in dispelling this notion, by gaining deep knowledge of the system we have created and understanding its limits through testing.

But let’s face it; testing is the first thing to go in a project plan when time slips a bit. And what suffers the most from the lack of testing? Functionality? Maybe a little, but users will notice and complain if the “Save” button doesn’t actually work and they cannot save changes to a row they spent 10 minutes editing. What really gets the shaft in this whole process is deep system testing to make sure that the design you (presumably) worked so hard on at the beginning of the project is actually implemented correctly.

But, you say, the users accepted the system as working, so isn’t that good enough? The problem with this statement is that what user acceptance “testing” usually amounts to is the users poking around, trying out the functionality that they understand and giving you the thumbs up if their little bit of the system works. Is this reasonable testing? Not in any other industry would this be vaguely acceptable. Do you want your automobile tested like this? “Well, we drove it slowly around the block once, one sunny afternoon with no problems; it is good!” When that car subsequently “failed” on the first drive along a freeway, or during the first drive through rain or snow, then the driver would have every right to be very upset.

Too many database systems get tested like that car, with just a bit of poking around to see if individual queries and modules work. The first real test is in production, when users attempt to do real work. This is especially true when it is implemented for a single client (even worse when it is a corporate project, with management pushing for completion more than quality).

Initially, major bugs come in thick and fast, especially performance related ones. If the first time you have tried a full production set of users, background processes, workflow processes, system maintenance routines, ETL, etc., is on your system launch day, you are extremely likely to discover that you have not anticipated all of the locking issues that might be caused by users creating data while others are reading it, or hardware issues caused by poorly set-up hardware. It can take weeks to live down the cries of "SQL Server can't handle it" even after you have done the proper tuning.

Once the major bugs are squashed, the fringe cases (which are pretty rare cases, like a user entering a negative amount for hours worked) start to raise their ugly heads. What you end up with at this point is software that irregularly fails in what seem like weird places (since large quantities of fringe bugs will show up in ways that aren’t very obvious and are really hard to find.)

Now, it is far harder to diagnose and correct because now you have to deal with the fact that users are working with live data and trying to get work done. Plus you probably have a manager or two sitting on your back saying things like “when will it be done?” every 30 seconds, even though it can take days and weeks to discover the kinds of bugs that result in minor (yet important) data aberrations. Had proper testing been done, it would never have taken weeks of testing to find these bugs, because a proper test plan takes into consideration all possible types of failures, codes them into an automated test, and tries them over and over. Good testing won’t find all of the bugs, but it will get you to the point where most of the issues that correspond to the original design are ironed out.

If everyone insisted on a strict testing plan as an integral and immutable part of the database development process, then maybe someday the database won’t be the first thing to be fingered when there is a system slowdown.

Summary
Database design and implementation is the cornerstone of any data centric project (read 99.9% of business applications) and should be treated as such when you are developing. This article, while probably a bit preachy, is as much a reminder to me as it is to anyone else who reads it. Some of the tips, like planning properly, using proper normalization, using strong naming standards and documenting your work – these are things that even the best DBAs and data architects have to fight to make happen. In the heat of battle, when your manager's manager's manager is being berated for things taking too long to get started, it is not easy to push back and remind them that they pay you now, or they pay you later. These tasks pay dividends that are very difficult to quantify, because to quantify success you must fail first. And even when you succeed in one area, all too often other minor failures crop up in other parts of the project so that some of your successes don't even get noticed.

The tips covered here are ones that I have picked up over the years that have turned me from being mediocre to a good data architect/database programmer. None of them take extraordinary amounts of time (except perhaps design and planning) but they all take more time upfront than doing it the “easy way”. Let’s face it, if the easy way were that easy in the long run, I for one would abandon the harder way in a second. It is not until you see the end result that you realize that success comes from starting off right as much as finishing right.

(Pro SQL Server 2005 Database Design and Optimization)

April 13, 2009 | Uncategorized

New Spatial Data Types Ms SQL Server 2008

Applies to: SQL Server 2008 (Katmai) November 2007 CTP, SQL Server 2008 Enterprise Edition

SQL Server 2008 will be the first version of SQL Server to support spatial data and spatial operations natively. SQL Server 2008 introduces the geometry and the geography data types for storing spatial data. Geometry is a planar spatial data type, while geography represents data in a round-earth coordinate system (ellipsoidal) like latitude and longitude coordinates. Both data types are implemented as .NET CLR (Common Language Runtime) data types in Microsoft SQL Server 2008.

The geometry data type is based on the OGC (Open Geospatial Consortium) standards and supports the standard methods on geometry instances. Meanwhile, the geography data type uses a coordinate system known as WGS 84, which is used by most GPS systems.

The geometry and geography data types support the following seven instantiable spatial data objects: Point, MultiPoint, LineString, MultiLineString, Polygon, MultiPolygon and GeometryCollection.

Microsoft SQL Server 2008 will introduce approximately 70 methods or functions to support operations with these two new data types.

The benefits of these new spatial data types are:

•Spatial data types allow you to build location-enabled applications and services, so expect to see many interesting location-aware products in the near future using SQL Server spatial data types.
•Both spatial data types benefit from the new spatial indexes, providing high-performance queries.
•Extensibility through geospatial services such as Microsoft Virtual Earth.

The following example defines a point using the geography data type with coordinates of (18.25, 69.40) representing latitude and longitude for the SDQ Airport at Santo Domingo, Dominican Republic.

[Image: T-SQL defining the point with the geography data type]
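Since the original code appears only as an image, here is a hedged reconstruction of what such a point definition might look like; SRID 4326 corresponds to WGS 84, and the coordinate values are simply the ones quoted above (note that, in practice, western longitudes are negative):

DECLARE @airport geography;
SET @airport = geography::Point(18.25, 69.40, 4326);   -- (latitude, longitude, SRID)
SELECT @airport.Lat AS Latitude, @airport.Long AS Longitude;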

Finally, another example, this time using the geography data type to represent a polygon.

[Image: T-SQL defining a polygon with the geography data type]
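Again, the original is an image, so this polygon is purely illustrative; a geography exterior ring is written as longitude/latitude pairs and must close back on its first point, with the interior lying to the left of the ring direction:

DECLARE @region geography;
SET @region = geography::STGeomFromText(
    N'POLYGON((-69.5 18.2, -69.3 18.2, -69.3 18.4, -69.5 18.4, -69.5 18.2))', 4326);
SELECT @region.STArea() AS AreaInSquareMeters;   -- geography areas are returned in square meters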
 
http://www.microsoft.com/sql/2008/technologies/spatial.mspx

The Data Platform Insider

http://blogs.technet.com/dataplatforminsider/archive/2007/11/30/my-favorite-sql-server-2008-feature.aspx

SQL Server 2008 Books Online

April 13, 2009 | Uncategorized
