1/29/2012

Finding Running SQL Server Agent Jobs

By

Monitoring SQL Server agent jobs is a critical task for DBAs. There are several ways of achieving this.
The most obvious method is to use the Job Activity monitor.

In the list above, note that currently running jobs have a green arrow icon next to them. However, if you have a large number of jobs this is not easy to scan, so you can use the filtering option of the Job Activity Monitor as shown below. From this dialog you can choose to view only jobs that are currently executing; just ensure you select the Apply filter checkbox.



However, some DBAs prefer to use scripts to achieve this; the T-SQL below provides the same detail:
exec msdb..sp_help_job @Execution_status = 1
The screenshot below shows the results of executing this T-SQL:


In addition, sp_help_job called without the @execution_status parameter will list all the jobs on the server.
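If you prefer to query the Agent metadata directly, a minimal sketch along the following lines (assuming the msdb.dbo.sysjobactivity and msdb.dbo.syssessions tables available in SQL Server 2005 and later) returns the jobs that are currently executing:
-- Jobs that started in the most recent Agent session and have not yet stopped
SELECT j.name AS job_name,
       a.start_execution_date
FROM msdb.dbo.sysjobactivity AS a
INNER JOIN msdb.dbo.sysjobs AS j
        ON j.job_id = a.job_id
WHERE a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
  AND a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL;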

1/13/2012

New Built-In TSQL Functions in SQL Server 2012

By

New Built-in Functions in SQL Server 2012

SQL Server 2012 (code-named "Denali") introduces 14 new built-in functions in four categories; in addition, one existing function (LOG) has been modified to accept an optional second parameter specifying the base:
Conversion functions: PARSE, TRY_CONVERT, TRY_PARSE
Date and time functions: DATEFROMPARTS, DATETIME2FROMPARTS, DATETIMEFROMPARTS, DATETIMEOFFSETFROMPARTS, EOMONTH, SMALLDATETIMEFROMPARTS, TIMEFROMPARTS
Logical functions: CHOOSE, IIF
String functions: CONCAT, FORMAT

Conversion Functions

There are three functions introduced in this category for the casting and conversion of values.

PARSE

This function converts a string value to the specified data type; you can optionally include the culture in which the string has been formatted. It should be used to convert a string to a numeric or date/time data type. For other types of conversion you should continue using CAST or CONVERT. The performance of this function is slightly impaired because it relies on the .NET CLR.
SELECT PARSE('01/03/2012' AS datetime) AS [Conversion using PARSE]
GO
SELECT PARSE('Tuesday, 3 January 2012' AS datetime2 USING 'en-US') AS [Conversion using PARSE]
GO

TRY_CONVERT

In previous versions of SQL Server we used the CONVERT function to convert a value to another data type. This works fine as long as the value can be converted to the requested data type; if it cannot, an exception is thrown. In SQL Server 2012 you can use TRY_CONVERT, which returns the value converted to the specified data type if the conversion succeeds, or NULL instead of throwing an exception if the conversion cannot be performed.
SET DATEFORMAT dmy;
SELECT CONVERT(datetime2, '12/31/2011') AS [Conversion using CONVERT] ;
GO
SELECT TRY_CONVERT(datetime2, '12/31/2011') AS [Conversion using TRY_CONVERT] ;
GO
--TRY_CONVERT still throws an exception if the explicit conversion is not allowed at all (e.g. int to xml)
SELECT TRY_CONVERT(xml, 1) AS [Conversion using TRY_CONVERT] ;
GO

TRY_PARSE

The PARSE function converts a string to a numeric or date/time value when the conversion succeeds, and fails with an exception otherwise. TRY_PARSE can be used instead; it returns the value converted to the specified data type if the conversion succeeds, or NULL if it does not.
SELECT PARSE('01/03/2012' AS datetime) AS [Conversion using PARSE]
GO
SELECT PARSE('01/03/2012' AS float) AS [Conversion using PARSE]
GO
SELECT TRY_PARSE('01/03/2012' AS datetime) AS [Conversion using TRY_PARSE]
GO
SELECT TRY_PARSE('01/03/2012' AS float) AS [Conversion using TRY_PARSE]
GO

Date and Time Functions

There are seven new date/time functions introduced in SQL Server 2012. Six of these functions work as constructors that build and return a value of the date, time, datetime, datetime2, smalldatetime or datetimeoffset type from its individual parts. These functions are useful when your system interoperates with other systems that store or accept date information in multiple parts (such as the day, month and year as separate values). With SQL Server 2012, you can easily combine these date parts and store them as a single value. The new functions also ensure the value is valid and does not fall outside the accepted ranges. See the code snippet below for an example of each function:
--Syntax : DATEFROMPARTS (year, month, day)
--This function will be used to combine multiple date parts(year, month, day) into a value of DATE data type
SELECT DATEFROMPARTS (2012, 01, 03)
--Syntax : DATETIME2FROMPARTS (year, month, day, hours, minutes, seconds, fractions, precision)
--This function will be used to combine multiple date and time parts(year, month, day, hours, minutes, seconds, fractions, precision) into a value of DATETIME2 data type
SELECT DATETIME2FROMPARTS (2012, 01, 03, 14, 49, 00, 00, 00)
--Syntax : DATETIMEFROMPARTS (year, month, day, hours, minutes, seconds, milliseconds)
--This function will be used to combine multiple date and time parts(year, month, day, hours, minutes, seconds, milliseconds) into a value of DATETIME data type
SELECT DATETIMEFROMPARTS (2012, 01, 03, 14, 49, 00, 00)
--Syntax : DATETIMEOFFSETFROMPARTS (year, month, day, hours, minutes, seconds, fractions, hour_offset, minute_offset, precision)
--This function will be used to combine multiple date and time parts with offset (year, month, day, hours, minutes, seconds, fractions, hour_offset, minute_offset, precision) into a value of DATETIMEOFFSET data type(along with offset)
SELECT DATETIMEOFFSETFROMPARTS (2012, 01, 03, 14, 49, 00, 00, 05, 30, 00)
--Syntax : SMALLDATETIMEFROMPARTS (year, month, day, hours, minutes)
--This function will be used to combine multiple date and time parts(year, month, day, hours, minutes) into a value of SMALLDATETIME data type
SELECT SMALLDATETIMEFROMPARTS (2012, 01, 03, 14, 49)
--Syntax : TIMEFROMPARTS (hours, minutes, seconds, fractions, precision)
--This function will be used to combine multiple time parts(hours, minutes, seconds, fractions, precision) into a value of TIME data type
SELECT TIMEFROMPARTS (14, 49, 00, 00, 00) 
The EOMONTH function is a little different from the functions above: it returns the last day of the month (end of month) of the specified date. You can also pass an optional second parameter to add (or subtract) months and return the calculated result.
SELECT EOMONTH(GETDATE()) AS [Last day of current month]
SELECT EOMONTH(GETDATE(), -1) AS [Last day of last month]
SELECT EOMONTH(GETDATE(), 1) AS [Last day of next month]


Logical Functions

There are two new functions in this category: CHOOSE and IIF.

CHOOSE

CHOOSE can be used to return a value from a list of values based on an index number (note that index numbers start at 1). This function takes at least two arguments: the first must be an INT (the index), and the second argument onwards can be of any data type.
SELECT CHOOSE (1, 'A', 'B', 'C', 'D', 'E' ) AS [Using CHOOSE]
SELECT CHOOSE (4, 'A', 'B', 'C', 'D', 'E' ) AS [Using CHOOSE]
--Returns NULL if the index value exceeds the bounds of the array of values passed
SELECT CHOOSE (6, 'A', 'B', 'C', 'D', 'E' ) AS [Using CHOOSE]

IIF

If you have prior experience with other programming languages you may already have encountered IIF, a conditional function that is a compact way of writing an IF...THEN...ELSE expression. The first argument is a condition: if it evaluates to TRUE, the second expression is evaluated and returned; if it evaluates to FALSE, the third expression is evaluated and returned.
DECLARE @a int = 4;
DECLARE @b int = 40;
SELECT IIF ( @a > @b, 'Larger', 'Smaller' ) AS [Using IIF]
SELECT IIF (1=1, 5+6, 1+2 ) AS [Using IIF]
SELECT IIF (1=2, 5+6, 1+2 ) AS [Using IIF]

String Functions

There are two new functions in this category – CONCAT and FORMAT.

CONCAT

You can pass two or more values to the CONCAT function and it combines them all into a single string. Any argument of a type other than a string is implicitly converted to a string, and a NULL value is converted to an empty (zero-length) string. The data type of the return value depends on the data types of the arguments passed.
SELECT CONCAT ('Today ', 'is ', '3rd', '-', 'January') AS [Using CONCAT]
SELECT CONCAT ('Today ', 'is ', 3, 'rd', '-', 'January') AS [Using CONCAT]

FORMAT

This function formats the value passed to it using the specified format pattern and returns the result. It returns a value of type nvarchar on success, or NULL if the format pattern or culture is not valid. Like PARSE, this function incurs some overhead because it relies on the .NET CLR.
SELECT FORMAT(GETDATE(), 'dd/MM/yyyy' ) AS [Using FORMAT]
SELECT FORMAT(GETDATE(), 'dd-MMMM-yyyy') AS [Using FORMAT]
SELECT FORMAT(GETDATE(), 'dddd, MMMM dd, yyyy') AS [Using FORMAT]
SELECT FORMAT(GETDATE(), 'dd/MM/yyyy', 'en-US' ) AS [Using FORMAT]

Conclusion

In this article I covered the 14 new built-in functions, in four categories, arriving with SQL Server 2012 that should make database developers' lives easier. These useful new functions bring functionality we are used to having in other programming languages to the T-SQL/database engine level.

12/20/2011

Controlling Growth of a msdb Database

I recently encountered a situation where the drive hosting SharePoint databases in a staging environment ran out of space. I logged onto the server and found that the msdb database alone occupied 38 GB of the total disk space. The msdb database generally contains maintenance information such as backup history, log shipping data and so on. My first step was to examine all the tables, and I noted that there was not an abnormally large number of records in any of them.
I then used the T-SQL script below to determine exactly what the culprit was, and the results were rather surprising.
SELECT object_name(i.object_id) as objectName,
i.[name] as indexName,
sum(a.total_pages) as totalPages,
sum(a.used_pages) as usedPages,
sum(a.data_pages) as dataPages,
(sum(a.total_pages) * 8 ) / 1024 as totalSpaceMB,
(sum(a.used_pages) * 8 ) / 1024 as usedSpaceMB,
(sum(a.data_pages) * 8 ) / 1024 as dataSpaceMB
FROM sys.indexes i
INNER JOIN sys.partitions p
ON i.object_id = p.object_id
AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a
ON p.partition_id = a.container_id
GROUP BY i.object_id, i.index_id, i.[name]
ORDER BY sum(a.total_pages) DESC, object_name(i.object_id)
GO
The output of the above T-SQL query is as shown in the screen capture below:

As you can see in the first highlighted section, two indexes, c1lsmonitor_history_detail and nc2lsmonitor_history_detail, on the table log_shipping_monitor_history_detail occupied 25698 + 9899 = 35597 MB (roughly 34.76 GB), and these were the primary cause of the large database size.
I then decided to rebuild the two indexes noted above, which I did by simply right-clicking each index name and selecting Rebuild. I also updated the statistics of the corresponding indexes. After completing the rebuild and the statistics update, I shrank the msdb database, which reduced it from 35 GB to a mere 700 MB. Please refer to the screen capture below:

I then added the msdb database to the reindexing and update statistics job that runs daily.
Reindexing and updating statistics could alternatively be accomplished using the T-SQL below.
T-SQL for Re-indexing:
-- Loop over every database except master, model and tempdb
-- and rebuild all indexes in each one via DBCC DBREINDEX
DECLARE @dbname  varchar(1000),
        @command varchar(1000)

DECLARE DBCUR CURSOR FOR
    SELECT [name]
    FROM sysdatabases
    WHERE name NOT IN ('master', 'model', 'tempdb')
    ORDER BY 1

OPEN DBCUR
FETCH NEXT FROM DBCUR INTO @dbname
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @command = '
        USE [' + @dbname + ']
        EXEC sp_MSForEachtable ''DBCC DBREINDEX ("?")''
        '
    EXEC (@command)
    FETCH NEXT FROM DBCUR INTO @dbname
END
CLOSE DBCUR
DEALLOCATE DBCUR
GO
T-SQL for Update Statistics:
-- Loop over every database except master, model and tempdb
-- and update statistics on every table with a full scan
DECLARE @dbname  varchar(1000),
        @command varchar(1000)

DECLARE DBCUR CURSOR FOR
    SELECT [name]
    FROM sysdatabases
    WHERE name NOT IN ('master', 'model', 'tempdb')
    ORDER BY 1

OPEN DBCUR
FETCH NEXT FROM DBCUR INTO @dbname
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @command = '
        USE [' + @dbname + ']
        EXEC sp_MSForEachtable ''update statistics ? with fullscan''
        '
    EXEC (@command)
    FETCH NEXT FROM DBCUR INTO @dbname
END
CLOSE DBCUR
DEALLOCATE DBCUR
GO
Thus we have successfully controlled the growth of the msdb database. Please let us know if you have any suggestions or comments on this approach.

Drop a Database by Closing Existing Connections using SSMS or T-SQL

To drop a SQL Server database you need exclusive access to it; ensure there are no other current users of the database or you will encounter the error:
Drop failed for Database ‘dbName’ …. Cannot drop database because it is currently in use
Ensuring there are no other current users can be difficult: there may be jobs running that use the database, or idle users who have left connections to the database open, and so on.
Therefore, you need to find all the SPIDs and kill them. In SSMS, when dropping the database through the UI, there is an option to Close existing connections:

Alternatively, this can be done using the T-SQL script below.
USE master
Go
ALTER DATABASE [ClusterKey] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE ClusterKey
As you can see, the database is first set to single-user mode; the point to remember is that any transactions on existing connections will be rolled back.
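If you want to see (or kill) the offending connections yourself before dropping the database, a rough sketch along the lines below lists the SPIDs using the database. The database name is a placeholder, and sys.sysprocesses is used here only because it is available on older versions as well:
-- List the sessions currently connected to the database to be dropped
SELECT spid, loginame, hostname, program_name
FROM sys.sysprocesses
WHERE dbid = DB_ID('ClusterKey');

-- Each offending session can then be terminated individually, for example:
-- KILL 53;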

12/19/2011

SQL Server Data Warehouse Cribsheet

by

SQL Server Data Warehouse Cribsheet

For things you need to know rather than the things you want to know
It is time to shed light on Data Warehousing and to explain how SSAS, SSRS and Business Intelligence fit into the puzzle. Who better to explain it all than Robert Sheldon?


Introduction

One of the primary components in a SQL Server business intelligence (BI) solution is the data warehouse. Indeed, the data warehouse is, in a sense, the glue that holds the system together. The warehouse acts as a central repository for heterogeneous data that is to be used for purposes of analysis and reporting.
Because of the essential role that the data warehouse plays in a BI solution, it’s important to understand the fundamental concepts related to data warehousing if you’re working with such a solution, even if you’re not directly responsible for the data warehouse itself. To this end, the article provides a basic overview of what a data warehouse is and how it fits into a relational database management system (RDBMS) such as SQL Server. The article then describes database modelling concepts and the components that make up the model, and concludes with an overview of how the warehouse is integrated with other components in the SQL Server suite of BI tools.
Note:
The purpose of this article is to provide an overview of data warehouse concepts. It is not meant as a recommendation for any specific design. The article assumes that you have a basic understanding of relational database concepts such as normalization and referential integrity. Also, the examples used here tend to be specific to SQL Server 2005 and 2008, although the underlying principles can apply to any RDBMS.

The Data Warehouse

A data warehouse consolidates, standardizes, and organizes data in order to support business decisions that are made through analysis and reporting. The data might originate in RDBMSs such as SQL Server or Oracle, Excel spreadsheets, CSV files, directory services stores such as Active Directory, or other types of data stores, as is often the case in large enterprise networks. Figure 1 illustrates how heterogeneous data is consolidated into a data warehouse.
Figure 1: Using a Data Warehouse to Consolidate Heterogeneous Data
The data warehouse must be able to store data from a variety of data sources in a way that lets tools such as SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS) efficiently access the data. These tools are, in effect, indifferent to the original data sources and are concerned only with the reliability and viability of the data in the warehouse.
A data warehouse is sometimes considered to be a place for archiving data; however, that is not its purpose. Although historical data is stored in a data warehouse, only the historical range necessary to support analysis and reporting is retained there. For example, if a business rule specifies that the warehouse must maintain two years' worth of historical data, older data is offloaded to another system for archiving or is deleted, depending on the specified business requirements.

Data Warehouse vs. Data Mart

A data warehouse is different from a data mart, although the terms are sometimes used interchangeably. In addition, there is some debate as to what exactly constitutes a data mart as compared to a data warehouse. However, it is generally accepted that a data warehouse is associated with enterprise-wide business processes and decisions (and consequently is usually a repository for enterprise-wide data), whereas the data mart tends to focus on a specific business segment of that enterprise. In some cases, a data mart might be considered a subset of the data warehouse, although this is by no means a universal interpretation or practice. For the purposes of this article, we’re concerned only with the enterprise-wide repository known as a data warehouse.

Relational Database vs. Dimensional Database

Because SQL Server, like Oracle and MySQL, is an RDBMS, any database stored within that system can be considered, by extension, a relational database. And that’s where things can get confusing.
The typical relational database supports online transaction processing (OLTP). For example, an OLTP database might support bank transactions or store sales. The transactions are immediate and the data is current, with regard to the most recent transaction. The database conforms to a relational model for efficient transaction processing and data integrity. The database design should, in theory, adhere to the strict rules of normalization which aim, among other things, to ensure that the data is treated as atomic units and there is minimal amount of redundant data.
A data warehouse, on the other hand, generally conforms to a dimensional model, which is more concerned with query efficiency than issues of normalization. Even though a data warehouse is, strictly speaking, a relational database (because it’s stored in a RDBMS), the tables and relationships between those tables are modelled very differently from the tables and relationships defined in the relational database. (The specifics of data warehouse modelling are discussed below.)
Note:
Because of the reasons described above, you might come across documentation that refers to a data warehouse as a relational database. However, for the purposes of this article, I refer to an OLTP database as a relational database and a data warehouse as a dimensional database.

Dimensional Database vs. Multidimensional Database

Another source of confusion at times is the distinction between a data warehouse and an SSAS database. The confusion results because some documentation refers to an SSAS database as a dimensional database. However, unlike the SQL Server database engine, which supports OLTP as well as data warehousing, Analysis Services supports online analytical processing (OLAP), which is designed to quickly process analytical queries. Data in an OLAP database is stored in multidimensional cubes of aggregated data, unlike the typical table/column model found in relational and dimensional databases.
Note:
I’ve also seen a data warehouse referred to as a staging database when used to support SSAS analysis. Perhaps from an SSAS perspective, this might make sense, especially if all data in the data warehouse is always rolled up into SSAS multidimensional cubes. In this sense, SSAS is seen as the primary database and the warehouse as only supporting that database. Personally, I think such a classification diminishes the essential role that the data warehouse plays in a BI solution, so I would tend to avoid this sort of reference.
OLAP technologies are usually built with dimensionally modelled data warehouses in mind, although products such as SSAS can access data directly from a relational database. However, it is generally recommended to use a warehouse to support more efficient queries, properly cleanse the data, ensure data integrity and consistency, and support historical data. The data warehouse also acts as a checkpoint (not unlike a staging database!) for troubleshooting data extraction, transformation, and load (ETL) operations and for auditing the data.

The Data Model

A data warehouse should be structured to support efficient analysis and reporting. As a result, the tables and their relationships must be modelled so that queries to the database are both efficient and fast. For this reason, a dimensional model looks very different from a relational model.
There are basically two types of dimensional models: the star schema and snowflake schema. Often, a data warehouse model follows one schema or the other. However, both schemas are made up of the same two types of tables: facts and dimensions. Fact tables represent a core business process, such as retail sales or banking transactions. Dimension tables store related details about those processes, such as customer data or product data. (Each table type is described in greater detail later in the article.)

The Star Schema

The basic structure of a star schema is a fact table with foreign keys that reference a set of dimensions. Figure 2 illustrates how this structure might look for an organization’s sales process.

Figure 2: Using a Star Schema for Sales Data
The fact table (FactSales) pulls together all information necessary to describe each sale. Some of this data is accessed through foreign key columns that reference dimensions. Other data comes from columns within the table itself, such as Quantity and UnitPrice. These columns, referred to as measures, are normally numeric values that, along with the data in the referenced dimensions, provide a complete picture of each sale.
The dimensions, then, provide details about the functional groups that support each sale. For example, the DimProduct dimension includes specific details about each product, such as color and weight. Notice, however, that the dimension also includes the Category and Subcategory columns, which represent the hierarchical nature of the data. (Each category contains a set of subcategories, and each subcategory contains a set of products.) In essence, the dimension in the star schema denormalizes—or flattens out—the data. This means that most dimensions will likely include a fair amount of redundant data, thus violating the rules of normalization. However, this structure provides for more efficient querying because joins tend to be much simpler than those in queries accessing comparable data in a relational database.
Dimensions can also be used by other fact tables. For example, you might have another fact table that references the DimProduct and DimDate dimensions as well as other dimensions specific to that fact. The key is to be sure that the dimension is set up to support both fact tables so that data is presented consistently through each one.
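To make the structure concrete, here is a minimal DDL sketch of a star schema along the lines of Figure 2. The table and column definitions are illustrative assumptions, not the exact schema shown in the figure:
-- Flattened (denormalized) product dimension: category and subcategory live on the row
CREATE TABLE DimProduct (
    ProductKey  int IDENTITY(1,1) PRIMARY KEY,
    ProductName nvarchar(50) NOT NULL,
    Color       nvarchar(15) NULL,
    Weight      decimal(8, 2) NULL,
    Subcategory nvarchar(50) NOT NULL,
    Category    nvarchar(50) NOT NULL
);

CREATE TABLE DimDate (
    DateKey int PRIMARY KEY,   -- e.g. 20081020
    [Year]  smallint NOT NULL,
    [Month] tinyint NOT NULL,
    [Day]   tinyint NOT NULL
);

-- Fact table: foreign keys to the dimensions plus numeric measures
CREATE TABLE FactSales (
    OrderDateKey int NOT NULL REFERENCES DimDate (DateKey),
    ProductKey   int NOT NULL REFERENCES DimProduct (ProductKey),
    Quantity     int NOT NULL,
    UnitPrice    money NOT NULL,
    SalesAmt     money NOT NULL
);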

The Snowflake Schema

You can think of the snowflake schema as an extension of the star schema. The difference is that, in the snowflake schema, dimensional hierarchies are extended (normalized) through multiple tables to avoid some of the redundancy found in a star schema. Figure 3 shows a snowflake schema that stores the same data as the star schema in Figure 2.
Figure 3: Using a Snowflake Schema for Sales Data
Notice that the dimensional hierarchies are now extended into multiple tables. For example, the DimProduct dimension references the DimProductSubcategory dimension, and the DimProductSubcategory dimension references the DimProductCategory dimension. However, the fact table still remains the hub of the schema, with the same foreign key references and measures.
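A corresponding rough sketch of the snowflaked product hierarchy (again, the names are illustrative; this version of DimProduct replaces the flattened one in the star schema sketch above):
-- The product hierarchy is normalized into separate tables linked by foreign keys
CREATE TABLE DimProductCategory (
    ProductCategoryKey int IDENTITY(1,1) PRIMARY KEY,
    CategoryName       nvarchar(50) NOT NULL
);

CREATE TABLE DimProductSubcategory (
    ProductSubcategoryKey int IDENTITY(1,1) PRIMARY KEY,
    ProductCategoryKey    int NOT NULL REFERENCES DimProductCategory (ProductCategoryKey),
    SubcategoryName       nvarchar(50) NOT NULL
);

CREATE TABLE DimProduct (
    ProductKey            int IDENTITY(1,1) PRIMARY KEY,
    ProductSubcategoryKey int NOT NULL REFERENCES DimProductSubcategory (ProductSubcategoryKey),
    ProductName           nvarchar(50) NOT NULL,
    Color                 nvarchar(15) NULL
);

-- The fact table is unchanged: it still references DimProduct (ProductKey) directly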

The Star Schema vs. the Snowflake Schema

There is much debate over which schema model is better. Some suggest—in fact, insist—that the star schema with its denormalized structure should always be used because of its support for simpler and better performing queries. However, such a schema makes updating the data a more complex process. The snowflake schema, on the other hand, is easier to update, but queries are more complex. At the same time, there’s the argument that SSAS queries actually perform better against a snowflake schema than a star schema. And the debate goes on.
In many cases, data modellers choose to do a hybrid of both, normalizing those dimensions that support the user the best. The AdventureWorksDW and AdventureWorksDW2008 sample data warehouses take this approach. For example, in the AdventureWorksDW2008 database, the hierarchy associated with the DimProduct dimension is extended in a way consistent with a snowflake schema, but the DimDate dimension is consistent with the star schema, with its denormalized structure. The important point to keep in mind is that the business requirements must drive the data model—with query performance a prime consideration.

The Fact Table

At the core of the dimensional data model is the fact table. As shown in figures 2 and 3, the fact table acts as the central access point to all data related to a specific business process—in this case, sales. The table is made up of columns that reference dimensions (the foreign keys) and columns that are measures. A fact table supports several types of measures:
  • Additive: A type of measure in which meaningful information can be extracted by aggregating the data. In the preceding examples, the SalesAmt column is additive because those figures can be aggregated based on a specified date range, product, salesperson, or customer. For example, you can determine the total sales amount for each customer in the past year.
  • Nonadditive: A type of measure that does not expose meaningful information through aggregation. In the above examples, the UnitPrice column might be considered nonadditive. For example, totaling the column based on a salesperson’s sales in the past year does not produce meaningful data. Ultimately, a measure is considered nonadditive if it cannot be aggregated along any dimensions. (If a measure is additive along some dimensions and not others, it is sometimes referred to as semiadditive.)
  • Calculated: A type of measure in which the value is derived by performing a function on one or more measures. For example, the TotalAmt column is calculated by adding the SalesAmt and TaxAmt values. (These types of columns are sometimes referred to as computed columns.)
If you refer back to figure 2 or 3, you’ll see that no primary key has been defined on the fact table. I have seen various recommendations on how to handle the primary key. One recommendation is to not include a primary key, as I’ve done here. Another recommendation is to create a composite primary key based on the foreign-key columns. A third recommendation is to add an IDENTITY column configured with the INT data type. Finally, another approach is to add columns from the original data source that can act as the primary key. For example, the source data might include an OrderID column. (This is the approach taken by the AdventureWorksDW2008 data warehouse.)
As stated above, the goal of any data warehouse design should be to facilitate efficient and fast queries (while still ensuring data integrity). However, other considerations include whether it will be necessary to partition the fact table, how much overhead the additional indexing (from adding a primary key) will generate, and whether that indexing actually improves the querying process. As with other database design considerations, it might come down to testing various scenarios to see where you receive the greatest benefits and what causes the greatest liabilities.
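As a small illustration of an additive measure, the following query sketch (assuming FactSales also carries a CustomerKey referencing a DimCustomer dimension, as in figures 2 and 3; the column names are assumptions) rolls SalesAmt up by customer for a given year:
SELECT c.CustomerKey,
       SUM(f.SalesAmt) AS TotalSalesAmt   -- additive: aggregating along dimensions is meaningful
FROM FactSales AS f
INNER JOIN DimCustomer AS c ON c.CustomerKey = f.CustomerKey
INNER JOIN DimDate     AS d ON d.DateKey = f.OrderDateKey
WHERE d.[Year] = 2008
GROUP BY c.CustomerKey;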

The Dimension

A dimension is a set of related data that supports the business processes represented by one or more fact tables. For example, the DimCustomer table in the examples shown in figures 2 and 3 contains only customer-related information, such as name, address, and phone number, along with relevant key columns.

The Surrogate Key

Notice that the DimCustomer dimension includes the CustomerKey column and the CustomerBusKey column. Let’s start with CustomerKey.
The CustomerKey column is the dimension’s primary key. In most cases, the key will be configured as an IDENTITY column with the INT data type. In a data warehouse, the primary key is usually a surrogate key. This means that the key is generated specifically within the context of the data warehouse, independent of the primary key used in the source systems.
You should use a surrogate key for a number of reasons. Two important ones are that the key provides the mechanism necessary for updating certain types of dimension attributes over time, and it removes the dependencies on keys originating in different data sources. For example, if you retrieve customer data from two SQL Server databases, a single customer might be associated with multiple IDs, or the same ID might be assigned to multiple customers.
That doesn’t mean that the original primary key is discarded. The original key is carried into the dimension and maintained along with the surrogate key. The original key, usually referred to as the business key, lets you map the source data to the dimension. For example, the CustomerBusKey column in the DimCustomer dimension contains the original customer ID, and the CustomerKey column contains the new ID (which is the surrogate key and primary key).
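A rough sketch of how the business key is typically used during the load: the source system's customer ID resolves to the warehouse surrogate key via a lookup on CustomerBusKey before fact rows are inserted. The staging table and its columns here are hypothetical:
-- Resolve the source customer ID to the surrogate key and load the fact table
INSERT INTO FactSales (CustomerKey, OrderDateKey, Quantity, UnitPrice, SalesAmt)
SELECT c.CustomerKey,          -- surrogate key generated inside the warehouse
       s.OrderDateKey,
       s.Quantity,
       s.UnitPrice,
       s.SalesAmt
FROM StagingSales AS s         -- hypothetical staging table carrying source data
INNER JOIN DimCustomer AS c
        ON c.CustomerBusKey = s.SourceCustomerID;   -- business key from the source system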

The Date Dimension

A date dimension (such as the DimDate dimension in the examples) is treated a little differently from other types of dimension. This type of table usually contains one row for each day for whatever period of time is necessary to support an application. Because a date dimension is used, dates are not tracked within the fact table (except through the foreign key), but instead in the referenced date dimension. This way, not only is the day, month, and year stored, but also such data as the week of the year, quarter, and semester. As a result, this type of information does not have to be calculated within the queries. This is important because date ranges usually play an important role in both analysis and reporting.
Note:
The DimDate dimension in the examples uses only the numerical equivalents to represent day values such as day of week and month. However, you can create a date dimension that also includes the spelled-out names, such as Wednesday and August.
A fact table can reference a date dimension multiple times. For instance, the OrderDateKey and ShipDayKey columns both reference the DateKey column in DimDate. If you also want to track the time of day, your warehouse should include a time dimension that stores the specific time increments, such as hour and minute. Again, a time dimension can be referenced by multiple facts or can be referenced multiple times within the same fact.
Unlike other types of dimensions whose primary key is an integer, a date dimension uses a primary key that represents the date. For example, the primary key for the October 20, 2008 row is 20081020. A time dimension follows the same logic. If the dimension stores hours, minutes, and seconds, each row would represent a second in the day. As a result, the primary key for a time such as 10:30:27 in the morning would be 103027.
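A small sketch showing how such keys can be derived from date and time values (conversion style 112 produces yyyymmdd and style 108 produces hh:mm:ss):
-- 20081020 for October 20, 2008
SELECT CONVERT(int, CONVERT(char(8), CONVERT(date, '2008-10-20'), 112)) AS DateKey;

-- 103027 for 10:30:27 in the morning
SELECT CONVERT(int, REPLACE(CONVERT(char(8), CONVERT(time, '10:30:27'), 108), ':', '')) AS TimeKey;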

The Slowly Changing Dimension

Dimension tables are often classified as slowly changing dimensions (SCDs), that is, the stored data changes relatively slowly, compared to fact tables. For instance, in the previous examples, the FactSales table will receive far more updates than the DimProduct or DimSalesPerson dimensions. Such dimensions normally change very slowly.
The way in which you identify a SCD affects how the dimension is updated during the ETL process. There are three types of slowly changing dimensions:
  • Type 1: Rows to be updated are overwritten, thus erasing the row’s history. For example, if the size of a product changes (which is represented as one row in the DimProduct dimension), the original size is overwritten with the new size, and there is no historical record of the original size.
  • Type 2: Rows to be updated are added as new records to the dimension. The new record is flagged as current, and the original record is flagged as not current. For example, if the product’s color changes, there will be two rows in the dimension, one with the original color and one with the new color. The row with the new color is flagged as the current row, and the original row is flagged as not current.
  • Type 3: Updated data is added as new columns to the dimension. In this case, if the product color changes, a new column is added so that the old and new colors are stored in a single row. In some designs, if the color changes more than once, only the current and original values are stored.
The built-in mechanisms in SQL Server Integration Services (SSIS) implement only Type 1 and Type 2 SCDs. Although you can build an ETL solution to work with Type 3 SCDs, Type 2 SCDs are generally considered more efficient, easier to work with, and provide the best capacity for storing historical data.
When working with SCDs in SSIS, you can use the Slowly Changing Dimension transformation to implement the SCD transformation workflow. However, SSIS does not identify SCDs at the dimension level, but rather at the attribute (column) level. Figure 4 shows the Slowly Changing Dimension Columns screen of the Slowly Changing Dimension wizard.

Figure 4: Specifying the Slowly Changing Dimension Columns in SSIS
For each column you can choose whether that column is a Changing Attribute (Type 1) or a Historical Attribute (Type 2). The wizard also offers a third option—Fixed Attribute, in which case, the value of the column can never change.
There is one other consideration when working with SCDs. If a dimension supports Type 2 SCDs, you need some way to mark each row to show whether it is the current row (the most recent updated row) and the historical range of when each row was current. The most effective way to do this is to add the following three columns to any dimension that supports SCD attributes:
  • Start date: The date/time when the row was inserted into the dimension.
  • End date: The date/time when the row became no longer current and a new row was inserted to replace this row. If the row is the most current row, this column value is set to null or assigned a maximum date that is out of the present-day range, such as December 31, 9999.
  • Current flag: A Boolean or other indicator that marks which row within a given set of associated rows is the current row.
Some systems implement only the start date and end date columns, without a flag, and then use the end date to calculate which row is the current row. In fact, the Slowly Changing Dimension transformation supports using only the dates or using only a flag. However, implementing all three columns is generally considered to be the most effective strategy when working with SCDs. One approach you can take when using the Slowly Changing Dimension wizard is to select the status flag as the indicator and then modify the data flow to incorporate the start and end date updates.
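Outside of SSIS, the same Type 2 pattern can be scripted directly. The following is only a rough T-SQL sketch; the business key, attribute and tracking column names are assumptions based on the three columns described above:
DECLARE @ProductBusKey int, @NewColor nvarchar(15);
SET @ProductBusKey = 776;        -- hypothetical business key of the changed product
SET @NewColor = N'Red';

-- Expire the current row for the product whose attribute changed
UPDATE DimProduct
SET EndDate = GETDATE(),
    IsCurrent = 0
WHERE ProductBusKey = @ProductBusKey
  AND IsCurrent = 1;

-- Insert the new version as the current row (other attributes would be copied or supplied here)
INSERT INTO DimProduct (ProductBusKey, Color, StartDate, EndDate, IsCurrent)
VALUES (@ProductBusKey, @NewColor, GETDATE(), '99991231', 1);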

The Data

Although the data warehouse is an essential component in an enterprise-wide BI system, there are indeed other important components. Figure 5 provides an overview of how the SQL Server suite of tools might be implemented in a BI system.
Figure 5: The SQL Server BI Suite
As you can see, SSIS provides the means to retrieve, transform, and load data from the various data sources. The data warehouse itself is indifferent to the source of the data. SSIS does all the work necessary to ensure that the data conforms to the structure of the warehouse, so it is critical that the warehouse is designed to ensure that the reporting and analysis needs are being met. To this end, it is also essential to ensure that the SSIS ETL operation thoroughly cleanses the data and guarantees its consistency and validity. Together SSIS and the data warehouse form the foundation on which all other BI operations are built.
After the data is loaded into the warehouse, it can then be processed into the SSAS cubes. Note that SSIS, in addition to loading the data warehouse, can also be used to process the cubes as part of the ETL operation.
In addition to supporting multidimensional analysis, SSAS supports data mining in order to identify patterns and trends in the data. SSAS includes a set of predefined data-mining algorithms that help data analysts perform such tasks as forecasting sales or targeting mailings toward specific customers.
Another component in the SQL Server BI suite is SSRS. You can use SSRS to generate reports based on the data in either the data warehouse or in the SSAS database. (You can also use SSRS to access the data sources directly, but this approach is generally not recommended for an enterprise operation because you want to ensure that the data is cleansed and consistent before including it in reports.) Whether you generate SSRS reports based on warehouse data or SSAS data depends on your business needs. You might choose not to implement SSAS or SSRS, although it is only when these components are all used in orchestration that you realize the full power of the SQL Server BI suite.

Conclusion

Regardless of the components you choose to implement or the business rules you’re trying to address in your BI solution, you can see that the data warehouse plays a critical role in that solution. By storing heterogeneous and historical data in a manner that ensures data integrity and supports efficient access to that data, the data warehouse becomes the heart of any BI solution. As a result, the better you understand the fundamental concepts associated with the data warehouse, the more effectively you will understand and be able to work with all your BI operations.

12/16/2011

Data Warehousing - Fact and Dimension Tables

Data warehouses are built using dimensional data models, which consist of fact and dimension tables. Dimension tables are used to describe dimensions; they contain dimension keys, values and attributes. For example, the time dimension would contain every hour, day, week, month, quarter and year that has occurred since you started your business operations. A product dimension could contain the name and description of the products you sell, their unit price, color, weight and other attributes as applicable.



Dimension tables are typically small, ranging from a few to several thousand rows. Occasionally dimensions can grow fairly large, however. For example, a large credit card company could have a customer dimension with millions of rows. Dimension table structure is typically very lean; for example, a customer dimension could look like the following:

Customer_key
Customer_full_name
Customer_city
Customer_state
Customer_country


Although there might be other attributes that you store in the relational database, the data warehouse might not need all of them. For example, customer telephone numbers, email addresses and other contact information would not be necessary for the warehouse. Keep in mind that data warehouses are used to make strategic decisions by analyzing trends; they are not meant to be tools for daily business operations. On the other hand, you might have some reports that do include data elements that aren't necessary for data analysis.



Most data warehouses will have one or more time dimensions. Since the warehouse will be used for finding and examining trends, data analysts will need to know when each fact occurred. The most common time dimension is calendar time. However, your business might also need a fiscal time dimension if your fiscal year does not start on January 1st like the calendar year.
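As a rough sketch of how calendar and fiscal attributes can live side by side in a time dimension, the loop below populates one year of a hypothetical DimDate table, assuming a fiscal year that starts on July 1st; all names and the fiscal rule are illustrative only:
DECLARE @d date;
SET @d = '2011-01-01';

WHILE @d < '2012-01-01'
BEGIN
    INSERT INTO DimDate (DateKey, CalendarDate, CalendarYear, CalendarMonth, FiscalYear)
    VALUES (CONVERT(int, CONVERT(char(8), @d, 112)),   -- yyyymmdd surrogate key
            @d,
            YEAR(@d),
            MONTH(@d),
            YEAR(@d) + CASE WHEN MONTH(@d) >= 7 THEN 1 ELSE 0 END);  -- fiscal year starting July 1st

    SET @d = DATEADD(day, 1, @d);
END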



Most data warehouses will also contain product or service dimensions since each business typically operates by offering either products or services to others. Geographically dispersed businesses are likely to have a location dimension.



Fact tables contain keys to dimension tables as well as measurable facts that data analysts would want to examine. For example, a store selling automotive parts might have a fact table recording a sale of each item. The fact table of an educational entity could track credit hours awarded to students. A bakery could have a fact table that records manufacturing of various baked goods.



Fact tables can grow very large, with millions or even billions of rows. It is important to identify the lowest level of facts that makes sense to analyze for your business; this is often referred to as the fact table "grain". For instance, for a healthcare billing company it might be sufficient to track revenue by month; daily and hourly data might not exist or might not be relevant. On the other hand, analysts at an assembly-line warehouse might be very interested in the number of defective goods manufactured each hour. Similarly, a marketing data warehouse might be concerned with the activity of a consumer group at a specific income level rather than purchases made by each individual.
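For instance, if the chosen grain is one row per item billed per day, a query sketch like the following (the table and column names are illustrative only) rolls that daily detail up to the monthly level a billing analyst might review:
SELECT d.CalendarYear,
       d.CalendarMonth,
       SUM(f.RevenueAmt) AS MonthlyRevenue
FROM FactBillingRevenue AS f                 -- hypothetical daily-grain fact table
INNER JOIN DimDate AS d ON d.DateKey = f.BillingDateKey
GROUP BY d.CalendarYear, d.CalendarMonth
ORDER BY d.CalendarYear, d.CalendarMonth;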

SQL Server 2008’s Management Data Warehouse - Part 2

by

To identify the location of the MDW, enter a server name and database. Use the “New” button to create a new MDW database. Note that if you click the New button, specify a new database to be created and then cancel the wizard, the database will still be created.

There are two modes in which a Data Collection can run: collections that collect data based on a snapshot in time, and collections that collect data continuously. The Data Collections that collect data continuously use the “Cache directory” to store collected data between uploads to the MDW. To specify a cache location, either type in the name of the directory or browse for it using the ellipsis (…) button. If you browse for the cache directory it must already exist; if you type it in, make sure you identify an existing directory. If the directory doesn’t exist, the system Data Collection processes that require a cache will fail until you create the specified cache folder. I’d suggest you create the cache folder in advance to avoid having any Data Collection failures to deal with.

Once you select a server and database and identify a cache directory location, the “Next >” button becomes available. When you click it, the following screen is displayed:






On this screen you identify the rights each login will have in the MDW. There are three roles: mdw_admin, mdw_reader and mdw_writer. The mdw_admin role is allowed to read, write and update data, as well as run purge and cleanup jobs against the MDW. The mdw_reader role only has access to read data in the MDW, whereas the mdw_writer role can write and upload data to the MDW. If you are a DBA and you want to completely manage the MDW, make sure you give yourself the “mdw_admin” role. You don’t have to grant all the rights through the wizard; you can go directly into the MDW and place people in these roles later if you wish, as sketched below. Once you are done mapping logins you can click either the “Next>” or the “Finish>>|” button; both take you to a summary window where you can verify what you have entered in the wizard. If, while reviewing the summary information, you find something you want to change, you can click the “<Back” button to go back and change your MDW configuration options, and then click the “Finish>>|” button to return to the summary window.
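For example, a rough sketch (the MDW database name and the login are placeholders) of adding a login to one of these roles later, outside the wizard:
USE MDW;   -- the MDW database created by the wizard
GO
-- Create a database user for the login, then add it to the desired MDW role
CREATE USER [DOMAIN\DbaLogin] FOR LOGIN [DOMAIN\DbaLogin];
EXEC sp_addrolemember N'mdw_admin', N'DOMAIN\DbaLogin';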

After you click finish the wizard will take a few minutes to create the MDW environment. When the wizard is done setting up the MDW the following screen is displayed:




Once you have completed the wizard, the MDW database will have been created, along with three system Data Collections and a number of SQL Server Agent jobs and SSIS packages.

If you expand the “Data Collection” item in Object Explorer within SSMS you will be able to see the three different system Data Collections under the “System Data Collection Sets” item.

In addition to the three different system Data Collections a number of SQL Agent jobs and SSIS packages are also created. These SQL Agent jobs and SSIS packages are used to automate the extraction and load process of the MDW for each Data Collection.

System Data Collections
The three system Data Collection items created are Disk Usage, Query Statistics and System Statistics. The Disk Usage Data Collection collects disk space usage information for the data and log files associated with each database. This information can be used to track the growth rate of your databases over time, which is useful for capacity management.

The Query Statistics Data Collection collects information about the queries that are run against the SQL Server instance. Since the Query Statistics Data Collection is disabled by default, you will need to enable it if you want to collect query statistics. The System Statistics Data Collection gathers performance counter information, such as CPU and memory usage. This Data Collection allows you to monitor various counters over time, so you can identify resource bottlenecks and trends.
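If you prefer to work from a script, a rough sketch using the data collector procedures in msdb (the collection set name is assumed to match what appears in SSMS on your build) lists the collection sets and starts the Query Statistics set:
-- List the collection sets and whether each one is currently running
SELECT collection_set_id, name, is_running
FROM msdb.dbo.syscollector_collection_sets;

-- Start (enable) the Query Statistics collection set
EXEC msdb.dbo.sp_syscollector_start_collection_set @name = N'Query Statistics';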

SQL Server 2008 Performance Monitoring
The MDW and the Data Collection containers provide DBAs with an easy method to gather performance monitoring data for SQL Server 2008. Having the performance data collection process built into SQL Server 2008 reduces the need to build your own data collection routines. By using the MDW and the Data Collection process, DBAs now have the tools necessary to track the performance of their SQL Server environment over time and to perform capacity planning when new hardware needs to be acquired.