Slow Queries Part II

You are now the lead/only DBA for your company.
The VP of Marketing states that users are complaining of slowness and he would like you to get back to him later today to discuss any recommendations.
Where should one start?
I have been working on a checklist to help with troubleshooting. This second part is a quick overview and should help when you need to do performance tuning on a query.
This checklist is broken down into the following parts:
  1. Profiler
  2. Intro to Query Analyzer


Have you talked to users?
I know this seems like common sense, but talking to the users first can help point you in the next best direction.
Running Profiler

Now that you have an idea of the query or queries the users were running, you can set up Profiler to run.

  • Do not run Profiler on the server you are monitoring.
  • Try to limit the data to only what you need.
  • When collecting data, do it over a typical work day.

Some events to capture

Stored Procedure – RPC:Completed


What to filter

Duration > 5000 Milliseconds (5 sec) [More or less is up to you]

Individual DBID to limit the data to the database in question.

ApplicationName: exclude, e.g., SQL Profiler, SQL Agent, MS SQLEM (Enterprise Manager)

Data columns to capture

Duration: (group by)

Event class: Tells you what event is being monitored

DBID: Database (if needed)

TextData: SQL statements that ran









If you want to see how a typical work day would run on a new/test/development server, you can replay a trace captured from production on those servers. This is a nice way to stress test a new server.


The query optimizer will REUSE a plan if it is in the plan cache, and this can play a role in how the query is executed.

To view queries in Plan Cache:

SELECT [cp].[refcounts],
       [cp].[usecounts],
       [cp].[objtype],
       [st].[text],
       [qp].[query_plan]
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp

When testing a query, you should run the following (on a test server only; clearing the buffer and plan caches on a production server will hurt performance):

CHECKPOINT;
DBCC DROPCLEANBUFFERS;

This forces all dirty pages for the current database to be written to disk and cleans the buffers.

DBCC FREEPROCCACHE;

Freeing the plan cache causes, for example, a stored procedure to be recompiled instead of reused from the cache.

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

These return good info on logical reads and query time.

What to do with the data?

Now that you have the data collected, you might see a lot of queries that ran based on your filter. This is where grouping by Duration pays off. If the end users explained to you that slowness only occurred during a certain part of an application, you can search by ApplicationName, NTUserName, or LoginName and see what SQL was run in the TextData column. You can also prioritize the output based on the longest-running queries.

Execution Plan

Now that we have an idea of which query or queries are slow, you need to move the query to Query Analyzer to view the execution plan. If the TextData from Profiler showed a stored procedure, you will need to find and open the stored procedure in Enterprise Manager and copy it to Query Analyzer. You will also need to add the parameters to the stored procedure call that showed up in Profiler.

The execution plan is a great tool to answer the following:

  • Why is a query running slowly?
  • Is an index being used or not?
  • How is the data being retrieved?
  • How are the tables being joined?
  • What is the cost of each operation?

Execution Plan Icons

Hovering the cursor over each icon displays a Tool Tip box. The following information is displayed within this Tool Tip:


Cache Plan Size

This shows how much memory the plan will take up in the procedure cache. It gives you an idea of how much memory your plans are using.

Estimated Operator Cost

The cost of this operator as estimated by the optimizer.

Estimated Subtree Cost

Shows the accumulated optimizer cost of this operator and all the operators that feed into it. Read the steps from right to left. Represents the amount of time the optimizer thinks this portion of the plan will take.

Estimated Number of Rows

The number is created based on statistics on the table or index.

Physical Operation

This shows what actually happened.

Logical Operation

The results of the optimizer for what should happen if executed.


Ordered

Shows if the data that this operation is working on is in an ordered state.


Node ID

This shows the step order, numbered from left to right, even though the plan should be read from right to left.


Clustered Index Scan

The optimizer decides that so many rows need to be returned that it is quicker to scan all the records in the index rather than use the keys provided by the index.

Fine-tune the WHERE clause.

Clustered Index Seek

The optimizer tells the storage engine to look up data based on the keys of the index. The key values are used to quickly identify the row, or rows, of data needed.

Non-Clustered Index Seek (Index Seek)

Uses a non-clustered index to look up the data/rows to be returned. The optimizer might be able to find all the data in the non-clustered index, or it might have to look up the remaining data in the clustered index (if one exists), slightly hurting performance due to the additional I/O needed to perform the extra lookup.

Key Lookup

Always accompanied by a Nested Loop Join operation that combines the results of the two operations.

A Key Lookup is a Bookmark Lookup on a table with a clustered index: the non-clustered index entry carries the clustering key, which is used to look up the rest of the row in the clustered index. (On a heap, the equivalent operation is a RID Lookup.)

It means that the optimizer cannot retrieve the rows in a single operation and has to go back to the clustered index.

It is an indication the query could benefit from a COVERING or INCLUDED index. These include all of the columns that need to be returned by the query; since all the columns of each row are found in the index, a Key Lookup does not have to occur in order to get all the columns that need to be returned.

Nested Loop

Because a Key Lookup is needed, a Nested Loop Join is needed to combine the rows of the Index Seek and the Key Lookup.
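
As a sketch of the covering-index fix described above (the table and column names below are hypothetical):

```sql
-- Hypothetical dbo.Orders table with a non-clustered index on OrderDate only.
-- This query seeks on OrderDate but needs a Key Lookup (plus a Nested Loop
-- join) to fetch CustomerID and TotalDue from the clustered index:
SELECT OrderDate, CustomerID, TotalDue
FROM dbo.Orders
WHERE OrderDate = '2011-05-19';

-- An included index carries the extra columns in its leaf level, so the
-- Key Lookup and its Nested Loop join disappear from the plan:
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID, TotalDue);
```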

Table Scan

Rows are returned by scanning the entire table. This usually occurs because there are no useful indexes.

Check the indexes and/or the WHERE clause.

RID Lookup

The query performs an Index Seek on the primary key. If the Tool Tip shows Bmk… in the Output List, not all of the data was present in the index; the Index Seek is part of a query that has a Bookmark Lookup.

The query performs a RID Lookup, which is the type of Bookmark Lookup that occurs on a HEAP table (a table with no clustered index) and uses a row identifier (RID) to find the rows to return. Since the table does not have a clustered index, it must use the RID that links the index entry to the heap row. This adds I/O because two different operations have to be performed instead of one, and their results are then combined with a Nested Loop operation.

Add a clustered index, or use a covering or included index.


Index Scan

The output of the INDEX SCANS is combined through a HASH MATCH.

Hash Match (Join)

Hashing is where data is converted into a symbolic (hashed) form. A hash table is a data structure that divides all of the elements into equal-sized groups (buckets) to allow quick access to the elements.

A Hash Match occurs when SQL Server joins two tables by hashing the rows of the smaller table, inserting them into a hash table, and then processing the larger table one row at a time against the hash table, looking for matches where rows need to be joined. Hashing minimizes the data kept from the smaller table, which makes the table comparisons quick.

Clustered Index Seek

You will usually see this after a Hash Match Join.

Nested Loop Join

Following a Clustered Index Seek, the data accumulated by the other operations is joined with the data collected from the Seek through a Nested Loop.

The Nested Loop Join operation takes the input from two sets of data and joins them by scanning the outer data set (the bottom operator in a graphical execution plan) once for each row in the inner set. As long as the inner data set is small and the outer data set, small or not, is indexed, this is an extremely efficient join mechanism. Unless you have very large data sets, this is the type of join that you most want to see in an execution plan.

Compute Scalar

Evaluates an expression to produce a computed scalar value, such as a concatenation or a calculated column, for each row.


Keep in mind that this is just a brief overview; however, it should be a great place to start. I will attempt to add to this checklist in the near future (see below).

Things to remember:

  • Appropriate Indexes
    • Index Fragmentation
  • Up-to-date Statistics
  • Database Design
  • Physical Hardware
    • Server Performance
  • Network
Posted in Performance | Leave a comment

HA vs. DR in SQL Server

High Availability vs. Disaster recovery

I will attempt to give you a brief summary on High Availability and Disaster Recovery and the techniques used to implement.

Data Availability

Data availability in SQL Server is organized into three groups.

  1. Data recovery
  2. High availability
  3. Disaster recovery

Data in your database must be available and must be protected based on your defined level of protection.

                    Risk                              Solution
Data Recovery       Potential data loss               Data redundancy
High Availability   Complex to manage                 Automatic failover
Disaster Recovery   Downtime of business operations   System redundancy

The main differences between HA and DR:

  • High availability ensures that the data stays available, with as little downtime as possible, when individual components fail.
  • Disaster recovery ensures that, after a disaster, the business gets back up and running with downtime as short as possible.

Once you have an accepted HA business plan, you must still have a DR plan for your data and resources. You should always plan for a disaster in which you will need to get your business up and running again.

HA and DR Techniques

Before deciding on a technique for HA or DR, you should consider which of the following standby modes fits your requirements:

Standby Mode

Cold standby
  • Manual process required
  • Backup and restore

Warm standby
  • Data copy ready to use
  • Potential data loss
  • Manual failover required
  • Transaction log shipping
  • Database mirroring

Hot standby
  • Automatic failover
  • Database mirroring
  • Failover clustering

You will find through SQL Server or third party tools (i.e. RedGate Software), there are a few options for implementing a HA/DR environment.

Below are some HA/DR methods:

  1. Clustering
  2. Database Mirroring.
  3. Replication
  4. Log Shipping
  5. Backup and restore
  6. Detach and Attach
Clustering

  1. Clustering is a very high availability option.
  2. Clusters share storage, and that storage should be fault tolerant.
    1. RAID 1 (mirroring), RAID 5 (striping with parity)
  3. Storage is controlled by one cluster node at a time.
  4. Failover is automatic but not instantaneous.
  5. Cluster types:
    1. Active/Passive – one active node
      1. Can have multiple passive nodes
    2. Active/Active – multiple active nodes
      1. More complex administration


Database Mirroring

Mirroring lets a database exist on two SQL Server instances at the same time, and the mirror copy can be maintained automatically.

There are two mandatory and one optional database roles for mirroring.

  1. Principal
    1. This is the active database.
  2. Mirror
    1. This is the copy of the database.
    2. It is not accessed by users.
  3. Witness (optional)
    1. Used to provide automatic failover.
    2. Holds no copy of the database.
    3. Monitors the principal instance.
  4. You can have separate failover clusters at each site.

There are three operating modes:

Operating Mode      Transaction Safety   Transfer Mechanism   Quorum Required   Witness Server   Failover Type
High Availability   Full                 Synchronous          Yes               Yes              Automatic/Manual
High Safety         Full                 Synchronous          Yes               No               Manual
High Performance    Off                  Asynchronous         No                Not needed       Manual

Synchronous

  • Data committed on the principal is sent to the mirror server and committed there as well; the principal waits for acknowledgement from the mirror before continuing.

Asynchronous

  • Data committed on the principal server is sent to the mirror server, but the principal does not wait for any acknowledgement from the mirror server before continuing with other transactions.

Transaction safety

  • FULL
    • Principal and mirror operate in synchronous mode.
  • OFF
    • The principal does not wait for verification from the mirror, and the two may not always be synchronized.
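
As a sketch, the transaction safety level is switched on the principal with ALTER DATABASE (the database name here is hypothetical):

```sql
-- FULL = synchronous (high safety); OFF = asynchronous (high performance).
ALTER DATABASE YourDB SET PARTNER SAFETY FULL;

ALTER DATABASE YourDB SET PARTNER SAFETY OFF;
```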


Replication

Replication lets a database exist in multiple physical locations or instances.

SQL Server replication roles:

  1. Publisher
    1. An instance that is configured to distribute data.
    2. Can get and merge data from a Subscriber.
  2. Subscriber
    1. An instance that receives data from a Publisher and, depending on the replication type, can also send data back to be merged.
  3. Distributor
    1. A server that responds to subscription requests.
    2. Can be on the same server as the Publisher.

There are three replication types:

  1. Transactional
    1. Starts with a snapshot of the published data applied at the Subscriber.
    2. Changes then flow in almost real time.
    3. The Subscriber is usually read-only.
    4. Nice for an up-to-date reporting database.
  2. Snapshot
    1. Uses point-in-time replication.
    2. At each snapshot, the Subscriber gets a full copy of the replicated data.
  3. Merge
    1. Lets data be modified at both the Publisher and the Subscriber.
    2. Like transactional replication, it can be configured for low latency with the correct hardware and configuration.

Log Shipping

  1. Log shipping is the process of a standby database restoring transaction logs from a primary database at scheduled times.
  2. Changing the primary database's recovery model to SIMPLE will disrupt log shipping.
  3. It is an inexpensive option for HA/DR.
  4. The standby database can be made available for read-only access.
  5. It is a simple process to maintain.
  6. Drawbacks:
    1. High latency, because logs are restored minutes behind the primary.
    2. No automatic failover.
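
Since log shipping depends on an unbroken log chain, here is a quick sketch for checking (and, if needed, fixing) the primary's recovery model (the database name is hypothetical):

```sql
-- SIMPLE recovery truncates the log and breaks the restore chain,
-- so the primary must stay in FULL (or BULK_LOGGED).
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'YourDB';

ALTER DATABASE YourDB SET RECOVERY FULL;
```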
Posted in Disaster Recovery | 3 Comments

CASE and DATEPART Commands

Every once in a while I get a request to change the date field on a report to reflect the actual day of the week.
Below you will see that I use a CASE expression to create the New_Date field.
CASE is a nice way to allow for conditional processing. It is mostly used to change a code into a more descriptive value. It can also evaluate multiple conditions before returning a value.
To change the date to a different format, I used the DATEPART function.
The DATEPART function returns an integer value for the part of the date named by the DATEPART code.
I used the dw (weekday) code to get the numeric day of the week for each date record
(e.g., 1 = Sunday, 2 = Monday, 3 = Tuesday…).
For more information on DATEPART, see the SQL Server Books Online documentation.
-- Create test table to store data
CREATE TABLE Date_test
(Name varchar(50),
dateStmp datetime)

-- Insert data into test table
INSERT INTO Date_test VALUES ('car','2011-05-19')
INSERT INTO Date_test VALUES ('Truck','2011-05-18')
INSERT INTO Date_test VALUES ('Van','2011-05-17')
INSERT INTO Date_test VALUES ('Tractor','2011-05-16')
INSERT INTO Date_test VALUES ('Bike','2011-05-15')
INSERT INTO Date_test VALUES ('Scooter','2011-05-14')
INSERT INTO Date_test VALUES ('Skateboard','2011-05-13')

-- View data in table
SELECT * FROM Date_test

-- View data using CASE/DATEPART
SELECT name, dateStmp,
CASE
WHEN datepart(dw, dateStmp) = 1 THEN 'Sunday'
WHEN datepart(dw, dateStmp) = 2 THEN 'Monday'
WHEN datepart(dw, dateStmp) = 3 THEN 'Tuesday'
WHEN datepart(dw, dateStmp) = 4 THEN 'Wednesday'
WHEN datepart(dw, dateStmp) = 5 THEN 'Thursday'
WHEN datepart(dw, dateStmp) = 6 THEN 'Friday'
WHEN datepart(dw, dateStmp) = 7 THEN 'Saturday'
END AS 'New_Date'
FROM Date_test
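
For this particular case, the CASE mapping can also be replaced with the DATENAME function, which returns the name of the requested date part directly (note that the weekday name and numbering depend on the server's language and DATEFIRST settings):

```sql
SELECT name, dateStmp,
       DATENAME(dw, dateStmp) AS New_Date
FROM Date_test
```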
Posted in T-SQL | Leave a comment

HIPAA Compliance

Is your database HIPAA compliant?


The Administrative Simplification provisions of the Health Insurance Portability and Accountability Act of 1996 (HIPAA, Title II) required the Department of Health and Human Services (HHS) to establish national standards for electronic health care transactions and national identifiers for providers, health plans, and employers. As the industry has implemented these standards and increased the use of electronic data interchange, the nation's health care system has become increasingly effective and efficient. HIPAA can be broadly divided into four categories:

  1. Standards for Electronic Transactions
  2. Unique Identifiers for providers, employers and health plans
  3. Security Rule
  4. Privacy Rule

Microsoft SQL Server 2008 provides security and auditing functionality that addresses HIPAA requirements.

Some of the features for storing health information:

  • Transparent Database Encryption (TDE)
    • TDE allows for “real-time” encryption and decryption of data using a database encryption key that is stored in the user database itself.
    • This key is encrypted using a certificate stored in the master database. The data cannot be restored or attached elsewhere without the certificate.
    • Encrypt anytime the data is sent outside your firewall.
  • Database Auditing (SQL Server Audit)
    • Any database that holds sensitive data should have the ability to track who is accessing the data and who has attempted to access data.
    • Ability to create Audit Groups for all users accessing database.
    • SQL Server Audit is a great bundle of tools, replacing SQL Trace, that allows you to define where the audit records will reside (a file, the Windows Application log, or the Windows Security log).
    • Server Audit permits fine-grained auditing whereby an audit can be targeted to specific actions by a principal against a particular object.
  • Transmission Control Protocol End Points (TCP)
    • Endpoints control the capability to connect to an instance of SQL Server as well as to dictate the communications methods that are acceptable. Acting very similar to firewalls on the network, endpoints are a layer of security at the border between applications and your SQL Server instance.
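
A minimal TDE setup sketch (the database name, certificate name, and password are hypothetical; back up the certificate, because the database cannot be restored on another server without it):

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd';   -- hypothetical password
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

USE PatientDB;   -- hypothetical database holding sensitive data
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;

ALTER DATABASE PatientDB SET ENCRYPTION ON;
```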

Other than the above mentioned options you should also consider the following:

  • Windows Authentication
  • Strong user password
  • Create specific SQL Roles to assign to users
  • Consider using Secure Socket Layers (SSL) on web applications
  • Create Policy Based Management Alerts which allows you to define and enforce policies for configuring and managing SQL Server across the enterprise.
  • Set up rules on your router to allow only certain IP’s to network/ SQL server.

This should be a good start when you are creating your HIPAA security policy for your database that contains sensitive data. When it comes to HIPAA compliance, documentation is the key. Start by doing an assessment of your database risks and document what you are going to do to protect the database.

Posted in Database Security | 1 Comment

Disaster Recovery Planning

Where does my business stand when it comes to a disaster?

Is this even a question that is brought up by senior management?

If the answers to the above questions do not give you a warm feeling inside, you need to have some sort of Disaster Recovery / Business Protection Plan in place that has been approved by senior management. A well-thought-out DRP could mean the difference between minimal system downtime and a business unable to continue.


The purpose of a Disaster Recovery Plan is to maintain the continuity of mission-critical applications during an unforeseen disaster to your I.T. infrastructure. Below are the basic steps of planning a DR process:

Planning Process

You should set up meetings with senior management to discuss and define the goals and standards of the DRP. These meetings should be scheduled regularly, and all should be in agreement on these goals.

You should do a risk analysis of your I.T. infrastructure. Make a spreadsheet of any and all scenarios that could hinder the uptime of your system. You should prioritize each risk and its likelihood of happening. Below is a list of potential disasters to consider:

  • Man-made disasters: Fire, Willful destruction, Broken pipes, Electrical
  • Natural disasters: Storms, Fires, Loss of computer systems

Develop Plan

After the feedback has been processed, the DR team should have an idea of what is needed for business continuity and what is defined for your business in regards to RTO (Recovery Time Objective) and RPO (Recovery Point Objective).

As defined by Wikipedia:

  • The Recovery Point Objective (RPO) is the point in time to which you must recover data as defined by your organization. This is generally a definition of what an organization determines is an “acceptable loss” in a disaster situation. If the RPO of a company is 2 hours and the time it takes to get the data back into production is 5 hours, the RPO is still 2 hours. Based on this RPO the data must be restored to within 2 hours of the disaster
  • The Recovery Time Objective (RTO) is the duration of time and a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a break in business continuity.

All recovery procedures should be well documented. This document should detail the following:

  • Chain of command
  • Recall roster
  • Vendor notification
  • Location to business recovery work area
  • Critical function task list
  • Team recovery steps
  • Recovery preparedness steps
    • Plan review schedule
    • Training and exercise
  • Status report

Test/ Modify Plan

Now that you have completed your DR plan you need to test it and test it regularly. Remember, as your business environment changes, your Business Continuity Plan should change as well.

Posted in Disaster Recovery | 1 Comment

Expiring Password Notification

This script, which can be run from a batch file or scheduled task, will send out an email to each user whose network password is about to expire. The script queries Active Directory.

'*******************Begin Code*****
On Error Resume Next

Const SECONDS_IN_DAY = 86400
Const ONE_HUNDRED_NANOSECOND = .000000100
Const E_ADS_PROPERTY_NOT_FOUND = &h8000500D

strDomainDN = "local.local" 'Domain name here - both NetBIOS and DNS style names should work
ReminderAge = -5  'Days after expiration that reminders keep being sent
ReminderAge2 = 14 'Days before expiration that reminders start being sent

'strbody - body of the message sent if the password will be expiring
strbody = "This message is a reminder that your network password will be expiring soon." & vbcrlf
strbody = strbody & vbcrlf & "Please change your network password before the date listed above to avoid being locked out of the network resources." & vbcrlf
strbody = strbody & vbcrlf & "If you need instructions on how to change your password, please contact the Help Desk at x6999." & vbcrlf
strbody = strbody & vbcrlf & "Thank you," & vbcrlf
strbody = strbody & "The IS Department"

'strbody1 - body of the message sent if the password has expired
strbody1 = "This message is a reminder that your network password has expired." & vbcrlf
strbody1 = strbody1 & vbcrlf & "Please change your network password as soon as possible. When your password is expired, you will not be able to access VNAB network resources." & vbcrlf
strbody1 = strbody1 & vbcrlf & "If you need instructions on how to change your password, please contact the Help Desk at x6999." & vbcrlf
strbody1 = strbody1 & vbcrlf & "Thank you," & vbcrlf
strbody1 = strbody1 & "The IS Department"

'Create logfile
Set objFSO = CreateObject("Scripting.FileSystemObject")
strScriptPath = objFSO.GetParentFolderName(WScript.ScriptFullName)
strLogName = "MN-" & TwoDigits(Year(Now)) & TwoDigits(Month(Now)) & TwoDigits(Day(Now)) & TwoDigits(Hour(Now)) & TwoDigits(Minute(Now)) & ".csv"
strLogFile = strScriptPath & "\Logs\" & strLogName
'wscript.echo strLogFile
Set objLogFile = objFSO.CreateTextFile(strLogFile, True)
objLogFile.WriteLine "Email Password Check Script started: " & Now

Set rootDSE = GetObject("LDAP://RootDSE")
Set oDomain = GetObject("LDAP://" & strDomainDN)
Set maxPwdAge = oDomain.Get("maxPwdAge")
DomainContainer = rootDSE.Get("defaultNamingContext")

Set conn = CreateObject("ADODB.Connection")
conn.Provider = "ADSDSOObject"
conn.Open "ADs Provider"

numDays = CCur((maxPwdAge.HighPart * 2 ^ 32) + maxPwdAge.LowPart) / CCur(-864000000000)

'LDAP query that only finds user accounts with mailboxes
'Blank default (whole domain):
'ldapStr = "<LDAP://" & DomainContainer & ">;(& (mailnickname=*) (|(&(objectCategory=person)(objectClass=user)(!(homeMDB=*))(!(msExchHomeServerName=*)))(&(objectCategory=person)(objectClass=user)(|(homeMDB=*)(msExchHomeServerName=*))) ));adspath;subtree"
'Define the Organizational Unit below - METRONORTH Users
ldapStr = "<LDAP://ou=MetroNorth Users,ou=METRONORTH,dc=vnab,dc=local>;(& (mailnickname=*) (|(&(objectCategory=person)(objectClass=user)(!(homeMDB=*))(!(msExchHomeServerName=*)))(&(objectCategory=person)(objectClass=user)(|(homeMDB=*)(msExchHomeServerName=*))) ));adspath;subtree"
Set rs = conn.Execute(ldapStr)

Do While Not rs.EOF
    Set oUser = GetObject(rs.Fields(0).Value)
    dtmValue = oUser.PasswordLastChanged
    If Err.Number = E_ADS_PROPERTY_NOT_FOUND Then
        whenPasswordExpires = "The password has never been set."
    Else
        whenPasswordExpires = DateAdd("d", numDays, oUser.PasswordLastChanged)
    End If
    daysb4expire = Int(whenPasswordExpires - Now)

    If oUser.useraccountcontrol = 512 And daysb4expire <= ReminderAge2 And daysb4expire >= ReminderAge Then
        'Write user info to logfile
        objLogFile.WriteLine oUser.lastName & "," & oUser.firstName & "," & oUser.emailaddress & "," & oUser.description & "," & whenPasswordExpires
    End If

    dblMaxPwdNano = Abs(maxPwdAge.HighPart * 2 ^ 32 + maxPwdAge.LowPart)
    dblMaxPwdSecs = dblMaxPwdNano * ONE_HUNDRED_NANOSECOND
    dblMaxPwdDays = Int(dblMaxPwdSecs / SECONDS_IN_DAY)

    'Send email to the user if the password is about to expire or has expired
    If daysb4expire <= ReminderAge2 And daysb4expire >= 1 And oUser.useraccountcontrol = 512 Then
        strNoteMessage = "Dear " & oUser.firstname & " " & oUser.lastname & vbcrlf & vbcrlf
        strNoteMessage = strNoteMessage & "Your Network password will expire on " & _
            DateValue(dtmValue + dblMaxPwdDays) & " (" & _
            Int((dtmValue + dblMaxPwdDays) - Now) & " days from today)." & vbcrlf & vbcrlf
        Set objEmail = CreateObject("CDO.Message")
        objEmail.From = "" 'Sender address here
        objEmail.To = oUser.emailaddress
        objEmail.Bcc = ""
        objEmail.Subject = "Network Password Expiration Notice" 'Message subject
        objEmail.TextBody = strNoteMessage & strbody
        objEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
        objEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "" 'Your mailserver here
        objEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
        objEmail.Configuration.Fields.Update
        'objEmail.Send 'Commented out right now - so you won't send out the email during testing.
    ElseIf daysb4expire <= 0 And daysb4expire >= ReminderAge And oUser.useraccountcontrol = 512 Then
        strNoteMessage = "Dear " & oUser.firstname & " " & oUser.lastname & ", " & vbcrlf & vbcrlf
        strNoteMessage = strNoteMessage & "Your Network password has expired on " & _
            DateValue(dtmValue + dblMaxPwdDays) & vbcrlf & vbcrlf
        Set objEmail = CreateObject("CDO.Message")
        objEmail.From = "" 'Sender address here
        objEmail.To = oUser.emailaddress
        objEmail.Bcc = ""
        objEmail.Subject = "Network Password Expiration Notice" 'Message subject
        objEmail.TextBody = strNoteMessage & strbody1
        objEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
        objEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "" 'Your mailserver here
        objEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
        objEmail.Configuration.Fields.Update
        'objEmail.Send 'Commented out right now - so you won't send out the email during testing.
    End If

    whenPasswordExpires = Empty
    Err.Clear
    rs.MoveNext
Loop

Set oUser = Nothing
Set maxPwdAge = Nothing
Set oDomain = Nothing
objLogFile.Close
Set objFSO = Nothing

'Send the csv logfile
Set objEmailFile = CreateObject("CDO.Message")
objEmailFile.From = "" 'Sender address here
objEmailFile.To = "" 'Administrator address here
objEmailFile.Subject = "MetroNorth - Network Password Expiration File" 'Message subject
objEmailFile.TextBody = "Attached is the weekly notification of password soon to be expiring users."
objEmailFile.AddAttachment strLogFile
objEmailFile.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
objEmailFile.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "" 'Your mailserver here
objEmailFile.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
objEmailFile.Configuration.Fields.Update
'objEmailFile.Send 'Commented out right now - so you won't send out the email.

Function TwoDigits(t)
    TwoDigits = Right("00" & t, 2)
End Function
'*********End Code******
Posted in Performance | Leave a comment

Analyzing Slow Queries


You have a developer who is complaining of slow performance in an application. Where should you start in diagnosing the problem?

Any database administrator will eventually have to troubleshoot a database for performance issues. I am not going to get into the vast material associated with query performance; however, I will give you a good place to start.

I have come across a great script that can be run as a stored procedure during testing. It will store information in a table that you can use to resolve the slowness issue.

if exists (select name from sysobjects where name = 'ssp_SlowQueryCapture')
begin
      drop procedure ssp_SlowQueryCapture
end
go

Create Procedure ssp_SlowQueryCapture as
/* SQL 2005 only */

if not exists (select id from sysobjects where name = 'SlowQueryCapture')
      create table SlowQueryCapture
      (program_name     nvarchar(256),
      session_id  smallint,
      status      nvarchar(60),
      query_text  nvarchar(max),
      DBNAME      nvarchar(256),
      objectid    int,
      cpu_time    int,
      MinutesRunning    numeric,
      reads bigint,
      writes      bigint,
      logical_reads     bigint,
      scheduler_id      int,
      wait_type   nvarchar(120),
      last_wait_type    nvarchar(120),
      query_plan  xml,
      snapshotTime datetime default getdate())

insert into SlowQueryCapture (program_name, session_id, status, query_text, DBNAME,
      objectid, cpu_time, MinutesRunning, reads, writes, logical_reads,
      scheduler_id, wait_type, last_wait_type, query_plan)
select s.program_name
      ,r.session_id
      ,r.status
      ,substring(qt.text, r.statement_start_offset/2,
            (case when r.statement_end_offset = -1
            then len(convert(nvarchar(max), qt.text)) * 2
            else r.statement_end_offset end - r.statement_start_offset)/2)
                  -- this is the statement executing right now
      ,db_name(qt.dbid)
      ,qt.objectid
      ,r.cpu_time
      ,((r.total_elapsed_time/1000.0)/60.0)   -- MinutesRunning
      ,r.reads
      ,r.writes
      ,r.logical_reads
      ,r.scheduler_id
      ,r.wait_type
      ,r.last_wait_type
      ,qp.query_plan
from sys.dm_exec_requests r (nolock)
join sys.dm_exec_sessions s (nolock) on r.session_id = s.session_id
cross apply sys.dm_exec_sql_text(r.sql_handle) as qt
cross apply sys.dm_exec_query_plan(r.plan_handle) as qp
where r.session_id > 50 /*User queries will have a session id greater than 50*/
      and ((r.total_elapsed_time/1000.0)/60.0) > 1.0 /*This will store queries running longer than 1 minute*/
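
Once the procedure exists, here is a sketch of how it might be used during testing (for example, from a SQL Agent job that runs every minute):

```sql
EXEC ssp_SlowQueryCapture;

-- Review what was captured, longest-running first:
SELECT snapshotTime, DBNAME, MinutesRunning, wait_type, query_text
FROM SlowQueryCapture
ORDER BY MinutesRunning DESC;
```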



Now, let's break it down…

The above script uses the following dynamic management views:

  • sys.dm_exec_requests
    • Returns information about each request that is executing within SQL Server.
  • sys.dm_exec_sessions
    • Returns one row per authenticated session on SQL Server. sys.dm_exec_sessions is a server-scope view that shows information about all active user connections and internal tasks. This information includes client version, client program name, client login time, login user, current session setting, and more. Use sys.dm_exec_sessions to first view the current system load and to identify a session of interest, and then learn more information about that session by using other dynamic management views or dynamic management functions
  • sys.dm_exec_sql_text
    • Returns the text of the SQL batch that is identified by the specified sql_handle.

What’s stored in the SlowQueryCapture table……

  • program_name 
    • sys.dm_exec_sessions
  • session_id
    • sys.dm_exec_requests
  • status   
    • sys.dm_exec_requests
  • query_text 
    • sys.dm_exec_requests / sys.dm_exec_sql_text
      • substring(qt.text, r.statement_start_offset/2, (case when r.statement_end_offset = -1 then len(convert(nvarchar(max), qt.text)) * 2 else r.statement_end_offset end - r.statement_start_offset)/2)
      • The SQL statement executing right now
  • DBNAME
    • sys.dm_exec_sql_text
    • db_name(qt.dbid)
    • Name of the database
  • objectid 
    • sys.dm_exec_sql_text
    • ID of Object
  • cpu_time 
    • sys.dm_exec_requests
    • CPU time in milliseconds that is used by the request
  • MinutesRunning 
    • sys.dm_exec_requests
    • ((r.total_elapsed_time/1000.0)/60.0)
    • Total time elapsed in milliseconds since the request arrived (converted to minutes)
  • reads
    • sys.dm_exec_requests
    • Number of reads performed by this request
  • writes 
    • sys.dm_exec_requests
    • Number of writes performed by this request
  • logical_reads 
    • sys.dm_exec_requests
    • Number of logical reads performed by this request
  • scheduler_id 
    • sys.dm_exec_requests
    • ID of the scheduler that is scheduling this request
  • wait_type  
    • sys.dm_exec_requests
    • If the request is currently blocked, this column returns the type of wait.
  • last_wait_type 
    • sys.dm_exec_requests
    • If this request has previously been blocked, this column returns the type of the last wait
  • query_plan
    • sys.dm_exec_query_plan
    • Contains the compile-time Showplan representation of the query execution plan that is specified with plan_handle. The Showplan is in XML format.
  • snapshotTime
    • Default datetime
What it all means…

Logical Reads

A logical read indicates the total number of data pages that had to be accessed from the data cache to process the query. A query may access the same data page many times, so the logical read count can be higher than the actual number of pages in the table. Usually the best way to reduce logical reads is to apply a correct index or to rewrite the query.


Physical Reads

A physical read indicates the total number of data pages read from disk, i.e. data that was not already in the cache. On the first run of a query, the physical read count can be close to the logical read count; on subsequent runs of the same query, the number decreases substantially because the data pages are already in the data cache.
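
To see these counts for yourself, STATISTICS IO prints them on the Messages tab (the table name and the sample output line below are hypothetical):

```sql
SET STATISTICS IO ON;

SELECT * FROM dbo.Orders WHERE OrderDate = '2011-05-19';
-- Sample message:
-- Table 'Orders'. Scan count 1, logical reads 120, physical reads 0, ...
```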

Wait Types

The wait_type and last_wait_type columns show what resource a request is currently waiting on (or last waited on), such as locks, latches, I/O, or memory grants. Seeing the same wait type repeatedly for slow queries points you at the bottleneck to investigate.

Query Plan

This is how SQL Server creates a set of steps to access or modify data. The query optimizer determines what it considers the best alternative for executing the query. What you will discover is that the optimizer will not always pick the best plan, which is why you may need to manually 'tune' the query. Consider joins, indexes, SQL hints, and updating statistics when analyzing the SQL code. (These are just a few ways to 'tune' your code.)

I hope you find this collection of data useful on your path to better understanding how to analyze queries and resolve performance issues.

Posted in Performance | Leave a comment