Report Builder 3.0 and SSRS 2008

Hello, and welcome back!

The first thing I would like to say is “Happy New Year!”  I thought I would start off the new year by talking about Report Builder 3.0 and SQL Server Reporting Services (SSRS).

Last week I was tasked with installing and configuring Report Builder 3.0 to work with SSRS 2008.  The first thing I had to do was look on the Microsoft website and find the download.  Once I learned it is a stand-alone application, I was able to download Report Builder 3.0.

Once I had downloaded, installed, and configured Report Builder 3.0 to work with my SSRS 2008 server, I received the following error message:

Connection failed.  Unable to connect to the server that is specified in the URL.  Make sure that the server is running, the URL is correct, and you have permission to use the server.

After looking high and low on the Internet and using Google to its maximum capabilities, I found out that Report Builder 3.0 is not compatible with SSRS 2008.  Report Builder 3.0 is compatible only with SSRS 2008 R2.  If you need to use Report Builder with SSRS 2008, then you must use Report Builder 2.0.

On a side note, I was not able to find information in BOL, white papers, etc. from Microsoft about the compatibility issues surrounding Report Builder 3.0.  Report Builder 3.0 does not work with SSRS 2008, and it does not work with SSRS 2005 either.  I hope this saves you a lot of time and frustration trying to get Report Builder 3.0 to work with SSRS 2008 and 2005.

T-SQL Enhancement in SQL Server 2008

Hello, and welcome back!

One of the T-SQL enhancements in SQL Server 2008 that I have come to enjoy is the ability to have one INSERT statement with multiple VALUES clauses.  For example, in older versions of SQL Server you would write the following INSERT statements:

insert into temp_table(id, first_name, last_name, dob)
values(1, 'Joe', 'Smith', '02/12/1970')
insert into temp_table(id, first_name, last_name, dob)
values(2, 'Alex', 'Jones', '08/22/1978')

New way in SQL Server 2008:
insert into temp_table(id, first_name, last_name, dob)
values(1, 'Joe', 'Smith', '02/12/1970'),
(2, 'Alex', 'Jones', '08/22/1978'),
(3, 'Mike', 'Anderson', '11/09/1968')

This definitely saves on the typing, copying, and pasting… :)  One thing to note: a single VALUES list is limited to 1,000 row constructors per INSERT.  Give it a try and let us know what you think!


Use DTExec To Replace DTSRun In SSIS

Hello there!

Today I was tasked with migrating an SQL Server Agent job from SQL Server 2000 to SQL Server 2008.

This job calls a VB script file, and within this file the dtsrun.exe command is executed to call a DTS package.  As I migrated the job to the new server, I upgraded the DTS package to SSIS and copied over the VB script file.  Within the VB script file I had to modify the command to go from using DTSRun to DTExec with the new parameters.  Here is what the dtsrun command looked like in SQL Server 2000:
'*DTS executable and Parameters
DTSAPPRUN = """C:\Program Files\Microsoft SQL Server\80\Tools\Binn\dtsrun.exe"" -S <server name> -E -N <DTS package name>"

Here is what I had to change the command to look like for an SSIS package:
'*SSIS executable and Parameters
DTSAPPRUN = """C:\Program Files\Microsoft SQL Server\100\DTS\Binn\DTExec.exe"" /sq <SSIS package name> /ser <server name>"

Fortunately for me, the DTS package was a simple data import package, so it converted to SSIS without issue.  After making the changes to the VB script file and saving it, I reran the SQL Server Agent job; the VB script executed and the SSIS package ran without issue.  I hope you find this helpful, and if you have another way of doing this please post here and let us know!
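On a related note, if you would rather skip the VB script wrapper entirely, the same DTExec call can be made from T-SQL through xp_cmdshell.  This is only a sketch: the package and server names are placeholders, and xp_cmdshell must be enabled on the instance.

```sql
-- Hypothetical example: calling DTExec directly via xp_cmdshell.
-- The path matches a default SQL Server 2008 install; adjust as needed.
EXEC master..xp_cmdshell
    '"C:\Program Files\Microsoft SQL Server\100\DTS\Binn\DTExec.exe" /sq MyPackage /ser MyServer';
```

The output rows from xp_cmdshell contain the same progress messages DTExec would print to the console, which can be handy for logging inside a job step.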

Until next time…


Quick Way To Determine Database File Size

Here is a quick snippet on how to determine the file size(s) of a specific database in SQL Server:

SELECT a.FILEID, CONVERT (decimal(12,2), ROUND (a.size / 128.000,2)) as [FILESIZEINMB],
CONVERT (decimal(12,2), ROUND (fileproperty (a.name, 'SpaceUsed') / 128.000,2)) as [SPACEUSEDINMB],
CONVERT (decimal(12,2), ROUND ((a.size - fileproperty (a.name, 'SpaceUsed')) / 128.000,2)) as [FREESPACEINMB],
db_name() as [DATABASENAME], a.FILENAME as [FILENAME]
FROM dbo.sysfiles as a
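If you want the same numbers for every database on the instance, one approach (a sketch, using the undocumented but widely used sp_MSforeachdb procedure) is:

```sql
-- Run the file-size query in each database; sp_MSforeachdb replaces [?]
-- with each database name in turn.
EXEC sp_MSforeachdb '
USE [?];
SELECT DB_NAME() AS [DATABASENAME], fileid AS [FILEID],
       CONVERT(decimal(12,2), ROUND(size / 128.000, 2)) AS [FILESIZEINMB]
FROM dbo.sysfiles;';
```

Since sp_MSforeachdb is undocumented, its behavior could change between versions; for one-off checks it works fine.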

Troubleshooting Transactional Replication

Hello, and welcome back! Today, dealing with replication has been interesting. Here is the scenario…

I support an environment that contains a very large transactional replication scheme.  I received an alert today that one of the order tables (articles) on a subscriber would not allow NULL values.  Nothing had been changed on the subscriber side, so I had to figure out where the issue was.

Fortunately, the Order ID was logged in the error message, so I was able to track the offending record down on the distribution server; all of the other fields for that record had NULL values.  I wanted to remove the offending rows from replication, so I ran the following on the distribution database: exec sp_browsereplcmds.  This returns a result set containing a readable version of the replicated commands stored in the distribution database, and it is used as a diagnostic tool.  From here you can grab the xact_seqno field from the result set, and then you are able to remove the offending rows from the Distribution.dbo.msrepl_transactions and Distribution.dbo.msrepl_commands tables.  Here are the steps:
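A minimal sketch of that diagnostic step follows.  The sequence number shown is a made-up placeholder; sp_browsereplcmds accepts optional filters such as @xact_seqno_start and @xact_seqno_end to narrow the output on a busy distributor.

```sql
USE distribution;

-- Return pending replicated commands in readable form (diagnostic only).
EXEC sp_browsereplcmds;

-- Or narrow the output to a suspect transaction sequence number.
EXEC sp_browsereplcmds
    @xact_seqno_start = '0x0000001A000000B3000B',  -- placeholder value
    @xact_seqno_end   = '0x0000001A000000B3000B';
```

Running the unfiltered form on a large distribution database can return a huge result set, so filtering is usually worth it.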

Step 1 – Stop the SQL Server Agent on the distribution server; this will stop all replication transactions from going into the distribution database.

Step 2 – Run the following T-SQL commands in the distribution database

use distribution
delete from msrepl_transactions where xact_seqno = <xact_seqno here>
delete from msrepl_commands where xact_seqno = <xact_seqno here>

Step 3 – Start the SQL Server Agent on the distribution server; this will allow replication transactions to flow into the distribution database again.

Step 4 – If need be, restart the synchronization of the distribution agent on the distribution server.  Replication should now continue.

Let me know if this helps or if you need clarification!

What are the different lock modes in SQL Server?

Hello and welcome back to my blog!  Today one of the junior DBAs came to me and asked what Mode (in sp_lock) or request_mode (in sys.dm_tran_locks) means when viewing the locks in SQL Server, and what the possible values are.  It occurred to me that perhaps not all DBAs (juniors as well as seniors) understand the modes for locks.  So I thought today I would cover the different lock modes.

When executing sp_lock or querying sys.dm_tran_locks in SQL Server, you will see mode or request_mode in the result set that is generated.  Request_mode is simply the lock mode requested when a session is executing on SQL Server.  The values and definitions for mode/request_mode are:

NULL = No access is granted to the resource. Serves as a placeholder.
Sch-S = Schema stability. Ensures that a schema element, such as a table or index, is not dropped while any session holds a schema stability lock on the schema element.
Sch-M = Schema modification. Must be held by any session that wants to change the schema of the specified resource. Ensures that no other sessions are referencing the indicated object.
S = Shared. The holding session is granted shared access to the resource.
U = Update. Indicates an update lock acquired on resources that may eventually be updated. It is used to prevent a common form of deadlock that occurs when multiple sessions lock resources for potential update at a later time.
X = Exclusive. The holding session is granted exclusive access to the resource.
IS = Intent Shared. Indicates the intention to place S locks on some subordinate resource in the lock hierarchy.
IU = Intent Update. Indicates the intention to place U locks on some subordinate resource in the lock hierarchy.
IX = Intent Exclusive. Indicates the intention to place X locks on some subordinate resource in the lock hierarchy.
SIU = Shared Intent Update. Indicates shared access to a resource with the intent of acquiring update locks on subordinate resources in the lock hierarchy.
SIX = Shared Intent Exclusive. Indicates shared access to a resource with the intent of acquiring exclusive locks on subordinate resources in the lock hierarchy.
UIX = Update Intent Exclusive. Indicates an update lock held on a resource with the intent of acquiring exclusive locks on subordinate resources in the lock hierarchy.
BU = Bulk Update. Used by bulk operations.
RangeS_S = Shared Key-Range and Shared Resource lock. Indicates serializable range scan.
RangeS_U = Shared Key-Range and Update Resource lock. Indicates serializable update scan.
RangeI_N = Insert Key-Range and Null Resource lock. Used to test ranges before inserting a new key into an index.
RangeI_S = Key-Range Conversion lock. Created by an overlap of RangeI_N and S locks.
RangeI_U = Key-Range Conversion lock created by an overlap of RangeI_N and U locks.
RangeI_X = Key-Range Conversion lock created by an overlap of RangeI_N and X locks.
RangeX_S = Key-Range Conversion lock. Created by an overlap of RangeI_N and RangeS_S locks.
RangeX_U = Key-Range Conversion lock created by an overlap of RangeI_N and RangeS_U locks.
RangeX_X = Exclusive Key-Range and Exclusive Resource lock. This is a conversion lock used when updating a key in a range.
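To see these modes on a live system (SQL Server 2005 and later), a quick query against the DMV looks like this:

```sql
-- Current lock requests and their modes, one row per lock request.
SELECT request_session_id,
       resource_type,
       resource_database_id,
       request_mode,
       request_status
FROM sys.dm_tran_locks
ORDER BY request_session_id;
```

The request_mode column contains exactly the values listed above, and request_status tells you whether the lock was granted or is still waiting.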

Understanding the different lock modes in SQL Server will help you quickly identify what mode a specific session is holding or requesting.  In my next blog post I will talk about locks in further detail and also give more insight into the term “lock escalation”.

Importing A List Of SQL Servers Into SSMS

The other day I was looking for a way to import a long list of SQL Server 2005 servers into SSMS on my workstation.  As I was looking around, I figured Microsoft had to have included a way to import a list of servers into SSMS.  My problem was that in my SSMS I did not have “Registered Servers” selected from the “View” menu… I was only showing the “Object Explorer”.  After selecting “Registered Servers” from the “View” menu, I was able to view Registered Servers in SSMS.

To import servers into SSMS you do the following:

1.  Under Registered Servers, right-click on “Database Engine” and select “Import…”
2.  When the “Import Registered Servers” dialog appears, select the ellipsis button
3.  Navigate to the location where you have the .regsrvr file
4.  If you have multiple server groups, select the group that you want to import the servers into and click “OK”
5.  The “Import Registered Servers” dialog appears, letting you know that “The import was successful”

You will now see your servers from the list under Registered Servers in SSMS.  One important thing to note here… the import file must be a file that was created by exporting a list of SQL Servers from SSMS.  If you manually create a list of servers and save it as a text file, the import will fail.

I hope you find this helpful.  If you know of another way to successfully import a list of servers into SSMS let us know!

How To Kill Remote Desktop Connections

Yesterday I was attempting to remote desktop into one of the production SQL Servers and all of the sessions were taken.  I thought I would open up the Terminal Services Manager on another SQL Server within the same domain to either disconnect or log off a user session on the original server I was attempting to access.  As I opened up the Terminal Services Manager, I noticed that I was not able to see the original server within the Terminal Services Manager.  So the other way to do it would be through the command prompt.  Follow the steps below to view and kill remote desktop sessions on your server.

  1. Log into a server on the domain and open a Windows command prompt
  2. To view current connections on a server, type: qwinsta /server:<server name or ip address>
  3. When the results are displayed, pick an ID value that has a state of “Disc” (Disconnected)
  4. To kill that session, type: rwinsta <ID value> /server:<server name or ip address>

The remote desktop session has now been killed and you are able to Remote Desktop into your original SQL Server.  If you are asking yourself what qwinsta and rwinsta mean… well, here they are:

qwinsta = Query WINdows STAtion
rwinsta = Reset WINdows STAtion

I hope you have found this useful… I will be adding more administrative tips and tricks on my blog so keep checking back!

– Squirrel

Optimizing Transaction Logs

Welcome back and Happy Saturday!

Today I wanted to share with you that some DBAs seem to overlook the transaction log when thinking about optimizing database performance.  I myself was guilty of not considering optimizing the transaction logs for my production databases.  I am going to blog about Virtual Log Files (VLFs) and why it is important to consider them when talking about overall database health and performance.  Here is a little background first… I first learned about VLFs when I attended the last DevConnections conference in Las Vegas for SQL Server 2008.  At this conference I learned about VLFs from one of the prominent SQL Server experts, Kimberly Tripp.  Here is a more detailed breakdown of Virtual Log Files:

Virtual Log Files
The transaction log is divided into Virtual Log Files
To get more information about log file utilization you can run DBCC LOGINFO
When you add space or the log autogrows:
< 64 MB creates 4 new VLFs
>= 64 MB and < 1 GB creates 8 new VLFs
>= 1 GB creates 16 new VLFs
A VLF fills up before moving on to the next
A log backup does not clear a VLF until you have moved into the next one
(so you don’t want a very small number of huge VLFs)
Too many and you get poor write performance
If you have more than 50 VLFs it is time to trim
Ideally aim for around 20 VLFs
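Checking your VLF count is a one-liner; DBCC LOGINFO returns one row per VLF in the current database, so the row count is your VLF count:

```sql
USE <database name>;

-- One row per virtual log file; count the rows to get the VLF count.
DBCC LOGINFO;
```

If the row count is well above 50, shrinking the log and regrowing it in a few appropriately sized chunks will bring the VLF count back down.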

For the performance impact, please refer to Linchi Shea’s post on SQLBlog.

For more information, please refer to Kimberly Tripp’s posts on her website.

I hope I have given all of you something new to ponder as you are getting ready to optimize performance on your production databases.

Fixing Orphaned Users In SQL Server 2005

The other day I restored a QA database in compatibility level 90 (SQL Server 2005) to a production server that had SQL Server 2005 installed.  The restore went as expected; however, the logins that were already on the SQL Server had different SIDs than the users in the database that was restored.  We maintain three different environments for production databases (dev, QA, and production), and in each environment the logins have different SIDs and passwords.  So after promoting a database from one environment to another, we need to re-sync the SIDs.

Back in the days of SQL Server 2000 and earlier, you needed to write a T-SQL script that allowed access to the system tables, grab the SIDs from syslogins (sysxlogins), and update the SIDs in the user database.  Now let’s fast forward to SQL Server 2005… Microsoft recognized the need to sync database users to logins and vice versa across different servers, and the stored procedure sp_change_users_login handles this.  To find out more about this stored procedure, you can look it up in Books Online.

Here is a T-SQL script that you can run in a query window to sync database users to SQL Server logins.  One thing to note here… make sure you run the script in the context of the database that you want to sync the users for.  Here is the script:



USE <database name>

DECLARE @sql varchar(500);

DECLARE fix_users CURSOR FOR
SELECT 'EXEC sp_change_users_login ''UPDATE_ONE'', ''' + name + ''', ''' + name + ''''
FROM sysusers
WHERE issqluser = 1
AND suser_sname(sid) IS NULL
AND name NOT IN ('dbo', 'guest')

OPEN fix_users
FETCH NEXT FROM fix_users INTO @sql

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @sql
    EXEC (@sql)
    FETCH NEXT FROM fix_users INTO @sql
END

CLOSE fix_users
DEALLOCATE fix_users


I hope this helps! Let us know if you have another way to sync database users to SQL Server logins.

Querying DTS Tables For Log Information

Today I was looking at the SQL Server Agent jobs on one of my SQL Server 2000 production database servers.  I noticed one job that failed with a very cryptic error.  This was the error message reported:

DTSRun OnStart:  DTSStep_DTSDataPumpTask_6   DTSRun OnProgress:
DTSStep_DTSDataPumpTask_6; 1000 Rows have been transformed or copied.;
PercentComplete = 0; ProgressCount = 1000   DTSRun OnProgress:
DTSStep_DTSDataPumpTask_6; 2000 Rows have been transformed or copied.;
PercentComplete = 0; ProgressCount = 2000   DTSRun OnProgress:
DTSStep_DTSDataPumpTask_6; 3000 Rows have been transformed or copied.;
PercentComplete = 0; ProgressCount = 3000   DTSRun OnProgress: 
DTSStep_DTSDataPumpTask_6; 4000 Rows have been transformed or copied.;
PercentComplete = 0; ProgressCount = 4000   DTSRun OnProgress: 
DTSStep_DTSDataPumpTask_6; 5000 Rows have been transformed or copied.;
PercentComplete = 0; ProgressCount = 5000   DTSRun OnProgress: 
DTSStep_DTSDataP…  Process Exit Code 1.  The step failed.

Now, looking at this error message does not really provide all the detail as to why and what failed.  So I did some searching through BOL, MSDB, and Google, and I finally found what I was looking for.  If you have logging enabled on the DTS package (I do for all of mine) and have the option set to write to SQL Server instead of a log file, then you can query for the exact error message produced.  Here is what you need to do:

In SQL Server 2000, open up a Query Analyzer window and select ‘MSDB’ from the drop down, or type in ‘USE MSDB’ and execute that, and you will now be placed into the context of MSDB.  Next, look at the job that failed and note the name of the DTS package being called.  Take the name and then type and execute the following T-SQL statement:

    select *
    from sysdtspackagelog
    where name = 'SendCustomerData'  -- this is the name of your DTS package
    order by starttime desc

In the result set, look at the first record and find the column that reads ‘lineagefull’; copy that value from the first record.  The value will look something like this:  E1E90023-93EB-4355-B995-43A762AD43C1

After grabbing that value, execute the following T-SQL statements and use that value in the WHERE clause.

     select *
     from sysdtssteplog
     where lineagefull = 'E1E90023-93EB-4355-B995-43A762AD43C1'

The table sysdtssteplog displays all of the tasks in the DTS package, so one task equals one record in the table.  Scroll to the right in the result set pane and look for the column that contains the step’s error description (the ‘errordescription’ column).

Here you will be able to view the actual error that was generated by one or more tasks in your DTS package!
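The two lookups above can also be combined into a single query, so you do not have to copy the lineagefull value by hand.  This is a sketch; ‘SendCustomerData’ stands in for your own DTS package name:

```sql
-- Join the package log to the step log on lineagefull to get the
-- step-level errors for the most recent executions of the package.
select s.stepname, s.starttime, s.errorcode, s.errordescription
from msdb.dbo.sysdtspackagelog as p
join msdb.dbo.sysdtssteplog as s
  on s.lineagefull = p.lineagefull
where p.name = 'SendCustomerData'   -- your DTS package name
order by p.starttime desc, s.starttime
```

Ordering by the package start time descending puts the most recent failed run at the top of the result set.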

I hope you found this helpful!

Collecting Performance Counters For A Baseline In SQL Server

Hello again and welcome back!

The other day I was posting in the SQL Server newsgroups and I came across a person asking about which performance counters should be collected to obtain a performance load baseline in their production SQL Server environment.  It occurred to me that there is a lot of information out there about performance counters and what should be collected to determine a true load baseline for a production environment.  Now granted, everyone’s production SQL Server environment is different; however, the following performance counters can be used to give you a wide or general overview of the activity going on in your environment.

In the past I have worked with a company called Scalability Experts on a very large SQL Server consolidation project.  Scalability Experts ran the same counters to get a general overview/load baseline on all production SQL Servers.  These counters can be run on SQL Server 2000, 2005, and 2008 in either a stand-alone or cluster configuration.  Here are the performance counters to run within Performance Monitor:

Memory – Page Reads/sec
Memory – Page Writes/sec
Memory – Pages Input/sec
Memory – Pages Output/sec
Memory – Pages/sec
Network Interface(*) – Bytes Total/sec
Network Interface(*) – Output Queue Length
Network Interface(*) – Packets/sec
PhysicalDisk(*) – % Disk Time
PhysicalDisk(*) – Avg. Disk Bytes/Read
PhysicalDisk(*) – Avg. Disk Bytes/Write
PhysicalDisk(*) – Avg. Disk Queue Length
PhysicalDisk(*) – Avg. Disk Read Queue Length
PhysicalDisk(*) – Avg. Disk sec/Read
PhysicalDisk(*) – Avg. Disk sec/Write
PhysicalDisk(*) – Avg. Disk Write Queue Length
PhysicalDisk(*) – Disk Read Bytes/sec
PhysicalDisk(*) – Disk Reads/sec
PhysicalDisk(*) – Disk Transfers/sec
PhysicalDisk(*) – Disk Write Bytes/sec
PhysicalDisk(*) – Disk Writes/sec
Process(sqlservr) – % Privileged Time
Process(sqlservr) – % Processor Time
Process(sqlservr) – % User Time
Process(sqlservr) – IO Data Operations/sec
Process(sqlservr) – Page Faults/sec
Process(sqlservr) – Private Bytes
Process(sqlservr) – Thread Count
Process(sqlservr) – Virtual Bytes
Process(sqlservr) – Working Set
Processor(*) – % Interrupt Time
Processor(*) – % Privileged Time
Processor(*) – % Processor Time
Processor(*) – % User Time
SQLServer:Buffer Manager – *
SQLServer:Cache Manager(*) – *
SQLServer:Databases(*) – Data File(s) Size (KB)
SQLServer:Databases(*) – Log File(s) Size (KB)
SQLServer:Databases(*) – Transactions/sec
SQLServer:General Statistics – *
SQLServer:Locks(*) – *
SQLServer:Memory Manager – *
SQLServer:SQL Statistics – Batch Requests/sec
SQLServer:SQL Statistics – SQL Compilations/sec
SQLServer:SQL Statistics – SQL Re-Compilations/sec
System – Context Switches/sec
System – Processor Queue Length
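As a side note, on SQL Server 2005 and later the SQLServer:* counters above are also exposed inside the engine through a DMV, which is handy when you cannot get to Performance Monitor on the box:

```sql
-- The SQL Server object counters, queried from inside the engine.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Batch Requests/sec',
                       'SQL Compilations/sec',
                       'SQL Re-Compilations/sec');
```

Keep in mind the per-second counters in this DMV are cumulative raw values; you need to sample twice and take the delta to get an actual rate.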

My next post will demonstrate how to add these counters into Performance Monitor and set the specifics (duration, frequency, etc.) of the log.  If you have any other performance counters that you use, please post them here!

Black Box Trace In SQL Server

The other day I was looking at a production SQL Server and I noticed a trace file that was running, but I was not able to associate it with a SQL Profiler session. I queried sysprocesses and was unable to find a spid that was running SQL Profiler. I was perplexed for a little while, started searching the Internet high and low, and came across “black box” traces in SQL Server.

So what I did was… I opened up the trace file in SQL Profiler and was able to view the events and columns for the trace. Then I compared the events and columns I found in the trace file to what I found on the Internet. The comparison turned out to be an exact match for events as well as columns. So I started asking around, and nobody seemed to know how it got on the server or whether it was being used. Of course, when it comes to something like this, nobody wants to fess up.

Here is the idea of a “black box” trace. A black box trace records the last 5 MB of activity on the SQL Server when it becomes problematic. The trace captures the queries and/or errors right before the server becomes unresponsive or crashes. This information is useful for sending to Microsoft to help diagnose the problem. The black box trace records the following:
Stored Procedure Execution (RPC:Starting)
T-SQL Batch Execution (SQL:BatchStarting)
Errors and Warnings (Attention and Exception)
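For reference, a black box trace can also be started by hand.  This is a sketch based on what I found: option value 8 (TRACE_PRODUCE_BLACKBOX) tells the server to manage the 5 MB rollover file itself, and no events or columns can be added to such a trace.

```sql
-- Create and start a black box trace; the server manages the file itself.
DECLARE @traceid int;
EXEC sp_trace_create @traceid OUTPUT, @options = 8;
EXEC sp_trace_setstatus @traceid, 1;  -- 1 = start the trace
SELECT @traceid AS blackbox_trace_id;
```

This may explain how the trace got on my server in the first place: someone (or some monitoring script) started it by hand and never mentioned it.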

You can go to Google and search for “sql server black box trace” and you will get a lot more information. From what I have read, and from what I can tell on my own server, the black box trace does not cause any performance issues. If you have experience using the black box trace feature, please post and let us know!