Execute the following in a query window to list each fixed drive and the free space (in MB) it has available:
Use Master
GO
EXEC master..XP_FixedDrives
GO
Example:
Send Disk Space Alerts using SQL Server 2005
Step 1: Create the Database Mail profile and account using SQL Server Management Studio. Name the profile "FreeSpaceAlertMails".
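If you prefer to script Step 1 instead of using the SSMS wizard, Database Mail can be configured with the stored procedures in msdb. Here is a minimal sketch; the account name, sender address, and SMTP server are placeholders, not values from the original post:
USE msdb
GO
-- (If Database Mail is not already enabled: sp_configure 'show advanced options', 1; RECONFIGURE;
--  then sp_configure 'Database Mail XPs', 1; RECONFIGURE)
EXEC dbo.sysmail_add_account_sp
@account_name = 'FreeSpaceAlertAccount', -- placeholder account name
@email_address = 'alerts@yourdomain.com', -- placeholder sender address
@mailserver_name = 'smtp.yourdomain.com' -- placeholder SMTP server
EXEC dbo.sysmail_add_profile_sp
@profile_name = 'FreeSpaceAlertMails'
EXEC dbo.sysmail_add_profileaccount_sp
@profile_name = 'FreeSpaceAlertMails',
@account_name = 'FreeSpaceAlertAccount',
@sequence_number = 1
GO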
Step 2: Create the procedure below in the master database; it checks the free disk space and sends a mail when space is low.
CREATE PROCEDURE sendAlertMails
AS
BEGIN
SET NOCOUNT ON
DECLARE @availableSpace AS FLOAT
DECLARE @alertMessage AS Varchar(4000)
CREATE TABLE #tbldiskSpace
(
driveName VARCHAR(3),
freeSpace FLOAT
)
INSERT INTO #tbldiskSpace EXEC master..XP_FixedDrives
SELECT @availableSpace = ROUND((freeSpace)/1024,1) FROM #tbldiskSpace WHERE driveName = 'E'
SET @alertMessage = '(host:yourpool.sqldbpool.com) E:\ Disk Space Critical. Free Space Available on E:\ Drive is ' + CAST(@availableSpace AS VARCHAR) + ' GB'
IF @availableSpace < 10
BEGIN
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'FreeSpaceAlertMails',
@recipients = 'yourmail@gmail.com',
@body = @alertMessage,
@importance = 'High',
@subject = 'domain.com Disk Critical E Drive' ;
END
END
DROP TABLE #tbldiskSpace
END
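Step 3: Run the procedure to test it, and then schedule it (for example, as a SQL Server Agent job step) so the disk check runs regularly. The scheduling suggestion is mine, not part of the original post:
EXEC master.dbo.sendAlertMails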
Thursday, October 28, 2010
Troubleshooting Alerts
Troubleshooting SQL Server Alerts
If you have problems with Microsoft SQL Server alerts, review this
troubleshooting checklist to find potential solutions.
1. Check that you are using the latest SQL Server service pack.
Because many alert bugs were fixed in SQL Server service packs, you
should install the latest SQL Server service pack. Check "How can I
check what SQL service pack am I running?" to find out which SQL
Server service pack you are running.
2. Check that the account the SQLServerAgent service runs under is a
member of the Domain Users group.
The LocalSystem account does not have network access rights.
Therefore, if you want to forward events to the application logs of
other Windows NT or Windows 2000 computers, or your jobs require
resources across the network, or you want to notify operators through
e-mail or pagers, you must set the account the SQLServerAgent service
runs under to be a member of the Domain Users group.
3. If none of the alerts are firing, check that the SQLServerAgent
and EventLog services are running.
Both services must be running for alerts to fire, so start them if
they are stopped.
4. If an alert is not firing, make sure that it is enabled.
The alert can be enabled or disabled. To check if an alert is enabled,
you can do the following:
1. Run SQL Server Enterprise Manager.
2. Expand a server group; then expand a server.
3. Expand Management; then expand SQL Server Agent.
4. Double-click the appropriate alert to see if the alert is
enabled.
5. Check the history values of the alert to determine the last date
that the alert worked.
To view the history values of the alert, you can do the following:
1. Run SQL Server Enterprise Manager.
2. Expand a server group; then expand a server.
3. Expand Management; then expand SQL Server Agent.
4. Double-click the appropriate alert to see the alert history.
6. Verify that the counter value is maintained for at least 20
seconds.
Because SQL Server Agent polls the performance counters at 20-second
intervals, a counter value that is sustained for less than 20 seconds
is likely to be missed, and the alert will not fire.
7. Check the SQL Server error log, SQL Server Agent error log, and
Windows NT or Windows 2000 application log to get a more detailed
error description.
Comparing the dates and times for alert failure events between the SQL
Server error log, the SQL Server Agent error log, and the Windows NT
or Windows 2000 application log can help you determine the reason for
the failure.
8. If the alert fires, but the responsible operator does not receive
notification, try to send an 'e-mail', 'pager', or 'net send' message
to this operator manually.
In most cases, this problem arises when an incorrect 'e-mail',
'pager', or 'net send' address has been entered. If you can send an
'e-mail', 'pager', or 'net send' message manually to this operator,
check the account the SQL Server Agent runs under, as well as the
operator's on-duty schedule.
9. If the alert fires, but the notification is not timely, decrease
the 'Delay between responses' setting for the alert and try to send
notifications to as few operators as possible.
To decrease the 'Delay between responses' setting for the alert, you
can do the following:
1. Run SQL Server Enterprise Manager.
2. Expand a server group; then expand a server.
3. Expand Management; then expand SQL Server Agent.
4. Double-click the appropriate alert and choose the 'Response'
tab.
5. Specify a new 'Delay between responses' setting.
10. Alert cannot send e-mail notification with xp_logevent or
RAISERROR.
This is an SQL Server 7.0 and SQL Server 2000 bug. This happens if the
alert is defined to be restricted to a specific database other than
the master database. To work around this, you can define the alert on
the master database, or all databases. To define the alert on all
databases, you can do the following:
1. Run SQL Server Enterprise Manager.
2. Expand a server group; then expand a server.
3. Expand Management; then expand SQL Server Agent.
4. Double-click the appropriate alert and choose '(All Databases)'
in the Database name combobox.
11. Alerts are triggered incorrectly when Security Auditing is
enabled.
This is an SQL Server 7.0 and SQL Server 2000 bug. This bug was fixed
in SQL Server 2000 service pack 1 and in SQL Server 7.0 service pack
4. To work around this problem, you can disable Security Auditing, or
you can install the service packs.
12. After installing SQL Server 7.0 service pack 3, SQL Server Agent
alerts may fail to work.
This is an SQL Server 7.0 service pack 3 bug. To work around this, you
should install SQL Server 7.0 service pack 4.
13. Responses for performance condition alerts are sent every 20
seconds, regardless of the 'Delay between responses' setting for the
alert.
This is an SQL Server 7.0 bug. To work around this, you should install
SQL Server 7.0 service pack 1 or higher.
Wednesday, October 27, 2010
DIFFERENCE BETWEEN SQLSERVER 2005 AND 2008
DIFFERENCE BETWEEN 2005 AND 2008:
Feature Name | SQL Server 2005 | SQL Server 2008
Backup Compression | Not available | Available
Page-level compression | Not available | Available
Data Compression | Not available | Available
Clustering | 8 nodes supported in Enterprise Edition | 16 nodes supported in Enterprise Edition
Resource Governor | Not available | Available
FILESTREAM | Not available | Available
Facets | Not available | Available
Encryption | Not available | Available (in mirroring, all transactions are encrypted while moving data)
Show the header from a database backup without restoring it
List header data from a backup without restoring it:
Here is the code to list the header information in a backup. It only shows the header; nothing is restored.
Syntax:
RESTORE HEADERONLY FROM DISK='path of your backup file'
EXAMPLE:
RESTORE HEADERONLY FROM DISK='D:\mydatabase_backup.bak'
Here 'mydatabase_backup.bak' is my database backup name, so replace it with your own backup file name.
How to list all files from a backup without restoring it
List all files from a backup without restoring it:
Here is the code to list all the files that are in a backup. It only shows the file list; nothing is restored.
Syntax:
RESTORE FILELISTONLY FROM DISK='path of your backup file'
EXAMPLE:
RESTORE FILELISTONLY FROM DISK='D:\mydatabase_backup.bak'
Here 'mydatabase_backup.bak' is my database backup name, so replace it with your own backup file name.
Tuesday, October 26, 2010
SQL Server 2008 FILESTREAM
Implementing SQL Server 2008 FILESTREAM functionality
The new FILESTREAM functionality in SQL Server 2008 lets you configure a varbinary(max) column so that the actual data is stored on the file system, rather than within the database. You can still query the column as you would a regular varbinary(max) column even though the data itself is stored externally.
The FILESTREAM feature integrates Windows NTFS with the SQL Server database engine by storing binary large object (BLOB) data as files on the local file system. You can use Transact-SQL statements to query, insert or update the data, or you can use Win32 file system interfaces for direct streaming access to the data.
Microsoft recommends that you use FILESTREAM only if (1) your BLOB data files are, on average, larger than 1 MB, (2) you need fast read access to that data, and (3) your application uses a middle tier for application logic. Otherwise, you should use a regular varbinary(max) column.
If you do decide to implement a FILESTREAM column, you'll need to take the following steps (a sketch of each step follows the list):
Enable FILESTREAM support on the SQL Server 2008 service.
Configure the database to support FILESTREAM storage.
Define a column that supports FILESTREAM storage.
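Here is a minimal T-SQL sketch of those three steps. The file paths, database name, and table name are placeholders, not values from the original article:
-- 1. Enable FILESTREAM access at the instance level (the service-level
-- setting must also be enabled in SQL Server Configuration Manager).
EXEC sp_configure 'filestream access level', 2
RECONFIGURE
GO
-- 2. Create a database with a FILESTREAM filegroup.
CREATE DATABASE FileStreamDemo
ON PRIMARY
(NAME = FileStreamDemo_data, FILENAME = 'C:\MSSQL\Data\FileStreamDemo_data.mdf'),
FILEGROUP FileStreamGroup CONTAINS FILESTREAM
(NAME = FileStreamDemo_fs, FILENAME = 'C:\MSSQL\Data\FileStreamData')
LOG ON
(NAME = FileStreamDemo_log, FILENAME = 'C:\MSSQL\Data\FileStreamDemo_log.ldf')
GO
-- 3. Define a varbinary(max) column with the FILESTREAM attribute; the table
-- must also have a uniqueidentifier ROWGUIDCOL column with a unique constraint.
USE FileStreamDemo
GO
CREATE TABLE dbo.Documents
(
DocId UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
DocData VARBINARY(MAX) FILESTREAM NULL
)
GO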
Resource Governor in SQL Server 2008
Resource Governor is a new feature in SQL Server 2008.
Resource Governor is one of the best additions in SQL Server 2008. It gives you, as a DBA, real control over your environment and can be very useful for better utilization of your database server resources. So far so good, but don't just start using it without knowing how it actually restricts resource utilization. Here are some points to remember for getting the most out of this feature:
Normally, most of us assume that if we restrict user A's memory usage to 10%, that user will never be able to utilize more than 10% of the server's memory in any case. Right? Wrong. Resource Governor only enforces the 10% cap when it detects that the remaining memory is not available; if memory is available and there is no pending workload, it will allow the user to exceed the allowed quota of 10%. This optimizes memory utilization and avoids wasting resources, but it can also have adverse effects: if user A fires a query before other users, the server will start giving it all the available memory, and the users who come afterwards will suffer the consequences.
Please note that Resource Governor applies only to the database engine, not to the other SQL Server 2008 services. This means you cannot control the resource usage of Reporting Services, Analysis Services, or Integration Services.
If you have multiple instances running on the same machine, you cannot use Resource Governor to manage load between those instances.
Keeping these points in mind will help you understand how to use Resource Governor and what to expect. It is one of the best tools for managing load and is highly recommended, but make sure you know its pros and cons.
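As an illustration of the 10% scenario above, here is a minimal hedged sketch that caps a hypothetical reporting login at 10% of memory; the pool, group, and login names are invented for this example:
-- Resource pool with a 10% memory cap (enforced under memory pressure, as noted above).
CREATE RESOURCE POOL ReportPool WITH (MAX_MEMORY_PERCENT = 10)
CREATE WORKLOAD GROUP ReportGroup USING ReportPool
GO
-- The classifier function must live in master; it routes new sessions into workload groups.
USE master
GO
CREATE FUNCTION dbo.fnClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
DECLARE @grp SYSNAME
SET @grp = 'default'
IF SUSER_SNAME() = 'ReportUser' -- hypothetical login
SET @grp = 'ReportGroup'
RETURN @grp
END
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier)
ALTER RESOURCE GOVERNOR RECONFIGURE
GO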
For more information, please visit this superb article:
http://www.sql-server-performance.com/articles/per/Resource_Governor_in_SQL_Server_2008_p1.aspx
Thursday, October 21, 2010
Why does log shipping support only the Full and Bulk-Logged recovery models?
Log shipping means shipping transaction log backups from the primary server to a standby server.
The log file contains the information for all DML statements.
The basic aim of log shipping is to maintain all past transactions in a separate place.
If we use the Simple recovery model, the transaction log is truncated at each checkpoint, so log backups cannot be taken and log shipping is not supported.
In the Full and Bulk-Logged recovery models, the log information corresponding to all DML statements remains available in the log files until it is backed up, so log shipping accepts only the Full and Bulk-Logged recovery models.
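Before configuring log shipping, you can check a database's recovery model and switch it to Full if needed; 'yourdb' below is a placeholder:
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'yourdb'
ALTER DATABASE yourdb SET RECOVERY FULL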
Run command for SQL Server 2008
1. Click the Start button.
2. Click Run in the Start menu.
3. Type SSMS.
4. Press Enter.
5. This opens SQL Server Management Studio 2008 (version 10.0).
Run command for SQL Server 2005
1. Click the Start button.
2. Click Run in the Start menu.
3. Type sqlwb.
4. Press Enter.
5. This opens SQL Server Management Studio 2005.
Friday, October 15, 2010
RESTORE VERIFYONLY In SQL Server 2005
Usage:
RESTORE VERIFYONLY FROM DISK = 'C:\www\EMP_backup_201010130110.bak'
Out Put:
The backup set on file 1 is valid.
VERIFYONLY is used to check whether our backup is usable. The check is not 100% conclusive, but to some extent we can conclude that the backup is fine.
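For a stronger check, take the backup WITH CHECKSUM and verify the checksums during VERIFYONLY. A small sketch, reusing the path from the example above:
BACKUP DATABASE EMP TO DISK = 'C:\www\EMP_backup_201010130110.bak' WITH CHECKSUM
RESTORE VERIFYONLY FROM DISK = 'C:\www\EMP_backup_201010130110.bak' WITH CHECKSUM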
Friday, October 8, 2010
DBCC CHECKDB
The question I ask in an interview is:
"You get an error in the logs indicating that you have a torn page or checksum error. What do you do?"
Half the candidates look confused. Of the rest, roughly 90% tell me to run DBCC CHECKDB with one of the repair options! When I challenge them to explain how CHECKDB performs the repair, none are able to answer, unaware of the damage they might be doing.
So here is how to use DBCC CHECKDB, and what to do when you have a torn or corrupt page.
So How Do I Use It?
The primary purpose is to check for consistency errors, and should ideally be run every day.
The basic syntax is:
DBCC CHECKDB ('DB Name') WITH NO_INFOMSGS
NO_INFOMSGS prevents an excessive number of informational messages from being generated. There are several other options, but this is the syntax you should aim to use as it performs all integrity checks.
This may take a long time on large databases and you may want to specify the PHYSICAL_ONLY option. This checks physical on-disk structures, but omits the internal logical checks. The syntax is:
DBCC CHECKDB ('DB Name') WITH PHYSICAL_ONLY
What Do I Do?
You do have backups, don't you? You might be lucky; a non-clustered index can be dropped and rebuilt, but actual data, such as a clustered index, cannot.
By far the best option is to restore from a backup, but let's look at how you investigate which pages are affected and what type of data is affected:
Look at the output from DBCC CHECKDB. You may see something like this:
Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data): Page (1:94299) could not be processed. See other errors for details.
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data), page (1:94299). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed.
CHECKDB found 0 allocation errors and 2 consistency errors in table 'yourtable' (object ID 2088535921).
CHECKDB found 0 allocation errors and 2 consistency errors in database 'yourdb'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (yourdb).
From this you can see which page is corrupted (1:94299).
The first thing to do is check if it is data in a heap, in a clustered index, or in a non-clustered index. In the above text you can see it is index ID 0. You could also examine the page (1:94299 in database 'yourdb') as follows:
DBCC TRACEON (3604, -1)
GO
DBCC PAGE('yourdb', 1, 94299, 3)
GO
In the output you will see something like:
Metadata: IndexId = n
If n is greater than 1 it is a non-clustered index and can safely be dropped and recreated. If n is 0 or 1 you have data corruption and need to perform one of the options described below.
Restoring from a backup
If the recovery model is FULL (or BULK_LOGGED, with some limitations), you can back up the tail of the log, perform a restore (with NORECOVERY) from the last clean full backup, followed by subsequent log backups and finally the tail of the log.
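As a hedged sketch of that sequence (the database and file names are placeholders):
-- Back up the tail of the log; NO_TRUNCATE allows this even if the database is damaged.
BACKUP LOG yourdb TO DISK = 'C:\yourdb_tail.trn' WITH NO_TRUNCATE
-- Restore the last clean full backup without recovering.
RESTORE DATABASE yourdb FROM DISK = 'C:\yourdb_full.bak' WITH NORECOVERY
-- Restore each subsequent log backup in sequence.
RESTORE LOG yourdb FROM DISK = 'C:\yourdb_log1.trn' WITH NORECOVERY
-- Finally, restore the tail of the log and recover the database.
RESTORE LOG yourdb FROM DISK = 'C:\yourdb_tail.trn' WITH RECOVERY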
If only a few pages are affected you have the option of selectively restoring only the bad pages, as follows:
RESTORE DATABASE yourdb PAGE = '1:94299'
FROM DISK = 'C:\yourdb.bak'
WITH NORECOVERY
If the recovery model is simple you don't have that option, and have to accept that a restore from the last full backup will result in subsequent transactions being lost. In this case, or if you have no backups at all, you may decide that an automatic repair is the only option.
Automatic Repair Options
First let me emphasise the importance of running a backup BEFORE you go any further.
Have a look at the output of the original CHECKDB. It will specify the minimum repair level.
REPAIR_REBUILD
If the minimum repair level is REPAIR_REBUILD you have been lucky.
The syntax is
DBCC CHECKDB('DB Name', REPAIR_REBUILD)
REPAIR_ALLOW_DATA_LOSS
This attempts to repair all errors. Sometimes the only way to repair an error is to deallocate the affected page and modify page links so that it looks like the page never existed. This has the desired effect of restoring the database's structural integrity but means that something has been deleted (hence the ALLOW_DATA_LOSS). There are likely to be issues with referential integrity, not to mention the important data that may now be missing.
The syntax is
DBCC CHECKDB('DB Name', REPAIR_ALLOW_DATA_LOSS)
Note that the database must be put into single-user mode before either repair option will run.
Make sure you run DBCC CHECKCONSTRAINTS afterwards so you are aware of referential integrity issues and can take the appropriate action.
And Finally
My original reason for writing this was to stress that the correct action when faced with corruption is nearly always to restore from a backup. Only use the automatic repair options as a last resort, and with full understanding of the damage this may do.
Just as important is that regular backups are an essential part of a DBA's responsibilities, and you should use the FULL recovery model with regular log backups for all but the most trivial databases.
DBCC CHECKDB is a powerful tool, but also very dangerous in the wrong hands.
Maybe instead of adding the REPAIR_ALLOW_DATA_LOSS option, Microsoft should have created a separate DBCC command called:
DBCC DELETE_DATA_BECAUSE_I_COULDNT_BE_BOTHERED_TO_TAKE_A_BACKUP
"You get an error in the logs indicating that you have a torn page or checksum error. What do you do?"
Half the candidates look confused. Of the rest, roughly 90% tell me to run DBCC CHECKDB with one of the repair options! When I challenge them to explain how CHECKDB performs the repair, none are able to answer, unaware of the damage they might be doing.
So here is how to use DBCC CHECKDB, and what to do when you have a torn or corrupt page.
So How Do I Use It?
The primary purpose is to check for consistency errors, and should ideally be run every day.
The basic syntax is:
DBCC CHECKDB ('DB Name') WITH NO_INFOMSGS
NO_INFOMSGS prevents an excessive number of informational messages from being generated. There are several other options, but this is the syntax you should aim to use as it performs all integrity checks.
This may take a long time on large databases and you may want to specify the PHYSICAL_ONLY option. This checks physical on-disk structures, but omits the internal logical checks. The syntax is:
DBCC CHECKDB ('DB Name') WITH PHYSICAL_ONLY
What Do I Do?
You do have backups don't you? You might be lucky; a non-clustered index can be dropped and rebuilt but actual data, such as a clustered index, cannot.
By far the best option is to restore from a backup, but let's look at how you investigate which pages are affected and what type of data is affected:
Look at the output from DBCC CHECKDB. You may see something like this:
bject ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data): Page (1:94299) could not be processed. See other errors for details.
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data), page (1:94299). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed.
CHECKDB found 0 allocation errors and 2 consistency errors in table 'yourtable' (object ID 2088535921).
CHECKDB found 0 allocation errors and 2 consistency errors in database 'yourdb'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (yourdb).
From this you can see what page is corrupted (1:94299)
The first thing to do is check if it is data in a heap, in a clustered index, or in a non-clustered index. In the above text you can see it is index ID 0. You could also examine the page (1:94299 in database 'yourdb') as follows:
DBCC TRACEON (3604, -1)
GO
DBCC PAGE('yourdb', 1, 94299, 3)
GO
In the output you will see something like:
Metadata: IndexId = n
If n is greater than 1 it is a non-clustered index and can safely be dropped and recreated. If n is 0 or 1 you have data corruption and need to perform one of the options described below.
Restoring from a backup
If the recovery model is FULL (or BULK_LOGGED, with some limitations), you can backup the tail of the log, perform a restore (with norecovery) from the last clean full backup, followed by subsequent log backups and finally the tail of the log.
If only a few pages are affected you have the option of selectively restoring only the bad pages, as follows:
RESTORE DATABASE yourdb PAGE = '1:94299'
FROM DISK = 'C:\yourdb.bak'
WITH NORECOVERY
If the recovery model is simple you don't have that option, and have to accept that a restore from the last full backup will result in subsequent transactions being lost. In this case, or if you have no backups at all, you may decide that an automatic repair is the only option.
Automatic Repair Options
First let me emphasise the importance of running a backup BEFORE you go any further.
Have a look at the output of the original CHECKDB. It will specify the minimum repair level.
REPAIR_REBUILD
If the minimum repair level is REPAIR_REBUILD you have been lucky.
The syntax is
DBCC CHECKDB('DB Name', REPAIR_REBUILD)
REPAIR_ALLOW_DATA_LOSS
This attempts to repair all errors. Sometimes the only way to repair an error is to deallocate the affected page and modify page links so that it looks like the page never existed. This has the desired effect of restoring the database's structural integrity but means that something has been deleted (hence the ALLOW_DATA_LOSS). There are likely to be issues with referential integrity, not to mention the important data that may now be missing.
The syntax is
DBCC CHECKDB('DB Name', REPAIR_ALLOW_DATA_LOSS)
Make sure you run DBCC CHECKCONSTRAINTS afterwards so you are aware of referential integrity issues and can take the appropriate action.
And Finally
My original reason for writing this was to stress that the correct action when faced with corruption is nearly always to restore from a backup. Only use the automatic repair options as a last resort, and with full understanding of the damage this may do.
Just as important is that regular backups are an essential part of a DBA's responsibilities, and you should use the FULL recovery model with regular log backups for all but the most trivial databases.
DBCC CHECKDB is a powerful tool, but also very dangerous in the wrong hands.
Maybe instead of adding the REPAIR_ALLOW_DATA_LOSS option, Microsoft should have created a separate DBCC command called:
DBCC DELETE_DATA_BECAUSE_I_COULDNT_BE_BOTHERED_TO_TAKE_A_BACKUP
Thursday, October 7, 2010
When and how to use Database Snapshots
SQL Server 2005 offers many features that did not exist in previous versions of the product. One such feature is Database Snapshots. Database Snapshots allows you to create a read-only copy of your database that can be used for other purposes, such as reporting, auditing or recovering data. Currently, you can do that by running a backup and restoring it to another database. However, the big advantage you gain by using Database Snapshots instead is the speed at which a snapshot occurs, as well as the ability to create multiple snapshots at different points in time quickly.
How does this new feature work?
The initial thought is that the snapshot needs to make a complete copy of your database so the data stays static. If this were the case, the snapshot would offer no advantage over a backup and restore.
Here's what actually happens: When you create the snapshot, a shell of your database file is created. When data is read from the snapshot, it is actually being read from the primary database. As data changes in your primary database, SQL Server then writes out what the data looked like prior to the data change into a sparse file. The snapshot will then read the sparse file for the data that has been changed and continue to read the primary database for data that has not changed.
As an illustration: if 90% of the data has not been changed in the primary database, 90% of the data is still retrieved from the primary database when a query is issued. Conversely, the 10% of the data that has changed is read from the sparse file to satisfy the query.
What are the advantages?
- Since SQL Server only needs to deal with the data that has changed for Database Snapshots to work, the initial creation of the snapshot or subsequent snapshots is extremely fast.
- Data is only written to the sparse file as data changes, so your overall disk space needs don't double each time you create a new snapshot -- as they would if you were using backup and restore for the secondary copy.
- You can create several snapshots for the same database, so you could have rolling snapshots occurring every hour for auditing purposes.
- It is considerably easier to work with snapshots and much faster than backup and restore operations.
- You can create a static reporting environment quickly with minimal effort.
- Corrupt or deleted data can be retrieved from the snapshot to repair the primary database.
- Database changes can be audited with the use of snapshots.
- You have the ability to restore your database using a snapshot.
What are the disadvantages?
- Your user community is still hitting your primary database. If you already experience performance issues because of reporting requirements, using snapshots for reporting will not resolve that problem.
- The snapshot is tied to your primary database, so you cannot take the primary database offline. If the primary database goes offline, you won't be able to access any of your snapshots.
- The Database Snapshots feature is only available in the Enterprise Edition.
- If data changes rapidly, this option may not offer benefits for keeping snapshots around for a long period of time.
- Full-text indexing is not supported in the snapshot.
The following guidelines show you how simple it is to create and use database snapshots.
Creating: To create a database snapshot you must use a T-SQL command; managing snapshots is not currently supported through Management Studio. The syntax to create the snapshot is as follows:
CREATE DATABASE Northwind_Snapshot1000 ON
(
NAME = Northwind_Data,
FILENAME = 'C:\MSSQL\Snapshot\Northwind_data1000.ss'
) AS SNAPSHOT OF Northwind
If you wanted to create a snapshot every hour, you could issue the above statement again and just change the name of the database and the file. Now you will have two snapshots, each with different static data.
CREATE DATABASE Northwind_Snapshot1100 ON
(
NAME = Northwind_Data,
FILENAME = 'C:\MSSQL\Snapshot\Northwind_data1100.ss'
)
AS SNAPSHOT OF Northwind
Using: Using the snapshot is just like using any other database. You connect to the database and execute your queries. The thing to remember, though, is that this is a read-only copy, so you can't issue UPDATE, DELETE or INSERT statements.
Dropping: To get rid of a database snapshot, drop the snapshot just like you would drop any other database.
DROP DATABASE Northwind_Snapshot1000
Restoring: One nice thing about the snapshots feature is that you can restore your entire database back to the point in time when the snapshot was taken. If there were issues with your database and you needed to revert back to the exact time the snapshot was taken, you could use a restore command. Another option is to use snapshots to restore a test environment back to its state when the testing began. So you can take a snapshot, let your users or developers do their testing and then restore the snapshot so the database is back to the point in time prior to when testing began. To restore a database using a snapshot you can use the following command:
RESTORE DATABASE Northwind FROM DATABASE_SNAPSHOT ='Northwind_Snapshot1100'
Tuesday, October 5, 2010
What is your suggestion for restarting SQL Server as a SQL DBA?
Microsoft's suggestion is 99 days: after 99 days of uptime, we need to restart SQL Server so that tempdb and the caches are cleared.
How to find blocked processes in SQL Server 2005/2008
SQL Server 2005 and 2008 provide several ways to find which sessions are currently working and which session is blocking a table.
sp_who and sp_who2 are two procedures for finding out which sessions are blocked.
Example: Open SQL Server 2005 or 2008 Management Studio, then run sp_who. It displays output like this:
51 0 sleeping NT AUTHORITY\SYSTEM PRAKRIT25 0 msdb AWAITING COMMAND 0
A nonzero value in the blk column is the SPID of the blocking session; 0 means the session is not blocked.
Similar to sp_who, we have another option called sp_who2, which adds columns such as CPU time and disk I/O.
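On SQL Server 2005 and later you can also query the dynamic management views directly. This sketch lists only the sessions that are currently blocked and who is blocking them:
SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0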
Insert or Update Script Generator
/*
SQL Server Row Script Creator
SURESH October 2010
This script generates a script to insert or update rows from a source table in one database into an
identical destination table in another database or server. It can be run for inserts or updates:
insert and update mode can script a single row, and insert mode can also script all rows in a table.
*/
declare @tab varchar(50)
,@pk1Val varChar(10)
,@pk1Name varChar(50)
,@qt char(1)
,@StatementType varChar(10)
set nocount on
/*
Instructions:
1) open script and connect to the source database
2) Change the variable values to change the output options for the script below (@tab, @statementtype etc)
3) execute the script (best to use text output)
4) copy the script output into a script window, and run on the destination table.
@Tab = the name of the source table
@pk1Val = if selecting a single row or doing an update statement, the value of the primary key for that row
@pk1Name = if inserting a single row or doing an update statement, the name of the column for the primary key
@StatementType = either 'INSERT' to create an insert statement or 'UPDATE' for an Update statement
*/
select @tab = 'hari', @pk1Val = '', @pk1Name = '', @StatementType = 'INSERT'
declare @tabName varchar(50)
, @colName varchar(50)
, @colType varchar(50)
, @collength varChar(50)
, @colOrder int
, @IsIdent char(1)
create table #output (Line varChar(4000), LineOrder int)
create table #ColumnValues (ColName varChar(250), ColOrder int, RowNumber int, ColValue varchar(4000), colType varchar(50))
declare @out varchar(4000)
,@lineCounter int
,@ColValue varchar(4000)
,@sortCol varchar(50)
/* get the ordering column */
select @sortCol = sc.Name
from sysobjects so
inner join syscolumns sc
on so.id= sc.id
inner join systypes st
on sc.xtype = st.xusertype
where so.Name = @tab
and ((sc.status = 0x80) OR (ColOrder = 1 and not sc.status = 0x80 ))
/* put in the repeating values based on the columns*/
declare objCurs CURSOR FOR
select so.name, sc.name, st.name, sc.length, Case when sc.status = 0x80 then 'Y' else 'N' END as IsIdent, ColOrder
from sysobjects so
inner join syscolumns sc
on so.id= sc.id
inner join systypes st
on sc.xtype = st.xusertype
where so.Name = @tab
DECLARE @counter int, @numCols int, @RowNumber int, @LastRowNumber int, @maxRowNumber int, @maxColOrder int
select @numCols = count(sc.id)
from sysobjects so
inner join syscolumns sc
on so.id= sc.id
where so.Name = @tab
--select @numCols --debug code
open objCurs
Fetch from objCurs
into @tabname, @colName, @colType, @colLength, @isIdent, @colOrder
while @@fetch_status = 0
begin
SET @counter = 0
/* get the value from the table */
if @IsIdent = 'N'
BEGIN
--select @TabName,@ColName, @ColType, @ColLEngth, @isIdent, @ColOrder --debug code
/* improve type handling by adding more CASE branches for different data types */
if datalength(@pk1Name) = 0 or datalength(@pk1Val) = 0
begin
/* getting all rows in the table */
/* build the statement in a variable first: EXEC() cannot contain function calls,
and the int variables must be converted to varchar before concatenation */
declare @sql varchar(4000)
set @sql = 'insert into #ColumnValues (ColName, ColOrder, ColValue, ColType)
select ''' + @colName + ''', ' + convert(varchar, @ColOrder) + ', Convert(nvarchar(4000),' + @colName + ') , ''' + @colType + ''' from ' + @tabName + ' order by ' + @SortCol + ' ' +
' declare @counter int set @counter = 0 ' +
' update #ColumnValues Set @Counter = RowNumber = @Counter + (' + convert(varchar, @numCols) + ' * 10) where ColName = ''' + @colName + ''''
exec (@sql)
end
else
begin
/* filtering by a pk val */
set @sql = 'insert into #ColumnValues (RowNumber, ColName, ColOrder, ColValue, ColType)
select 0, ''' + @colName + ''', ' + convert(varchar, @ColOrder) + ', Convert(nvarchar(4000),' + @colName + ') , ''' + @colType + ''' from ' + @tabName +
' where ' + @pk1Name + ' = ' + @pk1Val
exec (@sql)
end
end /* if @isIdent = 'n' */
Fetch Next from objCurs
into @tabname, @colName, @colType, @colLength, @IsIdent, @ColOrder
end
--select * from #ColumnValues --debug code
select @MaxRowNumber = Max(rowNumber) from #columnValues --keep highest row number so we know when we are finished
select @MaxColOrder = max(ColOrder) from #ColumnValues where RowNumber = @MaxRowNumber
/* next cursor for outputting the results from the retval table into the output table */
declare ColVal_Curs cursor for
select ColName , ColOrder , RowNumber , ColValue , colType
from #ColumnValues
order by RowNumber, ColOrder
open ColVal_Curs
--set the last row number to the first in the table, so post loop checking works
select @lastRowNumber = min(rowNumber) from #ColumnValues
set @lineCounter = @LastRowNumber --initialise at the first row
fetch from ColVal_Curs into
@colName, @ColOrder, @RowNumber, @colValue, @ColType
while @@Fetch_status = 0
BEGIN /* cursor loop */
/* get the quote type to enclose the value from the column type */
select @qt = case @colType
when 'nvarchar' then ''''
when 'nchar' then ''''
when 'DateTime' then ''''
when 'ntext' then ''''
when 'varchar' then ''''
else ''
end
--select @linecounter, @colName, @ColOrder, @RowNumber, @colValue, @ColType
if not @ColValue is null
if @rowNumber = @lineCounter
select @out = case @statementType
when 'UPDATE' THEN 'Update ' + @tab + ' SET '
when 'INSERT' then 'INSERT INTO ' + @tab + ' ('
end
begin
if @StatementType = 'UPDATE'
BEGIN
select @Out = @out + @colName + ' = ' + @qt + COALESCE(@ColValue, 'NULL') + @qt + ',' -- + @ColType
insert into #output (Line, LineOrder)
values (@out, @lineCounter)
end
if @statementType = 'INSERT'
BEGIN
/* put comma in */
if @lineCounter > @RowNumber --not first line in set of values for row
select @out = @out + ','
/*put in the name of the column */
insert into #output (Line, LineOrder)
values (@out + @colName
, @lineCounter)
if @lineCounter > @RowNumber --not first line in set of values for row
select @out = ','
else
select @out = ''
/* put in the value of the column */
insert into #output (Line, LineOrder)
values (@out + @qt + COALESCE(@ColValue, 'NULL') + @qt
, @lineCounter + 10 + @numCols)
END
end /*not @ColValue is null */
select @lineCounter = @lineCounter + 1
set @out = ''
/* get the next record*/
fetch from ColVal_Curs into
@colName, @ColOrder, @RowNumber, @colValue, @ColType
--select @ColOrder, @MaxColOrder, @@Fetch_Status --debug code
if (@rowNumber > @lastRowNumber) or (@RowNumber = @MaxRowNumber and @MaxColOrder = @ColOrder and @@FEtch_Status = -1)
BEGIN
/* this bit of processing is done whenever the set of columns in a row changes to the next row of the original table */
/* ie we are building a record to insert, and the PK changes because we are at the next record */
/* remove the last comma from the last line */
declare @lastLine int
if @statementType = 'UPDATE'
begin
/*remove last comma*/
update #output
set Line = left(Line,datalength(Line)-1)
where lineOrder = @LineCounter
/* insert a 'where' clause */
insert into #output (line, LineOrder)
select ' WHERE ' + @pk1Name + ' = ' + @pk1Val, Max(LineOrder) + 1 from #output
end
if @statementType = 'INSERT'
BEGIN
/* put in a 'values' statement between the column names and the column values */
insert into #output (Line, LineOrder)
values (') VALUES (', @LastRowNumber + @numCols + 5)
/* close off the lot */
insert into #output (line, lineorder)
select ')', Max(LineOrder) + 1 from #output
END
set @lastRowNumber = @RowNumber
set @lineCounter = @RowNumber /* reset linecounter for next set */
End /* if rownumber > last row number */
end /* cursor loop */
close objCurs
deallocate objCurs
close ColVal_Curs
deallocate ColVal_Curs
/* get the statements out from the list*/
select line as [Copy script from output window below] from #output order by lineorder
/* bug tracking code - uncomment to diagnose problems
select distinct RowNumber from #ColumnValues order by 1
select * from #ColumnValues
order by RowNumber, ColOrder, ColName, ColValue
*/
drop table #output
drop table #ColumnValues
set nocount off