The goal of implementing a database maintenance plan is to help prevent problems such as poor database performance and the loss of data. If implemented correctly, a database maintenance plan helps ensure that a SQL Server's databases perform adequately and, should a problem occur, provides the backups needed to minimize data loss. A good maintenance plan often reduces the time spent troubleshooting problems.
Please note that the following document provides recommendations for maintaining the SEViewer and Advisor databases running on SQL Server 2005 and SQL Server 2008. It does not apply to SQL Express installations.
Please note that substantial portions of the text on this page have been taken from "Brad's Sure Guide to SQL Server Maintenance Plans" by Brad McGehee, Simple Talk Publishing 2009, Copyright 2009 by Brad M. McGehee. Some of this text has been modified for the purposes of this document.
A database maintenance plan should cover the tasks below. Note that the most important tasks are at the beginning of the list:
We recommend running the sp_updatestats system stored procedure at least once a day to update statistics. The schedule depends primarily on the event rate and on the database auto-trim threshold configured in SE-Viewer: faster event rates necessitate more frequent statistics updates. For 100 or more servers, we recommend keeping approximately 250,000 events and 10,000,000 counters in the SE-Viewer database. For the Advisor database (which stores 3 months of data), we recommend running the update statistics task at least twice a month. Keep in mind that the Advisor database size will vary depending on the event rate over the 3-month period.
We recommend reorganizing indexes at least once a day, and rebuilding the indexes once a week. To make this process easier, we supply a script that either rebuilds or reorganizes the indexes in online mode, depending on the amount of fragmentation.
We recommend checking the database integrity at most once a week because it is a very resource-consuming procedure. On SQL Server Enterprise Edition, the integrity check is an online procedure, so while it does not block other queries while executing, it does slow down performance.
Backing Up the Database: Ensures that usable data will be available in the event that your data becomes corrupted.
The SIMPLE recovery model minimizes use of the transaction log, which gives the best query performance, but it also limits recovery options such as restoring the database to an arbitrary point in time. Because the recovery model of the SEViewer and Advisor databases is SIMPLE, we recommend making a database backup at least once a day, preferably just after the Maintain Statistics or Maintain Indices task. The backup can be performed while the database is online and in use.
When to remove old backups depends on the available file storage and your business requirements.
This task is not as critical as some of the others. It frees space by removing old job history.
The Query Optimizer uses index and column statistics as part of its evaluation process, as it tries to determine an optimal query execution plan. If the statistics are old, or incomplete, then the Query Optimizer may create an inefficient execution plan, which substantially slows down a query's performance. In theory, index and column statistics are self-maintaining, but in practice this self-maintaining process is not perfect.
In order to ensure that the optimizer has the most complete and current statistics at its disposal, the DBA needs to create a maintenance task to ensure that they are regularly updated, either by rebuilding the indices, or by updating the statistics.
To update index and column statistics we recommend using the sp_updatestats system stored procedure.
The statistics will always need to be updated because the tables are always changing under a normal workload.
If statistics have never been updated before, but the system has been operating for a long time (days, weeks or months), updating all of the statistics may take a long time to execute, and during execution there may be degradation of overall performance for SELECT, INSERT, UPDATE and DELETE operations.
Therefore, when executing the sp_updatestats procedure for the first time, the "resample" parameter should be used rather than the default sample rate. Additionally, because running sp_updatestats for the first time may take a long time to execute, it is better to run it when the system's load is low.
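A minimal sketch of this first run is shown below; the database name SEViewer is an example only, so substitute your actual SEViewer or Advisor database name:

USE SEViewer; -- example database name; replace with your SEViewer or Advisor database
EXEC sp_updatestats @resample = 'resample'; -- first run: reuse each statistic's most recent sample rate

Subsequent scheduled runs can simply execute sp_updatestats without the "resample" parameter.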
The schedule to update statistics is dependent on the auto-trim threshold configured in SE-Viewer for "Keep the N most recent events in the database". [Note: This option may be found in SE-Viewer under Tools > Options… > Data tab.]
Below is a table with the recommended intervals for updating the statistics in the SEViewer database. It illustrates that, for the same number of servers and applications per server, different auto-trim thresholds lead to different update frequency requirements.
Number of servers | Average number of applications per server | Interval (hours) for default threshold (50,000 events in EVENT table) | Interval (hours) for threshold = 250,000 events in EVENT table |
10 | 20 | 7 | 35 |
50 | 20 | 2 | 11 |
100 | 20 | 1 | 5 |
200 | 20 | 0.6 | 3 |
The smaller the auto-trim threshold, the more frequently the statistics must be updated.
If your configuration differs from the values shown in the table, the statistics update frequency should be interpolated or extrapolated from the table's values.
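For example (an illustrative estimate, not a measured value): for 150 servers at the default threshold, interpolating between the 100-server row (1 hour) and the 200-server row (0.6 hours) suggests an update interval of roughly 0.8 hours.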
The more frequently the sp_updatestats stored procedure is scheduled to run, the faster it will execute since there will be fewer statistics that it needs to update.
Alternatively, the UPDATE STATISTICS command can be used to update the statistics of a particular table.
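For instance, a hedged sketch for a single table (EVENT is the table referenced in the thresholds above; the dbo schema is an assumption):

UPDATE STATISTICS dbo.EVENT; -- updates all statistics on the EVENT table; dbo schema assumed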
Command | Required Permissions |
sp_updatestats | Membership in sysadmin server fixed role |
UPDATE STATISTICS | ALTER permissions on the table or view for the SEViewer or Advisor database |
The next resource to be maintained in a system with a high volume of updates is the indices. Many data manipulation operations (INSERT/UPDATE/DELETE) cause the indices to become fragmented, which affects query execution.
The fragmentation can be removed by running the rebuild indexes task on a regular basis.
Under a normal workload there are tables that are regularly updated; therefore, their indices should be rebuilt or reorganized on a regular basis.
Index fragmentation is not as critical to query performance as out-of-date statistics. Therefore, based on our practical investigations, we suggest running the rebuild indexes task once for every 10 runs of sp_updatestats. Below is a table with recommended schedules depending on the SE-Viewer auto-trim threshold configuration.
Number of servers | Average number of applications per server | Interval (days) for default threshold (50,000 events in EVENT table) | Interval (days) for threshold = 250,000 events in EVENT table |
10 | 20 | 3 | 14 |
50 | 20 | 1 | 5 |
100 | 20 | 1* | 2 |
200 | 20 | 1* | 1 |
*Rebuilding indexes is not as critical as updating statistics. Therefore, for the administrator's convenience, the lower limit is set to one index rebuild per day per database.
If your configuration differs from the values shown in the table, the index rebuild task frequency should be interpolated or extrapolated from the table's values, in the same way as described for the statistics table above.
For the Advisor database we recommend running sp_updatestats twice a month and rebuilding the indices 4 times a year.
When to Reorganize Indices
While the Rebuild Index task drops the index and rebuilds it, the Reorganize Index task only reshuffles the leaf-level pages of an index so that the physical ordering matches the logical ordering and wasted space is minimized.
Because Reorganize Index is an online operation, queries can still access the indices even while they are being reorganized, though some decrease in performance may be noticed. In addition, this task does not automatically update statistics the way the Rebuild Index task does, so the statistics must be updated separately, using the Update Statistics task.
If you need to rebuild indices but you do not have an available maintenance window, then we recommend using the Reorganize Index task, along with the Update Statistics task. Therefore, before you choose Reorganize over Rebuild, or vice versa, you need to determine what your maintenance windows are.
The script below has an "online" option that local SQL administrators can set to either "ON" or "OFF" to choose between an online and an offline index rebuild. If there is a maintenance window, the "OFF" setting should be used: the script runs faster, and because no other activity is expected during the window, the blocking of other queries while it executes is not a concern.
How Often to Reorganize Indices
Please see the section on "How Often to Rebuild Indices".
The following script executes the rebuild or reorganize index procedure for each index that requires it. It has one parameter to be defined: @online. If set to 'ON', indexes are rebuilt online; if set to 'OFF', they are rebuilt offline. Offline mode is faster but blocks other queries while executing.
-- Ensure a USE <databasename> statement has been executed first.
SET NOCOUNT ON;
DECLARE @online varchar(3);
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname sysname;
DECLARE @objectname sysname;
DECLARE @indexname sysname;
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @command varchar(8000);
DECLARE @msg varchar(8000);
DECLARE @dt smalldatetime;

-- When @online = 'ON', the script runs slower, but doesn't block the other queries while executing.
SET @online = 'ON';

-- Ensure the temporary table does not exist.
IF EXISTS (SELECT name FROM sys.objects WHERE name = 'work_to_do')
    DROP TABLE work_to_do;

-- Conditionally select from the function, converting object and index IDs to names.
SELECT object_id AS objectid,
       index_id AS indexid,
       partition_number AS partitionnum,
       avg_fragmentation_in_percent AS frag
INTO work_to_do
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 10.0 AND index_id > 0;

-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR SELECT * FROM work_to_do;

-- Open the cursor.
OPEN partitions;

-- Loop through the partitions.
FETCH NEXT FROM partitions INTO @objectid, @indexid, @partitionnum, @frag;
WHILE @@FETCH_STATUS = 0
BEGIN;
    SELECT @objectname = o.name, @schemaname = s.name
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.object_id = @objectid;

    SELECT @indexname = name
    FROM sys.indexes
    WHERE object_id = @objectid AND index_id = @indexid;

    SELECT @partitioncount = count (*)
    FROM sys.partitions
    WHERE object_id = @objectid AND index_id = @indexid;

    -- 30 is an arbitrary decision point at which to switch between reorganizing and rebuilding.
    IF @frag < 30.0
    BEGIN;
        SELECT @command = 'ALTER INDEX ' + @indexname + ' ON ' + @schemaname + '.' + @objectname + ' REORGANIZE';
        IF @partitioncount > 1
            SELECT @command = @command + ' PARTITION=' + CONVERT (CHAR, @partitionnum);
        SET @msg = 'Executing ' + @command + ', frag=' + ltrim(STR(@frag, 15, 0)) + ' percent';
        RAISERROR (@msg, 0, 1) WITH NOWAIT;
        EXEC (@command);
    END;

    IF @frag >= 30.0
    BEGIN;
        SELECT @command = 'ALTER INDEX ' + @indexname + ' ON ' + @schemaname + '.' + @objectname + ' REBUILD WITH (ONLINE=' + @online + ')';
        IF @partitioncount > 1
            SELECT @command = @command + ' PARTITION=' + CONVERT (CHAR, @partitionnum);
        SET @msg = 'Executing ' + @command + ', frag=' + ltrim(STR(@frag, 5, 0)) + ' percent';
        BEGIN TRY
            RAISERROR (@msg, 0, 1) WITH NOWAIT;
            EXEC (@command);
        END TRY
        BEGIN CATCH
            RAISERROR ('Online index operation failed, trying offline', 0, 1) WITH NOWAIT;
            SET @command = REPLACE(@command, 'ONLINE=ON', 'ONLINE=OFF');
            SET @msg = 'Executing ' + @command + ', frag=' + ltrim(STR(@frag, 5, 0)) + ' percent';
            RAISERROR (@msg, 0, 1) WITH NOWAIT;
            EXEC (@command);
        END CATCH
    END;

    SET @msg = 'Done.';
    RAISERROR (@msg, 0, 1) WITH NOWAIT;

    FETCH NEXT FROM partitions INTO @objectid, @indexid, @partitionnum, @frag;
END;

-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;

-- Drop the temporary table.
IF EXISTS (SELECT name FROM sys.objects WHERE name = 'work_to_do')
    DROP TABLE work_to_do;
GO
To execute ALTER INDEX, at a minimum, ALTER permission on the table or view is required. Membership in the db_owner fixed database role provides sufficient permissions to run the script.
It is possible for data in a SQL Server database to become corrupted due to a failure in the disk subsystem or some other event. While it is not common for a database to become physically damaged in this way, the possibility must be considered. Data corruption may occur only in one specific area of the database and it's possible that the damage may not be discovered for some time, usually until an attempt is made to query the corrupted data. Between the time at which the damage occurred, and the time it was discovered, many days may have passed, and each of the backups made during this time will include the damaged data.
The longer the damage remains undiscovered, the more out-of-date the most recent undamaged backup will be. If you delete older backups on a regular schedule, you may not even have an undamaged copy! In either case, you may end up losing a lot of data so it is important for DBAs to regularly check the physical integrity of their databases.
You should use the following T-SQL command to check the integrity of the database:
DBCC CHECKDB ('{databaseName}')
Field | Definition |
{databaseName} | The name of the database |
Requires membership in the sysadmin fixed server role or the db_owner fixed database role
This task is very resource intensive, and running it at the wrong time can have detrimental effects on your SQL Server's performance.
Regardless of system activity, running DBCC CHECKDB against large databases such as the SEViewer or Advisor databases causes spooling to the tempdb database. Therefore, we recommend placing the tempdb database on fast disks, such as a RAID device, separate from the user databases.
The logical checks may take a considerably long time to run. The PHYSICAL_ONLY option limits the checking to the integrity of the physical structure of the page and record headers, the physical structure of B-trees, and the allocation consistency of the database. Designed to provide a low-overhead check of the physical consistency of the database, this check can also detect torn pages, checksum failures, and common hardware failures that can compromise a user's data. Using the PHYSICAL_ONLY option can result in a much shorter run time for DBCC CHECKDB on large databases; therefore, we recommend using it on production systems. In addition, we recommend the NOINDEX option. This omits the resource-intensive checks of non-clustered indices, which reduces the load on the server and shortens the time the task takes to run. All the clustered indices and heaps that hold your data are still checked. If a non-clustered index is corrupted, you can generally solve the problem by simply dropping and re-creating the index.
Here are our recommendations: run DBCC CHECKDB at most once a week, during periods of low activity, and use the PHYSICAL_ONLY and NOINDEX options on production systems.
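As an illustration only, a hedged sketch of such a check combining these options might look like the following (the database name SEViewer is an example; substitute your own):

DBCC CHECKDB ('SEViewer', NOINDEX) WITH PHYSICAL_ONLY; -- example database name; physical checks only, non-clustered index checks skipped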
Requires membership in the sysadmin fixed server role or the db_owner fixed database role
If your database becomes corrupted, and you don't have a restorable backup, then you will probably end up losing your data.
The databases use the SIMPLE recovery model; therefore, only full database backups are necessary, and transaction log backups are not needed. We recommend using the following T-SQL command:
BACKUP DATABASE database_name TO DISK = N'd:\backup\database_name.bak' WITH NAME = N'database_name_bak'
Field | Definition |
database_name | The name of the database |
TO DISK | Specifies a logical or physical backup device to use for the backup operation |
NAME | The name of the backup set |
The BACKUP DATABASE permissions default to members of the sysadmin fixed server role and the db_owner and db_backupoperator fixed database roles.
SQL Server uses an online backup process to allow a database backup while the database is still in use. In most cases, taking a full backup incurs minimal overhead on your SQL Server. We recommend performing a full backup of your database once a day, at a time when the servers are less busy (usually at night). In addition, we recommend verifying the backup using the following T-SQL command:
RESTORE VERIFYONLY FROM DISK = N'd:\backup\database_name.bak'
This command verifies the backup but does not restore it.
Permissions
For SQL Server 2005 and later versions, obtaining information about a backup set or backup device requires CREATE DATABASE permission; membership in the db_owner fixed database role is sufficient.
While making database backups is important, you don't need to keep them forever. In fact, if you don't clean up older backup files, your SQL Server's hard drives will quickly fill up. It is the job of the DBA to ensure that unneeded backups are removed from a SQL Server on a regular basis, based on the amount of available disk storage.
The SQL Server msdb system database stores historical data about various activities, such as details about backups, SQL Server Agent jobs, and Maintenance Plan execution. If left unattended, over time the msdb database can grow to a considerable size, wasting disk space and slowing down operations that use the msdb database. In most cases, this data does not need to be kept for a long period and should be removed using commands such as sp_delete_backuphistory and sp_purge_jobhistory. Keep in mind that other jobs run by DBAs may contribute to the size of the msdb database.
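As a hedged sketch, history older than a chosen cutoff can be removed with the two procedures mentioned above. The 30-day retention window used here is an assumption; adjust it to your own requirements.

DECLARE @cutoff datetime;
SET @cutoff = DATEADD(day, -30, GETDATE()); -- assumed retention window of 30 days
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff; -- removes backup and restore history older than the cutoff
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;     -- removes SQL Server Agent job history older than the cutoff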
sp_delete_backuphistory requires membership in the sysadmin fixed server role, but permission to execute it can be granted to other users with the GRANT EXECUTE command. Members of SQLAgentOperatorRole can purge the job history for local jobs only.
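For example, execute permission could be granted to a hypothetical maintenance login as follows (the database user name MaintenanceUser is an assumption used purely for illustration):

USE msdb;
GRANT EXECUTE ON dbo.sp_delete_backuphistory TO MaintenanceUser; -- hypothetical database user in msdb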