Locking a SQL Server table to prevent inserts - sql

I am writing this procedure in SQL Server 2008 R2:
CREATE PROCEDURE [dbo].[SetLocalSeed](@tableName NVARCHAR(128))
AS
BEGIN
-- Find the primary key column name
DECLARE @pkName NVARCHAR(128)
SELECT @pkName = COLUMN_NAME
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE OBJECTPROPERTY(OBJECT_ID(constraint_name), 'IsPrimaryKey') = 1
AND TABLE_NAME = @tableName
BEGIN TRANSACTION
-- Find the max LOCAL pk value (< 10^7) - hold the lock until the transaction completes.
DECLARE @max BIGINT
DECLARE @sql NVARCHAR(MAX) = 'SELECT @max = MAX([' + @pkName + ']) FROM [' + @tableName + '] WITH (TABLOCKX, HOLDLOCK) WHERE [' + @pkName + '] < POWER(10,7)';
EXEC sp_executesql @sql, N'@max BIGINT OUT', @max = @max OUTPUT
-- Reset the identity seed on the table
DBCC CHECKIDENT(@tableName, RESEED, @max)
COMMIT
END
Is this the correct way to lock the table against inserts while I run this query and the subsequent identity reseed? I would also like to know if there are any problems with what I'm doing above. This will be used in a custom replication environment.
TIA

SQL Server by default allows dirty reads, while not allowing dirty writes. To prevent this, you need to explicitly lock the table as you have done. If you don't, two different users could end up with the same value in your @max variable if both of them read from the table before either one does the reseed (while Nick is right about the locks taken during the reseed itself, you're doing a SELECT outside the context of the reseed). So I think you have this right.
You'll also want to look at this, for why you should enclose your transaction in SET XACT_ABORT ON/OFF commands.
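As a minimal sketch of that pattern, assuming the same procedure shape as above (the body is elided):

```sql
-- With XACT_ABORT ON, any run-time error aborts the batch and rolls back
-- the whole transaction, instead of leaving an open transaction holding
-- the TABLOCKX on the table.
SET XACT_ABORT ON;
BEGIN TRANSACTION;
    -- ... the MAX() query and DBCC CHECKIDENT from the procedure ...
COMMIT;
SET XACT_ABORT OFF;
```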

You can also consider setting the isolation level to READ COMMITTED in SQL Server so that only committed data is read:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
https://msdn.microsoft.com/en-us/library/ms173763.aspx

Related

Indexing same table in multiple databases

I have almost 150 databases, all with the same tables. I know it's bad, but I don't have control over it. I'm trying to improve performance with some indexes. I know what the indexes should be, but I need to build them on the same tables in every database. Is there a way to do this besides creating them all separately?
I had a similar situation a while back so I came up with this code. You can use dynamic SQL with sp_MSforeachdb to loop through your databases. I've excluded the system databases below but you can include/exclude databases as you like in that first IF.
This code will check each database for your specific table as well as checking to see if that index already exists on that table. If not, it creates it. I included a RAISERROR to show the progress through the databases in SSMS messages. Just change the table/index names below and update the CREATE INDEX statement as appropriate for you.
DECLARE @command varchar(1000)
SELECT @command = 'IF ''?'' NOT IN (''master'', ''model'', ''msdb'', ''tempdb'')
BEGIN USE ?
EXEC(''
DECLARE @DB VARCHAR(200)
SET @DB = DB_NAME()
RAISERROR (@DB, 10, 1) WITH NOWAIT
IF OBJECT_ID(''''dbo.TableName'''', ''''U'''') IS NOT NULL
BEGIN
IF NOT EXISTS (SELECT 1 FROM sys.indexes WHERE name = ''''IX_TableName'''' AND object_id = OBJECT_ID(''''TableName''''))
BEGIN
CREATE INDEX [IX_TableName] ON TableName (indexColumn)
END
END
'') END'
EXEC sp_MSforeachdb @command

How can I disable All DML Triggers for a database in SQL Server?

I have a database with nearly 100 DML triggers, and for some error fixing and maintenance purposes we need to disable all of these triggers temporarily.
Is there any way to disable all DML triggers for a single database at once?
I have gone through a lot of articles, and they all either suggest a script for disabling DDL triggers or say to disable them one by one, which is not a great option in this case.
Any help would be really appreciated. Thank you in advance!
For anyone that still needs this, here's a solution I made for enabling/disabling all DML triggers within a database on SQL Server:
USE MyDatabase;
BEGIN TRY
DECLARE @NewLine CHAR(2) = CHAR(13) + CHAR(10)
DECLARE @DynamicSQL VARCHAR(MAX) = (
SELECT
STRING_AGG(
CONVERT(
VARCHAR(MAX),
'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(TRO.schema_id)) + '.' + QUOTENAME(TA.name) + ' DISABLE TRIGGER ALL;' + @NewLine),
'')
WITHIN GROUP (ORDER BY TR.name)
FROM
sys.triggers AS TR
INNER JOIN sys.objects AS TRO ON TR.object_id = TRO.object_id
INNER JOIN sys.objects AS TA ON TRO.parent_object_id = TA.object_id
WHERE
TA.type = 'U') -- U: User defined table
BEGIN TRANSACTION
EXEC (@DynamicSQL)
--SELECT @DynamicSQL AS [processing-instruction(x)] FOR XML PATH('') -- 'Print' longer than 8k characters
COMMIT
END TRY
BEGIN CATCH
DECLARE @ErrorMessage VARCHAR(MAX) = ERROR_MESSAGE()
IF @@TRANCOUNT > 0
ROLLBACK
RAISERROR(@ErrorMessage, 16, 1)
END CATCH
The use of STRING_AGG means this solution only works with SQL Server 2017+, but there are many alternatives for doing the same on previous versions.
To enable all triggers, just replace the DISABLE with ENABLE.
Note that DISABLE TRIGGER ALL ON DATABASE does not disable DML triggers on the tables, only DDL triggers created on a database level.
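To illustrate the distinction (dbo.MyTable is a placeholder name):

```sql
-- Disables only database-scoped DDL triggers,
-- not the DML triggers attached to individual tables:
DISABLE TRIGGER ALL ON DATABASE;

-- DML triggers are disabled per table, which is what the
-- generated script above does for every user table:
ALTER TABLE dbo.MyTable DISABLE TRIGGER ALL;
```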

Create SQL Server trigger - dynamic SQL too long

Currently I am working on an audit trail using SQL Server triggers to identify inserts, updates and deletes on tables.
Tables can be created dynamically in the database, therefore when this happens I need to create the trigger dynamically.
Therefore at this point I call a stored procedure and pass in the table name.
CREATE PROCEDURE [dbo].[AUDIT_CreateTableTrigger]
@STR_TableName NVARCHAR(MAX)
WITH EXECUTE AS CALLER
AS
BEGIN
DECLARE @STR_Trig NVARCHAR(MAX) = ''
SET @STR_Trig = @STR_Trig + '
CREATE TRIGGER [dbo].[' + @STR_TableName + '_Audit] ON [dbo].[' + @STR_TableName + ']
WITH EXECUTE AS CALLER AFTER
INSERT, UPDATE, DELETE AS
BEGIN
-- do the insert stuff
-- update
-- + delete
END'
EXEC (@STR_Trig) -- then execute the sql
My issue is that the EXEC isn't reading the statement completely and cuts the procedure off.
I need a way of executing a long piece of SQL code. (I have one solution: splitting the dynamic SQL into 3 triggers, i.e. insert, update and delete, but I would prefer to keep 1 trigger to handle all three.)
Any suggestions would be appreciated, Thanks
Got this issue fixed. I broke up the query; see below for the solution.
DECLARE @sql1 NVARCHAR(4000) = '',
@sql2 NVARCHAR(4000) = '',
@sql3 NVARCHAR(MAX)
SET @sql1 += '
CREATE TRIGGER [dbo].[' + @STR_TableName + '_Audit] ON [dbo].[' + @STR_TableName + ']
WITH EXECUTE AS CALLER AFTER
INSERT, UPDATE, DELETE AS
BEGIN
BEGIN TRY
--sql query
'
SET @sql2 = '
--more sql query
END'
SET @sql3 = CAST(@sql1 AS NVARCHAR(MAX)) + CAST(@sql2 AS NVARCHAR(MAX))
EXEC sp_executesql @sql3
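The underlying pitfall is worth spelling out: concatenating non-MAX string variables or literals produces a non-MAX result that is silently truncated, even when assigned to an NVARCHAR(MAX) variable, so long dynamic SQL gets cut off. A minimal illustration:

```sql
-- @short is a plain NVARCHAR(4000); concatenating two of them yields a
-- non-MAX intermediate result, truncated at 4000 characters before the
-- assignment to the MAX variable ever happens.
DECLARE @short NVARCHAR(4000) = REPLICATE(N'x', 4000);
DECLARE @bad   NVARCHAR(MAX)  = @short + @short;
-- Casting one operand to NVARCHAR(MAX) promotes the whole expression:
DECLARE @good  NVARCHAR(MAX)  = CAST(@short AS NVARCHAR(MAX)) + @short;
SELECT LEN(@bad) AS bad_len, LEN(@good) AS good_len;  -- 4000 vs 8000
```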

how to batch edit triggers?

I have many triggers for which I'd like to build a list of tables using a wildcard, then update the existing triggers on them by adding some column names to each trigger. The column names will be the same in each trigger, but I'm not clear on how to build the list of tables or how to loop through the list in a single ALTER TRIGGER statement. I assume I'll have to use a cursor....
There is no magic wand to say "add this code to all the triggers" (or any other object type, for that matter).
For many object types, for batch editing you can quickly generate a script for multiple objects using Object Explorer Details and sorting and/or filtering within that view. For example, if you highlight "Stored Procedures" in Object Explorer, they're all listed in Object Explorer Details, and you can select multiple objects, right-click, and Script Stored Procedure as > CREATE To >
Since triggers are nested under tables, there isn't a handy way to do this (nor are triggers an entity type you can select when you right-click a database and choose Tasks > Generate Scripts). But you can pull the scripts from the metadata quite easily (you'll want Results to Text in Management Studio when running this):
SET NOCOUNT ON;
SELECT OBJECT_DEFINITION([object_id])
+ CHAR(13) + CHAR(10) + 'GO' + CHAR(13) + CHAR(10)
FROM sys.triggers
WHERE type = 'TR';
You can take the output, copy and paste it into the top pane, and then, once you have added your new code to each trigger, you'll have a little more work to do, e.g. search/replace 'CREATE TRIGGER' with 'ALTER TRIGGER'. You could do that as part of the query too, but it relies on the creator(s) having used consistent coding conventions. Since some triggers might look like this...
create trigger
... you may have to massage some by hand.
You can also filter the query above if you are only interested in a certain set of tables. For example, to only alter triggers associated with tables that start with Sales you could say:
AND OBJECT_NAME(parent_id) LIKE N'Sales%';
Or only for tables in the Person schema:
AND OBJECT_SCHEMA_NAME(parent_id) = N'Person';
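Putting those filters together with the extraction query, a filtered version might look like this (Sales% and Person are example values; use either filter, or both, as needed):

```sql
SET NOCOUNT ON;
-- Emit the full definition of each matching trigger, followed by GO,
-- ready to paste into a query window and edit.
SELECT OBJECT_DEFINITION([object_id])
     + CHAR(13) + CHAR(10) + 'GO' + CHAR(13) + CHAR(10)
FROM sys.triggers
WHERE type = 'TR'
  AND OBJECT_NAME(parent_id) LIKE N'Sales%'
  AND OBJECT_SCHEMA_NAME(parent_id) = N'Person';
```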
Anyway once you have made all necessary adjustments to the script, you can just run it. A lot easier than expanding every single table and generating a script for those triggers.
In addition to Aaron's suggestion, which worked great on a bunch of complex triggers with an inconsistent object naming convention, I cooked something up so I'd remember what I did in 3 months. Enjoy. Create or alter the SP, then execute it with no params.
CREATE PROCEDURE SP_ALTER_CONTOUR_TRIGS
--sp to bulk edit many triggers at once
--NO ERROR HANDLING!
AS
DECLARE
@sql VARCHAR(500),
@tableName VARCHAR(128),
@triggerName VARCHAR(128),
@tableSchema VARCHAR(128)
DECLARE triggerCursor CURSOR
FOR
SELECT
so_tr.name AS TriggerName,
so_tbl.name AS TableName,
t.TABLE_SCHEMA AS TableSchema
FROM
sysobjects so_tr
INNER JOIN sysobjects so_tbl ON so_tr.parent_obj = so_tbl.id
INNER JOIN INFORMATION_SCHEMA.TABLES t
ON
t.TABLE_NAME = so_tbl.name
WHERE
--here's where you want to build filters to make sure you're
--targeting the trigs you want edited
--BE CAREFUL!
--test the select statement first against sysobjects
--to see that it returns what you expect
so_tr.type = 'TR'
AND so_tbl.name LIKE '%contours'
AND so_tr.name LIKE '%location_id'
ORDER BY
so_tbl.name ASC,
so_tr.name ASC
OPEN triggerCursor
FETCH NEXT FROM triggerCursor
INTO @triggerName, @tableName, @tableSchema
WHILE ( @@FETCH_STATUS = 0 )
BEGIN
--insert alter statement below
--watch out for carriage returns and open and close quotes!
--seems to act finicky if you don't use a schema-bound naming convention
SET @sql = '
ALTER TRIGGER [' + @tableSchema + '].['
+ @triggerName + '] ON [' + @tableSchema + '].['
+ @tableName + ']
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT [' + @tableSchema + '].[' + @tableName + ']
(OBJECTID, Contour, Type, Shape, RuleID, Override)
SELECT
a.OBJECTID, a.Contour, a.Type, a.Shape, a.RuleID, a.Override
FROM
(SELECT
OBJECTID, Contour, Type, Shape, RuleID, Override
FROM inserted)
AS a
END
'
PRINT 'Executing Statement - ' + @sql
EXECUTE ( @sql )
FETCH NEXT FROM triggerCursor
INTO @triggerName, @tableName, @tableSchema
END
CLOSE triggerCursor
DEALLOCATE triggerCursor

Best Approach for Reindexing

I am trying to reduce fragmentation in all of the indexes for a database running on SQL Server 2005.
Currently I am trying to use ALTER INDEX in conjunction with sp_MSforeachtable, to apply it to all of the indexes for all of the tables:
sp_MSforeachtable "ALTER INDEX ALL ON ? REBUILD;"
But for some reason this doesn’t always seem to work?
If I try it for a single index, or all of the indexes for a single table then the fragmentation is cleaned up, it just seems to be when I apply it to the whole database that I get problems.
Previously I might have used DBCC DBREINDEX but BOL states it will be removed in the next version of SQL Server, so I don’t want to use it.
Can anyone give me any advice on the best way to tackle cleaning up all of the indexes in a database?
Thanks
If you want to fully automate your SQL Server Index maintenance then I seriously recommend that you check out Michelle Ufford's stored procedure for this.
Index Defrag Script V4.1
It is what I consider to be the best index maintenance script I have ever read.
One of the best features of this script is that you can customize the threshold values used to determine whether to REBUILD or REORGANIZE a given index structure.
It also provides the option to limit the number of CPU cores that are utilized by the procedure. An excellent option if you intend to run the script on a busy live production database.
Warning: As with all internet available code, be sure you test it thoroughly before using in a production environment. You will also most likely want to incorporate your own customisation and features too.
Check out the article and accompanying sample script to handle this task at SQL Fool (Michelle Ufford's website):
http://sqlfool.com/2009/06/index-defrag-script-v30/
This is quite a nice solution to handle this once and for all!
The best practice recommendation is to reorganize your index if you have 5-30% of fragmentation, and only rebuild it if it has more than 30% fragmentation. You can easily use these thresholds or specify your own using this script.
Marc
Or you can use Microsoft's index rebuilding script found here http://msdn.microsoft.com/en-us/library/ms188917.aspx
-- Ensure a USE <databasename> statement has been executed first.
SET NOCOUNT ON;
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @command nvarchar(4000);
-- Conditionally select tables and indexes from the sys.dm_db_index_physical_stats function
-- and convert object and index IDs to names.
SELECT
object_id AS objectid,
index_id AS indexid,
partition_number AS partitionnum,
avg_fragmentation_in_percent AS frag
INTO #work_to_do
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 10.0 AND index_id > 0;
-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR SELECT * FROM #work_to_do;
-- Open the cursor.
OPEN partitions;
-- Loop through the partitions.
WHILE (1=1)
BEGIN;
FETCH NEXT
FROM partitions
INTO @objectid, @indexid, @partitionnum, @frag;
IF @@FETCH_STATUS < 0 BREAK;
SELECT @objectname = QUOTENAME(o.name), @schemaname = QUOTENAME(s.name)
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.object_id = @objectid;
SELECT @indexname = QUOTENAME(name)
FROM sys.indexes
WHERE object_id = @objectid AND index_id = @indexid;
SELECT @partitioncount = COUNT(*)
FROM sys.partitions
WHERE object_id = @objectid AND index_id = @indexid;
-- 30 is an arbitrary decision point at which to switch between reorganizing and rebuilding.
IF @frag < 30.0
SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REORGANIZE';
IF @frag >= 30.0
SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD';
IF @partitioncount > 1
SET @command = @command + N' PARTITION=' + CAST(@partitionnum AS nvarchar(10));
EXEC (@command);
PRINT N'Executed: ' + @command;
END;
-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;
-- Drop the temporary table.
DROP TABLE #work_to_do;
GO
I use this script together with SQL Server Agent to automate the task. Hope this helps.
The safest and most portable way is to drop the indices and to re-add them.
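As a rough sketch of that approach (the index and table names are placeholders):

```sql
-- Dropping and recreating rebuilds the index from scratch; note the table
-- is without the index (and any query-plan benefit it provided) in between,
-- so do this in a maintenance window.
DROP INDEX IX_Orders_CustomerId ON dbo.Orders;
CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);
```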