Instead of Trigger for Insert on all tables - sql

Can someone please provide source code showing how to write a generic INSTEAD OF INSERT trigger for all the tables in a database? I want to run a stored procedure that will create the INSTEAD OF INSERT triggers for all the tables in a db.

Without understanding exactly why you want an INSTEAD OF trigger on every single table, and what else you plan to do in the resulting code aside from inserting the supplied values into the base table (just like what would have happened if there were no INSTEAD OF trigger at all), here is what I came up with. You'll note that it drops the trigger if it already exists, so you can run this multiple times in the same database without "already exists" errors. It ignores IDENTITY, ROWGUIDCOL, computed columns, and TIMESTAMP/ROWVERSION columns.
Finally, at the end I show how you can quickly inspect, instead of execute (which is commented out), the output script (up to 8K), and cast it to XML if you want to see more (up to 64K). There is no guarantee you can return the whole thing in SSMS, depending on how many tables/columns there are. If you want to check it and/or run it manually, you may want to create a table that stores the value - then you can pull it out with an app or what have you (a sketch of that follows the script below). If you want this to execute automatically, you can follow Yuck's point: save this as a stored procedure and create a DDL trigger that responds to certain DDL events (CREATE TABLE etc.); a minimal DDL trigger sketch follows the caveats below.
SET NOCOUNT ON;
DECLARE
@cr char(2) = char(13) + char(10),
@t char(1) = char(9),
@s nvarchar(max) = N'';
;WITH t AS
(
SELECT [object_id],
s = OBJECT_SCHEMA_NAME([object_id]),
n = name
FROM sys.tables WHERE is_ms_shipped = 0
)
SELECT @s += 'IF OBJECT_ID(N''dbo.ioTrigger_' + t.s + '_' + t.n + ''')
IS NOT NULL
BEGIN
DROP TRIGGER dbo.[ioTrigger_' + t.s + '_' + t.n + '];
END
G' + 'O
CREATE TRIGGER ioTrigger_' + t.s + '_' + t.n + '
ON ' + QUOTENAME(t.s) + '.' + QUOTENAME(t.n) + '
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
-- surely you must want to put some other code here?
INSERT ' + QUOTENAME(t.s) + '.' + QUOTENAME(t.n) + '
(
' +
(
SELECT @t + @t + name + ',' + @cr
FROM sys.columns AS c
WHERE c.[object_id] = t.[object_id]
AND is_identity = 0
AND is_rowguidcol = 0
AND is_computed = 0
AND system_type_id <> 189
FOR XML PATH(''), TYPE
).value(N'./text()[1]', N'nvarchar(max)') + '--'
+ @cr + @t + ')'
+ @cr + @t + 'SELECT
' +
(
SELECT @t + @t + name + ',' + @cr
FROM sys.columns AS c
WHERE c.[object_id] = t.[object_id]
AND is_identity = 0
AND is_rowguidcol = 0
AND is_computed = 0
AND system_type_id <> 189
FOR XML PATH(''), TYPE
).value(N'./text()[1]', N'nvarchar(max)') + '--'
+ @cr + @t + 'FROM
inserted;
END' + @cr + 'G' + 'O' + @cr
FROM t
ORDER BY t.s, t.n;
SELECT @s = REPLACE(@s, ',' + @cr + '--' + @cr, @cr);
-- you can inspect at least part of the script by running the
-- following in text mode:
SELECT @s;
-- if you want to see more of the whole thing (but not necessarily
-- the whole thing), run this in grid mode and click on the result:
SELECT CONVERT(XML, @s);
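If you'd rather persist the script and pull it out in full from an application, as mentioned above, one option (a sketch only; the table name is made up) is to append something like this to the batch above:
-- Sketch only: store the generated script so it can be retrieved in full
-- later. The table name dbo.GeneratedTriggerScripts is an assumption.
IF OBJECT_ID(N'dbo.GeneratedTriggerScripts') IS NULL
BEGIN
    CREATE TABLE dbo.GeneratedTriggerScripts
    (
        ID int IDENTITY(1,1) PRIMARY KEY,
        CreatedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        Script nvarchar(max) NOT NULL
    );
END
INSERT dbo.GeneratedTriggerScripts (Script) VALUES (@s);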
A couple of caveats:
I don't deal with sparse columns, XML collections, FILESTREAM, etc., so if you have fancy tables you might run into complications with some of those features.
The name of the trigger isn't really "protected" - you could have a schema called foo, another schema called foo_bar, and then a table in foo called bar_none and a table in foo_bar called none. This would lead to a duplicate trigger name because I use an underscore as the separator. I complained about this issue with CDC, but Microsoft closed the bug as "won't fix." Just something to keep in mind if you happen to use schemas with underscores.
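As mentioned above, if you want the trigger generation to run automatically, one option is to save the script as a stored procedure and fire it from a database-level DDL trigger. A minimal sketch follows; the procedure name dbo.GenerateInsteadOfInsertTriggers is made up, and note that the generated script contains GO separators, which EXEC cannot run directly, so the procedure would need to split on GO and execute each batch individually.
-- Sketch only: regenerate the INSTEAD OF INSERT triggers whenever a table is
-- created or altered. Assumes the script above has been wrapped in a stored
-- procedure named dbo.GenerateInsteadOfInsertTriggers (a made-up name) that
-- splits the generated text on GO and runs each batch separately.
CREATE TRIGGER ddl_RegenerateIoTriggers
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE
AS
BEGIN
    SET NOCOUNT ON;
    EXEC dbo.GenerateInsteadOfInsertTriggers;
END
GO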

You can only create database triggers for DDL statements in SQL Server 2008+.
If you need DML triggers (INSTEAD OF INSERT) on every table in a database, you're going to have to either manage them yourself or create a database-level DDL trigger that is responsible for creating or updating the INSTEAD OF INSERT triggers on any CREATE TABLE or ALTER TABLE statements. This could get hairy and will almost certainly require dynamic SQL.
Out of curiosity, are you trying to build some sort of auditing mechanism?

Related

SQL Script for creating test sample data from source table

I've created the script below to be able to quickly create a minimal reproducible example for other questions in general.
This script uses an original table and generates the following PRINT statements:
DROP and CREATE a temp table with structure matching the original table
INSERT INTO statement using examples from the actual data
I can just add the original table name into the variable listed, along with the number of sample records required from the table. When I run it, it generates all of the statements needed in the Messages window in SSMS. Then I can just copy and paste those statements into my posted questions, so those answering have something to work with.
I know that you can get similar results in SSMS through Tasks>Generate Scripts, but this gets things down to the minimal amount of code that's useful for posting here without all of the unnecessary info that SSMS generates automatically. It's just a quick way to create a reproduced version of a simple table with actual sample data in it.
Unfortunately the one scenario that doesn't work is if I run it on very wide tables. It seems to fail on the last STRING_AGG() query where it's building the VALUES portion of the INSERT. When it runs on wide tables, it returns NULL.
Any suggestions to correct this?
EDIT: I figured out the issue I was having with UNIQUEIDENTIFIER columns and revised the query below. Also included an initial check to make sure the table actually exists.
/* ---------------------------------------
-- For creating minimal reproducible examples
-- based on original table and data,
-- builds the following statements
-- -- CREATE temp table with structure matching original table
-- -- INSERT statements based on actual data
--
-- Note: May not work for very wide tables due to limitations on
-- PRINT statements
*/ ---------------------------------------
DECLARE @tableName NVARCHAR(MAX) = 'testTable', -- original table name HERE
@recordCount INT = 5, -- top number of records to insert to temp table
@buildStmt NVARCHAR(MAX),
@insertStmt NVARCHAR(MAX),
@valuesStmt NVARCHAR(MAX),
@insertCol NVARCHAR(MAX),
@strAgg NVARCHAR(MAX),
@insertOutput NVARCHAR(MAX)
IF (EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @tableName))
BEGIN
-- build DROP and CREATE statements for temp table from original table
SET @buildStmt = 'IF OBJECT_ID(''tempdb..#' + @tableName + ''') IS NOT NULL DROP TABLE #' + @tableName + CHAR(10) + CHAR(10) +
'CREATE TABLE #' + @tableName + ' (' + CHAR(10)
SELECT @buildStmt = @buildStmt + ' ' + C.[Name] + ' ' +
T.[Name] +
CASE WHEN T.[Name] IN ('varchar','nvarchar','char','nchar') THEN '(' + CAST(C.[Length] AS VARCHAR) + ') ' ELSE ' ' END +
'NULL,' + CHAR(10)
FROM sysobjects O
JOIN syscolumns C ON C.id = O.id
JOIN systypes T ON T.xusertype = C.xusertype
WHERE O.[name] = @TableName
ORDER BY C.ColID
SET @buildStmt = SUBSTRING(@buildStmt,1,LEN(@buildStmt) - 2) + CHAR(10) + ')' + CHAR(10)
PRINT @buildStmt
-- build INSERT INTO statement from original table
SELECT @insertStmt = 'INSERT INTO #' + @tableName + ' (' +
STUFF ((
SELECT ', [' + C.[Name] + ']'
FROM sysobjects O
JOIN syscolumns C ON C.id = O.id
WHERE O.[name] = @TableName
ORDER BY C.ColID
FOR XML PATH('')), 1, 1, '')
+')'
PRINT @insertStmt
-- build VALUES portion of INSERT from data in original table
SELECT @insertCol = STUFF ((
SELECT '''''''''+CONVERT(NVARCHAR(200),' +
'[' + C.[Name] + ']' +
')+'''''',''+'
FROM sysobjects O
JOIN syscolumns C ON C.id = O.id
JOIN systypes T ON T.xusertype = C.xusertype
WHERE O.[name] = @TableName
ORDER BY C.ColID
FOR XML PATH('')), 1, 1, '')
SET @insertCol = SUBSTRING(@insertCol,1,LEN(@insertCol) - 1)
SELECT @strAgg = ';WITH CTE AS (SELECT TOP(' + CONVERT(VARCHAR,@recordCount) + ') * FROM ' + @tableName + ') ' +
' SELECT @valuesStmt = STRING_AGG(CAST(''' + @insertCol + ' AS NVARCHAR(MAX)),''), ('') ' +
' FROM CTE'
EXEC sp_executesql @strAgg, N'@valuesStmt NVARCHAR(MAX) OUTPUT', @valuesStmt OUTPUT
PRINT 'VALUES (' + REPLACE(SUBSTRING(@valuesStmt,1,LEN(@valuesStmt) - 1),',)',')') + ')'
END
ELSE
BEGIN
PRINT 'Table does NOT exist'
END
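If the PRINT limitation mentioned in the header comment bites on very wide tables, one rough workaround (a sketch, not part of the original script) is to print the long string in slices instead of one PRINT; note that slice boundaries can split a line, so the output may need minor cleanup before pasting:
-- Sketch only: PRINT truncates Unicode strings at 4000 characters, so emit
-- the VALUES string in 4000-character slices.
DECLARE @pos INT = 1;
WHILE @pos <= LEN(@valuesStmt)
BEGIN
    PRINT SUBSTRING(@valuesStmt, @pos, 4000);
    SET @pos = @pos + 4000;
END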

STUFF Function Limitations with Dynamic SQL

I am using dynamic SQL to concatenate a bunch of scripts that I have loaded into a table. It will take those individual scripts, concatenate them into one script, and then put that finished script into another table. It has worked beautifully so far, concatenating about 17 scripts successfully. But yesterday, I tried 80. And there is the issue. It is capping out at 78,165 characters every time I try it. The guts of it go like this:
SET @FinScriptSQL
= N'SELECT script, row_number( ) over( order by scriptTypeId ) as execOrder INTO #' + @FinScriptName
+ N' from dbo.' + @FinScriptName
+ N';
DECLARE @tsqlStmt NVARCHAR(MAX);
SELECT @tsqlStmt = STUFF(
(
SELECT (CHAR(13) + CHAR(10) + script) AS [text()]
FROM #' + @FinScriptName
+ N'
ORDER BY execOrder
FOR XML PATH(''''), TYPE).value(''.'', ''nvarchar( max )''),
1 ,
1 ,
NULL
);
CREATE TABLE dbo.' + @FinScriptName + N'_FinishedScripts
(
Name sysname NULL,
FinishedScript NVARCHAR(MAX) NULL
);
INSERT INTO dbo.' + @FinScriptName + N'_FinishedScripts
SELECT ''' + @FinScriptName + N''', @tsqlStmt;'
PRINT @FinScriptSQL;
EXECUTE sp_executesql @FinScriptSQL;
The column of the table where the individual scripts are is an NVARCHAR(MAX), as is the destination column of the new table. I thought it might be a setting in SQL Server, so I have already ensured that the maximum characters retrieved is wide open. Any ideas?
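One thing worth checking first (a sketch, not a diagnosis): whether the shortfall is in the stored data or only in what PRINT and the results pane display, since PRINT caps at 4,000 Unicode characters and SSMS has its own display limits. The stored length is easy to measure directly:
-- Sketch only: compare what is actually stored with what is displayed.
-- The _FinishedScripts suffix comes from the dynamic SQL above; the
-- 'YourFinScript' prefix is a placeholder for whatever @FinScriptName was.
SELECT Name,
       LEN(FinishedScript)        AS stored_chars,
       DATALENGTH(FinishedScript) AS stored_bytes
FROM dbo.YourFinScript_FinishedScripts;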

SQL Server - Create single Trigger that runs for ALL tables in the database

I'm trying to create a trigger in SQL Server 2005 that runs on INSERT, UPDATE and DELETE, but for ALL tables in the database (for auditing purposes). Is it possible to do this?
Currently we have separate triggers for every table in the database, and since they all do the same thing, I'm looking to consolidate them into a single trigger.
I know it's possible to create Database triggers, but the only events I can hook into seem to be for schema changes to tables, sprocs etc. but not for inserts and updates to records, unless I'm missing something?
Generic table triggers don't exist in SQL so you'll need to loop through each of your tables (INFORMATION_SCHEMA.Tables) and create your triggers for each using dynamic SQL.
(Or come up with another simple process to create triggers for each table.)
SET NOCOUNT ON;
DECLARE
@cr VARCHAR(2) = CHAR(13) + CHAR(10),
@t VARCHAR(1) = CHAR(9),
@s NVARCHAR(MAX) = N'';
;WITH t AS
(
SELECT [object_id],
s = OBJECT_SCHEMA_NAME([object_id]),
n = OBJECT_NAME([object_id])
FROM sys.tables WHERE is_ms_shipped = 0
)
SELECT @s += 'IF OBJECT_ID(''dbo.ioTrigger_' + t.s + '_' + t.n + ''') IS NOT NULL
DROP TRIGGER [dbo].[ioTrigger_' + t.s + '_' + t.n + '];
G' + 'O
CREATE TRIGGER ioTrigger_' + t.s + '_' + t.n + '
ON ' + QUOTENAME(t.s) + '.' + QUOTENAME(t.n) + '
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
-- surely you must want to put some other code here?
INSERT ' + QUOTENAME(t.s) + '.' + QUOTENAME(t.n) + '
(
' +
(
SELECT @t + @t + name + ',' + @cr
FROM sys.columns AS c
WHERE c.[object_id] = t.[object_id]
AND is_identity = 0
AND is_rowguidcol = 0
AND is_computed = 0
AND system_type_id <> 189
FOR XML PATH(''), TYPE
).value('.[1]', 'NVARCHAR(MAX)') + '--'
+ @cr + @t + ')'
+ @cr + @t + 'SELECT
' +
(
SELECT @t + @t + name + ',' + @cr
FROM sys.columns AS c
WHERE c.[object_id] = t.[object_id]
AND is_identity = 0
AND is_rowguidcol = 0
AND is_computed = 0
AND system_type_id <> 189
FOR XML PATH(''), TYPE
).value('.[1]', 'NVARCHAR(MAX)') + '--'
+ @cr + @t + 'FROM
inserted;
END' + @cr + 'G' + 'O' + @cr
FROM t
ORDER BY t.s, t.n;
SELECT @s = REPLACE(@s, ',' + @cr + '--' + @cr, @cr);
-- you can inspect at least part of the script by running the
-- following in text mode:
SELECT @s;
-- if you want to see more of the whole thing (but not necessarily
-- the whole thing), run this in grid mode and click on result:
SELECT CONVERT(XML, @s);

SQL query to return all rows where one or more of the fields contains a certain value

Is it possible to run a select on a table to quickly find out if any (one or more) of the fields contain a certain value?
Or would you have to write out all of the column names in the where clause?
As others have said, you're likely going to have to write all the columns into your WHERE clause, either by hand or programmatically. SQL does not include functionality to do it directly. A better question might be "why do you need to do this?". Needing to use this type of query is possibly a good indicator that your database isn't properly normalized. If you tell us your schema, we may be able to help with that problem too (if it's an actual problem).
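If you do end up generating that WHERE clause programmatically for a single table, here is a rough sketch; the table name and search value are placeholders, and only character-typed columns are considered:
-- Sketch only: build "col1 LIKE @search OR col2 LIKE @search OR ..." from the
-- catalog views for one table's character columns, then run it.
DECLARE @table nvarchar(300) = N'dbo.MyTable',      -- placeholder
        @search nvarchar(100) = N'%staff%',         -- placeholder
        @where nvarchar(max), @sql nvarchar(max);

SELECT @where = STUFF((
    SELECT N' OR ' + QUOTENAME(c.name) + N' LIKE @search'
    FROM sys.columns AS c
    JOIN sys.types AS t ON t.user_type_id = c.user_type_id
    WHERE c.[object_id] = OBJECT_ID(@table)
      AND t.name IN (N'char', N'varchar', N'nchar', N'nvarchar')
    FOR XML PATH(N'')), 1, 4, N'');

IF @where IS NOT NULL
BEGIN
    SET @sql = N'SELECT * FROM ' + @table + N' WHERE ' + @where + N';';
    EXEC sp_executesql @sql, N'@search nvarchar(100)', @search = @search;
END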
Dig this... It will search on all the tables in the db, but you can mod it down to just one table.
/*This script will find any text value in the database*/
/*Output will be directed to the Messages window. Don't forget to look there!!!*/
SET NOCOUNT ON
DECLARE @valuetosearchfor varchar(128), @objectOwner varchar(64)
SET @valuetosearchfor = '%staff%' --should be formatted as a like search
SET @objectOwner = 'dbo'
DECLARE @potentialcolumns TABLE (id int IDENTITY, sql varchar(4000))
INSERT INTO @potentialcolumns (sql)
SELECT
('if exists (select 1 from [' +
[tabs].[table_schema] + '].[' +
[tabs].[table_name] +
'] (NOLOCK) where [' +
[cols].[column_name] +
'] like ''' + @valuetosearchfor + ''' ) print ''SELECT * FROM [' +
[tabs].[table_schema] + '].[' +
[tabs].[table_name] +
'] (NOLOCK) WHERE [' +
[cols].[column_name] +
'] LIKE ''''' + @valuetosearchfor + '''''' +
'''') as 'sql'
FROM information_schema.columns cols
INNER JOIN information_schema.tables tabs
ON cols.TABLE_CATALOG = tabs.TABLE_CATALOG
AND cols.TABLE_SCHEMA = tabs.TABLE_SCHEMA
AND cols.TABLE_NAME = tabs.TABLE_NAME
WHERE cols.data_type IN ('char', 'varchar', 'nchar', 'nvarchar','text','ntext')
AND tabs.table_schema = @objectOwner
AND tabs.TABLE_TYPE = 'BASE TABLE'
ORDER BY tabs.table_catalog, tabs.table_name, cols.ordinal_position
DECLARE @count int
SET @count = (SELECT MAX(id) FROM @potentialcolumns)
PRINT 'Found ' + CAST(@count as varchar) + ' potential columns.'
PRINT 'Beginning scan...'
PRINT ''
PRINT 'These columns contain the values being searched for...'
PRINT ''
DECLARE @iterator int, @sql varchar(4000)
SET @iterator = 1
WHILE @iterator <= (SELECT Max(id) FROM @potentialcolumns)
BEGIN
SET @sql = (SELECT [sql] FROM @potentialcolumns where [id] = @iterator)
IF (@sql IS NOT NULL) and (RTRIM(LTRIM(@sql)) <> '')
BEGIN
--SELECT @sql --use when checking sql output
EXEC (@sql)
END
SET @iterator = @iterator + 1
END
PRINT ''
PRINT 'Scan completed'
I think you'd need to list all the columns in the where clause. I'm far from a SQL wizard though...maybe someone else knows a way.
you will have to write it out
Of course you have to write out all the columns you want to use as criteria.
If you add what programming language you are using and what type of environment you are working in, we can give you a clue or a solution for how to do it dynamically.
I think your question was really how to do this dynamically, depending on what the user of your program fills in on the search form. Am I right?
If not, then give us more information. ;)

How to drop IDENTITY property of column in SQL Server 2005

I want to be able to insert data from a table with an identity column into a temporary table in SQL Server 2005.
The TSQL looks something like:
-- Create empty temp table
SELECT *
INTO #Tmp_MyTable
FROM MyTable
WHERE 1=0
...
WHILE ...
BEGIN
...
INSERT INTO #Tmp_MyTable
SELECT TOP (@n) *
FROM MyTable
...
END
The above code creates #Tmp_MyTable with an identity column, and the insert subsequently fails with the error "An explicit value for the identity column in table '#Tmp_MyTable' can only be specified when a column list is used and IDENTITY_INSERT is ON."
Is there a way in TSQL to drop the identity property of the column in the temporary table without listing all the columns explicitly? I specifically want to use "SELECT *" so that the code will continue to work if new columns are added to MyTable.
I believe dropping and recreating the column will change its position, making it impossible to use SELECT *.
Update:
I've tried using IDENTITY_INSERT as suggested in one response. It's not working - see the repro below. What am I doing wrong?
-- Create test table
CREATE TABLE [dbo].[TestTable](
[ID] [numeric](18, 0) IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NULL,
CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED
(
[ID] ASC
)
)
GO
-- Insert some data
INSERT INTO TestTable
(Name)
SELECT 'One'
UNION ALL
SELECT 'Two'
UNION ALL
SELECT 'Three'
GO
-- Create empty temp table
SELECT *
INTO #Tmp
FROM TestTable
WHERE 1=0
SET IDENTITY_INSERT #Tmp ON -- I also tried OFF / ON
INSERT INTO #Tmp
SELECT TOP 1 * FROM TestTable
SET IDENTITY_INSERT #Tmp OFF
GO
-- Drop test table
DROP TABLE [dbo].[TestTable]
GO
Note that the error message "An explicit value for the identity column in table '#TmpMyTable' can only be specified when a column list is used and IDENTITY_INSERT is ON." - I specifically don't want to use a column list as explained above.
Update 2
Tried the suggestion from Mike but this gave the same error:
-- Create empty temp table
SELECT *
INTO #Tmp
FROM (SELECT
m1.*
FROM TestTable m1
LEFT OUTER JOIN TestTable m2 ON m1.ID=m2.ID
WHERE 1=0
) dt
INSERT INTO #Tmp
SELECT TOP 1 * FROM TestTable
As for why I want to do this: MyTable is a staging table which can contain a large number of rows to be merged into another table. I want to process the rows from the staging table, insert/update my main table, and delete them from the staging table in a loop that processes N rows per transaction. I realize there are other ways to achieve this.
Update 3
I couldn't get Mike's solution to work; however, it suggested the following approach, which does work: prefix with a non-identity column and drop the identity column:
SELECT CAST(1 AS NUMERIC(18,0)) AS ID2, *
INTO #Tmp
FROM TestTable
WHERE 1=0
ALTER TABLE #Tmp DROP COLUMN ID
INSERT INTO #Tmp
SELECT TOP 1 * FROM TestTable
Mike's suggestion to store only the keys in the temporary table is also a good one, though in this specific case there are reasons I prefer to have all columns in the temporary table.
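For reference, here is a rough sketch of the kind of batched loop described in the question, assuming #Tmp_MyTable has already been created without the IDENTITY property (e.g. via the ID2 trick above) and that the staging table has a single key column named MyKey; both names are assumptions:
-- Sketch only: process and delete @n staging rows per transaction.
DECLARE @n int, @rc int
SELECT @n = 1000, @rc = 1

WHILE @rc > 0
BEGIN
    BEGIN TRANSACTION

    -- Copy a batch into the temp table (created without IDENTITY).
    INSERT INTO #Tmp_MyTable
    SELECT TOP (@n) * FROM dbo.MyTable

    SET @rc = @@ROWCOUNT

    -- ... merge #Tmp_MyTable into the main table here ...

    -- Remove the processed rows from the staging table, then clear the batch.
    DELETE s
    FROM dbo.MyTable AS s
    JOIN #Tmp_MyTable AS t ON t.MyKey = s.MyKey

    TRUNCATE TABLE #Tmp_MyTable

    COMMIT TRANSACTION
END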
You could try
SET IDENTITY_INSERT #Tmp_MyTable ON
-- ... do stuff
SET IDENTITY_INSERT #Tmp_MyTable OFF
This will allow you to select into #Tmp_MyTable even though it has an identity column.
But this will not work:
-- Create empty temp table
SELECT *
INTO #Tmp_MyTable
FROM MyTable
WHERE 1=0
...
WHILE ...
BEGIN
...
SET IDENTITY_INSERT #Tmp_MyTable ON
INSERT INTO #Tmp_MyTable
SELECT TOP (@n) *
FROM MyTable
SET IDENTITY_INSERT #Tmp_MyTable OFF
...
END
(results in the error "An explicit value for the identity column in table '#Tmp' can only be specified when a column list is used and IDENTITY_INSERT is ON.")
It seems there is no way without actually dropping the column - but that would change the order of columns as OP mentioned. Ugly hack: Create a new table based on #Tmp_MyTable ...
I suggest you write a stored procedure that creates a temporary table based on a table name (MyTable) with the same columns (in order), but with the identity property missing.
You could use the following code:
select t.name as tablename, typ.name as typename, c.*
from sys.columns c inner join
sys.tables t on c.object_id = t.[object_id] inner join
sys.types typ on c.system_type_id = typ.system_type_id
order by t.name, c.column_id
to get a glimpse of how reflection works in T-SQL. I believe you will have to loop over the columns for the table in question and execute dynamic (hand-crafted, stored in strings and then evaluated) ALTER statements against the generated table.
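A rough sketch of the kind of dynamic statement described above, using sys.columns to build a temp-table definition that simply omits the IDENTITY property; it resolves lengths naively and ignores precision/scale, defaults, and so on, and the table name is a placeholder:
-- Sketch only: generate "CREATE TABLE #Tmp_MyTable (...)" with the same
-- column names and types as dbo.MyTable, but without IDENTITY.
DECLARE @cols nvarchar(max), @ddl nvarchar(max);

SELECT @cols = STUFF((
    SELECT N', ' + QUOTENAME(c.name) + N' ' + t.name
           + CASE WHEN t.name IN (N'char', N'varchar', N'nchar', N'nvarchar')
                  THEN N'(' + CASE WHEN c.max_length = -1 THEN N'max'
                                   WHEN t.name IN (N'nchar', N'nvarchar') THEN CAST(c.max_length / 2 AS nvarchar(10))
                                   ELSE CAST(c.max_length AS nvarchar(10)) END + N')'
                  ELSE N'' END
    FROM sys.columns AS c
    JOIN sys.types AS t ON t.user_type_id = c.user_type_id
    WHERE c.[object_id] = OBJECT_ID(N'dbo.MyTable')
    ORDER BY c.column_id
    FOR XML PATH(N'')), 1, 2, N'');

SET @ddl = N'CREATE TABLE #Tmp_MyTable (' + @cols + N');';

-- Print rather than EXEC: a temp table created inside EXEC() disappears when
-- that inner scope ends, so run the printed statement in your own batch.
PRINT @ddl;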
Would you mind posting such a stored procedure for the rest of the world? This question seems to come up quite a lot in other forums as well...
If you are just processing rows as you describe, wouldn't it be better to just select the top N primary key values into a temp table, like:
CREATE TABLE #KeysToProcess
(
TempID int not null primary key identity(1,1)
,YourKey1 int not null
,YourKey2 int not null
)
INSERT INTO #KeysToProcess (YourKey1,YourKey2)
SELECT TOP n YourKey1,YourKey2 FROM MyTable
The keys should not change very often (I hope), but other columns can change with no harm to doing it this way.
Get the @@ROWCOUNT of the insert and you can do an easy loop on TempID, which will run from 1 to @@ROWCOUNT,
and/or
just join #KeysToProcess back to MyTable and be on your way, with no need to duplicate all the data.
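A minimal sketch of that loop (the key names are the hypothetical ones from the answer above):
-- Sketch only: loop TempID from 1 to the row count of the INSERT above.
-- Declare these before the INSERT, and capture @@ROWCOUNT immediately after it.
DECLARE @rows int, @i int

-- (INSERT INTO #KeysToProcess ... goes here, as shown above)

SELECT @rows = @@ROWCOUNT, @i = 1

WHILE @i <= @rows
BEGIN
    -- process one row, e.g.:
    -- SELECT YourKey1, YourKey2 FROM #KeysToProcess WHERE TempID = @i
    SET @i = @i + 1
END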
This runs fine on my SQL Server 2005, where MyTable.MyKey is an identity column.
-- Create empty temp table
SELECT *
INTO #TmpMike
FROM (SELECT
m1.*
FROM MyTable m1
LEFT OUTER JOIN MyTable m2 ON m1.MyKey=m2.MyKey
WHERE 1=0
) dt
INSERT INTO #TmpMike
SELECT TOP 1 * FROM MyTable
SELECT * from #TmpMike
EDIT
THIS WORKS, with no errors...
-- Create empty temp table
SELECT *
INTO #Tmp_MyTable
FROM (SELECT
m1.*
FROM MyTable m1
LEFT OUTER JOIN MyTable m2 ON m1.KeyValue=m2.KeyValue
WHERE 1=0
) dt
...
WHILE ...
BEGIN
...
INSERT INTO #Tmp_MyTable
SELECT TOP (#n) *
FROM MyTable
...
END
however, what is your real problem? Why do you need to loop while inserting "*" into this temp table? You may be able to shift strategy and come up with a much better algorithm overall.
EDIT: Toggling IDENTITY_INSERT as suggested by Daren is certainly the more elegant approach; in my case I needed to eliminate the identity column so that I could reinsert selected data into the source table.
The way that I addressed this was to create the temp table just as you do, explicitly drop the identity column, and then dynamically build the SQL so that I have a column list that excludes the identity column (as in your case, so the proc would still work if there were changes to the schema), and then execute the SQL. Here's a sample:
declare @ret int, @SelectList nvarchar(max)
Select * into #sometemp from sometable
Where
id = @SomeVariable
Alter Table #sometemp Drop column SomeIdentity
Select @SelectList = ''
Select @SelectList = @SelectList
+ Coalesce( '[' + Column_name + ']' + ', ' ,'')
from information_schema.columns
where table_name = 'sometable'
and Column_Name <> 'SomeIdentity'
Set @SelectList = 'Insert into sometable ('
+ Left(@SelectList, Len(@SelectList) -1) + ')'
Set @SelectList = @SelectList
+ ' Select * from #sometemp '
exec @ret = sp_executesql @SelectList
I wrote this procedure as a compilation of many answers, to automatically and quickly drop a column's identity property:
CREATE PROCEDURE dbo.sp_drop_table_identity @tableName VARCHAR(256) AS
BEGIN
DECLARE @sql VARCHAR (4096);
DECLARE @sqlTableConstraints VARCHAR (4096);
DECLARE @tmpTableName VARCHAR(256) = @tableName + '_noident_temp';
BEGIN TRANSACTION
-- 1) Create temporary table with identical structure except identity
-- Idea borrowed from https://stackoverflow.com/questions/21547/in-sql-server-how-do-i-generate-a-create-table-statement-for-a-given-table
-- modified to omit Identity and honor all constraints, not primary key only!
SELECT
@sql = 'CREATE TABLE [' + so.name + '_noident_temp] (' + o.list + ')'
+ ' ' + j.list
FROM sysobjects so
CROSS APPLY (
SELECT
' [' + column_name + '] '
+ data_type
+ CASE data_type
WHEN 'sql_variant' THEN ''
WHEN 'text' THEN ''
WHEN 'ntext' THEN ''
WHEN 'xml' THEN ''
WHEN 'decimal' THEN '(' + CAST(numeric_precision as VARCHAR) + ', ' + CAST(numeric_scale as VARCHAR) + ')'
ELSE COALESCE('(' + CASE WHEN character_maximum_length = -1 THEN 'MAX' ELSE CAST(character_maximum_length as VARCHAR) END + ')', '')
END
+ ' '
/* + case when exists ( -- Identity skip
select id from syscolumns
where object_name(id)=so.name
and name=column_name
and columnproperty(id,name,'IsIdentity') = 1
) then
'IDENTITY(' +
cast(ident_seed(so.name) as varchar) + ',' +
cast(ident_incr(so.name) as varchar) + ')'
else ''
end + ' ' */
+ CASE WHEN IS_NULLABLE = 'No' THEN 'NOT ' ELSE '' END
+ 'NULL'
+ CASE WHEN information_schema.columns.column_default IS NOT NULL THEN ' DEFAULT ' + information_schema.columns.column_default ELSE '' END
+ ','
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE table_name = so.name
ORDER BY ordinal_position
FOR XML PATH('')
) o (list)
CROSS APPLY(
SELECT
CHAR(10) + 'ALTER TABLE ' + @tableName + '_noident_temp ADD ' + LEFT(alt, LEN(alt)-1)
FROM(
SELECT
CHAR(10)
+ ' CONSTRAINT ' + tc.constraint_name + '_ni ' + tc.constraint_type + ' (' + LEFT(c.list, LEN(c.list)-1) + ')'
+ COALESCE(CHAR(10) + r.list, ', ')
FROM
information_schema.table_constraints tc
CROSS APPLY(
SELECT
'[' + kcu.column_name + '], '
FROM
information_schema.key_column_usage kcu
WHERE
kcu.constraint_name = tc.constraint_name
ORDER BY
kcu.ordinal_position
FOR XML PATH('')
) c (list)
OUTER APPLY(
-- https://stackoverflow.com/questions/3907879/sql-server-howto-get-foreign-key-reference-from-information-schema
SELECT
' REFERENCES [' + kcu1.constraint_schema + '].' + '[' + kcu2.table_name + ']' + '([' + kcu2.column_name + ']) '
+ CHAR(10)
+ ' ON DELETE ' + rc.delete_rule
+ CHAR(10)
+ ' ON UPDATE ' + rc.update_rule + ', '
FROM information_schema.referential_constraints as rc
JOIN information_schema.key_column_usage as kcu1 ON (kcu1.constraint_catalog = rc.constraint_catalog AND kcu1.constraint_schema = rc.constraint_schema AND kcu1.constraint_name = rc.constraint_name)
JOIN information_schema.key_column_usage as kcu2 ON (kcu2.constraint_catalog = rc.unique_constraint_catalog AND kcu2.constraint_schema = rc.unique_constraint_schema AND kcu2.constraint_name = rc.unique_constraint_name AND kcu2.ordinal_position = KCU1.ordinal_position)
WHERE
kcu1.constraint_catalog = tc.constraint_catalog AND kcu1.constraint_schema = tc.constraint_schema AND kcu1.constraint_name = tc.constraint_name
) r (list)
WHERE tc.table_name = @tableName
FOR XML PATH('')
) a (alt)
) j (list)
WHERE
xtype = 'U'
AND name NOT IN ('dtproperties')
AND so.name = @tableName
SELECT @sql as '1) @sql';
EXECUTE(@sql);
-- 2) Obtain current back references on our table from others to reenable it later
-- https://stackoverflow.com/questions/3907879/sql-server-howto-get-foreign-key-reference-from-information-schema
SELECT
@sqlTableConstraints = (
SELECT
'ALTER TABLE [' + kcu1.constraint_schema + '].' + '[' + kcu1.table_name + ']'
+ ' ADD CONSTRAINT ' + kcu1.constraint_name + '_ni FOREIGN KEY ([' + kcu1.column_name + '])'
+ CHAR(10)
+ ' REFERENCES [' + kcu2.table_schema + '].[' + kcu2.table_name + ']([' + kcu2.column_name + '])'
+ CHAR(10)
+ ' ON DELETE ' + rc.delete_rule
+ CHAR(10)
+ ' ON UPDATE ' + rc.update_rule + ' '
FROM information_schema.referential_constraints as rc
JOIN information_schema.key_column_usage as kcu1 ON (kcu1.constraint_catalog = rc.constraint_catalog AND kcu1.constraint_schema = rc.constraint_schema AND kcu1.constraint_name = rc.constraint_name)
JOIN information_schema.key_column_usage as kcu2 ON (kcu2.constraint_catalog = rc.unique_constraint_catalog AND kcu2.constraint_schema = rc.unique_constraint_schema AND kcu2.constraint_name = rc.unique_constraint_name AND kcu2.ordinal_position = KCU1.ordinal_position)
WHERE
kcu2.table_name = @tableName
FOR XML PATH('')
);
SELECT @sqlTableConstraints as '8) @sqlTableConstraints';
-- Execute at end
-- 3) Drop outer references for switch (structure must be identical: http://msdn.microsoft.com/en-gb/library/ms191160.aspx) and rename table
SELECT
@sql = (
SELECT
' ALTER TABLE [' + kcu1.constraint_schema + '].' + '[' + kcu1.table_name + '] DROP CONSTRAINT ' + kcu1.constraint_name
FROM information_schema.referential_constraints as rc
JOIN information_schema.key_column_usage as kcu1 ON (kcu1.constraint_catalog = rc.constraint_catalog AND kcu1.constraint_schema = rc.constraint_schema AND kcu1.constraint_name = rc.constraint_name)
JOIN information_schema.key_column_usage as kcu2 ON (kcu2.constraint_catalog = rc.unique_constraint_catalog AND kcu2.constraint_schema = rc.unique_constraint_schema AND kcu2.constraint_name = rc.unique_constraint_name AND kcu2.ordinal_position = KCU1.ordinal_position)
WHERE
kcu2.table_name = @tableName
FOR XML PATH('')
);
SELECT @sql as '3) @sql'
EXECUTE (@sql);
-- 4) Switch partition
-- http://www.calsql.com/2012/05/removing-identity-property-taking-more.html
SET @sql = 'ALTER TABLE ' + @tableName + ' switch partition 1 to ' + @tmpTableName;
SELECT @sql as '4) @sql';
EXECUTE(@sql);
-- 5) Rename real old table to bak
SET @sql = 'EXEC sp_rename ' + @tableName + ', ' + @tableName + '_bak';
SELECT @sql as '5) @sql';
EXECUTE(@sql);
-- 6) Rename temp table to real
SET @sql = 'EXEC sp_rename ' + @tmpTableName + ', ' + @tableName;
SELECT @sql as '6) @sql';
EXECUTE(@sql);
-- 7) Drop bak table
SET @sql = 'DROP TABLE ' + @tableName + '_bak';
SELECT @sql as '7) @sql';
EXECUTE(@sql);
-- 8) Recreate the constraints dropped earlier
SELECT @sqlTableConstraints as '8) @sqlTableConstraints';
EXECUTE(@sqlTableConstraints);
-- This may still fail if there are references from objects created WITH CHECK OPTION;
-- those may need to be recreated - https://stackoverflow.com/questions/1540988/sql-2005-force-table-rename-that-has-dependencies
COMMIT
END
Usage is pretty simple:
EXEC sp_drop_table_identity @tableName = 'some_very_big_table'
Benefits and limitations:
It uses the SWITCH PARTITION statement (applicable to non-partitioned tables too) for a fast move without a full data copy. This also imposes some conditions on when it is applicable.
It makes an on-the-fly copy of the table structure without the identity property. I also posted that part separately, and it may need tuning for less trivial structures such as compound fields (it covers my needs).
If the table is referenced by schema-bound objects or objects created WITH CHECK OPTION (stored procedures, views), the switch is prevented (see the last comment in the code). That could additionally be scripted to temporarily drop such bindings; I have not done that yet.
All feedback welcome.
The most efficient way to drop identity columns (especially in large databases) on SQL Server is to modify the DDL metadata directly. On SQL Server versions older than 2005 this can be done with:
sp_configure 'allow update', 1
go
reconfigure with override
go
update syscolumns set colstat = 0 --turn off bit 1 which indicates identity column
where id = object_id('table_name') and name = 'column_name'
go
exec sp_configure 'allow update', 0
go
reconfigure with override
go
On SQL Server 2005+ the 'allow updates' option no longer has any effect (ad hoc updates to system catalogs are not allowed), but you can execute them when the SQL Server instance is started in single-user mode (start the instance with the -m flag, i.e. "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlservr.exe -m", making sure to run as Administrator) and you connect over the Dedicated Admin Connection (from SQL Server Management Studio connect with the ADMIN: prefix, i.e. ADMIN:MyDatabase). The column metadata is stored in the sys.syscolpars internal table (not visible without the DAC):
use myDatabase
update sys.syscolpars set status = 1, idtval = null -- status=1 - primary key, idtval=null - remove identity data
where id = object_id('table_name') AND name = 'column_name'
More on this approach on this blog