I want to be able to insert data from a table with an identity column into a temporary table in SQL Server 2005.
The TSQL looks something like:
-- Create empty temp table
SELECT *
INTO #Tmp_MyTable
FROM MyTable
WHERE 1=0
...
WHILE ...
BEGIN
...
INSERT INTO #Tmp_MyTable
SELECT TOP (@n) *
FROM MyTable
...
END
The above code creates #Tmp_MyTable with an identity column, and the insert subsequently fails with the error "An explicit value for the identity column in table '#Tmp_MyTable' can only be specified when a column list is used and IDENTITY_INSERT is ON."
Is there a way in TSQL to drop the identity property of the column in the temporary table without listing all the columns explicitly? I specifically want to use "SELECT *" so that the code will continue to work if new columns are added to MyTable.
I believe dropping and recreating the column will change its position, making it impossible to use SELECT *.
Update:
I've tried using IDENTITY_INSERT as suggested in one response. It's not working - see the repro below. What am I doing wrong?
-- Create test table
CREATE TABLE [dbo].[TestTable](
[ID] [numeric](18, 0) IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NULL,
CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED
(
[ID] ASC
)
)
GO
-- Insert some data
INSERT INTO TestTable
(Name)
SELECT 'One'
UNION ALL
SELECT 'Two'
UNION ALL
SELECT 'Three'
GO
-- Create empty temp table
SELECT *
INTO #Tmp
FROM TestTable
WHERE 1=0
SET IDENTITY_INSERT #Tmp ON -- I also tried OFF / ON
INSERT INTO #Tmp
SELECT TOP 1 * FROM TestTable
SET IDENTITY_INSERT #Tmp OFF
GO
-- Drop test table
DROP TABLE [dbo].[TestTable]
GO
Note the error message: "An explicit value for the identity column in table '#Tmp' can only be specified when a column list is used and IDENTITY_INSERT is ON." As explained above, I specifically don't want to use a column list.
Update 2
Tried the suggestion from Mike but this gave the same error:
-- Create empty temp table
SELECT *
INTO #Tmp
FROM (SELECT
m1.*
FROM TestTable m1
LEFT OUTER JOIN TestTable m2 ON m1.ID=m2.ID
WHERE 1=0
) dt
INSERT INTO #Tmp
SELECT TOP 1 * FROM TestTable
As for why I want to do this: MyTable is a staging table which can contain a large number of rows to be merged into another table. I want to process the rows from the staging table, insert/update my main table, and delete them from the staging table in a loop that processes N rows per transaction. I realize there are other ways to achieve this.
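For what it's worth, here is a minimal sketch of that kind of batching loop, assuming a hypothetical key column StagingKey on MyTable and a batch size held in @n (both names are illustrative, not from the original code):
DECLARE @n int, @rows int
SET @n = 1000
SET @rows = 1
WHILE @rows > 0
BEGIN
    BEGIN TRAN
    -- stage the next batch of rows
    INSERT INTO #Tmp_MyTable
    SELECT TOP (@n) * FROM MyTable
    SET @rows = @@ROWCOUNT
    -- ... insert/update the main table from #Tmp_MyTable here ...
    -- remove the processed rows from the staging table
    DELETE m
    FROM MyTable m
    JOIN #Tmp_MyTable t ON t.StagingKey = m.StagingKey -- hypothetical key column
    COMMIT TRAN
    -- empty the work table before the next batch
    TRUNCATE TABLE #Tmp_MyTable
END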
Update 3
I couldn't get Mike's solution to work, but it suggested the following approach, which does work: prepend a non-identity column and drop the identity column:
SELECT CAST(1 AS NUMERIC(18,0)) AS ID2, *
INTO #Tmp
FROM TestTable
WHERE 1=0
ALTER TABLE #Tmp DROP COLUMN ID
INSERT INTO #Tmp
SELECT TOP 1 * FROM TestTable
Mike's suggestion to store only the keys in the temporary table is also a good one, though in this specific case there are reasons I prefer to have all columns in the temporary table.
You could try
SET IDENTITY_INSERT #Tmp_MyTable ON
-- ... do stuff
SET IDENTITY_INSERT #Tmp_MyTable OFF
This will allow you to insert explicit identity values into #Tmp_MyTable even though it has an identity column.
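For completeness, IDENTITY_INSERT only takes effect when the INSERT also names its columns explicitly, identity column included. A minimal sketch (the column names ID and Name are illustrative):
SET IDENTITY_INSERT #Tmp_MyTable ON

INSERT INTO #Tmp_MyTable (ID, Name) -- explicit column list, including the identity column
SELECT TOP 1 ID, Name
FROM MyTable

SET IDENTITY_INSERT #Tmp_MyTable OFF
Of course, that defeats the goal of avoiding a column list, which is exactly what the error message is complaining about.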
But this will not work:
-- Create empty temp table
SELECT *
INTO #Tmp_MyTable
FROM MyTable
WHERE 1=0
...
WHILE ...
BEGIN
...
SET IDENTITY_INSERT #Tmp_MyTable ON
INSERT INTO #Tmp_MyTable
SELECT TOP (@n) *
FROM MyTable
SET IDENTITY_INSERT #Tmp_MyTable OFF
...
END
(results in the error "An explicit value for the identity column in table '#Tmp_MyTable' can only be specified when a column list is used and IDENTITY_INSERT is ON.")
It seems there is no way without actually dropping the column - but that would change the order of columns as OP mentioned. Ugly hack: Create a new table based on #Tmp_MyTable ...
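A sketch of that hack, relying on the documented rule that SELECT ... INTO does not carry the IDENTITY property over when the source involves a UNION or a join; the table name #Tmp_NoIdent is illustrative:
SELECT *
INTO #Tmp_NoIdent
FROM (
    SELECT * FROM #Tmp_MyTable
    UNION ALL
    SELECT * FROM #Tmp_MyTable WHERE 1 = 0 -- returns no rows; only here to stop the identity property propagating
) AS src

-- #Tmp_NoIdent now has the same columns and data, but no identity column, so plain INSERT ... SELECT * works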
I suggest you write a stored procedure that creates a temporary table based on a table name (MyTable) with the same columns (in order), but with the identity property missing.
You could use following code:
select t.name as tablename, typ.name as typename, c.*
from sys.columns c inner join
sys.tables t on c.object_id = t.[object_id] inner join
sys.types typ on c.system_type_id = typ.system_type_id
order by t.name, c.column_id
to get a sense of how schema metadata can be read in T-SQL. You will likely have to loop over the columns of the table in question and execute dynamic (hand-crafted, stored in strings and then evaluated) ALTER statements against the generated table.
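As a rough sketch of that dynamic approach (assuming the table name is held in a variable @tableName; this is not a full stored procedure):
DECLARE @tableName sysname, @columnList nvarchar(max)
SET @tableName = 'MyTable'

-- build a comma-separated list of all columns except identity columns
SELECT @columnList = STUFF((
    SELECT ', ' + QUOTENAME(c.name)
    FROM sys.columns c
    WHERE c.object_id = OBJECT_ID(@tableName)
      AND c.is_identity = 0
    ORDER BY c.column_id
    FOR XML PATH('')), 1, 2, '')

PRINT @columnList -- splice this list into dynamically built CREATE TABLE / INSERT / ALTER statements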
Would you mind posting such a stored procedure for the rest of the world? This question seems to come up quite a lot in other forums as well...
If you are just processing rows as you describe, wouldn't it be better to just select the top N primary key values into a temp table like:
CREATE TABLE #KeysToProcess
(
TempID int not null primary key identity(1,1)
,YourKey1 int not null
,YourKey2 int not null
)
INSERT INTO #KeysToProcess (YourKey1,YourKey2)
SELECT TOP n YourKey1,YourKey2 FROM MyTable
The keys should not change very often (I hope), but other columns can change with no harm to doing it this way.
Capture the @@ROWCOUNT of the insert and you can do an easy loop on TempID, which will run from 1 to @@ROWCOUNT (sketched below),
and/or
just join #KeysToProcess to your MyKeys table and be on your way, with no need to duplicate all the data.
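A sketch of that loop, assuming the key columns really are YourKey1 and YourKey2 as above and a batch size of 1000:
DECLARE @rowCount int, @i int

INSERT INTO #KeysToProcess (YourKey1, YourKey2)
SELECT TOP 1000 YourKey1, YourKey2 FROM MyTable

SET @rowCount = @@ROWCOUNT
SET @i = 1

WHILE @i <= @rowCount
BEGIN
    -- fetch the keys for this iteration and do the per-row work here
    SELECT YourKey1, YourKey2
    FROM #KeysToProcess
    WHERE TempID = @i

    SET @i = @i + 1
END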
This runs fine on my SQL Server 2005, where MyTable.MyKey is an identity column.
-- Create empty temp table
SELECT *
INTO #TmpMike
FROM (SELECT
m1.*
FROM MyTable m1
LEFT OUTER JOIN MyTable m2 ON m1.MyKey=m2.MyKey
WHERE 1=0
) dt
INSERT INTO #TmpMike
SELECT TOP 1 * FROM MyTable
SELECT * from #TmpMike
EDIT
THIS WORKS, with no errors...
-- Create empty temp table
SELECT *
INTO #Tmp_MyTable
FROM (SELECT
m1.*
FROM MyTable m1
LEFT OUTER JOIN MyTable m2 ON m1.KeyValue=m2.KeyValue
WHERE 1=0
) dt
...
WHILE ...
BEGIN
...
INSERT INTO #Tmp_MyTable
SELECT TOP (@n) *
FROM MyTable
...
END
However, what is your real problem? Why do you need to loop while inserting "*" into this temp table? You may be able to shift strategy and come up with a much better algorithm overall.
EDIT: Toggling IDENTITY_INSERT as suggested by Daren is certainly the more elegant approach; in my case I needed to eliminate the identity column so that I could reinsert the selected data into the source table.
The way I addressed this was to create the temp table just as you do, explicitly drop the identity column, and then dynamically build the SQL so that I have a column list that excludes the identity column (as in your case, so the proc still works if the schema changes), and then execute the SQL. Here's a sample:
declare @ret int
declare @SelectList nvarchar(max)
Select * into #sometemp from sometable
Where
id = @SomeVariable
Alter Table #sometemp Drop column SomeIdentity
Select @SelectList = ''
Select @SelectList = @SelectList
+ Coalesce( '[' + Column_name + ']' + ', ' ,'')
from information_schema.columns
where table_name = 'sometable'
and Column_Name <> 'SomeIdentity'
Set @SelectList = 'Insert into sometable ('
+ Left(@SelectList, Len(@SelectList) -1) + ')'
Set @SelectList = @SelectList
+ ' Select * from #sometemp '
exec @ret = sp_executesql @SelectList
I wrote this procedure as a compilation of many answers, to drop a column's identity property automatically and quickly:
CREATE PROCEDURE dbo.sp_drop_table_identity @tableName VARCHAR(256) AS
BEGIN
DECLARE @sql VARCHAR (4096);
DECLARE @sqlTableConstraints VARCHAR (4096);
DECLARE @tmpTableName VARCHAR(256) = @tableName + '_noident_temp';
BEGIN TRANSACTION
-- 1) Create temporary table with identical structure except identity
-- Idea borrowed from https://stackoverflow.com/questions/21547/in-sql-server-how-do-i-generate-a-create-table-statement-for-a-given-table
-- modified to omit the identity and honor all constraints, not the primary key only!
SELECT
@sql = 'CREATE TABLE [' + so.name + '_noident_temp] (' + o.list + ')'
+ ' ' + j.list
FROM sysobjects so
CROSS APPLY (
SELECT
' [' + column_name + '] '
+ data_type
+ CASE data_type
WHEN 'sql_variant' THEN ''
WHEN 'text' THEN ''
WHEN 'ntext' THEN ''
WHEN 'xml' THEN ''
WHEN 'decimal' THEN '(' + CAST(numeric_precision as VARCHAR) + ', ' + CAST(numeric_scale as VARCHAR) + ')'
ELSE COALESCE('(' + CASE WHEN character_maximum_length = -1 THEN 'MAX' ELSE CAST(character_maximum_length as VARCHAR) END + ')', '')
END
+ ' '
/* + case when exists ( -- Identity skip
select id from syscolumns
where object_name(id)=so.name
and name=column_name
and columnproperty(id,name,'IsIdentity') = 1
) then
'IDENTITY(' +
cast(ident_seed(so.name) as varchar) + ',' +
cast(ident_incr(so.name) as varchar) + ')'
else ''
end + ' ' */
+ CASE WHEN IS_NULLABLE = 'No' THEN 'NOT ' ELSE '' END
+ 'NULL'
+ CASE WHEN information_schema.columns.column_default IS NOT NULL THEN ' DEFAULT ' + information_schema.columns.column_default ELSE '' END
+ ','
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE table_name = so.name
ORDER BY ordinal_position
FOR XML PATH('')
) o (list)
CROSS APPLY(
SELECT
CHAR(10) + 'ALTER TABLE ' + @tableName + '_noident_temp ADD ' + LEFT(alt, LEN(alt)-1)
FROM(
SELECT
CHAR(10)
+ ' CONSTRAINT ' + tc.constraint_name + '_ni ' + tc.constraint_type + ' (' + LEFT(c.list, LEN(c.list)-1) + ')'
+ COALESCE(CHAR(10) + r.list, ', ')
FROM
information_schema.table_constraints tc
CROSS APPLY(
SELECT
'[' + kcu.column_name + '], '
FROM
information_schema.key_column_usage kcu
WHERE
kcu.constraint_name = tc.constraint_name
ORDER BY
kcu.ordinal_position
FOR XML PATH('')
) c (list)
OUTER APPLY(
-- https://stackoverflow.com/questions/3907879/sql-server-howto-get-foreign-key-reference-from-information-schema
SELECT
' REFERENCES [' + kcu1.constraint_schema + '].' + '[' + kcu2.table_name + ']' + '([' + kcu2.column_name + ']) '
+ CHAR(10)
+ ' ON DELETE ' + rc.delete_rule
+ CHAR(10)
+ ' ON UPDATE ' + rc.update_rule + ', '
FROM information_schema.referential_constraints as rc
JOIN information_schema.key_column_usage as kcu1 ON (kcu1.constraint_catalog = rc.constraint_catalog AND kcu1.constraint_schema = rc.constraint_schema AND kcu1.constraint_name = rc.constraint_name)
JOIN information_schema.key_column_usage as kcu2 ON (kcu2.constraint_catalog = rc.unique_constraint_catalog AND kcu2.constraint_schema = rc.unique_constraint_schema AND kcu2.constraint_name = rc.unique_constraint_name AND kcu2.ordinal_position = KCU1.ordinal_position)
WHERE
kcu1.constraint_catalog = tc.constraint_catalog AND kcu1.constraint_schema = tc.constraint_schema AND kcu1.constraint_name = tc.constraint_name
) r (list)
WHERE tc.table_name = @tableName
FOR XML PATH('')
) a (alt)
) j (list)
WHERE
xtype = 'U'
AND name NOT IN ('dtproperties')
AND so.name = @tableName
SELECT @sql as '1) @sql';
EXECUTE(@sql);
-- 2) Obtain current back references to our table from others, to re-enable them later
-- https://stackoverflow.com/questions/3907879/sql-server-howto-get-foreign-key-reference-from-information-schema
SELECT
@sqlTableConstraints = (
SELECT
'ALTER TABLE [' + kcu1.constraint_schema + '].' + '[' + kcu1.table_name + ']'
+ ' ADD CONSTRAINT ' + kcu1.constraint_name + '_ni FOREIGN KEY ([' + kcu1.column_name + '])'
+ CHAR(10)
+ ' REFERENCES [' + kcu2.table_schema + '].[' + kcu2.table_name + ']([' + kcu2.column_name + '])'
+ CHAR(10)
+ ' ON DELETE ' + rc.delete_rule
+ CHAR(10)
+ ' ON UPDATE ' + rc.update_rule + ' '
FROM information_schema.referential_constraints as rc
JOIN information_schema.key_column_usage as kcu1 ON (kcu1.constraint_catalog = rc.constraint_catalog AND kcu1.constraint_schema = rc.constraint_schema AND kcu1.constraint_name = rc.constraint_name)
JOIN information_schema.key_column_usage as kcu2 ON (kcu2.constraint_catalog = rc.unique_constraint_catalog AND kcu2.constraint_schema = rc.unique_constraint_schema AND kcu2.constraint_name = rc.unique_constraint_name AND kcu2.ordinal_position = KCU1.ordinal_position)
WHERE
kcu2.table_name = @tableName
FOR XML PATH('')
);
SELECT @sqlTableConstraints as '8) @sqlTableConstraints';
-- Execute at end
-- 3) Drop outer references for switch (structure must be identical: http://msdn.microsoft.com/en-gb/library/ms191160.aspx) and rename table
SELECT
@sql = (
SELECT
' ALTER TABLE [' + kcu1.constraint_schema + '].' + '[' + kcu1.table_name + '] DROP CONSTRAINT ' + kcu1.constraint_name
FROM information_schema.referential_constraints as rc
JOIN information_schema.key_column_usage as kcu1 ON (kcu1.constraint_catalog = rc.constraint_catalog AND kcu1.constraint_schema = rc.constraint_schema AND kcu1.constraint_name = rc.constraint_name)
JOIN information_schema.key_column_usage as kcu2 ON (kcu2.constraint_catalog = rc.unique_constraint_catalog AND kcu2.constraint_schema = rc.unique_constraint_schema AND kcu2.constraint_name = rc.unique_constraint_name AND kcu2.ordinal_position = KCU1.ordinal_position)
WHERE
kcu2.table_name = @tableName
FOR XML PATH('')
);
SELECT @sql as '3) @sql'
EXECUTE (@sql);
-- 4) Switch partition
-- http://www.calsql.com/2012/05/removing-identity-property-taking-more.html
SET @sql = 'ALTER TABLE ' + @tableName + ' switch partition 1 to ' + @tmpTableName;
SELECT @sql as '4) @sql';
EXECUTE(@sql);
-- 5) Rename real old table to bak
SET @sql = 'EXEC sp_rename ' + @tableName + ', ' + @tableName + '_bak';
SELECT @sql as '5) @sql';
EXECUTE(@sql);
-- 6) Rename temp table to real
SET @sql = 'EXEC sp_rename ' + @tmpTableName + ', ' + @tableName;
SELECT @sql as '6) @sql';
EXECUTE(@sql);
-- 7) Drop bak table
SET @sql = 'DROP TABLE ' + @tableName + '_bak';
SELECT @sql as '7) @sql';
EXECUTE(@sql);
-- 8) Recreate the constraints dropped earlier
SELECT @sqlTableConstraints as '8) @sqlTableConstraints';
EXECUTE(@sqlTableConstraints);
-- It may still fail if there are references from objects created WITH CHECK OPTION
-- it may be recreated - https://stackoverflow.com/questions/1540988/sql-2005-force-table-rename-that-has-dependencies
COMMIT
END
Usage is pretty simple:
EXEC sp_drop_table_identity @tableName = 'some_very_big_table'
Benefits and limitations:
It uses the SWITCH PARTITION statement (which works on non-partitioned tables too) for a fast move without a full data copy (a minimal sketch of the trick follows below). This also imposes some conditions on when it is applicable.
It makes an on-the-fly copy of the table without the identity. I also posted that part separately, and it may need tuning on less trivial structures such as compound fields (it covers my needs).
If the table is referenced by objects bound to its schema or created WITH CHECK OPTION (stored procedures, views), that prevents the switch (see the last comment in the code). It could additionally be scripted to temporarily drop such bindings; I have not done that yet.
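For reference, a minimal sketch of the core SWITCH trick the procedure relies on, assuming a simple table dbo.t with no indexes, constraints or partitions, and both tables on the same filegroup (names are illustrative):
-- original table with an identity column
CREATE TABLE dbo.t (id int IDENTITY(1,1) NOT NULL, name varchar(50) NULL)

-- identical structure, but without the IDENTITY property
CREATE TABLE dbo.t_noident (id int NOT NULL, name varchar(50) NULL)

-- metadata-only move of all rows (works for non-partitioned tables too)
ALTER TABLE dbo.t SWITCH TO dbo.t_noident

-- swap the names so callers keep using dbo.t
DROP TABLE dbo.t
EXEC sp_rename 'dbo.t_noident', 't'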
All feedback welcome.
The most efficient way to drop an identity column (especially on large databases) in SQL Server is to modify the DDL metadata directly. On SQL Server versions older than 2005 this can be done with:
sp_configure 'allow updates', 1
go
reconfigure with override
go
update syscolumns set colstat = 0 --turn off bit 1 which indicates identity column
where id = object_id('table_name') and name = 'column_name'
go
exec sp_configure 'allow updates', 0
go
reconfigure with override
go
On SQL Server 2005+ the 'allow updates' trick no longer has any effect, but you can execute ad hoc updates against system tables when the SQL Server instance is started in single-user mode (start the instance with the -m flag, i.e. "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlservr.exe -m", and make sure to run as Administrator) using the Dedicated Admin Connection (from SQL Server Management Studio, connect with the ADMIN: prefix, i.e. ADMIN:MyServer). The column metadata is stored in the sys.syscolpars internal table (not visible without the DAC):
use myDatabase
update sys.syscolpars set status = 1, idtval = null -- status=1 - primary key, idtval=null - remove identity data
where id = object_id('table_name') AND name = 'column_name'
More on this approach on this blog
I've created the script below to be able to quickly create a minimal reproducible example for other questions in general.
This script uses an original table and generates the following PRINT statements:
DROP and CREATE a temp table with structure matching the original table
INSERT INTO statement using examples from the actual data
I can just add the original table name into the variable listed, along with the number of sample records required from the table. When I run it, it generates all of the statements needed in the Messages window in SSMS. Then I can just copy and paste those statements into my posted questions, so those answering have something to work with.
I know that you can get similar results in SSMS through Tasks>Generate Scripts, but this gets things down to the minimal amount of code that's useful for posting here without all of the unnecessary info that SSMS generates automatically. It's just a quick way to create a reproduced version of a simple table with actual sample data in it.
Unfortunately the one scenario that doesn't work is if I run it on very wide tables. It seems to fail on the last STRING_AGG() query where it's building the VALUES portion of the INSERT. When it runs on wide tables, it returns NULL.
Any suggestions to correct this?
EDIT: I figured out the issue I was having with UNIQUEIDENTIFIER columns and revised the query below. Also included an initial check to make sure the table actually exists.
/* ---------------------------------------
-- For creating minimal reproducible examples
-- based on original table and data,
-- builds the following statements
-- -- CREATE temp table with structure matching original table
-- -- INSERT statements based on actual data
--
-- Note: May not work for very wide tables due to limitations on
-- PRINT statements
*/ ---------------------------------------
DECLARE @tableName NVARCHAR(MAX) = 'testTable', -- original table name HERE
@recordCount INT = 5, -- top number of records to insert to temp table
@buildStmt NVARCHAR(MAX),
@insertStmt NVARCHAR(MAX),
@valuesStmt NVARCHAR(MAX),
@insertCol NVARCHAR(MAX),
@strAgg NVARCHAR(MAX),
@insertOutput NVARCHAR(MAX)
IF (EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @tableName))
BEGIN
-- build DROP and CREATE statements for temp table from original table
SET @buildStmt = 'IF OBJECT_ID(''tempdb..#' + @tableName + ''') IS NOT NULL DROP TABLE #' + @tableName + CHAR(10) + CHAR(10) +
'CREATE TABLE #' + @tableName + ' (' + CHAR(10)
SELECT @buildStmt = @buildStmt + ' ' + C.[Name] + ' ' +
T.[Name] +
CASE WHEN T.[Name] IN ('varchar','nvarchar','char','nchar') THEN '(' + CAST(C.[Length] AS VARCHAR) + ') ' ELSE ' ' END +
'NULL,' + CHAR(10)
FROM sysobjects O
JOIN syscolumns C ON C.id = O.id
JOIN systypes T ON T.xusertype = C.xusertype
WHERE O.[name] = @TableName
ORDER BY C.ColID
SET @buildStmt = SUBSTRING(@buildStmt,1,LEN(@buildStmt) - 2) + CHAR(10) + ')' + CHAR(10)
PRINT @buildStmt
-- build INSERT INTO statement from original table
SELECT @insertStmt = 'INSERT INTO #' + @tableName + ' (' +
STUFF ((
SELECT ', [' + C.[Name] + ']'
FROM sysobjects O
JOIN syscolumns C ON C.id = O.id
WHERE O.[name] = @TableName
ORDER BY C.ColID
FOR XML PATH('')), 1, 1, '')
+')'
PRINT @insertStmt
-- build VALUES portion of INSERT from data in original table
SELECT @insertCol = STUFF ((
SELECT '''''''''+CONVERT(NVARCHAR(200),' +
'[' + C.[Name] + ']' +
')+'''''',''+'
FROM sysobjects O
JOIN syscolumns C ON C.id = O.id
JOIN systypes T ON T.xusertype = C.xusertype
WHERE O.[name] = @TableName
ORDER BY C.ColID
FOR XML PATH('')), 1, 1, '')
SET @insertCol = SUBSTRING(@insertCol,1,LEN(@insertCol) - 1)
SELECT @strAgg = ';WITH CTE AS (SELECT TOP(' + CONVERT(VARCHAR,@recordCount) + ') * FROM ' + @tableName + ') ' +
' SELECT @valuesStmt = STRING_AGG(CAST(''' + @insertCol + ' AS NVARCHAR(MAX)),''), ('') ' +
' FROM CTE'
EXEC sp_executesql @strAgg,N'@valuesStmt NVARCHAR(MAX) OUTPUT', @valuesStmt OUTPUT
PRINT 'VALUES (' +REPLACE(SUBSTRING(@valuesStmt,1,LEN(@valuesStmt) - 1),',)',')') + ')'
END
ELSE
BEGIN
PRINT 'Table does NOT exist'
END
Let's say I have an empty table with 5 columns and a large number of tables with a random selection of those 5 columns. How can I insert the columns that are present in smaller tables into the corresponding columns in the large table?
For Example:
Table A has columns 1, 2, 3, 4, 5
Table B has columns 1, 2, 5
I want to insert the values of table B into the corresponding columns of A, and leave columns 3 and 4 in A as NULL.
I know this is not a good way to use SQL, don't ask how I got into this mess!
I have tried:
CASE WHEN COL_LENGTH('MyTable', 'MyColumn') IS NOT NULL THEN MyColumn ELSE NULL END
but I get an error "Invalid column name", even though SQL doesn't have to use the (non-existent) column.
Any suggestions?
List the columns when you insert:
insert into a (col1, col2, col5)
select col1, col2, col5
from b;
Assuming that col3 and col4 allow NULL values and have no DEFAULT (or the default is NULL), then these are populated with NULL values for all rows inserted by the statement.
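To make the behaviour concrete, a small self-contained sketch (table and column names follow the example above):
CREATE TABLE a (col1 int, col2 int, col3 int, col4 int, col5 int);
CREATE TABLE b (col1 int, col2 int, col5 int);

INSERT INTO b (col1, col2, col5) SELECT 1, 2, 5;

INSERT INTO a (col1, col2, col5)
SELECT col1, col2, col5
FROM b;

SELECT * FROM a; -- the inserted row has NULL in col3 and col4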
You need to utilize dynamic SQL. With the script below, you dynamically get the column list from the system tables where the columns of table A match those of table B. Then you build your dynamic SQL statement and execute it.
The code below currently PRINTs the SQL statement; once you are satisfied, you can comment out the PRINT(@SQL) and comment back in the EXECUTE(@SQL).
DECLARE @TableAName VARCHAR(100)
,@TableBName VARCHAR(100);
SET @TableAName = 'TableA';
SET @TableBName = 'TableB';
DECLARE @Parameter VARCHAR(1000) = ''
,@SQL VARCHAR(8000)
,@TBLASchema VARCHAR(100)
,@TBLBSchema VARCHAR(100)
,@DatabaseName VARCHAR(100);
SET @TBLASchema = (SELECT OBJECT_SCHEMA_NAME(T.object_id)
FROM sys.tables AS T
WHERE T.name = @TableAName);
SET @TBLBSchema = (SELECT OBJECT_SCHEMA_NAME(T.object_id)
FROM sys.tables AS T
WHERE T.name = @TableBName);
SET @DatabaseName = DB_NAME();
SELECT @Parameter = @Parameter + ',[' + TBL_COLS.name + ']'
FROM sys.tables AS T
JOIN sys.columns AS TBL_COLS
ON T.[object_id] = TBL_COLS.[object_id]
AND T.name = @TableAName
WHERE TBL_COLS.name IN (SELECT TBL2_COLS.name
FROM sys.tables AS T
JOIN sys.columns AS TBL2_COLS
ON T.[object_id] = TBL2_COLS.[object_id]
AND T.name = @TableBName);
SET @Parameter = SUBSTRING(@Parameter,2,LEN(@Parameter)-1);
SET @SQL = 'INSERT INTO [' + @DatabaseName + '].[' + @TBLASchema + '].[' + @TableAName + '] '
+ '(' + @Parameter + ')'
+ ' SELECT ' + @Parameter
+ ' FROM [' + @DatabaseName + '].[' + @TBLBSchema + '].[' + @TableBName + '];'
PRINT(@SQL);
--EXECUTE(@SQL);
I want to move all the tables from one database to another, with primary keys and all other keys,
using SQL queries. I am using SQL Server 2005, and I have a query to move the tables, but the keys are not moved.
My query is as follows:
set @cSQL='Select Name from SRCDB.sys.tables where Type=''U'''
Insert into #TempTable
exec (@cSQL)
while((select count(tName) from #TempTable)>0)
begin
select top 1 @cName=tName from #TempTable
set @cSQL='Select * into NEWDB.dbo.'+@cName+' from SRCDB.dbo.'+@cName +' where 1=2'
exec(@cSQL)
delete from #TempTable where tName=@cName
end
where SRCDB is the name of source database and NEWDB is the name of destination database
How can I achieve this..?
Can anyone help me in this...
Thank you...
The following T-SQL moves all tables, primary keys and foreign keys from one database to another. Note that the SELECT * INTO ... FROM ... WHERE 1 = 2 method does not create computed columns or user-defined data types. It also assumes that all primary keys are clustered.
--ROLLBACK
SET XACT_ABORT ON
BEGIN TRAN
DECLARE @dsql nvarchar(max) = N''
SELECT @dsql += ' SELECT * INTO NEWDB.dbo.' + name + ' FROM SRCDB.dbo.' + name + ' WHERE 1 = 2'
FROM sys.tables
--PRINT @dsql
EXEC sp_executesql @dsql
SET @dsql = N''
;WITH cte AS
(SELECT 1 AS orderForExec, table_name, column_name, constraint_name, ordinal_position,
'PRIMARY KEY' AS defConst, NULL AS refTable, NULL AS refCol
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE OBJECTPROPERTY(OBJECT_ID(constraint_name), 'IsPrimaryKey') = 1
UNION ALL
SELECT 2, t3.table_name, t3.column_name, t1.constraint_name, t3.ordinal_position,
'FOREIGN KEY', t2.table_name, t2.column_name
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS as t1
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE t2 ON t1.UNIQUE_CONSTRAINT_NAME = t2.CONSTRAINT_NAME
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE t3 ON t1.CONSTRAINT_NAME = t3.CONSTRAINT_NAME
AND t3.ordinal_position = t2.ordinal_position
)
SELECT @dsql += ' ALTER TABLE NEWDB.dbo.' + c1.table_name +
' ADD CONSTRAINT ' + c1.constraint_name + ' ' + c1.defConst + ' (' +
STUFF((SELECT ',' + c2.column_name
FROM cte c2
WHERE c2.constraint_name = c1.constraint_name
ORDER BY c2.ordinal_position ASC
FOR XML PATH(''), TYPE
).value('.', 'nvarchar(max)'), 1, 1, '') + ')' +
CASE WHEN defConst = 'FOREIGN KEY' THEN ' REFERENCES ' + c1.refTable + ' (' +
STUFF((SELECT ',' + c2.refCol
FROM cte c2
WHERE c2.constraint_name = c1.constraint_name
ORDER BY c2.ordinal_position ASC
FOR XML PATH(''), TYPE
).value('.', 'nvarchar(max)'), 1, 1, '') + ')' ELSE '' END
FROM (SELECT DISTINCT orderForExec, table_name, defConst, constraint_name, refTable FROM cte) AS c1
ORDER BY orderForExec
--PRINT @dsql
EXEC sp_executesql @dsql
COMMIT TRAN
You can generate a customized script of the source database and run that script against the destination database.
Here is the link, and a slightly better one.
Get the complete tables, and then perform delete queries on the destination database as required.
If you want to do it with a query, I guess this link would be helpful:
DECLARE @strSQL NVARCHAR(MAX)
DECLARE @Name VARCHAR(50)
SELECT Name into #TempTable FROM SRCDB.sys.tables WHERE Type='U'
WHILE((SELECT COUNT(Name) FROM #TempTable) > 0)
BEGIN
SELECT TOP 1 @Name = Name FROM #TempTable
SET @strSQL = 'SELECT * INTO NEWDB.dbo.[' + @Name + '] FROM SRCDB.dbo.[' + @Name + ']'
EXEC(@strSQL)
DELETE FROM #TempTable WHERE Name = @Name
END
DROP TABLE #TempTable
If the destination table is already created, then just set IDENTITY_INSERT on and change the query as below:
SET @strSQL = ' SET IDENTITY_INSERT NEWDB.dbo.[' + @Name + '] ON; ' +
' INSERT INTO NEWDB.dbo.[' + @Name + '] SELECT * FROM SRCDB.dbo.[' + @Name + ']' +
' SET IDENTITY_INSERT NEWDB.dbo.[' + @Name + '] OFF '
UPDATE:
If you don't want the records and only want to create the tables with all key constraints, then check this solution:
In SQL Server, how do I generate a CREATE TABLE statement for a given table?
The following script copies many tables from a source DB into another destination DB, taking into account that some of these tables have auto-increment columns:
http://sqlhint.com/sqlserver/copy-tables-auto-increment-into-separate-database
I'm working on cleaning up an ERP and I need to get rid of references to unused users and user groups. There are many foreign key constraints and therefore I want to be sure to really get rid of all traces!
I found this tidy tidbit of code to find all tables in my db with a certain column name, in this case let's look at the user groups:
select table_name from information_schema.columns
where column_name = 'GROUP_ID'
With the results I can search through the 40+ tables for my unused ID... but this is tedious. So I'd like to automate this and create a query that loops through all these tables and deletes the rows where it finds Unused_Group in the GROUP_ID column.
Before deleting anything I'd like to visualize the existing data, so I started to build something like this using string concatenation:
declare @group varchar(50) = 'Unused_Group'
declare @table1 varchar(50) = 'TABLE1'
declare @table2 varchar(50) = 'TABLE2'
declare @tableX varchar(50) = 'TABLEX'
select @query1 = 'SELECT ''' + rtrim(@table1) + ''' as ''Table'', '''
+ rtrim(@group) + ''' = CASE WHEN EXISTS (SELECT GROUP_ID FROM ' + rtrim(@table1)
+ ' WHERE GROUP_ID = ''' + rtrim(@group) + ''') then ''MATCH'' else ''-'' end FROM '
+ rtrim(@table1)
select @query2 = [REPEAT FOR @table2 to @tableX]...
EXEC(@query1 + ' UNION ' + @query2 + ' UNION ' + @queryX)
This gives me the results:
TABLE1 | Match
TABLE2 | -
TABLEX | Match
This works for my purposes, I can run it for any user group without changing any other code, and it is of course easily adaptable to DELETE from these same tables; but it is unmanageable for the 75 or so tables that I have to deal with between users and groups.
I ran into this link on dynamic SQL which was intense and dense enough to scare me away for the moment... but I think the solution might be in there somewhere.
I'm very familiar with for() loops in JS and other languages, where this would be a piece of cake with a well-structured array, but apparently it's not so simple in SQL (I'm still learning, but I found a lot of negative talk about the FOR and GOTO solutions available...). Ideally I'd have a script that queries to find tables with a certain column name, queries each table as above, and spits out a list of matches, and then a second similar script to delete the rows.
Can anyone help point me in the right direction?
OK, try this. There are three variables: column, colValue and preview. column should be the column you're checking equality on (Group_ID), colValue the value you're looking for (Unused_Group), and preview should be 1 to view what you'll delete or 0 to delete it.
Declare @column Nvarchar(256),
@colValue Nvarchar(256),
@preview Bit
Set @column = 'Group_ID'
Set @colValue = 'Unused_Group'
Set @preview = 1 -- 1 = preview; 0 = delete
If Object_ID('tempdb..#tables') Is Not Null Drop Table #tables
Create Table #tables (tID Int, SchemaName Nvarchar(256), TableName Nvarchar(256))
-- Get all the tables with a column named [GROUP_ID]
Insert #tables
Select Row_Number() Over (Order By s.name, so.name), s.name, so.name
From sysobjects so
Join sys.schemas s
On so.uid = s.schema_id
Join syscolumns sc
On so.id = sc.id
Where so.xtype = 'u'
And sc.name = @column
Select *
From #tables
Declare @SQL Nvarchar(Max),
@schema Nvarchar(256),
@table Nvarchar(256),
@iter Int = 1
-- As long as there are tables to look at keep looping
While Exists (Select 1
From #tables)
Begin
-- Get the next table record to look at
Select @schema = SchemaName,
@table = TableName
From #tables
Where tID = @iter
-- If the table we're going to look at has dependencies on tables we have not
-- yet looked at move it to the end of the line and look at it after we look
-- at it's dependent tables (Handle foreign keys)
If Exists (Select 1
From sysobjects o
Join sys.schemas s1
On o.uid = s1.schema_id
Join sysforeignkeys fk
On o.id = fk.rkeyid
Join sysobjects o2
On fk.fkeyid = o2.id
Join sys.schemas s2
On o2.uid = s2.schema_id
Join #tables t
On o2.name = t.TableName Collate Database_Default
And s2.name = t.SchemaName Collate Database_Default
Where o.name = @table
And s1.name = @schema)
Begin
-- Move the table to the end of the list to retry later
Update t
Set tID = (Select Max(tID) From #tables) + 1
From #tables t
Where tableName = @table
And schemaName = @schema
-- Move on to the next table to look at
Set @iter = @iter + 1
End
Else
Begin
-- Delete the records we don't want anymore
Set @Sql = Case
When @preview = 1
Then 'Select * ' -- If preview is 1 then select from the table
Else 'Delete t ' -- If preview is not 1 then delete from the table
End +
'From [' + @schema + '].[' + @table + '] t
Where ' + @column + ' = ''' + @colValue + ''''
Exec sp_executeSQL @SQL;
-- After we've done the work remove the table from our list
Delete t
From #tables t
Where tableName = @table
And schemaName = @schema
-- Move on to the next table to look at
Set @iter = @iter + 1
End
End
Turning this into a stored procedure would simply involve changing the variables declaration at the top to a sproc creation so you would get rid of...
Declare @column Nvarchar(256),
@colValue Nvarchar(256),
@preview Bit
Set @column = 'Group_ID'
Set @colValue = 'Unused_Group'
Set @preview = 1 -- 1 = preview; 0 = delete
...
And replace it with...
Create Proc DeleteStuffFromManyTables (@column Nvarchar(256), @colValue Nvarchar(256), @preview Bit = 1)
As
...
And you'd call it with...
Exec DeleteStuffFromManyTables 'Group_ID', 'Unused_Group', 1
I commented the hell out of the code to help you understand what it's doing; good luck!
You're on the right track with the INFORMATION_SCHEMA objects. Execute the below in a query editor; it produces SELECT and DELETE statements, filtered on GROUP_ID = 'Unused_Group', for every table that contains a GROUP_ID column.
-- build select DML to manually review data that will be deleted
SELECT 'SELECT * FROM [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '] WHERE [GROUP_ID] = ''Unused_Group'';'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'GROUP_ID';
-- build delete DML to remove data
SELECT 'DELETE FROM [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '] WHERE [GROUP_ID] = ''Unused_Group'';'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'GROUP_ID';
Since this seems to be a one-time cleanup effort, and especially since you need to review data before it is deleted, I don't see the value in making this more complicated.
Consider adding referential integrity and enforcing cascading deletes, if you can. It won't help with visualizing the data before you delete it, but it will help control orphaned rows.
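For example, a cascading foreign key (the table names dbo.Groups and dbo.SomeChildTable are hypothetical) might look like this:
ALTER TABLE dbo.SomeChildTable
ADD CONSTRAINT FK_SomeChildTable_Groups
    FOREIGN KEY (GROUP_ID) REFERENCES dbo.Groups (GROUP_ID)
    ON DELETE CASCADE;

-- deleting the group now removes its child rows automatically
DELETE FROM dbo.Groups WHERE GROUP_ID = 'Unused_Group';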
I have two databases, named DB1 and DB2, in SQL Server 2008. These two databases have the same tables and the same table data. However, I want to check whether there are any differences between the data in these tables.
Could anyone help me with a script for this?
select 'T1' T, *
from (
select *
from DB1.dbo.Table
except
select *
from DB2.dbo.Table
) as T
union all
select 'T2' T, *
from (
select *
from DB2.dbo.Table
except
select *
from DB1.dbo.Table
) as T
ORDER BY 2,3,4, ..., 1 -- keep matching T1 and T2 rows close in the output; 2,3,4 are the UNIQUE KEY SEGMENTS
Test code:
declare #T1 table (ID int)
declare #T2 table (ID int)
insert into #T1 values(1),(2)
insert into #T2 values(2),(3)
select *
from (
select *
from #T1
except
select *
from #T2
) as T
union all
select *
from (
select *
from #T2
except
select *
from #T1
) as T
Result:
ID
-----------
1
3
Note: It can take a long time to compare big tables. When developing a "tuned" solution or refactoring, which should give the same result as the REFERENCE, it may be wise to check simple parameters first, like:
select count(*) from (
select count(*) c0, SUM(BINARY_CHECKSUM(*)%1000000) c1 FROM T_REF_TABLE
-- select 12345 c0, -214365454 c1 -- constant values FROM T_REF_TABLE
except
select count(*) , SUM(BINARY_CHECKSUM(*)%1000000) FROM T_WORK_COPY
) t
When this returns nothing, you probably have things under control. You can also replace the reference side with the constant values you measured (see the commented "constant values FROM T_REF_TABLE" line) instead of recomputing them, to save even more time on the next check.
I’d really suggest that people who encounter this problem go and find a third party database comparison tool.
Reason – these tools save a lot of time and make the process less error prone.
I’ve used comparison tools from ApexSQL (Diff and Data Diff) but you can’t go wrong with other tools marc_s and Marina Nastenko already pointed out.
If you’re absolutely sure that you are only going to compare tables once then SQL is fine but if you’re going to need this from time to time you’ll be better off with some 3rd party tool.
If you don’t have budget to buy it then just use it in trial mode to get the job done.
I hope new readers will find this useful even though it’s a late answer…
I've done things like this using the Checksum(*) function.
In essence it creates a row-level checksum over all the columns' data; you can then compare the checksum of each row in one table to its counterpart in the other, using a left join, to find rows that are different.
Hope that made sense...
Better with an example....
select *
from
( select checksum(*) as chk, userid as k from DB1.dbo.UserAccounts) as t1
left join
( select checksum(*) as chk, userid as k from DB2.dbo.UserAccounts) as t2 on t1.k = t2.k
where t1.chk <> t2.chk
select * from DB1.dbo.Table a inner join DB2.dbo.Table b on b.PrimKey = a.PrimKey
where a.FirstColumn <> b.FirstColumn ...
The Checksum approach that Matt recommended is probably a better way to compare rows than comparing each column individually.
This compares the schemas of the two databases. Try this query; it may help.
SELECT T.[name] AS [table_name], AC.[name] AS [column_name], TY.[name] AS system_data_type
FROM [***Database Name 1***].sys.[tables] AS T
INNER JOIN [***Database Name 1***].sys.[all_columns] AC ON T.[object_id] = AC.[object_id]
INNER JOIN [***Database Name 1***].sys.[types] TY ON AC.[system_type_id] = TY.[system_type_id]
EXCEPT
SELECT T.[name] AS [table_name], AC.[name] AS [column_name], TY.[name] AS system_data_type
FROM [***Database Name 2***].sys.[tables] AS T
INNER JOIN [***Database Name 2***].sys.[all_columns] AC ON T.[object_id] = AC.[object_id]
INNER JOIN [***Database Name 2***].sys.[types] TY ON AC.[system_type_id] = TY.[system_type_id]
If the databases are on the same server, use the [DatabaseName].[Owner].[TableName] format when accessing a table that resides in a different database.
Eg: [DB1].[dbo].[TableName]
If the databases are on different servers, look at Creating Linked Servers (SQL Server Database Engine).
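Once a linked server is set up, the comparison can use four-part names. A sketch, assuming a linked server named REMOTESRV pointing at the instance that hosts DB2:
SELECT *
FROM DB1.dbo.MyTable AS a
INNER JOIN [REMOTESRV].DB2.dbo.MyTable AS b
    ON b.PrimKey = a.PrimKey
WHERE a.FirstColumn <> b.FirstColumn;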
Another solution (non T-SQL): you can use the tablediff utility.
For example if you want to compare two tables (Localitate) from two different servers (ROBUH01 & ROBUH02) you can use this shell command:
C:\Program Files\Microsoft SQL Server\100\COM>tablediff -sourceserver ROBUH01 -sourcedatabase SIM01 -sourceschema dbo -sourcetable Localitate -destinationserver ROBUH02 -destinationschema dbo -destinationdatabase SIM02 -destinationtable Localitate
Results:
Microsoft (R) SQL Server Replication Diff Tool
Copyright (c) 2008 Microsoft Corporation
User-specified agent parameter values:
-sourceserver ROBUH01
-sourcedatabase SIM01
-sourceschema dbo
-sourcetable Localitate
-destinationserver ROBUH02
-destinationschema dbo
-destinationdatabase SIM02
-destinationtable Localitate
Table [SIM01].[dbo].[Localitate] on ROBUH01 and Table [SIM02].[dbo].[Localitate] on ROBUH02 have 10 differences.
Err        Id
Dest. Only 21433
Dest. Only 21434
Dest. Only 21435
Dest. Only 21436
Dest. Only 21437
Dest. Only 21438
Dest. Only 21439
Dest. Only 21441
Dest. Only 21442
Dest. Only 21443
The requested operation took 9,9472657 seconds.
------------------------------------------------------------------------
If both databases are on the same server, you can check for matching tables using the following query:
select
fdb.name, sdb.name
from
FIRSTDBNAME.sys.tables fdb
join SECONDDBNAME.sys.tables sdb
on fdb.name = sdb.name -- compare same name tables
order by
1
By listing the matching tables, you can then compare the column schemas using the sys.columns view.
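A sketch of that column-level comparison, using the same database names as the query above, listing columns present in the first database but missing (or differently named) in the second:
SELECT
    fdb.name AS table_name,
    fc.name  AS column_name
FROM FIRSTDBNAME.sys.tables fdb
JOIN FIRSTDBNAME.sys.columns fc
    ON fc.object_id = fdb.object_id
JOIN SECONDDBNAME.sys.tables sdb
    ON sdb.name = fdb.name
LEFT JOIN SECONDDBNAME.sys.columns sc
    ON sc.object_id = sdb.object_id
    AND sc.name = fc.name
WHERE sc.name IS NULL -- column exists in the first database only
ORDER BY 1, 2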
Hope this helps you.
In order to compare two databases, I've written the procedures below.
If you want to compare two tables you can use procedure 'CompareTables'. Example :
EXEC master.dbo.CompareTables 'DB1', 'dbo', 'table1', 'DB2', 'dbo', 'table2'
If you want to compare two databases, use the procedure 'CompareDatabases'. Example :
EXEC master.dbo.CompareDatabases 'DB1', 'DB2'
Note:
- I tried to make the procedures safe, but in any case these procedures are only for testing and debugging.
- If you want a complete solution for comparison, use a third-party tool (Visual Studio, ...).
USE [master]
GO
create proc [dbo].[CompareDatabases]
@FirstDatabaseName nvarchar(50),
@SecondDatabaseName nvarchar(50)
as
begin
-- Check that databases exist
if not exists(SELECT name FROM sys.databases WHERE name=@FirstDatabaseName)
return 0
if not exists(SELECT name FROM sys.databases WHERE name=@SecondDatabaseName)
return 0
declare @result table (TABLE_NAME nvarchar(256))
SET NOCOUNT ON
insert into @result EXEC('(Select distinct TABLE_NAME from ' + @FirstDatabaseName + '.INFORMATION_SCHEMA.COLUMNS '
+'Where TABLE_SCHEMA=''dbo'')'
+ 'intersect'
+ '(Select distinct TABLE_NAME from ' + @SecondDatabaseName + '.INFORMATION_SCHEMA.COLUMNS '
+'Where TABLE_SCHEMA=''dbo'')')
DECLARE @TABLE_NAME nvarchar(256)
DECLARE curseur CURSOR FOR
SELECT TABLE_NAME FROM @result
OPEN curseur
FETCH curseur INTO @TABLE_NAME
WHILE @@FETCH_STATUS = 0
BEGIN
print 'TABLE : ' + @TABLE_NAME
EXEC master.dbo.CompareTables @FirstDatabaseName, 'dbo', @TABLE_NAME, @SecondDatabaseName, 'dbo', @TABLE_NAME
FETCH curseur INTO @TABLE_NAME
END
CLOSE curseur
DEALLOCATE curseur
SET NOCOUNT OFF
end
GO
.
USE [master]
GO
CREATE PROC [dbo].[CompareTables]
@FirstTABLE_CATALOG nvarchar(256),
@FirstTABLE_SCHEMA nvarchar(256),
@FirstTABLE_NAME nvarchar(256),
@SecondTABLE_CATALOG nvarchar(256),
@SecondTABLE_SCHEMA nvarchar(256),
@SecondTABLE_NAME nvarchar(256)
AS
BEGIN
-- Verify if first table exist
DECLARE @table1 nvarchar(256) = @FirstTABLE_CATALOG + '.' + @FirstTABLE_SCHEMA + '.' + @FirstTABLE_NAME
DECLARE @return_status int
EXEC @return_status = master.dbo.TableExist @FirstTABLE_CATALOG, @FirstTABLE_SCHEMA, @FirstTABLE_NAME
IF @return_status = 0
BEGIN
PRINT @table1 + ' : Table Not FOUND'
RETURN 0
END
-- Verify if second table exist
DECLARE @table2 nvarchar(256) = @SecondTABLE_CATALOG + '.' + @SecondTABLE_SCHEMA + '.' + @SecondTABLE_NAME
EXEC @return_status = master.dbo.TableExist @SecondTABLE_CATALOG, @SecondTABLE_SCHEMA, @SecondTABLE_NAME
IF @return_status = 0
BEGIN
PRINT @table2 + ' : Table Not FOUND'
RETURN 0
END
-- Compare the two tables
DECLARE @sql AS NVARCHAR(MAX)
SELECT @sql = '('
+ '(SELECT ''' + @table1 + ''' as _Table, * FROM ' + @FirstTABLE_CATALOG + '.' + @FirstTABLE_SCHEMA + '.' + @FirstTABLE_NAME + ')'
+ 'EXCEPT'
+ '(SELECT ''' + @table1 + ''' as _Table, * FROM ' + @SecondTABLE_CATALOG + '.' + @SecondTABLE_SCHEMA + '.' + @SecondTABLE_NAME + ')'
+ ')'
+ 'UNION'
+ '('
+ '(SELECT ''' + @table2 + ''' as _Table, * FROM ' + @SecondTABLE_CATALOG + '.' + @SecondTABLE_SCHEMA + '.' + @SecondTABLE_NAME + ')'
+ 'EXCEPT'
+ '(SELECT ''' + @table2 + ''' as _Table, * FROM ' + @FirstTABLE_CATALOG + '.' + @FirstTABLE_SCHEMA + '.' + @FirstTABLE_NAME + ')'
+ ')'
DECLARE @wrapper AS NVARCHAR(MAX) = 'if exists (' + @sql + ')' + char(10) + ' (' + @sql + ')ORDER BY 2'
Exec(@wrapper)
END
GO
.
USE [master]
GO
CREATE PROC [dbo].[TableExist]
@TABLE_CATALOG nvarchar(256),
@TABLE_SCHEMA nvarchar(256),
@TABLE_NAME nvarchar(256)
AS
BEGIN
IF NOT EXISTS(SELECT name FROM sys.databases WHERE name=@TABLE_CATALOG)
RETURN 0
declare @result table (TABLE_SCHEMA nvarchar(256), TABLE_NAME nvarchar(256))
SET NOCOUNT ON
insert into @result EXEC('Select TABLE_SCHEMA, TABLE_NAME from ' + @TABLE_CATALOG + '.INFORMATION_SCHEMA.COLUMNS')
SET NOCOUNT OFF
IF EXISTS(SELECT TABLE_SCHEMA, TABLE_NAME FROM @result
WHERE TABLE_SCHEMA=@TABLE_SCHEMA AND TABLE_NAME=@TABLE_NAME)
RETURN 1
RETURN 0
END
GO
Although this question is about SQL Server 2008, which in the year 2021 is quite old, if you use Azure Data Studio there is an extension you can install, called SQL Server Schema Compare, that does this for you.