Resolving compound key collisions on mass updates - sql

Let's say I have a table Managers with fields Id and Name. There is another table Accounts that has fields Id and Name. These two tables have their relationships defined in a many to many table ManagedAccounts which has a composite key of ManagerId and AccountId. So you can have multiple managers on a certain account, but there can't be the same manager on the account multiple times.
Now, I have a stored procedure called MergeAccounts that takes in a Manager Id and a list of Manager Ids in the form of a comma delimited varchar. It currently looks a lot like this:
create procedure MergeAccounts @managerId nvarchar(12), @mergedManagers nvarchar(max) as
declare @reassignment nvarchar(max)
set @reassignment = 'update ManagedAccounts set ManagerId=' + @managerId + ' where ManagerId in (' + @mergedManagers + ')'
exec sp_executesql @reassignment
Since two managers could be on the same account, it'll give me an error saying that I've violated the compound key I have on that table. How do I need to structure my code to simply delete any redundant rows without regard to order?

Change your dynamic SQL to delete any potential collisions first, then do your update, and wrap it all in a transaction.
(BTW, I would avoid dynamic SQL altogether by creating a table-valued function that returns a table from a comma-separated list... this is very useful and you can probably find one already written if you google it. A sketch of that approach follows the dynamic SQL below.)
set @reassignment = '
BEGIN TRAN;
BEGIN TRY
    DELETE m1
    FROM ManagedAccounts m1
    JOIN ManagedAccounts m2 ON m1.AccountId = m2.AccountId
    WHERE m2.ManagerId = ' + @managerId + '
      AND m1.ManagerId IN (' + @mergedManagers + ')
    UPDATE ManagedAccounts SET ManagerId = ' + @managerId + ' WHERE ManagerId IN (' + @mergedManagers + ')
    COMMIT;
END TRY
BEGIN CATCH
    ROLLBACK;
END CATCH;';
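And here is a minimal sketch of the no-dynamic-SQL version, assuming SQL Server 2016 or later so the built-in STRING_SPLIT can stand in for a hand-written split function; it mirrors the delete-then-update logic above and the column types are guesses, not the poster's actual schema:

create procedure MergeAccounts @managerId nvarchar(12), @mergedManagers nvarchar(max) as
begin
    set nocount on
    begin tran
    begin try
        -- remove rows that would collide with accounts the surviving manager already has
        delete m1
        from ManagedAccounts m1
        join ManagedAccounts m2
          on m1.AccountId = m2.AccountId
         and m2.ManagerId = @managerId
        where m1.ManagerId in (select value from string_split(@mergedManagers, ','))

        -- reassign everything else to the surviving manager
        update ManagedAccounts
        set ManagerId = @managerId
        where ManagerId in (select value from string_split(@mergedManagers, ','))

        commit
    end try
    begin catch
        if @@trancount > 0 rollback;
        throw;
    end catch
end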

Related

Generate tables with unique names

I need to create non-temporary tables in a MariaDB 10.3 database using Node. I therefore need a way of generating a table name that is guaranteed to be unique.
The Node function cannot access information regarding any unique feature about what or when the tables are made, so I cannot build the name from a timestamp or connection ID. I can only verify the name's uniqueness using the current database.
This question had a PostgreSQL answer suggesting the following:
SET @name = GetBigRandomNumber();
WHILE TableExists(@name)
BEGIN
    SET @name = GetBigRandomNumber();
END
I attempted a MariaDB implementation using @name = CONCAT(MD5(RAND()),MD5(RAND())) to generate a random 64 character string, and (COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME LIKE @name) > 0 to check if it was a unique name:
SET @name = CONCAT(MD5(RAND()),MD5(RAND()));
WHILE ((COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME LIKE @name) > 0) DO
    SET @name = CONCAT(MD5(RAND()),MD5(RAND()));
END WHILE;
CREATE TABLE @name ( ... );
However I get a syntax error when I try to run the above query. My SQL knowledge isn't that great so I'm at a loss as to what the problem might be.
Furthermore, is this approach efficient? The randomly generated name is long enough that it is very unlikely to have any clashes with any current table in the database, so the WHILE loop will very rarely need to run, but is there some sort of built in function to auto increment table names, or something similar?
SET @name := UUID();
If the dashes in that cause trouble, then
SET @name := REPLACE(UUID(), '-', '');
It will be safer (toward uniqueness) than RAND(). And, in theory, there will be no need to verify its uniqueness. After all, that's the purpose of UUIDs.
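Note that MariaDB will not accept a variable where a table name belongs, so the CREATE TABLE itself has to go through a prepared statement. A minimal sketch, assuming a 'tbl_' prefix and a placeholder column list (neither is from your schema):

SET @name := CONCAT('tbl_', REPLACE(UUID(), '-', ''));

-- identifiers cannot be parameterized, so build the DDL as a string
-- and run it as a prepared statement
SET @ddl := CONCAT('CREATE TABLE `', @name, '` (id INT PRIMARY KEY, payload TEXT)');
PREPARE stmt FROM @ddl;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

-- hand the generated name back to the Node caller
SELECT @name AS table_name;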

Dynamic SQL not working. Regular SQL working [duplicate]

It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQLs in the same stored procedure.
However, I can reference a temp table created by one dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.
A simple 2 table scenario:
I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key to identify the parent Order. An Order can have 1 to n Items.
I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select what Items they want to see. The filter criteria can be based on fields from the Items table and/or from the parent Order table. If an Item meets the filter conditions, including any condition on the parent Order if one exists, the Item should be returned in the query as well as the parent Order.
Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree.
The first reason is that I need to query all of the columns in the parent Order table, and if I did a single query to join the Orders table to the Items table, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within to populate custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet; I build my objects by hand.) The second reason I would like to return two tables instead of one is because I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I could reuse the client code to populate my custom Order and Items objects from the 2 datatables returned.
What I was hoping to do was this:
Construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table as specified by the custom filter created on the Winform fat-client app. The SQL built on the client would have looked something like this:
TempSQL = "
INSERT INTO #ItemsToQuery (OrderId, ItemId)
SELECT Orders.OrderId, Items.ItemId
FROM Orders, Items
WHERE Orders.OrderId = Items.OrderId
AND /* Some unpredictable Order filters go here */
AND /* Some unpredictable Items filters go here */
"
Then, I would call a stored procedure,
CREATE PROCEDURE GetItemsAndOrders(@tempSql AS text)
AS
EXECUTE (@tempSql) -- to create the #ItemsToQuery table
SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)
SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQLs, and if I change the static SQLs to dynamic, no results are passed back to the fat client.
Three workarounds come to mind, but I'm looking for a better one:
1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using a static SQL that, because it was not created by dynamic SQL, could then be queried without issue. (I could also investigate passing the new table type param instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure.
2) I could perform all the queries from the client.
The first would be something like this:
SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
This still provides me with the ability to reuse my client-side object-population code because the Orders and Items continue to be returned in two different tables.
I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon feeding on that one.
If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
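On the table-type option mentioned above: here is a minimal sketch of a table-valued parameter (SQL Server 2008 and later; the type, procedure and column names are illustrative, not from the original schema). From ADO.NET you would pass a DataTable as a parameter of SqlDbType.Structured.

-- a user-defined table type holding the list of item ids (illustrative names)
CREATE TYPE dbo.ItemIdList AS TABLE (ItemId int PRIMARY KEY)
GO

CREATE PROCEDURE GetItemsAndOrders2 (@items dbo.ItemIdList READONLY)
AS
BEGIN
    -- both result sets come back to the client, no temp tables involved
    SELECT i.* FROM Items  i JOIN @items t ON i.ItemId = t.ItemId
    SELECT o.* FROM Orders o WHERE o.OrderId IN
        (SELECT i.OrderId FROM Items i JOIN @items t ON i.ItemId = t.ItemId)
END
GO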
You need to create your table first; then it will be available in the dynamic SQL.
This works:
CREATE TABLE #temp3 (id INT)
EXEC ('insert #temp3 values(1)')
SELECT *
FROM #temp3
This will not work:
EXEC (
'create table #temp2 (id int)
insert #temp2 values(1)'
)
SELECT *
FROM #temp2
In other words:
Create temp table
Execute proc
Select from temp table
Here is a complete example:
CREATE PROC prTest2 @var VARCHAR(100)
AS
EXEC (@var)
GO
CREATE TABLE #temp (id INT)
EXEC prTest2 'insert #temp values(1)'
SELECT *
FROM #temp
1st Method - Enclose multiple statements in the same Dynamic SQL Call:
DECLARE @DynamicQuery NVARCHAR(MAX)
SET @DynamicQuery = 'Select * into #temp from (select * from tablename) alias
select * from #temp
drop table #temp'
EXEC sp_executesql @DynamicQuery
2nd Method - Use Global Temp Table:
(Careful, you need to take extra care with global temp tables, since they are visible to every session.)
IF OBJECT_ID('tempdb..##temp2') IS NULL
BEGIN
EXEC (
'create table ##temp2 (id int)
insert ##temp2 values(1)'
)
SELECT *
FROM ##temp2
END
Don't forget to delete the ##temp2 object manually once you're done with it:
IF (OBJECT_ID('tempdb..##temp2') IS NOT NULL)
BEGIN
DROP Table ##temp2
END
Note: Don't use method 2 if you don't know the full structure of the database.
I had the same issue that @Muflix mentioned. When you don't know the columns being returned, or they are being generated dynamically, what I've done is create a global table with a unique id, then delete it when I'm done with it. This looks something like what's shown below:
DECLARE @DynamicSQL NVARCHAR(MAX)
DECLARE @DynamicTable VARCHAR(255) = 'DynamicTempTable_' + CONVERT(VARCHAR(36), NEWID())
DECLARE @DynamicColumns NVARCHAR(MAX)
--Get "@DynamicColumns", example: SET @DynamicColumns = '[Column1], [Column2]'
SET @DynamicSQL = 'SELECT ' + @DynamicColumns + ' INTO [##' + @DynamicTable + ']' +
    ' FROM [dbo].[TableXYZ]'
EXEC sp_executesql @DynamicSQL
SET @DynamicSQL = 'IF OBJECT_ID(''tempdb..##' + @DynamicTable + ''' , ''U'') IS NOT NULL ' +
    ' BEGIN DROP TABLE [##' + @DynamicTable + '] END'
EXEC sp_executesql @DynamicSQL
Certainly not the best solution, but this seems to work for me.
I would strongly suggest you have a read through http://www.sommarskog.se/arrays-in-sql-2005.html
Personally I like the approach of passing a comma-delimited text list, then parsing it with a text-to-table function and joining to it; a minimal sketch of such a function follows. The temp table approach can work if you create the table first in the connection, but it feels a bit messier.
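For completeness, a minimal sketch of such a text-to-table function and how you would join to it (dbo.SplitList and the integer cast are illustrative; the Sommarskog article linked above covers faster variants):

CREATE FUNCTION dbo.SplitList (@list nvarchar(max))
RETURNS @result TABLE (value nvarchar(4000))
AS
BEGIN
    -- walk the string, emitting one row per comma-separated value
    DECLARE @pos int
    SET @pos = CHARINDEX(',', @list)
    WHILE @pos > 0
    BEGIN
        INSERT @result (value) VALUES (LTRIM(RTRIM(LEFT(@list, @pos - 1))))
        SET @list = SUBSTRING(@list, @pos + 1, LEN(@list))
        SET @pos = CHARINDEX(',', @list)
    END
    IF LEN(@list) > 0
        INSERT @result (value) VALUES (LTRIM(RTRIM(@list)))
    RETURN
END
GO

-- join to the parsed list instead of concatenating it into dynamic SQL
SELECT i.*
FROM Items i
JOIN dbo.SplitList('1,2,3') s ON i.ItemId = CAST(s.value AS int)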
Result sets from dynamic SQL are returned to the client. I have done this quite a lot.
You're right about issues with sharing data through temp tables and variables and things like that between the SQL and the dynamic SQL it generates.
I think in trying to get your temp table working, you have probably got some things confused, because you can definitely get data from a SP which executes dynamic SQL:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
DECLARE @sql AS VARCHAR(MAX) = 'SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + ''''
EXEC (@sql)
END
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO
Also:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
DECLARE @sql AS VARCHAR(MAX) = 'SELECT * INTO #temp FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + '''; SELECT * FROM #temp;'
EXEC (@sql)
END
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO

Performing one SQL command on multiple tables (without re-writing the SQL command)

I have a series of SQL commands I would like to run on about 40 different tables. There must be a way to do this without writing 40 different commands...
I am running this in SQL Server. All tables have different names, and the column I want to manipulate (VariableColumn below) also varies in name. I do have a list of the names for both the tables and the columns.
The end effect of this code: I am connecting VariableColumn as a foreign key to the DOC_ID column in the DOCS table. Some tables have values in their VariableColumn that do not correspond to any in the DOC_ID column (outdated data), so I am first deleting any such rows.
The command:
-- Delete rows in VariableTable that have invalid VariableColumn values
DELETE FROM VariableTable
FROM VariableTable v
LEFT OUTER JOIN DOCS d
ON d.DOC_ID = v.VariableColumn
WHERE d.DOC_ID IS NULL
-- Add foreign key to VariableTable table
ALTER TABLE VariableTable
ADD CONSTRAINT FK_DOCS_VariableTable_VariableColumn FOREIGN KEY (VariableColumn)
REFERENCES DOCS(DOC_ID);
Since you have the list of table and column names, you can put them in a table and use them in a cursor to build and execute your commands.
For example:
DECLARE @Target TABLE (tbl SYSNAME, col SYSNAME)
INSERT @Target VALUES ('tbl_1','col_a'),('tbl_2','col_b')
DECLARE @tbl SYSNAME
DECLARE @col SYSNAME
DECLARE @sql NVARCHAR(MAX)
DECLARE work CURSOR FOR
SELECT tbl, col
FROM @Target
OPEN work
FETCH NEXT FROM work INTO @tbl, @col
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'PRINT ''Do something to table: ' + @tbl + ' column: ' + @col + ''''
EXECUTE sp_executesql @sql
FETCH NEXT FROM work INTO @tbl, @col
END
CLOSE work
DEALLOCATE work
Assuming this is a one-off batch you want to run, you could generate it with a simple generator such as NimbleText (http://NimbleText.com/Live)
The data is a list of the tables and columns you want to edit, e.g.
Person, PersonID
Document, DocumentID
Vehicle, VehicleID
etc...
The pattern is like this:
-- Delete rows in $0 that have invalid $1 values
DELETE FROM $0
FROM $0 v
LEFT OUTER JOIN DOCS d
ON d.DOC_ID = v.$1
WHERE d.DOC_ID IS NULL
-- Add foreign key to $0 table
ALTER TABLE $0
ADD CONSTRAINT FK_DOCS_$0_$1 FOREIGN KEY ($1)
REFERENCES DOCS(DOC_ID);
Press "Calculate", grab the result, and execute it in SQL.
Trivial PowerShell to modify a template SQL file, plus SqlCommand to run it, comes to my mind. Not yours?
This seems like a job for a stored procedure. I've never used them in SQL Server, but you can check the how-to in the MSDN stored procedures guide.

How to check if a column value is referred in some other table as that column is a foreign key in other table (sql server)?

I have a table whose primary key "ID" field is used in many other tables as a foreign key.
How can I check whether a particular record from this table (for example the first record, "ID = 1") is used in another table?
If a particular record is used in some other table I don't want to do any operations on that row.
Very blunt solution:
try to delete the record.
If you get an integrity constraint violation, this means it's referenced by another record; catch this exception
If the delete worked, rollback your delete
I said it was blunt :)
On the surface, your question doesn't make sense. Let's look at some data.
users
user_id  user_email
-------  -----------
1        abc@def.com
2        def@hij.com

user_downloads
user_id  filename  downloaded_starting
-------  --------  -------------------
1        123.pdf   2013-05-29 08:00:13
1        234.pdf   2013-05-29 08:05:27
1        345.pdf   2013-05-29 08:10:33
There's a foreign key on user_downloads: foreign key (user_id) references users (user_id).
As long as you don't also declare that foreign key as ON DELETE CASCADE, then you can't delete the corresponding row in users. You don't have to check for the presence of rows in other tables, and you shouldn't. In a big system, that might mean checking hundreds of tables.
If you don't declare the foreign key as ON UPDATE CASCADE, you can't update the user_id if it's referenced by any other table. So, again, you don't have to check.
If you use the email address as the target for a foreign key reference, then, once again, don't use ON DELETE CASCADE and don't use ON UPDATE CASCADE. Don't use those declarations, and you don't have to check. If you don't use the email address as the target for a foreign key reference, it doesn't make sense to prevent updates to it.
So if you build your tables right, you don't have to check any of that stuff.
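To make that concrete, here is a minimal sketch of the example schema above (column types are assumptions). With the constraint declared and no cascade options, the engine itself rejects the delete:

CREATE TABLE users (
    user_id    int PRIMARY KEY,
    user_email varchar(100) NOT NULL
);

CREATE TABLE user_downloads (
    user_id             int NOT NULL,
    filename            varchar(255) NOT NULL,
    downloaded_starting datetime NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users (user_id)  -- no ON DELETE / ON UPDATE CASCADE
);

-- fails with a foreign key violation while user 1 still has downloads:
-- DELETE FROM users WHERE user_id = 1;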
You could use a trigger to roll back any transaction that gives a true for
"where exists( select * from otherTable where fk = id union select * from anotherTable where fk = id union ... )"
It won't be too heavy if you have an index on each of those tables that starts with the fk column (which you should have for general speed anyway); SQL will just check the index for the id, i.e. a single read for each table checked.
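A rough sketch of that trigger idea; ParentTable, otherTable, anotherTable, fk and id are placeholders from the description above, not real objects:

CREATE TRIGGER trg_Parent_BlockDelete
ON ParentTable
AFTER DELETE
AS
BEGIN
    -- each EXISTS is a single index seek when the fk columns are indexed
    IF EXISTS (SELECT * FROM otherTable   o JOIN deleted d ON o.fk = d.id)
    OR EXISTS (SELECT * FROM anotherTable a JOIN deleted d ON a.fk = d.id)
    BEGIN
        RAISERROR ('Row is still referenced by another table; rolling back.', 16, 1)
        ROLLBACK TRANSACTION
    END
END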
Use the following if you do not wish to use a trial and error method:
DECLARE @schema NVARCHAR(20)
DECLARE @table NVARCHAR(50)
DECLARE @column NVARCHAR(50)
DECLARE @SQL NVARCHAR(1000)
DECLARE @ID INT
DECLARE @exists INT
DECLARE @x NVARCHAR(100)
SELECT @x = '@exists int output', @exists = 0, @ID = 1, @schema = 'dbo', @table = 'Gebruiker', @column = 'GebruikerHasGebruiker_id'
SELECT @SQL = 'SELECT @exists = 1 WHERE EXISTS( ' + STUFF((
SELECT ' UNION ALL SELECT ' + U2.COLUMN_NAME + ' AS ID FROM ' + U2.TABLE_SCHEMA + '.' + U2.TABLE_NAME + ' WHERE ' + U2.COLUMN_NAME + ' = ' + cast(@ID as VARCHAR(10))
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS R INNER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE U ON R.UNIQUE_CONSTRAINT_NAME = U.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE U2 ON R.CONSTRAINT_NAME = U2.CONSTRAINT_NAME
WHERE U.TABLE_SCHEMA = @schema
AND U.TABLE_NAME = @table
AND U.COLUMN_NAME = @column
FOR XML PATH('')
),1,11, '') + ')'
EXEC sp_executesql @SQL, @x, @exists = @exists OUTPUT
IF 1 <> @exists
BEGIN
-- do your stuff here
END
But in 99% of the cases where you could do this, it's overkill. It is faster if you already know the FKs and just write the query directly.
Edit:
A little explanation. This dynamic SQL looks in the INFORMATION_SCHEMA views to see all relations with other tables. It uses that information to create a query to check if your ID exists in those tables. With a UNION it adds all the results and returns 1 if any rows are found. This can be used for any database, for any column, as long as you don't check for a FK over multiple columns.
Using this solution you don't need to hard code all referenced tables.
use tempdb
go
/* provide test data */
if OBJECT_ID(N't2') is not null
drop table t2
if OBJECT_ID(N't1') is not null
drop table t1
create table t1(i int not null primary key)
create table t2(i int not null, constraint fk_t1_t2 foreign key (i) references t1(i))
go
insert into t1 values(1),(2)
insert into t2 values(1)
/* check whether the primary key value is referenced in other tables */
declare @forCheck int = 1 /* id to be checked for references in other tables */
declare @isReferenced bit = 0
begin tran
begin try
delete from t1 where i = @forCheck
end try
begin catch
set @isReferenced = 1
end catch
rollback
select @isReferenced
The approach should be to collect all the dependent objects and query them to check whether the parent table's records exist.
I use a procedure which returns the dependent objects.
The reason I cannot post that procedure here is that it exceeds the 30,000-character limit for a post (it is 48,237 characters). Let me know your mail id and I will send you the procedure.
Iterate through the result of the procedure to check whether any dependent column holds your primary table's data; a short query to enumerate those dependents is sketched below.
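In the meantime, a query along these lines lists the dependent tables and columns so you can iterate over them ('YourParentTable' is a placeholder, not part of the original procedure):

DECLARE @parentTable sysname = 'YourParentTable'

-- one row per referencing column, built from the catalog views
SELECT
    OBJECT_SCHEMA_NAME(fkc.parent_object_id)              AS referencing_schema,
    OBJECT_NAME(fkc.parent_object_id)                     AS referencing_table,
    COL_NAME(fkc.parent_object_id, fkc.parent_column_id)  AS referencing_column
FROM sys.foreign_key_columns fkc
WHERE fkc.referenced_object_id = OBJECT_ID(@parentTable)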

Newbie T-SQL dynamic stored procedure -- how can I improve it?

I'm new to T-SQL; all my experience is in a completely different database environment (Openedge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough!
This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of c**k-ups and gotchas in it that I know nothing about.
The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the dba as a timed job.
Could I have your suggestions as to how to make it fit best-practice? To bullet-proof it?
ALTER PROCEDURE [dbo].[copyTable2Table]
    @sdb varchar(30),
    @stable varchar(30),
    @tdb varchar(30),
    @ttable varchar(30),
    @raiseerror bit = 1,
    @debug bit = 0
as
begin
    set nocount on

    declare @source varchar(65)
    declare @target varchar(65)
    declare @dropstmt varchar(100)
    declare @insstmt varchar(100)
    declare @ErrMsg nvarchar(4000)
    declare @ErrSeverity int

    set @source = '[' + @sdb + '].[dbo].[' + @stable + ']'
    set @target = '[' + @tdb + '].[dbo].[' + @ttable + ']'
    set @dropStmt = 'drop table ' + @target
    set @insStmt = 'select * into ' + @target + ' from ' + @source
    set @errMsg = ''
    set @errSeverity = 0

    if @debug = 1
        print('Drop:' + @dropStmt + ' Insert:' + @insStmt)

    -- drop the target table, copy the source table to the target
    begin try
        begin transaction
        exec(@dropStmt)
        exec(@insStmt)
        commit
    end try
    begin catch
        if @@trancount > 0
            rollback
        select @errMsg = error_message(),
               @errSeverity = error_severity()
    end catch

    -- update the log table
    insert into HHG_system.dbo.copyaudit
        (copytime, copyuser, source, target, errmsg, errseverity)
    values( getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity)

    if @debug = 1
        print ( 'Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity) )

    -- handle errors, return value
    if @errMsg <> ''
    begin
        if @raiseError = 1
            raiserror(@errMsg, @errSeverity, 1)
        return 1
    end
    return 0
END
Thanks!
I'm speaking from a Sybase perspective here (I'm not sure if you're using SQLServer or Sybase) but I suspect you'll find the same issues in either environment, so here goes...
Firstly, I'd echo the comments made in earlier answers about the assumed dbo ownership of the tables.
Then I'd check with your DBAs that this stored proc will be granted permissions to drop tables in any database other than tempdb. In my experience, DBAs hate this and rarely provide it as an option due to the potential for disaster.
DDL operations like drop table are only allowed in a transaction if the database has been configured with the option sp_dboption my_database, "ddl in tran", true. Generally speaking, things done inside transactions involving DDL should be very short, since they will lock up frequently referenced system tables like sysobjects and, in doing so, block the progress of other dataserver processes. Given that we've no way of knowing how much data needs to be copied, it could end up being a very long transaction which locks things up for everyone for a while. What's more, the DBAs will need to run that command on every database that might contain a table named by this stored proc's @target parameter. If you were to use a transaction for the drop table, it'd be a good idea to make it separate from any transaction handling the data insertion.
While you can do drop table commands in a transaction if the ddl in tran option is set, it's not possible to do select * into inside a transaction. Since select * into is a combination of table creation with insert, it would implicitly lock up the database (possibly for a while if there's a lot of data) if it were executed in a transaction.
If there are foreign key constraints on your @target table, you won't be able to just drop it without getting rid of the foreign key constraints first.
If you've got an 'id' column which relies upon a numeric identity type (often used as an autonumber feature to generate values for surrogate primary keys), be aware that you won't be able to copy the values from the @source table's 'id' column across to the @target table's id column.
I'd also check the size of your transaction log in any possible database which might hold a @target table in relation to the size of any possible @source table. Given that all the copying is being done in a single transaction, you may well find yourself copying a table so large that it blows out the transaction log in your prod dataserver, bringing all processes to a crashing halt. I've seen people using chunking to achieve this over particularly large tables, but then you end up needing to put your own checks into the code to make sure that you've actually captured a consistent snapshot of the table.
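For what it's worth, a chunked copy in SQL Server syntax might look roughly like this (table, column and batch size are placeholders); each batch is its own implicit transaction, so the log only has to hold one batch at a time, with the consistency caveat above still applying:

DECLARE @batch int = 10000

WHILE 1 = 1
BEGIN
    -- copy the next batch of rows that are not in the target yet
    INSERT INTO TargetTable (Id, Col1, Col2)
    SELECT TOP (@batch) s.Id, s.Col1, s.Col2
    FROM SourceTable s
    WHERE NOT EXISTS (SELECT 1 FROM TargetTable t WHERE t.Id = s.Id)
    ORDER BY s.Id

    IF @@ROWCOUNT < @batch BREAK
END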
Just a thought - if this is being used to get snapshots, how about BCP? That could be used to dump out the contents of the table giving you the snapshot you're looking for. If you use the -c option you'd even get it in a human readable form.
All the best,
Stuart
This line seems a bit dangerous:
set @dropStmt = 'drop table ' + @target
What if the target table doesn't exist?
I'd try to safeguard that somehow - something like:
set @dropStmt =
    'if object_id(''' + @target + ''') IS NOT NULL DROP TABLE ' + @target
That way, the DROP TABLE statement is only issued when the call to OBJECT_ID returns something other than NULL (a NULL return means the table doesn't exist), so the table is guaranteed to exist when it is dropped.
Firstly, replace all the code like
set @source = '[' + @sdb + '].[dbo].[' + @stable + ']'
with code like
set @source = QuoteName(@sdb) + '.[dbo].' + QuoteName(@stable)
Secondly, your procedure assumes all objects are owned by dbo - this may not be the case.
Thirdly, your parameters are too short at varchar(30) - object names can be up to 128 characters, the length of sysname.
I find the whole process you wrote to be terribly dangerous. Even if this is run by the database and not by a user, dynamic SQL is poor practice. Using it to do this to any table at any time is dangerous and would be outright forbidden in the databases I work with. It is way too easy to accidentally drop the wrong tables! Nor is it possible to correctly test all the possible values the sp could run with, so this could be buggy code as well and you won't know until it has been in production.
Further, in dropping and recreating with select into, you will not have indexes or foreign key constraints or the other things you need for performance and data integrity. BAD BAD IDEA in general (OK if these are just staging tables of some type, but not for anything else).
This is a task for SSIS. We save our SSIS packages and commit them to Subversion just like everything else. We can do a diff on them (they are just XML files) and we can tell what is running on prod and what configuration we are using.
You should not drop and recreate tables unless they are relatively small. You should update existing records, delete records no longer needed, and only add new ones. If you have a million records and 27,000 have changed, 10 have been deleted, and 3,000 are new, why drop and insert all 1,000,000 records? It is wasteful of server resources, could cause locking and blocking issues, and could create issues if users are looking at the tables at the time you run this and the data suddenly disappears and takes some minutes to come back. Users get cranky about that. A single MERGE statement, sketched below, covers all three cases in one pass.
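For what it's worth, here is a minimal sketch of that incremental approach using MERGE (SQL Server 2008 and later; table and column names are placeholders, not the poster's schema):

MERGE TargetTable AS t
USING SourceTable AS s
    ON t.Id = s.Id
WHEN MATCHED AND (t.Col1 <> s.Col1 OR t.Col2 <> s.Col2) THEN
    UPDATE SET t.Col1 = s.Col1, t.Col2 = s.Col2   -- update only rows that actually changed
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Col1, Col2) VALUES (s.Id, s.Col1, s.Col2)   -- add new rows
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;   -- remove rows no longer in the source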