Drop all temporary tables for an instance - sql

I was wondering how / if it's possible to have a query which drops all temporary tables?
I've been trying to work something out using the tempdb.sys.tables, but am struggling to format the name column to make it something that can then be dropped - another factor making things a bit trickier is that often the temp table names contain a '_' which means doing a replace becomes a bit more fiddly (for me at least!)
Is there anything I can use that will drop all temp tables (local or global) without having to drop them all individually on a named basis?
Thanks!

The point of temporary tables is that they are.. temporary. As soon as they go out of scope
#temp created in a stored proc : stored proc exits
#temp created in a session : session disconnects
##temp : session that created it disconnects
the table disappears. If you find that you need to remove temporary tables manually, you need to revisit how you are using them.
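To make that concrete, here's a minimal sketch (the proc name is made up); a #temp created inside a proc is gone the moment the proc returns:
create procedure dbo.DemoTempScope as
begin
create table #inside (x int)
insert #inside values (1)
select * from #inside -- works: still in scope
end
go
exec dbo.DemoTempScope
select * from #inside -- fails: #inside was dropped when the proc exited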
For the global ones, this will generate and execute the statement to drop them all.
declare @sql nvarchar(max)
select @sql = isnull(@sql+';', '') + 'drop table ' + quotename(name)
from tempdb..sysobjects
where name like '##%'
exec (@sql)
It is a bad idea to drop other sessions' [global] temp tables though.
For the local (to this session) temp tables, just disconnect and reconnect again.

The version below avoids all of the hassle of dealing with the '_'s. I just wanted to get rid of non-global temp tables, hence the '#[^#]%' in my WHERE clause; drop the [^#] if you want to drop global temp tables as well, or use '##%' if you only want to drop global temp tables.
The DROP statement seems happy to take the full name with the '_', etc., so we don't need to manipulate and edit these. The OBJECT_ID(...) IS NOT NULL check allows me to skip tables that were not created by my session; presumably, since those tables should not be 'visible' to me, they come back with NULL from this call. The QUOTENAME is needed to make sure the name is correctly quoted / escaped. If you have no temp tables, @d_sql will still be the empty string, so we check for that before printing / executing.
DECLARE @d_sql NVARCHAR(MAX)
SET @d_sql = ''
SELECT @d_sql = @d_sql + 'DROP TABLE ' + QUOTENAME(name) + ';
'
FROM tempdb..sysobjects
WHERE name like '#[^#]%'
AND OBJECT_ID('tempdb..'+QUOTENAME(name)) IS NOT NULL
IF @d_sql <> ''
BEGIN
PRINT @d_sql
-- EXEC( @d_sql )
END

In a stored procedure they are dropped automatically when the execution of the proc completes.
I normally come across the desire for this when I copy code out of a stored procedure to debug part of it and the stored proc does not contain the drop table commands.
Closing and reopening the connection works as stated in the accepted answer. Rather than doing this manually after each execution, you can enable SQLCMD mode on the Query menu in SSMS,
and then use the :connect command (adjust to your server/instance name):
:connect (local)\SQL2014
create table #foo(x int)
create table #bar(x int)
select *
from #foo
Can be run multiple times without problems. The messages tab shows
Connecting to (local)\SQL2014...
(0 row(s) affected)
Disconnecting connection from (local)\SQL2014...

Related

sp_executesql with user defined table type not working with two databases [duplicate]

I'm using SQL Server 2008.
How can I pass Table Valued parameter to a Stored procedure across different Databases, but same server?
Should I create the same table type in both databases?
Please, give an example or a link according to the problem.
Thanks for any kind of help.
In response to this comment (assuming I'm correct that using TVPs between databases isn't possible):
What choice do I have in this situation? Using XML type?
The purist approach would be to say that if both databases are working with the same data, they ought to be merged into a single database. The pragmatist realizes that this isn't always possible - but since you can obviously change both the caller and callee, maybe just use a temp table that both stored procs know about.
I don't believe it's possible - you can't reference a table type from another database, and even with identical type definitions in both DBs, a value of one type isn't assignable to the other.
You don't pass the temp table between databases. A temp table is always stored in tempdb, and is accessible to your connection, so long as the connection is open and the temp table isn't dropped.
So, you create the temp table in the caller:
CREATE TABLE #Values (ID int not null,ColA varchar(10) not null)
INSERT INTO #Values (ID,ColA)
/* Whatever you do to populate the table */
EXEC OtherDB..OtherProc
And then in the callee:
CREATE PROCEDURE OtherProc
/* No parameter passed */
AS
SELECT * from #Values
Table UDTs are only valid for stored procs within the same database.
So yes, you would have to create the type in each database and reference it in the stored procs - e.g. just run the first part of this example in both DBs http://msdn.microsoft.com/en-us/library/bb510489.aspx.
If you don't need the efficiency you can always use other methods - i.e. pass an xml document parameter or have the stored proc expect a temp table with the input data.
Edit: added example
create database Test1
create database Test2
go
use Test1
create type PersonalMessage as TABLE
(Message varchar(50))
go
create proc InsertPersonalMessage @Message PersonalMessage READONLY AS
select * from @Message
go
use Test2
create type PersonalMessage as TABLE
(Message varchar(50))
go
create proc InsertPersonalMessage @Message PersonalMessage READONLY AS
select * from @Message
go
use Test1
declare @mymsg PersonalMessage
insert @mymsg select 'oh noes'
exec InsertPersonalMessage @mymsg
go
use Test2
declare @mymsg2 PersonalMessage
insert @mymsg2 select 'oh noes'
exec InsertPersonalMessage @mymsg2
Disadvantage is that there are two copies of the data.
But you would be able to run the batch against each database simultaneously.
Whether this is any better than using a temp table is really down to what processing/data sizes you have - btw, to use a temp table from a stored proc you just access it from the stored proc's code (and it fails if it doesn't exist).
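Since the xml option only gets a passing mention above, here is a rough sketch of it (element names are made up). An xml parameter crosses the database boundary without needing a shared table type:
use Test2
go
create proc InsertPersonalMessageXml @Messages xml as
select m.value('.', 'varchar(50)') as Message
from @Messages.nodes('/messages/message') as t(m)
go
use Test1
go
declare @x xml
set @x = '<messages><message>oh noes</message></messages>'
exec Test2.dbo.InsertPersonalMessageXml @x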
Another way to solve this (though not necessarily the correct way) is to only utilize the UDT as a part of a dynamic SQL call.
USE [db1]
CREATE PROCEDURE [dbo].[sp_Db2Data_Sync]
AS
BEGIN
/*
*
* Presumably, you have some other logic here that requires this sproc to live in db1.
* Maybe it's how you get your identifier?
*
*/
DECLARE @SQL VARCHAR(MAX) = '
USE [db2]
DECLARE @db2tvp tableType
INSERT INTO @db2tvp
SELECT dataColumn1
FROM db2.dbo.tblData td
WHERE td.Id = ' + CAST(@YourIdentifierHere AS VARCHAR) + '
EXEC db2.dbo.sp_BulkData_Sync @db2tvp
'
EXEC(@SQL)
END
It's definitely not a purist approach, and it doesn't work for every use case, but it is technically an option.

Passing temp table from one execution to another

I want to pass a temp table from one execution path to another one nested in side it
What I have tried is this:
DECLARE @SQLQuery AS NVARCHAR(MAX)
SET @SQLQuery = '
--populate #tempTable with values
EXECUTE('SELECT TOP (100) * FROM ' + tempdb..#tempTable)
EXECUTE sp_executesql @SQLQuery
but it fails with this error message:
Incorrect syntax near 'tempdb'
Is there a another\better way to pass temporary table between execution contexts?
You can create a global temp table using the ##tablename syntax (double hash). The difference is explained on the TechNet site:
There are two types of temporary tables: local and global. They differ from each other in their names, their visibility, and their availability. Local temporary tables have a single number sign (#) as the first character of their names; they are visible only to the current connection for the user, and they are deleted when the user disconnects from the instance of SQL Server. Global temporary tables have two number signs (##) as the first characters of their names; they are visible to any user after they are created, and they are deleted when all users referencing the table disconnect from the instance of SQL Server.
For example, if you create the table employees, the table can be used by any person who has the security permissions in the database to use it, until the table is deleted. If a database session creates the local temporary table #employees, only the session can work with the table, and it is deleted when the session disconnects. If you create the global temporary table ##employees, any user in the database can work with this table. If no other user works with this table after you create it, the table is deleted when you disconnect. If another user works with the table after you create it, SQL Server deletes it after you disconnect and after all other sessions are no longer actively using it.
If a temporary table is created with a named constraint and the temporary table is created within the scope of a user-defined transaction, only one user at a time can execute the statement that creates the temp table. For example, if a stored procedure creates a temporary table with a named primary key constraint, the stored procedure cannot be executed simultaneously by multiple users.
The next suggestion may be even more helpful:
Many uses of temporary tables can be replaced with variables that have the table data type. For more information about using table variables, see table (Transact-SQL).
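For example, a minimal sketch of the table-variable alternative:
declare @employees table (ID int not null, Name varchar(50) not null)
insert @employees (ID, Name) values (1, 'Alice')
insert @employees (ID, Name) values (2, 'Bob')
select * from @employees -- scoped to this batch; no DROP needed
Note, though, that unlike a #temp table, a table variable is not visible inside a nested sp_executesql batch, so it does not help with passing data between execution contexts.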
Your temp table will be visible inside the dynamic sql with no problem. I am not sure if you are creating the temp table inside the dynamic sql or before.
Here it is with the table created BEFORE the dynamic sql.
create table #Temp(SomeValue varchar(10))
insert #Temp select 'made it'
exec sp_executesql N'select * from #Temp'
The reason for your syntax error is that you are doing an unnecessary EXECUTE inside an EXECUTE, and you didn't escape the nested single-quote. This would be the correct way to write it:
SET @SQLQuery = '
--populate #tempTable with values
SELECT TOP 100 * FROM tempdb..#tempTable'
However, I have a feeling that the syntax error is only the beginning of your problems. Impossible to tell what you're ultimately trying to do here, only seeing this much of the code, though.
Your quotations are messed up. Try:
SET @SQLQuery = '
--populate #tempTable with values
EXECUTE(''SELECT TOP 100 * FROM tempdb..#tempTable'')'

Statement 'SELECT INTO' is not supported in this version of SQL Server - SQL Azure

I am getting
Statement 'SELECT INTO' is not supported in this version of SQL Server
in SQL Server
for the below query inside stored procedure
DECLARE @sql NVARCHAR(MAX)
,@sqlSelect NVARCHAR(MAX) = ''
,@sqlFrom NVARCHAR(MAX) = ''
,@sqlTempTable NVARCHAR(MAX) = '#itemSearch'
,@sqlInto NVARCHAR(MAX) = ''
,@params NVARCHAR(MAX)
SET @sqlSelect ='SELECT
,IT.ITEMNR
,IT.USERNR
,IT.ShopNR
,IT.ITEMID'
SET @sqlFrom =' FROM dbo.ITEM AS IT'
SET @sqlInto = ' INTO ' + @sqlTempTable + ' ';
IF (@cityId > 0)
BEGIN
SET @sqlFrom = @sqlFrom +
' INNER JOIN dbo.CITY AS CI2
ON CI2.CITYID = @cityId'
SET @sqlSelect = @sqlSelect +
'CI2.LATITUDE AS CITYLATITUDE
,CI2.LONGITUDE AS CITYLONGITUDE'
END
SELECT @params = N'@cityId int'
SET @sql = @sqlSelect + @sqlInto + @sqlFrom
EXEC sp_executesql @sql, @params
I have around 50,000 records, so I decided to use a temp table. But I was surprised to see this error.
How can I achieve the same in SQL Azure?
Edit: this blog post http://blogs.msdn.com/b/sqlazure/archive/2010/05/04/10007212.aspx suggests CREATEing a table inside the stored procedure for storing data instead of a temp table. Is that safe under concurrency? Will it hurt performance?
Adding some points taken from http://blog.sqlauthority.com/2011/05/28/sql-server-a-quick-notes-on-sql-azure/
Each table must have a clustered index. Tables without a clustered index are not supported.
Each connection can use a single database. Multiple databases in a single transaction are not supported.
'USE DATABASE' cannot be used in Azure.
Global temp tables (or temp objects) are not supported.
As there is no concept of a cross-database connection, linked servers are not a concept in Azure at this moment.
SQL Azure is a shared environment, and because of that there is no concept of a Windows login.
Always drop tempdb objects once they are no longer needed, as they create pressure on tempdb.
During bulk insert, use the batchsize option to limit the number of rows to be inserted. This will limit the usage of transaction log space.
Avoid unnecessary usage of grouping or blocking ORDER BY operations, as they lead to high memory usage.
SELECT INTO is one of the many things that you can unfortunately not perform in SQL Azure.
What you'd have to do is first create the temporary table, then perform the insert. Something like:
CREATE TABLE #itemSearch (ITEMNR INT, USERNR INT, ShopNR INT, ITEMID INT)
INSERT INTO #itemSearch
SELECT IT.ITEMNR, IT.USERNR, IT.ShopNR, IT.ITEMID
FROM dbo.ITEM AS IT
The new Azure DB Update preview has this problem resolved:
The V12 preview enables you to create a table that has no clustered
index. This feature is especially helpful for its support of the T-SQL
SELECT...INTO statement which creates a table from a query result.
http://azure.microsoft.com/en-us/documentation/articles/sql-database-preview-whats-new/
Create the table using the # prefix, e.g. create table #itemsearch, then use insert into. The scope of the temp table is limited to the session, so there will be no concurrency problems.
Well, as we all know, a SQL Azure table must have a clustered index; that is why SELECT INTO fails to copy data from one table into another.
If you want to migrate, you must first create a table with the same structure and then execute an INSERT INTO statement.
For a temporary table (prefixed with #) you don't need to create an index.
How do you create the index and execute the insert into for a temp table?
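To illustrate the answer above, a sketch with made-up names; a permanent SQL Azure table needs an explicit clustered index, while a #temp table does not:
create table dbo.ItemSearchCopy
(
ITEMNR int not null,
constraint PK_ItemSearchCopy primary key clustered (ITEMNR) -- required for Azure tables
)
insert into dbo.ItemSearchCopy (ITEMNR)
select ITEMNR from dbo.ITEM

create table #itemSearch (ITEMNR int) -- no index needed on a temp table
insert into #itemSearch select ITEMNR from dbo.ITEM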

Newbie T-SQL dynamic stored procedure -- how can I improve it?

I'm new to T-SQL; all my experience is in a completely different database environment (Openedge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough!
This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of c**k-ups and gotchas in it that I know nothing about.
The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the dba as a timed job.
Could I have your suggestions as to how to make it fit best-practice? To bullet-proof it?
ALTER PROCEDURE [dbo].[copyTable2Table]
@sdb varchar(30),
@stable varchar(30),
@tdb varchar(30),
@ttable varchar(30),
@raiseerror bit = 1,
@debug bit = 0
as
begin
set nocount on
declare @source varchar(65)
declare @target varchar(65)
declare @dropstmt varchar(100)
declare @insstmt varchar(100)
declare @ErrMsg nvarchar(4000)
declare @ErrSeverity int
set @source = '[' + @sdb + '].[dbo].[' + @stable + ']'
set @target = '[' + @tdb + '].[dbo].[' + @ttable + ']'
set @dropStmt = 'drop table ' + @target
set @insStmt = 'select * into ' + @target + ' from ' + @source
set @errMsg = ''
set @errSeverity = 0
if @debug = 1
print('Drop:' + @dropStmt + ' Insert:' + @insStmt)
-- drop the target table, copy the source table to the target
begin try
begin transaction
exec(@dropStmt)
exec(@insStmt)
commit
end try
begin catch
if @@trancount > 0
rollback
select @errMsg = error_message(),
@errSeverity = error_severity()
end catch
-- update the log table
insert into HHG_system.dbo.copyaudit
(copytime, copyuser, source, target, errmsg, errseverity)
values( getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity)
if @debug = 1
print ( 'Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity) )
-- handle errors, return value
if @errMsg <> ''
begin
if @raiseError = 1
raiserror(@errMsg, @errSeverity, 1)
return 1
end
return 0
END
Thanks!
I'm speaking from a Sybase perspective here (I'm not sure if you're using SQLServer or Sybase) but I suspect you'll find the same issues in either environment, so here goes...
Firstly, I'd echo the comments made in earlier answers about the assumed dbo ownership of the tables.
Then I'd check with your DBAs that this stored proc will be granted permissions to drop tables in any database other than tempdb. In my experience, DBAs hate this and rarely provide it as an option due to the potential for disaster.
DDL operations like drop table are only allowed in a transaction if the database has been configured with the option sp_dboption my_database, "ddl in tran", true. Generally speaking, things done inside transactions involving DDL should be very short since they will lock up the frequently referenced system tables like sysobjects and, in doing so, block the progress of other dataserver processes. Given that we've no way of knowing how much data needs to be copied, it could end up being a very long transaction which locks things up for everyone for a while. What's more, the DBAs will need to run that command on every database which might contain a @target table of this stored proc. If you were to use a transaction for the drop table, it'd be a good idea to make it separate from any transaction handling the data insertion.
While you can do drop table commands in a transaction if the ddl in tran option is set, it's not possible to do select * into inside a transaction. Since select * into is a combination of table creation with insert, it would implicitly lock up the database (possibly for a while if there's a lot of data) if it were executed in a transaction.
If there are foreign key constraints on your @target table, you won't be able to just drop it without getting rid of the foreign key constraints first.
If you've got an 'id' column which relies upon a numeric identity type (often used as an autonumber feature to generate values for surrogate primary keys), be aware that you won't be able to copy the values from the @source table's 'id' column across to the @target table's id column.
I'd also check the size of your transaction log in any possible database which might hold a @target table, in relation to the size of any possible @source table. Given that all the copying is being done in a single transaction, you may well find yourself copying a table so large that it blows out the transaction log in your prod dataserver, bringing all processes to a crashing halt. I've seen people using chunking to achieve this over particularly large tables, but then you end up needing to put your own checks into the code to make sure that you've actually captured a consistent snapshot of the table.
Just a thought - if this is being used to get snapshots, how about BCP? That could be used to dump out the contents of the table giving you the snapshot you're looking for. If you use the -c option you'd even get it in a human readable form.
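For example, something along these lines dumps a table snapshot in character format (the server, table, and file names are made up; bcp will prompt for a password):
bcp my_database.dbo.source_table out source_table_snapshot.txt -c -S my_server -U my_user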
All the best,
Stuart
This line seems a bit dangerous:
set @dropStmt = 'drop table ' + @target
What if the target table doesn't exist?
I'd try to safeguard that somehow - something like:
set @dropStmt =
'if object_id(''' + @target + ''') IS NOT NULL DROP TABLE ' + @target
That way, the DROP TABLE statement is only issued if the call to OBJECT_ID(tablename) doesn't return NULL (NULL means the table doesn't exist), so the table is guaranteed to exist when it's dropped.
Firstly, replace all the code like
set @source = '[' + @sdb + '].[dbo].[' + @stable + ']'
with code like
set @source = QuoteName(@sdb) + '.[dbo].' + QuoteName(@stable)
Secondly, your procedure assumes all objects are owned by dbo - this may not be the case.
Thirdly, your variable names are too short at 30 characters - 128 is the length of sysname.
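Putting those together, a quick sketch of how the declarations and name-building might look (sample values are made up):
declare @sdb sysname, @stable sysname, @source nvarchar(300)
set @sdb = 'SourceDb'
set @stable = 'Some_Table'
set @source = QuoteName(@sdb) + '.[dbo].' + QuoteName(@stable)
print @source -- [SourceDb].[dbo].[Some_Table]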
I find the whole process you wrote to be terribly dangerous. Even if this is run by the database and not by a user, dynamic SQL is poor practice! Using this to be able to drop any table at any time is dangerous, and it would be outright forbidden in the databases I work with. It is way too easy to accidentally drop the wrong tables! Nor is it possible to correctly test all possible values that the sp could run with, so this could be buggy code as well, and you won't know until it has been in production.
Further, in dropping and recreating with select into, you lose the indexes, foreign key constraints, and other things you need for performance and data integrity. BAD BAD IDEA in general (OK if these are just staging tables of some type, but not for anything else).
This is a task for SSIS. We save our SSIS packages and commit them to Subversion just like everything else. We can do a diff on them (they are just XML files) and we can tell what is running on prod and what configuration we are using.
You should not drop and recreate tables unless they are relatively small. You should update existing records, delete records no longer needed, and only add new ones. If you have a million records and 27,000 have changed, 10 have been deleted, and 3,000 are new, why drop and insert all 1,000,000 records? It is wasteful of server resources, could cause locking and blocking issues, and could create issues if users are looking at the tables at the time you run this and the data suddenly disappears and takes some minutes to come back. Users get cranky about that.
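As a sketch of that incremental approach, a MERGE statement (SQL Server 2008+) with made-up table and column names:
merge into TargetDb.dbo.Customers as t
using SourceDb.dbo.Customers as s
on t.CustomerID = s.CustomerID
when matched and t.Name <> s.Name then
update set t.Name = s.Name
when not matched by target then
insert (CustomerID, Name) values (s.CustomerID, s.Name)
when not matched by source then
delete;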

How do I paramaterise a T-SQL stored procedure that drops a table?

I'm after a simple stored procedure to drop tables. Here's my first attempt:
CREATE PROC bsp_susf_DeleteTable (@TableName char)
AS
IF EXISTS (SELECT name FROM sysobjects WHERE name = @TableName)
BEGIN
DROP TABLE @TableName
END
When I parse this in MS Query Analyser I get the following error:
Server: Msg 170, Level 15, State 1, Procedure bsp_susf_DeleteTable, Line 6
Line 6: Incorrect syntax near '@TableName'.
Which kind of makes sense because the normal SQL for a single table would be:
IF EXISTS (SELECT name FROM sysobjects WHERE name = 'tbl_XYZ')
BEGIN
DROP TABLE tbl_XYZ
END
Note the first instance of tbl_XYZ (in the WHERE clause) has single quotes around it, while the second instance in the DROP statement does not. If I use a variable (@TableName) then I don't get to make this distinction.
So can a stored procedure be created to do this? Or do I have to copy the IF EXISTS ... everywhere?
You should be able to use dynamic sql:
declare @sql varchar(max)
if exists (select name from sysobjects where name = @TableName)
BEGIN
set @sql = 'drop table ' + @TableName
exec(@sql)
END
Hope this helps.
Update: Yes, you could make @sql smaller, this was just a quick example. Also note other comments about SQL injection attacks.
Personally I would be very wary of doing this. If you feel you need it for administrative purposes, please make sure the rights to execute this are extremely limited. Further, I would have the proc copy the table name and the date and the user executing it to a logging table. That way at least you will know who dropped the wrong table. You may want other protections as well. For instance you may want to specify certain tables that cannot be dropped ever using this proc.
Further this will not work on all tables in all cases. You cannot drop a table that has a foreign key associated with it.
Under no circumstances would I allow a user or anyone not the database admin to execute this proc. If you have a system design where users can drop tables, there is most likely something drastically wrong with your design and it should be rethought.
Also, do not use this proc unless you have a really, really good backup schedule in place and experience restoring from backups.
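Pulling those suggestions together, a hedged sketch: an existence check, QUOTENAME against injection, and an audit row (the DropAudit table is hypothetical):
CREATE PROC bsp_susf_DeleteTable (@TableName sysname)
AS
BEGIN
IF OBJECT_ID(QUOTENAME(@TableName), 'U') IS NOT NULL
BEGIN
DECLARE @sql nvarchar(400)
SET @sql = N'DROP TABLE ' + QUOTENAME(@TableName)
EXEC (@sql)
-- log who dropped what, and when (hypothetical audit table)
INSERT INTO dbo.DropAudit (TableName, DroppedBy, DroppedAt)
VALUES (@TableName, SUSER_SNAME(), GETDATE())
END
END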
You'll have to use EXEC to execute that query as a string. In other words, when you pass in the table name, define a varchar and assign the query and tablename, then exec the variable you created.
Edit: HOWEVER, I don't recommend that because someone could pass in sql rather than a TableName and cause all kinds of wonderful problems. See Sql injection for more information.
Your best bet is to create a parameterized query on the client side for this. For example, in C# I would do something like:
// EDIT 2: on second thought, ignore this code; it probably won't work
SqlCommand sc = new SqlCommand();
sc.Connection = someConnection;
sc.CommandType = CommandType.Text;
sc.CommandText = "drop table @tablename";
sc.Parameters.AddWithValue("@tablename", "the_table_name");
sc.ExecuteNonQuery();