Passing temp table from one execution to another - sql

I want to pass a temp table from one execution path to another one nested inside it.
What I have tried is this:
DECLARE @SQLQuery AS NVARCHAR(MAX)
SET @SQLQuery = '
--populate #tempTable with values
EXECUTE('SELECT TOP (100) * FROM ' + tempdb..#tempTable)
EXECUTE sp_executesql @SQLQuery
but it fails with this error message:
Incorrect syntax near 'tempdb'
Is there another/better way to pass a temporary table between execution contexts?

You can create a global temp table using the ##tablename syntax (double hash). The difference is explained on the TechNet site:
There are two types of temporary tables: local and global. They differ from each other in their names, their visibility, and their availability. Local temporary tables have a single number sign (#) as the first character of their names; they are visible only to the current connection for the user, and they are deleted when the user disconnects from the instance of SQL Server. Global temporary tables have two number signs (##) as the first characters of their names; they are visible to any user after they are created, and they are deleted when all users referencing the table disconnect from the instance of SQL Server.
For example, if you create the table employees, the table can be used by any person who has the security permissions in the database to use it, until the table is deleted. If a database session creates the local temporary table #employees, only the session can work with the table, and it is deleted when the session disconnects. If you create the global temporary table ##employees, any user in the database can work with this table. If no other user works with this table after you create it, the table is deleted when you disconnect. If another user works with the table after you create it, SQL Server deletes it after you disconnect and after all other sessions are no longer actively using it.
If a temporary table is created with a named constraint and the temporary table is created within the scope of a user-defined transaction, only one user at a time can execute the statement that creates the temp table. For example, if a stored procedure creates a temporary table with a named primary key constraint, the stored procedure cannot be executed simultaneously by multiple users.
The next suggestion may be even more helpful:
Many uses of temporary tables can be replaced with variables that have the table data type. For more information about using table variables, see table (Transact-SQL).
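To make that concrete, here is a rough sketch of the global temp table approach applied to the original query; ##tempTable and its columns are made up for illustration:
-- create and populate the global temp table in the outer context
CREATE TABLE ##tempTable (ID int, SomeValue varchar(50))
INSERT INTO ##tempTable (ID, SomeValue) VALUES (1, 'example')
-- the dynamic SQL can see the global temp table without any name juggling
DECLARE @SQLQuery AS NVARCHAR(MAX)
SET @SQLQuery = N'SELECT TOP (100) * FROM ##tempTable'
EXECUTE sp_executesql @SQLQuery
DROP TABLE ##tempTable -- drop explicitly rather than waiting for all sessions to disconnect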

Your temp table will be visible inside the dynamic sql with no problem. I am not sure if you are creating the temp table inside the dynamic sql or before.
Here it is with the table created BEFORE the dynamic sql.
create table #Temp(SomeValue varchar(10))
insert #Temp select 'made it'
exec sp_executesql N'select * from #Temp'
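For contrast, a rough sketch (not from the original answer; #InnerTemp is illustrative) of what happens when the table is created INSIDE the dynamic sql instead - it is dropped as soon as that inner batch ends:
exec sp_executesql N'create table #InnerTemp(SomeValue varchar(10)); insert #InnerTemp select ''made it'''
select * from #InnerTemp -- fails: Invalid object name '#InnerTemp'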

The reason for your syntax error is that you are doing an unnecessary EXECUTE inside an EXECUTE, and you didn't escape the nested single-quote. This would be the correct way to write it:
SET @SQLQuery='
--populate #tempTable with values
SELECT TOP 100 * FROM tempdb..#tempTable'
However, I have a feeling that the syntax error is only the beginning of your problems. It's impossible to tell what you're ultimately trying to do here from only this much of the code, though.

Your quotations are messed up. Try:
SET @SQLQuery='
--populate #tempTable with values
EXECUTE(''SELECT TOP 100 * FROM '' + tempdb..#tempTable + '') '

Related

Dynamic Schema name in SQL View

I have two datasets:
one is data about dogs [my data]
the second is a lookup table of matching keys [I have no control over this data]
The matching keys are updated regularly, and I want to create a View (or something that fulfills the same purpose) of the Dog dataset, which always joins on the most recent matching keys. Furthermore, I need to be able to reference it inline - as though it was a table.
The match updates in the lookup table are differentiated by their schema names, so to get the most recent, I just have to identify the latest schema name and swap it out of the query.
Given that both Views and Table Valued Functions prohibit dynamic SQL, and Stored Procedures can't be referenced like a table can be, how can I achieve this in just SQL?
The match updates in the lookup table are differentiated by their schema names, so to get the most recent, I just have to identify the latest schema name and swap it out of the query.
You can use a view to solve this problem, but you need some way of altering it whenever new data is entered into the database.
I'm assuming that whenever a new schema is created, a new table is also created in that schema, but the table name and its column names are always the same. Note that this assumption is critical to the solution I'm about to propose - and that solution is to use a DDL trigger listening to the create_table event on the database level to alter your view so that it will reference the schema of the newly created table.
Another assumption I'm making is that you either already have the initial view, or that you are working with SQL Server 2016 or higher (that allows create or alter syntax).
So first, let's create the initial view:
CREATE VIEW dbo.TheView
AS
SELECT NULL As Test
GO
Then, I've added the DDL trigger, which creates and executes a dynamic ALTER VIEW statement based on the schema of the newly created table:
CREATE TRIGGER AlterViewWhenSchemaChanges
ON DATABASE
FOR CREATE_TABLE
AS
DECLARE @Sql nvarchar(max),
@NewTableName sysname,
@NewSchemaName sysname;
SELECT @NewSchemaName = EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]', 'NVARCHAR(255)'),
@NewTableName = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(255)');
-- We only want to alter the view when this specific table is created!
IF @NewTableName = 'TableName'
BEGIN
SELECT @Sql =
'ALTER VIEW dbo.TheView
AS
SELECT Col as test
FROM '+ @NewSchemaName +'.'+ @NewTableName
EXEC(@Sql)
END
GO
This way, whenever a new table with the specific name (TableName in my example) is created, the view gets altered to reference the last TableName created (which is obviously created in the newest schema).
Testing the script:
SELECT * FROM dbo.TheView;
GO
Results:
Test
NULL
Create a new schema with the table TableName
CREATE SCHEMA SchemaName
CREATE TABLE SchemaName.TableName (Col int);
GO
-- insert some data
INSERT INTO SchemaName.TableName(Col) VALUES (123);
-- get the data from the altered view
SELECT * FROM dbo.TheView
Results:
test
123
You can see a live demo on Rextester.

sp_executesql with user defined table type not working with two databases [duplicate]

I'm using SQL Server 2008.
How can I pass Table Valued parameter to a Stored procedure across different Databases, but same server?
Should I create the same table type in both databases?
Please, give an example or a link according to the problem.
Thanks for any kind of help.
In response to this comment (assuming I'm correct that using TVPs between databases isn't possible):
What choice do I have in this situation? Using XML type?
The purist approach would be to say that if both databases are working with the same data, they ought to be merged into a single database. The pragmatist realizes that this isn't always possible - but since you can obviously change both the caller and callee, maybe just use a temp table that both stored procs know about.
I don't believe it's possible - you can't reference a table type from another database, and even with identical type definitions in both DBs, a value of one type isn't assignable to the other.
You don't pass the temp table between databases. A temp table is always stored in tempdb, and is accessible to your connection, so long as the connection is open and the temp table isn't dropped.
So, you create the temp table in the caller:
CREATE TABLE #Values (ID int not null,ColA varchar(10) not null)
INSERT INTO #Values (ID,ColA)
/* Whatever you do to populate the table */
EXEC OtherDB..OtherProc
And then in the callee:
CREATE PROCEDURE OtherProc
/* No parameter passed */
AS
SELECT * from #Values
Table UDTs are only valid for stored procs within the same database.
So yes you would have to create the type on each server and reference it in the stored procs - e.g. just run the first part of this example in both DBs http://msdn.microsoft.com/en-us/library/bb510489.aspx.
If you don't need the efficiency you can always use other methods - i.e. pass an xml document parameter or have the s.p. expect a temp table with the input data.
Edit: added example
create database Test1
create database Test2
go
use Test1
create type PersonalMessage as TABLE
(Message varchar(50))
go
create proc InsertPersonalMessage @Message PersonalMessage READONLY AS
select * from @Message
go
use Test2
create type PersonalMessage as TABLE
(Message varchar(50))
go
create proc InsertPersonalMessage @Message PersonalMessage READONLY AS
select * from @Message
go
use Test1
declare @mymsg PersonalMessage
insert @mymsg select 'oh noes'
exec InsertPersonalMessage @mymsg
go
use Test2
declare @mymsg2 PersonalMessage
insert @mymsg2 select 'oh noes'
exec InsertPersonalMessage @mymsg2
Disadvantage is that there are two copies of the data.
But you would be able to run the batch against each database simultaneously.
Whether this is any better than using a temp table is really down to what processing/data sizes you have - btw to use a temp table from an s.p. you just access it from the s.p. code (and it fails if it doesn't exist).
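For completeness, here is a rough sketch of the xml document parameter alternative mentioned above (the procedure name and XML shape are made up for illustration). Because xml is a system type, nothing has to be created in both databases, at the cost of shredding the document on the receiving side:
use Test2
go
create proc InsertPersonalMessageXml @Messages xml AS
-- shred the XML back into rows on the receiving side
select m.value('.', 'varchar(50)') as Message
from @Messages.nodes('/Messages/Message') as t(m)
go
use Test1
go
declare @doc xml = '<Messages><Message>oh noes</Message></Messages>'
exec Test2.dbo.InsertPersonalMessageXml @doc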
Another way to solve this (though not necessarily the correct way) is to only utilize the UDT as a part of a dynamic SQL call.
USE [db1]
CREATE PROCEDURE [dbo].[sp_Db2Data_Sync]
AS
BEGIN
/*
*
* Presumably, you have some other logic here that requires this sproc to live in db1.
* Maybe it's how you get your identifier?
*
*/
DECLARE @SQL VARCHAR(MAX) = '
USE [db2]
DECLARE @db2tvp tableType
INSERT INTO @db2tvp
SELECT dataColumn1
FROM db2.dbo.tblData td
WHERE td.Id = ' + CAST(@YourIdentifierHere AS VARCHAR) + '
EXEC db2.dbo.sp_BulkData_Sync @db2tvp
'
EXEC(@SQL)
END
It's definitely not a purist approach, and it doesn't work for every use case, but it is technically an option.

What's the scoping rule for temporary tables within exec within stored procedures?

Compare the following stored procedures:
CREATE PROCEDURE testProc1
AS
SELECT * INTO #temp FROM information_schema.tables
SELECT * FROM #temp
GO
CREATE PROCEDURE testProc2
AS
EXEC('SELECT * INTO #temp FROM information_schema.tables')
SELECT * FROM #temp
GO
Now, if I run testProc1, it works, and #temp seems to only exist for the duration of that call. However, testProc2 doesn't seem to work at all, since I get an Invalid object name '#temp' error message instead.
Why the distinction, and how can I use a temp table to SELECT * INTO if the source table name is a parameter to the stored procedure and can have arbitrary structure?
Note that I'm using Microsoft SQL Server 2005.
From BOL:
Local temporary tables are visible only in the current session... Temporary tables are automatically dropped when they go out of scope, unless explicitly dropped using DROP TABLE.
The distinction between your first and second procedures is that in the first, the table is defined in the same scope that it is selected from; in the second, the EXEC() creates the table in its own scope, so the select fails in this case...
However, note that the following works just fine:
CREATE PROCEDURE [dbo].[testProc3]
AS
SELECT * INTO #temp FROM information_schema.tables
EXEC('SELECT * FROM #temp')
GO
And it works because the scope of EXEC is a child of the scope of the stored procedure. When the table is created in the parent scope, it also exists for any of the children.
To give you a good solution, we'd need to know more about the problem that you're trying to solve... but, if you simply need to select from the created table, performing the select in the child scope works just fine:
CREATE PROCEDURE [dbo].[testProc4]
AS
EXEC('SELECT * INTO #temp FROM information_schema.tables; SELECT * FROM #temp')
GO
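If, as in the original question, the source table name arrives as a parameter and can have an arbitrary structure, one option is to push both statements into the same child scope. This is only a sketch (testProc5 is made up, and it assumes a single-part table name):
CREATE PROCEDURE [dbo].[testProc5]
@SourceTable sysname
AS
DECLARE @sql nvarchar(max)
-- Both the SELECT INTO and the SELECT run in the same EXEC scope;
-- QUOTENAME guards against injection via the table name.
SET @sql = N'SELECT * INTO #temp FROM ' + QUOTENAME(@SourceTable) + N'; SELECT * FROM #temp'
EXEC (@sql)
GO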
You could try using a global temp table (named ##temp not #temp). However be aware that other connections can see this table as well.
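A small sketch of that variant (the procedure name is made up); it works because the global table survives the end of the EXEC scope:
CREATE PROCEDURE testProc2Global
AS
EXEC('SELECT * INTO ##temp FROM information_schema.tables')
SELECT * FROM ##temp
DROP TABLE ##temp -- drop explicitly so the next call does not fail because ##temp already exists
GO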

Drop all temporary tables for an instance

I was wondering how / if it's possible to have a query which drops all temporary tables?
I've been trying to work something out using the tempdb.sys.tables, but am struggling to format the name column to make it something that can then be dropped - another factor making things a bit trickier is that often the temp table names contain a '_' which means doing a replace becomes a bit more fiddly (for me at least!)
Is there anything I can use that will drop all temp tables (local or global) without having to drop them all individually on a named basis?
Thanks!
The point of temporary tables is that they are... temporary. They disappear as soon as they go out of scope:
#temp created in a stored proc : when the stored proc exits
#temp created in a session : when the session disconnects
##temp : when the session that created it disconnects
If you find that you need to remove temporary tables manually, you need to revisit how you are using them.
For the global ones, this will generate and execute the statement to drop them all.
declare @sql nvarchar(max)
select @sql = isnull(@sql+';', '') + 'drop table ' + quotename(name)
from tempdb..sysobjects
where name like '##%'
exec (@sql)
It is a bad idea to drop other sessions' [global] temp tables though.
For the local (to this session) temp tables, just disconnect and reconnect again.
The version below avoids all of the hassles of dealing with the '_'s. I just wanted to get rid of non-global temp tables, hence the '#[^#]%' in my WHERE clause; drop the [^#] if you want to drop global temp tables as well, or use '##%' if you only want to drop global temp tables.
The DROP statement seems happy to take the full name with the '_', etc., so we don't need to manipulate and edit these. The OBJECT_ID(...) IS NOT NULL check lets me avoid tables that were not created by my session; presumably, since those tables should not be 'visible' to me, they come back with NULL from this call. The QUOTENAME is needed to make sure the name is correctly quoted / escaped. If you have no temp tables, @d_sql will still be the empty string, so we check for that before printing / executing.
DECLARE @d_sql NVARCHAR(MAX)
SET @d_sql = ''
SELECT @d_sql = @d_sql + 'DROP TABLE ' + QUOTENAME(name) + ';
'
FROM tempdb..sysobjects
WHERE name like '#[^#]%'
AND OBJECT_ID('tempdb..'+QUOTENAME(name)) IS NOT NULL
IF @d_sql <> ''
BEGIN
PRINT @d_sql
-- EXEC( @d_sql )
END
In a stored procedure they are dropped automatically when the execution of the proc completes.
I normally come across the desire for this when I copy code out of a stored procedure to debug part of it and the stored proc does not contain the drop table commands.
Closing and reopening the connection works as stated in the accepted answer. Rather than doing this manually after each execution, you can enable SQLCMD mode on the Query menu in SSMS.
And then use the :connect command (adjust to your server/instance name)
:connect (local)\SQL2014
create table #foo(x int)
create table #bar(x int)
select *
from #foo
Can be run multiple times without problems. The messages tab shows
Connecting to (local)\SQL2014...
(0 row(s) affected)
Disconnecting connection from (local)\SQL2014...

Why does Microsoft SQL Server check columns but not tables in stored procs?

Microsoft SQL Server seems to check column name validity, but not table name validity when defining stored procedures. If it detects that a referenced table name exists currently, it validates the column names in a statement against the columns in that table. So, for example, this will run OK:
CREATE PROCEDURE [dbo].[MyProcedure]
AS
BEGIN
SELECT
Col1, Col2, Col3
FROM
NonExistentTable
END
GO
... as will this:
CREATE PROCEDURE [dbo].[MyProcedure]
AS
BEGIN
SELECT
ExistentCol1, ExistentCol2, ExistentCol3
FROM
ExistentTable
END
GO
... but this fails, with 'Invalid column name':
CREATE PROCEDURE [dbo].[MyProcedure]
AS
BEGIN
SELECT
NonExistentCol1, NonExistentCol2, NonExistentCol3
FROM
ExistentTable
END
GO
Why does SQL Server check columns, but not tables, for existence? Surely it's inconsistent; it should do both, or neither. It's useful for us to be able to define SPs which may refer to tables AND/OR columns which don't exist in the schema yet, so is there a way to turn off SQL Server's checking of column existence in tables which currently exist?
This is called deferred name resolution.
There is no way of turning it off. You can use dynamic SQL or (a nasty hack!) add a reference to a non-existent table so that compilation of that statement is deferred.
CREATE PROCEDURE [dbo].[MyProcedure]
AS
BEGIN
CREATE TABLE #Dummy (c int)
SELECT
NonExistentCol1, NonExistentCol2, NonExistentCol3
FROM
ExistentTable
WHERE NOT EXISTS(SELECT * FROM #Dummy)
DROP TABLE #Dummy
END
GO
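For reference, a sketch of the dynamic SQL route mentioned above; the statement is only parsed when EXEC runs, so neither the table nor the columns need to exist when the procedure is created:
CREATE PROCEDURE [dbo].[MyProcedure]
AS
BEGIN
-- deferred until run time: parsed and resolved only when EXEC executes the string
EXEC (N'SELECT NonExistentCol1, NonExistentCol2, NonExistentCol3 FROM ExistentTable')
END
GO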
This article in MSDN should answer your question.
From the article:
When a stored procedure is executed for the first time, the query processor reads the text of the stored procedure from the sys.sql_modules catalog view and checks that the names of the objects used by the procedure are present. This process is called deferred name resolution because table objects referenced by the stored procedure need not exist when the stored procedure is created, but only when it is executed.