I've searched Stack Overflow and didn't find anything.
Is there any way for me to know if a table variable already exists?
Something like:
IF OBJECT_ID('tempdb..#tbl') IS NOT NULL
DROP TABLE #tbl
but for a table variable...
Table variables, because they are variables, are distinct from either temporary or non-temporary tables in that they are not created – they are declared. They are much closer in that respect to ‘normal’ variables rather than to tables.
So, there's as much sense in talking about a table variable's existence as in talking about the existence of any variable: if you have declared the thing in your source code, it exists starting from that point until the end of its scope, which, in SQL Server, is known to be either the batch or the stored procedure/function it is declared in. And if you haven't declared the variable and are trying to reference it in your code, your code will just not compile, rendering any existence check pointless, if ever possible.
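For instance, a minimal sketch of that scoping (the @Orders name and its columns are made up for illustration):
DECLARE @Orders TABLE (OrderID int, Amount money);  -- exists from here to the end of the batch
INSERT INTO @Orders (OrderID, Amount) VALUES (1, 9.99);
SELECT OrderID, Amount FROM @Orders;  -- works: same batch
GO                                    -- batch ends here; @Orders goes out of scope
SELECT OrderID, Amount FROM @Orders;  -- fails: "Must declare the table variable @Orders"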
If you feel the need to drop and re-create (re-declare) a table variable in your script, then you should probably consider using a temporary table instead.
Table variables (@table) are a little different from temporary tables (#table).
Table variables (@table) need to be declared, while temporary tables (#table) have to be created.
By definition, declared variables exist only within their scope (the batch or stored procedure they are declared in), so there is no need to drop a table variable.
You can, however, use a DELETE FROM @table statement if you want to empty a table variable.
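For example (hypothetical @t): you cannot drop a table variable, but you can empty it:
DECLARE @t TABLE (ID int);
INSERT INTO @t (ID) VALUES (1), (2);
-- DROP TABLE @t;  -- not allowed: table variables cannot be dropped
DELETE FROM @t;    -- allowed: removes all rows, the variable itself remains
SELECT COUNT(*) AS RemainingRows FROM @t;  -- returns 0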
I know this is an old thread, but hopefully this might help someone who lands here. When developing from SSMS, you may want to re-run a statement that selects into a temporary table (e.g. SELECT * INTO #tblvarFoo FROM dbName.schema.Foo). The second time you run it, you get an error that it already exists, so you decide to drop it first. But then you have the problem the OP had:
Before I drop a table I should check if it exists, otherwise I will
get an exception...
You don't have to drop the table variable or check for its existence.
Just reconnect (right-click in the query window and select "Connection -> Change Connection...") to the same server/database as before.
Related
I have a problem with temporary tables that do not "live" long enough in my code.
My problem looks like this: I want to create a temporary table in one "code variable" and use it later. An example of my code structure is below:
declare @RW varchar(MAX)
set @RW = '
select *
into #temptable
from table1'
exec(@RW)
-- A lot of other code.
select *
from #temptable
This results in an error message that says "Invalid object name '#temptable'". And it's very clear that my temporary table doesn't exist anymore. But I've checked that the table is created in the first step. For example, the following code works:
declare @RW varchar(MAX)
set @RW = '
select *
into #temptable
from table1
select *
from #temptable'
exec(@RW)
So my GUESS is that the temporary table only lives within its "code variable". Is there a way to create a temporary table that lives longer? Or do I just need to accept this for what it is, or am I missing something? I have a workaround that is not very efficient: creating a regular table which I later delete. This would mean a lot of writing to disk, but it's something the system I work with would survive, though not be happy with. Is there any other way to handle this?
A temporary table only persists for the duration of the scope that declared it. For a "normal" connection that will be when the connection is dropped. For example, if you're using SSMS and open a query window and run CREATE TABLE #T (ID int); it'll create the table. As you're still connected, the table won't be dropped and will still exist. If you run the statement again (without dropping it) you'll get an error that it already exists. As soon as you close that query window, the temporary table will be dropped.
For a dynamic statement, the scope is the duration of that dynamic statement. This means that as soon as the dynamic statement completes, the table will be dropped:
EXEC sys.sp_executesql N'CREATE TABLE #T (ID int);';
SELECT *
FROM #T; -- fails: Invalid object name '#T'
Notice this errors, as the scope the table was created in has completed and the table has therefore been dropped.
If you are using dynamic statements to create temporary tables, you need to make all the references to said temporary table within the dynamic statement.
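For example (illustrative #T), keeping the creation and every later read inside the same dynamic statement works, because they share one scope:
EXEC sys.sp_executesql N'
    CREATE TABLE #T (ID int);
    INSERT INTO #T (ID) VALUES (1);
    SELECT ID FROM #T;  -- fine: same dynamic batch, same scope
';
-- #T no longer exists here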
Otherwise, if you need to reference it outside of the statement, I personally find I create a "permanent" object in tempdb, and then clean up afterwards.
EXEC sys.sp_executesql N'CREATE TABLE tempdb.dbo.T (ID int);';
SELECT *
FROM tempdb.dbo.T;
DROP TABLE tempdb.dbo.T;
These tables are still dropped in the event the instance is restarted as well.
Note that "global" temporary table behave slightly differently. As global temporary table can be referenced in any connection, while it exists. This means that another connection could be using the table while the scope that created it ends. As a result a global temporary table persists until the scope that declared is ends and there are no other active connections using the object. This means that the objects could be dropped mid batch in another connection.
IF OBJECT_ID('tempdb..#mytesttable') IS NOT NULL
DROP TABLE #mytesttable
SELECT id, name
INTO #mytesttable
FROM mytable
Question: is it good practice to check for the temp table's existence (e.g. OBJECT_ID('tempdb..#mytesttable')) when I first create this temp table inside a procedure?
What would be the best practice in terms of performance?
It is good practice to check whether the table exists or not, and it doesn't have any performance impact. In any case, temp tables are automatically dropped once procedure execution completes in SQL Server. SQL Server appends a unique numeric suffix to the temp table name, so if the same procedure executes more than once at the same time it will not cause any issue; it depends on the session, and each procedure execution gets its own session.
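If you want to see that unique suffix for yourself, one way (using the #mytesttable name from the question) is to look at the stored name in tempdb:
CREATE TABLE #mytesttable (id int);
-- The stored name carries a per-session suffix, e.g. #mytesttable_______...00000000001A
SELECT name
FROM tempdb.sys.objects
WHERE name LIKE '#mytesttable%';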
Yes, it is a good practice. I always do this at the beginning of the routine. In SQL Server 2016+ objects can DIE (Drop If Exists), so you can simplify the call:
DROP TABLE IF EXISTS #mytesttable;
Also, even though your temporary objects are destroyed at the end of the routine and the SQL Engine gives unique names to temporary tables, there is still another behavior to consider.
If you name your temporary table with the same name when nested procedure calls are involved (one stored procedure calls another), it is possible to get an error or corrupt your data. This is due to the fact that a temporary table is visible in the current execution scope - so, a #table created in one procedure will be visible in the second, called procedure (which can modify its data or drop it).
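A sketch of that nested-call pitfall with two hypothetical procedures; the inner one reuses the caller's #work because nested scopes can see the caller's temp tables:
CREATE PROCEDURE dbo.InnerProc
AS
BEGIN
    -- Resolved at run time: this hits the caller's #work
    DELETE FROM #work;  -- silently wipes the caller's data
END
GO
CREATE PROCEDURE dbo.OuterProc
AS
BEGIN
    CREATE TABLE #work (ID int);
    INSERT INTO #work (ID) VALUES (1);
    EXEC dbo.InnerProc;
    SELECT COUNT(*) AS Remaining FROM #work;  -- returns 0, not 1
END
GO
EXEC dbo.OuterProc;  -- Remaining = 0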
I'm writing a trigger in which I need to check the incoming data and potentially change it. Then later in the trigger I need to use that new data for further processing. A highly simplified version looks something like this:
ALTER TRIGGER [db].[trig_update]
ON [db].[table]
AFTER UPDATE
AS
BEGIN
DECLARE @thisprofileID int
IF (Inserted.profileID IS NULL)
BEGIN
SELECT @thisprofileID = profileID
FROM db.otherTable
WHERE userid = @thisuserID;
UPDATE db.table
SET profileID = @thisprofileID
WHERE userid = @thisuserID;
-- XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-- XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
END
IF ({conditional})
BEGIN
UPDATE db.thirdTable
SET [profileID] = Inserted.profileID
...{20+ other fields}
FROM Inserted ...{a few joins}
WHERE {various criteria}
END
END
The problem that we're running into is that the update statement fails because Inserted.profileID is null, and thirdTable.profileID is set to not allow nulls. table.profileID will never stay null; if it is created as null then this trigger should catch it and set it to a value. But even though we've updated 'table', Inserted still has the null value. So far it makes sense to me why this is happening.
I'm unsure how to correct the problem. In the area with commented Xs I tried running an update query against the Inserted table to update profileID, but this resulted in an error because the pseudo-table apparently can't be updated. Am I incorrect in this presumption? That would be an easy solution.
The next most logical solution to me would be to INSERT INTO a table variable to make a copy of Inserted and then use that in the rest of the trigger, but that fails because the table variable is not defined. Defining that table variable would require more fields than I care to count, and will present a major maintenance nightmare any time that we need to make changes to the structure of 'table'. So assuming this is the best approach, is there an easy way to copy the data and structure of Inserted into a table variable without explicitly defining the structure?
I don't think that a temp table (which I could otherwise easily insert into) would be a good solution, because my limited understanding is that they are far slower than a table variable that lives only inside the trigger. I assume that a temp table must also be public, and would cause problems if our trigger fires twice and both instances need the temp table.
I have a temporary table in a stored procedure which is causing the query to time out, as it is doing a complex calculation. I want to drop it after it is used. It was created like this:
DECLARE @SecondTable TABLE
Now I cannot drop it using
drop @SecondTable
in fact I have to use
drop #SecondTable
Does somebody know why?
I'm by no means a SQL guru, but why is the drop even necessary?
If it's a table variable, it will no longer exist once the stored proc exits.
I'm actually surprised that DROP #SecondTable doesn't error out on you, since you're dropping a temporary table there, not a table variable.
EDIT
So based on your comment, my updates are below:
1.) If you're using a table variable (@SecondTable), then no drop is necessary. SQL Server will take care of this for you.
2.) It sounds like your timeout is caused by the calculations using the table, not by dropping the table itself. In this case, I'd probably recommend using a temporary table instead of a table variable, since a temporary table will let you add indexes and such to improve performance, while a table variable will not (see the sketch after this list). If this still isn't sufficient, you might need to increase the timeout duration on the query.
3.) In SQL, a table variable (@SecondTable) and a temporary table (#SecondTable) are two completely different things. I'd refer to the MSDN documentation for Table Variables and Temporary Tables.
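As a sketch of point 2 (names and columns are illustrative, not the OP's actual schema), the temp-table variant that allows an index might look like:
CREATE TABLE #SecondTable
(
    ID     int   NOT NULL,
    Amount money NOT NULL
);
CREATE NONCLUSTERED INDEX IX_SecondTable_ID ON #SecondTable (ID);
-- ...populate #SecondTable and run the expensive calculation here...
DROP TABLE #SecondTable;  -- optional; it is dropped automatically when the procedure ends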
If I create a table variable like this:
Declare @MyTable Table(ID int, Name varchar(50))
Is it better on the server to run a delete query on the variable at the end of your queries? Sort of like closing an object?
Delete From @MyTable
Or is it unnecessary?
Using Delete will be worse.
Instead of just having SQL Server implicitly drop the table variable when it goes out of scope (which has minimal logging) you will also add fully logged delete operations for each row to the tempdb transaction log.
I can't see how this will be better for performance - it will at best be the same (since the #table will be dropped when it's out of scope anyway), and at worst will be more expensive because it actually has to perform the delete first. Do you think there is any advantage in doing this:
DELETE #temptable;
DROP TABLE #temptable;
Instead of just this:
DROP TABLE #temptable;
I will admit that I haven't tested this in the #table case, but that's something you can test and benchmark as well. It should be clear that in the above case running the DELETE first will take more resources than not bothering.
There is probably a reason there is no way to DROP TABLE @MyTable; or DEALLOCATE @MyTable; - but nobody here wrote the code around table variables, and it is unlikely we'll know the official reason(s) why we can't release these objects early. But dropping the table wouldn't mean you're freeing up the space anyway - you're just marking the pages in a certain way.