If I create a table variable like this:
Declare @MyTable Table(ID int, Name varchar(50))
Is it better on the server to run a delete query on the variable at the end of your queries? Sort of like closing an object?
Delete From @MyTable
Or is it unnecessary?
Using Delete will be worse.
Instead of just having SQL Server implicitly drop the table variable when it goes out of scope (which has minimal logging) you will also add fully logged delete operations for each row to the tempdb transaction log.
I can't see how this will be better for performance - it will at best be the same (since the #table will be dropped when it's out of scope anyway), and at worst will be more expensive because it actually has to perform the delete first. Do you think there is any advantage in doing this:
DELETE #temptable;
DROP TABLE #temptable;
Instead of just this:
DROP TABLE #temptable;
I will admit that I haven't tested this in the @table case, but that's something you can test and benchmark yourself. It should be clear that in the case above, running the DELETE first will take more resources than not bothering.
There is probably a reason there is no way to DROP TABLE @MyTable; or DEALLOCATE @MyTable; - but nobody here wrote the code around table variables, and it is unlikely we'll know the official reason(s) why we can't release these objects early. But dropping the table wouldn't mean you're freeing up the space anyway - you're just marking the pages in a certain way.
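To make the scoping behavior concrete, here is a minimal sketch (table and column names are illustrative); the point is that no explicit cleanup statement is needed, or even possible, for a table variable:

```sql
-- A table variable is dropped implicitly when its batch ends,
-- with minimal logging; there is no DROP or DEALLOCATE for it.
DECLARE @MyTable TABLE (ID int, Name varchar(50));

INSERT INTO @MyTable (ID, Name) VALUES (1, 'Alice');

-- ... use @MyTable here ...

-- No DELETE FROM @MyTable needed at the end: when the batch
-- completes, the variable goes out of scope automatically.
GO
```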
I've been trying to create a temp table and update it but when I go to view the temp table, it doesn't show any of the updated rows
declare global temporary table hierarchy (
    code varchar(5),
    description varchar(30)
);
INSERT INTO session.hierarchy
SELECT code, description
FROM table1
WHERE code like '_....';
SELECT *
FROM session.hierarchy;
This is a frequently asked question.
When using DGTT with Db2 (declare global temporary table), you need to know that the default is to discard all rows after a COMMIT action. That is the reason the table appears to be empty after you insert - the rows got deleted if autocommit is enabled. If that is not what you want, you should use the on commit preserve rows clause when declaring the table.
It is also very important to use the with replace option when declaring a DGTT in stored procedures; this is often the most friendly option for development and testing, and it is not the default. Otherwise, if the same session attempts to repeat the declaration of the DGTT, the second and subsequent attempts will fail because the DGTT already exists.
It can also be interesting for problem determination sometimes to use on rollback preserve rows but that is less often used.
When using a DGTT, one of the main advantages is that you can arrange for the population of the table (inserts, updates) to be unlogged, which can give a great performance boost if you have millions of rows to add to the DGTT.
Suggestion is therefore:
declare global temporary table ... ( )...
not logged
on commit preserve rows
with replace;
For DPF installations, also consider using distribute by hash (...) for best performance.
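Putting the recommended clauses together, a sketch of the full declaration (reusing the table and column names from the question; `table1` is assumed to exist):

```sql
-- Db2: declare the temp table once per session, keep rows across
-- commits, skip logging for bulk population, and allow re-declaring.
DECLARE GLOBAL TEMPORARY TABLE session.hierarchy (
    code        varchar(5),
    description varchar(30)
)
NOT LOGGED
ON COMMIT PRESERVE ROWS
WITH REPLACE;

INSERT INTO session.hierarchy (code, description)
SELECT code, description
FROM table1;

-- Rows now survive a COMMIT, so this returns the inserted data.
SELECT * FROM session.hierarchy;
```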
I have a problem with temporary tables that does not "live" long enough in my code.
My problem looks like this: I want to create a temporary table in one "codevariable" and use it later. An example of my code structure is like below:
declare @RW varchar(MAX)
set @RW = '
select *
into #temptable
from table1'
exec(@RW)
--A lot of other code.
select *
from #temptable
This results in the error message "Invalid object name '#temptable'", and it's very clear that my temporary table doesn't exist anymore. But I've checked that the table is created in the first step. For example, the following code works:
declare @RW varchar(MAX)
set @RW = '
select *
into #temptable
from table1
select *
from #temptable'
exec(@RW)
So my GUESS is that the temporary table only lives within its code variable. Is there a way to create a temporary table that lives longer? Or do I just need to accept this for what it is, or am I missing something? I have a workaround that is not very efficient: creating a regular table which I later delete. This would mean a lot of writing to disk, but it's something the system I work with would survive, though not be happy with. Is there any other way to handle this?
A temporary table only persists for the duration of the scope that declared it. For a "normal" connection that will be when the connection is dropped. For example, if you're using SSMS and open a query window and run CREATE TABLE #T (ID int); it'll create the table. As you're still connected, the table won't be dropped and will still exist. If you run the statement again (without dropping it) you'll get an error that it already exists. As soon as you close that query window, the temporary table will be dropped.
For a dynamic statement, the scope is the duration of that dynamic statement. This means that as soon as the dynamic statement completes, the table will be dropped:
EXEC sys.sp_executesql N'CREATE TABLE #T (ID int);';
SELECT *
FROM #t;
Notice this errors, as the scope the table was created in has completed, and thus dropped.
If you are using dynamic statements to create temporary tables, you need to make all the references to said temporary table within the dynamic statement.
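For example, moving every reference into a single dynamic batch works, because the table lives for the duration of that batch (a sketch; `table1` is assumed to exist):

```sql
-- All references to #temptable live inside ONE dynamic batch,
-- so they share the same scope and the same temp table.
EXEC sys.sp_executesql N'
    SELECT *
    INTO #temptable
    FROM table1;

    SELECT *
    FROM #temptable;   -- works: same dynamic scope
';

-- Any reference to #temptable out here would fail with
-- "Invalid object name": the table was dropped when the
-- dynamic batch above completed.
```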
Otherwise, if you need to reference it outside of the statement, I personally create a "permanent" object in tempdb, and then clean up afterwards.
EXEC sys.sp_executesql N'CREATE TABLE tempdb.dbo.T (ID int);';
SELECT *
FROM tempdb.dbo.T;
DROP TABLE tempdb.dbo.T;
These tables are still dropped in the event the instance is restarted as well, since tempdb is recreated at startup.
Note that "global" temporary tables behave slightly differently. A global temporary table can be referenced from any connection while it exists. This means that another connection could be using the table when the scope that created it ends. As a result, a global temporary table persists until the scope that declared it ends and there are no other active connections using the object. This also means that the object could be dropped mid-batch in another connection.
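A global temporary table is named with a double hash. A minimal sketch of the lifetime difference (the second connection's query is shown as a comment since it runs in another session):

```sql
-- Connection 1: create a GLOBAL temp table (note the double ##).
CREATE TABLE ##Shared (ID int);
INSERT INTO ##Shared (ID) VALUES (1);

-- Connection 2 (a different session) can read it while it exists:
--   SELECT * FROM ##Shared;

-- ##Shared is dropped only after the creating session ends AND no
-- other session still has an active statement referencing it.
```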
I have a temporary table in the stored procedure which is causing the time out for the query as it is doing a complex calculation. I want to drop it after it is used. It was created like
DECLARE @SecondTable TABLE
Now I cannot drop it using
drop @SecondTable
in fact I have to use
drop #SecondTable
Does somebody know why?
I'm by no means a SQL guru, but why is the drop even necessary?
If it's a table variable, it will no longer exist once the stored proc exits.
I'm actually surprised that DROP #SecondTable doesn't error out on you, since you're dropping a temporary table there, not a table variable.
EDIT
So based on your comment, my updates are below:
1.) If you're using a table variable (@SecondTable), then no drop is necessary. SQL Server will take care of this for you.
2.) It sounds like your timeout is caused by the calculations using the table, not the dropping of the table itself. In this case; I'd probably recommend using a temporary table instead of a table variable; since a temporary table will let you add indexes and such to improve performance; while a table variable will not. If this still isn't sufficient; you might need to increase the timeout duration on the query.
3.) In SQL, a table variable (@SecondTable) and a temporary table (#SecondTable) are two completely different things. I'd refer to the MSDN documentation for Table Variables and Temporary Tables.
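A short sketch of the distinction in point 3, including the indexing advantage mentioned in point 2 (names are illustrative):

```sql
-- Table VARIABLE: declared, scoped to the batch or procedure,
-- cannot be dropped explicitly; cleaned up automatically.
DECLARE @SecondTable TABLE (ID int);

-- Temporary TABLE: created as a real object in tempdb; supports
-- explicit DROP and additional indexes for performance tuning.
CREATE TABLE #SecondTable (ID int);
CREATE INDEX IX_SecondTable_ID ON #SecondTable (ID);
DROP TABLE #SecondTable;
</imports>
```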
I'm working with a SQL Server 2008 installation that was maintained for years by another team of programmers.
I'm having a problem that rows of data seem to be mysteriously disappearing from a specific table in my server.
I would like to be able to set up some sort of monitoring system that would tell me when the table is modified, and a summary of the modification.
I think that "triggers" might be what I'm looking for, but I've never used them before. Are triggers what I want to use, and if so, what is a good resource for learning to use them? Is there a better solution?
I think that I should mention that the table I'm referring to is not that frequently updated, so I don't think that adding a little bit of overhead should be a big deal, but I would prefer a solution that I can brush away once the problem is resolved.
A FOR DELETE trigger could help you capture the rows that are being deleted. You could create an audit table (copy of the table that you'd like to monitor) and then add this code to your trigger:
INSERT INTO [Your Audit Table]
SELECT * FROM deleted
I've also seen some "more advanced" scenarios involving FOR XML.
I don't know that the trigger would help determine who is deleting the records, but you might be able to PROVE that the records are being deleted, and perhaps what time, etc. That could help you troubleshoot further.
The following sample should be a basic idea of what you're looking for.
CREATE TABLE MyTestTable(col1 int, col2 varchar(10));
GO
CREATE TABLE MyLogTable(col1 int, col2 varchar(10), ModDate datetime, ModBy varchar(50));
GO
CREATE TRIGGER tr_MyTestTable_IO_UD ON MyTestTable AFTER UPDATE, DELETE
AS
INSERT MyLogTable
SELECT col1, col2, GETDATE(), SUSER_SNAME()
FROM deleted;
GO
Insert MyTestTable Values (1, 'aaaaa');
Insert MyTestTable Values (2, 'bbbbb');
UPDATE MyTestTable Set col2 = 'bbbcc' WHERE col1 = 2;
DELETE MyTestTable;
GO
SELECT * FROM MyLogTable;
GO
However, keep in mind that there are still ways of deleting records that won't be caught by a trigger. (TRUNCATE TABLE and various bulk update commands.)
Another solution would be to attach SQL Profiler to the database with specific filter conditions. This will log every query run for your inspection.
I like to stay away from triggers, but they could help with your problem, like Draghon said.
I think you have it figured out. A trigger is likely your best bet, as it's as close to the data as you can get. Inspecting the code (application code or even a stored procedure) would not give you as much assurance as a trigger would; a DELETE trigger, in this case.
Check out this article: http://www.go4expert.com/forums/showthread.php?t=15510
I've searched StackOverflow and didn't find anything.
Is there any way for me to know if a table variable already exists?
Something like:
IF OBJECT_ID('tempdb..#tbl') IS NOT NULL
DROP TABLE #tbl
but for table Var...
Table variables, because they are variables, are distinct from either temporary or non-temporary tables in that they are not created – they are declared. They are much closer in that respect to ‘normal’ variables rather than to tables.
So, there's as much sense in talking about a table variable's existence as in talking about the existence of any variable: if you have declared the thing in your source code, it exists starting from that point until the end of its scope, which, in SQL Server, is known to be either the batch or the stored procedure/function it is declared in. And if you haven't declared the variable and are trying to reference it in your code, your code will just not compile, rendering any existence check pointless, if ever possible.
Perhaps, if you feel the need to drop and re-(create/declare) a table variable in your script, then you should probably consider using a temporary table instead.
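A sketch of the contrast: the OBJECT_ID existence check from the question works for temporary tables, because they are real objects in tempdb, but there is no equivalent for a table variable:

```sql
-- Temporary table: a real object in tempdb, so its existence
-- can be checked and it can be dropped and re-created.
IF OBJECT_ID('tempdb..#tbl') IS NOT NULL
    DROP TABLE #tbl;
CREATE TABLE #tbl (ID int);

-- Table variable: if it is declared in this batch it exists from
-- that point on; if it is not declared, any reference to it fails
-- at compile time, so a runtime existence check is impossible.
DECLARE @tbl TABLE (ID int);
```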
Table variables (@table) are a little bit different from temporary tables (#table).
Table variables need to be declared, while temporary tables are created.
So, by definition, declared variables exist within their defined scope (the batch, or the stored procedure/function they are declared in), so there is no need to drop a table variable.
But you can use a DELETE FROM @table statement if you want to empty a table variable.
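For completeness, a minimal sketch of emptying a table variable with DELETE, since DROP is not available for it:

```sql
DECLARE @table TABLE (ID int);
INSERT INTO @table (ID) VALUES (1), (2);

-- You cannot DROP a table variable, but you can empty it:
DELETE FROM @table;

SELECT COUNT(*) FROM @table;  -- all rows have been removed
```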
I know this is an old thread, but hopefully this might help someone who lands here. When developing from SSMS, you may want to re-run a statement that selects into a temporary table (e.g. select * into #tblvarFoo from dbName.schema.Foo). But the second time you run it, you get an error that it already exists. So you decide to drop it first. But then you have the problem the OP had:
Before I drop a table I should check if it exists, otherwise I will
get an exception...
You don't have to drop the table variable or check for its existence.
Just reconnect (right click in the query window and select "Connection->Change Connection...") to same Server/db as before.