What's the benefit of doing temporary tables (#table) instead of persistent tables (table)? - sql-server-2005

I can think of two main benefits:
Avoiding concurrency problems: if you have many processes creating/dropping tables, you can get into trouble as one process tries to create a table that already exists.
Performance: I imagine that creating temporary tables (with #) is faster than creating regular tables.
Is there any other reason, and are any of my reasons false?

You can't compare temporary and persistent tables:
Persistent tables keep your data and can be used by any process.
Temporary ones are throwaway, and #tables are visible only to the connection that created them.
You'd use a temp table to spool results for further processing and such.
There is little difference in performance (either way) between the two types of table.
You shouldn't be dropping and creating tables all the time... any app that relies on this is doing something wrong, not least because it makes way too many SQL calls.

(1) Temp Tables are created in the SQL Server tempdb database and therefore require more IO resources and locking. Table Variables and Derived Tables are created in memory, although Table Variables are in fact also backed by tempdb (see the similarities list below), not purely memory-resident.
(2) Temp Tables will generally perform better for large amounts of data that can be worked on using parallelism, whereas Table Variables are best used for small amounts of data (I use a rule of thumb of 100 or fewer rows) where parallelism would not provide a significant performance improvement.
(3) You cannot use a stored procedure to insert data into a Table Variable or Derived Table. For example, the following will work: INSERT INTO #MyTempTable EXEC dbo.GetPolicies_sp, whereas the following will generate an error: INSERT INTO @MyTableVariable EXEC dbo.GetPolicies_sp (see the sketch after this list).
(4) Derived Tables can only be created from a SELECT statement but can be used within an Insert, Update, or Delete statement.
(5) In order of scope endurance, Temp Tables extend the furthest, followed by Table Variables, and finally Derived Tables.
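A minimal sketch of point (3), assuming dbo.GetPolicies_sp exists and returns two columns (the column names and types here are invented for illustration):

CREATE TABLE #MyTempTable (PolicyId int, PolicyName varchar(50));
INSERT INTO #MyTempTable EXEC dbo.GetPolicies_sp;        -- works

DECLARE @MyTableVariable TABLE (PolicyId int, PolicyName varchar(50));
INSERT INTO @MyTableVariable EXEC dbo.GetPolicies_sp;    -- generates an error per point (3)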

1)
A table variable's lifespan is only the duration of the batch in which it is declared. If we execute the DECLARE statement in one batch and then attempt to insert records into the @temp table variable in the next, we receive an error because the table variable has passed out of scope. The results are the same if we declare and insert records into @temp in one batch and then attempt to query the table in another. By contrast, we need to execute a DROP TABLE statement against #temp, because a temp table persists until the session ends or until the table is dropped. A sketch of both behaviors follows.
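A minimal sketch of both behaviors (run in SQL Server Management Studio, where GO separates batches):

DECLARE @temp TABLE (id int);
GO  -- the batch ends here and @temp goes out of scope

INSERT INTO @temp VALUES (1);   -- error: must declare the table variable @temp
GO

CREATE TABLE #temp (id int);
GO

INSERT INTO #temp VALUES (1);   -- works: #temp survives across batches
DROP TABLE #temp;               -- it persists until dropped or the session ends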
2)
Table variables have certain clear limitations:
-Table variables can not have non-clustered indexes created with CREATE INDEX
-You can not add constraints to a table variable after it is declared (PRIMARY KEY, UNIQUE, and CHECK can only be specified inline in the DECLARE; see the sketch after these lists)
-Default values can likewise only be specified inline on table variable columns, not added later
-Statistics can not be created against table variables
Similarities with temporary tables include:
-Instantiated in tempdb
-Clustered indexes can be created on table variables (via an inline PRIMARY KEY) and on temporary tables
-Both are logged in the transaction log
-Just as with temp and regular tables, users can perform all Data Manipulation Language (DML) queries against a table variable: SELECT, INSERT, UPDATE, and DELETE.
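A minimal sketch of the inline declarations mentioned above (table and column names are invented for illustration):

DECLARE @orders TABLE (
    order_id int PRIMARY KEY,               -- inline PRIMARY KEY; creates a clustered index
    qty      int NOT NULL CHECK (qty > 0),  -- inline CHECK constraint
    status   char(1) NOT NULL DEFAULT 'N'   -- inline DEFAULT value
);
INSERT INTO @orders (order_id, qty) VALUES (1, 5);
SELECT * FROM @orders;  -- status comes back as the default 'N'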

Temp table doesn't store updated values

I've been trying to create a temp table and update it but when I go to view the temp table, it doesn't show any of the updated rows
declare global temporary table hierarchy (
code varchar(5),
description varchar(30)
);
INSERT INTO session.hierarchy
SELECT code, 30_description
FROM table1
WHERE code like '_....';
SELECT *
FROM session.hierarchy;
This is a frequently asked question.
When using DGTT with Db2 (declare global temporary table), you need to know that the default is to discard all rows after a COMMIT action. That is the reason the table appears to be empty after you insert - the rows got deleted if autocommit is enabled. If that is not what you want, you should use the on commit preserve rows clause when declaring the table.
It is also very important to use the with replace option when declaring the table inside stored procedures; this is often the most friendly for development and testing, and it is not the default. Otherwise, if the same session attempts to repeat the declaration of the DGTT, the second and subsequent attempts will fail because the DGTT already exists.
It can also be interesting for problem determination sometimes to use on rollback preserve rows, but that is less often used.
When using a DGTT, one of the main advantages is that you can arrange for the population of the table (inserts, updates) to be unlogged, which can give a great performance boost if you have millions of rows to add to the DGTT.
Suggestion is therefore:
declare global temporary table ... ( ... )
not logged
on commit preserve rows
with replace;
For DPF installations, also consider using distribute by hash (...) for best performance.
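Applied to the hierarchy table from the question, the suggestion would look along these lines (a sketch; the column definitions are taken from the question):

declare global temporary table hierarchy (
    code varchar(5),
    description varchar(30)
)
not logged
on commit preserve rows
with replace;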

SQL Server temp table check performance

IF OBJECT_ID('tempdb..#mytesttable') IS NOT NULL
DROP TABLE #mytesttable

SELECT id, name
INTO #mytesttable
FROM mytable
Question: is it good to check for temp table existence (e.g. OBJECT_ID('tempdb..#mytesttable')) the first time I create this temp table inside a procedure?
What will be the best practice in terms of performance?
It is good practice to check whether the table exists or not, and the check doesn't have any performance impact. In any case, temp tables are automatically dropped once procedure execution completes in SQL Server. SQL Server appends a unique number to the temp table name, so if the same procedure executes more than once at the same time it will not cause any issue; it depends on the session, and each procedure execution has its own session.
Yes, it is a good practice. I always do this at the beginning of the routine. In SQL Server 2016+, objects can DIE (Drop If Exists), so you can simplify the call:
DROP TABLE IF EXISTS #mytesttable;
Also, even though your temporary objects are destroyed at the end of the routine, and even though the SQL engine gives unique names to temporary tables, there is still another behavior to consider.
If you are naming your temporary tables with the same name when nested procedure calls are involved (one stored procedure calls another), it is possible to get an error or corrupt your data. This is due to the fact that temporary tables are visible in the current execution scope, so a #table created in one procedure will be visible in the second, called procedure (which can modify its data or drop it).
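A minimal sketch of that nested-scope behavior (procedure and table names are invented for illustration):

CREATE PROCEDURE dbo.InnerProc
AS
BEGIN
    -- No #work is created here: this statement touches the caller's #work
    DELETE FROM #work;
END
GO
CREATE PROCEDURE dbo.OuterProc
AS
BEGIN
    CREATE TABLE #work (id int);
    INSERT INTO #work VALUES (1);
    EXEC dbo.InnerProc;          -- #work is visible inside InnerProc
    SELECT COUNT(*) FROM #work;  -- returns 0: the nested call deleted the row
END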

Stored procedures and the use of temporary tables within them?

I know basic sql commands, and this is my first time working with stored procedures. In the stored procedure I am looking at, there are several temporary tables.
The procedure is triggered every morning; it pulls a specific set of IDs and then loops through each ID to grab certain parameters.
My question is: are temporary tables used in stored procedures so that, when the procedure runs, the values are passed into the parameters and the loop, and the temporary tables are then cleared, restarting the process for the next loop?
Temporary tables are used because as soon as the session (or stored procedure) that created them is closed, the temporary table is gone. A temp table with a single # in front of the name (also called a local temp table) is only visible in the session it was created in, so a temp table with the same name can be created in multiple sessions without them bumping into each other (SQL Server adds characters to the name to make it unique). If a temp table with two ## in front of it is created (a global temp table), then it is unique within SQL Server and other sessions can see it. Temp tables are the equivalent of a scratch pad. When SQL Server is restarted, all temp tables and their values are gone. Temp tables can have indexes created against them, and SQL Server can use statistics on temp tables to create efficient query plans.
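A minimal sketch of the local/global distinction (run the two halves in separate connections):

-- Session 1:
CREATE TABLE #local (id int);    -- local: visible only to this session
CREATE TABLE ##global (id int);  -- global: visible to every session

-- Session 2:
SELECT * FROM ##global;  -- works
SELECT * FROM #local;    -- fails: Invalid object name '#local'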
For stored procedures (SPs), they are the least restricted and most capable objects, for example:
Their usual alternatives, views and functions, are not allowed to utilize many things, like DML statements, temp. tables, transactions, etc.
SPs can avoid returning result sets. They also can return more than one result set.
For temp. tables:
Yes, once the SP is done, the table disappears along with its contents (at least for single-# tables).
They have advantages & disadvantages compared to their alternatives (actual tables, table variables, various non-table solutions).
Stored procedures, in my opinion, don't necessarily need temporary tables. It's up to the SP's scope to decide whether using a temp table is the best approach.
For example, let's suppose that we want to retrieve a list of elements that comes out of joining a few tables; then it's best to have a temp table to put the joined fields in. On the other hand, if we're using a stored procedure to retrieve a single value or field, I don't see the need for a temp table.
Temp tables are only available during the execution of the stored procedure; once it's finished, the table goes out of scope.
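A minimal sketch of the first case (all table, column, and procedure names are invented for illustration):

CREATE PROCEDURE dbo.GetOrderSummary
AS
BEGIN
    -- Spool the joined rows once into a scratch copy
    SELECT o.OrderId, c.CustomerName, o.Total
    INTO #summary
    FROM dbo.Orders o
    JOIN dbo.Customers c ON c.CustomerId = o.CustomerId;

    -- Further processing works from the temp table
    SELECT CustomerName, SUM(Total) AS TotalSpent
    FROM #summary
    GROUP BY CustomerName;
END  -- #summary disappears when the procedure ends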

Multiple DDL queries on a single table in a single transaction

Can I rely on SQL Server (and SQL Server Compact) to apply multiple ALTER TABLE queries on a single table in one go, instead of applying the changes sequentially, if all queries are within a single transaction?
I'm generating DDL queries from code and want to simplify that code, but I'd prefer to avoid performance issues.
For example, is there difference, performance-wise, between this code:
BEGIN TRAN
ALTER TABLE t ADD a int
ALTER TABLE t ADD b int
ALTER TABLE t ADD c int
COMMIT TRAN
and this code:
BEGIN TRAN
ALTER TABLE t ADD a int, b int, c int
COMMIT TRAN
P.S. Information on other relational database engines would be useful too, just in case.
They will be sequential within the transaction, all-or-nothing to other sessions outside the transaction. If you're concerned about performance, consider the fact that executing DDL statements on a table will lock the entire table for the duration of the transaction, so do not do this on live high-traffic tables.
The second form is in principle preferable because it gives the engine the opportunity to optimize: the data dictionary will most likely be read and written in one pass, and all changes to the tablespace will also be done in one go. But I doubt that just adding int columns, as in the example, will make a big difference. And it is definitely engine-specific.

Time associated with dropping a table

Does the time it takes to drop a table in SQL reflect the quantity of data within the table?
Let's say we drop two otherwise identical tables, one with 100,000 rows and one with 1,000 rows.
Is this MySQL? The ext3 Linux filesystem is known to be slow to drop large tables. The time to drop has more to do with the on-disk size of the table than with the number of rows in it.
http://www.mysqlperformanceblog.com/2009/06/16/slow-drop-table/
It will definitely depend on the database server, on that server's specific storage parameters, and on whether the table contains LOBs. For most scenarios, it's very fast.
In Oracle, dropping a table is very quick and unlikely to matter compared to other operations you would be performing (creating the table and populating it).
It would not be idiomatic to create and drop tables in Oracle often enough for this to be any sort of performance factor. You would instead consider using global temporary tables.
From http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm
In addition to permanent tables, Oracle Database can create temporary tables to hold session-private data that exists only for the duration of a transaction or session.
The CREATE GLOBAL TEMPORARY TABLE statement creates a temporary table that can be transaction-specific or session-specific. For transaction-specific temporary tables, data exists for the duration of the transaction. For session-specific temporary tables, data exists for the duration of the session. Data in a temporary table is private to the session. Each session can only see and modify its own data. DML locks are not acquired on the data of the temporary tables. The LOCK statement has no effect on a temporary table, because each session has its own private data.
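A minimal sketch of the two variants described in that excerpt (table and column names are invented for illustration):

-- Transaction-specific: rows vanish at COMMIT (the default behavior)
CREATE GLOBAL TEMPORARY TABLE work_txn (
    id NUMBER
) ON COMMIT DELETE ROWS;

-- Session-specific: rows survive until the session ends
CREATE GLOBAL TEMPORARY TABLE work_sess (
    id NUMBER
) ON COMMIT PRESERVE ROWS;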