Temp table doesn't store updated values - sql

I've been trying to create a temp table and update it, but when I go to view the temp table it doesn't show any of the updated rows:
declare global temporary table hierarchy (
    code varchar(5),
    description varchar(30)
);
INSERT INTO session.hierarchy
SELECT code, "30_description"
FROM table1
WHERE code LIKE '_....';
SELECT *
FROM session.hierarchy;

This is a frequently asked question.
When using DGTT with Db2 (declare global temporary table), you need to know that the default is to discard all rows after a COMMIT. That is why the table appears to be empty right after you insert: with autocommit enabled, the rows are deleted as soon as the statement commits. If that is not what you want, use the on commit preserve rows clause when declaring the table.
It is also very important to use the with replace option when creating stored procedures; this is often the most friendly for development and testing, and it is not the default. Otherwise, if the same session attempts to repeat the declaration of the DGTT, the second and subsequent attempts will fail because the DGTT already exists.
For problem determination it can sometimes also be interesting to use on rollback preserve rows, but that is less often used.
When using a DGTT, one of the main advantages is that you can arrange for the population of the table (inserts, updates) to be unlogged, which can give a great performance boost if you have millions of rows to add to the DGTT.
The suggestion is therefore:
declare global temporary table ... ( )...
not logged
on commit preserve rows
with replace;
For DPF installations, also consider using distribute by hash (...) for best performance.
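Putting the pieces together for the table in the question, the declaration might look like this (a sketch; the column list is taken from the question, so adjust to your schema):
declare global temporary table hierarchy (
    code varchar(5),
    description varchar(30)
)
not logged
on commit preserve rows
with replace;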


SQL Server temp table check performance

IF OBJECT_ID('tempdb..#mytesttable') IS NOT NULL
    DROP TABLE #mytesttable
SELECT id, name
INTO #mytesttable
FROM mytable
Question: is it good to check for the temp table's existence (e.g. OBJECT_ID('tempdb..#mytesttable')) when I create this temp table for the first time inside a procedure?
What will be the best practice in terms of performance?
It is good practice to check whether the table exists, and the check itself has no meaningful performance impact. In any case, temp tables are automatically dropped once procedure execution completes in SQL Server. SQL Server appends a unique number to each temp table name, so even if the same procedure executes more than once at the same time, it will not cause any issue: each execution runs in its own session, and the table is scoped to that session.
Yes, it is a good practice. I always do this at the beginning of the routine. In SQL Server 2016+, objects can DIE (Drop If Exists), so you can simplify the call:
DROP TABLE IF EXISTS #mytesttable;
Also, even though your temporary objects are destroyed at the end of the routine, and even though the SQL engine gives unique names to temporary tables, there is still another behavior to consider.
If you name your temporary tables with the same name, it is possible to get an error or corrupt your data when nested procedure calls are involved (one stored procedure calls another). This is due to the fact that temporary tables are visible in the current execution scope, so a #table created in one procedure will be visible in the second, called procedure (which can modify its data or even drop it).
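A minimal sketch of that nested-scope pitfall (the procedure and table names here are hypothetical):
CREATE PROCEDURE dbo.InnerProc
AS
BEGIN
    -- This does not create a new table; it references the caller's #scratch
    DELETE FROM #scratch;
END
GO
CREATE PROCEDURE dbo.OuterProc
AS
BEGIN
    CREATE TABLE #scratch (id int);
    INSERT INTO #scratch VALUES (1);
    EXEC dbo.InnerProc;      -- InnerProc sees and empties the caller's #scratch
    SELECT * FROM #scratch;  -- returns no rows
END
GO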

Can you easily and efficiently copy or edit the INSERTED table in a trigger?

I'm writing a trigger in which I need to check the incoming data and potentially change it. Then later in the trigger I need to use that new data for further processing. A highly simplified version looks something like this:
ALTER TRIGGER [db].[trig_update]
ON [db].[table]
AFTER UPDATE
AS
BEGIN
    DECLARE @thisprofileID int  -- (@thisuserID is assumed to be set earlier; simplified)
    IF (Inserted.profileID IS NULL)
    BEGIN
        SELECT @thisprofileID = profileID
        FROM db.otherTable
        WHERE userid = @thisuserID;
        UPDATE db.table
        SET profileID = @thisprofileID
        WHERE userid = @thisuserID;
        -- XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
        -- XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    END
    IF ({conditional})
    BEGIN
        UPDATE db.thirdTable
        SET [profileID] = Inserted.profileID
            ...{20+ other fields}
        FROM Inserted ...{a few joins}
        WHERE {various criteria}
    END
END
The problem we're running into is that the update statement fails because Inserted.profileID is null, and thirdTable.profileID is set to not allow nulls. table.profileID will never stay null; if a row is created with a null, this trigger should catch it and set it to a value. But even though we've updated 'table', Inserted still has the null value. So far it makes sense to me why this is happening.
I'm unsure how to correct the problem. In the area with commented Xs I tried running an update query against the Inserted table to update profileID, but this resulted in an error because the pseudo-table apparently can't be updated. Am I incorrect in this presumption? That would be an easy solution.
The next most logical solution to me would be to INSERT INTO a table variable to make a copy of Inserted and then use that in the rest of the trigger, but that fails because the table variable is not defined. Defining that table variable would require more fields than I care to count, and will present a major maintenance nightmare any time that we need to make changes to the structure of 'table'. So assuming this is the best approach, is there an easy way to copy the data and structure of Inserted into a table variable without explicitly defining the structure?
I don't think that a temp table (which I could otherwise easily insert into) would be a good solution, because my limited understanding is that they are far slower than a table variable that lives only inside the trigger. I also assume that a temp table must be public, and would cause problems if our trigger fires twice and both instances need the temp table.
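For reference, SELECT ... INTO copies both the data and the column definitions of Inserted into a local temp table without an explicit column list, which sidesteps the maintenance concern (a sketch, not a definitive fix; note that a #table created inside the trigger is private to the session and is dropped when the trigger's scope ends, so concurrent firings don't collide):
-- Inside the trigger body: copy Inserted's rows and structure in one step
SELECT *
INTO #insertedCopy
FROM Inserted;

-- Apply the fix-up to the copy, then use it in place of Inserted below
UPDATE #insertedCopy
SET profileID = @thisprofileID
WHERE profileID IS NULL;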

What's the benefit of doing temporary tables (#table) instead of persistent tables (table)?

I can think of two main benefits:
Avoiding concurrency problems, if you have many processes creating/dropping tables you can get in trouble as one process tries to create an already existing table.
Performance, I imagine that creating temporary tables (with #) is more performant than regular tables.
Is there any other reason, and is any of my reasons false?
You can't compare temporary and persistent tables:
Persistent tables keep your data and can be used by any process.
Temporary ones are throwaway, and #-prefixed ones are visible only to the connection that created them.
You'd use a temp table to spool results for further processing and such.
There is little difference in performance (either way) between the two types of table.
You shouldn't be dropping and creating tables all the time... any app that relies on this is doing something wrong, not least way too many SQL calls.
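As an illustration of the spooling idea mentioned above (the table and column names here are made up):
-- Spool intermediate results into a temp table...
SELECT customerid, SUM(amount) AS total
INTO #totals
FROM sales
GROUP BY customerid;

-- ...then run further processing against the spooled copy
SELECT customerid
FROM #totals
WHERE total > 1000;

DROP TABLE #totals;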
(1) Temp Tables are created in the SQL Server TEMPDB database and therefore require more IO resources and locking. Table Variables and Derived Tables are created in memory (note, though, that table variables are also backed by tempdb).
(2) Temp Tables will generally perform better for large amounts of data that can be worked on using parallelism, whereas Table Variables are best used for small amounts of data (I use a rule of thumb of 100 or fewer rows) where parallelism would not provide a significant performance improvement.
(3) You cannot use a stored procedure to insert data into a Table Variable or Derived Table. For example, the following will work: INSERT INTO #MyTempTable EXEC dbo.GetPolicies_sp, whereas the following will generate an error: INSERT INTO @MyTableVariable EXEC dbo.GetPolicies_sp. (Recent versions of SQL Server do allow INSERT ... EXEC into a table variable, so this restriction applies to older releases.)
(4) Derived Tables can only be created from a SELECT statement but can be used within an Insert, Update, or Delete statement.
(5) In order of scope endurance, Temp Tables extend the furthest in scope, followed by Table Variables, and finally Derived Tables.
1)
A table variable's lifespan is only for the duration of the batch that it runs in. If we execute the DECLARE statement first and then, in a separate batch, attempt to insert records into the @temp table variable, we receive an error because the table variable has passed out of existence. The results are the same if we declare and insert records into @temp in one batch and then attempt to query the table afterwards. Notice that with a #temp table, by contrast, we need to execute a DROP TABLE statement against #temp, because the table persists until the session ends or until the table is dropped.
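To make the scoping difference concrete, a minimal sketch (GO is the batch separator in SQL Server tools; the names are made up):
DECLARE @temp TABLE (id int);
GO
-- Fails: the table variable went out of scope when the previous batch ended
INSERT INTO @temp VALUES (1);

CREATE TABLE #temp (id int);
GO
-- Succeeds: the temp table survives across batches for this session
INSERT INTO #temp VALUES (1);
DROP TABLE #temp;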
2)
Table variables have certain clear limitations:
- Table variables cannot have non-clustered indexes
- You cannot create constraints on table variables
- You cannot create default values on table variable columns
- Statistics cannot be created against table variables
Similarities with temporary tables include:
- Instantiated in tempdb
- Clustered indexes can be created on table variables and temporary tables
- Both are logged in the transaction log
- Just as with temp and regular tables, users can perform all Data Manipulation Language (DML) queries against a table variable: SELECT, INSERT, UPDATE, and DELETE.
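A quick sketch of that last point (the names are made up):
DECLARE @t TABLE (id int, name varchar(10));
INSERT INTO @t VALUES (1, 'a'), (2, 'b');
UPDATE @t SET name = 'z' WHERE id = 2;
DELETE FROM @t WHERE id = 1;
SELECT * FROM @t;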

PL/SQL embedded insert into table that may not exist

I much prefer using this 'embedded' style of insert in a PL/SQL block (as opposed to the execute immediate style of dynamic SQL, where you have to escape quotes, etc.).
-- a contrived example
PROCEDURE CreateReport( customer IN VARCHAR2, reportdate IN DATE ) IS
BEGIN
    -- drop table, create table with explicit column list
    CreateReportTableForCustomer;
    INSERT INTO TEMP_TABLE
    VALUES ( customer, reportdate );
END;
/
The problem here is that Oracle checks whether 'temp_table' exists and has the correct number of columns, and throws a compile error if it doesn't exist.
So I was wondering if there's any way around that. Essentially I want to use a placeholder for the table name to trick Oracle into not checking whether the table exists.
EDIT:
I should have mentioned that a user is able to execute any 'report' (as above) through a mechanism that executes an arbitrary query but always writes to temp_table (in the user's schema). Thus each time the report proc is run, it drops temp_table and recreates it with, most probably, a different column list.
You could use a dynamic SQL statement to insert into the maybe-existent temp_table, and then catch and handle the exception that occurs when the table doesn't exist.
Example:
execute immediate 'INSERT INTO '||TEMP_TABLE_NAME||' VALUES ( :customer, :reportdate )' using customer, reportdate;
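To also catch the case where the table doesn't exist, as suggested above, you could wrap the dynamic insert in its own block and test SQLCODE; a sketch, where what you do about the missing table is up to you:
begin
  execute immediate 'INSERT INTO '||TEMP_TABLE_NAME||' VALUES ( :customer, :reportdate )'
    using customer, reportdate;
exception
  when others then
    if sqlcode = -942 then
      null; -- ORA-00942: table or view does not exist; create it, or log and skip
    else
      raise;
    end if;
end;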
Note that having the table name vary in a dynamic SQL statement is not very good, so if you ensure the table names stay the same, that would be best.
Maybe you should be using a global temporary table (GTT). These are permanent table structures that hold temporary data for an Oracle session. Many different sessions can insert data into the same GTT, and each will only be able to see their own data. The data is automatically deleted either on COMMIT or when the session ends, according to the GTT's definition.
You create the GTT (once only) like this:
create global temporary table my_gtt
    (customer number, report_date date)
on commit delete/preserve* rows;
* delete or preserve, as applicable
Then your programs can just use it like any other table - the only difference being it always begins empty for your session.
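With the GTT in place, the contrived example from the question needs no dynamic SQL at all. A sketch (the p_ parameter names are mine, and the customer parameter is typed to match the my_gtt definition above):
PROCEDURE CreateReport( p_customer IN NUMBER, p_reportdate IN DATE ) IS
BEGIN
    INSERT INTO my_gtt (customer, report_date)
    VALUES ( p_customer, p_reportdate );
END;
/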
Using GTTs is much preferable to dropping/recreating tables on the fly. If your application needs a different structure for each report, I strongly suggest you work out all the different structures that each report needs and create a separate GTT for each, instead of creating ordinary tables at runtime.
That said, if this is just not feasible (and I've seen good examples when it's not, e.g. in a system that supports a wide range of ad-hoc requests from users), you'll have to go with the EXECUTE IMMEDIATE approach.

How can I remove a lock from a table in SQL Server 2005?

I am using a function in a stored procedure. The procedure contains a transaction and both updates and inserts values into a table, while the function called by the procedure also fetches data from the same table.
The procedure hangs when it calls the function.
Is there any solution for this?
If I'm hearing you right, you're talking about an insert BLOCKING ITSELF, not two separate queries blocking each other.
We had a similar problem, an SSIS package was trying to insert a bunch of data into a table, but was trying to make sure those rows didn't already exist. The existing code was something like (vastly simplified):
INSERT INTO bigtable
SELECT customerid, productid, ...
FROM rawtable
WHERE NOT EXISTS (SELECT 1 FROM bigtable
                  WHERE bigtable.customerid = rawtable.customerid
                    AND bigtable.productid = rawtable.productid)
AND ... (other conditions)
This ended up blocking itself because the SELECT in the WHERE NOT EXISTS clause was preventing the INSERT from occurring.
We considered a few different options, I'll let you decide which approach works for you:
Change the transaction isolation level (see this MSDN article). Our SSIS package was defaulted to SERIALIZABLE, which is the most restrictive. (note, be aware of issues with READ UNCOMMITTED or NOLOCK before you choose this option)
Create a UNIQUE index with IGNORE_DUP_KEY = ON (see the sketch after this list). This means we can insert ALL rows and remove the WHERE NOT EXISTS clause altogether. Duplicates will be rejected, but the batch won't fail completely, and all other valid rows will still insert.
Change your query logic to do something like put all candidate rows into a temp table, then delete all rows that are already in the destination, then insert the rest.
In our case, we already had the data in a temp table, so we simply deleted the rows we didn't want inserted, and did a simple insert on the rest.
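For reference, a sketch of the IGNORE_DUP_KEY option (the index name is mine, and the key columns are assumed from the simplified example above):
CREATE UNIQUE INDEX ux_bigtable_customer_product
    ON bigtable (customerid, productid)
    WITH (IGNORE_DUP_KEY = ON);

-- Duplicate keys are now silently discarded instead of failing the whole batch
INSERT INTO bigtable (customerid, productid)
SELECT customerid, productid
FROM rawtable;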
This can be difficult to diagnose. Microsoft has provided some information here:
INF: Understanding and resolving SQL Server blocking problems
A brute force way to kill the connection(s) causing the lock is documented here:
http://shujaatsiddiqi.blogspot.com/2009/01/killing-sql-server-process-with-x-lock.html
Some more Microsoft info here: http://support.microsoft.com/kb/323630
How big is the table? Do you still have the problem if you call the procedure from separate windows? Maybe the problem is related to the amount of data the procedure is working with and a lack of indexes.