I have a query script in SQL Server Management Studio, shown below:
if OBJECT_ID('tempdb..#temp') IS NOT NULL
drop table #temp
select somecolumn into #temp from sometable where somecondition
if OBJECT_ID('tempdb..#temp') IS NOT NULL
drop table #temp
select somecolumn2 into #temp from sometable2 where somecondition2
I added the drop table lines to ensure the #temp table is cleaned up. However, when I run the script repeatedly, I still get the error "There is already an object named '#temp' in the database." on the second select line. It seems that the drop table didn't take effect as I intended.
if OBJECT_ID('tempdb..#temp') IS NOT NULL
drop table #temp
select somecolumn into #temp from sometable where somecondition
GO --<-- Separate these two blocks with a batch separator
if OBJECT_ID('tempdb..#temp') IS NOT NULL
drop table #temp
select somecolumn2 into #temp from sometable2 where somecondition2
If I run each of these batches separately, it works. However, if they are combined into one script (like what is done when a DACPAC script runs, or putting them both into one tab in SSMS), I get an Invalid column name error on the second insert. Why is that? If I need these to run in one script, do I need to use a different name for the temp table for the second batch? Or am I missing something that would allow me to use the same name?
IF OBJECT_ID('tempdb..#source') IS NOT NULL DROP TABLE #source
SELECT FirstName, LastName INTO #source FROM Musician WHERE 1 = 0; -- set up temp table schema
INSERT INTO #source ( FirstName, LastName )
VALUES
('Geddy', 'Lee'),
('Alex', 'Lifeson')
SELECT * FROM #source
GO
IF OBJECT_ID('tempdb..#source') IS NOT NULL DROP TABLE #source
SELECT [Name], Genre INTO #source FROM Band WHERE 1 = 0; -- set up temp table schema
INSERT INTO #source ( [Name], Genre )
VALUES
('Rush', 'Rock'),
('Ratt', 'Rock')
SELECT * FROM #source
GO
Each batch is parsed independently. So it works when you use GO because they are in different batches.
When you put everything in the same batch, SQL Server parses what it sees, and it is blind to logic like DROP commands hidden behind IF conditionals. Try the following and you'll find the same:
IF (1=0) DROP TABLE IF EXISTS #x; CREATE TABLE #x(i int);
IF (1=1) DROP TABLE IF EXISTS #x; CREATE TABLE #x(j date);
You and I both know that only one of those will ever execute, but the parser spots the duplicate table name before it ever gets to execution (or to evaluating any conditionals).
This works because, again, each batch is now parsed in isolation:
IF (1=0) DROP TABLE IF EXISTS #x; CREATE TABLE #x(i int);
GO
IF (1=1) DROP TABLE IF EXISTS #x; CREATE TABLE #x(j date);
This will in fact fail even though it passes parsing (highlight and select Parse instead of Execute), so the blindness goes both ways:
IF (1=0) DROP TABLE IF EXISTS #x; CREATE TABLE #x(i int);
GO
IF (1=1) CREATE TABLE #x(j date);
Using GO after dropping the tables in both blocks will do the trick.
IF OBJECT_ID('tempdb..#source') IS NOT NULL DROP TABLE #source
go
SELECT FirstName, LastName INTO #source FROM Musician WHERE 1 = 0; -- set up temp table schema
INSERT INTO #source ( FirstName, LastName )
VALUES
('Geddy', 'Lee'),
('Alex', 'Lifeson')
SELECT * FROM #source
GO
IF OBJECT_ID('tempdb..#source') IS NOT NULL DROP TABLE #source
go
SELECT [Name], Genre INTO #source FROM Band WHERE 1 = 0; -- set up temp table schema
INSERT INTO #source ( [Name], Genre )
VALUES
('Rush', 'Rock'),
('Ratt', 'Rock')
SELECT * FROM #source
GO
I have a temp table which needs to be recreated with different WHERE conditions. Even though I have a drop statement for the temp table, the query fails when executed. Is there any way to overcome this issue? Please see the example below for clarity. Any help is much appreciated.
drop table if exists table1;
create table table1(id int)
insert into table1 values (2),(3)
drop table if exists #temp;
select * into #temp from(select * from table1 where id=2)a;
drop table if exists #temp;
select * into #temp from(select * from table1 where id=3)a;
Try using this. It is good practice to use GO to split your query into batches.
drop table if exists table1;
go
create table table1(id int)
insert into table1 values (2),(3)
go
drop table if exists #temp;
go
select * into #temp from table1 where id=2;
go
drop table if exists #temp;
go
select * into #temp from table1 where id=3;
I have a procedure Sp1:
Begin
select product_id, product_name
from product
select dept_id, dept_name
from department
end
My procedure returns two result sets. Now I call this procedure from another procedure using:
exec SP1
How can I access the results of SP1 in this other procedure?
You can get the results from an SP into a table by using the INSERT INTO..EXEC syntax. I don't advise it, however, as it relies on every result set returned from the SP having the same definition:
USE Sandbox;
GO
CREATE PROC TestProc1 AS
SELECT *
FROM (VALUES(1,'T-Shirt'),
(2,'Jeans'),
(3,'Spotlight')) V(ProductID,ProductName);
SELECT *
FROM (VALUES(1,'Clothing'),
(2,'Lighting')) V(DeptID, DepartmentName);
GO
CREATE TABLE #TempTable (ID int, [Name] varchar(15));
INSERT INTO #TempTable
EXEC TestProc1;
SELECT *
FROM #TempTable;
GO
DROP TABLE #TempTable
DROP PROC TestProc1;
As soon as you throw in a result set that has a different definition (for example, a different number of columns, or a value that can't be implicitly cast, e.g. 'abc' to an int), it'll fail. For example:
USE Sandbox;
GO
CREATE PROC TestProc1 AS
SELECT *
FROM (VALUES(1,'T-Shirt',1),
(2,'Jeans',1),
(3,'Spotlight',2)) V(ProductID,ProductName,DeptID);
SELECT *
FROM (VALUES(1,'Clothing'),
(2,'Lighting')) V(DeptID, DepartmentName);
GO
CREATE TABLE #TempTable (ID int, [Name] varchar(15));
--fails
INSERT INTO #TempTable
EXEC TestProc1;
SELECT *
FROM #TempTable;
GO
DROP TABLE #TempTable;
GO
CREATE TABLE #TempTable (ID int, [Name] varchar(15),OtherID int);
--fails
INSERT INTO #TempTable
EXEC TestProc1;
SELECT *
FROM #TempTable;
GO
DROP TABLE #TempTable
DROP PROC TestProc1;
You should really be using multiple SPs and handling the data that way.
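If it helps, here is a minimal sketch of that approach (the procedure names GetProducts and GetDepartments are made up for the example): each procedure returns exactly one result set, so each can be captured safely with INSERT INTO..EXEC into its own temp table.
CREATE PROC GetProducts AS
SELECT *
FROM (VALUES(1,'T-Shirt'),
(2,'Jeans'),
(3,'Spotlight')) V(ProductID,ProductName);
GO
CREATE PROC GetDepartments AS
SELECT *
FROM (VALUES(1,'Clothing'),
(2,'Lighting')) V(DeptID, DepartmentName);
GO
CREATE TABLE #Products (ID int, [Name] varchar(15));
CREATE TABLE #Departments (ID int, [Name] varchar(15));
-- Each procedure returns exactly one result set, so INSERT INTO..EXEC is safe here
INSERT INTO #Products EXEC GetProducts;
INSERT INTO #Departments EXEC GetDepartments;
SELECT * FROM #Products;
SELECT * FROM #Departments;
GO
DROP TABLE #Products;
DROP TABLE #Departments;
DROP PROC GetProducts;
DROP PROC GetDepartments;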
After having created a temporary table and declared the data types like so:
CREATE TABLE #TempTable(
ID int,
Date datetime,
Name char(20))
How do I then insert the relevant data, which is already held in a physical table within the database?
INSERT INTO #TempTable (ID, Date, Name)
SELECT id, date, name
FROM physical_table
To insert all data from all columns, just use this:
SELECT * INTO #TempTable
FROM OriginalTable
Don't forget to DROP the temporary table after you have finished with it and before you try creating it again:
DROP TABLE #TempTable
SELECT ID, Date, Name into #temp from [TableName]
This is my way of doing an insert in SQL Server. I also usually check whether the temporary table already exists:
IF OBJECT_ID('tempdb..#MyTable') IS NOT NULL DROP Table #MyTable
SELECT b.Val as 'bVals'
INTO #MyTable
FROM OtherTable as b
SELECT *
INTO #TempTable
FROM table
I have provided two approaches to solve the same issue.
Solution 1: This approach has two steps: first create a temporary table with the specified data types, then insert the values from the existing data table.
CREATE TABLE #TempStudent(tempID int, tempName varchar(MAX) )
INSERT INTO #TempStudent(tempID, tempName) SELECT id, studName FROM students where id =1
SELECT * FROM #TempStudent
Solution 2: This approach is simpler: you can directly insert the values into the temporary table, and the system automatically takes care of creating the temp table with the same data types as the original table.
SELECT id, studName INTO #TempStudent FROM students where id =1
SELECT * FROM #TempStudent
After you create the temp table, you just do a normal INSERT INTO ... SELECT ... FROM:
INSERT INTO #TempTable (id, Date, Name)
SELECT t.id, t.Date, t.Name
FROM yourTable t
The right query:
drop table #tmp_table
select new_acc_no, count(new_acc_no) as count1
into #tmp_table
from table
where unit_id = '0007'
group by unit_id, new_acc_no
having count(new_acc_no) > 1
insert into #temptable (col1, col2, col3)
select col1, col2, col3 from othertable
Note that this is considered poor practice:
insert into #temptable
select col1, col2, col3 from othertable
If the definition of the temp table were to change, the code could fail at runtime.
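As a quick, hypothetical illustration of that risk (othertable and col4 are made-up names): suppose a fourth column is later added to the temp table. The version without a column list then fails with a column-count mismatch, while the version with an explicit column list keeps working:
CREATE TABLE #temptable (col1 int, col2 int, col3 int, col4 int); -- col4 added later
-- Fails: the number of supplied values no longer matches the table definition
INSERT INTO #temptable
SELECT col1, col2, col3 FROM othertable;
-- Still works: the columns are named explicitly, and col4 simply stays NULL
INSERT INTO #temptable (col1, col2, col3)
SELECT col1, col2, col3 FROM othertable;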
The basic operations on a temporary table are given below; modify and use them as per your requirements:
-- CREATE A TEMP TABLE
CREATE TABLE #MyTempEmployeeTable(tempUserID varchar(MAX), tempUserName varchar(MAX) )
-- INSERT VALUE INTO A TEMP TABLE
INSERT INTO #MyTempEmployeeTable(tempUserID,tempUserName) SELECT userid,username FROM users where userid =21
-- QUERY A TEMP TABLE [This works only in the same session, not in other users' sessions]
SELECT * FROM #MyTempEmployeeTable
-- DELETE VALUE IN TEMP TABLE
DELETE FROM #MyTempEmployeeTable
-- DROP A TEMP TABLE
DROP TABLE #MyTempEmployeeTable
INSERT INTO #TempTable(ID, Date, Name)
SELECT OtherID, OtherDate, OtherName FROM PhysicalTable
insert #temptable
select idfield, datefield, namefield from yourrealtable
All the above-mentioned answers will almost fulfill the purpose. However, you need to drop the temp table after all the operations on it. You can follow this approach:
INSERT INTO #TempTable (ID, Date, Name)
SELECT id, date, name
FROM physical_table;
IF OBJECT_ID('tempdb.dbo.#TempTable') IS NOT NULL
DROP TABLE #TempTable;
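On SQL Server 2016 and later you can also shorten the existence check with DROP TABLE IF EXISTS, as some of the earlier answers do:
DROP TABLE IF EXISTS #TempTable;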
IF EXISTS (SELECT name FROM sysobjects WHERE name = 'myTrigger' AND type = 'TR')
BEGIN
DROP TRIGGER myTrigger
END
GO
create trigger myTrigger
on mytable_backup
instead of insert
as
begin
declare #seq int
select #seq = seq from inserted
if exists (select * from mytable_backup where seq= #seq) begin
delete from mytable_backup where seq=#seq
end
insert into mytable_backup
select * from inserted
end
go
I've written this trigger so that, while inserting, if the seq column value is repeated, the previous row with the same seq is replaced; if the seq doesn't exist, the row is inserted with the new seq.
In an SSIS package I'm using an OLE DB table (Mytable) as a source, which contains:
Name,Age,Seq
Gauraw,30,1
Gauraw,31,1
Kiran,28,3
Kiran,29,3
kiran,28,3
Venkatesh,,4
Venkatesh,28,4
Now I'm loading this table into an OLE DB destination (Mytable_backup).
I expect to get the output as:
Gauraw,31,1
kiran,28,3
Venkatesh,28,4
But I'm getting all the records from Mytable into Mytable_backup.
Is anything wrong with my trigger?
I think that this trigger will just take the first row and compare it with the existing data. If I understand what you want to do, you can quite easily do this:
IF EXISTS (SELECT name FROM sysobjects WHERE name = 'myTrigger' AND type = 'TR')
BEGIN
DROP TRIGGER myTrigger
END
GO
create trigger myTrigger
on mytable_backup
instead of insert
as
begin
insert into mytable_backup
select
*
from
inserted
WHERE NOT EXISTS
(
SELECT
NULL
FROM
mytable_backup AS mytable
WHERE
inserted.seq=mytable.seq
)
end
go
EDIT
So I found out what was going on. If you insert all of the rows in one go, inserted contains all the rows. Sorry, my mistake. If there are duplicates in your data, your example does not show which one to choose; I have chosen the one with the maximum age (I don't know what your requirements are). Here is an update with the full example.
Table structure
CREATE TABLE mytable_backup
(
Name VARCHAR(100),
Age INT,
Seq INT
)
GO
Trigger
create trigger myTrigger
on mytable_backup
instead of insert
as
begin
;WITH CTE
AS
(
SELECT
ROW_NUMBER() OVER(PARTITION BY inserted.Seq ORDER BY Age DESC) AS RowNbr, -- DESC keeps the row with the maximum Age per Seq
inserted.*
FROM
inserted
WHERE NOT EXISTS
(
SELECT
NULL
FROM
mytable_backup
WHERE
mytable_backup.Seq=inserted.Seq
)
)
insert into mytable_backup(Age,Name,Seq)
SELECT
CTE.Age,
CTE.Name,
cte.Seq
FROM
CTE
WHERE
CTE.RowNbr=1
end
GO
Insert of test data
INSERT INTO mytable_backup
VALUES
('Gauraw',30,1),
('Gauraw',31,1),
('Kiran',28,3),
('Kiran',29,3),
('kiran',28,3),
('Venkatesh',20,4),
('Venkatesh',28,4)
SELECT * FROM mytable_backup
Drop of the database objects
DROP TRIGGER myTrigger
DROP TABLE mytable_backup
Your original code has two flaws:
It assumes that only one record is inserted at a time.
Your insert into mytable_backup happens outside of the if condition. That insert will execute every time.
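To make the first flaw concrete, here is a small, hypothetical demonstration (the table and trigger names are made up): with a multi-row insert, assigning a column from inserted to a variable captures only one of the rows.
CREATE TABLE demo_backup (seq int)
GO
create trigger demoTrigger
on demo_backup
instead of insert
as
begin
declare @seq int, @rows int
-- With a multi-row insert, this assignment keeps only one arbitrary seq value
select @seq = seq from inserted
select @rows = COUNT(*) from inserted
print 'Captured seq: ' + CAST(@seq AS varchar(10))
print 'Rows in inserted: ' + CAST(@rows AS varchar(10))
-- (INSTEAD OF trigger: nothing is actually inserted in this demo)
end
go
INSERT INTO demo_backup VALUES (1), (3), (4) -- three rows arrive, but only one seq is captured
GO
DROP TRIGGER demoTrigger
DROP TABLE demo_backup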