I am trying to insert data into a table based on another table in SQL Server Management Studio 2014. I have a list of 9,500 users. For each user I want to insert multiple rows of detail data into another table. Here is my case:
DECLARE @MAXID INT, @Counter INT
DECLARE @TEMP1 TABLE (
ROWID int identity(1,1) primary key,
userName nvarchar(50),
userEmail nvarchar(256)
)
insert into @TEMP1
select UserName, UserEmail from utblUsers
SET @COUNTER = 1
SELECT @MAXID = COUNT(*) FROM @TEMP1
declare @userEmail nvarchar(256), @UserName nvarchar(50)
WHILE (@COUNTER <= @MAXID/2)
BEGIN
SELECT @userEmail = UserEmail, @UserName = UserName
FROM @TEMP1 AS PT
WHERE ROWID = @COUNTER
exec sifms.dbo.sp_UserDetailInsert @UserName, @userEmail
SET @COUNTER = @COUNTER + 1
END;
Here inside the loop I am calling sp_UserDetailInsert, which inserts data into my detail table and looks like this:
DECLARE @cnt INT = 1, @sumAmount bigint, @deductedFromId int;
WHILE @cnt <= 5
BEGIN
if @cnt = 1
begin
-------Doing some calculation for SumAmount and get the deductedFromID
Insert into utblUserDetails values (@UserName, @userEmail, @sumAmount, @deductedFromId)
end
else if @cnt = 2
begin
-------Doing some calculation for SumAmount and get the deductedFromID
Insert into utblUserDetails values (@UserName, @userEmail, @sumAmount, @deductedFromId)
end
.
.
.
.
else if @cnt = 5
begin
-------Doing some calculation for SumAmount and get the deductedFromID
Insert into utblUserDetails values (@UserName, @userEmail, @sumAmount, @deductedFromId)
end
SET @cnt = @cnt + 1
END
My query executes absolutely fine for a few minutes, but the problems are:
The query executes very slowly.
SQL Server Management Studio closes after a few minutes of execution.
I have tried clearing the cache using:
DBCC FREESYSTEMCACHE ('ALL')
DBCC FREESESSIONCACHE
DBCC FREEPROCCACHE
But the result is still the same. I have tried using a cursor as well, but a cursor is much heavier and consumes more memory than the loop. As the loop is more lightweight than a cursor, I prefer the loop in this case. I just want to get the user details first and then minimize the complexity.
I have to write an insert statement that looks at a table and inserts a record if the conditions are met. This is a one-time thing, so I'm not overly concerned about it being efficient.
The table contains a work breakdown structure for a project (each project having a project level (wbs1), a phase level (wbs2), and a task level (wbs3)).
That table looks like this:
Wbs1  wbs2  wbs3  name
262               ProjectA
262   01          Data Analysis
262   01    01    Data cleansing
262   01    02    Data Transforming
I need to insert a phase (wbs2) into each project (wbs1) with an insert statement, for example adding a wbs2 "02" to each project (wbs1).
Writing the insert statement is no problem, and I select the data from the project level since most of it is redundant, so no issue there. I'm just not sure how to have it loop through and add the phase to each project, since there are multiple rows with the same project (wbs1) number.
Insert statement sample:
Insert into dbo.pr ([WBS1],[WBS2],[WBS3],[Name])
(Select [WBS1],'999',[WBS3],'In-House Expenses'
from dbo.pr where wbs1 = @ProjectID
and wbs2 = '')
How do I run this statement to insert a row for every project (wbs1)?
Hopefully this makes sense.
You can use a temporary table with an added RowNumber field and then a WHILE loop to handle looping over each row. You can then run an IF EXISTS check against your criteria before running the insert. See below for an example.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
DECLARE @ProjectId NVARCHAR(50) = '262'
CREATE TABLE #Temp (RowNumber INT, wbs1 NVARCHAR(255), wbs2 NVARCHAR(255), wbs3 NVARCHAR(255), name NVARCHAR(255))
INSERT INTO #Temp
SELECT ROW_NUMBER() OVER (ORDER BY wbs1, wbs2, wbs3, name)
,pr.*
FROM pr
select *
from #temp
-- Create loop variables to handle incrementing
DECLARE @Counter INT = 1;
DECLARE @MaxLoop INT = (SELECT COUNT(wbs1) FROM #temp)
WHILE @Counter <= @MaxLoop
BEGIN
-- Use IF EXISTS to check the current looped row meets whatever criteria you have
IF EXISTS (SELECT 'true'
FROM #Temp
WHERE RowNumber = @Counter
AND wbs1 = @ProjectId
AND wbs2 = ''
)
BEGIN
Insert into pr (wbs1,wbs2,wbs3,name)
(Select [WBS1],'999',[WBS3],'In-House Expenses'
from #temp where RowNumber = #Counter)
END
-- Remember to increment the counter
SET @Counter = @Counter + 1;
END
SELECT *
FROM pr
drop table #temp
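As an aside: because the EXISTS test here depends only on values in the row itself, the whole loop above can usually be collapsed into a single set-based insert, which avoids the row-by-row overhead. A sketch against the same tables:
-- one statement instead of a row-by-row loop
INSERT INTO pr (wbs1, wbs2, wbs3, name)
SELECT [WBS1], '999', [WBS3], 'In-House Expenses'
FROM pr
WHERE wbs1 = @ProjectId
  AND wbs2 = '';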
I have a table with CSV values in the columns, as below:
ID  Name            text
1   SID,DOB         123,12/01/1990
2   City,State,Zip  NewYork,NewYork,01234
3   SID,DOB         456,12/21/1990
What I need to get as output in this scenario is 2 tables with the corresponding values:
ID  SID  DOB
1   123  12/01/1990
3   456  12/21/1990

ID  City     State    Zip
2   NewYork  NewYork  01234
Is there any way of achieving this using a cursor or any other method in SQL Server?
There are several ways that this can be done. One way that I would suggest would be to split the data from the comma-separated list into multiple rows.
Since you are using SQL Server, you could implement a recursive CTE to split the data, then apply a PIVOT function to create the columns that you want.
;with cte (id, NameItem, Name, textItem, text) as
(
select id,
cast(left(Name, charindex(',',Name+',')-1) as varchar(50)) NameItem,
stuff(Name, 1, charindex(',',Name+','), '') Name,
cast(left(text, charindex(',',text+',')-1) as varchar(50)) textItem,
stuff(text, 1, charindex(',',text+','), '') text
from yt
union all
select id,
cast(left(Name, charindex(',',Name+',')-1) as varchar(50)) NameItem,
stuff(Name, 1, charindex(',',Name+','), '') Name,
cast(left(text, charindex(',',text+',')-1) as varchar(50)) textItem,
stuff(text, 1, charindex(',',text+','), '') text
from cte
where Name > ''
and text > ''
)
select id, SID, DOB
into table1
from
(
select id, nameitem, textitem
from cte
where nameitem in ('SID', 'DOB')
) d
pivot
(
max(textitem)
for nameitem in (SID, DOB)
) piv;
See SQL Fiddle with Demo. The recursive version will work great, but if you have a large dataset you could run into some performance issues, so you could also use a user-defined function to split the data:
create FUNCTION [dbo].[Split](@String1 varchar(MAX), @String2 varchar(MAX), @Delimiter char(1))
returns @temptable TABLE (colName varchar(MAX), colValue varchar(MAX))
as
begin
declare @idx1 int
declare @slice1 varchar(8000)
declare @idx2 int
declare @slice2 varchar(8000)
select @idx1 = 1
if len(@String1)<1 or @String1 is null return
while @idx1 != 0
begin
set @idx1 = charindex(@Delimiter,@String1)
set @idx2 = charindex(@Delimiter,@String2)
if @idx1 != 0
begin
set @slice1 = left(@String1,@idx1 - 1)
set @slice2 = left(@String2,@idx2 - 1)
end
else
begin
set @slice1 = @String1
set @slice2 = @String2
end
if(len(@slice1)>0)
insert into @temptable(colName, colValue) values(@slice1, @slice2)
set @String1 = right(@String1,len(@String1) - @idx1)
set @String2 = right(@String2,len(@String2) - @idx2)
if len(@String1) = 0 break
end
return
end;
Then you can use a CROSS APPLY to get the result for each row:
select id, SID, DOB
into table1
from
(
select t.id,
c.colname,
c.colvalue
from yt t
cross apply dbo.split(t.name, t.text, ',') c
where c.colname in ('SID', 'DOB')
) src
pivot
(
max(colvalue)
for colname in (SID, DOB)
) piv;
See SQL Fiddle with Demo
You'd need to approach this as a multi-step ETL project. I'd probably start with exporting the two types of rows into a couple of staging tables. So, for example:
select * from yourtable /* rows that start with a number */
where substring(text,1,1) in
('0','1','2','3','4','5','6','7','8','9')
select * from yourtable /* rows that don't start with a number */
where substring(text,1,1)
not in ('0','1','2','3','4','5','6','7','8','9')
/* or simply this to follow your example explicitly */
select * from yourtable where name like 'sid%'
select * from yourtable where name like 'city%'
Once you get the two types separated, you can split them out with one of the already-written split functions found readily out on the interweb.
Aaron Bertrand (who is on here often) has written up a great post on the variety of ways to split comma-delimited strings using SQL. Each of the methods is compared and contrasted here:
http://www.sqlperformance.com/2012/07/t-sql-queries/split-strings
If your row count is minimal (under 50k, let's say) and it's going to be a one-time operation, then pick the easiest way and don't worry too much about all the performance numbers.
If you have a ton of rows, or this is an ETL process that will run all the time, then you'll really want to pay attention to that stuff.
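For a flavor of what those approaches look like, here is a minimal tally-table split sketch. dbo.Numbers is an assumed helper table holding integers 1..N (N at least the length of your longest string); yourtable and its text column are from the queries above:
-- one output row per comma-delimited item, numbered so items from the
-- "Name" and "text" columns can later be paired up by position
SELECT t.id,
       SUBSTRING(t.[text], n.n, CHARINDEX(',', t.[text] + ',', n.n) - n.n) AS item,
       ROW_NUMBER() OVER (PARTITION BY t.id ORDER BY n.n) AS ordinal
FROM yourtable AS t
JOIN dbo.Numbers AS n
  ON n.n <= LEN(t.[text])
 AND SUBSTRING(',' + t.[text], n.n, 1) = ','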
A simple solution using cursors to build temporary tables. This has the limitation of making all columns VARCHAR, and it would be slow for large amounts of data.
--** Set up example data
DECLARE @Source TABLE (ID INT, Name VARCHAR(50), [text] VARCHAR(200));
INSERT INTO @Source
(ID, Name, [text])
VALUES (1, 'SID,DOB', '123,12/01/1990')
, (2, 'City,State,Zip', 'NewYork,NewYork,01234')
, (3, 'SID,DOB', '456,12/21/1990');
--** Declare variables
DECLARE @Name VARCHAR(200) = '';
DECLARE @Text VARCHAR(1000) = '';
DECLARE @SQL VARCHAR(MAX);
--** Set up cursor for the tables
DECLARE cursor_table CURSOR FAST_FORWARD READ_ONLY FOR
SELECT s.Name
FROM @Source AS s
GROUP BY Name;
OPEN cursor_table
FETCH NEXT FROM cursor_table INTO @Name;
WHILE @@FETCH_STATUS = 0
BEGIN
--** Dynamically create a temp table with the specified columns
SET @SQL = 'CREATE TABLE ##Table (' + REPLACE(@Name, ',', ' VARCHAR(50),') + ' VARCHAR(50));';
EXEC(@SQL);
--** Set up cursor to insert the rows
DECLARE row_cursor CURSOR FAST_FORWARD READ_ONLY FOR
SELECT s.[text]
FROM @Source AS s
WHERE Name = @Name;
OPEN row_cursor;
FETCH NEXT FROM row_cursor INTO @Text;
WHILE @@FETCH_STATUS = 0
BEGIN
--** Dynamically insert the row
SELECT @SQL = 'INSERT INTO ##Table VALUES (''' + REPLACE(@Text, ',', ''',''') + ''');';
EXEC(@SQL);
FETCH NEXT FROM row_cursor INTO @Text;
END
--** Display the table
SELECT *
FROM ##Table;
--** Housekeeping
CLOSE row_cursor;
DEALLOCATE row_cursor;
DROP TABLE ##Table;
FETCH NEXT FROM cursor_table INTO @Name;
END
CLOSE cursor_table;
DEALLOCATE cursor_table;
I need to use a select expression in a while loop, and I use the below sample code:
declare @i integer
set @i=1
while (@i<10)
begin
select @i as m;
set @i=@i+1
END
This code returns a separate result set for each iteration! I want it to return all the select results in one table... Is that possible? If yes, how?
You can use a temp table or table variable for this.
Here's how to do it using a temp table.
CREATE TABLE #t (m INT)
DECLARE @i INT
SET @i=1
WHILE (@i<10)
BEGIN
INSERT INTO #t SELECT @i
SET @i=@i+1
END
SELECT m FROM #t
Very similar with a table variable
DECLARE @t TABLE (m INT)
DECLARE @i INT
SET @i=1
WHILE (@i<10)
BEGIN
INSERT INTO @t SELECT @i
SET @i=@i+1
END
SELECT m FROM @t
It is not possible: each SELECT statement generates its own result set. You can use a temp table to collect the results of each iteration and then get them all in one table. To generate a sequence of integers you can use this (for SQL Server 2005+):
;WITH CTE
AS
(
SELECT 1 N
UNION ALL
SELECT N + 1 FROM CTE
WHERE N<10
)
SELECT N FROM CTE
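If you want that sequence captured in a single table rather than returned directly, a sketch combining the two ideas (#AllNumbers is just a placeholder name):
;WITH CTE AS
(
SELECT 1 N
UNION ALL
SELECT N + 1 FROM CTE
WHERE N < 10
)
SELECT N
INTO #AllNumbers  -- materialize every generated value in one temp table
FROM CTE;

SELECT N FROM #AllNumbers;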
squillman got it... with #t (CREATE TABLE #name - a temp table, persisted at top-level scope while the session is open; if you have several batch statements, you can reference this table in any of them after declaration, until you drop the table).
Cris also got it with @test (DECLARE @name TABLE - a table variable, only persisted in the current scope, i.e. a single execution batch block; note that there are several performance issues that can be introduced if you use this).
The last type of temp table you can use is a global temp table (CREATE TABLE ##name - it lasts as long as the session that created it stays open, or until it is dropped).
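A quick side-by-side sketch of the three flavors (table names here are just placeholders):
CREATE TABLE #local (m INT);      -- local temp table: visible to this session until dropped
DECLARE @tablevar TABLE (m INT);  -- table variable: current batch/procedure scope only
CREATE TABLE ##global (m INT);    -- global temp table: visible to all sessions until dropped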
With #t, you may want to add this to the beginning of your script if you don't close the session:
IF OBJECT_ID('tempdb..#t') IS NOT NULL
DROP TABLE #t
Enjoy the temp tables!
declare @i integer
DECLARE @test TABLE(
m INT /* use your data type here */
)
set @i=1
while (@i<10)
begin
insert into @test select @i;
set @i=@i+1
END
select * from @test
I have a relational table with lots of columns (import_table).
I'm trying to insert all of this data into an object-oriented database.
The object-oriented database has these tables:
#table (tableId, name)
#row (rowId, table_fk)
#column(colId, table_fk, col_name)
#value (valueId, col_fk, row_fk, value)
So far I have created a procedure that reads the import_table information_schema and inserts the table and the columns correctly into the object-oriented structure.
I then copy the import_data into a temp table with an extra identity column just to get row IDs. Then I iterate through all rows, with an inner loop to iterate through each column and do an insert per column.
Like this:
SELECT ROWID=IDENTITY(INT, 1, 1), * INTO #TEST
FROM import_table

DECLARE @COUNTER INT = 1
WHILE @COUNTER <= (SELECT COUNT(*) FROM #TEST)
BEGIN
    INSERT INTO #ROW (ROWID, TABLE_FK) VALUES(@COUNTER, 1)
    DECLARE @COLUMNCOUNTER INT = 1
    WHILE @COLUMNCOUNTER <= (SELECT COUNT(*) FROM #COLUMN WHERE TABLE_FK = 1)
    BEGIN
        DECLARE @COLNAME NVARCHAR(254) = (SELECT col_name FROM #COLUMN WHERE table_fk = 1 AND colId = @COLUMNCOUNTER)
        DECLARE @INSERTSQL NVARCHAR(1000) = 'insert into #value (col_fk, row_fk, value) select ' + CAST(@COLUMNCOUNTER AS NVARCHAR(20)) + ', ' + CAST(@COUNTER AS NVARCHAR(20)) + ', ' + @COLNAME + ' from #TEST where ROWID = ' + CAST(@COUNTER AS NVARCHAR(20))
        EXEC (@INSERTSQL)
        SET @COLUMNCOUNTER = @COLUMNCOUNTER + 1
    END
    SET @COUNTER = @COUNTER + 1
END
This works, but it is extremely slow.
Any suggestions on how to speed things up?
Redesign the database.
What you have there is a property bag type of table. These are known trade-off setups, and guess what - speed is the thing they are bad at. Ouch.
You likely could do things faster in C# outside the database and then stream the inserts back in.
One way to speed up your code is to wrap your inserts in transactions (maybe one transaction per iteration of the outer loop?). Having each insert in a separate transaction, as the code is now, will be very slow if there are lots of inserts.
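A sketch of that idea grafted onto the loop from the question (same variable and table names; the inner column loop is elided):
WHILE @COUNTER <= (SELECT COUNT(*) FROM #TEST)
BEGIN
    BEGIN TRANSACTION;  -- one commit per outer row instead of one per insert
    INSERT INTO #ROW (ROWID, TABLE_FK) VALUES (@COUNTER, 1);
    -- ... inner loop over the columns, with its dynamic inserts, goes here ...
    COMMIT TRANSACTION;
    SET @COUNTER = @COUNTER + 1;
END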