I have a situation where I want to update multiple columns, but I don't want to write so many UPDATE statements.
Is it possible to use a variable as a column name, so that the variable can change inside a WHILE loop? Consider the following sample below.
EX:
Table: Test
Column01
Column02
Column03
Column04
Column05
Column06
Column07
Column08
Column09
Column10
DECLARE @ctr INT = 01
WHILE @ctr <= 10
BEGIN
-->> Is it possible to perform concatenation here to manipulate the column name?
UPDATE Test SET (Column + @ctr) = SomeValueHere... -->> assume that @ctr = 01
SET @ctr = @ctr + 1
END
Each loop iteration should produce a query like this:
-->> The first loop will look something like
UPDATE Test SET Column01 = SomeValueHere...
-->> The second loop will look something like
UPDATE Test SET Column02 = SomeValueHere...
How can I achieve this?
Thanks in advance.
You can do this with dynamic SQL.
DECLARE @script NVARCHAR(MAX)
-- @ctr is an INT, so it must be CAST to concatenate; RIGHT('0' + ..., 2) zero-pads to match Column01..Column10
SET @script = 'UPDATE Test SET Column' + RIGHT('0' + CAST(@ctr AS VARCHAR(2)), 2) + ' = ' + [whatever]
EXECUTE (@script)
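A fuller sketch of the loop, assuming the value to assign sits in a variable called @someValue (a placeholder, not from the original post), and passing it through sp_executesql so that only the column name is concatenated:
DECLARE @ctr INT = 1
DECLARE @script NVARCHAR(MAX)
DECLARE @someValue NVARCHAR(50) = 'SomeValueHere'   -- placeholder value
WHILE @ctr <= 10
BEGIN
    -- build Column01..Column10 and run one UPDATE per iteration
    SET @script = N'UPDATE Test SET Column' + RIGHT('0' + CAST(@ctr AS VARCHAR(2)), 2) + N' = @val'
    EXEC sp_executesql @script, N'@val NVARCHAR(50)', @val = @someValue
    SET @ctr = @ctr + 1
END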
Here's the story. I'm trying to pull metadata from a master table and from a series of tables whose names are based on values in the master table. There is no foreign key.
If there were a key, it would be that the primary key from the master table is appended to the end of the child table's name. The master table is hsi.keytypetable. The child tables are hsi.keyitemxxx, where xxx is a value (keytypenum) pulled from the master table.
All I'm trying to pull from the child table right now is a count of values. In its current form, the first query, @sql1, is failing to populate @keytypenum, although when I look at the query itself and run it in a separate window, it works like a champ. The problem continues in the second query, @sql2, where I am getting the error
Must declare the scalar variable "@keytypenum"
As far as I can tell, I've declared the thing. I'm guessing I have a similar syntax problem in each query.
SET CONCAT_NULL_YIELDS_NULL OFF
declare @keytypedata table
(
Keyword varchar(50),
DateType varchar(50),
"Length" int,
"Count" int
)
declare @keywordcount int
declare @x int = 1
declare @keytypenum int
declare @sql1 varchar(max)
declare @sql2 varchar(max)
/* Determine how many records are in the master table so that I can cycle thru each one, getting the count of the child tables. */
Select @keywordcount = (Select count(*) from hsi.keytypetable)
/* @x is the counter. I'll cycle through each row in the master using a WHILE loop */
WHILE @x < @keywordcount+1
BEGIN
/* One row at a time, I'll pull the KEYTYPENUM and store it in @keytypenum. (I don't really need the order by, but I like having things in order!)
** I take the rows in order by using my counter, @x, as the offset value and fetch only 1 row at a time. When I run this query in a separate screen,
** it works well, obviously when providing a fixed offset value. */
set @sql1 =
'Set @keytypenum =
(Select
KEYTYPENUM
from hsi.keytypetable
order by KEYTYPENUM
OFFSET ' + cast(@x as varchar(4)) + ' ROWS
FETCH NEXT 1 ROWS ONLY)'
EXEC(@sql1)
/* For debugging purposes, I wanted to see that @keytypenum got assigned. This is working. */
print 'KeyTypeNum: ' + cast(@keytypenum as varchar(4))
/* I don't know why I had to put my table variable, @keytypedata, in single quotes at the beginning, but it wouldn't work if
** I didn't. The problem comes later on with restricting the query by the aforementioned @keytypenum. Remember this variable is an INT. All values
** for this field are indeed integers, and there are presently 955 rows in the table. The maximum value is 1012, so we're safe with varchar(4).
*/
SET @sql2 =
'Insert into ' + '@keytypedata' + '
Select
keytype,
CASE
WHEN k.datatype = 1 THEN ''Numeric 20''
WHEN k.datatype = 2 THEN ''Dual Table Alpha''
WHEN k.datatype = 3 THEN ''Currency''
WHEN k.datatype = 4 THEN ''Date''
WHEN k.datatype = 5 THEN ''Float''
WHEN k.datatype = 6 THEN ''Numeric 9''
WHEN k.datatype = 9 THEN ''DateTime''
WHEN k.datatype = 10 THEN ''Single Table Alpha''
WHEN k.datatype = 11 THEN ''Specific Currency''
WHEN k.datatype = 12 THEN ''Mixed Case Dual Table Alpha''
WHEN k.datatype = 13 THEN ''Mixed Case Single Table Alpha''
END,
keytypelen,
(Select count(*) from hsi.keyitem' + cast(@keytypenum as varchar(4)) + ')
FROM
hsi.keytypetable k
where
k.keytypenum = ' + cast(@keytypenum as varchar(4))+''
/* Printing out where I am with cycling thru the master table, just for troubleshooting*/
print @x
/* Increment the counter*/
set @x = @x + 1
END
/* This query is simply to display the final results. */
select *
from @keytypedata
order by 1
/* Print statements below are for troubleshooting. They should show what the 2 queries currently look like. */
Print @sql1
Print @sql2
Yeah, you can't do that. Your variables are going out of scope.
Each variable's scope is restricted to the batch it is declared in, and EXEC() runs its string as a new batch.
This will throw an error that @x is undefined:
declare @x int = 1
exec ('select @x')
As will this:
exec ('declare @x int = 2')
exec ('select @x')
And this:
exec ('declare @x int = 2')
select @x
You'd have to do it like this:
exec ('declare @x int = 2; select @x')
Or otherwise pass the results back somehow. The sp_executesql suggestion in @TT.'s answer is a good idea.
When you declare variables, their visibility is restricted to the scope in which they are declared. When you EXEC a statement, a new batch, and thereby a new scope, is created.
What you need to do is return the scalar variable through an OUTPUT parameter in a call to sp_executesql:
SET @sql = 'Set @keytypenum = ...';
EXEC sp_executesql @sql, N'@keytypenum INT OUT', @keytypenum OUTPUT;
Note that the @sql variable needs to be an NVARCHAR.
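Applied to the first query in the question, a sketch might look like this (@x and @keytypenum are declared as in the question; @sql is declared here as NVARCHAR so it can be passed to sp_executesql):
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'Set @keytypenum =
(Select KEYTYPENUM
from hsi.keytypetable
order by KEYTYPENUM
OFFSET ' + cast(@x as varchar(4)) + N' ROWS
FETCH NEXT 1 ROWS ONLY)';
-- the OUTPUT parameter carries the value back to the calling batch
EXEC sp_executesql @sql, N'@keytypenum INT OUTPUT', @keytypenum OUTPUT;
print 'KeyTypeNum: ' + cast(@keytypenum as varchar(4));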
You can find more examples here.
I am trying to create some queries passing the column name dynamically, but for some reason it is returning the column name and not the value.
I am not very familiar with this technique. For now @cmd is empty, because before I write the dynamic query I want to make sure I am passing the correct parameters. In other words, I want to print the value that is in column A1.
Can anyone please tell me or guide me how to get the value instead? I will appreciate any help.
HubFinal
id   Cart    PO     A1                  A1E
---------------------------------------------------------
01   Cart1   24432  upc1,1/25/2016,1    Available
02   Cart2   24888  upc10,1/25/2030,1   No Available
Query
WHILE (@i <= 1)
BEGIN
-- get the column name, for example A1
SET @Compartment = (SELECT compartment FROM #Compartment_table WHERE idx = @i);
-- get data from HUBFINAL to insert into HUBTEMP
SET @PO = (Select PO FROM HubFinal Where CartPlate = @CartPlate);
-- pass the column name dynamically, in this case A1
SET @CompValue = (Select @Compartment From HubFinal Where CartPlate = @CartPlate);
Print @Compartment
Print @PO
Print @CompValue
-- insert into the final table
Declare @cmd nvarchar(4000) =
-- do something with the values gotten above
EXEC(@cmd)
-- increment counter for next compartment
SET @i = @i + 1
END
Output
-- this is what is printed
A1
24432
A1
As @Sean Lange told you, it's not recommended to loop in SQL Server, as it will hurt performance (you should find another way to solve your problem). But if you want to get the value of a dynamic column name, you can do it like this.
As I don't know the data type you are working with, I am assuming it's NVARCHAR:
DECLARE @value NVARCHAR(MAX);
SET @CompValue = CONVERT(NVARCHAR(MAX), 'Select @val = ' + @Compartment + ' From HubFinal Where CartPlate = @CartPlate')
EXECUTE sp_executesql @CompValue, N'@CartPlate NVARCHAR(MAX), @val NVARCHAR(MAX) OUTPUT', @CartPlate = @CartPlate, @val = @value OUTPUT
PRINT(@value)
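One extra precaution worth adding (my addition, not part of the original answer): wrap the column name in QUOTENAME so a bad or malicious value in @Compartment cannot break out of the statement:
SET @CompValue = N'Select @val = ' + QUOTENAME(@Compartment) + N' From HubFinal Where CartPlate = @CartPlate';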
I want to optimize the insertion of millions of records in SQL Server.
while @i <= 2000000 begin
set @sql = 'select @max_c = isnull(max(c), 0) from nptb_data_v7'
Execute sp_executesql @sql, N'@max_c int output', @max_c output
set @max_c = @max_c + 1
while @j <= 10 begin
set @sql_insert = @sql_insert + '(' + cast(@i as varchar) + ',' + cast(@b as varchar) + ',' + cast(@max_c as varchar) + '),'
if len(ltrim(rtrim(@sql_insert))) >= 3800
begin
set @sql_insert = SUBSTRING(@sql_insert, 1, len(ltrim(rtrim(@sql_insert))) - 1)
Execute sp_executesql @sql_insert
set @sql_insert = 'insert into dbo.nptb_data_v7 values '
end
set @b = @b + 1
if @b > 100000
begin
set @b = 1
end
set @j = @j + 1
end
set @i = @i + 1
set @j = 1
end
set @sql_insert = SUBSTRING(@sql_insert, 1, len(ltrim(rtrim(@sql_insert))) - 1)
Execute sp_executesql @sql_insert
end
I want to optimize the above code as it is taking hours to complete this.
There are quite a few critical things I want to hit on. First, iteratively doing just about anything (especially inserting millions of rows) in SQL is almost never the right way to go. So right off the bat, we need to look at throwing that model out the window.
Second, your approach appears to be to loop over a set of numbers millions of times and add a new set of parentheses to the VALUES clause of the insert, winding you up with a ridiculously long string of value groups that just look like (1,1,2),(1,2,2)... etc. If you WERE going to do this iteratively, you'd want to make it so that every loop just did an insert rather than building an unwieldy insert string.
Third, nothing here needs dynamic SQL; not the first assignment of @max_c, nor the insertion into nptb_data_v7. You could construct all of these statements statically, without dynamic SQL, which both a) obfuscates your code and b) opens you up to injection attacks.
With those out of the way, now we can get down to brass tacks. All this appears to be doing is creating combinations of an auto-incrementing number between 1 and 2 million and, based on some rules, values for @i, @b and the current iteration.
The first thing you need here is a tally table (just a big table of integers). There are tons of ways to do this, but here's a succinct script which should get you started.
http://www.sqlservercentral.com/scripts/Advanced+SQL/62486/
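For reference, one common way to build a numbers table on the fly looks like the sketch below (the table name dbo.Tally is arbitrary, and how many rows you can generate depends on the size of sys.all_objects in your database):
-- Build a 2,000,000-row numbers table
;WITH n AS (
    SELECT TOP (2000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
SELECT N INTO dbo.Tally FROM n;
CREATE UNIQUE CLUSTERED INDEX IX_Tally_N ON dbo.Tally (N);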
Once you have this table made, your script becomes a matter of joining your newly created numbers/tally table to itself so that the rules you define are satisfied for your insert. Since it's not 100% clear what your code is trying to get at, nor can I run it from the information provided, this is where I have to leave it up to you. If you can provide a better summary of what your tables look like, your variable declarations and your objective, I may be able to help you write some code. But hopefully this should get you started.
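To illustrate the general shape of a set-based version, here is only a sketch; the second and third column expressions are stand-ins, since the exact rules the original loop applies to @b and @max_c aren't clear from the post:
INSERT INTO dbo.nptb_data_v7
SELECT i.N,                                    -- what the loop used @i for
       ((i.N - 1) * 10 + j.N - 1) % 100000 + 1, -- stand-in for the @b rule
       j.N                                      -- stand-in for the @max_c rule
FROM dbo.Tally AS i
JOIN dbo.Tally AS j ON j.N <= 10
WHERE i.N <= 2000000;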
I have the following UI in my program.
Then I have a table with the following columns
What I want to do is write a query that looks at the "Item" string in my combo box and then updates that column of the table above, in this case Handbooks, for the row where the Generalist name matches. The record should be updated every time; in other words, I want to replace the information every time.
I have no idea where to begin on this. This is the query I used to create the table I want to update.
SELECT repName.Rep_Name, repName.Handbooks, repName.Leaves
FROM repName INNER JOIN
Positions ON repName.Job_Code = Positions.Job_Code
ORDER BY repName.Rep_Name
In case this helps somewhat
My first guess is, as I put in the comment above, that your design is not sound.
Nonetheless, if in your scenario you still need to do what you are asking, then you could use dynamic SQL:
DECLARE @sqlCommand varchar(1000)
DECLARE @column varchar(50)
DECLARE @value varchar(50)
SET @column = 'foo'
SET @value = 'bar'
-- string values need to be wrapped in quotes inside the dynamic statement
SET @sqlCommand = 'UPDATE TABLE SET ' + @column + ' = ''' + @value + ''''
EXEC (@sqlCommand)
You could pass the value with parameters, or whatever approach fits your case better.
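For instance, a parameterized sketch with sp_executesql, using the repName table and Rep_Name column from your query above (QUOTENAME, the sysname type, and the sample values are my additions):
DECLARE @sqlCommand nvarchar(1000)
DECLARE @column sysname = N'Handbooks'   -- column chosen from the combo box
DECLARE @value varchar(50) = 'bar'
SET @sqlCommand = N'UPDATE repName SET ' + QUOTENAME(@column) + N' = @value WHERE Rep_Name = @repName'
EXEC sp_executesql @sqlCommand, N'@value varchar(50), @repName varchar(50)', @value = @value, @repName = 'Some Generalist'
Only the column name has to be concatenated into the string; the values travel as parameters, which avoids quoting issues and injection.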
I want to copy a large amount of data from one table to another. I used a cursor in a stored procedure to do this, but it only works for tables with fewer records; if the tables contain more records, it runs for a long time and hangs. Please suggest how I can copy the data in a faster way. My SP is below:
--exec uds_shop
--select * from CMA_UDS.dbo.Dim_Shop
--select * from UDS.dbo.Dim_Shop
--delete from CMA_UDS.dbo.Dim_Shop
alter procedure uds_shop
as
begin
declare @dwkeyshop int
declare @shopdb int
declare @shopid int
declare @shopname nvarchar(60)
declare @shoptrade int
declare @dwkeytradecat int
declare @recordowner nvarchar(20)
declare @LogMessage varchar(600)
Exec CreateLog 'Starting Process', 1
DECLARE cur_shop CURSOR FOR
select
DW_Key_Shop,Shop_ID,Shop_Name,Trade_Sub_Category_Code,DW_Key_Source_DB,DW_Key_Trade_Category,Record_Owner
from
UDS.dbo.Dim_Shop
OPEN cur_shop
FETCH NEXT FROM cur_shop INTO @dwkeyshop, @shopid, @shopname, @shoptrade, @shopdb, @dwkeytradecat, @recordowner
WHILE @@FETCH_STATUS = 0
BEGIN
Set @LogMessage = ''
Set @LogMessage = 'Records insertion/updation start date and time : ''' + Convert(varchar(19), GetDate()) + ''''
if (isnull(@dwkeyshop, '') <> '')
begin
if not exists (select crmshop.DW_Key_Shop from CMA_UDS.dbo.Dim_Shop as crmshop where (convert(varchar,crmshop.DW_Key_Shop)+CONVERT(varchar,crmshop.DW_Key_Source_DB)) = convert(varchar,(CONVERT(varchar, @dwkeyshop) + CONVERT(varchar, @shopdb))) )
begin
Set @LogMessage = Ltrim(Rtrim(@LogMessage)) + ' ' + 'Record for shop table is inserting...'
insert into
CMA_UDS.dbo.Dim_Shop
(DW_Key_Shop,DW_Key_Source_DB,DW_Key_Trade_Category,Record_Owner,Shop_ID,Shop_Name,Trade_Sub_Category_Code)
values
(@dwkeyshop, @shopdb, @dwkeytradecat, @recordowner, @shopid, @shopname, @shoptrade)
Set @LogMessage = Ltrim(Rtrim(@LogMessage)) + ' ' + 'Record successfully inserted in shop table for shop Id : ' + Convert(varchar, @shopid)
end
else
begin
Set @LogMessage = Ltrim(Rtrim(@LogMessage)) + ' ' + 'Record for Shop table is updating...'
update
CMA_UDS.dbo.Dim_Shop
set DW_Key_Trade_Category = @dwkeytradecat,
Record_Owner = @recordowner,
Shop_ID = @shopid, Shop_Name = @shopname, Trade_Sub_Category_Code = @shoptrade
where
DW_Key_Shop = @dwkeyshop and DW_Key_Source_DB = @shopdb
Set @LogMessage = Ltrim(Rtrim(@LogMessage)) + ' ' + 'Record successfully updated for shop Id : ' + Convert(varchar, @shopid)
end
end
Exec CreateLog @LogMessage, 0
FETCH NEXT FROM cur_shop INTO @dwkeyshop, @shopid, @shopname, @shoptrade, @shopdb, @dwkeytradecat, @recordowner
end
CLOSE cur_shop
DEALLOCATE cur_shop
End
Assuming targetTable and destinationTable have the same schema...
INSERT INTO targetTable
SELECT * FROM destinationTable
WHERE someCriteria
Avoid the use of cursors unless there is no other way (rare).
You can use the WHERE clause to filter out any duplicate records.
If you have an identity column, use an explicit column list that doesn't contain the identity column.
You can also try disabling constraints and removing indexes provided you replace them (and make sure the constraints are checked) afterwards.
If you are on SQL Server 2008 (onwards) you can use the MERGE statement.
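For example, a MERGE sketch of the insert/update logic from the question (column names are taken from the cursor above, and the NULL check on DW_Key_Shop is folded into the USING clause):
MERGE CMA_UDS.dbo.Dim_Shop AS tgt
USING (SELECT * FROM UDS.dbo.Dim_Shop WHERE DW_Key_Shop IS NOT NULL) AS src
    ON tgt.DW_Key_Shop = src.DW_Key_Shop
    AND tgt.DW_Key_Source_DB = src.DW_Key_Source_DB
WHEN MATCHED THEN
    UPDATE SET DW_Key_Trade_Category = src.DW_Key_Trade_Category,
               Record_Owner = src.Record_Owner,
               Shop_ID = src.Shop_ID,
               Shop_Name = src.Shop_Name,
               Trade_Sub_Category_Code = src.Trade_Sub_Category_Code
WHEN NOT MATCHED THEN
    INSERT (DW_Key_Shop, DW_Key_Source_DB, DW_Key_Trade_Category, Record_Owner, Shop_ID, Shop_Name, Trade_Sub_Category_Code)
    VALUES (src.DW_Key_Shop, src.DW_Key_Source_DB, src.DW_Key_Trade_Category, src.Record_Owner, src.Shop_ID, src.Shop_Name, src.Trade_Sub_Category_Code);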
From my personal experience, when you copy a huge amount of data from one table to another (with similar constraints), drop the constraints on the table the data is being copied into. Once the copy is done, reinstate the constraints.
I was able to reduce the copy time from 7 hours to 30 minutes in my case (100 million records with 6 constraints).
INSERT INTO targetTable
SELECT * FROM destinationTable
WHERE someCriteria (based on Criteria you can copy/move the records)
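For the constraint step mentioned above, one alternative sketch is to disable rather than drop the constraints (using the target table from the question; note that WITH CHECK re-validates the data, which itself takes time):
ALTER TABLE CMA_UDS.dbo.Dim_Shop NOCHECK CONSTRAINT ALL;          -- disable FK and CHECK constraints
-- ... run the bulk INSERT here ...
ALTER TABLE CMA_UDS.dbo.Dim_Shop WITH CHECK CHECK CONSTRAINT ALL; -- re-enable and re-validate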
Cursors are notoriously slow, and RAM can become a problem for very large datasets.
It does look like you are doing a good bit of logging in each iteration, so you may be stuck with the cursor, but I would look for a way to break the job up into multiple invocations so that you can keep your footprint small.
If you have an autonumber column, I would add a @startIdx bigint parameter to the procedure, and redefine your cursor statement to take the TOP 1000 rows WHERE [autonumberField] > @startIdx ORDER BY [autonumberField]. Then create a new stored procedure with something like:
DECLARE @startIdx bigint = 0
WHILE (SELECT COUNT(*) FROM <sourceTable>) > @startIdx
BEGIN
EXEC <your stored procedure> @startIdx
SET @startIdx = @startIdx + 1000
END
Also, make sure your database files are set to auto-grow, and that they do so in large increments, so you are not spending all your time growing your data files.