Update a column in a table repeatedly - sql

I have a table like this
CREATE TABLE #tmp(ColSelect NVARCHAR(400),ColParValues XML)
where ColSelect contains a SQL SELECT statement and ColParValues contains XML data holding the parameter values for ColSelect.
For example, ColSelect contains:
"SELECT [$12]+19/[$16]-[$54]"
and ColParValues contains name/value pairs that refer to the parameters in ColSelect.
How can I update my table so that each parameter is replaced with the relevant value from ColParValues? I use this statement:
update #tmp
SET
ColSelect=REPLACE(ColSelect,c.value('@Value','nvarchar(10)'),c.value('@Res','DECIMAL(24,12)'))
FROM #tmp t1
CROSS APPLY t1.ColParValues.nodes('/root/r') AS n(c)
but this statement replaces only one parameter per row (an UPDATE ... FROM with a one-to-many join applies just a single matching row to each updated row).
And this is the sample data link.

This is one solution:
create table #tmp (colselect varchar(200),colparvalues xml)
insert into #tmp (colselect,colparvalues)
values ('select case when [$71]+[$29]+10<25 then 1 else 0 end'
,'<root><r Value="[$71]" Res="1"/><r Value="[$29]" Res="5"/></root>'),
('select case when [$95]*[$29]+10<25 then 1 else 0 end'
,'<root><r Value="[$95]" Res="3"/><r Value="[$29]" Res="5"/></root>')
WHILE @@ROWCOUNT > 0
update #tmp
SET
ColSelect=REPLACE(ColSelect,c.value('@Value','nvarchar(10)'),c.value('@Res','DECIMAL(24,12)'))
FROM #tmp t1
CROSS APPLY t1.ColParValues.nodes('/root/r') AS n(c)
WHERE t1.colselect LIKE '%'+replace(c.value('@Value','nvarchar(10)'),'[','')+'%'
select * from #tmp
drop table #tmp
However, this is very similar to a cursor in performance. Check the performance and use it if that is acceptable.
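If you want to measure that before adopting it, one hedged option is to wrap the loop in SQL Server's execution statistics and compare it against a cursor version on your real data volumes:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run the WHILE @@ROWCOUNT update loop from above here
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;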


SQL copying record without specifying column list; ignoring Identity

I'm trying to copy a record from TableA back to TableA, but using a new Identity.
I don't want to specify a column list as I have over 100 columns, and there may be more in the future. I'd like a chunk of code that can run when/if things change.
After looking at similar questions, I have attempted this code:
SELECT * INTO #tmp FROM TableA WHERE Id = 1;
ALTER TABLE #tmp DROP COLUMN Id;
INSERT INTO TableA SELECT * FROM #tmp;
Drop Table #tmp;
I am, however, still getting this error:
An explicit value for the identity column in table 'dbo.TableA' can only be specified when a column list is used and IDENTITY_INSERT is ON.
Running a SELECT * FROM #tmp gives me what I would expect: a single record with all my columns, with the exception of the Id column.
Any Ideas?
Thanks!
EDIT
Here is a picture of the properties of the Id column
Use Dynamic SQL: get your list of columns (except ID), build an insert statement using that list, and then call exec on it:
SELECT *
INTO #tmp
FROM TableA
WHERE Id = 1;
ALTER TABLE #tmp DROP COLUMN id;
DECLARE @cols varchar(max);
SELECT
@cols = COALESCE(@cols + ',', '') + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TableA' AND COLUMN_NAME <> 'id'
--select @cols -- display column list for debugging
DECLARE @sqltxt varchar(max) = 'INSERT INTO TableA (' + @cols + ') SELECT * FROM #tmp';
--SELECT @sqltxt -- display the statement for debugging
exec (@sqltxt)
DROP TABLE #tmp
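If any of the column names in the list above contain spaces or reserved words, a small hedged refinement is to wrap each name in QUOTENAME while building the list, so the generated INSERT still parses:
SELECT
@cols = COALESCE(@cols + ',', '') + QUOTENAME(COLUMN_NAME) -- adds [brackets] around each name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TableA' AND COLUMN_NAME <> 'id'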
Try This
Step 1 :
INSERT INTO Employee1(FirstName,LastName,ManagerID,Salary)
SELECT FirstName,LastName,ManagerID,Salary
FROM Employee1
WHERE EmployeeID=X -- Your Employee ID
Step 2:
DELETE FROM Employee1 WHERE EmployeeID=X
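If you go the insert-then-delete route, it may be safer to wrap both steps in a transaction so a failure partway through doesn't leave you with either a duplicate or a lost row (a sketch, keeping X as the placeholder employee ID):
BEGIN TRANSACTION;
INSERT INTO Employee1 (FirstName, LastName, ManagerID, Salary)
SELECT FirstName, LastName, ManagerID, Salary
FROM Employee1
WHERE EmployeeID = X; -- Your Employee ID
DELETE FROM Employee1 WHERE EmployeeID = X;
COMMIT TRANSACTION;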

Inserting into one Temp table with If condition [duplicate]

Possible Duplicate:
SQL IF statement is being ignored
Odd error using IF/ELSE IF statements
I'm trying to insert into one temp table, but it is throwing an error. Please help.
Declare @Live BIT = 'True'
Step 1:
If @Live = 'True'
BEGIN
IF OBJECT_ID('tempdb..#tempTable') IS NOT NULL
DROP TABLE #tempTable
SELECT TOP 1 * INTO #tempTable FROM Table1
END
ELSE
IF OBJECT_ID('tempdb..#tempTable') IS NOT NULL
DROP TABLE #tempTable
SELECT TOP 5 * INTO #tempTable FROM Table1
Step 2:
IF OBJECT_ID('tempdb..#tempTable') IS NOT NULL
DROP TABLE #tempTable
If @Live = 'True'
BEGIN
SELECT TOP 1 * INTO #tempTable FROM Table1
END
ELSE
SELECT TOP 5 * INTO #tempTable FROM Table1
In Step 1 and Step 2, the error shows as
"There is already an object named '#tempTable' in the database."
Condition: I have to insert the records into one #tempTable.
Is there any other alternative to achieve this requirement?
I would recommend declaring the temp table beforehand and then using an INSERT INTO instead of a SELECT INTO:
IF OBJECT_ID('tempdb..#tempTable') IS NOT NULL
BEGIN
DROP TABLE #tempTable
END
CREATE TABLE #tempTable
(
--columns go here
)
If @Live = 'True'
BEGIN
INSERT INTO #tempTable
SELECT TOP 1 * FROM Table1
END
ELSE
INSERT INTO #tempTable
SELECT TOP 5 * FROM Table1
Run this command before running the query, because a #tempTable already exists:
DROP TABLE #tempTable
and then try this code:
IF OBJECT_ID('tempdb..#tempTable') is not null
drop table #tempTable
create table #tempTable (a int)
insert into #tempTable (a) select 1
select * from #tempTable

How to SELECT * INTO [tmp table] without declare table?

I want to run a SELECT statement on a table and insert the result into a temp table variable, but I don't want to declare the temp table with its columns. I want to use it like this:
Declare #tmp table;
SELECT * INTO #tmp FROM myTable
but this wants me to declare the columns and data types for #tmp.
Please help me.
You can do this simply without the DECLARE command - which is not valid for #temp tables anyway, only for @table variables. Did you try just the following, without trying to define #tmp first?
SELECT * INTO #tmp FROM myTable;
With data:
select *
into #tmp
from myTable
No data:
select *
into #tmp
from myTable
where 0=1
BTW, you can not do this with table variables.
select *
into #tmp
from myTable
Table variables need to be declared with the columns.
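In other words, with a table variable the columns have to be spelled out up front and then filled with INSERT ... SELECT (a minimal sketch; the column names here are made up):
DECLARE @tmp TABLE (Id INT, Name NVARCHAR(100)); -- columns must be listed explicitly
INSERT INTO @tmp (Id, Name)
SELECT Id, Name
FROM myTable;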

create temp table of trigger data

I am trying to create an audit trigger without having to specify the column list more than once.
To this end, I want to produce a temporary table of the content of the INSERTED or DELETED data in the trigger, then process that into an audit table.
If I use this:
IF @ChangeType = 'D'
SELECT * INTO #tmp FROM DELETED
ELSE
SELECT * INTO #tmp FROM INSERTED
Then I get a compilation error at the second SELECT * INTO, saying that the table #tmp already exists.
If I try and work around this using dynamic SQL:
SET @Sql = 'SELECT * INTO #tmp FROM '
IF @ChangeType = 'D'
SET @Sql = @Sql + 'DELETED'
ELSE
SET @Sql = @Sql + 'INSERTED'
EXEC (@Sql)
Then I get an error that the DELETED and INSERTED tables do not exist.
How can I get the INSERTED and DELETED tables in a trigger into a temporary or other in-memory table?
Try to create the temporary table outside the if, like:
SELECT TOP 0 * INTO #tmp FROM DELETED
IF @ChangeType = 'D'
INSERT INTO #tmp SELECT * FROM DELETED
ELSE
INSERT INTO #tmp SELECT * FROM INSERTED
This is a known problem due to the resolve-on-parse handling of the temp table object. With two SELECT ... INTO statements targeting it in the same scope, SQL Server throws in the towel.
SELECT * INTO #tmp FROM DELETED WHERE 1=0
IF @ChangeType = 'D'
INSERT #tmp SELECT * FROM DELETED
ELSE
INSERT #tmp SELECT * FROM INSERTED
I'd be interested as to why you need to copy the data into another table in the first place. But, that's off-topic...
Temporary tables (#temp) are notionally stored on disk, and table variables (@temp) are notionally only in memory and may be more optimal for small tasks. (This assumes writes to the table will normally only affect small numbers of rows.)
Temporary tables, however, can be created using the SELECT INTO trick, avoiding the need to know the table definition in advance.
If you do know the table definition in advance, however, can't you simply use something such as the following?
DECLARE @temp TABLE (id INT, val INT)
IF @ChangeType = 'D'
INSERT INTO @temp SELECT * FROM DELETED
ELSE
INSERT INTO @temp SELECT * FROM INSERTED
Personally, I'd even avoid using * if possible. Your subsequent queries will only use specific fields, so I'd only copy the fields I was using. This has the added benefit that if fields are added to the table, the code doesn't break...
DECLARE @temp TABLE (id INT, val INT)
IF @ChangeType = 'D'
INSERT INTO @temp SELECT id, val FROM DELETED
ELSE
INSERT INTO @temp SELECT id, val FROM INSERTED
In my mind, the advantage of specifying the fields (which is what you wish to avoid), is that you can ensure that you always only copy what you need.

Define variable to use with IN operator (T-SQL)

I have a Transact-SQL query that uses the IN operator. Something like this:
select * from myTable where myColumn in (1,2,3,4)
Is there a way to define a variable to hold the entire list "(1,2,3,4)"? How should I define it?
declare @myList {data type}
set @myList = (1,2,3,4)
select * from myTable where myColumn in @myList
DECLARE @MyList TABLE (Value INT)
INSERT INTO @MyList VALUES (1)
INSERT INTO @MyList VALUES (2)
INSERT INTO @MyList VALUES (3)
INSERT INTO @MyList VALUES (4)
SELECT *
FROM MyTable
WHERE MyColumn IN (SELECT Value FROM @MyList)
DECLARE @mylist TABLE (Id int)
INSERT INTO @mylist
SELECT id FROM (VALUES (1),(2),(3),(4),(5)) AS tbl(id)
SELECT * FROM Mytable WHERE theColumn IN (select id from @mylist)
There are a few ways to tackle dynamic CSV lists for T-SQL queries:
1) Using an inner select
SELECT * FROM myTable WHERE myColumn in (SELECT id FROM myIdTable WHERE id > 10)
2) Using dynamically concatenated TSQL
DECLARE @sql nvarchar(max)
declare @list nvarchar(256)
select @list = '1,2,3'
SELECT @sql = 'SELECT * FROM myTable WHERE myColumn in (' + @list + ')'
exec sp_executesql @sql
3) A possible third option is table variables. If you have SQL Server 2005 you can use a table variable. If you're on SQL Server 2008 you can even pass whole table variables in as a parameter to stored procedures and use them in a join or as a subselect in the IN clause (a rough sketch of that appears after the code below).
DECLARE @list TABLE (Id INT)
INSERT INTO @list(Id)
SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
SELECT
*
FROM
myTable
JOIN @list l ON myTable.myColumn = l.Id
SELECT
*
FROM
myTable
WHERE
myColumn IN (SELECT Id FROM @list)
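For the SQL Server 2008+ table-valued parameter route mentioned above, a rough sketch might look like this (the type and procedure names are invented for illustration):
CREATE TYPE dbo.IdList AS TABLE (Id INT PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetByIds
@Ids dbo.IdList READONLY -- table-valued parameters must be declared READONLY
AS
BEGIN
SELECT t.*
FROM myTable t
JOIN @Ids i ON t.myColumn = i.Id;
END
GO
-- caller side
DECLARE @Ids dbo.IdList;
INSERT INTO @Ids (Id) VALUES (1), (2), (3), (4);
EXEC dbo.GetByIds @Ids = @Ids;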
Use a function like this:
CREATE function [dbo].[list_to_table] (@list varchar(4000))
returns @tab table (item varchar(100))
begin
if CHARINDEX(',',@list) = 0 or CHARINDEX(',',@list) is null
begin
insert into @tab (item) values (@list);
return;
end
declare @c_pos int;
declare @n_pos int;
declare @l_pos int;
set @c_pos = 0;
set @n_pos = CHARINDEX(',',@list,@c_pos);
while @n_pos > 0
begin
insert into @tab (item) values (SUBSTRING(@list,@c_pos+1,@n_pos - @c_pos-1));
set @c_pos = @n_pos;
set @l_pos = @n_pos;
set @n_pos = CHARINDEX(',',@list,@c_pos+1);
end;
insert into @tab (item) values (SUBSTRING(@list,@l_pos+1,4000));
return;
end;
Instead of using IN, you make an inner join with the table returned by the function:
select * from table_1 where id in ('a','b','c')
becomes
select * from table_1 a inner join [dbo].[list_to_table] ('a,b,c') b on (a.id = b.item)
In an unindexed 1M record table the second version took about half the time...
I know this is old now, but from SQL Server 2016 onwards you can use STRING_SPLIT:
DECLARE @InList varchar(255) = 'This;Is;My;List';
WITH InList (Item) AS (
SELECT value FROM STRING_SPLIT(@InList, ';')
)
SELECT *
FROM [Table]
WHERE [Item] IN (SELECT Item FROM InList)
Starting with SQL Server 2016 you can use STRING_SPLIT and do this:
declare @myList nvarchar(MAX)
set @myList = '1,2,3,4'
select * from myTable where myColumn in (select value from STRING_SPLIT(@myList,','))
DECLARE @myList TABLE (Id BIGINT)
INSERT INTO @myList(Id) VALUES (1),(2),(3),(4);
select * from myTable where myColumn in (select Id from @myList)
Please note that for long lists or production systems this is not recommended, as it may be much slower than a simple IN operator like someColumnName in (1,2,3,4) (tested with a list of 8000+ items).
A slight improvement on @LukeH's answer - there is no need to repeat the "INSERT INTO" - and on @realPT's answer - no need to have the SELECT:
DECLARE @MyList TABLE (Value INT)
INSERT INTO @MyList VALUES (1),(2),(3),(4)
SELECT * FROM MyTable
WHERE MyColumn IN (SELECT Value FROM @MyList)
No, there is no such type. But there are some choices:
Dynamically generated queries (sp_executesql)
Temporary tables
Table-type variables (closest thing that there is to a list)
Create an XML string and then convert it to a table with the XML functions (really awkward and roundabout, unless you have XML to start with - see the sketch below)
None of these are really elegant, but that's the best there is.
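For completeness, the XML option from that list could look roughly like this (assuming one <i> element per value):
DECLARE @xmlList XML = '<i>1</i><i>2</i><i>3</i><i>4</i>';
SELECT *
FROM myTable
WHERE myColumn IN (
SELECT x.i.value('.', 'int') -- shred each <i> element back into an int
FROM @xmlList.nodes('/i') AS x(i)
);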
If you want to do this without using a second table, you can do a LIKE comparison with a CAST:
DECLARE @myList varchar(15)
SET @myList = ',1,2,3,4,'
SELECT *
FROM myTable
WHERE @myList LIKE '%,' + CAST(myColumn AS varchar(15)) + ',%'
If the field you're comparing is already a string then you won't need to CAST.
Surrounding both the column match and each unique value in commas will ensure an exact match. Otherwise, a value of 1 would be found in a list containing ',4,2,15,'
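A quick way to see why the surrounding commas matter (the literals below are just for demonstration):
-- Without delimiters, '1' falsely matches the '1' inside '15':
SELECT CASE WHEN ',4,2,15,' LIKE '%' + '1' + '%' THEN 'match' ELSE 'no match' END; -- match (false positive)
-- With commas around the searched value, only whole values match:
SELECT CASE WHEN ',4,2,15,' LIKE '%,' + '1' + ',%' THEN 'match' ELSE 'no match' END; -- no match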
As no one has mentioned it before: starting from SQL Server 2016 you can also use JSON arrays and OPENJSON (Transact-SQL):
declare @filter nvarchar(max) = '[1,2]'
select *
from dbo.Test as t
where
exists (select * from openjson(@filter) as tt where tt.[value] = t.id)
You can test it in this sql fiddle demo.
You can also cover more complicated cases more easily with JSON - see Search list of values and range in SQL using WHERE IN clause with SQL variable?
This one uses PATINDEX to match ids from a table to a non-digit delimited integer list.
-- Given a string #myList containing character delimited integers
-- (supports any non digit delimiter)
DECLARE @myList VARCHAR(MAX) = '1,2,3,4,42'
SELECT * FROM [MyTable]
WHERE
-- When the Id is at the leftmost position
-- (nothing to its left and anything to its right after a non digit char)
PATINDEX(CAST([Id] AS VARCHAR)+'[^0-9]%', @myList)>0
OR
-- When the Id is at the rightmost position
-- (anything to its left before a non digit char and nothing to its right)
PATINDEX('%[^0-9]'+CAST([Id] AS VARCHAR), @myList)>0
OR
-- When the Id is between two delimiters
-- (anything to its left and right after two non digit chars)
PATINDEX('%[^0-9]'+CAST([Id] AS VARCHAR)+'[^0-9]%', @myList)>0
OR
-- When the Id is equal to the list
-- (if there is only one Id in the list)
CAST([Id] AS VARCHAR)=@myList
Notes:
when casting as varchar and not specifying byte size in parentheses the default length is 30
% (wildcard) will match any string of zero or more characters
^ (wildcard) not to match
[^0-9] will match any non digit character
PATINDEX is a T-SQL function that returns the position of a pattern in a string
DECLARE @StatusList varchar(MAX);
SET @StatusList='1,2,3,4';
DECLARE @Status SYS_INTEGERS;
INSERT INTO @Status
SELECT Value
FROM dbo.SYS_SPLITTOINTEGERS_FN(@StatusList, ',');
SELECT Value From @Status;
(SYS_INTEGERS and SYS_SPLITTOINTEGERS_FN here appear to be a user-defined table type and split function.)
Most of these seem to focus on separating-out each INT into its own parenthetical, for example:
(1),(2),(3), and so on...
That isn't always convenient. Especially since, many times, you already start with a comma-separated list, for example:
(1,2,3,...) and so on...
In these situations, you may care to do something more like this:
DECLARE @ListOfIds TABLE (DocumentId INT);
INSERT INTO @ListOfIds
SELECT Id FROM [dbo].[Document] WHERE Id IN (206,235,255,257,267,365)
SELECT * FROM @ListOfIds
I like this method because, more often than not, I am trying to work with IDs that should already exist in a table.
My experience with a commonly proposed technique offered here,
SELECT * FROM Mytable WHERE myColumn IN (select id from @mylist)
is that it induces a major performance degradation if the primary data table (Mytable) includes a very large number of records. Presumably, that is because the IN operator’s list-subquery is re-executed for every record in the data table.
I’m not seeing any offered solution here that provides the same functional result by avoiding the IN operator entirely. The general problem isn’t a need for a parameterized IN operation, it’s a need for a parameterized inclusion constraint. My favored technique for that is to implement it using an (inner) join:
DECLARE @myList varchar(50) /* BEWARE: if too small, no error, just missing data! */
SET @myList = '1,2,3,4'
SELECT *
FROM myTable
JOIN STRING_SPLIT(@myList,',') MyList_Tbl
ON myColumn = MyList_Tbl.Value
It is so much faster because the generation of the constraint-list table (MyList_Tbl) is executed only once for the entire query execution. Typically, for large data sets, this technique executes at least five times faster than the functionally equivalent parameterized IN operator solutions, like those offered here.
I think you'll have to declare a string and then execute that SQL string.
Have a look at sp_executeSQL
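A minimal sketch of that idea, along the lines of the dynamically concatenated option shown earlier (the list itself cannot be passed as a single scalar parameter, so it is concatenated into the statement):
DECLARE @list nvarchar(100) = N'1,2,3,4';
DECLARE @sql nvarchar(max) = N'SELECT * FROM myTable WHERE myColumn IN (' + @list + N')';
EXEC sp_executesql @sql; -- sp_executesql expects an nvarchar statement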