For loop in Transact-SQL

I have a table that contains some data that needs to be updated. Let's assume that the table structure is
(Code, Col1, Col2, Col3)
I need to run an update on the specific rows whose Code values are (1,3,4,5,9,6,30,25,87,100).
The Col3 value is Col1 + Col2, and the corresponding Col1 values are (1001,1025,400,25,963,13,432,25,87,100).
I created the following SQL Query:
Declare @Col1 float
Declare @Code nvarchar(10)
set @Col1 = 1001
set @Code = '1'
update MyTable set
Col1 = @Col1,
Col3 = @Col1 + Col2
where Code = @Code
So, instead of copying all this code after the Declare lines and manually assigning values, is it possible to create two arrays (one for the Col1 values, one for the Code values) and iterate through the Codes, updating each row dynamically?

Iterating over rows of data with loops or cursors is usually considered bad practice in SQL, since in most cases it is much slower than a set-based approach.
In your particular case there is no need to iterate over any "arrays" to perform the desired update.
Instead you can create a temporary table like this:
create table #temp_table (Col1 float, Code nvarchar(10))
fill it with your data like:
insert into #temp_table (Col1, Code)
select 1001, '1'
union all
select 1025, '3'
... and so on
and then perform your update:
update T set
Col1 = T1.Col1,
Col3 = T1.Col1 + T.Col2
from MyTable as T
inner join #temp_table as T1 on T.Code = T1.Code
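This kind of join-style update can be sketched and run with Python's built-in sqlite3 module (table names mirror the example; a correlated subquery stands in for the T-SQL join-update syntax so the sketch works on any SQLite version):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MyTable (Code TEXT, Col1 REAL, Col2 REAL, Col3 REAL);
INSERT INTO MyTable VALUES ('1', 0, 10, 0), ('3', 0, 20, 0), ('8', 0, 30, 0);

-- mapping table playing the role of #temp_table
CREATE TABLE temp_table (Col1 REAL, Code TEXT);
INSERT INTO temp_table VALUES (1001, '1'), (1025, '3');

-- one set-based update instead of a loop: each matched row pulls its new
-- Col1 from the mapping table and recomputes Col3 = new Col1 + Col2
UPDATE MyTable SET
    Col1 = (SELECT t.Col1 FROM temp_table t WHERE t.Code = MyTable.Code),
    Col3 = (SELECT t.Col1 FROM temp_table t WHERE t.Code = MyTable.Code) + Col2
WHERE Code IN (SELECT Code FROM temp_table);
""")

print(con.execute("SELECT Code, Col1, Col3 FROM MyTable ORDER BY Code").fetchall())
```

Rows whose Code is not in the mapping table (here Code '8') are left untouched by the WHERE clause.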

You don't need a loop for this; you can just create a CTE (or temp table) with the values you want to update, and JOIN to it in an UPDATE statement:
;With ToUpdate (Code, Col1) As
(
Select 1, 1001 Union All
Select 3, 1025 Union All
Select 4, 400 Union All
Select 5, 25 Union All
Select 9, 963 Union All
Select 6, 13 Union All
Select 30, 432 Union All
Select 25, 25 Union All
Select 87, 87 Union All
Select 100, 100
)
Update T
Set Col1 = U.Col1,
Col3 = U.Col1 + Col2
From MyTable T
Join ToUpdate U On U.Code = T.Code


SSIS Metadata discovery only supports temp tables when analyzing a single-statement batch

I need to use a temp table and a CTE in a SQL task in my SSIS package, but I get the following error:
"Metadata discovery only supports temp tables when analyzing a single-statement batch"
I tried to follow "SSIS Package not wanting to fetch metadata of temporary table", but I don't know how to apply it in my case.
The query that I'm using is
--Creating multiple temp tables to improve performance
SELECT *
INTO #TEMP_1
FROM MYTABLE
SELECT *
INTO #TEMP_2
FROM MYTABLE
SELECT *
INTO #TEMP_3
FROM MYTABLE
--create a final result set
;WITH CTE AS (
SELECT * FROM #TEMP_1
UNION
SELECT * FROM #TEMP_2
UNION
SELECT * FROM #TEMP_3
)
--data I eventually need
SELECT col1,col2,col3
FROM CTE
WHERE some_condition
All of this works fine in SSMS but fails when loaded into SSIS.
Any suggestions?
If you're working with temporary tables and you're on SQL Server 2012+, one option is to use EXECUTE with the WITH RESULT SETS clause.
Given your supplied query, I wrapped it in an EXECUTE and specified the column names and types.
EXECUTE(N'
SELECT 1 AS col1, NULL AS col2, NULL AS col3, ''A'' AS col4 INTO #TEMP_1
SELECT NULL AS col1, 2 AS col2, NULL AS col3, ''B'' AS col4 INTO #TEMP_2
SELECT NULL AS col1, NULL AS col2, 3 AS col3, ''John'' AS col4 INTO #TEMP_3
--Creating multiple temp tables to improve performance
--create a final result set
;WITH CTE AS (
SELECT * FROM #TEMP_1
UNION
SELECT * FROM #TEMP_2
UNION
SELECT * FROM #TEMP_3
)
--data I eventually need
SELECT col1,col2,col3
FROM CTE
WHERE 1=1 AND col4=''John''
') WITH RESULT SETS
(
(Column1 int, Column2 int, Column3 int)
);
The OLE DB Source component was able to interpret that metadata just fine and out comes my data.
This solution works because it is dynamic SQL: I build one big string, pass it to the EXEC call, and specify the RESULT SETS clause, which allows SSIS to properly infer the metadata. If, as I suspect, you need to fool around with dynamic filters and such, you can do the fancy string building early on and keep the EXEC call small.
-- Do whatever needs to be done to properly define your queries
-- add filtering, etc
-- Before you EXEC, I find printing the resulting SQL handy so I can manually run it through a parser looking for mistakes
DECLARE @DynamicSQL nvarchar(max) = N'
SELECT 1 AS col1, NULL AS col2, NULL AS col3, ''A'' AS col4 INTO #TEMP_1
SELECT NULL AS col1, 2 AS col2, NULL AS col3, ''B'' AS col4 INTO #TEMP_2
SELECT NULL AS col1, NULL AS col2, 3 AS col3, ''John'' AS col4 INTO #TEMP_3
--Creating multiple temp tables to improve performance
--create a final result set
;WITH CTE AS (
SELECT * FROM #TEMP_1
UNION
SELECT * FROM #TEMP_2
UNION
SELECT * FROM #TEMP_3
)
--data I eventually need
SELECT col1,col2,col3
FROM CTE
WHERE 1=1 AND col4=''John''
';
EXECUTE(@DynamicSQL) WITH RESULT SETS
(
(Column1 int, Column2 int, Column3 int)
);
To embed a quote within the already single-quoted statements, you need to double it: use two single quotes, not a double quote.
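The doubling rule is easy to check mechanically. A small Python sketch (the helper name is made up) of how a value would be embedded into such a dynamic SQL string:

```python
def embed_literal(value: str) -> str:
    """Escape a value for inclusion in a single-quoted SQL string literal
    by doubling each embedded single quote (the T-SQL convention)."""
    return "'" + value.replace("'", "''") + "'"

# building a filter value for a dynamic batch
print(embed_literal("John"))     # 'John'
print(embed_literal("O'Brien"))  # 'O''Brien'
```

Inside the outer EXECUTE(N'...') literal each of those quotes is doubled once more, which is why ''John'' appears with two quotes on each side in the batch above.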

How to SELECT from either Table A or Table B based on a Value without using If statement

Example:
Declare @Division int = 1
IF @Division = 1
BEGIN
SELECT * FROM A
END
ELSE IF @Division = 2
BEGIN
SELECT * FROM B
END
The thing is, I don't want to use if statements, as there are hundreds of Divisions to consider.
This is part of a larger query where it's joined onto other tables.
How do I do this?
If the tables have exactly the same columns, you could do a UNION ALL:
declare @Division int = 1
SELECT * FROM A WHERE @Division = 1
UNION ALL
SELECT * FROM B WHERE @Division = 2
-- etc
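A quick way to convince yourself that only one branch of such a UNION ALL produces rows (a Python/SQLite sketch with made-up tables A and B):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (id INTEGER, name TEXT);
CREATE TABLE B (id INTEGER, name TEXT);
INSERT INTO A VALUES (1, 'alpha');
INSERT INTO B VALUES (2, 'beta');
""")

def pick_division(division):
    # both branches are in the statement, but the WHERE on the constant
    # parameter lets only the matching branch return rows
    return con.execute("""
        SELECT * FROM A WHERE :d = 1
        UNION ALL
        SELECT * FROM B WHERE :d = 2
    """, {"d": division}).fetchall()

print(pick_division(1))  # [(1, 'alpha')]
print(pick_division(2))  # [(2, 'beta')]
```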
Or if they don't have exactly the same columns, but it's possible to gather all the interesting columns from each, then a more extensive version might work:
SELECT COL1 as ID, COL2 as VALUE, COL3 as DESCR FROM A WHERE @Division = 1
UNION ALL
SELECT COL6 as ID, COL7 as VALUE, NULL as DESCR FROM B WHERE @Division = 2
-- etc
I'm not sure, however, how efficient all this will be compared to your initial version that uses the IF.
You can also put all of it in a CTE for easy joining with other tables:
;WITH DivisionCTE (ID, VALUE, DESCR) AS
(
SELECT COL1 as ID, COL2 as VALUE, COL3 as DESCR FROM A WHERE @Division = 1
UNION ALL
SELECT COL6 as ID, COL7 as VALUE, NULL as DESCR FROM B WHERE @Division = 2
-- etc
)
SELECT *
FROM DivisionCTE x
INNER JOIN OtherTable y ON x.ID = y.ID
-- etc
Simply create a view over all the required tables with a UNION:
create view DivisionWiseData
as
select *, 1 as Division from tabA
union all
select *, 2 as Division from tabB
-- and so on...
After that, simply use:
select * from dbo.DivisionWiseData where Division = @Division

Concatenate the row values after join [duplicate]

This question already has answers here:
How to concatenate text from multiple rows into a single text string in SQL Server
(47 answers)
Closed 8 years ago.
How do I concatenate the row values after joining Table1 and Table2?
Table 1:
-----
Col1
------
1
2
Table 2:
-----------------
Col1 Col2
-----------------
1 A
1 B
1 C
2 D
2 E
2 F
Desired Result:
-----------------
Col1 Col2
-----------------
1 A,B,C
2 D,E,F
Try this:
create table #table1(
col1 int
)
create table #table2(
col1 int,
col2 char(1)
)
insert into #table1
select 1 union all
select 2
insert into #table2
select 1, 'A' union all
select 1, 'B' union all
select 1, 'C' union all
select 2, 'D' union all
select 2, 'E' union all
select 2, 'F'
select
col1,
col2 =
stuff((
select
',' + t2.col2
from #table2 t2
where
t2.col1 = t1.col1
order by t2.col2
for xml path(''), type).value('.', 'varchar(max)'
), 1, 1, '')
from #table1 t1
drop table #table1
drop table #table2
MySQL:
SELECT table1.col1, group_concat(table2.col2) FROM
table2 JOIN table1 ON table1.col1 = table2.col1
GROUP BY table1.col1
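SQLite happens to ship the same aggregate under the same name, so the approach can be sketched and run with Python's built-in sqlite3 (note that without an explicit ordering trick the concatenation order is not guaranteed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (col1 INTEGER);
CREATE TABLE table2 (col1 INTEGER, col2 TEXT);
INSERT INTO table1 VALUES (1), (2);
INSERT INTO table2 VALUES (1,'A'),(1,'B'),(1,'C'),(2,'D'),(2,'E'),(2,'F');
""")

# one concatenated string per group; default separator is ','
rows = con.execute("""
    SELECT table1.col1, group_concat(table2.col2) AS col2
    FROM table1 JOIN table2 ON table1.col1 = table2.col1
    GROUP BY table1.col1
    ORDER BY table1.col1
""").fetchall()
print(rows)
```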
You can use a cursor, as in the following code (checked for syntax only):
create table #Desired_Result (col1 int, col2 varchar(20))
DECLARE @d int
declare @l varchar(20)
declare @str1 varchar(30) = ''
declare @str2 varchar(30) = ''
DECLARE cur cursor FAST_FORWARD READ_ONLY
FOR
SELECT col1, col2 FROM Table2
OPEN cur
FETCH NEXT FROM cur INTO @d, @l
WHILE @@FETCH_STATUS = 0
BEGIN
if @d = 1
set @str1 = @str1 + @l + ','
else if @d = 2
set @str2 = @str2 + @l + ','
FETCH NEXT FROM cur INTO @d, @l
END
CLOSE cur
DEALLOCATE cur
set @str1 = substring(@str1, 1, len(@str1) - 1)
set @str2 = substring(@str2, 1, len(@str2) - 1)
insert into #Desired_Result (col1, col2) values (1, @str1)
insert into #Desired_Result (col1, col2) values (2, @str2)
select * from #Desired_Result
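The cursor is essentially an accumulation loop; the same shape written in Python (illustrative only) makes the intent easier to follow:

```python
# rows as the cursor would fetch them: (col1, col2) pairs
rows = [(1, 'A'), (1, 'B'), (1, 'C'), (2, 'D'), (2, 'E'), (2, 'F')]

acc = {}
for d, l in rows:               # one FETCH NEXT per iteration
    acc.setdefault(d, []).append(l)

# join each group with commas, mirroring the string building above
desired_result = [(k, ','.join(v)) for k, v in sorted(acc.items())]
print(desired_result)  # [(1, 'A,B,C'), (2, 'D,E,F')]
```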

Execute queries until non-empty result

I have three queries, looking like these:
SELECT * FROM Table1 WHERE Column1 = 'a'
SELECT * FROM Table2 WHERE Column2 = 'b'
SELECT * FROM Table1 A, Table2 B WHERE A.Column1 <> B.Column1
Now all the logic is implemented on the client side as follows: execute the first query; if HasRows, set a flag to 1 and return the rows. Otherwise execute the second query; if HasRows, set the flag to 2 and return the rows. Otherwise execute the third query, set the flag to 3, and return the rows.
How can I do this with a single query? The flag part, I guess, can be solved by adding Flag to the queries:
SELECT Flag = 1, * FROM Table1 WHERE Column1 = 'a'
SELECT Flag = 2, * FROM Table2 WHERE Column2 = 'b'
SELECT Flag = 3, * FROM Table1 A, Table2 B WHERE A.Column1 <> B.Column1
But now what? How do I check whether a query returns a non-empty result?
Also, I'd like to cache the results; in other words, to avoid executing the same query twice, once for checking and a second time for returning the data.
You could use a table variable to store the result and only return it at the end of the SQL block. Checking @@rowcount tells you whether the previous insert added any rows; if it's zero, you can run the next query:
declare @result table (flag int, col1 int, col2 varchar(50))
insert @result select 1, col1, col2 from Table1 where Column1 = 'a'
if @@rowcount = 0
begin
insert @result select 2, col1, col2 from Table2 where Column2 = 'b'
end
if @@rowcount = 0
begin
insert @result select 3, col1, col2 from Table1 A, Table2 B
where A.Column1 <> B.Column1
end
select * from @result
This approach only works if each select has the same column definition.
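For comparison, the original client-side fallback is a short loop; here is a Python/SQLite sketch with stand-in tables and data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (Column1 TEXT, Column2 TEXT);
CREATE TABLE Table2 (Column1 TEXT, Column2 TEXT);
INSERT INTO Table2 VALUES ('x', 'b');
""")

queries = [
    (1, "SELECT * FROM Table1 WHERE Column1 = 'a'"),
    (2, "SELECT * FROM Table2 WHERE Column2 = 'b'"),
    (3, "SELECT * FROM Table1 A, Table2 B WHERE A.Column1 <> B.Column1"),
]

flag, rows = None, []
for f, q in queries:
    rows = con.execute(q).fetchall()  # each query runs at most once
    if rows:
        flag = f
        break

print(flag, rows)  # here the second query is the first non-empty one
```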

Tidiest way to filter out rows where all columns = a value

I have a query with loads of columns. I want to select rows where not all the columns are equal to 0.
select * from table
where
not
( column1 = 0 and
column2 = 0 and
column3 = 0 and
...
column45 = 0)
Is this really the tidiest way to do it?
Suppose I then need to change it to ignore rows where all columns are 1, or negative; that's a lot of cut and paste.
It appears as though the 45 individual columns have a similar meaning. As such, I would encourage you to properly normalize this table. If you did, the query would be simpler and would likely perform better.
You could parameterize the query and put it in a stored procedure or table-valued function. You'd only need to write the query a fixed number of times (once per operation type) regardless of the value(s) you choose.
create function dbo.fn_notequal_columns
(
@value int
)
returns table
as
return
(
select * from [table]
where not (column1 = @value and column2 = @value ...)
)
select * from dbo.fn_notequal_columns(0)
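The parameterized-filter idea can be sketched and run with Python's built-in sqlite3 (stand-in table with three columns; the predicate keeps rows where not every column equals the given value, as the question asks):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (col1 INTEGER, col2 INTEGER, col3 INTEGER);
INSERT INTO t VALUES (-45, 0, 45), (0, 23, 0), (0, 0, 0), (1, 0, 0);
""")

def rows_not_all_equal(value):
    # keep rows where NOT every column equals `value`
    return con.execute(
        "SELECT * FROM t WHERE NOT (col1 = :v AND col2 = :v AND col3 = :v)",
        {"v": value},
    ).fetchall()

print(rows_not_all_equal(0))  # drops only the all-zero row
```

Changing the target value (all 1s, say) is then a matter of passing a different argument rather than editing 45 comparisons.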
You could use CHECKSUM. However, I don't know the internals of CHECKSUM, so I can't guarantee it would work over large datasets.
CREATE TABLE dbo.FooBar (
keyCol int NOT NULL IDENTITY (1, 1),
col1 int NOT NULL,
col2 int NOT NULL,
col3 int NOT NULL
)
INSERT FooBar (col1, col2, col3)
SELECT -45, 0, 45
UNION ALL
SELECT 0, 23, 0
UNION ALL
SELECT 0, 0, 0
UNION ALL
SELECT 1, 0, 0
SELECT
CHECKSUM(col1, col2, col3)
FROM
dbo.FooBar
SELECT
*
FROM
dbo.FooBar
WHERE
CHECKSUM(col1, col2, col3) = 0
(1) You have the wrong connective in the condition: you need OR, not AND.
(With the question since amended, the observation above no longer applies.)
(2) If you have 45 columns that you need to filter on, you are going to be hard pressed to do any better than what you have written. Pain though it be...
This observation remains true.
You could add a computed column that does the calculation for you. It is not technically any tidier, except that now when you use it in any query you only have to check the computed column as opposed to repeating the calculation.
CREATE TABLE dbo.foo
(
col1 INT,
col2 INT,
col3 INT,
all_0 AS
(
CONVERT(BIT, CASE
WHEN col1 = 0 AND col2 = 0 AND col3 = 0
THEN 1 ELSE 0
END)
)
);
If your numbers are constrained in some way to be >= 0, you could do something slightly tidier, such as:
WHERE col1 + col2 + col3 = 0 -- or 45, if there are 45 such columns
-- and you are looking for each column = 1
you could create a view of a normalized structure and use that as your source for this query:
SELECT all other fields, 'Column1', COL1 FROM tableName
UNION
SELECT all other fields, 'Column2', COL2 FROM tableName
UNION ...
SELECT all other fields, 'Column45', COL45 FROM tableName