Loop on query with parameters coming from variables - sql

I need to perform a set of queries to extract the distinct values of a group of fields from different tables. So far I have managed to create the cursor that loops as expected, but I am not able to build the query inside the loop with the values assigned as I need.
Please look at the example below:
DECLARE @sourcetablename NVARCHAR(250),
        @targettablename NVARCHAR(250),
        @sourcefieldname NVARCHAR(250);
DECLARE DMTCursor CURSOR FOR
SELECT SOURCETABLE, TARGETTABLE, ENTITYFIELD
FROM DMTSOURCE;
OPEN DMTCursor;
FETCH NEXT FROM DMTCursor INTO
    @sourcetablename,
    @targettablename,
    @sourcefieldname;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @sourcetablename + ' ' + @targettablename + ' ' + @sourcefieldname;
    FETCH NEXT FROM DMTCursor INTO
        @sourcetablename,
        @targettablename,
        @sourcefieldname;
END;
CLOSE DMTCursor;
DEALLOCATE DMTCursor;
In this example I used a PRINT statement to test the code, and it does show the values that I want to use as parameters for the query, but I cannot figure out how to use them in the SELECT. Suppose, for instance, that the first row coming from the cursor retrieves:
@sourcetablename = sourcetable1
@targettablename = targettable1
@sourcefieldname = sourcefield1
I want the loop query (replacing the PRINT line) to be like:
select distinct
'targettable1' AS ENTITY,
'sourcefield1' AS SOURCE_FIELD,
sourcefield1 AS DISTINCTVALUE
from targettable1
Simply substituting the variables into the query does not work: it raises an error saying the table (targettable1) has not been declared as a variable. I need a way to embed the first two values as "fixed" strings in the query (that is why I wrote them between '' in the example above).
Thanks in advance for your help,

Thanks shawnnt00,
Dynamic SQL helped me figure out the SELECT inside the loop:
DECLARE @SQL NVARCHAR(MAX);
SET @SQL = N'
select distinct ''' +
@targettablename + ''' AS ENTITY, ''' +
@sourcefieldname + ''' AS SOURCE_FIELD, ' +
@sourcefieldname + ' AS VALUE
from ' + @targettablename;
EXEC sp_executesql @SQL;
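For readers coming from other stacks, here is a sketch of the same pattern in Python with SQLite, purely for illustration; the table and column names (targettable1, sourcefield1) are the hypothetical ones from the question. The key point carries over: identifiers must be spliced into the SQL text, while the constant labels can be bound as parameters.

```python
import sqlite3

# Hypothetical table mirroring the question's targettable1/sourcefield1.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE targettable1 (sourcefield1 TEXT)")
conn.executemany("INSERT INTO targettable1 VALUES (?)",
                 [("a",), ("b",), ("a",)])

# What the cursor loop would hand us, one row at a time.
pairs = [("targettable1", "sourcefield1")]

results = []
for table, field in pairs:
    # Identifiers (table/column names) cannot be bound as parameters, so
    # they are spliced into the SQL text; the constant labels CAN be bound.
    sql = (f'SELECT DISTINCT ? AS ENTITY, ? AS SOURCE_FIELD, '
           f'"{field}" AS DISTINCTVALUE FROM "{table}"')
    results.extend(conn.execute(sql, (table, field)).fetchall())

print(results)
```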
The point of discussion was not the cursor itself but just the handling of the parameters.
Thanks everybody for the support.


Read columns in SQL tables which are the result of another query

I need to check that all primary-key columns have all their values in uppercase.
So I have a first query which returns the table/field pairs that are part of a PK.
SELECT table_name, field_name FROM dico WHERE pkey > 0;
(dico is some table which gives that information. No need to look it up in the SQL Schema…)
And, for all those pairs tx/fx listed from that first query above, I need to look for values which would not be uppercased.
SELECT DISTINCT 't1', 'f1', f1 FROM t1 WHERE f1 <> UPPER(f1) UNION ALL
SELECT DISTINCT 't2', 'f2', f2 FROM t2 WHERE f2 <> UPPER(f2) UNION ALL
...
SELECT DISTINCT 'tn', 'fn', fn FROM tn WHERE fn <> UPPER(fn);
(I'm putting the table name and field name as "strings" in the output, so that I know from where the wrong values are coming.)
As you see, I do have the code for both requests, but I do not know how to combine them (if possible, in a generic way that would work for both SQL Server and Oracle).
Can you give me some idea on how to finish that?
One way that I could think of is to use a statement block that contains a loop.
Unfortunately, the structure of a statement block will be different for every database system (the one for SQL Server will differ from the one for Oracle).
I wrote an example using SQL Server further below (fiddle link is at: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=85cd786adf32247da1aa73c0341d1b72).
Just in case the dynamic query gets very long (possibly longer than varchar's limit of 8,000 characters), SQL Server has varchar(max), which can hold up to 2 GB (https://learn.microsoft.com/en-us/sql/t-sql/data-types/char-and-varchar-transact-sql?view=sql-server-ver15). It can be used for @DynamicQuery, replacing VARCHAR(3000) in the example below (a modified/alternative fiddle link, just to show that the data type really exists and can be used, is at: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=7fbb5d130aad35e682d8ce7ffaf09ede).
Please note that the example is not using your exact queries because I do not have access to the exact same data as the one you have (e.g. I cannot test the example using dico table because I do not have access to that table).
However, I made the example so that it uses a similar basic structure of logic from your queries, so that later on it can be customised to suit your exact need/scenario (e.g. by changing the table names and field names to match the ones that you use, as well as by adding the WHERE clause as you need).
In the example, your 1st query will be run immediately and the result will be handled by a cursor.
After that, a loop (using WHILE statement/structure) will loop through the cursor for the result of the 1st query to dynamically build the 2nd query (inserting the table names and the field names from the 1st query).
Note that at this point, the 2nd query is still being built, not being run yet.
Eventually, after the loop has finished, the resulting/compiled 2nd query will be run/executed (using the EXEC command).
-- START of test data creation.
create table TableA
( message varchar(200)
);
insert into TableA([message]) values ('abc');
insert into TableA([message]) values ('def');
create table TableB
( message varchar(200)
);
insert into TableB([message]) values ('ghi');
insert into TableB([message]) values ('jkl');
-- END of test data creation.
-- START of dynamic SQL
declare @TableAndFieldDetails CURSOR
declare @TableName VARCHAR(50)
declare @FieldName VARCHAR(50)
declare @DynamicQuery VARCHAR(3000) = ''
begin
    SET @TableAndFieldDetails = CURSOR FOR
    -- START of the 1st query
    SELECT [INFORMATION_SCHEMA].COLUMNS.TABLE_NAME,
           [INFORMATION_SCHEMA].COLUMNS.COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE INFORMATION_SCHEMA.COLUMNS.COLUMN_NAME LIKE '%message%'
    -- END of the 1st query
    -- START of dynamically building the 2nd query
    OPEN @TableAndFieldDetails
    FETCH NEXT FROM @TableAndFieldDetails INTO @TableName, @FieldName
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF @DynamicQuery <> ''
        BEGIN
            SET @DynamicQuery += ' UNION ALL '
        END
        -- The one line right below is each individual part/element of the 2nd query
        SET @DynamicQuery += 'SELECT ''' + @TableName + ''', ''' + @FieldName + ''', ' + @FieldName + ' FROM ' + @TableName
        FETCH NEXT FROM @TableAndFieldDetails INTO @TableName, @FieldName
    END
    CLOSE @TableAndFieldDetails
    DEALLOCATE @TableAndFieldDetails
    -- END of dynamically building the 2nd query
    EXEC (@DynamicQuery)
end
-- END of dynamic SQL
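The same build-then-execute idea can be sketched in Python with SQLite, for illustration only: discover table/column pairs from the catalog (sqlite_master and PRAGMA table_info standing in for INFORMATION_SCHEMA.COLUMNS), build one UNION ALL query, then run it once. The table names come from the example above; the WHERE clause mirrors the "values that are not uppercase" check from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (message TEXT);
INSERT INTO TableA VALUES ('abc'), ('DEF');
CREATE TABLE TableB (message TEXT);
INSERT INTO TableB VALUES ('GHI'), ('jkl');
""")

parts = []
for (table,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"):
    for cid, name, *_ in conn.execute(f'PRAGMA table_info("{table}")'):
        if "message" in name:
            # One element of the 2nd query per table/column pair, keeping
            # only the values that are not fully uppercase.
            parts.append(
                f"SELECT '{table}', '{name}', \"{name}\" "
                f'FROM "{table}" WHERE "{name}" <> UPPER("{name}")')

# The query is only built inside the loop; it runs once, afterwards.
query = " UNION ALL ".join(parts)
rows = conn.execute(query).fetchall()
print(rows)
```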

SQL Server not returning all rows

Here is the procedure that I am using to search for skills from a single column. There are 3 variables that I need to pass to the SP, and I need to get the results accordingly. I do understand that searching for multiple values in a single cell with a delimiter is prone to errors, but this query stopped working after I tried to put the whole thing inside another IF ELSE condition.
ALTER procedure [dbo].[spFilterThisResume]
@Skill varchar(100),
@Exp INT, @Dt date
AS
DECLARE @NoStart INT
IF (@Exp = '')
SET @Exp = NULL
IF (@Dt = '')
SET @NoStart = 1
BEGIN
DECLARE @SkillId varchar(100)
DECLARE MY_CURSOR CURSOR
LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT * FROM dbo.SplitStrings_CTE(@Skill,',')
OPEN MY_CURSOR
FETCH NEXT FROM MY_CURSOR INTO @SkillId
WHILE @@FETCH_STATUS = 0
BEGIN
IF (@NoStart = 1)
BEGIN
SELECT * FROM tblResume
WHERE (Skills LIKE '%,'+(@SkillId)+',%' OR Skills LIKE (@SkillId)+',%' OR Skills LIKE '%,'+(@SkillId) OR Skills = (@SkillId)
AND (Experience LIKE @Exp))
END
ELSE
BEGIN
SELECT * FROM tblResume
WHERE (Skills LIKE '%,'+(@SkillId)+',%' OR Skills LIKE (@SkillId)+',%' OR Skills LIKE '%,'+(@SkillId) OR Skills = (@SkillId)
AND (Experience LIKE @Exp)
AND (CreatedDate LIKE @Dt))
END
END
PRINT @SkillId
FETCH NEXT FROM MY_CURSOR INTO @SkillId
END
CLOSE MY_CURSOR
DEALLOCATE MY_CURSOR
The result I got before I tried to put this into the IF ELSE block was accurate. It returned even a single occurrence of a single skill that I passed as a parameter. But I do not understand which part of the query messed up the whole result set that I got earlier. I unfortunately did not save the working SP to a notepad. So please help me identify what mistake I made.
NOTE: SELECT * FROM dbo.SplitStrings_CTE(@Skill,',') is a function that I am using to split the input CSV into its component arguments.
What is going wrong is that your BEGIN and END blocks are misaligned, and you end up in an infinite loop. If I remove the actual queries, add a comment for each bit to shorten it, and label each BEGIN/END with a number to tie them up, you have:
ALTER procedure [dbo].[spFilterThisResume]
AS
-- DECLARE VARIABLES
BEGIN -- 1 BEGIN 1
    -- DECLARE CURSOR AND OPEN IT
    WHILE @@FETCH_STATUS = 0
    BEGIN -- 2 START WHILE
        IF (@NoStart = 1)
        BEGIN -- 3 START IF
            -- QUERY WITH NO DATE CHECK
        END -- 3 END IF
        ELSE
        BEGIN -- 3 START IF
            -- QUERY WITH DATE CHECK
        END -- 3 END IF
    END -- 2 END WHILE
    PRINT @SkillId
    FETCH NEXT FROM MY_CURSOR INTO @SkillId
END -- 1 END 1
CLOSE MY_CURSOR
DEALLOCATE MY_CURSOR
So your second FETCH NEXT FROM MY_CURSOR INTO @SkillId falls outside of the WHILE loop: you never update @@FETCH_STATUS, it will always be 0 after the first fetch, and you will never exit the loop to reach the second FETCH. To correct it you just need to move the second FETCH inside the loop.
However, this is needless use of a cursor, and as a general rule you should avoid cursors as much as possible. You could just use:
SELECT *
FROM tblResume AS r
WHERE r.Experience LIKE @Exp
AND EXISTS
(   SELECT 1
    FROM dbo.SplitStrings_CTE(@Skill,',') AS s
    WHERE ',' + r.Skills + ',' LIKE '%,' + s.Value + ',%'
);
This uses your original split function still, but in a single query, so imagine, as from your previous question you have rows in Resume:
C,C++
P,H,D
ASP,.net,C,C#,C++,R+
C++
And you pass C++,C#: rather than running the query twice, once for C++ and once for C#, you can run the query once checking for each value. The EXISTS clause essentially expands out to:
SELECT *
FROM tblResume AS r
WHERE r.Experience = @Exp
AND (  ',' + r.Skills + ',' LIKE '%,C++,%'
    OR ',' + r.Skills + ',' LIKE '%,C#,%'
    )
Also, since @Dt is a DATE, the check @Dt = '' is really checking whether @Dt = '1900-01-01', which is unlikely to be the required behaviour. I suspect you want:
IF @Dt IS NULL
BEGIN
    SELECT *
    FROM tblResume AS r
    WHERE r.Experience LIKE @Exp
    AND EXISTS
    (   SELECT 1
        FROM dbo.SplitStrings_CTE(@Skill,',') AS s
        WHERE ',' + r.Skills + ',' LIKE '%,' + s.Value + ',%'
    );
END
ELSE
BEGIN
    SELECT *
    FROM tblResume AS r
    WHERE r.Experience LIKE @Exp
    AND r.CreatedDate = @Dt
    AND EXISTS
    (   SELECT 1
        FROM dbo.SplitStrings_CTE(@Skill,',') AS s
        WHERE ',' + r.Skills + ',' LIKE '%,' + s.Value + ',%'
    );
END
i.e. checking whether the date was not passed, rather than whether it is 1900-01-01.
----
ADDENDUM
With regard to why you need to use the predicate as follows:
WHERE ',' + r.Skills + ',' LIKE '%,' + s.Value + ',%'
Again, using some of your previous example data (and one extra row):
1. C
2. ASP,.net,C,C#,C++,R+
3. C++
So if you were just looking for 'C', you might use:
WHERE r.Skills LIKE '%C%';
But as you noted in your previous question, this would also yield row 3 because C++ is like '%C%', which is not correct, it is therefore necessary to look for %,C,%, i.e. where there is a complete match on the skill:
WHERE r.Skills LIKE '%,C,%';
This would now mean that the first row is not returned, because C on its own is not like '%,C,%'. Therefore we need to put the delimiter at the start and end of r.Skills to ensure that the first and last terms in the list are not excluded from the search.
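The delimiter-wrapping trick is easy to verify outside SQL Server; here is an illustrative sketch in Python with SQLite using the three example rows above (SQLite concatenates with || rather than +):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Resume (Skills TEXT)")
conn.executemany("INSERT INTO Resume VALUES (?)",
                 [("C",), ("ASP,.net,C,C#,C++,R+",), ("C++",)])

# Naive pattern: '%C%' also matches 'C++', so all three rows come back.
naive = conn.execute(
    "SELECT Skills FROM Resume WHERE Skills LIKE '%C%'").fetchall()

# Wrapping both the column and the search term in commas keeps only
# whole-token matches, including terms at the start or end of the list.
exact = conn.execute(
    "SELECT Skills FROM Resume "
    "WHERE ',' || Skills || ',' LIKE '%,' || ? || ',%'",
    ("C",)).fetchall()

print(naive, exact)
```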
If you pass NULL for @Exp you would get no rows returned, since the predicate [Anything] = NULL yields NULL and so will never be true. This is not just the case in my answer; it is the case in your question too.
Your SQL is returning incorrect results because you are using ( and ) to wrap both the AND and the OR conditions together.
You need to structure your query like this.
SELECT *
FROM tblResume
WHERE
(
    Skills LIKE '%,'+(@SkillId)+',%'
    OR Skills LIKE (@SkillId)+',%'
    OR Skills LIKE '%,'+(@SkillId)
    OR Skills = (@SkillId)
)
AND (Experience LIKE @Exp)
Note: As mentioned in the answer by GarethD, you can avoid the use of a cursor in such a case.

Iteratively executing a stored procedure with a set-based approach

I have an issue where I am trying to replace the following code with a different solution. Currently I am using a cursor, but it is running too slowly. I understand that iterative solutions can only be implemented with cursors or WHILE loops, but I am trying to find a set-based approach and am running out of ideas. I was hoping that I could find some inspiration here. Thanks all.
--used to find a unique list of Some_ID
--@Id1, @Id2, @Id3
DECLARE SomeCursor CURSOR FOR
SELECT SOME_ID FROM SomeTable
WHERE ID1=@Id1 AND ID2=@Id2 AND ID3=@Id3
OPEN SomeCursor
FETCH NEXT FROM SomeCursor INTO @SomeID
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @SomeID
    --simply populates a single table with values pulled from
    --other tables in the database based on the given parameters.
    EXEC SP_PART1 @SomeID, @parameters...
    PRINT 'part 2 starting'
    EXEC SP_PART2 @SomeID, @parameters...
    FETCH NEXT FROM SomeCursor INTO @SomeID
    PRINT getdate()
END
CLOSE SomeCursor;
DEALLOCATE SomeCursor;
Your only option for making this set-based is to rewrite the SPs to be set-based (using table-valued parameters instead of individual ones), or to write set-based code in this proc instead of re-using procs designed for single-record use. This is a case where code re-use is usually not appropriate.
I'm not too sure what you want, but why not use your SELECT statement to generate your SQL script and execute it all at once, with something like this:
DECLARE @sql VARCHAR(MAX);
SELECT @sql = COALESCE(@sql,'') + 'EXEC SP_Part1 ' + SOME_ID + '; EXEC SP_Part2 ' + SOME_ID + '; '
FROM SomeTable
WHERE ID1=@Id1 AND ID2=@Id2 AND ID3=@Id3
EXEC (@sql)
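The accumulation trick above builds one long script from many rows. The same idea can be sketched in Python with SQLite using group_concat as the aggregator; the table, column, and procedure names are the hypothetical ones from the answer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SomeTable (SOME_ID TEXT)")
conn.executemany("INSERT INTO SomeTable VALUES (?)", [("101",), ("102",)])

# Aggregate one 'EXEC ...; EXEC ...;' fragment per row into one script,
# mirroring the COALESCE-accumulation pattern in the T-SQL above.
script = conn.execute(
    "SELECT group_concat("
    "'EXEC SP_Part1 ' || SOME_ID || '; EXEC SP_Part2 ' || SOME_ID || ';',"
    " ' ') FROM SomeTable").fetchone()[0]

print(script)
```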

How can I get table names from sys.tables and store the output in a variable?

I have written a stored procedure in which a table name and a database name are collected, using a cursor, from two different tables.
But my problem is that when I run a query to find out whether the table exists in that database, an error is shown.
Now how can I run the query and store the output in a variable?
Declare @table_exist nvarchar(200), @val1 nvarchar(200), @return1 nvarchar(200);
SET @table_exist=
'SELECT 1 FROM '+@db_name+'.sys.tables
where name='+@table_name+'';
EXEC sp_executesql @table_exist, @return1 OUTPUT;
select @return1;
ERROR:
Invalid column name 'table name'
You should use QUOTENAME when building a dynamic query:
SET @table_exist=
'SELECT 1 FROM '+ quotename(@db_name)+'.sys.tables
where name='+quotename(@table_name)+'';
When facing such an error, the best approach is to PRINT @table_exist and see what is actually built.
I haven't looked properly at your query. You are missing apostrophes:
SET @table_exist=
'SELECT 1 FROM '+ quotename(@db_name)+'.sys.tables
where name=''' + @table_name +'''';
UPDATE:
When using an output variable you should set it in the query:
SET @table_exist=
'SELECT @return1 = 1 FROM ' + quotename(@db_name) + '.sys.tables
where name='''+@table_name+'''';
To prevent the result set from being returned to the client, create a temporary table and insert the result set into it. This will leave only one result set, the result of select @return1:
declare @tbl table (exist bit)
insert into @tbl
EXEC sp_executesql @table_exist, N'@return1 nvarchar(200) out', @return1 OUT;
select @return1;
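The underlying principle is that the table name is a *value* in the catalog query, so it can be passed safely rather than concatenated, which is what the doubled apostrophes achieve above. An illustrative sketch in Python with SQLite (sqlite_master playing the role of sys.tables, with a hypothetical Customers table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (id INTEGER)")

def table_exists(conn, table_name):
    # The table name is data here, so it can be a bound parameter;
    # only identifiers spliced into SQL text would need quoting.
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
        (table_name,)).fetchone()
    return row is not None

print(table_exists(conn, "Customers"), table_exists(conn, "Orders"))
```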
It is best to write your query using ' and " carefully, which leads to fewer mistakes while writing the query.
The error in your code is that you are mixing everything up by using only '. It is best to use ' for the variable values only.
Write your code as:
"SELECT 1 FROM ' "+@db_name+" '.sys.tables
where name=' "+@table_name+" ' ";

Running the same SQL code against a number of tables sequentially

I have a number of tables (around 40) containing snapshot data about 40 million plus vehicles. Each snapshot table is at a specific point in time (the end of the quarter) and is identical in terms of structure.
Whilst most of our analysis is against single snapshots, on occasion we need to run some analysis against all the snapshots at once. For instance, we may need to build a new table containing all the Ford Focus cars from every single snapshot.
To achieve this we currently have two options:
a) write a long, long, long batch file repeating the same code over and over again, just changing the FROM clause
[drawbacks - it takes a long time to write and changing a single line of code in one of blocks requires fiddly changes in all the other blocks]
b) use a view to union all the tables together and query that instead
[drawbacks - our tables are stored in separate database instances and cannot be indexed, plus the resulting view is something like 600 million records long by 125 columns wide, so is incredibly slow]
So, what I would like to find out is whether I can either use dynamic SQL or put the SQL into a loop to spool through all the tables. This would be something like:
for each *table* in TableList
INSERT INTO output_table
SELECT *table* as OriginTableName, Make, Model
FROM *table*
next *table* in TableList
Is this possible? This would mean that updating the original SQL when our client changes what they need (a very regular occurrence!) would be very simple and we would benefit from all the indexes we already have on the original tables.
Any pointers, suggestions or help will be much appreciated.
If you can identify your tables (e.g. a naming pattern), you could simply say:
DECLARE @sql NVARCHAR(MAX);
SELECT @sql = N'';
SELECT @sql = @sql + 'INSERT output_table SELECT ''' + name + ''', Make, Model
FROM dbo.' + QUOTENAME(name) + ';'
FROM sys.tables
WHERE name LIKE 'pattern%';
-- or WHERE name IN ('t1', 't2', ... , 't40');
EXEC sp_executesql @sql;
This assumes they're all in the dbo schema. If they're not, the adjustment is easy... just replace dbo with ' + QUOTENAME(SCHEMA_NAME([schema_id])) + '...
In the end I used two methods:
Someone on another forum suggested making use of sp_msforeachtable and a table which contains all the table names. Their suggestion was:
create table dbo.OutputTable (OriginTableName nvarchar(500), RecordCount INT)
create table dbo.TableList (Name nvarchar (500))
insert dbo.TableList
select '[dbo].[swap]'
union select '[dbo].[products]'
union select '[dbo].[structures]'
union select '[dbo].[stagingdata]'
exec sp_msforeachtable @command1 = 'INSERT INTO dbo.OutputTable SELECT ''?'', COUNT(*) from ?'
,@whereand = 'and syso.object_id in (select object_id(Name) from dbo.TableList)'
select * from dbo.OutputTable
This works perfectly well for some queries, but seems to suffer from the fact that one cannot use a GROUP BY clause within the query (or, at least, I could not find a way to do this).
The final solution I used was to use Dynamic SQL with a lookup table containing the table names. In a very simple form, this looks like:
DECLARE @TableName varchar(500)
DECLARE @curTable CURSOR
DECLARE @sql NVARCHAR(1000)
SET @curTable = CURSOR FOR
SELECT [Name] FROM Vehicles_LookupTables.dbo.AllStockTableList
OPEN @curTable
FETCH NEXT
FROM @curTable INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'SELECT ''' + @TableName + ''', Make, sum(1) as Total FROM ' + @TableName + ' GROUP BY Make'
    EXEC sp_executesql @sql
    FETCH NEXT
    FROM @curTable INTO @TableName
END
CLOSE @curTable
DEALLOCATE @curTable
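For illustration, the lookup-table-plus-dynamic-SQL loop can be sketched in Python with SQLite; the snapshot table names and their contents below are made up, but the structure (read the list of table names, build and run one GROUP BY query per table) matches the T-SQL above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Snapshot2020Q1 (Make TEXT, Model TEXT);
INSERT INTO Snapshot2020Q1 VALUES ('Ford','Focus'), ('Ford','Fiesta');
CREATE TABLE Snapshot2020Q2 (Make TEXT, Model TEXT);
INSERT INTO Snapshot2020Q2 VALUES ('Ford','Focus');
CREATE TABLE AllStockTableList (Name TEXT);
INSERT INTO AllStockTableList VALUES ('Snapshot2020Q1'), ('Snapshot2020Q2');
""")

results = []
for (table,) in conn.execute("SELECT Name FROM AllStockTableList"):
    # The table name must be spliced into the statement text, since it
    # cannot be a bound parameter, exactly as in the T-SQL version.
    sql = (f"SELECT '{table}' AS OriginTableName, Make, "
           f'COUNT(*) AS Total FROM "{table}" GROUP BY Make')
    results.extend(conn.execute(sql).fetchall())

print(results)
```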