Update multiple columns by loop? - sql

I have a SELECT statement that I want to convert into an UPDATE statement for every column in the table whose name matches the pattern variable[N].
For example, I want to do these things:
I want to be able to convert the SQL below into an UPDATE statement.
I have n columns named variable[N]. The example below only updates column variable63, but I want to dynamically run the update on all columns named variable1 through variableN, without knowing in advance how many variable[N] columns there are. Also, in the example below the updated result goes into NewCol; I actually want to write the result back into the respective variable column (variable63 in my example below).
I want a wrapper that loops over columns variable1 through variableN and performs the same update operation on each of those columns:
SELECT
    projectid
    ,documentid
    ,revisionno
    ,configurationid
    ,variable63
    ,ISNULL(variable63,
        (SELECT TOP 1 variable63
         FROM table1
         WHERE documentid = t.documentid
           AND projectid = t.projectid
           AND configurationid = t.configurationid
           AND CAST(revisionno AS int) < CAST(t.revisionno AS int)
           AND variable63 IS NOT NULL
         ORDER BY
             projectid DESC
             ,documentid DESC
             ,revisionno DESC
             ,configurationid DESC
        )) AS NewCol
FROM table1 t;

There's no general way to loop through columns in SQL; you're supposed to know exactly what you want to modify. In some databases it is possible to query the system tables to dynamically build an update statement (I know how to do that in InterBase and its predecessor Firebird), but you haven't told us which database engine you're using.
Below is a way you could update several fields that are NULL. COALESCE and CASE are two ways of doing the same thing, as are LEFT JOIN and NOT EXISTS; use whichever you and your database engine are most comfortable with. Beware that all records will be updated, so this is not a good solution if your database contains millions of records, each record is large, and you want to execute this query frequently.
UPDATE table1 t
SET t.VARIABLE63 =
        COALESCE(t.VARIABLE63,
            (SELECT t0.VARIABLE63
             FROM table1 t0
             LEFT JOIN table1 tNot
               ON  tNot.documentid = t.documentid
               AND tNot.projectid = t.projectid
               AND tNot.configurationid = t.configurationid
               AND CAST(tNot.revisionno AS int) > CAST(t0.revisionno AS int)
               AND CAST(tNot.revisionno AS int) < CAST(t.revisionno AS int)
               AND tNot.VARIABLE63 IS NOT NULL
             WHERE t0.documentid = t.documentid
               AND t0.projectid = t.projectid
               AND t0.configurationid = t.configurationid
               AND CAST(t0.revisionno AS int) < CAST(t.revisionno AS int)
               AND t0.VARIABLE63 IS NOT NULL
               AND tNot.VARIABLE63 IS NULL)),
    t.VARIABLE64 =
        CASE WHEN t.VARIABLE64 IS NOT NULL THEN t.VARIABLE64
             ELSE (SELECT t0.VARIABLE64
                   FROM table1 t0
                   WHERE t0.documentid = t.documentid
                     AND t0.projectid = t.projectid
                     AND t0.configurationid = t.configurationid
                     AND CAST(t0.revisionno AS int) < CAST(t.revisionno AS int)
                     AND t0.VARIABLE64 IS NOT NULL
                     AND NOT EXISTS (SELECT 1
                                     FROM table1 tNot
                                     WHERE tNot.documentid = t.documentid
                                       AND tNot.projectid = t.projectid
                                       AND tNot.configurationid = t.configurationid
                                       AND CAST(tNot.revisionno AS int) > CAST(t0.revisionno AS int)
                                       AND CAST(tNot.revisionno AS int) < CAST(t.revisionno AS int)
                                       AND tNot.VARIABLE64 IS NOT NULL))
        END

OK, I think I got it: a script that loops through the columns and runs an UPDATE command per column.
DECLARE @sql NVARCHAR(1000),
        @cn  NVARCHAR(1000)

DECLARE col_names CURSOR FOR
    SELECT column_name
    FROM information_schema.columns
    WHERE table_name = 'PIVOT_TABLE'
    ORDER BY ordinal_position

DECLARE @op VARCHAR(max)
SET @op = ''

OPEN col_names
FETCH NEXT FROM col_names INTO @cn

WHILE @@FETCH_STATUS = 0
BEGIN
    IF UPPER(@cn) <> 'DOCUMENTID' AND UPPER(@cn) <> 'CONFIGURATIONID'
       AND UPPER(@cn) <> 'PROJECTID' AND UPPER(@cn) <> 'REVISIONNO'
    BEGIN
        SET @sql = 'UPDATE pt
            SET pt.' + @cn + ' = ((SELECT TOP 1 t.' + @cn + '
                                   FROM pivot_table t
                                   WHERE t.documentid = pt.documentid
                                     AND t.projectid = pt.projectid
                                     AND t.configurationid = pt.configurationid
                                     AND CAST(t.revisionno AS int) < CAST(pt.revisionno AS int)
                                     AND t.' + @cn + ' IS NOT NULL
                                   ORDER BY revisionno DESC))
            FROM PIVOT_TABLE pt
            WHERE pt.' + @cn + ' IS NULL;'
        EXEC sp_executesql @sql
    END
    FETCH NEXT FROM col_names INTO @cn
END

CLOSE col_names
DEALLOCATE col_names;

Related

SQL Loop through tables and columns to find which columns are NOT empty

I created a temp table #test containing 3 fields: ColumnName, TableName, and Id.
I would like to see which rows in the #test table (i.e., which columns in their respective tables) are not empty. That is, for every column name I have in the ColumnName field, and the corresponding table found in the TableName field, I would like to see whether that column is empty or not. I tried some things (see below) but didn't get anywhere. Help, please.
declare @LoopCounter INT = 1, @maxloopcounter int, @test varchar(100),
        @test2 varchar(100), @check int
set @maxloopcounter = (select count(TableName) from #test)

while @LoopCounter <= @maxloopcounter
begin
    DECLARE @PropIDs TABLE (tablename varchar(max), id int)
    Insert into @PropIDs (tablename, id)
        SELECT [TableName], id FROM #test
        where id = @LoopCounter

    set @test2 = (select columnname from #test where id = @LoopCounter)

    declare @sss varchar(max)
    set @sss = (select tablename from @PropIDs where id = @LoopCounter)

    set @check = (select count(@test2)
                  from (select tablename
                        from @PropIDs
                        where id = @LoopCounter) A
                 )

    print @test2
    print @sss
    print @check
    set @LoopCounter = @LoopCounter + 1
end
In order to use variables as column names and table names in your @check query, you will need to use dynamic SQL.
There is most likely a better way to do this, but I can't think of one offhand. Here is what I would do.
Use the SELECT to declare a cursor rather than the WHILE loop you have; that way you don't have to count on sequential ids. The cursor would fetch the fields ColumnName, id and TableName.
Inside the loop, build a dynamic SQL statement:
SET @Sql = 'Select Count(*) Cnt Into #Temp2 From ' + @TableName + ' Where ' + @columnname + ' Is Not Null And ' + @columnname + ' <> '''''
EXEC(@Sql)
Then check #Temp2 for a value greater than 0, and if that is what you are after, you can use the @id that was fetched to update your #test table. Putting the result into a scalar variable rather than a temp table would be preferable, but I can't remember the best way to do that, and using a temp table allows you to use an update join, so it would work well in my opinion.
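For what it's worth, here is a minimal sketch of that scalar-variable idea using sp_executesql with an OUTPUT parameter instead of #Temp2; the table and column names are placeholders, and inside the real loop they would be the values fetched by the cursor:
-- Sketch: count the non-empty values of one column into a scalar variable via
-- sp_executesql with an OUTPUT parameter (no #Temp2 needed).
-- @TableName and @columnname are placeholder values here; in the cursor loop
-- they would come from the fetch against #test.
DECLARE @TableName  sysname = N'SomeTable',
        @columnname sysname = N'SomeColumn',
        @Sql        nvarchar(max),
        @Cnt        int;

SET @Sql = N'SELECT @Cnt = COUNT(*) FROM ' + QUOTENAME(@TableName)
         + N' WHERE ' + QUOTENAME(@columnname) + N' IS NOT NULL AND '
         + QUOTENAME(@columnname) + N' <> ''''';

EXEC sp_executesql @Sql, N'@Cnt int OUTPUT', @Cnt = @Cnt OUTPUT;

IF @Cnt > 0
    PRINT @columnname + N' in ' + @TableName + N' is not empty';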
https://www.mssqltips.com/sqlservertip/1599/sql-server-cursor-example/
http://www.sommarskog.se/dynamic_sql.html
Found a way to extract all non-empty tables from the schema, then just joined with the initial temp table that I had created.
select A.tablename, B.[row_count]
from (select * from #test) A
left join
(SELECT r.table_name, r.row_count, r.[object_id]
FROM sys.tables t
INNER JOIN (
SELECT OBJECT_NAME(s.[object_id]) table_name, SUM(s.row_count) row_count, s.[object_id]
FROM sys.dm_db_partition_stats s
WHERE s.index_id in (0,1)
GROUP BY s.[object_id]
) r on t.[object_id] = r.[object_id]
WHERE r.row_count > 0 ) B
on A.[TableName] = B.[table_name]
WHERE ROW_COUNT > 0
order by b.row_count desc
How about this one: a bitmask computed column that checks for NULLability. The value of the bitmask tells you whether each column is NULL or not, counting in base 2.
CREATE TABLE FindNullComputedMask
(
    ID int,
    val int,
    valstr varchar(3),
    NotEmpty AS
          CASE WHEN ID     IS NULL THEN 0 ELSE 1 END
        | CASE WHEN val    IS NULL THEN 0 ELSE 2 END
        | CASE WHEN valstr IS NULL THEN 0 ELSE 4 END
)

INSERT FindNullComputedMask SELECT 1, 1, NULL
INSERT FindNullComputedMask SELECT NULL, 2, NULL
INSERT FindNullComputedMask SELECT 2, NULL, NULL
INSERT FindNullComputedMask SELECT 3, 3, 3

SELECT *
FROM FindNullComputedMask

SQL Server 2012 trigger: Execute dynamic sql for each row

I have a trigger on a table which keeps track of all the changes (insert, update, delete). When I insert only one row at a time it works fine, but when I try to insert multiple rows at once I receive this error:
Subquery returned more than 1 value. This is not permitted when the
subquery follows =, !=, <, <= , >, >= or when the subquery is used as
an expression.
Here is the code of the trigger (I removed some parts that are not needed, like variable declarations, to shorten the code).
UPDATE: The actual error is in these lines, when #tempTrigT contains more than one row:
SELECT * INTO #tempTrigT
FROM (SELECT * FROM deleted WHERE @Action IN ('U','D')) A
UNION
(SELECT * FROM inserted WHERE @Action = 'I')

SET @sql = 'set @audit_oldvalue = (select cast([' + @Item + '] as NVARCHAR(4000)) from #tempTrigT)';
EXEC sp_executesql @sql, N'@audit_oldvalue sql_variant OUTPUT', @audit_oldvalue OUTPUT -- if inserted, @audit_oldvalue gets the new value

SET @sql = 'set @audit_value = (select cast(i.[' + @Item + '] as NVARCHAR(4000)) from dbo.TForms i inner join #tempTrigT d on i.id = d.id)';
EXEC sp_executesql @sql, N'@audit_value sql_variant OUTPUT', @audit_value OUTPUT
How can I change it to work for multiple rows as well?
You are missing a row identifier to ensure you only handle one row per loop iteration. Something like:
DECLARE @ID int = (SELECT MIN(id) FROM #tempTrigT) to define a row at the start of your loop
WHERE id = @ID to filter to this row throughout the loop
DELETE FROM #tempTrigT WHERE id = @ID at the end of your loop, when that id is done processing
Then again, that may not even work if the id can repeat in #tempTrigT.
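Put together, the loop those three fragments describe would look roughly like this (just a sketch, and it assumes id does not repeat in #tempTrigT):
-- Sketch of the loop described above: process #tempTrigT one id at a time.
-- Assumes id does not repeat in #tempTrigT (see the caveat above).
DECLARE @ID int = (SELECT MIN(id) FROM #tempTrigT);

WHILE @ID IS NOT NULL
BEGIN
    -- the existing per-row audit logic goes here, filtered with: WHERE id = @ID

    DELETE FROM #tempTrigT WHERE id = @ID;        -- this id is done processing
    SET @ID = (SELECT MIN(id) FROM #tempTrigT);   -- move to the next row
END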
And with all that said...
I would definitely consider separating this into multiple triggers and saving yourself the complexity of looping through the deleted or inserted records and handling them all accordingly. I would also consider simplifying your audit process. The end goal is to be able to look back at what records used to be, which you can achieve quite simply:
INSERT INTO [dbo].[AuditTrailTForms] (TForms_Cols, ChangeDate, Change_User, Change_Type)
SELECT T.*, GETDATE(), COALESCE(ModifiedBy,suser_name()), 'Inserted'
FROM inserted i
JOIN TForms T on i.id = T.id
Then you can worry about making it easier to view which column values changed later on when you query these tables:
SELECT *
FROM (SELECT *, GETDATE() AS ChangeDate, 'Current' AS Change_User, 'Current' AS Change_Type
      FROM TForms
      WHERE ID = @AuditID
      UNION ALL
      SELECT *
      FROM AuditTrailTForms
      WHERE ID = @AuditID
      --AND Change_Type =
      --AND Change_User =
     ) T
ORDER BY ChangeDate DESC
Edit: Using an identity column:
You can use an identity column to define a row for each loop like so:
DECLARE @TotalRows int = (SELECT MAX(IdentityColumn) FROM #tempTrigT)
DECLARE @RowID int = 1

WHILE @RowID <= @TotalRows
BEGIN
    --Do stuff
    --For example
    SET @sql = 'set @audit_oldvalue = (SELECT cast([' + @Item + '] as NVARCHAR(4000))
                FROM #tempTrigT T
                WHERE T.IdentityColumn = @RowID)';
    EXEC sp_executesql @sql,
         N'@audit_oldvalue sql_variant OUTPUT, @RowID int',
         @audit_oldvalue OUTPUT, @RowID

    --then increment to the next row when you're done
    SET @RowID = @RowID + 1
END

Delete Duplicate Records with Same Values

I have a T-SQL statement that is taking several hours to run. I'm sure I need to look into the import process to avoid duplicates being inserted, but for the time being I'd just like to remove all but one of the records that share the same values. ParameterValueId is the primary key on the table, but I have many duplicate entries that need to be deleted: I only need one record for each combination of ParameterId, SiteId, MeasurementDateTime, and ParameterValue. Below is my current method for deleting duplicate records. It finds all value combinations that have a count > 1, then finds the first Id with those values and deletes all records with those values that don't match that first Id. Besides the PRINT statements, is there a more efficient way of doing this? Can I do away with the cursor entirely to improve performance?
BEGIN TRANSACTION

SET NOCOUNT ON

DECLARE @BeginningRecordCount INT
SET @BeginningRecordCount =
(
    SELECT COUNT(*)
    FROM ParameterValues
)

DECLARE @ParameterId UNIQUEIDENTIFIER
DECLARE @SiteId UNIQUEIDENTIFIER
DECLARE @MeasurementDateTime DATETIME
DECLARE @ParameterValue FLOAT

DECLARE CDuplicateValues CURSOR FOR
    SELECT
        [ParameterId]
        ,[SiteId]
        ,[MeasurementDateTime]
        ,[ParameterValue]
    FROM [ParameterValues]
    GROUP BY
        [ParameterId]
        ,[SiteId]
        ,[MeasurementDateTime]
        ,[ParameterValue]
    HAVING COUNT(*) > 1

OPEN CDuplicateValues

FETCH NEXT FROM CDuplicateValues INTO
    @ParameterId
    ,@SiteId
    ,@MeasurementDateTime
    ,@ParameterValue

DECLARE @FirstParameterValueId UNIQUEIDENTIFIER
DECLARE @DuplicateRecordsDeleting INT

WHILE @@FETCH_STATUS <> -1
BEGIN
    SET @FirstParameterValueId =
    (
        SELECT TOP 1 ParameterValueId
        FROM ParameterValues
        WHERE
            ParameterId = @ParameterId
            AND SiteId = @SiteId
            AND MeasurementDateTime = @MeasurementDateTime
            AND ParameterValue = @ParameterValue
    )

    SET @DuplicateRecordsDeleting =
    (
        SELECT COUNT(*)
        FROM ParameterValues
        WHERE
            ParameterId = @ParameterId
            AND SiteId = @SiteId
            AND MeasurementDateTime = @MeasurementDateTime
            AND ParameterValue = @ParameterValue
            AND ParameterValueId <> @FirstParameterValueId
    )

    PRINT 'DELETING ' + CAST(@DuplicateRecordsDeleting AS NVARCHAR(50))
        + ' records with values ParameterId : ' + CAST(@ParameterId AS NVARCHAR(50))
        + ', SiteId : ' + CAST(@SiteId AS NVARCHAR(50))
        + ', MeasurementDateTime : ' + CAST(@MeasurementDateTime AS NVARCHAR(50))
        + ', ParameterValue : ' + CAST(@ParameterValue AS NVARCHAR(50))

    DELETE FROM ParameterValues
    WHERE
        ParameterId = @ParameterId
        AND SiteId = @SiteId
        AND MeasurementDateTime = @MeasurementDateTime
        AND ParameterValue = @ParameterValue
        AND ParameterValueId <> @FirstParameterValueId

    FETCH NEXT FROM CDuplicateValues INTO
        @ParameterId
        ,@SiteId
        ,@MeasurementDateTime
        ,@ParameterValue
END

CLOSE CDuplicateValues
DEALLOCATE CDuplicateValues

DECLARE @EndingRecordCount INT
SET @EndingRecordCount =
(
    SELECT COUNT(*)
    FROM ParameterValues
)

PRINT 'Beginning Record Count : ' + CAST(@BeginningRecordCount AS NVARCHAR(50))
PRINT 'Ending Record Count : ' + CAST(@EndingRecordCount AS NVARCHAR(50))
PRINT 'Total Records Deleted : ' + CAST((@BeginningRecordCount - @EndingRecordCount) AS NVARCHAR(50))

SET NOCOUNT OFF

PRINT 'RUN THE COMMIT OR ROLLBACK STATEMENT AFTER VERIFYING DATA.'
--COMMIT
--ROLLBACK
Use the option with a CTE and the OVER clause. The OUTPUT ... INTO clause saves the information from the rows affected by the DELETE statement into the @delParameterValues table variable. Further on in the body of the procedure, you can use this table to print the affected rows.
DECLARE @delParameterValues TABLE
(
    ParameterId UNIQUEIDENTIFIER,
    SiteId UNIQUEIDENTIFIER,
    MeasurementDateTime DATETIME,
    ParameterValue FLOAT,
    DeletedRecordCount int
)

;WITH cte AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY [ParameterId],[SiteId],[MeasurementDateTime],[ParameterValue] ORDER BY 1/0) AS rn,
           COUNT(*) OVER (PARTITION BY [ParameterId],[SiteId],[MeasurementDateTime],[ParameterValue]) AS cnt
    FROM [ParameterValues]
)
DELETE cte
OUTPUT DELETED.[ParameterId],
       DELETED.[SiteId],
       DELETED.[MeasurementDateTime],
       DELETED.[ParameterValue],
       DELETED.cnt
INTO @delParameterValues
WHERE rn != 1

SELECT DISTINCT *
FROM @delParameterValues
Demo on SQLFiddle
You can do it in a single SQL statement:
DELETE p FROM ParameterValues p
LEFT JOIN
(SELECT ParameterId, SiteId, MeasurementDateTime, ParameterValue, MAX(ParameterValueId) AS MAX_PARAM
FROM ParameterValues
GROUP BY ParameterId, SiteId, MeasurementDateTime, ParameterValue
) m
ON m.ParameterId = p.ParameterId
AND m.SiteId = p.SiteId
AND m.MeasurementDateTime = p.MeasurementDateTime
AND m.ParameterValue = p.ParameterValue
AND m.MAX_PARAM = p.ParameterValueId
WHERE m.ParameterId IS NULL
Of course it will not print the output, but you can still print the rows before and after.
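As a small sketch of that, you could capture the row counts around the single statement and print the difference:
-- Sketch: capture row counts before and after the single-statement delete so
-- the number of removed duplicates can still be printed.
DECLARE @Before int, @After int;

SELECT @Before = COUNT(*) FROM ParameterValues;

-- ... the single DELETE statement shown above goes here ...

SELECT @After = COUNT(*) FROM ParameterValues;

PRINT 'Deleted ' + CAST(@Before - @After AS varchar(20)) + ' duplicate rows.';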

Export data from a non-normalized database

I need to export data from a non-normalized database, where there are multiple repeating columns, to a new normalized database.
One example is the Products table, which has 30 boolean columns (ValidSize1, ValidSize2, etc.), and every record has a foreign key pointing to a Sizes table with 30 columns holding the size codes (XS, S, M, etc.). To get the valid sizes for a product I have to scan both tables and take the value SizeCodeX from the Sizes table only if ValidSizeX on the product is true. Something like this:
Products Table
--------------
ProductCode <PK>
Description
SizesTableCode <FK>
ValidSize1
ValidSize2
[...]
ValidSize30
Sizes Table
-----------
SizesTableCode <PK>
SizeCode1
SizeCode2
[...]
SizeCode30
For now I am using a "template" query which I repeat 30 times:
SELECT
Products.Code,
Sizes.SizesTableCode, -- I need this code because different codes can have same size codes
Sizes.Size_1
FROM Products
INNER JOIN Sizes
ON Sizes.SizesTableCode = Products.SizesTableCode
WHERE Sizes.Size_1 IS NOT NULL
AND Products.ValidSize_1 = 1
I am just putting this query inside a loop and I replace the "_1" with the loop index:
SET @counter = 1;
SET @max = 30;
SET @sql = '';
WHILE (@counter <= @max)
BEGIN
    SET @sql = @sql + ('[...]'); -- here goes my query with dynamic indexes
    IF @counter < @max
        SET @sql = @sql + ' UNION ';
    SET @counter = @counter + 1;
END
INSERT INTO DestDb.ProductsSizes EXEC(@sql); -- insert statement
GO
Is there a better, cleaner or faster method to do this? I am using SQL Server and I can only use SQL/TSQL.
You can prepare a dynamic query using the sys.syscolumns table to get all the values in a row:
DECLARE @SqlStmt varchar(MAX)
SET @SqlStmt = ''

SELECT @SqlStmt = @SqlStmt + 'SELECT ''' + name + ''' column , UNION ALL '
FROM sys.syscolumns WITH (READUNCOMMITTED)
WHERE Object_Id('dbo.Products') = Id
  AND ([Name] LIKE 'SizeCode%' OR [Name] LIKE 'ProductCode%')

IF REVERSE(@SqlStmt) LIKE REVERSE('UNION ALL ') + '%'
    SET @SqlStmt = LEFT(@SqlStmt, LEN(@SqlStmt) - LEN('UNION ALL '))

PRINT (@SqlStmt)
Well, it seems that a "clean" (and much faster!) solution is the UNPIVOT function.
I found a very good example here:
http://pratchev.blogspot.it/2009/02/unpivoting-multiple-columns.html
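For reference, here is a minimal sketch of the multiple-column UNPIVOT technique from that post applied to the tables above; only 3 of the 30 size columns are written out, and the REPLACE comparison is the suffix-matching trick that pairs ValidSizeN with SizeCodeN:
-- Sketch: unpivot the ValidSizeN and SizeCodeN columns in one pass and keep
-- only the pairs whose numeric suffixes match and whose valid flag is set.
-- Only 3 of the 30 columns are listed for brevity.
SELECT ProductCode, SizesTableCode, SizeCode
FROM (
    SELECT p.ProductCode, s.SizesTableCode,
           p.ValidSize1, p.ValidSize2, p.ValidSize3,
           s.SizeCode1,  s.SizeCode2,  s.SizeCode3
    FROM Products p
    INNER JOIN Sizes s ON s.SizesTableCode = p.SizesTableCode
) src
UNPIVOT (ValidSize FOR ValidCol IN (ValidSize1, ValidSize2, ValidSize3)) AS v
UNPIVOT (SizeCode  FOR SizeCol  IN (SizeCode1,  SizeCode2,  SizeCode3))  AS c
WHERE ValidSize = 1
  AND REPLACE(ValidCol, 'ValidSize', '') = REPLACE(SizeCol, 'SizeCode', '');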

SQL Query to check if 40 columns in table is null

How do I select the columns in a table that contain only NULL values for all the rows?
Suppose a table has 100 columns, and 60 of those 100 columns contain only null values.
How can I write a WHERE condition to check whether those 60 columns are null?
Maybe with a COALESCE:
SELECT * FROM table WHERE coalesce(col1, col2, col3, ..., colN) IS NULL
where c1 is null and c2 is null ... and c60 is null
A shortcut using string concatenation (Oracle syntax):
where c1||c2||c3 ... c59||c60 is null
First of all, if you have a table with that many nulls and you are on SQL Server 2008, you might want to define the table using sparse columns (http://msdn.microsoft.com/en-us/library/cc280604.aspx).
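As a quick illustration (the table and column names below are made up), a mostly-NULL column can simply be marked SPARSE:
-- Sketch: SPARSE columns use no storage for NULL values, which suits tables
-- where most columns are NULL most of the time. Names are illustrative only.
CREATE TABLE dbo.MostlyNullExample
(
    Id   int IDENTITY(1,1) PRIMARY KEY,
    Col1 varchar(50) SPARSE NULL,
    Col2 int         SPARSE NULL,
    Col3 datetime    SPARSE NULL
);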
Secondly, I am not sure COALESCE solves what the question asks; it seems like Ammu might actually want to find the list of columns that are null for all rows, but I might have misunderstood. Nevertheless, it is an interesting question, so I wrote a procedure to list the null columns for any given table:
IF (OBJECT_ID(N'PrintNullColumns') IS NOT NULL)
    DROP PROC dbo.PrintNullColumns;
go

CREATE PROC dbo.PrintNullColumns(@tablename sysname)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @query nvarchar(max);
    DECLARE @column sysname;

    DECLARE columns_cursor CURSOR FOR
        SELECT c.name
        FROM sys.tables t JOIN sys.columns c ON t.object_id = c.object_id
        WHERE t.name = @tablename AND c.is_nullable = 1;

    OPEN columns_cursor;
    FETCH NEXT FROM columns_cursor INTO @column;

    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        SET @query = N'
            DECLARE @c int
            SELECT @c = COUNT(*) FROM ' + @tablename + ' WHERE ' + @column + N' IS NOT NULL
            IF (@c = 0)
                PRINT (''' + @column + N''');'
        EXEC (@query);

        FETCH NEXT FROM columns_cursor INTO @column;
    END

    CLOSE columns_cursor;
    DEALLOCATE columns_cursor;

    SET NOCOUNT OFF;
    RETURN;
END;
go
If you don't want to write out the column names, you can do something like this.
This will show you all the rows where all of the column values are null, except for the columns you specified (IgnoreThisColumn1 & IgnoreThisColumn2).
DECLARE @query NVARCHAR(MAX);

SELECT @query = ISNULL(@query + ', ', '') + [name]
FROM sys.columns
WHERE object_id = OBJECT_ID('yourTableName')
  AND [name] != 'IgnoreThisColumn1'
  AND [name] != 'IgnoreThisColumn2';

SET @query = N'SELECT * FROM TmpTable WHERE COALESCE(' + @query + ') IS NULL';
EXECUTE(@query)
If you don't want the rows where all of those columns are null, you can simply use IS NOT NULL instead of IS NULL:
SET @query = N'SELECT * FROM TmpTable WHERE COALESCE(' + @query + ') IS NOT NULL';
Are you trying to find out whether a specific set of 60 columns are null, or do you just want to find out whether any 60 out of the 100 columns are null (not necessarily the same 60 for each row)?
If it is the latter, one way to do it in Oracle would be to use the NVL2 function, like so:
select ... where (nvl2(col1,0,1)+nvl2(col2,0,1)+...+nvl2(col100,0,1) > 59)
A quick test of this idea:
select 'dummy' from dual where nvl2('somevalue',0,1) + nvl2(null,0,1) > 1
Returns 0 rows while:
select 'dummy' from dual where nvl2(null,0,1) + nvl2(null,0,1) > 1
Returns 1 row as expected since more than one of the columns are null.
It would help to know which db you are using and perhaps which language or db framework if using one.
This should work on any database, though.
Something like this would probably make a good stored procedure, since there are no input parameters for it.
select count(*) from table where col1 is null or col2 is null ...
Here is another method that seems logical to me as well (using Netezza or T-SQL):
SELECT KeyColumn, MAX(NVL2(TEST_COLUMN, 1, 0)) AS TEST_COLUMN
FROM TABLE1
GROUP BY KeyColumn
So every TEST_COLUMN that has a MAX value of 0 is a column that contains only nulls for the record set. The NVL2 function says: if the column data is not null, return 1; if it is null, return 0.
Taking the MAX of that column reveals whether any of the rows are not null. A value of 1 means there is at least one row that has data; zero (0) means every row is null.
I use the query below when I have to check multiple columns for NULL. I hope this is helpful. If the SUM comes to a value other than zero, then you have NULLs in that column:
SELECT SUM(CASE WHEN col1 IS NULL THEN 1 ELSE 0 END) AS null_col1,
       SUM(CASE WHEN col2 IS NULL THEN 1 ELSE 0 END) AS null_col2,
       SUM(CASE WHEN col3 IS NULL THEN 1 ELSE 0 END) AS null_col3,
       ...
FROM tablename
You can use (Oracle):
select NUM_NULLS , COLUMN_NAME from all_tab_cols where table_name = 'ABC' and COLUMN_NAME in ('PQR','XYZ');