Doubt in Query - SQL Server 2005 - sql-server-2005

I have a table with 100 columns, and around 50 to 60 of those columns contain NULL values. I need to replace the NULLs with 0 in all of those columns. I tried an UPDATE query like this:
UPDATE [tableName]
SET col1=0, col2 = 0, ... col60 = 0
WHERE col1 IS NULL AND Col2 IS NULL ... Col60 IS NULL
Is there any other query that can update all 60 columns without listing each of them, or is there another approach?

You have to specify all columns, but you can skip the WHERE clause and have one update deal with them all at once:
UPDATE [tableName] SET
col1=COALESCE(col1, 0),
col2=COALESCE(col2, 0),
col3=COALESCE(col3, 0),
col4=COALESCE(col4, 0),
[...]

You could try this workaround if every value in the columns is NULL:
Edit the table definition and set the columns as computed columns, using 0 as the formula
Save the table
Remove the formula
It is not very elegant, but it works

I don't think there's an alternative - but the query you posted will only update records where all the columns are null.
If you want to update individual columns, you need to break it up into individual updates:
update table
set col1 = 0
where col1 is null
update table
set col2 = 0
where col2 is null

To avoid writing this query by hand, you can generate it using dynamic SQL:
DECLARE @Table NVARCHAR(255) = 'Your table'
DECLARE @sSQl NVARCHAR(MAX) = 'UPDATE ' + @Table + ' SET ' + CHAR(13);
WITH c AS ( SELECT c.name
            FROM sys.all_columns c
            JOIN sys.tables t ON c.object_id = t.object_id
            WHERE t.name = @Table
          )
SELECT @sSQl = @sSQl + c.name + '=ISNULL(' + c.name + ',0)' + ','
               + CHAR(13)
FROM c
IF LEN(@sSQl) > 0
    SET @sSQl = LEFT(@sSQl, LEN(@sSQl) - 2)
PRINT @sSQl
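If you want to run the generated statement rather than just print it, you can execute it afterwards, for example:
EXEC sp_executesql @sSQl;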

Related

ms sql server how to check table has “id” column and count rows if "id" exist

There are a lot of tables in my SQL Server db. Most of them have an 'id' column, but some do not. I want to know which tables don't have the 'id' column, and, where an 'id' column exists, to count the rows where id is null. The query results may look like this:
TABLE_NAME | HAS_ID | ID_NULL_COUNT | ID_NOT_NULL_COUNT
table1 | false | 0 | 0
table2 | true | 10 | 100
How do I write this query?
Building query:
WITH cte AS (
SELECT t.*, has_id = CASE WHEN COLUMN_NAME = 'ID' THEN 'true' ELSE 'false' END
FROM INFORMATION_SCHEMA.TABLES t
OUTER APPLY (SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS c
WHERE t.TABLE_NAME = c.TABLE_NAME
AND t.[TABLE_SCHEMA] = c.[TABLE_SCHEMA]
AND c.COLUMN_NAME = 'id') s
WHERE t.TABLE_SCHEMA IN (...)
)
SELECT
query_to_run = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
'SELECT tab_name = ''<tab_name>'',
has_id = ''<has_id>'',
id_null_count = <id_null_count>,
id_not_null_count = <id_not_null_count>
FROM <schema_name>.<tab_name>'
,'<tab_name>', TABLE_NAME)
,'<schema_name>', TABLE_SCHEMA)
,'<has_id>', has_id)
,'<id_null_count>', CASE WHEN has_id = 'false' THEN '0' ELSE 'SUM(CASE WHEN id IS NULL THEN 1 END)' END)
,'<id_not_null_count>', CASE WHEN has_id = 'false' THEN '0' ELSE 'COUNT(id)' END)
FROM cte;
Copy the output and execute it in a separate window. UNION ALL could be added to get a single result set.
db<>fiddle demo
This might be useful for you... lists out the row count for all tables that have an "id" column. It filters out tables that start with "sys" because those are mostly internal tables. If you have a table that starts with "sys", you'll probably want to delete that part of the WHERE clause.
SELECT DISTINCT OBJECT_NAME(r.[object_id]) AS [TableName], [row_count] AS [RowCount]
FROM sys.dm_db_partition_stats r
WHERE index_id = 1
AND EXISTS (SELECT 1 FROM sys.columns c WHERE c.[object_id] = r.[object_id] AND c.[name] = N'id')
AND OBJECT_NAME(r.[object_id]) NOT LIKE 'sys%'
ORDER BY [TableName]
Note you can change "c.[name] = N'id'" to any column name. To find only tables without an id column, change EXISTS to NOT EXISTS (simply flipping "=" to "<>" would instead match any table that has at least one column not named id), as sketched below.
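For the opposite direction, a sketch that lists the tables which do not have an id column at all:
SELECT t.[name] AS [TableName]
FROM sys.tables t
WHERE NOT EXISTS (SELECT 1 FROM sys.columns c
                  WHERE c.[object_id] = t.[object_id] AND c.[name] = N'id')
AND t.[name] NOT LIKE 'sys%'
ORDER BY [TableName]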
pmbAustin's answer shows how to list all tables without an "ID" column.
To know how many rows in each table, SQL Server has a built-in report for you.
Right click the database in SSMS, click "Reports", "Standard Reports" then "Disk Usage by Table"
You now know how many rows are in each table, and from pmbAustin's answer you know which tables do and do not have "ID" columns. With a simple VLOOKUP in Excel you can combine these two datasets to arrive at whatever answer you need.
This will give you the info about which tables have or not have column named "ID":
SELECT Table_Name
, case when column_name not like '%ID%' then 'false'
else 'true'
end as HAS_ID
FROM INFORMATION_SCHEMA.COLUMNS;
Here is a small demo
And here is one way to select all the tables that have a column named ID and check whether those columns are null or not:
CREATE TABLE #AllIDSNullable (TABLE_NAME NVARCHAR(256) NOT NULL
    , HAS_ID VARCHAR(10)
    , ID_NULL_COUNT INT DEFAULT 0
    , ID_NOT_NULL_COUNT INT DEFAULT 0);
DECLARE CT CURSOR FOR
    SELECT Table_Name
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE column_name = 'ID';
DECLARE @name NVARCHAR(MAX), @SQL NVARCHAR(MAX);
OPEN CT; FETCH NEXT FROM CT INTO @name;
WHILE @@FETCH_STATUS=0 BEGIN
    SET @SQL = 'INSERT #AllIDSNullable (TABLE_NAME , HAS_ID) SELECT Table_Name, case when column_name not like ''%ID%'' then ''false'' else ''true'' end FROM INFORMATION_SCHEMA.COLUMNS;';
    EXEC (@SQL);
    SET @SQL = 'UPDATE #AllIDSNullable SET ID_NULL_COUNT = (SELECT COUNT(*) FROM ['+@name+'] WHERE ID IS NULL), ID_NOT_NULL_COUNT = (SELECT COUNT(*) FROM ['+@name+'] WHERE ID IS NOT NULL) WHERE TABLE_NAME='''+@name+''';';
    EXEC (@SQL);
    FETCH NEXT FROM CT INTO @name;
END;
CLOSE CT;
SELECT *
FROM #AllIDSNullable;
Here is a demo

SQL Loop through 8 million record and update them

I have an audit table with about 8 million records. I recently added two new columns which I need to populate from existing columns with some rules/conditions. Originally, whenever an FK was updated in a table, the audit table stored the old and new FK ids. For example:
Table A
ID Name
1 First A
2 Second A
3 Third A
Table B
ID AID Name
1 1 First B
2 1 Second B
3 2 Third B
Audit
ID TableName FieldName OldValue NewValue
Now if I update the first record of Table B from "1 1 First B" to "1 3 First B", the audit table will store the change as:
Audit
ID TableName FieldName OldValue NewValue
1 Table B AID 1 3
Now I have updated the Audit table to store the actual text value of the FK, i.e. the above change will be stored as:
Audit
ID TableName FieldName OldValue NewValue OldText NewText
1 Table B AID 1 3 First A Third A
The problem is that I already have about 8 million records for which I need to populate the new columns. I have written the query below to do that:
declare @sql nvarchar(max);
declare @start int = 1
while @start <= 8000000
begin
    select top 10000 @sql = COALESCE(@sql+'Update Audit set ','Update Audit set') +
        isnull(' OldText = ('+ dbo.GetFKText(i.TableName, i.FieldName)+case when len(isnull(i.OldValue,'')) < 1 then null else i.OldValue end +'),',' OldText = OldValue, ') +
        isnull(' NewText = ('+ dbo.GetFKText(i.TableName, i.FieldName)+case when len(isnull(i.NewValue,'')) < 1 then null else i.NewValue end +')',' NewText = NewValue ') +
        ' where AuditID = '+cast(i.AuditID as nvarchar(200))+' and lower(ltrim(rtrim(TableName))) <> ''audit'';'
    from Audit i where i.AuditID >= @start
    exec sp_executesql @sql
    set @start = @start+10000;
end
The GetFKText function (basically I am getting the column whose name is (TableName)+'Name' or (TableName)+(SomeText)+'Name'; this is just a convention I have followed in all the tables):
declare @res nvarchar(max)='';
declare @fn nvarchar(200);
declare @ttn nvarchar(200);
declare @tcn nvarchar(200);
SELECT top 1
    @ttn = kcu.table_name
    ,@tcn = kcu.column_name
FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu
INNER JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc
    ON ccu.CONSTRAINT_NAME = rc.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
    ON kcu.CONSTRAINT_NAME = rc.UNIQUE_CONSTRAINT_NAME
WHERE ccu.TABLE_NAME = @TableName and ccu.COLUMN_NAME = @FieldName
if isnull(@ttn,'') != '' and ISNULL(@tcn,'') != ''
begin
    select @fn = COLUMN_NAME
    from (SELECT top 1 COLUMN_NAME ,
                 case when COLUMN_NAME like (@ttn+'Name') then 0
                      when COLUMN_NAME like (@ttn+'%Name') then 1
                      when COLUMN_NAME like (@ttn+'Code') then 2
                      when COLUMN_NAME like (@ttn+'%Code') then 3 else 4 end as CPriority
          FROM JVO.INFORMATION_SCHEMA.COLUMNS
          WHERE TABLE_NAME = @ttn and (COLUMN_NAME like '%Name' or COLUMN_NAME like '%Code')
          order by CPriority) as aa;
    RETURN 'select '+@fn+' from '+@ttn+' where '+@tcn+' = ';
end
return null;
It's working but really slow; it updates about 1 million records in 13 hours. Can anyone help improve this query or suggest an alternative way to do the update?
Thanks
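For reference, a set-based sketch for a single TableName/FieldName pair (hypothetical table and column names loosely based on the example above; it assumes OldValue/NewValue hold numeric ids for that FK), which avoids compiling one UPDATE per audit row:
UPDATE a
SET OldText = oldRef.Name,
    NewText = newRef.Name
FROM Audit a
LEFT JOIN TableA oldRef ON oldRef.ID = CAST(a.OldValue AS INT)   -- assumes OldValue is numeric for this FK
LEFT JOIN TableA newRef ON newRef.ID = CAST(a.NewValue AS INT)
WHERE a.TableName = 'Table B' AND a.FieldName = 'AID';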

simplify if statement with stored procedure

I have the following stored procedure where I am repeating similar code. All I am doing is checking conditions based on Sample id1, id2, and id3 in a similar fashion. The value of y goes on until it reaches about 10, so it's going to be a big set of if-based statements. I was trying to see if a better solution could be put in place. Thanks.
@select = 'select * from tbl Sample......'
if(x = 1 and y=1)
    set @where = 'where Sample.id1 >=1 and <=10'
if(x = 1 and y=2)
    set @where = 'where Sample.id1 >=11 and <=20'
if(x=2 and y=1)
    set @where = 'where Sample.id2 >=1 and <= 10'
if(x=2 and y=2)
    set @where = 'where Sample.id2 >=11 and <=20'
if(x=3 and y=1)
    set @where = 'where Sample.id3 >=1 and <=10'
if(x=3 and y=2)
    set @where = 'where Sample.id3 >=11 and <=20' -- increment goes on
exec(@select+@where)
In general, if there is no easy correlation between the values of x, y and the filtered columns id1, id2 etc, then you could move the where predicates into a table keyed by values of x and y, and then use this as a lookup to apply to your PROC. Assuming the SPROC is used heavily, the lookup table can be made permanent and indexed on your x,y input mapping columns.
CREATE TABLE dbo.WhereMappings
(
x INT,
y INT,
Predicate NVARCHAR(MAX),
CONSTRAINT PK_MyWhereMappings PRIMARY KEY(x, y)
)
INSERT INTO dbo.WhereMappings(x, y, Predicate) VALUES
(1, 1, 'Sample.id1 > 5 and Sample.id2 <= 10'),
(1, 2, 'Sample.id1 > 7 and Sample.id2 <= 15'),
(2, 1, 'Sample.id2 > 2 and Sample.id3 <= 18');
Your proc then simplifies to:
CREATE PROC MyProc(@x INT, @y INT) AS
BEGIN
    DECLARE @sql NVARCHAR(MAX);
    DECLARE @predicate NVARCHAR(MAX);
    SELECT TOP 1 @predicate = Predicate
    FROM dbo.WhereMappings WHERE x = @x AND y = @y;
    -- TODO THROW if predicate not mapped
    SET @sql = CONCAT('SELECT * FROM Sample WHERE ', @predicate);
    EXECUTE(@sql);
END;
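For example, assuming the mappings above have been inserted:
EXEC MyProc @x = 1, @y = 2;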
Re : What does this solve
Although this hasn't necessarily reduced the complexity of the original queries, it does however allow for a data-only maintenance approach to the mappings, e.g. Admin UI screens could be written to maintain (and validate! think Sql Injection) the predicate mappings, without the need for direct modification to the SPROC.
Edit
After your edit, it does appear that there is a correlation between x, y and the filtered column and range used in the idx predicates: x selects the column, and y selects the range.
In that case, simply append the value of x to an id column name stub, and compute the BETWEEN bounds as y*10 - 9 to y*10, as sketched below.
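A quick sketch of that construction with dynamic SQL (hypothetical variable values; the table and id column stub names follow the question's example):
DECLARE @x INT = 1, @y INT = 2;
DECLARE @sql NVARCHAR(MAX) =
    'SELECT * FROM Sample WHERE id' + CAST(@x AS VARCHAR(10)) +
    ' BETWEEN ' + CAST(@y * 10 - 9 AS VARCHAR(10)) +
    ' AND ' + CAST(@y * 10 AS VARCHAR(10));
-- For @x = 1, @y = 2 this builds: SELECT * FROM Sample WHERE id1 BETWEEN 11 AND 20
EXEC (@sql);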
You may do something like this:
select
    *
from
    tbl Sample
where
    (@x=1 and @y=1 and Sample.id1>=..and Sample.id1<=..) --(or you could use between)
    OR (@x=1 and @y=2 and Sample.id1>=..and Sample.id1<=..)
    ..
set @select = 'select * from tbl Sample......'
set @where = 'where Sample.id'+convert(nvarchar(10),@x)+' >=....and <=...'
exec(@select+@where)
I would suggest using another SQL table that holds all of these conditions (x, y and the min/max limits for each combination).
Then use a join in your SQL query (assume the above table is named Limit):
select * from tbl Sample smpl
inner join Limit lmt
    on @x=lmt.x and @y=lmt.y and
    (
        (@x=1 and smpl.id1 >= lmt.Min_limit and smpl.id1 <= lmt.Max_limit) or
        (@x=2 and smpl.id2 >= lmt.Min_limit and smpl.id2 <= lmt.Max_limit) or
        (@x=3 and smpl.id3 >= lmt.Min_limit and smpl.id3 <= lmt.Max_limit)
    )
With this I have tried to avoid a dynamic query.
I usually try to find a relation between inputs and outputs, and in this case I found this:
SET @where = 'WHERE Sample.id{0} >= {1} + 1 and <= {1} + 10'
SET @where = REPLACE(@where, '{0}', CAST(x AS varchar(5)))
SET @where = REPLACE(@where, '{1}', CAST((y - 1) * 10 AS varchar(5)))
I think you want something like:
SET @where = 'where Sample.id' + CAST(@x AS VARCHAR(10)) + ' between ' +
    CAST((@y - 1) * 10 + 1 AS VARCHAR(10)) + ' and ' +
    CAST(@y * 10 AS VARCHAR(10))
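For example, with @x = 2 and @y = 3 this builds: where Sample.id2 between 21 and 30.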

How to UPDATE all columns of a record without having to list every column

I'm trying to figure out a way to update a record without having to list every column name that needs to be updated.
For instance, it would be nice if I could use something similar to the following:
-- the parts inside braces are what I am trying to figure out
UPDATE Employee
SET {all columns, without listing each of them}
WITH {this record with id of '111' from other table}
WHERE employee_id = '100'
If this can be done, what would be the most straightforward/efficient way of writing such a query?
It's not possible.
What you're trying to do is not part of SQL specification and is not supported by any database vendor. See the specifications of SQL UPDATE statements for MySQL, Postgresql, MSSQL, Oracle, Firebird, Teradata. Every one of those supports only below syntax:
UPDATE table_reference
SET column1 = {expression} [, column2 = {expression}] ...
[WHERE ...]
This is not possible, but..
you can do it like this:
begin tran
delete from table where CONDITION
insert into table select * from EqualDesingTabletoTable where CONDITION
commit tran
Be careful with identity fields, as shown below.
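If the target table has an identity column, the insert needs IDENTITY_INSERT and an explicit column list, roughly like this (hypothetical table and column names):
SET IDENTITY_INSERT TargetTable ON;
INSERT INTO TargetTable (id, col1, col2)   -- an explicit column list is required here
SELECT id, col1, col2
FROM SourceTable
WHERE CONDITION;
SET IDENTITY_INSERT TargetTable OFF;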
Here's a hardcore way to do it with SQL SERVER. Carefully consider security and integrity before you try it, though.
This uses the information schema to get the names of all the columns and then puts together a big UPDATE statement that updates all columns except the ID column, which it uses to join the tables.
This only works for a single column key, not composites.
usage: EXEC UPDATE_ALL 'source_table','destination_table','id_column'
CREATE PROCEDURE UPDATE_ALL
    @SOURCE VARCHAR(100),
    @DEST VARCHAR(100),
    @ID VARCHAR(100)
AS
DECLARE @SQL VARCHAR(MAX) =
    'UPDATE D SET ' +
    -- Google 'for xml path stuff'. This gets the rows from query results and
    -- turns them into a comma separated list.
    STUFF((SELECT ', D.'+ COLUMN_NAME + ' = S.' + COLUMN_NAME
           FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = @DEST
           AND COLUMN_NAME <> @ID
           FOR XML PATH('')),1,1,'')
    + ' FROM ' + @SOURCE + ' S JOIN ' + @DEST + ' D ON S.' + @ID + ' = D.' + @ID
--SELECT @SQL
EXEC (@SQL)
In Oracle PL/SQL, you can use the following syntax:
DECLARE
r my_table%ROWTYPE;
BEGIN
r.a := 1;
r.b := 2;
...
UPDATE my_table
SET ROW = r
WHERE id = r.id;
END;
Of course that just moves the burden from the UPDATE statement to the record construction, but you might already have fetched the record from somewhere.
How about using Merge?
https://technet.microsoft.com/en-us/library/bb522522(v=sql.105).aspx
It gives you the ability to run Insert, Update, and Delete in one statement. One other piece of advice: if you're going to be updating a large data set with indexes, and the source subset is smaller than your target but both tables are very large, move the changes to a temporary table first. I tried to merge two tables that were nearly two million rows each, and applying just 20 changed records took 22 minutes. Once I moved the deltas over to a temp table, it took seconds.
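A minimal sketch of the UPDATE side of a MERGE (hypothetical table and column names; note the SET clause still lists each column, so this does not remove the column list, it mainly buys you combined insert/update/delete handling):
MERGE dbo.Employee AS target
USING dbo.Employee_Staging AS source
    ON target.employee_id = source.employee_id
WHEN MATCHED THEN
    UPDATE SET target.first_name = source.first_name,
               target.last_name = source.last_name;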
If you are using Oracle, you can use rowtype
declare
var_x TABLE_A%ROWTYPE;
Begin
select * into var_x
from TABLE_B where rownum = 1;
update TABLE_A set row = var_x
where ID = var_x.ID;
end;
/
given that TABLE_A and TABLE_B are of same schema
It is possible. Like npe said it's not a standard practice. But if you really have to:
1. First a scalar function
CREATE FUNCTION [dte].[getCleanUpdateQuery] (@pTableName varchar(40), @pQueryFirstPart VARCHAR(200) = '', @pQueryLastPart VARCHAR(200) = '', @pIncludeCurVal BIT = 1)
RETURNS VARCHAR(8000) AS
BEGIN
    DECLARE @pQuery VARCHAR(8000);

    WITH cte_Temp
    AS
    (
        SELECT
            C.name
        FROM SYS.COLUMNS AS C
        INNER JOIN SYS.TABLES AS T ON T.object_id = C.object_id
        WHERE T.name = @pTableName
    )
    SELECT @pQuery = (
        CASE @pIncludeCurVal
            WHEN 0 THEN
            (
                STUFF(
                    (SELECT ', ' + name + ' = ' + @pQueryFirstPart + @pQueryLastPart FROM cte_Temp FOR XML PATH('')), 1, 2, ''
                )
            )
            ELSE
            (
                STUFF(
                    (SELECT ', ' + name + ' = ' + @pQueryFirstPart + name + @pQueryLastPart FROM cte_Temp FOR XML PATH('')), 1, 2, ''
                )
            ) END)

    RETURN 'UPDATE ' + @pTableName + ' SET ' + @pQuery
END
2. Use it like this
DECLARE @pQuery VARCHAR(8000) = dte.getCleanUpdateQuery(<your table name>, <query part before current value>, <query part after current value>, <1 if current value is used. 0 if updating everything to a static value>);
EXEC (@pQuery)
Example 1: set all employee columns to 'Unknown' (you need to make sure the column type matches the intended value):
DECLARE @pQuery VARCHAR(8000) = dte.getCleanUpdateQuery('employee', '', 'Unknown', 0);
EXEC (@pQuery)
Example 2: Remove an undesired text qualifier (e.g. #)
DECLARE @pQuery VARCHAR(8000) = dte.getCleanUpdateQuery('employee', 'REPLACE(', ', ''#'', '''')', 1);
EXEC (@pQuery)
This query can be improved. This is just the one I saved and sometimes use. You get the idea.
Similar to an upsert, you could check if the item exists on the table, if so, delete it and insert it with the new values (technically updating it) but you would lose your rowid if that's something sensitive to keep in your case.
Behold, the updelsert
IF NOT EXISTS (SELECT * FROM Employee WHERE ID = @SomeID)
    INSERT INTO Employee VALUES(@SomeID, @Your, @Vals, @Here)
ELSE
BEGIN
    DELETE FROM Employee WHERE ID = @SomeID
    INSERT INTO Employee VALUES(@SomeID, @Your, @Vals, @Here)
END
You could do it by deleting the column from the table and then adding the column back with a default value of whatever you need it to be. Saving this will require rebuilding the table.
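A rough sketch of that in T-SQL (hypothetical table and column names; note this discards whatever was in the column and backfills every row with the default):
ALTER TABLE dbo.Employee DROP COLUMN some_col;
ALTER TABLE dbo.Employee
    ADD some_col INT NOT NULL
    CONSTRAINT DF_Employee_some_col DEFAULT 0;   -- NOT NULL + DEFAULT populates existing rows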

SQL Query to check if 40 columns in table is null

How do I find the columns in a table that contain only NULL values for all rows?
Suppose a table has 100 columns, and 60 of those 100 columns contain null values.
How can I write a WHERE condition to check whether those 60 columns are null?
maybe with a COALESCE
SELECT * FROM table WHERE coalesce(col1, col2, col3, ..., colN) IS NULL
where c1 is null and c2 is null ... and c60 is null
shortcut using string concatenation (Oracle syntax):
where c1||c2||c3 ... c59||c60 is null
First of all, if you have a table that has so many nulls and you use SQL Server 2008 - you might want to define the table using sparse columns (http://msdn.microsoft.com/en-us/library/cc280604.aspx).
Secondly, I am not sure that coalesce solves what the question asks - it seems like Ammu might actually want to find the list of columns that are null for all rows, but I might have misunderstood. Nevertheless, it is an interesting question, so I wrote a procedure to list the null columns for any given table:
IF (OBJECT_ID(N'PrintNullColumns') IS NOT NULL)
    DROP PROC dbo.PrintNullColumns;
go
CREATE PROC dbo.PrintNullColumns(@tablename sysname)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @query nvarchar(max);
    DECLARE @column sysname;
    DECLARE columns_cursor CURSOR FOR
        SELECT c.name
        FROM sys.tables t JOIN sys.columns c ON t.object_id = c.object_id
        WHERE t.name = @tablename AND c.is_nullable = 1;
    OPEN columns_cursor;
    FETCH NEXT FROM columns_cursor INTO @column;
    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        SET @query = N'
            DECLARE @c int
            SELECT @c = COUNT(*) FROM ' + @tablename + ' WHERE ' + @column + N' IS NOT NULL
            IF (@c = 0)
                PRINT (''' + @column + N''');'
        EXEC (@query);
        FETCH NEXT FROM columns_cursor INTO @column;
    END
    CLOSE columns_cursor;
    DEALLOCATE columns_cursor;
    SET NOCOUNT OFF;
    RETURN;
END;
go
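To run it for a table (hypothetical table name):
EXEC dbo.PrintNullColumns @tablename = N'Employee';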
If you don't want to write out the column names, you can do something like this.
This will show you all the rows where all of the column values are null, except for the columns you specify (IgnoreThisColumn1 & IgnoreThisColumn2).
DECLARE @query NVARCHAR(MAX);
SELECT @query = ISNULL(@query+', ','') + [name]
FROM sys.columns
WHERE object_id = OBJECT_ID('yourTableName')
AND [name] != 'IgnoreThisColumn1'
AND [name] != 'IgnoreThisColumn2';
SET @query = N'SELECT * FROM TmpTable WHERE COALESCE('+ @query +') IS NULL';
EXECUTE(@query)
If you don't want the rows where all the columns are null except for the columns you specified, you can simply use IS NOT NULL instead of IS NULL:
SET @query = N'SELECT * FROM TmpTable WHERE COALESCE('+ @query +') IS NOT NULL';
Are you trying to find out if a specific set of 60 columns are null, or do you just want to find out if any 60 out of the 100 columns are null (not necessarily the same 60 for each row?)
If it is the latter, one way to do it in oracle would be to use the nvl2 function, like so:
select ... where (nvl2(col1,0,1)+nvl2(col2,0,1)+...+nvl2(col100,0,1) > 59)
A quick test of this idea:
select 'dummy' from dual where nvl2('somevalue',0,1) + nvl2(null,0,1) > 1
Returns 0 rows while:
select 'dummy' from dual where nvl2(null,0,1) + nvl2(null,0,1) > 1
Returns 1 row as expected since more than one of the columns are null.
It would help to know which db you are using and perhaps which language or db framework if using one.
This should work though on any database.
Something like this would probably be a good stored procedure, since there are no input parameters for it.
select count(*) from table where col1 is null or col2 is null ...
Here is another method that seems to me to be logical as well (use Netezza or TSQL)
SELECT KeyColumn, MAX(NVL2(TEST_COLUMN,1,0)) AS TEST_COLUMN
FROM TABLE1
GROUP BY KeyColumn
So every TEST_COLUMN that has MAX value of 0 is a column that contains all nulls for the record set. The function NVL2 is saying if the column data is not null return a 1, but if it is null then return a 0.
Taking the MAX of that column will reveal if any of the rows are not null. A value of 1 means that there is at least 1 row that has data. Zero (0) means that each row is null.
I use the query below when I have to check multiple columns for NULL. I hope this is helpful. If the SUM comes to a value other than zero, then you have NULLs in that column:
select SUM (CASE WHEN col1 is null then 1 else 0 end) as null_col1,
SUM (CASE WHEN col2 is null then 1 else 0 end) as null_col2,
SUM (CASE WHEN col3 is null then 1 else 0 end) as null_col3, ....
.
.
.
from tablename
you can use
select NUM_NULLS , COLUMN_NAME from all_tab_cols where table_name = 'ABC' and COLUMN_NAME in ('PQR','XYZ');