T-SQL distinct count query...can it be optimized with an EXISTS or something else?

Is there any way to make this query run faster (meaning take fewer reads/less IO from SQL Server)? The logic essentially is:
I count the distinct values in a column.
If there is more than 1 distinct value, the column is considered as "existing".
A list is built with the name of the column and a 1 or 0 depending on whether it is existing.
I would like to do something with EXISTS (which in T-SQL terminates the scan of the table/index as soon as SQL Server finds a match to the EXISTS predicate). I am not sure if that is possible in this query.
Note: I am not looking for answers like "is there an index on the table"...we are well beyond that :)
with SomeCTE as
(
    select
        count(distinct ColumnA) as ColumnA,
        count(distinct ColumnB) as ColumnB,
        count(distinct ColumnC) as ColumnC
    from VERYLARGETABLE
)
select 'NameOfColumnA', case when ColumnA > 1 then 1 else 0 end from SomeCTE
UNION ALL
select 'NameOfColumnB', case when ColumnB > 1 then 1 else 0 end from SomeCTE
UNION ALL
select 'NameOfColumnC', case when ColumnC > 1 then 1 else 0 end from SomeCTE
Just to copy what I posted below in the comments:
After testing this solution, it makes the queries run much faster. To give two examples: one query went from 50 seconds down to 3 seconds; another went from 9+ minutes (I stopped running it) down to 1 minute 3 seconds. Also, I am missing indexes (according to the DTA it should run 14% faster with them), and I am running this in a SQL Azure DB (where you are throttled drastically in terms of I/O, CPU and tempdb memory)...very nice solution all around. One downside is that MIN/MAX does not work on bit columns, but those can be converted.

If the datatype of the columns allows aggregate functions, and if there is an index on each column, this will be fast:
SELECT 'NameOfColumnA' AS ColumnName,
       CASE WHEN MIN(ColumnA) < MAX(ColumnA)
            THEN 1 ELSE 0
       END AS result
FROM VERYLARGETABLE
UNION ALL
SELECT 'NameOfColumnB',
       CASE WHEN MIN(ColumnB) < MAX(ColumnB)
            THEN 1 ELSE 0
       END
FROM VERYLARGETABLE
UNION ALL
SELECT 'NameOfColumnC',
       CASE WHEN MIN(ColumnC) < MAX(ColumnC)
            THEN 1 ELSE 0
       END
FROM VERYLARGETABLE;
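As the follow-up comment above notes, MIN/MAX does not work directly on bit columns, so those need a cast first. A minimal sketch of that workaround, assuming a hypothetical bit column named ColumnD on the same table:
SELECT 'NameOfColumnD' AS ColumnName,
       -- cast the bit to tinyint so MIN/MAX are allowed
       CASE WHEN MIN(CAST(ColumnD AS tinyint)) < MAX(CAST(ColumnD AS tinyint))
            THEN 1 ELSE 0
       END AS result
FROM VERYLARGETABLE;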


Max match same numbers from each row

Generating a report of 1 million rows with the script below takes almost 2 days, so I would really appreciate it if somebody could help me with a different script that can generate the report within 10-15 minutes.
The requirements of the report are as follows:
Table "cover" contains 5 million rows and 6 columns of data; likewise, table "data" contains 500,000 rows and 6 columns.
Each row in table "cover" has to be compared against every row in table "data" to find the maximum number of matching columns.
For instance, as shown in the tables below, there could be 3 matches in row #1, 2 matches in row #2 and 5 matches in row #3, so the script has to select the maximum, which is 5 in row #3.
Sample table
UPDATE public.cover_sheet AS fc
SET maxmatch = (SELECT MAX(tmp.mtch)
                FROM (
                    SELECT (SELECT CASE WHEN fc.a = drwo.a THEN 1 ELSE 0 END) +
                           (SELECT CASE WHEN fc.b = drwo.b THEN 1 ELSE 0 END) +
                           (SELECT CASE WHEN fc.c = drwo.c THEN 1 ELSE 0 END) +
                           (SELECT CASE WHEN fc.d = drwo.d THEN 1 ELSE 0 END) +
                           (SELECT CASE WHEN fc.e = drwo.e THEN 1 ELSE 0 END) +
                           (SELECT CASE WHEN fc.f = drwo.f THEN 1 ELSE 0 END) AS mtch
                    FROM public.data AS drwo
                ) AS tmp)
WHERE fc.code > 0;
SELECT *
FROM public.cover_sheet AS fc
WHERE fc.maxmatch>0;
As @a_horse_with_no_name mentioned in a comment on the question, your question is not clear...
It seems you want to get the number of records for which all 6 fields from both tables are equal.
I'd suggest you:
reduce the number of SELECT statements, which will speed up query execution,
split your query into a few smaller ones (good practice) to check your logic,
use a join to get matching data, see: Visual Representation of SQL Joins,
use a subquery or CTE to get a result on which you can then run the update.
I think you want a result like the following:
SELECT COUNT(*) AS mtch
FROM public.cover_sheet AS fc
INNER JOIN public.data AS drwo
    ON fc.a = drwo.a AND fc.b = drwo.b AND fc.c = drwo.c
   AND fc.d = drwo.d AND fc.e = drwo.e AND fc.f = drwo.f
If I'm not wrong and the above query is correct, its execution time should drop to about 1-2 minutes.
Finally, the update query may look like this:
WITH qry AS
(
-- proper select statement here
)
UPDATE public.cover_sheet AS fc
SET maxmatch = qry.<fieldname>
FROM qry
WHERE fc.code>0 AND fc.<key> = qry.<key>;
Note:
I cannot see your data and know nothing about its structure, relationships, etc., so you will have to adapt the above query to your needs.
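For illustration only, here is a hedged sketch of how the pieces might fit together, following the join above and assuming (an assumption, not something visible in the question) that code uniquely identifies each row of cover_sheet:
WITH qry AS
(
    -- per cover row, count the data rows where all 6 fields match
    SELECT fc.code AS cover_code, COUNT(*) AS mtch
    FROM public.cover_sheet AS fc
    INNER JOIN public.data AS drwo
        ON fc.a = drwo.a AND fc.b = drwo.b AND fc.c = drwo.c
       AND fc.d = drwo.d AND fc.e = drwo.e AND fc.f = drwo.f
    GROUP BY fc.code
)
UPDATE public.cover_sheet AS fc
SET maxmatch = qry.mtch
FROM qry
WHERE fc.code > 0 AND fc.code = qry.cover_code;
Rows with no full 6-field match do not appear in qry, so their maxmatch simply stays unchanged.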

select sum(a), sum(b where c=1) from db; sql conditions in select statement

I guess I just lack the keywords to search for, but this is burning on my mind:
How can I add a condition to the SUM function in the SELECT statement, like select sum(a), sum(b where c=1) from db;?
This means I want to see the sum of column a and the sum of column b, but summing only the records in column b for which column c has the value 1.
The output of Heidi just says "bad syntax near WHERE". Is there any other way?
Thanks in advance and best regards from Berlin, Joachim
The exact syntax may differ depending on the database engine; however, it will be along the lines of:
SELECT
    sum(a),
    sum(CASE WHEN c = 1 THEN b ELSE 0 END)
FROM
    db
select sum(case when c=1 then b else 0 end)
This technique is useful when you need a lot of aggregates over the same set of data: you can query the entire table without applying a WHERE filter and include a bunch of these expressions, each giving you aggregated data for a specific filter.
It's also useful when you need a lot of counts based on different filters - you can sum 1s and 0s:
select sum(case when {somecondition} then 1 else 0 end)
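For instance, a minimal sketch combining both tricks in one scan of the question's table (the column aliases are made up):
SELECT
    SUM(a)                                 AS sum_a,
    SUM(CASE WHEN c = 1 THEN b ELSE 0 END) AS sum_b_where_c_is_1,
    SUM(CASE WHEN c = 1 THEN 1 ELSE 0 END) AS count_where_c_is_1
FROM db;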

SQL (SQLite) count for null-fields over all columns

I've got a table called datapoints with about 150 columns and 2600 rows. I know, 150 columns is too much, but I got this DB after importing a CSV and it is not possible to shrink the number of columns.
I have to get some statistical information out of the data. E.g. one question would be:
Give me the total number of fields (across all columns) which are null. Does somebody have an idea how I can do this efficiently?
For one column it isn't a problem:
SELECT count(*) FROM datapoints tb1 WHERE tb1.column1 IS NULL;
But how can I solve this for all columns together, without doing it by hand for every column?
Best,
Michael
Building on Lamak's idea, how about this:
SELECT (N * COUNT(*)) - (
COUNT(COLUMN_1)
+ COUNT(COLUMN_2)
+ ...
+ COUNT(COLUMN_N)
)
FROM DATAPOINTS;
where N is the number of columns. The trick will be in making the summation series of COUNT(column), but that shouldn't be too terrible with a good text editor and/or spreadsheet.
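For instance, with just the first three columns (so N = 3):
SELECT (3 * COUNT(*)) - (
         COUNT(column1)
       + COUNT(column2)
       + COUNT(column3)
       ) AS null_count
FROM datapoints;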
I don't think there is an easy way to do it. I'd get started on the 150 queries; you only have to replace one word (the column name) each time.
Well, COUNT (and most aggregation functions) ignores NULL values. In your case, since you are using COUNT(*), it counts every row in the table, but you can run COUNT on any column. Something like this:
SELECT TotalRows - Column1NotNullCount, etc
FROM (
    SELECT COUNT(1) TotalRows,
           COUNT(column1) Column1NotNullCount,
           COUNT(column2) Column2NotNullCount,
           COUNT(column3) Column3NotNullCount ....
    FROM datapoints) A
To get started, it's often helpful to use a visual query tool to generate a field list and then use cut/paste/search/replace or manipulation in a spreadsheet program to transform it into what is needed. To do it all in one step, you can use something like:
-- a simple CASE of the form "CASE col WHEN NULL ..." never matches,
-- because NULL = NULL is not true; a searched CASE with IS NULL is needed
SELECT SUM(CASE WHEN COLUMN1 IS NULL THEN 1 ELSE 0 END) +
       SUM(CASE WHEN COLUMN2 IS NULL THEN 1 ELSE 0 END) +
       SUM(CASE WHEN COLUMN3 IS NULL THEN 1 ELSE 0 END) +
       ...
FROM DATAPOINTS;
With a visual query builder you can quickly generate:
SELECT COLUMN1, COLUMN2, COLUMN3 ... FROM DATAPOINTS;
You can then replace the comma with all the text that needs to appear between two field names, followed by fixing up the first and last fields. So in the example, search for "," and replace with " IS NULL THEN 1 ELSE 0 END) + SUM(CASE WHEN " and then fix up the first and last fields.

How to count two fields by using Select statement in SQL Server 2005?

Total records in the table: 10.
Select count(ID) from table1 where col1 = 1 (result is 8)
Select count(ID) from table1 where col1 = 0 (result is 2)
So it is the same table, but the counts are based on different conditions. How am I going to get the two results (counts) using one select statement?
Also, performance is a big concern here.
PS: I am using a stored procedure...
EDIT:
I want to clarify that the above query is just part of a big stored procedure's logic (big for me, at least). The answers below gave me another idea to achieve this in a different way, so my question has changed a bit now... Please help here? It is the same column (bool type) with a true or false state.
Use CASE:
SELECT
SUM(CASE col1 WHEN 1 THEN 1 ELSE 0 END) AS Count1,
SUM(CASE col1 WHEN 0 THEN 1 ELSE 0 END) AS Count0
FROM table1
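An equivalent sketch uses COUNT instead, relying on the fact that a CASE with no ELSE branch yields NULL and COUNT ignores NULLs:
SELECT
    COUNT(CASE WHEN col1 = 1 THEN 1 END) AS Count1,
    COUNT(CASE WHEN col1 = 0 THEN 1 END) AS Count0
FROM table1
With the numbers from the question, a single scan of the 10 rows returns Count1 = 8 and Count0 = 2.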
You could use subselects or UNIONs; I don't see another way...

Counting non-null columns in a rather strange way

I have an Oracle table with 32 columns.
Two of these columns are identity columns,
the rest are values.
I would like to get the average of all the value columns, which is complicated by the NULL values. Below is the pseudocode for what I am trying to achieve:
SELECT
((nvl(val0, 0) + nvl(val1, 0) + ... nvl(valn, 0))
/ nonZero_Column_Count_In_This_Row)
Such that: nonZero_Column_Count_In_This_Row = (ifNullThenZeroElse1(val0) + ifNullThenZeroElse1(val1) + ... + ifNullThenZeroElse1(valn))
The difficulty here is, of course, in getting 1 for any non-null column. It seems I need a function similar to NVL, but with an else clause: something that will return 0 if the value is null, but 1 if not, rather than the value itself.
How should I go about getting the value for the denominator?
PS: I feel I must explain some of the motivation behind this design. Ideally this table would have been organized as the identity columns plus one value per row, with some identifier for the row itself. That would have been more normalized, and the solution to this problem would have been pretty simple. The reasons it was not done like this are throughput and saving space. This is a huge DB into which we insert 10 million values per minute. Making each of these values one row would mean 10M rows per minute, which is definitely not attainable. Packing 30 of them into a single row reduces the number of rows inserted to something we can handle with a single DB, and makes the overhead (the identity data) much smaller.
(CASE WHEN col IS NULL THEN 0 ELSE 1 END)
You could use NVL2(val0, 1, 0) + NVL2(val1, 1, 0) + ... since you are using Oracle.
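Putting that together with the question's pseudocode, a minimal sketch with three hypothetical value columns (the NULLIF guards against division by zero when every value in the row is NULL):
SELECT (NVL(val0, 0) + NVL(val1, 0) + NVL(val2, 0))
       / NULLIF(NVL2(val0, 1, 0) + NVL2(val1, 1, 0) + NVL2(val2, 1, 0), 0)
       AS row_average
FROM mytable;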
Another option is to use the AVG function, which ignores NULLs:
WITH q AS (SELECT val0, val1, val2, val3 FROM mytable)
SELECT AVG(v) FROM (
    SELECT val0 AS v FROM q
    UNION ALL SELECT val1 FROM q
    UNION ALL SELECT val2 FROM q
    UNION ALL SELECT val3 FROM q
);
If you're using Oracle 11g, you can use the UNPIVOT syntax to make it even simpler.
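For completeness, a hedged sketch of the 11g variant, reusing the hypothetical mytable and val0..val3 columns from above; UNPIVOT drops NULL cells by default, so AVG only ever sees the non-null values:
SELECT AVG(v) AS avg_val
FROM mytable
UNPIVOT (v FOR which_col IN (val0, val1, val2, val3));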
I see this is a pretty old question, but I don't see a sufficient answer. I had a similar problem, and below is how I solved it. It's pretty clear a CASE statement is needed. This solution is a workaround for cases where
SELECT COUNT(column) WHERE column {IS | IS NOT} NULL
does not work for whatever reason, or where you would otherwise need to run several queries like
SELECT COUNT(*)
FROM A_TABLE
WHERE COL1 IS NOT NULL;
SELECT COUNT(*)
FROM A_TABLE
WHERE COL2 IS NOT NULL;
but want the result as a single data set when you run the script. See below; I use this for analysis and it has been working great for me so far.
SELECT SUM(CASE NVL(valn, 'X')  -- 'X' is a sentinel that must not occur in the column
           WHEN 'X'
           THEN 0
           ELSE 1
           END) AS COLUMN_NAME
FROM YOUR_TABLE;
Cheers!
Doug
Generically, you can do something like this:
SELECT (
    (COALESCE(val0, 0) + COALESCE(val1, 0) + ... + COALESCE(valn, 0))
    /
    (SIGN(ABS(COALESCE(val0, 0))) + SIGN(ABS(COALESCE(val1, 0))) + ...)
) AS MyAverage
The top line returns the sum of the values (treating NULLs as 0), whereas the bottom line returns the number of non-null values. One caveat: SIGN(ABS(COALESCE(val, 0))) yields 0 for an actual value of 0 just as it does for NULL, so this only works if zero values cannot occur in the data.
FYI - it's SQL Server syntax, but COALESCE is just like ISNULL for the most part. SIGN just returns -1 for a negative number, 0 for zero, and 1 for a positive number. ABS is "absolute value".