Coalesce function not selecting data value from series when it exists - sql

My code is as follows:
Insert Into dbo.database (Period, Amount)
Select coalesce (date_1, date_2, date_3), Amount FROM Source.dbo.[10]
I'm 100% sure a value exists in one of the three columns date_1, date_2, date_3, all stored as strings (varchar(100)), yet I am still getting blanks when I select Period.
Any help?

COALESCE is designed to return the first non-NULL value from the list, or NULL if every value in the list is NULL. See the documentation for full details: http://msdn.microsoft.com/en-us/library/ms190349.aspx
I would guess that you have blank values (' ') in one of the columns instead of NULL values. If you are trying to find the first non-null, non-blank column, you can use a CASE expression.
select
    case
        when len(rtrim(ltrim(date_1))) > 0 then date_1
        when len(rtrim(ltrim(date_2))) > 0 then date_2
        when len(rtrim(ltrim(date_3))) > 0 then date_3
        else null
    end,
    Amount
from Source.dbo.[10]
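If you'd rather keep COALESCE, you can turn blanks into NULLs first with NULLIF; a minimal sketch of that variant, assuming the same varchar columns as above:

Select coalesce(nullif(ltrim(rtrim(date_1)), ''),
                nullif(ltrim(rtrim(date_2)), ''),
                nullif(ltrim(rtrim(date_3)), '')),
       Amount
FROM Source.dbo.[10]

NULLIF returns NULL when its two arguments are equal, so a trimmed empty string falls through to the next column.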

Create a new table with columns with case statements and max function

I am having some problems creating a new table from an old one, with new columns defined by CASE expressions.
I need to add three columns to the new table, where I compute a maximum based on different conditions. Specifically,
if time is between 1 and 3, I define a variable max_var_1_3 as max((-1)*var),
if time is between 1 and 6, I define a variable max_var_1_6 as max((-1)*var),
if time is between 1 and 12, I define a variable max_var_1_12 as max((-1)*var),
The max function needs to take the maximum value of the variable var in the window between 1 and 3, 1 and 6, 1 and 12 respectively.
I wrote this
create table new as(
select t1.*,
(case when time between 1 and 3 then MAX((-1)*var)
else var
end) as max_var_1_3,
(case when time between 1 and 6 then MAX((-1)*var)
else var
end) as max_var_1_6,
(case when time between 1 and 12 then MAX((-1)*var)
else var
end) as max_var_1_12
from old_table t1
group by time
) with data primary index time
but unfortunately it is not working. old_table already has some columns, and I would like to carry all of them over and then compare the old table with the new one. I get an error saying that something is expected between ')' and ',', but I cannot understand what. I am using Teradata SQL.
Could you please help me?
Many thanks
The problem is that you have GROUP BY time in your query while trying to return all the other values with your SELECT t1.*. To make your query work as-is, you'd need to add each column from t1.* to your GROUP BY clause.
If you want to find the MAX value within the different time ranges AND also return all the rows, then you can use a window function. Something like this:
CREATE TABLE new AS (
    SELECT
        t1.*,
        CASE
            WHEN t1.time BETWEEN 1 AND 3 THEN
                MAX(CASE WHEN t1.time BETWEEN 1 AND 3 THEN (-1 * t1.var) ELSE NULL END) OVER ()
            ELSE t1.var
        END AS max_var_1_3,
        CASE
            WHEN t1.time BETWEEN 1 AND 6 THEN
                MAX(CASE WHEN t1.time BETWEEN 1 AND 6 THEN (-1 * t1.var) ELSE NULL END) OVER ()
            ELSE t1.var
        END AS max_var_1_6,
        CASE
            WHEN t1.time BETWEEN 1 AND 12 THEN
                MAX(CASE WHEN t1.time BETWEEN 1 AND 12 THEN (-1 * t1.var) ELSE NULL END) OVER ()
            ELSE t1.var
        END AS max_var_1_12
    FROM old_table t1
) WITH DATA PRIMARY INDEX (time);
Here's the logic:
check if a row falls in the range
if it does, return the desired MAX value for rows in that range
otherwise, just return that given row's default value (var)
return all rows along with the three new columns
If you have performance issues, you could also move the max_var calculations to a CTE, since they only need to be calculated once. Also to avoid confusion, you may want to explicitly specify the values in your SELECT instead of using t1.*.
I don't have a TD system to test, but try it out and see if that works.
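For reference, the CTE variant mentioned above might look roughly like this (an untested sketch: each window maximum is computed once and cross joined back to every row):

WITH maxes AS (
    SELECT
        MAX(CASE WHEN time BETWEEN 1 AND 3  THEN -1 * var END) AS max_1_3,
        MAX(CASE WHEN time BETWEEN 1 AND 6  THEN -1 * var END) AS max_1_6,
        MAX(CASE WHEN time BETWEEN 1 AND 12 THEN -1 * var END) AS max_1_12
    FROM old_table
)
SELECT t1.*,
       CASE WHEN t1.time BETWEEN 1 AND 3  THEN m.max_1_3  ELSE t1.var END AS max_var_1_3,
       CASE WHEN t1.time BETWEEN 1 AND 6  THEN m.max_1_6  ELSE t1.var END AS max_var_1_6,
       CASE WHEN t1.time BETWEEN 1 AND 12 THEN m.max_1_12 ELSE t1.var END AS max_var_1_12
FROM old_table t1
CROSS JOIN maxes m;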
I cannot help with the CREATE TABLE AS, but the query you want is this:
SELECT
t.*,
(SELECT MAX(-1 * var) FROM old_table WHERE time BETWEEN 1 AND 3) AS max_var_1_3,
(SELECT MAX(-1 * var) FROM old_table WHERE time BETWEEN 1 AND 6) AS max_var_1_6,
(SELECT MAX(-1 * var) FROM old_table WHERE time BETWEEN 1 AND 12) AS max_var_1_12
FROM old_table t;

SQL statement - use avg and count together but in different conditions

Assume we have the table as follows,
id   Col-1   Col-2
A    1       some text
B    0       some other text
C    3
...
Take the table above as an example: I want to build one SQL statement which outputs the result 2, 2.
The first value is the average of all Col-1 values except 0, that is (1+3)/2 = 2. (If 0 were counted, the result would be (1+0+3)/3 = 1, which is not what I want.)
The second value is the total number of Col-2 values that are not empty. So the value is 2.
P.S. I know how to compute them separately. What I'd prefer is a single statement that returns both results.
For the first, you can use NULLIF, as NULL values are ignored in aggregations such as AVG.
For the second, I assume you want to count only values that are neither NULL nor the empty string.
SELECT AVG(NULLIF(Col1, 0)),
COUNT(CASE WHEN Col2 <> '' THEN 1 END)
FROM T
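A quick check against the sample table (sketched here with a T-SQL table variable; the @t name is just for illustration):

DECLARE @t TABLE (id CHAR(1), Col1 INT, Col2 VARCHAR(50));
INSERT INTO @t VALUES ('A', 1, 'some text'), ('B', 0, 'some other text'), ('C', 3, '');

SELECT AVG(NULLIF(Col1, 0)) AS avg_not_zero,                    -- (1 + 3) / 2 = 2
       COUNT(CASE WHEN Col2 <> '' THEN 1 END) AS num_not_empty  -- counts the 2 non-empty values
FROM @t;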
You want conditional aggregation:
select avg(case when col1 <> 0 then col1 end) as avg_not_zero,
count(col2) as num_not_empty
from table t;
As a note: 0 does not mean that the value is empty. Often NULL is used for this purpose in SQL, although strictly speaking, NULL means an unknown value.
Note: If "empty" could mean the empty string instead of NULL:
select avg(case when col1 <> 0 then col1 end) as avg_not_zero,
count(nullif(col2, '')) as num_not_empty
from table t;

Array contains NULL value

I'm trying to select the last enddate per nr. If the nr contains an enddate with value NULL, it means this nr is still active. In short, I cannot use MAX(enddate) because, out of 2013-09-25 and NULL, it would select the date, whereas I need NULL.
I tried the following query, though it seems that NULL IN (enddate) does not do what I expected, namely 'if the group contains at least one NULL value...'. In other words, NULL should overrank MAX().
SELECT nr,
CASE WHEN NULL IN (enddate) THEN NULL ELSE MAX(enddate) END
FROM myTable
GROUP BY nr
Does someone know how to replace this expression?
You can use the query below. It replaces NULL with a sentinel date large enough to win MAX() (provided no real date is that large) and then restores NULL.
SELECT nr, CASE d WHEN '20990101' THEN NULL ELSE d END AS d
FROM (
    SELECT nr,
           MAX(ISNULL(enddate, '20990101')) AS d
    FROM myTable
    GROUP BY nr
) t
I couldn't check the syntax so there may be small typos.
You could use this query. The CTE calculates the maximum date ignoring any NULLs; this is then left joined back to the table to see if there is a NULL value for each nr value. The CASE expression returns NULL if one exists, or otherwise the maximum date from the CTE.
WITH CTE1 AS (
    SELECT nr, MAX(enddate) AS MaxEnddate
    FROM myTable
    GROUP BY nr
)
SELECT CTE1.nr,
       CASE WHEN MyTable.enddate IS NULL AND MyTable.nr IS NOT NULL
            THEN NULL
            ELSE CTE1.MaxEnddate
       END AS EndDate
FROM CTE1
LEFT JOIN MyTable
    ON MyTable.nr = CTE1.nr
   AND MyTable.enddate IS NULL
Just to build off of Szymon's answer a little bit:
drop table #temp
create table #temp (MyDate date)
insert into #temp (MyDate) values ('1/1/2010'), ('1/1/2011'), ('1/1/2012'), (NULL)

select * from #temp

SELECT (CASE WHEN MAX(ISNULL(MyDate, '2099-01-01')) = '2099-01-01'
             THEN NULL
             ELSE MAX(ISNULL(MyDate, '2099-01-01'))
        END) AS Max_Date
FROM #temp
The query replaces a NULL value with '2099-01-01'. Then, it looks to see if the Max is equal to '2099-01-01' and if so, returns NULL, and otherwise returns the actual Max.
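Another pattern that avoids the sentinel date entirely (my addition, not from the answers above): if any enddate in a group is NULL, COUNT(enddate) will be smaller than COUNT(*), so you can return NULL in exactly that case:

SELECT nr,
       CASE WHEN COUNT(*) = COUNT(enddate) THEN MAX(enddate) END AS enddate
FROM myTable
GROUP BY nr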

oracle adding, then avg with null

I have sql like:
select avg(decode(type, 'A', value, null) + decode(type, 'B', value, null)) from table;
The problem with this is that some of these types can be null, so the addition will result in null, because adding anything to null makes it null. You might think I could change the decode to return 0 instead of null, but that makes avg() count the value as part of its average, and I don't want it counted.
Ideally the addition would just ignore the nulls and just not try to add them to the rest of the values.
So let's say my numbers are:
5 + 6 + 5
3 + 2 + 1
4 + null + 2
They total 28, and I'd want to divide by 8 (ignoring the null), but if I change the null to 0 in the decode, the avg will divide by 9, which isn't what I want.
As written, your code should always return null, since if the first decode returns value, then the second decode must always return null. I'm going to assume that you made an error in genericizing your code and that what you really meant was this:
avg(decode(type1, 'A', value1, null) + decode(type2, 'B', value2, null))
(Or, instead of type1, it could be a.type. The point is that the fields in the two decodes are meant to be separate fields)
In this case, I think the easiest thing to do is check for nulls first:
avg(case when type1 is null and type2 is null then null
else case type1 when 'A' then value1 else 0 end
+ case type2 when 'B' then value2 else 0 end
end)
(I replaced decode with case because I find it easier to read, but, in this case decode would work just as well.)
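For completeness, the same expression written with DECODE (a sketch using the same assumed type1/type2/value1/value2 column names):

avg(case when type1 is null and type2 is null then null
         else decode(type1, 'A', value1, 0) + decode(type2, 'B', value2, 0)
    end)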
There is no need to over-complicate this with a sum. Just output the values with a CASE, and you are done.
SELECT AVG(
CASE WHEN type = 'A' OR type = 'B'
THEN value
ELSE null
END
)
FROM table
A simple workaround would be to calculate the average yourself:
select
-- The sum of all values with type 'A' or 'B'
sum(decode(type, 'A', value, 'B', value, 0)) /
-- ... divided by the "count" of all values with type 'A' or 'B'
sum(decode(type, 'A', 1, 'B', 1, 0))
from table;
But given the way AVG() works, it would probably be sufficient if you just removed the addition and put everything in a single DECODE():
select avg(decode(type, 'A', value, 'B', value, null)) from table
The logic here is a bit complicated:
select avg((case when type = 'A' then value else 0 end) + (case when type = 'B' then value else 0 end))
from table
where type in ('A', 'B')
The WHERE clause guarantees that only 'A' and 'B' rows are averaged, so the zeros from the ELSE branches never dilute the result. The remaining edge case is when there are no 'A' or 'B' rows at all.

Counting null and non-null values in a single query

I have a table
create table us
(
a number
);
Now I have data like:
a
1
2
3
4
null
null
null
8
9
Now I need a single query to count null and not null values in column a
This works for Oracle and SQL Server (you might be able to get it to work on another RDBMS):
select sum(case when a is null then 1 else 0 end) count_nulls
, count(a) count_not_nulls
from us;
Or:
select count(*) - count(a), count(a) from us;
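With the sample data above (nine rows, three of them NULL), both forms should return the same pair of values:

select count(*) - count(a) as count_nulls, count(a) as count_not_nulls
from us;
-- count_nulls: 3, count_not_nulls: 6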
If I understood correctly you want to count all NULL and all NOT NULL in a column...
If that is correct:
SELECT count(*) FROM us WHERE a IS NULL
UNION ALL
SELECT count(*) FROM us WHERE a IS NOT NULL
Edited to have the full query, after reading the comments :]
SELECT COUNT(*), 'null_tally' AS narrative
FROM us
WHERE a IS NULL
UNION
SELECT COUNT(*), 'not_null_tally' AS narrative
FROM us
WHERE a IS NOT NULL;
Here is a quick and dirty version that works on Oracle (note that a simple CASE a WHEN NULL comparison never matches NULL, so the check has to use IS NULL):
select sum(case when a is null then 1 else 0 end) "Null values",
       sum(case when a is not null then 1 else 0 end) "Non-null values"
from us
For non-nulls:

select count(a)
from us

For nulls:

select count(*) - count(a)
from us
Hence
SELECT COUNT(A) NOT_NULLS
FROM US
UNION
SELECT COUNT(*) - COUNT(A) NULLS
FROM US
ought to do the job
This is better, in that the column titles come out correct:
SELECT COUNT(A) NOT_NULL, COUNT(*) - COUNT(A) NULLS
FROM US
In some testing on my system, it costs a full table scan.
As I understood your query, you can just run this script to get the total NULL and total NOT NULL rows:
select count(*) - count(a) as 'Null', count(a) as 'Not Null' from us;
Usually I use this trick:
select sum(case when a is null then 0 else 1 end) as count_notnull,
       sum(case when a is null then 1 else 0 end) as count_null
from tab
Just to provide yet another alternative, Postgres 9.4+ allows applying a FILTER to aggregates:
SELECT
COUNT(*) FILTER (WHERE a IS NULL) count_nulls,
COUNT(*) FILTER (WHERE a IS NOT NULL) count_not_nulls
FROM us;
SQLFiddle: http://sqlfiddle.com/#!17/80a24/5
This is a little tricky. Assume the table has just one column; then COUNT(1) and COUNT(empid) will give different values when NULLs are present.
set nocount on
declare #table1 table (empid int)
insert #table1 values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(NULL),(11),(12),(NULL),(13),(14);
select * from #table1
select COUNT(1) as "COUNT(1)" from #table1
select COUNT(empid) "Count(empid)" from #table1
As you can see in the query results, the table has 16 rows, two of which are NULL. So when we use COUNT(1) the query engine counts the number of rows, and we get 16. But in the case of COUNT(empid) it counts the non-NULL values in the column empid, so we get 14.
So whenever we use COUNT(Column), we need to take care of NULL values, as shown below.
select COUNT(isnull(empid,1)) from #table1
will count both NULL and Non-NULL values.
Note: The same thing applies even when the table has more than one column. COUNT(1) gives the total number of rows irrespective of NULL/non-NULL values; only when column values are counted using COUNT(Column) do we need to take care of NULLs.
I had a similar issue: to count all distinct values, counting null values as 1, too. A simple count doesn't work in this case, as it does not take null values into account.
Here's a snippet that works on SQL Server and does not involve selecting new values.
Basically, once the DISTINCT has been performed, it also returns the row number in a new column (n) using the ROW_NUMBER() function, then performs a count on that column:
SELECT COUNT(n)
FROM (
SELECT *, row_number() OVER (ORDER BY [MyColumn] ASC) n
FROM (
SELECT DISTINCT [MyColumn]
FROM [MyTable]
) items
) distinctItems
Try this..
SELECT CASE
WHEN a IS NULL THEN 'Null'
ELSE 'Not Null'
END a,
Count(1)
FROM us
GROUP BY CASE
WHEN a IS NULL THEN 'Null'
ELSE 'Not Null'
END
Here are two solutions:

select count(columnname) as countofNotNulls,
       count(isnull(columnname, 1)) - count(columnname) as countofNulls
from tablename

OR

select count(columnname) as countofNotNulls,
       count(*) - count(columnname) as countofNulls
from tablename
Try this (works on MySQL, where ISNULL(expr) returns 1 or 0):
SELECT
SUM(ISNULL(a)) AS all_null,
SUM(!ISNULL(a)) AS all_not_null
FROM us;
Simple!
If you're using MS Sql Server...
SELECT COUNT(0) AS 'Null_ColumnA_Records',
(
SELECT COUNT(0)
FROM your_table
WHERE ColumnA IS NOT NULL
) AS 'NOT_Null_ColumnA_Records'
FROM your_table
WHERE ColumnA IS NULL;
I don't recommend doing this... but here you have it (both counts in the same result set). Alternatively, you could use the ISNULL built-in function.
If your database supports it (Snowflake and Trino/Presto, for example), the COUNT_IF function is the simplest way of doing this query.
SELECT
COUNT_IF(a IS NULL) AS nulls,
COUNT_IF(a IS NOT NULL) AS not_nulls
FROM
us
SELECT SUM(NULLs) AS 'NULLS', SUM(NOTNULLs) AS 'NOTNULLs' FROM
(select count(*) AS 'NULLs', 0 as 'NOTNULLs' FROM us WHERE a is null
UNION select 0 as 'NULLs', count(*) AS 'NOTNULLs' FROM us WHERE a is not null) AS x
It's fugly, but it will return a single record with 2 cols indicating the count of nulls vs non nulls.
This works in T-SQL. If you're just counting the number of something and you want to include the nulls, use COALESCE instead of case.
IF OBJECT_ID('tempdb..#us') IS NOT NULL
DROP TABLE #us
CREATE TABLE #us
(
a INT NULL
);
INSERT INTO #us VALUES (1),(2),(3),(4),(NULL),(NULL),(NULL),(8),(9)
SELECT * FROM #us
SELECT CASE WHEN a IS NULL THEN 'NULL' ELSE 'NON-NULL' END AS 'NULL?',
COUNT(CASE WHEN a IS NULL THEN 'NULL' ELSE 'NON-NULL' END) AS 'Count'
FROM #us
GROUP BY CASE WHEN a IS NULL THEN 'NULL' ELSE 'NON-NULL' END
SELECT COALESCE(CAST(a AS NVARCHAR),'NULL') AS a,
COUNT(COALESCE(CAST(a AS NVARCHAR),'NULL')) AS 'Count'
FROM #us
GROUP BY COALESCE(CAST(a AS NVARCHAR),'NULL')
Building off of Alberto's answer, I added the rollup:
SELECT [Narrative] = CASE WHEN [Narrative] IS NULL THEN 'count_total' ELSE [Narrative] END,
       [Count] = SUM([Count])
FROM (
    SELECT COUNT(*) AS [Count], 'count_nulls' AS [Narrative]
    FROM [CrmDW].[CRM].[User]
    WHERE [EmployeeID] IS NULL
    UNION
    SELECT COUNT(*), 'count_not_nulls'
    FROM [CrmDW].[CRM].[User]
    WHERE [EmployeeID] IS NOT NULL
) S
GROUP BY [Narrative] WITH CUBE;
SELECT ALL_VALUES,
       COUNT(ALL_VALUES)
FROM (
    SELECT NVL2(A, 'NOT NULL', 'NULL') AS ALL_VALUES,  -- NVL2(x, a, b) returns a when x is not null, b when it is
           NVL(A, 0)
    FROM US
)
GROUP BY ALL_VALUES
select count(isnull(NullableColumn,-1))
If it's MySQL, you can try something like this (note: NULL must be tested with IS NULL, not compared to the string 'null'):
select
(select count(*) from TABLENAME where a is null) as total_null,
(select count(*) from TABLENAME where a is not null) as total_not_null
Just in case you wanted it in a single record:
select
(select count(*) from tbl where colName is null) Nulls,
(select count(*) from tbl where colName is not null) NonNulls
;-)
For counting not-null values:
select count(*) from us where a is not null;
For counting null values:
select count(*) from us where a is null;
I created the table in Postgres 10 and both of the following run:
select count(*) from us
and
select count(a is null) from us
Be aware, though, that count(a is null) counts every row, because the boolean expression a is null is itself never NULL; to count just the nulls, use something like sum((a is null)::int).
In my case I wanted the "null distribution" amongst multiple columns:
SELECT
(CASE WHEN a IS NULL THEN 'NULL' ELSE 'NOT-NULL' END) AS a_null,
(CASE WHEN b IS NULL THEN 'NULL' ELSE 'NOT-NULL' END) AS b_null,
(CASE WHEN c IS NULL THEN 'NULL' ELSE 'NOT-NULL' END) AS c_null,
...
count(*)
FROM us
GROUP BY 1, 2, 3,...
ORDER BY 1, 2, 3,...
As the '...' suggests, it is easily extendable to as many columns as needed.
Number of elements where a is null:
select count(*) from us where a is null;
Number of elements where a is not null:
select count(a) from us where a is not null;