Alias column name for use in CASE statement - SQL

I have a SQL query like this:
SELECT a.ID as AID, a.Amt as AAmt
FROM
(SELECT
ID,
CASE
WHEN Col1 = 0
THEN SUM (Col2 + Col3)
ELSE 0
END AS Amt
FROM table1
GROUP BY ID, Amt) AS a
I get an error:
Invalid column name 'Amt'.
(The error refers to Amt in the GROUP BY clause.)

You cannot GROUP BY the alias. Try:
SELECT a.ID as AID, a.Amt as AAmt
FROM
(SELECT
ID,
CASE
WHEN Col1 = 0
THEN SUM (Col2 + Col3)
ELSE 0
END AS Amt
FROM table1
GROUP BY ID, Col1) AS a
If you have a look at the SQL Query Order of Operations, you will note that the order is:
1. FROM clause
2. WHERE clause
3. GROUP BY clause
4. HAVING clause
5. SELECT clause
6. ORDER BY clause
This means that the GROUP BY is processed before the SELECT, which is where the alias is defined, so the alias is not yet available.
This also explains why you can ORDER BY an alias: ORDER BY is processed after the SELECT.
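For example (a minimal sketch against SQL Server, which is where the "Invalid column name" message comes from, reusing table1 and the column names from the question):
-- Fails: GROUP BY is evaluated before SELECT, so the alias Amt does not exist yet
SELECT ID, SUM(Col2 + Col3) AS Amt
FROM table1
GROUP BY ID, Amt;
-- Works: ORDER BY is evaluated after SELECT, so it can see the alias
SELECT ID, SUM(Col2 + Col3) AS Amt
FROM table1
GROUP BY ID
ORDER BY Amt;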

Your query seems overly complicated, and the intent isn't entirely clear. I suspect you want conditional aggregation:
SELECT ID, SUM(CASE WHEN col1 = 0 THEN col2 + col3 ELSE 0 END)
FROM table1
GROUP BY ID;

Related

Adding case condition in select clause gives "either an aggregate function or the GROUP BY clause" error

I have defined my select query like this:
SELECT Day(Date) as Day,
...
Case
when (SUM(GallonsPumped)*0.01 +130 <ABS(Sum(DailyVar))) Then 'Fail'
Else 'pass'
End as result,
....
FROM [dbo].[zzz]
WHERE date >='2019-09-12' and date<='2019-10-09' and SiteCode='0209365' and CompanyId = 67
order by CAST(Date AS Date) asc
But I'm getting:
Column 'tablezzz.Date' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
This occurs because of my CASE statement; if I remove it, my select query works fine. What am I doing wrong here?
You need a GROUP BY for the columns that are not involved in an aggregation function.
So, assuming you have columns Day, col1, col2, col3 not involved in aggregation, you need GROUP BY Day(Date), col1, col2, col3:
SELECT Day(Date) as Day,
col1,
col2,
Case
when (SUM(GallonsPumped)*0.01 +130 <ABS(Sum(DailyVar))) Then 'Fail'
Else 'pass'
End as result,
col3
FROM [dbo].[zzz]
WHERE date >='2019-09-12' and date<='2019-10-09' and SiteCode='0209365' and CompanyId = 67
GROUP BY Day(Date), col1, col2, col3
order by Day(Date) asc

In a GROUP BY, is there a way to refer to a column in the SELECT clause whose name clashes with a column in the FROM clause?

Given the following query:
with t1 as (
select column1 as type, column2 as val1 from values
(1,2)
,(3,4)
,(5,6)
)
select
case
when val1 > 2 then 'a'
else 'b'
end as type,
count(*)
from t1
group by type
;
I get the error SQL compilation error: error line 9 at position 13 'T1.VAL1' in select clause is neither an aggregate nor in the group by clause. (using Snowflake).
It thinks that the type in group by type refers to t1.type instead of the case...end as type column in the SELECT clause (which is the one I meant to GROUP BY).
My question is: is there any concise way to refer to the case ... end as type in the GROUP BY without copying and pasting the whole case ... end into the GROUP BY?
I know that I can explicitly refer to the type in t1 using GROUP BY t1.type. Is there something like GROUP BY this.type (where this would refer to this query SELECT clause)?
You can use a lateral join to move the definition to the from clause:
with t1 as (
select column1 as type, column2 as val1
from values (1, 2), (3, 4), (5, 6)
)
select v.type, count(*)
from t1 cross join lateral
(values (case when t1.val1 > 2 then 'a' else 'b' end)
) v(type)
group by v.type;
I strongly recommend that you qualify all column references. Do not rely on scoping rules to figure out what your references mean. Lateral joins are one convenient mechanism for defining column aliases.
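For instance, building on the query above (still the column names from the question), the alias defined in the lateral VALUES can be reused in other clauses without repeating the CASE:
select v.type, count(*)
from t1 cross join lateral
     (values (case when t1.val1 > 2 then 'a' else 'b' end)
     ) v(type)
where v.type = 'a'   -- the alias defined in the FROM clause is visible here ...
group by v.type;     -- ... and here, with no copy of the CASE expression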
My current workaround for such situations is to add another step into the CTE:
with t1 as (
select column1 as type, column2 as val1 from values
(1,2)
,(3,4)
,(5,6)
)
, t2 as (
select
case
when val1 > 2 then 'a'
else 'b'
end as type
from t1
)
select type,
count(*)
from t2
group by type
;
The intermediate t2 removes t1.type from the scope, so when you do GROUP BY type it resolves to t2.type, which is the case...end column.
I'm not sure this qualifies as a concise solution, but it's the closest I was able to get so far.
You can enclose the alias in double quotes. The quoted, lower-case "type" no longer matches the (upper-cased) type column from t1, so GROUP BY "type" resolves to the alias in the SELECT list:
with t1 as (
select column1 as type, column2 as val1 from values
(1,2)
,(3,4)
,(5,6)
)
select
case
when val1 > 2 then 'a'
else 'b'
end as "type",
count(*)
from t1
group by "type"
;

How to count the number of unique rows, considering that row (A, B) = row (B, A)

I am new to SQL, so sorry in advance for possible mistakes or incorrect questions.
I am trying to solve the following task:
There is a table with two columns.
My task is to COUNT the number of unique rows, considering that the rows which have the same information (regardless of the order) are counted as 1.
E.g. row [1] (a, b) and row [2] (b, a) should be counted as 1.
So the result of the query should be 3.
You can use aggregation:
select (case when col1 < col2 then col1 else col2 end) as least,
(case when col1 < col2 then col2 else col1 end) as greatest,
count(*)
from t
group by (case when col1 < col2 then col1 else col2 end),
(case when col1 < col2 then col2 else col1 end);
Many databases support the least() and greatest() functions which simplify this logic a bit.
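For example, on a database that has them (e.g. Postgres or MySQL), this is a sketch of the same query as above:
select least(col1, col2), greatest(col1, col2), count(*)
from t
group by least(col1, col2), greatest(col1, col2);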
Try the following:
select *, count(*)
from
(
select
case when Column2<Column1 then Column2 else Column1 end as Column1,
case when Column1>Column2 then Column1 else Column2 end as Column2
from tab
) as t
group by Column1,Column2
Not the most efficient way of doing it, but if you don't need to group by, here is another method:
select
count(distinct case when col2<col1 then concat(col2,col1) else concat(col1,col2) end)
from your_table;
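One caveat: concatenating the values directly can make different pairs collide, e.g. ('1', '23') and ('12', '3') both become '123'. If there is a character such as '|' that never appears in the data, putting it between the values avoids that:
select
count(distinct case when col2 < col1 then concat(col2, '|', col1) else concat(col1, '|', col2) end)
from your_table;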

Using distinct in case condition sql

Is there a way to use DISTINCT in a CASE statement in SQL?
For example:
CASE WHEN col1 = 'XYZ' and DISTINCT(col1,col2)
THEN 'Do_This'
ELSE 'Do_That'
END
Unfortunately the usage of DISTINCT inside the case statement as above throws an error.
I'm using postgres/redshift sql.
First of all, your objective is not clear, but from your query my assumption is that you want 'Do_This' or 'Do_That' depending on whether col1 is 'XYZ', evaluated over the distinct combinations of col1 and col2 (i.e. a GROUP BY col1, col2):
SELECT
CASE WHEN COL1 = 'XYZ'
THEN 'DO_THIS'
ELSE 'DO_THAT'
END
FROM
(
SELECT
COL1, COL2
FROM YOUR_TABLE
GROUP BY COL1, COL2
) AS T

Is it possible to have a CASE WHEN THEN SELECT in a WHERE CLAUSE

I was just wondering if it would be possible to have a CASE statement in a WHERE clause exactly in this form...
SELECT *
FROM TABLEA
WHERE date between '2014-02-01' and '2014-02-28' and
CASE
WHEN date>'2014-02-28' THEN (SELECT FROM TABLEC WHERE...)
ELSE (SELECT FROM TABLE B WHERE...)
END
Thanks!
Yes, this is possible under the following circumstances:
1. The subqueries return one value.
2. There is an outside comparison such as = or >.
3. The CASE expression returns scalar values.
A row with one column and one value is "equivalent" to a scalar value. So, the following would be allowed:
where col = (CASE WHEN date > '2014-02-28' THEN (SELECT max(col2) FROM TABLEC WHERE...)
ELSE (SELECT min(col3) FROM TABLE B WHERE...)
END)
But you probably want a conditional IN instead. Eschew the CASE:
where (date > '2014-02-28' and col in (SELECT max(col2) FROM TABLEC WHERE...)) or
      (date <= '2014-02-28' and col in (SELECT min(col3) FROM TABLE B WHERE...))