Group by a set of columns and check if a value corresponding to another column exists in Teradata/SQL - sql

Input:
I need to group by (mnth_end_d, id, name) and check whether there is a value for debt_calc when period_rank = 'T'. If there is, create a new column debt_exists and populate it with TRUE or FALSE, as in the output table below. Any tips on how to do that in Teradata without using any joins (perhaps using a window function with PARTITION BY)?
Output:

Your requirement translates into a conditional aggregation in SQL:
-- if there is a value for debt_calc corresponding to when period_rank = 'T'
max(case when period_rank = 'T' and debt_calc is not null
then 'TRUE' -- populate it with TRUE or FALSE
else 'FALSE'
end)
over (partition by mnth_end_d, id, name) as debt_exists
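A runnable sketch of the windowed conditional aggregation above, using Python's sqlite3 as a stand-in for Teradata (sqlite supports the same OVER (PARTITION BY ...) syntax); the table and sample rows are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE debts (mnth_end_d TEXT, id INTEGER, name TEXT,
                    period_rank TEXT, debt_calc REAL);
INSERT INTO debts VALUES
  ('2023-01-31', 1, 'A', 'T', 100.0),
  ('2023-01-31', 1, 'A', 'S', NULL),
  ('2023-01-31', 2, 'B', 'T', NULL),
  ('2023-01-31', 2, 'B', 'S', 50.0);
""")

rows = con.execute("""
SELECT mnth_end_d, id, name, period_rank,
       MAX(CASE WHEN period_rank = 'T' AND debt_calc IS NOT NULL
                THEN 'TRUE' ELSE 'FALSE' END)
         OVER (PARTITION BY mnth_end_d, id, name) AS debt_exists
FROM debts
ORDER BY id, period_rank
""").fetchall()
for r in rows:
    print(r)
```

Note that MAX works here because 'TRUE' sorts after 'FALSE' alphabetically: if any row in the partition yields 'TRUE', the whole partition gets 'TRUE'.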

Related

SQL query to return rows with one distinct field and use CASE to create new evaluation column

I want to write an SQL query to return rows with one distinct field and use CASE to create new evaluation column. Any help is appreciated. Deets below:
table:
id        status    category
string    string    bigint
--------  --------  ----------
pseudo query:
return (distinct id), time_created, NEW_COL
where category is 123123
and where new_col //create new col with these values
(
if status = 'good' then 'GOOD'
if status = 'bad' then 'BAD'
)
FROM table
result:
id    time_created    new_col
1     Jun-1           BAD
2     Jul-21          GOOD
3     Jun-12          GOOD
4     Aug-1           GOOD
--- I keep getting a lint error right on my CASE keyword:
"expecting " '%', '*', '+', '-', '.', '/', 'AT', '[', '||',""
one of queries I tried:
SELECT
ID, time_created
CASE
WHEN status = 'good' THEN 'GOOD'
WHEN status = 'bad' THEN 'BAD'
END
as STATUS_new
FROM TBL
WHERE CATEGORY = '871654671'
ORDER BY time_created
You just have a small syntax error (and a bad column name in your SQL fiddle): you need a comma after the time_created column.
SELECT
ID, time as time_created,
CASE
WHEN status = 'good' THEN 'GOOD'
WHEN status = 'bad' THEN 'BAD'
END
as STATUS_new
FROM TBL
WHERE CATEGORY = '871654671'
ORDER BY time_created
Here is the working query:
http://www.sqlfiddle.com/#!18/7293b5/11
SELECT
ID, TIME, 'STATUS_new' =
CASE STATUS
WHEN 'good' THEN 'GOOD'
WHEN 'bad' THEN 'BAD'
END
FROM TBL
WHERE CATEGORY = '871654671'
ORDER BY TIME
You can put the new column name before the CASE (SQL Server syntax). The column being tested can also be named directly after the CASE keyword, so that every WHEN compares against it (the "simple CASE" form).
In your fiddle you also used the wrong name for your TIME column.
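The corrected pattern from the first answer (comma before CASE, alias via AS), shown runnable with Python's sqlite3; table contents are invented, and sqlite does not support the SQL Server-style 'alias' = expression form, so only the standard AS form is demonstrated:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TBL (ID TEXT, TIME TEXT, STATUS TEXT, CATEGORY TEXT);
INSERT INTO TBL VALUES
  ('1', 'Jun-1',  'bad',  '871654671'),
  ('2', 'Jul-21', 'good', '871654671'),
  ('3', 'Jun-12', 'good', '999');
""")

rows = con.execute("""
SELECT ID, TIME AS time_created,      -- note the comma before CASE
       CASE WHEN STATUS = 'good' THEN 'GOOD'
            WHEN STATUS = 'bad'  THEN 'BAD'
       END AS STATUS_new
FROM TBL
WHERE CATEGORY = '871654671'
ORDER BY time_created
""").fetchall()
print(rows)
```

(With these text dates, ORDER BY sorts 'Jul-21' before 'Jun-1' alphabetically; a real schema would use a date type.)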

How to create a new column based on data out from a query

I have a quick question.
Attaching the screenshot for reference.
How can I add a new column 'Status' set to 'No' based on an NVL condition: if the id is null, I have to map it to the corresponding id of another table.
You can do that using a CASE expression:
select nvl(b.id, a.id) as id,
       b.name,
       case when nvl(b.id, a.id) is null then 'No' else 'Yes' end as Status
from dd b,
     (select id, name from demo group by id, name) a
where a.id = b.id(+)
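The same query in ANSI join syntax, runnable with Python's sqlite3: Oracle's (+) marks the optional side of the join (so this is a LEFT JOIN from the subquery to dd), and NVL becomes COALESCE. The table contents here are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE demo (id INTEGER, name TEXT);
CREATE TABLE dd   (id INTEGER, name TEXT);
INSERT INTO demo VALUES (1, 'x'), (2, 'y');
INSERT INTO dd   VALUES (1, 'x');
""")

rows = con.execute("""
SELECT COALESCE(b.id, a.id) AS id,
       b.name,
       CASE WHEN COALESCE(b.id, a.id) IS NULL THEN 'No' ELSE 'Yes' END AS Status
FROM (SELECT id, name FROM demo GROUP BY id, name) a
LEFT JOIN dd b ON a.id = b.id
ORDER BY id
""").fetchall()
print(rows)
```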

how to ignore the values before and after between the rows in a table

I have a table like the one below.
I have to add a new column "value_" that contains the age values for the rows between the last TRUE (row number 4, highlighted in green) and the last record (the one with 01-01-9999); all remaining rows should be zero, like below.
If all values are FALSE (except the last record, which has 01-01-9999), then we need all the age values, like below.
How can I achieve this in SQL? Could you please help me with this?
If I understand correctly, here is one way:
SELECT *,
       CASE WHEN maxtrueflag IS NULL THEN age_
            WHEN from_ >= maxtrueflag THEN age_
            ELSE 0
       END AS value_
FROM (
    SELECT *, MAX(CASE WHEN flag = TRUE THEN from_ END) OVER () AS maxtrueflag
    FROM tableName
) t

BigQuery(standard SQL) grouping values based on first CASE WHEN statement

Here is my query with the output below the syntax.
SELECT DISTINCT CASE WHEN id = 'RUS0261431' THEN value END AS sr_type,
COUNT(CASE WHEN id in ('RUS0290788') AND value in ('1','2','3','4') THEN respondentid END) AS sub_ces,
COUNT(CASE WHEN id IN ('RUS0290788') AND value in ('5','6','7') THEN respondentid END) AS pos_ces,
COUNT(*) as total_ces
FROM `some_table`
WHERE id in ( 'RUS0261431') AND id <> '' AND value IS NOT NULL
GROUP BY 1
As you can see from the attached table, I'm unable to group the values based on id RUS0290788 with the distinct values that map to RUS0261431. Is there any way to pivot by altering my CASE WHEN statements so I can group sub_ces and pos_ces by sr_type? Thanks in advance.
You can simplify your WHERE condition to WHERE id = 'RUS0261431'. Only records with this value will be selected, so you do not have to repeat it in the CASE statements.
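On the COUNT(CASE ...) pattern the query relies on: COUNT(expr) skips NULLs, and a CASE with no ELSE yields NULL when the condition fails, so each COUNT tallies only matching rows. A small sqlite3 demo with invented values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (respondentid INTEGER, value TEXT);
INSERT INTO t VALUES (1,'2'), (2,'5'), (3,'7'), (4,'3');
""")

row = con.execute("""
SELECT COUNT(CASE WHEN value IN ('1','2','3','4') THEN respondentid END) AS sub_ces,
       COUNT(CASE WHEN value IN ('5','6','7')     THEN respondentid END) AS pos_ces,
       COUNT(*) AS total_ces
FROM t
""").fetchone()
print(row)
```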

Condition while aggregation in Spark

This question is related to conditional aggregation on SQLs. Normally we put conditions using 'case' statement in select clause but that case condition checks only the row under consideration. Consider the below data:
BEGIN TRANSACTION;
/* Create a table called NAMES */
CREATE TABLE NAMES(M CHAR, D CHAR, A INTEGER);
/* Create few records in this table */
INSERT INTO NAMES VALUES('M1','Y',2);
INSERT INTO NAMES VALUES('M1','Y',3);
INSERT INTO NAMES VALUES('M2','Y',2);
INSERT INTO NAMES VALUES('M2',null,3);
INSERT INTO NAMES VALUES('M3',null,2);
INSERT INTO NAMES VALUES('M3',null,3);
COMMIT;
This query groups by column 'M', checks (separately for each record) whether column 'D' is 'Y', and applies a sum aggregation on column 'A'.
select sum(case when D = 'Y' then 0 else A end) from NAMES group by M;
Output for this query is:
M1|0
M2|3
M3|5
But suppose we want to check column 'D' across all records in the group: if any record in the group has D = 'Y', do not perform the 'sum' aggregation at all.
In brief, the expected output for the above scenario is:
M1|0
M2|0
M3|5
Answers in Spark SQL are highly appreciated.
You can wrap the aggregation in another case expression that first checks whether the group contains any 'Y' record:
select M,
       (case when max(case when D = 'Y' then 1 else 0 end) = 1 -- group has a 'Y'
             then 0
             else sum(A)
        end)
from NAMES
group by M;
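The required group-level check (any 'Y' in the group suppresses the sum), run with Python's sqlite3 against the question's own CREATE/INSERT script; the same SQL works in Spark SQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE NAMES(M CHAR, D CHAR, A INTEGER);
INSERT INTO NAMES VALUES('M1','Y',2);
INSERT INTO NAMES VALUES('M1','Y',3);
INSERT INTO NAMES VALUES('M2','Y',2);
INSERT INTO NAMES VALUES('M2',NULL,3);
INSERT INTO NAMES VALUES('M3',NULL,2);
INSERT INTO NAMES VALUES('M3',NULL,3);
""")

rows = con.execute("""
SELECT M,
       CASE WHEN MAX(CASE WHEN D = 'Y' THEN 1 ELSE 0 END) = 1
            THEN 0            -- group contains a 'Y': suppress the sum
            ELSE SUM(A)
       END AS total
FROM NAMES
GROUP BY M
ORDER BY M
""").fetchall()
print(rows)
```

This yields the expected M1|0, M2|0, M3|5.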