Selecting records when a criterion is met - SQL

I am trying to come up with a SQL query that will NOT select records when the Error value is "True" and the User_ID and Error_Dt match those of another row. I thought perhaps a main query using the IN operator, where the parameter would be a subquery that identifies the duplicates by User_ID and Error_Dt. For example:
Sample Data:
+----------+-------+---------+----------+
| Error_ID | Error | User_ID | Error_Dt |
+----------+-------+---------+----------+
| Err_A_01 | True  | JP_123  | 20200307 |
| Err_A_02 | True  | DF_455  | 20200605 |
| Err_A_03 | True  | DF_455  | 20200605 |
| Err_A_04 | False | DF_455  | 20200703 |
| Err_B_01 | False | BH_135  | 20200219 |
| Err_B_02 | True  | DP_246  | 20200310 |
| Err_B_03 | True  | DP_246  | 20200310 |
| Err_B_04 | True  | DP_246  | 20200509 |
| Err_B_05 | False | DP_246  | 20200601 |
| Err_B_06 | True  | KS_159  | 20200120 |
| Err_B_07 | True  | KS_159  | 20200120 |
| Err_B_08 | True  | KS_159  | 20200310 |
| Err_C_01 | False | JH_123  | 20200702 |
+----------+-------+---------+----------+
Desired results:
+----------+-------+---------+----------+
| Error_ID | Error | User_ID | Error_Dt |
+----------+-------+---------+----------+
| Err_A_01 | True  | JP_123  | 20200307 |
| Err_A_04 | False | DF_455  | 20200703 |
| Err_B_01 | False | BH_135  | 20200219 |
| Err_B_04 | True  | DP_246  | 20200509 |
| Err_B_05 | False | DP_246  | 20200601 |
| Err_B_08 | True  | KS_159  | 20200310 |
| Err_C_01 | False | JH_123  | 20200702 |
+----------+-------+---------+----------+

Select only the rows that are unique on Error + User_ID + Error_Dt, or whose Error is not 'True':
select Error_ID, Error, User_ID, Error_Dt
from (
    select *,
           count(*) over (partition by Error, User_ID, Error_Dt) as cnt  -- rows sharing this Error/User_ID/Error_Dt
    from tbl
) t
where Error <> 'True' or cnt = 1
order by Error_ID;
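If you'd rather avoid window functions, a NOT EXISTS version should give the same result - a sketch, assuming Error_ID uniquely identifies a row:
select t.Error_ID, t.Error, t.User_ID, t.Error_Dt
from tbl t
where t.Error <> 'True'
   or not exists (
        select 1
        from tbl d
        where d.Error = t.Error
          and d.User_ID = t.User_ID
          and d.Error_Dt = t.Error_Dt
          and d.Error_ID <> t.Error_ID  -- another row in the same group
      )
order by t.Error_ID;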

Divide window values by a reference row

I would like some guidance or help with the following problem:
I have the following data in a Spark DataFrame.
I would like to create a window of n days preceding and succeeding a reference record, and then divide each value in the window by the reference value.
However, I have not figured out how to do this kind of operation; everything I find only covers mean, count or sum operations over a window.
Original data looks like this:
| symbol_id | date       | close    | is_reference |
|-----------|------------|----------|--------------|
| XXXX      | 2000-01-19 | 809.9644 | FALSE        |
| XXXX      | 2000-01-20 | 784.274  | FALSE        |
| XXXX      | 2000-01-21 | 774.2831 | FALSE        |
| XXXX      | 2000-01-24 | 760.0106 | FALSE        |
| XXXX      | 2000-01-25 | 750.7335 | FALSE        |
| XXXX      | 2000-01-26 | 750.7335 | TRUE         |
| XXXX      | 2000-01-27 | 742.17   | FALSE        |
| XXXX      | 2000-01-28 | 749.3063 | FALSE        |
| XXXX      | 2000-01-31 | 750.02   | FALSE        |
| XXXX      | 2000-02-01 | 762.8653 | FALSE        |
| XXXX      | 2000-02-02 | 749.3063 | FALSE        |
Expected output looks like this:
| symbol_id | date       | close    | is_reference | reference_change  |
|-----------|------------|----------|--------------|-------------------|
| XXXX      | 2000-01-19 | 809.9644 | FALSE        | 1.07889737170381  |
| XXXX      | 2000-01-20 | 784.274  | FALSE        | 1.04467697258748  |
| XXXX      | 2000-01-21 | 774.2831 | FALSE        | 1.03136878799201  |
| XXXX      | 2000-01-24 | 760.0106 | FALSE        | 1.0123573811479   |
| XXXX      | 2000-01-25 | 750.7335 | FALSE        | 1                 |
| XXXX      | 2000-01-26 | 750.7335 | TRUE         | 1                 |
| XXXX      | 2000-01-27 | 742.17   | FALSE        | 0.988593155893536 |
| XXXX      | 2000-01-28 | 749.3063 | FALSE        | 0.99809892591712  |
| XXXX      | 2000-01-31 | 750.02   | FALSE        | 0.999049596161621 |
| XXXX      | 2000-02-01 | 762.8653 | FALSE        | 1.01615992892285  |
| XXXX      | 2000-02-02 | 749.3063 | FALSE        | 0.99809892591712  |
I'm currently partitioning by symbol_id using the following snippet:
val window = Window.partitionBy(SYMBOL_ID)
  .orderBy(col(DATE).desc)
  .rowsBetween(5, 0) // RangeBetween looks better, but I'm just trying rowsBetween for now
And I am trying to do something like this for the reference_change column:
df
  .withColumn("close_movement", $"close"/lit(col("close")
    .where(col("is_reference") === true)).over(window)) // This command is wrong, but it's the closest to what I have in mind.
So in the end, each close in the window should be divided by the close WHERE is_reference = true, giving the reference_change column shown in the expected output.
Thank you for your help!
I would just use a simple join:
import org.apache.spark.sql.functions._

// alias both sides so the self-join columns stay unambiguous
val ref = df.filter($"is_reference").as("r")

df.as("d")
  .join(ref, $"d.symbol_id" === $"r.symbol_id" &&
    abs(datediff($"d.date", $"r.date")) <= 5)
  .select($"d.symbol_id", $"d.date", $"d.close", $"d.is_reference",
    ($"d.close" / $"r.close").as("reference_change"))
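The same join expressed in Spark SQL, as a sketch - "quotes" is a temp-view name introduced here for illustration:
-- df.createOrReplaceTempView("quotes")
SELECT d.symbol_id, d.date, d.close, d.is_reference,
       d.close / r.close AS reference_change
FROM quotes d
JOIN quotes r
  ON  d.symbol_id = r.symbol_id
  AND r.is_reference
  AND ABS(DATEDIFF(d.date, r.date)) <= 5
ORDER BY d.date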

Oracle SQL get last year of dates excluding weekends

I'd expect this to work to get me a list of calendar dates over the past 12 months excluding weekends, but it just gives me the entire list of dates - which I suppose is fine - but I want to know why the query below is incorrect.
SELECT ADD_MONTHS(TRUNC(SYSDATE,'MM'),-12) - 1 + rownum AS CalendarDate
FROM all_objects
WHERE ADD_MONTHS(TRUNC(SYSDATE,'MM'),-12) - 1 + rownum <= sysdate
AND to_char(sysdate,'DY') NOT IN ('SAT','SUN')
Because you're doing this:
AND to_char(sysdate,'DY') NOT IN ('SAT','SUN')
And today isn't Saturday or Sunday. You need to look at the calculated CalendarDate value; but you can't do that in the same level of subquery. You could try to recalculate it:
AND to_char(ADD_MONTHS(TRUNC(SYSDATE,'MM'),-12) - 1 + rownum,'DY') NOT IN ('SAT','SUN')
but this will return no rows - at least when run at the moment. As it happens, March 1st 2020 was a Sunday, so that is excluded; and because rownum is only incremented when a row passes the filter, the next candidate row sees the same rownum, generates the same excluded date, and so on - so nothing is ever returned.
You can use an inline view to avoid both issues:
SELECT CalendarDate
FROM (
    SELECT ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -12) - 1 + rownum AS CalendarDate
    FROM all_objects
    WHERE ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -12) - 1 + rownum <= sysdate
)
WHERE to_char(CalendarDate, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') NOT IN ('SAT', 'SUN')
CALENDARDATE
02-MAR-20
03-MAR-20
04-MAR-20
05-MAR-20
06-MAR-20
09-MAR-20
10-MAR-20
...
db<>fiddle
I've chucked in a language modifier to stop it behaving differently for users with sessions not set to English.
Querying against all_objects isn't ideal though; it would be better to use a hierarchical query:
SELECT *
FROM (
    SELECT ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -12) - 1 + level AS CalendarDate
    FROM dual
    CONNECT BY level <= TRUNC(SYSDATE) - ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -12) + 1
)
WHERE to_char(CalendarDate, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') NOT IN ('SAT', 'SUN')
ORDER BY CalendarDate
db<>fiddle
or a recursive CTE, if you're 11gR2+:
WITH rcte (CalendarDate) AS (
    SELECT ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -12)
    FROM dual
    UNION ALL
    SELECT rcte.CalendarDate + interval '1' day
    FROM rcte
    WHERE rcte.CalendarDate < TRUNC(SYSDATE)
)
SELECT CalendarDate
FROM rcte
WHERE to_char(CalendarDate, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') NOT IN ('SAT', 'SUN')
ORDER BY CalendarDate
db<>fiddle (as 18c to avoid a couple of issues with the patch level in the 11g version it uses).
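If you'd rather not depend on NLS settings at all, one alternative sketch compares each date with the Monday of its ISO week - TRUNC(date, 'IW') is language-independent:
WITH rcte (CalendarDate) AS (
    SELECT ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -12)
    FROM dual
    UNION ALL
    SELECT rcte.CalendarDate + interval '1' day
    FROM rcte
    WHERE rcte.CalendarDate < TRUNC(SYSDATE)
)
SELECT CalendarDate
FROM rcte
-- offset from Monday: 0-4 are Mon-Fri, 5 and 6 are Saturday and Sunday
WHERE CalendarDate - TRUNC(CalendarDate, 'IW') < 5
ORDER BY CalendarDate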
You're checking whether today is Saturday or Sunday with to_char(sysdate,'DY'); you need to check CalendarDate, which is not available at that level of the query. You can use a CTE to calculate the calendar, then remove weekends with your condition, as below.
with cte (CalendarDate) as
(
    SELECT ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -12) - 1 + rownum AS CalendarDate
    FROM all_objects
    WHERE ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -12) - 1 + rownum <= sysdate
)
select * from cte
where to_char(CalendarDate, 'DY') not in ('SAT', 'SUN');
| CALENDARDATE |
| :----------- |
| 02-MAR-20 |
| 03-MAR-20 |
| 04-MAR-20 |
| 05-MAR-20 |
| 06-MAR-20 |
| 09-MAR-20 |
| 10-MAR-20 |
| ... |
| 08-MAR-21 |
| 09-MAR-21 |
db<>fiddle here

Counting based on group of 1st column

I am using the following query to count how many bill dates each BAN has:
select replace(c.usertoken, '-', '') as BAN
, to_char(to_date(bi.name,'YYYY-MM-DD'),'dd-mm-yy') as Billdate_dmy
, (replace(c.usertoken, '-', '') ||':'|| to_char(to_date(bi.name,'YYYY-MM-DD'),'dd-mm-yy')) as BAN_Billdate_dmy
, count(c.usertoken) as Number_Of_Bills
from customer c
, service s
, document d
, bill bi
, batch ba
, billrun br
where c.ID = s.CUSTOMER_SERVICE_ID
and s.ID = d.SERVICE_DOCUMENT_ID
and bi.ID = d.BILL_DOCUMENT_ID
and d.BATCH = ba.ID
and ba.BILLRUN = br.ID
and br.STATUS = 'APPROVED'
and c.brand='rogers'
and d.VERSIONEDCONTENTFOLDER='cbu'
group by c.usertoken, bi.name
order by c.usertoken
Output of the above query:
+-----------+----------+--------------------+-------+
| BAN       | Billdate | BAN_Billdate       | Count |
+-----------+----------+--------------------+-------+
| 100001247 | 25-09-19 | 100001247:25-09-19 | 1     |
| 100001247 | 25-10-19 | 100001247:25-10-19 | 1     |
| 100002583 | 15-10-19 | 100002583:15-10-19 | 1     |
| 100004753 | 25-09-19 | 100004753:25-09-19 | 1     |
| 100004753 | 25-10-19 | 100004753:25-10-19 | 1     |
| 100005719 | 25-09-19 | 100005719:25-09-19 | 1     |
| 100005719 | 25-10-19 | 100005719:25-10-19 | 1     |
| 100006311 | 06-09-19 | 100006311:06-09-19 | 1     |
| 100009596 | 25-09-19 | 100009596:25-09-19 | 1     |
| 100009596 | 25-10-19 | 100009596:25-10-19 | 1     |
+-----------+----------+--------------------+-------+
However, I was expecting the following output:
+-----------+----------+--------------------+-------+
| BAN       | Billdate | BAN_Billdate       | Count |
+-----------+----------+--------------------+-------+
| 100001247 | 25-09-19 | 100001247:25-09-19 | 2     |
| 100001247 | 25-10-19 | 100001247:25-10-19 | 2     |
| 100002583 | 15-10-19 | 100002583:15-10-19 | 3     |
| 100004753 | 25-09-19 | 100004753:25-09-19 | 3     |
| 100004753 | 25-10-19 | 100004753:25-10-19 | 3     |
| 100005719 | 25-09-19 | 100005719:25-09-19 | 2     |
| 100005719 | 25-10-19 | 100005719:25-10-19 | 2     |
| 100006311 | 06-09-19 | 100006311:06-09-19 | 1     |
| 100009596 | 25-09-19 | 100009596:25-09-19 | 2     |
| 100009596 | 25-10-19 | 100009596:25-10-19 | 2     |
+-----------+----------+--------------------+-------+
Please advise what changes I should make to the query so that the count column reflects the expected values.
I don't want to touch your query and its archaic join syntax; please learn proper SQL, with explicit JOIN and ON clauses.
That said, you seem to want a window function to sum the counts per BAN - note that keeping the bill date in the partition as well would just hand each row back its own count of 1:
select sum(count(*)) over (partition by c.usertoken) as Number_Of_Bills
I'm not sure the aggregation is really useful if you are only getting one row per group. In that case, you might want to remove the group by and use:
select count(*) over (partition by c.usertoken) as Number_Of_Bills
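Putting it together - a sketch only, reusing the question's tables and columns and rewritten with ANSI joins. With one output row per (BAN, bill date) group, the analytic COUNT gives the number of distinct bill dates per BAN:
select replace(c.usertoken, '-', '') as BAN
     , to_char(to_date(bi.name, 'YYYY-MM-DD'), 'dd-mm-yy') as Billdate_dmy
     , count(*) over (partition by c.usertoken) as Number_Of_Bills  -- groups (= bill dates) per BAN
from customer c
join service s   on c.ID = s.CUSTOMER_SERVICE_ID
join document d  on s.ID = d.SERVICE_DOCUMENT_ID
join bill bi     on bi.ID = d.BILL_DOCUMENT_ID
join batch ba    on d.BATCH = ba.ID
join billrun br  on ba.BILLRUN = br.ID
where br.STATUS = 'APPROVED'
  and c.brand = 'rogers'
  and d.VERSIONEDCONTENTFOLDER = 'cbu'
group by c.usertoken, bi.name
order by c.usertoken;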

SQL Server - group by column for each corresponding value

I am new to this forum. Hopefully I will be able to contribute and get my queries resolved too.
I am stuck on this and do not know where to start.
I have the below data set.
| Start Step | End 1   | End 2   |
|------------|---------|---------|
| 1001866 | 1001867 | NULL |
| 1001866 | 1001868 | NULL |
| 1001868 | 1001873 | NULL |
| 1001873 | 1001868 | NULL |
| 1001868 | 1005206 | NULL |
| 1001873 | 1001867 | NULL |
| 1005206 | 1001873 | NULL |
| 1005206 | 1005385 | 1005386 |
| 1005206 | 1005377 | 1005378 |
| 1005378 | 1005376 | 1005206 |
| 1005379 | 1005376 | 1005206 |
| 1005379 | 1005380 | 1005381 |
| 1005381 | 1005382 | 1001869 |
| 1005381 | 1005383 | NULL |
| 1005381 | 1005384 | 1001872 |
| 1005378 | 1005379 | NULL |
| 1005383 | 1001872 | NULL |
| 1005383 | 1005376 | 1005206 |
| 1005383 | 1005381 | NULL |
| 1001869 | 1001871 | NULL |
| 1005386 | 1005376 | 1005206 |
I want each step in a single row with its corresponding End 1 and End 2, ordered by step and ranked. I want the output to be as below:
| Rank | Start   | End Step 1 | End Step 2 |
|------|---------|------------|------------|
| 1 | 1001866 | 1001867 | NULL |
| 1 | 1001866 | 1001868 | NULL |
| 2 | 1001867 | NULL | NULL |
| 3 | 1001868 | 1001873 | NULL |
| 3 | 1001868 | 1005206 | NULL |
| 4 | 1001869 | NULL | NULL |
| 4 | 1001869 | 1001871 | NULL |
| 5 | 1001871 | NULL | NULL |
| 6 | 1001872 | NULL | NULL |
| 7 | 1001873 | 1001868 | NULL |
| 7 | 1001873 | 1001867 | NULL |
| 8 | 1005206 | 1001873 | NULL |
| 8 | 1005206 | 1005385 | 1005386 |
| 8 | 1005206 | 1005377 | 1005378 |
| 9 | 1005376 | NULL | NULL |
| 10 | 1005377 | NULL | NULL |
| 11 | 1005378 | 1005379 | NULL |
| 11 | 1005378 | 1005376 | 1005206 |
| 12 | 1005379 | 1005376 | 1005206 |
| 12 | 1005379 | 1005380 | 1005381 |
| 13 | 1005380 | NULL | NULL |
| 14 | 1005381 | 1005382 | 1001869 |
| 14 | 1005381 | 1005383 | NULL |
| 14 | 1005381 | 1005384 | 1001872 |
| 15 | 1005382 | NULL | NULL |
| 16 | 1005383 | 1001872 | NULL |
| 16 | 1005383 | 1005376 | 1005206 |
| 16 | 1005383 | 1005381 | NULL |
| 17 | 1005384 | NULL | NULL |
| 18 | 1005385 | NULL | NULL |
| 19 | 1005386 | 1005376 | 1005206 |
| 19 | 1005386 | 1005387 | NULL |
| 20 | 1005387 | NULL | NULL |
Is it possible? Can anyone please help?
You can union the End 1 and End 2 values back into the start-step column, then rank the combined list with dense_rank, which gives tied start steps the same rank:
select dense_rank() over (order by [Start Step]) as [Rank], *
from (
    select * from yourtable
    union  -- every End 1 value also becomes a start step
    select distinct [End 1], null, null from yourtable where [End 1] is not null
    union  -- likewise every End 2 value
    select distinct [End 2], null, null from yourtable where [End 2] is not null
) a
order by [Start Step]

SQL Merge multiple columns into one column

I have a SQL statement that combines two tables, but I've recently been asked to add CASE conditions. The conditions are working, but the problem I'm running into is that each condition creates a duplicate column.
case when s.Department = 'Aero' then '(OA)' else '' end as Blah,
case when s.Department = 'Terrent' then '(OT)' else '' end as Blah,
case when s.Department = 'Vertigo' then '(OMG)' else '' end as Blah
This causes me to end up with:
a | b | c | d | Blah | Blah | Blah |
  |   |   |   | (OT) | (OA) | (OT) |
  |   |   |   | (OT) |      |      |
  |   |   |   | (OT) | (OA) |      |
  |   |   |   | (OT) | (OA) | (OT) |
  |   |   |   |      |      | (OT) |
How can I use CASE and have all the results, where applicable, show up under one column?
a | b | c | d | Blah |
  |   |   |   | (OT) |
  |   |   |   | (OT) |
  |   |   |   | (OT) |
  |   |   |   | (OT) |
  |   |   |   | (OA) |
  |   |   |   | (OA) |
  |   |   |   | (OA) |
  |   |   |   | (OT) |
  |   |   |   | (OT) |
  |   |   |   | (OT) |
You would use one case expression instead of three; the branches are evaluated in order and the first match wins, so each row gets a single value:
(case when s.Department = 'Aero' then '(OA)'
      when s.Department = 'Terrent' then '(OT)'
      when s.Department = 'Vertigo' then '(OMG)'
      else ''
 end) as Blah
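If the flags could ever come from conditions that overlap (the sample output above shows rows where more than one of the three columns is filled), one alternative sketch is to concatenate the three expressions instead; with a single Department value per row it reduces to the same result:
-- CONCAT needs SQL Server 2012+; with one Department per row, at most one part is non-empty
concat(case when s.Department = 'Aero' then '(OA)' else '' end,
       case when s.Department = 'Terrent' then '(OT)' else '' end,
       case when s.Department = 'Vertigo' then '(OMG)' else '' end) as Blah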