Divide result of 2 queries element-wise - SQL

I can't find a way to divide the results of my two queries.
They look like this:
SELECT periode, cee_ref_no, SUM(somme) AS total FROM V_STAT_NAMUR
WHERE code_ref_no IN (1, 2, 3, 4, 5, 6, 193, 215, 237, 259, 281)
AND periode BETWEEN '201401' AND '201412'
AND cee_ref_no = '961'
GROUP BY periode, cee_ref_no
ORDER BY periode;
AND
SELECT periode, cee_ref_no, SUM(somme) AS total FROM V_STAT_NAMUR
WHERE code_ref_no IN (7, 8, 9, 10, 205, 227, 249, 271, 293)
AND periode BETWEEN '201401' AND '201412'
AND cee_ref_no = '961'
GROUP BY periode, cee_ref_no
ORDER BY periode;
They look pretty similar, and both return something like this:
DATE | CEE_REF_NO | TOTAL
201401 | 961 | 10713
201402 | 961 | 9593
... | 961 | ...
201412 | 961 | 10426
How can I merge these to obtain something like this:
DATE | CEE_REF_NO | TOTAL
201401 | 961 | Total Q1/ Total Q2
201402 | 961 | Total Q1/ Total Q2
... | 961 | ...
201412 | 961 | Total Q1/ Total Q2
Everything I tried returned either only one row, or 12 rows with the same result.
Thanks a lot!

You can try with this:
select q1.periode, q1.cee_ref_no, q1.total as Total1, q2.total as Total2, q1.total/q2.total as Division
from (
SELECT periode, cee_ref_no, SUM(somme) AS total FROM V_STAT_NAMUR
WHERE code_ref_no IN (1, 2, 3, 4, 5, 6, 193, 215, 237, 259, 281)
AND periode BETWEEN '201401' AND '201412'
AND cee_ref_no = '961'
GROUP BY periode, cee_ref_no
) q1
join (
SELECT periode, cee_ref_no, SUM(somme) AS total FROM V_STAT_NAMUR
WHERE code_ref_no IN (7, 8, 9, 10, 205, 227, 249, 271, 293)
AND periode BETWEEN '201401' AND '201412'
AND cee_ref_no = '961'
GROUP BY periode, cee_ref_no
) q2
on q1.Periode=q2.Periode
and q1.cee_ref_no=q2.cee_ref_no
Basically, you just have to create two subselects from your queries and join them on periode and cee_ref_no (in case you were going to include more than one cee_ref_no; in your example there is just one). Then you can divide the totals from q1 and q2.
Be careful with the join: I don't know if your data will have rows for all the months in both queries, and an inner join drops months that appear in only one side.
PS: Query not tested, written directly on editor.
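For anyone who wants to try the join-and-divide pattern before running it on the real view, here is a small SQLite sketch (driven from Python). The table and column names follow the question, but the data values are invented; the `* 1.0` is an addition to guard against integer division, which truncates in some engines.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE V_STAT_NAMUR (periode TEXT, cee_ref_no TEXT, code_ref_no INTEGER, somme REAL)")
conn.executemany("INSERT INTO V_STAT_NAMUR VALUES (?, ?, ?, ?)", [
    ("201401", "961", 1, 100.0), ("201401", "961", 2, 50.0),  # group-1 codes
    ("201401", "961", 7, 75.0),                               # group-2 code
    ("201402", "961", 3, 90.0),
    ("201402", "961", 8, 45.0),
])

sql = """
SELECT q1.periode, q1.cee_ref_no, q1.total * 1.0 / q2.total AS ratio
FROM (SELECT periode, cee_ref_no, SUM(somme) AS total
      FROM V_STAT_NAMUR WHERE code_ref_no IN (1, 2, 3, 4, 5, 6)
      GROUP BY periode, cee_ref_no) q1
JOIN (SELECT periode, cee_ref_no, SUM(somme) AS total
      FROM V_STAT_NAMUR WHERE code_ref_no IN (7, 8, 9, 10)
      GROUP BY periode, cee_ref_no) q2
  ON q1.periode = q2.periode AND q1.cee_ref_no = q2.cee_ref_no
ORDER BY q1.periode
"""
result = conn.execute(sql).fetchall()
# 201401: (100 + 50) / 75 = 2.0 ; 201402: 90 / 45 = 2.0
```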

Use conditional aggregation:
SELECT periode, cee_ref_no,
SUM(CASE WHEN code_ref_no IN (1, 2, 3, 4, 5, 6, 193, 215, 237, 259, 281) THEN somme ELSE 0 END) AS total_1,
SUM(CASE WHEN code_ref_no IN (7, 8, 9, 10, 205, 227, 249, 271, 293) THEN somme ELSE 0 END) AS total_2,
(SUM(CASE WHEN code_ref_no IN (1, 2, 3, 4, 5, 6, 193, 215, 237, 259, 281) THEN somme ELSE 0 END) /
SUM(CASE WHEN code_ref_no IN (7, 8, 9, 10, 205, 227, 249, 271, 293) THEN somme END) -- no ELSE 0: a missing denominator stays NULL instead of causing a divide-by-zero
) AS ratio
FROM V_STAT_NAMUR
WHERE cee_ref_no = '961'
AND periode BETWEEN '201401' AND '201412'
GROUP BY periode, cee_ref_no
ORDER BY periode;
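As a sanity check, here is the conditional-aggregation shape run against toy data in SQLite (via Python); the data is invented, and the NULLIF on the denominator is an addition to keep a zero sum from raising a division error. One advantage over the join approach: a month that has rows for only one code group still appears, with a NULL ratio, instead of being dropped.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE V_STAT_NAMUR (periode TEXT, cee_ref_no TEXT, code_ref_no INTEGER, somme REAL)")
conn.executemany("INSERT INTO V_STAT_NAMUR VALUES (?, ?, ?, ?)", [
    ("201401", "961", 1, 30.0), ("201401", "961", 7, 10.0),
    ("201402", "961", 2, 20.0),  # no group-2 rows this month
])

sql = """
SELECT periode, cee_ref_no,
       SUM(CASE WHEN code_ref_no IN (1, 2, 3, 4, 5, 6) THEN somme ELSE 0 END) /
       NULLIF(SUM(CASE WHEN code_ref_no IN (7, 8, 9, 10) THEN somme ELSE 0 END), 0) AS ratio
FROM V_STAT_NAMUR
GROUP BY periode, cee_ref_no
ORDER BY periode
"""
result = conn.execute(sql).fetchall()
# 201402 still appears, with a NULL ratio, instead of being dropped by a join
```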

Try this:
Select periode, cee_ref_no, total1/total2 as Total
From (
SELECT periode, cee_ref_no,
sum(Case when code_ref_no in (1, 2, 3, 4, 5, 6, 193, 215, 237, 259, 281) Then somme else 0 end) as total1,
sum(Case when code_ref_no in (7, 8, 9, 10, 205, 227, 249, 271, 293) Then somme else 0 end) as total2
From V_STAT_NAMUR
WHERE code_ref_no IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 193, 205, 215, 227, 237, 249, 259, 271, 281, 293)
AND periode BETWEEN '201401' AND '201412'
AND cee_ref_no = '961'
GROUP BY periode, cee_ref_no
) q
ORDER BY periode;

Related

how to do list_agg with a character limit of 1440 characters in Snowflake

I have a table as below, with 1775 ids; the id column is 10 characters long. I want to create multiple LISTAGG groups of ids, each no longer than 1440 characters, to distribute the 1775 ids into groups.
+------------+------------------+
| id         | distributor_name |
+------------+------------------+
| 1234567890 | Sample_name1     |
| 2345678901 | Sample_name1     |
| 3456789012 | Sample_name1     |
| 4567890123 | Sample_name2     |
| 5678901234 | Sample_name2     |
| 6789012345 | Sample_name3     |
| 7890123456 | Sample_name3     |
| 8901234567 | Sample_name3     |
+------------+------------------+
Required output is:
+-------+----------+-------------------------------------+
| group | id_count | list_agg                            |
+-------+----------+-------------------------------------+
| 1     | 120      | 1234567890,2345678901,3456789012... |
| 2     | 122      | 7890123456,5678901234,8901234567... |
+-------+----------+-------------------------------------+
Very much appreciate your help!
If your ID space is uniformly distributed, you can use WIDTH_BUCKET:
with data1 as (
select
row_number() over (order by true)-1 as rn
from table(generator(rowcount=>100))
)
select
width_bucket(rn, 0, 100, 5) as group_id,
count(*) as c,
array_agg(rn) within group(order by rn) as bucket_values
from data1
group by 1
order by 1;
GROUP_ID | C  | BUCKET_VALUES
---------+----+--------------------------------------------------------------------------------------
1        | 20 | [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ]
2        | 20 | [ 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39 ]
3        | 20 | [ 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59 ]
4        | 20 | [ 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79 ]
5        | 20 | [ 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99 ]
If your data is not uniformly distributed, you can allocate row numbers to each row and then shave the yak again.
You can also make the bounds data-driven:
width_bucket(rn, (select min(rn) from data1), (select max(rn) from data1)+1, 5) as group_id,
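For intuition, WIDTH_BUCKET's core behavior can be sketched in Python. This is a simplified emulation (equal-width buckets, half-open range), not Snowflake's implementation, and it reproduces the 100-row / 5-bucket example above:

```python
def width_bucket(value, low, high, num_buckets):
    """Simplified emulation of SQL WIDTH_BUCKET: equal-width buckets 1..num_buckets
    over [low, high); bucket 0 and num_buckets + 1 catch out-of-range values."""
    if value < low:
        return 0
    if value >= high:
        return num_buckets + 1
    return int((value - low) * num_buckets / (high - low)) + 1

# Reproduce the 100-row / 5-bucket example: each bucket gets 20 consecutive values.
groups = {}
for rn in range(100):
    groups.setdefault(width_bucket(rn, 0, 100, 5), []).append(rn)
```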
It's easier done with ARRAY_AGG, but if you must use LISTAGG, here is a spin on Simeon's answer. The basic idea is to keep a running total of the length of the numbers when stitched together, also accounting for the commas, so we don't go over the 1440-character limit.
create or replace temporary table t as
select row_number() over (order by true)-1 as id,
uniform(1000000000, 1999999999, random()) as num
from table(generator(rowcount=>1775));
with cte as
(select *, ceil((sum(len(num)) over (order by id) + count(num) over (order by id) -1)/1440) as group_id
from t)
select group_id,
count(num) as id_count,
listagg(num,',') as id_list,
len(id_list) as len_check
from cte
group by group_id
order by group_id;
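The running-length idea can be prototyped outside the database. This Python sketch uses randomly generated 10-digit ids standing in for the real ones, and builds the groups greedily (a slight variant of the ceil-of-running-sum trick above) so that no stitched group ever exceeds 1440 characters:

```python
import random

random.seed(42)
ids = [str(random.randint(1000000000, 1999999999)) for _ in range(1775)]  # invented ids

# Greedy version of the running-length idea: start a new group as soon as the
# stitched string (ids plus separating commas) would exceed 1440 characters.
LIMIT = 1440
groups, current, current_len = [], [], 0
for i in ids:
    added = len(i) + (1 if current else 0)  # +1 for the comma separator
    if current_len + added > LIMIT:
        groups.append(current)
        current, current_len = [], 0
        added = len(i)
    current.append(i)
    current_len += added
if current:
    groups.append(current)
```

With 10-character ids, each full group holds 131 ids (131*10 + 130 commas = 1440 characters exactly), so 1775 ids land in 14 groups.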

Outliers in data

I have a dataset like so -
15643, 14087, 12020, 8402, 7875, 3250, 2688, 2654, 2501, 2482, 1246, 1214, 1171, 1165, 1048, 897, 849, 579, 382, 285, 222, 168, 115, 92, 71, 57, 56, 51, 47, 43, 40, 31, 29, 29, 29, 29, 28, 22, 20, 19, 18, 18, 17, 15, 14, 14, 12, 12, 11, 11, 10, 9, 9, 8, 8, 8, 8, 7, 6, 5, 5, 5, 4, 4, 4, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
Based on domain knowledge, I know that larger values are the only ones we want to include in our analysis. How do I determine where to cut off the analysis? Should it be "don't include 15 and lower" or "50 and lower", etc.?
You can check the distribution with a quantile function, then remove values below the 1st or 2nd percentile. Following is an example:
import numpy as np
data = np.array(data)  # "data" is the list of values from the question
print(np.quantile(data, (.01, .02)))
Another method is to calculate the interquartile range (IQR) and set the lowest bar for analysis at Q1 - 1.5*IQR:
Q1, Q3 = np.quantile(data, (0.25, 0.75))
data_floor = Q1 - 1.5 * (Q3 - Q1)
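Putting both checks together on a synthetic long-tailed sample (not the posted numbers; the values below are invented to exercise the method):

```python
import numpy as np

# Synthetic long-tailed sample: many small values, a few large outliers.
data = np.array([1] * 50 + [2] * 20 + [5] * 10
                + [10, 20, 50, 100, 500, 1000, 5000, 10000, 15000])

q1, q3 = np.quantile(data, (0.25, 0.75))
iqr = q3 - q1
data_floor = q1 - 1.5 * iqr

# With such a compressed lower half, the IQR floor can go negative,
# in which case this rule drops nothing; domain knowledge still rules.
kept = data[data > data_floor]
```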

Daily Attendance Summed Weekly

Greetings all,
I'm wondering if I could get some insights on the best way to get attendance data.
I need to get every viable in (AttendanceTypeID = 1) and out (AttendanceTypeID = 2) pair in a day, then sum the time values per employee by week, starting Sunday and ending Saturday. The business rule is that the closest in to an out is the pair that counts, and there may be many in/out pairs in a single day. I am also anchored to SQL 2008 logic, since some of our customers are still using SQL 2008. I have seen a few examples using CTEs, which I prefer, but they used LEAD, LAG, LAST_VALUE, and similar functions, which start in 2012.
Output.
https://forums.asp.net/t/1946217.aspx?Time+Attendance+working+hours
https://www.red-gate.com/simple-talk/sql/t-sql-programming/solving-complex-t-sql-problems,-step-by-step/
CREATE TABLE tblAttendance
(
[AttendanceID] int,
[DepartmentID] int,
[EmployeeID] int,
[AttendanceDate] datetime,
[AttendanceTime] varchar(8),
[AttendanceTypeID] int,
[AttendanceCodeID] int,
[Submitted] int
);
INSERT INTO tblAttendance
([AttendanceID], [DepartmentID], [EmployeeID], [AttendanceDate],
[AttendanceTime], [AttendanceTypeID], [AttendanceCodeID], [Submitted])
VALUES
(838, 33, 260, '2018-02-26 00:00:00', '8:00:00', 1, 1, 0),
(839, 33, 260, '2018-02-26 00:00:00', '22:00:00', 2, 1, 0),
(836, 41, 344, '2018-02-26 00:00:00', '9:00:00', 1, 1, 0),
(837, 41, 344, '2018-02-26 00:00:00', '22:00:00', 2, 1, 0),
(812, 33, 348, '2018-02-26 00:00:00', '8:00:00', 1, 1, 0),
(813, 33, 348, '2018-02-26 00:00:00', '12:00:00', 2, 1, 0),
(814, 33, 350, '2018-02-26 00:00:00', '8:00:00', 1, 1, 0),
(815, 33, 350, '2018-02-26 00:00:00', '12:00:00', 2, 1, 0),
(930, 7, 361, '2018-02-26 00:00:00', '7:00:00', 1, 1, 0),
(931, 7, 361, '2018-02-26 00:00:00', '9:00:00', 2, 1, 0),
(940, 19, 361, '2018-02-26 00:00:00', '8:55:00', 1, 1, 0),
(941, 19, 361, '2018-02-26 00:00:00', '10:00:00', 2, 1, 0),
(824, 1, 114, '2018-02-27 00:00:00', '23:59:57', 2, 1, 0),
(816, 33, 206, '2018-02-27 00:00:00', '11:12:00', 1, 1, 0),
(819, 33, 206, '2018-02-27 00:00:00', '11:16:00', 2, 1, 0),
(822, 1, 350, '2018-02-27 00:00:00', '0:00:00', 1, 1, 0),
(829, 33, 350, '2018-02-27 00:00:00', '16:15:30', 1, 1, 0),
(830, 33, 359, '2018-02-27 00:00:00', '16:15:30', 1, 1, 0),
(932, 7, 361, '2018-02-27 00:00:00', '7:00:00', 1, 1, 0),
(933, 7, 361, '2018-02-27 00:00:00', '9:00:00', 2, 1, 0),
(942, 19, 361, '2018-02-27 00:00:00', '8:30:00', 1, 1, 0),
(943, 19, 361, '2018-02-27 00:00:00', '10:30:00', 2, 1, 0),
(835, 33, 206, '2018-02-28 00:00:00', '9:23:00', 1, 1, 0),
(934, 7, 361, '2018-02-28 00:00:00', '7:00:00', 1, 1, 0),
(935, 7, 361, '2018-02-28 00:00:00', '9:00:00', 2, 1, 0),
(944, 19, 361, '2018-02-28 00:00:00', '7:00:00', 1, 1, 0),
(945, 19, 361, '2018-02-28 00:00:00', '9:00:00', 2, 1, 0),
(936, 7, 361, '2018-03-01 00:00:00', '7:00:00', 1, 1, 0),
(937, 7, 361, '2018-03-01 00:00:00', '9:00:00', 2, 1, 0),
(946, 19, 361, '2018-03-01 00:00:00', '9:00:00', 1, 1, 0),
(947, 19, 361, '2018-03-01 00:00:00', '9:30:00', 2, 1, 0),
(840, 33, 350, '2018-03-02 00:00:00', '8:21:00', 1, 1, 0),
(841, 33, 350, '2018-03-02 00:00:00', '8:22:00', 2, 1, 0),
(938, 7, 361, '2018-03-02 00:00:00', '7:00:00', 1, 1, 0),
(939, 7, 361, '2018-03-02 00:00:00', '9:00:00', 2, 1, 0),
(948, 19, 361, '2018-03-02 00:00:00', '9:01:00', 1, 1, 0),
(949, 19, 361, '2018-03-02 00:00:00', '10:00:00', 2, 1, 0),
(894, 33, 260, '2018-03-05 00:00:00', '6:00:00', 1, 1, 0),
(895, 33, 260, '2018-03-05 00:00:00', '7:00:00', 2, 1, 0),
(859, 33, 348, '2018-03-05 00:00:00', '9:30:00', 1, 1, 0),
(860, 33, 348, '2018-03-05 00:00:00', '9:30:00', 1, 1, 0),
(842, 33, 206, '2018-03-06 00:00:00', '13:21:00', 1, 1, 0),
(856, 33, 206, '2018-03-06 00:00:00', '2:51:27', 2, 1, 0),
(848, 33, 206, '2018-03-06 00:00:00', '13:57:00', 2, 1, 0),
(843, 33, 260, '2018-03-06 00:00:00', '14:00:00', 1, 1, 0),
(861, 33, 348, '2018-03-06 00:00:00', '9:30:00', 1, 1, 0),
(862, 33, 348, '2018-03-06 00:00:00', '9:31:00', 1, 1, 0),
(853, 33, 350, '2018-03-06 00:00:00', '2:32:36', 1, 1, 0),
(854, 33, 350, '2018-03-06 00:00:00', '2:48:54', 1, 1, 0),
(855, 33, 350, '2018-03-06 00:00:00', '2:49:11', 1, 1, 0),
(857, 33, 350, '2018-03-06 00:00:00', '2:52:19', 1, 1, 0),
(844, 33, 350, '2018-03-06 00:00:00', '13:32:00', 1, 1, 0),
(846, 33, 350, '2018-03-06 00:00:00', '13:53:00', 1, 1, 0),
(845, 33, 350, '2018-03-06 00:00:00', '13:35:00', 2, 1, 0),
(850, 33, 350, '2018-03-06 00:00:00', '14:06:00', 2, 1, 0),
(847, 33, 359, '2018-03-06 00:00:00', '13:56:00', 1, 1, 0),
(858, 33, 359, '2018-03-06 00:00:00', '14:52:00', 1, 1, 0),
(852, 33, 359, '2018-03-06 00:00:00', '2:29:06', 2, 1, 0),
(851, 33, 359, '2018-03-06 00:00:00', '14:09:00', 2, 1, 0),
(896, 33, 260, '2018-03-07 00:00:00', '6:00:00', 1, 1, 0),
(897, 33, 260, '2018-03-07 00:00:00', '7:00:00', 2, 1, 0),
(907, 33, 348, '2018-03-07 00:00:00', '7:00:00', 1, 1, 0),
(908, 33, 348, '2018-03-07 00:00:00', '8:00:00', 2, 1, 0),
(893, 33, 350, '2018-03-07 00:00:00', '14:36:40', 1, 1, 0),
(915, 33, 350, '2018-03-07 00:00:00', '17:00:00', 2, 1, 0),
(913, 33, 359, '2018-03-07 00:00:00', '8:00:00', 1, 1, 0),
(914, 33, 359, '2018-03-07 00:00:00', '17:00:00', 2, 1, 0),
(923, 33, 348, '2018-03-08 00:00:00', '7:00:00', 1, 1, 0),
(925, 33, 348, '2018-03-08 00:00:00', '11:00:00', 1, 1, 0),
(928, 33, 348, '2018-03-08 00:00:00', '11:45:00', 1, 1, 0),
(927, 33, 348, '2018-03-08 00:00:00', '11:35:00', 2, 1, 0),
(924, 33, 348, '2018-03-08 00:00:00', '12:00:00', 2, 1, 0),
(926, 33, 348, '2018-03-08 00:00:00', '13:00:00', 2, 1, 0),
(898, 33, 350, '2018-03-08 00:00:00', '11:42:57', 1, 1, 0),
(906, 33, 206, '2018-03-09 00:00:00', '8:14:43', 1, 1, 0),
(905, 33, 260, '2018-03-09 00:00:00', '8:12:53', 1, 1, 0),
(972, 33, 348, '2018-03-09 00:00:00', '8:00:00', 1, 1, 0),
(911, 33, 350, '2018-03-09 00:00:00', '10:43:59', 1, 1, 0),
(912, 33, 350, '2018-03-09 00:00:00', '10:44:22', 1, 1, 0),
(917, 33, 350, '2018-03-09 00:00:00', '12:44:48', 1, 1, 0),
(929, 33, 350, '2018-03-09 00:00:00', '15:18:17', 1, 1, 0),
(921, 33, 350, '2018-03-09 00:00:00', '12:50:14', 2, 1, 0),
(918, 33, 359, '2018-03-09 00:00:00', '12:44:48', 1, 1, 0),
(922, 33, 359, '2018-03-09 00:00:00', '12:50:14', 2, 1, 0),
(970, 33, 350, '2018-03-11 00:00:00', '11:25:00', 1, 1, 0),
(968, 33, 260, '2018-03-12 00:00:00', '7:00:00', 1, 1, 0),
(962, 33, 348, '2018-03-12 00:00:00', '7:00:00', 1, 1, 0),
(969, 33, 348, '2018-03-12 00:00:00', '9:00:00', 2, 1, 0),
(951, 33, 350, '2018-03-12 00:00:00', '8:54:07', 1, 1, 0),
(953, 33, 350, '2018-03-12 00:00:00', '10:31:29', 1, 1, 0),
(954, 33, 350, '2018-03-12 00:00:00', '10:34:18', 1, 1, 0),
(950, 33, 350, '2018-03-12 00:00:00', '8:21:11', 2, 1, 0),
(952, 33, 350, '2018-03-12 00:00:00', '9:02:26', 2, 1, 0),
(955, 33, 359, '2018-03-12 00:00:00', '10:37:26', 1, 1, 0),
(959, 33, 206, '2018-03-13 00:00:00', '1:01:38', 1, 1, 0),
(961, 33, 206, '2018-03-13 00:00:00', '14:00:51', 1, 1, 0),
(975, 33, 295, '2018-03-13 00:00:00', '7:00:00', 1, 1, 0),
(976, 33, 295, '2018-03-13 00:00:00', '8:00:00', 2, 1, 0),
(957, 33, 350, '2018-03-13 00:00:00', '11:26:52', 1, 1, 0),
(958, 33, 359, '2018-03-13 00:00:00', '1:00:12', 1, 1, 0),
(965, 33, 359, '2018-03-13 00:00:00', '21:00:00', 1, 1, 0),
(960, 33, 359, '2018-03-13 00:00:00', '13:14:14', 2, 1, 0),
(967, 33, 359, '2018-03-13 00:00:00', '22:00:00', 2, 1, 0),
(964, 33, 350, '2018-03-14 00:00:00', '14:34:36', 2, 1, 0),
(963, 33, 359, '2018-03-14 00:00:00', '13:45:25', 1, 1, 0)
;
Here is the query I used to return the data above:
SELECT AttendanceID , DepartmentID , EmployeeID , AttendanceDate , AttendanceTime , AttendanceTypeID , AttendanceCodeID , Submitted
FROM tblAttendance
WHERE AttendanceDate BETWEEN @StartDate AND @EndDate
AND AttendanceTypeID IN (1,2) -- 1 = in, and 2 = out
AND AttendanceCodeID = 1
ORDER BY
AttendanceDate
,AttendanceTime
The tables added below show the steps that are needed to get the end results.
First a data example:
+------------+----------------+----------------+------------------+
| EmployeeID | AttendanceDate | AttendanceTime | AttendanceTypeID |
+------------+----------------+----------------+------------------+
| 350 | 3/6/18 | 2:32:36 | 1 |
| 350 | 3/6/18 | 2:48:54 | 1 |
| 350 | 3/6/18 | 2:49:11 | 1 |
| 350 | 3/6/18 | 2:52:19 | 1 |
| 350 | 3/6/18 | 13:32:00 | 1 |
| 350 | 3/6/18 | 13:53:00 | 1 |
| 350 | 3/6/18 | 13:35:00 | 2 |
| 350 | 3/6/18 | 14:06:00 | 2 |
| 350 | 3/7/18 | 14:36:40 | 1 |
| 350 | 3/7/18 | 17:00:00 | 2 |
| 350 | 3/8/18 | 11:42:57 | 1 |
| 350 | 3/9/18 | 10:43:59 | 1 |
| 350 | 3/9/18 | 10:44:22 | 1 |
| 350 | 3/9/18 | 12:44:48 | 1 |
| 350 | 3/9/18 | 15:18:17 | 1 |
| 350 | 3/9/18 | 12:50:14 | 2 |
| 350 | 3/9/18 | 10:43:59 | 1 |
| 350 | 3/9/18 | 10:44:22 | 1 |
| 350 | 3/9/18 | 12:44:48 | 1 |
| 350 | 3/9/18 | 15:18:17 | 1 |
| 350 | 3/9/18 | 12:50:14 | 2 |
+------------+----------------+----------------+------------------+
Second, pair up the matching data into in/out pairs and get their time differences.
+------------+----------------+------------------+------------------+
| EmployeeID | AttendanceDate | AttendanceTime | Pair Total |
+------------+----------------+------------------+------------------+
| 350 | 3/6/18 |13:35:00 -13:32:00| 0:03:00 |
| 350 | 3/6/18 |14:06:00 -13:53:00| 0:13:00 |
| 350 | 3/7/18 |17:00:00 -14:36:40| 2:23:20 |
| 350 | 3/9/18 |12:50:14 -12:44:48| 0:05:26 |
+------------+----------------+------------------+------------------+
Third, the compiled data for the employee during that week.
+------------+-------------------+----------------------+
| EmployeeID | AttendanceWeek | AttendanceTime Total |
+------------+-------------------+----------------------+
| 350 | 3/4/18 to 3/10/18 | 2:44:46 |
+------------+-------------------+----------------------+
Hope that this helps narrow the focus.
Thank you.
First, make sure that Sunday is the first day of the week:
--see what day is the default start day of the week
SELECT @@DATEFIRST
--set it to Sunday if the value isn't 7
SET DATEFIRST 7;
I think I figured out what you were asking, using a CTE, a couple of SUM(CASE) expressions, and the DATEPART function.
DECLARE @StartDate DATETIME = '2/25/2018 12:00AM';
DECLARE @EndDate DATETIME = '3/17/2018 11:59PM';
--Count number of 1's and 2's in a day
--Group by EmployeeId and AttendanceDate
WITH daily1And2 AS(
SELECT EmployeeID,
AttendanceDate,
SUM(CASE WHEN AttendanceTypeID = 1 THEN 1 ELSE 0 END) 'Num1',
SUM(CASE WHEN AttendanceTypeID = 2 THEN 1 ELSE 0 END) 'Num2'
FROM tblAttendance
WHERE AttendanceDate BETWEEN @StartDate AND @EndDate
AND AttendanceTypeID IN (1,2)
AND AttendanceCodeID = 1
GROUP BY EmployeeID,AttendanceDate)
--Find the number of pairs and group by week
SELECT EmployeeID,
DATEPART(wk,AttendanceDate)'Week', --Get Week
--Getting the lowest of the 2 numbers, because that will
--be the number of pairs, then summing for the week
SUM(CASE WHEN Num1 > Num2 THEN Num2 ELSE Num1 END) 'Num1and2Pairs'
FROM daily1And2
GROUP BY EmployeeID,DATEPART(wk,AttendanceDate)
ORDER BY EmployeeID,DATEPART(wk,AttendanceDate)
Output:
+------------+------+---------------+
| EmployeeID | Week | Num1and2Pairs |
+------------+------+---------------+
| 114 | 9 | 0 |
| 206 | 9 | 1 |
| 206 | 10 | 1 |
| 206 | 11 | 0 |
| 260 | 9 | 1 |
| 260 | 10 | 2 |
| 260 | 11 | 0 |
| 295 | 11 | 1 |
| 344 | 9 | 1 |
| 348 | 9 | 1 |
| 348 | 10 | 4 |
| 348 | 11 | 1 |
| 350 | 9 | 2 |
| 350 | 10 | 4 |
| 350 | 11 | 2 |
| 359 | 9 | 0 |
| 359 | 10 | 4 |
| 359 | 11 | 2 |
| 361 | 9 | 10 |
+------------+------+---------------+
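For reference, the "closest in to an out" pairing rule itself (which the query above only counts, rather than times) can be sketched in Python. Using employee 350's punches from the sample data, it reproduces the 2:44:46 weekly total shown in the question:

```python
from datetime import datetime, timedelta

# Employee 350's punches from the sample data: (date, time, type) with 1 = in, 2 = out.
punches = [
    ("2018-03-06", "2:32:36", 1), ("2018-03-06", "2:48:54", 1),
    ("2018-03-06", "2:49:11", 1), ("2018-03-06", "2:52:19", 1),
    ("2018-03-06", "13:32:00", 1), ("2018-03-06", "13:53:00", 1),
    ("2018-03-06", "13:35:00", 2), ("2018-03-06", "14:06:00", 2),
    ("2018-03-07", "14:36:40", 1), ("2018-03-07", "17:00:00", 2),
    ("2018-03-09", "10:43:59", 1), ("2018-03-09", "10:44:22", 1),
    ("2018-03-09", "12:44:48", 1), ("2018-03-09", "15:18:17", 1),
    ("2018-03-09", "12:50:14", 2),
]

def pair_day(ins, outs):
    """Pair each out, in time order, with the latest still-unmatched in before it."""
    ins = sorted(ins)
    pairs = []
    for out in sorted(outs):
        earlier = [t for t in ins if t < out]
        if earlier:
            start = earlier[-1]
            ins.remove(start)
            pairs.append((start, out))
    return pairs

by_day = {}
for day, clock, typ in punches:
    ts = datetime.strptime(day + " " + clock, "%Y-%m-%d %H:%M:%S")
    by_day.setdefault(day, ([], []))[typ - 1].append(ts)

week_total = timedelta()
for day, (ins, outs) in by_day.items():
    for start, end in pair_day(ins, outs):
        week_total += end - start
# week_total is 2:44:46, matching the expected weekly sum in the question
```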

How to alias a list of values in SQL

I need to see if any of a set of columns contains a value in a list.
E.g.:
...
SELECT *
FROM Account
WHERE
NOT (
AccountWarningCode1 IN (02, 05, 15, 20, 21, 24, 31, 36, 40, 42, 45, 47, 50, 51, 52, 53, 55, 56, 62, 65, 66, 78, 79, 84, 110, 119, 120, 121, 125, 202)
OR AccountWarningCode2 IN (02, 05, 15, 20, 21, 24, 31, 36, 40, 42, 45, 47, 50, 51, 52, 53, 55, 56, 62, 65, 66, 78, 79, 84, 110, 119, 120, 121, 125, 202)
OR AccountWarningCode3 IN (02, 05, 15, 20, 21, 24, 31, 36, 40, 42, 45, 47, 50, 51, 52, 53, 55, 56, 62, 65, 66, 78, 79, 84, 110, 119, 120, 121, 125, 202)
...
)
The above does work, but what I'd like to do instead is alias the list somehow so I don't repeat myself quite as much.
For example (this doesn't actually work)
WITH bad_warnings AS (02, 05, 15, 20, 21, 24, 31, 36, 40, 42, 45, 47, 50, 51, 52, 53, 55, 56, 62, 65, 66, 78, 79, 84, 110, 119, 120, 121, 125, 202)
SELECT *
FROM Account
WHERE
NOT (
AccountWarningCode1 IN bad_warnings
OR AccountWarningCode2 IN bad_warnings
OR AccountWarningCode3 IN bad_warnings
...
)
Is this possible in T-SQL?
Your second version is actually close. You can use a common table expression:
WITH bad_warnings(code) AS(
SELECT * FROM(VALUES
('02'), ('05'), ('15'), ('20'), ('21'), ('24'),
('31'), ('36'), ('40'), ('42'), ('45'), ('47'),
('50'), ('51'), ('52'), ('53'), ('55'), ('56'),
('62'), ('65'), ('66'), ('78'), ('79'), ('84'),
('110'), ('119'), ('120'), ('121'), ('125'), ('202')
) a(b)
)
SELECT *
FROM Account
WHERE
NOT (
AccountWarningCode1 IN (SELECT code FROM bad_warnings)
OR AccountWarningCode2 IN (SELECT code FROM bad_warnings)
OR AccountWarningCode3 IN (SELECT code FROM bad_warnings)
)
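A quick way to sanity-check the pattern is SQLite (via Python), which accepts the same CTE-over-VALUES shape; the table and data below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Account (AccountWarningCode1 TEXT, AccountWarningCode2 TEXT)")
conn.executemany("INSERT INTO Account VALUES (?, ?)",
                 [("02", "99"), ("01", "03"), ("97", "110")])

sql = """
WITH bad_warnings(code) AS (
    VALUES ('02'), ('05'), ('110')
)
SELECT *
FROM Account
WHERE NOT (
    AccountWarningCode1 IN (SELECT code FROM bad_warnings)
    OR AccountWarningCode2 IN (SELECT code FROM bad_warnings)
)
"""
result = conn.execute(sql).fetchall()  # only the row with no bad warnings survives
```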
This is the way to define a derived table with your values as a CTE:
WITH bad_warnings AS
(SELECT val FROM (VALUES(02),(05),(15),(20),(21),(24),(31),(36),(40),(42),(45),(47),(50),(51),(52),(53),(55),(56),(62),(65),(66),(78),(79),(84),(110),(119),(120),(121),(125),(202)) AS tbl(val)
)
SELECT *
FROM bad_warnings
You can use this like any table in your query.
Your check would be something like
WHERE SomeValue IN (SELECT val FROM bad_warnings)
With NOT IN you would negate this list.
Is this possible in T-SQL?
Yes: use either a table variable or a temporary table. Populate the in-list data in a table variable and use it in as many places within your procedure as you want.
Example:
declare @inlist1 table(elem int);
insert into @inlist1(elem)
values (02), (05), (15), (20), (21), (24);
Use it now
WHERE
NOT (
AccountWarningCode1 IN (select elem from @inlist1)
Alternatively, you can perform a JOIN operation:
FROM Account a
JOIN @inlist1 i ON a.AccountWarningCode1 = i.elem
You can do it like this:
with bad_warnings as
(select '02'
union
select '15'
etc
)
select * from account
where not
(AccountWarningCode1 IN (SELECT code FROM bad_warnings
etc)

Selecting ID in a certain order specified within IN operator

I would like to try to select a certain set of numbers in a particular order, for use with loops.
SELECT ID
FROM filter
WHERE id in (87, 97, 117, 52, 240, 76, 141, 137, 157, 255, 186, 196, 133,
175, 153, 224, 59, 205, 65, 47, 105, 80, 113, 293, 161, 145,
192, 149, 231, 91, 101, 109, 215, 121, 125, 64, 41, 291, 367,
388, 391, 462, 467)
Doing this returns results sorted by ID rather than in the order I specified. In most similar questions, the preferred answer used CASE expressions for particular entries, but what about selecting hundreds of records in a predetermined order?
If you have hundreds of items, then use a derived table, such as:
select f.id
from filter f join
(values(1, 87), (2, 97), (3, 117), . . .) as v(ord, id)
on f.id = v.id
order by ord;
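The same trick works in SQLite if the VALUES list is wrapped in a CTE with named columns (SQLite does not support the `AS v(ord, id)` alias-list syntax); a small check with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE filter (id INTEGER)")
conn.executemany("INSERT INTO filter VALUES (?)", [(52,), (87,), (97,), (117,)])

# The ord column carries the desired ordering alongside each id.
sql = """
WITH v(ord, id) AS (VALUES (1, 87), (2, 97), (3, 117), (4, 52))
SELECT f.id
FROM filter f
JOIN v ON f.id = v.id
ORDER BY v.ord
"""
result = [row[0] for row in conn.execute(sql)]  # ids come back in the specified order
```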