I have a table similar to the picture below. In this table there are some duplicates in SESS_KEY. I only want rows that do not have duplicates, or, for rows that do have duplicates, only the ones with CALL_TRNSF_FLG set to 1. I manually created the INCLUDE field to show which rows I want (1). How can I achieve this?
Thank you for your help!
Here is the sample data:
INCLUDE SESS_KEY SESS_CALL_ST_DT_TS CONN_ID TLK_DUR HLD_DUR AFT_CALL_WRK_DUR TRNSF_TLK_TM TRNSF_HLD_TM TRNSF_ACW_TM CALL_TRNSF_FLG
0 24067A16-A24A-45BE-E3AA-7E0BFE7ECDA5 7/25/2016 9:07 0141028541267da5 918 57 26 ? ? ? 0
1 24067A16-A24A-45BE-E3AA-7E0BFE7ECDA5 7/25/2016 9:07 0521028304ed75f8 236 0 3 918 57 26 1
0 49FFAB03-C19C-4291-6BAB-267CC95E27CF 7/6/2016 17:25 014102854125f060 278 0 130 ? ? ? 0
1 49FFAB03-C19C-4291-6BAB-267CC95E27CF 7/6/2016 17:25 0521028304e98111 391 0 8 278 0 130 1
0 7CCBBF2F-6FBC-4812-BAB1-4E258B88C20A 7/12/2016 11:34 05200282b0814531 269 0 190 406 0 124 1
1 7CCBBF2F-6FBC-4812-BAB1-4E258B88C20A 7/12/2016 11:34 013b028225ed6484 406 0 124 ? ? ? 0
0 CA32F05E-5C8A-4849-63A4-15B2342081B8 7/6/2016 11:38 02420282b06776f9 256 0 114 297 0 67 1
1 CA32F05E-5C8A-4849-63A4-15B2342081B8 7/6/2016 11:38 014102854125ea06 297 0 67 ? ? ? 0
0 E75EF405-1C0D-45E4-EC97-88D3CD7B5E55 7/5/2016 15:03 1.41E+214 2,691 0 255 ? ? ? 0
1 E75EF405-1C0D-45E4-EC97-88D3CD7B5E55 7/5/2016 15:03 0243028304ee14a5 314 0 9 2,691 0 255 1
1 04F8CC43-710B-4E4D-D8A1-DAC45FB3FF24 7/19/2016 16:49 1.41E+14 123 100 43 ? ? ? 0
1 0AFB6070-9D95-47B0-B0AF-D34ED70FCE8E 7/22/2016 14:20 0243028304f1ffca 335 239 79 ? ? ? 0
1 13581E6A-A568-4993-098C-05233CF293AE 7/15/2016 11:22 014102854126375a 196 150 258 ? ? ? 0
1 1A6AE4BE-1858-4CB3-83B1-CFF7A9E88EF9 7/8/2016 19:09 02420282b068325e 120 0 0 ? ? ? 0
1 24CE6C11-AF85-4770-53B4-FE20200339DF 7/28/2016 12:47 0243028304f3401b 181 0 0 107 0 48 1
1 293F85F4-34BC-44B1-43B5-A6B3B8886FC8 7/1/2016 8:33 0521028304e8778e 70 0 21 149 0 1 1
1 2BD0216A-B3F3-4597-1CBD-095F8D291736 7/7/2016 8:41 0243028304ee83b2 1,037 0 187 ? ? ? 0
1 2C774BE2-5B26-47C0-B69F-69B04A63F879 7/25/2016 18:26 013b028225edd637 1,481 0 110 ? ? ? 0
1 3F43720B-B6AE-4335-4FB5-9275A952989F 7/11/2016 11:08 013b028225ed5830 155 0 0 ? ? ? 0
1 41B056DC-8D3F-425D-BD9E-10A3EB0E944D 7/27/2016 11:13 05200282b084c5d5 34 0 0 ? ? ? 0
1 420483AD-8586-45C7-68AB-675E50EF2B92 7/5/2016 11:03 013b028225ed2765 1,320 0 283 ? ? ? 0
1 43A14051-6EAA-4251-3FA1-F2FBAE6DB643 7/23/2016 12:16 05200282b083f410 359 0 143 ? ? ? 0
1 494F3EA9-EA47-4F7B-C795-61B8B23DA0FA 7/21/2016 9:27 02420282b06ac6c3 0 0 0 ? ? ? 0
1 4D743557-DE09-4007-D58C-EFB09EF6713C 7/29/2016 17:19 05200282b085844a 951 361 240 ? ? ? 0
1 546C0FD0-5445-44F8-0789-1FA62BB57CDB 7/15/2016 18:14 1.41E+59 686 0 60 ? ? ? 0
1 5487C587-D37C-4E5C-9A88-87A3978996CD 7/28/2016 18:51 014102854126a96d 833 0 534 ? ? ? 0
1 5AB8D65A-28C7-4CAD-5796-3A7B720A47F7 7/20/2016 8:56 0141028541265a9f 274 111 381 ? ? ? 0
1 6866B3F8-F953-43BF-9089-B1FE699DEE07 7/19/2016 16:25 05200282b0830349 35 0 180 ? ? ? 0
1 6A4566B3-71B9-47BB-75BC-37B6E644D704 7/19/2016 10:14 02420282b06a3d7b 0 0 0 ? ? ? 0
1 72D17A78-FA5D-42DA-E39A-F7B950C15E22 7/5/2016 18:05 02420282b0675679 606 0 167 ? ? ? 0
1 73657A2A-34B7-4921-E691-49827E46128D 7/20/2016 11:02 02420282b06a8ae8 31 0 264 ? ? ? 0
1 7520F825-DA7B-4D5F-7AA9-3ADD9AAC5BE7 7/5/2016 18:53 05200282b07fd5df 354 0 20 ? ? ? 0
1 76DA5FB6-3EDD-45E1-B8BB-C70EA1CB4E53 7/1/2016 10:07 0243028304ed7c74 132 0 20 105 0 66 1
1 810B9E66-AA32-4BB0-128D-8E3FFC86EB0E 7/22/2016 13:37 013b028225edc13d 1,621 109 34 ? ? ? 0
1 81402352-DE71-45E4-4EAD-C1FFE20F8288 7/11/2016 9:28 0521028304ea456a 38 0 0 71 0 0 1
1 81EA3AD7-B721-4718-9AB0-6FB005252F64 7/12/2016 17:15 013b028225ed6ad5 812 129 60 ? ? ? 0
1 870632C0-4D80-41DC-AD84-12972DBC5AF2 7/23/2016 14:20 0243028304f229ee 1,084 0 5 ? ? ? 0
1 886919E7-80DB-4E2C-D5B2-8B83420F4D27 7/26/2016 19:22 0243028304f2da42 533 465 155 ? ? ? 0
1 8A18B8A2-1405-446B-71BA-A3FBAC816C12 7/8/2016 16:13 013b028225ed4f72 318 237 0 ? ? ? 0
1 8A54DAD7-2745-4BFB-22BE-BF479C1A8710 7/7/2016 15:25 02420282b067da6c 42 0 94 104 0 38 1
1 8D5EB433-2D50-4A67-00AC-E768A549B56E 7/26/2016 14:35 0521028304edf692 55 0 0 ? ? ? 0
1 8F222904-EC4E-4395-D496-A25FB408AD95 7/29/2016 17:09 0243028304f3a5f0 88 0 137 ? ? ? 0
1 9310922F-D545-4E78-42B2-E1B508F5A436 7/7/2016 12:23 02420282b067c625 155 2 15 ? ? ? 1
1 A605BF7A-50E6-4114-1981-7B3988079B7E 7/6/2016 16:56 02420282b0679dfa 89 0 293 ? ? ? 0
1 AA23384F-C4DA-4357-3DAF-7CD8337831DD 7/9/2016 11:20 014102854126082a 138 0 210 ? ? ? 0
1 AF5AD7E2-7584-4ACD-B28E-1AB2DB87BDAA 7/21/2016 17:36 0243028304f1cda6 0 0 0 ? ? ? 0
1 B66D3851-83BE-4E0E-7D9B-1719E378905D 7/19/2016 12:41 0243028304f122cd 81 0 0 ? ? ? 0
1 BB2FA3CD-AB6D-42BD-3CB7-EEC27E3403BF 7/15/2016 14:38 0243028304f0753e 65 0 195 92 3 29 1
1 BBA4031A-7876-4614-F9BC-718A6D8A16A7 7/13/2016 17:42 0521028304eb1a47 163 0 85 ? ? ? 0
1 BCF2B7D8-CBD0-497F-EEA7-FDEC46EFEEBE 7/7/2016 12:09 0521028304e9acaa 44 0 8 ? ? ? 0
1 BE9386B6-424E-40F9-67A1-A56EF6C18B77 7/27/2016 20:03 013b028225edecc5 1 0 0 ? ? ? 0
1 C0F0EF71-F52B-4D10-E9B4-DA1AF4343CC7 7/11/2016 15:21 05200282b08111ee 49 0 61 368 0 597 1
1 C84FCA28-2372-4F8B-52B1-4BC5E9AD128B 7/19/2016 13:06 013b028225eda06c 59 0 32 162 0 0 1
1 C8B3CC50-DEC3-4F24-D0A2-E32A03AFA786 7/13/2016 13:22 05200282b0819c4d 126 0 0 ? ? ? 0
1 CC119F61-A70F-4DB3-7C8C-DCE9A1C3BCB5 7/27/2016 9:48 02420282b06c1330 0 0 0 ? ? ? 0
1 D57D43C7-F9F0-42B9-C6B6-23B1414D9F12 7/14/2016 15:04 05200282b081ee2a 36 0 17 ? ? ? 0
1 E438B480-8F98-469C-3899-E6F10DD1F755 7/5/2016 20:12 02420282b0675cea 3,874 163 7 ? ? ? 0
1 F223F1F4-F50D-41F6-DA9F-46EA2972F394 7/27/2016 20:13 05200282b084f966 417 0 6 ? ? ? 0
1 FB3B0CB1-89D8-4B47-E4BA-465E57D52B0D 7/14/2016 14:21 02420282b0695b07 138 0 2 ? ? ? 0
SELECT *
FROM tab
QUALIFY
-- rows that do not have duplicates
COUNT(*)
OVER (PARTITION BY SESS_KEY) = 1
-- the ones with CALL_TRNSF_FLG set to 1
OR CALL_TRNSF_FLG = 1
If there might be multiple rows with CALL_TRNSF_FLG = 1 and you only want one row per session:
QUALIFY
ROW_NUMBER()
OVER (PARTITION BY SESS_KEY
ORDER BY CALL_TRNSF_FLG DESC) = 1
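QUALIFY is Teradata-specific; on engines without it, the same window filter can move into a derived table. A minimal sketch using Python's sqlite3 (the toy table and values are invented for illustration; the bundled SQLite must be 3.25+ for window functions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (sess_key TEXT, conn_id TEXT, call_trnsf_flg INT)")
con.executemany("INSERT INTO tab VALUES (?, ?, ?)", [
    ("A", "c1", 0), ("A", "c2", 1),  # duplicated key: keep only the flagged row
    ("B", "c3", 0),                  # singleton: keep as-is
])

# SQLite has no QUALIFY, so the COUNT(*) OVER filter sits in a derived table.
rows = con.execute("""
    SELECT sess_key, conn_id, call_trnsf_flg
    FROM (SELECT t.*, COUNT(*) OVER (PARTITION BY sess_key) AS cnt FROM tab t) AS x
    WHERE cnt = 1 OR call_trnsf_flg = 1
    ORDER BY sess_key
""").fetchall()
print(rows)  # [('A', 'c2', 1), ('B', 'c3', 0)]
```

The singleton session B survives via `cnt = 1`; the duplicated session A keeps only its flagged row.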
You can filter the rows you want like this:
WHERE
CONN_ID NOT IN (
SELECT
MIN(CONN_ID)
FROM
<<your_table>>
GROUP BY
SESS_KEY
HAVING
count(*) > 1
)
This assumes the pair (sess_key, conn_id) is unique. Otherwise, you should find a unique set of columns and filter by it.
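A hedged sqlite3 sketch of this `NOT IN` approach (toy table and values are made up). Note it only reproduces the flag-based result when the unflagged row always carries the smaller CONN_ID, which the sample data does not guarantee:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (sess_key TEXT, conn_id TEXT, call_trnsf_flg INT)")
con.executemany("INSERT INTO tab VALUES (?, ?, ?)", [
    ("A", "c1", 0), ("A", "c2", 1),
    ("B", "c3", 0),
])

# Drops the lowest conn_id of each duplicated sess_key; singletons are
# excluded from the subquery by HAVING COUNT(*) > 1, so they survive.
rows = con.execute("""
    SELECT sess_key, conn_id FROM tab
    WHERE conn_id NOT IN (
        SELECT MIN(conn_id) FROM tab GROUP BY sess_key HAVING COUNT(*) > 1
    )
    ORDER BY conn_id
""").fetchall()
print(rows)  # [('A', 'c2'), ('B', 'c3')]
```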
Hi, you can write the query like this.
Using JOINS
select t1.* from table_name t1
join
(select count(*), SESS_KEY from table_name
group by SESS_KEY having count(*) > 1 ) t2
on t1.SESS_KEY=t2.SESS_KEY;
Using WHERE clause
select t1.* from table_name t1,
(select count(*), SESS_KEY from table_name
group by SESS_KEY having count(*) > 1 ) t2
where t1.SESS_KEY=t2.SESS_KEY;
This returns all the rows that have duplicates in the SESS_KEY column.
To Update
merge into table_name t1
using
(select count(*), SESS_KEY from table_name
group by SESS_KEY having count(*) > 1 ) t2
on t1.SESS_KEY=t2.SESS_KEY
when matched then
update set
CALL_TRNSF_FLG=1;
Use the MERGE statement to update the table.
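The duplicate-listing join above can be sketched with sqlite3 (table name and values are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (sess_key TEXT, conn_id TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("A", "c1"), ("A", "c2"), ("B", "c3")])

# Join back to the keys that occur more than once to list every duplicate row.
dupes = con.execute("""
    SELECT t1.sess_key, t1.conn_id
    FROM t t1
    JOIN (SELECT sess_key FROM t GROUP BY sess_key HAVING COUNT(*) > 1) t2
      ON t1.sess_key = t2.sess_key
    ORDER BY t1.conn_id
""").fetchall()
print(dupes)  # [('A', 'c1'), ('A', 'c2')]
```

Session B appears only once, so it never joins to the subquery and is excluded.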
I know this has been asked a million times before, but due to the joins I am finding it hard to write the code correctly.
My SQL is
SELECT DISTINCT
newFvItems.Id, outerFvia.[UserRoleId], outerFvia.[DefaultStatusId], outerFvia.[CanBeAllocated], outerFvia.[CanCreate], outerFvia.[CanUpdate], outerFvia.[CanDelete], outerFvia.[CanSeeDraft], outerFvia.[CanSeeChecking], outerFvia.[CanSeeCompleted], outerFvia.[CanDispute], outerFvia.[CanResolveDispute], outerFvia.[CanAudit], 1, GETUTCDATE(), 393, GETUTCDATE(), 393, 0, outerFvia.[RecycleBinId], outerFvia.[FlowAccessId]
FROM FlowVersionItemAccess outerFvia
JOIN FlowVersionItems outerFvi ON outerFvi.Id = outerFvia.FlowVersionItemId
JOIN FlowVersions outerFv ON outerFv.Id = outerFvi.FlowVersionId
JOIN FlowVersionItems newFvItems ON newFvItems.FlowVersionId = 143
WHERE outerFv.Id = 133
AND outerFvia.Deleted = 0 AND outerFvi.Deleted = 0 AND outerFv.Deleted = 0
My desired output is 21 rows; I get 27, and if I remove DISTINCT I get 63.
Sample data:
ID UserId val val val val val val val val val val val val DateTime val DateTime val val val val
315 2 2 0 1 1 1 1 0 1 0 0 0 1 12/12/2022 16:53 393 12/12/2022 16:53 393 0 NULL 7
315 6 2 0 1 1 1 1 0 1 0 0 0 1 12/12/2022 16:53 393 12/12/2022 16:53 393 0 NULL 12
315 7 2 0 0 0 0 0 0 1 0 0 0 1 12/12/2022 16:53 393 12/12/2022 16:53 393 0 NULL 16
315 7 2 0 1 1 0 0 0 1 0 0 0 1 12/12/2022 16:53 393 12/12/2022 16:53 393 0 NULL 16
315 18 2 0 0 0 0 0 0 0 0 0 0 1 12/12/2022 16:53 393 12/12/2022 16:53 393 0 NULL 69
315 18 2 0 1 1 0 0 0 1 0 0 0 1 12/12/2022 16:53 393 12/12/2022 16:53 393 0 NULL 69
Expected data:
ID UserId val val val val val val val val val val val val DateTime val DateTime val val val val
315 2 2 0 1 1 1 1 0 1 0 0 0 1 12/12/2022 16:53 393 12/12/2022 16:53 393 0 NULL 7
315 6 2 0 1 1 1 1 0 1 0 0 0 1 12/12/2022 16:53 393 12/12/2022 16:53 393 0 NULL 12
315 7 2 0 1 1 0 0 0 1 0 0 0 1 12/12/2022 16:53 393 12/12/2022 16:53 393 0 NULL 16
315 18 2 0 1 1 0 0 0 1 0 0 0 1 12/12/2022 16:53 393 12/12/2022 16:53 393 0 NULL 69
You can see UserRoleId 7 and 18 are duplicated. I tried to simply group by UserRoleId, but I get errors on FlowVersionItems.Id:
Msg 8120, Level 16, State 1, Line 44
Column 'FlowVersionItems.Id' is
invalid in the select list because it is not contained in either an
aggregate function or the GROUP BY clause.
Remove DISTINCT, add GROUP BY newFvItems.Id at the very end of the query, and wrap all the other returned columns in aggregate functions (MIN, MAX, ...).
DISTINCT finds all unique combinations of all columns, while you need uniqueness on just one column.
Alternatively, leave only DISTINCT newFvItems.Id in the select part; it depends on what you actually need.
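The GROUP BY + aggregate idea can be sketched with sqlite3 (hypothetical columns `id`, `role`, and `can_create` stand in for the real ones):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE acc (id INT, role INT, can_create INT)")
con.executemany("INSERT INTO acc VALUES (?, ?, ?)",
                [(315, 7, 0), (315, 7, 1), (315, 18, 1)])

# GROUP BY the key columns you care about and aggregate everything else,
# instead of applying DISTINCT across all columns.
rows = con.execute("""
    SELECT id, role, MAX(can_create)
    FROM acc
    GROUP BY id, role
    ORDER BY role
""").fetchall()
print(rows)  # [(315, 7, 1), (315, 18, 1)]
```

The two role-7 rows collapse into one, with MAX picking the "highest" permission value.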
There are two solid approaches to this, and I thought I would post the solution I ended up using to hopefully help someone else. The GROUP BY solution is good and provides the correct answer too, but this fits an edge case better.
SELECT
newFvItems.Id,
outerFvia.[Id],
outerFvia.[UserRoleId],
outerFvia.[DefaultStatusId],
outerFvia.[CanBeAllocated],
outerFvia.[CanCreate],
outerFvia.[CanUpdate],
outerFvia.[CanDelete],
outerFvia.[CanSeeDraft],
outerFvia.[CanSeeChecking],
outerFvia.[CanSeeCompleted],
outerFvia.[CanDispute], outerFvia.[CanResolveDispute], outerFvia.[CanAudit], 1, GETUTCDATE(), 393, GETUTCDATE(), 393, 0, outerFvia.[RecycleBinId], outerFvia.[FlowAccessId]
FROM FlowVersionItemAccess outerFvia
INNER JOIN FlowVersionItems outerFvi ON outerFvi.Id = outerFvia.FlowVersionItemId
AND outerFvi.Deleted = 0
INNER JOIN FlowVersions outerFv ON outerFv.Id = outerFvi.FlowVersionId
AND outerFv.Deleted = 0
INNER JOIN FlowVersionItems newFvItems ON newFvItems.FlowVersionId = 143
AND newFvItems.Deleted = 0
AND newFvItems.[Order] = outerFvi.[Order]
WHERE outerFvia.Deleted = 0
AND outerFv.Id = 133
Here is the group by solution too.
SELECT
newFvItems.Id,
outerFvia.[UserRoleId],
MAX(outerFvia.[DefaultStatusId]),
MAX(CAST(outerFvia.[CanBeAllocated] As tinyint)),
MAX(CAST(outerFvia.[CanCreate] as tinyint)),
MAX(CAST(outerFvia.[CanUpdate] as tinyint)),
MAX(CAST(outerFvia.[CanDelete] as tinyint)),
MAX(CAST(outerFvia.[CanSeeDraft] as tinyint)),
MAX(CAST(outerFvia.[CanSeeChecking] as tinyint)),
MAX(CAST(outerFvia.[CanSeeCompleted] as tinyint)),
MAX(CAST(outerFvia.[CanDispute] as tinyint))
--, MAX(outerFvia.[CanResolveDispute]), MAX(outerFvia.[CanAudit]), 1, GETUTCDATE(), 393, GETUTCDATE(), 393, 0, MAX(outerFvia.[RecycleBinId]), MAX(outerFvia.[FlowAccessId])
FROM FlowVersionItemAccess outerFvia
JOIN FlowVersionItems outerFvi ON outerFvi.Id = outerFvia.FlowVersionItemId
JOIN FlowVersions outerFv ON outerFv.Id = outerFvi.FlowVersionId
JOIN FlowVersionItems newFvItems ON newFvItems.FlowVersionId = 143
WHERE outerFv.Id = 133
AND outerFvia.Deleted = 0 AND outerFvi.Deleted = 0 AND outerFv.Deleted = 0 AND newFvItems.Deleted = 0
group by newFvItems.Id, outerFvia.[UserRoleId]
Hopefully posting the solutions I came up with helps someone else. I know this question gets asked a lot, and if it weren't for the joins it would be a lot easier.
I know my question may be a duplicate, but I really don't know how to create SQL that returns the results of a sum with multiple joins.
Tables I have
result_summary
num_bin id_summary count_bin
3 172 0
4 172 0
5 172 0
6 172 0
7 172 0
8 172 0
1 174 1
2 174 0
3 174 0
4 174 0
5 174 0
6 174 0
7 174 0
8 174 0
1 175 0
summary_assembly
num_lot id_machine sabun date_work date_write id_product shift count_total count_fail count_good id_summary id_operation
adfe 1 21312 2020-11-25 2020-11-25 1 A 10 2 8 170 2000
adfe 1 21312 2020-11-25 2020-11-25 1 A 1000 1 999 171 2000
adfe 1 21312 2020-11-25 2020-11-25 2 A 100 1 99 172 2000
333 1 21312 2020-12-06 2020-12-06 1 A 10 2 8 500 2000
333 1 21312 2020-11-26 2020-11-26 1 A 10000 1 9999 174 2000
333 1 21312 2020-11-26 2020-11-26 1 A 100 0 100 175 2000
333 1 21312 2020-12-06 2020-12-06 1 A 10 2 8 503 2000
333 1 21312 2020-12-07 2020-12-07 1 A 10 2 8 651 2000
333 1 21312 2020-12-02 2020-12-02 1 A 10 2 8 178 2000
employees
sabun name_emp
3532 Kim
12345 JS
4444 Gilsoo
21312 Wayn Hahn
123 Lee too
333 JD
info_product
id_product name_product
1 typeA
2 typeB
machine
id_machine id_operation name_machine
1 2000 name1
2 2000 name2
3 2000 name3
4 3000 name1
5 3000 name2
6 3000 name3
7 4000 name1
8 4000 name2
query
select S.id_summary, I.name_product, M.name_machine,
E.name_emp, S.sabun, S.date_work,
S.shift, S.num_lot, S.count_total,
S.count_good, S.count_fail,
sum(case num_bin when '1' then count_bin else 0 end) as bin1,
sum(case num_bin when '2' then count_bin else 0 end) as bin2,
sum(case num_bin when '3' then count_bin else 0 end) as bin3,
sum(case num_bin when '4' then count_bin else 0 end) as bin4,
sum(case num_bin when '5' then count_bin else 0 end) as bin5,
sum(case num_bin when '6' then count_bin else 0 end) as bin6,
sum(case num_bin when '7' then count_bin else 0 end) as bin7,
sum(case num_bin when '8' then count_bin else 0 end) as bin8
from result_summary as R
join summary_assembly as S on R.id_summary = S.id_summary
join employees as E on S.sabun = E.sabun
join info_product as I on S.id_product = I.id_product
join machine as M on S.id_machine = M.id_machine
where I.id_product = '1'
and E.sabun='21312'
and S.shift = 'A'
and S.date_work between '2020-11-10' and '2020-12-20'
group by S.id_summary, E.name_emp, S.num_lot,
I.name_product,M.name_machine
order by S.id_summary;
result
id_summary name_product name_machine name_emp sabun date_work shift num_lot count_total count_good count_fail bin1 bin2 bin3 bin4 bin5 bin6 bin7 bin8
170 TypeA name1 Kim 21312 2020-11-25 A adfe 10 8 2 1 1 0 0 0 0 0 0
171 TypeA name1 Kim 21312 2020-11-25 A adfe 1000 999 1 1 1 0 0 0 0 0 0
174 TypeA name1 Kim 21312 2020-11-26 A 333 10000 9999 1 1 1 0 0 0 0 0 0
175 TypeA name1 Kim 21312 2020-11-26 A 333 100 100 0 0 0 0 0 0 0 0 0
178 TypeA name1 Kim 21312 2020-12-02 A 333 10 8 2 1 1 0 0 0 0 0 0
179 TypeA name1 Kim 21312 2020-12-02 A 333 10 8 2 1 1 0 0 0 0 0 0
180 TypeA name1 Kim 21312 2020-12-02 A 333 10 8 2 1 1 0 0 0 0 0 0
181 TypeA name1 Kim 21312 2020-12-02 A 333 10 8 2 1 1 0 0 0 0 0 0
182 TypeA name2 Kim 21312 2020-12-02 A 333 10 8 2 1 1 0 0 0 0 0 0
186 TypeA name2 Kim 21312 2020-12-06 A 333 10 8 2 1 1 0 0 0 0 0 0
193 TypeA name2 Kim 21312 2020-12-06 A 333 10 8 2 0 0 0 0 0 0 0 0
194 TypeA name2 Kim 21312 2020-12-06 A 333 10 8 2 0 0 0 0 0 0 0 0
195 TypeA name2 Kim 21312 2020-12-06 A 333 10 8 2 0 0 0 0 0 0 0 0
196 TypeA name2 JS 21312 2020-12-06 A 333 10 8 2 0 0 0 0 0 0 0 0
197 TypeA name2 JS 21312 2020-12-06 A 333 10 8 2 0 0 0 0 0 0 0 0
198 TypeA name2 JS 21312 2020-12-06 A 333 10 8 2 0 0 0 0 0 0 0 0
199 TypeA name2 JS 21312 2020-12-06 A 333 10 8 2 0 0 0 0 0 0 0 0
200 TypeA name2 JS 21312 2020-12-06 A 333 10 8 2 0 0 0 0 0 0 0 0
expected output (when summed by num_lot)
num_lot count_total count_good count_fail bin1 bin2 bin3 bin4 bin5 bin6 bin7 bin8
adfe 323 300 23 22 1 0 0 0 0 0 0
333 4312 4300 12 10 2 0 0 0 0 0 0
All of the names were changed from the originals because they were non-English, so there may be typos.
Now I need to sum by num_lot, name_product, or sabun.
id_summary is unique.
Thanks
As suspected in the comments: it seems like you simply need a subquery that groups your table by the column num_lot.
SELECT
num_lot,
SUM(count_total),
SUM(count_good)
-- some more SUM()
FROM (
--<your query>
) s
GROUP BY num_lot
It was asked in the comments what the s stands for: a subquery needs an alias, an identifier. Because I didn't want to think about a better name, I just called the subselect s. It is short for AS s.
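The shape of that re-aggregating subquery can be sketched with sqlite3 (a toy summary table with invented values stands in for the full query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE summary (num_lot TEXT, count_total INT, count_good INT)")
con.executemany("INSERT INTO summary VALUES (?, ?, ?)",
                [("adfe", 10, 8), ("adfe", 1000, 999), ("333", 10000, 9999)])

# Wrap the detailed query in a derived table (aliased "s") and
# re-aggregate its output by num_lot.
rows = con.execute("""
    SELECT num_lot, SUM(count_total), SUM(count_good)
    FROM (SELECT * FROM summary) s
    GROUP BY num_lot
    ORDER BY num_lot
""").fetchall()
print(rows)  # [('333', 10000, 9999), ('adfe', 1010, 1007)]
```

The inner SELECT here is a placeholder; in the real query it would be the full join with the bin pivot columns.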
It sounds like you want to use crosstab() -- https://www.postgresql.org/docs/current/tablefunc.html
I have the following table:
Sequence Change
100 0
101 0
103 0
106 0
107 1
110 0
112 1
114 0
115 0
121 0
126 1
127 0
134 0
I need an additional column, Group, whose values increment based on the occurrence of 1 in Change. How is that done? I'm using Microsoft SQL Server 2012.
Sequence Change Group
100 0 0
101 0 0
103 0 0
106 0 0
107 1 1
110 0 1
112 1 2
114 0 2
115 0 2
121 0 2
126 1 3
127 0 3
134 0 3
You want a cumulative sum:
select t.*, sum(change) over (order by sequence) as grp
from t;
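The cumulative sum can be tried with Python's sqlite3 (SQLite 3.25+ for window functions; SQL Server 2012 supports the same SUM() OVER (ORDER BY ...) form), using a subset of the question's sample values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (sequence INT, change INT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(100, 0), (107, 1), (110, 0), (112, 1), (114, 0)])

# A running SUM over the sequence order turns each 1 into a new group number.
rows = con.execute("""
    SELECT sequence, change, SUM(change) OVER (ORDER BY sequence) AS grp
    FROM t
    ORDER BY sequence
""").fetchall()
print(rows)  # [(100, 0, 0), (107, 1, 1), (110, 0, 1), (112, 1, 2), (114, 0, 2)]
```

Each row's grp is the count of 1s seen so far, matching the expected Group column.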
I have a situation where I have a table, dbo.JsonData, with a single varchar(max) column. It has x number of rows, each holding JSON with x number of properties.
How can I create a query that will allow me to turn the result set from a select * query into a row/column result set?
Here is what I have tried:
SELECT *
FROM JSONDATA
FOR JSON Path
But it returns a single row of the json data all in a single column:
JSON_F52E2B61-18A1-11d1-B105-00805F49916B
[{"Json_Data":"{\"Serial_Number\":\"12345\",\"Gateway_Uptime\":17,\"Defrost_Cycles\":0,\"Freeze_Cycles\":2304,\"Float_Switch_Raw_ADC\":1328,\"Bin_status\":2304,\"Line_Voltage\":0,\"ADC_Evaporator_Temperature\":0,\"Mem_Sw\":1280,\"Freeze_Timer\":2560,\"Defrost_Timer\":593,\"Water_Flow_Switch\":3328,\"ADC_Mid_Temperature\":2560,\"ADC_Water_Temperature\":0,\"Ambient_Temperature\":71,\"Mid_Temperature\":1259,\"Water_Temperature\":1259,\"Evaporator_Temperature\":1259,\"Ambient_Temperature_Off_Board\":0,\"Ambient_Temperature_On_Board\":0,\"Gateway_Info\":\"{\\\"temp_sensor\\\":0.00,\\\"temp_pcb\\\":82.00,\\\"gw_uptime\\\":17.00,\\\"winc_fw\\\":\\\"19.5.4\\\",\\\"gw_fw_version\\\":\\\"0.0.0\\\",\\\"gw_fw_version_git\\\":\\\"2a75f20-dirty\\\",\\\"gw_sn\\\":\\\"328\\\",\\\"heap_free\\\":11264.00,\\\"gw_sig_csq\\\":0.00,\\\"gw_sig_quality\\\":0.00,\\\"wifi_sig_strength\\\":-63.00,\\\"wifi_resets\\\":0.00}\",\"ADC_Ambient_Temperature\":1120,\"Control_State\":\"Bin Full\",\"Compressor_Runtime\":134215680}"},{"Json_Data":"{\"Serial_Number\":\"12345\",\"Gateway_Uptime\":200,\"Defrost_Cycles\":559,\"Freeze_Cycles\":510,\"Float_Switch_Raw_ADC\":106,\"Bin_status\":0,\"Line_Voltage\":119,\"ADC_Evaporator_Temperature\":123,\"Mem_Sw\":113,\"Freeze_Timer\":0,\"Defrost_Timer\":66,\"Water_Flow_Switch\":3328,\"ADC_Mid_Temperature\":2560,\"ADC_Water_Temperature\":0,\"Ambient_Temperature\":71,\"Mid_Temperature\":1259,\"Water_Temperature\":1259,\"Evaporator_Temperature\":54,\"Ambient_Temperature_Off_Board\":0,\"Ambient_Temperature_On_Board\":0,\"Gateway_Info\":\"{\\\"temp_sensor\\\":0.00,\\\"temp_pcb\\\":82.00,\\\"gw_uptime\\\":199.00,\\\"winc_fw\\\":\\\"19.5.4\\\",\\\"gw_fw_version\\\":\\\"0.0.0\\\",\\\"gw_fw_version_git\\\":\\\"2a75f20-dirty\\\",\\\"gw_sn\\\":\\\"328\\\",\\\"heap_free\\\":10984.00,\\\"gw_sig_csq\\\":0.00,\\\"gw_sig_quality\\\":0.00,\\\"wifi_sig_strength\\\":-60.00,\\\"wifi_resets\\\":0.00}\",\"ADC_Ambient_Temperature\":1120,\"Control_State\":\"Defrost\",\"Compressor_Runtime
\":11304}"},{"Json_Data":"{\"Seri...
What am I missing?
I can't specify the columns explicitly because the json strings aren't always the same.
This what I expect:
Serial_Number Gateway_Uptime Defrost_Cycles Freeze_Cycles Float_Switch_Raw_ADC Bin_status Line_Voltage ADC_Evaporator_Temperature Mem_Sw Freeze_Timer Defrost_Timer Water_Flow_Switch ADC_Mid_Temperature ADC_Water_Temperature Ambient_Temperature Mid_Temperature Water_Temperature Evaporator_Temperature Ambient_Temperature_Off_Board Ambient_Temperature_On_Board ADC_Ambient_Temperature Control_State Compressor_Runtime temp_sensor temp_pcb gw_uptime winc_fw gw_fw_version gw_fw_version_git gw_sn heap_free gw_sig_csq gw_sig_quality wifi_sig_strength wifi_resets LastModifiedDateUTC Defrost_Cycle_time Freeze_Cycle_time
12345 251402 540 494 106 0 98 158 113 221 184 0 0 0 1259 1259 1259 33 0 0 0 Freeze 10833 0 78 251402 19.5.4 0.0.0 2a75f20-dirty 328.00000000 10976 0 0 -61 0 2018-03-20 11:15:28.000 0 0
12345 251702 540 494 106 0 98 178 113 517 184 0 0 0 1259 1259 1259 22 0 0 0 Freeze 10838 0 78 251702 19.5.4 0.0.0 2a75f20-dirty 328.00000000 10976 0 0 -62 0 2018-03-20 11:15:42.000 0 0
...
Thank you,
Ron
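Since the JSON keys vary from row to row, one option is to flatten the rows client-side and derive the column list from the union of all keys. A minimal Python sketch (the sample strings below are invented stand-ins for the stored JSON):

```python
import json

# Hypothetical stand-in for the table's rows: each row is one JSON string
# whose keys may differ from row to row.
raw_rows = [
    '{"Serial_Number": "12345", "Gateway_Uptime": 17}',
    '{"Serial_Number": "12345", "Defrost_Cycles": 559}',
]

parsed = [json.loads(r) for r in raw_rows]
# Union of all keys, in first-seen order, becomes the column list.
columns = list(dict.fromkeys(k for row in parsed for k in row))
# Missing keys become None (NULL), so every row has the same width.
table = [[row.get(c) for c in columns] for row in parsed]
print(columns)  # ['Serial_Number', 'Gateway_Uptime', 'Defrost_Cycles']
print(table)    # [['12345', 17, None], ['12345', None, 559]]
```

Server-side, SQL Server 2016+ offers OPENJSON, but turning an unknown key set into columns there still requires dynamic SQL.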
I want a list of all timezones in the mysql timezone tables, and need to select:
1) Their current offset from GMT
2) Whether DST is used by that timezone (not whether it's currently in use, just whether DST is considered at some point in the year for that timezone)
Reason:
I need to build a web form and match the users time zone information (which I can generate from javascript) to the correct time zone stored in the mysql DB. I can find UTC offset and get a DST flag from javascript functions.
Try this query. The offsettime column is Offset / 60 / 60, i.e. the offset in hours:
SELECT tzname.`Time_zone_id`,(`Offset`/60/60) AS `offsettime`,`Is_DST`,`Name`,`Transition_type_id`,`Abbreviation`
FROM `time_zone_transition_type` AS `transition`, `time_zone_name` AS `tzname`
WHERE transition.`Time_zone_id`=tzname.`Time_zone_id`
ORDER BY transition.`Offset` ASC;
The results are
501 -12.00000000 0 0 PHOT Pacific/Enderbury
369 -12.00000000 0 0 GMT+12 Etc/GMT+12
513 -12.00000000 0 1 KWAT Pacific/Kwajalein
483 -12.00000000 0 1 KWAT Kwajalein
518 -11.50000000 0 1 NUT Pacific/Niue
496 -11.50000000 0 1 SAMT Pacific/Apia
528 -11.50000000 0 1 SAMT Pacific/Samoa
555 -11.50000000 0 1 SAMT US/Samoa
521 -11.50000000 0 1 SAMT Pacific/Pago_Pago
496 -11.44888889 0 0 LMT Pacific/Apia
528 -11.38000000 0 0 LMT Pacific/Samoa
555 -11.38000000 0 0 LMT US/Samoa
521 -11.38000000 0 0 LMT Pacific/Pago_Pago
518 -11.33333333 0 0 NUT Pacific/Niue
544 -11.00000000 0 3 BST US/Aleutian
163 -11.00000000 0 3 BST America/Nome
518 -11.00000000 0 2 NUT Pacific/Niue
496 -11.00000000 0 2 WST Pacific/Apia
544 -11.00000000 0 0 NST US/Aleutian
163 -11.00000000 0 0 NST America/Nome
528 -11.00000000 0 4 SST Pacific/Samoa
528 -11.00000000 0 3 BST Pacific/Samoa