MySQL: How to select the UTC offset and DST for all timezones? - sql

I want a list of all timezones in the mysql timezone tables, and need to select:
1) Their current offset from GMT
2) Whether DST is used by that timezone (not whether it's currently in use, just whether DST is considered at some point in the year for that timezone)
Reason:
I need to build a web form and match the user's time zone information (which I can generate from JavaScript) to the correct time zone stored in the MySQL DB. I can find the UTC offset and get a DST flag from JavaScript functions.

Try this query. The offsettime column is the stored Offset (in seconds) divided by 60 / 60, i.e. the offset in hours:
SELECT tzname.`Time_zone_id`,
       (transition.`Offset` / 60 / 60) AS `offsettime`,
       transition.`Is_DST`,
       tzname.`Name`,
       transition.`Transition_type_id`,
       transition.`Abbreviation`
FROM `time_zone_transition_type` AS `transition`
JOIN `time_zone_name` AS `tzname`
  ON transition.`Time_zone_id` = tzname.`Time_zone_id`
ORDER BY transition.`Offset` ASC;
The results (columns: Time_zone_id, offsettime, Is_DST, Transition_type_id, Abbreviation, Name) are:
501 -12.00000000 0 0 PHOT Pacific/Enderbury
369 -12.00000000 0 0 GMT+12 Etc/GMT+12
513 -12.00000000 0 1 KWAT Pacific/Kwajalein
483 -12.00000000 0 1 KWAT Kwajalein
518 -11.50000000 0 1 NUT Pacific/Niue
496 -11.50000000 0 1 SAMT Pacific/Apia
528 -11.50000000 0 1 SAMT Pacific/Samoa
555 -11.50000000 0 1 SAMT US/Samoa
521 -11.50000000 0 1 SAMT Pacific/Pago_Pago
496 -11.44888889 0 0 LMT Pacific/Apia
528 -11.38000000 0 0 LMT Pacific/Samoa
555 -11.38000000 0 0 LMT US/Samoa
521 -11.38000000 0 0 LMT Pacific/Pago_Pago
518 -11.33333333 0 0 NUT Pacific/Niue
544 -11.00000000 0 3 BST US/Aleutian
163 -11.00000000 0 3 BST America/Nome
518 -11.00000000 0 2 NUT Pacific/Niue
496 -11.00000000 0 2 WST Pacific/Apia
544 -11.00000000 0 0 NST US/Aleutian
163 -11.00000000 0 0 NST America/Nome
528 -11.00000000 0 4 SST Pacific/Samoa
528 -11.00000000 0 3 BST Pacific/Samoa
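If it helps to sanity-check the SQL output, the same two facts the question asks for (current UTC offset, and whether a zone ever observes DST during the year) can be computed from the IANA database with Python's zoneinfo module. This is just a cross-check sketch; the helper name and the choice of sample months are mine:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def offset_and_dst(name, year=2024):
    """Return (UTC offset in hours on Jan 1, whether DST is ever in effect)."""
    tz = ZoneInfo(name)
    jan1 = datetime(year, 1, 1, tzinfo=timezone.utc).astimezone(tz)
    offset_hours = jan1.utcoffset().total_seconds() / 3600
    # Sample a few instants spread over the year (covers both hemispheres);
    # any nonzero dst() means the zone uses DST at some point, which mirrors
    # the Is_DST flag in the transition-type table.
    uses_dst = any(
        datetime(year, m, 15, tzinfo=timezone.utc).astimezone(tz).dst()
        for m in (1, 4, 7, 10)
    )
    return offset_hours, uses_dst

print(offset_and_dst("Etc/GMT+12"))        # (-12.0, False)
print(offset_and_dst("America/New_York"))  # (-5.0, True)
```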

Related

Select only one of each id sql server

I know this has been asked a million times before, but due to the joins I am finding it hard to write the required code correctly.
My SQL is
SELECT DISTINCT
    newFvItems.Id,
    outerFvia.[UserRoleId],
    outerFvia.[DefaultStatusId],
    outerFvia.[CanBeAllocated],
    outerFvia.[CanCreate],
    outerFvia.[CanUpdate],
    outerFvia.[CanDelete],
    outerFvia.[CanSeeDraft],
    outerFvia.[CanSeeChecking],
    outerFvia.[CanSeeCompleted],
    outerFvia.[CanDispute],
    outerFvia.[CanResolveDispute],
    outerFvia.[CanAudit],
    1, GETUTCDATE(), 393, GETUTCDATE(), 393, 0,
    outerFvia.[RecycleBinId],
    outerFvia.[FlowAccessId]
FROM FlowVersionItemAccess outerFvia
JOIN FlowVersionItems outerFvi ON outerFvi.Id = outerFvia.FlowVersionItemId
JOIN FlowVersions outerFv ON outerFv.Id = outerFvi.FlowVersionId
JOIN FlowVersionItems newFvItems ON newFvItems.FlowVersionId = 143
WHERE outerFv.Id = 133
AND outerFvia.Deleted = 0 AND outerFvi.Deleted = 0 AND outerFv.Deleted = 0
My desired output is 21 rows; I get 27, and if I remove DISTINCT I get 63.
Sample data:
ID   UserId  val  val  val  val  val  val  val  val  val  val  val  val  DateTime          val  DateTime          val  val  val   val
315  2       2    0    1    1    1    1    0    1    0    0    0    1    12/12/2022 16:53  393  12/12/2022 16:53  393  0    NULL  7
315  6       2    0    1    1    1    1    0    1    0    0    0    1    12/12/2022 16:53  393  12/12/2022 16:53  393  0    NULL  12
315  7       2    0    0    0    0    0    0    1    0    0    0    1    12/12/2022 16:53  393  12/12/2022 16:53  393  0    NULL  16
315  7       2    0    1    1    0    0    0    1    0    0    0    1    12/12/2022 16:53  393  12/12/2022 16:53  393  0    NULL  16
315  18      2    0    0    0    0    0    0    0    0    0    0    1    12/12/2022 16:53  393  12/12/2022 16:53  393  0    NULL  69
315  18      2    0    1    1    0    0    0    1    0    0    0    1    12/12/2022 16:53  393  12/12/2022 16:53  393  0    NULL  69
Expected data:
ID   UserId  val  val  val  val  val  val  val  val  val  val  val  val  DateTime          val  DateTime          val  val  val   val
315  2       2    0    1    1    1    1    0    1    0    0    0    1    12/12/2022 16:53  393  12/12/2022 16:53  393  0    NULL  7
315  6       2    0    1    1    1    1    0    1    0    0    0    1    12/12/2022 16:53  393  12/12/2022 16:53  393  0    NULL  12
315  7       2    0    1    1    0    0    0    1    0    0    0    1    12/12/2022 16:53  393  12/12/2022 16:53  393  0    NULL  16
315  18      2    0    1    1    0    0    0    1    0    0    0    1    12/12/2022 16:53  393  12/12/2022 16:53  393  0    NULL  69
You can see UserRoleId 7 and 18 are duplicated. I tried simply grouping by UserRoleId, but I get errors on FlowVersionItems.Id:
Msg 8120, Level 16, State 1, Line 44
Column 'FlowVersionItems.Id' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
Remove DISTINCT, add GROUP BY newFvItems.Id at the very end of the query, and wrap all the other returned columns in aggregate functions (MIN, MAX, ...).
DISTINCT finds all unique combinations of all selected columns, while you need uniqueness on just one column.
Alternatively, leave only DISTINCT newFvItems.Id in the select part - it depends on what you actually need.
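To see the difference between the two approaches on a toy version of the problem (table and column names invented for illustration), SQLite via Python behaves the same way: DISTINCT keeps both rows per role because the flag columns differ, while GROUP BY plus MAX collapses them to one row per key.

```python
import sqlite3

# Hypothetical miniature of the situation: two permission rows per
# (item_id, role_id) whose flag columns differ.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE access (item_id INT, role_id INT, can_create INT, can_update INT);
INSERT INTO access VALUES
  (315, 7, 0, 0),
  (315, 7, 1, 1),
  (315, 18, 0, 0),
  (315, 18, 1, 1);
""")
distinct_rows = con.execute(
    "SELECT DISTINCT item_id, role_id, can_create, can_update FROM access"
).fetchall()
grouped_rows = con.execute(
    "SELECT item_id, role_id, MAX(can_create), MAX(can_update) "
    "FROM access GROUP BY item_id, role_id"
).fetchall()
print(len(distinct_rows))  # 4: DISTINCT sees four unique column combinations
print(len(grouped_rows))   # 2: one row per (item_id, role_id)
```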
There are two solid approaches to this, and I thought I would post the solution I ended up using in the hope it helps someone else. The GROUP BY solution is good and also produces the correct answer, but this one fits my edge case better.
SELECT
newFvItems.Id,
outerFvia.[Id],
outerFvia.[UserRoleId],
outerFvia.[DefaultStatusId],
outerFvia.[CanBeAllocated],
outerFvia.[CanCreate],
outerFvia.[CanUpdate],
outerFvia.[CanDelete],
outerFvia.[CanSeeDraft],
outerFvia.[CanSeeChecking],
outerFvia.[CanSeeCompleted],
outerFvia.[CanDispute],
outerFvia.[CanResolveDispute],
outerFvia.[CanAudit],
1, GETUTCDATE(), 393, GETUTCDATE(), 393, 0,
outerFvia.[RecycleBinId],
outerFvia.[FlowAccessId]
FROM FlowVersionItemAccess outerFvia
INNER JOIN FlowVersionItems outerFvi ON outerFvi.Id = outerFvia.FlowVersionItemId
AND outerFvi.Deleted = 0
INNER JOIN FlowVersions outerFv ON outerFv.Id = outerFvi.FlowVersionId
AND outerFv.Deleted = 0
INNER JOIN FlowVersionItems newFvItems ON newFvItems.FlowVersionId = 143
AND newFvItems.Deleted = 0
AND newFvItems.[Order] = outerFvi.[Order]
WHERE outerFvia.Deleted = 0
AND outerFv.Id = 133
Here is the GROUP BY solution too:
SELECT
newFvItems.Id,
outerFvia.[UserRoleId],
MAX(outerFvia.[DefaultStatusId]),
MAX(CAST(outerFvia.[CanBeAllocated] As tinyint)),
MAX(CAST(outerFvia.[CanCreate] as tinyint)),
MAX(CAST(outerFvia.[CanUpdate] as tinyint)),
MAX(CAST(outerFvia.[CanDelete] as tinyint)),
MAX(CAST(outerFvia.[CanSeeDraft] as tinyint)),
MAX(CAST(outerFvia.[CanSeeChecking] as tinyint)),
MAX(CAST(outerFvia.[CanSeeCompleted] as tinyint)),
MAX(CAST(outerFvia.[CanDispute] as tinyint))
--, MAX(outerFvia.[CanResolveDispute]), MAX(outerFvia.[CanAudit]), 1, GETUTCDATE(), 393, GETUTCDATE(), 393, 0, MAX(outerFvia.[RecycleBinId]), MAX(outerFvia.[FlowAccessId])
FROM FlowVersionItemAccess outerFvia
JOIN FlowVersionItems outerFvi ON outerFvi.Id = outerFvia.FlowVersionItemId
JOIN FlowVersions outerFv ON outerFv.Id = outerFvi.FlowVersionId
JOIN FlowVersionItems newFvItems ON newFvItems.FlowVersionId = 143
WHERE outerFv.Id = 133
AND outerFvia.Deleted = 0 AND outerFvi.Deleted = 0 AND outerFv.Deleted = 0 AND newFvItems.Deleted = 0
GROUP BY newFvItems.Id, outerFvia.[UserRoleId]
Hopefully posting the solutions I came up with helps someone else; I know this question gets asked a lot, and if it weren't for the joins it would be a lot easier.

ID rows containing values greater than corresponding values based on a criteria from another row

I have a grouped dataframe. I have created a flag that identifies if values in a row are less than the group maximums. This works fine. However, I want to unflag rows where the value contained in a third column is greater than the corresponding value in that same (third) column within each group. I have a feeling there should be an elegant and pythonic way to do this, but I can't figure it out.
The flag I have shown in the code compares the maximum value of tour_duration within each hh_id to the corresponding value of comp_expr and, where the maximum is less, assigns 0 (otherwise 1) to the flag column. However, I want values in the flag column to be 0 if min(arrivaltime) for a subgroup tour_id > max(arrivaltime) of the tour_id whose tour_duration is the maximum within that hh_id. For example, in the given data, tour_id 16300 has the highest value of tour_duration. But tour_id 16200 has min arrivaltime 1080, which is > max(arrivaltime) for tour_id 16300 (960). So the flag for all tour_id 16200 rows should be 0.
Kindly assist.
import pandas as pd
import numpy as np
stops_data = pd.DataFrame({
    'hh_id': [20044]*11 + [20122]*13,
    'tour_id': [16300,16300,16100,16100,16100,16100,16200,16200,16200,16000,16000,
                38100,38100,37900,37900,37900,38000,38000,38000,38000,38000,38000,37800,37800],
    'arrivaltime': [360,960,900,900,900,960,1080,1140,1140,420,840,
                    300,960,780,720,960,1080,1080,1080,1080,1140,1140,480,900],
    'tour_duration': [600,600,60,60,60,60,60,60,60,420,420,
                      660,660,240,240,240,60,60,60,60,60,60,420,420],
    'comp_expr': [1350,1350,268,268,268,268,406,406,406,974,974,
                  1568,1568,606,606,606,298,298,298,298,298,298,840,840]})
stops_data['flag'] = np.where(
    stops_data.groupby('hh_id')['tour_duration'].transform('max')
    < stops_data['comp_expr'], 0, 1)
This is my current output: (screenshot: current dataset and output)
This is my desired output, please see the flag column: (screenshot: desired output, with the changed flag values in bold)
>>> stops_data.loc[stops_data.tour_id
.isin(stops_data.loc[stops_data.loc[stops_data
.groupby(['hh_id','tour_id'])['arrivaltime'].idxmin()]
.groupby('hh_id')['arrivaltime'].idxmax()]['tour_id']), 'flag'] = 0
>>> stops_data
hh_id tour_id arrivaltime tour_duration comp_expr flag
0 20044 16300 360 600 1350 0
1 20044 16300 960 600 1350 0
2 20044 16100 900 60 268 1
3 20044 16100 900 60 268 1
4 20044 16100 900 60 268 1
5 20044 16100 960 60 268 1
6 20044 16200 1080 60 406 0
7 20044 16200 1140 60 406 0
8 20044 16200 1140 60 406 0
9 20044 16000 420 420 974 0
10 20044 16000 840 420 974 0
11 20122 38100 300 660 1568 0
12 20122 38100 960 660 1568 0
13 20122 37900 780 240 606 1
14 20122 37900 720 240 606 1
15 20122 37900 960 240 606 1
16 20122 38000 1080 60 298 0
17 20122 38000 1080 60 298 0
18 20122 38000 1080 60 298 0
19 20122 38000 1080 60 298 0
20 20122 38000 1140 60 298 0
21 20122 38000 1140 60 298 0
22 20122 37800 480 420 840 0
23 20122 37800 900 420 840 0
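The chained .loc expression above is dense; the same logic with the intermediate steps named (variable names are mine) reads:

```python
import numpy as np
import pandas as pd

# Data and initial flag exactly as in the question
stops_data = pd.DataFrame({
    'hh_id': [20044]*11 + [20122]*13,
    'tour_id': [16300,16300,16100,16100,16100,16100,16200,16200,16200,16000,16000,
                38100,38100,37900,37900,37900,38000,38000,38000,38000,38000,38000,37800,37800],
    'arrivaltime': [360,960,900,900,900,960,1080,1140,1140,420,840,
                    300,960,780,720,960,1080,1080,1080,1080,1140,1140,480,900],
    'tour_duration': [600,600,60,60,60,60,60,60,60,420,420,
                      660,660,240,240,240,60,60,60,60,60,60,420,420],
    'comp_expr': [1350,1350,268,268,268,268,406,406,406,974,974,
                  1568,1568,606,606,606,298,298,298,298,298,298,840,840]})
stops_data['flag'] = np.where(
    stops_data.groupby('hh_id')['tour_duration'].transform('max')
    < stops_data['comp_expr'], 0, 1)

# Step 1: for every (hh_id, tour_id), the stop with that tour's earliest arrival
first_stops = stops_data.loc[
    stops_data.groupby(['hh_id', 'tour_id'])['arrivaltime'].idxmin()]
# Step 2: within each household, the tour whose earliest arrival is the latest
late_tours = first_stops.loc[
    first_stops.groupby('hh_id')['arrivaltime'].idxmax(), 'tour_id']
# Step 3: unflag every stop belonging to those tours
stops_data.loc[stops_data['tour_id'].isin(late_tours), 'flag'] = 0
```

With the sample data this picks tours 16200 and 38000, matching the output shown above.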

Parsing Json Data from select query in SQL Server

I have a situation where I have a table, dbo.JsonData, with a single varchar(max) column called Json_Data. It has any number of rows, each holding a JSON string with any number of properties.
How can I create a query that will allow me to turn the result set from a select * query into a row/column result set?
Here is what I have tried:
SELECT *
FROM JSONDATA
FOR JSON Path
But it returns a single row of the json data all in a single column:
JSON_F52E2B61-18A1-11d1-B105-00805F49916B
[{"Json_Data":"{\"Serial_Number\":\"12345\",\"Gateway_Uptime\":17,\"Defrost_Cycles\":0,\"Freeze_Cycles\":2304,\"Float_Switch_Raw_ADC\":1328,\"Bin_status\":2304,\"Line_Voltage\":0,\"ADC_Evaporator_Temperature\":0,\"Mem_Sw\":1280,\"Freeze_Timer\":2560,\"Defrost_Timer\":593,\"Water_Flow_Switch\":3328,\"ADC_Mid_Temperature\":2560,\"ADC_Water_Temperature\":0,\"Ambient_Temperature\":71,\"Mid_Temperature\":1259,\"Water_Temperature\":1259,\"Evaporator_Temperature\":1259,\"Ambient_Temperature_Off_Board\":0,\"Ambient_Temperature_On_Board\":0,\"Gateway_Info\":\"{\\\"temp_sensor\\\":0.00,\\\"temp_pcb\\\":82.00,\\\"gw_uptime\\\":17.00,\\\"winc_fw\\\":\\\"19.5.4\\\",\\\"gw_fw_version\\\":\\\"0.0.0\\\",\\\"gw_fw_version_git\\\":\\\"2a75f20-dirty\\\",\\\"gw_sn\\\":\\\"328\\\",\\\"heap_free\\\":11264.00,\\\"gw_sig_csq\\\":0.00,\\\"gw_sig_quality\\\":0.00,\\\"wifi_sig_strength\\\":-63.00,\\\"wifi_resets\\\":0.00}\",\"ADC_Ambient_Temperature\":1120,\"Control_State\":\"Bin Full\",\"Compressor_Runtime\":134215680}"},{"Json_Data":"{\"Serial_Number\":\"12345\",\"Gateway_Uptime\":200,\"Defrost_Cycles\":559,\"Freeze_Cycles\":510,\"Float_Switch_Raw_ADC\":106,\"Bin_status\":0,\"Line_Voltage\":119,\"ADC_Evaporator_Temperature\":123,\"Mem_Sw\":113,\"Freeze_Timer\":0,\"Defrost_Timer\":66,\"Water_Flow_Switch\":3328,\"ADC_Mid_Temperature\":2560,\"ADC_Water_Temperature\":0,\"Ambient_Temperature\":71,\"Mid_Temperature\":1259,\"Water_Temperature\":1259,\"Evaporator_Temperature\":54,\"Ambient_Temperature_Off_Board\":0,\"Ambient_Temperature_On_Board\":0,\"Gateway_Info\":\"{\\\"temp_sensor\\\":0.00,\\\"temp_pcb\\\":82.00,\\\"gw_uptime\\\":199.00,\\\"winc_fw\\\":\\\"19.5.4\\\",\\\"gw_fw_version\\\":\\\"0.0.0\\\",\\\"gw_fw_version_git\\\":\\\"2a75f20-dirty\\\",\\\"gw_sn\\\":\\\"328\\\",\\\"heap_free\\\":10984.00,\\\"gw_sig_csq\\\":0.00,\\\"gw_sig_quality\\\":0.00,\\\"wifi_sig_strength\\\":-60.00,\\\"wifi_resets\\\":0.00}\",\"ADC_Ambient_Temperature\":1120,\"Control_State\":\"Defrost\",\"Compressor_Runtime
\":11304}"},{"Json_Data":"{\"Seri...
What am I missing?
I can't specify the columns explicitly because the json strings aren't always the same.
This is what I expect:
Serial_Number Gateway_Uptime Defrost_Cycles Freeze_Cycles Float_Switch_Raw_ADC Bin_status Line_Voltage ADC_Evaporator_Temperature Mem_Sw Freeze_Timer Defrost_Timer Water_Flow_Switch ADC_Mid_Temperature ADC_Water_Temperature Ambient_Temperature Mid_Temperature Water_Temperature Evaporator_Temperature Ambient_Temperature_Off_Board Ambient_Temperature_On_Board ADC_Ambient_Temperature Control_State Compressor_Runtime temp_sensor temp_pcb gw_uptime winc_fw gw_fw_version gw_fw_version_git gw_sn heap_free gw_sig_csq gw_sig_quality wifi_sig_strength wifi_resets LastModifiedDateUTC Defrost_Cycle_time Freeze_Cycle_time
12345 251402 540 494 106 0 98 158 113 221 184 0 0 0 1259 1259 1259 33 0 0 0 Freeze 10833 0 78 251402 19.5.4 0.0.0 2a75f20-dirty 328.00000000 10976 0 0 -61 0 2018-03-20 11:15:28.000 0 0
12345 251702 540 494 106 0 98 178 113 517 184 0 0 0 1259 1259 1259 22 0 0 0 Freeze 10838 0 78 251702 19.5.4 0.0.0 2a75f20-dirty 328.00000000 10976 0 0 -62 0 2018-03-20 11:15:42.000 0 0
...
Thank you,
Ron
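For what it's worth, the shape of the problem is visible client-side too: each row is a JSON object, and Gateway_Info is itself a JSON string nested inside it (double-encoded), so it takes two decode passes to flatten. A Python sketch with trimmed-down sample strings (a small subset of the real keys):

```python
import json

# Two rows as they might sit in the varchar(max) column, trimmed to a few
# keys. Gateway_Info is a JSON string inside the JSON string (double-encoded),
# so it needs a second json.loads pass.
rows = [
    '{"Serial_Number": "12345", "Gateway_Uptime": 17,'
    ' "Gateway_Info": "{\\"temp_pcb\\": 82.00, \\"winc_fw\\": \\"19.5.4\\"}"}',
    '{"Serial_Number": "12345", "Gateway_Uptime": 200,'
    ' "Gateway_Info": "{\\"temp_pcb\\": 82.00, \\"winc_fw\\": \\"19.5.4\\"}"}',
]

def flatten(raw):
    doc = json.loads(raw)                        # first pass: the outer object
    inner = json.loads(doc.pop("Gateway_Info"))  # second pass: the nested string
    doc.update(inner)                            # merge nested keys as flat columns
    return doc

table = [flatten(r) for r in rows]
print(table[0]["Gateway_Uptime"], table[0]["temp_pcb"])  # 17 82.0
```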

Teradata how to select first occurrence

I have a table similar to the picture below. In this table, I have some duplicates in SESS_KEY. I only want the rows that do not have duplicates; for the rows that do have duplicates, I only want the ones with CALL_TRNSF_FLG set to 1. I have manually created an INCLUDE field to show which rows I want (1). How can I achieve this?
Thank you for your help!
Here is the sample data:
INCLUDE SESS_KEY SESS_CALL_ST_DT_TS CONN_ID TLK_DUR HLD_DUR AFT_CALL_WRK_DUR TRNSF_TLK_TM TRNSF_HLD_TM TRNSF_ACW_TM CALL_TRNSF_FLG
0 24067A16-A24A-45BE-E3AA-7E0BFE7ECDA5 7/25/2016 9:07 0141028541267da5 918 57 26 ? ? ? 0
1 24067A16-A24A-45BE-E3AA-7E0BFE7ECDA5 7/25/2016 9:07 0521028304ed75f8 236 0 3 918 57 26 1
0 49FFAB03-C19C-4291-6BAB-267CC95E27CF 7/6/2016 17:25 014102854125f060 278 0 130 ? ? ? 0
1 49FFAB03-C19C-4291-6BAB-267CC95E27CF 7/6/2016 17:25 0521028304e98111 391 0 8 278 0 130 1
0 7CCBBF2F-6FBC-4812-BAB1-4E258B88C20A 7/12/2016 11:34 05200282b0814531 269 0 190 406 0 124 1
1 7CCBBF2F-6FBC-4812-BAB1-4E258B88C20A 7/12/2016 11:34 013b028225ed6484 406 0 124 ? ? ? 0
0 CA32F05E-5C8A-4849-63A4-15B2342081B8 7/6/2016 11:38 02420282b06776f9 256 0 114 297 0 67 1
1 CA32F05E-5C8A-4849-63A4-15B2342081B8 7/6/2016 11:38 014102854125ea06 297 0 67 ? ? ? 0
0 E75EF405-1C0D-45E4-EC97-88D3CD7B5E55 7/5/2016 15:03 1.41E+214 2,691 0 255 ? ? ? 0
1 E75EF405-1C0D-45E4-EC97-88D3CD7B5E55 7/5/2016 15:03 0243028304ee14a5 314 0 9 2,691 0 255 1
1 04F8CC43-710B-4E4D-D8A1-DAC45FB3FF24 7/19/2016 16:49 1.41E+14 123 100 43 ? ? ? 0
1 0AFB6070-9D95-47B0-B0AF-D34ED70FCE8E 7/22/2016 14:20 0243028304f1ffca 335 239 79 ? ? ? 0
1 13581E6A-A568-4993-098C-05233CF293AE 7/15/2016 11:22 014102854126375a 196 150 258 ? ? ? 0
1 1A6AE4BE-1858-4CB3-83B1-CFF7A9E88EF9 7/8/2016 19:09 02420282b068325e 120 0 0 ? ? ? 0
1 24CE6C11-AF85-4770-53B4-FE20200339DF 7/28/2016 12:47 0243028304f3401b 181 0 0 107 0 48 1
1 293F85F4-34BC-44B1-43B5-A6B3B8886FC8 7/1/2016 8:33 0521028304e8778e 70 0 21 149 0 1 1
1 2BD0216A-B3F3-4597-1CBD-095F8D291736 7/7/2016 8:41 0243028304ee83b2 1,037 0 187 ? ? ? 0
1 2C774BE2-5B26-47C0-B69F-69B04A63F879 7/25/2016 18:26 013b028225edd637 1,481 0 110 ? ? ? 0
1 3F43720B-B6AE-4335-4FB5-9275A952989F 7/11/2016 11:08 013b028225ed5830 155 0 0 ? ? ? 0
1 41B056DC-8D3F-425D-BD9E-10A3EB0E944D 7/27/2016 11:13 05200282b084c5d5 34 0 0 ? ? ? 0
1 420483AD-8586-45C7-68AB-675E50EF2B92 7/5/2016 11:03 013b028225ed2765 1,320 0 283 ? ? ? 0
1 43A14051-6EAA-4251-3FA1-F2FBAE6DB643 7/23/2016 12:16 05200282b083f410 359 0 143 ? ? ? 0
1 494F3EA9-EA47-4F7B-C795-61B8B23DA0FA 7/21/2016 9:27 02420282b06ac6c3 0 0 0 ? ? ? 0
1 4D743557-DE09-4007-D58C-EFB09EF6713C 7/29/2016 17:19 05200282b085844a 951 361 240 ? ? ? 0
1 546C0FD0-5445-44F8-0789-1FA62BB57CDB 7/15/2016 18:14 1.41E+59 686 0 60 ? ? ? 0
1 5487C587-D37C-4E5C-9A88-87A3978996CD 7/28/2016 18:51 014102854126a96d 833 0 534 ? ? ? 0
1 5AB8D65A-28C7-4CAD-5796-3A7B720A47F7 7/20/2016 8:56 0141028541265a9f 274 111 381 ? ? ? 0
1 6866B3F8-F953-43BF-9089-B1FE699DEE07 7/19/2016 16:25 05200282b0830349 35 0 180 ? ? ? 0
1 6A4566B3-71B9-47BB-75BC-37B6E644D704 7/19/2016 10:14 02420282b06a3d7b 0 0 0 ? ? ? 0
1 72D17A78-FA5D-42DA-E39A-F7B950C15E22 7/5/2016 18:05 02420282b0675679 606 0 167 ? ? ? 0
1 73657A2A-34B7-4921-E691-49827E46128D 7/20/2016 11:02 02420282b06a8ae8 31 0 264 ? ? ? 0
1 7520F825-DA7B-4D5F-7AA9-3ADD9AAC5BE7 7/5/2016 18:53 05200282b07fd5df 354 0 20 ? ? ? 0
1 76DA5FB6-3EDD-45E1-B8BB-C70EA1CB4E53 7/1/2016 10:07 0243028304ed7c74 132 0 20 105 0 66 1
1 810B9E66-AA32-4BB0-128D-8E3FFC86EB0E 7/22/2016 13:37 013b028225edc13d 1,621 109 34 ? ? ? 0
1 81402352-DE71-45E4-4EAD-C1FFE20F8288 7/11/2016 9:28 0521028304ea456a 38 0 0 71 0 0 1
1 81EA3AD7-B721-4718-9AB0-6FB005252F64 7/12/2016 17:15 013b028225ed6ad5 812 129 60 ? ? ? 0
1 870632C0-4D80-41DC-AD84-12972DBC5AF2 7/23/2016 14:20 0243028304f229ee 1,084 0 5 ? ? ? 0
1 886919E7-80DB-4E2C-D5B2-8B83420F4D27 7/26/2016 19:22 0243028304f2da42 533 465 155 ? ? ? 0
1 8A18B8A2-1405-446B-71BA-A3FBAC816C12 7/8/2016 16:13 013b028225ed4f72 318 237 0 ? ? ? 0
1 8A54DAD7-2745-4BFB-22BE-BF479C1A8710 7/7/2016 15:25 02420282b067da6c 42 0 94 104 0 38 1
1 8D5EB433-2D50-4A67-00AC-E768A549B56E 7/26/2016 14:35 0521028304edf692 55 0 0 ? ? ? 0
1 8F222904-EC4E-4395-D496-A25FB408AD95 7/29/2016 17:09 0243028304f3a5f0 88 0 137 ? ? ? 0
1 9310922F-D545-4E78-42B2-E1B508F5A436 7/7/2016 12:23 02420282b067c625 155 2 15 ? ? ? 1
1 A605BF7A-50E6-4114-1981-7B3988079B7E 7/6/2016 16:56 02420282b0679dfa 89 0 293 ? ? ? 0
1 AA23384F-C4DA-4357-3DAF-7CD8337831DD 7/9/2016 11:20 014102854126082a 138 0 210 ? ? ? 0
1 AF5AD7E2-7584-4ACD-B28E-1AB2DB87BDAA 7/21/2016 17:36 0243028304f1cda6 0 0 0 ? ? ? 0
1 B66D3851-83BE-4E0E-7D9B-1719E378905D 7/19/2016 12:41 0243028304f122cd 81 0 0 ? ? ? 0
1 BB2FA3CD-AB6D-42BD-3CB7-EEC27E3403BF 7/15/2016 14:38 0243028304f0753e 65 0 195 92 3 29 1
1 BBA4031A-7876-4614-F9BC-718A6D8A16A7 7/13/2016 17:42 0521028304eb1a47 163 0 85 ? ? ? 0
1 BCF2B7D8-CBD0-497F-EEA7-FDEC46EFEEBE 7/7/2016 12:09 0521028304e9acaa 44 0 8 ? ? ? 0
1 BE9386B6-424E-40F9-67A1-A56EF6C18B77 7/27/2016 20:03 013b028225edecc5 1 0 0 ? ? ? 0
1 C0F0EF71-F52B-4D10-E9B4-DA1AF4343CC7 7/11/2016 15:21 05200282b08111ee 49 0 61 368 0 597 1
1 C84FCA28-2372-4F8B-52B1-4BC5E9AD128B 7/19/2016 13:06 013b028225eda06c 59 0 32 162 0 0 1
1 C8B3CC50-DEC3-4F24-D0A2-E32A03AFA786 7/13/2016 13:22 05200282b0819c4d 126 0 0 ? ? ? 0
1 CC119F61-A70F-4DB3-7C8C-DCE9A1C3BCB5 7/27/2016 9:48 02420282b06c1330 0 0 0 ? ? ? 0
1 D57D43C7-F9F0-42B9-C6B6-23B1414D9F12 7/14/2016 15:04 05200282b081ee2a 36 0 17 ? ? ? 0
1 E438B480-8F98-469C-3899-E6F10DD1F755 7/5/2016 20:12 02420282b0675cea 3,874 163 7 ? ? ? 0
1 F223F1F4-F50D-41F6-DA9F-46EA2972F394 7/27/2016 20:13 05200282b084f966 417 0 6 ? ? ? 0
1 FB3B0CB1-89D8-4B47-E4BA-465E57D52B0D 7/14/2016 14:21 02420282b0695b07 138 0 2 ? ? ? 0
SELECT *
FROM tab
QUALIFY
-- rows that do not have duplicates
COUNT(*)
OVER (PARTITION BY SESS_KEY) = 1
-- the ones with CALL_TRNSF_FLG set to 1
OR CALL_TRNSF_FLG = 1
If there might be multiple rows with CALL_TRNSF_FLG = 1 and you only want one row per session:
QUALIFY
ROW_NUMBER()
OVER (PARTITION BY SESS_KEY
ORDER BY CALL_TRNSF_FLG DESC) = 1
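Teradata's QUALIFY is not available in most other engines; the same filter can be expressed with a window function in a derived table. A minimal SQLite sketch of the "singleton OR flag = 1" rule (toy table and values of my own):

```python
import sqlite3

# Toy version of the session table: one duplicated session and one singleton.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab (sess_key TEXT, conn_id TEXT, call_trnsf_flg INT);
INSERT INTO tab VALUES
  ('A', 'a1', 0),  -- duplicate session: this row is dropped
  ('A', 'a2', 1),  -- duplicate session: kept because the flag is 1
  ('B', 'b1', 0);  -- singleton: kept as-is
""")
kept = con.execute("""
SELECT sess_key, conn_id, call_trnsf_flg
FROM (SELECT t.*, COUNT(*) OVER (PARTITION BY sess_key) AS n FROM tab t)
WHERE n = 1 OR call_trnsf_flg = 1
ORDER BY sess_key
""").fetchall()
print(kept)  # [('A', 'a2', 1), ('B', 'b1', 0)]
```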
You can filter the rows you want like this:
WHERE
CONN_ID NOT IN (
SELECT
MIN(CONN_ID)
FROM
<<your_table>>
GROUP BY
SESS_KEY
HAVING
count(*) > 1
)
This assumes that the pair (SESS_KEY, CONN_ID) is unique. Otherwise, you should find a unique set of columns and filter by those.
Hi, you can write the query as follows.
Using JOINS
select t1.* from table_name t1
join
(select count(*), SESS_KEY from table_name
group by SESS_KEY having count(*) > 1 ) t2
on t1.SESS_KEY=t2.SESS_KEY;
Using WHERE clause
select t1.* from table_name t1,
(select count(*), SESS_KEY from table_name
group by SESS_KEY having count(*) > 1 ) t2
where t1.SESS_KEY=t2.SESS_KEY;
This gives you all the rows that have duplicates in the SESS_KEY column.
To update:
merge into table_name t1
using
(select count(*), SESS_KEY from table_name
group by SESS_KEY having count(*) > 1 ) t2
on t1.SESS_KEY=t2.SESS_KEY
when matched then
update set
CALL_TRNSF_FLG=1;
Use the MERGE statement to update the table.

Extract the data based on a condition from SQL Server 2008

This is my table:
ProposalNo itemtypenum Dummy
2015005005 427 1
2015006003 478 1
2015006003 2243 0
2015006003 2249 0
2015006004 470 1
2015006005 2247 0
2015006005 2298 0
2015006007 478 1
2015006008 471 1
2015006008 2245 0
I need the result as
ProposalNo itemtypenum Dummy
2015005005 427 1
2015006003 478 1
2015006003 2243 0
2015006003 2249 0
2015006004 470 1
2015006007 478 1
2015006008 471 1
To enhance the previous logic: when there is a ProposalNo that has a Dummy = 1 row with itemtypenum = 478, also keep that proposal's rows with Dummy = 0; otherwise, ignore the Dummy = 0 rows.
one way to do it (there are several):
SELECT t1.ProposalNo, t1.itemtypenum, t1.Dummy
FROM table t1
WHERE t1.ProposalNo IN (SELECT ProposalNo FROM table WHERE Dummy = 1)
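Note the query above keeps every row of a proposal that has any Dummy = 1 row, so it would still return 2015006008 / 2245. If the itemtypenum = 478 condition matters, one variant (sketched here in SQLite with the sample data) keeps Dummy = 1 rows unconditionally and restricts the Dummy = 0 rows to proposals that have a 478 / Dummy = 1 row, which reproduces the seven expected rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (ProposalNo TEXT, itemtypenum INT, Dummy INT);
INSERT INTO t VALUES
 ('2015005005', 427, 1), ('2015006003', 478, 1), ('2015006003', 2243, 0),
 ('2015006003', 2249, 0), ('2015006004', 470, 1), ('2015006005', 2247, 0),
 ('2015006005', 2298, 0), ('2015006007', 478, 1), ('2015006008', 471, 1),
 ('2015006008', 2245, 0);
""")
rows = con.execute("""
SELECT ProposalNo, itemtypenum, Dummy
FROM t
WHERE Dummy = 1                       -- Dummy = 1 rows are always kept
   OR ProposalNo IN (SELECT ProposalNo FROM t
                     WHERE Dummy = 1 AND itemtypenum = 478)
ORDER BY ProposalNo, itemtypenum
""").fetchall()
for r in rows:
    print(*r)
```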