SQL Server: get start and end times within multiple night shifts - sql

How can I get the overall start and end time for each employee in this list? I could attach a date to these times and take the MIN and MAX, but as you can see, row 3 runs into the next day yet is recorded under the same ShiftDate, because it is a night shift.
I have also included a normal day-shift employee, to make sure the logic is right for both cases.
EmployeeId ShiftDate ShiftStartTime ShiftEndTime
-----------------------------------------------------
20040 2017-11-01 21:00:00 23:00:00
20040 2017-11-01 23:00:00 00:30:00
20040 2017-11-01 00:30:00 06:00:00
20124 2017-11-01 09:00:00 16:30:00
20124 2017-11-01 16:30:00 22:00:00
20124 2017-11-01 22:00:00 22:30:00
I need it like below:
EmployeeId ShiftDate ShiftStartTime ShiftEndTime
----------------------------------------------------
20040 2017-11-01 21:00:00 06:00:00
20124 2017-11-01 09:00:00 22:30:00

In a commercial environment we solved this by attaching a flag to each shift. The flag indicates the 'reporting date' of the shift: it has a value of 1 if the 'reporting/administrative date' is the 'next' day, 0 for the same day, and -1 for the previous day (which we never used; it depends on your scenario).
I modified your table to show a possible SHIFTS table, which should probably also have a NAME column (Morning, Afternoon, Day, Night shift, etc.)
ReportFlag ShiftStartTime ShiftEndTime
1 21:00:00 23:00:00
1 23:00:00 00:30:00
0 00:30:00 06:00:00
0 09:00:00 16:30:00
0 16:30:00 22:00:00
1 22:00:00 22:30:00
Notice how I added a 1 to say that 'this shift' is actually considered to belong to the 'next' day.
You can then feed the flag value (0 or 1) into the DATE functions in your queries too, as sketched below.
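A minimal sketch of that idea, assuming a hypothetical ShiftLog table that stores the calendar date each segment was actually worked (WorkedDate) together with its ReportFlag; the table and column names are illustrative, not from the original schema:
-- Roll each worked segment up under its administrative ("reporting") date.
-- ShiftLog and WorkedDate are hypothetical names, not from the question.
SELECT
    EmployeeId,
    DATEADD(DAY, ReportFlag, WorkedDate) AS ReportingDate,
    ShiftStartTime,
    ShiftEndTime
FROM ShiftLog;
Grouping by ReportingDate then lets a whole night shift roll up under one administrative day, with the overall start and end taken from its first and last segment.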

Related

Pandas: merge two time series and get the mean values during the period when these two have overlapped time period

I have two pandas DataFrames, as follows:
ts1
Out[50]:
soil_moisture_ids41
date_time
2007-01-07 05:00:00 0.1830
2007-01-07 06:00:00 0.1825
2007-01-07 07:00:00 0.1825
2007-01-07 08:00:00 0.1825
2007-01-07 09:00:00 0.1825
... ...
2017-10-10 20:00:00 0.0650
2017-10-10 21:00:00 0.0650
2017-10-10 22:00:00 0.0650
2017-10-10 23:00:00 0.0650
2017-10-11 00:00:00 0.0650
[94316 rows x 3 columns]
and the other one is
ts2
Out[51]:
soil_moisture_ids42
date_time
2016-07-20 00:00:00 0.147
2016-07-20 01:00:00 0.148
2016-07-20 02:00:00 0.149
2016-07-20 03:00:00 0.150
2016-07-20 04:00:00 0.152
... ...
2019-12-31 19:00:00 0.216
2019-12-31 20:00:00 0.216
2019-12-31 21:00:00 0.215
2019-12-31 22:00:00 0.215
2019-12-31 23:00:00 0.215
[30240 rows x 3 columns]
You can see that from 2007-01-07 to 2016-07-19 only ts1 has data points, and from 2016-07-20 to 2017-10-11 the two series overlap. Now I want to combine these two DataFrames: during the overlapped period I want the mean of ts1 and ts2, and during the non-overlapped periods (2007-01-07 to 2016-07-19 and 2017-10-12 to 2019-12-31) the value at each timestamp should be taken from whichever series has data. How can I do this?
Thanks!
Use concat with an aggregate mean: if a timestamp has only one value you get that value back, and if it has multiple you get their mean. The resulting DatetimeIndex is also sorted:
s = pd.concat([ts1, ts2]).groupby(level=0).mean()
Just store the concatenated series first and then apply the mean, i.e. merged_ts = pd.concat([ts1, ts2]) and then mean_ts = merged_ts.groupby(level=0).mean()
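A small, self-contained demonstration of the behaviour (toy series standing in for ts1 and ts2; the values are made up):
import pandas as pd

# Toy stand-ins for ts1 and ts2; they overlap at 00:00 and 01:00 on 2016-07-20
idx1 = pd.date_range("2016-07-19 22:00", periods=4, freq="h")
idx2 = pd.date_range("2016-07-20 00:00", periods=4, freq="h")
ts1 = pd.Series([0.10, 0.12, 0.14, 0.16], index=idx1)
ts2 = pd.Series([0.20, 0.22, 0.24, 0.26], index=idx2)

s = pd.concat([ts1, ts2]).groupby(level=0).mean()
print(s)  # lone timestamps keep their value; overlapping ones become the mean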

Why do I get different values when I extract data from netCDF files using CDO and ArcGIS for the same grid point?

Details of the raw data (Mnth.nc):
netcdf Mnth {
dimensions:
    time = UNLIMITED ; // (480 currently)
    bnds = 2 ;
    longitude = 25 ;
    latitude = 33 ;
variables:
    double time(time) ;
        time:standard_name = "time" ;
        time:long_name = "verification time generated by wgrib2 function verftime()" ;
        time:bounds = "time_bnds" ;
        time:units = "seconds since 1970-01-01 00:00:00.0 0:00" ;
        time:calendar = "standard" ;
        time:axis = "T" ;
    double time_bnds(time, bnds) ;
    double longitude(longitude) ;
        longitude:standard_name = "longitude" ;
        longitude:long_name = "longitude" ;
        longitude:units = "degrees_east" ;
        longitude:axis = "X" ;
    double latitude(latitude) ;
        latitude:standard_name = "latitude" ;
        latitude:long_name = "latitude" ;
        latitude:units = "degrees_north" ;
        latitude:axis = "Y" ;
    float APCP_sfc(time, latitude, longitude) ;
        APCP_sfc:long_name = "Total Precipitation" ;
        APCP_sfc:units = "kg/m^2" ;
        APCP_sfc:_FillValue = 9.999e+20f ;
        APCP_sfc:missing_value = 9.999e+20f ;
        APCP_sfc:cell_methods = "time: sum" ;
        APCP_sfc:short_name = "APCP_surface" ;
        APCP_sfc:level = "surface" ;
}
Detailed information about the raw data (Mnth.nc), from cdo sinfo:
File format : NetCDF4 classic
 -1 : Institut Source   T Steptype Levels Num    Points Num Dtype : Parameter ID
  1 : unknown  unknown  v instant       1   1       825   1  F32  : -1
Grid coordinates :
  1 : lonlat : points=825 (25x33)
      longitude : 87 to 89.88 by 0.12 degrees_east
      latitude  : 25.08 to 28.92 by 0.12 degrees_north
Vertical coordinates :
  1 : surface : levels=1
Time coordinate : 480 steps
  RefTime = 1970-01-01 00:00:00  Units = seconds  Calendar = standard  Bounds = true
YYYY-MM-DD hh:mm:ss YYYY-MM-DD hh:mm:ss YYYY-MM-DD hh:mm:ss YYYY-MM-DD hh:mm:ss
1980-01-16 12:30:00 1980-02-15 12:30:00 1980-03-16 12:30:00 1980-04-16 00:30:00
1980-05-16 12:30:00 1980-06-16 00:30:00 1980-07-16 12:30:00 1980-08-16 12:30:00
1980-09-16 00:30:00 1980-10-16 12:30:00 1980-11-16 00:30:00 1980-12-16 12:30:00
1981-01-16 12:30:00 1981-02-15 00:30:00 1981-03-16 12:30:00 1981-04-16 00:30:00
1981-05-16 12:30:00 1981-06-16 00:30:00 1981-07-16 12:30:00 1981-08-16 12:30:00
1981-09-16 00:30:00 1981-10-16 12:30:00 1981-11-16 00:30:00 1981-12-16 12:30:00
1982-01-16 12:30:00 1982-02-15 00:30:00 1982-03-16 12:30:00 1982-04-16 00:30:00
1982-05-16 12:30:00 1982-06-16 00:30:00 1982-07-16 12:30:00 1982-08-16 12:30:00
1982-09-16 00:30:00 1982-10-16 12:30:00 1982-11-16 00:30:00 1982-12-16 12:30:00
1983-01-16 12:30:00 1983-02-15 00:30:00 1983-03-16 12:30:00 1983-04-16 00:30:00
1983-05-16 12:30:00 1983-06-16 00:30:00 1983-07-16 12:30:00 1983-08-16 12:30:00
1983-09-16 00:30:00 1983-10-16 12:30:00 1983-11-16 00:30:00 1983-12-16 12:30:00
1984-01-16 12:30:00 1984-02-15 12:30:00 1984-03-16 12:30:00 1984-04-16 00:30:00
1984-05-16 12:30:00 1984-06-16 00:30:00 1984-07-16 12:30:00 1984-08-16 12:30:00
1984-09-16 00:30:00 1984-10-16 12:30:00 1984-11-16 00:30:00 1984-12-16 12:30:00
................................................................................
............................
2016-01-16 12:30:00 2016-02-15 12:30:00 2016-03-16 12:30:00 2016-04-16 00:30:00
2016-05-16 12:30:00 2016-06-16 00:30:00 2016-07-16 12:30:00 2016-08-16 12:30:00
2016-09-16 00:30:00 2016-10-16 12:30:00 2016-11-16 00:30:00 2016-12-16 12:30:00
2017-01-16 12:30:00 2017-02-15 00:30:00 2017-03-16 12:30:00 2017-04-16 00:30:00
2017-05-16 12:30:00 2017-06-16 00:30:00 2017-07-16 12:30:00 2017-08-16 12:30:00
2017-09-16 00:30:00 2017-10-16 12:30:00 2017-11-16 00:30:00 2017-12-16 12:30:00
2018-01-16 12:30:00 2018-02-15 00:30:00 2018-03-16 12:30:00 2018-04-16 00:30:00
2018-05-16 12:30:00 2018-06-16 00:30:00 2018-07-16 12:30:00 2018-08-16 12:30:00
2018-09-16 00:30:00 2018-10-16 12:30:00 2018-11-16 00:30:00 2018-12-16 12:30:00
2019-01-16 12:30:00 2019-02-15 00:30:00 2019-03-16 12:30:00 2019-04-16 00:30:00
2019-05-16 12:30:00 2019-06-16 00:30:00 2019-07-16 12:30:00 2019-08-16 12:30:00
2019-09-16 00:30:00 2019-10-16 12:30:00 2019-11-16 00:30:00 2019-12-16 12:30:00
2020-01-16 12:30:00 2020-02-15 12:30:00 2020-03-16 12:30:00 2020-04-16 00:30:00
2020-05-16 12:30:00 2020-06-16 00:30:00 2020-07-16 12:30:00 2020-08-16 12:30:00
2020-09-16 00:30:00 2020-10-16 12:30:00 2020-11-16 00:30:00 2020-12-16 12:30:00
cdo sinfo: Processed 1 variable over 480 timesteps [0.50s 30MB].
I extracted monthly rainfall values from the Mnth.nc file for a location (lon: 88.44; lat: 27.12) using the following commands:
cdo remapnn,lon=88.44_lat=27.12 Mnth.nc Mnth1.nc
cdo outputtab,year,month,value Mnth1.nc > Mnth.csv
The output is as follows:
Year month Value
1980 1 31.74219
1980 2 54.60938
1980 3 66.94531
1980 4 149.4062
1980 5 580.7227
1980 6 690.1328
1980 7 1146.305
1980 8 535.8164
1980 9 486.4688
1980 10 119.5391
1980 11 82.10547
1980 12 13.95703
Then I extracted the rainfall values from the same data (Mnth.nc) for the same location (lon: 88.44; lat: 27.12) using the multidimensional toolbox provided in ArcGIS. The result is as follows:
year month Value
1980 1 38.8125
1980 2 58.6542969
1980 3 71.7382813
1980 4 148.6367188
1980 5 564.7070313
1980 6 653.0390625
1980 7 1026.832031
1980 8 501.3164063
1980 9 458.5429688
1980 10 113.078125
1980 11 74.0976563
1980 12 24.2265625
Why am I getting different results from two different software packages for the same location and the same variable? Any help will be highly appreciated.
Thanks in advance.
The question is perhaps misleading, in that you are not "extracting" the data in either case; you are interpolating it. The method used by CDO is nearest neighbour. ArcGIS is probably using a different method, so you should expect slightly different results.
The results look very similar, so both are almost certainly working as advertised.
I ran into the same issue. I used CDO to extract a point and used ArcGIS for cross-checking, and found that the values were different.
Just to be sure, I recorded the extent of one particular cell and tried extracting values for different locations within that cell's boundary. CDO gave the same result everywhere, as expected, because it uses nearest-neighbour resampling.
Then I tried the same with ArcGIS. Interestingly, in my case ArcGIS sometimes gave the same result within the cell boundary and sometimes a different one. I also checked the values with Panoply and realised that CDO gave accurate results, while ArcGIS sometimes gave offset results, i.e. the values of nearby cells; this was confirmed by cross-checking with Panoply. As @Robert Wilson mentioned, ArcGIS must be using a different resampling method, yet the results section of its 'NetCDF to Table View' tool says it also uses nearest neighbour. This is not an answer to your question, just something I found.
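One way to cross-check which grid cell each tool actually used is to read the file directly, e.g. with xarray (a sketch using the file and variable names from the question; requires the xarray and netCDF4 packages):
import xarray as xr

ds = xr.open_dataset("Mnth.nc")
pt = ds["APCP_sfc"].sel(longitude=88.44, latitude=27.12, method="nearest")
print(float(pt.longitude), float(pt.latitude))  # the grid cell actually selected
print(pt.isel(time=slice(0, 12)).values)        # the twelve 1980 monthly values
If the cell CDO reports matches this and the ArcGIS one differs, the offset explanation above is confirmed.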

Overlap in seconds between datetime range and a time range

I have a dataframe like this:
df11 = pd.DataFrame(
{
"Start_date": ["2018-01-31 12:00:00", "2018-02-28 16:00:00", "2018-02-27 22:00:00"],
"End_date": ["2019-01-31 21:45:00", "2019-03-24 22:00:00", "2018-02-28 01:00:00"],
}
)
Start_date End_date
0 2018-01-31 12:00:00 2019-01-31 21:45:00
1 2018-02-28 16:00:00 2019-03-24 22:00:00
2 2018-02-27 22:00:00 2018-02-28 01:00:00
I need to compute, in seconds, the overlap of each row with specific time-of-day periods. My expected results are like this:
Start_date End_date 12h-16h 16h-22h 22h-00h 00h-02h30
0 2018-01-31 12:00:00 2019-01-31 21:45:00 14400 20700 0 0
1 2018-02-28 16:00:00 2019-03-24 22:00:00 0 21600 0 0
2 2018-02-27 22:00:00 2018-02-28 01:00:00 0 0 7200 3600
I know it's completely wrong, and I've tried other solutions. This is one of my attempts:
df11['12h-16h'] = np.where(
    (df11['Start_date'] < timedelta(hours=16)) & (df11['End_date'] > timedelta(hours=12)),
    np.minimum(df11['End_date'], timedelta(hours=16)) - np.maximum(df11['Start_date'], timedelta(hours=12)),
    0)
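For what it's worth, here is a sketch that reproduces the expected table above, under the assumption (implied by row 0, which spans a year yet only counts a few hours) that only the time-of-day span from Start_date to End_date matters, wrapping once past midnight; full intermediate days are ignored:
import numpy as np
import pandas as pd

df11 = df11.apply(pd.to_datetime)  # ensure both columns are datetime

DAY = 24 * 3600
buckets = {  # label -> (start, end) in seconds since midnight
    "12h-16h": (12 * 3600, 16 * 3600),
    "16h-22h": (16 * 3600, 22 * 3600),
    "22h-00h": (22 * 3600, 24 * 3600),
    "00h-02h30": (0, int(2.5 * 3600)),
}

# Time of day in seconds; an end at or before the start wraps to the next day.
s = (df11["Start_date"] - df11["Start_date"].dt.normalize()).dt.total_seconds().to_numpy()
e = (df11["End_date"] - df11["End_date"].dt.normalize()).dt.total_seconds().to_numpy()
e = np.where(e <= s, e + DAY, e)

for label, (lo, hi) in buckets.items():
    total = np.zeros(len(df11))
    for shift in (0, DAY):  # a bucket may fall on the start day or the next day
        total += np.clip(np.minimum(e, hi + shift) - np.maximum(s, lo + shift), 0, None)
    df11[label] = total.astype(int)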

SQL Server datetime ranges between records

What would be the best way to get datetime ranges between records in SQL Server? I think it would be easiest to explain with an example.
I have the following data; these records' start and end datetime ranges never overlap:
ID  Start                End
--------------------------------------------
1   1/27/2021 06:00:00   1/27/2021 09:00:00
2   1/27/2021 10:00:00   1/27/2021 14:00:00
3   1/27/2021 21:00:00   1/28/2021 04:00:00
4   1/28/2021 06:00:00   1/28/2021 09:00:00
I need to get the datetime gaps between records, so the resulting SQL query would return the following result set (ID doesn't matter):
ID  Start                End
--------------------------------------------
1   1/27/2021 09:00:00   1/27/2021 10:00:00
2   1/27/2021 14:00:00   1/27/2021 21:00:00
3   1/28/2021 04:00:00   1/28/2021 06:00:00
Thanks for any help in advance.
Use lead():
select t.*
from (select id,
             [end] as start,
             lead(start) over (order by start) as [end]
      from t
     ) t
where [end] is not null;
Note: end is a lousy name for a column, given that it is a SQL keyword (hence the brackets above). I assume it is for illustrative purposes only.
Here is a SQL Fiddle.
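For reference, a hypothetical setup matching the sample data, so the query above can be run as-is:
create table t (id int, start datetime, [end] datetime);
insert into t values
    (1, '2021-01-27 06:00', '2021-01-27 09:00'),
    (2, '2021-01-27 10:00', '2021-01-27 14:00'),
    (3, '2021-01-27 21:00', '2021-01-28 04:00'),
    (4, '2021-01-28 06:00', '2021-01-28 09:00');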

What's the difference between converting a datetime string with pd.to_datetime and with datetime.strptime()?

I have a df that looks similar to this (shortened version, with fewer rows):
Time (EDT) Open High Low Close
0 02.01.2006 19:00:00 0.85224 0.85498 0.85224 0.85498
1 02.01.2006 20:00:00 0.85498 0.85577 0.85423 0.85481
2 02.01.2006 21:00:00 0.85481 0.85646 0.85434 0.85646
3 02.01.2006 22:00:00 0.85646 0.85705 0.85623 0.85651
4 02.01.2006 23:00:00 0.85643 0.85691 0.85505 0.85653
5 03.01.2006 00:00:00 0.85653 0.8569 0.85601 0.85626
6 03.01.2006 01:00:00 0.85626 0.85653 0.85524 0.8557
7 03.01.2006 02:00:00 0.85558 0.85597 0.85486 0.85597
8 03.01.2006 03:00:00 0.85597 0.85616 0.85397 0.8548
9 03.01.2006 04:00:00 0.85469 0.85495 0.8529 0.85328
10 03.01.2006 05:00:00 0.85316 0.85429 0.85222 0.85401
11 03.01.2006 06:00:00 0.85401 0.8552 0.853 0.8552
12 03.01.2006 07:00:00 0.8552 0.8555 0.85319 0.85463
13 03.01.2006 08:00:00 0.85477 0.85834 0.8545 0.85788
14 03.01.2006 09:00:00 0.85788 0.85838 0.85341 0.85416
15 03.01.2006 10:00:00 0.8542 0.8542 0.85006 0.85111
16 03.01.2006 11:00:00 0.85115 0.85411 0.85 0.85345
17 03.01.2006 12:00:00 0.85337 0.85432 0.8526 0.85413
18 03.01.2006 13:00:00 0.85413 0.85521 0.85363 0.85363
19 03.01.2006 14:00:00 0.85325 0.8561 0.85305 0.85606
20 03.01.2006 15:00:00 0.8561 0.85675 0.85578 0.85599
I need to convert the date strings to datetime, set the date column as the index, and resample. When I use method 1, it doesn't resample properly: the resampled data is wrong and it creates extra future dates. Say my last date is 2018-11; I will then see something like 2018-12.
method 1:
df['Time (EDT)'] = pd.to_datetime(df['Time (EDT)'])  # this also takes long, because there are 90,000 rows
df.set_index('Time (EDT)', inplace=True)
ohlc_dict = {'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last'}
df = df.resample('4H', base=17, closed='left', label='left').agg(ohlc_dict)
result:
Time (EDT) Open High Low Close
1/1/2006 21:00 0.86332 0.86332 0.86268 0.86321
1/2/2006 1:00 0.86321 0.86438 0.86111 0.86164
1/2/2006 5:00 0.86164 0.86222 0.8585 0.86134
1/2/2006 9:00 0.86149 0.86297 0.85695 0.85793
1/2/2006 13:00 0.85801 0.85947 0.85759 0.8591
1/2/2006 17:00 0.8591 0.86034 0.85757 0.85825
1/2/2006 21:00 0.85825 0.85969 0.84377 0.84412
1/3/2006 1:00 0.84445 0.8468 0.84286 0.84642
1/3/2006 5:00 0.84659 0.8488 0.84494 0.84872
1/3/2006 9:00 0.84829 0.84915 0.84271 0.84416
1/3/2006 13:00 0.84372 0.8453 0.84346 0.84423
1/3/2006 17:00 0.84426 0.84693 0.84426 0.84516
1/3/2006 21:00 0.84523 0.8458 0.84442 0.84579
When I use method 2, it resamples properly.
method 2:
def to_datetime_obj(date_string):
    return datetime.strptime(date_string, '%d.%m.%Y %H:%M:%S')

date_list = df['Time (EDT)'].tolist()
datetime_objs = list(map(to_datetime_obj, date_list))  # this is also faster
df.iloc[:, 0] = datetime_objs
df.set_index('Time (EDT)', inplace=True)
ohlc_dict = {'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last'}
df = df.resample('4H', base=17, closed='left', label='left').agg(ohlc_dict)
result:
Time (EDT) Open High Low Close
1/2/2006 17:00 0.85224 0.85577 0.85224 0.85481
1/2/2006 21:00 0.85481 0.85705 0.85434 0.85626
1/3/2006 1:00 0.85626 0.85653 0.8529 0.85328
1/3/2006 5:00 0.85316 0.85834 0.85222 0.85788
1/3/2006 9:00 0.85788 0.85838 0.85 0.85413
1/3/2006 13:00 0.85413 0.85675 0.85305 0.85525
1/3/2006 17:00 0.85525 0.85842 0.85502 0.85783
1/3/2006 21:00 0.85783 0.85898 0.85736 0.85774
1/4/2006 1:00 0.85774 0.85825 0.8558 0.85595
1/4/2006 5:00 0.85595 0.85867 0.85577 0.85839
1/4/2006 9:00 0.85847 0.85981 0.85586 0.8578
1/4/2006 13:00 0.85773 0.85886 0.85597 0.85653
1/4/2006 17:00 0.85653 0.85892 0.85642 0.8584
1/4/2006 21:00 0.8584 0.85863 0.85658 0.85715
1/5/2006 1:00 0.85715 0.8588 0.85641 0.85791
1/5/2006 5:00 0.85803 0.86169 0.85673 0.86065
The indexes of method 1 and method 2 look the same before resampling; both are pandas.core.indexes.datetimes.DatetimeIndex.
But when I compare them, they are actually different: method1_df.index != method2_df.index contains True entries.
Why is that? How can I fix it? Thanks.
It's surprising that a vectorized method (pd.to_datetime), written in Cython, is slower than a pure-Python method (datetime.strptime).
You can specify the format to pd.to_datetime, which speeds it up a lot:
pd.to_datetime(df['Time (EDT)'], format='%d.%m.%Y %H:%M:%S')
For your second problem, I think it may have something to do with the order of day and month in your string data. Have you verified that the two methods actually give you the same datetimes?
s1 = pd.to_datetime(df['Time (EDT)'])
s2 = pd.Series(map(to_datetime_obj, date_list))
(s1 == s2).all()
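To make the ambiguity concrete, here is a quick check (a sketch; the commented results assume pandas' default month-first parsing via dateutil):
import pandas as pd
from datetime import datetime

s = "02.01.2006 19:00:00"
print(pd.to_datetime(s))                              # 2006-02-01 19:00:00 (month first)
print(datetime.strptime(s, "%d.%m.%Y %H:%M:%S"))      # 2006-01-02 19:00:00 (day first)
print(pd.to_datetime(s, format="%d.%m.%Y %H:%M:%S"))  # 2006-01-02 19:00:00, matches strptime
Days above 12 cannot be parsed month-first, so without an explicit format the inferred order can flip from row to row, which would explain both the wrong resampling and the extra future dates in method 1.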
For me, datetime.strptime was 3 times faster than pd.to_datetime for 2 operations per row on an 880,000+ row DataFrame.