Count jumps from one location to another based on conditions - pandas

I have the following dataframe.
id start finish location
0 1 2015-12-14 16:44:00 2015-12-15 18:00:00 A
1 1 2015-12-15 18:00:00 2015-12-16 13:00:00 B
2 1 2015-12-16 13:00:00 2015-12-16 20:00:00 C
3 2 2015-12-10 13:15:00 2015-12-12 13:45:00 B
4 2 2015-12-12 13:45:00 2015-12-12 19:45:00 A
5 3 2015-12-15 07:45:00 2015-12-15 18:45:00 A
6 3 2015-12-15 18:45:00 2015-12-18 07:15:00 D
7 3 2015-12-18 07:15:00 2015-12-19 10:45:00 C
8 3 2015-12-19 10:45:00 2015-12-20 09:00:00 H
9 4 2015-12-09 10:45:00 2015-12-13 12:20:00 E
10 4 2015-12-13 12:20:00 2015-12-13 18:20:00 A
11 4 2015-12-13 18:20:00 2015-12-13 23:40:00 A
12 4 2015-12-13 23:40:00 2015-12-16 08:00:00 B
13 5 2015-12-07 08:00:00 2015-12-13 12:25:00 H
I wanted to calculate jumps from one location to another within every 'id'. To count a jump, I first compare the finish timestamp of a row with the start timestamp of the next row of the same id. If they match, I want the count to be 1, otherwise 0. What I want to obtain is the following:
id start count
0 1 2015-12-14 16:44:00 1
1 1 2015-12-15 18:00:00 1
2 1 2015-12-16 13:00:00 0
3 2 2015-12-10 13:15:00 1
4 2 2015-12-12 13:45:00 0
5 3 2015-12-15 07:45:00 1
6 3 2015-12-15 18:45:00 1
7 3 2015-12-18 07:15:00 1
8 3 2015-12-19 10:45:00 0
9 4 2015-12-09 10:45:00 1
10 4 2015-12-13 12:20:00 1
11 4 2015-12-13 18:20:00 1
12 4 2015-12-13 23:40:00 0
13 5 2015-12-07 08:00:00 0
Once I have that, I would like to sum the counts based on date to get something like the following:
date count_sum
2015-12-07 0
2015-12-09 1
2015-12-10 1
2015-12-12 0
2015-12-13 2
2015-12-14 1
2015-12-15 3
2015-12-16 0
2015-12-18 1
2015-12-19 0
For me, the last part is easy: I can groupby() on the date and use .sum() to add up all the counts for that date. But how to do the first part, where the actual jumps are counted, is not clear. Any help will be appreciated.

Your data already appears to be sorted by 'start', so you can just group by 'id' and check whether the finish time is the same as the start time of the next row with pandas.Series.shift().
I'd advise against calling a column 'count': count is a built-in DataFrame method in pandas, so you can't access such a column with the df.count attribute notation.
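For a reproducible setup, the frame from the question can be built like this (a sketch of the data shown above; the timestamps are parsed explicitly):
import pandas as pd

df = pd.DataFrame({
    'id': [1, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5],
    'start': ['2015-12-14 16:44:00', '2015-12-15 18:00:00', '2015-12-16 13:00:00',
              '2015-12-10 13:15:00', '2015-12-12 13:45:00', '2015-12-15 07:45:00',
              '2015-12-15 18:45:00', '2015-12-18 07:15:00', '2015-12-19 10:45:00',
              '2015-12-09 10:45:00', '2015-12-13 12:20:00', '2015-12-13 18:20:00',
              '2015-12-13 23:40:00', '2015-12-07 08:00:00'],
    'finish': ['2015-12-15 18:00:00', '2015-12-16 13:00:00', '2015-12-16 20:00:00',
               '2015-12-12 13:45:00', '2015-12-12 19:45:00', '2015-12-15 18:45:00',
               '2015-12-18 07:15:00', '2015-12-19 10:45:00', '2015-12-20 09:00:00',
               '2015-12-13 12:20:00', '2015-12-13 18:20:00', '2015-12-13 23:40:00',
               '2015-12-16 08:00:00', '2015-12-13 12:25:00'],
    'location': list('ABCBAADCHEAABH')})
df['start'] = pd.to_datetime(df['start'])
df['finish'] = pd.to_datetime(df['finish'])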
# If start/finish are strings, convert them to datetime first:
# df['start'] = pd.to_datetime(df.start)
# df['finish'] = pd.to_datetime(df.finish)
df['count'] = (df.groupby('id')
                 .apply(lambda x: x.finish == x.start.shift(-1))
                 .astype('int')
                 .reset_index(level=0, drop=True))
Output:
id start finish location count
0 1 2015-12-14 16:44:00 2015-12-15 18:00:00 A 1
1 1 2015-12-15 18:00:00 2015-12-16 13:00:00 B 1
2 1 2015-12-16 13:00:00 2015-12-16 20:00:00 C 0
3 2 2015-12-10 13:15:00 2015-12-12 13:45:00 B 1
4 2 2015-12-12 13:45:00 2015-12-12 19:45:00 A 0
5 3 2015-12-15 07:45:00 2015-12-15 18:45:00 A 1
6 3 2015-12-15 18:45:00 2015-12-18 07:15:00 D 1
7 3 2015-12-18 07:15:00 2015-12-19 10:45:00 C 1
8 3 2015-12-19 10:45:00 2015-12-20 09:00:00 H 0
9 4 2015-12-09 10:45:00 2015-12-13 12:20:00 E 1
10 4 2015-12-13 12:20:00 2015-12-13 18:20:00 A 1
11 4 2015-12-13 18:20:00 2015-12-13 23:40:00 A 1
12 4 2015-12-13 23:40:00 2015-12-16 08:00:00 B 0
13 5 2015-12-07 08:00:00 2015-12-13 12:25:00 H 0
And just for completeness:
df.groupby(df.start.dt.date)['count'].sum()
start
2015-12-07 0
2015-12-09 1
2015-12-10 1
2015-12-12 0
2015-12-13 2
2015-12-14 1
2015-12-15 3
2015-12-16 0
2015-12-18 1
2015-12-19 0
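If you'd rather avoid groupby().apply(), the same comparison can be written with a grouped shift; a minimal sketch that should be equivalent on data grouped by 'id' like this:
df['count'] = (df['finish'] == df.groupby('id')['start'].shift(-1)).astype(int)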

Related

Pandas: create a period based on date column

I have a dataframe
ID datetime
11 01-09-2021 10:00:00
11 01-09-2021 10:15:15
11 01-09-2021 15:00:00
12 01-09-2021 15:10:00
11 01-09-2021 18:00:00
I need to add a period column based just on datetime, starting a new period whenever the gap from the previous row is more than 2 hours:
ID datetime period
11 01-09-2021 10:00:00 1
11 01-09-2021 10:15:15 1
11 01-09-2021 15:00:00 2
12 01-09-2021 15:10:00 2
11 01-09-2021 18:00:00 3
And the same thing, but based on both ID and datetime:
ID datetime period
11 01-09-2021 10:00:00 1
11 01-09-2021 10:15:15 1
11 01-09-2021 15:00:00 2
12 01-09-2021 15:10:00 1
11 01-09-2021 18:00:00 3
How can I do that?
You can get the difference with Series.diff, convert it to hours via Series.dt.total_seconds, compare against 2, and take the cumulative sum:
df['period'] = df['datetime'].diff().dt.total_seconds().div(3600).gt(2).cumsum().add(1)
print(df)
ID datetime period
0 11 2021-01-09 10:00:00 1
1 11 2021-01-09 10:15:15 1
2 11 2021-01-09 15:00:00 2
3 12 2021-01-09 15:10:00 2
4 11 2021-01-09 18:00:00 3
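Spelled out step by step, the chain does the following (a sketch with intermediate names purely for readability):
gap_hours = df['datetime'].diff().dt.total_seconds().div(3600)  # hours since the previous row (NaN for the first)
new_period = gap_hours.gt(2)               # True wherever a gap of more than 2 hours starts a new period
df['period'] = new_period.cumsum().add(1)  # running count of period starts, 1-based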
Similar idea per groups:
f = lambda x: x.diff().dt.total_seconds().div(3600).gt(2).cumsum().add(1)
df['period'] = df.groupby('ID')['datetime'].transform(f)
print(df)
ID datetime period
0 11 2021-01-09 10:00:00 1
1 11 2021-01-09 10:15:15 1
2 11 2021-01-09 15:00:00 2
3 12 2021-01-09 15:10:00 1
4 11 2021-01-09 18:00:00 3
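Both versions assume datetime already holds real timestamps; if it is stored as text, parse it first (a sketch; set dayfirst according to whether '01-09-2021' means 1 September or 9 January in your data):
df['datetime'] = pd.to_datetime(df['datetime'], dayfirst=True)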

7 days hourly mean with pandas

I need some help calculating a 7-day mean for every hour.
The timeseries has an hourly resolution, and I need the 7-day mean for each hour, e.g. for 13 o'clock:
date, x
2020-07-01 13:00 , 4
2020-07-01 14:00 , 3
.
.
.
2020-07-02 13:00 , 3
2020-07-02 14:00 , 7
.
.
.
I tried it with pandas and a rolling mean, but a plain rolling window includes the last 7 days across all hours rather than the same hour on each day.
Thanks for any hints!
Add a new hour column, group by it, and then take a rolling mean with a window of 7. Since each hour group holds one observation per day, the window covers 7 days, which is consistent with the intent of the question.
df['hour'] = df.index.hour
df = df.groupby(df.hour)['x'].rolling(7).mean().reset_index()
df.head(35)
hour level_1 x
0 0 2020-07-01 00:00:00 NaN
1 0 2020-07-02 00:00:00 NaN
2 0 2020-07-03 00:00:00 NaN
3 0 2020-07-04 00:00:00 NaN
4 0 2020-07-05 00:00:00 NaN
5 0 2020-07-06 00:00:00 NaN
6 0 2020-07-07 00:00:00 48.142857
7 0 2020-07-08 00:00:00 50.285714
8 0 2020-07-09 00:00:00 60.000000
9 0 2020-07-10 00:00:00 63.142857
10 1 2020-07-01 01:00:00 NaN
11 1 2020-07-02 01:00:00 NaN
12 1 2020-07-03 01:00:00 NaN
13 1 2020-07-04 01:00:00 NaN
14 1 2020-07-05 01:00:00 NaN
15 1 2020-07-06 01:00:00 NaN
16 1 2020-07-07 01:00:00 52.571429
17 1 2020-07-08 01:00:00 48.428571
18 1 2020-07-09 01:00:00 38.000000
19 2 2020-07-01 02:00:00 NaN
20 2 2020-07-02 02:00:00 NaN
21 2 2020-07-03 02:00:00 NaN
22 2 2020-07-04 02:00:00 NaN
23 2 2020-07-05 02:00:00 NaN
24 2 2020-07-06 02:00:00 NaN
25 2 2020-07-07 02:00:00 46.571429
26 2 2020-07-08 02:00:00 47.714286
27 2 2020-07-09 02:00:00 42.714286
28 3 2020-07-01 03:00:00 NaN
29 3 2020-07-02 03:00:00 NaN
30 3 2020-07-03 03:00:00 NaN
31 3 2020-07-04 03:00:00 NaN
32 3 2020-07-05 03:00:00 NaN
33 3 2020-07-06 03:00:00 NaN
34 3 2020-07-07 03:00:00 72.571429
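If you would rather keep the original datetime index instead of the hour/level_1 layout above, transform aligns the rolling result back to df; a sketch, assuming a DatetimeIndex with one observation per hour:
df['x_7d_mean'] = df.groupby(df.index.hour)['x'].transform(lambda s: s.rolling(7).mean())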

Add a column value with the other date time column at minutes level in pandas

I have a data frame as shown below
ID ideal_appt_time service_time
1 2020-01-06 09:00:00 22
2 2020-01-06 09:30:00 15
1 2020-01-08 14:00:00 42
2 2020-01-12 01:30:00 5
I would like to add service_time, in minutes, to ideal_appt_time and create a new column called finish.
Expected Output:
ID ideal_appt_time service_time finish
1 2020-01-06 09:00:00 22 2020-01-06 09:22:00
2 2020-01-06 09:30:00 15 2020-01-06 09:45:00
1 2020-01-08 14:00:00 42 2020-01-08 14:42:00
2 2020-01-12 01:30:00 35 2020-01-12 02:05:00
Use to_timedelta to convert the column to timedeltas in minutes, then add it to the datetimes:
df['ideal_appt_time'] = pd.to_datetime(df['ideal_appt_time'])
df['finish'] = df['ideal_appt_time'] + pd.to_timedelta(df['service_time'], unit='Min')
print(df)
ID ideal_appt_time service_time finish
0 1 2020-01-06 09:00:00 22 2020-01-06 09:22:00
1 2 2020-01-06 09:30:00 15 2020-01-06 09:45:00
2 1 2020-01-08 14:00:00 42 2020-01-08 14:42:00
3 2 2020-01-12 01:30:00 5 2020-01-12 01:35:00
Data
df=pd.DataFrame({'ideal_appt_time':['2020-01-06 09:00:00','2020-01-06 09:30:00','2020-01-08 14:00:00','2020-01-12 01:30:00'],'service_time':[22,15,42,35]})
Another way out
df['finish'] = pd.to_datetime(df['ideal_appt_time']).add(df['service_time'].astype('timedelta64[m]'))
df
ideal_appt_time service_time finish
0 2020-01-06 09:00:00 22 2020-01-06 09:22:00
1 2020-01-06 09:30:00 15 2020-01-06 09:45:00
2 2020-01-08 14:00:00 42 2020-01-08 14:42:00
3 2020-01-12 01:30:00 35 2020-01-12 02:05:00
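Note that the integer-to-'timedelta64[m]' cast used here no longer works on pandas 2.x, where the dtype only supports s/ms/us/ns resolutions; the to_timedelta spelling from the first answer is the portable one:
df['finish'] = pd.to_datetime(df['ideal_appt_time']) + pd.to_timedelta(df['service_time'], unit='min')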

Find the earliest and latest dates between two columns

I have the following dataframe df.
id start finish location
0 1 2015-12-14 16:44:00 2015-12-15 18:00:00 A
1 1 2015-12-15 18:00:00 2015-12-16 13:00:00 B
2 1 2015-12-16 13:00:00 2015-12-16 20:00:00 C
3 2 2015-12-10 13:15:00 2015-12-12 13:45:00 B
4 2 2015-12-12 13:45:00 2015-12-12 19:45:00 A
5 3 2015-12-15 07:45:00 2015-12-15 18:45:00 A
6 3 2015-12-15 18:45:00 2015-12-18 07:15:00 D
7 3 2015-12-18 07:15:00 2015-12-19 10:45:00 C
8 3 2015-12-19 10:45:00 2015-12-20 09:00:00 H
I wanted to find the id_start_date and id_end_date for every id.
In the above example, every row has start and finish dates. I want two new columns, id_start_date and id_end_date. In the id_start_date column, I want the earliest date in the start column for each id. This is easy: I can sort the data by id and start and pick the first start date in every id, or I can group by id and aggregate with the minimum of the start column. For id_end_date, I can do the same: group by id and aggregate with the maximum of the finish column.
import numpy as np

df1 = df.sort_values(['id', 'start'], ascending=True)
gp = df1.groupby('id')
gp_out = gp.agg({'start': {'mindate': np.min}, 'finish': {'maxdate': np.max}})
When I print gp_out, it does show the correct dates, but how would I write them back to the original dataframe df? I expect the following:
id start finish location id_start_date id_end_date
0 1 2015-12-14 16:44:00 2015-12-15 18:00:00 A 2015-12-14 16:44:00 2015-12-16 20:00:00
1 1 2015-12-15 18:00:00 2015-12-16 13:00:00 B 2015-12-14 16:44:00 2015-12-16 20:00:00
2 1 2015-12-16 13:00:00 2015-12-16 20:00:00 C 2015-12-14 16:44:00 2015-12-16 20:00:00
3 2 2015-12-10 13:15:00 2015-12-12 13:45:00 B 2015-12-10 13:15:00 2015-12-12 19:45:00
4 2 2015-12-12 13:45:00 2015-12-12 19:45:00 A 2015-12-10 13:15:00 2015-12-12 19:45:00
5 3 2015-12-15 07:45:00 2015-12-15 18:45:00 A 2015-12-15 07:45:00 2015-12-20 09:00:00
6 3 2015-12-15 18:45:00 2015-12-18 07:15:00 D 2015-12-15 07:45:00 2015-12-20 09:00:00
7 3 2015-12-18 07:15:00 2015-12-19 10:45:00 C 2015-12-15 07:45:00 2015-12-20 09:00:00
8 3 2015-12-19 10:45:00 2015-12-20 09:00:00 H 2015-12-15 07:45:00 2015-12-20 09:00:00
How can I get the last two columns into the original dataframe df?
Using transform:
g = df.groupby('id')
df['id_start_date'] = g['start'].transform('min')
df['id_end_date'] = g['finish'].transform('max')
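An alternative, if you also want the per-id summary as its own table, is named aggregation followed by a merge; a sketch:
summary = df.groupby('id').agg(id_start_date=('start', 'min'),
                               id_end_date=('finish', 'max')).reset_index()
df = df.merge(summary, on='id')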

How to get count incremental by date

I am trying to get a count of rows with incremental dates.
My table looks like this:
ID name status create_date
1 John AC 2016-01-01 00:00:26.513
2 Jane AC 2016-01-02 00:00:26.513
3 Kane AC 2016-01-02 00:00:26.513
4 Carl AC 2016-01-03 00:00:26.513
5 Dave AC 2016-01-04 00:00:26.513
6 Gina AC 2016-01-04 00:00:26.513
Now what I want to return from the SQL is something like this:
Date Count
2016-01-01 1
2016-01-02 3
2016-01-03 4
2016-01-04 6
You can make use of COUNT() OVER () without PARTITION BY, using ORDER BY instead; it will give you the cumulative sum. Use DISTINCT to filter out the duplicate values.
SELECT DISTINCT CAST(create_date AS DATE) AS [Date],
       COUNT(create_date) OVER (ORDER BY CAST(create_date AS DATE)) AS [COUNT]
FROM [YourTable]

SELECT create_date, COUNT(create_date) AS [COUNT]
FROM (
    SELECT CAST(create_date AS DATE) AS create_date
    FROM [YourTable]
) T
GROUP BY create_date
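A variant that avoids DISTINCT is to collapse to one row per day first and take a running SUM over the daily counts (a sketch, assuming SQL Server):
SELECT CAST(create_date AS DATE) AS [Date],
       SUM(COUNT(*)) OVER (ORDER BY CAST(create_date AS DATE)) AS [Count]
FROM [YourTable]
GROUP BY CAST(create_date AS DATE)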
Per your description, you need a continuous date list; does that make sense?
This sample only generates one month of data.
CREATE TABLE #tt(ID INT, name VARCHAR(10), status VARCHAR(10), create_date DATETIME)
INSERT INTO #tt
SELECT 1,'John','AC','2016-01-01 00:00:26.513' UNION
SELECT 2,'Jane','AC','2016-01-02 00:00:26.513' UNION
SELECT 3,'Kane','AC','2016-01-02 00:00:26.513' UNION
SELECT 4,'Carl','AC','2016-01-03 00:00:26.513' UNION
SELECT 5,'Dave','AC','2016-01-04 00:00:26.513' UNION
SELECT 6,'Gina','AC','2016-01-04 00:00:26.513' UNION
SELECT 7,'Tina','AC','2016-01-08 00:00:26.513'
SELECT * FROM #tt
SELECT CONVERT(DATE, DATEADD(d, sv.number, n.FirstDate)) AS [Date],
       COUNT(n.num) AS [Count]
FROM master.dbo.spt_values AS sv
LEFT JOIN (
    SELECT MIN(t.create_date) OVER () AS FirstDate,
           DATEDIFF(d, MIN(t.create_date) OVER (), t.create_date) AS num
    FROM #tt AS t
) AS n ON n.num <= sv.number
WHERE sv.type = 'P' AND sv.number >= 0
  AND MONTH(DATEADD(d, sv.number, n.FirstDate)) = MONTH(n.FirstDate)
GROUP BY CONVERT(DATE, DATEADD(d, sv.number, n.FirstDate))
Date Count
---------- -----------
2016-01-01 1
2016-01-02 3
2016-01-03 4
2016-01-04 6
2016-01-05 6
2016-01-06 6
2016-01-07 6
2016-01-08 7
2016-01-09 7
2016-01-10 7
2016-01-11 7
2016-01-12 7
2016-01-13 7
2016-01-14 7
2016-01-15 7
2016-01-16 7
2016-01-17 7
2016-01-18 7
2016-01-19 7
2016-01-20 7
2016-01-21 7
2016-01-22 7
2016-01-23 7
2016-01-24 7
2016-01-25 7
2016-01-26 7
2016-01-27 7
2016-01-28 7
2016-01-29 7
2016-01-30 7
2016-01-31 7
(the same January pattern then continues with a count of 7 from 2017-01-01 through 2021-01-31: the spt_values number range spans several years, and the MONTH(...) = MONTH(n.FirstDate) filter alone also matches January of those later years; adding AND YEAR(DATEADD(d, sv.number, n.FirstDate)) = YEAR(n.FirstDate) to the WHERE clause would restrict the output to the first month only)
SELECT r.date, COUNT(r.date) AS count
FROM (
    SELECT id, name, SUBSTRING(CONVERT(NVARCHAR(50), create_date), 1, 10) AS date
    FROM tblName
) r
GROUP BY r.date
In this code, in the subquery part, I select the first 10 characters of the date, which is converted from datetime to nvarchar, so it looks like '2016-01-01' (the conversion is not strictly necessary, but I prefer it this way to make the code more readable).
Then, with a simple GROUP BY, I get each date and its count.