I have a dataset in this form:
company_name date
0 global_infotech 2019-06-15
1 global_infotech 2020-03-22
2 global_infotech 2020-08-30
3 global_infotech 2018-06-19
4 global_infotech 2018-06-15
5 global_infotech 2018-02-15
6 global_infotech 2018-11-22
7 global_infotech 2019-01-15
8 global_infotech 2018-12-15
9 global_infotech 2019-06-15
10 global_infotech 2018-12-19
11 global_infotech 2019-12-31
12 global_infotech 2019-02-18
13 global_infotech 2018-06-16
14 global_infotech 2019-02-10
15 global_infotech 2019-03-15
16 Qualcom 2019-07-11
17 Qualcom 2018-01-11
18 Qualcom 2018-05-29
19 Qualcom 2018-10-06
20 Qualcom 2018-11-11
21 Qualcom 2019-08-17
22 Qualcom 2019-02-22
23 Qualcom 2019-10-16
24 Qualcom 2018-06-22
25 Qualcom 2018-06-14
26 Qualcom 2018-06-16
27 Syscin 2018-02-10
28 Syscin 2019-02-16
29 Syscin 2018-04-12
30 Syscin 2018-08-22
31 Syscin 2018-09-16
32 Syscin 2019-04-20
33 Syscin 2018-02-28
34 Syscin 2018-01-19
CONSIDERING TODAY'S DATE AS 1st JANUARY 2020, I WANT TO WRITE A CODE TO FIND THE NUMBER OF TIMES EACH COMPANY NAME IS OCCURING IN LAST 3 MONTHS. For example, suppose from 1st Oct 2019 to 1st Jan 2020, golbal_infotech's name is appearing 5 times, then 5 should appear infront of every global_infotech value like:
company_name date appearance_count_last_3_months
0 global_infotech 2019-06-15 5
1 global_infotech 2020-03-22 5
2 global_infotech 2020-08-30 5
3 global_infotech 2018-06-19 5
4 global_infotech 2018-06-15 5
5 global_infotech 2018-02-15 5
6 global_infotech 2018-11-22 5
7 global_infotech 2019-01-15 5
8 global_infotech 2018-12-15 5
9 global_infotech 2019-06-15 5
10 global_infotech 2018-12-19 5
11 global_infotech 2019-12-31 5
12 global_infotech 2019-02-18 5
13 global_infotech 2018-06-16 5
14 global_infotech 2019-02-10 5
15 global_infotech 2019-03-15 5
IIUC:
you can create a custom function:
def getcount(company,month=3,df=df):
df=df.copy()
df['date']=pd.to_datetime(df['date'],format='%Y-%m-%d',errors='coerce')
df=df[df['company_name'].eq(company)]
val=df.groupby(pd.Grouper(key='date',freq=str(month)+'m')).count().max().get(0)
df['appearance_count_last_3_months']=val
return df
getcount('global_infotech')
#OR
getcount('global_infotech',3)
Update:
since you have 92 different companies so you can use for loop:
lst=[]
for x in df['company_name'].unique():
lst.append(getcount(x))
out=pd.concat(lst)
If you print out then you will get your desired output
You can first filter the data for the last 3 months, and then groupby company name and merge back into the original dataframe.
import pandas as pd
from datetime import datetime
from dateutil.relativedelta import relativedelta
# sample data
df = pd.DataFrame({
'company_name': ['global_infotech', 'global_infotech', 'Qualcom','another_company'],
'date': ['2019-02-18', '2021-07-02', '2021-07-01','2019-02-18']
})
df['date'] = pd.to_datetime(df['date'])
# filter for last 3 months
summary = df[df['date']>=datetime.now()-relativedelta(months=3)]
# groupby then aggregate with desired column name
summary = summary.rename(columns={'date':'appearance_count_last_3_months'})
summary = summary.groupby('company_name')
summary = summary.agg('count')
# merge summary back into original df, filling missing values with 0
df = df.merge(summary, left_on='company_name', right_index=True, how='left')
df['appearance_count_last_3_months'] = df['appearance_count_last_3_months'].fillna(0).astype('int')
# result:
df
company_name date appearance_count_last_3_months
0 global_infotech 2019-02-18 1
1 global_infotech 2021-07-02 1
2 Qualcom 2021-07-01 1
3 another_company 2019-02-18 0
Related
I need to count the number of campaigns per day based on the start and end dates of the campaigns
Input Table:
Campaign name
Start date
End date
Campaign A
2022-07-10
2022-09-25
Campaign B
2022-08-06
2022-10-07
Campaign C
2022-07-30
2022-09-10
Campaign D
2022-08-26
2022-10-24
Campaign E
2022-07-17
2022-09-29
Campaign F
2022-08-24
2022-09-12
Campaign G
2022-08-11
2022-10-24
Campaign H
2022-08-26
2022-11-22
Campaign I
2022-08-29
2022-09-25
Campaign J
2022-08-21
2022-11-15
Campaign K
2022-07-20
2022-09-18
Campaign L
2022-07-31
2022-11-20
Campaign M
2022-08-17
2022-10-10
Campaign N
2022-07-27
2022-09-07
Campaign O
2022-07-29
2022-09-26
Campaign P
2022-07-06
2022-09-15
Campaign Q
2022-07-16
2022-09-22
Out needed (result):
Date
Count unique campaigns
2022-07-02
17
2022-07-03
47
2022-07-04
5
2022-07-05
5
2022-07-06
25
2022-07-07
27
2022-07-08
17
2022-07-09
58
2022-07-10
23
2022-07-11
53
2022-07-12
18
2022-07-13
29
2022-07-14
52
2022-07-15
7
2022-07-16
17
2022-07-17
37
2022-07-18
33
How do I need to write the SQL command to get the above result? thanks all
In the following solutions we leverage string_split with combination with replicate to generate new records.
select dt as date
,count(*) as Count_unique_campaigns
from
(
select *
,dateadd(day, row_number() over(partition by Campaign_name order by (select null))-1, Start_date) as dt
from (
select *
from t
outer apply string_split(replicate(',',datediff(day, Start_date, End_date)),',')
) t
) t
group by dt
order by dt
date
Count_unique_campaigns
2022-07-06
1
2022-07-07
1
2022-07-08
1
2022-07-09
1
2022-07-10
2
2022-07-11
2
2022-07-12
2
2022-07-13
2
2022-07-14
2
2022-07-15
2
2022-07-16
3
2022-07-17
4
2022-07-18
4
2022-07-19
4
2022-07-20
5
2022-07-21
5
2022-07-22
5
2022-07-23
5
2022-07-24
5
2022-07-25
5
2022-07-26
5
2022-07-27
6
2022-07-28
6
2022-07-29
7
2022-07-30
8
2022-07-31
9
2022-08-01
9
2022-08-02
9
2022-08-03
9
2022-08-04
9
2022-08-05
9
2022-08-06
10
2022-08-07
10
2022-08-08
10
2022-08-09
10
2022-08-10
10
2022-08-11
11
2022-08-12
11
2022-08-13
11
2022-08-14
11
2022-08-15
11
2022-08-16
11
2022-08-17
12
2022-08-18
12
2022-08-19
12
2022-08-20
12
2022-08-21
13
2022-08-22
13
2022-08-23
13
2022-08-24
14
2022-08-25
14
2022-08-26
16
2022-08-27
16
2022-08-28
16
2022-08-29
17
2022-08-30
17
2022-08-31
17
2022-09-01
17
2022-09-02
17
2022-09-03
17
2022-09-04
17
2022-09-05
17
2022-09-06
17
2022-09-07
17
2022-09-08
16
2022-09-09
16
2022-09-10
16
2022-09-11
15
2022-09-12
15
2022-09-13
14
2022-09-14
14
2022-09-15
14
2022-09-16
13
2022-09-17
13
2022-09-18
13
2022-09-19
12
2022-09-20
12
2022-09-21
12
2022-09-22
12
2022-09-23
11
2022-09-24
11
2022-09-25
11
2022-09-26
9
2022-09-27
8
2022-09-28
8
2022-09-29
8
2022-09-30
7
2022-10-01
7
2022-10-02
7
2022-10-03
7
2022-10-04
7
2022-10-05
7
2022-10-06
7
2022-10-07
7
2022-10-08
6
2022-10-09
6
2022-10-10
6
2022-10-11
5
2022-10-12
5
2022-10-13
5
2022-10-14
5
2022-10-15
5
2022-10-16
5
2022-10-17
5
2022-10-18
5
2022-10-19
5
2022-10-20
5
2022-10-21
5
2022-10-22
5
2022-10-23
5
2022-10-24
5
2022-10-25
3
2022-10-26
3
2022-10-27
3
2022-10-28
3
2022-10-29
3
2022-10-30
3
2022-10-31
3
2022-11-01
3
2022-11-02
3
2022-11-03
3
2022-11-04
3
2022-11-05
3
2022-11-06
3
2022-11-07
3
2022-11-08
3
2022-11-09
3
2022-11-10
3
2022-11-11
3
2022-11-12
3
2022-11-13
3
2022-11-14
3
2022-11-15
3
2022-11-16
2
2022-11-17
2
2022-11-18
2
2022-11-19
2
2022-11-20
2
2022-11-21
1
2022-11-22
1
For SQL in Azure and SQL Server 2022 we have a cleaner solution based on [ordinal][4].
"The enable_ordinal argument and ordinal output column are currently
supported in Azure SQL Database, Azure SQL Managed Instance, and Azure
Synapse Analytics (serverless SQL pool only). Beginning with SQL
Server 2022 (16.x) Preview, the argument and output column are
available in SQL Server."
select dt as date
,count(*) as Count_unique_campaigns
from
(
select *
,dateadd(day, ordinal-1, Start_date) as dt
from (
select *
from t
outer apply string_split(replicate(',',datediff(day, Start_date, End_date)),',', 1)
) t
) t
group by dt
order by dt
Fiddle
Your sample data doesn't seem to match your desired results, but I think what you're after is this:
DECLARE #Start date, #End date;
-- first, find the earliest and last date:
SELECT #Start = MIN([Start date]), #End = MAX([End date])
FROM dbo.Campaigns;
-- now use a recursive CTE to build a date range,
-- and count the number of campaigns that have a row
-- where the campaign was active on that date:
WITH d(d) AS
(
SELECT #Start
UNION ALL
SELECT DATEADD(DAY, 1, d) FROM d WHERE d < #End
)
SELECT
[Date] = d,
[Count unique campaigns] = COUNT(*)
FROM d
INNER JOIN dbo.Campaigns AS c
ON d.d >= c.[Start date] AND d.d <= c.[End date]
GROUP BY d.d OPTION (MAXRECURSION 32767);
Working example in this fiddle.
I have this df:
CODE TMAX TMIN PP
DATE
1991-01-01 000130 32.6 23.4 0.0
1991-01-02 000130 31.2 22.4 0.0
1991-01-03 000130 32.0 NaN 0.0
1991-01-04 000130 32.2 23.0 0.0
1991-01-05 000130 30.5 22.0 0.0
... ... ... ...
2020-12-27 158328 NaN NaN NaN
2020-12-28 158328 NaN NaN NaN
2020-12-29 158328 NaN NaN NaN
2020-12-30 158328 NaN NaN NaN
2020-12-31 158328 NaN NaN NaN
I have data of 30 years (1991-2020) for each CODE, and i want to calculate monthly normals of TMAX, TMIN and PP. So for TMAX and TMIN i should calculate the average for every month, so if January have 31 days i should get the mean of those 31 values and get a value for January 1991, January 1992, etc. So i will have 30 Januarys (January 1991, January 1992, ... ,January 2020), 30 Februarys, etc. After this i should calculate the average of every group of months (Januarys with Januarys, Februarys with Februarys, etc). So i will have 12 values (one value for every month). Example:
(January1991 + January1992 + ..... + January 2020) /30
(February1991 + February1992 + ..... + February 2020) /30
.... same for every group of months.
So i'm using this code but i don't know if it's ok.
from datetime import date
normalstemp=df[['CODE','TMAX','TMIN']].groupby([df.CODE, df.index.month]).mean().round(1)
For PP (precipitation) i should sum the values of every PP value of the month, so if January have 31 days i should sum all of their values and get a value for January 1991, January 1992, etc. So i will have 30 Januarys (January 1991, January 1992, ... ,January 2020) , 30 Februarys (February 1991, February 1992, ... ,February 2020), etc. After this i should calculate the average of every group of months (Januarys with Januarys, Februarys with Februarys, etc). So i will have 12 values (one value for every month, the same as TMAX and TMIN).
Example:
(January1991 + January1992 + ..... + January 2020) /30
(February1991 + February1992 + ..... + February 2020) /30
.... same for every group of months.
So im using this code but i know this code isn't correct because i'm not getting the mean of the januarys, februarys, etc.
normalspp=df[['CODE','PP']].groupby([df.CODE, df.index.month]).sum().round(1)
I only have basic knowledge of python so i will appreciate if you can help me.
Thanks in advance.
Ver 2: Average by Year-Month and by Month
import pandas as pd
import numpy as np
x = pd.date_range(start='1/1/1991', end='12/31/2020',freq='D')
df = pd.DataFrame({'Date':x.tolist()*2,
'Code':['000130']*10958 + ['158328']*10958,
'TMAX': np.random.randint(6,10, size=21916),
'TMIN': np.random.randint(1,5, size=21916)
})
# Create a Month column to get Average by Month for all years
df['Month'] = df.Date.dt.month
# Create a Year-Month column to get Average of each Month within the Year
df['Year_Mon'] = df.Date.dt.strftime('%Y-%m')
# Print the Average of each Month within each Year for each code
print (df.groupby(['Code','Year_Mon'])['TMAX'].mean())
print (df.groupby(['Code','Year_Mon'])['TMIN'].mean())
# Print the Average of each Month irrespective of the year (for each code)
print (df.groupby(['Code','Month'])['TMAX'].mean())
print (df.groupby(['Code','Month'])['TMAX'].mean())
If you want to give a name for the TMAX Average value, you can add the reset_index and rename column. Here's code to do that.
print (df.groupby(['Code','Year_Mon'])['TMAX'].mean().reset_index().rename(columns={'TMAX':'TMAX_Avg'}))
The output of this will be:
Average of TMAX for each Year-Month for each Code
Code Year_Mon
000130 1991-01 7.225806
1991-02 7.678571
1991-03 7.354839
1991-04 7.500000
1991-05 7.516129
...
158328 2020-08 7.387097
2020-09 7.300000
2020-10 7.516129
2020-11 7.500000
2020-12 7.451613
Name: TMAX, Length: 720, dtype: float64
Average of TMIN for each Year-Month for each Code
Code Year_Mon
000130 1991-01 2.419355
1991-02 2.571429
1991-03 2.193548
1991-04 2.366667
1991-05 2.451613
...
158328 2020-08 2.451613
2020-09 2.566667
2020-10 2.612903
2020-11 2.666667
2020-12 2.580645
Name: TMIN, Length: 720, dtype: float64
Average of TMAX for each Month for each Code (all years combined)
Code Month
000130 1 7.540860
2 7.536557
3 7.482796
4 7.486667
5 7.444086
6 7.570000
7 7.507527
8 7.529032
9 7.501111
10 7.401075
11 7.482222
12 7.517204
158328 1 7.532258
2 7.563679
3 7.490323
4 7.555556
5 7.500000
6 7.497778
7 7.545161
8 7.483871
9 7.526667
10 7.529032
11 7.547778
12 7.524731
Name: TMAX, dtype: float64
Average of TMIN for each Month for each Code (all years combined)
Code Month
000130 1 7.540860
2 7.536557
3 7.482796
4 7.486667
5 7.444086
6 7.570000
7 7.507527
8 7.529032
9 7.501111
10 7.401075
11 7.482222
12 7.517204
158328 1 7.532258
2 7.563679
3 7.490323
4 7.555556
5 7.500000
6 7.497778
7 7.545161
8 7.483871
9 7.526667
10 7.529032
11 7.547778
12 7.524731
Name: TMAX, dtype: float64
Ver 1: Average by Year and Month for each Code
Here is one way to do this.
You can create two columns - Year and Month. Then get the average of TMAX, TMIN, and PP for each month within the year by doing a groupby ('Code','Year_Mon')
See code for more details.
import pandas as pd
import numpy as np
# create a range of dates from 1/1/2018 thru 12/31/2020 for each day
x = pd.date_range(start='1/1/2018', end='12/31/2020',freq='D')
# create a dataframe with the date ranges x 2 for two codes
# TMIN is a random value from 1 thru 5 - you can put your actual data here
# TMAX is a random value from 6 thru 10 - you can put your actual data here
df = pd.DataFrame({'Date':x.tolist()*2,
'Code':['000130']*1096 + ['158328']*1096,
'TMAX': np.random.randint(6,10, size=2192),
'TMIN': np.random.randint(1,5, size=2192)
})
# Create a Year-Month column using df.Date.dt.strftime
df['Year_Mon'] = df.Date.dt.strftime('%Y-%m')
# Calculate the Average of TMAX and TMIN using groupby Code and Year_Mon
df['TMAX_Avg'] = df.groupby(['Code','Year_Mon'])['TMAX'].transform('mean')
df['TMIN_Avg'] = df.groupby(['Code','Year_Mon'])['TMIN'].transform('mean')
The output of this will be:
Date Code TMAX TMIN Year_Mon TMAX_Avg TMIN_Avg
0 2018-01-01 000130 8 2 2018-01 7.451613 2.129032
1 2018-01-02 000130 7 4 2018-01 7.451613 2.129032
2 2018-01-03 000130 9 2 2018-01 7.451613 2.129032
3 2018-01-04 000130 6 1 2018-01 7.451613 2.129032
4 2018-01-05 000130 9 4 2018-01 7.451613 2.129032
5 2018-01-06 000130 6 1 2018-01 7.451613 2.129032
6 2018-01-07 000130 9 2 2018-01 7.451613 2.129032
7 2018-01-08 000130 9 2 2018-01 7.451613 2.129032
8 2018-01-09 000130 7 2 2018-01 7.451613 2.129032
9 2018-01-10 000130 8 2 2018-01 7.451613 2.129032
10 2018-01-11 000130 8 3 2018-01 7.451613 2.129032
11 2018-01-12 000130 7 2 2018-01 7.451613 2.129032
12 2018-01-13 000130 7 1 2018-01 7.451613 2.129032
13 2018-01-14 000130 8 1 2018-01 7.451613 2.129032
14 2018-01-15 000130 7 3 2018-01 7.451613 2.129032
15 2018-01-16 000130 6 1 2018-01 7.451613 2.129032
16 2018-01-17 000130 6 3 2018-01 7.451613 2.129032
17 2018-01-18 000130 9 3 2018-01 7.451613 2.129032
18 2018-01-19 000130 7 2 2018-01 7.451613 2.129032
19 2018-01-20 000130 8 1 2018-01 7.451613 2.129032
20 2018-01-21 000130 9 4 2018-01 7.451613 2.129032
21 2018-01-22 000130 6 2 2018-01 7.451613 2.129032
22 2018-01-23 000130 9 4 2018-01 7.451613 2.129032
23 2018-01-24 000130 6 2 2018-01 7.451613 2.129032
24 2018-01-25 000130 8 3 2018-01 7.451613 2.129032
25 2018-01-26 000130 6 2 2018-01 7.451613 2.129032
26 2018-01-27 000130 8 1 2018-01 7.451613 2.129032
27 2018-01-28 000130 8 3 2018-01 7.451613 2.129032
28 2018-01-29 000130 6 1 2018-01 7.451613 2.129032
29 2018-01-30 000130 6 1 2018-01 7.451613 2.129032
30 2018-01-31 000130 8 1 2018-01 7.451613 2.129032
31 2018-02-01 000130 7 1 2018-02 7.250000 2.428571
32 2018-02-02 000130 6 2 2018-02 7.250000 2.428571
33 2018-02-03 000130 6 4 2018-02 7.250000 2.428571
34 2018-02-04 000130 8 3 2018-02 7.250000 2.428571
35 2018-02-05 000130 8 2 2018-02 7.250000 2.428571
36 2018-02-06 000130 6 3 2018-02 7.250000 2.428571
37 2018-02-07 000130 6 3 2018-02 7.250000 2.428571
38 2018-02-08 000130 7 1 2018-02 7.250000 2.428571
39 2018-02-09 000130 9 4 2018-02 7.250000 2.428571
40 2018-02-10 000130 8 2 2018-02 7.250000 2.428571
41 2018-02-11 000130 7 4 2018-02 7.250000 2.428571
42 2018-02-12 000130 8 1 2018-02 7.250000 2.428571
43 2018-02-13 000130 6 4 2018-02 7.250000 2.428571
44 2018-02-14 000130 6 1 2018-02 7.250000 2.428571
45 2018-02-15 000130 6 4 2018-02 7.250000 2.428571
46 2018-02-16 000130 8 2 2018-02 7.250000 2.428571
47 2018-02-17 000130 7 3 2018-02 7.250000 2.428571
48 2018-02-18 000130 9 3 2018-02 7.250000 2.428571
49 2018-02-19 000130 8 2 2018-02 7.250000 2.428571
If you want only the Code, Year-Month, and TMIN and TMAX values, you can do:
TMAX average for each month within the year:
print (df.groupby(['Code','Year_Mon'])['TMAX'].mean())
Output will be:
Code Year_Mon
000130 2018-01 7.451613
2018-02 7.250000
2018-03 7.774194
2018-04 7.366667
2018-05 7.451613
...
158328 2020-08 7.935484
2020-09 7.666667
2020-10 7.548387
2020-11 7.333333
2020-12 7.580645
TMIN average for each month within the year:
print (df.groupby(['Code','Year_Mon'])['TMIN'].mean())
Output will be:
Code Year_Mon
000130 2018-01 2.129032
2018-02 2.428571
2018-03 2.451613
2018-04 2.500000
2018-05 2.677419
...
158328 2020-08 2.709677
2020-09 2.166667
2020-10 2.161290
2020-11 2.366667
2020-12 2.548387
I have a column in a dataframe which is currently a string:
'4/17/2015 8:03:45 PM'
I need to create other columns based on this column
['dayofweek','dayofmonth']
I couldnt find a fast solution which could help me do that for 60 000 000 rows, pls help )
You can create a function(s) to convert the date in your dataframe to the output you need and than add the result as a new columns in your dataframe as below:
df
numbers time
0 1 2019-01-01
1 2 2019-01-02
2 3 2019-01-03
3 4 2019-01-04
4 5 2019-01-05
5 6 2019-01-06
6 7 2019-01-07
7 8 2019-01-08
8 9 2019-01-09
9 10 2019-01-10
10 11 2019-01-11
11 12 2019-01-12
Create the function to apply to your data columns to your dataframe
def day_of_the_week(value):
return value.strftime("%A")
def day_of_the_month(value):
return value.day
Create the new columns by applying the functions to the dataframe date columns
df['day_of_the_week'] = df['time'].apply(day_of_the_week)
df['day_of_the_month'] = df['time'].apply(day_of_the_month)
Get the update Datafame
df
numbers time day_of_the_week day_of_the_month
0 1 2019-01-01 Tuesday 1
1 2 2019-01-02 Wednesday 2
2 3 2019-01-03 Thursday 3
3 4 2019-01-04 Friday 4
4 5 2019-01-05 Saturday 5
5 6 2019-01-06 Sunday 6
6 7 2019-01-07 Monday 7
7 8 2019-01-08 Tuesday 8
8 9 2019-01-09 Wednesday 9
9 10 2019-01-10 Thursday 10
10 11 2019-01-11 Friday 11
11 12 2019-01-12 Saturday 12
Following is the code for an example of using pandas datetime module. As shown in the output, it is not consitent, It is mixing date and month. Am i doing something wrong?
dates = ['20/11/17', '12/02/18', '02/05/18', '10/09/18',
'22/06/17', '12/02/15','19/11/17', '04/09/16',
'12/05/18', '11/04/15', '10/04/17', '13/06/16']
data = pd.DataFrame(data=dates, columns=['date'])
data['date_format'] = pd.to_datetime(dates)
data
Output:
date date_format
0 20/11/17 2017-11-20
1 12/02/18 2018-12-02
2 02/05/18 2018-02-05
3 10/09/18 2018-10-09
4 22/06/17 2017-06-22
5 12/02/15 2015-12-02
6 19/11/17 2017-11-19
7 04/09/16 2016-04-09
8 12/05/18 2018-12-05
9 11/04/15 2015-11-04
10 10/04/17 2017-10-04
11 13/06/16 2016-06-13
My initial dataframe (df):
column1 column2 column3 column4
0 criteria_1 criteria_a 1/5/2017 5
1 criteria_1 criteria_b 2/3/2017 3
2 criteria_1 criteria_a 1/10/2017 10
3 criteria_1 criteria_b 2/7/2017 7
4 criteria_1 criteria_b 2/11/2017 11
5 criteria_1 criteria_a 1/13/2017 13
My code:
df = pd.read_csv("C:/Users/Desktop/maxtest.csv")
df['column3'] = pd.to_datetime(df['column3'])
df['max_column3'] = df.groupby(['column1','column2'])['column3'].transform(max)
df['max_column4'] = df.groupby(['column1','column2'])['column4'].transform(max)
df['test'] = np.where(df['column3'] < df['max_column3'],df['column3'],df['max_column4'])
The issue:
I created a df['test'] column and wish to return df['column3'] when the np.where statement is True. When I try this I receive a "TypeError: invalid type promotion" error.
I am not entirely sure what is causing the error.
See my comment for explanation.
df['column3'] = pd.to_datetime(df['column3'])
df['max_column3'] = df.groupby(['column1','column2'])['column3'].transform(max)
df['max_column4'] = df.groupby(['column1','column2'])['column4'].transform(max)
df['test'] = np.where((df['column3'] < df['max_column3']),df.column3.astype(str),df.max_column4)
Output:
column1 column2 column3 column4 max_column3 max_column4 \
0 criteria_1 criteria_a 2017-01-05 5 2017-01-13 13
1 criteria_1 criteria_b 2017-02-03 3 2017-02-11 11
2 criteria_1 criteria_a 2017-01-10 10 2017-01-13 13
3 criteria_1 criteria_b 2017-02-07 7 2017-02-11 11
4 criteria_1 criteria_b 2017-02-11 11 2017-02-11 11
5 criteria_1 criteria_a 2017-01-13 13 2017-01-13 13
test
0 2017-01-05
1 2017-02-03
2 2017-01-10
3 2017-02-07
4 11
5 13
If you want to retain the datetime format, you can do:
df['test'] = df.apply(lambda x: x.column3 if x.column3 < x.max_column3 else x.max_column4, axis=1)
df
Out[1291]:
column1 column2 column3 column4 max_column3 max_column4 \
0 criteria_1 criteria_a 2017-01-05 5 2017-01-13 13
1 criteria_1 criteria_b 2017-02-03 3 2017-02-11 11
2 criteria_1 criteria_a 2017-01-10 10 2017-01-13 13
3 criteria_1 criteria_b 2017-02-07 7 2017-02-11 11
4 criteria_1 criteria_b 2017-02-11 11 2017-02-11 11
5 criteria_1 criteria_a 2017-01-13 13 2017-01-13 13
test
0 2017-01-05 00:00:00
1 2017-02-03 00:00:00
2 2017-01-10 00:00:00
3 2017-02-07 00:00:00
4 11
5 13
I ended up using a standard function and doing:
import pandas as pd
import numpy as np
df = pd.read_csv("C:/Users/andre_000/Desktop/maxtest.csv")
df['column3'] = pd.to_datetime(df['column3'])
df['max_column3'] = df.groupby(['column1','column2'])['column3'].transform(max)
df['max_column4'] = df.groupby(['column1','column2'])['column4'].transform(max)
def func(row):
if row['column3'] < row['max_column3']:
return row['column3']
else:
return row['max_column4']
df = df.assign(test=df.apply(func, axis=1))