I am working on an automation task and my dataframe columns are as shown below
Defined Discharge Bin  Apr-20  Jan-20  Mar-20  May-20  Grand Total
2-4 min                                        1       1
4-6 min                        5               1       6
6-8 min                        5       7       2       14
I want the columns sorted starting from Jan-20. The problem is that the columns automatically get sorted in alphabetical order. Sorting could be done manually, but since I'm working on an automation task, each month when new data is fed in, the columns should automatically be ordered by month of the year.
Try this:
import pandas as pd

df = pd.DataFrame(data={'Defined Discharge Bin': ['2-4 min', '4-6 min', '6-8 min'],
                        'Apr-20': ['', '', ''], 'Jan-20': ['', 5, 5], 'Mar-20': ['', '', 7],
                        'May-20': [1, 1, 2], 'Grand Total': [1, 6, 14]})
cols_exclude = ['Defined Discharge Bin', 'Grand Total']  # fixed first and last columns
cols_date = [c for c in df.columns if c not in cols_exclude]
# Parse labels like 'Jan-20' as dates so the sort is chronological, not alphabetical.
cols_sorted = sorted(cols_date, key=lambda x: pd.to_datetime(x, format='%b-%y'))
df = df[cols_exclude[:1] + cols_sorted + cols_exclude[-1:]]
print(df)
Output:
  Defined Discharge Bin Jan-20 Mar-20 Apr-20 May-20  Grand Total
0               2-4 min                          1            1
1               4-6 min      5                   1            6
2               6-8 min      5      7            2           14
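Since this runs as part of an automation job, you could wrap the reordering in a small helper so it handles whatever month columns arrive each run. A minimal sketch based on the code above (the function name and the fixed first/last column defaults are illustrative, not part of the original):

import pandas as pd

def sort_month_columns(df, first_col='Defined Discharge Bin', last_col='Grand Total'):
    # Everything between the fixed first and last columns is a '%b-%y' month label.
    month_cols = [c for c in df.columns if c not in (first_col, last_col)]
    month_cols = sorted(month_cols, key=lambda c: pd.to_datetime(c, format='%b-%y'))
    return df[[first_col] + month_cols + [last_col]]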
I have this data frame available. It has a timestamp for start, a timestamp for end and a duration column.
start  end  duration
1      5    4
2      5    3
3      4    1
4      6    2
5      9    4
6      7    1
7      10   3
I'd like to add a column 'rolling_mean' to the dataframe that calculates a rolling average over all previous rows (ordered by start), with this condition: only previous rows where the event has already ended can be used for the mean calculation (so the end should be less than or equal to the start of the row for which the rolling mean is being calculated). For row number 4, for example, the rolling_mean is 1 because, of all the previous rows, only the one directly before it fulfills the condition of the event having ended.
This is the dataframe I'd like to get with a Pandas rolling mean:
start  end  duration  rolling_mean
1      5    4         NaN
2      5    3         NaN
3      4    1         NaN
4      6    2         1
5      9    4         2.666667
6      7    1         2.500000
7      10   3         2.200000
Here is the code to reproduce my example:
import pandas as pd

d = [[1, 5],
     [2, 5],
     [3, 4],
     [4, 6],
     [5, 9],
     [6, 7],
     [7, 10]]
df = pd.DataFrame(d, columns=['start_time', 'end_time'])
df['duration'] = df.end_time - df.start_time
I've tried to merge the dataframe with itself to then filter out the irrelevant rows, but the data frame is too big to take this approach.
So I'm looking for a rolling mean but where I can specify the extra condition.
Does anyone have any ideas for this one?
A for loop will do the job:
import numpy as np

rolling_mean = np.repeat(np.nan, len(df))
start, end, duration = df[["start_time", "end_time", "duration"]].to_numpy().T
for i in range(len(df)):
    # durations of earlier rows whose event had ended by this row's start
    matches = duration[:i][end[:i] <= start[i]]
    if matches.size > 0:  # .any() would wrongly skip rows whose duration is 0
        rolling_mean[i] = matches.mean()
df["rolling_mean"] = rolling_mean
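Running this on the example frame reproduces the expected column (a quick check; the printout below is what the expected table implies):

print(df)
#    start_time  end_time  duration  rolling_mean
# 0           1         5         4           NaN
# 1           2         5         3           NaN
# 2           3         4         1           NaN
# 3           4         6         2      1.000000
# 4           5         9         4      2.666667
# 5           6         7         1      2.500000
# 6           7        10         3      2.200000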
There are three dates in a df Date column, sorted in ascending order. How do I write the text 'Short' for the nearest date, 'Mid' for the next date, and 'Long' for the farthest date in a new column adjacent to the Date column? I.e. 2021-04-23 = Short, 2021-05-11 = Mid and 2021-10-08 = Long.
import pandas as pd

data = {"product_name": ["Keyboard", "Mouse", "Monitor", "CPU", "CPU", "Speakers"],
        "Unit_Price": [500, 200, 5000.235, 10000.550, 10000.550, 250.50],
        "No_Of_Units": [5, 5, 10, 20, 20, 8],
        "Available_Quantity": [5, 6, 10, 1, 3, 2],
        "Date": ['11-05-2021', '23-04-2021', '08-10-2021', '23-04-2021', '08-10-2021', '11-05-2021']
        }
df = pd.DataFrame(data)
df['Date'] = pd.to_datetime(df['Date'], format='%d-%m-%Y')
df = df.sort_values(by='Date')
Convert to_datetime and rank the dates, then map your values in the desired order:
df['New'] = (pd.to_datetime(df['Date']).rank(method='dense')  # 1.0, 2.0, 3.0, earliest first
             .map(dict(enumerate(['Short', 'Mid', 'Long'], start=1)))
             )
Output:
  product_name  Unit_Price  No_Of_Units  Available_Quantity       Date    New
1        Mouse     200.000            5                   6 2021-04-23  Short
3          CPU   10000.550           20                   1 2021-04-23  Short
0     Keyboard     500.000            5                   5 2021-05-11    Mid
5     Speakers     250.500            8                   2 2021-05-11    Mid
2      Monitor    5000.235           10                  10 2021-10-08   Long
4          CPU   10000.550           20                   3 2021-10-08   Long
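rank(method='dense') gives tied dates the same rank, so duplicate dates share a label. An equivalent way to build the same mapping, sketched here as an alternative rather than a replacement: pair the sorted unique dates with the labels directly.

order = df['Date'].drop_duplicates().sort_values()
labels = dict(zip(order, ['Short', 'Mid', 'Long']))
df['New'] = df['Date'].map(labels)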
I have a dataframe with 2 columns, Date and LMP, and 8760 rows in total. This is the dummy dataframe:
import pandas as pd
import numpy as np
df = pd.DataFrame({'Date': pd.date_range('2023-01-01 00:00', '2023-12-31 23:00', freq='1H'), 'LMP': np.random.randint(10, 20, 8760)})
I extracted the month from the date and then created the season column for the specific dates, like this:
df['month'] = pd.DatetimeIndex(df['Date']).month
season = []
for i in df['month']:
    if i <= 2 or i == 12:
        season.append('Winter')
    elif 2 < i <= 5:
        season.append('Spring')
    elif 5 < i <= 8:
        season.append('Summer')
    else:
        season.append('Autumn')
df['Season'] = season
df2 = df.groupby('month')['LMP'].mean()  # select LMP: .mean() on the whole frame fails on the non-numeric columns
df3 = df.groupby('Season')['LMP'].mean()
print(df2)
print(df3)
Output:
month
1     20.655113
2     20.885532
3     19.416946
4     22.025248
5     26.040606
6     19.323863
7     51.117965
8     51.434093
9     21.404680
10    14.701989
11    20.009590
12    38.706160
Season
Autumn    18.661426
Spring    22.499365
Summer    40.856845
Winter    26.944382
But I want the output to be a 24-hour (hour-of-day) average for both the monthly and the seasonal grouping.
Desired output: one table for the seasonal 24-hour average and one for the monthly 24-hour average.
Note: in the monthly 24-hour average, the columns are the months (1, 2, 3, ..., 12) and the rows are the hours (starting from 0).
Can anyone help?
Try:
df['hour'] = pd.DatetimeIndex(df['Date']).hour
dft = df[['Season', 'hour', 'LMP']]
dftg = dft.groupby(['hour', 'Season'])['LMP'].mean()
result = dftg.reset_index().pivot(index='hour', columns='Season', values='LMP')
print(result)
Result: a 24-row table (hours 0-23) with one column of mean LMP per season.
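The monthly version the question also asks for (months as columns, hours as rows) follows the same pattern; a minimal sketch using pivot_table, reusing the 'hour' and 'month' columns created above:

# 24 rows (hours 0-23) x 12 columns (months 1-12) of mean LMP
monthly = df.pivot_table(index='hour', columns='month', values='LMP', aggfunc='mean')
print(monthly)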
I have a df like this:
        Date  amount
0 2021-06-18      14
1 2021-06-19      -8
2 2021-06-20      -8
3 2021-06-21      17
4 2021-07-02      -8
5 2021-07-05      77
6 2021-07-06     -10
7 2021-08-02     -78
8 2021-08-06      77
9 2021-07-08      10
I want the count of sign changes in amount, month-wise, for each month, like:
count = [{"June-2021": 2}, {"July-2021": 3}, {"Aug-2021": 1}]
Note: the last date of each month and the first date of the next month belong to different groups, so a sign change across a month boundary is not counted.
I want a function for this.
You can use (x.mul(x.shift()) < 0).sum() (the current entry multiplied by the previous entry being negative indicates a sign change) to get the count of sign changes within each month-year group, as follows:
# df['Date'] must be datetime dtype for .dt to work
count = (df.groupby(df['Date'].dt.strftime('%b-%Y'), sort=False)['amount']
           .agg(lambda x: (x.mul(x.shift()) < 0).sum())
           .to_dict()
         )
Result:
print(count)
{'Jun-2021': 2, 'Jul-2021': 3, 'Aug-2021': 1}
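For reference, a minimal reproduction of the question's frame (this construction is my assumption from the table above; note the Date strings are converted to datetime first):

import pandas as pd

df = pd.DataFrame({
    'Date': ['2021-06-18', '2021-06-19', '2021-06-20', '2021-06-21', '2021-07-02',
             '2021-07-05', '2021-07-06', '2021-08-02', '2021-08-06', '2021-07-08'],
    'amount': [14, -8, -8, 17, -8, 77, -10, -78, 77, 10],
})
df['Date'] = pd.to_datetime(df['Date'])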
Edit
If you want list of dict, you can use:
count = (df.groupby(df['Date'].dt.strftime('%b-%Y'), sort=False)['amount']
.agg(lambda x: (x.mul(x.shift()) < 0).sum())
.reset_index()
.apply(lambda x: {x['Date']: x['amount']}, axis=1)
.to_list()
)
Result:
print(count)
[{'Jun-2021': 2}, {'Jul-2021': 3}, {'Aug-2021': 1}]
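An equivalent formulation, offered as a sketch rather than a drop-in replacement (it assumes amounts are never exactly zero, as in the example): compare the sign of each entry with the sign of the previous one.

import numpy as np

# |diff of signs| == 2 exactly when the sign flips between consecutive rows
count = (df.groupby(df['Date'].dt.strftime('%b-%Y'), sort=False)['amount']
           .agg(lambda x: int((np.sign(x).diff().abs() == 2).sum()))
           .to_dict()
         )
print(count)  # {'Jun-2021': 2, 'Jul-2021': 3, 'Aug-2021': 1}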
Suppose I have 5 categories {A, B, C, D, E} and several date entries of purchases with distinct dates (for instance, A may range from 01/01/1900 to 31/01/1901 and B from 02/02/1930 to 03/03/1933).
I want to create a new column 'day of occurrence' containing the sequence of numbers 1...N, starting from the first date on which the number of purchases >= 5.
I want this to compare how similar the categories are from the day they each reached 5 purchases (the dates themselves are irrelevant here, but product lifetime is).
Thanks!
Here is how you can label rows from 1 to N depending on a column value:
import pandas as pd

df = pd.DataFrame(data=[3, 6, 9, 3, 6], columns=['data'])
df['day of occurrence'] = 0                       # default for rows below the threshold
values_count = df.loc[df['data'] > 5].shape[0]    # number of qualifying rows
df.loc[df['data'] > 5, 'day of occurrence'] = range(1, values_count + 1)
The initial DataFrame:
   data
0     3
1     6
2     9
3     3
4     6
Output DataFrame:
   data  day of occurrence
0     3                  0
1     6                  1
2     9                  2
3     3                  0
4     6                  3
Your data should be sorted by date, for example, df = df.sort_values(by='your-datetime-column')
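To apply the same idea per category, as the original question describes, one option is a groupby with a cumulative max and sum. A minimal sketch, assuming a hypothetical 'category' column and the question's >= 5 threshold; unlike the snippet above, this numbers every row from the first qualifying one onward, which is one reading of "sequence 1...N from the point":

import pandas as pd

df = pd.DataFrame({'category': ['A', 'A', 'A', 'A', 'B', 'B', 'B'],
                   'purchases': [3, 6, 9, 3, 2, 7, 8]})
# 1 from the first row per category where purchases >= 5, onward; 0 before that.
hit = df['purchases'].ge(5).astype(int).groupby(df['category']).cummax()
# Running count of those rows: 0 before the first hit, then 1, 2, ..., N per category.
df['day of occurrence'] = hit.groupby(df['category']).cumsum()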