Consider this sample data:
Month Location Products Sales Profit
JAN 1 43 32 20
JAN 2 82 54 25
JAN 3 64 43 56
FEB 1 37 28 78
FEB 2 18 15 34
FEB 3 5 2 4
MAR 1 47 40 14
The multi-index transformation I am trying to achieve is this:
JAN FEB MAR
Location Products Sales Profit Products Sales Profit Products Sales Profit
1 43 32 20 37 28 78 47 40 14
2 82 54 25 18 15 34 null null null
3 64 43 56 5 2 4 null null null
I tried versions of this:
df.stack().to_frame().T
It put all the data into one row. So, that's not the goal.
I presume I am close in that it should be a stacking or unstacking, melting or unmelting, but my attempts have all resulted in data oatmeal at this point. Appreciate your time trying to solve this one.
You can use pivot with reorder_levels and sort_index():
df.pivot(index='Location', columns='Month').reorder_levels(order=[1, 0], axis=1).sort_index(axis=1)
Month FEB JAN MAR
Products Profit Sales Products Profit Sales Products Profit Sales
Location
1 37.0 78.0 28.0 43.0 20.0 32.0 47.0 14.0 40.0
2 18.0 34.0 15.0 82.0 25.0 54.0 NaN NaN NaN
3 5.0 4.0 2.0 64.0 56.0 43.0 NaN NaN NaN
In case you are interested, this answer elaborates on the difference between swaplevel and reorder_levels.
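For a quick illustration of the difference, here is a minimal sketch on a toy frame: swaplevel exchanges exactly two levels, while reorder_levels accepts an arbitrary permutation, which only starts to matter once you have three or more levels.
import pandas as pd

cols = pd.MultiIndex.from_product([['A', 'B'], ['x', 'y']])
toy = pd.DataFrame([[1, 2, 3, 4]], columns=cols)

# With only two column levels, both calls give the same order
print(toy.swaplevel(axis=1).columns.tolist())
print(toy.reorder_levels([1, 0], axis=1).columns.tolist())
# [('x', 'A'), ('y', 'A'), ('x', 'B'), ('y', 'B')] in both cases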
Use pivot:
>>> df.pivot(index='Location', columns='Month').swaplevel(axis=1).sort_index(axis=1)
Month FEB JAN MAR
Products Profit Sales Products Profit Sales Products Profit Sales
Location
1 37.0 78.0 28.0 43.0 20.0 32.0 47.0 14.0 40.0
2 18.0 34.0 15.0 82.0 25.0 54.0 NaN NaN NaN
3 5.0 4.0 2.0 64.0 56.0 43.0 NaN NaN NaN
To preserve the month order, you have to convert your Month column to an ordered CategoricalDtype first:
df['Month'] = df['Month'].astype(pd.CategoricalDtype(df['Month'].unique(), ordered=True))
out = df.pivot(index='Location', columns='Month').swaplevel(axis=1).sort_index(axis=1)
print(out)
# Output:
Month JAN FEB MAR
Products Profit Sales Products Profit Sales Products Profit Sales
Location
1 43.0 20.0 32.0 37.0 78.0 28.0 47.0 14.0 40.0
2 82.0 25.0 54.0 18.0 34.0 15.0 NaN NaN NaN
3 64.0 56.0 43.0 5.0 4.0 2.0 NaN NaN NaN
Update 2
Try forcing the order of the second-level columns:
df1 = df.set_index(['Month', 'Location'])
df1.columns = pd.CategoricalIndex(df1.columns, categories=df1.columns, ordered=True)
df1 = df1.unstack('Month').swaplevel(axis=1).sort_index(axis=1)
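Putting both pieces together on the question's sample data, a minimal end-to-end sketch (the explicit categories= argument is an assumption here: without it pandas derives the categories in sorted order, losing the original Products/Sales/Profit sequence):
import pandas as pd

df = pd.DataFrame({
    'Month':    ['JAN', 'JAN', 'JAN', 'FEB', 'FEB', 'FEB', 'MAR'],
    'Location': [1, 2, 3, 1, 2, 3, 1],
    'Products': [43, 82, 64, 37, 18, 5, 47],
    'Sales':    [32, 54, 43, 28, 15, 2, 40],
    'Profit':   [20, 25, 56, 78, 34, 4, 14],
})

# Ordered categorical keeps JAN < FEB < MAR through sort_index
df['Month'] = df['Month'].astype(pd.CategoricalDtype(df['Month'].unique(), ordered=True))

df1 = df.set_index(['Month', 'Location'])
# Ordered categorical columns keep Products < Sales < Profit as well
df1.columns = pd.CategoricalIndex(df1.columns, categories=df1.columns, ordered=True)
out = df1.unstack('Month').swaplevel(axis=1).sort_index(axis=1)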
I am encountering errors each time I attempt to compute the difference in readings for a meter in my dataset. The dataset structure is this:
id paymenttermid houseid houseid-meterid quantity month year cleaned_quantity
Datetime
2019-02-01 255 water 215 215M201 23.0 2 2019 23.0
2019-02-01 286 water 193 193M181 24.0 2 2019 24.0
2019-02-01 322 water 172 172M162 22.0 2 2019 22.0
2019-02-01 323 water 176 176M166 61.0 2 2019 61.0
2019-02-01 332 water 158 158M148 15.0 2 2019 15.0
I am attempting to generate a new column called consumption that computes the difference in quantities consumed for each house (identified by houseid-meterid) after every month of the year.
The code I am using to implement this is:
water_df["consumption"] = water_df.groupby(["year", "month", "houseid-meterid"])["cleaned_quantity"].diff(-1)
After executing this code, the consumption column is filled with NaN values. How can I correctly implement this logic?
The end result looks like this:
id paymenttermid houseid houseid-meterid quantity month year cleaned_quantity consumption
Datetime
2019-02-01 255 water 215 215M201 23.0 2 2019 23.0 NaN
2019-02-01 286 water 193 193M181 24.0 2 2019 24.0 NaN
2019-02-01 322 water 172 172M162 22.0 2 2019 22.0 NaN
2019-02-01 323 water 176 176M166 61.0 2 2019 61.0 NaN
2019-02-01 332 water 158 158M148 15.0 2 2019 15.0 NaN
Many thanks in advance.
I have attempted to use
water_df["consumption"] = water_df.groupby(["year", "month", "houseid-meterid"])["cleaned_quantity"].diff(-1)
and
water_df["consumption"] = water_df.groupby(["year", "month", "houseid-meterid"])["cleaned_quantity"].diff(0)
and
water_df["consumption"] = water_df.groupby(["year", "month", "houseid-meterid"])["cleaned_quantity"].diff()
All of these commands result in the same behaviour as stated above.
Expected output should be:
Datetime houseid-meterid cleaned_quantity consumption
2019-02-01 215M201 23.0 20
2019-03-02 215M201 43.0 9
2019-04-01 215M201 52.0 12
2019-05-01 215M201 64.0 36
2019-06-01 215M201 100.0 20
What steps should I take?
Sort values by Datetime (if needed), then group by houseid-meterid before computing the diff of the cleaned_quantity values, then shift the result up one row to align it with the right reading:
df['consumption'] = (df.sort_values('Datetime')
                       .groupby('houseid-meterid')['cleaned_quantity']
                       .transform(lambda x: x.diff().shift(-1)))
print(df)
# Output
Datetime houseid-meterid cleaned_quantity consumption
0 2019-02-01 215M201 23.0 20.0
1 2019-03-02 215M201 43.0 9.0
2 2019-04-01 215M201 52.0 12.0
3 2019-05-01 215M201 64.0 36.0
4 2019-06-01 215M201 100.0 NaN
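As an aside, the original attempts return only NaN because grouping by ["year", "month", "houseid-meterid"] puts every monthly reading in its own single-row group, so diff() never has a neighbouring row to subtract. An equivalent sketch without transform (assuming, as above, that Datetime is a regular column), subtracting each reading from the meter's next one:
df = df.sort_values('Datetime')
# shift(-1) fetches the next reading within each meter's group
next_reading = df.groupby('houseid-meterid')['cleaned_quantity'].shift(-1)
df['consumption'] = next_reading - df['cleaned_quantity']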
I downloaded the Broad Dollar Index from FRED with the following format:
DATE RTWEXBGS
0 2006-01-01 100.0000
1 2006-02-01 100.2651
2 2006-03-01 100.5424
3 2006-04-01 100.0540
4 2006-05-01 97.8681
.. ... ...
194 2022-03-01 111.2659
195 2022-04-01 111.8324
196 2022-05-01 114.6075
197 2022-06-01 115.6957
198 2022-07-01 118.2674
I also got an Excel file of inflation rates in a different format:
Year Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec Annual
0 2022 0.07480 0.07871 0.08542 0.08259 0.08582 0.09060 0.08525 NaN NaN NaN NaN NaN NaN
1 2021 0.01400 0.01676 0.02620 0.04160 0.04993 0.05391 0.05365 0.05251 0.05390 0.06222 0.06809 0.07036 0.04698
2 2020 0.02487 0.02335 0.01539 0.00329 0.00118 0.00646 0.00986 0.01310 0.01371 0.01182 0.01175 0.01362 0.01234
3 2019 0.01551 0.01520 0.01863 0.01996 0.01790 0.01648 0.01811 0.01750 0.01711 0.01764 0.02051 0.02285 0.01812
4 2018 0.02071 0.02212 0.02360 0.02463 0.02801 0.02872 0.02950 0.02699 0.02277 0.02522 0.02177 0.01910 0.02443
.. ... ... ... ... ... ... ... ... ... ... ... ... ... ...
104 1918 0.19658 0.17500 0.16667 0.12698 0.13281 0.13077 0.17969 0.18462 0.18045 0.18519 0.20741 0.20438 0.17284
105 1917 0.12500 0.15385 0.14286 0.18868 0.19626 0.20370 0.18519 0.19266 0.19820 0.19469 0.17391 0.18103 0.17841
106 1916 0.02970 0.04000 0.06061 0.06000 0.05941 0.06931 0.06931 0.07921 0.09901 0.10784 0.11650 0.12621 0.07667
107 1915 0.01000 0.01010 0.00000 0.02041 0.02020 0.02020 0.01000 -0.00980 -0.00980 0.00990 0.00980 0.01980 0.00915
108 1914 0.02041 0.01020 0.01020 0.00000 0.02062 0.01020 0.01010 0.03030 0.02000 0.01000 0.00990 0.01000 0.01349
How do I change the inflation table into a format similar to the dollar index?
Something like this (the Annual column is not taken into account):
df
###
Year Jan Feb Mar Apr May Jun Jul Aug \
0 2022 0.07480 0.07871 0.08542 0.08259 0.08582 0.09060 0.08525 NaN
1 2021 0.01400 0.01676 0.02620 0.04160 0.04993 0.05391 0.05365 NaN
2 2020 0.02487 0.02335 0.01539 0.00329 0.00118 0.00646 0.00986 NaN
Sep Oct Nov Dec Annual
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN
month = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
df_melt = pd.melt(df, id_vars=['Year'], value_vars=month, var_name='Month', value_name='Sales')
# build a proper datetime from the Year and Month columns
df_melt['Date'] = pd.to_datetime(df_melt['Year'].astype(str) + '-' + df_melt['Month'].astype(str))
df_melt = df_melt[['Date', 'Sales']]
df_melt
###
Date Sales
0 2022-01-01 0.07480
1 2021-01-01 0.01400
2 2020-01-01 0.02487
3 2022-02-01 0.07871
4 2021-02-01 0.01676
5 2020-02-01 0.02335
6 2022-03-01 0.08542
7 2021-03-01 0.02620
8 2020-03-01 0.01539
9 2022-04-01 0.08259
10 2021-04-01 0.04160
11 2020-04-01 0.00329
12 2022-05-01 0.08582
13 2021-05-01 0.04993
14 2020-05-01 0.00118
15 2022-06-01 0.09060
16 2021-06-01 0.05391
17 2020-06-01 0.00646
18 2022-07-01 0.08525
19 2021-07-01 0.05365
20 2020-07-01 0.00986
21 2022-08-01 NaN
22 2021-08-01 NaN
23 2020-08-01 NaN
24 2022-09-01 NaN
25 2021-09-01 NaN
26 2020-09-01 NaN
27 2022-10-01 NaN
28 2021-10-01 NaN
29 2020-10-01 NaN
30 2022-11-01 NaN
31 2021-11-01 NaN
32 2020-11-01 NaN
33 2022-12-01 NaN
34 2021-12-01 NaN
35 2020-12-01 NaN
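If you want the melted frame laid out chronologically like the dollar index, a small follow-up (assuming df_melt from above):
df_melt = df_melt.sort_values('Date').reset_index(drop=True)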
My data has the following structure:
import numpy as np
import pandas as pd

np.random.seed(25)
tdf = pd.DataFrame({
    'person_id': [1, 1, 1, 1,
                  2, 2,
                  3, 3, 3, 3, 3,
                  4, 4, 4,
                  5, 5, 5, 5, 5, 5, 5,
                  6,
                  7, 7,
                  8, 8, 8, 8, 8, 8, 8,
                  9, 9,
                  10, 10],
    'Date': ['2021-01-02', '2021-01-05', '2021-01-07', '2021-01-09',
             '2021-01-02', '2021-01-05',
             '2021-01-02', '2021-01-05', '2021-01-07', '2021-01-09', '2021-01-11',
             '2021-01-02', '2021-01-05', '2021-01-07',
             '2021-01-02', '2021-01-05', '2021-01-07', '2021-01-09',
             '2021-01-11', '2021-01-13', '2021-01-15',
             '2021-01-02',
             '2021-01-02', '2021-01-05',
             '2021-01-02', '2021-01-05', '2021-01-07', '2021-01-09',
             '2021-01-11', '2021-01-13', '2021-01-15',
             '2021-01-02', '2021-01-05',
             '2021-01-02', '2021-01-05'],
    'Quantity': np.floor(np.random.random(size=35) * 100),
})
And I want to calculate a moving average (2 periods) over Date, so the final output looks like the following. For the first MA, we take 2021-01-02 & 2021-01-05 across all observations & calculate the MA (50); similarly for the other dates. The output need not be in the structure I'm showing in the report; I just need the date & MA columns in the final data.
Thanks!
IIUC, you can aggregate the identical dates first, getting the sum and count per date.
Then take the rolling sum over 2 dates (it doesn't look like you want a defined time period here, but rather raw successive values, so I am assuming the dates are already sorted).
Finally, take the ratio of the two rolling sums to get the mean; note this is the pooled mean over all observations of the two dates, not the average of the two daily means:
g = tdf.groupby('Date')['Quantity']
out = g.sum().rolling(2).sum()/g.count().rolling(2).sum()
output:
Date
2021-01-02 NaN
2021-01-05 50.210526
2021-01-07 45.071429
2021-01-09 41.000000
2021-01-11 44.571429
2021-01-13 48.800000
2021-01-15 50.500000
Name: Quantity, dtype: float64
joining back to the original data:
g = tdf.groupby('Date')['Quantity']
s = g.sum().rolling(2).sum()/g.count().rolling(2).sum()
tdf.merge(s.rename('Quantity_MA(2)'), left_on='Date', right_index=True)
output:
person_id Date Quantity Quantity_MA(2)
0 1 2021-01-02 87.0 NaN
4 2 2021-01-02 41.0 NaN
6 3 2021-01-02 68.0 NaN
11 4 2021-01-02 11.0 NaN
14 5 2021-01-02 16.0 NaN
21 6 2021-01-02 51.0 NaN
22 7 2021-01-02 38.0 NaN
24 8 2021-01-02 51.0 NaN
31 9 2021-01-02 90.0 NaN
33 10 2021-01-02 45.0 NaN
1 1 2021-01-05 58.0 50.210526
5 2 2021-01-05 11.0 50.210526
7 3 2021-01-05 43.0 50.210526
12 4 2021-01-05 44.0 50.210526
15 5 2021-01-05 52.0 50.210526
23 7 2021-01-05 99.0 50.210526
25 8 2021-01-05 55.0 50.210526
32 9 2021-01-05 66.0 50.210526
34 10 2021-01-05 28.0 50.210526
2 1 2021-01-07 27.0 45.071429
8 3 2021-01-07 55.0 45.071429
13 4 2021-01-07 58.0 45.071429
16 5 2021-01-07 32.0 45.071429
26 8 2021-01-07 3.0 45.071429
3 1 2021-01-09 18.0 41.000000
9 3 2021-01-09 36.0 41.000000
17 5 2021-01-09 69.0 41.000000
27 8 2021-01-09 71.0 41.000000
10 3 2021-01-11 40.0 44.571429
18 5 2021-01-11 36.0 44.571429
28 8 2021-01-11 42.0 44.571429
19 5 2021-01-13 83.0 48.800000
29 8 2021-01-13 43.0 48.800000
20 5 2021-01-15 48.0 50.500000
30 8 2021-01-15 28.0 50.500000
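The same pooled mean can also be written in a single aggregation pass; a sketch equivalent to the above:
# sum and count per date, rolled over 2 successive dates, then their ratio
agg = tdf.groupby('Date')['Quantity'].agg(['sum', 'count']).rolling(2).sum()
ma = (agg['sum'] / agg['count']).rename('Quantity_MA(2)')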
I have a dataframe of values that are mostly (but not always) quarterly.
I need to fill in for any missing months so it is complete.
Here I need to put it into a complete df from 2015-12 to 2021-03.
Thank you.
id date amt rate
0 15856 2015-12-31 85.09 0.0175
1 15857 2016-03-31 135.60 0.0175
2 15858 2016-06-30 135.91 0.0175
3 15859 2016-09-30 167.27 0.0175
4 15860 2016-12-31 173.32 0.0175
....
19 15875 2020-09-30 305.03 0.0175
20 15876 2020-12-31 354.09 0.0175
21 15877 2021-03-31 391.19 0.0175
You can use pd.date_range() to generate the month-end dates with freq='M', then reindex the datetime index:
df_ = (df.set_index('date')
         .reindex(pd.date_range('2015-12', '2021-03', freq='M'))
         .reset_index()
         .rename(columns={'index': 'date'}))
print(df_)
date id amt rate
0 2015-12-31 15856.0 85.09 0.0175
1 2016-01-31 NaN NaN NaN
2 2016-02-29 NaN NaN NaN
3 2016-03-31 15857.0 135.60 0.0175
4 2016-04-30 NaN NaN NaN
.. ... ... ... ...
58 2020-10-31 NaN NaN NaN
59 2020-11-30 NaN NaN NaN
60 2020-12-31 15876.0 354.09 0.0175
61 2021-01-31 NaN NaN NaN
62 2021-02-28 NaN NaN NaN
To fill the NaN values, you can use df_.fillna(0).
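If zeros are not appropriate for the gap months, forward-filling or interpolating are alternatives; a sketch, assuming df_ from above:
df_['rate'] = df_['rate'].ffill()        # carry the last known rate forward
df_['amt'] = df_['amt'].interpolate()    # linear interpolation between quarters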
How to separate a dataframe as follows:
yr mon day Tmax Tmin pcp
2013 4 22 5.09-10.92 0.0
2013 4 23 2.77 -9.63 0.5
2013 4 24 0.28 -9.90 9.9
2013 4 25 0.76 -6.70 12.2
2013 4 26 -0.35 -9.48 0.0
2013 4 27 7.22-10.47 0.0
2013 4 28 4.19-10.78 0.0
You see: there is in principle no whitespace between Tmax and Tmin. The max width of Tmax and Tmin is 6 characters; values shorter than 6 characters are padded with whitespace. I want to read this into a df and separate the columns.
Can I separate the columns at given character widths?
try this:
df = pd.read_fwf(filename)
It seems you need str.extract to pull out the floats and ints. This solution works if all the data are in one column, selected here with iloc:
pat = r"(\d+)\s*(\d+)\s*(\d+)\s*([-+]?\d+\.\d+|\d+)\s*([-+]?\d+\.\d+|\d+)\s*([-+]?\d+\.\d+|\d+)"
df1 = df.iloc[:, 0].str.extract(pat, expand=True)
df1.columns = ['year', 'mont','day','Tmax','Tmin','pcp']
print (df1)
year mont day Tmax Tmin pcp
0 2013 4 22 5.09 -10.92 0.0
1 2013 4 23 2.77 -9.63 0.5
2 2013 4 24 0.28 -9.90 9.9
3 2013 4 25 0.76 -6.70 12.2
4 2013 4 26 -0.35 -9.48 0.0
5 2013 4 27 7.22 -10.47 0.0
6 2013 4 28 4.19 -10.78 0.0
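Note that str.extract returns string (object) columns, so convert them to numeric dtypes afterwards; a small follow-up, assuming df1 from above:
df1 = df1.apply(pd.to_numeric)  # year/mont/day become ints, the rest floats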
Another solution is to use read_fwf and specify colspecs:
import pandas as pd
from io import StringIO  # pandas.compat.StringIO was removed in modern pandas
temp=u"""yr mon day Tmax Tmin pcp
2013 4 22 5.09-10.92 0.0
2013 4 23 2.77 -9.63 0.5
2013 4 24 0.28 -9.90 9.9
2013 4 25 0.76 -6.70 12.2
2013 4 26 -0.35 -9.48 0.0
2013 4 27 7.22-10.47 0.0
2013 4 28 4.19-10.78 0.0 """
# after testing, replace 'StringIO(temp)' with 'filename.csv'
names=['year', 'mont','day','Tmax','Tmin','pcp']
colspecs = [(0, 6), (9, 10), (12, 14), (21, 26), (26, 32), (34, 38)]
df = pd.read_fwf(StringIO(temp),colspecs=colspecs, names=names, header=0)
print (df)
year mont day Tmax Tmin pcp
0 2013 4 22 5.09 -10.92 0.0
1 2013 4 23 2.77 -9.63 0.5
2 2013 4 24 0.28 -9.90 9.9
3 2013 4 25 0.76 -6.70 12.2
4 2013 4 26 -0.35 -9.48 0.0
5 2013 4 27 7.22 -10.47 0.0
6 2013 4 28 4.19 -10.78 0.0
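For what it's worth, read_fwf can also try to infer the column boundaries itself (colspecs='infer' is the default, based on the first infer_nrows rows), but values glued together such as 5.09-10.92 usually defeat the inference, which is why explicit colspecs are the safer choice here:
# inference may mis-split the glued Tmax/Tmin values; shown only for comparison
df_auto = pd.read_fwf(StringIO(temp), header=0)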