I'm trying to set a name for the column produced by size(), like this:
x = monthly.copy()
x["size"] = x\
.groupby(["sub_acct_id", "clndr_yr_month"]).transform(np.size)
But what I'm getting is
ValueError: Wrong number of items passed 15, placement implies 1
Why is this not working for my dataframe?
If I simply print the copy:
x = monthly.copy()
print x
this is what the table looks like:
sub_acct_id clndr_yr_month
12716D 201601 219
201602 265
12716G 201601 221
201602 262
12716K 201601 181
201602 149
...
What I'm trying to accomplish is to set the name of the column:
sub_acct_id clndr_yr_month size
12716D 201601 219
201602 265
12716G 201601 221
201602 262
12716K 201601 181
201602 149
...
You need to select a single column after the groupby (otherwise transform is applied to every column of the grouped frame, which is why pandas complains that it cannot place 15 items into one column) and pass 'size' as a string:
x["size"] = x.groupby(["sub_acct_id", "clndr_yr_month"])['sub_acct_id'].transform('size')
Sample:
df = pd.DataFrame({'sub_acct_id': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'z', 'z'],
                   'clndr_yr_month': ['a', 'b', 'c', 'c', 'a', 'b', 'c', 'a', 'b']})
print (df)
clndr_yr_month sub_acct_id
0 a x
1 b x
2 c x
3 c x
4 a y
5 b y
6 c y
7 a z
8 b z
df['size'] = df.groupby(['sub_acct_id', 'clndr_yr_month'])['sub_acct_id'].transform('size')
print (df)
clndr_yr_month sub_acct_id size
0 a x 1
1 b x 1
2 c x 2
3 c x 2
4 a y 1
5 b y 1
6 c y 1
7 a z 1
8 b z 1
Another solution, with aggregated output:
df = df.groupby(['sub_acct_id', 'clndr_yr_month']).size().reset_index(name='Size')
print (df)
sub_acct_id clndr_yr_month Size
0 x a 1
1 x b 1
2 x c 2
3 y a 1
4 y b 1
5 y c 1
6 z a 1
7 z b 1
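Applied to the original monthly frame from the question (column names assumed from the question's output), a quick sketch of the same aggregation:
x = monthly.groupby(['sub_acct_id', 'clndr_yr_month']).size().reset_index(name='size')
print (x)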
I have a dataframe as follows:
key1 key2 val
0 a x 8
1 a x 6
2 a x 7
3 a x 4
4 a x 9
5 a x 1
6 a x 2
7 a x 3
8 a x 10
9 a x 5
10 a y 4
11 a y 9
12 a y 1
13 a y 2
14 b x 17
15 b x 15
16 b x 18
17 b x 19
18 b x 12
19 b x 20
20 b x 14
21 b x 13
22 b x 16
23 b x 11
24 b y 2
25 b y 3
26 b y 10
27 b y 5
28 b y 4
29 b y 24
30 b y 22
What I need to do is:
Access each group by key1
In each group of key1, do qcut on the observations where key2 == x
For observations that fall outside the bin range, assign them to the lowest and highest bins
According to the dataframe above, the first group (key1 = a) spans index 0-13, but only index 0-9 is used to create the bins (thresholds). The bins are then applied to index 0-13.
The second group (key1 = b) spans index 14-30. Only index 14-23 is used to create the bins, which are then applied to index 14-30.
However, index 24-28 and index 29-30 fall outside the bin range, so index 24-28 is assigned to the smallest bin and index 29-30 to the largest bin.
The output looks like this:
key1 key2 val labels
0 a x 8 1
1 a x 6 1
2 a x 7 1
3 a x 4 0
4 a x 9 1
5 a x 1 0
6 a x 2 0
7 a x 3 0
8 a x 10 1
9 a x 5 0
10 a y 4 0
11 a y 9 1
12 a y 1 0
13 a y 2 0
14 b x 17 1
15 b x 15 0
16 b x 18 1
17 b x 19 1
18 b x 12 0
19 b x 20 1
20 b x 14 0
21 b x 13 0
22 b x 16 1
23 b x 11 0
24 b y 2 0
25 b y 3 0
26 b y 10 0
27 b y 5 0
28 b y 4 0
29 b y 24 1
30 b y 22 1
My solution: I create a dict to contain the bins (for simplicity, take qcut=2):
dict_bins = {}
key_unique = data['key1'].unique()
for k in key_unique:
    sub = data[(data['key1'] == k) & (data['key2'] == 'x')].copy()
    dict_bins[k] = pd.qcut(sub['val'], 2, labels=False, retbins=True)[1]
Then I intend to use groupby with apply, but I am stuck on accessing dict_bins inside the lambda:
data['sort_key1'] = data.groupby(['key1'])['val'].apply(lambda g: --- stuck---)
Any other solution, or modification to my solution is appreciated.
Thank you
A first approach is to create a custom function that computes the bins from the key2 == 'x' rows, replaces the outermost edges with -inf and inf (so out-of-range values fall into the lowest and highest bins), and then cuts the whole group:
def discretize(df):
    bins = pd.qcut(df.loc[df['key2'] == 'x', 'val'], 2, labels=False, retbins=True)[1]
    bins = [-np.inf] + bins[1:-1].tolist() + [np.inf]
    return pd.cut(df['val'], bins, labels=False)

df['label'] = df.groupby('key1').apply(discretize).droplevel(0)
Output:
>>> df
key1 key2 val label
0 a x 8 1
1 a x 6 1
2 a x 7 1
3 a x 4 0
4 a x 9 1
5 a x 1 0
6 a x 2 0
7 a x 3 0
8 a x 10 1
9 a x 5 0
10 a y 4 0
11 a y 9 1
12 a y 1 0
13 a y 2 0
14 b x 17 1
15 b x 15 0
16 b x 18 1
17 b x 19 1
18 b x 12 0
19 b x 20 1
20 b x 14 0
21 b x 13 0
22 b x 16 1
23 b x 11 0
24 b y 2 0
25 b y 3 0
26 b y 10 0
27 b y 5 0
28 b y 4 0
29 b y 24 1
30 b y 22 1
You need to drop the first level of index to align indexes:
>>> df.groupby('key1').apply(discretize)
key1 # <- you have to drop this index level
a 0 1
1 1
2 1
3 0
4 1
5 0
6 0
7 0
8 1
9 0
10 0
11 1
12 0
13 0
b 14 1
15 0
16 1
17 1
18 0
19 1
20 0
21 0
22 1
23 0
24 0
25 0
26 0
27 0
28 0
29 1
30 1
Name: val, dtype: int64
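As a side note, recent pandas versions let you pass group_keys=False to groupby, which skips adding that extra key1 level, so no droplevel is needed; a minimal sketch, assuming such a version:
df['label'] = df.groupby('key1', group_keys=False).apply(discretize)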
Question updated, see below
I have a large dataframe similar in structure to e.g.
df=pd.DataFrame({'A': [0, 0, 0, 11, 22,33], 'B': [10, 20,30, 110, 220, 330], 'C':['x', 'y', 'z', 'x', 'y', 'z']})
df
A B C
0 0 10 x
1 0 20 y
2 0 30 z
3 11 110 x
4 22 220 y
5 33 330 z
I want to create a new column by taking the value of B from a different row: the row whose C equals the current row's C and whose A equals 0. The expected result is:
A B C new_B_based_on_A_and_C
0 0 10 x 10
1 0 20 y 20
2 0 30 z 30
3 11 110 x 10
4 22 220 y 20
5 33 330 z 30
I want a simple solution without a for loop over the rows, something like:
df.apply(lambda row: df[(df['C'] == row.C) & (df['A'] == 0)]['B'].iloc[0], axis=1)
The dataframe is guaranteed to have those values, and they are unique.
Update for a more general case
I am looking for a general solution that would also work for multiple columns to match on, e.g.:
df=pd.DataFrame({'A': [0, 0, 0,0, 11, 22,33, 44], 'B': [10, 20,30, 40, 110, 220, 330, 440], 'C':['x', 'y', 'x', 'y', 'x', 'y', 'x', 'y'], 'D': [1, 1, 5, 5, 1,1 ,5, 5]})
A B C D
0 0 10 x 1
1 0 20 y 1
2 0 30 x 5
3 0 40 y 5
4 11 110 x 1
5 22 220 y 1
6 33 330 x 5
7 44 440 y 5
and the result would be then
A B C D new_B_based_on_A_C_D
0 0 10 x 1 10
1 0 20 y 1 20
2 0 30 x 5 30
3 0 40 y 5 40
4 11 110 x 1 10
5 22 220 y 1 20
6 33 330 x 5 30
7 44 440 y 5 40
You can do a map:
# you **must** make sure that for each unique `C` value,
# there is only one row with `A==0`.
df['new'] = df['C'].map(df.loc[df['A']==0].set_index('C')['B'])
Output:
A B C new
0 0 10 x 10
1 0 20 y 20
2 0 30 z 30
3 11 110 x 10
4 22 220 y 20
5 33 330 z 30
Explanation: Imagine you have a series s indicating the mapping:
idx
idx1 value1
idx2 value2
idx3 value3
then that's what map does: df['C'].map(s).
Now, for a dataframe d:
C B
c1 b1
c2 b2
c3 b3
we do s=d.set_index('C')['B'] to get the above form.
Finally, as mentioned, your mapping happens where A==0, so d = df[df['A']==0].
Composing the forward path:
mapping_data = df[df['A']==0]
mapping_series = mapping_data.set_index('C')['B']
new_values = df['C'].map(mapping_series)
and the first piece of code is just all these lines combined.
If I understood the question correctly, for the general case you could use a merge like this:
df.merge(df.loc[df['A'] == 0, ['B', 'C', 'D']], on=['C', 'D'], how='left', suffixes=('', '_new'))
Output:
    A    B  C  D  B_new
0   0   10  x  1     10
1   0   20  y  1     20
2   0   30  x  5     30
3   0   40  y  5     40
4  11  110  x  1     10
5  22  220  y  1     20
6  33  330  x  5     30
7  44  440  y  5     40
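If you want the exact column name from the question, you could rename the suffixed column afterwards; note this relies on the A == 0 rows being unique per (C, D) pair, otherwise the merge duplicates rows. A sketch:
out = df.merge(df.loc[df['A'] == 0, ['B', 'C', 'D']], on=['C', 'D'], how='left', suffixes=('', '_new'))
out = out.rename(columns={'B_new': 'new_B_based_on_A_C_D'})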
My dataframe looks like:
A B C D .... Y Z
0 5 12 14 4 2
3 6 15 10 1 30
2 10 20 12 5 15
I want to create another dataframe that only contains the columns with an average value greater than 10:
C D .... Z
12 14 2
15 10 30
20 12 15
Use:
df = df.loc[:, df.mean() > 10]
print (df)
C D Z
0 12 14 2
1 15 10 30
2 20 12 15
Detail:
print (df.mean())
A 1.666667
B 7.000000
C 15.666667
D 12.000000
Y 3.333333
Z 15.666667
dtype: float64
print (df.mean() > 10)
A False
B False
C True
D True
Y False
Z True
dtype: bool
Alternative:
print (df[df.columns[df.mean() > 10]])
C D Z
0 12 14 2
1 15 10 30
2 20 12 15
Detail:
print (df.columns[df.mean() > 10])
Index(['C', 'D', 'Z'], dtype='object')
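If the frame also contains non-numeric columns, a sketch that restricts the mean to numeric columns (the numeric_only keyword is required for mixed dtypes in recent pandas versions):
m = df.mean(numeric_only=True)
print (df[m.index[m > 10]])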
I have a dataframe df:
id value
1 100
2 200
3 500
4 600
5 700
6 800
I have another dataframe df2:
c_id flag
2 Y
3 Y
5 Y
Similarly, df3:
c_id flag
1 N
3 Y
4 Y
I want to merge these 3 dataframes and create a column in df such that it looks like:
id value flag
1 100 N
2 200 Y
3 500 Y
4 600 Y
5 700 Y
6 800 nan
I DON'T WANT TO USE concatenation of df2 and df3, e.g.:
final = pd.concat([df2, df3], ignore_index=False)
final.drop_duplicates(inplace=True)
I don't want to use this method; is there any other way?
Using pd.merge between df and the combined df2+df3:
In [1150]: df.merge(df2.append(df3), left_on=['id'], right_on=['c_id'], how='left')
Out[1150]:
id value c_id flag
0 1 100 1.0 N
1 2 200 2.0 Y
2 3 500 3.0 Y
3 3 500 3.0 Y
4 4 600 4.0 Y
5 5 700 5.0 Y
6 6 800 NaN NaN
Details
In [1151]: df2.append(df3)
Out[1151]:
c_id flag
0 2 Y
1 3 Y
2 5 Y
0 1 N
1 3 Y
2 4 Y
Using map, you could:
In [1140]: df.assign(flag=df.id.map(
               df2.set_index('c_id')['flag'].combine_first(
                   df3.set_index('c_id')['flag'])))
Out[1140]:
id value flag
0 1 100 N
1 2 200 Y
2 3 500 Y
3 4 600 Y
4 5 700 Y
5 6 800 NaN
Let me explain: using set_index and combine_first, we create a mapping from id to flag.
In [1141]: mapping = df2.set_index('c_id')['flag'].combine_first(
               df3.set_index('c_id')['flag'])
In [1142]: mapping
Out[1142]:
c_id
1 N
2 Y
3 Y
4 Y
5 Y
Name: flag, dtype: object
In [1143]: df.assign(flag=df.id.map(mapping))
Out[1143]:
id value flag
0 1 100 N
1 2 200 Y
2 3 500 Y
3 4 600 Y
4 5 700 Y
5 6 800 NaN
Merge on both df2 and df3:
df = pd.merge(pd.merge(df, df2, left_on='id', right_on='c_id', how='left'),
              df3, left_on='id', right_on='c_id', how='left')
Fill nulls (the first merge contributes flag_x, the second flag_y):
df['flag'] = df['flag_x'].fillna(df['flag_y'])
Delete the helper columns:
df = df.drop(columns=['flag_x', 'flag_y', 'c_id_x', 'c_id_y'])
Or you could just append:
df4 = df2.append(df3)
pd.merge(df, df4, how='left', left_on='id', right_on='c_id')
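Note that DataFrame.append was removed in pandas 2.0; on newer versions the equivalent of the last snippet would be:
df4 = pd.concat([df2, df3], ignore_index=True)
pd.merge(df, df4, how='left', left_on='id', right_on='c_id')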
I have a dataset of a series with gaps in its values that I want to replace by sequential indices. The second column contains the same numbers as the first column, but in a different order.
here's an example:
>>> df
ind u v d
0 5 7 151
1 7 20 151
2 8 40 151
3 20 5 151
This should turn into:
>>> df
ind u v d
0 1 2 151
1 2 4 151
2 3 5 151
3 4 1 151
I reindexed the values in column 'u' by creating a new column:
>>> df['new_index'] = range(1, len(df) + 1)
But how do I now replace the values of the second column with the corresponding indices?
Thanks for any advice!
You can use Series.rank, but you first need to create a Series with unstack, and at the end re-create the DataFrame with unstack again:
df[['u','v']] = df[['u','v']].unstack().rank(method='dense').astype(int).unstack(0)
print (df)
u v d
ind
0 1 2 151
1 2 4 151
2 3 5 151
3 4 1 151
If you use only DataFrame.rank, each column is ranked independently and the output in v is different:
df[['u','v']] = df[['u','v']].rank(method='dense').astype(int)
print (df)
u v d
ind
0 1 2 151
1 2 3 151
2 3 4 151
3 4 1 151
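An alternative without rank is to build an explicit value-to-position mapping over the union of both columns and use replace; a sketch, assuming numpy is imported as np:
# sorted unique values across both columns mapped to 1-based positions
values = np.unique(df[['u', 'v']].to_numpy())
mapping = {val: i for i, val in enumerate(values, start=1)}
df[['u', 'v']] = df[['u', 'v']].replace(mapping)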