Replace condition with mode in Pandas - pandas

I need pandas code for the following data. I need a condition for replacing values: if the product name is A, the price needs to be replaced with the mode of A's prices, in every row. At the end, the value for A should be 5 in every row.
Product Price
A 5
A 6
A 7
B 8
B 8
B 4
A 5
A 5
A 5
A NaN
c 4
D 3

You could create a dictionary whose keys are the values of the Product column and whose values are their respective mode prices, and then map it back onto your dataframe based on the Product column:
df.assign(Price=df['Product'].map(
    df.groupby(['Product'])['Price'].agg(pd.Series.mode).to_dict()))
prints:
Product Price
0 A 5
1 A 5
2 A 5
3 B 8
4 B 8
5 B 8
6 A 5
7 A 5
8 A 5
9 A 5
10 c 4
11 D 3
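As a side note, the same replacement can be done in one step with transform, which broadcasts each group's aggregate straight back onto its rows. A minimal sketch (iat[0] picks the first mode in case a group happens to be multimodal; the NaN is assumed to be a real missing value, which mode ignores):
import numpy as np
import pandas as pd

df = pd.DataFrame({'Product': list('AAABBBAAAA') + ['c', 'D'],
                   'Price': [5, 6, 7, 8, 8, 4, 5, 5, 5, np.nan, 4, 3]})

# transform broadcasts each group's mode back to that group's rows
df['Price'] = df.groupby('Product')['Price'].transform(lambda s: s.mode().iat[0])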

Related

Pandas Groupby Problems with Calculating Column-Wise Quantiles with "quantile"

I need to compute quantiles for a large DataFrame across columns, i.e. row-wise, or per "month" in my case. The quantile function applied to a plain DataFrame works using the keyword axis, but if you try to apply quantile via a groupby, it is rejected with an error:
TypeError: quantile() got an unexpected keyword argument 'axis'
Here is a case where quantile works, with data like this:
Num Num Num Quantile 0.5
5 6 4 5
4 1 2 2
3 9 7 7
7 2 8 7
5 5 4 5
But if I add more columns and use a groupby statement to find the same quantile(0.5, axis=1), I get the error shown above. Please help, and thank you. My actual data looks like this:
site month Num Num Num Quantile 0.5
0 A 8 5 6 4 5
1 A 9 4 1 2 2
2 A 10 3 9 7 7
3 A 11 7 2 8 7
4 A 12 5 5 4 5
5 B 8 3 7 5 5
6 B 9 6 9 0 6
7 B 10 4 1 3 3
8 B 11 8 3 0 3
9 B 12 5 6 8 6
The confusion arises from the fact that pd.DataFrame.quantile and DataFrameGroupBy.quantile are not the same functions. The first one has an axis parameter, the second one does not. Hence the error.
When you think about it, it is perfectly logical that the second function does not have this option. Suppose we do:
groups = df.groupby('site')
for group in groups:
    print(group[1])
site month Num Num.1 Num.2
0 A 8 5 6 4
1 A 9 4 1 2
2 A 10 3 9 7
3 A 11 7 2 8
4 A 12 5 5 4
site month Num Num.1 Num.2
5 B 8 3 7 5
6 B 9 6 9 0
7 B 10 4 1 3
8 B 11 8 3 0
9 B 12 5 6 8
Now ask yourself which axis could generate a quantile that is meaningfully related to A | B. The answer surely is column-wise: I could get a quantile of Num for A, or of Num.1. E.g.:
print(groups.quantile())
month Num Num.1 Num.2
site
A 10.0 5.0 5.0 4.0
B 10.0 5.0 6.0 3.0
It wouldn't make sense to say, let's get the quantile row-wise for A at row 0 (and pretend that this has anything to do with A as a grouped value as distinct from B). Indeed, you don't need a groupby for that at all.
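To illustrate, a row-wise median can be taken directly on the numeric columns, no groupby involved. A minimal sketch, using the deduplicated column names Num, Num.1, Num.2 (see the sidenote below):
import pandas as pd

df = pd.DataFrame({
    'site':  ['A'] * 5 + ['B'] * 5,
    'month': [8, 9, 10, 11, 12] * 2,
    'Num':   [5, 4, 3, 7, 5, 3, 6, 4, 8, 5],
    'Num.1': [6, 1, 9, 2, 5, 7, 9, 1, 3, 6],
    'Num.2': [4, 2, 7, 8, 4, 5, 0, 3, 0, 8],
})

# Each row's quantile depends only on that row's values, so axis=1 on the
# plain DataFrame is all that is needed.
df['Quantile 0.5'] = df[['Num', 'Num.1', 'Num.2']].quantile(0.5, axis=1)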
Sidenote: you will have noticed that your columns Num, Num, Num have turned into Num, Num.1, Num.2 in my examples. This conversion takes place automatically when you read from the clipboard (pd.read_clipboard). In general, having multiple columns with duplicate names is very bad practice and might get you into all sorts of problems with various operators. So, I strongly advise you to rename them.

Using groupby() and cut() in pandas

I have a dataframe, and for each group I want to label its values. If a value is less than the group mean then the label is 1, and if it is more than the group mean then the label is 2.
The input data frame is:
groups num1
0 a 2
1 a 5
2 a NaN
3 b 10
4 b 4
5 b 0
6 b 7
7 c 2
8 c 4
9 c 1
Here the mean values for groups a, b, c are 3.5, 5.25 and 2.33 respectively, and the output data frame is:
groups out
0 a 1
1 a 2
2 a NaN
3 b 2
4 b 1
5 b 1
6 b 2
7 c 1
8 c 2
9 c 1
I want to use pandas.cut, and maybe also pandas.groupby and pandas.apply.
Also, how can I skip null values here?
Thanks in advance
cut is not really pertinent here. Use groupby.transform('mean') and numpy.where:
import numpy as np

df['out'] = np.where(df['num1'].lt(df.groupby('groups')['num1']
                                     .transform('mean')),
                     1, 2)
Output (as new column "out" for clarity):
groups num1 out
0 a 2 1
1 a 5 2
2 a NaN 2
3 b 10 2
4 b 4 1
5 b 0 1
6 b 7 2
7 c 2 1
8 c 4 2
9 c 1 1
I really want cut
OK, but it's neither nice nor performant:
(df.groupby('groups')['num1']
   .transform(lambda g: pd.cut(g, [-np.inf, g.mean(), np.inf], labels=[1, 2]))
)
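Note that np.where labels the NaN row 2, because a NaN comparison evaluates to False. To actually skip null values as asked, a minimal sketch (reusing the transform from above) is to mask the result wherever num1 is missing:
import numpy as np
import pandas as pd

df = pd.DataFrame({'groups': list('aaabbbbccc'),
                   'num1': [2, 5, np.nan, 10, 4, 0, 7, 2, 4, 1]})

mean = df.groupby('groups')['num1'].transform('mean')
df['out'] = np.where(df['num1'] < mean, 1, 2)
# Restore NaN in 'out' wherever num1 itself was missing.
df['out'] = df['out'].where(df['num1'].notna())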

Stack multiple columns into single column while maintaining other columns in Pandas?

Given a pandas dataframe with multiple columns as below
cl_a cl_b cl_c cl_d cl_e
0 1 a 5 6 20
1 2 b 4 7 21
2 3 c 3 8 22
3 4 d 2 9 23
4 5 e 1 10 24
I would like to stack the columns cl_c, cl_d, cl_e into a single column named ax, while the columns cl_a and cl_b are maintained.
cl_a,cl_b,ax,from_col
1,a,5,cl_c
2,b,4,cl_c
3,c,3,cl_c
4,d,2,cl_c
5,e,1,cl_c
1,a,6,cl_d
2,b,7,cl_d
3,c,8,cl_d
4,d,9,cl_d
5,e,10,cl_d
1,a,20,cl_e
2,b,21,cl_e
3,c,22,cl_e
4,d,23,cl_e
5,e,24,cl_e
So far, the following code does the job
df = pd.DataFrame({'cl_a': [1, 2, 3, 4, 5], 'cl_b': ['a', 'b', 'c', 'd', 'e'],
                   'cl_c': [5, 4, 3, 2, 1], 'cl_d': [6, 7, 8, 9, 10],
                   'cl_e': [20, 21, 22, 23, 24]})
df_new = pd.DataFrame()
for col_name in ['cl_c', 'cl_d', 'cl_e']:
    df_new = df_new.append(df[['cl_a', 'cl_b', col_name]].rename(columns={col_name: "ax"}))
However, I am curious whether there is a built-in Pandas approach that can do the trick.
Edit:
Following Quong's answer, I realise I also need to include another column (i.e., from_col) beside ax. The from_col column indicates which original column each ax value came from.
Yes, it's called melt:
df.melt(['cl_a','cl_b'], value_name='ax').drop(columns='variable')
Output:
cl_a cl_b ax
0 1 a 5
1 2 b 4
2 3 c 3
3 4 d 2
4 5 e 1
5 1 a 6
6 2 b 7
7 3 c 8
8 4 d 9
9 5 e 10
10 1 a 20
11 2 b 21
12 3 c 22
13 4 d 23
14 5 e 24
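To also keep the originating column name (the from_col asked for in the edit), melt can name the variable column instead of dropping it:
out = df.melt(['cl_a', 'cl_b'], var_name='from_col', value_name='ax')
# Reorder so that ax precedes from_col, matching the desired output.
out = out[['cl_a', 'cl_b', 'ax', 'from_col']]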
Or equivalently set_index().stack():
(df.set_index(['cl_a','cl_b']).stack()
   .reset_index(level=-1, drop=True)
   .reset_index(name='ax')
)
with a slightly different output:
cl_a cl_b ax
0 1 a 5
1 1 a 6
2 1 a 20
3 2 b 4
4 2 b 7
5 2 b 21
6 3 c 3
7 3 c 8
8 3 c 22
9 4 d 2
10 4 d 9
11 4 d 23
12 5 e 1
13 5 e 10
14 5 e 24

If a column value does not have a certain number of occurrences in a dataframe, how to duplicate rows at random until that count is met?

Say that this is what my dataframe looks like
A B
0 1 5
1 4 2
2 3 5
3 3 3
4 3 2
5 2 0
6 4 5
7 2 3
8 4 1
9 5 1
I want every unique value in column B to occur at least 3 times, so none of the rows with a B value of 5 are duplicated, the row with a B value of 0 is duplicated twice, and the rest have one of their two rows duplicated at random.
Here is an example desired output
A B
0 1 5
1 4 2
2 3 5
3 3 3
4 3 2
5 2 0
6 4 5
7 2 3
8 4 1
9 5 1
10 4 2
11 2 3
12 2 0
13 2 0
14 4 1
Edit:
The row chosen to be duplicated should be selected at random
To pick rows at random, I would use groupby.apply with sample on each group. The x of the lambda is each group of B, so I use repeats - x.shape[0] to find the number of rows that need to be created. Some B groups may already have 3 or more rows, so I use np.clip to force negative values to 0; sampling 0 rows is the same as ignoring the group. Finally, reset_index and append back to df:
import numpy as np
import pandas as pd

repeats = 3
df1 = (df.groupby('B')
         .apply(lambda x: x.sample(n=np.clip(repeats - x.shape[0], 0, np.inf).astype(int),
                                   replace=True))
         .reset_index(drop=True))
# DataFrame.append was removed in pandas 2.0; pd.concat does the same job.
df_final = pd.concat([df, df1]).reset_index(drop=True)
Out[43]:
A B
0 1 5
1 4 2
2 3 5
3 3 3
4 3 2
5 2 0
6 4 5
7 2 3
8 4 1
9 5 1
10 2 0
11 2 0
12 5 1
13 4 2
14 2 3

how to append column data from different csv files in one folder

I have several csv files in one folder, from each of which I need to take particular columns and save them into one file. Let me give an example below.
1.csv      2.csv      3.csv      n.csv   (and so on, I have several csv files)
a b c d    a b c d    a b c d    a b c d
1 2 3 4    8 3 5 7    2 9 4 6    3 6 8 3
4 2 8 3    6 3 6 7    9 3 4 5    3 6 6 8
3 9 4 8    9 3 4 2    4 7 4 4    1 8 3 5
I want to append only columns a and b, like the example below:
x.csv
a b    a b    a b    a b
1 2    8 3    2 9    3 6
4 2    6 3    9 3    3 6
3 9    9 3    4 7    1 8
Can someone help me with how to append these columns?
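A minimal sketch of one way to do this, assuming the files really are named 1.csv through n.csv in a single folder and all share the a/b/c/d header (the folder path is a placeholder):
import glob
import pandas as pd

# Collect the csv files in the folder (adjust the path/pattern as needed;
# sorted() is lexicographic, so zero-pad names if order matters past 9.csv).
files = sorted(glob.glob('folder/*.csv'))

# Keep only columns a and b from each file and place them side by side.
parts = [pd.read_csv(f)[['a', 'b']] for f in files]
x = pd.concat(parts, axis=1)

x.to_csv('x.csv', index=False)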