change all values in a dataframe with other values from another dataframe - pandas

I just started learning pandas.
I have 2 dataframes.
The first one is
val num
0 1 0
1 2 1
2 3 2
3 4 3
4 5 4
and the second one is
0 1 2 3
0 1 2 3 4
1 5 3 2 2
2 2 5 3 2
I want to change my second dataframe so that its values are compared against the val column of the first dataframe, and every matching value is replaced by the corresponding value from the num column of dataframe 1. Which means that in the end I need to get the following dataframe:
0 1 2 3
0 0 1 2 3
1 4 2 1 1
2 1 4 2 1
How do I do that in pandas?

You can use DataFrame.replace() to do this:
df2.replace(df1.set_index('val')['num'])
Explanation:
The first step is to set the val column of the first DataFrame as the index. This will change how the matching is performed in the third step.
Selecting the num column then converts the first DataFrame into a Series indexed by val. It looks like this:
val
1 0
2 1
3 2
4 3
5 4
Name: num, dtype: int64
Next, use DataFrame.replace() to do the replacement in the second DataFrame. It looks up each value from the second DataFrame, finds a matching index in the Series, and replaces it with the value from the Series.
Full reproducible example:
import pandas as pd
import io
s = """ val num
0 1 0
1 2 1
2 3 2
3 4 3
4 5 4"""
df1 = pd.read_csv(io.StringIO(s), sep=r'\s+')
s = """ 0 1 2 3
0 1 2 3 4
1 5 3 2 2
2 2 5 3 2"""
df2 = pd.read_csv(io.StringIO(s), sep=r'\s+')
print(df2.replace(df1.set_index('val')['num']))
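One detail worth knowing about DataFrame.replace(): values that have no entry in the mapping are simply left untouched, so the call is safe even if df2 contains numbers outside df1['val']. A minimal sketch:
import pandas as pd
df1 = pd.DataFrame({'val': [1, 2], 'num': [0, 1]})
df2 = pd.DataFrame({'a': [1, 2, 99]})  # 99 has no entry in the mapping
print(df2.replace(df1.set_index('val')['num']))
#     a
# 0   0
# 1   1
# 2  99  <- unmapped value is left unchanged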

Create the mapping dict, then replace:
mpd = dict(zip(df1.val, df1.num))
df2.replace(mpd, inplace=True)
0 1 2 3
0 0 1 2 3
1 4 2 1 1
2 1 4 2 1
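If every value in df2 appears in the mapping, Series.map is usually faster than replace on large frames; a sketch, assuming full coverage (unlike replace, map turns unmapped values into NaN rather than leaving them alone):
mpd = dict(zip(df1.val, df1.num))
df2 = df2.apply(lambda col: col.map(mpd))  # any value missing from mpd becomes NaN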

Related

Allotting unique identifier to a group of groups in pandas dataframe

Given a frame like this
import pandas as pd
df = pd.DataFrame({'A':[1,2,3,4,6,3,7,3,2,11,13,10,1,5],'B':[1,1,1,2,2,2,2,3,3,3,3,3,4,4],
'C':[1,1,1,1,1,1,1,2,2,2,2,2,3,3]})
I want to allot a unique identifier to every two consecutive groups in column B, going from the top: rows with B in {1, 2} get identifier 1, rows with B in {3, 4} get identifier 2, and so on. The end result would look like the new column in the answer below.
Currently I am doing it like below, but it seems to be overkill. It's taking too much time to update even 70,000 rows:
b_unique_cnt = df['B'].nunique()
the_list = list(range(1, b_unique_cnt + 1))
slice_size = 2
list_of_slices = zip(*(iter(the_list),) * slice_size)
counter = 1
df['D'] = -1
for i in list_of_slices:
    df.loc[df['B'].isin(i), 'D'] = counter
    counter = counter + 1
df.head(15)
You could do
df['new'] = df.B.factorize()[0] // 2 + 1
# alternatively: (df.groupby(['B'], sort=False).ngroup() // 2).add(1)
df
Out[153]:
A B C new
0 1 1 1 1
1 2 1 1 1
2 3 1 1 1
3 4 2 1 1
4 6 2 1 1
5 3 2 1 1
6 7 2 1 1
7 3 3 2 2
8 2 3 2 2
9 11 3 2 2
10 13 3 2 2
11 10 3 2 2
12 1 4 3 2
13 5 4 3 2
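To unpack the one-liner: factorize assigns each distinct value of B a 0-based integer code in order of first appearance, so integer division by 2 collapses every pair of consecutive groups into one identifier. A sketch of the intermediates:
codes, uniques = pd.factorize(df['B'])
print(codes)           # [0 0 0 1 1 1 1 2 2 2 2 2 3 3]
print(codes // 2 + 1)  # [1 1 1 1 1 1 1 2 2 2 2 2 2 2]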

Conditional count of cumulative sum Dataframe - Loop through columns

I'm trying to compute a cumulative sum with a reset within a dataframe, based on the sign of each value. The idea is to do the same exercise for each column separately.
For example, let's assume I have the following dataframe:
df = pd.DataFrame({'A': [1,1,1,-1,-1,1,1,1,1,-1,-1,-1],'B':[1,1,-1,-1,-1,1,1,1,-1,-1,-1,1]},index=[0, 1, 2, 3,4,5,6,7,8,9,10,11])
For each column, I want to compute the cumulative sum until I find a change in sign; in which case, the sum should be reset to 1. For the example above, I am expecting the following result:
df1 = pd.DataFrame({'A_cumcount': [1,2,3,1,2,1,2,3,4,1,2,3], 'B_cumcount': [1,2,1,2,3,1,2,3,1,2,3,4]}, index=[0,1,2,3,4,5,6,7,8,9,10,11])
A similar issue has been discussed here: Pandas: conditional rolling count
I have tried the following code:
import numpy as np

nb_col = len(df.columns)  # number of columns in the dataframe
for i in range(0, int(nb_col)):  # loop through the columns of the dataframe
    name = df.columns[i]  # read the column name
    name = name + '_cumcount'
    # add a column for the calculation
    df = df.reindex(columns=np.append(df.columns.values, [name]))
    df[df.columns[nb_col + i]] = df.groupby((df[df.columns[i]] != df[df.columns[i]].shift(1)).cumsum()).cumcount() + 1
My question is: is there a way to avoid this for loop, so that I can avoid appending a new column each time and make the computation faster? Thank you.
Answers received (all working fine):
From #nixon
df.apply(lambda x: x.groupby(x.diff().ne(0).cumsum()).cumcount()+1).add_suffix('_cumcount')
From #jezrael
df1 = (df.apply(lambda x: x.groupby((x != x.shift()).cumsum()).cumcount() + 1).add_suffix('_cumcount'))
From #Scott Boston:
df.apply(lambda x: x.groupby(x.diff().bfill().ne(0).cumsum()).cumcount() + 1)
I think in pandas a loop is needed here, e.g. via apply:
df1 = (df.apply(lambda x: x.groupby((x != x.shift()).cumsum()).cumcount() + 1)
.add_suffix('_cumcount'))
print (df1)
A_cumcount B_cumcount
0 1 1
1 2 2
2 3 1
3 1 2
4 2 3
5 1 1
6 2 2
7 3 3
8 4 1
9 1 2
10 2 3
11 3 1
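The key trick is the grouper (x != x.shift()).cumsum(): comparing each value with its predecessor flags the start of every run, and the cumulative sum turns those flags into run ids. For column A the intermediate looks like this:
a = df['A']
print((a != a.shift()).cumsum().tolist())
# [1, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4]  <- one id per run of equal values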
You can try this:
df.apply(lambda x: x.groupby(x.diff().bfill().ne(0).cumsum()).cumcount() + 1)
Output:
A B
0 1 1
1 2 2
2 3 1
3 1 2
4 2 3
5 1 1
6 2 2
7 3 3
8 4 1
9 1 2
10 2 3
11 3 1
You can start by grouping on the positions where the sequence changes, via x.diff().ne(0).cumsum(), and then using cumcount over the groups:
df.apply(lambda x: x.groupby(x.diff().ne(0).cumsum())
                    .cumcount() + 1).add_suffix('_cumcount')
A_cumcount B_cumcount
0 1 1
1 2 2
2 3 1
3 1 2
4 2 3
5 1 1
6 2 2
7 3 3
8 4 1
9 1 2
10 2 3
11 3 1

Pandas change each group into a single row

I have a dataframe like the follows.
>>> data
target user data
0 A 1 0
1 A 1 0
2 A 1 1
3 A 2 0
4 A 2 1
5 B 1 1
6 B 1 1
7 B 1 0
8 B 2 0
9 B 2 0
10 B 2 1
You can see that each user may contribute multiple claims about a target. I want to only store each user's most frequent data for each target. For example, for the dataframe shown above, I want the result like follows.
>>> result
target user data
0 A 1 0
1 A 2 0
2 B 1 1
3 B 2 0
How to do this? And, can I do this using groupby? (my real dataframe is not sorted)
Thanks!
Use groupby with transform('count') to create the helper key, then use idxmax:
df['helperkey']=df.groupby(['target','user','data']).data.transform('count')
df.groupby(['target','user']).helperkey.idxmax()
Out[10]:
target user
A 1 0
2 3
B 1 5
2 8
Name: helperkey, dtype: int64
df.loc[df.groupby(['target','user']).helperkey.idxmax()]
Out[11]:
target user data helperkey
0 A 1 0 2
3 A 2 0 1
5 B 1 1 2
8 B 2 0 2
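If you don't need the helper column, a more compact (though often slower) alternative is to take the per-group mode directly; a sketch, noting that mode()[0] picks the smaller value on ties:
result = (df.groupby(['target', 'user'])['data']
            .agg(lambda s: s.mode()[0])  # most frequent data per (target, user)
            .reset_index())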

Create a column of counts in a pandas dataframe

I want to create a column of counts in a pandas dataframe. Here is the input:
data = {'id': [1, 2, 3, 4, 5, 6], 'cat': ['A', 'A', 'A', 'A', 'A', 'B'], 'status': [1, 1, 1, 1, 2, 1]}
df = pd.DataFrame(data)
id cat status
0 1 A 1
1 2 A 1
2 3 A 1
3 4 A 1
4 5 A 2
5 6 B 1
Preferred output:
id cat status status_1_for_cat_count status_2_for_category_count
0 1 A 1 4 1
1 2 A 1 4 1
2 3 A 1 4 1
3 4 A 1 4 1
4 5 A 2 4 1
5 6 B 1 1 0
As can hopefully be seen, I'm trying to add, to every row, the count of each status within that row's category (one new column per status). I have tried several approaches, mostly groupby combined with value_counts, transform, apply, filter, and merges, but have not been able to get this to work. I am able to do this easily for a single column (I want to create a column of value_counts in my pandas dataframe), but not for the two statuses combined with the category.
Another option: use pd.crosstab to build a two-way frequency table with cat as the index, then join it back to the original data frame on the cat column:
df.join(pd.crosstab(df.cat, 'status_' + df.status.astype(str)), on='cat')
# cat id status status_1 status_2
#0 A 1 1 4 1
#1 A 2 1 4 1
#2 A 3 1 4 1
#3 A 4 1 4 1
#4 A 5 2 4 1
#5 B 6 1 1 0
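To see what the intermediate two-way table looks like before the join, a quick sketch:
print(pd.crosstab(df.cat, 'status_' + df.status.astype(str)))
#      status_1  status_2
# cat
# A           4         1
# B           1         0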
You can use get_dummies first, then a groupby transform, i.e.:
one = pd.get_dummies(df.set_index(['id','cat']).astype(str))
two = one.groupby(['cat']).transform('sum').reset_index()
id cat status_1 status_2
0 1 A 4 1
1 2 A 4 1
2 3 A 4 1
3 4 A 4 1
4 5 A 4 1
5 6 B 1 0
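To reach the exact preferred output you would still merge two back onto the original frame; one way, sketched:
result = df.merge(two, on=['id', 'cat'])  # adds the status_1/status_2 counts to every row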

Need to loop over pandas series to find indices of variable

I have a dataframe and a list. I would like to iterate over the elements in the list, find each one's location in the dataframe, and store those locations in a new dataframe.
my_list = ['1','2','3','4','5']
df1 = pd.DataFrame(my_list, columns=['Num'])
dataframe : df1
Num
0 1
1 2
2 3
3 4
4 5
dataframe : df2
0 1 2 3 4
0 9 12 8 6 7
1 11 1 4 10 13
2 5 14 2 0 3
I've tried something similar to this, but it doesn't work:
for x in my_list:
    i, j = np.array(np.where(df == x)).tolist()
    df2['X'] = df.append(i)
    df2['Y'] = df.append(j)
so looking for a result like this
dataframe : df1 updated
Num X Y
0 1 1 1
1 2 2 2
2 3 2 4
3 4 1 2
4 5 2 0
any hints or ideas would be appreciated
Instead of trying to find each value in df2, why not just make df2 a flat, long-form dataframe? Keeping the original row index around gives you the X coordinate directly:
df2 = df2.reset_index().melt(id_vars='index')
df2.columns = ['X', 'Y', 'Num']
so now your df2 just looks like this (first 6 rows shown):
   X  Y  Num
0  0  0    9
1  1  0   11
2  2  0    5
3  0  1   12
4  1  1    1
5  2  1   14
You can of course sort by Num, and if you just want the values from your list you can further filter df2. Note that my_list holds strings while df2 holds integers, so compare on a common dtype:
df2 = df2[df2.Num.astype(str).isin(my_list)]
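From there, one way to rebuild the updated df1 from the question is a merge on Num once the dtypes agree; a quick sketch:
df2['Num'] = df2['Num'].astype(str)  # df1['Num'] holds strings
result = df1.merge(df2[['Num', 'X', 'Y']], on='Num')
print(result)
#   Num  X  Y
# 0   1  1  1
# 1   2  2  2
# 2   3  2  4
# 3   4  1  2
# 4   5  2  0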