I have two dataframes, each with 1000 rows. The dataframes contain the same rows, but not in the same order. The following examples can be taken as truncated versions of the dataframes.
df1:
col1 col2 col3
1 2 3
2 3 4
5 6 6
8 9 9
df2:
col1 col2 col3
5 6 6
8 9 9
1 2 3
2 3 4
The dataframes don't have meaningful indices, and I expect an empty result when I run a SQL MINUS-style query on them. I used the following, but did not obtain the result I expected. Is there any way to achieve my desired result?
df3 = df1.merge(df2.drop_duplicates(),how='right', indicator=True)
print(df3)
For instance, if I consider df1 as table1 and df2 as table2, and ran the following query in SQL Server, I would get an empty table back.
SELECT * FROM table1
EXCEPT
SELECT * FROM table2
Yes, you can use the indicator like this:
df1.merge(df2, how='left', indicator='ind').query('ind=="left_only"')
Where df1 is:
col1 col2 col3
0 1.0 2.0 3.0
1 2.0 3.0 4.0
2 5.0 6.0 6.0
3 8.0 9.0 9.0
4 10.0 10.0 10.0
and df2 is:
col1 col2 col3
0 5 6 6
1 8 9 9
2 1 2 3
3 2 3 4
Output:
col1 col2 col3 ind
4 10.0 10.0 10.0 left_only
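To reproduce the SQL EXCEPT result for the question's frames (which contain the same rows), a minimal sketch that also drops the helper indicator column afterwards; note that, unlike SQL EXCEPT, this does not deduplicate rows:
import pandas as pd

df1 = pd.DataFrame({'col1': [1, 2, 5, 8],
                    'col2': [2, 3, 6, 9],
                    'col3': [3, 4, 6, 9]})
df2 = pd.DataFrame({'col1': [5, 8, 1, 2],
                    'col2': [6, 9, 2, 3],
                    'col3': [6, 9, 3, 4]})

# rows of df1 that do not appear in df2, i.e. df1 EXCEPT df2
except_result = (df1.merge(df2, how='left', indicator='ind')
                    .query('ind == "left_only"')
                    .drop(columns='ind'))
print(except_result.empty)  # True: the frames hold the same rows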
One column of my dataset contains numpy arrays as elements. I want to split it into multiple columns, each holding one value of the array.
The data currently looks like this:
column1 column2 column3
0 1 np.array([1,2,3,4]) 4.5
1 2 np.array([5,6,7,8]) 3
I want to convert it into:
column1 col1 col2 col3 col4 column3
0 1 1 2 3 4 4.5
1 2 5 6 7 8 3
Another possible solution, based on pandas.DataFrame.from_records:
out = pd.DataFrame.from_records(
    df['col'], columns=[f'col{i+1}' for i in range(len(df.loc[0, 'col']))])
Output:
col1 col2 col3 col4
0 1 2 3 4
1 5 6 7 8
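If you also want to keep column1 and column3 around the expanded values, one possible sketch (assuming the array column is named 'column2', as in the question) concatenates the pieces back together:
import numpy as np
import pandas as pd

df = pd.DataFrame({'column1': [1, 2],
                   'column2': [np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8])],
                   'column3': [4.5, 3]})

# expand the arrays, then stitch the original columns back around them
expanded = pd.DataFrame.from_records(
    df['column2'],
    columns=[f'col{i+1}' for i in range(len(df.loc[0, 'column2']))])
out = pd.concat([df['column1'], expanded, df['column3']], axis=1)
print(out)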
As an alternative:
import numpy as np
import pandas as pd

df = pd.DataFrame(data={'col': [np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8])]})
new_df = pd.DataFrame(df.col.tolist(), index=df.index)  # expand the array column into a new dataframe, keeping the old index
new_df.columns = ["col_{}".format(i) for i in range(1, len(new_df.columns) + 1)]
'''
col_1 col_2 col_3 col_4
0 1 2 3 4
1 5 6 7 8
'''
I hope I've understood your question well. You can leverage result_type="expand" of the .apply method (this assumes df contains only the array column):
df = df.apply(
    lambda x: {f"col{k}": vv for v in x for k, vv in enumerate(v, 1)},
    result_type="expand",
    axis=1,
)
print(df)
Prints:
col1 col2 col3 col4
0 1 2 3 4
1 5 6 7 8
I have two tables which I am merging (left join) on a common column, but that column's values do not match exactly between the tables, and hence some of the merged values are blank. I want to fill the missing values with the closest ten. For example, I have these two dataframes:
d = {'col1': [1.31, 2.22, 3.33, 4.44, 5.55, 6.66],
     'col2': ['010100', '010101', '101011', '110000', '114000', '120000']}
df1 = pd.DataFrame(data=d)
d2 = {'col2': ['010100', '010102', '010144', '114218', '121212', '166110'],
      'col4': ['a', 'b', 'c', 'd', 'e', 'f']}
df2 = pd.DataFrame(data=d2)
# df1
col1 col2
0 1.31 010100
1 2.22 010101
2 3.33 101011
3 4.44 110000
4 5.55 114000
5 6.66 120000
# df2
col2 col4
0 010100 a
1 010102 b
2 010144 c
3 114218 d
4 121212 e
5 166110 f
After left merging on col2, I get:
df1.merge(df2,how='left',on='col2')
col1 col2 col4
0 1.31 010100 a
1 2.22 010101 NaN
2 3.33 101011 NaN
3 4.44 111100 NaN
4 5.55 114100 NaN
5 6.66 166100 NaN
What I want instead: for every row where col4 is NaN, round my col2 value to the closest ten and look it up again in col2 of df2; if there is a match, take that row's col4. If there is still no match, try the closest hundred, then the closest thousand, ten thousand, and so on.
Ideally my answer should be:
col1 col2 col4
0 1.31 010100 a
1 2.22 010101 a
2 3.33 101011 f
3 4.44 111100 d
4 5.55 114100 d
5 6.66 166100 f
Please help me code this.
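A minimal sketch of the progressive-rounding lookup described above, assuming "closest ten, then hundred, ..." means both codes agree once rounded to that power of ten (the progressive_match helper is hypothetical, not an existing pandas feature):
import pandas as pd

d = {'col1': [1.31, 2.22, 3.33, 4.44, 5.55, 6.66],
     'col2': ['010100', '010101', '101011', '110000', '114000', '120000']}
df1 = pd.DataFrame(data=d)
d2 = {'col2': ['010100', '010102', '010144', '114218', '121212', '166110'],
      'col4': ['a', 'b', 'c', 'd', 'e', 'f']}
df2 = pd.DataFrame(data=d2)

lookup = dict(zip(df2['col2'], df2['col4']))

def progressive_match(code):
    # exact match first, then retry at ever coarser rounding:
    # nearest ten, nearest hundred, nearest thousand, ...
    if code in lookup:
        return lookup[code]
    n = int(code)
    for power in range(1, len(code)):
        step = 10 ** power
        for k, v in lookup.items():
            if round(int(k) / step) == round(n / step):
                return v
    return None

df1['col4'] = df1['col2'].map(progressive_match)
print(df1)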
I have a data frame:
import numpy as np
import pandas as pd
np.random.seed(42)
df = pd.DataFrame(np.random.randint(0, 10, size=(5, 2)), columns=['col1', 'col2'])
Which generates the following frame:
col1 col2
0 6 3
1 7 4
2 6 9
3 2 6
4 7 4
I want to replace all values from row 2 forward with whatever value on row 1. So I type:
df.loc[2:] = df.loc[1:1]
But the resulting frame is filled with NaN:
col1 col2
0 6.0 3.0
1 7.0 4.0
2 NaN NaN
3 NaN NaN
4 NaN NaN
I know I can use fillna(method='ffill') to get what I want, but why did the broadcasting not work and produce NaN instead? Expected result:
col1 col2
0 6 3
1 7 4
2 7 4
3 7 4
4 7 4
Edit: pandas version 0.24.2
I believe the problem is that df.loc[1:1] is a one-row DataFrame, and assigning one DataFrame to another aligns on index labels, so rows 2-4 (which have no matching label) become NaN. Either assign per column as a scalar, e.g. df.loc[2:, 'col1'] = df.loc[1, 'col1'], or bypass alignment entirely with df.loc[2:] = df.loc[1].values.
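A runnable sketch of that fix against the question's frame:
import numpy as np
import pandas as pd

np.random.seed(42)
df = pd.DataFrame(np.random.randint(0, 10, size=(5, 2)), columns=['col1', 'col2'])

# assigning the raw numpy values bypasses index alignment,
# so row 1 is broadcast over rows 2-4 instead of producing NaN
df.loc[2:] = df.loc[1].values
print(df)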
I have a dataframe with 3 columns: Col1, Col2 and Col3.
Toy example
d = {'Col1': ['hello', 'k', 'hello', 'we', 'r'],
     'Col2': [10, 20, 30, 40, 50],
     'Col3': [1, 2, 3, 4, 5]}
df = pd.DataFrame(d)
Which gets:
Col1 Col2 Col3
0 hello 10 1
1 k 20 2
2 hello 30 3
3 we 40 4
4 r 50 5
I am selecting the values of Col2 such that the value in Col1 is 'hello'
my_values = df.loc[df['Col1']=='hello']['Col2']
This returns a Series with the values of Col2 as well as their index:
0 10
2 30
Name: Col2, dtype: int64
Now suppose I want to assign these values to Col3.
I only want to replace those values (at index 0 and 2), keeping the other values in Col3 unmodified.
I tried:
df['Col3'] = my_values
But this assigns NaN to the other values (the ones where Col1 is not 'hello'):
Col1 Col2 Col3
0 hello 10 10
1 k 20 NaN
2 hello 30 30
3 we 40 NaN
4 r 50 NaN
How can I update certain values in Col3 leaving the others untouched?
Col1 Col2 Col3
0 hello 10 10
1 k 20 2
2 hello 30 30
3 we 40 4
4 r 50 5
So, in short: having my_values, I want to put them into Col3 at the matching indices.
Or simply base it on np.where:
df['Col3'] = np.where(df['Col1'] == 'hello', df.Col2, df.Col3)
If you base it on your my_values:
df.loc[my_values.index, 'Col3'] = my_values
Or you can just use update:
df['Col3'].update(my_values)
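For illustration, the np.where variant run end to end on the toy frame:
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col1': ['hello', 'k', 'hello', 'we', 'r'],
                   'Col2': [10, 20, 30, 40, 50],
                   'Col3': [1, 2, 3, 4, 5]})

# take Col2 where Col1 == 'hello', otherwise keep Col3
df['Col3'] = np.where(df['Col1'] == 'hello', df.Col2, df.Col3)
print(df)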
I have a pandas dataframe (df) with columns ['key', 'col1', 'col2', 'col3'], and a pandas Series (sr) whose index matches 'key' in the dataframe. I want to append the series to the dataframe as a new column called col4, matched on 'key'. I have the following code:
for index, row in segmention.iterrows():
    df[df['key'] == row['key']]['col4'] = sr.loc[row['key']]
The code is very slow. I assume there is a more efficient and better way to do this. Could you please help?
You can simply do:
df['col4'] = sr
If I don't misunderstand your setup.
Use map, as mentioned by EdChum:
df['col4'] = df['key'].map(sr)
print (df)
col1 col2 col3 key col4
0 4 7 1 A 2
1 5 8 3 B 4
2 6 9 5 C 1
Or assign with set_index:
df = df.set_index('key')
df['col4'] = sr
print (df)
col1 col2 col3 col4
key
A 4 7 1 2
B 5 8 3 4
C 6 9 5 1
If you don't need to align the data in the Series by key, use the following (see the difference: 2,4,1 vs 4,1,2):
df['col4'] = sr.values
print (df)
col1 col2 col3 key col4
0 4 7 1 A 4
1 5 8 3 B 1
2 6 9 5 C 2
Sample:
df = pd.DataFrame({'key': list('ABC'),
                   'col1': [4, 5, 6],
                   'col2': [7, 8, 9],
                   'col3': [1, 3, 5]})
print (df)
col1 col2 col3 key
0 4 7 1 A
1 5 8 3 B
2 6 9 5 C
sr = pd.Series([4,1,2], index=list('BCA'))
print (sr)
B 4
C 1
A 2
dtype: int64
df['col4'] = df['key'].map(sr)
print (df)
col1 col2 col3 key col4
0 4 7 1 A 2
1 5 8 3 B 4
2 6 9 5 C 1
df = df.set_index('key')
df['col4'] = sr
print (df)
col1 col2 col3 col4
key
A 4 7 1 2
B 5 8 3 4
C 6 9 5 1
This is really a good use case for join, where the left dataframe aligns a column with the index of the right dataframe/series. You have to make sure your Series has a name for it to work:
sr.name = 'some name'
df.join(sr, on='key')
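A runnable sketch of the join approach, using rename to give the Series its name inline (the column name 'col4' follows the question):
import pandas as pd

df = pd.DataFrame({'key': list('ABC'),
                   'col1': [4, 5, 6],
                   'col2': [7, 8, 9],
                   'col3': [1, 3, 5]})
sr = pd.Series([4, 1, 2], index=list('BCA'))

# join aligns df['key'] against the index of the named Series
out = df.join(sr.rename('col4'), on='key')
print(out)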