Selecting data from a dataframe based on a tuple - pandas

Suppose I have the following dataframe
df = DataFrame({'vals': [1, 2, 3, 4],
                'ids': ['a', 'b', 'a', 'n']})
I want to select all the rows whose (vals, ids) pair is in the list
[(1, 'a'), (3, 'f')]
I have tried using boolean indexing like so
to_search = {'vals': [1, 3],
             'ids': ['a', 'f']}
df.isin(to_search)
I expect only the first row to match but I get the first and the third row
ids vals
0 True True
1 True False
2 True True
3 False False
Is there any way to match exactly the values at a particular index instead of matching any value?

You might create a DataFrame for what you want to match, and then merge it:
In [32]: df2 = DataFrame([[1,'a'],[3,'f']], columns=['vals', 'ids'])
In [33]: df.merge(df2)
Out[33]:
ids vals
0 a 1
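If you'd rather not build a second DataFrame, another sketch (assuming the same df and pairs as above) is to turn the two columns into a MultiIndex and use its isin, which matches whole tuples rather than each column independently:
mask = df.set_index(['vals', 'ids']).index.isin([(1, 'a'), (3, 'f')])
df[mask]
#   ids  vals
# 0   a     1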

Related

How to show rows with data which are not equal?

I have two tables
import pandas as pd
import numpy as np
df2 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
                   columns=['a', 'b', 'c'])
df1 = pd.DataFrame(np.array([[1, 2, 4], [4, 5, 6], [7, 8, 9]]),
                   columns=['a', 'b', 'c'])
print(df1.equals(df2))
I want to compare them. I want the same result as I would get from the function df.compare(df1), or at least something close to it. I can't use that function because my compiler states that 'DataFrame' object has no attribute 'compare'.
First approach:
Let's compare value by value:
In [1183]: eq_df = df1.eq(df2)
In [1196]: eq_df
Out[1200]:
a b c
0 True True False
1 True True True
2 True True True
Then let's reduce it to see which rows are equal across all columns:
from functools import reduce
In [1285]: eq_ser = reduce(np.logical_and, (eq_df[c] for c in eq_df.columns))
In [1288]: eq_ser
Out[1293]:
0 False
1 True
2 True
dtype: bool
Now we can print out the rows which are not equal
In [1310]: df1[~eq_ser]
Out[1315]:
a b c
0 1 2 4
In [1316]: df2[~eq_ser]
Out[1316]:
a b c
0 1 2 3
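(For the reduction step, a possibly simpler equivalent is the built-in row-wise reduction: eq_ser = eq_df.all(axis=1) should, if I'm not mistaken, give the same boolean Series.)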
Second approach:
from collections import namedtuple
from typing import Tuple

def diff_dataframes(
    df1, df2, compare_cols=None
) -> Tuple[pd.DataFrame, pd.DataFrame, pd.DataFrame]:
    """
    Given two dataframes and column(s) to compare, return three dataframes with rows:
    - common between the two dataframes
    - found only in the left dataframe
    - found only in the right dataframe
    """
    df1 = df1.fillna(pd.NA)
    df = df1.merge(df2.fillna(pd.NA), how="outer", on=compare_cols, indicator=True)
    df_both = df.loc[df["_merge"] == "both"].drop(columns="_merge")
    df_left = df.loc[df["_merge"] == "left_only"].drop(columns="_merge")
    df_right = df.loc[df["_merge"] == "right_only"].drop(columns="_merge")
    tup = namedtuple("df_diff", ["common", "left", "right"])
    return tup(df_both, df_left, df_right)
Usage:
In [1366]: b, l, r = diff_dataframes(df1, df2)
In [1371]: l
Out[1371]:
a b c
0 1 2 4
In [1372]: r
Out[1372]:
a b c
3 1 2 3
Third approach:
In [1440]: eq_ser = df1.eq(df2).sum(axis=1).eq(len(df1.columns))
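This eq_ser can be used exactly like the one above, e.g. df1[~eq_ser] and df2[~eq_ser]. (Also worth noting: DataFrame.compare was added in pandas 1.1, so on a newer pandas you could simply call df1.compare(df2).)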

I want the customers in the data frame that have more false values than true values. Any suggestion on how to achieve that?

The data frame:
df = pd.DataFrame({'A': ['cust1', 'cust1', 'cust2', 'cust1',
                         'cust2', 'cust1', 'cust2', 'cust2', 'cust2', 'cust1'],
                   'B': ['true', 'true', 'true', 'false',
                         'false', 'false', 'false', 'true', 'false', 'true']})
Output: ['cust2']
First get the counts with crosstab, then filter the index values with boolean indexing on the columns; Series.gt is used for the greater-than comparison:
df1 = pd.crosstab(df['A'], df['B'])
print (df1)
B false true
A
cust1 2 3
cust2 3 2
c = df1.index[df1['false'].gt(df1['true'])].tolist()
#if True, False are boolean
#c = df1.index[df1[False].gt(df1[True])].tolist()
print (c)
['cust2']
df[df['B']=='false'].groupby(['A']).count().sort_values(by='B', ascending=False).index[0]
Explanation: take all rows where 'B' is 'false', group by 'A' and count. Now sort the counts in descending order and get the first index ('A') value.
It seems like a case of multi-indexing, so you can use the index to isolate the greater value:
result = list(dataframe.index[dataframe['false'].gt(dataframe['true'])])
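Another sketch, assuming 'true'/'false' are plain strings as in the sample: count the values per customer with groupby/value_counts and compare the two counts directly:
counts = df.groupby('A')['B'].value_counts().unstack(fill_value=0)
result = counts.index[counts['false'] > counts['true']].tolist()
print(result)
# ['cust2']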

Pandas apply function on multiple columns

I am trying to apply a function to every column in a dataframe. When I do it on a single fixed column name it works, but when I try passing the column name as an argument to the function I get an error.
How do you properly pass arguments to apply a function on a data frame?
def result(row, c):
    if row[c] >= 0 and row[c] <= 1:
        return 'c'
    elif row[c] > 1 and row[c] <= 2:
        return 'b'
    else:
        return 'a'

cols = list(df.columns.values)
for c in cols:
    df[c] = df.apply(result, args = (c), axis=1)
TypeError: ('result() takes exactly 2 arguments (21 given)', u'occurred at index 0')
Input data frame format:
d = {'c1': [1, 2, 1, 0], 'c2': [3, 0, 1, 2]}
df = pd.DataFrame(data=d)
df
c1 c2
0 1 3
1 2 0
2 1 1
3 0 2
You don't need to pass the column name to apply. Since you only want to check whether the values fall into certain ranges and return 'a', 'b' or 'c', you can apply the function element-wise to each column:
def result(val):
    if 0 <= val <= 1:
        return 'c'
    elif 1 < val <= 2:
        return 'b'
    return 'a'

cols = list(df.columns.values)
for c in cols:
    df[c] = df[c].apply(result)
Note that this will replace your column values.
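If you'd rather keep the original values, a small variation (the '_label' suffix here is just illustrative) is to write the results into new columns instead:
for c in cols:
    df[c + '_label'] = df[c].apply(result)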
A faster way is np.select:
import numpy as np

values = ['c', 'b']
for col in df.columns:
    conditions = [(df[col] >= 0) & (df[col] <= 1),
                  (df[col] > 1) & (df[col] <= 2)]
    df[col] = np.select(conditions, values, default='a')
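As a quick sanity check, either version applied to the sample frame above should produce (given the ranges defined in result):
  c1 c2
0  c  a
1  b  c
2  c  c
3  c  b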

How can I select rows from one DataFrame, where a part of the row's index is in another DataFrame's index and meets certain criteria?

I have two DataFrames. df provides a lot of data. test_df describes whether certain tests have passed or not. I need to select from df only the rows where the tests have not failed by looking up this info in test_df. So far, I'm able to reduce my test_df to passed_tests. So, what's left is to select only the rows from df where the relevant part of the row index is in passed_tests. How can I do that?
Updates:
test_df doesn't have unique rows. Where there are duplicate rows (and there may be more than one duplicate), the test that was the most positive takes priority, i.e. True > Ok > False.
My code:
import pandas as pd
import numpy as np

index = [np.array(['foo', 'foo', 'foo', 'foo', 'qux', 'qux', 'qux']),
         np.array(['a', 'a', 'b', 'b', 'a', 'b', 'b'])]
data = np.array(['False', 'True', 'False', 'False', 'False', 'Ok', 'False'])
columns = ["Passed?"]
test_df = pd.DataFrame(data, index=index, columns=columns)
print(test_df)

index = [np.array(['foo', 'foo', 'foo', 'foo', 'qux', 'qux', 'qux', 'qux']),
         np.array(['a', 'a', 'b', 'b', 'a', 'a', 'b', 'b']),
         np.array(['1', '2', '1', '2', '1', '2', '1', '2'])]
data = np.random.randn(8, 2)
columns = ["X", "Y"]
df = pd.DataFrame(data, index=index, columns=columns)
print(df)

passed_tests = test_df.loc[test_df['Passed?'].isin(['True', 'Ok'])]
print(passed_tests)
df
                X         Y
foo a 1  0.589776 -0.234717
      2  0.105161  1.937174
    b 1 -0.092252  0.143451
      2  0.939052 -0.239052
qux a 1  0.757239  2.836032
      2 -0.445335  1.352374
    b 1  2.175553 -0.700816
      2  1.082709 -0.923095

test_df
      Passed?
foo a   False
    a    True
    b   False
    b   False
qux a   False
    b      Ok
    b   False

passed_tests
      Passed?
foo a    True
qux b      Ok

required solution
                X         Y
foo a 1  0.589776 -0.234717
      2  0.105161  1.937174
qux b 1  2.175553 -0.700816
      2  1.082709 -0.923095
You need reindex with method='ffill', then check the values with isin and finally use boolean indexing:
print (test_df.reindex(df.index, method='ffill'))
        Passed?
foo a 1    True
      2    True
    b 1   False
      2   False
qux a 1   False
      2   False
    b 1      Ok
      2      Ok
mask = test_df.reindex(df.index, method='ffill').isin(['True', 'Ok'])['Passed?']
print (mask)
foo  a  1     True
        2     True
     b  1    False
        2    False
qux  a  1    False
        2    False
     b  1     True
        2     True
Name: Passed?, dtype: bool
print (df[mask])
                X         Y
foo a 1 -0.580448 -0.168951
      2 -0.875165  1.304745
qux b 1 -0.147014 -0.787483
      2  0.188989 -1.159533
EDIT:
To remove the duplicates, here is an easier approach:
- get the columns out of the MultiIndex with reset_index
- sort_values - the Passed? column descending, the first and second columns ascending
- drop_duplicates - keep only the first value per pair
- set_index to build the MultiIndex back
- rename_axis to remove the index names
test_df = (test_df.reset_index()
           .sort_values(['level_0', 'level_1', 'Passed?'], ascending=[1, 1, 0])
           .drop_duplicates(['level_0', 'level_1'])
           .set_index(['level_0', 'level_1'])
           .rename_axis([None, None]))
print (test_df)
      Passed?
foo a    True
    b   False
qux a   False
    b      Ok
Another solution is simpler - sort first and then use groupby with first:
test_df = (test_df.sort_values('Passed?', ascending=False)
           .groupby(level=[0, 1])
           .first())
print (test_df)
      Passed?
foo a    True
    b   False
qux a   False
    b      Ok
EDIT1:
Convert values to ordered Categorical.
index = [np.array(['foo', 'foo', 'foo', 'foo', 'qux', 'qux', 'qux']), np.array(['a', 'a', 'b', 'b', 'a', 'b', 'b'])]
data = np.array(['False', 'True', 'False', 'False', 'False', 'Acceptable', 'False'])
columns = ["Passed?"]
test_df = pd.DataFrame(data, index=index, columns=columns)
#print (test_df)
cat = ['False', 'Acceptable', 'True']
test_df["Passed?"] = test_df["Passed?"].astype(pd.CategoricalDtype(categories=cat, ordered=True))
print (test_df["Passed?"])
foo  a         False
     a          True
     b         False
     b         False
qux  a         False
     b    Acceptable
     b         False
Name: Passed?, dtype: category
Categories (3, object): [False < Acceptable < True]
test_df = test_df.sort_values('Passed?', ascending=False).groupby(level=[0,1]).first()
print (test_df)
          Passed?
foo a        True
    b       False
qux a       False
    b  Acceptable
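Since Passed? is now an ordered categorical, the sort_values + first step could (I believe) also be expressed as a per-group maximum on the original test_df, which picks the most positive result directly:
test_df.groupby(level=[0, 1])['Passed?'].max()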

Aggregate/Remove duplicate rows in DataFrame based on swapped index levels

Sample input
import pandas as pd
df = pd.DataFrame([
    ['A', 'B', 1, 5],
    ['B', 'C', 2, 2],
    ['B', 'A', 1, 1],
    ['C', 'B', 1, 3]],
    columns=['from', 'to', 'type', 'value'])
df = df.set_index(['from', 'to', 'type'])
Which looks like this:
             value
from to type
A    B  1        5
B    C  2        2
     A  1        1
C    B  1        3
Goal
I now want to remove "duplicate" rows from this in the following sense: for each row with an arbitrary index (from, to, type), if there exists a row (to, from, type), the value of the second row should be added to the first row and the second row be dropped. In the example above, the row (B, A, 1) with value 1 should be added to the first row and dropped, leading to the following desired result.
Sample result
             value
from to type
A    B  1        6
B    C  2        2
C    B  1        3
This is my best try so far. It feels unnecessarily verbose and clunky:
# aggregate val of rows with (from,to,type) == (to,from,type)
df2 = df.reset_index()
df3 = df2.rename(columns={'from': 'to', 'to': 'from'})
df_both = df.join(df3.set_index(['from', 'to', 'type']),
                  rsuffix='_b').sum(axis=1)

# then remove the second, i.e. the (to,from,t) row
rows_to_keep = []
rows_to_remove = []
for a, b, t in df_both.index:
    if (b, a, t) in df_both.index and not (b, a, t) in rows_to_keep:
        rows_to_keep.append((a, b, t))
        rows_to_remove.append((b, a, t))
df_final = df_both.drop(rows_to_remove)
df_final
Especially the second "de-duplication" step feels very unpythonic. (How) can I improve these steps?
Not sure how much better this is, but it's certainly different:
import pandas as pd
from collections import Counter
df = pd.DataFrame([
    ['A', 'B', 1, 5],
    ['B', 'C', 2, 2],
    ['B', 'A', 1, 1],
    ['C', 'B', 1, 3]],
    columns=['from', 'to', 'type', 'value'])
df = df.set_index(['from', 'to', 'type'])
ls = df.to_records()
ls = list(ls)
ls2 = []
for l in ls:
    i = 0
    while i <= l[3]:
        ls2.append(list(l)[:3])
        i += 1
counted = Counter(tuple(sorted(entry)) for entry in ls2)