How to select rows that have missing values in specific columns, depending on conditions, in a dataframe? - pandas

I have a dataframe extracted from an Excel sheet.
I am looking for the rows that are NOT legit.
A legit row is one that meets ANY of the following conditions:
exactly 1 column is filled in and the other columns are empty or null
exactly 2 columns are filled in and the other columns are empty or null
all 8 columns are filled in
So a NON-legit row is the opposite of the above, such as:
7 of the 8 columns are filled in but one is empty
6 of the 8 columns are filled in but two are empty
and so on...
The 8 columns I am interested in are: columns A, B, D, E, F, G, I, L.
I only want to return the rows that are NOT legit.
I know how to find rows which are empty in specific columns, but I am not sure how to find the non-legit rows based on the above conditions.
empty_A = sheet[sheet[sheet.columns[0]].isnull()]
empty_B = sheet[sheet[sheet.columns[1]].isnull()]
empty_D = sheet[sheet[sheet.columns[3]].isnull()]
empty_E = sheet[sheet[sheet.columns[4]].isnull()]
empty_F = sheet[sheet[sheet.columns[5]].isnull()]
empty_G = sheet[sheet[sheet.columns[6]].isnull()]
empty_I = sheet[sheet[sheet.columns[8]].isnull()]
empty_L = sheet[sheet[sheet.columns[11]].isnull()]
print(empty_G)
UPDATE:
I solved it using a list comprehension.
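Since the exact list-comprehension solution is not shown, here is a minimal sketch of what such an approach could look like, reusing the column positions from the snippet above and assuming sheet is the DataFrame read from the Excel file (an illustration, not the asker's actual code):

# Columns A, B, D, E, F, G, I, L by position, as in the question.
cols = [sheet.columns[i] for i in [0, 1, 3, 4, 5, 6, 8, 11]]

# Count the filled (non-null) cells per row in just those columns, then keep
# the rows whose count is not 1, 2 or 8, i.e. the non-legit rows.
filled = sheet[cols].notnull().sum(axis=1)
non_legit = sheet[[n not in (1, 2, 8) for n in filled]]
print(non_legit)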

If you have already populated your dataframe, then you can do it like this:
import numpy as np
import pandas as pd
## Generate Random Data
raw_data=np.random.choice([None,1], (50,8))
raw_data= np.r_[raw_data, np.random.choice([None, 1,2,3], (50,8))]
## Create dataframe from random data
df = pd.DataFrame(raw_data, columns="A, B, D, E, F, G, I, L".split(", "))
notnull_counts = (~df.isnull()).sum(axis=1)
## filter rows with your condition
legit_rows = df[((notnull_counts==1) | (notnull_counts==2) | (notnull_counts==8))]
non_legit_rows = df[~((notnull_counts==1) | (notnull_counts==2) | (notnull_counts==8))]
display(legit_rows)  # display() is available in Jupyter/IPython; use print() elsewhere

It seems like you want to count the number of null values in these 8 particular columns and select rows based on how many nulls are found, which suggests summing and then selecting on that sum. Most pandas operations default to working columnwise, so you need to tell sum() to operate on each row by passing axis="columns", like so:
# This is a series indexed like df.
# It counts the number of null values in the given columns.
n_null = df[["A", "B", "D", "E", "F", "G", "I", "L"]].isnull().sum(axis="columns")
# This selects the rows where n_null has a non-legit value (anything other than 0, 6 or 7 nulls).
df_notlegit = df.loc[n_null.isin([1, 2, 3, 4, 5, 8])]
# This is another way to do it: keep every row whose null count is not a legit one.
df_nonlegit = df.loc[~n_null.isin([0, 6, 7])]

null_counts = df.isna().sum(axis=1)
df.loc[~((null_counts == 0) | (null_counts == 7) | (null_counts == 6))]  # rows that are NOT legit

Related

Joining two data frames on column name and comparing result side by side

I have two data frames which look like df1 and df2 below and I want to create df3 as shown.
I could do this using a left join to get all the rows in one dataframe and then use numpy.where to see whether they match or not.
I can get what I want, but I feel there should be a more elegant way of doing this that avoids renaming columns, reshuffling columns in the dataframe and then using np.where.
Is there a better way to do this?
Code to reproduce the dataframes:
import pandas as pd
df1=pd.DataFrame({'product':['apples','bananas','oranges','pineapples'],'price':[1,2,3,7],'quantity':[5,7,11,4]})
df2=pd.DataFrame({'product':['apples','bananas','oranges'],'price':[2,2,4],'quantity':[5,7,13]})
df3=pd.DataFrame({'product':['apples','bananas','oranges'],'price_df1':[1,2,3],'price_df2':[2,2,4],'price_match':['No','Yes','No'],'quantity':[5,7,11],'quantity_df2':[5,7,13],'quantity_match':['Yes','Yes','No']})
An elegant way to do your task is to:
generate "partial" DataFrames from each source column,
and then concatenate them.
The first step is to define a function that joins 2 source columns and appends a "match" column:
import numpy as np

def myJoin(s1, s2):
    rv = s1.to_frame().join(s2.to_frame(), how='inner',
                            lsuffix='_df1', rsuffix='_df2')
    rv[s1.name + '_match'] = np.where(rv.iloc[:, 0] == rv.iloc[:, 1], 'Yes', 'No')
    return rv
Then, from df1 and df2, generate 2 auxiliary DataFrames setting product as the index:
wrk1 = df1.set_index('product')
wrk2 = df2.set_index('product')
And the final step is:
result = pd.concat([myJoin(wrk1[col], wrk2[col]) for col in wrk1.columns], axis=1)\
    .reset_index()
Details:
for col in wrk1.columns - generates names of columns to join.
myJoin(wrk1[col], wrk2[col]) - generates the partial result for this column from
both source DataFrames.
[…] - a list comprehension, collecting the above partial results in a list.
pd.concat(…) - concatenates these partial results into the final result.
reset_index() - converts the index (product names) into a regular column.
For your source data, the result is:
product price_df1 price_df2 price_match quantity_df1 quantity_df2 quantity_match
0 apples 1 2 No 5 5 Yes
1 bananas 2 2 Yes 7 7 Yes
2 oranges 3 4 No 11 13 No
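Just for comparison, here is a merge-based sketch (not part of the answer above) that produces a similar result for this specific pair of columns; it is less generic than the concat approach, but may be easier to read when only price and quantity need comparing:

import numpy as np
import pandas as pd

# Inner merge on product keeps only products present in both frames.
cmp = df1.merge(df2, on='product', suffixes=('_df1', '_df2'))

# One "match" column per compared field.
for col in ['price', 'quantity']:
    cmp[col + '_match'] = np.where(cmp[col + '_df1'] == cmp[col + '_df2'], 'Yes', 'No')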

Dataframe Column into Multiple Columns by delimiter ',' : expand = True, n =-1

My first question, thanks. Sorry for the lengthy formulation.
I researched all related posts.
What I have
My DataFrame column (please see screenshot) contains car parameters as strings separated by the delimiter ','.
My DataFrame: (screenshot)
Some rows come with mileage while others do not (see screenshot), hence some rows have fewer delimiters.
The Task
I need to create 5 columns (the maximum number of fields) to store the CarParameters separately (Mileage, GearBox, HP, Body, etc.).
If a row doesn't have Mileage, put 0 in the Mileage column.
What I know and what works well
df["name"].str.split(",", expand=True) uses n=-1 by default and splits into the necessary columns.
example:
The issue:
If I use the str.split(",", expand=True) method, GearBox (ATM) is wrongly put under the newly created Mileage column, because that row is short of one delimiter (screenshot).
Result: (screenshot)
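To make the issue concrete, here is a minimal reproduction with made-up strings (the values are assumptions, not the asker's data): in the second row the gearbox value slides into the first column because the mileage field is missing.

import pandas as pd

df = pd.DataFrame({'CarParameters': ['10000,ATM,150,Sedan,Red',  # has mileage
                                     'ATM,150,Sedan,Red']})      # mileage missing

# Naive split: the second row's fields shift one column to the left.
print(df['CarParameters'].str.split(',', expand=True))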
You can try a lambda function combined with list concatenation, like below.
>>> import pandas as pd
>>> df = pd.DataFrame([['1,2,3,4,5'],['2,3,4,5']], columns=["CarParameters"])
>>> print(pd.DataFrame(df.CarParameters.apply(
...     lambda x: str(x).split(',')).apply(
...     lambda x: [0]*(5-len(x)) + x).to_list(), columns=list("ABCDE")))
A B C D E
0 1 2 3 4 5
1 0 2 3 4 5
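If the five fields are, say, Mileage, GearBox, HP, Body and Colour (the last name is a guess; only the first four appear in the question), you can pass those names instead of list("ABCDE") and keep the original column alongside the expanded ones:

cols = ["Mileage", "GearBox", "HP", "Body", "Colour"]  # assumed field order/names
expanded = pd.DataFrame(
    df.CarParameters.apply(lambda x: str(x).split(','))
      .apply(lambda x: [0] * (5 - len(x)) + x)
      .to_list(),
    columns=cols)
result = pd.concat([df, expanded], axis=1)  # original column plus the 5 new ones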

replace values for specific rows more efficiently in pandas / Python

I have two data frames. Based on a condition that I get from a list of ids (whose length is 2 million), I find the rows that match each id, and for those rows I replace the values in columns x and y of the first data frame with the values of x and y from the second data frame. Here is my code, but it is very slow and makes my computer freeze. Any idea how I can do this more efficiently?
for ids in List_id:
    a = df1.index[(df1['id'] == ids) == True].values[0]
    b = df2.index[(df2['id'] == ids) == True].values[0]
    df1['x'][a] = df2['x'][b]
    df1['y'][a] = df2['y'][b]
thank you
--
Example:
List_id = [1, 11, 12, 13]
ids = 1
a = df1.index[(df1['id'] == 1) == True].values[0]
print(a)  # 234
b = df2.index[(df2['id'] == 1) == True].values[0]
print(b)  # 789
df1['x'][a] = 0
df2['x'][b] = 15
So at the end I want, in my data frame 1:
df1['x'][a] = df2['x'][b]
Assuming you don't have repeated ids in either dataframe, you can try something like below:
Step 1: filter df2.
Step 2: join df1 with the filtered one.
Step 3: replace the values in the joined df and drop the extra columns (see the sketch after the code).
df2_filtered = df2[df2['id'].isin(List_id)]
join_df = df1.set_index('id').join(df2_filtered.set_index('id'), rsuffix="_ignore", how='left')
# other columns from df2 will be null; you can use that to find the rows which need to be updated
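A sketch of what step 3 could look like, continuing from join_df above (the _ignore column names come from the rsuffix used in the join; this is one possible completion, not the answerer's exact code):

# Where df2 supplied a value, take it; otherwise keep df1's original value.
join_df['x'] = join_df['x_ignore'].fillna(join_df['x'])
join_df['y'] = join_df['y_ignore'].fillna(join_df['y'])

# Drop the helper columns and turn id back into a regular column.
df1_updated = join_df.drop(columns=['x_ignore', 'y_ignore']).reset_index()

For this particular task, DataFrame.update offers an even shorter vectorised route: set id as the index on both frames and call df1.update(df2_filtered.set_index('id')[['x', 'y']]).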

Pandas dataframe select rows where a list-column contains a specific set of elements

This is a follow-up to the following post: Pandas dataframe select rows where a list-column contains any of a list of strings
I want to be able to select rows that contain the exact pair of strings from the selection list (where selection= ['cat', 'dog']).
starting df:
molecule species
0 a [dog]
1 b [horse, pig]
2 c [cat, dog]
3 d [cat, horse, pig]
4 e [chicken, pig]
df I want:
molecule species
2 c [cat, dog]
I tried the following and it returned only the column labels.
df[pd.DataFrame(df.species.tolist()).isin(selection).all(1)]
One way to do it:
df['joined'] = df.species.str.join(sep=',')
selection = ['cat,dog']
filtered = df.loc[df.joined.isin(selection)]
This won't find cases with different sorting (i.e. 'dog,cat' or 'horse,cat,pig'), but if that is not an issue then it works fine.
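If the ordering could vary, one small tweak (not part of the original answer) is to sort each list before joining, so that 'dog,cat' and 'cat,dog' collapse to the same key:

df['joined'] = df.species.map(lambda lst: ','.join(sorted(lst)))
selection = [','.join(sorted(['cat', 'dog']))]  # -> ['cat,dog']
filtered = df.loc[df.joined.isin(selection)]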
The following finds the rows whose species are all contained in the selection list, regardless of ordering:
import numpy as np
import pandas as pd
selection = ['cat', 'dog']
mols = pd.DataFrame({'molecule': ['a', 'b', 'c', 'd', 'e'],
                     'species': [['dog'], ['horse', 'pig'], ['cat', 'dog'],
                                 ['cat', 'horse', 'pig'], ['chicken', 'pig']]})
mols.loc[np.where(pd.Series([all(w in selection for w in mols.species.values[k])
                             for k in mols.index]).map({True: 1, False: 0}) == 1)[0]]
If you want to find any rows that have at least the elements in the list (and could have others as well), use:
mols.loc[np.where(pd.Series([all(w in mols.species.values[k] for w in selection) for k in mols.index]).map({True:1,False:0}) == 1)[0]]
This is an interesting application of matrices as selectors. Use the transposed mols to multiply the vector of zeros and ones that indicates which rows in mols fit your criteria:
mols.to_numpy().T.dot(pd.Series([all(w in mols.species.values[k] for w in selection) for k in mols.index]).map({True:1,False:0}))
Another (more readable) solution would be to assign to mols a column indicating whether the condition is True, map it to 0 and 1, and query mols where that column equals 1.
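A sketch of that more readable variant (the helper column name 'hit' is made up here):

# 1 where every element of selection appears in the row's species list, 0 otherwise.
mols['hit'] = pd.Series(
    [all(w in mols.species.values[k] for w in selection) for k in mols.index]
).map({True: 1, False: 0})

print(mols.query('hit == 1').drop(columns='hit'))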

pandas: appending a row to a dataframe with values derived using a user defined formula applied on selected columns

I have a dataframe as
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 4), columns=list('ABCD'))
I can use the following to perform traditional calculations like mean(), sum(), etc.:
df.loc['calc'] = df[['A','D']].iloc[2:4].mean(axis=0)
Now I have two questions:
How can I apply a formula (like exp(mean()) or 2.5*mean()/sqrt(max())) to columns 'A' and 'D' for rows 2 to 4?
How can I append a row to the existing df where two values are the mean() of A and D, and two values are the result of a specific formula applied to C and B?
Q1:
You can use .apply() and lambda functions.
df.iloc[2:4,[0,3]].apply(lambda x: np.exp(np.mean(x)))
df.iloc[2:4,[0,3]].apply(lambda x: 2.5*np.mean(x)/np.sqrt(max(x)))
Q2:
You can build dictionaries, combine them, and add the result as a row.
The first one uses the mean; the second one applies a custom function.
ad = dict(df[['A', 'D']].mean())
bc = dict(df[['B', 'C']].apply(lambda x: x.sum()*45))
Combine them:
ad.update(bc)
# DataFrame.append was removed in pandas 2.0; pd.concat achieves the same thing
df = pd.concat([df, pd.DataFrame([ad])], ignore_index=True)