pandas: filter rows with list elements beginning with string?

I have the following dataframe.
d = pd.DataFrame({'a': [['foo', 'bar'], ['bar'], ['fah', 'baz']]})
I'd like to return just the rows whose lists in a contain a value beginning with 'f' - i.e. the first and third rows.
This is what I've tried:
d[d.a.is_in('f')]

Use any with a generator expression inside a list comprehension:
d = d[[any(y.startswith('f') for y in x) for x in d['a']]]
print (d)
            a
0  [foo, bar]
2  [fah, baz]
Detail (converted to a list here only for display):
print ([list(y.startswith('f') for y in x) for x in d['a']])
[[True, False], [False], [True, False]]
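The same check can also be phrased through Series.map; a minimal sketch, equivalent to the list comprehension above:
mask = d['a'].map(lambda lst: any(s.startswith('f') for s in lst))
print(d[mask])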

Solution using .apply(), iterating over the individual list elements, checking with .startswith() and evaluating the length of the resultant list:
import pandas as pd
df = pd.DataFrame({'a': [['foo', 'bar'], ['bar'], ['fah', 'baz']]})
df = df[df.a.apply(lambda x: len([el for el in x if el.startswith('f')]) > 0)]
print(df)
which results in:
            a
0  [foo, bar]
2  [fah, baz]
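Since any short-circuits, the lambda above can also be written without materializing the intermediate list; a sketch of the equivalent check:
df = df[df.a.apply(lambda x: any(el.startswith('f') for el in x))]
This stops at the first element starting with 'f' instead of scanning the whole list.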


New column with word at nth position of string from other column pandas

import numpy as np
import pandas as pd
d = {'ABSTRACT_ID': [14145090, 1900667, 8157202, 6784974],
     'TEXT': ["velvet antlers vas are commonly used in tradit",
              "we have taken a basic biologic RPA to elucidat4",
              "ceftobiprole bpr is an investigational cephalo",
              "lipoperoxidationderived aldehydes for example"],
     'LOCATION': [1, 4, 2, 1]}
df = pd.DataFrame(data=d)
df
def word_at_pos(x, y):
    pos = x
    string = y
    count = 0
    res = ""
    for word in string:
        if word == ' ':
            count = count + 1
            if count == pos:
                break
            res = ""
        else:
            res = res + word
    print(res)
word_at_pos(df.iloc[0,2],df.iloc[0,1])
For this df I want to create a new column WORD that contains the word from TEXT at the position indicated by LOCATION, e.g. the first line would be "velvet".
I can do this for a single line with the isolated function word_at_pos(x,y), but can't work out how to apply it to the whole column. I have created new columns with lambda functions before, but can't work out how to fit this function into a lambda.
Looping over TEXT and LOCATION could be the best idea because splitting creates a jagged array, so filtering using numpy advanced indexing won't be possible.
df["WORDS"] = [txt.split()[loc] for txt, loc in zip(df["TEXT"], df["LOCATION"]-1)]
print(df)
   ABSTRACT_ID  ...                    WORDS
0     14145090  ...                   velvet
1      1900667  ...                        a
2      8157202  ...                      bpr
3      6784974  ...  lipoperoxidationderived
[4 rows x 4 columns]
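If you prefer staying inside pandas, a row-wise apply sketch should produce the same column (assuming, as in the sample data, every LOCATION is a valid 1-based word index):
df["WORDS"] = df.apply(lambda row: row["TEXT"].split()[row["LOCATION"] - 1], axis=1)
The zip-based list comprehension above is usually faster, since apply with axis=1 builds a Series for every row.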

Pandas create new column based on groupby and apply lambda if statement

I have an issue with groupby and apply:
df = pd.DataFrame({'A': ['a', 'a', 'a', 'b', 'b', 'b', 'b'], 'B': np.r_[1:8]})
I want to create a column C that, per group, takes the value 1 if the z-score of B exceeds 2 and 0 otherwise. The code:
from scipy import stats
df['C'] = df.groupby('A').apply(lambda x: 1 if np.abs(stats.zscore(x['B'], nan_policy='omit')) > 2 else 0, axis=1)
However, the code fails and I cannot figure out the issue.
Use GroupBy.transform with a lambda function, then compare against the threshold and convert the True/False result to 1/0 by casting to integers:
from scipy import stats
s = df.groupby('A')['B'].transform(lambda x: np.abs(stats.zscore(x, nan_policy='omit')))
df['C'] = (s > 2).astype(int)
Or use numpy.where:
df['C'] = np.where(s > 2, 1, 0)
The error in your solution happens per group:
from scipy import stats
df = df.groupby('A')['B'].apply(lambda x: 1 if np.abs(stats.zscore(x, nan_policy='omit')) > 2 else 0)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
This is a gotcha covered in the pandas docs:
pandas follows the NumPy convention of raising an error when you try to convert something to a bool. This happens in an if-statement or when using the boolean operations: and, or, and not.
So use one of these solutions instead of if-else:
from scipy import stats
df = df.groupby('A')['B'].apply(lambda x: (np.abs(stats.zscore(x, nan_policy='omit')) > 2).astype(int))
print (df)
A
a       [0, 0, 0]
b    [0, 0, 0, 0]
Name: B, dtype: object
but then you would need to convert this back into a column; groupby.transform is used precisely to avoid that problem.
You can use groupby + apply with a function that finds the z-scores of each item in each group, explode the resulting list, use gt to create a boolean Series, and convert it to dtype int:
df['C'] = df.groupby('A')['B'].apply(lambda x: stats.zscore(x, nan_policy='omit')).explode(ignore_index=True).abs().gt(2).astype(int)
Output:
   A  B  C
0  a  1  0
1  a  2  0
2  a  3  0
3  b  4  0
4  b  5  0
5  b  6  0
6  b  7  0
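All the flags above are 0 because with only three or four values per group no z-score can exceed 2 (the maximum possible |z| for n values is (n-1)/sqrt(n)). A self-contained sketch with made-up data in which one value actually crosses the threshold:
import numpy as np
import pandas as pd
from scipy import stats

df = pd.DataFrame({'A': ['a'] * 6 + ['b'] * 3,
                   'B': [1, 1, 1, 1, 1, 10, 4, 5, 6]})
# per-group absolute z-scores, aligned with the original rows
s = df.groupby('A')['B'].transform(lambda x: np.abs(stats.zscore(x, nan_policy='omit')))
df['C'] = (s > 2).astype(int)
print(df)  # only the row with B == 10 gets C == 1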

Pandas get row if column is a substring of string

I can do the following if I want to extract rows whose column "A" contains the substring "hello".
df[df['A'].str.contains("hello")]
How can I select rows whose column value is a substring of another word? e.g.
df["hello".contains(df['A'].str)]
Here's an example dataframe
df = pd.DataFrame.from_dict({"A":["hel"]})
df["hello".contains(df['A'].str)]
IIUC, you could apply str.find:
import pandas as pd
df = pd.DataFrame(['hell', 'world', 'hello'], columns=['A'])
res = df[df['A'].apply("hello".find).ne(-1)]
print(res)
Output
       A
0   hell
2  hello
As an alternative, use __contains__:
res = df[df['A'].apply("hello".__contains__)]
print(res)
Output
       A
0   hell
2  hello
Or simply:
res = df[df['A'].apply(lambda x: x in "hello")]
print(res)
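A vectorized variant using numpy's string routines; a sketch assuming column A holds plain strings (np.char.find broadcasts the fixed "hello" against each candidate substring, mirroring str.find):
import numpy as np
res = df[np.char.find("hello", df['A'].to_numpy(dtype=str)) != -1]
print(res)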

Pandas dataframe append to column containing list

I have a pandas dataframe with one column that contains an empty list in each cell.
I need to duplicate the dataframe, and append it at the bottom of the original dataframe, but with additional information in the list.
Here is a minimal code example:
df_main = pd.DataFrame([['a', []], ['b', []]], columns=['letter', 'mylist'])
> df_main
  letter mylist
0      a     []
1      b     []
df_copy = df_main.copy()
for index, row in df_copy.iterrows():
    row.mylist = row.mylist.append(1)
pd.concat([ df_copy,df_main], ignore_index=True)
> result:
  letter mylist
0      a   None
1      b   None
2      a    [1]
3      b    [1]
As you can see, there is a problem: the empty list [] was replaced by None.
Just to make sure, this is what I would like to have:
  letter mylist
0      a     []
1      b     []
2      a    [1]
3      b    [1]
How can I achieve that?
The list append method returns None, which is why None appears in the final dataframe. You can use the + operator and reassign instead, like this:
import pandas as pd
df_main = pd.DataFrame([['a', []], ['b', []]], columns=['letter', 'mylist'])
df_copy = df_main.copy()
for index, row in df_copy.iterrows():
    row.mylist = row.mylist + [1]
pd.concat([df_main, df_copy], ignore_index=True).head()
Output of this block of code:
  letter mylist
0      a     []
1      b     []
2      a    [1]
3      b    [1]
A workaround would be to create a temporary column mylist2 filled with empty lists via np.empty((len(df), 0)).tolist(), use np.where() to change the None values of mylist to an empty list, and then drop the temporary column.
import pandas as pd, numpy as np
df_main = pd.DataFrame([['a', []], ['b', []]], columns=['letter', 'mylist'])
df_copy = df_main.copy()
for index, row in df_copy.iterrows():
    row.mylist = row.mylist.append(1)
df = pd.concat([df_copy, df_main], ignore_index=True)
df = df.assign(mylist2=np.empty((len(df), 0)).tolist())
df['mylist'] = np.where((df['mylist'].isnull()), df['mylist2'], df['mylist'])
df= df.drop('mylist2', axis=1)
df
Out[1]:
  letter mylist
0      a     []
1      b     []
2      a    [1]
3      b    [1]
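A lighter version of the same repair, without the temporary column; a sketch that simply maps the None values left behind by append back to empty lists:
df['mylist'] = df['mylist'].map(lambda v: [] if v is None else v)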
Not only does the list append method return None, as indicated in the first answer, but df_main and df_copy also contain pointers to the same list objects. So after:
for index, row in df_copy.iterrows():
    row.mylist.append(1)
both dataframes have updated lists with one element. For your code to work as expected you can create a new list after you copy the dataframe:
df_copy = df_main.copy()
for index, row in df_copy.iterrows():
    row.mylist = []
This question is another great example of why we should not put mutable objects in a dataframe.
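Putting those observations together, a minimal sketch that avoids mutating shared lists entirely by building fresh list objects for the copy:
import pandas as pd

df_main = pd.DataFrame([['a', []], ['b', []]], columns=['letter', 'mylist'])
df_copy = df_main.copy()
# build brand-new lists instead of appending to the ones shared with df_main
df_copy['mylist'] = [lst + [1] for lst in df_main['mylist']]
print(pd.concat([df_main, df_copy], ignore_index=True))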

How to find the value by checking the flag

The dataframe is below:
uid,col1,col2,flag
1001,a,b,{'a':True,'b':False}
1002,a,b,{'a':False,'b':True}
Desired out column:
a
b
By checking the flag: if flag a is true, print a in the out column; if flag b is true, print b in the out column.
IIUC, you can use dot after DataFrame constructor:
m = pd.DataFrame(df['flag'].tolist()).fillna(False)
final = df.assign(New=m.dot(m.columns))
print(final)
    uid col1 col2                     flag New
0  1001    a    b  {'a': True, 'b': False}   a
1  1002    a    b  {'a': False, 'b': True}   b
If you just want to evaluate the flags column (and col1 and col2 won't be used in any way as per your question), then you can simply get the first key from the flags dict where the value is True:
df.flag.apply(lambda x: next((k for k,v in x.items() if v), ''))
(instead of '' you can, of course, supply any other value for the case that none of the values in the dict is True)
Example:
import pandas as pd
import io
import ast
s = '''uid,col1,col2,flag
1001,a,b,"{'a':True,'b':False}"
1002,a,b,"{'a':False,'b':True}"
1003,a,b,"{'a':True,'b':True}"
1004,a,b,"{'a':False,'b':False}"'''
df = pd.read_csv(io.StringIO(s))
df.flag = df.flag.map(ast.literal_eval)
df['out'] = df.flag.apply(lambda x: next((k for k,v in x.items() if v), ''))
Result
    uid col1 col2                      flag out
0  1001    a    b   {'a': True, 'b': False}   a
1  1002    a    b   {'a': False, 'b': True}   b
2  1003    a    b    {'a': True, 'b': True}   a
3  1004    a    b  {'a': False, 'b': False}
Method 1:
We can also use Series.apply to convert each dictionary to a Series, then drop the False entries with boolean indexing + DataFrame.stack, and pick a or b from the index with Index.get_level_values:
s = df['flag'].apply(pd.Series)
df['new'] = s[s].stack().index.get_level_values(1)
# df['new'] = np.dot(s, s.columns)  # or this
print(df)
Method 2:
We can also check the items with Series.apply and save the key in a list if the value is True.
Finally we use Series.explode if we want to get rid of the list.
df['new']=df['flag'].apply(lambda x: [k for k,v in x.items() if v])
df = df.explode('new')
print(df)
or without apply:
df=df.assign(new=[[k for k,v in d.items() if v] for d in df['flag']]).explode('new')
print(df)
Output
    uid col1 col2                     flag new
0  1001    a    b  {'a': True, 'b': False}   a
1  1002    a    b  {'a': False, 'b': True}   b
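One caveat with the dot approach: if a row had more than one True flag, m.dot(m.columns) would concatenate the column names (e.g. 'ab'). A hedged alternative that always picks the first True column and leaves all-False rows empty (assuming df has a default RangeIndex so m aligns row for row):
m = pd.DataFrame(df['flag'].tolist()).fillna(False)
df['new'] = m.idxmax(axis=1).where(m.any(axis=1), '')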