Exporting Tokenized SpaCy result into Excel or SQL tables - pandas

I'm using spaCy with pandas to tokenise a sentence with part-of-speech (POS) tags and export the result to Excel. The code is as follows:
import spacy
import xlsxwriter
import pandas as pd
nlp = spacy.load('en_core_web_sm')
text ="""He is a good boy."""
doc = nlp(text)
for token in doc:
    x = [token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
         token.shape_, token.is_alpha, token.is_stop]
    print(x)
When I print(x), I get the following:
['He', '-PRON-', 'PRON', 'PRP', 'nsubj', 'Xx', True, False]
['is', 'be', 'VERB', 'VBZ', 'ROOT', 'xx', True, True]
['a', 'a', 'DET', 'DT', 'det', 'x', True, True]
['good', 'good', 'ADJ', 'JJ', 'amod', 'xxxx', True, False]
['boy', 'boy', 'NOUN', 'NN', 'attr', 'xxx', True, False]
['.', '.', 'PUNCT', '.', 'punct', '.', False, False]
Then I added a DataFrame inside the token loop as follows:
for token in doc:
    x = [token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
         token.shape_, token.is_alpha, token.is_stop]
    df = pd.DataFrame(x)
    print(df)
Now I start to get the following format:
0
0 He
1 -PRON-
2 PRON
3 PRP
4 nsubj
5 Xx
6 True
7 False
........
........
However, when I try exporting the output (df) to Excel with pandas using the following code, it only shows me the last iteration of x in the column:
df=pd.DataFrame(x)
writer = pd.ExcelWriter('pandas_simple.xlsx', engine='xlsxwriter')
df.to_excel(writer,sheet_name='Sheet1')
Output (in Excel Sheet):
0
0 .
1 .
2 PUNCT
3 .
4 punct
5 .
6 False
7 False
How can I get all the iterations one after another, each in its own column, like this?
0 He is ….
1 -PRON- be ….
2 PRON VERB ….
3 PRP VBZ ….
4 nsubj ROOT ….
5 Xx xx ….
6 True True ….
7 False True ….

Some shorter code:
import spacy
import pandas as pd

nlp = spacy.load('en_core_web_sm')
text = """He is a good boy."""

param = [[token.text, token.lemma_, token.pos_,
          token.tag_, token.dep_, token.shape_,
          token.is_alpha, token.is_stop] for token in nlp(text)]

df = pd.DataFrame(param)
headers = ['text', 'lemma', 'pos', 'tag', 'dep',
           'shape', 'is_alpha', 'is_stop']
df.columns = headers
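To get back to the original goal of exporting to Excel, the assembled DataFrame can then be written out in one call (a minimal sketch; it assumes an Excel engine such as xlsxwriter or openpyxl is installed, and the file names are just placeholders):
# One sheet with one row per token
df.to_excel('pandas_simple.xlsx', sheet_name='Sheet1', index=False)

# Or, to mirror the one-column-per-token layout shown in the question, transpose first
df.T.to_excel('pandas_tokens_by_column.xlsx', sheet_name='Sheet1')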

In case you don't have your version yet:
import pandas as pd

rows = [
    ['He', '-PRON-', 'PRON', 'PRP', 'nsubj', 'Xx', True, False],
    ['is', 'be', 'VERB', 'VBZ', 'ROOT', 'xx', True, True],
    ['a', 'a', 'DET', 'DT', 'det', 'x', True, True],
    ['good', 'good', 'ADJ', 'JJ', 'amod', 'xxxx', True, False],
    ['boy', 'boy', 'NOUN', 'NN', 'attr', 'xxx', True, False],
    ['.', '.', 'PUNCT', '.', 'punct', '.', False, False],
]
headers = ['text', 'lemma', 'pos', 'tag', 'dep',
           'shape', 'is_alpha', 'is_stop']

# example 1: list of dicts
# following https://stackoverflow.com/a/28058264/1758363
d = []
for row in rows:
    dict_ = {k: v for k, v in zip(headers, row)}
    d.append(dict_)
df = pd.DataFrame(d)[headers]

# example 2: appending dicts
df2 = pd.DataFrame(columns=headers)
for row in rows:
    dict_ = {k: v for k, v in zip(headers, row)}
    df2 = df2.append(dict_, ignore_index=True)

# example 3: list of dicts created with the map() function
def as_dict(row):
    return {k: v for k, v in zip(headers, row)}

df3 = pd.DataFrame(list(map(as_dict, rows)))[headers]

def is_equal(df_a, df_b):
    """Substitute for pd.DataFrame.equals()"""
    return (df_a == df_b).all().all()

assert is_equal(df, df2)
assert is_equal(df2, df3)
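One caveat: DataFrame.append (used in example 2) is deprecated and has been removed in pandas 2.0. A rough equivalent using pd.concat (a sketch, not part of the original answer):
# example 2 rewritten without append: build one single-row frame per token row
# and concatenate them at the end
df2_alt = pd.concat(
    [pd.DataFrame([dict(zip(headers, row))]) for row in rows],
    ignore_index=True,
)[headers]
assert is_equal(df, df2_alt)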

Related

Conditional mapping among columns of two DataFrames with pandas

I need your advice on how to map columns between DataFrames.
I have put it in a simple way so that it's easier to understand:
df = dataframe
EXAMPLE:
df1 = pd.DataFrame({
    "X": [],
    "Y": [],
    "Z": []
})
df2 = pd.DataFrame({
    "A": ['', '', 'A1'],
    "C": ['', '', 'C1'],
    "D": ['D1', 'Other', 'D3'],
    "F": ['', '', ''],
    "G": ['G1', '', 'G3'],
    "H": ['H1', 'H2', 'H3']
})
Requirement:
1st step:
For the X column of df1, take the value from columns A, C, D of df2, in that order: stop searching as soon as a non-empty value is found and select it.
2nd step:
If the selected value is "Other", then look through columns F, G, and H, in that order, until a non-empty value is found.
Result:
X
0 D1
1 H2
2 A1
Thank you so much in advance
Try this:
def first_non_empty(df, cols):
    """Return the first non-empty, non-null value among the specified columns, per row."""
    return df[cols].replace('', pd.NA).bfill(axis=1).iloc[:, 0]

col_x = first_non_empty(df2, ['A', 'C', 'D'])
col_x = col_x.mask(col_x == 'Other', first_non_empty(df2, ['F', 'G', 'H']))
df1['X'] = col_x
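Since df1 starts out with no rows, the quickest sanity check is to look at col_x itself; it should reproduce the Result from the question:
print(col_x)
# expected, matching the Result above:
# 0    D1
# 1    H2
# 2    A1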

all() falls through to the else statement every time

Basically, the check is being applied to the entire DataFrame rather than to each row individually, which is why it always falls into the else condition. We need to apply it to each row.
I got the proper output when applying it to a one-row frame. When applying it to the entire DataFrame, I get "No Keys" for every row. Some rows of res contain None; only those rows are expected to be "No Keys".
sample dataframe
res,url1,url2
{'bool': True, 'val':False},{'bool': False, 'val':False},{'bool': True, 'val':False}
None,{'bool': True, 'val':False},{'bool': False, 'val':False}
{'bool': False, 'val':False},{'bool': True, 'val':False},{'bool': True, 'val':False}
Code
def func1():
    return 'url1'

def func2():
    return 'url2'

def test_func():
    if df['res'].str['bool'].all() and df['url1'].str['bool'].all():
        return func1()
    elif df['res'].str['bool'].all() and df['url2'].str['bool'].all():
        return func2()
    else:
        return "No Keys"
Expected output:
url1
No Keys
url2
My output:
No Keys
No Keys
No Keys
I need to apply this to more than 5000 URLs with the code below:
df['output'] = df.apply(test_func)
When applying it, I get "No Keys" for every row.
If I use any() instead, it fails because the first row's url1 bool is False.
The issue is that all() checks all the rows, and since None is present in the second row it prints "No Keys".
Recreating DataFrame
res url1 \
0 {'bool': True, 'val': False} {'bool': False, 'val': False}
1 None {'bool': True, 'val': False}
2 {'bool': False, 'val': False} {'bool': True, 'val': False}
url2
0 {'bool': True, 'val': False}
1 {'bool': False, 'val': False}
2 {'bool': True, 'val': False}
Use pd.apply:
df.apply(lambda x: 'url1' if (x['res'] is not None and x['res'].get('bool') and x['url1'].get('bool'))
         else 'url2' if (x['res'] is not None and x['res'].get('bool') and x['url2'].get('bool'))
         else 'No Keys', axis=1)
Output
0 url2
1 No Keys
2 No Keys
dtype: object
Note - for the third row, the res bool value is False, so the and gives False, and hence No Keys.
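If readability matters when applying this to 5000+ rows, the same logic can also be written as a named row function (an equivalent sketch of the lambda above; classify is just an illustrative name):
def classify(row):
    """Same logic as the lambda above, written out for readability."""
    res = row['res']
    if res is not None and res.get('bool') and row['url1'].get('bool'):
        return 'url1'
    if res is not None and res.get('bool') and row['url2'].get('bool'):
        return 'url2'
    return 'No Keys'

df['output'] = df.apply(classify, axis=1)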
You can also use a nested np.where:
import pandas as pd
import numpy as np

# Recreate dataframe
df = pd.DataFrame(data={
    'res':  [{'bool': True, 'val': False}, None, {'bool': False, 'val': False}],
    'url1': [{'bool': False, 'val': False}, {'bool': True, 'val': False}, {'bool': True, 'val': False}],
    'url2': [{'bool': True, 'val': False}, {'bool': False, 'val': False}, {'bool': True, 'val': False}]})

# Define logic
df['Output'] = np.where(df['res'].str['bool'] & df['url1'].str['bool'], 'url1',
               np.where(df['res'].str['bool'] & df['url2'].str['bool'], 'url2',
                        'No Keys'))

# Check result
df
res ... Output
0 {'bool': True, 'val': False} ... url2
1 None ... No Keys
2 {'bool': False, 'val': False} ... No Keys
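The None row ends up as No Keys because the .str element accessor returns NaN for missing entries, and those NaN rows come out False in the boolean conditions above. A quick check against the recreated df (a sketch; the exact dtype shown may vary by pandas version):
print(df['res'].str['bool'])
# 0     True
# 1      NaN
# 2    False
# Name: res, dtype: object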

How to find the value by checking the flag

The data frame is below:
uid,col1,col2,flag
1001,a,b,{'a':True,'b':False}
1002,a,b,{'a':False,'b':True}
Expected out column:
a
b
By checking the flag: if the a flag is true, print a in the out column; if the b flag is true, print b in the out column.
IIUC, you can use dot after DataFrame constructor:
m = pd.DataFrame(df['flag'].tolist()).fillna(False)
final = df.assign(New=m.dot(m.columns))
print(final)
uid col1 col2 flag New
0 1001 a b {'a': True} a
1 1002 a b {'b': True} b
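For intuition, m.dot(m.columns) works because multiplying a boolean by a string keeps the string for True and gives '' for False, so each row ends up as the concatenation of the column labels that are True. A small standalone illustration (not part of the original answer):
import pandas as pd

m = pd.DataFrame({'a': [True, False], 'b': [False, True]})
# True * 'a' -> 'a', False * 'b' -> '', so row 0 yields 'a' and row 1 yields 'b'
print(m.dot(m.columns))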
If you just want to evaluate the flags column (and col1 and col2 won't be used in any way as per your question), then you can simply get the first key from the flags dict where the value is True:
df.flag.apply(lambda x: next((k for k,v in x.items() if v), ''))
(instead of '' you can, of course, supply any other value for the case that none of the values in the dict is True)
Example:
import pandas as pd
import io
import ast
s = '''uid,col1,col2,flag
1001,a,b,"{'a':True,'b':False}"
1002,a,b,"{'a':False,'b':True}"
1003,a,b,"{'a':True,'b':True}"
1004,a,b,"{'a':False,'b':False}"'''
df = pd.read_csv(io.StringIO(s))
df.flag = df.flag.map(ast.literal_eval)
df['out'] = df.flag.apply(lambda x: next((k for k,v in x.items() if v), ''))
Result
uid col1 col2 flag out
0 1001 a b {'a': True, 'b': False} a
1 1002 a b {'a': False, 'b': True} b
2 1003 a b {'a': True, 'b': True} a
3 1004 a b {'a': False, 'b': False}
Method 1
We can also use Series.apply to convert each dictionary to a Series, then drop the False entries with boolean indexing + DataFrame.stack, and select a or b from the index with Index.get_level_values:
s = df['flag'].apply(pd.Series)
df['new'] = s[s].stack().index.get_level_values(1)
#df['new'] = np.dot(s, s.columns)  # or this
print(df)
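For intuition, the intermediate objects in Method 1 look like this on a tiny two-row version of the flag column (an illustrative sketch, not part of the original answer):
import pandas as pd

flags = pd.Series([{'a': True, 'b': False}, {'a': False, 'b': True}])
s = flags.apply(pd.Series)  # one boolean column per key
print(s[s])                 # False cells become NaN
print(s[s].stack())         # NaN cells are dropped; level 1 of the index holds the key that was True
print(s[s].stack().index.get_level_values(1))  # Index(['a', 'b'])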
Method 2:
We can also check the items with Series.apply and save the key in a list if the value is True.
Finally we use Series.explode if we want to get rid of the list.
df['new'] = df['flag'].apply(lambda x: [k for k, v in x.items() if v])
df = df.explode('new')
print(df)
or without apply:
df = df.assign(new=[[k for k, v in d.items() if v] for d in df['flag']]).explode('new')
print(df)
Output
uid col1 col2 flag new
0 1001 a b {'a': True, 'b': False} a
1 1002 a b {'a': False, 'b': True} b

How can I select rows from one DataFrame, where a part of the row's index is in another DataFrame's index and meets certain criteria?

I have two DataFrames. df provides a lot of data. test_df describes whether certain tests have passed or not. I need to select from df only the rows where the tests have not failed by looking up this info in test_df. So far, I'm able to reduce my test_df to passed_tests. So, what's left is to select only the rows from df where the relevant part of the row index is in passed_tests. How can I do that?
Update:
test_df doesn't have unique rows. Where there are duplicate rows (and there may be more than one duplicate), the test with the most positive result takes priority, i.e. True > Ok > False.
My code:
import pandas as pd
import numpy as np

index = [np.array(['foo', 'foo', 'foo', 'foo', 'qux', 'qux', 'qux']),
         np.array(['a', 'a', 'b', 'b', 'a', 'b', 'b'])]
data = np.array(['False', 'True', 'False', 'False', 'False', 'Ok', 'False'])
columns = ["Passed?"]
test_df = pd.DataFrame(data, index=index, columns=columns)
print(test_df)

index = [np.array(['foo', 'foo', 'foo', 'foo', 'qux', 'qux', 'qux', 'qux']),
         np.array(['a', 'a', 'b', 'b', 'a', 'a', 'b', 'b']),
         np.array(['1', '2', '1', '2', '1', '2', '1', '2'])]
data = np.random.randn(8, 2)
columns = ["X", "Y"]
df = pd.DataFrame(data, index=index, columns=columns)
print(df)

passed_tests = test_df.loc[test_df['Passed?'].isin(['True', 'Ok'])]
print(passed_tests)
df
X Y
foo a 1 0.589776 -0.234717
2 0.105161 1.937174
b 1 -0.092252 0.143451
2 0.939052 -0.239052
qux a 1 0.757239 2.836032
2 -0.445335 1.352374
b 1 2.175553 -0.700816
2 1.082709 -0.923095
test_df
Passed?
foo a False
a True
b False
b False
qux a False
b Ok
b False
passed_tests
Passed?
foo a True
qux b Ok
required solution
X Y
foo a 1 0.589776 -0.234717
2 0.105161 1.937174
qux b 1 2.175553 -0.700816
2 1.082709 -0.923095
You need reindex with method='ffill', then check the values with isin, and finally use boolean indexing:
print (test_df.reindex(df.index, method='ffill'))
Passed?
foo a 1 True
2 True
b 1 False
2 False
qux a 1 False
2 False
b 1 Ok
2 Ok
mask = test_df.reindex(df.index, method='ffill').isin(['True', 'Ok'])['Passed?']
print (mask)
foo a 1 True
2 True
b 1 False
2 False
qux a 1 False
2 False
b 1 True
2 True
Name: Passed?, dtype: bool
print (df[mask])
X Y
foo a 1 -0.580448 -0.168951
2 -0.875165 1.304745
qux b 1 -0.147014 -0.787483
2 0.188989 -1.159533
EDIT:
To remove duplicates, here is the easier way:
get the columns out of the MultiIndex with reset_index
sort_values - sort the Passed? column descending, the first and second columns ascending
drop_duplicates - keep only the first value
set_index to restore the MultiIndex
rename_axis to remove the index names
test_df = (test_df.reset_index()
                  .sort_values(['level_0', 'level_1', 'Passed?'], ascending=[1, 1, 0])
                  .drop_duplicates(['level_0', 'level_1'])
                  .set_index(['level_0', 'level_1'])
                  .rename_axis([None, None]))
print (test_df)
Passed?
foo a True
b False
qux a False
b Ok
Another solution is simpler - sort first and then groupby with first (this works here because, as plain strings, 'True' > 'Ok' > 'False', so a descending sort puts the most positive result first):
test_df = (test_df.sort_values('Passed?', ascending=False)
                  .groupby(level=[0, 1])
                  .first())
print (test_df)
Passed?
foo a True
b False
qux a False
b Ok
EDIT1:
Convert the values to an ordered Categorical, so the sort follows the priority order rather than plain string order.
index = [np.array(['foo', 'foo', 'foo', 'foo', 'qux', 'qux', 'qux']), np.array(['a', 'a', 'b', 'b', 'a', 'b', 'b'])]
data = np.array(['False', 'True', 'False', 'False', 'False', 'Acceptable', 'False'])
columns = ["Passed?"]
test_df = pd.DataFrame(data, index=index, columns=columns)
#print (test_df)
cat = ['False', 'Acceptable','True']
test_df["Passed?"] = test_df["Passed?"].astype('category', categories=cat, ordered=True)
print (test_df["Passed?"])
foo a False
a True
b False
b False
qux a False
b Acceptable
b False
Name: Passed?, dtype: category
Categories (3, object): [False < Acceptable < True]
test_df = test_df.sort_values('Passed?', ascending=False).groupby(level=[0,1]).first()
print (test_df)
Passed?
foo a True
b False
qux a False
b Acceptable

Selecting data from a dataframe based on a tuple

Suppose I have the following dataframe
df = DataFrame({'vals': [1, 2, 3, 4],
                'ids': ['a', 'b', 'a', 'n']})
I want to select all the rows which are in the list
[(1, 'a'), (3, 'f')]
I have tried using boolean indexing like so
to_search = {'vals': [1, 3],
             'ids': ['a', 'f']}
df.isin(to_search)
I expect only the first row to match but I get the first and the third row
ids vals
0 True True
1 True False
2 True True
3 False False
Is there any way to match exactly the values at a particular index instead of matching any value?
You might create a DataFrame for what you want to match, and then merge it:
In [32]: df2 = DataFrame([[1,'a'],[3,'f']], columns=['vals', 'ids'])
In [33]: df.merge(df2)
Out[33]:
ids vals
0 a 1
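If you prefer not to merge, an alternative sketch (not from the original answer, and assuming a reasonably recent pandas with MultiIndex.from_frame) is to build a MultiIndex from the two columns and test membership against the exact pairs:
import pandas as pd

df = pd.DataFrame({'vals': [1, 2, 3, 4],
                   'ids': ['a', 'b', 'a', 'n']})
pairs = [(1, 'a'), (3, 'f')]

# Row-wise membership test against the exact (vals, ids) pairs
mask = pd.MultiIndex.from_frame(df[['vals', 'ids']]).isin(pairs)
print(df[mask])  # only the (1, 'a') row matches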