Conditional merge in pandas

My question is simple: I am using pd.merge to merge two DataFrames.
Here's the line of code:
pivoted = pd.merge(pivoted, concerned_data, on='A')
I want to merge on='B' whenever a row's column A value is null. Is there a possible way to do this?
Edit:
As an example, if
df1:  A  |  B  | randomval
      1  |  1  | ty
     NaN |  2  | asd
df2:  A  |  B  | randomval2
      1  | NaN | tyrte
      3  |  2  | asde
So if on='A' and the value is NaN in either of the DataFrames (for a single row), I want on='B' for that row only.
Thank you!

You could create a third column in your pandas.DataFrame which incorporates this logic and merge on this one.
For example, create dummy data
df1 = pd.DataFrame({"A" : [1, None], "B" : [1, 2], "Val1" : ["a", "b"]})
df2 = pd.DataFrame({"A" : [1, 2], "B" : [None, 2], "Val2" : ["c", "d"]})
Create a column C which has this logic:
df1["C"] = pd.concat([df1.loc[~df1.A.isna(), "A"], df1.loc[df1.A.isna(), "B"]],ignore_index=False)
df2["C"] = pd.concat([df2.loc[~df2.A.isna(), "A"], df2.loc[df2.A.isna(), "B"]],ignore_index=False)
Finally, merge on this common column and include only your value columns
df3 = pd.merge(df1[["Val1","C"]], df2[["Val2","C"]], on='C')
In [27]: df3
Out[27]:
Val1 C Val2
0 a 1.0 c
1 b 2.0 d
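If the fallback is simply "use B wherever A is missing", the same merge key can be built more compactly with Series.fillna (a minimal sketch, assuming the same dummy frames as above):
df1["C"] = df1["A"].fillna(df1["B"])
df2["C"] = df2["A"].fillna(df2["B"])
df3 = pd.merge(df1[["Val1", "C"]], df2[["Val2", "C"]], on="C")
This produces the same df3 as the pd.concat approach.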

Related

Split rows in Pandas into multiple rows based on a semicolon, but the change appears only in one column and not in others [duplicate]

I have a pandas dataframe in which one column of text strings contains comma-separated values. I want to split each CSV field and create a new row per entry (assume the CSV fields are clean and need only be split on ','). For example, a should become b:
In [7]: a
Out[7]:
var1 var2
0 a,b,c 1
1 d,e,f 2
In [8]: b
Out[8]:
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
So far, I have tried various simple functions, but the .apply method seems to only accept one row as return value when it is used on an axis, and I can't get .transform to work. Any suggestions would be much appreciated!
Example data:
from pandas import DataFrame
import numpy as np
a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
               {'var1': 'd,e,f', 'var2': 2}])
b = DataFrame([{'var1': 'a', 'var2': 1},
               {'var1': 'b', 'var2': 1},
               {'var1': 'c', 'var2': 1},
               {'var1': 'd', 'var2': 2},
               {'var1': 'e', 'var2': 2},
               {'var1': 'f', 'var2': 2}])
I know this won't work because we lose DataFrame meta-data by going through numpy, but it should give you a sense of what I tried to do:
def fun(row):
    letters = row['var1']
    letters = letters.split(',')
    out = np.array([row] * len(letters))
    out['var1'] = letters
    return out
a['idx'] = range(a.shape[0])
z = a.groupby('idx')
z.transform(fun)
UPDATE 3: it makes more sense to use the Series.explode() / DataFrame.explode() methods (implemented in pandas 0.25.0 and extended in pandas 1.3.0 to support multi-column explode), as shown in the usage example:
for a single column:
In [1]: df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],
...: 'B': 1,
...: 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
In [2]: df
Out[2]:
A B C
0 [0, 1, 2] 1 [a, b, c]
1 foo 1 NaN
2 [] 1 []
3 [3, 4] 1 [d, e]
In [3]: df.explode('A')
Out[3]:
A B C
0 0 1 [a, b, c]
0 1 1 [a, b, c]
0 2 1 [a, b, c]
1 foo 1 NaN
2 NaN 1 []
3 3 1 [d, e]
3 4 1 [d, e]
for multiple columns (for Pandas 1.3.0+):
In [4]: df.explode(['A', 'C'])
Out[4]:
A B C
0 0 1 a
0 1 1 b
0 2 1 c
1 foo 1 NaN
2 NaN 1 NaN
3 3 1 d
3 4 1 e
UPDATE 2: more generic vectorized function, which will work for multiple normal and multiple list columns
def explode(df, lst_cols, fill_value='', preserve_index=False):
    # make sure `lst_cols` is list-alike
    if (lst_cols is not None
            and len(lst_cols) > 0
            and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))):
        lst_cols = [lst_cols]
    # all columns except `lst_cols`
    idx_cols = df.columns.difference(lst_cols)
    # calculate lengths of lists
    lens = df[lst_cols[0]].str.len()
    # preserve original index values
    idx = np.repeat(df.index.values, lens)
    # create "exploded" DF
    res = (pd.DataFrame({
                col: np.repeat(df[col].values, lens)
                for col in idx_cols},
                index=idx)
             .assign(**{col: np.concatenate(df.loc[lens > 0, col].values)
                        for col in lst_cols}))
    # append those rows that have empty lists
    # (note: DataFrame.append was removed in pandas 2.0; use pd.concat there)
    if (lens == 0).any():
        # at least one list in cells is empty
        res = (res.append(df.loc[lens == 0, idx_cols], sort=False)
                  .fillna(fill_value))
    # revert the original index order
    res = res.sort_index()
    # reset index if requested
    if not preserve_index:
        res = res.reset_index(drop=True)
    return res
Demo:
Multiple list columns - all list columns must have the same # of elements in each row:
In [134]: df
Out[134]:
aaa myid num text
0 10 1 [1, 2, 3] [aa, bb, cc]
1 11 2 [] []
2 12 3 [1, 2] [cc, dd]
3 13 4 [] []
In [135]: explode(df, ['num','text'], fill_value='')
Out[135]:
aaa myid num text
0 10 1 1 aa
1 10 1 2 bb
2 10 1 3 cc
3 11 2
4 12 3 1 cc
5 12 3 2 dd
6 13 4
preserving original index values:
In [136]: explode(df, ['num','text'], fill_value='', preserve_index=True)
Out[136]:
aaa myid num text
0 10 1 1 aa
0 10 1 2 bb
0 10 1 3 cc
1 11 2
2 12 3 1 cc
2 12 3 2 dd
3 13 4
Setup:
df = pd.DataFrame({
    'aaa': {0: 10, 1: 11, 2: 12, 3: 13},
    'myid': {0: 1, 1: 2, 2: 3, 3: 4},
    'num': {0: [1, 2, 3], 1: [], 2: [1, 2], 3: []},
    'text': {0: ['aa', 'bb', 'cc'], 1: [], 2: ['cc', 'dd'], 3: []}
})
CSV column:
In [46]: df
Out[46]:
var1 var2 var3
0 a,b,c 1 XX
1 d,e,f,x,y 2 ZZ
In [47]: explode(df.assign(var1=df.var1.str.split(',')), 'var1')
Out[47]:
var1 var2 var3
0 a 1 XX
1 b 1 XX
2 c 1 XX
3 d 2 ZZ
4 e 2 ZZ
5 f 2 ZZ
6 x 2 ZZ
7 y 2 ZZ
using this little trick we can convert CSV-like column to list column:
In [48]: df.assign(var1=df.var1.str.split(','))
Out[48]:
var1 var2 var3
0 [a, b, c] 1 XX
1 [d, e, f, x, y] 2 ZZ
UPDATE: generic vectorized approach (will work also for multiple columns):
Original DF:
In [177]: df
Out[177]:
var1 var2 var3
0 a,b,c 1 XX
1 d,e,f,x,y 2 ZZ
Solution:
first let's convert CSV strings to lists:
In [178]: lst_col = 'var1'
In [179]: x = df.assign(**{lst_col:df[lst_col].str.split(',')})
In [180]: x
Out[180]:
var1 var2 var3
0 [a, b, c] 1 XX
1 [d, e, f, x, y] 2 ZZ
Now we can do this:
In [181]: pd.DataFrame({
...: col:np.repeat(x[col].values, x[lst_col].str.len())
...: for col in x.columns.difference([lst_col])
...: }).assign(**{lst_col:np.concatenate(x[lst_col].values)})[x.columns.tolist()]
...:
Out[181]:
var1 var2 var3
0 a 1 XX
1 b 1 XX
2 c 1 XX
3 d 2 ZZ
4 e 2 ZZ
5 f 2 ZZ
6 x 2 ZZ
7 y 2 ZZ
OLD answer:
Inspired by #AFinkelstein's solution, I wanted to make it a bit more generalized, so it can be applied to a DF with more than two columns, and as fast (well, almost as fast) as AFinkelstein's solution:
In [2]: df = pd.DataFrame(
...: [{'var1': 'a,b,c', 'var2': 1, 'var3': 'XX'},
...: {'var1': 'd,e,f,x,y', 'var2': 2, 'var3': 'ZZ'}]
...: )
In [3]: df
Out[3]:
var1 var2 var3
0 a,b,c 1 XX
1 d,e,f,x,y 2 ZZ
In [4]: (df.set_index(df.columns.drop('var1').tolist())
...: .var1.str.split(',', expand=True)
...: .stack()
...: .reset_index()
...: .rename(columns={0:'var1'})
...: .loc[:, df.columns]
...: )
Out[4]:
var1 var2 var3
0 a 1 XX
1 b 1 XX
2 c 1 XX
3 d 2 ZZ
4 e 2 ZZ
5 f 2 ZZ
6 x 2 ZZ
7 y 2 ZZ
After painful experimentation to find something faster than the accepted answer, I got this to work. It ran around 100x faster on the dataset I tried it on.
If someone knows a way to make this more elegant, by all means please modify my code. I couldn't find a way that works without setting the other columns you want to keep as the index, then resetting the index and renaming the columns, but I'd imagine there's something else that works.
b = DataFrame(a.var1.str.split(',').tolist(), index=a.var2).stack()
b = b.reset_index()[[0, 'var2']] # var1 variable is currently labeled 0
b.columns = ['var1', 'var2'] # renaming var1
Pandas >= 0.25
Series and DataFrame define an .explode() method that explodes lists into separate rows. See the docs section on Exploding a list-like column.
Since you have a list of comma separated strings, split the string on comma to get a list of elements, then call explode on that column.
df = pd.DataFrame({'var1': ['a,b,c', 'd,e,f'], 'var2': [1, 2]})
df
var1 var2
0 a,b,c 1
1 d,e,f 2
df.assign(var1=df['var1'].str.split(',')).explode('var1')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
Note that explode only works on a single column (for now). To explode multiple columns at once, see below.
NaNs and empty lists get the treatment they deserve without you having to jump through hoops to get it right.
df = pd.DataFrame({'var1': ['d,e,f', '', np.nan], 'var2': [1, 2, 3]})
df
var1 var2
0 d,e,f 1
1 2
2 NaN 3
df['var1'].str.split(',')
0 [d, e, f]
1 []
2 NaN
df.assign(var1=df['var1'].str.split(',')).explode('var1')
var1 var2
0 d 1
0 e 1
0 f 1
1 2 # empty list entry becomes empty string after exploding
2 NaN 3 # NaN left un-touched
This is a serious advantage over ravel/repeat -based solutions (which ignore empty lists completely, and choke on NaNs).
Exploding Multiple Columns
pandas 1.3 update
df.explode works on multiple columns starting from pandas 1.3:
df = pd.DataFrame({'var1': ['a,b,c', 'd,e,f'],
'var2': ['i,j,k', 'l,m,n'],
'var3': [1, 2]})
df
var1 var2 var3
0 a,b,c i,j,k 1
1 d,e,f l,m,n 2
(df.set_index(['var3'])
.apply(lambda col: col.str.split(','))
.explode(['var1', 'var2'])
.reset_index()
.reindex(df.columns, axis=1))
var1 var2 var3
0 a i 1
1 b j 1
2 c k 1
3 d l 2
4 e m 2
5 f n 2
On older versions, you would move the explode call inside the apply, which is a lot less performant:
(df.set_index(['var3'])
.apply(lambda col: col.str.split(',').explode())
.reset_index()
.reindex(df.columns, axis=1))
The idea is to set as the index, all the columns that should NOT be exploded, then explode the remaining columns via apply. This works well when the lists are equally sized.
How about something like this:
In [55]: pd.concat([Series(row['var2'], row['var1'].split(','))
for _, row in a.iterrows()]).reset_index()
Out[55]:
index 0
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
Then you just have to rename the columns
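For example, a minimal sketch chaining the rename onto the same expression (after reset_index, the split values live in a column named 'index' and the var2 values in a column named 0):
(pd.concat([Series(row['var2'], row['var1'].split(','))
            for _, row in a.iterrows()])
   .reset_index()
   .rename(columns={'index': 'var1', 0: 'var2'}))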
Here's a function I wrote for this common task. It's more efficient than the Series/stack methods. Column order and names are retained.
def tidy_split(df, column, sep='|', keep=False):
    """
    Split the values of a column and expand so the new DataFrame has one split
    value per row. Filters rows where the column is missing.

    Params
    ------
    df : pandas.DataFrame
        dataframe with the column to split and expand
    column : str
        the column to split and expand
    sep : str
        the string used to split the column's values
    keep : bool
        whether to retain the presplit value as its own row

    Returns
    -------
    pandas.DataFrame
        Returns a dataframe with the same columns as `df`.
    """
    indexes = list()
    new_values = list()
    df = df.dropna(subset=[column])
    for i, presplit in enumerate(df[column].astype(str)):
        values = presplit.split(sep)
        if keep and len(values) > 1:
            indexes.append(i)
            new_values.append(presplit)
        for value in values:
            indexes.append(i)
            new_values.append(value)
    new_df = df.iloc[indexes, :].copy()
    new_df[column] = new_values
    return new_df
With this function, the original question is as simple as:
tidy_split(a, 'var1', sep=',')
Similar question as: pandas: How do I split text in a column into multiple rows?
You could do:
>> a=pd.DataFrame({"var1":"a,b,c d,e,f".split(),"var2":[1,2]})
>> s = a.var1.str.split(",").apply(pd.Series, 1).stack()
>> s.index = s.index.droplevel(-1)
>> del a['var1']
>> a.join(s)
var2 var1
0 1 a
0 1 b
0 1 c
1 2 d
1 2 e
1 2 f
It is possible to split and explode the dataframe without changing its structure.
Split and expand data of specific columns
Input:
var1 var2
0 a,b,c 1
1 d,e,f 2
# split into lists, then explode; the original index repeats for each split element
df['var1'] = df['var1'].str.split(',')
df = df.explode('var1')
Out:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
Edit-1
Split and Expand of rows for Multiple columns
Filename RGB RGB_type
0 A [[0, 1650, 6, 39], [0, 1691, 1, 59], [50, 1402... [r, g, b]
1 B [[0, 1423, 16, 38], [0, 1445, 16, 46], [0, 141... [r, g, b]
Re-indexing based on the reference column and aligning the column value information with stack:
df = df.reindex(df.index.repeat(df['RGB_type'].apply(len)))
df = df.groupby('Filename').apply(lambda x:x.apply(lambda y: pd.Series(y.iloc[0])))
df.reset_index(drop=True).ffill()
Out:
Filename RGB_type Top 1 colour Top 1 frequency Top 2 colour Top 2 frequency
Filename
A 0 A r 0 1650 6 39
1 A g 0 1691 1 59
2 A b 50 1402 49 187
B 0 B r 0 1423 16 38
1 B g 0 1445 16 46
2 B b 0 1419 16 39
TL;DR
import pandas as pd
import numpy as np
def explode_str(df, col, sep):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
    return df.iloc[i].assign(**{col: sep.join(s).split(sep)})

def explode_list(df, col):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.len())
    return df.iloc[i].assign(**{col: np.concatenate(s)})
Demonstration
explode_str(a, 'var1', ',')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
Let's create a new dataframe d that has lists
d = a.assign(var1=lambda d: d.var1.str.split(','))
explode_list(d, 'var1')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
General Comments
I'll use np.arange with repeat to produce dataframe index positions that I can use with iloc.
FAQ
Why don't I use loc?
Because the index may not be unique and using loc will return every row that matches a queried index.
Why don't you use the values attribute and slice that?
When calling values, if the entirety of the dataframe is in one cohesive "block", Pandas will return a view of the array that is the "block". Otherwise Pandas will have to cobble together a new array. When cobbling, that array must be of a uniform dtype. Often that means returning an array with dtype object. By using iloc instead of slicing the values attribute, I spare myself from having to deal with that.
Why do you use assign?
When I use assign using the same column name that I'm exploding, I overwrite the existing column and maintain its position in the dataframe.
Why are the index values repeated?
By virtue of using iloc on repeated positions, the resulting index shows the same repeated pattern: one repeat for each element in the list or string.
This can be reset with reset_index(drop=True)
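For example, a quick sketch using the explode_str helper from the TL;DR above:
explode_str(a, 'var1', ',').reset_index(drop=True)
This returns the same rows with a fresh 0..n-1 index.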
For Strings
I don't want to have to split the strings prematurely. So instead I count the occurrences of the sep argument assuming that if I were to split, the length of the resulting list would be one more than the number of separators.
I then use that sep to join the strings then split.
def explode_str(df, col, sep):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
    return df.iloc[i].assign(**{col: sep.join(s).split(sep)})
For Lists
Similar to the string case, except I don't need to count occurrences of sep because it's already split.
I use Numpy's concatenate to jam the lists together.
import pandas as pd
import numpy as np
def explode_list(df, col):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.len())
    return df.iloc[i].assign(**{col: np.concatenate(s)})
I came up with a solution for dataframes with arbitrary numbers of columns (while still only separating one column's entries at a time).
def splitDataFrameList(df, target_column, separator):
    '''df = dataframe to split,
    target_column = the column containing the values to split
    separator = the symbol used to perform the split

    returns: a dataframe with each entry for the target column separated,
    with each element moved into a new row. The values in the other columns
    are duplicated across the newly divided rows.
    '''
    def splitListToRows(row, row_accumulator, target_column, separator):
        split_row = row[target_column].split(separator)
        for s in split_row:
            new_row = row.to_dict()
            new_row[target_column] = s
            row_accumulator.append(new_row)
    new_rows = []
    df.apply(splitListToRows, axis=1, args=(new_rows, target_column, separator))
    new_df = pandas.DataFrame(new_rows)  # assumes `import pandas` (not the pd alias)
    return new_df
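A usage sketch for the sample frame a from the question (note the body calls pandas.DataFrame, so it assumes import pandas rather than the pd alias):
splitDataFrameList(a, 'var1', ',')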
Here is a fairly straightforward method that uses the split method from the pandas str accessor and then uses NumPy to flatten each row into a single array.
The corresponding values are retrieved by repeating the non-split column the correct number of times with np.repeat.
var1 = df.var1.str.split(',', expand=True).values.ravel()
var2 = np.repeat(df.var2.values, len(var1) / len(df))
pd.DataFrame({'var1': var1,
'var2': var2})
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
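Note this assumes every row splits into the same number of elements, since len(var1) / len(df) is a single scalar repeat count. A hedged variant for ragged rows computes per-row lengths instead:
# per-row element counts, so rows may split into different numbers of parts
lens = df.var1.str.count(',') + 1
var1 = np.concatenate(df.var1.str.split(',').values)
var2 = np.repeat(df.var2.values, lens)
pd.DataFrame({'var1': var1, 'var2': var2})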
I have been struggling with out-of-memory issues using various ways to explode my lists, so I prepared some benchmarks to help me decide which answers to upvote. I tested five scenarios with varying proportions of the list length to the number of lists. Sharing the results below:
Time: (less is better; chart available in the linked gist)
Peak memory usage: (less is better; chart available in the linked gist)
Conclusions:
#MaxU's answer (update 2), codename concatenate, offers the best speed in almost every case while keeping the peak memory usage low,
see #DMulligan's answer (codename stack) if you need to process lots of rows with relatively small lists and can afford increased peak memory,
#Chang's accepted answer works well for data frames that have a few rows but very large lists.
Full details (functions and benchmarking code) are in this GitHub gist. Please note that the benchmark problem was simplified and did not include splitting of strings into the list - which most solutions performed in a similar fashion.
One-liner using split(___, expand=True) and the level and name arguments to reset_index():
>>> b = a.var1.str.split(',', expand=True).set_index(a.var2).stack().reset_index(level=0, name='var1')
>>> b
var2 var1
0 1 a
1 1 b
2 1 c
0 2 d
1 2 e
2 2 f
If you need b to look exactly like in the question, you can additionally do:
>>> b = b.reset_index(drop=True)[['var1', 'var2']]
>>> b
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
Based on the excellent #DMulligan's solution, here is a generic vectorized (no loops) function which splits a column of a dataframe into multiple rows, and merges it back to the original dataframe. It also uses a great generic change_column_order function from this answer.
def change_column_order(df, col_name, index):
    cols = df.columns.tolist()
    cols.remove(col_name)
    cols.insert(index, col_name)
    return df[cols]

def split_df(dataframe, col_name, sep):
    orig_col_index = dataframe.columns.tolist().index(col_name)
    orig_index_name = dataframe.index.name
    orig_columns = dataframe.columns
    dataframe = dataframe.reset_index()  # we need a natural 0-based index for proper merge
    index_col_name = (set(dataframe.columns) - set(orig_columns)).pop()
    df_split = pd.DataFrame(
        pd.DataFrame(dataframe[col_name].str.split(sep).tolist())
        .stack().reset_index(level=1, drop=1), columns=[col_name])
    df = dataframe.drop(col_name, axis=1)
    df = pd.merge(df, df_split, left_index=True, right_index=True, how='inner')
    df = df.set_index(index_col_name)
    df.index.name = orig_index_name
    # merge adds the column to the last place, so we need to move it back
    return change_column_order(df, col_name, orig_col_index)
Example:
df = pd.DataFrame([['a:b', 1, 4], ['c:d', 2, 5], ['e:f:g:h', 3, 6]],
                  columns=['Name', 'A', 'B'], index=[10, 12, 13])
df
Name A B
10 a:b 1 4
12 c:d 2 5
13 e:f:g:h 3 6
split_df(df, 'Name', ':')
Name A B
10 a 1 4
10 b 1 4
12 c 2 5
12 d 2 5
13 e 3 6
13 f 3 6
13 g 3 6
13 h 3 6
Note that it preserves the original index and order of the columns. It also works with dataframes which have non-sequential index.
The string function split can take an optional boolean argument 'expand'.
Here is a solution using this argument:
(a.var1
.str.split(",",expand=True)
.set_index(a.var2)
.stack()
.reset_index(level=1, drop=True)
.reset_index()
.rename(columns={0:"var1"}))
I really do appreciate the answer of "Chang She", but the iterrows() function takes a long time on large datasets. I faced that issue and I came to this.
# First, reset_index to make the index a column
a = a.reset_index().rename(columns={'index':'duplicated_idx'})
# Get a longer series with exploded cells to rows
series = pd.DataFrame(a['var1'].str.split(',')
                      .tolist(), index=a.duplicated_idx).stack()
# New df from series and merge with the old one
b = series.reset_index([0, 'duplicated_idx'])
b = b.rename(columns={0:'var1'})
# Optional & Advanced: in case there are other columns apart from var1 & var2
b.merge(
a[a.columns.difference(['var1'])],
on='duplicated_idx')
# Optional: Delete the "duplicated_index"'s column, and reorder columns
b = b[a.columns.difference(['duplicated_idx'])]
One-liner using assign and explode:
col1 col2
0 a,b,c 1
1 d,e,f 2
df.assign(col1 = df.col1.str.split(',')).explode('col1', ignore_index=True)
Output:
col1 col2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
I just used jiln's excellent answer from above, but needed to extend it to split multiple columns. Thought I would share.
def splitDataFrameList(df, target_columns, separator):
    '''df = dataframe to split,
    target_columns = the columns containing the values to split
    separator = the symbol used to perform the split

    returns: a dataframe with each entry for the target columns separated,
    with each element moved into a new row. The values in the other columns
    are duplicated across the newly divided rows.
    '''
    def splitListToRows(row, row_accumulator, target_columns, separator):
        split_rows = []
        for target_column in target_columns:
            split_rows.append(row[target_column].split(separator))
        # separate for multiple columns
        for i in range(len(split_rows[0])):
            new_row = row.to_dict()
            for j in range(len(split_rows)):
                new_row[target_columns[j]] = split_rows[j][i]
            row_accumulator.append(new_row)
    new_rows = []
    df.apply(splitListToRows, axis=1, args=(new_rows, target_columns, separator))
    new_df = pd.DataFrame(new_rows)
    return new_df
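A usage sketch with two comma-separated columns (a hypothetical frame; both columns must split into the same number of parts per row):
df = pd.DataFrame({'var1': ['a,b,c', 'd,e,f'],
                   'var2': ['i,j,k', 'l,m,n'],
                   'var3': [1, 2]})
splitDataFrameList(df, ['var1', 'var2'], ',')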
upgraded MaxU's answer with MultiIndex support
def explode(df, lst_cols, fill_value='', preserve_index=False):
    """
    usage:
        In [134]: df
        Out[134]:
           aaa  myid        num          text
        0   10     1  [1, 2, 3]  [aa, bb, cc]
        1   11     2         []            []
        2   12     3     [1, 2]      [cc, dd]
        3   13     4         []            []

        In [135]: explode(df, ['num','text'], fill_value='')
        Out[135]:
           aaa  myid num text
        0   10     1   1   aa
        1   10     1   2   bb
        2   10     1   3   cc
        3   11     2
        4   12     3   1   cc
        5   12     3   2   dd
        6   13     4
    """
    # make sure `lst_cols` is list-alike
    if (lst_cols is not None
            and len(lst_cols) > 0
            and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))):
        lst_cols = [lst_cols]
    # all columns except `lst_cols`
    idx_cols = df.columns.difference(lst_cols)
    # calculate lengths of lists
    lens = df[lst_cols[0]].str.len()
    # preserve original index values
    idx = np.repeat(df.index.values, lens)
    # create "exploded" DF
    res = (pd.DataFrame({
                col: np.repeat(df[col].values, lens)
                for col in idx_cols},
                index=idx)
             .assign(**{col: np.concatenate(df.loc[lens > 0, col].values)
                        for col in lst_cols}))
    # append those rows that have empty lists
    if (lens == 0).any():
        # at least one list in cells is empty
        res = (res.append(df.loc[lens == 0, idx_cols], sort=False)
                  .fillna(fill_value))
    # revert the original index order
    res = res.sort_index()
    # reset index if requested
    if not preserve_index:
        res = res.reset_index(drop=True)
    # if the original index is a MultiIndex, rebuild it from the (possibly
    # duplicated) tuples; assigning the index directly avoids reindex, which
    # fails on duplicate labels
    if isinstance(df.index, pd.MultiIndex):
        res.index = pd.MultiIndex.from_tuples(res.index, names=df.index.names)
    return res
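A hedged usage sketch (hypothetical data; the list columns explode against a two-level MultiIndex, and preserve_index=True keeps the tuple index so it can be rebuilt):
idx = pd.MultiIndex.from_tuples([(1, 'red'), (2, 'blue')],
                                names=['number', 'color'])
df = pd.DataFrame({'num': [[1, 2], [3]],
                   'text': [['aa', 'bb'], ['cc']]}, index=idx)
explode(df, ['num', 'text'], preserve_index=True)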
My version of the solution to add to this collection! :-)
# Original problem
from pandas import DataFrame
import numpy as np
a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
               {'var1': 'd,e,f', 'var2': 2}])
b = DataFrame([{'var1': 'a', 'var2': 1},
               {'var1': 'b', 'var2': 1},
               {'var1': 'c', 'var2': 1},
               {'var1': 'd', 'var2': 2},
               {'var1': 'e', 'var2': 2},
               {'var1': 'f', 'var2': 2}])
### My solution
import pandas as pd
import functools
def expand_on_cols(df, fuse_cols, delim=","):
    def expand_on_col(df, fuse_col):
        col_order = df.columns
        df_expanded = pd.DataFrame(
            df.set_index([x for x in df.columns if x != fuse_col])[fuse_col]
            .apply(lambda x: x.split(delim))
            .explode()
        ).reset_index()
        return df_expanded[col_order]

    all_expanded = functools.reduce(expand_on_col, fuse_cols, df)
    return all_expanded
assert(b.equals(expand_on_cols(a, ["var1"], delim=",")))
I have come up with the following solution to this problem:
def iter_var1(d):
    for _, row in d.iterrows():
        for v in row["var1"].split(","):
            yield (v, row["var2"])

new_a = DataFrame.from_records([i for i in iter_var1(a)],
                               columns=["var1", "var2"])
Another solution that uses the Python copy package:
import copy

def pandas_explode(df, column_to_explode):
    new_observations = list()
    for row in df.to_dict(orient='records'):
        explode_values = row[column_to_explode]
        del row[column_to_explode]
        if type(explode_values) is list or type(explode_values) is tuple:
            for explode_value in explode_values:
                new_observation = copy.deepcopy(row)
                new_observation[column_to_explode] = explode_value
                new_observations.append(new_observation)
        else:
            new_observation = copy.deepcopy(row)
            new_observation[column_to_explode] = explode_values
            new_observations.append(new_observation)
    return_df = pd.DataFrame(new_observations)
    return return_df

df = pandas_explode(df, column_name)
There are a lot of answers here but I'm surprised no one has mentioned the built in pandas explode function. Check out the link below:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html#pandas.DataFrame.explode
For some reason I was unable to access that function, so I used the below code:
import pandas_explode
pandas_explode.patch()
df_zlp_people_cnt3 = df_zlp_people_cnt2.explode('people')
My data's people column held a series of people per row (the sample was shown as an image in the original post), and I was trying to explode it. The code given works for list-type data, so try to get your comma-separated text data into list format. And since the code uses built-in functions, it is much faster than custom/apply functions.
Note: You may need to install pandas_explode with pip.
I had a similar problem; my solution was converting the dataframe to a list of dictionaries first and then doing the transition. Here is the function:
import re
import pandas as pd
def separate_row(df, column_name):
    ls = []
    for row_dict in df.to_dict('records'):
        for word in re.split(',', row_dict[column_name]):
            row = row_dict.copy()
            row[column_name] = word
            ls.append(row)
    return pd.DataFrame(ls)
Example:
>>> from pandas import DataFrame
>>> import numpy as np
>>> a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
...                {'var1': 'd,e,f', 'var2': 2}])
>>> a
var1 var2
0 a,b,c 1
1 d,e,f 2
>>> separate_row(a, "var1")
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
You can also change the function a bit to support separating list type rows.
Putting together bits and pieces from all the solutions on this page, I was able to get something like this (for someone who needs to use it right away).
The parameters to the function are df (the input dataframe) and key (the column that has the delimiter-separated string). Just replace with your delimiter if it differs from the semicolon ";".
def split_df_rows_for_semicolon_separated_key(key, df):
    df = (df.set_index(df.columns.drop(key).tolist())[key]
            .str.split(';', expand=True)
            .stack()
            .reset_index()
            .rename(columns={0: key})
            .loc[:, df.columns])
    df = df[df[key] != '']
    return df
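A usage sketch with a hypothetical frame:
df = pd.DataFrame({'key': ['a;b;c', 'd;e'], 'other': [1, 2]})
split_df_rows_for_semicolon_separated_key('key', df)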
Try:
vals = np.array(a.var1.str.split(",").values.tolist())
var = np.repeat(a.var2, vals.shape[1])
out = pd.DataFrame(np.column_stack((vals.ravel(), var)), columns=a.columns)
display(out)
  var1 var2
0    a    1
1    b    1
2    c    1
3    d    2
4    e    2
5    f    2
In recent versions of pandas, you can use split followed by explode:
a.assign(var1=a['var1'].str.split(',')).explode('var1')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
A short and simple way to change the format of the column using .apply() so that it can be used by .explode():
import string
import pandas as pd
from io import StringIO
file = StringIO("""  var1  var2
0  a,b,c  1
1  d,e,f  2""")
df = pd.read_csv(file, sep=r'\s\s+')
df['var1'] = df['var1'].apply(lambda x : str(x).split(','))
df.explode('var1')
Output:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2

How to add values to an multiindexed column dataframe

My dataframe is
a b
1 2 1 2
0 0.281045 0.975469 -0.538213 -0.180008
1 0.128696 1.875480 0.247637 -0.047927
I want to insert the matrix to (a,3), (b, 3)
[[1, 1],
[1, 1]]
a b
1 2 3 1 2 3
0 0.281045 0.975469 1. -0.538213 -0.180008 1.
1 0.128696 1.875480 1. 0.247637 -0.047922 1.
It seems like there is no decent way to add values to a multiindexed dataframe. Here is the code that I tried:
df[:, :, 3] = [[1, 1],
               [1, 1]]
But it didn't work...
You can create a new DataFrame with a MultiIndex and then join it to the data with DataFrame.join, sorting the MultiIndex:
arr = np.array([[1, 1], [1, 1]])
df1 = pd.DataFrame(arr,
                   index=df.index,
                   columns=pd.MultiIndex.from_product([df.columns.levels[0], [3]]))
df = df.join(df1).sort_index(axis=1)
print (df)
print (df)
a b
1 2 3 1 2 3
0 0.281045 0.975469 1 -0.538213 -0.180008 1
1 0.128696 1.875480 1 0.247637 -0.047927 1
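Alternatively, a minimal sketch of the same idea without building a second frame: assign each new column with a tuple key, then sort (here the inserted values are all 1, matching the matrix in the question):
for top in df.columns.levels[0]:
    df[(top, 3)] = 1
df = df.sort_index(axis=1)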

Copy columns to dataframe using panda

I have two dataframes and I want to copy the values from one to the other, but copying the column values returned NaN.
These are my df:
data1 = [[1, 2], [3, 4], [5, 6]]
rc = pd.DataFrame(data1, columns = ['Sold', 'Leads'])
data2 = [['Company1','2017-05-01',0, 0], ['Company1','2017-05-01',0, 0], ['Company1','2017-05-01',0, 0]]
final = pd.DataFrame(data2, columns = ['company','date','2019_sold', '2019_leads'])
I tried loc indexing
final.loc[(final['date'] == '2017-05-01') & (final['company'] == 'Company1'),['2019_sold','2019_leads']] = rc[['Leads','Sold']]
I expected this to copy the exact values from the rc df to the final df, but the values returned NaN.
By using update
rc.index=final.index[(final['date'] == '2017-05-01') & (final['company'] == 'Company1')]
rc.columns=['2019_sold','2019_leads']
final.update(rc)
final
Out[165]:
company date 2019_sold 2019_leads
0 Company1 2017-05-01 1 2
1 Company1 2017-05-01 3 4
2 Company1 2017-05-01 5 6
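The NaNs in the original attempt come from index alignment: rc has different column names (and potentially a different index), so the .loc assignment aligns on them and finds nothing. Another hedged fix is to bypass alignment with .values:
mask = (final['date'] == '2017-05-01') & (final['company'] == 'Company1')
final.loc[mask, ['2019_sold', '2019_leads']] = rc[['Sold', 'Leads']].values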

Find rows in dataframe with dict

df = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6]})
produces
a b
0 1 4
1 2 5
2 3 6
Given a dict
d = {'a': 2, 'b': 5}
how would I extract the rows of the dataframe where the dict's key/value pairs match all the column values -- so in this case
a b
1 2 5
You can compare with Series and filter:
df[(df == pd.Series(d)).all(1)]
a b
1 2 5
This comparison is aligned on the index/columns and broadcasted for each row.
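An equivalent sketch that reduces per-key boolean masks instead of building a Series:
import numpy as np
mask = np.logical_and.reduce([df[k] == v for k, v in d.items()])
df[mask]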
Compare the values and use indexing,
df[ (df.values == np.array(list(d.values()))).all(1) ]
a b
1 2 5

Julia: converting column type from Integer to Float64 in a DataFrame

I am trying to change the type of the numbers in a column of a DataFrame from integer to floating point. It should be straightforward to do this, but it's not working. The data type remains integer. What am I missing?
In [2]: using DataFrames
df = DataFrame(A = 1:4, B = ["M", "F", "F", "M"])
Out [2]: 4x2 DataFrame
| Row | A | B |
|-----|---|-----|
| 1 | 1 | "M" |
| 2 | 2 | "F" |
| 3 | 3 | "F" |
| 4 | 4 | "M" |
In [3]: df[:,:A] = float64(df[:,:A])
Out [3]: 4-element DataArray{Float64,1}:
1.0
2.0
3.0
4.0
In [4]: df
Out [4]: 4x2 DataFrame
| Row | A | B |
|-----|---|-----|
| 1 | 1 | "M" |
| 2 | 2 | "F" |
| 3 | 3 | "F" |
| 4 | 4 | "M" |
In [5]: typeof(df[:,:A])
Out [5]: DataArray{Int64,1} (constructor with 1 method)
The reason this happens is mutation and conversion.
If you have two vectors
a = [1:3]
b = [4:6]
you can make x refer to one of them with assignment.
x = a
Now x and a refer to the same vector [1, 2, 3]. If you then assign b to x
x = b
you have now changed x to refer to the same vector as b refers to.
You can also mutate vectors by copying over the values in one vector to the other. If you do
x[:] = a
you copy the values of vector a into the vector that x (and b) refer to, so now you have two vectors containing [1, 2, 3].
Then there is also conversion. If you copy a value of one type into a vector of another type, Julia will attempt to convert the value to the element type of the vector.
x[1] = 5.0
This gives you the vector [5, 2, 3], because Julia converted the Float64 value 5.0 to the Int value 5. If you tried
x[1] = 5.5
Julia will throw an InexactError() because 5.5 can't be losslessly converted to an integer.
When it comes to DataFrames things work the same as long as you realize a DataFrame is a collection of named references to vectors. So what you are doing when constructing the DataFrame in this call
df = DataFrame(A = 1:4, B = ["M", "F", "F", "M"])
is that you create the vector [1, 2, 3, 4], and the vector ["M", "F", "F", "M"]. You then construct a DataFrame with references to these two new vectors.
Later when you do
df[:,:A] = float64(df[:,:A])
you first create a new vector by converting the values in the vector [1, 2, 3, 4] into Float64. You then mutate the vector referred to with df[:A] by copying over the values in the Float64 vector back into the Int vector, which causes Julia to convert the values back to Int.
What Colin T Bower's answer
df[:A] = float64(df[:A])
does is that, rather than mutating the vector referred to by the DataFrame, it changes the reference to refer to the vector with the Float64 values.
I hope this makes sense.
Try this:
df[:A] = float64(df[:A])
This works for me on Julia v0.3.5 with DataFrames v0.6.1.
This is quite interesting though. Notice that:
df[:, :A] = [2.0, 2.0, 3.0, 4.0]
will change the contents of the column to [2,2,3,4], but leaves the type as Int64, while
df[:A] = [2.0, 2.0, 3.0, 4.0]
will also change the type.
I just had quick look at the manual and couldn't see any reference to this behaviour (admittedly it was a very quick look). But I find this unintuitive enough that perhaps it is worth filing an issue.