Find a list as a sublist of each group's list - pandas

I have a dataframe, which has 2 columns,
a b
0 1 2
1 1 1
2 1 1
3 1 2
4 1 1
5 2 0
6 2 1
7 2 1
8 2 2
9 2 2
10 2 1
11 2 1
12 2 2
Is there a direct way to make a third column as below
a b c
0 1 2 0
1 1 1 1
2 1 1 0
3 1 2 1
4 1 1 0
5 2 0 0
6 2 1 1
7 2 1 0
8 2 2 1
9 2 2 0
10 2 1 0
11 2 1 0
12 2 2 0
where the target [1, 2] is a sublist of df.groupby('a').b.apply(list); in every group, find the first 2 rows that match the target.
df.groupby('a').b.apply(list) gives
1 [2, 1, 1, 2, 1]
2 [0, 1, 1, 2, 2, 1, 1, 2]
[1,2] is a sublist of [2, 1, 1, 2, 1] and [0, 1, 1, 2, 2, 1, 1, 2]
so far, I have a function
def is_sub_with_gap(sub, lst):
    '''
    check if sub is a sublist of lst
    '''
    ln, j = len(sub), 0
    ans = []
    for i, ele in enumerate(lst):
        if ele == sub[j]:
            j += 1
            ans.append(i)
            if j == ln:
                return True, ans
    return False, []
Testing the function:
In [55]: is_sub_with_gap([1,2], [2, 1, 1, 2, 1])
Out[55]: (True, [1, 3])

You can change the output by selecting the index values of the groups in a custom function, flattening with Series.explode, and then testing index membership with Index.isin:
L = [1, 2]

def is_sub_with_gap(sub, lst):
    '''
    check if sub is a sublist of lst
    '''
    ln, j = len(sub), 0
    ans = []
    for i, ele in enumerate(lst):
        if ele == sub[j]:
            j += 1
            ans.append(i)
            if j == ln:
                return lst.index[ans]
    return []
idx = df.groupby('a').b.apply(lambda x: is_sub_with_gap(L, x)).explode()
df['c'] = df.index.isin(idx).view('i1')
print (df)
a b c
0 1 2 0
1 1 1 1
2 1 1 0
3 1 2 1
4 1 1 0
5 2 0 0
6 2 1 1
7 2 1 0
8 2 2 1
9 2 2 0
10 2 1 0
11 2 1 0
12 2 2 0
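Putting the answer together as a self-contained script (using .astype(int) rather than .view('i1'); the helper name first_match_labels is mine, otherwise the logic is the answer's):

```python
import pandas as pd

df = pd.DataFrame({'a': [1] * 5 + [2] * 8,
                   'b': [2, 1, 1, 2, 1, 0, 1, 1, 2, 2, 1, 1, 2]})
L = [1, 2]

def first_match_labels(sub, s):
    # return the index labels of the first subsequence match, else []
    j, ans = 0, []
    for label, ele in s.items():
        if ele == sub[j]:
            j += 1
            ans.append(label)
            if j == len(sub):
                return ans
    return []

idx = df.groupby('a')['b'].apply(lambda s: first_match_labels(L, s)).explode()
df['c'] = df.index.isin(idx).astype(int)
print(df)
```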

Related

Add a new column when the values in 2 other columns are the same

I want to add a new column e holding a group index, assigned whenever b and c are the same.
At the same time, I need to respect the limit sum(d) <= 20:
if the total d for rows with the same b and c would exceed 20,
a new index should be started.
The example input data:
a  b  c   d
0  0  2   9
1  2  1  10
2  1  0   9
3  1  0  11
4  2  1   9
5  0  1  15
6  2  0   9
7  1  0   8
I sorted by b and c first to make the comparison easier,
but then I got KeyError: 0 at temporary_size += df.loc[df[i], 'd'].
I hope it ends up like this:
a  b  c   d  e
5  0  1  15  1
0  0  2   9  2
2  1  0   9  3
3  1  0  11  3
7  1  0   8  4
6  2  0   9  5
1  2  1  10  6
4  2  1   9  6
and here is my code:
import pandas as pd
d = {'a': [0, 1, 2, 3, 4, 5, 6, 7], 'b': [0, 2, 1, 1, 2, 0, 2, 1], 'c': [2, 1, 0, 0, 1, 1, 0, 0], 'd': [9, 10, 9, 11, 9, 15, 9, 8]}
df = pd.DataFrame(data=d)
print(df)
df.sort_values(['b', 'c'], ascending=[True, True], inplace=True, ignore_index=True)
e_id = 0
total_size = 20
temporary_size = 0
for i in range(0, len(df.index)-1):
    if df.loc[i, 'b'] == df.loc[i+1, 'b'] and df.loc[i, 'c'] != df.loc[i+1, 'c']:
        temporary_size = temporary_size + df.loc[i, 'd']
        if temporary_size <= total_size:
            df.loc['e', i] = e_id
        else:
            df.loc[i, 'e'] = e_id
            temporary_size = temporary_size + df.loc[i, 'd']
            e_id += 1
    else:
        df.loc[i, 'e'] = e_id
        temporary_size = temporary_size + df.loc[i, 'd']
print(df)
In the end, I can't get the new column in my dataframe.
Thanks for all!
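Since no answer is given here, a minimal sketch of one possible approach: walk the sorted rows and start a new e index whenever the (b, c) pair changes or adding d would push the running total past 20. The variable names are mine; this reproduces the hoped-for output above.

```python
import pandas as pd

d = {'a': [0, 1, 2, 3, 4, 5, 6, 7], 'b': [0, 2, 1, 1, 2, 0, 2, 1],
     'c': [2, 1, 0, 0, 1, 1, 0, 0], 'd': [9, 10, 9, 11, 9, 15, 9, 8]}
df = pd.DataFrame(d).sort_values(['b', 'c'], kind='stable')

limit = 20
e_ids, e_id, running, prev_key = [], 0, 0, None
for _, row in df.iterrows():
    key = (row['b'], row['c'])
    # new index on a new (b, c) pair, or when the d-budget would be exceeded
    if key != prev_key or running + row['d'] > limit:
        e_id += 1
        running = 0
    running += row['d']
    prev_key = key
    e_ids.append(e_id)
df['e'] = e_ids
print(df)
```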

Pandas index clause across multiple columns in a multi-column header

I have a data frame with multi-column headers.
import pandas as pd
headers = pd.MultiIndex.from_tuples([("A", "u"), ("A", "v"), ("B", "x"), ("B", "y")])
f = pd.DataFrame([[1, 1, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]], columns = headers)
f
A B
u v x y
0 1 1 0 1
1 1 0 0 0
2 0 0 1 1
3 1 0 1 0
I want to select the rows in which any of the A columns or any of the B columns are true.
I can do so explicitly.
f[f["A"]["u"].astype(bool) | f["A"]["v"].astype(bool)]
A B
u v x y
0 1 1 0 1
1 1 0 0 0
3 1 0 1 0
f[f["B"]["x"].astype(bool) | f["B"]["y"].astype(bool)]
A B
u v x y
0 1 1 0 1
2 0 0 1 1
3 1 0 1 0
I want to write a function select(f, top_level_name) where the indexing clause applies to all the columns under the same top level name such that
select(f, "A") == f[f["A"]["u"].astype(bool) | f["A"]["v"].astype(bool)]
select(f, "B") == f[f["B"]["x"].astype(bool) | f["B"]["y"].astype(bool)]
I want this function to work with arbitrary numbers of sub-columns with arbitrary names.
How do I write select?
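Since no answer is given here, one way to write select (a sketch: f[top_level_name] drops the top level and keeps all sub-columns, whatever their number and names, and any(axis=1) ORs them row-wise):

```python
import pandas as pd

headers = pd.MultiIndex.from_tuples([("A", "u"), ("A", "v"), ("B", "x"), ("B", "y")])
f = pd.DataFrame([[1, 1, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]],
                 columns=headers)

def select(f, top_level_name):
    # boolean mask: True where any sub-column under top_level_name is truthy
    return f[f[top_level_name].astype(bool).any(axis=1)]

print(select(f, "A"))
print(select(f, "B"))
```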

Pandas apply function by group returning multiple new columns

I am trying to apply a function to a column by group with the objective of creating 2 new columns, containing the returned values of the function for each group. Example as follows:
import numpy as np
import pandas as pd

def testms(x):
    mu = np.sum(x)
    si = np.sum(x)/2
    return mu, si
df = pd.concat([pd.DataFrame({'A' : [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]}), pd.DataFrame({'B' : np.random.rand(10)})],axis=1)
df
A B
0 1 0.696761
1 1 0.035178
2 1 0.468180
3 1 0.157818
4 1 0.281470
5 2 0.377689
6 2 0.336046
7 2 0.005879
8 2 0.747436
9 2 0.772405
desired_result =
A B mu si
0 1 0.696761 1.652595 0.826297
1 1 0.035178 1.652595 0.826297
2 1 0.468180 1.652595 0.826297
3 1 0.157818 1.652595 0.826297
4 1 0.281470 1.652595 0.826297
5 2 0.377689 2.997657 1.498829
6 2 0.336046 2.997657 1.498829
7 2 0.005879 2.997657 1.498829
8 2 0.747436 2.997657 1.498829
9 2 0.772405 2.997657 1.498829
I think I have found a solution but I am looking for something a bit more elegant and efficient:
x = df.groupby('A')['B'].apply(lambda x: pd.Series(testms(x),index=['mu','si']))
A
1 mu 1.652595
si 0.826297
2 mu 2.997657
si 1.498829
Name: B, dtype: float64
df.merge(x.drop(labels='mu',level=1),on='A',how='outer').merge(x.drop(labels='si',level=1),on='A',how='outer')
One idea is to change the function so it creates new columns filled with the mu and si values and returns x, i.e. the whole group:
def testms(x):
    mu = np.sum(x['B'])
    si = np.sum(x['B'])/2
    x['mu'] = mu
    x['si'] = si
    return x
x = df.groupby('A').apply(testms)
print (x)
A B mu si
0 1 0.352297 3.590048 1.795024
1 1 0.860488 3.590048 1.795024
2 1 0.939260 3.590048 1.795024
3 1 0.988280 3.590048 1.795024
4 1 0.449723 3.590048 1.795024
5 2 0.125852 1.300524 0.650262
6 2 0.853474 1.300524 0.650262
7 2 0.000996 1.300524 0.650262
8 2 0.223886 1.300524 0.650262
9 2 0.096316 1.300524 0.650262
Your solution can be simplified with Series.unstack and DataFrame.join:
df1 = df.groupby('A')['B'].apply(lambda x: pd.Series(testms(x),index=['mu','si'])).unstack()
x = df.join(df1, on='A')
print (x)
A B mu si
0 1 0.085961 2.791346 1.395673
1 1 0.887589 2.791346 1.395673
2 1 0.685952 2.791346 1.395673
3 1 0.946613 2.791346 1.395673
4 1 0.185231 2.791346 1.395673
5 2 0.994415 3.173444 1.586722
6 2 0.159852 3.173444 1.586722
7 2 0.773711 3.173444 1.586722
8 2 0.867337 3.173444 1.586722
9 2 0.378128 3.173444 1.586722
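Because both outputs are per-group scalars broadcast back onto the rows, GroupBy.transform is another natural fit. A sketch with made-up integer data (my own, so the sums are easy to check):

```python
import pandas as pd

df = pd.DataFrame({'A': [1] * 5 + [2] * 5,
                   'B': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]})
# transform('sum') returns a value per row, aligned with the original index
df['mu'] = df.groupby('A')['B'].transform('sum')
df['si'] = df['mu'] / 2
print(df)
```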

Python Pandas Dataframe cell value split

I am lost on how to split the binary values such that each (0,1)value takes up a column of the data frame.
You can use concat with apply and list:
df = pd.DataFrame({0:[1,2,3], 1:['1010','1100','0101']})
print (df)
0 1
0 1 1010
1 2 1100
2 3 0101
df = pd.concat([df[0],
                df[1].apply(lambda x: pd.Series(list(x))).astype(int)],
               axis=1, ignore_index=True)
print (df)
0 1 2 3 4
0 1 1 0 1 0
1 2 1 1 0 0
2 3 0 1 0 1
Another solution with DataFrame constructor:
df = pd.concat([df[0],
                pd.DataFrame(df[1].apply(list).values.tolist()).astype(int)],
               axis=1, ignore_index=True)
print (df)
0 1 2 3 4
0 1 1 0 1 0
1 2 1 1 0 0
2 3 0 1 0 1
EDIT:
df = pd.DataFrame({0:['1010','1100','0101']})
df1 = pd.DataFrame(df[0].apply(list).values.tolist()).astype(int)
print (df1)
0 1 2 3
0 1 0 1 0
1 1 1 0 0
2 0 1 0 1
But if you need lists:
df[0] = df[0].apply(lambda x: [int(y) for y in list(x)])
print (df)
0
0 [1, 0, 1, 0]
1 [1, 1, 0, 0]
2 [0, 1, 0, 1]
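A plain list comprehension works as well and avoids building a pd.Series per row (a sketch on the same sample data):

```python
import pandas as pd

df = pd.DataFrame({0: [1, 2, 3], 1: ['1010', '1100', '0101']})
# one list of ints per string, fed straight to the DataFrame constructor
bits = pd.DataFrame([[int(ch) for ch in s] for s in df[1]])
out = pd.concat([df[0], bits], axis=1, ignore_index=True)
print(out)
```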

copy_blanks(df,column) should copy the value in the original column to the last column for all values where the original column is blank

def copy_blanks(df, column):
    ...
It should work like the following; please suggest an approach.
Input:
e-mail,number
n#gmail.com,0
p#gmail.com,1
h#gmail.com,0
s#gmail.com,0
l#gmail.com,1
v#gmail.com,0
,0
But here we also have a default_value option, which can be set to any value; when this option is used, that value is inserted instead, like below:
e-mail,number
n#gmail.com,0
p#gmail.com,1
h#gmail.com,0
s#gmail.com,0
l#gmail.com,1
v#gmail.com,0
NA,0
For my output, both default_value and skip_blanks should be honoured: when skip_blanks is true, default_value should not be applied; when skip_blanks is false, it should be.
my output:
e-mail,number,e-mail_clean
n#gmail.com,0,n#gmail.com
p#gmail.com,1,p#gmail.com
h#gmail.com,0,h#gmail.com
s#gmail.com,0,s#gmail.com
l#gmail.com,1,l#gmail.com
v#gmail.com,0,v#gmail.com
,0,
Consider your sample df:
df = pd.DataFrame([
    ['n#gmail.com', 0],
    ['p#gmail.com', 1],
    ['h#gmail.com', 0],
    ['s#gmail.com', 0],
    ['l#gmail.com', 1],
    ['v#gmail.com', 0],
    ['', 0]
], columns=['e-mail','number'])
print(df)
e-mail number
0 n#gmail.com 0
1 p#gmail.com 1
2 h#gmail.com 0
3 s#gmail.com 0
4 l#gmail.com 1
5 v#gmail.com 0
6 0
If I understand you correctly:
def copy_blanks(df, column, skip_blanks=False, default_value='NA'):
    df = df.copy()
    s = df[column]
    if not skip_blanks:
        s = s.replace('', default_value)
    df['{}_clean'.format(column)] = s
    return df
copy_blanks(df, 'e-mail', skip_blanks=False)
e-mail number e-mail_clean
0 n#gmail.com 0 n#gmail.com
1 p#gmail.com 1 p#gmail.com
2 h#gmail.com 0 h#gmail.com
3 s#gmail.com 0 s#gmail.com
4 l#gmail.com 1 l#gmail.com
5 v#gmail.com 0 v#gmail.com
6 0 NA
copy_blanks(df, 'e-mail', skip_blanks=True)
e-mail number e-mail_clean
0 n#gmail.com 0 n#gmail.com
1 p#gmail.com 1 p#gmail.com
2 h#gmail.com 0 h#gmail.com
3 s#gmail.com 0 s#gmail.com
4 l#gmail.com 1 l#gmail.com
5 v#gmail.com 0 v#gmail.com
6 0
copy_blanks(df, 'e-mail', skip_blanks=False, default_value='new#gmail.com')
e-mail number e-mail_clean
0 n#gmail.com 0 n#gmail.com
1 p#gmail.com 1 p#gmail.com
2 h#gmail.com 0 h#gmail.com
3 s#gmail.com 0 s#gmail.com
4 l#gmail.com 1 l#gmail.com
5 v#gmail.com 0 v#gmail.com
6 0 new#gmail.com
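One caveat: if the data is read from a CSV, a blank cell usually arrives as NaN rather than '', so the replace needs a fillna as well. A variant sketch (the NaN handling is my assumption about the input, the rest follows the answer above):

```python
import pandas as pd

def copy_blanks(df, column, skip_blanks=False, default_value='NA'):
    out = df.copy()
    s = out[column]
    if not skip_blanks:
        # cover both empty-string blanks and NaN blanks from read_csv
        s = s.replace('', default_value).fillna(default_value)
    out['{}_clean'.format(column)] = s
    return out
```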