Pandas DataFrame groupby-agg with lambda: single values go into preexisting or new lists, and preexisting lists are fused - pandas

I have this DataFrame to group by key:
df = pd.DataFrame({
    'key': ['1', '1', '1', '2', '2', '3', '3', '4', '4', '5'],
    'data1': [['A', 'B', 'C'], 'D', 'P', 'E', ['F', 'G', 'H'], ['I', 'J'], ['K', 'L'], 'M', 'N', 'O'],
    'data2': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
})
df
I want to group by key and sum data2; that part is fine.
But concerning data1, I want to:
If a list doesn't exist yet:
    leave single values unchanged when the key was not duplicated,
    combine single values assigned to the same key into a new list.
If a list already exists:
    append other single values to it,
    append the values of other lists to it.
The resulting DataFrame should then be:
dfgood = pd.DataFrame({
    'key': ['1', '2', '3', '4', '5'],
    'data1': [['A', 'B', 'C', 'D', 'P'], ['F', 'G', 'H', 'E'], ['I', 'J', 'K', 'L'], ['M', 'N'], 'O'],
    'data2': [6, 9, 13, 17, 10]
})
dfgood
In fact, I don't really care about the order of the data1 values inside the lists. It could also be any structure that keeps them together, even a string with separators or a set, if that is easier; do it whichever way you think is best.
I thought about two solutions.
Going this way:
dfgood = df.groupby('key', as_index=False).agg({
    'data1': lambda x: x.iloc[0].append(x.iloc[1]) if type(x.iloc[0]) == list else list(x),
    'data2': sum,
})
dfgood
It doesn't work because of an index-out-of-range error on x.iloc[1].
I also tried, because data1 was organized like this in another groupby from the question on this link:
dfgood = df.groupby('key', as_index=False).agg({
    'data1': lambda g: g.iloc[0] if len(g) == 1 else list(g),
    'data2': sum,
})
dfgood
But it creates new lists from preexisting lists or values, and does not append data to already existing lists.
Another way to do it, though I think it's more complicated and there should be a better or faster solution:
turn the data1 lists and single values into individual series with apply,
use wide_to_long to keep single values for each key,
then group by, applying:
dfgood = df.groupby('key', as_index=False).agg({
    'data1': lambda g: g.iloc[0] if len(g) == 1 else list(g),
    'data2': sum,
})
dfgood
I think my problem is that I don't know how to use lambdas correctly and I try stupid things like x.iloc[1] in the previous example. I've looked at a lot of tutorials about lambdas, but it's still fuzzy in my mind.

The problem is combining lists with scalars. A possible solution is to first create lists from the scalars and then flatten them in groupby.agg:
dfgood = (df.assign(data1 = df['data1'].apply(lambda y: y if isinstance(y, list) else [y]))
            .groupby('key', as_index=False).agg({
                'data1': lambda x: [z for y in x for z in y],
                'data2': sum,
            })
         )
print(dfgood)
key data1 data2
0 1 [A, B, C, D, P] 6
1 2 [E, F, G, H] 9
2 3 [I, J, K, L] 13
3 4 [M, N] 17
4 5 [O] 10
Another idea is to use a flatten function that flattens only lists, not strings:
# https://stackoverflow.com/a/5286571/2901002
def flatten(foo):
    for x in foo:
        if hasattr(x, '__iter__') and not isinstance(x, str):
            for y in flatten(x):
                yield y
        else:
            yield x

dfgood = (df.groupby('key', as_index=False).agg({
    'data1': lambda x: list(flatten(x)),
    'data2': sum}))
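print(dfgood) should give the same table as the first approach, since flatten walks each group's lists and scalars alike.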

You could explode to get individual rows, then aggregate again with groupby+agg after taking care of masking the duplicated values in data2 (to avoid summing duplicates):
(df.explode('data1')
   .assign(data2=lambda d: d['data2'].mask(d.duplicated(['key', 'data2']), 0))
   .groupby('key')
   .agg({'data1': list, 'data2': 'sum'})
)
output:
data1 data2
key
1 [A, B, C, D, P] 6
2 [E, F, G, H] 9
3 [I, J, K, L] 13
4 [M, N] 17
5 [O] 10
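A variant of the same idea, in case the masking step feels fragile (it assumes equal data2 values within a key only arise from the explode): aggregate data1 from the exploded frame and data2 from the original frame, then join the two. A minimal sketch:
# data1 lists come from the exploded frame; data2 sums come from the
# original frame, so no masking of duplicated values is needed
out = (df.explode('data1')
         .groupby('key')['data1'].agg(list)
         .to_frame()
         .join(df.groupby('key')['data2'].sum()))
print(out)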

Related

Pandas for-loop more efficient

I want to get the distinct entries of a column across many dfs (with the same columns) stored in a list, more efficiently than my actual code does. The aim is to preserve all existing IDs of the column 'ID'.
My code works, but I would like it to be more efficient.
list_of_dicts = [[{'ID': 'A',
                   'timestamp': '2020-05-10T03:44:50+00:00',
                   'status': 'online'},
                  {'ID': 'A',
                   'timestamp': '2020-05-10T05:45:50+00:00',
                   'status': 'online'},
                  {'ID': 'B',
                   'timestamp': '2020-05-10T12:45:50+00:00',
                   'status': 'online'},
                  {'ID': 'C',
                   'timestamp': '2020-05-10T04:45:50+00:00',
                   'status': 'online'},
                  {'ID': 'A',
                   'timestamp': '2020-05-10T03:48:50+00:00',
                   'status': 'offline'}], [{...}{...}...]...]
(the keys of the dicts in the different lists are always the same)
My code:
stat = []
for i in range(len(list_of_dicts)):
    stat_i = pd.DataFrame(list_of_dicts[i])
    for x in range(len(stat_i)):
        a = stat_i.loc[x, 'ID']
        if a in stat:
            continue
        else:
            stat.append(a)
Expected output:
stat = ['A', 'B', 'C']
You need to loop over your lists and create a single dataframe to get the unique items; you could do this without pandas as well.
Using a list comprehension:
stat = pd.concat([pd.DataFrame(item) for item in list_of_dicts])['ID'].unique()
array(['A', 'B', 'C'], dtype=object)
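As the answer notes, pandas isn't strictly required. A plain-Python sketch that flattens the nested lists and deduplicates while keeping first-seen order (dict.fromkeys preserves insertion order):
# flatten the list of lists of dicts and keep each ID once, in order
stat = list(dict.fromkeys(d['ID'] for sub in list_of_dicts for d in sub))
print(stat)  # ['A', 'B', 'C']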

How to find the index in a list of numbers where there are repeating numbers [duplicate]

Does anyone know how I can get the index position of duplicate items in a python list?
I have tried doing this and it keeps giving me only the index of the first occurrence of the item in the list.
List = ['A', 'B', 'A', 'C', 'E']
I want it to give me:
index 0: A
index 2: A
You want to pass in the optional second parameter to index, the location where you want index to start looking. After you find each match, reset this parameter to the location just after the match that was found.
def list_duplicates_of(seq, item):
    start_at = -1
    locs = []
    while True:
        try:
            loc = seq.index(item, start_at+1)
        except ValueError:
            break
        else:
            locs.append(loc)
            start_at = loc
    return locs
source = "ABABDBAAEDSBQEWBAFLSAFB"
print(list_duplicates_of(source, 'B'))
Prints:
[1, 3, 5, 11, 15, 22]
You can find all the duplicates at once in a single pass through source, by using a defaultdict to keep a list of all seen locations for any item, and returning those items that were seen more than once.
from collections import defaultdict

def list_duplicates(seq):
    tally = defaultdict(list)
    for i, item in enumerate(seq):
        tally[item].append(i)
    return ((key, locs) for key, locs in tally.items()
            if len(locs) > 1)

for dup in sorted(list_duplicates(source)):
    print(dup)
Prints:
('A', [0, 2, 6, 7, 16, 20])
('B', [1, 3, 5, 11, 15, 22])
('D', [4, 9])
('E', [8, 13])
('F', [17, 21])
('S', [10, 19])
If you want to do repeated testing for various keys against the same source, you can use functools.partial to create a new function variable, using a "partially complete" argument list, that is, specifying the seq, but omitting the item to search for:
from functools import partial
dups_in_source = partial(list_duplicates_of, source)
for c in "ABDEFS":
    print(c, dups_in_source(c))
Prints:
A [0, 2, 6, 7, 16, 20]
B [1, 3, 5, 11, 15, 22]
D [4, 9]
E [8, 13]
F [17, 21]
S [10, 19]
>>> def indices(lst, item):
...     return [i for i, x in enumerate(lst) if x == item]
...
>>> indices(List, "A")
[0, 2]
To get all duplicates, you can use the below method, but it is not very efficient. If efficiency is important you should consider Ignacio's solution instead.
>>> dict((x, indices(List, x)) for x in set(List) if List.count(x) > 1)
{'A': [0, 2]}
As for solving it using the index method of list instead, that method takes a second optional argument indicating where to start, so you could just repeatedly call it with the previous index plus 1.
>>> List.index("A")
0
>>> List.index("A", 1)
2
I made a benchmark of all the solutions suggested here and also added another solution to this problem (described at the end of the answer).
Benchmarks
First, the benchmarks. I initialize a list of n random ints within the range [1, n/2] and then call timeit over all algorithms.
The solutions of @Paul McGuire and @Ignacio Vazquez-Abrams work about twice as fast as the rest on a list of 100 ints:
Testing algorithm on the list of 100 items using 10000 loops
Algorithm: dupl_eat
Timing: 1.46247477189
####################
Algorithm: dupl_utdemir
Timing: 2.93324529055
####################
Algorithm: dupl_lthaulow
Timing: 3.89198786645
####################
Algorithm: dupl_pmcguire
Timing: 0.583058259784
####################
Algorithm: dupl_ivazques_abrams
Timing: 0.645062989076
####################
Algorithm: dupl_rbespal
Timing: 1.06523873786
####################
If you change the number of items to 1000, the difference becomes much bigger (BTW, I'd be happy if someone could explain why):
Testing algorithm on the list of 1000 items using 1000 loops
Algorithm: dupl_eat
Timing: 5.46171654555
####################
Algorithm: dupl_utdemir
Timing: 25.5582547323
####################
Algorithm: dupl_lthaulow
Timing: 39.284285326
####################
Algorithm: dupl_pmcguire
Timing: 0.56558489513
####################
Algorithm: dupl_ivazques_abrams
Timing: 0.615980005148
####################
Algorithm: dupl_rbespal
Timing: 1.21610942322
####################
On the bigger lists, the solution of @Paul McGuire continues to be the most efficient, and my algorithm begins having problems.
Testing algorithm on the list of 1000000 items using 1 loops
Algorithm: dupl_pmcguire
Timing: 1.5019953958
####################
Algorithm: dupl_ivazques_abrams
Timing: 1.70856155898
####################
Algorithm: dupl_rbespal
Timing: 3.95820421595
####################
The full code of the benchmark is here
Another algorithm
Here is my solution to the same problem:
def dupl_rbespal(c):
    alreadyAdded = False
    dupl_c = dict()
    sorted_ind_c = sorted(range(len(c)), key=lambda x: c[x])  # sort the incoming list but save the indexes of the sorted items
    for i in range(len(c) - 1):  # loop over indexes of the sorted items
        if c[sorted_ind_c[i]] == c[sorted_ind_c[i+1]]:  # if two consecutive indexes point to the same value, add it to the duplicates
            if not alreadyAdded:
                dupl_c[c[sorted_ind_c[i]]] = [sorted_ind_c[i], sorted_ind_c[i+1]]
                alreadyAdded = True
            else:
                dupl_c[c[sorted_ind_c[i]]].append(sorted_ind_c[i+1])
        else:
            alreadyAdded = False
    return dupl_c
Although it's not the best, it allowed me to generate the slightly different structure needed for my problem (I needed something like a linked list of indexes of the same value).
import collections

dups = collections.defaultdict(list)
for i, e in enumerate(L):
    dups[e].append(i)
for k, v in sorted(dups.items()):
    if len(v) >= 2:
        print('%s: %r' % (k, v))
And extrapolate from there.
I think I found a simple solution after a lot of irritation:
if elem in string_list:
    counter = 0
    elem_pos = []
    for i in string_list:
        if i == elem:
            elem_pos.append(counter)
        counter = counter + 1
    print(elem_pos)
This prints a list giving you the indexes of a specific element ("elem")
Using new "Counter" class in collections module, based on lazyr's answer:
>>> import collections
>>> def duplicates(n): #n="123123123"
... counter=collections.Counter(n) #{'1': 3, '3': 3, '2': 3}
... dups=[i for i in counter if counter[i]!=1] #['1','3','2']
... result={}
... for item in dups:
... result[item]=[i for i,j in enumerate(n) if j==item]
... return result
...
>>> duplicates("123123123")
{'1': [0, 3, 6], '3': [2, 5, 8], '2': [1, 4, 7]}
from collections import Counter, defaultdict

def duplicates(lst):
    cnt = Counter(lst)
    return [key for key in cnt.keys() if cnt[key] > 1]

def duplicates_indices(lst):
    dup, ind = duplicates(lst), defaultdict(list)
    for i, v in enumerate(lst):
        if v in dup:
            ind[v].append(i)
    return ind

lst = ['a', 'b', 'a', 'c', 'b', 'a', 'e']
print(duplicates(lst))           # ['a', 'b']
print(duplicates_indices(lst))   # ..., {'a': [0, 2, 5], 'b': [1, 4]})
A slightly more orthogonal (and thus more useful) implementation would be:
from collections import Counter, defaultdict

def duplicates(lst):
    cnt = Counter(lst)
    return [key for key in cnt.keys() if cnt[key] > 1]

def indices(lst, items=None):
    items, ind = set(lst) if items is None else items, defaultdict(list)
    for i, v in enumerate(lst):
        if v in items:
            ind[v].append(i)
    return ind

lst = ['a', 'b', 'a', 'c', 'b', 'a', 'e']
print(indices(lst, duplicates(lst)))  # ..., {'a': [0, 2, 5], 'b': [1, 4]})
Wow, everyone's answers are so long. I simply used a pandas dataframe, masking, and the duplicated function (keep=False marks all duplicates as True, not just the first or last):
import pandas as pd
import numpy as np
np.random.seed(42) # make results reproducible
int_df = pd.DataFrame({'int_list': np.random.randint(1, 20, size=10)})
dupes = int_df['int_list'].duplicated(keep=False)
print(int_df['int_list'][dupes].index)
This should return Int64Index([0, 2, 3, 4, 6, 7, 9], dtype='int64').
def index(arr, num):
    for i, x in enumerate(arr):
        if x == num:
            print(x, i)

# index(List, 'A')
In a single line with pandas 1.2.2 and numpy:
import numpy as np
import pandas as pd
idx = np.where(pd.DataFrame(List).duplicated(keep=False))
The argument keep=False will mark every duplicate as True and np.where() will return an array with the indices where the element in the array was True.
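For the question's example list, a quick check of what this returns:
import numpy as np
import pandas as pd

List = ['A', 'B', 'A', 'C', 'E']
# np.where returns a tuple of arrays; take the first element for the indices
idx = np.where(pd.DataFrame(List).duplicated(keep=False))[0]
print(idx)  # [0 2]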
string_list = ['A', 'B', 'C', 'B', 'D', 'B']
pos_list = []
for i in range(len(string_list)):
    if string_list[i] == 'B':
        pos_list.append(i)
print(pos_list)
def find_duplicate(list_):
    duplicate_list = [""]
    for k in range(len(list_)):
        if duplicate_list.__contains__(list_[k]):
            continue
        for j in range(len(list_)):
            if k == j:
                continue
            if list_[k] == list_[j]:
                duplicate_list.append(list_[j])
                print("duplicate " + str(list_.index(list_[j])) + str(list_.index(list_[k])))
Here is one that works for multiple duplicates and you don't need to specify any values:
List = ['A', 'B', 'A', 'C', 'E', 'B']  # duplicates: two 'A's, two 'B's
ix_list = []
for i in range(len(List)):
    try:
        dup_ix = List[(i+1):].index(List[i]) + (i + 1)  # duplicate onwards + (i + 1)
        ix_list.extend([i, dup_ix])  # if no error was raised, also add i
    except ValueError:
        pass
ix_list.sort()
print(ix_list)
[0, 1, 2, 5]
def dup_list(my_list, value):
    '''
    dup_list(list, value)
    This function finds the indices of values in a list, including duplicated values.
        list: the list you are working on
        value: the item of the list you want to find the index of
    NB: if a value is duplicated, its indices are stored in a list.
    If there is only one occurrence of the value, the index is stored as an integer.
    Therefore, use isinstance to know how to handle the returned value.
    '''
    value_list = []
    index_list = []
    index_of_duped = []
    if my_list.count(value) == 1:
        return my_list.index(value)
    elif my_list.count(value) < 1:
        return 'Your argument is not in the list'
    else:
        for item in my_list:
            value_list.append(item)
            length = len(value_list)
            index = length - 1
            index_list.append(index)
            if item == value:
                index_of_duped.append(max(index_list))
        return index_of_duped

# function call, e.g. dup_list(my_list, 'john')
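For instance, on the question's list (a small usage check; note the mixed return types):
my_list = ['A', 'B', 'A', 'C', 'E']
print(dup_list(my_list, 'A'))  # [0, 2] -- a list, since 'A' is duplicated
print(dup_list(my_list, 'B'))  # 1 -- a plain int, since 'B' occurs once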
If you want to get the indices of all duplicate elements of different types, you can try this solution:
# note: below list has more than one kind of duplicates
List = ['A', 'B', 'A', 'C', 'E', 'E', 'A', 'B', 'A', 'A', 'C']
d1 = {item:List.count(item) for item in List} # item and their counts
elems = list(filter(lambda x: d1[x] > 1, d1)) # get duplicate elements
d2 = dict(zip(range(0, len(List)), List)) # each item and their indices
# item and their list of duplicate indices
res = {item: list(filter(lambda x: d2[x] == item, d2)) for item in elems}
Now, if you print(res) you'll get to see this:
{'A': [0, 2, 6, 8, 9], 'B': [1, 7], 'C': [3, 10], 'E': [4, 5]}
def duplicates(list, dup):
    a = [list.index(dup)]
    for i in list:
        try:
            a.append(list.index(dup, a[-1]+1))
        except ValueError:
            for i in a:
                print(f'index {i}: ' + dup)
            break

duplicates(['A', 'B', 'A', 'C', 'E'], 'A')
Output:
index 0: A
index 2: A
This is a good question and there are a lot of ways to do it.
The code below is one of those ways:
letters = ["a", "b", "c", "d", "e", "a", "a", "b"]
lettersIndexes = [i for i in range(len(letters))]  # a list containing the indexes of the previous list
counter = 0
for item in letters:
    if item == "a":
        print(item, lettersIndexes[counter])
    counter += 1  # for each item, the counter (i.e. the index) is increased
Another way to get the indexes, this time stored in a list:
letters = ["a", "b", "c", "d", "e", "a", "a", "b"]
lettersIndexes = [i for i in range(len(letters)) if letters[i] == "a"]
print(lettersIndexes)  # as you can see, we get a list of the indexes we want
Good day
Using a dictionary approach based on the setdefault instance method.
List = ['A', 'B', 'A', 'C', 'B', 'E', 'B']

# keep track of all indices of every term
duplicates = {}
for i, key in enumerate(List):
    duplicates.setdefault(key, []).append(i)

# print only those terms with more than one index
template = 'index {}: {}'
for k, v in duplicates.items():
    if len(v) > 1:
        print(template.format(k, str(v).strip('][')))
Remark: Counter, defaultdict, and the other container classes from collections are subclasses of dict and hence share the setdefault method as well.
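A quick illustration of that remark, using defaultdict (a small sketch):
from collections import defaultdict

# setdefault works on defaultdict too, since it subclasses dict
d = defaultdict(list)
d.setdefault('A', []).append(0)
print(d)  # defaultdict(<class 'list'>, {'A': [0]})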
I'll mention the more obvious way of dealing with duplicates in lists. In terms of complexity, dictionaries are the way to go because each lookup is O(1). You can be more clever if you're only interested in duplicates...
my_list = [1, 1, 2, 3, 4, 5, 5]
my_dict = {}
for ind, elem in enumerate(my_list):
    if elem in my_dict:
        my_dict[elem].append(ind)
    else:
        my_dict.update({elem: [ind]})
for key, value in my_dict.items():
    if len(value) > 1:
        print("key(%s) has indices (%s)" % (key, value))
which prints the following:
key(1) has indices ([0, 1])
key(5) has indices ([5, 6])
a = [2, 3, 4, 5, 6, 2, 3, 2, 4, 2]
search = 2
pos = 0
positions = []
while search in a:
    pos += a.index(search)
    positions.append(pos)
    a = a[a.index(search)+1:]
    pos += 1
print("search found at:", positions)
I just kept it simple:
i = [1, 2, 1, 3]
k = 0
for ii in i:
    if ii == 1:
        print("index of 1 = ", k)
    k = k + 1
output:
index of 1 = 0
index of 1 = 2

Processing each N rows in for loop, print list of processed items

The following reproducible script is intended to process two rows at a time in a dataframe of length 5. For every two processed rows, I'd like to print a list of the items that have been processed.
import pandas as pd
import itertools

my_dict = {
    'name': ['a', 'b', 'c', 'd', 'e'],
    'age': [10, 20, 30, 40, 50]
}
df = pd.DataFrame(my_dict)

for index, row in itertools.islice(df.iterrows(), 2):
    rowlist = (row.name)
    print('Processed two rows {}'.format(rowlist))
Output:
a
b
I'm looking for a way to get the following desired output:
[a,b]
[c,d]
[e]
Tried:
print(df.groupby(df.index//2)['name'].agg(list))
Out:
0 [a, b]
1 [c, d]
2 [e]
Name: name, dtype: object
Thanks for your help!
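If the goal is to print a plain list for every chunk of two rows rather than the Series above, iterating over the groups gives the desired output; a minimal sketch:
# group rows in chunks of two via integer division of the index,
# then print each chunk's names as a plain list
for _, chunk in df.groupby(df.index // 2)['name']:
    print(list(chunk))
# ['a', 'b']
# ['c', 'd']
# ['e']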

How to split every string in a list in a dataframe column

I have a dataframe with a column containing lists of 'A:B'-style strings. I'd like to add a new column which contains, for each row, the set of first elements obtained by splitting each string on ':'.
data = [
{'Name': 'A', 'Servers':['A:s1', 'B:s2', 'C:s3', 'C:s2']},
{'Name': 'B', 'Servers':['B:s1', 'C:s2', 'B:s3', 'A:s2']},
{'Name': 'C', 'Servers':['G:s1', 'X:s2', 'Y:s3']}
]
df = pd.DataFrame(data)
df
df['Clusters'] = [
{'A', 'B', 'C'},
{'B', 'C', 'A'},
{'G', 'X', 'Y'}
]
Learn how to use apply
In [5]: df['Clusters'] = df['Servers'].apply(lambda x: {p.split(':')[0] for p in x})
In [6]: df
Out[6]:
Name Servers Clusters
0 A [A:s1, B:s2, C:s3, C:s2] {A, B, C}
1 B [B:s1, C:s2, B:s3, A:s2] {C, B, A}
2 C [G:s1, X:s2, Y:s3] {X, Y, G}
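An alternative without a Python-level set comprehension, using explode and the vectorized string methods (a sketch; the result here is the same):
# explode the lists, take the prefix before ':', and collect the
# prefixes back into a set per original row (explode keeps the index)
df['Clusters'] = (df['Servers'].explode()
                    .str.split(':').str[0]
                    .groupby(level=0).agg(set))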

Aggregate/Remove duplicate rows in DataFrame based on swapped index levels

Sample input
import pandas as pd
df = pd.DataFrame([
['A', 'B', 1, 5],
['B', 'C', 2, 2],
['B', 'A', 1, 1],
['C', 'B', 1, 3]],
columns=['from', 'to', 'type', 'value'])
df = df.set_index(['from', 'to', 'type'])
Which looks like this:
              value
from to type
A    B  1         5
B    C  2         2
     A  1         1
C    B  1         3
Goal
I now want to remove "duplicate" rows from this in the following sense: for each row with an arbitrary index (from, to, type), if there exists a row (to, from, type), the value of the second row should be added to the first row and the second row be dropped. In the example above, the row (B, A, 1) with value 1 should be added to the first row and dropped, leading to the following desired result.
Sample result
              value
from to type
A    B  1         6
B    C  2         2
C    B  1         3
This is my best try so far. It feels unnecessarily verbose and clunky:
# aggregate the value of rows with (from,to,type) == (to,from,type)
df2 = df.reset_index()
df3 = df2.rename(columns={'from': 'to', 'to': 'from'})
df_both = df.join(df3.set_index(
    ['from', 'to', 'type']),
    rsuffix='_b').sum(axis=1)

# then remove the second, i.e. the (to,from,t) row
rows_to_keep = []
rows_to_remove = []
for a, b, t in df_both.index:
    if (b, a, t) in df_both.index and not (b, a, t) in rows_to_keep:
        rows_to_keep.append((a, b, t))
        rows_to_remove.append((b, a, t))
df_final = df_both.drop(rows_to_remove)
df_final
Especially the second "de-duplication" step feels very unpythonic. (How) can I improve these steps?
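One way to avoid the explicit de-duplication loop entirely: group on the unordered (from, to) pair plus type, sum the values, and keep the orientation of the first row seen in each group. A sketch (one approach among several, not necessarily the fastest):
import pandas as pd

df = pd.DataFrame([
    ['A', 'B', 1, 5],
    ['B', 'C', 2, 2],
    ['B', 'A', 1, 1],
    ['C', 'B', 1, 3]],
    columns=['from', 'to', 'type', 'value'])

# canonical key: the unordered pair, so (A,B,1) and (B,A,1) share a group
pair = df[['from', 'to']].apply(frozenset, axis=1)

out = (df.groupby([pair, df['type']], sort=False)
         .agg({'from': 'first', 'to': 'first', 'value': 'sum'})
         .reset_index(level='type')           # bring 'type' back as a column
         .set_index(['from', 'to', 'type']))  # drop the helper key
print(out)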
Not sure how much better this is, but it's certainly different.
import pandas as pd
from collections import Counter

df = pd.DataFrame([
    ['A', 'B', 1, 5],
    ['B', 'C', 2, 2],
    ['B', 'A', 1, 1],
    ['C', 'B', 1, 3]],
    columns=['from', 'to', 'type', 'value'])
df = df.set_index(['from', 'to', 'type'])

ls = df.to_records()
ls = list(ls)
ls2 = []
for l in ls:
    i = 0
    while i <= l[3]:
        ls2.append(list(l)[:3])
        i += 1
# sort only the (from, to) pair: sorting the whole entry would mix
# strings with the int type field and fail on Python 3
counted = Counter(tuple(sorted(entry[:2]) + entry[2:]) for entry in ls2)