How do I compare the first and last values of col b within each group of col a, without using groupby? groupby is very slow on my large dataset.
a = [1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,3,3,3,3]
b = [1,0,0,0,0,0,7,8,0,0,0,0,0,4,1,0,0,0,0,0,1]
Return two lists: one with the group names from col a where the last value is larger than or equal to the first value, and one where it is smaller:
larger_or_equal = [1,3]
smaller = [2]
All-NumPy solution:
a = np.array([1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,3,3,3,3])
b = np.array([1,0,0,0,0,0,7,8,0,0,0,0,0,4,1,0,0,0,0,0,1])
w = np.where(a[1:] != a[:-1])[0] # find the edges
e = np.append(w, len(a) - 1) # define the end pos
s = np.append(0, w + 1) # define start pos
# slice end positions with a boolean array, then slice the groups with
# those end positions. I could also have used the start positions.
a[e[b[e] >= b[s]]]
a[e[b[e] < b[s]]]
[1 3]
[2]
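If this is needed repeatedly, the trick above can be wrapped in a small helper (a sketch; `compare_group_ends` is my name for it, and it assumes `a` is sorted so that equal values are contiguous, as in the example):

```python
import numpy as np

def compare_group_ends(a, b):
    # Find positions where the group label changes; assumes equal
    # values of `a` are contiguous.
    a = np.asarray(a)
    b = np.asarray(b)
    w = np.where(a[1:] != a[:-1])[0]
    e = np.append(w, len(a) - 1)   # end position of each group
    s = np.append(0, w + 1)        # start position of each group
    larger_or_equal = a[e[b[e] >= b[s]]]
    smaller = a[e[b[e] < b[s]]]
    return larger_or_equal, smaller

ge, sm = compare_group_ends([1]*7 + [2]*7 + [3]*7,
                            [1,0,0,0,0,0,7, 8,0,0,0,0,0,4, 1,0,0,0,0,0,1])
```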
Here is a solution without groupby. The idea is to shift column a to detect group changes:
df[df['a'].shift() != df['a']]
a b
0 1 1
7 2 8
14 3 1
df[df['a'].shift(-1) != df['a']]
a b
6 1 7
13 2 4
20 3 1
We will compare the column b in those two dataframes. We simply need to reset the index for the pandas comparison to work:
first = df[df['a'].shift() != df['a']].reset_index(drop=True)
last = df[df['a'].shift(-1) != df['a']].reset_index(drop=True)
first.loc[last['b'] >= first['b'], 'a'].values
array([1, 3])
Then do the same with < to get the other groups. Or do a set difference.
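Putting the pieces together, a minimal runnable sketch with the example data:

```python
import pandas as pd

df = pd.DataFrame({'a': [1]*7 + [2]*7 + [3]*7,
                   'b': [1,0,0,0,0,0,7, 8,0,0,0,0,0,4, 1,0,0,0,0,0,1]})

# Shifting column a detects group boundaries: the first row of each
# group differs from its predecessor, the last from its successor.
first = df[df['a'].shift() != df['a']].reset_index(drop=True)
last = df[df['a'].shift(-1) != df['a']].reset_index(drop=True)

larger_or_equal = first.loc[last['b'] >= first['b'], 'a'].tolist()
smaller = first.loc[last['b'] < first['b'], 'a'].tolist()
```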
As I wrote in the comments, groupby(sort=False) might well be faster depending on your dataset.
I have a dataframe with several numeric columns and their range goes either from 1 to 5 or 1 to 10
I want to create two lists of these columns names this way:
names_1to5 = list of all columns in df with numbers ranging from 1 to 5
names_1to10 = list of all columns in df with numbers from 1 to 10
Example:
IP track batch size type
1 2 3 5 A
9 1 2 8 B
10 5 5 10 C
from the dataframe above:
names_1to5 = ['track', 'batch']
names_1to10 = ['IP', 'size']
I want to use a function that gets a dataframe and perform the above transformation only on columns with numbers within those ranges.
I know that if the column's max() is 5 then it's 1to5; same idea when max() is 10.
What I already did:
def test(df):
    list_1to5 = []
    list_1to10 = []
    for col in df:
        if df[col].max() == 5:
            list_1to5.append(col)
        else:
            list_1to10.append(col)
    return list_1to5, list_1to10
I tried the above, but it returns the following error message:
'>=' not supported between instances of 'float' and 'str'
The columns have dtype 'object', which may be the reason. If so, how can I fix the function without casting these columns to float? There are sometimes hundreds of these columns, and if I run:
df['column'].max()
I get 10 or 5.
What's the best way to create this function?
Use:
string = """alpha IP track batch size
A 1 2 3 5
B 9 1 2 8
C 10 5 5 10"""
temp = [x.split() for x in string.split('\n')]
cols = temp[0]
data = temp[1:]
def test(df):
    list_1to5 = []
    list_1to10 = []
    for col in df.columns:
        if df[col].dtype != 'O':   # skip non-numeric (object) columns
            if df[col].max() == 5:
                list_1to5.append(col)
            else:
                list_1to10.append(col)
    return list_1to5, list_1to10

df = pd.DataFrame(data, columns=cols)
df[cols[1:]] = df[cols[1:]].astype(float)   # cast only the numeric columns
test(df)
Output:
(['track', 'batch'], ['IP', 'size'])
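If the columns really are object dtype (e.g. digit strings), an alternative sketch (the `split_by_max` name is mine, not from the answer above) coerces each column with pd.to_numeric before comparing, so no up-front cast is needed:

```python
import pandas as pd

def split_by_max(df):
    # Hypothetical helper: coerce each column to numeric first, so
    # object-dtype columns holding digit strings work too.
    list_1to5, list_1to10 = [], []
    for col in df.columns:
        nums = pd.to_numeric(df[col], errors='coerce')
        if nums.isna().all():          # genuinely non-numeric column, skip it
            continue
        (list_1to5 if nums.max() == 5 else list_1to10).append(col)
    return list_1to5, list_1to10

df = pd.DataFrame({'alpha': ['A', 'B', 'C'],
                   'IP': ['1', '9', '10'],      # digit strings, object dtype
                   'track': ['2', '1', '5'],
                   'batch': ['3', '2', '5'],
                   'size': ['5', '8', '10']})
result = split_by_max(df)
```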
I have a dataframe with several columns, but some of them start with test_.
Below a sample with ONLY these test_ columns:
c = pd.DataFrame({'test_pierce':[10,30,40,50],'test_failure':[30,10,20,10] })
What I need to do:
For every column in my dataframe that starts with test_, I want to create another column right after it that classifies its value like this:
if test_ > 30.0:
    Y
else:
    N
To get this output:
d = pd.DataFrame({'test_pierce':[10,30,40,50],'class_test_pierce':['N','N','Y','Y'],'test_failure':[30,10,20,10], 'class_test_failure':['N','N','N','N'] })
What I did:
I have the columns I need to classify:
cols = [c for c in c.columns if c.startswith('test_')]
I couldn't proceed from here, though.
Code with the suggested order:
The code is a little ugly because you asked for each new column to be placed right after its test_ column. Otherwise the code is simpler than that.
cols = [(i, col) for i, col in enumerate(c.columns) if col.startswith('test_')]
count = 1
for index, col in cols:
    value = np.where(c[col] > 30.0, 'Y', 'N')
    c.insert(index + count, 'class_' + col, value)
    count += 1
Code without the suggested order:
cols = [col for col in c.columns if col.startswith('test_')]
for col in cols:
    c[f'class_{col}'] = np.where(c[col] > 30.0, 'Y', 'N')
A format that may help you get started is:
cols = [col for col in c.columns if col.startswith('test_')]
for col in cols:
    c[f'class_{col}'] = c.apply(lambda x: 'Y' if x[col] > 30.0 else 'N', axis=1)
Output:
test_pierce test_failure class_test_pierce class_test_failure
0 10 30 N N
1 30 10 N N
2 40 20 Y N
3 50 10 Y N
I have a dataframe like this:
df_test = pd.DataFrame({'ID1':['A','A','A','B','B','A','B','B','B','B','A','A','A','A'],
'ID2':[1,2,3,1,1,1,6,7,1,2,2,5,6,1]})
df_test
The result dataframe would look like this ('ID1' is grouped by consecutive runs of the same value: if A is repeated at least 2 times in a row, those rows are treated as a group and the mean of ID2 is calculated; similar for 'B', but only if 'B' repeats at least 3 times):
df_result = pd.DataFrame({'ID1':['A1','B1','A2'],
'mean_ID2':[2,4,3.5]})
df_result
You can use run-length encoding to figure out which rows to keep, based on the number of elements in each consecutive run. In the next step, group by consecutive runs again and take the mean.
import pdrle
r = pdrle.encode(df_test.ID1)
r["chk"] = ((r.vals == "A") & (r.runs >=2)) | ((r.vals == "B") & (r.runs >= 3))
df2 = df_test[pdrle.decode(r.chk, r.runs)]
df2.groupby(pdrle.get_id(df2.ID1)).agg({"ID1": "first", "ID2": "mean"})
# ID1 ID2
# ID1
# 0 A 2.0
# 1 B 4.0
# 2 A 3.5
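If you'd rather not depend on the third-party pdrle package, the same run-length idea can be sketched in plain pandas with shift/cumsum (the A >= 2 and B >= 3 thresholds are hard-coded here, taken from the example):

```python
import pandas as pd

df_test = pd.DataFrame({'ID1': ['A','A','A','B','B','A','B','B','B','B','A','A','A','A'],
                        'ID2': [1,2,3,1,1,1,6,7,1,2,2,5,6,1]})

# Number consecutive runs of ID1, compute each run's length, keep rows
# whose run meets the per-value threshold, then average ID2 per run.
run = (df_test['ID1'] != df_test['ID1'].shift()).cumsum()
sizes = run.map(run.value_counts())
min_len = df_test['ID1'].map({'A': 2, 'B': 3})
mask = sizes >= min_len
out = df_test[mask].groupby(run[mask]).agg(ID1=('ID1', 'first'),
                                           ID2=('ID2', 'mean'))
```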
I am trying to repair a csv file.
Some data rows need to be removed based on a couple conditions.
Say you have the following dataframe:
  A  B  C
000  0  0
000  1  0
001  0  1
011  1  0
001  1  1
If two or more rows have column A in common, I want to keep the row that has column B set to 1.
The resulting dataframe should look like this:
  A  B  C
000  1  0
011  1  0
001  1  1
I've experimented with merges and drop_duplicates but cannot seem to get the result I need. It is not certain that the row with column B = 1 will be after a row with B = 0. The take_last argument of drop_duplicates seemed attractive but I don't think it applies here.
Any advice will be greatly appreciated. Thank you.
Not straightforward, but this should work:
DF = pd.DataFrame({'A' : [0,0,1,11,1], 'B' : [0,1,0,1,1], 'C' : [0,0,1,0,1]})
DF.loc[DF.groupby('A').apply(lambda df: df[df.B == 1].index[0] if len(df) > 1 else df.index[0])]
A B C
1 0 1 0
4 1 1 1
3 11 1 0
Notes:
groupby divides DF into groups of rows with unique A values i.e. groups with A = 0 (2 rows), A=1 (2 rows) and A=11 (1 row)
apply then calls the function on each group and assembles the results
In the function (lambda) I'm looking for the index of row with value B == 1 if there is more than one row in the group, else I use the index of the default row
The result of apply is a list of index values that represent rows with B==1 if more than one row in the group else the default row for given A
The index values are then used to access the corresponding rows with the .loc indexer
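On modern pandas, a simpler sketch reaches the same result: sort so that the B == 1 row comes last within each A, then drop duplicates keeping the last row (this assumes at most one B == 1 row per duplicated A, as in the example):

```python
import pandas as pd

DF = pd.DataFrame({'A': [0, 0, 1, 11, 1],
                   'B': [0, 1, 0, 1, 1],
                   'C': [0, 0, 1, 0, 1]})

# Sorting on B puts the B == 1 row last within each A group, so
# drop_duplicates(keep='last') retains exactly that row.
result = DF.sort_values(['A', 'B']).drop_duplicates('A', keep='last')
```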
I was able to weave my way around pandas to get the result I want.
It's not pretty, but it gets the job done:
res = pd.DataFrame(columns=('CARD_NO', 'STATUS'))
for i in grouped.groups:
    if len(grouped.groups[i]) > 1:
        card_no = i
        print(card_no)
        for a in grouped.groups[card_no]:
            status = df.iloc[a]['STATUS']
            print('iloc:' + str(a) + '\t' + 'status:' + str(status))
            if status == 1:
                print('yes')
                row = pd.DataFrame([dict(CARD_NO=card_no, STATUS=status)])
                res = pd.concat([res, row], ignore_index=True)
            else:
                print('no')
    else:
        # only 1 record found
        # could be a status of 0 or 1
        # add to dataframe
        print('UNIQUE RECORD')
        card_no = i
        print(card_no)
        status = df.iloc[grouped.groups[card_no][0]]['STATUS']
        print(grouped.groups[card_no][0])
        print('iloc:' + str(grouped.groups[card_no][0]) + '\t' + 'status:' + str(status))
        row = pd.DataFrame([dict(CARD_NO=card_no, STATUS=status)])
        res = pd.concat([res, row], ignore_index=True)
print(res)
I have a DataFrame with MultiIndex, for example:
In [1]: arrays = [['one','one','one','two','two','two'], [1,2,3,1,2,3]]
In [2]: df = pd.DataFrame(np.random.randn(6,2), index=pd.MultiIndex.from_tuples(list(zip(*arrays))), columns=['A','B'])
In [3]: df
Out [3]:
A B
one 1 -2.028736 -0.466668
2 -1.877478 0.179211
3 0.886038 0.679528
two 1 1.101735 0.169177
2 0.756676 -1.043739
3 1.189944 1.342415
Now I want to compute the means of elements 2 and 3 (index level 1) for each row (index level 0) and each column. So I need a DataFrame which would look like
A B
one 1 mean(df['A'].loc['one'][1:3]) mean(df['B'].loc['one'][1:3])
two 1 mean(df['A'].loc['two'][1:3]) mean(df['B'].loc['two'][1:3])
How do I do that without using loops over rows (index level 0) of the original data frame? What if I want to do the same for a Panel? There must be a simple solution with groupby, but I'm still learning it and can't think of an answer.
You can use the xs function to select on levels.
Starting with:
A B
one 1 -2.712137 -0.131805
2 -0.390227 -1.333230
3 0.047128 0.438284
two 1 0.055254 -1.434262
2 2.392265 -1.474072
3 -1.058256 -0.572943
You can then create a new dataframe using:
DataFrame({'one':df.xs('one',level=0)[1:3].apply(np.mean), 'two':df.xs('two',level=0)[1:3].apply(np.mean)}).transpose()
which gives the result:
A B
one -0.171549 -0.447473
two 0.667005 -1.023508
To do the same without specifying the items in the level, you can use groupby:
grouped = df.groupby(level=0)
d = {}
for g in grouped:
    d[g[0]] = g[1][1:3].apply(np.mean)
DataFrame(d).transpose()
I'm not sure about Panels - they're not as well documented, but something similar should be possible.
I know this is an old question, but for reference for anyone who searches and finds this page, the easier solution I think is the level keyword of mean:
In [4]: arrays = [['one','one','one','two','two','two'],[1,2,3,1,2,3]]
In [5]: df = pd.DataFrame(np.random.randn(6,2), index=pd.MultiIndex.from_tuples(list(zip(*arrays))), columns=['A','B'])
In [6]: df
Out[6]:
A B
one 1 -0.472890 2.297778
2 -2.002773 -0.114489
3 -1.337794 -1.464213
two 1 1.964838 -0.623666
2 0.838388 0.229361
3 1.735198 0.170260
In [7]: df.mean(level=0)
Out[7]:
A B
one -1.271152 0.239692
two 1.512808 -0.074682
In this case it means that level 0 is kept while the mean is taken over the remaining level, along axis 0 (the rows, the default for mean).
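Note that the level keyword of mean has since been deprecated (and removed in pandas 2.0); on recent versions the equivalent is a level-based groupby:

```python
import numpy as np
import pandas as pd

arrays = [['one','one','one','two','two','two'], [1,2,3,1,2,3]]
df = pd.DataFrame(np.random.randn(6, 2),
                  index=pd.MultiIndex.from_tuples(list(zip(*arrays))),
                  columns=['A', 'B'])

# Group on index level 0 and average over the remaining level;
# same result as the old df.mean(level=0).
means = df.groupby(level=0).mean()
```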
Do the following:
# Specify the indices you want to work with.
idxs = [("one", elem) for elem in [2, 3]] + [("two", elem) for elem in [2, 3]]
# Compute the grouped mean over only those indices.
df.loc[idxs].mean(level=0)