Python: obtaining the first observation according to its date [duplicate] - pandas

I have a DataFrame with columns A, B, and C. For each value of A, I would like to select the row with the minimum value in column B.
That is, from this:
df = pd.DataFrame({'A': [1, 1, 1, 2, 2, 2],
'B': [4, 5, 2, 7, 4, 6],
'C': [3, 4, 10, 2, 4, 6]})
A B C
0 1 4 3
1 1 5 4
2 1 2 10
3 2 7 2
4 2 4 4
5 2 6 6
I would like to get:
A B C
0 1 2 10
1 2 4 4
For the moment I am grouping by column A, then creating a key that identifies the rows I want to keep:
a = data.groupby('A').min()
a['A'] = a.index
to_keep = [str(x[0]) + str(x[1]) for x in a[['A', 'B']].values]
data['id'] = data['A'].astype(str) + data['B'].astype('str')
data[data['id'].isin(to_keep)]
I am sure that there is a much more straightforward way to do this.
I have seen many answers here that use MultiIndex, which I would prefer to avoid.
Thank you for your help.

I feel like you're overthinking this. Just use groupby and idxmin:
df.loc[df.groupby('A').B.idxmin()]
A B C
2 1 2 10
4 2 4 4
df.loc[df.groupby('A').B.idxmin()].reset_index(drop=True)
A B C
0 1 2 10
1 2 4 4

I had a similar situation, but with a more complex column heading (e.g. "B val"), in which case bracket notation is needed:
df.loc[df.groupby('A')['B val'].idxmin()]

The accepted answer (suggesting idxmin) cannot easily be used with the pipe pattern. A pipe-friendly alternative is to first sort the values and then use groupby with DataFrame.head:
data.sort_values('B').groupby('A').apply(pd.DataFrame.head, n=1)
This works because by default groupby preserves the order of rows within each group, which is stable and documented behaviour (see pandas.DataFrame.groupby).
This approach has additional benefits:
it can easily be expanded to select the n rows with the smallest values in a specific column
it can break ties by passing another column (as a list) to .sort_values(), e.g.:
data.sort_values(['final_score', 'midterm_score']).groupby('year').apply(pd.DataFrame.head, n=1)
As with other answers, to exactly match the result desired in the question, .reset_index(drop=True) is needed, making the final snippet:
df.sort_values('B').groupby('A').apply(pd.DataFrame.head, n=1).reset_index(drop=True)
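For illustration, here is a minimal sketch (not from the original answer) of how this idiom slots into a pipe-style chain on the question's data; the initial .pipe(...) filter is just a hypothetical earlier step, and GroupBy.head(1) is used as a shortcut for apply(pd.DataFrame.head, n=1):
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 1, 2, 2, 2],
                   'B': [4, 5, 2, 7, 4, 6],
                   'C': [3, 4, 10, 2, 4, 6]})

result = (df
          .pipe(lambda d: d[d['C'] > 0])   # placeholder for some earlier pipeline step
          .sort_values('B')                # smallest B comes first within each group
          .groupby('A')
          .head(1)                         # keep the first (smallest-B) row per group
          .reset_index(drop=True))
print(result)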

I found an answer that is a little more wordy, but a lot more efficient:
This is the example dataset:
data = pd.DataFrame({'A': [1,1,1,2,2,2], 'B':[4,5,2,7,4,6], 'C':[3,4,10,2,4,6]})
data
Out:
A B C
0 1 4 3
1 1 5 4
2 1 2 10
3 2 7 2
4 2 4 4
5 2 6 6
First we get the minimum values as a Series from a groupby operation:
min_value = data.groupby('A').B.min()
min_value
Out:
A
1 2
2 4
Name: B, dtype: int64
Then we merge this Series back onto the original data frame:
data = data.merge(min_value, on='A',suffixes=('', '_min'))
data
Out:
A B C B_min
0 1 4 3 2
1 1 5 4 2
2 1 2 10 2
3 2 7 2 4
4 2 4 4 4
5 2 6 6 4
Finally, we keep only the rows where B equals B_min, and drop B_min since we don't need it anymore.
data = data[data.B==data.B_min].drop('B_min', axis=1)
data
Out:
A B C
2 1 2 10
4 2 4 4
I have tested it on very large datasets and this was the only way I could make it work in a reasonable time.

You can use sort_values and drop_duplicates:
df.sort_values('B').drop_duplicates('A')
Output:
A B C
2 1 2 10
4 2 4 4
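As a side note (not part of the original answer): drop_duplicates keeps the first occurrence by default, so after sorting by B in ascending order it is the smallest-B row per A that survives; passing keep='last' would retain the largest instead:
df.sort_values('B').drop_duplicates('A', keep='last')   # row with the largest B per group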

The solution is, as written before:
df.loc[df.groupby('A')['B'].idxmin()]
But if you then get an error like:
"Passing list-likes to .loc or [] with any missing labels is no longer supported.
The following labels were missing: Float64Index([nan], dtype='float64').
See https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike"
In my case, there were NaN values in column B, so I added dropna() and then it worked:
df.loc[df.groupby('A')['B'].idxmin().dropna()]
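For illustration, a small sketch with made-up data (not from the original answer): rows where B is NaN can never be the minimum anyway, so filtering them out before grouping sidesteps the missing-label problem as well:
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 2, 2, 3],
                   'B': [4.0, 2.0, 7.0, 4.0, np.nan],
                   'C': [3, 10, 2, 4, 5]})

clean = df.dropna(subset=['B'])                      # drop rows whose B is NaN
print(clean.loc[clean.groupby('A')['B'].idxmin()])   # group 3 (all-NaN B) drops out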

You can also use boolean indexing to select the rows where column B equals the group minimum:
out = df[df['B'] == df.groupby('A')['B'].transform('min')]
print(out)
A B C
2 1 2 10
4 2 4 4
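To see why this works, note that transform('min') returns a Series aligned with the original frame, repeating each group's minimum on every row of that group (shown here for the example df from the question):
print(df.groupby('A')['B'].transform('min'))
0 2
1 2
2 2
3 4
4 4
5 4
Name: B, dtype: int64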

Related

String to Columns

I have a string column in my df.
col
a: 1, b: 2, c: 3
b: 1, c: 3, a: 4
c: 2, b: 4, a: 3
I wish to convert this into multiple columns as:
a b c
1 2 3
4 1 3
3 4 2
Need help regarding this.
I am trying to convert this into a dict and then sort the dict. After that, I may do a pivot table. I am not exactly sure whether that will work, but any help or a better method will be appreciated.
Use a nested list comprehension with a double split (by ', ' and ': ') to build a list of dictionaries, and pass it to the DataFrame constructor:
df = pd.DataFrame([dict(y.split(': ') for y in x.split(', ')) for x in df['col']],
                  index=df.index)
print (df)
a b c
0 1 2 3
1 4 1 3
2 3 4 2
You can use str.extractall and unstack:
(df['col'].str.extractall(r'(\w+):\s*([^,]+)')
   .set_index(0, append=True).droplevel('match')[1]
   .unstack(0)
)
Output:
a b c
0 1 2 3
1 4 1 3
2 3 4 2
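For illustration (not part of the original answer), the intermediate result of extractall on the example data is a long frame with a (row, match) MultiIndex and two capture-group columns, 0 holding the key and 1 holding the value:
import pandas as pd

df = pd.DataFrame({'col': ['a: 1, b: 2, c: 3',
                           'b: 1, c: 3, a: 4',
                           'c: 2, b: 4, a: 3']})
# Long format before reshaping: one row per key/value pair per original row.
print(df['col'].str.extractall(r'(\w+):\s*([^,]+)'))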

most efficient way to set dataframe column indexing to other columns

I have a large DataFrame. One of my columns contains the names of other columns. I want to evaluate this column and set, in each row, the value of the referenced column:
A B C Column
1 3 4 B
2 5 3 A
3 5 9 C
Desired output:
A B C Column
1 3 4 3
2 5 3 2
3 5 9 9
I am achieving this result using:
df.apply(lambda d: eval("d." + d['Column']), axis=1)
But it is very slow, even using swifter. Is there a more efficient way of performing this?
For better performance, use df.to_numpy():
In [365]: df['Column'] = df.to_numpy()[df.index, df.columns.get_indexer(df.Column)]
In [366]: df
Out[366]:
A B C Column
0 1 3 4 3
1 2 5 3 2
2 3 5 9 9
For Pandas < 1.2.0, use lookup:
df['Column'] = df.lookup(df.index, df['Column'])
From 1.2.0 onwards, lookup is deprecated; you can just use a for loop:
df['Column'] = [df.at[idx, r['Column']] for idx, r in df.iterrows()]
Output:
A B C Column
0 1 3 4 3
1 2 5 3 2
2 3 5 9 9
Since lookup is going to be deprecated, try the NumPy method with get_indexer:
df['new'] = df.values[df.index,df.columns.get_indexer(df.Column)]
df
Out[75]:
A B C Column new
0 1 3 4 B 3
1 2 5 3 A 2
2 3 5 9 C 9
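For illustration (not in the original answer), get_indexer simply converts the column names stored in df.Column into integer positions, and those positions then index the underlying 2-D array row by row:
print(df.columns.get_indexer(df.Column))   # for the example above: [1 0 2]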

Know which row is similar to another in a data frame

I have a data frame which is called data.
Also, the dataframe group1 has 2057 columns and 197 rows, and I want to know which rows are similar to one another.
I made this:
group1 = pd.crosstab(data.column1, data.column2)
group1["EsDuplicado?"] = group1.duplicated(subset=group1.columns.difference(['BCP_Nombre_de_la_Matriz__c']), keep=False)
So far this works: it adds a new column that is True when the row is similar to another one and False when it is not.
Now I want to know which rows are similar, and exactly which row each one is paired with.
Until now my table looks like the first screenshot, but I would want something like the second one (screenshots not reproduced here).
Or maybe that is not necessary; it may be enough if the similar rows end up next to each other, so that I can see which rows match.
I want something like this question, but that example has only 2 columns, while in my case I have 2057 columns:
find duplicate rows in a pandas dataframe
The answer by @cs95 to the question you linked to can straightforwardly be generalized to any number of columns.
Here's a small example dataset for testing, but I'll make the second code block general, so that it should also work for your DataFrame, as long as its name is group1.
import pandas as pd
group1 = pd.DataFrame({'column 1': [1, 1, 1, 1, 1, 1],
'column 2': [2, 2, 2, 2, 2, 2],
'column 3': [3, 4, 3, 5, 3, 4]})
group1
column 1 column 2 column 3
0 1 2 3
1 1 2 4
2 1 2 3
3 1 2 5
4 1 2 3
5 1 2 4
group1['first such row'] = (group1.groupby(list(group1.columns))[group1.columns[0]]
                            .transform('idxmin'))
group1
column 1 column 2 column 3 first such row
0 1 2 3 0
1 1 2 4 1
2 1 2 3 0
3 1 2 5 3
4 1 2 3 0
5 1 2 4 1
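As a possible follow-up (not in the original answer): if it is enough to have identical rows next to each other, sorting on the helper column groups each set of duplicates together:
print(group1.sort_values('first such row'))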

pandas: idxmax for k-th largest

Having df of probabilities distribution, I get max probability for rows with df.idxmax(axis=1) like this:
df['1k-th'] = df.idxmax(axis=1)
and get the following result:
0 1 2 3 4 5 6 1k-th
0 0.114869 0.020708 0.025587 0.028741 0.031257 0.031619 0.747219 6
1 0.020206 0.012710 0.010341 0.012196 0.812495 0.113863 0.018190 4
2 0.023585 0.735475 0.091795 0.021683 0.027581 0.054217 0.045664 1
3 0.009834 0.009175 0.013165 0.016014 0.015507 0.899115 0.037190 5
4 0.023357 0.736059 0.088721 0.021626 0.027341 0.056289 0.046607 1
the question is how to get the 2nd-, 3rd-, etc. largest probabilities, so that I get the following result:
0 1 2 3 4 5 6 1k-th 2-th
0 0.114869 0.020708 0.025587 0.028741 0.031257 0.031619 0.747219 6 0
1 0.020206 0.012710 0.010341 0.012196 0.812495 0.113863 0.018190 4 3
2 0.023585 0.735475 0.091795 0.021683 0.027581 0.054217 0.045664 1 4
3 0.009834 0.009175 0.013165 0.016014 0.015507 0.899115 0.037190 5 4
4 0.023357 0.736059 0.088721 0.021626 0.027341 0.056289 0.046607 1 2
Thank you!
My own solution is not the prettiest, but it does its job and works fast:
for i in range(7):
    p[f'{i}k'] = p[[0, 1, 2, 3, 4, 5, 6]].idxmax(axis=1)
    p[f'{i}k_v'] = p[[0, 1, 2, 3, 4, 5, 6]].max(axis=1)
    for x in range(7):
        p[x] = np.where(p[x] == p[f'{i}k_v'], np.nan, p[x])
On each pass the loop:
finds the largest remaining value and its column index,
drops the found value (sets it to NaN),
and then repeats, so the next pass finds the 2nd largest value, drops it, and so on.
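For comparison, here is a minimal sketch (not from the original answer) that gets all ranks in one pass with NumPy argsort; the random frame below merely stands in for the probability table:
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
p = pd.DataFrame(rng.random((5, 7)))            # columns 0..6 with illustrative values
p = p.div(p.sum(axis=1), axis=0)                # normalise each row to sum to 1

order = np.argsort(-p.to_numpy(), axis=1)       # column positions, largest value first
cols = p.columns.to_numpy()
p['1k-th'] = cols[order[:, 0]]                  # column holding the largest value
p['2-th'] = cols[order[:, 1]]                   # column holding the 2nd largest value
print(p)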

Replacing values in pandas data frame

I am looking for a pythonic way of replacing values based on whether they are big or small. Say I have a data frame:
ds = pandas.DataFrame({'x' : [4,3,2,1,5], 'y' : [4,5,6,7,8]})
I'd like to replace values on x which are lower than 2 by 2 and values higher than 4 by 4. And similarly with y values, replacing values lower than 5 by 5 and values higher than 7 by 7 so as to get this data frame:
ds = pandas.DataFrame({'x' : [4,3,2,2,4], 'y' : [5,5,6,7,7]})
I did it by iterating over the rows, but it is really ugly. Is there a more pandas-pythonic way? (Basically I want to eliminate extreme values.)
You can do this with clip:
ds.x.clip(2,4)
Out[42]:
0 4
1 3
2 2
3 2
4 4
Name: x, dtype: int64
# ds.x = ds.x.clip(2, 4)
# ds.y = ds.y.clip(5, 7)
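As an aside (a sketch, not part of the original answer, assuming a reasonably recent pandas version), both clips can be applied in a single call by passing per-column bounds as Series and aligning them with the columns via axis=1:
import pandas as pd

ds = pd.DataFrame({'x': [4, 3, 2, 1, 5], 'y': [4, 5, 6, 7, 8]})
lower = pd.Series({'x': 2, 'y': 5})
upper = pd.Series({'x': 4, 'y': 7})
print(ds.clip(lower=lower, upper=upper, axis=1))   # matches the desired output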
One way of doing this, column by column with boolean indexing via .loc:
>>> ds.loc[ds.x.le(2), 'x'] = 2
>>> ds.loc[ds.x.ge(4), 'x'] = 4
>>> ds.loc[ds.y.le(5), 'y'] = 5
>>> ds.loc[ds.y.ge(7), 'y'] = 7
>>> ds
x y
0 4 5
1 3 5
2 2 6
3 2 7
4 4 7