Allocating labels to grouped values in pandas

I am producing some statistics which require grouping results by church, and only counting those churches which have been visited more than once.
So I can do:
df = pd.read_excel('/home/tim/metatron/church_data.xlsx')
chthresh = 1 # Minimum number of visits to a church in order to be considered
chgp = df.groupby('Church')
chcnt = pd.DataFrame(chgp['Date'].count())
chcnt2 = chcnt[chcnt['Date'] > chthresh]
which gives me what I want:
In[8]: chcnt2
Out[8]:
Date
Church
Manchester 36
Sale 29
Salford 33
For the purposes of analysis, though, I would like to anonymise these churches and replace them with (say) A, B, C, etc. (there may be more than three churches). What would be the easiest/best way to allocate some sort of alphabetic label, e.g. in this case "Manchester" -> "A", "Sale" -> "B", "Salford" -> "C"?
I can give the churches some sort of ordinal value:
chcnt3 = chcnt2.reset_index()
chcnt3['Ordinal'] = chcnt3.index.values
Which produces
In[9]: chcnt3
Out[9]:
Church Date Ordinal
0 Manchester 36 0
1 Sale 29 1
2 Salford 33 2
But how would I convert this to some sort of letter? Is there a better way to do this?

You can create a letter map:
from string import ascii_uppercase
letter_map = dict(zip(range(len(ascii_uppercase)), ascii_uppercase))
and use this for mapping:
chcnt3['letter'] = chcnt3['Ordinal'].map(letter_map)
chcnt3
Out:
Church Date Ordinal letter
0 Manchester 36 0 A
1 Sale 29 1 B
2 Salford 33 2 C
Without creating the ordinal column, you can do this on the chcnt2 DataFrame too:
chcnt2['letter'] = list(ascii_uppercase[:len(chcnt2)])
chcnt2
Out:
Date letter
Church
Manchester 36 A
Sale 29 B
Salford 33 C
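Both versions above assume at most 26 groups; since there may be more churches than letters, one hedged fallback is to extend the pool to two-letter labels with itertools.product (a sketch, not part of the original answers):
from itertools import product
from string import ascii_uppercase

# A..Z followed by AA, AB, ..., ZZ -- 702 labels in total
labels = list(ascii_uppercase) + [a + b for a, b in product(ascii_uppercase, repeat=2)]
chcnt2['letter'] = labels[:len(chcnt2)]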

Related

Pandas groupby nlargest slice

There are some similarly named questions, but they do not reflect the use case I am facing. I have a dataframe with groups and values. I want to select values sliced by their order (confusing, maybe; the example will explain better).
This is my data:
group value
a 20
a 16
a 14
a 13
a 12
b 19
b 17
b 16
b 14
b 13
b 12
b 12
b 11
I want to group by group and slice [a:b] with nlargest logic; in other words, if a = 2 and b = 7, the 3rd, 4th, 5th, 6th and 7th biggest values for each group. I could not find any question here on this use case, nor anything in the pandas-dev GitHub.
If there are fewer than b elements in any of the groups, then b = len(of that group) should be applied. If there are two or more elements with the same value, they should all be selected if they are within the [a:b] slice.
My desired result looks like this:
group value
a 14
a 13
a 12
b 16
b 14
b 13
b 12
b 12
Here, group a has 5 elements, which is less than b, so the 3rd to 5th biggest elements are returned. In group b the 6th and 7th biggest values are the same, so they are both returned.
The closest question to mine is this question about slicing, but it does not use nlargest logic; it just slices the groups.
If you could guide me on that, I would appreciate it!
You could try the following:
import pandas as pd
gbg = df.groupby('group')
a = 2
b = 7
res = (gbg['value']
       .agg(lambda x: x.sort_values(ascending=False).to_list()[a:b])
       .to_frame()
       .explode('value')
       .reset_index())
# .agg "aggregates" each group; sorting descending first makes the slice
#      follow nlargest logic even if the input is not already sorted
# .to_frame converts the result from pd.Series to pd.DataFrame
# .explode() writes the list values out as rows again
# .reset_index() restores the column 'group'
The intermediate result after .agg():
group
a [14, 13, 12]
b [16, 14, 13, 12, 12]
Name: value, dtype: object
And the full result:
group value
0 a 14
1 a 13
2 a 12
3 b 16
4 b 14
5 b 13
6 b 12
7 b 12
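If you would rather not rely on sorting inside the lambda, a hedged alternative sketch is to take the top b rows per group and then drop the top a (same a, b and df as above); because the selection is row-based, tied values survive:
df_sorted = df.sort_values('value', ascending=False)
top_b = df_sorted.groupby('group').head(b)   # at most b largest rows per group
top_a = df_sorted.groupby('group').head(a)   # the a largest rows per group
res = top_b.drop(top_a.index)                # ranks a..b-1; short groups handled naturally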
By sorting the dataframe first and then slicing each group positionally with nth (which accepts a slice since pandas 1.4), this approach gives me the result I expected.
df.sort_values(["group", "value"], ascending=[True, False]).groupby("group").nth(slice(2, 7))
Output is
group value
a 14
a 13
a 12
b 16
b 14
b 13
b 12
b 12

Pandas groupby custom nlargest

While trying to solve my own question here, I came up with an interesting problem. Consider this dataframe:
import pandas as pd
import numpy as np
np.random.seed(0)
df = pd.DataFrame(dict(group=np.random.choice(["a", "b", "c", "d"], size=100),
                       values=np.random.randint(0, 100, size=100)))
I want to select the top values for each group, but according to some range: say, the top x to y values for each group. If any group has fewer than x values in it, give top(min((y-x), x)) values for that group.
In general, I am looking for a custom-made alternative function which could be used with groupby objects to select not the top n values, but a top x to y range of values.
EDIT: nlargest() is a special case of the solution to my problem where x = 1 and y = n
Any further help or guidance will be appreciated.
Adding an example with this df and top(3, 6). For every group, output the values from the 3rd biggest up to the 6th:
group value
a 190
b 166
a 163
a 106
b 86
a 77
b 70
b 69
c 67
b 54
b 52
a 50
c 24
a 20
a 11
As group c has just two members, it will output top(3)
group value
a 106
a 77
a 50
b 69
b 54
b 52
c 67
c 24
There are other means of doing this, and depending on how large your dataframe is, you may want to search for groupby slice or something similar. You may also need to check that my conditions are correct (<, <=, etc.).
x = 3
y = 6
# this gets the groups which don't meet the x minimum; they are kept whole
df1 = df[df.groupby('group')['value'].transform('count') < x]
# this df takes the groups meeting the minimum and shifts each one up by x-1
# (this assumes values are already sorted descending within each group),
# does some cleanup, and chooses nlargest
df2 = df[df.groupby('group')['value'].transform('count') >= x].copy()
df2['shifted'] = df2.groupby('group').shift(periods=-(x - 1))
df2.drop('value', axis=1, inplace=True)
df2 = (df2.groupby('group')['shifted']
          .nlargest(y - x)
          .reset_index()
          .rename(columns={'shifted': 'value'})
          .drop('level_1', axis=1))
# putting it all together
df_final = pd.concat([df1, df2])
df_final
group value
8 c 67.0
12 c 24.0
0 a 106.0
1 a 77.0
2 a 50.0
3 b 70.0
4 b 69.0
5 b 54.0
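A more compact sketch of the same logic using rank (same x, y and column names as above; method='first' gives tied values distinct ranks so duplicates are kept, and groups smaller than x are kept whole, as in the concat above):
cnt = df.groupby('group')['value'].transform('count')                     # group sizes
rnk = df.groupby('group')['value'].rank(method='first', ascending=False)  # 1 = largest
df_final = df[((cnt >= x) & (rnk >= x) & (rnk < y)) | (cnt < x)]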

Sorting df by column name of type timestamp

I have a dataframe df which consists of columns of countries and rows of dates. The index is of type "DateTime."
I would like to sort the df by each country's value at the last element in the series (e.g., the latest date) and then graph the "top N" countries by this latest value.
I thought that if I sorted the transpose of the df and then sliced it, I would have what I need. Hence, if N = 10, I would select the first ten rows, df_T[0:10].
However, when I attempt to sort by the last column, I get a KeyError referencing the selected column:
KeyError: '2021-03-28 00:00:00'
I'm stumped....
df_T = df.transpose()
column_name = str(df_T.columns[-1])
df_T.sort_values(by = column_name, axis = 'columns', inplace = True)
# select the top 10 countries by latest value, e.g.
# plot df_T[0:10]
What I'm trying to do, example df:
A B C .... X Y Z
2021-03-29 10 20 5 .... 50 100 7
2021-03-28 9 19 4 .... 45 90 6
2021-03-27 8 15 2 .... 40 80 4
...
2021-01-03 0 0 0 .... 0 0 0
I want to select the series represented by the greatest N values as of the latest index value (e.g., the latest date).
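One likely cause: with axis='columns', sort_values looks up by among the row labels (here, the countries), so the stringified date is not found and raises the KeyError. The transpose is not needed at all; a minimal sketch, assuming the index is a DatetimeIndex and N = 10:
N = 10
latest = df.sort_index().iloc[-1]   # the row for the most recent date, indexed by country
top_n = latest.nlargest(N).index    # labels of the N countries with the largest latest values
df[top_n].plot()                    # plot only those columns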

Python Dataframe column operation using lambda function [duplicate]

I'm trying to multiply two existing columns in a pandas Dataframe (orders_df): Prices (stock close price) and Amount (stock quantities) and add the calculation to a new column called Value. For some reason when I run this code, all the rows under the Value column are positive numbers, while some of the rows should be negative. Under the Action column in the DataFrame there are seven rows with the 'Sell' string and seven with the 'Buy' string.
for i in orders_df.Action:
    if i == 'Sell':
        orders_df['Value'] = orders_df.Prices * orders_df.Amount
    elif i == 'Buy':
        orders_df['Value'] = -orders_df.Prices * orders_df.Amount
Please let me know what I'm doing wrong!
I think an elegant solution is to use the where method (also see the API docs):
In [37]: values = df.Prices * df.Amount
In [38]: df['Values'] = values.where(df.Action == 'Sell', other=-values)
In [39]: df
Out[39]:
Prices Amount Action Values
0 3 57 Sell 171
1 89 42 Sell 3738
2 45 70 Buy -3150
3 6 43 Sell 258
4 60 47 Sell 2820
5 19 16 Buy -304
6 56 89 Sell 4984
7 3 28 Buy -84
8 56 69 Sell 3864
9 90 49 Buy -4410
Furthermore, this should be the fastest solution.
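If you want to verify the speed claim on your own data, a rough sketch using IPython's %timeit (the synthetic frame below is purely illustrative):
import numpy as np
import pandas as pd

n = 100_000
df = pd.DataFrame({'Prices': np.random.randint(1, 100, n),
                   'Amount': np.random.randint(1, 100, n),
                   'Action': np.random.choice(['Buy', 'Sell'], n)})
values = df.Prices * df.Amount
%timeit values.where(df.Action == 'Sell', other=-values)  # vectorized
%timeit df.apply(lambda r: r.Prices * r.Amount if r.Action == 'Sell' else -r.Prices * r.Amount, axis=1)  # row-wise, typically orders of magnitude slower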
You can use the DataFrame apply method:
orders_df['Value'] = orders_df.apply(lambda row: (row['Prices'] * row['Amount']
                                                  if row['Action'] == 'Sell'
                                                  else -row['Prices'] * row['Amount']),
                                     axis=1)
It is usually faster to use these methods than explicit for loops.
If we're willing to sacrifice the succinctness of Hayden's solution, one could also do something like this:
In [22]: orders_df['C'] = orders_df.Action.apply(
             lambda x: 1 if x == 'Sell' else -1)
In [23]: orders_df # New column C represents the sign of the transaction
Out[23]:
Prices Amount Action C
0 3 57 Sell 1
1 89 42 Sell 1
2 45 70 Buy -1
3 6 43 Sell 1
4 60 47 Sell 1
5 19 16 Buy -1
6 56 89 Sell 1
7 3 28 Buy -1
8 56 69 Sell 1
9 90 49 Buy -1
Now we have eliminated the need for the if statement. Using DataFrame.apply(), we also do away with the for loop. As Hayden noted, vectorized operations are always faster.
In [24]: orders_df['Value'] = orders_df.Prices * orders_df.Amount * orders_df.C
In [25]: orders_df # The resulting dataframe
Out[25]:
Prices Amount Action C Value
0 3 57 Sell 1 171
1 89 42 Sell 1 3738
2 45 70 Buy -1 -3150
3 6 43 Sell 1 258
4 60 47 Sell 1 2820
5 19 16 Buy -1 -304
6 56 89 Sell 1 4984
7 3 28 Buy -1 -84
8 56 69 Sell 1 3864
9 90 49 Buy -1 -4410
This solution takes two lines of code instead of one, but is a bit easier to read. I suspect that the computational costs are similar as well.
Since this question came up again, I think a good clean approach is using assign.
The code is quite expressive and self-describing:
df = df.assign(Value=lambda x: x.Prices * x.Amount * x.Action.replace({'Sell': 1, 'Buy': -1}))
To make things neat, I take Hayden's solution but make a small function out of it.
def create_value(row):
    if row['Action'] == 'Sell':
        return row['Prices'] * row['Amount']
    else:
        return -row['Prices'] * row['Amount']
so that when we want to apply the function to our dataframe, we can do:
df['Value'] = df.apply(create_value, axis=1)
...and any modifications only need to occur in the small function itself.
Concise, Readable, and Neat!
Good solution from bmu. I think it's more readable to put the values inside the parentheses vs outside.
import numpy as np

df['Values'] = np.where(df.Action == 'Sell',
                        df.Prices * df.Amount,
                        -df.Prices * df.Amount)
Or, using some pandas built-in functions:
df['Values'] = np.where(df.Action.eq('Sell'),
                        df.Prices.mul(df.Amount),
                        -df.Prices.mul(df.Amount))
For me, this is the clearest and most intuitive:
values = []
for action in ['Sell', 'Buy']:
    mask = orders_df['Action'] == action
    amounts = orders_df['Amount'][mask].values
    if action == 'Sell':
        prices = orders_df['Prices'][mask].values
    else:
        prices = -1 * orders_df['Prices'][mask].values
    values.append(pd.Series(amounts * prices, index=orders_df.index[mask]))
orders_df['Values'] = pd.concat(values)
The .values method returns a numpy array, allowing you to multiply element-wise; keeping each chunk in a Series indexed by the matching rows means the results stay aligned with the original dataframe when they are written back.
First, multiply the columns Prices and Amount. Afterwards, use mask to negate the values where the condition is True:
df.assign(
    Values=(df["Prices"] * df["Amount"]).mask(df["Action"] == "Buy", lambda x: -x)
)

Compare two data frames for different values in a column

I have two dataframes. Please tell me how I can compare them by operator name; if a name matches, add the count and time values to the first dataframe.
In [2]: df1
Out[2]:
Name count time
0 Bob 123 4:12:10
1 Alice 99 1:01:12
2 Sergei 78 0:18:01
85 rows x 3 columns

In [3]: df2
Out[3]:
Name count time
0 Rick 9 0:13:00
1 Jone 7 0:24:21
2 Bob 10 0:15:13
105 rows x 3 columns
I want to get:
In [5]: df1
Out[5]:
Name count time
0 Bob 133 4:27:23
1 Alice 99 1:01:12
2 Sergei 78 0:18:01
85 rows x 3 columns
Use set_index on both frames and add them together. Finally, update the result back into df1.
df1 = df1.set_index('Name')
df1.update(df1 + df2.set_index('Name'))
df1 = df1.reset_index()
Out[759]:
Name count time
0 Bob 133.0 04:27:23
1 Alice 99.0 01:01:12
2 Sergei 78.0 00:18:01
Note: I assume the time columns in both df1 and df2 are already in a proper timedelta format. If they are strings, you need to convert them before running the above commands, as follows:
df1.time = pd.to_timedelta(df1.time)
df2.time = pd.to_timedelta(df2.time)
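An alternative sketch using concat plus groupby (same timedelta assumption as the note above; df2 rows whose Name does not appear in df1 are dropped, matching the desired output):
# keep only df2 rows whose Name already appears in df1, then aggregate
combined = pd.concat([df1, df2[df2['Name'].isin(df1['Name'])]])
df1 = combined.groupby('Name', as_index=False, sort=False).sum()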