How to plot this graph? - pandas

import matplotlib.pyplot as plt

total_income = df.groupby('genres')['gross'].sum()
average_income = df.groupby('genres')['gross'].mean()
total_income.plot.bar(label="Total Income", color='r')
average_income.plot.bar(label="Average Income")
plt.xlabel("Genres")
plt.ylabel("Dollars (Gross)")
plt.yscale("log")
Here's my code, which plots the sum and the average of gross by movie genre. The problem is that when I plot the graph, it comes out completely black. I believe this is because the genre labels are long strings that each contain multiple genres.
How can I fix this so it shows the graph and its genres? I need assistance.

You can use Series.str.split to turn each genre string into a list, then Series.str.len to get each list's length.
Last, create a new DataFrame with the constructor, numpy.repeat and numpy.concatenate:
df = pd.DataFrame({'genres': ['Comedy|Crime|Drama|Thriller', 'Comedy|Crime|Drama',
                              'Comedy|Crime', 'Drama|Thriller', 'Drama', 'Comedy|Crime'],
                   'gross': [10, 20, 30, 40, 50, 60]})
print (df)
                        genres  gross
0  Comedy|Crime|Drama|Thriller     10
1           Comedy|Crime|Drama     20
2                 Comedy|Crime     30
3               Drama|Thriller     40
4                        Drama     50
5                 Comedy|Crime     60
splitted = df['genres'].str.split('|')
l = splitted.str.len()
df = pd.DataFrame({'gross': np.repeat(df['gross'], l), 'genres':np.concatenate(splitted)})
print (df)
     genres  gross
0    Comedy     10
0     Crime     10
0     Drama     10
0  Thriller     10
1    Comedy     20
1     Crime     20
1     Drama     20
2    Comedy     30
2     Crime     30
3     Drama     40
3  Thriller     40
4     Drama     50
5    Comedy     60
5     Crime     60
d = {'mean':'Average','sum':'Total'}
df1 = df.groupby('genres')['gross'].agg(['sum','mean']).rename(columns=d)
print (df1)
          Total  Average
genres
Comedy      120       30
Crime       120       30
Drama       120       30
Thriller     50       25
df1.plot.bar()
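On pandas 0.25 or newer, Series.explode offers a more concise alternative to the repeat/concatenate approach above (a sketch using the same sample data):

```python
import pandas as pd

df = pd.DataFrame({'genres': ['Comedy|Crime|Drama|Thriller', 'Comedy|Crime|Drama',
                              'Comedy|Crime', 'Drama|Thriller', 'Drama', 'Comedy|Crime'],
                   'gross': [10, 20, 30, 40, 50, 60]})

# split each string into a list of genres, then emit one row per genre
df = df.assign(genres=df['genres'].str.split('|')).explode('genres')

# aggregate exactly as before
d = {'mean': 'Average', 'sum': 'Total'}
df1 = df.groupby('genres')['gross'].agg(['sum', 'mean']).rename(columns=d)
print(df1)
```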

Related

How to use Pandas to find relative share of rooms in individual floors [duplicate]

This question already has answers here:
Pandas percentage of total with groupby
I have a dataframe df with the following structure
   Floor         Room  Area
0      1  Living room    25
1      1      Kitchen    20
2      1      Bedroom    15
3      2     Bathroom    21
4      2      Bedroom    14
and I want to add a series floor_share with the relative share/ratio for the given floor, so that the dataframe becomes

   Floor         Room  Area  floor_share
0      1  Living room    18         0.30
1      1      Kitchen    18         0.30
2      1      Bedroom    24         0.40
3      2     Bathroom    10         0.33
4      2      Bedroom    20         0.67
If it is possible to do this with a one-liner (or any other idiomatic manner), I'll be very happy to learn how.
Current workaround
What I have done that produces the correct results is to first find the total floor areas by
floor_area_sums = df.groupby('Floor')['Area'].sum()
which gives
Floor
1 60
2 35
Name: Area, dtype: int64
I then initialize a new series to 0, and find the correct values while iterating through the dataframe rows.
df["floor_share"] = 0
for idx, row in df.iterrows():
    df.loc[idx, 'floor_share'] = df.loc[idx, 'Area'] / floor_area_sums[row.Floor]
IIUC use:
df["floor_share"] = df['Area'].div(df.groupby('Floor')['Area'].transform('sum'))
print (df)
   Floor         Room  Area  floor_share
0      1  Living room    18     0.300000
1      1      Kitchen    18     0.300000
2      1      Bedroom    24     0.400000
3      2     Bathroom    10     0.333333
4      2      Bedroom    20     0.666667
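If you prefer to keep the normalisation inside the groupby, an equivalent spelling is a transform with a lambda; it gives the same result, usually a bit slower than the vectorised div:

```python
import pandas as pd

df = pd.DataFrame({'Floor': [1, 1, 1, 2, 2],
                   'Room': ['Living room', 'Kitchen', 'Bedroom', 'Bathroom', 'Bedroom'],
                   'Area': [18, 18, 24, 10, 20]})

# divide each Area by its floor's total in one step
df['floor_share'] = df.groupby('Floor')['Area'].transform(lambda x: x / x.sum())
print(df)
```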

Concatenate labels to an existing dataframe

I want to use a list of names "headers" to create a new column in my dataframe. In the initial table, the name of each division is positioned above the results for each team in that division. I want to add that header to each row entry for each division to make the data more identifiable, like this. I have the headers stored in the "headers" object in my code. How can I repeat each division header by the number of rows that appear in the division and append it to the dataset?
Edit: here is another snippet of what I want the get from the end product.
df3 = df.iloc[0:6]
df3.insert(0, 'Divisions', ['na','L5 Junior', 'L5 Junior', 'na',
'L5 Senior - Medium', 'L5 Senior - Medium'])
df3
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
import requests
# Import HTML
scr = ('https://tv.varsity.com/results/7361971-2022-spirit-unlimited-battle-at-the-'
       'boardwalk-atlantic-city-grand-ntls/31220')
scr1 = requests.get(scr)
soup = BeautifulSoup(scr1.text, "html.parser")

# List of names to append
sp3 = soup.find(class_="full-content").find_all("h2")
headers = [elt.text for elt in sp3]
table_MN = pd.read_html(scr)

# Extract text and header from division info
div = pd.DataFrame(headers)
div.columns = ["division"]
df = pd.concat(table_MN, ignore_index=True)
df.columns = df.iloc[0]
df
It is still not clear what output you are looking for. However, may I suggest the following, which selects the common headers from the tables in table_MN and then concatenates the results. If it is going in the right direction, please let me know, and indicate what else you want to extract from the resulting table:
tmn_1 = [tbl.T.set_index(0).T for tbl in table_MN]
pd.concat(tmn_1, axis=0, ignore_index = True)
output:
Rank Program Name Team Name Raw Score Deductions Performance Score Event Score
-- ------ --------------------------- ----------------- ----------- ------------ ------------------- -------------
0 1 Rockstar Cheer New Jersey City Girls 47.8667 0 95.7333 95.6833
1 2 Cheer Factor Xtraordinary 46.6667 0.15 93.1833 92.8541
2 1 Rockstar Cheer New Jersey City Girls 47.7667 0 95.5333 23.8833
3 2 Cheer Factor Xtraordinary 46.0333 0.2 91.8667 22.9667
4 1 Star Athletics Roar 47.5333 0.9 94.1667 93.9959
5 1 Prime Time All Stars Lady Onyx 43.9 1.35 86.45 86.6958
6 1 Prime Time All Stars Lady Onyx 44.1667 0.9 87.4333 21.8583
7 1 Just Cheer All Stars Jag 5 46.4333 0.15 92.7167 92.2875
8 1 Just Cheer All Stars Jag 5 45.8 0.6 91 22.75
9 1 Quest Athletics Black Ops 47.4333 0.45 94.4167 93.725
10 1 Quest Athletics Black Ops 46.5 1.35 91.65 22.9125
11 1 The Stingray Allstars X-Rays 45.3 0.95 89.65 88.4375
12 1 Vortex Allstars Lady Rays 45.7 0.5 90.9 91.1083
13 1 Vortex Allstars Lady Rays 45.8667 0 91.7333 22.9333
14 1 Upper Merion All Stars Citrus 46.4333 0 92.8667 92.7
15 2 Cheer Factor JUNIOR X 45.9 1.1 90.7 90.6542
16 3 NJ Premier All Stars Prodigy 44.6333 0.05 89.2167 89.8292
17 1 Upper Merion All Stars Citrus 46.1 0 92.2 23.05
18 2 NJ Premier All Stars Prodigy 45.8333 0 91.6667 22.9167
19 3 Cheer Factor JUNIOR X 45.7333 0.95 90.5167 22.6292
20 1 Virginia Royalty Athletics Dynasty 46.5 0 93 92.9
21 1 Virginia Royalty Athletics Dynasty 46.3 0 92.6 23.15
22 1 South Jersey Storm Lady Reign 47.2333 0 94.4667 93.4875
...
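To attach each division header to its rows, as asked, one approach is to insert the division name into each table before concatenating. This assumes each table in table_MN lines up one-to-one with a header from the h2 tags, which is worth verifying against the page; the sketch below uses stand-in data rather than the scraped objects:

```python
import pandas as pd

# stand-ins for the scraped table_MN and headers (hypothetical values)
table_MN = [pd.DataFrame({'Rank': [1, 2], 'Team Name': ['City Girls', 'Xtraordinary']}),
            pd.DataFrame({'Rank': [1], 'Team Name': ['Roar']})]
headers = ['L5 Junior', 'L5 Senior - Medium']

# label every row of each table with its division, then concatenate
for tbl, name in zip(table_MN, headers):
    tbl.insert(0, 'Divisions', name)
df = pd.concat(table_MN, ignore_index=True)
print(df)
```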

Pandas column merging on condition

This is my pandas df:
Id  Protein  A_Egg  B_Meat  C_Milk  Category
 A       10     10      20       0       egg
 B       20     10       0      10      milk
 C       20     10      10      10      meat
 D       25     20      10       0       egg
I wish to merge the Protein column with the matching nutrient column based on "Category".
My desired output is

Id  Protein_final
 A             20
 B             30
 C             30
 D             45
Ideally, I would like to show how I am approaching this but, frankly, I am clueless!!
EDIT: Also, how do I handle the case where Category is blank or does not match one of the columns? (In that case, the final value should be the same as the initial value in the Protein column.)
Use DataFrame.lookup with some preprocessing: remove the part of the column names before _ and lowercase them, and last add the looked-up values to the Protein column:
arr = df.rename(columns=lambda x: x.split('_')[-1].lower()).lookup(df.index, df['Category'])
df['Protein'] += arr
print (df)
  Id  Protein  A_Egg  B_Meat  C_Milk Category
0  A       20     10      20       0      egg
1  B       30     10       0      10     milk
2  C       30     10      10      10     meat
3  D       45     20      10       0      egg
If need only 2 columns finally:
df = df[['Id','Protein']]
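Note that DataFrame.lookup was deprecated in later pandas 1.x releases and removed in 2.0. On newer versions, the same per-row pick can be done with numpy integer indexing, sketched here on the sample data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Id': ['A', 'B', 'C', 'D'],
                   'Protein': [10, 20, 20, 25],
                   'A_Egg': [10, 10, 10, 20],
                   'B_Meat': [20, 0, 10, 10],
                   'C_Milk': [0, 10, 10, 0],
                   'Category': ['egg', 'milk', 'meat', 'egg']})

# normalise the nutrient column names so they match the Category values
tmp = df.rename(columns=lambda x: x.split('_')[-1].lower())
cols = tmp.columns.get_indexer(df['Category'])              # column position per row
arr = tmp.to_numpy()[np.arange(len(df)), cols].astype(int)  # per-row value pick
df['Protein'] += arr
print(df[['Id', 'Protein']])
```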
You can melt the dataframe, filter for rows where Category equals the variable column, and add the matching value to Protein:
(
    df
    .melt(["Id", "Protein", "Category"])
    .assign(variable=lambda x: x.variable.str[2:].str.lower(),
            Protein_final=lambda x: x.Protein + x.value)
    .query("Category == variable")
    .filter(["Id", "Protein_final"])
)
  Id  Protein_final
0  A             20
3  D             45
6  C             30
9  B             30

how to apply one hot encoding or get dummies on 2 columns together in pandas?

I have the below dataframe, which contains sample values like:
df = pd.DataFrame([["London", "Cambridge", 20], ["Cambridge", "London", 10], ["Liverpool", "London", 30]], columns= ["city_1", "city_2", "id"])
   city_1     city_2  id
   London  Cambridge  20
Cambridge     London  10
Liverpool     London  30
I need the output dataframe below, which is built by joining the two city columns together and applying one-hot encoding after that:
id  London  Cambridge  Liverpool
20       1          1          0
10       1          1          0
30       1          0          1
Currently, I am using the below code, which works on one column at a time. Please could you advise if there is a pythonic way to get the above output?
output_df = pd.get_dummies(df, columns=['city_1', 'city_2'])
which results in
id  city_1_Cambridge  city_1_London  and so on for the remaining columns
You can add the parameters prefix_sep and prefix to get_dummies and then use max if you want only 1 or 0 values (dummy/indicator columns), or sum if you need to count the 1 values:
output_df = (pd.get_dummies(df, columns=['city_1', 'city_2'], prefix_sep='', prefix='')
               .max(axis=1, level=0))
print (output_df)
   id  Cambridge  Liverpool  London
0  20          1          0       1
1  10          1          0       1
2  30          0          1       1
Or, if you want to process all columns except id, convert the non-processed column(s) to the index first by DataFrame.set_index, then use get_dummies with max, and last add DataFrame.reset_index:
output_df = (pd.get_dummies(df.set_index('id'), prefix_sep='', prefix='')
               .max(axis=1, level=0)
               .reset_index())
print (output_df)
   id  Cambridge  Liverpool  London
0  20          1          0       1
1  10          1          0       1
2  30          0          1       1
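A caveat for newer pandas: max(axis=1, level=0) was deprecated in later 1.x releases and removed in 2.0. A version-stable way to collapse the duplicate city columns is to transpose, group by column name, take the max, and transpose back (a sketch):

```python
import pandas as pd

df = pd.DataFrame([["London", "Cambridge", 20], ["Cambridge", "London", 10],
                   ["Liverpool", "London", 30]], columns=["city_1", "city_2", "id"])

dummies = pd.get_dummies(df.set_index('id'), prefix='', prefix_sep='')
# duplicate column names (one set per city column) are merged by grouping the transpose
output_df = dummies.T.groupby(level=0).max().T.astype(int).reset_index()
print(output_df)
```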

Groupby on two columns with bins(ranges) on one of them in Pandas Dataframe

I am trying to segregate my data into buckets based on certain user attributes, and I would like to see some counts in each of the buckets. For this I have imported the data into a Pandas DataFrame.
I have data with user city, kids' age and a unique user id. I would like to know the count of users who reside in city A and have kids in the age group 0-5.
Sample Data frame looks something like this:
city  kids_age  user_id
   A        10        1
   B         4        2
   A         4        3
   C         8        4
   A         3        5
Expected Output:
city   bin  count
   A   0-5      2
      5-10      1
   B   0-5      1
      5-10      0
   C   0-5      0
      5-10      1
I tried group by on two columns city and kids age:
user_details_df_cropped_1.groupby(['city', 'kids_age']).count()
It gave me an output that looks something like this:
city  kids_age  user_id  count
A           10        1      1
             4        3      1
             3        5      1
B            4        2      1
C            8        4      1
It returns the users grouped by city, but not really by kids' age bins (ranges). What am I missing here? Appreciate the help!!
Use cut for binning and pass the result to DataFrame.groupby, add the rows with 0 count via DataFrame.unstack and DataFrame.stack, and last convert to a DataFrame by Series.reset_index:
bins = [0,5,10]
labels = ['{}-{}'.format(i, j) for i, j in zip(bins[:-1], bins[1:])]
b = pd.cut(df['kids_age'], bins=bins, labels=labels, include_lowest=True)
df = df.groupby(['city', b]).size().unstack(fill_value=0).stack().reset_index(name='count')
print (df)
  city kids_age  count
0    A      0-5      2
1    A     5-10      1
2    B      0-5      1
3    B     5-10      0
4    C      0-5      0
5    C     5-10      1
Another solution with DataFrame.reindex and MultiIndex.from_product for adding the missing rows filled by 0:
bins = [0,5,10]
labels = ['{}-{}'.format(i, j) for i, j in zip(bins[:-1], bins[1:])]
b = pd.cut(df['kids_age'], bins=bins, labels=labels, include_lowest=True)
mux = pd.MultiIndex.from_product([df['city'].unique(), labels], names=['city','kids_age'])
df = (df.groupby(['city', b])
        .size()
        .reindex(mux, fill_value=0)
        .reset_index(name='count'))
print (df)
  city kids_age  count
0    A      0-5      2
1    A     5-10      1
2    B      0-5      1
3    B     5-10      0
4    C      0-5      0
5    C     5-10      1
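A shorter variant with the same binning is pd.crosstab, which also fills absent city/bin combinations with 0 (a sketch on the sample data):

```python
import pandas as pd

df = pd.DataFrame({'city': ['A', 'B', 'A', 'C', 'A'],
                   'kids_age': [10, 4, 4, 8, 3],
                   'user_id': [1, 2, 3, 4, 5]})

bins = [0, 5, 10]
labels = ['{}-{}'.format(i, j) for i, j in zip(bins[:-1], bins[1:])]
b = pd.cut(df['kids_age'], bins=bins, labels=labels, include_lowest=True)

# crosstab counts city/bin pairs and fills missing combinations with 0
out = pd.crosstab(df['city'], b).stack().reset_index(name='count')
print(out)
```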