Joining Multiple Data Frames

I am wondering if there is a way in Julia DataFrames to join multiple data frames in one go:
using DataFrames
employer = DataFrame(
    ID = Array{Int64}([01, 02, 03, 04, 05, 09, 11, 20]),
    name = Array{String}(["Matthews", "Daniella", "Kofi", "Vladmir", "Jean", "James", "Ayo", "Bill"])
)
salary = DataFrame(
    ID = Array{Int64}([01, 02, 03, 04, 05, 06, 08, 23]),
    amount = Array{Int64}([2050, 3000, 3500, 3500, 2500, 3400, 2700, 4500])
)
hours = DataFrame(
    ID = Array{Int64}([01, 02, 03, 04, 05, 08, 09, 23]),
    time = Array{Int64}([40, 40, 40, 40, 40, 38, 45, 50])
)
# I tried adding them in an array, but of course that results in an error
empSalHrs = innerjoin([employer,salary,hours], on = :ID)
# In python you can achieve this using
import pandas as pd
from functools import reduce
df = reduce(lambda l, r: pd.merge(l, r, on="ID"), [employer, salary, hours])
Is there a similar way to do this in Julia?

You were almost there. As it is written in the DataFrames.jl manual, you just need to pass more than one data frame as an argument.
using DataFrames
employer = DataFrame(
    ID = [01, 02, 03, 04, 05, 09, 11, 20],
    name = ["Matthews", "Daniella", "Kofi", "Vladmir", "Jean", "James", "Ayo", "Bill"]
)
salary = DataFrame(
    ID = [01, 02, 03, 04, 05, 06, 08, 23],
    amount = [2050, 3000, 3500, 3500, 2500, 3400, 2700, 4500]
)
hours = DataFrame(
    ID = [01, 02, 03, 04, 05, 08, 09, 23],
    time = [40, 40, 40, 40, 40, 38, 45, 50]
)
empSalHrs = innerjoin(employer, salary, hours, on = :ID)
If for some reason you need to put your dataframes in a Vector, you can use splatting to achieve the same result:
empSalHrs = innerjoin([employer, salary, hours]..., on = :ID)
Also, note that I've slightly changed the definitions of the data frames. Since Array{Int} does not specify the number of dimensions, it is an abstract type and shouldn't be used for variable declarations, because abstract container types are bad for performance. It may not be important in this particular scenario, but it's better to build good habits from the start. Instead of Array{Int} one can use
Array{Int, 1}([1, 2, 3, 4])
Vector{Int}([1, 2, 3, 4])
Int[1, 2, 3]
[1, 2, 3]
The last one is legit because Julia can infer the type of the container on its own in this simple scenario.

Related

Filter dataframe based on condition before groupby

Suppose I have a dataframe like this
Create sample dataframe:
import pandas as pd
import numpy as np
data = {
'gender': np.random.choice(['m', 'f'], size=100),
'vaccinated': np.random.choice([0, 1], size=100),
'got sick': np.random.choice([0, 1], size=100)
}
df = pd.DataFrame(data)
and I want to see, by gender, what proportion of vaccinated people got sick.
I've tried something like this:
df.groupby('gender').agg(lambda group: sum(group['vaccinated']==1 & group['sick']==1)
                         / sum(group['sick']==1))
but this doesn't work because agg works on the series level. Same applies for transform. apply doesn't work either, but I'm not clear on why, or how apply functions on groupby objects.
Any ideas how to accomplish this with a single line of code?
You could first filter for the vaccinated people and then group by gender and calculate the proportion of people that got sick.
df[df.vaccinated == 1].groupby("gender").agg({"got sick":"mean"})
Output:
        got sick
gender
f       0.548387
m       0.535714
In this case the proportion is calculated based on the sample data that I've created. Since got sick is a 0/1 column, taking its mean directly yields the proportion of ones.
The docs for GroupBy.apply state that the function is applied "group-wise". This means that the function is called on each group separately as a data frame.
That is, df.groupby(c).apply(f) is conceptually equivalent to:
results = {}
for val in df[c].unique():
    group = df.loc[df[c] == val]
    result = f(group)
    results[val] = result
pd.concat(results)
We can use this understanding to apply your custom aggregation function, using a top-level def just to make the code easier to read:
def calc_vax_sick_frac(group):
    vaccinated = group['vaccinated'] == 1
    sick = group['got sick'] == 1
    return (vaccinated & sick).sum() / sick.sum()
(
    df
    .groupby('gender')
    .apply(calc_vax_sick_frac)
)
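If you need the single line the question asked for, the same calculation can be written inline with a lambda; a sketch using the sample's got sick column (the question's own snippet referred to a sick column, but the sample data names it got sick):
df.groupby('gender').apply(
    lambda g: ((g['vaccinated'] == 1) & (g['got sick'] == 1)).sum()
              / (g['got sick'] == 1).sum()
)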

Working on multiple data frames with data for NBA players during the season, how can I modify all the dataframes at the same time?

I have a list of 16 dataframes that contain stats for each player in the NBA during the respective season. My end goal is to run unsupervised learning algorithms on the data frames. For example, I want to see if I can determine a player's position by their stats or if I can determine their total points during the season based on their stats.
What I would like to do, unless there's a better solution, is modify the list of these dataframes (df_list) instead of modifying each dataframe individually, to:
Change the datatype of the MP (minutes played) column from str to int.
Modify the dataframe so that it only contains players with 1000 or more MP and no duplicate players (Rk).
(For instance, in a season a player (Rk) can play for three teams and have 200 MP, 300 MP, and 400 MP with each team. He'll have a row for each team and a row called TOT, which renders his MP as 900 (200+300+400), for a total of four rows in the dataframe. I only need the TOT row.)
Use simple algebra with various individual columns, for example: totaling the MP column and the PTS column and then dividing the sum of the PTS column by the sum of the MP column.
Or dividing the total of the PTS column by the length of the PTS column.
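For example, on a single cleaned frame that kind of column algebra would look like this (df standing in for any one season's dataframe, assuming MP and PTS are already numeric):
# total points divided by total minutes played
pts_per_minute = df['PTS'].sum() / df['MP'].sum()
# total points divided by the number of rows; same as df['PTS'].mean()
avg_pts = df['PTS'].sum() / len(df['PTS'])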
What I've done so far is this:
Imported my libraries and created 16 dataframes using pd.read_html(url).
The first dataframe was created using two lines of code:
url = "https://www.basketball-reference.com/leagues/NBA_1997_totals.html"
ninetysix = pd.read_html(url)[0]
HOWEVER, the next four data frames had to be created using a few additional lines of code (I received an error that said "html5lib not found, please install it", so I downloaded both html5lib and requests). I say that because this distinction in creating the DFs may have to be considered in a solution.
The code I used:
import requests
import uuid
url = 'https://www.basketball-reference.com/leagues/NBA_1998_totals.html'
cookies = {'euConsentId': str(uuid.uuid4())}
html = requests.get(url, cookies=cookies).content
ninetyseven = pd.read_html(html)[0]
These four data frames look like this:
I tried this but it didn't do anything:
df_list = [
    eightyfour, eightyfive, eightysix, eightyseven,
    eightyeight, eightynine, ninety, ninetyone,
    ninetytwo, ninetyfour, ninetyfive,
    ninetysix, ninetyseven, ninetyeight, owe_one, owe_two
]

for df in df_list:
    df = df.loc[df['Tm'] == 'TOT']
    df = df.copy()
    df['MP'] = df['MP'].astype(int)
    df['Rk'] = df['Rk'].astype(int)
    df = list(df[df['MP'] >= 1000]['Rk'])
    df = df[df['Rk'].isin(df)]
owe_two
============================UPDATE===================================
This code solves a portion of problem #2:
url = 'https://www.basketball-reference.com/leagues/NBA_1997_totals.html'
dd = pd.read_html(url)[0]
dd = dd[dd['Rk'].ne('Rk')]
dd['MP'] = dd['MP'].astype(int)
players_1000_rk_list = list(dd[dd['MP'] >= 1000]['Rk'])
players_dd = dd[dd['Rk'].isin(players_1000_rk_list)]
But it doesn't remove the duplicates.
==================== UPDATE 10/11/22 ================================
Let's say I take the rows with the value "TOT" in the "Tm" column, create a new DF with them, and remove these rows from the original data frame...
could I then compare the new DF with the original data frame and remove the names from the original data IF they match the names from the new data frame?
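That comparison can be done with isin; a sketch of the idea, assuming the repeated header rows have already been removed as in the earlier update:
# new DF of just the TOT rows
tot = dd[dd['Tm'] == 'TOT']
# keep the TOT rows themselves, plus players who never appear in the TOT set
dd = dd[(dd['Tm'] == 'TOT') | (~dd['Rk'].isin(tot['Rk']))]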
The problem is that the df you are working on in the loop is not the same df that is in df_list; rebinding the loop variable doesn't change the list. You could solve this by saving the new df back to the list, overwriting the old df:
for i, df in enumerate(df_list):
    df = df.loc[df['Tm'] == 'TOT']
    df = df.copy()
    df['MP'] = df['MP'].astype(int)
    df['Rk'] = df['Rk'].astype(int)
    df = list(df[df['MP'] >= 1000]['Rk'])
    df = df[df['Rk'].isin(df)]
    df_list[i] = df
These two lines are probably wrong as well:
df = list(df[df['MP'] >= 1000]['Rk'])
df = df[df['Rk'].isin(df)]
Perhaps you want this:
for i, df in enumerate(df_list):
    df = df.loc[df['Tm'] == 'TOT']
    df = df.copy()
    df['MP'] = df['MP'].astype(int)
    df['Rk'] = df['Rk'].astype(int)
    # df = list(df[df['MP'] >= 1000]['Rk'])
    # df = df[df['Rk'].isin(df)]
    # just the rows where MP >= 1000
    df_list[i] = df[df['MP'] >= 1000]
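Combining this with the header-row cleanup and TOT de-duplication from the question's updates, a hedged sketch of the whole loop might look like this (assuming each frame still contains the repeated 'Rk' header rows these tables carry):
for i, df in enumerate(df_list):
    df = df[df['Rk'].ne('Rk')].copy()   # drop the repeated header rows
    df['MP'] = df['MP'].astype(int)
    tot = df[df['Tm'] == 'TOT']         # one TOT row per multi-team player
    # keep TOT rows plus players who only played for a single team
    df = df[(df['Tm'] == 'TOT') | (~df['Rk'].isin(tot['Rk']))]
    df_list[i] = df[df['MP'] >= 1000]   # only players with 1000+ minutes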

How to get a list of lists after pandas groupby

I searched for my problem and it was similar to this question.
However, it did not give the expected results, so I am still stuck.
I have a list like this:
import pandas as pd
l=[[1,'John','Wed',28],[1,'John','Fri',30],[2,'Alex','Fri',40],[2,'Alex','Fri',60]]
I did
o=pd.DataFrame(l,columns=['id','name','day','marks'])
r = o.groupby(['id','name','day']).marks.mean().reset_index().values.tolist()
Now what I got looks like this:
[[1,'John','Wed',28],[1,'John','Fri',30],[2,'Alex','Fri',50]]
Can somebody please help me to get something like:
[[1,'John',[['Wed',28],['Fri',30]]], [2,'Alex',[['Fri',50]]]]
You can do:
# df is the dataframe built from the question's list (o above)
# get mean marks
df['mean_marks'] = df.groupby(['id', 'name', 'day'])['marks'].transform('mean')
# create list of [day, mean] pairs
df['mark_list'] = df[['day', 'mean_marks']].agg(list, axis=1)
# aggregate
df = (df
      .groupby(['id', 'name'])['mark_list']
      .apply(list)
      .apply(lambda x: [list(y) for y in set(tuple(j) for j in x)])
      .reset_index())
print(df)
   id  name               mark_list
0   1  John  [[Wed, 28], [Fri, 30]]
1   2  Alex             [[Fri, 50]]
You can use defaultdict to get the data from r into your desired output:
from collections import defaultdict

d = defaultdict(list)
for number, name, day, day_number in r:
    d[(number, name)].append([day, day_number])

# pull data into list form with a list comprehension
[[*key, value] for key, value in d.items()]
[[1, 'John', [['Fri', 30], ['Wed', 28]]], [2, 'Alex', [['Fri', 50]]]]
Alternatively, you could run the entire process in plain Python. The caveat is that you have to use defaultdict twice, which makes sense, since in a way defaultdict is grouping your data:
from collections import defaultdict
from statistics import mean

d = defaultdict(list)
for number, name, day, day_number in l:
    d[(number, name, day)].append(day_number)

d = {key: mean(value) for key, value in d.items()}
d = [[*key[:2], [*key[2:], value]] for key, value in d.items()]

box = defaultdict(list)
for number, name, [day, day_number] in d:
    box[(number, name)].append([day, day_number])

[[*key, value] for key, value in box.items()]
[[1, 'John', [['Wed', 28], ['Fri', 30]]], [2, 'Alex', [['Fri', 50]]]]
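For comparison, a purely pandas sketch that builds the same nesting with two groupby passes (note the means come back as floats):
r = o.groupby(['id', 'name', 'day'], sort=False)['marks'].mean().reset_index()
r['pair'] = r[['day', 'marks']].values.tolist()
out = r.groupby(['id', 'name'], sort=False)['pair'].apply(list).reset_index().values.tolist()
# [[1, 'John', [['Wed', 28.0], ['Fri', 30.0]]], [2, 'Alex', [['Fri', 50.0]]]]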

Create pandas dataframe from series with ordered dict as rows

I am trying to extract lmfit parameter results as dataframes. I pass one column x and one column data through a fit_func with parameters pars, and the minimize function in lmfit outputs an OrderedDict.
out = minimize(fit_func, pars, method = 'leastsq', args=(x, data))
res = out.params.valuesdict()
res
Output:
OrderedDict([('a1', 12.850309404600393),
             ('c1', 1346.833513206811),
             ('s1', 44.22337472274829),
             ('f1', 1.1275639898142586),
             ('a2', 77.15732669480884),
             ('c2', 1580.5712512351947),
             ('s2', 16.239969775527275),
             ('f2', 0.8684363668111492)])
I achieved the output I want in a DataFrame like this, with pd.DataFrame(res, index=[0]):
I have 3 data columns that I want to quickly fit:
x = d.iloc[:, 0]
fit_odict = pd.DataFrame(
    d.iloc[:, 1:4].apply(
        lambda y: minimize(fit_func, pars, method='leastsq', args=(x, y)).params.valuesdict()
    ),
    index=[1]
)
But I get a series of Ordered Dicts as rows in the Dataframe:
How do I get the output I want, with the three parameter results as rows? Is there a better way to apply the function?
UPDATE:
I incorporated @M Newville's answer into my solution. It might be helpful for those who want to quickly extract lmfit parameter results from multiple data columns d1.iloc[:,1:]:
def fff(cols):
    out = minimize(fit_func, pars, method='leastsq', args=(x, cols))
    return {key: par.value for key, par in out.params.items()}

results = d1.iloc[:, 1:].apply(fff, result_type='expand').transpose()
Output:
For a single fit, this would probably be what you are looking for:
out = minimize(fit_func, pars, method = 'leastsq', args=(x, data))
fit_odict = pd.DataFrame({key: [par.value] for key, par in out.params.items()})
I think you probably are looking for something like this:
results = {key: [] for key in pars}
for data in datasets:
    out = minimize(fit_func, pars, method='leastsq', args=(x, data))
    for par_name, val_list in results.items():
        val_list.append(out.params[par_name].value)
results = pd.DataFrame(results)
You could probably stuff that all into a single long line, but I wouldn't recommend it -- someone may want to read that code ;).
This is a quick workaround that you can do. The code is not efficient, but you can optimize it. Note that the index starts at 1, but you are welcome to re-index using the pandas library.
import pandas as pd
# Your output is a list of tuples
pairs = [('a1', 12.850309404600393), ('c1', 1346.833513206811),
         ('s1', 44.22337472274829), ('f1', 1.1275639898142586),
         ('a2', 77.15732669480884), ('c2', 1580.5712512351947),
         ('s2', 16.239969775527275), ('f2', 0.8684363668111492)]

# Create a dataframe from the list of tuples and transpose it
df = pd.DataFrame(pairs).T
# Use the first row as the dataframe column names
columns = df.loc[0].values.tolist()
df.columns = columns
output = df.drop(df.index[0])
output
        a1       c1       s1       f1       a2       c2     s2        f2
1  12.8503  1346.83  44.2234  1.12756  77.1573  1580.57  16.24  0.868436
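A shorter route to the same one-row frame, assuming the pairs list above: converting the tuples to a dict and wrapping it in a single-element list makes each key a column, and it also keeps the columns numeric rather than the object dtype the transpose trick produces. The original lmfit res works the same way:
pd.DataFrame([dict(pairs)])
# or, straight from the lmfit result:
# pd.DataFrame([res])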

To join complicated pandas tables

I'm trying to join a dataframe of results from a statsmodels GLM to a dataframe designed to hold both univariate data and model results as models are iterated through. I'm having trouble figuring out how to programmatically join the two data sets.
I've consulted the pandas documentation below, with no luck:
https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#database-style-dataframe-or-named-series-joining-merging
This is difficult because the output of the model differs in shape from the final table, which holds values for each unique level of each unique variable.
See an example of what the data looks like with the code below:
import pandas as pd
df = {'variable': ['CLded_model', 'CLded_model', 'CLded_model', 'CLded_model', 'CLded_model',
                   'CLded_model', 'CLded_model', 'channel_model', 'channel_model', 'channel_model'],
      'level': [0, 100, 200, 250, 500, 750, 1000, 'DIR', 'EA', 'IA'],
      'value': [460955.7793, 955735.0532, 586308.4028, 12216916.67, 48401773.87, 1477842.472,
                14587994.92, 10493740.36, 36388470.44, 31805316.37]}
final_table = pd.DataFrame(df)

df2 = {'variable': ['intercept', 'C(channel_model)[T.EA]', 'C(channel_model)[T.IA]', 'CLded_model'],
       'coefficient': [-2.36E-14, -0.091195797, -0.244225888, 0.00174356]}
model_results = pd.DataFrame(df2)
After this is run you can see that, for categorical variables, the value is encased in a few layers compared to final_table. Numerical variables such as CLded_model need to be joined with the one coefficient they are associated with.
There is a lot to this and i'm not sure where to start.
Update: The following code produces the desired result:
d3 = {'variable': ['intercept', 'CLded_model', 'CLded_model', 'CLded_model', 'CLded_model',
                   'CLded_model', 'CLded_model', 'CLded_model', 'channel_model', 'channel_model',
                   'channel_model'],
      'level': [None, 0, 100, 200, 250, 500, 750, 1000, 'DIR', 'EA', 'IA'],
      'value': [None, 460955.7793, 955735.0532, 586308.4028, 12216916.67, 48401773.87,
                1477842.472, 14587994.92, 10493740.36, 36388470.44, 31805316.37],
      'coefficient': [-2.36E-14, 0.00174356, 0.00174356, 0.00174356, 0.00174356, 0.00174356,
                      0.00174356, 0.00174356, None, -0.091195797, -0.244225888]}
desired_result = pd.DataFrame(d3)
First you have to clean up the model results. (Below, df1 and df2 refer to the question's final_table and model_results.)
df1, df2 = final_table, model_results

df2['variable'] = (df2['variable']
                   .str.replace(r"C\(", "", regex=True)
                   .str.replace(r"\)\[T\.", "-", regex=True)
                   .str.strip("]"))
df2
           variable   coefficient
0         intercept -2.360000e-14
1  channel_model-EA -9.119580e-02
2  channel_model-IA -2.442259e-01
3       CLded_model  1.743560e-03
Because you want to merge some of df1 on the level column and others not, we need to change df1 slightly to match df2:
mask = df1['variable'] == 'channel_model'
df1.loc[mask, 'variable'] = "channel_model-" + df1.loc[mask, 'level']
df1
# snippet of what changed
            variable level         value
6        CLded_model  1000  1.458799e+07
7  channel_model-DIR   DIR  1.049374e+07
8   channel_model-EA    EA  3.638847e+07
9   channel_model-IA    IA  3.180532e+07
Then we merge them:
df4 = df1.merge(df2, how='outer', on='variable')
And we get your result (except for the minor change in the variable name)
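If you also want the variable column back in its original form afterwards, as in desired_result, the level suffix can be split off again; a small sketch, with n=1 so only the first hyphen is used:
df4['variable'] = df4['variable'].str.split('-', n=1).str[0]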