Working on multiple data frames with data for NBA players during the season, how can I modify all the dataframes at the same time? - pandas

I have a list of 16 dataframes that contain stats for each player in the NBA during the respective season. My end goal is to run unsupervised learning algorithms on the data frames. For example, I want to see if I can determine a player's position by their stats or if I can determine their total points during the season based on their stats.
What I would like to do is modify the list (df_list) of these dataframes, unless there's a better solution, instead of modifying each dataframe individually, to:
Change the datatype of the MP (minutes played) column from str to int.
Filter the dataframe so that it only contains players with 1000 or more MP and no duplicate players (Rk).
(For instance, in a season a player (Rk) can play for three teams and have 200 MP, 300 MP, and 400 MP with each team. He'll have a row for each team plus a row where Tm is TOT, which renders his MP as 900 (200+300+400), for a total of four rows in the dataframe. I only need the TOT row.)
Use simple algebra with various individual columns, for example: totaling the MP column and the PTS column and then dividing the sum of the PTS column by the sum of the MP column,
or dividing the total of the PTS column by the len of the PTS column.
What I've done so far is this:
Import my libraries and create 16 dataframes using pd.read_html(url).
The first dataframe was created using two lines of code:
url = "https://www.basketball-reference.com/leagues/NBA_1997_totals.html"
ninetysix = pd.read_html(url)[0]
HOWEVER, the next four data frames had to be created using a few additional lines of code (I received an error that said "html5lib not found, please install it", so I installed both html5lib and requests). I say that to say... this distinction in creating the DFs may have to be considered in a solution.
The code I used:
import requests
import uuid
url = 'https://www.basketball-reference.com/leagues/NBA_1998_totals.html'
cookies = {'euConsentId': str(uuid.uuid4())}
html = requests.get(url, cookies=cookies).content
ninetyseven = pd.read_html(html)[0]
I tried this but it didn't do anything:
df_list = [
    eightyfour, eightyfive, eightysix, eightyseven,
    eightyeight, eightynine, ninety, ninetyone,
    ninetytwo, ninetyfour, ninetyfive,
    ninetysix, ninetyseven, ninetyeight, owe_one, owe_two
]

for df in df_list:
    df = df.loc[df['Tm'] == 'TOT']
    df = df.copy()
    df['MP'] = df['MP'].astype(int)
    df['Rk'] = df['Rk'].astype(int)
    df = list(df[df['MP'] >= 1000]['Rk'])
    df = df[df['Rk'].isin(df)]
owe_two
============================UPDATE===================================
This code solves a portion of problem #2:
url = 'https://www.basketball-reference.com/leagues/NBA_1997_totals.html'
dd = pd.read_html(url)[0]
dd = dd[dd['Rk'].ne('Rk')]
dd['MP'] = dd['MP'].astype(int)
players_1000_rk_list = list(dd[dd['MP'] >= 1000]['Rk'])
players_dd = dd[dd['Rk'].isin(players_1000_rk_list)]
But it doesn't remove the duplicates.
==================== UPDATE 10/11/22 ================================
Let's say I take the rows with the value "TOT" in the "Tm" column, create a new DF with them, and remove these rows from the original data frame...
Could I then compare the new DF with the original data frame and remove the names from the original data IF they match the names from the new data frame?
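A minimal sketch of that idea, assuming the season table is in dd as in the update above and that a multi-team player's team rows and his TOT row share the same Rk (the Basketball-Reference layout):
tot_rows = dd[dd['Tm'] == 'TOT']              # season-total rows for multi-team players
singles = dd[~dd['Rk'].isin(tot_rows['Rk'])]  # players who stayed on one team all season
dd_deduped = pd.concat([singles, tot_rows])   # one row per player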

The problem is that the df you are working on in the loop is not the same df that is in df_list. You could solve this by saving the new df back to the list, overwriting the old df:
for i, df in enumerate(df_list):
    df = df.loc[df['Tm'] == 'TOT']
    df = df.copy()
    df['MP'] = df['MP'].astype(int)
    df['Rk'] = df['Rk'].astype(int)
    df = list(df[df['MP'] >= 1000]['Rk'])
    df = df[df['Rk'].isin(df)]
    df_list[i] = df
These 2 lines are probably wrong as well:
df = list(df[df['MP'] >= 1000]['Rk'])
df = df[df['Rk'].isin(df)]
Perhaps you want this:
for i, df in enumerate(df_list):
    df = df.loc[df['Tm'] == 'TOT']
    df = df.copy()
    df['MP'] = df['MP'].astype(int)
    df['Rk'] = df['Rk'].astype(int)
    #df = list(df[df['MP'] >= 1000]['Rk'])
    #df = df[df['Rk'].isin(df)]
    # just the rows where MP >= 1000
    df_list[i] = df[df['MP'] >= 1000]
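For the arithmetic in the original question, a sketch of what that could look like once the frames are filtered, assuming the PTS column exists in these tables and is cast to a numeric dtype first:
for df in df_list:
    pts = df['PTS'].astype(int)                  # cast points the same way MP was cast
    pts_per_minute = pts.sum() / df['MP'].sum()  # total points divided by total minutes
    avg_pts = pts.sum() / len(pts)               # same as pts.mean()
    print(pts_per_minute, avg_pts)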


replacing df.append with pd.concat when building a new dataframe from file read

...
header = pd.DataFrame()
for x in {0,7,8,9,10,11,12,13,14,15,18,19,21,23}:
    header = header.append({'col1': data1[x].split(':')[0],
                            'col2': data1[x].split(':')[1][:-1],
                            'col3': data2[x].split(':')[1][:-1],
                            'col4': data2[x] == data1[x],
                            'col5': '---'},
                           ignore_index=True)
...
I have some Jupyter Notebook code which reads 2 text files into data1 and data2, and using a list of indices I am picking out specific matching lines in both files into a dataframe for easy display and comparison in the notebook.
Since df.append is now deprecated in favour of pd.concat, what's the tidiest way to do this?
Is it basically to replace the inner loop code with:
...
header = pd.concat(header, {all the column code from above })
...
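For what it's worth, pd.concat expects a sequence of DataFrame/Series objects rather than a DataFrame plus a dict, so a literal drop-in replacement would look more like this sketch (wrapping each row dict in a one-row DataFrame):
# build a one-row DataFrame from the dict, then concatenate it onto the accumulator
row = pd.DataFrame([{'col1': data1[x].split(':')[0],
                     'col2': data1[x].split(':')[1][:-1],
                     'col3': data2[x].split(':')[1][:-1],
                     'col4': data2[x] == data1[x],
                     'col5': '---'}])
header = pd.concat([header, row], ignore_index=True)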
Additional input in response to a comment below:
Yes, sorry. For example, the next block of code does this:
for x in {4, 2, 5}:
    header = header.append({'col1': SOMENEWROWNAME,
                            'col2': data1[x].split(':')[1][:-1],
                            'col3': data2[x].split(':')[1][:-1],
                            'col4': data2[x] == data1[x],
                            'col5': float(data2[x].split(':')[1][:-1]) - float(data1[x].split(':')[1][:-1])},
                           ignore_index=True)
This is repeated 5 times with different data indices in the loop and a different SOMENEWROWNAME each time.
I inherited this notebook, and I see now that it was done this way because they only wanted to compute a numerical float difference on the columns where numbers appear,
but there are several such blocks, with different lines in the data and where that first parameter SOMENEWROWNAME is the different text field from the respective lines in the data.
So I was primarily just trying to fix these append-to-concat warnings, but of course if the code can be better written then all good!
Use a list comprehension and the DataFrame constructor:
data = [{'col1': data1[x].split(':')[0],
         'col2': data1[x].split(':')[1][:-1],
         'col3': data2[x].split(':')[1][:-1],
         'col4': data2[x] == data1[x],
         'col5': '---'} for x in {0,7,8,9,10,11,12,13,14,15,18,19,21,23}]

df = pd.DataFrame(data)
EDIT:
out = []
#sample
for x in {1, 7, 30}:
    out.append({'col1': SOMENEWROWNAME,
                'col2': data1[x].split(':')[1][:-1],
                'col3': data2[x].split(':')[1][:-1],
                'col4': data2[x] == data1[x],
                'col5': float(data2[x].split(':')[1][:-1]) - float(data1[x].split(':')[1][:-1])})
df1 = pd.DataFrame(out)

out1 = []
#sample
for x in {1, 7, 30}:
    out1.append({another dict})
df2 = pd.DataFrame(out1)

df = pd.concat([df1, df2])
Or:
final = []
for x in {4, 2, 5}:
    final.append({'col1': SOMENEWROWNAME,
                  'col2': data1[x].split(':')[1][:-1],
                  'col3': data2[x].split(':')[1][:-1],
                  'col4': data2[x] == data1[x],
                  'col5': float(data2[x].split(':')[1][:-1]) - float(data1[x].split(':')[1][:-1])})
for x in {4, 2, 5}:
    final.append({another dict})
df = pd.DataFrame(final)

Concatenate rows in Pandas

I have 12 months of sales data, one file per month. I want to analyze the dataset as a whole.
I have tried using the concat function, but it produces not a number (NaN) values in my dataframe fields.
In R, the cbind function solves this. How do I approach this differently in Python?
I tried using the concat function to bind the rows because all the column names are the same for the datasets.
What other options can I explore?
sales_1 = pd.read_csv('Sales_January_2019.csv')
sales_2 = pd.read_csv('Sales_February_2019.csv')
sales_3 = pd.read_csv('Sales_March_2019.csv')
sales_4 = pd.read_csv('Sales_April_2019.csv')
sales_5 = pd.read_csv('Sales_May_2019.csv')
sales_6 = pd.read_csv('Sales_June_2019.csv')
sales_7 = pd.read_csv('Sales_July_2019.csv')
sales_8 = pd.read_csv('Sales_August_2019.csv')
sales_9 = pd.read_csv('Sales_September_2019.csv')
sales_10 = pd.read_csv('Sales_October_2019.csv')
sales_11 = pd.read_csv('Sales_November_2019.csv')
sales_12 = pd.read_csv('Sales_December_2019.csv')
I expect all data frames to be merged into one since the column names are the same for all.
Perhaps:
# using concat with the list of the DFs that you already read in to combine them into a single DF
pd.concat([sales_1, sales_2, sales_3, sales_4, sales_5, sales_6, sales_7, sales_8, sales_9, sales_10, sales_11, sales_12])
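If the twelve read_csv calls feel repetitive, here is a sketch of an alternative that reads and stacks the files in one pass (assuming the file-name pattern shown above):
import glob
import pandas as pd

# read every monthly file matching the pattern and stack the rows into one DataFrame
all_sales = pd.concat(
    (pd.read_csv(f) for f in glob.glob('Sales_*_2019.csv')),
    ignore_index=True,
)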

adding a column to a dataframe from two Pandas DataFrames and currently using two loops with a conditional: Is there a faster way?

I am currently looping through GPS coordinates in a dataframe. I am using this loop to look into another dataframe with GPS coordinates of specific locations and update the original dataframe with the closest location. This works fine but it is VERY slow. Is there a faster way?
Here is sample data:
imports:
from shapely.geometry import Point
import pandas as pd
from geopy import distance
Create sample df1
gps_points = [Point(37.773972,-122.431297) , Point(35.4675602,-97.5164276) , Point(42.35843, -71.05977)]
df_gps = pd.DataFrame()
df_gps['points'] = gps_points
Create sample df2
locations = {'location': ['San Diego', 'Austin', 'Washington DC'],
             'gps': [Point(32.715738, -117.161084), Point(30.267153, -97.7430608), Point(38.89511, -77.03637)]}
df_locations = pd.DataFrame(locations)
Two loops and update:
lst = []  # create empty list to populate new df column
for index, row in df_gps.iterrows():  # iterate over first dataframe rows
    point = row['points']  # pull out GPS point
    closest_distance = 999999  # create container for distance
    closest_location = None  # create container for closest location
    for index1, row1 in df_locations.iterrows():  # iterate over second dataframe
        name = row1['location']  # assign name of location
        point2 = row1['gps']  # assign coordinates of location
        distances = distance.distance((point.x, point.y), (point2.x, point2.y)).miles  # calculate distance
        if distances < closest_distance:  # check to see if distance is closer
            closest_distance = distances  # if distance is closer assign it
            closest_location = name  # if distance is closer assign name
    lst.append(closest_location)  # append closest city
df_gps['closest_city'] = lst  # add new column with closest cities
I'd really like to do this in the fastest way possible. I have read about vectorization in pandas and have thought about creating a function and then using apply, as mentioned in How to iterate over rows in a DataFrame in Pandas; however, I need two loops and a conditional in my code, so the pattern breaks down. Thank you for the help.
You can use KDTree from Scipy:
from scipy.spatial import KDTree
# Extract lat/lon from your dataframes
points = df_gps['points'].apply(lambda p: (p.x, p.y)).apply(pd.Series)
cities = df_locations['gps'].apply(lambda p: (p.x, p.y)).apply(pd.Series)
distances, indices = KDTree(cities).query(points)
df_gps['closest_city'] = df_locations.iloc[indices]['location'].values
df_gps['distance'] = distances
You can use np.where to filter out distances that are too far away.
For performance, check my answer for a similar problem with 25k rows for df_gps and 200k for df_locations.
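A sketch of that np.where filtering, with a hypothetical cutoff (note that KDTree on raw lat/lon returns distances in coordinate degrees rather than miles):
import numpy as np

cutoff = 5.0  # hypothetical threshold, in the same degree units returned by KDTree.query
df_gps['closest_city'] = np.where(df_gps['distance'] <= cutoff, df_gps['closest_city'], None)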
Based on the insight of Corralien, here is the final answer in code:
import numpy as np
from sklearn.neighbors import BallTree, DistanceMetric

points = df_gps['points'].apply(lambda p: np.radians((p.x, p.y))).apply(pd.Series)
cities = df_locations['gps'].apply(lambda p: np.radians((p.x, p.y))).apply(pd.Series)

dist = DistanceMetric.get_metric('haversine')
tree = BallTree(cities, metric=dist)
dists, cities = tree.query(points)

df_gps['dist'] = dists.flatten() * 3956  # haversine distance is in radians; multiply by Earth's radius in miles
df_gps['closest_city'] = df_locations.iloc[cities.flatten()]['location'].values

pandas / numpy arithmetic mean in csv file

I have a csv file which contains 3000 rows and 5 columns, and which constantly has more rows appended to it on a weekly basis.
What I'm trying to do is find the arithmetic mean of the last column for the last 1000 rows, every week. (So when new rows are added weekly, it'll just take the average of the most recent 1000 rows.)
How should I construct the pandas or numpy array to achieve this?
df = pd.read_csv("fds.csv", index_col=False, header=0)
df_1 = df['Results']
# How should I write the next line of code to get the average of the most recent 1000 rows?
I'm on a different machine than the one my pandas is installed on, so I'm going from memory, but I think what you'll want to do is...
import numpy as np

df = pd.read_csv("fds.csv", index_col=False, header=0)
# Let's pretend your 5th column has a name (header) of `Results`
last_thousand = df.tail(1000)
np.mean(last_thousand.Results)
A little bit quicker using mean():
df = pd.read_csv("fds.csv", header = 0)
results = df.tail(1000).mean()
results will contain the mean for each column within the last 1000 rows. If you want more statistics, you can also use describe():
results = df.tail(1000).describe().unstack()
So basically I needed to use the pandas tail function. My code below works.
import numpy

df = pd.read_csv("fds.csv", index_col=False, header=0)
df_1 = df['Results']
numpy.average(df_1.tail(1000))

Joining files in pandas

I come from an Excel background, but I love pandas and it has truly made me more efficient. Unfortunately, I probably carry over some bad habits from Excel. I have three large files (between 2 million and 13 million rows each) which contain data on interactions that could be tied together; unfortunately, there is no unique key connecting the files. I am literally concatenating (Excel formula) 3 fields into one new column on all three files.
Three columns exist on each file which I combine together (the other fields would be, for example, the reason for interaction on one file, the score on another file, and some other data on the third file which I would like to tie back to a certain AgentID):
Date | CustomerID | AgentID
I edit my date format to be uniform on each file:
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].apply(lambda x: x.date().strftime('%Y-%m-%d'))
Then I create a unique column (well, as unique as I can get it.. sometimes the same customer interacts with the same agent on the same date but this should be quite rare):
df['Unique'] = df['Date'].astype(str) + df['CustomerID'].astype(str) + df['AgentID'].astype(str)
I do the same steps for df2 and then:
combined = pd.merge(df, df2, how = 'left', on = 'Unique')
I typically send that to a new csv in case something crashes, gzip it, then read it again and do the same process again with the third file.
final = pd.merge(combined, df2, how = 'left', on = 'Unique')
As you can see, this takes time. I have to format the dates on each and then turn them into text, create an object column which adds to the filesize, and (due to the raw data issues themselves) drop duplicates so I don't accidentally inflate numbers. Is there a more efficient workflow for me to follow?
Instead of using on = 'Unique':
combined = pd.merge(df, df2, how = 'left', on = 'Unique')
you can pass a list of columns to the on keyword parameter:
combined = pd.merge(df, df2, how='left', on=['Date', 'CustomerID', 'AgentID'])
Pandas will correctly merge rows based on the triplet of values from the 'Date', 'CustomerID', 'AgentID' columns. This is safer (see below) and easier than building the Unique column.
For example,
import pandas as pd
import numpy as np
np.random.seed(2015)

df = pd.DataFrame({'Date': pd.to_datetime(['2000-1-1', '2000-1-1', '2000-1-2']),
                   'CustomerID': [1, 1, 2],
                   'AgentID': [10, 10, 11]})
df2 = df.copy()
df3 = df.copy()

L = len(df)
df['ABC'] = np.random.choice(list('ABC'), L)
df2['DEF'] = np.random.choice(list('DEF'), L)
df3['GHI'] = np.random.choice(list('GHI'), L)
df2 = df2.iloc[[0, 2]]

combined = df
for x in [df2, df3]:
    combined = pd.merge(combined, x, how='left', on=['Date', 'CustomerID', 'AgentID'])
yields
In [200]: combined
Out[200]:
   AgentID  CustomerID      Date ABC DEF GHI
0       10           1  2000-1-1   C   F   H
1       10           1  2000-1-1   C   F   G
2       10           1  2000-1-1   A   F   H
3       10           1  2000-1-1   A   F   G
4       11           2  2000-1-2   A   F   I
A cautionary note:
Adding the CustomerID to the AgentID to create a Unique ID could be problematic
-- particularly if neither has a fixed-width format.
For example, if CustomerID = '12' and AgentID = '34', then (ignoring the date, which causes no problem since it does have a fixed width) Unique would be
'1234'. But if CustomerID = '1' and AgentID = '234' then Unique would
again equal '1234'. So the Unique IDs may be mixing entirely different
customer/agent pairs.
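A tiny illustration of that collision, with hypothetical IDs:
# two different customer/agent pairs that produce the same "Unique" key
print(str(12) + str(34))   # '1234'
print(str(1) + str(234))   # '1234'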
PS. It is a good idea to parse the date strings into date-like objects
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
Note that if you use
combined = pd.merge(combined, x, how='left', on=['Date','CustomerID', 'AgentID'])
it is not necessary to convert any of the columns back to strings.