Conditional join using sqldf in R with time data - sql

So I have a table (~2000 rows, call it df1) of when a particular subject received a medication on a particular date, and I have a large Excel file (>1 million rows) of weight data for subjects on different dates (call it df2).
AIM: I want to group by subject and find the weight in df2 that was recorded closest to the medication administration time in df1, using sqldf (because the tables are too big to load into R). Alternatively, I can set up a time frame of interest (e.g. +/- 1 week of the medication being given) and find a row that falls within that time frame.
Example:
df1 <- data.frame(
  PtID = rep(c(1:5), each = 2),
  Dose = rep(seq(100, 200, 25), 2),
  ADMIN_TIME = seq.Date(as.Date("2016/01/01"), by = "month", length.out = 10)
)
df2 <- data.frame(
  PtID = rep(c(1:5), each = 10),
  Weight = rnorm(50, 50, 10),
  Wt_time = seq.Date(as.Date("2016/01/01"), as.Date("2016/10/31"), length.out = 50)
)
So I think I want to left_join df1 and df2, group by PtID, and set up some condition that identifies either the df2$Weight closest to df1$ADMIN_TIME or a df2$Weight within an acceptable range around df1$ADMIN_TIME, using SQL syntax.
So I tried creating a range and then querying the following:
library(dplyr)
library(lubridate)
library(sqldf)
df1 <- df1 %>%
  mutate(ADMIN_START = ADMIN_TIME - ddays(30),
         ADMIN_END = ADMIN_TIME + ddays(30))
# df2.csv is the large spreadsheet saved in my working directory
result <- read.csv.sql("df2.csv", sql = "select Weight from file
                       left join df1
                       on file.Wt_time between df1.ADMIN_START and df1.ADMIN_END")
This will run, but it never returns anything and I have to escape out of it. Any thoughts are appreciated.
Thanks!
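One possible direction (a sketch, not a tested answer): the join above never restricts weight rows to the same patient, so SQLite is effectively pairing every row of the million-row file with every dosing row, which is why the query seems to hang. Adding the PtID match to the join condition keeps the intermediate result small, and the closest weight per dose can then be picked in R. Note that sqldf stores R Date columns as day numbers, so converting them to ISO text first keeps the BETWEEN comparison consistent (this assumes Wt_time in the CSV is stored as yyyy-mm-dd text).
# convert the Date columns to ISO text so they compare cleanly inside SQLite
df1_chr <- df1 %>%
  mutate(across(c(ADMIN_TIME, ADMIN_START, ADMIN_END), as.character))

result <- read.csv.sql(
  "df2.csv",
  sql = "select d.PtID, d.Dose, d.ADMIN_TIME, f.Weight, f.Wt_time
         from df1_chr d
         left join file f
           on f.PtID = d.PtID
          and f.Wt_time between d.ADMIN_START and d.ADMIN_END"
)

# keep only the weight recorded closest to each administration time
closest <- result %>%
  group_by(PtID, ADMIN_TIME) %>%
  slice_min(abs(as.Date(Wt_time) - as.Date(ADMIN_TIME)), n = 1, with_ties = FALSE) %>%
  ungroup()
If the 30-day window turns out to be too narrow, widening ADMIN_START/ADMIN_END is the only change needed.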

Related

Working on multiple data frames with data for NBA players during the season, how can I modify all the dataframes at the same time?

I have a list of 16 dataframes that contain stats for each player in the NBA during the respective season. My end goal is to run unsupervised learning algorithms on the data frames. For example, I want to see if I can determine a player's position by their stats or if I can determine their total points during the season based on their stats.
What I would like to do is modify the list (df_list) of these dataframes, rather than modifying each dataframe individually (unless there's a better solution), to:
1. Change the datatype of the MP (minutes played) column from str to int.
2. Filter the dataframe so that it only contains players with 1000 or more MP and no duplicate players (Rk).
(For instance, a player (Rk) can play for three teams in a season and have 200, 300, and 400 MP with each team. He'll have a row for each team plus a row called TOT, which shows his MP as 900 (200 + 300 + 400), for a total of four rows in the dataframe. I only need the TOT row.)
3. Use simple algebra with various individual columns, for example: totaling the MP column and the PTS column and then dividing the sum of the PTS column by the sum of the MP column.
Or dividing the total of the PTS column by the length of the PTS column.
What I've done so far is this:
Import my libraries and create 16 dataframes using pd.read_html(url).
The first dataframes were created using two lines of code:
url = "https://www.basketball-reference.com/leagues/NBA_1997_totals.html"
ninetysix = pd.read_html(url)[0]
HOWEVER, the next four data frames had to be created using a few additional lines of code (I received an error that said "html5lib not found, please install it", so I installed both html5lib and requests). I say that to say... this distinction in how the DFs were created may have to be considered in a solution.
The code I used:
import requests
import uuid
url = 'https://www.basketball-reference.com/leagues/NBA_1998_totals.html'
cookies = {'euConsentId': str(uuid.uuid4())}
html = requests.get(url, cookies=cookies).content
ninetyseven = pd.read_html(html)[0]
I tried this but it didn't do anything:
df_list = [
    eightyfour, eightyfive, eightysix, eightyseven,
    eightyeight, eightynine, ninety, ninetyone,
    ninetytwo, ninetyfour, ninetyfive,
    ninetysix, ninetyseven, ninetyeight, owe_one, owe_two
]

for df in df_list:
    df = df.loc[df['Tm'] == 'TOT']
    df = df.copy()
    df['MP'] = df['MP'].astype(int)
    df['Rk'] = df['Rk'].astype(int)
    df = list(df[df['MP'] >= 1000]['Rk'])
    df = df[df['Rk'].isin(df)]

owe_two
============================UPDATE===================================
This code solves a portion of problem #2:
url = 'https://www.basketball-reference.com/leagues/NBA_1997_totals.html'
dd = pd.read_html(url)[0]
dd = dd[dd['Rk'].ne('Rk')]
dd['MP'] = dd['MP'].astype(int)
players_1000_rk_list = list(dd[dd['MP'] >= 1000]['Rk'])
players_dd = dd[dd['Rk'].isin(players_1000_rk_list)]
But it doesn't remove the duplicates.
==================== UPDATE 10/11/22 ================================
Let's say I take the rows with the value "TOT" in the "Tm" column and create a new DF with them (these rows come from the original data frame)...
Could I then compare the new DF with the original data frame and remove the names from the original data IF they match the names from the new data frame?
The problem is that the df you are working on in the loop is not the same df that is in df_list. You could solve this by saving the new df back to the list, overwriting the old df:
for i, df in enumerate(df_list):
    df = df.loc[df['Tm'] == 'TOT']
    df = df.copy()
    df['MP'] = df['MP'].astype(int)
    df['Rk'] = df['Rk'].astype(int)
    df = list(df[df['MP'] >= 1000]['Rk'])
    df = df[df['Rk'].isin(df)]
    df_list[i] = df
These 2 lines are probably wrong as well:
df = list(df[df['MP'] >= 1000]['Rk'])
df = df[df['Rk'].isin(df)]
Perhaps you want this:
for i, df in enumerate(df_list):
    df = df.loc[df['Tm'] == 'TOT']
    df = df.copy()
    df['MP'] = df['MP'].astype(int)
    df['Rk'] = df['Rk'].astype(int)
    # df = list(df[df['MP'] >= 1000]['Rk'])
    # df = df[df['Rk'].isin(df)]
    # keep just the rows where MP >= 1000
    df_list[i] = df[df['MP'] >= 1000]
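Note that the loop above still keeps only the TOT rows, so untraded players disappear and the duplicate problem from the update isn't addressed. A possible sketch (not tested against the real tables) that drops the repeated header rows basketball-reference inserts and keeps exactly one row per player, preferring the TOT row for traded players; the last two lines sketch problem #3, assuming PTS is the points column of the totals table and parses cleanly as integers:
for i, df in enumerate(df_list):
    df = df[df['Rk'].ne('Rk')].copy()        # drop the repeated header rows
    df['MP'] = df['MP'].astype(int)
    # sort TOT rows to the top, then keep the first row seen for each Rk,
    # which leaves one row per player (the TOT row for traded players)
    df['is_tot'] = df['Tm'].eq('TOT')
    df = (df.sort_values('is_tot', ascending=False)
            .drop_duplicates(subset='Rk', keep='first')
            .drop(columns='is_tot'))
    df_list[i] = df[df['MP'] >= 1000]

# problem #3, for example: league points per minute for one season's frame
pts_per_minute = df_list[0]['PTS'].astype(int).sum() / df_list[0]['MP'].sum()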

Using a list of IDs to combine SQL tables into a data frame in R

I have a list of IDs that are not associated with any actual data. I have a SQL database that has a table for each of these IDs, and those tables have data that I would like to combine together into one large data frame based on the list of IDs that I have. I figured a for loop would be needed for this, but I haven't been able to get it to work properly.
For example, I have a list of IDs:
1,2,3,4,5
I have a SQL database with tables for each of these, and they also have other data associated with the IDs. Each ID has multiple rows and columns.
I would like my end product to be the combination of those rows and columns for the list of IDs in a single data frame in R. How could I do this? What is the most efficient way to do so?
# Example data set
library(lubridate)
date <- rep_len(seq(dmy("26-12-2010"), dmy("20-12-2011"), by = "days"), 500)
ID <- rep(seq(1, 5), 100)
df <- data.frame(date = date,
                 x = runif(length(date), min = 60000, max = 80000),
                 y = runif(length(date), min = 800000, max = 900000),
                 ID)
for (i in 1:length(ID)) {
  ID[i] <- dbReadTable(mydb, ID[i])
}
Thank you so much for your time.
I'll expand on my comment to answer the question.
IDs <- lapply(setNames(nm=ID), function(i) dbReadTable(mydb, i))
and then one of:
## base R
IDs <- Map(function(x, nm) transform(x, id = nm), IDs, names(IDs))
DF <- do.call(rbind, IDs)
## dplyr
DF <- dplyr::bind_rows(IDs, .id = "id")
## data.table
DF <- data.table::rbindlist(IDs, idcol = "id")
The addition of the "id" column is to easily differentiate the rows based on the source ID. If the table already includes that, then you can omit the Map (base) and .id/idcol arguments.
(This assumes, btw, that all tables have the same exact structure: same column names and same data types.)
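Putting it together, a minimal end-to-end sketch (assuming a DBI connection named mydb and that the tables are literally named "1" through "5"; dbReadTable wants the table names as character strings):
library(DBI)

# mydb is assumed to be an existing connection, e.g. from dbConnect()
ID <- c(1, 2, 3, 4, 5)
# read each table into a named list, then stack them with an "id" column
IDs <- lapply(setNames(nm = as.character(ID)), function(i) dbReadTable(mydb, i))
DF <- dplyr::bind_rows(IDs, .id = "id")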

Plotting dates by weekdays and groups

I would like to compare the values from different weeks in different groups. Something like daily sales for two team members by week, to demonstrate the effect of one person being off, a holiday, etc. The time of the sale within each day needs to be ordered within the day, but the x-axis should be labeled by day.
Example is arbitrary.
Example data and output
stringsAsFactors =FALSE
library(lubridate)
library(tidyverse)
library(magrittr)
#=======================
# Week on week comparison of days by a group
#=======================
# Generate DF
Date <- data.frame(Date = rep(seq(as.Date("2020-04-01"),as.Date("2020-04-14"),by="days"),4))
Time <- data.frame(Time = c(rep("00:00:01",nrow(Date)/2),rep("00:00:02",nrow(Date)/2)))
Type <- data.frame(Type = rep(c(rep("a",nrow(Date)/4),rep("b",nrow(Date)/4)),2))
df <- cbind(Date,Time,Type)
# Add random values to plot
df %<>% mutate(values = runif(nrow(.), 1, 10))

# Create groups for weeks, orders for days, and labels as weekdays (character strings)
df %<>% mutate(weekLevel = week(Date),
               dayLevel = wday(Date),
               Day = as.character(weekdays(Date)),
               orderVar = paste0(dayLevel, Time))

ggplot(df %>% arrange(orderVar),
       aes(x = orderVar, y = values, group = interaction(Type, weekLevel), colour = Type)) +
  geom_line() +
  scale_x_discrete(breaks = df$orderVar, labels = df$Day) +
  theme(axis.text.x = element_text(angle = 90, hjust = 1))
This works but the day is repeated because the breaks are set to a more granular level than the labels. It also feels a bit hacky.
Any and all feedback is appreciated :)
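One way to address the repeated labels (a sketch built on the same orderVar scheme as above, not a tested answer): pass only the first break seen for each day, so each weekday gets a single tick.
# one break per weekday: the first orderVar value seen for each Day
lab_df <- df %>%
  distinct(orderVar, Day) %>%
  arrange(orderVar) %>%
  filter(!duplicated(Day))

ggplot(df %>% arrange(orderVar),
       aes(x = orderVar, y = values, group = interaction(Type, weekLevel), colour = Type)) +
  geom_line() +
  scale_x_discrete(breaks = lab_df$orderVar, labels = lab_df$Day) +
  theme(axis.text.x = element_text(angle = 90, hjust = 1))
Faceting by weekLevel would be another option if the weeks should not share an x-axis.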

How to plot only business hours and weekdays in pandas

I have hourly stock data.
I need (a) to format it so that matplotlib ignores weekends and non-business hours, and (b) an hourly frequency.
The problem:
Currently, the graph looks cramped, and I suspect it is because matplotlib is taking into account 24 hours a day instead of 8, and 7 days a week instead of business days.
How do I tell pandas to only take into account business hours, M- F?
How I am graphing the data:
I am looping through a list of price data dataframes, graphing each data frame:
mm = 0
for ii in df:
    Ddate = ii['Date']
    Pprice = ii['Price']
    d = Ddate.to_list()
    p = Pprice.to_list()
    dates = make_dt(d)
    prices = unstring(p)
    plt.figure()
    plt.plot(dates, prices)
    plt.title(stocks[mm])
    plt.grid(True)
    plt.xlabel('Dates')
    plt.ylabel('Prices')
    mm += 1
To flag business days, you can use pd.bdate_range, which returns an empty range when a date falls on a weekend:
# add a boolean column: True when the date is a business day
df['IsBDay'] = df['Date'].apply(lambda x: bool(len(pd.bdate_range(x, x))))
Now keep only the rows where IsBDay is True:
df = df[df['IsBDay']]
Now your DF is ready for plotting.
Hope this helps.
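The filter above only removes weekends. For the non-business-hours part of the question, a sketch (assuming the 'Date' column holds full timestamps, and a 09:00-17:00 session plus an 8-row tick stride as placeholder values to adjust for the actual trading hours):
import pandas as pd
import matplotlib.pyplot as plt

def plot_business_hours(frame, title, start='09:00', end='17:00'):
    # keep only Mon-Fri rows inside the session window
    frame = frame.set_index(pd.to_datetime(frame['Date'])).sort_index()
    frame = frame[frame.index.dayofweek < 5].between_time(start, end)
    # plot against positions instead of timestamps so the skipped nights
    # and weekends don't leave empty stretches on the x-axis
    pos = range(len(frame))
    plt.figure()
    plt.plot(pos, frame['Price'])
    plt.xticks(pos[::8], frame.index.strftime('%m-%d %H:%M')[::8], rotation=90)
    plt.title(title)
    plt.grid(True)
    plt.tight_layout()
    plt.show()
Something like plot_business_hours(ii, stocks[mm]) could then slot into the loop above in place of the current plotting calls.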

Joining files in pandas

I come from an Excel background but I love pandas and it has truly made me more efficient. Unfortunately, I probably carry over some bad habits from Excel. I have three large files (between 2 million and 13 million rows each) which contain data on interactions which could be tied together, unfortunately, there is no unique key connecting the files. I am literally concatenating (Excel formula) 3 fields into one new column on all three files.
Three columns exist on each file, which I combined together (the other fields would be things like the reason for interaction on one file, the score on another file, and some other data on the third file, which I would like to tie back to a certain AgentID):
Date | CustomerID | AgentID
I edit my date format to be uniform on each file:
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].apply(lambda x: x.date().strftime('%Y-%m-%d'))
Then I create a unique column (well, as unique as I can get it... sometimes the same customer interacts with the same agent on the same date, but this should be quite rare):
df['Unique'] = df['Date'].astype(str) + df['CustomerID'].astype(str) + df['AgentID'].astype(str)
I do the same steps for df2 and then:
combined = pd.merge(df, df2, how = 'left', on = 'Unique')
I typically send that to a new csv in case something crashes, gzip it, then read it again and do the same process again with the third file.
final = pd.merge(combined, df3, how = 'left', on = 'Unique')
As you can see, this takes time. I have to format the dates on each and then turn them into text, create an object column which adds to the filesize, and (due to the raw data issues themselves) drop duplicates so I don't accidentally inflate numbers. Is there a more efficient workflow for me to follow?
Instead of using on = 'Unique':
combined = pd.merge(df, df2, how = 'left', on = 'Unique')
you can pass a list of columns to the on keyword parameter:
combined = pd.merge(df, df2, how='left', on=['Date', 'CustomerID', 'AgentID'])
Pandas will correctly merge rows based on the triplet of values from the 'Date', 'CustomerID', 'AgentID' columns. This is safer (see below) and easier than building the Unique column.
For example,
import pandas as pd
import numpy as np
np.random.seed(2015)
df = pd.DataFrame({'Date': pd.to_datetime(['2000-1-1', '2000-1-1', '2000-1-2']),
                   'CustomerID': [1, 1, 2],
                   'AgentID': [10, 10, 11]})
df2 = df.copy()
df3 = df.copy()
L = len(df)
df['ABC'] = np.random.choice(list('ABC'), L)
df2['DEF'] = np.random.choice(list('DEF'), L)
df3['GHI'] = np.random.choice(list('GHI'), L)
df2 = df2.iloc[[0, 2]]
combined = df
for x in [df2, df3]:
    combined = pd.merge(combined, x, how='left', on=['Date', 'CustomerID', 'AgentID'])
yields
In [200]: combined
Out[200]:
   AgentID  CustomerID      Date ABC DEF GHI
0       10           1  2000-1-1   C   F   H
1       10           1  2000-1-1   C   F   G
2       10           1  2000-1-1   A   F   H
3       10           1  2000-1-1   A   F   G
4       11           2  2000-1-2   A   F   I
A cautionary note:
Adding the CustomerID to the AgentID to create a Unique ID could be problematic, particularly if neither has a fixed-width format. For example, if CustomerID = '12' and AgentID = '34', then (ignoring the date, which causes no problem since it does have a fixed width) Unique would be '1234'. But if CustomerID = '1' and AgentID = '234', then Unique would again equal '1234'. So the Unique IDs may be mixing entirely different customer/agent pairs.
PS. It is a good idea to parse the date strings into date-like objects:
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
Note that if you use
combined = pd.merge(combined, x, how='left', on=['Date','CustomerID', 'AgentID'])
it is not necessary to convert any of the columns back to strings.
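On the "accidentally inflate numbers" concern, a sketch (not part of the original answer): dropping duplicate key triplets on the right-hand frames before merging keeps the left join from multiplying rows, which is exactly what happens in the output above.
keys = ['Date', 'CustomerID', 'AgentID']
# de-duplicate the right-hand frames on the join keys so each left row matches at most once
combined = df
for x in [df2.drop_duplicates(subset=keys), df3.drop_duplicates(subset=keys)]:
    combined = pd.merge(combined, x, how='left', on=keys)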