How to correlate columns from two different dataframes with datetimeindex - pandas

I am trying to correlate the same column in two different dataframes (of the same size). The dfs hold stock data with a DatetimeIndex. Every correlation I can come up with only gives NaN as an answer. Is the dtype of the df indexes messing things up? Note: at this point in the program, I don't care what the dates / index actually are.
input:
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like # temp fix
import numpy as np
import fix_yahoo_finance as yf
from pandas_datareader import data, wb
from datetime import date
df1 = yf.download('IBM', start=date(2000, 1, 3), end=date(2000, 1, 5), progress=False)
df2 = yf.download('IBM', start=date(2000, 1, 6), end=date(2000, 1, 10), progress=False)
print(df1)
print(df2)
print(df1['Open'].corr(df2['Open']))
output:
Open High Low Close Adj Close Volume
Date
2000-01-03 112.4375 116.00 111.875 116.0000 81.096031 10347700
2000-01-04 114.0000 114.50 110.875 112.0625 78.343300 8227800
2000-01-05 112.9375 119.75 112.125 116.0000 81.096031 12733200
Open High Low Close Adj Close Volume
Date
2000-01-06 118.00 118.9375 113.500 114.0 79.697784 7971900
2000-01-07 117.25 117.9375 110.625 113.5 79.348267 11856700
2000-01-10 117.25 119.3750 115.375 118.0 82.494217 8540500
nan

The indexes do not match, which I believe is why you get NaN. Use numpy.corrcoef on the raw values to get your result:
np.corrcoef(df1['Open'].values, df2['Open'].values)
Output
[[ 1. -0.74615579]
[-0.74615579 1. ]]
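Alternatively, if you want to keep using Series.corr, you can align the two Series by position before correlating. A minimal sketch of the same idea, just dropping the DatetimeIndex:
# reset both Series to a plain 0..n-1 index so .corr no longer sees mismatched dates
open1 = df1['Open'].reset_index(drop=True)
open2 = df2['Open'].reset_index(drop=True)
print(open1.corr(open2))  # matches the off-diagonal value from np.corrcoef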

Related

How to Exclude NaNs from Pandas Rolling, but not return NaN if there is one in the DataFrame

Currently I have the DataFrame seen below and I want to do a rolling average over the last 10 occurrences that have actual values, but skip the NaNs.
Example DataFrame
The issue is that if I run df['AST_Hit'].rolling(10).mean(skipna=True).shift(1) I get the DataFrame below, which is not what I am looking for
Example Output DataFrame
I've tried using window and min_periods but that does not give me what I want, as I don't want the average over anything greater than 10.
Ideally I would like the DataFrame to be able to discard a NaN but still check whether there are 10 values in that selection. From what I am describing, I think I need some sort of max period equal to 10 as well as a min period equal to 10, but I could not find anything in the pandas documentation on setting up a max period for rolling.
Maybe it would also be best if I just dropped any NaN rows. My DataFrame is much bigger than what is seen here, so it isn't just those 3 rows that contain a NaN, but dropping them may be the best course of action.
Any help or tips is greatly appreciated.
This might help you:
import pandas as pd
import numpy as np
# create a sample DataFrame with non-numeric columns
np.random.seed(123)
df = pd.DataFrame({
    'Date': pd.date_range(start='2022-01-01', periods=100),
    'AST_Hit': np.random.randint(0, 10, size=100),
    'Other_Column': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] * 10
})
df.iloc[3:6, 1] = np.nan
df.iloc[7, 0] = np.nan
df.iloc[10:15, 2] = np.nan
df.iloc[20:25, 1] = np.nan
df.iloc[30:40, 2] = np.nan
# compute the rolling average over the last 10 non-null values:
# drop the NaNs first, take a strict 10-value rolling mean, then align back to the original index
non_null = df['AST_Hit'].dropna()
result = non_null.rolling(window=10, min_periods=10).mean().reindex(df.index)
print(result)
which gives
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
...
95 5.6
96 5.1
97 4.7
98 4.2
99 3.9
Name: AST_Hit, Length: 100, dtype: float64
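If dropping the NaN rows outright turns out to be acceptable (as the question considers), that is even simpler. A minimal sketch, reusing df from above and the shifted rolling mean from the question:
# drop the rows where AST_Hit is missing, then apply the plain 10-row rolling mean
clean = df.dropna(subset=['AST_Hit'])
clean_rolling = clean['AST_Hit'].rolling(10).mean().shift(1)
print(clean_rolling.tail())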

Plotting time series box and whisker plot with missing date values for origin destination pairs

I have the following data set:
df.head(7)
Origin Dest Date Quantity
0 Atlanta LA 2021-09-09 1
1 Atlanta LA 2021-09-11 4
2 Atlanta Chicago 2021-09-16 1
3 Atlanta Seattle 2021-09-27 12
4 Seattle LA 2021-09-29 2
5 Seattle Atlanta 2021-09-13 2
6 Seattle Newark 2021-09-17 7
In short, this table represents the number of items (Quantity) that were sent from a given origin to a given destination on a given date. The table contains 1 month of data. This table was read with:
shipments = pd.read_csv('shipments.csv', parse_dates=['Date'])
Note that this is a sparse table: if Quantity=0 for a particular (Origin, Dest, Date) pair then this row is not included in the table. For example, on 2021-09-10 no items were sent from Atlanta to LA, so this row is not included in the data.
I would like to visualize this data using time series box and whisker plots. The x-axis of my graph should show the day, and Quantity should be on the y-axis. A boxplot should represent the various percentiles aggregated over all (origin-destination) pairs.
Similarly, would it be possible to create a graph which, instead of every day, only shows Monday-Sunday on the x-axis (and hence shows the results per day of the week)?
To generate the rows with missing data I used the following code:
table = pd.pivot_table(data=shipments, index='Date', columns=['Origin','Dest'], values='Quantity', fill_value=0)
idx = pd.date_range('2021-09-06','2021-10-10')
table = table.reindex(idx,fill_value=0)
You could transpose the table dataframe, and use that as input for a sns.boxplot. And you could create a similar table for the day of the week. Note that with many zeros, the boxplot might look a bit strange.
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# first create some test data, somewhat similar to the given data
N = 1000
cities = ['Atlanta', 'LA', 'Chicago', 'Seattle', 'Newark']
shipments = pd.DataFrame({'Origin': np.random.choice(cities, N),
                          'Dest': np.random.choice(cities, N),
                          'Date': np.random.choice(pd.date_range('2021-09-06', '2021-10-10'), N),
                          'Quantity': (np.random.uniform(1, 4, N) ** 3).astype(int)})
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(15, 5), gridspec_kw={'width_ratios': [3, 1]})
# create boxplots for each day
table_month = pd.pivot_table(data=shipments, index='Date', columns=['Origin', 'Dest'], values='Quantity', fill_value=0)
idx = pd.date_range('2021-09-06', '2021-10-10')
table_month = table_month.reindex(idx, fill_value=0)
sns.boxplot(data=table_month.T, ax=ax1)
labels = [day.strftime('%d\n%b %Y') if i == 0 or day.day == 1 else day.strftime('%d')
          for i, day in enumerate(table_month.index)]
ax1.set_xticklabels(labels)
# create boxplots for each day of the week
table_dow = pd.pivot_table(data=shipments, index=shipments['Date'].dt.dayofweek, columns=['Origin', 'Dest'],
                           values='Quantity', fill_value=0)
table_dow = table_dow.reindex(range(7), fill_value=0)
sns.boxplot(data=table_dow.T, ax=ax2)
labels = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun']
ax2.set_xticklabels(labels)
ax2.set_xlabel('') # remove superfluous x label
fig.tight_layout()
plt.show()
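If seaborn is not available, pandas' built-in DataFrame.boxplot can draw a similar per-weekday view from the same transposed table. A small sketch reusing table_dow from the code above:
# one box per weekday, drawn directly from the transposed pivot table
ax = table_dow.T.boxplot()
ax.set_xticklabels(['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'])
plt.show()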

Random Choice loop through groups of samples

I have a df containing the columns "Income_Groups", "Rate", and "Probability". I need to randomly select a rate for each income group. How can I write a loop and print out the result for each income bin?
The pandas data frame table looks like this:
import pandas as pd
df={'Income_Groups':['1','1','1','2','2','2','3','3','3'],
    'Rate':[1.23, 1.25, 1.56, 2.11, 2.32, 2.36, 3.12, 3.45, 3.55],
    'Probability':[0.25, 0.50, 0.25, 0.50, 0.25, 0.25, 0.10, 0.70, 0.20]}
df2=pd.DataFrame(data=df)
df2
Shooting in the dark here, but you can use np.random.choice:
import numpy as np
(df2.groupby('Income_Groups')
    .apply(lambda x: np.random.choice(x['Rate'], p=x['Probability']))
)
Output (can vary due to randomness):
Income_Groups
1 1.25
2 2.36
3 3.45
dtype: float64
You can also pass size into np.random.choice:
(df2.groupby('Income_Groups')
    .apply(lambda x: np.random.choice(x['Rate'], size=3, p=x['Probability']))
)
Output:
Income_Groups
1 [1.23, 1.25, 1.25]
2 [2.36, 2.11, 2.11]
3 [3.12, 3.12, 3.45]
dtype: object
Use GroupBy.apply because of the weights.
import numpy as np
(df2.groupby('Income_Groups')
    .apply(lambda gp: np.random.choice(a=gp.Rate, p=gp.Probability, size=1)[0]))
#Income_Groups
#1 1.23
#2 2.11
#3 3.45
#dtype: float64
Another silly way, because your weights seem to have precision to 2 decimal places:
s = df2.set_index(['Income_Groups', 'Probability']).Rate
(s.repeat((s.index.get_level_values('Probability') * 100).round().astype(int))  # Weight
  .sample(frac=1)                             # Shuffle |
  .reset_index()                              # +       | -> Random Select
  .drop_duplicates(subset=['Income_Groups'])  # Select  |
  .drop(columns='Probability'))
# Income_Groups Rate
#0 2 2.32
#1 1 1.25
#3 3 3.45
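If you prefer to stay entirely within pandas, DataFrame.sample also accepts weights, so the weighted pick can be done per group without calling numpy directly. A minimal sketch on the same df2:
# draw one row per income group, weighting the rows by the Probability column
picks = (df2.groupby('Income_Groups', group_keys=False)
            .apply(lambda g: g.sample(n=1, weights='Probability')))
print(picks[['Income_Groups', 'Rate']])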

monthly frequency time series data frame, fill NaNs with specific values

How do I assign values to the months from April to September?
I would like the April value to equal 42000, May=41000, June=61200, July=71000, August=71000.
df.index
RangeIndex(start=0, stop=60, step=1)
For a mapping like this, you would typically define a dictionary and map the values. Use .split to get the month part of the date and fillna to fill only the missing values.
Data:
import pandas as pd
import numpy as np
df = pd.DataFrame({'Date': ['2018-Jan', '2018-Feb', '2018-Mar', '2018-Apr', '2018-May',
                            '2018-Jun', '2018-Jul', '2018-Aug', '2018-Sep'],
                   'Value': [75267.169, 42258.868, 43793] + [np.NaN]*6})
Code:
d = {'Apr': 42000, 'May': 41000, 'Jun': 61200, 'Jul': 71000, 'Aug': 71000}
df['Value'] = df.Value.fillna(df.Date.str.split('-').str[1].map(d))
Output:
Date Value
0 2018-Jan 75267.169
1 2018-Feb 42258.868
2 2018-Mar 43793.000
3 2018-Apr 42000.000
4 2018-May 41000.000
5 2018-Jun 61200.000
6 2018-Jul 71000.000
7 2018-Aug 71000.000
8 2018-Sep NaN
A super simple and ugly way to do it, using pd.DataFrame.iloc:
to_fill = [42000,41000,61200,71000,71000]
df.iloc[54:59,1] = to_fill
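Note that hard-coded positions like iloc[54:59] break if rows are added or reordered. Assuming the real frame stores dates as the same 'YYYY-Mon' strings used above (an assumption, since only the RangeIndex was shown), a month-based fill is less brittle:
# parse the month abbreviation out of each date string and fill only the missing values
d = {'Apr': 42000, 'May': 41000, 'Jun': 61200, 'Jul': 71000, 'Aug': 71000}
months = pd.to_datetime(df['Date'], format='%Y-%b').dt.strftime('%b')  # e.g. '2018-Apr' -> 'Apr'
df['Value'] = df['Value'].fillna(months.map(d))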

Double grouping data by bins AND time with pandas

I am trying to bin values from a timeseries (hourly and subhourly temperature values) within a time window.
That is, from original hourly values, I'd like to extract binned values on a daily, weekly or monthly basis.
I have tried to combine groupby+TimeGrouper(" ") with pd.cut, with poor results.
I came across a nice function from this tutorial, which suggests mapping the data (associating each value with its bin range in the next column) and then grouping by that.
def map_bin(x, bins):
    kwargs = {}
    if x == max(bins):
        kwargs['right'] = True
    bin = bins[np.digitize([x], bins, **kwargs)[0]]
    bin_lower = bins[np.digitize([x], bins, **kwargs)[0]-1]
    return '[{0}-{1}]'.format(bin_lower, bin)
df['Binned'] = df['temp'].apply(map_bin, bins=freq_bins)
However, applying this function results in an IndexError: index n is out of bounds for axis 0 with size n.
Ideally, I'd like to make this work and apply it to achieve a double grouping at the same time: one by bins and one by TimeGrouper.
Update:
It appears that my earlier attempt was causing problems because of the double-indexed columns. I have simplified to something that seems to work much better.
import pandas as pd
import numpy as np
xaxis = np.linspace(0,50)
temps = pd.Series(data=xaxis,name='temps')
times = pd.date_range(start='2015-07-15',periods=50,freq='6H')
temps.index = times
bins = [0,10,20,30,40,50]
temps.resample('W').agg(lambda series:pd.value_counts(pd.cut(series,bins),sort=False)).unstack()
This outputs:
(0, 10] (10, 20] (20, 30] (30, 40] (40, 50]
2015-07-19 9 10 0 0 0
2015-07-26 0 0 10 10 8
2015-08-02 0 0 0 0 2
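For completeness, the double grouping asked about originally (one grouper for the time window, one for the temperature bins) can also be expressed directly with groupby. A minimal sketch on the same toy series, which should produce an equivalent table:
import numpy as np
import pandas as pd
# same toy data as in the update above
temps = pd.Series(np.linspace(0, 50), name='temps',
                  index=pd.date_range(start='2015-07-15', periods=50, freq='6H'))
bins = [0, 10, 20, 30, 40, 50]
# group by the weekly time window and by the temperature bin at the same time
counts = temps.groupby([pd.Grouper(freq='W'), pd.cut(temps, bins)]).count().unstack()
print(counts)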