Given a dataframe with a non-regular time series as an index, I'd like to find the max delta between the values within any 10-second window. Here is some code that does what I want:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
xs = np.cumsum(np.random.rand(200))
# This is to create a general situation where the max is not always at the end or beginning
ys = xs**1.2 + 10 * np.sin(xs)
plt.plot(xs, ys, '+-')
threshold = 10
xs_thresh_ind = np.zeros_like(xs, dtype=int)
deltas = np.zeros_like(ys)
for i, x in enumerate(xs):
    # Find indices that lie within the time threshold
    period_end_ind = np.argmax(xs > x + threshold)
    # Only operate when the window is wide enough (this can be treated differently)
    if period_end_ind > 0:
        xs_thresh_ind[i] = period_end_ind
        # Find extrema in the period
        period_min = np.min(ys[i:period_end_ind + 1])
        period_max = np.max(ys[i:period_end_ind + 1])
        deltas[i] = period_max - period_min
max_ind_low = np.argmax(deltas)
max_ind_high = xs_thresh_ind[max_ind_low]
max_delta = deltas[max_ind_low]
print(
    'Max delta {:.2f} is in period x[{}]={:.2f},{:.2f} and x[{}]={:.2f},{:.2f}'
    .format(max_delta, max_ind_low, xs[max_ind_low], ys[max_ind_low],
            max_ind_high, xs[max_ind_high], ys[max_ind_high]))
df = pd.DataFrame(ys, index=xs)
OUTPUT:
Max delta 48.76 is in period x[167]=86.10,200.32 and x[189]=96.14,249.09
Is there an efficient, idiomatic pandas way to achieve something similar?
Create a Series from the ys values, indexed by xs, but convert xs to actual timedelta elements rather than their float equivalent.
ts = pd.Series(ys, index=pd.to_timedelta(xs, unit="s"))
We want to apply a leading 10-second window in which we calculate the difference between max and min. Because we want it to be leading, we'll sort the Series in descending order and apply a trailing window.
deltas = ts.sort_index(ascending=False).rolling("10s").agg(lambda s: s.max() - s.min())
Find the maximum delta with deltas[deltas == deltas.max()], which gives
0 days 00:01:26.104797298 48.354851
meaning a delta of 48.35 was found in the interval [86.1, 96.1)
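Putting the pieces together, a self-contained version of the above, reusing the question's toy data:
import numpy as np
import pandas as pd

np.random.seed(0)
xs = np.cumsum(np.random.rand(200))
ys = xs**1.2 + 10 * np.sin(xs)

# Series indexed by actual timedeltas so a time-based rolling window works
ts = pd.Series(ys, index=pd.to_timedelta(xs, unit="s"))

# Reverse the index so the trailing 10 s window acts as a leading window
deltas = ts.sort_index(ascending=False).rolling("10s").agg(lambda s: s.max() - s.min())
print(deltas[deltas == deltas.max()])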
TLDR: How can one adjust the for-loop for a faster execution time:
import numpy as np
import pandas as pd
import time
np.random.seed(0)
# Given a DataFrame df and a row_index
df = pd.DataFrame(np.random.randint(0, 3, size=(30000, 50)))
target_row_index = 5
start = time.time()
target_row = df.loc[target_row_index]
result = []
# Method 1: Optimize this for-loop
for row in df.iterrows():
    """
    Logic of calculating the variables check and score:
    if the values for a specific column are 2 for both rows (row/target_row), it should add 1 to the score
    if for one of the rows the value is 1 and for the other 2 for a specific column, it should subtract 1 from the score.
    """
    check = row[1] + target_row  # row[1] takes 30 microseconds per call
    score = np.sum(check == 4) - np.sum(check == 3)  # np.sum takes 47 microseconds per call
    result.append(score)
print(time.time()-start)
# Goal: Calculate the list result as efficiently as possible
# Method 2: Optimize Apply
def add(a, b):
    check = a + b
    return np.sum(check == 4) - np.sum(check == 3)
start = time.time()
q = df.apply(lambda row: add(row, target_row), axis=1)
print(time.time()-start)
So I have a dataframe with 30,000 rows and a target row in this dataframe with a given row index. Now I want to compare this row to all the other rows in the dataset by calculating a score. The score is calculated as follows:
if the values for a specific column are 2 for both rows, it should add 1 to the score
if for one of the rows the value is 1 and for the other 2 for a specific column, it should subtract 1 from the score.
The result is then the list of all the scores we just calculated.
As I need to execute this code quite often, I would like to optimize it for performance.
Any help is very much appreciated.
I already read Optimization when using Pandas; are there further resources you can recommend? Thanks
If you're willing to convert your df to a NumPy array, NumPy has some really good vectorisation that helps. My code using NumPy is as below:
import time
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 3, size=(30000, 50)))
target_row_index = 5
start_time = time.time()
# Converting stuff to NumPy arrays
target_row = df.loc[target_row_index].to_numpy()
np_arr = df.to_numpy()
# Calculations
np_arr += target_row
check = np.sum(np_arr == 4, axis=1) - np.sum(np_arr == 3, axis=1)
result = list(check)
end_time = time.time()
print(end_time - start_time)
Your complete code (on Google Colab for me) outputs a time of 14.875332832336426 s, while the NumPy code above outputs a time of 0.018691539764404297 s, and of course, the result list is the same in both cases.
Note that in general, if your calculations are purely numerical, NumPy will virtually always be better than Pandas and a for loop. Pandas really shines through with strings and when you need the column and row names, but for pure numbers, NumPy is the way to go due to vectorisation.
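For what it's worth, the scoring rule can also be encoded directly with boolean masks instead of the sum == 4 / sum == 3 trick. A sketch, not from the original answer, that uses a fresh un-mutated copy of the array (the code above modified np_arr in place):
np_arr = df.to_numpy()  # fresh copy, without target_row added in
# +1 where both values are 2; -1 where one value is 1 and the other is 2
both_two = (np_arr == 2) & (target_row == 2)
one_and_two = ((np_arr == 1) & (target_row == 2)) | ((np_arr == 2) & (target_row == 1))
result = list(both_two.sum(axis=1) - one_and_two.sum(axis=1))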
I have a random variable X sampled at random times T similar to this toy data:
import numpy as np
T = np.random.exponential(size=1000).cumsum()
X = np.random.normal(size=1000)
This timeseries looks like this (figure omitted: X plotted against the irregular sample times T).
A key point is that the sampling interval is non-uniform: by this I mean that the elements of np.diff(T) are not all equal. I need to resample the timeseries T,X on uniform intervals with a specified width dt, meaning (np.diff(T)==dt).all() should return True.
I can resample the timeseries on uniform intervals using scipy.interpolate.interp1d, but this method does not allow me to specify the interval size dt:
from scipy.interpolate import interp1d
Tu = np.linspace(T.min(), T.max(), T.size)  # same range and size, but with a uniform interval
F = interp1d(T, X, fill_value='extrapolate')  # build the interpolator on the original times
Xu = F(Tu)  # now it's resampled
The essential issue is that interp1d must be constructed with T.size == X.size, and np.linspace lets me fix the number of points rather than the spacing dt.
Is there another method I can try to resample the time series T,X on uniform intervals of width dt?
Build the interpolator on the original non-uniform times, then evaluate it on a uniform grid generated with np.arange, whose step is exactly dt:
dt = ...
from scipy.interpolate import interp1d
F = interp1d(T, X, fill_value='extrapolate')  # interpolator over the original, non-uniform T
Tnew = np.arange(T.min(), T.max(), dt)  # uniform grid with spacing dt
Xnew = F(Tnew)
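To confirm the requirement from the question that the new grid really has spacing dt, with an illustrative value since the question leaves dt unspecified:
dt = 0.5  # hypothetical example value; pick whatever interval you need
Tnew = np.arange(T.min(), T.max(), dt)
Xnew = F(Tnew)
assert np.allclose(np.diff(Tnew), dt)  # uniform spacing, up to floating-point rounding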
I am creating monthly diurnal plots from pandas dataframe. I need to plot mean, median or any quantile. I am able to achieve it correctly, but with large data, quantile computation is way slower than mean or median computation. Is there any faster way to achieve this?
import pandas as pd
import numpy as np
import datetime as dt
date_range = pd.date_range(start=dt.datetime(2018,1,1,00,00), end=dt.datetime(2018,12,31,23,59), freq='1min')
N = len(date_range)
df = pd.DataFrame({'Test': np.random.rand(N)}, index=date_range)
df['Time'] = df.index.time
df['Month'] = df.index.month
time_mean_median = dt.datetime(2019,1,1,0,0,0)
time_quantiles = dt.datetime(2019,1,1,0,0,0)
for i in range(12):
    df_month = df[['Test', 'Time']].loc[df['Month'] == i + 1]
    start_time = dt.datetime.now()
    df1_group = df_month.groupby('Time').agg([np.mean, np.median])
    time_mean_median += dt.datetime.now() - start_time
    quantiles = [0.23, 0.72]
    start_time = dt.datetime.now()
    df2_group = df_month.groupby('Time').quantile(q=quantiles).unstack()
    time_quantiles += dt.datetime.now() - start_time
print('Mean/median computation time {}'.format(time_mean_median.time()))
print('Quantile computation time {}'.format(time_quantiles.time()))
In this example, the mean/median computation takes around 0.7 seconds in total, compared to almost 12 seconds for the quantile computation.
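One possible speedup, not from the original thread: since every Time group within a month holds one value per day, the month can be pivoted into a (time-of-day x date) array and all quantiles computed in a single vectorized NumPy call, which may be substantially faster than GroupBy.quantile (worth benchmarking). A sketch, assuming the df_month and quantiles from the loop above:
# Pivot: rows = time of day, columns = calendar date, values = 'Test'
pivot = df_month.assign(Date=df_month.index.date).pivot(index='Time', columns='Date', values='Test')
# nanquantile tolerates missing slots; result has shape (len(quantiles), 1440)
q23, q72 = np.nanquantile(pivot.to_numpy(), quantiles, axis=1)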
I am trying to filter out some outliers from a scatter plot of GPS elevation displacements with dates.
I'm trying to use df.rolling to compute a median and standard deviation for each window and then remove any point that is more than 3 standard deviations from the median.
However, I can't figure out a way to loop through the column and compare each value to the rolling median.
Here is the code I have so far
import pandas as pd
import numpy as np
def median_filter(df, window):
    median = df['b'].rolling(window).median()
    std = df['b'].rolling(window).std()
    for row in df.b:
        # compare each value to its rolling median -- this is where I'm stuck
        pass

df = pd.DataFrame(np.random.randint(0, 100, size=(100, 2)), columns=['a', 'b'])
median_filter(df, 10)
How can I loop through and compare each point and remove it?
Just filter the dataframe:
df['median'] = df['b'].rolling(window).median()
df['std'] = df['b'].rolling(window).std()
# filter setup
df = df[(df.b <= df['median'] + 3*df['std']) & (df.b >= df['median'] - 3*df['std'])]
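One caveat worth noting, as my addition rather than part of the original answer: the rolling stats are NaN for the first window-1 rows, so those rows are silently dropped by the filter. If they should be kept, min_periods relaxes this for all but the very first row (the std of a single point is NaN):
df['median'] = df['b'].rolling(window, min_periods=2).median()
df['std'] = df['b'].rolling(window, min_periods=2).std()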
There might well be a more pandastic way to do this - this is a bit of a hack, relying on a somewhat manual mapping of the original df's index to each rolling window. (I picked size 6.) The records up to and including row 6 are associated with the first window; row 7 is the second window, and so on.
import numpy as np
import pandas as pd

n = 100
df = pd.DataFrame(np.random.randint(0, n, size=(n, 2)), columns=['a', 'b'])
## set window size
window = 6
num_std = 1  # I set it at just 1; with real data and larger windows, it can be larger
## create df with rolling stats, upper and lower bounds
bounds = pd.DataFrame({'median': df['b'].rolling(window).median(),
                       'std': df['b'].rolling(window).std()})
bounds['upper'] = bounds['median'] + bounds['std']*num_std
bounds['lower'] = bounds['median'] - bounds['std']*num_std
## here, we set an identifier for each window which maps to the original df
## the first six rows are the first window; then each additional row is a new window
bounds['window_id']=np.append(np.zeros(window),np.arange(1,n-window+1))
## then we can assign the original 'b' value back to the bounds df
bounds['b']=df['b']
## and finally, keep only rows where b falls within the desired bounds
bounds.loc[bounds.eval("lower<b<upper")]
This is my take on creating a median filter:
def median_filter(num_std=3):
    def _median_filter(x):
        _median = np.median(x)
        _std = np.std(x)
        s = x[-1]  # the current point is the last element of the trailing window
        return s if _median - num_std * _std <= s <= _median + num_std * _std else np.nan
    return _median_filter

df['b'].rolling(window).apply(median_filter(num_std=3), raw=True)
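A usage sketch, as my addition, assuming the question's df with column 'b' and a chosen window: the rolling apply returns NaN for flagged points, so outliers are removed by keeping the non-NaN rows:
filtered = df['b'].rolling(window).apply(median_filter(num_std=3), raw=True)
df_clean = df[filtered.notna()]  # note: the first window-1 rows are NaN too and get dropped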
Matplotlib's "specgram" display of a heatmap showing frequency (y-axis) vs. time (x-axis) is useful for time series analysis, but I would like to have the y-axis displayed in terms of period (= 1/frequency) rather than frequency. Does anyone have a complete working solution to achieve this?
The python code immediately below generates the author's original plot using "specgram" and, currently commented out, a comparison with the suggested solution that was offered using "mlab.specgram". This suggested solution succeeds with the easy conversion from frequency to period = 1/frequency, but does not generate a viable plot for the author's example.
from __future__ import division
from datetime import datetime
import numpy as np
from pandas import DataFrame, Series
from pandas_datareader import data as web  # pandas.io.data was removed; DataReader now lives in pandas_datareader
import pandas as pd
from pylab import plot, show, subplot, specgram
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
################################################
# obtain data:
ticker = "SPY"
source = "google"  # note: the 'google' source may no longer work; another data source may be needed
start_date = datetime(1999,1,1)
end_date = datetime(2012,1,1)
qt = web.DataReader(ticker, source, start_date, end_date)
qtC = qt.Close
################################################
data = qtC
fs = 1 # 1 sample / day
nfft = 128
# display the time-series data
fig = plt.figure()
ax1 = fig.add_subplot(311)
ax1.plot(range(len(data)),data)
#----------------
# Original version
##################
# specgram (NOT mlab.specgram) --> gives direct plot, but in Frequency space (want plot in Period, not freq).
ax2 = fig.add_subplot(212)
spec, freq, t, im = specgram(data, NFFT=nfft, Fs=fs, noverlap=0)  # pyplot's specgram also returns the image handle
#----------------
"""
# StackOverflow version (with minor changes to axis titles)
########################
# calculate the spectrogram
spec, freq, t = mlab.specgram(data, NFFT=nfft, Fs=fs, noverlap=0)
# calculate the bin limits in time (x dir)
# note that there are n+1 fence posts
dt = t[1] - t[0]
t_edge = np.empty(len(t) + 1)
t_edge[:-1] = t - dt / 2.
# however, due to the way the spectrogram is calculated, the first and last bins
# are a bit different:
t_edge[0] = 0
t_edge[-1] = t_edge[0] + len(data) / fs
# calculate the frequency bin limits:
df = freq[1] - freq[0]
freq_edge = np.empty(len(freq) + 1)
freq_edge[:-1] = freq - df / 2.
freq_edge[-1] = freq_edge[-2] + df
# calculate the period bin limits, omit the zero frequency bin
p_edge = 1. / freq_edge[1:]
# we'll plot both
ax2 = fig.add_subplot(312)
ax2.pcolormesh(t_edge, freq_edge, spec)
ax2.set_ylim(0, fs/2)
ax2.set_ylabel('freq.[day^-1]')
ax3 = fig.add_subplot(313)
# note that the period has to be inverted both in the vector and the spectrum,
# as pcolormesh wants to have a positive difference between samples
ax3.pcolormesh(t_edge, p_edge[::-1], spec[:0:-1])
#ax3.set_ylim(0, 100/fs)
ax3.set_ylim(0, nfft)
ax3.set_xlabel('t [days]')
ax3.set_ylabel('period [days]')
"""
If you are only asking how to display the spectrogram differently, then it is actually rather straightforward.
One thing to note is that there are two functions called specgram: matplotlib.pyplot.specgram and matplotlib.mlab.specgram. The difference between these two is that the former draws a spectrogram whereas the latter only calculates one (and that's what we want).
The only slightly tricky thing is to calculate the colour mesh rectangle edge positions. We get the following from the specgram:
t: centerpoints in time
freq: frequency centers of the bins
For the time dimension, it is easy to calculate the bin limits from the centers:
t_edge[n] = t[0] + (n - .5) * dt, where dt is the time difference of two consecutive bins
It would be similarly simple for frequencies:
f_edge[n] = freq[0] + (n - .5) * df
but we want to use the period instead of frequency. This makes the first bin unusable, and we'll have to toss the DC component away.
A bit of code:
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import numpy as np
# create some data: (fs = sampling frequency)
fs = 2000.
ts = np.arange(10000) / fs
sig = np.sin(500 * np.pi * ts)
sig[5000:8000] += np.sin(200 * np.pi * (ts[5000:8000] + 0.0005 * np.random.random(3000)))
# calculate the spectrogram
spec, freq, t = mlab.specgram(sig, Fs=fs)
# calculate the bin limits in time (x dir)
# note that there are n+1 fence posts
dt = t[1] - t[0]
t_edge = np.empty(len(t) + 1)
t_edge[:-1] = t - dt / 2.
# however, due to the way the spectrogram is calculated, the first and last bins
# are a bit different:
t_edge[0] = 0
t_edge[-1] = t_edge[0] + len(sig) / fs
# calculate the frequency bin limits:
df = freq[1] - freq[0]
freq_edge = np.empty(len(freq) + 1)
freq_edge[:-1] = freq - df / 2.
freq_edge[-1] = freq_edge[-2] + df
# calculate the period bin limits, omit the zero frequency bin
p_edge = 1. / freq_edge[1:]
# we'll plot both
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax1.pcolormesh(t_edge, freq_edge, spec)
ax1.set_ylim(0, fs/2)
ax1.set_ylabel('frequency [Hz]')
ax2 = fig.add_subplot(212)
# note that the period has to be inverted both in the vector and the spectrum,
# as pcolormesh wants to have a positive difference between samples
ax2.pcolormesh(t_edge, p_edge[::-1], spec[:0:-1])
ax2.set_ylim(0, 100/fs)
ax2.set_xlabel('t [s]')
ax2.set_ylabel('period [s]')
This gives the two panels described above (figure omitted): the spectrogram against frequency on top, and against period below.