python-xarray: rolling mean example - pandas

I have a file of monthly data for one year (12 points). The data starts in December and ends in November. I'm hoping to create a 3-month running-mean file, which would be DJF, JFM, ..., SON (10 points).
I noticed there is a DataArray.rolling function which returns a rolling window object that I think would be useful for this. However, I haven't found any examples using the rolling function. I admit I'm not familiar with bottleneck, pandas.rolling_mean, or the more recent pandas.rolling, so my entry level is fairly low.
Here's some code to test:
import numpy as np
import pandas as pd
import xarray as xr
lat = np.linspace(-90, 90, num=181); lon = np.linspace(0, 359, num=360)
# Define monthly average time as day in middle of month
time = pd.date_range('15/12/1999', periods=12, freq=pd.DateOffset(months=1))
# Create data as 0:11 at each grid point
a = np.linspace(0,11,num=12)
# expand to 2D
a2d = np.repeat(a[:, np.newaxis], len(lat), axis=1)
# expand to 3D
a3d = np.repeat(a2d[:, :, np.newaxis], len(lon), axis=2)
# I'm sure there was a cleaner way to do that...
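# (For what it's worth, a one-step alternative I believe works here:
#  a3d = np.broadcast_to(a[:, None, None], (len(time), len(lat), len(lon))))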
da = xr.DataArray(a3d, coords=[time, lat, lon], dims=['time','lat','lon'])
# Having a stab at the 3-month rolling mean
da.rolling(dim='time',window=3).mean()
# Error output:
Traceback (most recent call last):
File "<ipython-input-132-9d64cc09c263>", line 1, in <module>
da.rolling(dim='time',window=3).mean()
File "/Users/Ray/anaconda/lib/python3.6/site-packages/xarray/core/common.py", line 478, in rolling
center=center, **windows)
File "/Users/Ray/anaconda/lib/python3.6/site-packages/xarray/core/rolling.py", line 126, in __init__
center=center, **windows)
File "/Users/Ray/anaconda/lib/python3.6/site-packages/xarray/core/rolling.py", line 62, in __init__
raise ValueError('exactly one dim/window should be provided')
ValueError: exactly one dim/window should be provided

You are very close. The rolling method takes key/value pairs that map as dim=window_size. This should work for you:
da.rolling(time=3).mean()
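If you also want exactly the 10 seasonal points (DJF, JFM, ..., SON) from the 12 months, a centered window plus dropping the NaN endpoints should get you there; a quick sketch using the same da as above:
da.rolling(time=3, center=True).mean().dropna('time')  # 12 monthly points -> 10 seasonal means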


NumPy Tensordot axes=2

I know there are many questions about tensordot, and I've skimmed some of the 15-page, mini-book answers that people have clearly spent hours making, but I haven't found an explanation of what axes=2 does.
This made me think that np.tensordot(b,c,axes=2) == np.sum(b * c), but as an array:
b = np.array([[1,10],[100,1000]])
c = np.array([[2,3],[5,7]])
np.tensordot(b,c,axes=2)
Out: array(7532)
But then this failed:
a = np.arange(30).reshape((2,3,5))
np.tensordot(a,a,axes=2)
If anyone can provide a short, concise explanation of np.tensordot(x,y,axes=2), and only axes=2, then I would gladly accept it.
In [70]: a = np.arange(24).reshape(2,3,4)
In [71]: np.tensordot(a,a,axes=2)
Traceback (most recent call last):
File "<ipython-input-71-dbe04e46db70>", line 1, in <module>
np.tensordot(a,a,axes=2)
File "<__array_function__ internals>", line 5, in tensordot
File "/usr/local/lib/python3.8/dist-packages/numpy/core/numeric.py", line 1116, in tensordot
raise ValueError("shape-mismatch for sum")
ValueError: shape-mismatch for sum
In my previous post I deduced that axes=2 translates to axes=([-2,-1],[0,1]):
How does numpy.tensordot function works step-by-step?
In [72]: np.tensordot(a,a,axes=([-2,-1],[0,1]))
Traceback (most recent call last):
File "<ipython-input-72-efdbfe6ff0d3>", line 1, in <module>
np.tensordot(a,a,axes=([-2,-1],[0,1]))
File "<__array_function__ internals>", line 5, in tensordot
File "/usr/local/lib/python3.8/dist-packages/numpy/core/numeric.py", line 1116, in tensordot
raise ValueError("shape-mismatch for sum")
ValueError: shape-mismatch for sum
So that's trying to do a double-axis reduction on the last 2 dimensions of the first a and the first 2 dimensions of the second a. With this a, that's a dimension mismatch. Evidently this form of axes was intended for 2d arrays, without much thought given to 3d ones. It is not a 3-axis reduction.
These single-digit axes values are something that some developer thought would be convenient, but that does not mean they were rigorously thought out or tested.
The tuple axes gives you more control:
In [74]: np.tensordot(a,a,axes=[(0,1,2),(0,1,2)])
Out[74]: array(4324)
In [75]: np.tensordot(a,a,axes=[(0,1),(0,1)])
Out[75]:
array([[ 880,  940, 1000, 1060],
       [ 940, 1006, 1072, 1138],
       [1000, 1072, 1144, 1216],
       [1060, 1138, 1216, 1294]])
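Put another way, the integer form axes=2 only works when the last two axes of the first argument line up with the first two axes of the second. A quick sketch of my own (b is a made-up companion array chosen so the shapes match):
import numpy as np
a = np.arange(24).reshape(2, 3, 4)   # trailing axes: (3, 4)
b = np.arange(60).reshape(3, 4, 5)   # leading axes: (3, 4)
# axes=2 contracts a's last two axes against b's first two
out = np.tensordot(a, b, axes=2)
print(out.shape)   # (2, 5)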

TypeError: 1st argument must be a real sequence 2 signal.spectrogram

I'm trying to take a signal from an electrical reading and decompose it into its spectrogram, but I keep getting a weird error. Here is the code:
import matplotlib.pyplot as plt
from scipy import signal
f, t, Sxx = signal.spectrogram(i_data.values, 130)
plt.pcolormesh(t, f, Sxx)
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
And here is the error:
convert_to_spectrogram(i_data.iloc[1000,:10020].dropna().values)
Traceback (most recent call last):
File "<ipython-input-140-e5951b2d2d97>", line 1, in <module>
convert_to_spectrogram(i_data.iloc[1000,:10020].dropna().values)
File "<ipython-input-137-5d63a96c8889>", line 2, in convert_to_spectrogram
f, t, Sxx = signal.spectrogram(wf, 130)
File "//anaconda3/lib/python3.7/site-packages/scipy/signal/spectral.py", line 750, in spectrogram
mode='psd')
File "//anaconda3/lib/python3.7/site-packages/scipy/signal/spectral.py", line 1836, in _spectral_helper
result = _fft_helper(x, win, detrend_func, nperseg, noverlap, nfft, sides)
File "//anaconda3/lib/python3.7/site-packages/scipy/signal/spectral.py", line 1921, in _fft_helper
result = func(result, n=nfft)
File "//anaconda3/lib/python3.7/site-packages/mkl_fft/_numpy_fft.py", line 335, in rfft
output = mkl_fft.rfft_numpy(x, n=n, axis=axis)
File "mkl_fft/_pydfti.pyx", line 609, in mkl_fft._pydfti.rfft_numpy
File "mkl_fft/_pydfti.pyx", line 502, in mkl_fft._pydfti._rc_fft1d_impl
TypeError: 1st argument must be a real sequence 2
My reading has a full cycle of 130 observations, and it's stored as individual values of a pandas df. The particular wave I am using can be found here. Does anyone have any idea what this error means?
(Small disclaimer, I do not know much about signal processing, so please forgive me if this is a naive question)
Python 3.6.9, scipy 1.3.3
Downloading your file and reading it with pandas.read_csv, I could generate the following spectrogram.
import matplotlib.pyplot as plt
import pandas as pd
from scipy.signal import spectrogram
i_data = pd.read_csv('wave.csv')
f, t, Sxx = spectrogram(i_data.values[:, 1], 130)
plt.pcolormesh(t, f, Sxx)
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
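I couldn't reproduce your exact failure, but one guess (an assumption on my part, not something verified): if the column arrives with object dtype (mixed or non-numeric CSV entries), mkl_fft's rfft can reject it, so forcing a plain float array first is a cheap thing to try:
sig = i_data.values[:, 1].astype('float64')  # assumption: coerce to a real-valued float array
f, t, Sxx = spectrogram(sig, 130)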

Inconsistencies between latest numpy and scikit-learn versions?

I just upgraded my versions of numpy and scikit-learn to the latest versions, i.e. numpy-1.16.3 and sklearn-0.21.0 (for Python 3.7). A lot is crashing, e.g. a simple PCA on a numeric matrix will not work anymore. For instance, consider this toy matrix:
Xt
Out[3561]:
matrix([[-0.98200559,  0.80514289,  0.02461868, -1.74564111],
        [ 2.3069239 ,  1.79912014,  1.47062378,  2.52407335],
        [-0.70465054, -1.95163302, -0.67250316, -0.56615338],
        [-0.75764211, -1.03073475,  0.98067997, -2.24648769],
        [-0.2751523 , -0.46869694,  1.7917171 , -3.31407694],
        [-1.52269241,  0.05986123, -1.40287416,  2.57148354],
        [ 1.38349325, -1.30947483,  0.90442436,  2.52055143],
        [-0.4717785 , -1.46032344, -1.50331841,  3.58598692],
        [-0.03124986, -3.52378987,  1.22626145,  1.50521572],
        [-1.01453403, -3.3211243 , -0.00752532,  0.56538522]])
Then run PCA on it:
import sklearn.decomposition as skd
est2 = skd.PCA(n_components=4)
est2.fit(Xt)
This fails:
Traceback (most recent call last):
File "<ipython-input-3563-1c97b7d5474f>", line 2, in <module>
est2.fit(Xt)
File "/home/sven/anaconda3/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 341, in fit
self._fit(X)
File "/home/sven/anaconda3/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 407, in _fit
return self._fit_full(X, n_components)
File "/home/sven/anaconda3/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 446, in _fit_full
total_var = explained_variance_.sum()
File "/home/sven/anaconda3/lib/python3.7/site-packages/numpy/core/_methods.py", line 36, in _sum
return umr_sum(a, axis, dtype, out, keepdims, initial)
TypeError: float() argument must be a string or a number, not '_NoValueType'
My impression is that numpy has been restructured at a very fundamental level, including single-column matrix referencing, such that functions like np.sum and np.sqrt don't behave as they did in older versions.
Does anyone know what the path forward with numpy is and what exactly is going on here?
At this point your fit call has run scipy.linalg.svd on your Xt and is looking at the singular values S:
self.mean_ = np.mean(X, axis=0)
X -= self.mean_
U, S, V = linalg.svd(X, full_matrices=False)
# flip eigenvectors' sign to enforce deterministic output
U, V = svd_flip(U, V)
components_ = V
# Get variance explained by singular values
explained_variance_ = (S ** 2) / (n_samples - 1)
total_var = explained_variance_.sum()
In my working case:
In [175]: est2.explained_variance_
Out[175]: array([6.12529695, 3.20400543, 1.86208619, 0.11453425])
In [176]: est2.explained_variance_.sum()
Out[176]: 11.305922832602981
The np.sum docs explain that, as of v1.15, it takes an initial parameter (ref. ufunc.reduce), and that the default is initial=np._NoValue:
In [178]: np._NoValue
Out[178]: <no value>
In [179]: type(np._NoValue)
Out[179]: numpy._globals._NoValueType
So that explains, in part, the _NoValueType reference in the error.
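As a quick toy illustration of my own showing what an explicit initial does:
import numpy as np
# initial seeds the reduction; left at the np._NoValue sentinel it means "no seed"
np.sum(np.array([1., 2., 3.]), initial=10.0)   # -> 16.0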
What's your scipy version?
In [180]: import scipy
In [181]: scipy.__version__
Out[181]: '1.2.1'
I wonder if your scipy.linalg.svd is returning an S array that is an 'old' ndarray which doesn't fully implement this initial parameter. I can't explain why that would happen, but I also can't otherwise explain why the array sum is having problems with np._NoValue.

Pandas Group Example Errors

I am trying to replicate an example from Wes McKinney's book on pandas; the code is here (it assumes all the names data files are under a names folder):
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
years = range(1880, 2011)
pieces = []
columns = ['name', 'sex', 'births']
for year in years:
    path = 'names/yob%d.txt' % year
    frame = pd.read_csv(path, names=columns)
    frame['year'] = year
    pieces.append(frame)
names = pd.concat(pieces, ignore_index=True)
names
def get_tops(group):
    return group.sort_index(by='births', ascending=False)[:1000]
grouped = names.groupby(['year','sex'])
grouped.apply(get_tops)
I am using Pandas 0.10 and Python 2.7. The error I am seeing is this:
Traceback (most recent call last):
File "names.py", line 21, in <module>
grouped.apply(get_tops)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.10.0-py2.7-linux-i686.egg/pandas/core/groupby.py", line 321, in apply
return self._python_apply_general(f)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.10.0-py2.7-linux-i686.egg/pandas/core/groupby.py", line 324, in _python_apply_general
keys, values, mutated = self.grouper.apply(f, self.obj, self.axis)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.10.0-py2.7-linux-i686.egg/pandas/core/groupby.py", line 585, in apply
values, mutated = splitter.fast_apply(f, group_keys)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.10.0-py2.7-linux-i686.egg/pandas/core/groupby.py", line 2127, in fast_apply
results, mutated = lib.apply_frame_axis0(sdata, f, names, starts, ends)
File "reduce.pyx", line 421, in pandas.lib.apply_frame_axis0 (pandas/lib.c:24934)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.10.0-py2.7-linux-i686.egg/pandas/core/frame.py", line 2028, in __setattr__
self[name] = value
File "/usr/local/lib/python2.7/dist-packages/pandas-0.10.0-py2.7-linux-i686.egg/pandas/core/frame.py", line 2043, in __setitem__
self._set_item(key, value)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.10.0-py2.7-linux-i686.egg/pandas/core/frame.py", line 2078, in _set_item
value = self._sanitize_column(key, value)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.10.0-py2.7-linux-i686.egg/pandas/core/frame.py", line 2112, in _sanitize_column
raise AssertionError('Length of values does not match '
AssertionError: Length of values does not match length of index
Any ideas?
I think this was a bug introduced in 0.10, namely issue #2605, "AssertionError when using apply after GroupBy". It's since been fixed.
You can either wait for the 0.10.1 release, which shouldn't be too long from now, or you can upgrade to the development version (either via git or simply by downloading the zip of master.)
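A side note for anyone replicating this with a modern pandas: sort_index(by=...) was later removed in favor of sort_values, so the helper would become something like this sketch:
def get_tops(group):
    # sort_values replaced sort_index(by=...) in later pandas releases
    return group.sort_values(by='births', ascending=False)[:1000]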

Clustering of sparse matrix in python and scipy

I'm trying to cluster some data with python and scipy, but the following code does not work, for a reason I do not understand:
from scipy.sparse import *
matrix = dok_matrix((en,en), int)
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        for auth2 in authors:
            if auth1 == auth2: continue
            id1 = e2id[auth1]
            id2 = e2id[auth2]
            matrix[id1, id2] += 1
from scipy.cluster.vq import vq, kmeans2, whiten
result = kmeans2(matrix, 30)
print result
It says:
Traceback (most recent call last):
File "cluster.py", line 40, in <module>
result = kmeans2(matrix, 30)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 683, in kmeans2
clusters = init(data, k)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 576, in _krandinit
return init_rankn(data)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 563, in init_rankn
mu = np.mean(data, 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2374, in mean
return mean(axis, dtype, out)
TypeError: mean() takes at most 2 arguments (4 given)
When I use kmeans instead of kmeans2, I get the following error:
Traceback (most recent call last):
File "cluster.py", line 40, in <module>
result = kmeans(matrix, 30)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 507, in kmeans
guess = take(obs, randint(0, No, k), 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 103, in take
return take(indices, axis, out, mode)
TypeError: take() takes at most 3 arguments (5 given)
I think I'm having these problems because I'm using sparse matrices, but my matrices are too big to fit in memory otherwise. Is there a way to use the standard clustering algorithms from scipy with sparse matrices, or do I have to re-implement them myself?
I created a new version of my code to work in a vector space:
el = len(experts)
pl = len(pubs)
print el, pl
from scipy.sparse import *
P = dok_matrix((pl, el), int)
p_id = 0
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        if len(auth1) < 2: continue
        id1 = e2id[auth1]
        P[p_id, id1] = 1
    p_id += 1  # note: without this increment every publication writes to row 0
from scipy.cluster.vq import kmeans, kmeans2, whiten
result = kmeans2(P, 30)
print result
But I'm still getting the error:
TypeError: mean() takes at most 2 arguments (4 given)
What am I doing wrong?
K-means cannot be run on distance matrices.
It needs a vector space to compute means in; that is why it is called k-means. If you want to use a distance matrix, you need to look into purely distance-based algorithms such as DBSCAN and OPTICS (both on Wikipedia).
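For instance, scikit-learn's DBSCAN accepts a precomputed distance matrix directly; a minimal sketch (the toy distances and eps below are placeholders, not tuned values):
import numpy as np
from sklearn.cluster import DBSCAN
# toy symmetric distance matrix for 4 items
D = np.array([[0.0, 0.1, 0.9, 0.8],
              [0.1, 0.0, 0.9, 0.8],
              [0.9, 0.9, 0.0, 0.2],
              [0.8, 0.8, 0.2, 0.0]])
labels = DBSCAN(eps=0.3, min_samples=2, metric='precomputed').fit_predict(D)
print(labels)   # e.g. [0 0 1 1]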
May I suggest "Affinity Propagation" from scikit-learn? In the work I've been doing with it, I find that it has generally been able to find the 'naturally' occurring clusters within my data set. The input to the algorithm is an affinity matrix, or similarity matrix, built from any arbitrary similarity measure.
I don't have a good handle on the kind of data you have on hand, so I can't speak to the exact suitability of this method for your data set, but it may be worth a try, perhaps?
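If you want to try it on your co-authorship counts, a minimal sketch with a precomputed similarity matrix might look like this (S here is a made-up stand-in for your matrix of shared-publication counts):
import numpy as np
from sklearn.cluster import AffinityPropagation
# made-up similarity (shared-publication counts) between 4 authors
S = np.array([[5., 3., 0., 0.],
              [3., 5., 0., 0.],
              [0., 0., 5., 4.],
              [0., 0., 4., 5.]])
ap = AffinityPropagation(affinity='precomputed').fit(S)
print(ap.labels_)   # e.g. [0 0 1 1]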
Alternatively, if you're looking to cluster graphs, I'd take a look at NetworkX. That might be a useful tool for you. The reason I suggest it is that the data you're working with looks like a network of authors; with NetworkX, you can put in an adjacency matrix and find out which authors are clustered together.
For further elaboration, you can see a question that I asked earlier for inspiration here.