I have a dataframe named "df" with 4 columns. Three columns are independent variables, x1, x2, and x3, and the fourth, y, is the dependent variable.
I would like to calculate the distance correlation between the dependent variable and each of the independent variables, so I first converted each column to a numpy array as follows:
y = df[["y"]].values
x1 = df[["x1"]].values
x2 = df[["x2"]].values
x3 = df[["x3"]].values
When I feed these arrays into this code, which I got from GitHub:
import copy

import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_correlation(Xval, Yval, pval=True, nruns=500):
    X, Y = np.atleast_1d(Xval), np.atleast_1d(Yval)
    if np.prod(X.shape) == len(X): X = X[:, None]
    if np.prod(Y.shape) == len(Y): Y = Y[:, None]
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    n = X.shape[0]
    if Y.shape[0] != X.shape[0]: raise ValueError('Number of samples must match')
    a, b = squareform(pdist(X)), squareform(pdist(Y))
    A = a - a.mean(axis=0)[None, :] - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0)[None, :] - b.mean(axis=1)[:, None] + b.mean()
    dcov2_xy = (A * B).sum() / float(n * n)
    dcov2_xx = (A * A).sum() / float(n * n)
    dcov2_yy = (B * B).sum() / float(n * n)
    dcor = np.sqrt(dcov2_xy) / np.sqrt(np.sqrt(dcov2_xx) * np.sqrt(dcov2_yy))
    if pval:
        greater = 0
        for i in range(nruns):
            Y_r = copy.copy(Yval)
            np.random.shuffle(Y_r)
            if distance_correlation(Xval, Y_r, pval=False) > dcor:
                greater += 1
        return (dcor, greater / float(nruns))
    else:
        return dcor
distance_correlation(x1, y, pval=True, nruns=500)
I get this error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-32-c720c9df4e97> in <module>
----> 1 distance_correlation(x1, y, pval=True, nruns=500)
<ipython-input-17-e0b3aea12c32> in distance_correlation(Xval, Yval, pval, nruns)
9 n = X.shape[0]
10 if Y.shape[0] != X.shape[0]:raise ValueError('Number of samples must match')
---> 11 a, b = squareform(pdist(X)),squareform(pdist(Y))
12 A = a - a.mean(axis=0)[None, :] - a.mean(axis=1)[:, None] + a.mean()
13 B = b - b.mean(axis=0)[None, :] - b.mean(axis=1)[:, None] + b.mean()
~\Anaconda3\lib\site-packages\scipy\spatial\distance.py in pdist(X, metric, *args, **kwargs)
1997 s = X.shape
1998 if len(s) != 2:
-> 1999 raise ValueError('A 2-dimensional array must be passed.')
2000
2001 m, n = s
ValueError: A 2-dimensional array must be passed.
Could anyone identify where I am going wrong? I know the error originates from the way I created my numpy arrays, but I have no clue how to fix it.
Please explain it with examples that use my variable definitions; I am new to Python.
Ok, so I finally managed to figure out the cause of the problem I faced:
The numpy arrays being fed into the helper function were 2d, with shape (n, 1), while the helper function expects a "numpy vector", i.e. a 1d numpy array of shape (n,).
(The function's own reshaping then makes things worse: X[:, None] turns a (n, 1) array into a 3d (n, 1, 1) array, which is exactly what makes pdist complain that a 2-dimensional array must be passed.)
The best way to create a 1d array is the numpy.ravel() function. Hence, for my datasets, the code would be as follows (I have broken down the steps for simplicity):
# Create Arrays
y = df[["y"]].values
x1 = df[["x1"]].values
x2 = df[["x2"]].values
x3 = df[["x3"]].values
# Ravel Them
y = y.ravel()
x1 = x1.ravel()
x2 = x2.ravel()
x3 = x3.ravel()
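To see what this changes, compare the shapes before and after raveling (assuming, for illustration, that df has 100 rows):
# y starts out 2d because of the double brackets in df[["y"]]
print(df[["y"]].values.shape)          # (100, 1)
# after ravel() it is the 1d vector the helper function expects
print(df[["y"]].values.ravel().shape)  # (100,)
Alternatively, selecting the column with single brackets, df["y"].values, gives a 1d array directly.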
I wonder, is it possible to index several dimensions at once, with some broadcasting? Example:
Suppose I have an array A, shaped (n, d). Suppose I have an indexing array, say I, with integer values between 0 and d-1. Set B = A[:, I].
If shape(I) == (k,), for whatever k, then B has shape (n, k) and B[x, y] = A[x, I[y]].
But if shape(I) == (k, p) for whatever (k, p), then I want B to be shaped (n, k, p) with B[x, y, z] = A[x, I[y, z]].
1. How can I get this behavior?
2. Does it have a drawback I did not see?
You can do it exactly as you described it:
import numpy as np
n = 100
d = 20
k = 10
p = 17
A = np.random.random((n, d))
I = np.random.randint(low=0, high=d, size=(k, p))
B = A[:, I]
print(B.shape) # (n, k, p)
# Testing if the new array B is constructed as expected
x = 3
y = 5
z = 7
print(B[x, y, z])
print(A[x, I[y, z]])
print(B[x, y, z] == A[x, I[y, z]])
It's hard to say whether this is a good implementation without context, but in general it is a good idea to use numpy and vectorization if you have speed in mind.
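For what it's worth, np.take gives the same result with the axis made explicit; a quick check reusing A, I and B from above:
B2 = np.take(A, I, axis=1)      # same as A[:, I]
print(B2.shape)                 # (100, 10, 17), i.e. (n, k, p)
print(np.array_equal(B, B2))    # True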
I have the following code. It is taking forever in Python. There must be a way to translate this calculation into a broadcast operation...
import numpy as np

def euclidean_square(a, b):
    squares = np.zeros((a.shape[0], b.shape[0]))
    for i in range(squares.shape[0]):
        for j in range(squares.shape[1]):
            diff = a[i, :] - b[j, :]
            sqr = diff**2.0
            squares[i, j] = np.sum(sqr)
    return squares
You can use np.einsum after calculating the differences in a broadcasted way, like so -
ab = a[:,None,:] - b
out = np.einsum('ijk,ijk->ij',ab,ab)
Or use scipy's cdist with its optional metric argument set as 'sqeuclidean' to give us the squared euclidean distances as needed for our problem, like so -
from scipy.spatial.distance import cdist
out = cdist(a,b,'sqeuclidean')
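Both versions compute the same matrix; a quick sanity check on small random inputs (the shapes here are arbitrary):
import numpy as np
from scipy.spatial.distance import cdist

a = np.random.rand(5, 3)
b = np.random.rand(7, 3)
ab = a[:, None, :] - b
out_einsum = np.einsum('ijk,ijk->ij', ab, ab)
print(np.allclose(out_einsum, cdist(a, b, 'sqeuclidean')))  # True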
I collected the different methods proposed here, and in two other questions, and measured their speed:
import numpy as np
import scipy.spatial
import sklearn.metrics

def dist_direct(x, y):
    d = np.expand_dims(x, -2) - y
    return np.sum(np.square(d), axis=-1)

def dist_einsum(x, y):
    d = np.expand_dims(x, -2) - y
    return np.einsum('ijk,ijk->ij', d, d)

def dist_scipy(x, y):
    return scipy.spatial.distance.cdist(x, y, "sqeuclidean")

def dist_sklearn(x, y):
    return sklearn.metrics.pairwise.pairwise_distances(x, y, "sqeuclidean")

def dist_layers(x, y):
    res = np.zeros((x.shape[0], y.shape[0]))
    for i in range(x.shape[1]):
        res += np.subtract.outer(x[:, i], y[:, i])**2
    return res

# inspired by the excellent https://github.com/droyed/eucl_dist
def dist_ext1(x, y):
    nx, p = x.shape
    x_ext = np.empty((nx, 3*p))
    x_ext[:, :p] = 1
    x_ext[:, p:2*p] = x
    x_ext[:, 2*p:] = np.square(x)
    ny = y.shape[0]
    y_ext = np.empty((3*p, ny))
    y_ext[:p] = np.square(y).T
    y_ext[p:2*p] = -2*y.T
    y_ext[2*p:] = 1
    return x_ext.dot(y_ext)

# https://stackoverflow.com/a/47877630/648741
def dist_ext2(x, y):
    return np.einsum('ij,ij->i', x, x)[:, None] + np.einsum('ij,ij->i', y, y) - 2 * x.dot(y.T)
I use timeit to compare the speed of the different methods. For the comparison, I use vectors of length 10, with 100 vectors in the first group, and 1000 vectors in the second group.
import timeit

p = 10
x = np.random.standard_normal((100, p))
y = np.random.standard_normal((1000, p))

for method in dir():
    if not method.startswith("dist_"):
        continue
    t = timeit.timeit(f"{method}(x, y)", number=1000, globals=globals())
    print(f"{method:12} {t:5.2f}ms")
On my laptop, the results are as follows:
dist_direct 5.07ms
dist_einsum 3.43ms
dist_ext1 0.20ms <-- fastest
dist_ext2 0.35ms
dist_layers 2.82ms
dist_scipy 0.60ms
dist_sklearn 0.67ms
While the two methods dist_ext1 and dist_ext2, both based on the idea of writing (x-y)**2 as x**2 - 2*x*y + y**2, are very fast, there is a downside: when the distance between x and y is very small, cancellation error can make the numerical result (very slightly) negative.
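If that matters for your application (for instance when taking a square root afterwards), a cheap guard is to clamp the result at zero; a minimal sketch:
out = dist_ext2(x, y)
out = np.maximum(out, 0)  # clip tiny negatives caused by cancellation
d = np.sqrt(out)          # now safe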
Another solution besides using cdist is the following:
difference_squared = np.zeros((a.shape[0], b.shape[0]))
for dimension_iterator in range(a.shape[1]):
    difference_squared += np.subtract.outer(a[:, dimension_iterator], b[:, dimension_iterator])**2
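It agrees with cdist; a quick check, assuming a and b are the arrays from the question:
from scipy.spatial.distance import cdist
print(np.allclose(difference_squared, cdist(a, b, 'sqeuclidean')))  # True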
I am trying to train a simple multi-class logistic regression on the MNIST data (which I downloaded from Kaggle), but the scipy.optimize functions hang.
Here's the code:
import csv
from math import exp
from numpy import *
from scipy.optimize import fmin, fmin_cg, fmin_powell, fmin_bfgs
# Prepare the data
def getIiter(ifname):
    """
    Get the iterator from a csv file with filename ifname
    """
    ifile = open(ifname, 'r')
    iiter = csv.reader(ifile)
    iiter.__next__()
    return iiter

def parseRow(s):
    y = [int(x) for x in s]
    lab = y[0]
    z = y[1:]
    return (lab, z)

def getAllRows(ifname):
    iiter = getIiter(ifname)
    x = []
    l = []
    for row in iiter:
        lab, z = parseRow(row)
        x.append(z)
        l.append(lab)
    return x, l

def cutData(x, y):
    """
    70% training
    30% testing
    """
    m = len(x)
    t = int(m * .7)
    return [(x[:t], y[:t]), (x[t:], y[t:])]

def num2IndMat(l):
    t = array(l)
    tt = [vectorize(int)((t == i)) for i in range(10)]
    return array(tt).T

def readData(ifname):
    x, l = getAllRows(ifname)
    t = [[1] + y for y in x]
    return array(t), num2IndMat(l)

# Calculate the cost function
def sigmoid(x):
    return 1 / (1 + exp(-x))

vSigmoid = vectorize(sigmoid)
vLog = vectorize(log)

def costFunction(theta, x, y):
    sigxt = vSigmoid(dot(x, theta))
    cm = (- y * vLog(sigxt) - (1 - y) * vLog(1 - sigxt)) / m / N
    return sum(cm)

def unflatten(flatTheta):
    return [flatTheta[i * N : (i + 1) * N] for i in range(n + 1)]

def costFunctionFlatTheta(flatTheta):
    return costFunction(unflatten(flatTheta), trainX, trainY)

def costFunctionFlatTheta1(flatTheta):
    return costFunction(flatTheta.reshape(785, 10), trainX, trainY)
x, y = readData('train.csv')
[(trainX, trainY), (testX, testY)] = cutData(x, y)
m = len(trainX)
n = len(trainX[0]) - 1
N = len(trainY[0])
initTheta = zeros(((n + 1), N))
flatInitTheta = ndarray.flatten(initTheta)
flatInitTheta1 = initTheta.reshape(1, -1)
In the last two lines we flatten initTheta, because the fmin{,_cg,_bfgs,_powell} functions seem to only take vectors as the initial-value argument x0. I also flattened initTheta using reshape, in the hope that this answer would be of help.
There is no problem computing the cost function which takes up less than 2 seconds on my computer:
print(costFunctionFlatTheta(flatInitTheta), costFunctionFlatTheta1(flatInitTheta1))
# 0.69314718056 0.69314718056
But all the fmin functions hang, even if I set maxiter=0.
e.g.
newFlatTheta = fmin(costFunctionFlatTheta, flatInitTheta, maxiter=0)
or
newFlatTheta1 = fmin(costFunctionFlatTheta1, flatInitTheta1, maxiter=0)
When I interrupt the program, it seems to hang at lines in optimize.py that call the cost functions, such as:
return function(*(wrapper_args + args))
For example, if I use fmin_cg, this would be line 292 in optimize.py (Version 0.5).
How do I solve this problem?
OK I found a way to stop fmin_cg from hanging.
Basically I just need to write a function that computes the gradient of the cost function, and pass it to the fprime parameter of fmin_cg.
def gradient(theta, x, y):
    return dot(x.T, vSigmoid(dot(x, theta)) - y) / m / N

def gradientFlatTheta(flatTheta):
    return ndarray.flatten(gradient(flatTheta.reshape(785, 10), trainX, trainY))
Then
newFlatTheta = fmin_cg(costFunctionFlatTheta, flatInitTheta, fprime=gradientFlatTheta, maxiter=0)
terminates within seconds, and by setting maxiter to a higher number (say 100) one can train the model in a reasonable amount of time.
The documentation of fmin_cg says the gradient is computed numerically if no fprime is given, and I suspect that is what caused the hanging: a numerical gradient needs one cost evaluation per parameter, and with 785 * 10 = 7850 parameters at up to 2 seconds per evaluation, a single gradient estimate already takes hours, so it merely looks like a hang.
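If you want to confirm the analytic gradient is correct before relying on it, scipy.optimize.check_grad compares it against a numerical estimate. Keep in mind it also performs one cost evaluation per parameter, so only run it on a cut-down problem (fewer features or samples), not on the full 7850-parameter model; a sketch under that assumption:
from scipy.optimize import check_grad
# returns the norm of (numerical - analytic) gradient; should be near zero
err = check_grad(costFunctionFlatTheta, gradientFlatTheta, flatInitTheta)
print(err)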
Thanks to this notebook by zgo2016@Kaggle, which helped me find the solution.
Let x and y be vectors of length N, and let z be a function z = f(x, y). In TensorFlow v1.0.0, tf.hessians(z, x) and tf.hessians(z, y) both return an N-by-N matrix, which is what I expected.
However, when I concatenate x and y into a vector p of size 2*N using tf.concat and run tf.hessians(z, p), it returns the error "ValueError: None values not supported."
I understand this is because in the computation graph x, y -> z and x, y -> p, so p is not on the path from x and y to z, and there is no gradient between z and p. To circumvent the problem, I could create p first and slice it into x and y, but I would have to change a ton of my code. Is there a more elegant way?
related question: Slice of a variable returns gradient None
import tensorflow as tf
import numpy as np

N = 2
A = tf.Variable(np.random.rand(N, N).astype(np.float32))
B = tf.Variable(np.random.rand(N, N).astype(np.float32))
x = tf.Variable(tf.random_normal([N]))
y = tf.Variable(tf.random_normal([N]))

# reshape to N by 1
x_1 = tf.reshape(x, [N, 1])
y_1 = tf.reshape(y, [N, 1])

# concat x and y to form a vector of length 2*N
p = tf.concat([x, y], axis=0)

# define the function
z = 0.5*tf.matmul(tf.matmul(tf.transpose(x_1), A), x_1) + 0.5*tf.matmul(tf.matmul(tf.transpose(y_1), B), y_1) + 100

# works: hx and hy are both N-by-N matrices
hx = tf.hessians(z, x)
hy = tf.hessians(z, y)

# this gives the error "ValueError: None values not supported."
# expecting a matrix of size 2*N by 2*N
hp = tf.hessians(z, p)
Compute the Hessian by its definition:
gxy = tf.gradients(z, [x, y])
gp = tf.concat([gxy[0], gxy[1]], axis=0)
hp = []
for i in range(2*N):
    hp.append(tf.gradients(gp[i], [x, y]))
Because tf.gradients sums dz/dx over the outputs, when computing the second partial derivatives one should slice the gradient vector into scalars and then compute the gradient of each, as above. Tested on tf 1.0 and Python 2.
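If you want the pieces assembled into a single 2N-by-2N tensor, each tf.gradients call above returns two length-N parts (one w.r.t. x, one w.r.t. y) that can be concatenated and stacked; a small sketch along the same lines:
rows = []
for i in range(2*N):
    grads = tf.gradients(gp[i], [x, y])    # [d gp[i]/dx, d gp[i]/dy]
    rows.append(tf.concat(grads, axis=0))  # one length-2N row
hessian = tf.stack(rows, axis=0)           # shape (2N, 2N)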
In other words, I want to make a heatmap (or surface plot) where the color varies as a function of 2 variables. (Specifically, luminance = magnitude and hue = phase.) Is there any native way to do this?
Some examples of similar plots:
Several good examples of exactly(?) what I want to do.
More examples from astronomy, but with non-perceptual hue
Edit: This is what I did with it: https://github.com/endolith/complex_colormap
imshow can take an array of [r, g, b] entries. So you can convert the absolute values to intensities and the phases to hues.
I will use complex numbers as an example, because for them it makes the most sense. If needed, you can always build them from two real arrays: Z = X + 1j * Y.
So for your data Z you can use e.g.
imshow(complex_array_to_rgb(Z))
where (EDIT: made it quicker and nicer thanks to this suggestion)
import numpy as np
import matplotlib.colors

def complex_array_to_rgb(X, theme='dark', rmax=None):
    '''Takes an array of complex numbers and converts it to an array of [r, g, b],
    where phase gives hue and saturation/value are given by the absolute value.
    Especially for use with imshow for complex plots.'''
    absmax = rmax or np.abs(X).max()
    Y = np.zeros(X.shape + (3,), dtype='float')
    Y[..., 0] = np.angle(X) / (2 * np.pi) % 1
    if theme == 'light':
        Y[..., 1] = np.clip(np.abs(X) / absmax, 0, 1)
        Y[..., 2] = 1
    elif theme == 'dark':
        Y[..., 1] = 1
        Y[..., 2] = np.clip(np.abs(X) / absmax, 0, 1)
    Y = matplotlib.colors.hsv_to_rgb(Y)
    return Y
So, for example:
import matplotlib.pyplot as plt

Z = np.array([[3*(x + 1j*y)**3 + 1/(x + 1j*y)**2
               for x in np.arange(-1, 1, 0.05)] for y in np.arange(-1, 1, 0.05)])
plt.imshow(complex_array_to_rgb(Z, rmax=5), extent=(-1, 1, -1, 1))
plt.imshow(complex_array_to_rgb(Z, rmax=5, theme='light'), extent=(-1, 1, -1, 1))
imshow will take an NxMx3 (rgb) or NxMx4 (rgba) array, so you can do your color mapping 'by hand'.
You might be able to get a bit of traction by sub-classing Normalize to map your vector to a scalar and laying out a custom color map very cleverly (but I think this will end up having to bin one of your dimensions).
I have done something like this (pdf link, see figure on page 24), but the code is in MATLAB (and buried someplace in my archives).
I agree a bi-variate color map would be useful (primarily for representing very dense vector fields where you're kind of up the creek no matter what you do).
I think the obvious extension is to let color maps take complex arguments. It would require specialized sub-classes of Normalize and Colormap, and I am going back and forth on whether it would be a lot of work to implement. I suspect if you get it working by hand it will just be a matter of API wrangling.
I created an easy-to-use 2D colormap class that takes 2 NumPy arrays and maps them to an RGB image, based on a reference image.
I used @GjjvdBurg's answer as a starting point. With a bit of work, this could still be improved, and possibly turned into a proper Python module. If you want, feel free to do so; I grant you full credit.
TL;DR:
# read reference image
cmap_2d = ColorMap2D('const_chroma.jpeg', reverse_x=True) # , xclip=(0,0.9))
# map the data x and y to the RGB space, defined by the image
rgb = cmap_2d(data_x, data_y)
# generate a colorbar image
cbar_rgb = cmap_2d.generate_cbar()
The ColorMap2D class:
import numpy as np
import matplotlib.pyplot as plt

class ColorMap2D:
    def __init__(self, filename: str, transpose=False, reverse_x=False, reverse_y=False, xclip=None, yclip=None):
        """
        Maps two 2D arrays to an RGB color space based on a given reference image.
        Args:
            filename (str): reference image to read the x-y colors from
            transpose (bool): if True, transpose the reference image (swap x and y axes)
            reverse_x (bool): if True, reverse the x scale on the reference
            reverse_y (bool): if True, reverse the y scale on the reference
            xclip (tuple): clip the image to this portion on the x scale; (0, 1) is the whole image
            yclip (tuple): clip the image to this portion on the y scale; (0, 1) is the whole image
        """
        self._colormap_file = filename
        self._img = plt.imread(self._colormap_file)
        if transpose:
            self._img = self._img.transpose()
        if reverse_x:
            self._img = self._img[::-1, :, :]
        if reverse_y:
            self._img = self._img[:, ::-1, :]
        if xclip is not None:
            imin, imax = map(lambda x: int(self._img.shape[0] * x), xclip)
            self._img = self._img[imin:imax, :, :]
        if yclip is not None:
            imin, imax = map(lambda x: int(self._img.shape[1] * x), yclip)
            self._img = self._img[:, imin:imax, :]
        if issubclass(self._img.dtype.type, np.integer):
            self._img = self._img / 255.0
        self._width = len(self._img)
        self._height = len(self._img[0])
        self._range_x = (0, 1)
        self._range_y = (0, 1)

    @staticmethod
    def _scale_to_range(u: np.ndarray, u_min: float, u_max: float) -> np.ndarray:
        return (u - u_min) / (u_max - u_min)

    def _map_to_x(self, val: np.ndarray) -> np.ndarray:
        xmin, xmax = self._range_x
        val = self._scale_to_range(val, xmin, xmax)
        rescaled = (val * (self._width - 1))
        return rescaled.astype(int)

    def _map_to_y(self, val: np.ndarray) -> np.ndarray:
        ymin, ymax = self._range_y
        val = self._scale_to_range(val, ymin, ymax)
        rescaled = (val * (self._height - 1))
        return rescaled.astype(int)

    def __call__(self, val_x, val_y):
        """
        Take val_x and val_y, and associate the RGB values
        from the reference picture to each item. val_x and val_y
        must have the same shape.
        """
        if val_x.shape != val_y.shape:
            raise ValueError(f'x and y arrays must have the same shape, but have {val_x.shape} and {val_y.shape}.')
        self._range_x = (np.amin(val_x), np.amax(val_x))
        self._range_y = (np.amin(val_y), np.amax(val_y))
        x_indices = self._map_to_x(val_x)
        y_indices = self._map_to_y(val_y)
        i_xy = np.stack((x_indices, y_indices), axis=-1)
        rgb = np.zeros((*val_x.shape, 3))
        for indices in np.ndindex(val_x.shape):
            img_indices = tuple(i_xy[indices])
            rgb[indices] = self._img[img_indices]
        return rgb

    def generate_cbar(self, nx=100, ny=100):
        "generate an image that can be used as a 2D colorbar"
        x = np.linspace(0, 1, nx)
        y = np.linspace(0, 1, ny)
        return self.__call__(*np.meshgrid(x, y))
Usage:
Full example, using the constant chroma reference taken from here as a screenshot:
# generate data
x = y = np.linspace(-2, 2, 300)
xx, yy = np.meshgrid(x, y)
ampl = np.exp(-(xx ** 2 + yy ** 2))
phase = (xx ** 2 - yy ** 2) * 6 * np.pi
data = ampl * np.exp(1j * phase)
data_x, data_y = np.abs(data), np.angle(data)
# Here is the 2D colormap part
cmap_2d = ColorMap2D('const_chroma.jpeg', reverse_x=True) # , xclip=(0,0.9))
rgb = cmap_2d(data_x, data_y)
cbar_rgb = cmap_2d.generate_cbar()
# plot the data
fig, plot_ax = plt.subplots(figsize=(8, 6))
plot_extent = (x.min(), x.max(), y.min(), y.max())
plot_ax.imshow(rgb, aspect='auto', extent=plot_extent, origin='lower')
plot_ax.set_xlabel('x')
plot_ax.set_ylabel('y')
plot_ax.set_title('data')
# create a 2D colorbar and make it fancy
plt.subplots_adjust(left=0.1, right=0.65)
bar_ax = fig.add_axes([0.68, 0.15, 0.15, 0.3])
cmap_extent = (data_x.min(), data_x.max(), data_y.min(), data_y.max())
bar_ax.imshow(cbar_rgb, extent=cmap_extent, aspect='auto', origin='lower',)
bar_ax.set_xlabel('amplitude')
bar_ax.set_ylabel('phase')
bar_ax.yaxis.tick_right()
bar_ax.yaxis.set_label_position('right')
for item in ([bar_ax.title, bar_ax.xaxis.label, bar_ax.yaxis.label] +
             bar_ax.get_xticklabels() + bar_ax.get_yticklabels()):
    item.set_fontsize(7)
plt.show()
I know this is an old post, but I want to help out others that may arrive late. Below is a Python function implementing complex_to_rgb from Sage. Note: this implementation isn't optimal, but it is readable. See links: (examples) (source code)
Code:
import numpy as np

def complex_to_rgb(z_values):
    width = z_values.shape[0]
    height = z_values.shape[1]
    rgb = np.zeros(shape=(width, height, 3))
    for i in range(width):
        row = z_values[i]
        for j in range(height):
            # define value, real(value), imag(value)
            zz = row[j]
            x = np.real(zz)
            y = np.imag(zz)
            # define magnitude and argument
            magnitude = np.hypot(x, y)
            arg = np.arctan2(y, x)
            # define lightness
            lightness = np.arctan(np.log(np.sqrt(magnitude) + 1)) * (4 / np.pi) - 1
            if lightness < 0:
                bot = 0
                top = 1 + lightness
            else:
                bot = lightness
                top = 1
            # define hue
            hue = 3 * arg / np.pi
            if hue < 0:
                hue += 6
            # set ihue and use it to define rgb values based on cases
            ihue = int(hue)
            # case 1
            if ihue == 0:
                r = top
                g = bot + hue * (top - bot)
                b = bot
            # case 2
            elif ihue == 1:
                r = bot + (2 - hue) * (top - bot)
                g = top
                b = bot
            # case 3
            elif ihue == 2:
                r = bot
                g = top
                b = bot + (hue - 2) * (top - bot)
            # case 4
            elif ihue == 3:
                r = bot
                g = bot + (4 - hue) * (top - bot)
                b = top
            # case 5
            elif ihue == 4:
                r = bot + (hue - 4) * (top - bot)
                g = bot
                b = top
            # case 6
            else:
                r = top
                g = bot
                b = bot + (6 - hue) * (top - bot)
            # set rgb array values
            rgb[i, j, 0] = r
            rgb[i, j, 1] = g
            rgb[i, j, 2] = b
    return rgb
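A minimal usage sketch (the grid and the sample function f(z) = z**2 are made up purely for illustration):
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 200)
y = np.linspace(-2, 2, 200)
X, Y = np.meshgrid(x, y)
Z = (X + 1j * Y) ** 2  # sample complex-valued function

plt.imshow(complex_to_rgb(Z), extent=(-2, 2, -2, 2), origin='lower')
plt.show()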