Calculating the root mean squared value and the average in GraphLab

I have a dataframe of the form cp = cars[cars['car_models'] == "chevy"].
How is it possible to get the average of cp?
I have looked everywhere for how to do this. Thanks for the help.

As @papayawarrior said, SFrames don't have a mean; SArrays (of type float/int) do.
>>> sf = gl.SFrame({"x":[1,2,3]}) # SFrame with a single column (SArray) x.
>>> sf["x"].mean() # sf["x"] grabs the SArray x, then we take its average.
2.0
If you want the root mean square error, you need two SArrays (possibly in the same SFrame); I don't know what a "root mean squared value" of a single column would be.
import graphlab as gl
cars = gl.SFrame({
    "car_models": ["chevy", "ford", "chevy"],
    "targets": [1, 2, 3],
    "predictions": [7, 9, 8]
})
cp = cars[cars["car_models"] == "chevy"]
rmse = gl.evaluation.rmse(cp["targets"], cp["predictions"])
The rmse in this example is 5.522680508593631.
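For reference, that value matches computing the RMSE definition directly (a quick check in plain Python; targets and predictions below are just the two "chevy" rows from the example):
import math
targets = [1, 3]        # the "chevy" rows' targets
predictions = [7, 8]    # and their predictions
print(math.sqrt(sum((p - t) ** 2 for t, p in zip(targets, predictions)) / len(targets)))
# 5.522680508593631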

Related

Explicit slicing across a particular dimension

I've got a 3D tensor x (e.g. 4x4x100). I want to obtain a subset of this by explicitly choosing elements across the last dimension. This would have been easy if I were choosing the same elements across the last dimension (e.g. x[:,:,30:50]), but I want to target different elements across that dimension using a 2D tensor indices, which specifies the starting index along the third dimension. Is there an easy way to do this in numpy?
A simpler 2D example:
x = [[1,2,3,4,5,6],[10,20,30,40,50,60]]
indices = [1,3]
Let's say I want to grab two elements along the last dimension of x, starting from the points specified by indices. So my desired output is:
[[2,3],[40,50]]
Update: I think I could use a combination of take() and ravel_multi_index(), but some of the platforms inspired by numpy (like PyTorch) don't seem to have ravel_multi_index, so I'm looking for alternative solutions.
Iterating over idx and collecting the slices is not a bad option if the number of 'rows' isn't too large (and the size of the slices is relatively big).
In [55]: x = np.array([[1,2,3,4,5,6],[10,20,30,40,50,60]])
In [56]: idx = [1,3]
In [57]: np.array([x[j,i:i+2] for j,i in enumerate(idx)])
Out[57]:
array([[ 2,  3],
       [40, 50]])
Joining the slices like this only works if they all are the same size.
An alternative is to collect the indices into an array, and do one indexing.
For example with a similar iteration:
idxs = np.array([np.arange(i,i+2) for i in idx])
But broadcasted addition may be better:
In [58]: idxs = np.array(idx)[:,None]+np.arange(2)
In [59]: idxs
Out[59]:
array([[1, 2],
       [3, 4]])
In [60]: x[np.arange(2)[:,None], idxs]
Out[60]:
array([[ 2,  3],
       [40, 50]])
ravel_multi_index is not hard to replicate (if you don't need clipping etc):
In [65]: np.ravel_multi_index((np.arange(2)[:,None],idxs),x.shape)
Out[65]:
array([[ 1,  2],
       [ 9, 10]])
In [66]: x.flat[_]
Out[66]:
array([[ 2,  3],
       [40, 50]])
In [67]: np.arange(2)[:,None]*x.shape[1]+idxs
Out[67]:
array([[ 1,  2],
       [ 9, 10]])
In PyTorch, to slice along the third axis:
x = [x[:, i].narrow(1, index, 2) for i, index in enumerate(indices)]  # x[:, i] drops the second axis, so the original third axis is now dim 1
x = torch.stack(x, dim=1)
By enumerating indices you get, in one go, the position along the second axis and the index from which you want to start slicing.
narrow gives you a zero-copy slice of length length, starting at index start, along a given axis.
You said you wanted:
dim = 2 (the third axis of the original x; since x[:, i] drops the second axis, this is dim 1 of each slice, hence narrow(1, index, 2))
start = index
length = 2
Then you simply have to stack these tensors back into a single 3D tensor.
This is the least work-intensive thing I can think of for PyTorch.
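As a quick illustration of narrow's semantics (a minimal 2D sketch, independent of the question's shapes):
import torch
t = torch.arange(10).reshape(2, 5)  # [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
print(t.narrow(1, 1, 3))            # length-3 slice starting at index 1 along dim 1
# tensor([[1, 2, 3],
#         [6, 7, 8]])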
EDIT
If you just want different indices along different axes and indices is a 2D tensor, you can do:
x = [x[:,i,index] for i,index in enumerate(indices)]
x = torch.stack(x,dim=1)
You really should have given a proper working example; without one, the question is unnecessarily confusing.
Here is how to do it in numpy, no clue about torch, though.
The following picks a slice of length n along the third dimension, starting from points idx that depend on the other two dimensions:
import numpy as np

# example
a = np.arange(60).reshape(2, 3, 10)
idx = [(1,2,3),(4,3,2)]
n = 4
# build auxiliary 4D array where the last two dimensions represent
# a sliding n-window of the original last dimension
j,k,l = a.shape
s,t,u = a.strides
aux = np.lib.stride_tricks.as_strided(a, (j,k,l-n+1,n), (s,t,u,u))
# pick desired offsets from sliding windows
aux[(*np.ogrid[:j, :k], idx)]
# array([[[ 1,  2,  3,  4],
#         [12, 13, 14, 15],
#         [23, 24, 25, 26]],
#
#        [[34, 35, 36, 37],
#         [43, 44, 45, 46],
#         [52, 53, 54, 55]]])
I came up with the following using broadcasting:
import numpy as np

x = np.array([[1,2,3,4,5,6,7,8,9,10],[10,20,30,40,50,60,70,80,90,100]])
i = np.array([1,5])
N = 2  # number of elements to extract along the last dimension, starting at the points in i
r = np.arange(x.shape[-1])
r = np.broadcast_to(r, x.shape)
ii = i[:, np.newaxis]
ii = np.broadcast_to(ii, x.shape)
mask = np.logical_and(r - ii >= 0, r - ii < N)
output = x[mask].reshape(2, N)
Does this look alright?

Problem understanding Principal Component Analysis code

Can anyone please explain this line of code to me? It is near the end of the script below, marked with a comment:
P = vectors.T.dot(C.T)
I have searched the online documentation but found nothing.
from numpy import array
from numpy import mean
from numpy import cov
from numpy.linalg import eig
# define a matrix
A = array([[1, 2], [3, 4], [5, 6]])
print(A)
# calculate the mean of each column
M = mean(A.T, axis=1)
print(M)
# center columns by subtracting column means
C = A - M
print(C)
# calculate covariance matrix of centered matrix
V = cov(C.T)
print(V)
# eigendecomposition of covariance matrix
values, vectors = eig(V)
print(vectors)
print(values)
# project data
P = vectors.T.dot(C.T) # Explain me this line
print(P.T)
vectors.T.dot(C.T) is the dot product (matrix product) of the transposed array vectors with the transposed array C. Each row of vectors.T is an eigenvector of the covariance matrix and each column of C.T is a centered sample, so the product contains the coordinates of every sample along each principal component, i.e. the data projected onto the eigenvector basis.
The dot product operation and projections are related in that the dot product gives the length of a vector's projection along a direction (the other vector) when that direction is a unit vector, and eig returns unit-length eigenvectors.
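As a small check of that interpretation (a sketch reusing the A from the question; np.allclose just confirms the equivalence):
import numpy as np
A = np.array([[1, 2], [3, 4], [5, 6]])
C = A - A.mean(axis=0)                    # center the columns
values, vectors = np.linalg.eig(np.cov(C.T))
P = vectors.T.dot(C.T)                    # the line in question
print(np.allclose(P.T, C.dot(vectors)))   # True: P.T is the data expressed in the eigenvector basis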
As your question is rather vague, I'll let you comment on this answer and adapt it if necessary.

What is the use of the reduce command in TensorFlow?

tensorflow.reduce_sum(..) computes the sum of elements across dimensions of a tensor. That much is clear.
But one thing is not clear to me: what is the purpose of saying reduce in the function name?
Is it related to the map-reduce of parallel computation? That is, does it distribute the required computation to different cores, collect the results from the cores, and finally deliver the sum of the collected results?
Because you can compute the sum along a given dimension (and thereby reduce that dimension away). And no, it has nothing to do with map-reduce.
Quoting the documentation string of the method:
Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.
Example from the API:
x = tf.constant([[1, 1, 1], [1, 1, 1]])
tf.reduce_sum(x) # 6
tf.reduce_sum(x, 0) # [2, 2, 2]
tf.reduce_sum(x, 1) # [3, 3]
tf.reduce_sum(x, 1, keepdims=True) # [[3], [3]]
tf.reduce_sum(x, [0, 1]) # 6
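For comparison, numpy's sum has the same reduction semantics, which is all the name refers to (a quick sketch, not taken from the TensorFlow docs):
import numpy as np
x = np.array([[1, 1, 1], [1, 1, 1]])       # rank 2, shape (2, 3)
print(x.sum(axis=0).shape)                 # (3,)   one axis reduced away
print(x.sum(axis=1, keepdims=True).shape)  # (2, 1) reduced axis kept with length 1
print(x.sum())                             # 6      all axes reduced, rank 0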

Append a "layer" to 3D-Array with Numpy

I have a numpy array with dimensions 12 x 12 x 4. Now I'm trying to add an extra layer to this cube, resulting in a 12 x 13 x 4 array. This 13th layer should contain the corresponding indices from the first axis, so, for example, addressing [7, 12, :] (the new layer) results in [7, 7, 7, 7].
It is hard to explain, but maybe someone has advice on how to achieve this with numpy?
EDIT:
I've found a solution, though it seems a little overcomplicated:
# Generate extra layer
layer = np.repeat(np.arange(0, 12)[:, np.newaxis], data.shape[2], axis=1)
# Get dimensions right...
layer = np.expand_dims(layer, axis=1)
# ... and finally append to data
result = np.append(data, layer, axis=1)
Still open for better suggestions.
You have the right idea. A slight simplification:
layer = np.repeat(np.arange(data.shape[0])[:, None, None], data.shape[2], axis=2)
result = np.concatenate((data, layer), axis=1)
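A quick check of the result (assuming data is the 12 x 12 x 4 array from the question; zeros are used as placeholder data just to verify shapes):
import numpy as np
data = np.zeros((12, 12, 4))
layer = np.repeat(np.arange(data.shape[0])[:, None, None], data.shape[2], axis=2)
result = np.concatenate((data, layer), axis=1)
print(result.shape)      # (12, 13, 4)
print(result[7, 12, :])  # [7. 7. 7. 7.] -- the appended index layer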

Multiply certain columns of a 2D tensor by a scalar

Is there a way, using tf functions, to multiply certain columns of a 2D tensor by a scalar?
e.g. multiplying the second and third columns of a matrix by 2:
[[2,3,4,5],[4,3,4,3]] -> [[2,6,8,5],[4,6,8,3]]
Thanks for any help.
EDIT:
Thank you Psidom for the reply. Unfortunately I am not using a tf.Variable, so it seems I have to use tf.slice.
What I am trying to do is to multiply all components of a single-sided PSD by 2, except for the DC component and the Nyquist frequency component, to conserve the total power when going from a double-sided spectrum to a single-sided spectrum.
This would correspond to 2*PSD[:,1:-1] if it were a numpy array.
Here is my attempt with tf.assign and tf.slice:
x['PSD'] = tf.assign(tf.slice(x['PSD'], [0, 1], [tf.shape(x['PSD'])[0], tf.shape(x['PSD'])[1] - 2]),
                     tf.scalar_mul(2, tf.slice(x['PSD'], [0, 1], [tf.shape(x['PSD'])[0], tf.shape(x['PSD'])[1] - 2])))  # single-sided power spectral density
However:
AttributeError: 'Tensor' object has no attribute 'assign'
If the tensor is a variable, you can do this by slicing the columns you want to update and then using tf.assign:
x = tf.Variable([[2,3,4,5],[4,3,4,3]])
# Update the second and third columns and assign the new tensor to x.
x = tf.assign(x[:,1:3], x[:,1:3]*2)

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(sess.run(x))
#[[2 6 8 5]
# [4 6 8 3]]
Ended up taking 3 different slices and concatenating them together, with the middle slice multiplied by 2. Probably not the most efficient way, but it works:
x['PSD'] = tf.concat([tf.slice(x['PSD'], [0, 0], [tf.shape(x['PSD'])[0], 1]),
                      tf.scalar_mul(2, tf.slice(x['PSD'], [0, 1], [tf.shape(x['PSD'])[0], tf.shape(x['PSD'])[1] - 2])),
                      tf.slice(x['PSD'], [0, tf.shape(x['PSD'])[1] - 1], [tf.shape(x['PSD'])[0], 1])], 1)  # single-sided power spectral density
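If no assignment is needed, another option that should work on a plain tensor (a sketch, assuming a float PSD of shape [batch, n_bins]) is to build a per-column scale vector and rely on broadcasting:
import tensorflow as tf
x = tf.constant([[2., 3., 4., 5.],
                 [4., 3., 4., 3.]])
n = tf.shape(x)[1]
# 1 for the first and last columns (the DC and Nyquist bins), 2 for everything in between.
scale = tf.concat([tf.ones([1]), 2.0 * tf.ones([n - 2]), tf.ones([1])], axis=0)
y = x * scale  # the row of scale factors broadcasts over every row of x
with tf.Session() as sess:
    print(sess.run(y))
# [[2. 6. 8. 5.]
#  [4. 6. 8. 3.]]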