Flatten out y-matrix and repeat x-vector - numpy

So I have a vector x and a matrix y where each row y[i] = [f(x[i]), f(x[i]), f(x[i]), ...] holds repeated experimental measurements of f at x[i]. I need to flatten y and repeat x so that I end up with two equal-length vectors satisfying y[j] = f(x[j]) elementwise. Here's what I'm using now:
x = np.ravel([[xx]*y.shape[1] for xx in x]); y = np.ravel(y)
Is there a cleaner/faster way?

You could just use np.repeat -
x = x.repeat(y.shape[1])
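For example, with made-up values (both calls flatten row-major, so the pairing between x and y stays aligned):
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([[10, 11, 12],
              [20, 21, 22],
              [30, 31, 32]])   # y[i] holds repeated measurements of f(x[i])

x_flat = x.repeat(y.shape[1])  # array([1., 1., 1., 2., 2., 2., 3., 3., 3.])
y_flat = y.ravel()             # array([10, 11, 12, 20, 21, 22, 30, 31, 32])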


Can Dense() and Conv2D() act as the same layer?

Let's suppose that I have an input layer with shape (h, w, f) = (1 x 1 x 256),
and let me build two sequences of layers:
case 1:
input = keras.models.Input((1, 1, 256))
x = keras.layers.Conv2D(filters=32, kernel_size=(1, 1), strides=1)(input)
x = keras.layers.ReLU()(x)
x = keras.layers.Conv2D(filters=256, kernel_size=(1, 1), strides=1)(x)
case 2:
input = keras.models.Input((1, 1, 256))
x = keras.layers.Flatten()(input)
x = keras.layers.Dense(32)(x)
x = keras.layers.ReLU()(x)
x = keras.layers.Dense(256)(x)
x = keras.layers.Reshape((1, 1, 256))(x)
In these two cases, is the output x the same?
I am building an attention module similar to (but not the same as) SE-Net.
Yes, and you do not need Flatten() and Reshape() in case 2: Dense is applied to the last axis automatically, so on a (1, 1, 256) input a 1x1 Conv2D and a Dense layer compute the same affine transformation.
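If you want to convince yourself numerically, here is a minimal sanity check (my own sketch, assuming TensorFlow 2.x / tf.keras in eager mode, not the exact code from the question):
import numpy as np
import tensorflow as tf
from tensorflow import keras

conv = keras.layers.Conv2D(filters=32, kernel_size=(1, 1), strides=1)
dense = keras.layers.Dense(32)

x = np.random.rand(4, 1, 1, 256).astype("float32")  # a batch of (1, 1, 256) inputs

y_conv = conv(x)                      # builds the conv layer, output shape (4, 1, 1, 32)
dense.build(x.shape)                  # builds the dense kernel with shape (256, 32)

# Copy the 1x1 conv kernel/bias into the dense layer so both layers
# compute the exact same affine map on the last axis.
kernel, bias = conv.get_weights()     # kernel: (1, 1, 256, 32), bias: (32,)
dense.set_weights([kernel.reshape(256, 32), bias])

y_dense = dense(x)                    # output shape (4, 1, 1, 32)
print(np.allclose(y_conv.numpy(), y_dense.numpy(), atol=1e-6))  # expected: True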

batch process of graph_cnn in tensorflow

I want to use the graph_cnn (Defferrard et al. 2016) for inputs with a varying number of nodes. The author provided example code (see graph_cnn). Below is what I think is the critical part of the code:
def chebyshev5(self, x, L, Fout, K):
    N, M, Fin = x.get_shape()
    N, M, Fin = int(N), int(M), int(Fin)
    # Rescale Laplacian and store as a TF sparse tensor. Copy to not modify the shared L.
    L = scipy.sparse.csr_matrix(L)
    L = graph.rescale_L(L, lmax=2)
    L = L.tocoo()
    indices = np.column_stack((L.row, L.col))
    L = tf.SparseTensor(indices, L.data, L.shape)
    L = tf.sparse_reorder(L)
    # Transform to Chebyshev basis
    x0 = tf.transpose(x, perm=[1, 2, 0])  # M x Fin x N
    x0 = tf.reshape(x0, [M, Fin*N])  # M x Fin*N
    x = tf.expand_dims(x0, 0)  # 1 x M x Fin*N
    def concat(x, x_):
        x_ = tf.expand_dims(x_, 0)  # 1 x M x Fin*N
        return tf.concat([x, x_], axis=0)  # K x M x Fin*N
    if K > 1:
        x1 = tf.sparse_tensor_dense_matmul(L, x0)
        x = concat(x, x1)
    for k in range(2, K):
        x2 = 2 * tf.sparse_tensor_dense_matmul(L, x1) - x0  # M x Fin*N
        x = concat(x, x2)
        x0, x1 = x1, x2
    x = tf.reshape(x, [K, M, Fin, N])  # K x M x Fin x N
    x = tf.transpose(x, perm=[3, 1, 2, 0])  # N x M x Fin x K
    x = tf.reshape(x, [N*M, Fin*K])  # N*M x Fin*K
    # Filter: Fin*Fout filters of order K, i.e. one filterbank per feature pair.
    W = self._weight_variable([Fin*K, Fout], regularization=False)
    x = tf.matmul(x, W)  # N*M x Fout
    return tf.reshape(x, [N, M, Fout])  # N x M x Fout
Essentially, I think what this does can be summarized as something like
output = concat{ L^k x : k = 0 .. K-1 } * W
where:
x is the input of shape N x M x Fin (size varies from sample to sample);
L is an array of operators on x, each of size M x M, matching the corresponding sample (size varies from sample to sample);
W is the set of neural-network parameters to be optimized, of size Fin x K x Fout;
N: the number of samples in a batch (fixed for any batch);
M: the number of nodes in the graph (varies from sample to sample);
Fin: the number of input features (fixed for any batch);
Fout: the number of output features (fixed for any batch);
K: a constant representing the number of steps (hops) in the graph.
For a single example, the above code works. But since both x and L have a variable size for each sample in a batch, I don't know how to make it work for a batch of samples.
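(For reference, here is a plain-NumPy sketch of that single-sample summary, with made-up sizes; it uses the simplified L^k x form written above rather than the actual Chebyshev recurrence in the code.)
import numpy as np

M, Fin, Fout, K = 5, 3, 4, 3
rng = np.random.default_rng(0)
x = rng.random((M, Fin))        # node features of one sample
L = rng.random((M, M))          # stand-in for the rescaled graph Laplacian
W = rng.random((Fin * K, Fout)) # filter weights

# Stack x, L@x, L@(L@x), ... along a new K axis, then apply the filter weights.
terms = [x]
for _ in range(1, K):
    terms.append(L @ terms[-1])           # shape M x Fin
stacked = np.stack(terms, axis=-1)        # M x Fin x K
out = stacked.reshape(M, Fin * K) @ W     # M x Fout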
tf.matmul currently (v1.4) only supports batch matrix multiplication on the innermost two dimensions for dense tensors; if either input tensor is sparse, it raises a dimension-mismatch error. tf.sparse_tensor_dense_matmul cannot be applied to batch inputs either.
Therefore, my current solution is to move all the L preparation steps to before the function call, pass L as a dense tensor (shape [N, M, M]), and use tf.matmul to perform the batched matrix multiplication.
Here is my revised code:
def chebyshev5_batch(x, L, Fout, K, lyr_num):
    '''
    chebyshev5_batch
    Purpose:
        perform the graph filtering on the given layer
    Args:
        x: the batch of inputs for the given layer,
           dense tensor, size: [N, M, Fin]
        L: the batch of sorted Laplacians of the given layer (tf.Tensor),
           if in dense format, size: [N, M, M]
        Fout: the number of output features on the given layer
        K: the filter size, i.e. the number of hops, on the given layer
        lyr_num: the index of the original Laplacian layer (starting from 0)
    Output:
        y: the filtered output from the given layer
    '''
    N, M, Fin = x.get_shape()
    # The Laplacian preparation steps of the original chebyshev5 (rescaling,
    # conversion to a sparse tensor, reordering) are now done before this
    # function is called, so that block is no longer needed here.

    def expand_concat(orig, new):
        new = tf.expand_dims(new, 0)  # 1 x N x M x Fin
        return tf.concat([orig, new], axis=0)  # (shape(orig)[0] + 1) x N x M x Fin

    # L:    N x M x M
    # x0:   N x M x Fin
    # L*x0: N x M x Fin
    x0 = x  # N x M x Fin
    stk_x = tf.expand_dims(x0, axis=0)  # 1 x N x M x Fin (eventually K x N x M x Fin, if K > 1)
    if K > 1:
        x1 = tf.matmul(L, x0)  # N x M x Fin
        stk_x = expand_concat(stk_x, x1)
    for kk in range(2, K):
        x2 = 2 * tf.matmul(L, x1) - x0  # N x M x Fin (Chebyshev recurrence T_k = 2 L T_{k-1} - T_{k-2})
        stk_x = expand_concat(stk_x, x2)
        x0 = x1
        x1 = x2

    # now stk_x has the shape of K x N x M x Fin;
    # transpose to N x M x Fin x K and flatten for the filter multiplication
    stk_x_transp = tf.transpose(stk_x, perm=[1, 2, 3, 0])
    stk_x_forMul = tf.reshape(stk_x_transp, [N*M, Fin*K])

    W_initial = tf.truncated_normal_initializer(0, 0.1)
    W = tf.get_variable('weights_L_' + str(lyr_num), [Fin*K, Fout], tf.float32, initializer=W_initial)
    tf.summary.histogram(W.op.name, W)

    y = tf.matmul(stk_x_forMul, W)
    y = tf.reshape(y, [N, M, Fout])
    return y
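For what it's worth, a hypothetical call with made-up shapes would look something like this (assuming the same TF 1.x graph-mode setup as the rest of the code, with L already rescaled and densified beforehand):
import numpy as np
import tensorflow as tf

N, M, Fin = 4, 10, 6  # hypothetical batch of 4 graphs with 10 nodes and 6 features each
x_batch = tf.constant(np.random.rand(N, M, Fin), dtype=tf.float32)
L_batch = tf.constant(np.random.rand(N, M, M), dtype=tf.float32)  # prepared Laplacians, shape [N, M, M]

y = chebyshev5_batch(x_batch, L_batch, Fout=32, K=5, lyr_num=0)   # shape [N, M, 32]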

How to plot 4-D data embedded in a dataframe in Julia using a subplots approach?

I have a Julia DataFrame where the first 4 columns are dimensions and the 5th one contains the actual data.
I would like to plot it using a subplots approach, where the two main plot axes span the first two dimensions and each subplot is then a contour plot over the remaining two dimensions.
I am almost there with the code below:
using DataFrames,Plots
# plotlyjs() # doesn't work with plotlyjs backend
pyplot()
X = [1,2,3,4]
Y = [0.1,0.15,0.2]
I = [2,4,6,8,10,12,14]
J = [10,20,30,40,50,60]
df = DataFrame(X=Int64[], Y=Float64[], I=Float64[], J=Float64[], V=Float64[] )
[push!(df,[x,y,i,j,(5*x+20*y+2)*(0.2*i^2+0.5*j^2+3*i*j+2*i^2*j+1)]) for x in X, y in Y, i in I, j in J]
minvalue = minimum(df[:V])
maxvalue = maximum(df[:V])
function toDict(df, dimCols, valueCol)
    toReturn = Dict()
    for r in eachrow(df)
        keyValues = []
        [push!(keyValues, r[d]) for d in dimCols]
        toReturn[(keyValues...)] = r[valueCol]
    end
    return toReturn
end
dict = toDict(df, [:X,:Y,:I,:J], :V )
M = [dict[(x,y,i,j)] for j in J, i in I, y in Y, x in X ]
yL = length(Y)
xL = length(X)
plot(contour(M[:,:,3,1], ylabel="y = $(string(Y[3]))", zlims=(minvalue,maxvalue)), contour(M[:,:,3,2]), contour(M[:,:,3,3]), contour(M[:,:,3,4]),
     contour(M[:,:,2,1], ylabel="y = $(string(Y[2]))", zlims=(minvalue,maxvalue)), contour(M[:,:,2,2]), contour(M[:,:,2,3]), contour(M[:,:,2,4]),
     contour(M[:,:,1,1], ylabel="y = $(string(Y[1]))", xlabel="x = $(string(X[1]))"), contour(M[:,:,1,2], xlabel="x = $(string(X[2]))"), contour(M[:,:,1,3], xlabel="x = $(string(X[3]))"), contour(M[:,:,1,4], xlabel="x = $(string(X[4]))"),
     layout=(yL,xL) )
This produces a grid of contour subplots. I remain, however, with the following concerns:
How do I automate the creation of each subplot in the plot() call? Do I need to write a macro?
I would like each subplot to have the same limits on the z axis, but zlims seems not to work. Is zlims not yet supported?
How do I hide the z-axis legend (the colorbar) on each subplot and plot it separately instead (ideally on the right side of the overall plot)?
EDIT:
For the first point I don't need a macro: I can create the subplots in a for loop, collect them in an array and pass the array to the plot() call using the splat operator:
plots = []
for y in length(Y):-1:1
    for x in 1:length(X)
        xlabel = y == 1 ? "x = $(string(X[x]))" : ""
        ylabel = x == 1 ? "y = $(string(Y[y]))" : ""
        println("$y - $x")
        plt = contour(I, J, M[:,:,y,x], xlabel=xlabel, ylabel=ylabel, zlims=(minvalue, maxvalue))
        push!(plots, plt)
    end
end
plot(plots..., layout=(yL, xL))

tensorflow ValueError: Shape must be rank 1 but is rank 2

import tensorflow as tf
x = [[1,2,3],[4,5,6]]
y = [0,1]
z = [1,2]
x = tf.constant(x)
y = tf.constant(y)
z = tf.constant(z)
m = x[y,z]
What I expect is m = [2, 6].
I can get this result with Theano or NumPy. How do I get it using TensorFlow?
You would want to use tf.gather_nd
slices = tf.gather_nd(x, [y, z])
Hope this helps.
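One caveat: [y, z] only happens to form the right (row, column) pairs for this particular example. In general you want to pair y[i] with z[i] explicitly by stacking the two index vectors along the last axis, for instance (in TF 1.x you would evaluate m inside a session):
import tensorflow as tf

x = tf.constant([[1, 2, 3], [4, 5, 6]])
y = tf.constant([0, 1])
z = tf.constant([1, 2])

indices = tf.stack([y, z], axis=1)   # [[0, 1], [1, 2]]: one (row, col) pair per element
m = tf.gather_nd(x, indices)         # -> [2, 6]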

Explain np.polyfit and np.polyval for a scatter plot

I have to make a scatter plot and a linear fit to my data. prediction_08.Dem_Adv and prediction_08.Dem_Win are two columns of data. I know that np.polyfit returns coefficients. But what is np.polyval doing here? I saw the documentation, but the explanation is confusing. Can someone explain it to me clearly?
plt.plot(prediction_08.Dem_Adv, prediction_08.Dem_Win, 'o')
plt.xlabel("2008 Gallup Democrat Advantage")
plt.ylabel("2008 Election Democrat Win")
fit = np.polyfit(prediction_08.Dem_Adv, prediction_08.Dem_Win, 1)
x = np.linspace(-40, 80, 10)
y = np.polyval(fit, x)
plt.plot(x, y)
print fit
np.polyval evaluates the polynomial whose coefficients you got from np.polyfit. For a linear relationship y = m*x + c, np.polyval multiplies your x values by fit[0] (the slope m) and adds fit[1] (the intercept c).
According to the docs, np.polyval(p, x) computes:
N = len(p)
y = p[0]*x**(N-1) + p[1]*x**(N-2) + ... + p[N-2]*x + p[N-1]
If the relationship is y = ax**2 + bx + c,
fit = np.polyfit(x,y,2)
a = fit[0]
b = fit[1]
c = fit[2]
If you do not want to use the polyval function:
y = a*(x**2) + b*(x) + c
This will create the same output as polyval.
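As a self-contained illustration (synthetic data standing in for prediction_08, which I don't have):
import numpy as np

rng = np.random.default_rng(0)
x_data = np.linspace(-40, 80, 50)
y_data = 2.0 * x_data + 5.0 + rng.normal(scale=3.0, size=x_data.size)  # noisy line

fit = np.polyfit(x_data, y_data, 1)   # [slope, intercept]

x = np.linspace(-40, 80, 10)
y_polyval = np.polyval(fit, x)        # evaluate the fitted polynomial at x
y_manual = fit[0] * x + fit[1]        # the same thing written out by hand

print(np.allclose(y_polyval, y_manual))  # True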