In a problem I want to solve using TensorFlow, I want to build a rank-n tensor that is 'diagonal' by blocks. That is, I want to generate a tensor object from a concatenation of lower-order tensors.
I have tried defining the whole tf.Variable tensor and then setting some entries to 0, but TensorFlow does not allow item assignment on Variable tensors.
Moreover, I would like the 'diagonal' blocks to share the same independent variables; for example, using a stacked 2D representation with A a 2-dimensional tensor:
T = [A, 0; 0, A]
My current source code:
shape1 = [3,3,10,10]
shape2 = [3,3]
i1 = tf.truncated_normal(shape1, stddev=1.0, dtype=tf.float32)
i2 = tf.truncated_normal(shape2, stddev=1.0, dtype=tf.float32)
A = tf.Variable(i1)
V = tf.Variable(i2)
for i in range(10):
    for j in range(10):
        if i != j:
            A[:,:,i,j] = tf.zeros((3,3))
        else:
            A[:,:,i,j] = V
Of course, this code raises the error 'Variable' object does not support item assignment.
What I want, at the end of the day, is to define a variable tensor such that:
T[:,:,i,j] = tf.zeros([D0,D1]), if i != j
and
T[:,:,i,j] = A, if i == j
with A a tf.Variable of shape [D0, D1].
Thank you very much in advance!
One way would be to use tf.stack, which converts a list of tensors of dimension n to a tensor of dimension n+1.
l = []
for i in range(10):
    li = [V * 0.0 if i != j else V for j in range(10)]
    Ai = tf.stack(li)
    l.append(Ai)
A = tf.stack(l)
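Note that the stacked A above ends up indexed as A[i, j, :, :] (shape (10, 10, 3, 3)). If the T[:, :, i, j] layout from the question is needed, a broadcasting construction is another option; here is a minimal sketch, assuming the 3x3 block V and the 10 diagonal blocks from the question:
import tensorflow as tf

# Broadcast V against a 10x10 identity so that A[:, :, i, j] equals V when i == j
# and is zero otherwise; the result has shape (3, 3, 10, 10).
V = tf.Variable(tf.truncated_normal([3, 3], stddev=1.0, dtype=tf.float32))
eye = tf.eye(10, dtype=tf.float32)
A = tf.reshape(V, [3, 3, 1, 1]) * tf.reshape(eye, [1, 1, 10, 10])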
I'm trying to write a Keras layer that takes a flat tensor x (with no zero values, shape = (batch_size, units)) multiplied by a mask of the same shape, and sorts it so that the masked values are placed first in the output (the order of the element values doesn't matter). For clarity, here is an example (batch_size = 1, units = 8):
It seems simple but the problem is that I can't find a good solution. Any code or idea is appreciated.
My current code is below; if you know a more efficient way, please let me know.
import numpy as np
import tensorflow as tf
from tensorflow import keras

class Sort(keras.layers.Layer):
    def call(self, inputs):
        x = inputs.numpy()
        nonx, nony = x.nonzero()  # idxs of nonzero elements
        zero = [np.where(x == 0)[0][0], np.where(x == 0)[1][0]]  # idx of first zero
        x_shape = tf.shape(inputs)
        result = np.zeros((x_shape[0], x_shape[1], 2), dtype='int')  # mapping matrix
        result[:, :, 0] += zero[0]
        result[:, :, 1] += zero[1]
        p = np.zeros((x_shape[0]), dtype='int')
        for i, j in zip(nonx, nony):
            result[i, p[i]] = [i, j]
            p[i] += 1
        y = tf.gather_nd(inputs, result)
        return y
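For reference, one possible vectorized alternative (a sketch, not from the original post, assuming TF >= 1.14 and that inputs is the already-masked (batch_size, units) tensor): stably sort each row's indices so that nonzero entries come first, then gather. The class name SortNonzeroFirst is hypothetical.
import tensorflow as tf

class SortNonzeroFirst(tf.keras.layers.Layer):
    def call(self, inputs):
        keep = tf.cast(tf.not_equal(inputs, 0), tf.int32)  # 1 where the value survived the mask
        order = tf.argsort(keep, axis=-1, direction='DESCENDING', stable=True)  # nonzero indices first
        return tf.gather(inputs, order, batch_dims=1)  # reorder each row independently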
I am working on a multi-class classification task using my own images.
filenames = [] # a list of filenames
labels = [] # a list of labels corresponding to the filenames
full_ds = tf.data.Dataset.from_tensor_slices((filenames, labels))
This full dataset will be shuffled and split into train, valid, and test datasets:
full_ds_size = len(filenames)
full_ds = full_ds.shuffle(buffer_size=full_ds_size*2, seed=128) # seed is used for reproducibility
train_ds_size = int(0.64 * full_ds_size)
valid_ds_size = int(0.16 * full_ds_size)
train_ds = full_ds.take(train_ds_size)
remaining = full_ds.skip(train_ds_size)
valid_ds = remaining.take(valid_ds_size)
test_ds = remaining.skip(valid_ds_size)
Now I am struggling to understand how each class is distributed in train_ds, valid_ds and test_ds. An ugly solution is to iterate over all the elements in the dataset and count the occurrences of each class. Is there a better way to do this?
My ugly solution:
import collections

def get_class_distribution(dataset):
    class_distribution = {}
    for element in dataset.as_numpy_iterator():
        label = element[1]
        if label in class_distribution.keys():
            class_distribution[label] += 1
        else:
            class_distribution[label] = 1  # first occurrence of this class
    # sort dict by key
    class_distribution = collections.OrderedDict(sorted(class_distribution.items()))
    return class_distribution
train_ds_class_dist = get_class_distribution(train_ds)
valid_ds_class_dist = get_class_distribution(valid_ds)
test_ds_class_dist = get_class_distribution(test_ds)
print(train_ds_class_dist)
print(valid_ds_class_dist)
print(test_ds_class_dist)
The answer below assumes:
there are five classes.
labels are integers from 0 to 4.
It can be modified to suit your needs.
Define a counter function:
def count_class(counts, batch, num_classes=5):
    labels = batch['label']  # for the (filename, label) tuple dataset above, use batch[1]
    for i in range(num_classes):
        cc = tf.cast(labels == i, tf.int32)
        counts[i] += tf.reduce_sum(cc)
    return counts
Use the reduce operation:
initial_state = dict((i, 0) for i in range(5))
counts = train_ds.reduce(initial_state=initial_state,
                         reduce_func=count_class)
print([(k, v.numpy()) for k, v in counts.items()])
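Since the dataset in the question yields (filename, label) tuples rather than dicts, a hedged adaptation of the same counter (still assuming integer labels 0 to 4) would read the label from the second element of each pair:
def count_class_tuple(counts, batch, num_classes=5):
    _, labels = batch  # each element is a (filename, label) pair
    for i in range(num_classes):
        counts[i] += tf.reduce_sum(tf.cast(labels == i, tf.int32))
    return counts

counts = train_ds.reduce(initial_state=dict((i, 0) for i in range(5)),
                         reduce_func=count_class_tuple)
print([(k, v.numpy()) for k, v in counts.items()])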
A solution inspired by user650654's answer, using only TensorFlow primitives (tf.unique_with_counts instead of a for loop):
In theory, this should perform better and scale better to larger datasets, batch sizes, or class counts.
num_classes = 5

@tf.function
def count_class(counts, batch):
    y, _, c = tf.unique_with_counts(batch[1])
    return tf.tensor_scatter_nd_add(counts, tf.expand_dims(y, axis=1), c)

counts = train_ds.reduce(
    initial_state=tf.zeros(num_classes, tf.int32),
    reduce_func=count_class)
print(counts.numpy())
A similar and simpler version with numpy that actually had better performance for my simple use case:
count = np.zeros(num_classes, dtype=np.int32)
for _, labels in train_ds:
    y, _, c = tf.unique_with_counts(labels)
    count[y.numpy()] += c.numpy()
print(count)
In general, when we multiply a vector v of dimension 1*n by a tensor T of dimension m*n*k, we expect to get a matrix/tensor of dimension m*k (or m*1*k). This means our tensor has m slices, each a matrix of dimension n*k; v is multiplied by each matrix and the resulting vectors are stacked together. To do this multiplication in TensorFlow, I came up with the following formulation. I am just wondering if there is any built-in function that does this standard multiplication directly?
T = tf.Variable(tf.random_normal((m,n,k)), name="tensor")
v = tf.Variable(tf.random_normal((1,n)), name="vector")
c = tf.stack([v,v]) # m times, here set m=2
output = tf.matmul(c,T)
You can do it with:
tf.reduce_sum(tf.expand_dims(v,2)*T,1)
Code:
m, n, k = 2, 3, 4
T = tf.Variable(tf.random_normal((m,n,k)), name="tensor")
v = tf.Variable(tf.random_normal((1,n)), name="vector")
c = tf.stack([v,v])  # m times, here set m=2
out1 = tf.matmul(c,T)
out2 = tf.reduce_sum(tf.expand_dims(v,2)*T,1)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    n_out1 = sess.run(out1)
    n_out2 = sess.run(out2)
# n_out1 (shape (m, 1, k)) and n_out2 (shape (m, k)) contain the same values
Not sure if there is a better way, but it sounds like you could use tf.map_fn like this:
output = tf.map_fn(lambda x: tf.matmul(v, x), T)
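Another built-in option (not from the original answers, reusing v and T from the snippet above) is tf.einsum, which spells out the same contraction directly:
out3 = tf.einsum('in,mnk->mik', v, T)  # shape (m, 1, k), same values as out1 above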
I am trying to train an autoencoder NN (3 layers - 2 visible, 1 hidden) using numpy and scipy for the MNIST digit images dataset. The implementation is based on the notation given here. Below is my code:
import numpy

def autoencoder_cost_and_grad(theta, visible_size, hidden_size, lambda_, data):
    """
    The input theta is a 1-dimensional array because scipy.optimize.minimize expects
    the parameters being optimized to be a 1d array.
    First convert theta from a 1d array to the (W1, W2, b1, b2)
    matrix/vector format, so that this follows the notation convention of the
    lecture notes and tutorial.
    You must compute the:
        cost : scalar representing the overall cost J(theta)
        grad : array representing the corresponding gradient of each element of theta
    """
    training_size = data.shape[1]

    # unroll theta to get (W1, W2, b1, b2)
    W1 = theta[0:hidden_size*visible_size]
    W1 = W1.reshape(hidden_size, visible_size)
    W2 = theta[hidden_size*visible_size:2*hidden_size*visible_size]
    W2 = W2.reshape(visible_size, hidden_size)
    b1 = theta[2*hidden_size*visible_size:2*hidden_size*visible_size + hidden_size]
    b2 = theta[2*hidden_size*visible_size + hidden_size:2*hidden_size*visible_size + hidden_size + visible_size]

    # feedforward pass
    a_l1 = data
    z_l2 = W1.dot(a_l1) + numpy.tile(b1, (training_size, 1)).T
    a_l2 = sigmoid(z_l2)
    z_l3 = W2.dot(a_l2) + numpy.tile(b2, (training_size, 1)).T
    a_l3 = sigmoid(z_l3)

    # backprop
    delta_l3 = numpy.multiply(-(data - a_l3), numpy.multiply(a_l3, 1 - a_l3))
    delta_l2 = numpy.multiply(W2.T.dot(delta_l3),
                              numpy.multiply(a_l2, 1 - a_l2))
    b2_derivative = numpy.sum(delta_l3, axis=1)/training_size
    b1_derivative = numpy.sum(delta_l2, axis=1)/training_size
    W2_derivative = numpy.dot(delta_l3, a_l2.T)/training_size + lambda_*W2
    #print(W2_derivative.shape)
    W1_derivative = numpy.dot(delta_l2, a_l1.T)/training_size + lambda_*W1
    W1_derivative = W1_derivative.reshape(hidden_size*visible_size)
    W2_derivative = W2_derivative.reshape(visible_size*hidden_size)
    b1_derivative = b1_derivative.reshape(hidden_size)
    b2_derivative = b2_derivative.reshape(visible_size)
    grad = numpy.concatenate((W1_derivative, W2_derivative, b1_derivative, b2_derivative))
    cost = 0.5*numpy.sum((data - a_l3)**2)/training_size + 0.5*lambda_*(numpy.sum(W1**2) + numpy.sum(W2**2))
    return cost, grad
I have also implemented a function to estimate the numerical gradient and verify the correctness of my implementation (below).
def compute_gradient_numerical_estimate(J, theta, epsilon=0.0001):
    """
    :param J: a loss (cost) function that computes the real-valued loss given parameters and data
    :param theta: array of parameters
    :param epsilon: amount to vary each parameter in order to estimate
                    the gradient by numerical difference
    :return: array of numerical gradient estimate
    """
    gradient = numpy.zeros(theta.shape)
    eps_vector = numpy.zeros(theta.shape)
    for i in range(0, theta.size):
        eps_vector[i] = epsilon
        cost1, grad1 = J(theta + eps_vector)
        cost2, grad2 = J(theta - eps_vector)
        gradient[i] = (cost1 - cost2)/(2*epsilon)
        eps_vector[i] = 0
    return gradient
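For context, a hypothetical sanity check (names taken from the post; visible_size, hidden_size, lambda_ and data are assumed to be defined) that compares the analytic gradient against the numerical estimate before handing J to the optimizer:
J_check = lambda t: autoencoder_cost_and_grad(t, visible_size, hidden_size, lambda_, data)
cost, grad = J_check(theta)
num_grad = compute_gradient_numerical_estimate(J_check, theta)
print(numpy.linalg.norm(num_grad - grad))  # should be tiny, e.g. ~1e-9 as reported below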
The norm of the difference between the numerical estimate and the gradient computed by the function is around 6.87165125021e-09, which seems acceptable. My main problem is getting the "L-BFGS-B" optimizer to work with scipy.optimize.minimize, as below:
# theta is the 1-D array of(W1,W2,b1,b2)
J = lambda x: utils.autoencoder_cost_and_grad(theta, visible_size, hidden_size, lambda_, patches_train)
options_ = {'maxiter': 4000, 'disp': False}
result = scipy.optimize.minimize(J, theta, method='L-BFGS-B', jac=True, options=options_)
I get the below output from this:
scipy.optimize.minimize() details:
fun: 90.802022224079778
hess_inv: <16474x16474 LbfgsInvHessProduct with dtype=float64>
jac: array([ -6.83667742e-06, -2.74886002e-06, -3.23531941e-06, ...,
1.22425735e-01, 1.23425062e-01, 1.28091250e-01])
message: b'ABNORMAL_TERMINATION_IN_LNSRCH'
nfev: 21
nit: 0
status: 2
success: False
x: array([-0.06836677, -0.0274886 , -0.03235319, ..., 0. ,
0. , 0. ])
Now, this post seems to indicate that the error could mean the gradient function implementation is wrong. But my numerical gradient estimate seems to confirm that my implementation is correct. I have tried varying the initial weights using a uniform distribution as specified here, but the problem persists. Is there anything wrong with my backprop implementation?
Turns out the issue was a very silly mistake in this line:
J = lambda x: utils.autoencoder_cost_and_grad(theta, visible_size, hidden_size, lambda_, patches_train)
The lambda parameter x is never used in the function body, so the optimizer's current parameter vector was never being passed whenever J was invoked; J always evaluated at the initial theta.
This fixed it:
J = lambda x: utils.autoencoder_cost_and_grad(x, visible_size, hidden_size, lambda_, patches_train)
Here is what my code looks like:
N = 16
num_ckfs = 5
init_variances = tf.placeholder(tf.float64, shape=[ num_ckfs, N],name='inital_variances')
init_states = tf.placeholder(tf.float64, shape=[num_ckfs, N], name='init_states')
#some more code
predicted_state = prior_state_expanded + kalman_gain * diff_expanded
error_covariance = sum_cov_cholesky + tf.batch_matmul(kg , kalman_gain, adj_x=True)
projected_output = tf.batch_matmul(predicted_state,input_vectors_extra, adj_y=True)
session = tf.Session()
init_var = [10 for i in range(N)]
init_var_ckfs = [init_var for i in range(num_ckfs)]
init_state = [0 for i in range(N)]
init_state_ckfs = [init_state for i in range(num_ckfs)]
for timestep in range(10):
    out = session.run([projected_output, predicted_state, error_covariance],
                      {init_variances: init_var_ckfs, init_states: init_state_ckfs})
    # for the next timestep, I want to initialize init_state_ckfs with predicted_state,
    # and init_var_ckfs with error_covariance.
    # predicted_state is a tensor with shape (num_ckfs, 1, N)
    # error_covariance is a tensor with shape (num_ckfs, N, N): I just need the diagonal
    # elements from each of the N x N matrices
Although I have mentioned this in the code as a comment, I will mention it here again: I want to know how to use the updated tensors from the previous time step by converting them into lists and feeding them as inputs for the next time step. Can someone please help me?
Use tf.assign to store the last computed value in a tf.Variable (rather than re-feeding a placeholder); as long as the Session is active, that state is preserved between run calls.
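Alternatively (a sketch, not part of the answer above, reusing the graph, session and names from the question), the fetched numpy values can simply be fed back through the existing placeholders on the next step:
import numpy as np

for timestep in range(10):
    out = session.run([projected_output, predicted_state, error_covariance],
                      {init_variances: init_var_ckfs, init_states: init_state_ckfs})
    _, pred_state, err_cov = out
    init_state_ckfs = pred_state.reshape(num_ckfs, N)       # (num_ckfs, N)
    init_var_ckfs = np.diagonal(err_cov, axis1=1, axis2=2)  # diagonal of each N x N covariance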