Reshape ndarray broadcasting values to new axis - numpy

Assume that I have an array with shape (6, 4, 512, 512) and I have another array of shape (6, 512, 512).
How could I reshape the second array so it has the same shape as the
first one?
Also, would it be possible to propagate the values of the second array across that new axis?
Edit: The function np.resize does exactly what I need.

Is this what you want?
import numpy as np

x = np.random.rand(6, 4, 512, 512)
y = np.random.rand(6, 512, 512)
y_ex = np.expand_dims(y, axis=1)
ones_y = np.ones((6, 4, 512, 512))
y_ = ones_y * y_ex  # broadcasting repeats y_ex along axis 1
For example:
x = np.random.rand(3, 2, 4, 4)
y = np.random.rand(3, 4, 4)
y_ex = np.expand_dims(y, axis=1)
ones_y = np.ones((3, 2, 4, 4))
y_ = ones_y * y_ex
This gives you y_ex of shape (3, 1, 4, 4), and then y_ of shape (3, 2, 4, 4) with the values of y_ex repeated across axis 1:
y
array([[[0.49055614, 0.28459745, 0.87471246, 0.74127825],
        [0.74965895, 0.77622936, 0.98992284, 0.32420505],
        [0.34014753, 0.84957355, 0.47974344, 0.12784663],
        [0.84201589, 0.5556073 , 0.34622819, 0.34372987]],

       [[0.75703384, 0.26535935, 0.13812319, 0.14055896],
        [0.29595331, 0.61979815, 0.14830348, 0.89501206],
        [0.29457856, 0.58359228, 0.38900858, 0.50324793],
        [0.23427909, 0.14967761, 0.79638139, 0.21718771]],

       [[0.54901808, 0.66504512, 0.93174202, 0.22874321],
        [0.43236616, 0.33947959, 0.8224133 , 0.96464956],
        [0.89242413, 0.72640099, 0.07075724, 0.18180732],
        [0.11402021, 0.47821353, 0.86334281, 0.39736966]]])
and
y_
array([[[[0.49055614, 0.28459745, 0.87471246, 0.74127825],
         [0.74965895, 0.77622936, 0.98992284, 0.32420505],
         [0.34014753, 0.84957355, 0.47974344, 0.12784663],
         [0.84201589, 0.5556073 , 0.34622819, 0.34372987]],

        [[0.49055614, 0.28459745, 0.87471246, 0.74127825],
         [0.74965895, 0.77622936, 0.98992284, 0.32420505],
         [0.34014753, 0.84957355, 0.47974344, 0.12784663],
         [0.84201589, 0.5556073 , 0.34622819, 0.34372987]]],


       [[[0.75703384, 0.26535935, 0.13812319, 0.14055896],
         [0.29595331, 0.61979815, 0.14830348, 0.89501206],
         [0.29457856, 0.58359228, 0.38900858, 0.50324793],
         [0.23427909, 0.14967761, 0.79638139, 0.21718771]],

        [[0.75703384, 0.26535935, 0.13812319, 0.14055896],
         [0.29595331, 0.61979815, 0.14830348, 0.89501206],
         [0.29457856, 0.58359228, 0.38900858, 0.50324793],
         [0.23427909, 0.14967761, 0.79638139, 0.21718771]]],


       [[[0.54901808, 0.66504512, 0.93174202, 0.22874321],
         [0.43236616, 0.33947959, 0.8224133 , 0.96464956],
         [0.89242413, 0.72640099, 0.07075724, 0.18180732],
         [0.11402021, 0.47821353, 0.86334281, 0.39736966]],

        [[0.54901808, 0.66504512, 0.93174202, 0.22874321],
         [0.43236616, 0.33947959, 0.8224133 , 0.96464956],
         [0.89242413, 0.72640099, 0.07075724, 0.18180732],
         [0.11402021, 0.47821353, 0.86334281, 0.39736966]]]])
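As an aside, NumPy can express the same propagation without allocating a ones array: np.broadcast_to returns a broadcast view. A minimal sketch, reusing the shapes from the example above:
import numpy as np

y = np.random.rand(6, 512, 512)
# Insert the new axis, then broadcast it to length 4; this returns a
# read-only view and copies no data
y_view = np.broadcast_to(y[:, None, :, :], (6, 4, 512, 512))
# Materialize a writable copy only if one is actually needed
y_full = y_view.copy()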

Related

Tensorflow, how to implement sorting layer

I'm trying to have a layer in Keras that takes a flat tensor x (with no zero values and shape = (batch_size, units)) multiplied by a mask (of the same shape), and sorts it so that masked values are placed first in the output (the order of the element values doesn't matter). For clarity, here is an example (batch_size = 1, units = 8):
It seems simple, but the problem is that I can't find a good solution. Any code or idea is appreciated.
My current code is below. If you know a more efficient way, please let me know.
class Sort(keras.layers.Layer):
    def call(self, inputs):
        x = inputs.numpy()
        nonx, nony = x.nonzero()  # idxs of nonzero elements
        zero = [np.where(x == 0)[0][0], np.where(x == 0)[1][0]]  # idx of first zero
        x_shape = tf.shape(inputs)
        result = np.zeros((x_shape[0], x_shape[1], 2), dtype='int')  # mapping matrix
        result[:, :, 0] += zero[0]
        result[:, :, 1] += zero[1]
        p = np.zeros((x_shape[0]), dtype='int')
        for i, j in zip(nonx, nony):
            result[i, p[i]] = [i, j]
            p[i] += 1
        y = tf.gather_nd(inputs, result)
        return y
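For reference, here is a minimal vectorized sketch of the same reordering using tf.argsort and tf.gather, assuming (as in the code above) that the masked-out entries are exactly zero:
import tensorflow as tf

def sort_nonzero_first(x):
    # sort key: 0 for nonzero entries, 1 for zeros; a stable sort keeps the
    # original order within each group and puts the nonzeros first
    keys = tf.cast(tf.equal(x, 0), tf.int32)
    order = tf.argsort(keys, axis=-1, stable=True)
    return tf.gather(x, order, batch_dims=1)

x = tf.constant([[3., 0., 5., 0., 1., 0., 2., 4.]])
print(sort_nonzero_first(x))  # [[3. 5. 1. 2. 4. 0. 0. 0.]]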

In Tensorflow, is there a built-in function to compute states over time given a transition matrix?

I have a system given by this recursive relationship: x_t = A_t x_{t-1} + b_t. I wish to compute x_t for all t, with A_t, b_t and x_0 given. Is there a built-in function for that? If I use a loop it would be extremely slow. Thanks!
There is sort of a way. Let's say you have your A matrices in a 3D tensor with shape (T, N, N), where T is the total number of time steps and N is the size of your vector. Similarly, B values are in a 2D tensor (T, N). The first step in the computation would be:
x1 = A[0] @ x0 + B[0]
where @ denotes the matrix product. But you can convert this into a single matrix product. Suppose we append a value 1 at the end of x0, and call the result x0p (p for "prime"):
x0p = tf.concat([x0, [1]], axis=0)
And now we build a new 3D tensor Ap with shape (T, N+1, N+1), such that for each A[i] we concatenate B[i] as a new column and then add a row with N zeros and a single one at the end. That way Ap[i] @ [x; 1] = [A[i] @ x + B[i]; 1], so the trailing 1 is preserved at every step:
AwithB = tf.concat([A, tf.expand_dims(B, 2)], axis=2)
AnewRow = tf.concat([tf.zeros((T, 1, N), A.dtype), tf.ones((T, 1, 1), A.dtype)], axis=2)
Ap = tf.concat([AwithB, AnewRow], axis=1)
As it turns out, you can now say:
x1p = Ap[0] @ x0p
And therefore:
x2p = Ap[1] @ x1p = Ap[1] @ Ap[0] @ x0p
So we just need the cumulative matrix product of the matrices in Ap along the first dimension, multiplying each new matrix on the left. Unfortunately, there does not seem to be a direct operation to compute that with TensorFlow, but you can do it relatively fast with tf.scan:
Ap_prod = tf.scan(lambda acc, a: tf.matmul(a, acc), Ap)[-1]
And with that you just have to do:
xtp = Ap_prod @ x0p
Here is a proof of concept (the code is tweaked to support single examples and batches, in either the A and B values or in x):
import tensorflow as tf

def compute_state(a, b, x):
    # Add a final 1 to x
    xp = tf.concat([x, tf.ones_like(x[..., :1])], axis=-1)
    # Add the B column to A
    a_b = tf.concat([a, tf.expand_dims(b, axis=-1)], axis=-1)
    # Make the new final row for A
    a_row = tf.concat([tf.zeros_like(a[..., :1, :]),
                       tf.ones_like(a[..., :1, :1])], axis=-1)
    # Add the new row to A
    ap = tf.concat([a_b, a_row], axis=-2)
    # Move the time axis to the front so tf.scan iterates over time steps
    # (a no-op for unbatched input, where time is already the first axis)
    rank = len(ap.shape)
    perm = [rank - 3] + [i for i in range(rank) if i != rank - 3]
    ap = tf.transpose(ap, perm)
    # Compute the matrix product reduction, applying each new step on the left
    ap_prod = tf.scan(lambda acc, m: tf.matmul(m, acc), ap)[-1]
    # Compute the final result
    outp = tf.linalg.matvec(ap_prod, xp)
    return outp[..., :-1]
# Test
tf.random.set_seed(0)
a = tf.random.uniform((10, 5, 5), -1, 1)
b = tf.random.uniform((10, 5), -1, 1)
x = tf.random.uniform((5,), -1, 1)
y = compute_state(a, b, x)
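# Sanity check: compare against the direct loop x_t = A[t] @ x_{t-1} + b[t]
x_loop = x
for t in range(10):
    x_loop = tf.linalg.matvec(a[t], x_loop) + b[t]
print(tf.reduce_max(tf.abs(y - x_loop)))  # should print a value near zero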
# Also works with batches of (a, b) or x
a = tf.random.uniform((100, 10, 5, 5), -1, 1)
b = tf.random.uniform((100, 10, 5), -1, 1)
x = tf.random.uniform((100, 5), -1, 1)
y = compute_state(a, b, x)

Pairwise distance between a set of Matrices in Keras/Tensorflow

I want to calculate the pairwise distance between a set of tensors (e.g. 4 tensors), where each one is a 2D tensor. I don't know how to do this in vectorized form, so I wrote the following pseudo-code to describe what I need:
E.shape => [4, 30, 30]
sum = 0
for i in range(4):
    for j in range(4):
        res = calculate_distance(E[i], E[j])  # E[i] is one of the 30*30 tensors
        sum = sum + reduce_sum(res)
Here is my last try:
x_ = tf.expand_dims(E, 0)
y_ = tf.expand_dims(E, 1)
s = x_ - y_
P = tf.reduce_sum(tf.norm(s, axis=[-2, -1]))
This code works, but I don't know how to do this in a batch: when E.shape is [BATCH_SIZE, 4, 30, 30] my code doesn't work and an out-of-memory error occurs. How can I do this efficiently?
Edit: After a day, I found a solution. It's not perfect, but it works:
res = tf.map_fn(lambda x: tf.map_fn(lambda y: tf.map_fn(lambda z: tf.norm(z - x), x), x), E)
res = tf.reduce_mean(tf.square(res))
Your solution with expand_dims should be okay if your batch size is not too large. However, given that your original pseudo code loops over range(4), you should probably expand axes 1 and 2, instead of 0 and 1.
You can check the shape of the tensors to ensure that you're specifying the correct axes. For example,
import numpy as np
import tensorflow as tf
from keras import backend as K

batch_size = 8
E_np = np.random.rand(batch_size, 4, 30, 30)
E = K.variable(E_np)  # shape=(8, 4, 30, 30)
x_ = K.expand_dims(E, 1)
y_ = K.expand_dims(E, 2)
s = x_ - y_  # shape=(8, 4, 4, 30, 30)
distances = tf.norm(s, axis=[-2, -1])  # shape=(8, 4, 4)
P = K.sum(distances, axis=[-2, -1])  # shape=(8,)
Now P will be the sum of pairwise distances between the 4 matrices for each of the 8 samples.
You can also verify that the values in P are the same as what would be computed by your pseudo-code:
answer = []
for batch_idx in range(batch_size):
    s = 0
    for i in range(4):
        for j in range(4):
            a = E_np[batch_idx, i]
            b = E_np[batch_idx, j]
            s += np.sqrt(np.trace(np.dot(a - b, (a - b).T)))
    answer.append(s)
print(answer)
[149.45960605637578, 147.2815068236368, 144.97487402393705, 146.04866735065312, 144.25537059201062, 148.9300986019226, 146.61229889228133, 149.34259789169045]
print(K.eval(P).tolist())
[149.4595947265625, 147.281494140625, 144.97488403320312, 146.04867553710938, 144.25537109375, 148.9300994873047, 146.6123046875, 149.34259033203125]
TensorFlow computes matrix norms via the tf.norm function. Note that ord=1 used below is the matrix 1-norm (the maximum absolute column sum); the Frobenius norm is what you get by default when axis=(-2, -1) and no ord is given.
The following solution isn't vectorized and assumes that the first dimension in E is known statically:
E = tf.random_normal(shape=[5, 3, 3], dtype=tf.float32)
F = tf.split(E, E.shape[0])
total = tf.reduce_sum([tf.norm(tensor=(lhs-rhs), ord=1, axis=(-2, -1)) for lhs in F for rhs in F])
Update:
An optimized vectorized version of the same code:
E = tf.random_normal(shape=[1024, 4, 30, 30], dtype=tf.float32)
lhs = tf.expand_dims(E, axis=1)
rhs = tf.expand_dims(E, axis=2)
total = tf.reduce_sum(tf.norm(tensor=(lhs - rhs), ord=1, axis=(-2, -1)))
Memory concerns: upon evaluating this code,
tf.contrib.memory_stats.MaxBytesInUse() reports a peak memory consumption of 73729792 bytes ≈ 74 MB, which indicates relatively moderate overhead: the raw lhs - rhs tensor (1024 × 4 × 4 × 30 × 30 float32 values) is about 59 MB. Your OOM is most likely caused by the duplication of the BATCH_SIZE dimension when you compute s = x_ - y_, because your batch size is much larger than the number of matrices (1024 vs 4).

ctc_loss error "No valid path found."

Training a model with tf.nn.ctc_loss produces an error every time the train op is run:
tensorflow/core/util/ctc/ctc_loss_calculator.cc:144] No valid path found.
Unlike in previous questions about this function, this is not due to divergence. I have a low learning rate, and the error occurs even on the first train op.
The model is a CNN -> LSTM -> CTC. Here is the model creation code:
# Build Graph
self.videoInput = tf.placeholder(shape=(None, self.maxVidLen, 50, 100, 3), dtype=tf.float32)
self.videoLengths = tf.placeholder(shape=(None), dtype=tf.int32)
self.keep_prob = tf.placeholder(dtype=tf.float32)
self.targets = tf.sparse_placeholder(tf.int32)
self.targetLengths = tf.placeholder(shape=(None), dtype=tf.int32)
conv1 = tf.layers.conv3d(self.videoInput ...)
pool1 = tf.layers.max_pooling3d(conv1 ...)
conv2 = ...
pool2 = ...
conv3 = ...
pool3 = ...
cnn_out = tf.reshape(pool3, shape=(-1, self.maxVidLen, 4*7*96))
fw_cell = tf.nn.rnn_cell.MultiRNNCell([self.cell() for _ in range(3)])
bw_cell = tf.nn.rnn_cell.MultiRNNCell([self.cell() for _ in range(3)])
outputs, _ = tf.nn.bidirectional_dynamic_rnn(
    fw_cell, bw_cell, cnn_out, sequence_length=self.videoLengths, dtype=tf.float32)
outputs = tf.concat(outputs, 2)
outputs = tf.reshape(outputs, [-1, self.hidden_size * 2])
w = tf.Variable(tf.random_normal((self.hidden_size * 2, len(self.char2index) + 1), stddev=0.2))
b = tf.Variable(tf.zeros(len(self.char2index) + 1))
out = tf.matmul(outputs, w) + b
out = tf.reshape(out, [-1, self.maxVidLen, len(self.char2index) + 1])
out = tf.transpose(out, [1, 0, 2])
cost = tf.reduce_mean(tf.nn.ctc_loss(self.targets, out, self.targetLengths))
self.train_op = tf.train.AdamOptimizer(0.0001).minimize(cost)
And here is the feed dict creation code:
indices = []
values = []
shape = [len(vids) * 2, self.maxLabelLen]
vidInput = np.zeros((len(vids) * 2, self.maxVidLen, 50, 100, 3), dtype=np.float32)
# Actual video, then left-right flip
for j in range(len(vids) * 2):
    # k is the video index
    k = j if j < len(vids) else j - len(vids)
    # convert video and label to input format (flip along the width axis)
    vidInput[j, 0:len(vids[k])] = vids[k] if k == j else vids[k][:, :, ::-1]
    indices.extend([j, i] for i in range(len(labelList[k])))
    values.extend(self.char2index[c] for c in labelList[k])
fd[self.targets] = (indices, values, shape)
fd[self.videoInput] = vidInput
# Collect video lengths and label lengths
vidLengths = [len(j) for j in vids] + [len(j) for j in vids]
labelLens = [len(l) for l in labelList] + [len(l) for l in labelList]
fd[self.videoLengths] = vidLengths
fd[self.targetLengths] = labelLens
It turns out that the ctc_loss requires that the label lengths be shorter than the input lengths. If the label lengths are too long, the loss calculator cannot unroll completely and therefore cannot compute the loss.
For example, the label BIFI would require input length of at least 4 while the label BIIF would require input length of at least 5 due to a blank being inserted between the repeated symbols.
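To make the rule concrete, here is a small sketch; min_input_length is a hypothetical helper for illustration, not part of TensorFlow:
def min_input_length(label):
    # one frame per symbol, plus one blank frame between each pair of
    # repeated adjacent symbols
    repeats = sum(a == b for a, b in zip(label, label[1:]))
    return len(label) + repeats

print(min_input_length("BIFI"))  # 4
print(min_input_length("BIIF"))  # 5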
I had the same issue, but I soon realized it was just because I was using glob and my label came from the filename, so the directory path made the label exceed the input length.
You can fix this issue by using:
os.path.join(*(filename.split(os.path.sep)[noOfDir:]))
For me the problem was fixed by setting preprocess_collapse_repeated=True.
FWIW: my target sequence length was already shorter than the inputs, and the RNN outputs were already softmax outputs.
Another possible reason, which I found in my case, is input data that is not normalized to the 0-1 range: the LSTM activation function then saturates at the beginning of training, which somehow causes the "No valid path found" log.
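For reference, a generic min-max rescale (an illustrative sketch, not code from the thread) that brings inputs into the 0-1 range:
import numpy as np

def minmax_normalize(x):
    # rescale values linearly so the minimum maps to 0 and the maximum to 1
    x = np.asarray(x, dtype=np.float32)
    return (x - x.min()) / (x.max() - x.min())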

Tensorflow indexing into 2d tensor with 1d tensor

I have a 2D tensor A with shape [batch_size, D], and a 1D tensor B with shape [batch_size]. Each element of B is a column index of A for the corresponding row, i.e. B[i] in [0, D).
What is the best way in TensorFlow to get the values A[i, B[i]]?
For example:
A = tf.constant([[0, 1, 2],
                 [3, 4, 5]])
B = tf.constant([2, 1])
with desired output:
some_slice_func(A, B) -> [2,4]
There is another constraint. In practice, batch_size is actually None.
Thanks in advance!
I was able to get it working using a linear index:
def vector_slice(A, B):
    """Returns values of rows i of A at column B[i],
    where A is a 2D Tensor with shape [None, D]
    and B is a 1D Tensor with shape [None]
    with int32 elements in [0, D).

    Example:
      A = [[1, 2],   B = [0, 1]   ->   vector_slice(A, B) -> [1, 4]
           [3, 4]]
    """
    # index of the first element of each row within the flattened A
    linear_index = (tf.shape(A)[1]
                    * tf.range(0, tf.shape(A)[0]))
    linear_A = tf.reshape(A, [-1])
    return tf.gather(linear_A, B + linear_index)
This feels slightly hacky though.
If anyone knows a better way (clearer or faster), please also leave an answer! (I won't accept my own for a while.)
Code for what @Eugene Brevdo said:
def vector_slice(A, B):
    """Returns values of rows i of A at column B[i],
    where A is a 2D Tensor with shape [None, D]
    and B is a 1D Tensor with shape [None]
    with int32 elements in [0, D).

    Example:
      A = [[1, 2],   B = [0, 1]   ->   vector_slice(A, B) -> [1, 4]
           [3, 4]]
    """
    B = tf.expand_dims(B, 1)
    # pair each row number with its column index to form [i, B[i]] indices
    rows = tf.expand_dims(tf.range(tf.shape(B)[0]), 1)
    ind = tf.concat([rows, B], 1)
    return tf.gather_nd(A, ind)
The least hacky way is probably to build a proper 2D index by concatenating range(batch_size) and B to get a batch_size x 2 matrix, and then pass this to tf.gather_nd.
The simplest approach is to do:
def tensor_slice(target_tensor, index_tensor):
    indices = tf.stack([tf.range(tf.shape(index_tensor)[0]), index_tensor], 1)
    return tf.gather_nd(target_tensor, indices)
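Applied to the tensors from the question, this returns the desired values (a quick check, assuming eager execution):
A = tf.constant([[0, 1, 2],
                 [3, 4, 5]])
B = tf.constant([2, 1])
print(tensor_slice(A, B))  # tf.Tensor([2 4], shape=(2,), dtype=int32)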
Consider using tf.one_hot, tf.math.multiply and tf.reduce_sum to solve it, e.g.:
def vector_slice(inputs, inds, axis=None):
    # default to the last axis of the index tensor
    axis = axis if axis is not None else tf.rank(inds) - 1
    # turn each index into a one-hot mask over the sliced axis
    inds = tf.one_hot(inds, inputs.shape[axis])
    # pad the mask with trailing axes until it matches the input's rank
    for i in tf.range(tf.rank(inputs) - tf.rank(inds)):
        inds = tf.expand_dims(inds, axis=-1)
    inds = tf.cast(inds, dtype=inputs.dtype)
    # select the wanted entries via multiply-and-sum
    x = tf.multiply(inputs, inds)
    return tf.reduce_sum(x, axis=axis)
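As a side note, newer TensorFlow versions can also do this row-wise lookup directly with the batch_dims argument of tf.gather; a minimal sketch:
import tensorflow as tf

A = tf.constant([[0, 1, 2],
                 [3, 4, 5]])
B = tf.constant([2, 1])
out = tf.gather(A, B, batch_dims=1)  # A[i, B[i]] for each row i -> [2, 4]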