I get the following error:
ValueError: Cannot feed value of shape (1, 251, 5) for Tensor u'vector_rnn_1/Placeholder_1:0', which has shape '(1, 117, 5)'
when running code from here
https://github.com/tensorflow/magenta-demos/blob/master/jupyter-notebooks/Sketch_RNN.ipynb
The error occurs in this method:
def encode(input_strokes):
    strokes = to_big_strokes(input_strokes).tolist()
    strokes.insert(0, [0, 0, 1, 0, 0])
    seq_len = [len(input_strokes)]
    draw_strokes(to_normal_strokes(np.array(strokes)))
    return sess.run(eval_model.batch_z,
                    feed_dict={eval_model.input_data: [strokes],
                               eval_model.sequence_lengths: seq_len})[0]
I should mention that I trained my own model following the instructions here:
https://github.com/tensorflow/magenta/tree/master/magenta/models/sketch_rnn
Can someone help me understand and solve this issue?
Thanks
In my case, the problem is caused by the to_big_strokes() function. If you do not modify to_big_strokes() in sketch_rnn/utils.py, it will by default pad the input_strokes sequence to a length of 250.
All you need to do is modify the max_len parameter in that function: change it to the maximum sequence length of your own dataset (21 for me), as in the line marked "change" below.
def to_big_strokes(stroke, max_len=21):  # change: 250 -> 21
    """Converts from stroke-3 to stroke-5 format and pads to given length."""
    # (But does not insert special start token).
    result = np.zeros((max_len, 5), dtype=float)
    l = len(stroke)
    assert l <= max_len
    result[0:l, 0:2] = stroke[:, 0:2]
    result[0:l, 3] = stroke[:, 2]
    result[0:l, 2] = 1 - result[0:l, 3]
    result[l:, 4] = 1
    return result
The problem was that the size of the strokes array did not match the size expected by the model, so adapting the strokes array fixed the issue.
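Alternatively, instead of editing utils.py, you can pass max_len at the call site. A minimal sketch, assuming eval_model exposes its hyperparameters as eval_model.hps (as the notebook's model objects do):
def encode(input_strokes):
    # pad to the length the input_data placeholder actually expects,
    # instead of the hard-coded default of 250
    strokes = to_big_strokes(input_strokes,
                             max_len=eval_model.hps.max_seq_len).tolist()
    strokes.insert(0, [0, 0, 1, 0, 0])
    seq_len = [len(input_strokes)]
    draw_strokes(to_normal_strokes(np.array(strokes)))
    return sess.run(eval_model.batch_z,
                    feed_dict={eval_model.input_data: [strokes],
                               eval_model.sequence_lengths: seq_len})[0]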
dataset = slim.dataset.Dataset(...)
provider = slim.dataset_data_provider.DatasetDataProvider(dataset, ...)
image, labels = provider.get(['image', 'label'])
Let's say that, for an example in dataset A, the labels could be [1, 2, 1, 3]. However, for some reason (e.g., due to dataset B), I would like to map the label IDs to other values. The mapping could look like the one below.
# {old_label: target_label}
mapping = {0: 0, 1: 2, 2: 2, 3: 2, 4: 2, 5: 3, 6: 1}
For now, I can think of two approaches:
-- tf.data.Dataset seems to have a map(map_func) function that every example passes through, which could be the solution. However, I am more familiar with slim.dataset.Dataset. Is there a similar trick for slim.dataset.Dataset?
-- I was wondering if I could simply apply a mapping function to the label tensor, such as:
new_labels = tf.map_fn(lambda x: x+1, labels, dtype=tf.int32)
# labels = [1 2 1 3] --> new_labels = [2 3 2 4]. This works.
new_labels = tf.map_fn(lambda x: mapping[x], labels, dtype=tf.int32)
# I wished but this does not work!
However, the second attempt above does not work, and it is exactly what I need. Could anyone please advise?
I think you can try tf.contrib.lookup:
keys = list(mapping.keys())
values = [mapping[k] for k in keys]
table = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.KeyValueTensorInitializer(keys, values, key_dtype=tf.int64,
                                                value_dtype=tf.int64), -1)
new_labels = table.lookup(labels)
sess = tf.Session()
sess.run(table.init)
print(sess.run(new_labels))
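On newer TensorFlow releases, where tf.contrib has been removed, the same idea should carry over to tf.lookup.StaticHashTable. A sketch, assuming TF 2.x with eager execution:
import tensorflow as tf

mapping = {0: 0, 1: 2, 2: 2, 3: 2, 4: 2, 5: 3, 6: 1}
keys = tf.constant(list(mapping.keys()), dtype=tf.int64)
values = tf.constant(list(mapping.values()), dtype=tf.int64)
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1)

labels = tf.constant([1, 2, 1, 3], dtype=tf.int64)
print(table.lookup(labels))  # -> [2 2 2 2]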
I want to design the following function for expanding any 1D/2D/3D matrix to a 4D matrix.
import tensorflow as tf
def inputs_2_4D(inputs):
    _ranks = tf.rank(inputs)
    return tf.case({tf.equal(_ranks, 3): lambda: tf.expand_dims(inputs, 3),
                    tf.equal(_ranks, 2): lambda: tf.expand_dims(tf.expand_dims(inputs, 0), 3),
                    tf.equal(_ranks, 1): lambda: tf.expand_dims(tf.expand_dims(tf.expand_dims(inputs, 0), 0), 3)},
                   default=lambda: tf.identity(inputs))

def run():
    with tf.Session() as sess:
        mat_1d = tf.constant([1, 1])
        mat_2d = tf.constant([[1, 1]])
        mat_3d = tf.constant([[[1, 1]]])
        mat_4d = tf.constant([[[[1, 1]]]])
        result = inputs_2_4D(mat_1d)
        print(result.eval())
The function, however, does not run correctly. It only outputs a 4D matrix when mat_3d or mat_4d is passed in; an error is raised if a 1D or 2D matrix is passed to the function.
When mat_3d or mat_4d is passed into inputs_2_4D(), it is expanded to a 4D matrix (or returned unchanged):
mat_3d -----> [[[[1]
[1]]]]
mat_4d -----> [[[[1 1]]]]
When mat_1d or mat_2d is passed into inputs_2_4D(), this error is raised:
ValueError: dim 3 not in the interval [-2, 1]. for 'case/cond/ExpandDims' (op: 'ExpandDims') with input shapes: [2], [] and with computed input tensors: input[1] = <3>.
I tested another, similar function before; that one runs correctly.
import tensorflow as tf
def test_2_4D(inputs):
    _ranks = tf.rank(inputs)
    return tf.case({tf.equal(_ranks, 3): lambda: tf.constant(3),
                    tf.equal(_ranks, 2): lambda: tf.constant(2),
                    tf.equal(_ranks, 1): lambda: tf.constant(1)},
                   default=lambda: tf.identity(inputs))

def run():
    with tf.Session() as sess:
        mat_1d = tf.constant([1, 1])
        mat_2d = tf.constant([[1, 1]])
        mat_3d = tf.constant([[[1, 1]]])
        mat_4d = tf.constant([[[[1, 1]]]])
        result = test_2_4D(mat_3d)
        print(result.eval())
This function correctly outputs the corresponding result for all of the matrices:
test_2_4D() RESULTS:
mat_1d -----> 1
mat_2d -----> 2
mat_3d -----> 3
mat_4d -----> [[[[1 1]]]]
I don't know why the correct branch in inputs_2_4D() cannot be selected, even though the tf.equal() for each branch is evaluated. It seems that the 1st and 2nd branches are still evaluated even when the input matrix is mat_1d or mat_2d, so the program crashes. Please help me analyze this problem!
I think I worked out what the problem is here. Turns out all condition/function pairs are evaluated. This can be revealed by giving the ops different names. The problem is that if your input is, say, rank 2, Tensorflow seems to still evaluate tf.equal(_ranks, 3): lambda: tf.expand_dims(inputs, 3). This leads to a crash because it cannot expand dim 3 for a rank-2 tensor (the maximum allowed value is 2).
This actually makes sense since with tf.case you're basically saying "I don't know which of these cases is going to be true at runtime, so check which one is appropriate and execute the corresponding function". However this means that Tensorflow needs to prepare execution paths for all possible cases, which in this case leads to invalid computations (trying to expand invalid dimensions).
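A small sketch to make this concrete (TF1-style graph mode assumed; the placeholder is hypothetical): naming the branch ops and listing the graph's operations shows both branches being constructed, even though only one of them can ever run.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])  # rank 1, known at graph time
out = tf.case(
    {tf.equal(tf.rank(x), 1): lambda: tf.expand_dims(x, 0, name="branch_1d")},
    default=lambda: tf.identity(x, name="branch_default"))

# both "branch_1d" and "branch_default" appear in the op list
print([op.name for op in tf.get_default_graph().get_operations()
       if "branch" in op.name])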
At this point it would be nice to know a little more about your problem, i.e. why exactly you need that function. If you have different inputs and you simply want to bring them all to 4D, but each input always has the same dimensionality, consider simply using Python if-statements. Example:
inputs3d = tf.constant([[[1,1]]])  # this is always 3D
inputs2d = tf.constant([[1,1]])    # this is always 2D
...
def inputs_2_4D(inputs):
    _rank = len(inputs.shape.as_list())
    if _rank == 3:
        return tf.expand_dims(inputs, 3)
    elif _rank == 2:
        return tf.expand_dims(tf.expand_dims(inputs, 0), 3)
    ...
This will check the input rank while the graph is being built (not at runtime like tf.case) and really only prepare those expand_dims ops that are appropriate for the given input.
However if you have a single inputs tensor and this could have different ranks at different times of your program this would require a different solution. Please let us know which problem you're trying to solve!
I have implemented the functionality I want in two ways; I share my code below.
The 1st method, based on tf.cond:
def inputs_2_4D(inputs):
    _rank1d = tf.rank(inputs)
    def _1d_2_2d(): return tf.expand_dims(inputs, 0)
    def _greater_than_1d(): return tf.identity(inputs)
    _tmp_2d = tf.cond(_rank1d < 2, _1d_2_2d, _greater_than_1d)

    _rank2d = tf.rank(_tmp_2d)
    def _2d_2_3d(): return tf.expand_dims(_tmp_2d, 0)
    def _greater_than_2d(): return tf.identity(_tmp_2d)
    _tmp_3d = tf.cond(_rank2d < 3, _2d_2_3d, _greater_than_2d)

    _rank3d = tf.rank(_tmp_3d)
    def _3d_2_4d(): return tf.expand_dims(_tmp_3d, 3)
    def _greater_than_3d(): return tf.identity(_tmp_3d)
    return tf.cond(_rank3d < 4, _3d_2_4d, _greater_than_3d)
The 2nd method, based on tf.case combined with tf.cond:
def inputs_2_4D_1(inputs):
    _rank = tf.rank(inputs)
    def _assign_original(): return tf.identity(inputs)
    def _dummy(): return tf.expand_dims(inputs, 0)

    _1d = tf.cond(tf.equal(_rank, 1), _assign_original, _dummy)
    _2d = tf.cond(tf.equal(_rank, 2), _assign_original, _dummy)
    _3d = tf.cond(tf.equal(_rank, 3), _assign_original, _dummy)

    def _1d_2_4d(): return tf.expand_dims(tf.expand_dims(tf.expand_dims(_1d, 0), 0), 3)
    def _2d_2_4d(): return tf.expand_dims(tf.expand_dims(_2d, 0), 3)
    def _3d_2_4d(): return tf.expand_dims(_3d, 3)

    return tf.case({tf.equal(_rank, 1): _1d_2_4d,
                    tf.equal(_rank, 2): _2d_2_4d,
                    tf.equal(_rank, 3): _3d_2_4d},
                   default=_assign_original)
I think the 2nd method should be less efficient than the 1st, because _dummy() always wastes two operations when assigning inputs to _1d, _2d and _3d respectively.
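For comparison, here is an alternative sketch of my own (not from the code above) that avoids conditionals entirely, assuming the input rank is at most 3: left-pad the shape with 1s up to rank 3, then append a trailing 1, and reshape. This reproduces the same expansions (1D [n] -> [1, 1, n, 1], 2D [m, n] -> [1, m, n, 1], 3D [a, m, n] -> [a, m, n, 1]).
def inputs_2_4D_reshape(inputs):
    shape = tf.shape(inputs)
    rank = tf.rank(inputs)
    # e.g. rank 1: concat([1, 1], [n], [1]) -> [1, 1, n, 1]
    new_shape = tf.concat(
        [tf.ones([3 - rank], dtype=tf.int32), shape, [1]], axis=0)
    return tf.reshape(inputs, new_shape)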
I would like to pad my labels so that they are of equal length to be passed into the ctc_loss function. Apparently, -1 is not allowed. If I apply padding, should the padding value be part of the labels for CTC?
Update
I have this code that converts dense labels into sparse ones to be passed to the ctc_loss function, which I think is related to the problem.
def dense_to_sparse(dense_tensor, out_type):
    indices = tf.where(tf.not_equal(dense_tensor, tf.constant(0, dense_tensor.dtype)))
    values = tf.gather_nd(dense_tensor, indices)
    shape = tf.shape(dense_tensor, out_type=out_type)
    return tf.SparseTensor(indices, values, shape)
Actually, -1 values are allowed in the y_true argument of ctc_batch_cost, with one limitation: they must not appear within the actual label "content", which is specified by label_length (here the i-th label's "content" starts at index 0 and ends at index label_length[i]).
So it is perfectly fine to pad labels with -1 so that they would be of equal length, as you intended. The only thing you should take care about is to correctly calculate and pass corresponding label_length values.
Here is the sample code which is a modified version of the test_ctc unit test from keras:
import numpy as np
from tensorflow.keras import backend as K
number_of_categories = 4
number_of_timesteps = 5
labels = np.asarray([[0, 1, 2, 1, 0], [0, 1, 1, 0, -1]])
label_lens = np.expand_dims(np.asarray([5, 4]), 1)
# dimensions are batch x time x categories
inputs = np.zeros((2, number_of_timesteps, number_of_categories), dtype=np.float32)
input_lens = np.expand_dims(np.asarray([5, 5]), 1)
k_labels = K.variable(labels, dtype="int32")
k_inputs = K.variable(inputs, dtype="float32")
k_input_lens = K.variable(input_lens, dtype="int32")
k_label_lens = K.variable(label_lens, dtype="int32")
res = K.eval(K.ctc_batch_cost(k_labels, k_inputs, k_input_lens, k_label_lens))
It runs perfectly fine even with -1 as the last element of the (second) labels sequence, because the corresponding (second) label_lens item specifies that its length is 4.
If we change it to 5, or if we change some other label value to -1, then we get the All labels must be nonnegative integers exception that you mentioned. But this just means that our label_lens is invalid.
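As a side note, a small sketch of my own (not part of the keras test): since the padding value is -1, label_lens can be derived from labels instead of being written by hand.
# counts the non-padding entries per row; for the labels above: [[5], [4]]
label_lens = np.expand_dims((labels != -1).sum(axis=1), 1)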
Here's how I do it. I have a dense tensor labels that includes padding with -1, so that all targets in a batch have the same length. Then I use
labels_sparse = dense_to_sparse(labels, sparse_val=-1)
where
def dense_to_sparse(dense_tensor, sparse_val=0):
    """Inverse of tf.sparse_to_dense.

    Parameters:
        dense_tensor: The dense tensor. Duh.
        sparse_val: The value to "ignore": Occurrences of this value in the
                    dense tensor will not be represented in the sparse tensor.
                    NOTE: When/if later restoring this to a dense tensor, you
                    will probably want to choose this as the default value.

    Returns:
        SparseTensor equivalent to the dense input.
    """
    with tf.name_scope("dense_to_sparse"):
        sparse_inds = tf.where(tf.not_equal(dense_tensor, sparse_val),
                               name="sparse_inds")
        sparse_vals = tf.gather_nd(dense_tensor, sparse_inds,
                                   name="sparse_vals")
        dense_shape = tf.shape(dense_tensor, name="dense_shape",
                               out_type=tf.int64)
        return tf.SparseTensor(sparse_inds, sparse_vals, dense_shape)
This creates a sparse tensor of the labels, which is what you need to pass to the CTC loss; that is, you call tf.nn.ctc_loss(labels=labels_sparse, ...). The padding (i.e. all values equal to -1 in the dense tensor) is simply not represented in this sparse tensor.
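For completeness, a hedged usage sketch (the logits and seq_lens placeholders here are hypothetical, not from the original post; tf.nn.ctc_loss expects time-major inputs of shape [max_time, batch_size, num_classes] by default):
num_classes = 10  # hypothetical alphabet size, including the CTC blank
logits = tf.placeholder(tf.float32, [None, None, num_classes])
seq_lens = tf.placeholder(tf.int32, [None])
loss = tf.nn.ctc_loss(labels=labels_sparse,
                      inputs=logits,
                      sequence_length=seq_lens)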
There's a problem with shared variable indexing.
The shape of train_set_x is (n, 3, 50176), and n is larger than index.
I can get the value from the shared variable train_set_x with the following code:
train_set_x.get_value(borrow=True)[index]
array([[ 143., 142., 142., ..., 141., 141., 145.],
[ 114., 113., 113., ..., 141., 141., 145.],
[ 108., 107., 107., ..., 139., 139., 143.]], dtype=float32)
but I can't get the value with the following code:
check = theano.function([index], x, givens = {x : train_set_x[index]})
check(index)
It shows this error message:
*** IndexError: index out of bounds
Apply node that caused the error: GpuSubtensor{int64}(<CudaNdarrayType(float32, 3D)>, ScalarFromTensor.0)
Toposort index: 1
Inputs types: [CudaNdarrayType(float32, 3D), Scalar(int64)]
Inputs shapes: [(1039, 3, 50176), ()]
Inputs strides: [(150528, 50176, 1), ()]
Inputs values: ['not shown', 1039]
Outputs clients: [[HostFromGpu(GpuSubtensor{int64}.0)]]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
What is the difference between them? And what would be the way to use the value in train_set_x with theano.function?
The Inputs values: ['not shown', 1039] line shows the value passed as the index argument (it is 1039), and the Inputs shapes: [(1039, 3, 50176), ()] line shows that your n is 1039.
So it looks like you really are out of bounds, as Theano says.
The following sample works for me:
import numpy as np
import theano

train_set_x = theano.shared(np.random.rand(10, 3, 3))
x = theano.tensor.matrix()
index = theano.tensor.iscalar()
check = theano.function([index], x, givens={x: train_set_x[index]})
check(3)
While check(10) displays a very similar error:
IndexError: index out of bounds
Apply node that caused the error: Subtensor{int32}(<TensorType(float64, 3D)>, ScalarFromTensor.0)
Toposort index: 1
Inputs types: [TensorType(float64, 3D), Scalar(int32)]
Inputs shapes: [(10, 3, 3), ()]
Inputs strides: [(72, 24, 8), ()]
Inputs values: ['not shown', 10]
Outputs clients: [[DeepCopyOp(Subtensor{int32}.0)]]
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node
I have two NumPy arrays whose sizes are 994 and 1000. As such, when I do the below operation:
X * Y
I get the error "ValueError: operands could not be broadcast together with shapes (994) (1000)".
Hence, as a fix, I am trying to pad trailing zeros to the smaller array so it matches the greater size, using the method below:
padzero = 0
if bw.size > w.size:
    padzero = bw.size - w.size
    w = np.pad(w, padzero, 'constant', constant_values=0)
if bw.size < w.size:
    padzero = w.size - bw.size
    bw = np.pad(bw, padzero, 'constant', constant_values=0)
But now the issue is that if the size difference is 6, then 12 zeros get padded into the array, when in my case it should be exactly six.
I have tried many ways to achieve this, but none resolve the issue. If I try the below:
bw = np.pad(bw,padzero/2, 'constant', constant_values=0)
ValueError: Unable to create correctly shaped tuple from 3.0
How can I fix the issue?
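The root cause is np.pad's pad_width argument: a single integer pads that many values on both the front and the back of the array, which is why the zero count doubled. (And padzero/2 fails because / returns a float in Python 3, while np.pad needs integer pad widths.) Pass a (before, after) tuple to control each side independently: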
a = np.array([1, 2, 3])
To insert zeros in front:
np.pad(a, (2, 0), 'constant', constant_values=0)
array([0, 0, 1, 2, 3])
To insert zeros at the back:
np.pad(a, (0, 2), 'constant', constant_values=0)
array([1, 2, 3, 0, 0])
Front and back:
np.pad(a, (1, 1), 'constant', constant_values=0)
array([0, 1, 2, 3, 0])
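Applied to the bw / w arrays from the question, padding only the trailing side of the shorter array adds exactly the size difference:
if bw.size > w.size:
    w = np.pad(w, (0, bw.size - w.size), 'constant', constant_values=0)
elif bw.size < w.size:
    bw = np.pad(bw, (0, w.size - bw.size), 'constant', constant_values=0)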