ValueError: Error converting shape to a TensorShape: Dimension -5 must be >= 0 - tensorflow

I have no idea how this error is arising. I'm trying to change the input format to an RNN and have printed out the tensors in the original version (which works) and the modified version (which crashes).
FUNCTIONAL:
LABEL= Tensor("concat_1:0", shape=(?, 2), dtype=float32, device=/device:CPU:0) (?, 2)
inputs=Tensor("concat:0", shape=(?, 8), dtype=float32, device=/device:CPU:0)
x=[<tf.Tensor 'split:0' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:1' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:2' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:3' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:4' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:5' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:6' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:7' shape=(?, 1) dtype=float32>]
last outputs=Tensor("rnn/rnn/basic_lstm_cell/mul_23:0", shape=(?, 3), dtype=float32)
PREDICTION Tensor("add:0", shape=(?, 2), dtype=float32)
LOSS Tensor("mean_squared_error/value:0", shape=(), dtype=float32)
BROKEN:
X= 5 Tensor("Const:0", shape=(49, 10), dtype=float32, device=/device:CPU:0)
labels= Tensor("Const_5:0", shape=(49, 10), dtype=float32)
OUTPUTS Tensor("rnn/rnn/basic_lstm_cell/mul_14:0", shape=(49, 5), dtype=float32)
PREDICTIONS Tensor("add:0", shape=(49, 10), dtype=float32)
LABELS Tensor("Const_5:0", shape=(49, 10), dtype=float32)
LOSS Tensor("mean_squared_error/value:0", shape=(), dtype=float32)
Here is the code for the model, which is the same for each of them:
lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0)
outputs, _ = tf.nn.static_rnn(lstm_cell, x, dtype=tf.float32)
outputs = outputs[-1]
print('-->OUTPUTS', outputs)
weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))
bias = tf.Variable(tf.random_normal([N_OUTPUTS]))
predictions = tf.matmul(outputs, weight) + bias
print('-->PREDICTIONS', predictions)
print('-->LABELS', labels)
loss = tf.losses.mean_squared_error(labels, predictions)
print('-->LOSS', loss)
train_op = tf.contrib.layers.optimize_loss(
    loss=loss, global_step=tf.train.get_global_step(),
    learning_rate=0.01, optimizer="SGD")
eval_metric_ops = {"rmse": tf.metrics.root_mean_squared_error(labels, predictions)}

TL;DR: Use x = tf.split(x, 10, axis=-1) to split x before feeding it.
TS;WM:
The error probably happens at tf.nn.static_rnn(), in the second line of your code (it would have been nice had you posted the error line number):
outputs, _ = tf.nn.static_rnn(lstm_cell, x, dtype=tf.float32)
The "broken" version tries to feed a single tensor with shape (49, 10), whereas the working version feeds a list of 8 tensors, each with shape (?, 1). The documentation says:
inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size], or a nested tuple of such elements.
In the previous line you define lstm_cell with tf.contrib.rnn.BasicLSTMCell.__init__() (presumably, since the import lines are omitted from your code), with the num_units argument filled by LSTM_SIZE (also omitted from your code):
lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0)
So you have to get your ducks in a row: x has to be a list of (batch_size, 1) tensors, which you can achieve with tf.split():
x = tf.split(x, 10, axis=-1)
where I presume 10 to be the length of the data you're trying to feed, based on the output you pasted.
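For concreteness, here is a minimal sketch of the fix (TF 1.x graph mode assumed; the zeros constant is a stand-in for your real features, and LSTM_SIZE = 5 is inferred from the (49, 5) OUTPUTS printed above):
import tensorflow as tf
from tensorflow.contrib import rnn

LSTM_SIZE = 5                 # matches the (49, 5) OUTPUTS printed above
x = tf.zeros([49, 10])        # stand-in for the "broken" (49, 10) input
x = tf.split(x, 10, axis=-1)  # now a length-10 list of (49, 1) tensors

lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0)
outputs, _ = tf.nn.static_rnn(lstm_cell, x, dtype=tf.float32)
print(outputs[-1])            # Tensor of shape (49, 5), no shape error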

Related

which type of input_shape should I use in tensorflow.keras?

I am learning TensorFlow through its documentation and am a little confused about the input_shape type in the first layer. Some of the examples use a list, but usually it is a tuple. Is there any specific case in which I have to use a certain type?
# I am learning RNN and see this example.
tf.keras.layers.Dense(100, input_shape=[30])
tf.keras.layers.Dense(1)
vs
# This is what I usually see
tf.keras.layers.Dense(32, input_shape=(224, 224, 3)),
tf.keras.layers.Dense(32)
It seems like it depends on my data and some other factors, but I don't know what determines the type.
You can use either a list or a tuple to define the input shape; they both give the same result. See this example:
import tensorflow as tf
>>> tf.keras.Input(shape=(10,))
<tf.Tensor 'input_1:0' shape=(?, 10) dtype=float32>
>>> tf.keras.Input(shape=[10])
<tf.Tensor 'input_2:0' shape=(?, 10) dtype=float32>
>>> tf.keras.Input(shape=(32,32,3))
<tf.Tensor 'input_3:0' shape=(?, 32, 32, 3) dtype=float32>
>>> tf.keras.Input(shape=[32,32,3])
<tf.Tensor 'input_4:0' shape=(?, 32, 32, 3) dtype=float32>
It is up to you; there is no advantage or disadvantage to using either. The same applies to input_shape in a layer.
In Keras, the input layer itself is not a layer, it is a tensor: the starting tensor we send to the first hidden layer. A Keras input_shape argument requires a subscriptable object in which the size of each dimension is stored as an integer. The following are all valid approaches:
tfd = tf.keras.layers.Dense(1, input_shape=(3,))
x = tfd(tf.ones(shape=(5, 3)))
print(x.shape) # (5, 1)
or,
tfd = tf.keras.layers.Dense(1, input_shape=[3])
x = tfd(tf.ones(shape=(5, 3)))
print(x.shape) # (5, 1)
Note, we can't pass just input_shape=3, as a plain integer is not subscriptable. Likewise,
tfd = tf.keras.layers.Dense(1, input_shape=(224, 224, 3))
x = tfd(tf.ones(shape=(5, 3)))
print(x.shape) # (5, 1)
or,
tfd = tf.keras.layers.Dense(1, input_shape=[224, 224, 3])
x = tfd(tf.ones(shape=(5, 3)))
print(x.shape) # (5, 1)
This tensor must have the same shape as our training data. When you set input_shape=(224, 224, 3), that means your training data consists of RGB images of shape 224 x 224. The model can never know this shape up front, so we need to set it manually. This is the general picture for image modeling; the same goes for RNN or sequence modeling: input_shape=(None, features) or input_shape=(features,).
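For instance, a minimal sequence-model sketch (the layer sizes here are hypothetical, just to illustrate the two equivalent spellings):
import tensorflow as tf

# Tuple and list spellings of the same sequence input shape are interchangeable:
model_a = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(None, 8)),  # variable-length, 8 features
    tf.keras.layers.Dense(1),
])
model_b = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=[None, 8]),
    tf.keras.layers.Dense(1),
])
print(model_a.input_shape == model_b.input_shape)  # True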

How do TensorFlow concrete function outputs correspond to structured_outputs?

I trained my customized ssd_mobilenet_v2 using TensorFlow2 Object Detection API.
After training completed, I used exporter_main_v2.py to export a saved_model of my customized model.
If I load the saved_model with TensorFlow 2, there seem to be two kinds of output format.
import tensorflow as tf
saved_model = tf.saved_model.load("saved_model")
detect_fn = saved_model.signatures["serving_default"]
print(detect_fn.outputs)
'''
[<tf.Tensor 'Identity:0' shape=(1, 100) dtype=float32>,
<tf.Tensor 'Identity_1:0' shape=(1, 100, 4) dtype=float32>,
<tf.Tensor 'Identity_2:0' shape=(1, 100) dtype=float32>,
<tf.Tensor 'Identity_3:0' shape=(1, 100, 7) dtype=float32>,
<tf.Tensor 'Identity_4:0' shape=(1, 100) dtype=float32>,
<tf.Tensor 'Identity_5:0' shape=(1,) dtype=float32>,
<tf.Tensor 'Identity_6:0' shape=(1, 1917, 4) dtype=float32>,
<tf.Tensor 'Identity_7:0' shape=(1, 1917, 7) dtype=float32>]
'''
print(detect_fn.structured_outputs)
'''
{'detection_classes': TensorSpec(shape=(1, 100), dtype=tf.float32, name='detection_classes'),
'detection_scores': TensorSpec(shape=(1, 100), dtype=tf.float32, name='detection_scores'),
'detection_multiclass_scores': TensorSpec(shape=(1, 100, 7), dtype=tf.float32, name='detection_multiclass_scores'),
'num_detections': TensorSpec(shape=(1,), dtype=tf.float32, name='num_detections'),
'raw_detection_boxes': TensorSpec(shape=(1, 1917, 4), dtype=tf.float32, name='raw_detection_boxes'),
'detection_boxes': TensorSpec(shape=(1, 100, 4), dtype=tf.float32, name='detection_boxes'),
'detection_anchor_indices': TensorSpec(shape=(1, 100), dtype=tf.float32, name='detection_anchor_indices'),
'raw_detection_scores': TensorSpec(shape=(1, 1917, 7), dtype=tf.float32, name='raw_detection_scores')}
'''
Then I tried to convert this saved_model to ONNX format using tf2onnx.
However, the output of onnxruntime is a plain list.
Judging by the shapes of the results in the list, the order seems to be the same as detect_fn.outputs:
import numpy as np
import onnxruntime as rt
sess = rt.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
pred_onx = sess.run(None, {input_name: np.zeros((1,300,300,3), dtype=np.uint8)})
print(pred_onx) # a list
print([i.shape for i in pred_onx])
'''
[(1, 100),
(1, 100, 4),
(1, 100),
(1, 100, 7),
(1, 100),
(1,),
(1, 1917, 4),
(1, 1917, 7)]
'''
Because some of the results have the same shape as others, it is hard to tell them apart.
Is there any documentation about this relationship that I can refer to?
After looking closely at the values in the outputs, I found the mapping below.
def result_mapper(outputs):
    result_dict = dict()
    result_dict["num_detections"] = outputs[5]
    result_dict["raw_detection_scores"] = outputs[7]
    result_dict["raw_detection_boxes"] = outputs[6]
    result_dict["detection_multiclass_scores"] = outputs[3]
    result_dict["detection_boxes"] = outputs[1]
    result_dict["detection_scores"] = outputs[4]
    result_dict["detection_classes"] = outputs[2]
    result_dict["detection_anchor_indices"] = outputs[0]
    return result_dict
I had the same question, and after some hours of stepping through the debugger I found they are... in sorted order. The method determining iteration order is here:
In the case of dict instances, the sequence consists of the values, sorted by
key to ensure deterministic behavior.
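In practice this means you can rebuild the dict programmatically instead of hand-mapping the indices; a small sketch, assuming detect_fn and pred_onx from the snippets above:
# Sorting the structured_outputs keys alphabetically reproduces the flat output order
keys = sorted(detect_fn.structured_outputs.keys())
result_dict = dict(zip(keys, pred_onx))
print(result_dict["num_detections"].shape)  # (1,)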

Using the output of intermediate layers in a loss function in TF2

I'm trying to replicate the training of OpenPose in TensorFlow 2 as part of my TF2 learning, but to be able to do this I need to use the output of the S and L intermediate layers in my loss function.
I've tried using the functional API, but I can't seem to get the output from the S/L layers in a form I can use in a loss function as required. I can see how this might be possible with subclassing, but that would add complexity and would not be as easy to debug. Debugging and ease of use would be a massive plus at this stage in my learning.
Is there any way I can do this type of model with the functional API or sequential model?
Yes, both functional and sequential Keras models support this. You can always pass a dict containing the layer names as keys and loss functions as values. Here is code demonstrating this.
If you want to construct a model from scratch, you can just add the layer as one of the model's outputs.
import tensorflow as tf
img = tf.keras.Input([128, 128, 3], name='image')
conv_1 = tf.keras.layers.Conv2D(16, 3, 1, name='conv_1')(img)
conv_2 = tf.keras.layers.Conv2D(16, 3, 1, name='conv_2')(conv_1)
conv_3 = tf.keras.layers.Conv2D(16, 3, 1, name='conv_3')(conv_2)
conv_4 = tf.keras.layers.Conv2D(16, 3, 1, name='conv_4')(conv_3)
conv_5 = tf.keras.layers.Conv2D(16, 3, 1, name='conv_5')(conv_4)
avg_pool = tf.keras.layers.GlobalAvgPool2D(name='avg_pool')(conv_5)
output = tf.keras.layers.Dense(1, activation='sigmoid')(avg_pool)
model = tf.keras.Model(inputs=[img], outputs=[output, conv_5])
print(model.outputs)
output:
[<tf.Tensor 'dense/Sigmoid:0' shape=(None, 1) dtype=float32>,
<tf.Tensor 'conv_5/BiasAdd:0' shape=(None, 118, 118, 16) dtype=float32>]
But in case you are working with a model that is already constructed, you can use the model.get_layer method to access a layer and create a new model:
intermediate_layer = model.get_layer('avg_pool').output
new_model = tf.keras.Model(inputs=model.inputs, outputs=model.outputs + [intermediate_layer])
print(new_model.outputs)
output:
[<tf.Tensor 'dense/Sigmoid:0' shape=(None, 1) dtype=float32>,
<tf.Tensor 'conv_5/BiasAdd:0' shape=(None, 118, 118, 16) dtype=float32>,
<tf.Tensor 'avg_pool/Mean:0' shape=(None, 16) dtype=float32>]
Then compile your model, specifying a separate loss for each of the model's outputs. These losses can be strings if they are default losses that Keras provides, or callables that implement your loss function.
new_model.compile(optimizer='sgd',
                  loss={
                      'dense': 'binary_crossentropy',
                      'conv_5': 'mse',
                      'avg_pool': 'mae'
                  })
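Equivalently, a sketch with callables in place of the strings (the lambda is a hypothetical hand-rolled MAE, just to show the form):
new_model.compile(optimizer='sgd',
                  loss={
                      'dense': tf.keras.losses.BinaryCrossentropy(),
                      'conv_5': tf.keras.losses.MeanSquaredError(),
                      'avg_pool': lambda y_true, y_pred: tf.reduce_mean(tf.abs(y_true - y_pred))
                  })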
Some dummy data and labels
images = tf.random.normal([100, 128, 128, 3])
conv_5_labels = tf.random.normal([100, 118, 118, 16])  # matches the conv_5 output shape
avg_pool_labels = tf.random.normal([100, 16])
class_labels = tf.random.uniform([100], 0, 2, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices(
    (images, (class_labels, conv_5_labels, avg_pool_labels))
)
dataset = dataset.batch(4, drop_remainder=True)
Training
new_model.fit(dataset)
output:
25/25 [==============================] - 2s 79ms/step - loss: 2.4339
- dense_loss: 0.3904 - conv_5_loss: 1.2367 - avg_pool_loss: 0.8068

Difference between graph.get_tensor_by_name and tf.global_variables

I can get a tensor with graph.get_tensor_by_name; however, I cannot find it in tf.global_variables().
In my case, I defined some tf.Tensor as follows:
output_y = Dense(units=y.shape[1], activation='softmax',
                 kernel_regularizer=regularizers.l2(),
                 bias_regularizer=regularizers.l2(),
                 activity_regularizer=regularizers.l2(),
                 name='output_y_' + str(index))(pretrain_output)
y_tf = tf.placeholder(tf.float32, shape=(None, y.shape[1]),name='y_tf_'+str(index))
loss_tensor = tf.nn.softmax_cross_entropy_with_logits(logits=output_y, labels=y_tf, name='loss_tensor_' + str(index))
I can print the tensor shape and name as follows:
>>output_y
<tf.Tensor 'train_variable/output_y_0/Softmax:0' shape=(?, 4) dtype=float32>
>>y_tf
<tf.Tensor 'train_variable/y_tf_0:0' shape=(?, 4) dtype=float32>
>>loss_tensor
<tf.Tensor 'train_variable/loss_tensor_0/Reshape_2:0' shape=(?,) dtype=float32>
Also, I can use tf.get_default_graph().get_tensor_by_name to retrieve the tensor:
>>tf.get_default_graph().get_tensor_by_name('train_variable/output_y_0/Softmax:0')
<tf.Tensor 'train_variable/output_y_0/Softmax:0' shape=(?, 4) dtype=float32>
>>tf.get_default_graph().get_tensor_by_name('train_variable/y_tf_0:0')
<tf.Tensor 'train_variable/y_tf_0:0' shape=(?, 4) dtype=float32>
>>tf.get_default_graph().get_tensor_by_name('train_variable/loss_tensor_0/Reshape_2:0')
<tf.Tensor 'train_variable/loss_tensor_0/Reshape_2:0' shape=(?,) dtype=float32>
However, these tensor names cannot be found in tf.global_variables(). It seems that tf.global_variables() only contains parameter variables like kernels and biases. Now I have to remember the tensor name in order to retrieve the output object (output_y in my case). Can someone show me how to retrieve a tensor, for example by searching a list of all tensors?
There is a difference between a tensor from the read-operation of a node and a tensor as a variable.
A variable consists of a value and several operations:
import tensorflow as tf
a = tf.get_variable('a', initializer=42.)  # scalar float variable initialized to 42
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
sess.run(a) # gives 42.
sess.run(tf.get_default_graph().get_tensor_by_name('a/read:0')) # gives 42. as well
print(a.op.outputs)  # [<tf.Tensor 'a:0' shape=() dtype=float32_ref>]
It just behaves similarly:
>>> type(a)
<class 'tensorflow.python.ops.variables.Variable'>
>>> type(tf.get_default_graph().get_tensor_by_name('a/read:0'))
<class 'tensorflow.python.framework.ops.Tensor'>
but they are nevertheless different objects.
The easiest way is to return output_y in case you need it again. Otherwise just follow:
https://stackoverflow.com/a/36893840/7443104
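If you really do want to search a list of all tensors, here is a minimal sketch (TF 1.x graph mode, as in the question): every tensor is an output of some operation, so you can enumerate them from the graph and filter by name.
# List every tensor in the default graph, then filter by name
all_tensors = [t for op in tf.get_default_graph().get_operations()
               for t in op.outputs]
matches = [t for t in all_tensors if 'output_y' in t.name]
print(matches)  # e.g. [<tf.Tensor 'train_variable/output_y_0/Softmax:0' ...>]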

How to convert a string Tensor to a list of float Tensors?

So, my code is like
parsed_line = tf.decode_csv(line, [[0], [0], [""]])
print(parsed_line[0])
del parsed_line[0]
del parsed_line[0]
features = parsed_line
print(parsed_line[0])
then the result is
[<tf.Tensor 'DecodeCSV:0' shape=() dtype=int32>, <tf.Tensor 'DecodeCSV:1' shape=() dtype=int32>, <tf.Tensor 'DecodeCSV:2' shape=() dtype=string>]
and
[<Tensor("DecodeCSV:2", shape=(), dtype=string)>]
The CSV line I will give to this decode function is
1, 0, 0101010010101010101010
and I want to turn "0101010010101010101010" into
[0, 1, 0, 1, 0, .........]
in TensorFlow, i.e. to go from
[<Tensor("DecodeCSV:2", shape=(), dtype=string)>]
to
[<tf.Tensor 'DecodeCSV:0' shape=() dtype=int32>, <tf.Tensor 'DecodeCSV:1' shape=() dtype=int32>, ............]
Do you have any ideas?
You can do it this way using tf.string_split and tf.string_to_number:
import tensorflow as tf
line = tf.constant("1000111101", shape=(1,))
b = tf.string_split(line, delimiter="").values
c = tf.string_to_number(b, tf.int32)
print(c)
with tf.Session() as sess:
    print(sess.run(c))
output:
[1 0 0 0 1 1 1 1 0 1]
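To tie this back to the decode_csv output in the question, a sketch (assuming the bit string is the third CSV column and may carry a leading space after the comma, as in the example row):
line = tf.constant("1, 0, 0101010010101010101010")
parsed = tf.decode_csv(line, [[0], [0], [""]])
bits_str = tf.string_strip(parsed[2])  # drop the leading space, if any
bits = tf.string_to_number(
    tf.string_split([bits_str], delimiter="").values, tf.int32)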