ValueError: Exception encountered when calling layer "ctc_loss" - tensorflow

TensorFlow beginner here. I got this error message and I have no clue what to do, where to look, or what to change. Can someone point me in the right direction?
Code: https://github.com/niconielsen32/NeuralNetworks/blob/main/OCRcaptchas.ipynb
ValueError: Exception encountered when calling layer "ctc_loss" (type CTCLayer).

in user code:

    File "C:\Users\guowe\PycharmProjects\ocr_gas\ocr.py", line 135, in call *
        label_length = tf.cast(tf.shape(y_true)[1], dtype="int64")

    ValueError: slice index 1 of dimension 0 out of bounds. for '{{node ocr_model_v1/ctc_loss/strided_slice_2}} = StridedSlice[Index=DT_INT32, T=DT_INT32, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](ocr_model_v1/ctc_loss/Shape_2, ocr_model_v1/ctc_loss/strided_slice_2/stack, ocr_model_v1/ctc_loss/strided_slice_2/stack_1, ocr_model_v1/ctc_loss/strided_slice_2/stack_2)' with input shapes: [1], [1], [1], [1] and with computed input tensors: input[1] = <1>, input[2] = <2>, input[3] = <1>.

Call arguments received by layer "ctc_loss" (type CTCLayer):
  • y_true=tf.Tensor(shape=(None,), dtype=float32)
  • y_pred=tf.Tensor(shape=(None, 50, 12), dtype=float32)
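The shapes in the last two lines point at the likely cause: the CTC layer computes tf.shape(y_true)[1], which only exists if y_true is rank 2, i.e. (batch_size, label_length). A y_true of shape (None,) usually means the labels were never batched. A minimal sketch of the distinction (an assumption about your pipeline, not your exact code):

import tensorflow as tf

# Rank-2 labels (batch_size, label_length): what the CTC layer expects.
labels = tf.constant([[3, 1, 4, 1, 5]])               # shape (1, 5)
label_length = tf.cast(tf.shape(labels)[1], "int64")  # -> 5, no error

# Rank-1 labels of shape (5,) are what an unbatched tf.data pipeline yields;
# tf.shape(...)[1] then raises exactly "slice index 1 of dimension 0 out of
# bounds". The usual fix is to batch the dataset before calling model.fit():
# train_dataset = train_dataset.batch(batch_size)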

Related

How to reshape tensor without graph disconnect in TensorFlow?

Reshaping a tensor with tf.reshape during the forward pass causes a graph-disconnect error in TensorFlow. For example,
...
image = tf.keras.layers.Input(INPUT_SHAPE, name='image', dtype=tf.uint8)
image = tf.reshape(image, shape=(-1, 1344, 768, 1))
image_norm = normalize(image)
...
The above code causes the following error:
Graph disconnected: cannot obtain value for tensor
KerasTensor(type_spec=TensorSpec(shape=(None, 8, 1344, 768, 1),
dtype=tf.uint8, name='image'), name='image', description="created by
layer 'image'") at layer "tf.reshape". The following previous layers
were accessed without issue: []
Is there any way to reshape a tensor without a graph disconnect?
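One common workaround (a sketch of the usual fix, not necessarily this poster's exact setup): wrap the raw tf.reshape in a Lambda layer so the reshape becomes a node connected to the Input, and keep a separate Python name for the Input tensor instead of overwriting image:

import tensorflow as tf

# Assumed from the error message: each example is 8 frames of 1344x768x1.
image_in = tf.keras.layers.Input((8, 1344, 768, 1), name='image', dtype=tf.uint8)

# The Lambda layer keeps the reshape inside the functional graph; reassigning
# the variable `image` above is what severed the connection to the Input.
flat = tf.keras.layers.Lambda(
    lambda t: tf.reshape(t, (-1, 1344, 768, 1)))(image_in)
image_norm = normalize(flat)  # `normalize` as in the original snippet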

Tensorflow: ValueError: `generator` yielded an element of shape (500,) where an element of shape (None, 500) was expected

My dataset:
ds = tf.data.Dataset.from_generator(
    data_generator.generate,
    output_types=({"input_1": tf.int64, "input_2": tf.int64}, tf.int64),
    output_shapes=({"input_1": tf.TensorShape([None, 500]),
                    "input_2": tf.TensorShape([None, 500])},
                   tf.TensorShape(None)))
My preprocess function returns:
return ({'input_1': ids1, 'input_2': ids2}, label)
My model (taking multiple inputs):
input_1 = Input(shape=(500,), name='input_1')
............
input_2 = Input(shape=(500,), name='input_2')
On starting the training, I'm getting the following error:
ValueError: generator yielded an element of shape (500,) where an element of shape (None, 500) was expected
Any idea what I might be doing wrong?
EDIT: On updating the output_shapes to:
output_shapes=({"input_1": tf.TensorShape([500]),
                "input_2": tf.TensorShape([500])},
               tf.TensorShape(None))
I get the following warning:
WARNING:tensorflow:Model was constructed with shape (None, 500) for input KerasTensor(type_spec=TensorSpec(shape=(None, 500), dtype=tf.float32, name='input_1'), name='input_1', description="created by layer 'input_1'"), but it was called on an input with incompatible shape (500, 1)
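A sketch of the usual fix (assuming data_generator.generate yields one example at a time): declare per-example shapes, then let .batch() add the leading batch dimension the model was constructed with:

ds = tf.data.Dataset.from_generator(
    data_generator.generate,
    output_types=({"input_1": tf.int64, "input_2": tf.int64}, tf.int64),
    output_shapes=({"input_1": tf.TensorShape([500]),
                    "input_2": tf.TensorShape([500])},
                   tf.TensorShape([])))
ds = ds.batch(32)  # elements now have shape (None, 500), matching Input(shape=(500,))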

Need help deciphering a TensorFlow feed InvalidArgumentError

I don't think I understand this feed error from TensorFlow:
Debug: [[ 0. 0.]]
Debug: (1, 2)
Debug: float64
2018-05-09 09:56:34.615561: W tensorflow/core/kernels/queue_base.cc:295] _0_input_producer: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
  File "/home/kiran/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1323, in _do_call
    return fn(*args)
  File "/home/kiran/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1302, in _run_fn
    status, run_metadata)
  File "/home/kiran/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'seqModel/a_prev' with dtype double and shape [1,2]
    [[Node: seqModel/a_prev = Placeholder[dtype=DT_DOUBLE, shape=[1,2], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
The way I feed the placeholder is:
self.a_prev = tf.placeholder(tf.float64, shape=[1, 2], name='a_prev')

batch = tf.train.batch([self.x_acc, self.y_acc, self.prev_pos],
                       batch_size=1, capacity=20000, num_threads=1)
x_acc, y_acc, prev_pos = sess.run(batch)

test = np.array([[x_acc[0, 0], y_acc[0, 0]]])
print("Debug: ", test)
print("Debug:", test.shape)
print("Debug:", test.dtype)

_, X_hat_val, loss_val, X_val = sess.run(
    [train, X_hat, loss, self.X],
    feed_dict={self.a_prev: np.array([[x_acc[0, 0], y_acc[0, 0]]]),
               self.pos1: np.array([[prev_pos[0, 0]]])})
The error does not make sense to me: I am feeding a value to the placeholder, yet it says nothing was fed. What does that mean?
NB: I didn't run your code, as it depends on unavailable data.
However, your error is probably caused by reassigning the self.a_prev attribute on line 173. After that line, self.a_prev no longer points to the tf.placeholder(..., name='a_prev') but to a different tensor (from self.new_evidence), so the actual placeholder never gets fed at run time.
A toy example of this supposition:
import tensorflow as tf
import numpy as np

x_acc = np.random.rand(2, 2)
y_acc = np.random.rand(2, 2)

a_prev = tf.placeholder(tf.float64, shape=[1, 2], name='a_prev')
some_results = tf.add(a_prev, 1.)
a_prev = tf.constant([[-1, -1]])
# Now "a_prev" the Python variable no longer points to the placeholder, so
# "a_prev" the placeholder exists in the graph with no Python reference to it.

with tf.Session() as sess:
    res = sess.run(some_results,
                   feed_dict={a_prev: np.array([[x_acc[0, 0], y_acc[0, 0]]])})
    # The values are fed to "a_prev" the constant, not "a_prev" the placeholder,
    # hence the error:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'a_prev' with dtype double and shape [1,2]
    [[Node: a_prev = Placeholder[dtype=DT_DOUBLE, shape=[1,2],
    _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
    [[Node: Add/_1 = _Recv[client_terminated=false,
    recv_device="/job:localhost/replica:0/task:0/device:CPU:0",
    send_device="/job:localhost/replica:0/task:0/device:GPU:0",
    send_device_incarnation=1, tensor_name="edge_8_Add",
    tensor_type=DT_DOUBLE,
    _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Why do I have to reshape `inputs` from `tf.train.batch()` to use with `slim.fully_connected()`?

Why do I get this error for slim.fully_connected()?
ValueError: Input 0 of layer fc1 is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: [32]
My input is Tensor("batch:0", shape=(32,), dtype=float32) from tf.train.batch():
inputs, labels = tf.train.batch(
    [input, label],
    batch_size=batch_size,
    num_threads=1,
    capacity=2 * batch_size)
If I reshape the input to (32, 1), it works fine:
inputs, targets = load_batch(train_dataset)
print("inputs:", inputs, "targets:", targets)
# inputs: Tensor("batch:0", shape=(32,), dtype=float32) targets: Tensor("batch:1", shape=(32,), dtype=float32)
inputs = tf.reshape(inputs, [-1, 1])
targets = tf.reshape(targets, [-1, 1])
The examples in the slim walkthrough seem to work without explicitly reshaping after load_batch().
slim.fully_connected expects inputs with at least two dimensions, (batch_size, features), but batching scalars with tf.train.batch produces a rank-1 tensor of shape (32,). So you have to reshape your input to add the feature dimension. I think the next snippet will clear things up.
>>> import numpy as np
>>> a = np.array([1, 2, 3, 4])
>>> a.shape
(4,)
>>> a = np.reshape(a, [4, 1])
>>> a
array([[1],
       [2],
       [3],
       [4]])
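Equivalently on the TensorFlow side, tf.expand_dims adds the feature dimension without hard-coding the batch size (a sketch using the tensors from the question):

inputs = tf.expand_dims(inputs, -1)    # (32,)  -> (32, 1)
targets = tf.expand_dims(targets, -1)  # rank 2, as slim.fully_connected expects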

TensorFlow: FIFOQueue's DequeueMany and DequeueUpTo require the components to have specified shapes

This TensorFlow code using FIFOQueue causes the error below:
import tensorflow as tf

with tf.Session() as sess:
    queue = tf.FIFOQueue(100, tf.float32)
    enqueue_op = queue.enqueue([1.2, 2.3])
    inputs = queue.dequeue_many(2)
    sess.run(enqueue_op)
    sess.run(enqueue_op)
    print(sess.run(inputs))
The error
InvalidArgumentError (see above for traceback): FIFOQueue's DequeueMany and DequeueUpTo require the components to have specified shapes.
[[Node: fifo_queue_DequeueMany = QueueDequeueMany[_class=["loc:#fifo_queue"], component_types=[DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](fifo_queue, fifo_queue_DequeueMany/n)]]
Can you please tell me what I am doing wrong?
Asked too soon; perhaps this will save future generations. The fix is to give the queue explicit per-component dtypes and shapes:
with tf.Session() as sess:
    queue = tf.FIFOQueue(100, dtypes=[tf.float32, tf.float32], shapes=[(), ()])
    enqueue_op = queue.enqueue_many([[1.2, 2.3], [4.5, 6.7]])
    inputs = queue.dequeue_many(4)
    sess.run(enqueue_op)
    sess.run(enqueue_op)
    print(sess.run(inputs))
which prints:
[array([ 1.20000005, 2.29999995, 1.20000005, 2.29999995], dtype=float32),
array([ 4.5 , 6.69999981, 4.5 , 6.69999981], dtype=float32)]
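The key change is shapes=[(), ()] in the constructor: dequeue_many(n) has to stack n elements into a single dense tensor per component, so each component's shape must be fully specified when the queue is created.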