tensorflow.python.framework.errors_impl.InvalidArgumentError in python - tensorflow

I'm learning TensorFlow (I started with TensorFlow 1.15.5 and I'm working with ready-made code from GitHub here) and I'm getting a runtime error. The error occurs at line 70 of the train.py file:
_, dists, out1, out2, out3 = sess.run([model.opt_op, model.loss, model.output1, model.output2, model.output3], feed_dict=feed_dict)
The error is:
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices [0,7] = -1 is not in [0, 157)
[[node GatherV2_12 (defined at /.cifar/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748)]]
Can anyone explain when this error occurs and how to fix it?
Thanks in advance for any ideas and tips.
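For context: this error comes from a tf.gather call (the GatherV2 node in the message). One of the index values being gathered is -1, while the tensor being indexed has only 157 rows, so the valid index range is [0, 157). A -1 often means some id-to-index lookup in the input pipeline produced a "not found" sentinel, so the fix is usually to filter or remap those indices before the gather. A minimal sketch that reproduces the error (TF 1.x on CPU; the shapes here are made up for illustration, and on GPU out-of-range gather indices tend to silently yield zeros instead of raising):

import tensorflow as tf

params = tf.zeros([157, 4])            # 157 rows, so valid indices are [0, 157)
indices = tf.constant([[0, -1]])       # the -1 is out of range
gathered = tf.gather(params, indices)  # creates a GatherV2 node

with tf.Session() as sess:
    # raises InvalidArgumentError: indices[0,1] = -1 is not in [0, 157)
    sess.run(gathered)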

Related

Save and reuse deep learning model in Keras/TensorFlow [duplicate]

Setting
As already mentioned in the title, I have a problem with my custom loss function when trying to load the saved model. My loss looks as follows:
from keras import backend as K

def weighted_cross_entropy(weights):
    weights = K.variable(weights)
    def loss(y_true, y_pred):
        # clip predictions to avoid log(0)
        y_pred = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())
        loss = y_true * K.log(y_pred) * weights
        loss = -K.sum(loss, -1)
        return loss
    return loss

weighted_loss = weighted_cross_entropy([0.1, 0.9])
During training I used weighted_loss as the loss function and everything worked well. When training finished, I saved the model as an .h5 file with the standard model.save function from the Keras API.
Problem
When I try to load the model via
model = load_model(path, custom_objects={"weighted_loss": weighted_loss})
I get a ValueError telling me that the loss function is unknown.
Error
The error message looks as follows:
File "...\predict.py", line 29, in my_script
"weighted_loss": weighted_loss})
File "...\Continuum\anaconda3\envs\processing\lib\site-packages\keras\engine\saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "...\Continuum\anaconda3\envs\processing\lib\site-packages\keras\engine\saving.py", line 312, in _deserialize_model
sample_weight_mode=sample_weight_mode)
File "...\Continuum\anaconda3\envs\processing\lib\site-packages\keras\engine\training.py", line 139, in compile
loss_function = losses.get(loss)
File "...\Continuum\anaconda3\envs\processing\lib\site-packages\keras\losses.py", line 133, in get
return deserialize(identifier)
File "...\Continuum\anaconda3\envs\processing\lib\site-packages\keras\losses.py", line 114, in deserialize
printable_module_name='loss function')
File "...\Continuum\anaconda3\envs\processing\lib\site-packages\keras\utils\generic_utils.py", line 165, in deserialize_keras_object
':' + function_name)
ValueError: Unknown loss function:loss
Questions
How can I fix this problem? Could the reason be my wrapped loss definition, so that Keras doesn't know how to handle the weights variable?
Your loss function's name is loss (i.e. def loss(y_true, y_pred):), and Keras serializes the loss under that function name. Therefore, when loading back the model you need to specify 'loss' as its name:
model = load_model(path, custom_objects={'loss': weighted_loss})
For full examples demonstrating saving and loading Keras models with custom loss functions or models, please have a look at the following GitHub gist files:
Custom loss function defined using a wrapper:
https://gist.github.com/ashkan-abbasi66/a81fe4c4d588e2c187180d5bae734fde
Custom loss function defined by subclassing:
https://gist.github.com/ashkan-abbasi66/327efe2dffcf9788847d26de934ef7bd
Custom model:
https://gist.github.com/ashkan-abbasi66/d5a525d33600b220fa7b095f7762cb5b
Note:
I tested the above examples on Python 3.8 with TensorFlow 2.5.
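Separately, to make the fix concrete, here is a minimal end-to-end sketch of saving and reloading a model with the wrapped loss (the toy model, data, and file name are made up for illustration; it assumes standalone Keras imports, matching the question's traceback):

import numpy as np
from keras import backend as K
from keras.models import Sequential, load_model
from keras.layers import Dense

def weighted_cross_entropy(weights):
    weights = K.variable(weights)
    def loss(y_true, y_pred):
        y_pred = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())
        return -K.sum(y_true * K.log(y_pred) * weights, -1)
    return loss

weighted_loss = weighted_cross_entropy([0.1, 0.9])

model = Sequential([Dense(2, activation="softmax", input_shape=(4,))])
model.compile(optimizer="adam", loss=weighted_loss)
model.fit(np.random.rand(8, 4), np.eye(2)[np.random.randint(0, 2, 8)], verbose=0)
model.save("model.h5")

# The .h5 config stores the loss under its __name__, which is "loss",
# so that is the key custom_objects must use:
restored = load_model("model.h5", custom_objects={"loss": weighted_loss})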


Trouble with TensorFlow and MNIST recognition

First of all, thank you for reading my post and helping out. I've recently gotten interested in ML with TensorFlow, but I've encountered a problem with my code. I'm reading a book called Learning TensorFlow, and I've written out the whole first example, which analyzes MNIST images, adding my own comments on how I understand the code works. When I run the code, however, I get an error. Here's my code, followed by the error.
# Import tensorflow under the name of tf
import tensorflow as tf
# Import MNIST tutorial data from tensorflow
from tensorflow.examples.tutorials.mnist import input_data

# Declare constants
# Data path
DATA_DIR = 'C:/tmp/data'
# Number of steps
NUM_STEPS = 1000
# Number of examples per step
MINIBATCH_SIZE = 100

# When we read the data set it is saved locally under our data path, here C:/tmp/data
data = input_data.read_data_sets(DATA_DIR, one_hot=True)

# Our placeholder x is the image. Placeholders are supplied when running the computation graph
x = tf.placeholder(tf.float32, [None, 784])
# Create a variable representing the weights. Variables are manipulated by the computation graph
W = tf.Variable(tf.zeros([784, 10]))

y_true = tf.placeholder(tf.float32, [None, 784])
y_pred = tf.matmul(x, W)

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=y_pred, labels=y_true))

gd_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

correct_mask = tf.equal(tf.argmax(y_pred, 1), tf.argmax(y_true, 1))
accuracy = tf.reduce_mean(tf.cast(correct_mask, tf.float32))

with tf.Session() as sess:
    # Initialize global variables
    sess.run(tf.global_variables_initializer())
    for _ in range(NUM_STEPS):
        batch_xs, batch_ys = data.train.next_batch(MINIBATCH_SIZE)
        sess.run(gd_step, feed_dict={x: batch_xs, y_true: batch_ys})
    ans = sess.run(accuracy, feed_dict={x: data.test.images,
                                        y_true: data.test.labels})
    print("Accuracy: {:.4}%".format(ans*100))
Now here's the error.
runfile('C:/Users/user/.spyder-py3/temp.py', wdir='C:/Users/user/.spyder-py3')
Extracting C:/tmp/data\train-images-idx3-ubyte.gz
Extracting C:/tmp/data\train-labels-idx1-ubyte.gz
Extracting C:/tmp/data\t10k-images-idx3-ubyte.gz
Extracting C:/tmp/data\t10k-labels-idx1-ubyte.gz
Traceback (most recent call last):
  File "<ipython-input-11-bf503334b166>", line 1, in <module>
    runfile('C:/Users/user/.spyder-py3/temp.py', wdir='C:/Users/CwWJc/.spyder-py3')
  File "C:\Users\user\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
    execfile(filename, namespace)
  File "C:\Users\user\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "C:/Users/user/.spyder-py3/temp.py", line 38, in <module>
    sess.run(gd_step, feed_dict={x: batch_xs, y_true: batch_ys})
  File "C:\Users\user\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 950, in run
    run_metadata_ptr)
  File "C:\Users\user\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1149, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (100, 10) for Tensor
'Placeholder_15:0', which has shape '(?, 784)'
Any help is greatly appreciated. Sorry if I'm making a stupid mistake. I find that I often do, though. Thanks in advance! Also, sorry for garbage formatting. :)
Hahaha! I got y_true mixed up: I declared the labels placeholder with the image shape instead of the label shape. Sorry for the hassle, everyone.
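For anyone hitting the same error: the labels are one-hot vectors over the 10 digit classes, so the y_true placeholder needs shape [None, 10], not the 784-pixel image shape:

# Labels are one-hot over 10 classes; the placeholder shape must match the fed batches
y_true = tf.placeholder(tf.float32, [None, 10])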

KeyError: "The name 'boosted_trees/QuantileAccumulator/' refers to an Operation not in the graph." when loading saved model

I created a TensorFlow estimator:
outlier_estimator = tf.estimator.BoostedTreesClassifier(
    n_batches_per_layer=15,
    feature_columns=outlier_feature_columns,
    model_dir="./tensorboard_logs/wifi_outliers/",
    n_classes=2
)
and saved it:
def serving_input_receiver_fn():
    inputs = {
        "signal_0": tf.placeholder(shape=[1], dtype=tf.float32, name="signal_0"),
        "signal_1": tf.placeholder(shape=[1], dtype=tf.float32, name="signal_1")
    }
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

outlier_estimator.export_savedmodel(export_dir_base="./export/",
                                    serving_input_receiver_fn=serving_input_receiver_fn)
But when I try to load saved model
tf.reset_default_graph()
with tf.Session() as sess:
    tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        "./export/1551699998"
    )
I faced an error:
KeyError: "The name 'boosted_trees/QuantileAccumulator/' refers to an Operation not in the graph."
What am I doing wrong?
I'm using:
Python 3.7
TensorFlow 1.13.1
I faced the same problem. Updating my TensorFlow version to TensorFlow 2 (precisely tf 2.1.0) solved the problem.
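In TF 2.x a SavedModel can be loaded without a session or graph. A minimal sketch, assuming tensorflow>=2.1 and the export path from the question (the signature input names follow the placeholder names in serving_input_receiver_fn):

import tensorflow as tf

loaded = tf.saved_model.load("./export/1551699998")
infer = loaded.signatures["serving_default"]
# feed the two serving inputs by name; the result is a dict of output tensors
print(infer(signal_0=tf.constant([1.0]), signal_1=tf.constant([2.0])))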

When I try to execute my RNN code in TensorFlow, it shows the following errors

The TensorFlow code is attached; please assist.
List of errors:
Error 1:
TypeError: Input 'split_dim' of 'Split' Op has type float32 that does not match expected type of int32.
From: x = tf.split(0, n_chunks, x)
To: x = tf.split(axis=0, num_or_size_splits=n_chunks, value=x)
Error 2:
AttributeError: module 'tensorflow.contrib.rnn.python.ops.rnn_cell' has no attribute 'BasicLSTMCell'
From:
from tensorflow.contrib.rnn.python.ops import rnn_cell
lstm_cell = rnn_cell.BasicLSTMCell(rnn_size,state_is_tuple=True)
To:
from tensorflow.contrib.rnn import BasicLSTMCell
lstm_cell = BasicLSTMCell(rnn_size,state_is_tuple=True)
Error 3:
NameError: name 'rnn' is not defined
From: outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
To: outputs, states = tf.nn.dynamic_rnn(cell=lstm_cell, inputs=x, dtype=tf.float32)
Error 4:
ValueError: Dimension must be 2 but is 3 for 'transpose_3' (op: 'Transpose') with input shapes: [?,28], [3].
Comment:
x = tf.transpose(x, [1,0,2])
x = tf.reshape(x, [-1, chunk_size])
x = tf.split(axis=0, num_or_size_splits=n_chunks, value=x)
Error 5:
ValueError: Only call softmax_cross_entropy_with_logits with named arguments (labels=..., logits=..., ...)
From:
cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(prediction,y) )
To:
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y))
Error 6:
InvalidArgumentError: logits and labels must be same size: logits_size=[28,10] labels_size=[128,10]
That's probably a conceptual problem with the softmax_cross_entropy_with_logits function. Take a look at softmax_cross_entropy_with_logits and its doc.
Work through and fix all of these problems. If you still aren't able to run the code, come back and post the updated code, but at least with those six problems solved :)
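Regarding error 6, one likely cause (an assumption, since the full code isn't shown): after switching to tf.nn.dynamic_rnn in error 3, outputs is batch-major with shape [batch, time, rnn_size], so indexing it as outputs[-1] selects the last example (shape [28, rnn_size]) rather than the last time step, which yields logits of size [28, 10] against labels of size [128, 10]. A minimal sketch of the usual fix (TF 1.x; rnn_size=128 and n_classes=10 as implied by the error message):

import tensorflow as tf

n_chunks, chunk_size, rnn_size, n_classes = 28, 28, 128, 10
x = tf.placeholder(tf.float32, [None, n_chunks, chunk_size])
y = tf.placeholder(tf.float32, [None, n_classes])

lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)
# dynamic_rnn consumes batch-major input directly: no transpose/reshape/split needed
outputs, states = tf.nn.dynamic_rnn(cell=lstm_cell, inputs=x, dtype=tf.float32)

W = tf.Variable(tf.random_normal([rnn_size, n_classes]))
b = tf.Variable(tf.random_normal([n_classes]))

# take the last TIME step for every example: shape [batch, rnn_size]
last_output = outputs[:, -1, :]
prediction = tf.matmul(last_output, W) + b  # logits: [batch, n_classes]
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))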