Why is this simple code giving an error?
inputs = tf.Variable(np.random.rand(2,2))
tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(tf.reduce_mean(inputs))
It gives me:
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value Variable_4
[[Node: Variable_4/read = Identity[T=DT_DOUBLE, _class=["loc:@Variable_4"], _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_4)]]
tf.global_variables_initializer() returns an operation that you must execute; it is this operation that actually initializes the global variables.
Therefore, change that line to:
init_op = tf.global_variables_initializer()
and within the session, execute it.
with tf.Session() as sess:
sess.run(init_op)
Moreover, it's better not to mix graph definition and graph execution.
Define the graph outside the session, then execute the operations inside it. Here's an improved version of your code:
import tensorflow as tf
import numpy as np
inputs = tf.Variable(np.random.rand(2,2))
init_op = tf.global_variables_initializer()
mean_op = tf.reduce_mean(inputs)
with tf.Session() as sess:
sess.run(init_op)
mean_value = sess.run(mean_op)
print(mean_value)
By the way, I suggest reading the TensorFlow getting started page: https://www.tensorflow.org/get_started/
I am observing a strange behavior where Saver can't restore a checkpoint that was saved earlier in the same Python process. It loads fine if done from a different process. Here's some simple code that shows the problem:
import tensorflow.compat.v1 as tf
def train():
W = tf.Variable(tf.zeros([1, 1]))
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver.save(sess, "./model.ckpt")
def predict():
W = tf.Variable(tf.zeros([1, 1]))
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver.restore(sess, "./model.ckpt")
train()
predict()
Here we save and then restore immediately afterwards in the same process. Restoration fails with errors like:
Key Variable_1 not found in checkpoint
But if I run just the predict() code again from a new Python process it works just fine.
#train()
predict()
Am I doing something wrong here?
After predict(), if you run:
print([v for v in tf.trainable_variables()])
you will see that two different variables have been created: train() and predict() each add their own W to the same default graph. The checkpoint only contains the first one, which is why TF is not able to restore the second (the Variable_1 from the error message).
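For illustration, the printed list will look something like this (the exact name suffixes depend on what else is in the default graph):
[<tf.Variable 'Variable:0' shape=(1, 1) dtype=float32_ref>,
 <tf.Variable 'Variable_1:0' shape=(1, 1) dtype=float32_ref>]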
To map both variables to the same name in the checkpoint, you can either (a full sketch of the first option follows the list):
Pass a dictionary to the argument var_list of tf.train.Saver. For example:
saver = tf.train.Saver({'W': W})
Use auto-reusing when creating the variable. For example:
with tf.variable_scope('', reuse=tf.AUTO_REUSE):
W = tf.get_variable(initializer=lambda: tf.zeros([1, 1]),
name='W')
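For example, here is a minimal sketch of the first option applied to your code (assuming everything else stays the same); with an explicit var_list, both Savers use the checkpoint key 'W' no matter how the underlying variables are named in the graph:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # only needed when running under TF 2.x
def train():
    W = tf.Variable(tf.zeros([1, 1]))
    saver = tf.train.Saver({'W': W})  # save this variable under the key 'W'
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, "./model.ckpt")
def predict():
    W = tf.Variable(tf.zeros([1, 1]))  # a second variable in the same graph
    saver = tf.train.Saver({'W': W})  # restore the key 'W' into this variable
    with tf.Session() as sess:
        saver.restore(sess, "./model.ckpt")  # restore replaces initialization
        print(sess.run(W))
train()
predict()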
I am trying to use simple_save for TensorFlow, but it isn't working :(
Here is my code:
def export_model(saved_model_dir, final_tensor_name):
with tf.Session() as sess:
with sess.graph.as_default() as graph:
tf.saved_model.simple_save(
sess,
saved_model_dir,
inputs={'image': tf.placeholder(tf.float32)},
outputs={'prediction': graph.get_tensor_by_name(final_tensor_name + ":0")}
)
I get the following error:
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value final_training_ops/biases/final_biases
[[{{node save/SaveV2}}]]
I am working with the following tutorial: https://github.com/BartyzalRadek/Multi-label-Inception-net
I've spent so many hours trying to find solutions online and I know it can't be that tough. I already have a graph that is being exported and all I need now is that saved_model.pb. Any help is appreciated! Thank you!
NEW UPDATE - CODE BELOW
def export_model(saved_model_dir, final_tensor_name):
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
with sess.graph.as_default() as graph:
tf.saved_model.simple_save(
sess,
saved_model_dir,
inputs={'image': tf.placeholder(tf.string)},
outputs={'prediction': graph.get_tensor_by_name(final_tensor_name + ":0")}
)
The code runs now, but when I test the saved model, I always get the same result.
IMAGE_LABELING_CODE
import tensorflow as tf
import sys
image_path = sys.argv[1]
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
label_lines = [line.rstrip() for line
in tf.gfile.GFile("labels.txt")]
with tf.gfile.FastGFile("retrained_graph.pb", 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
_ = tf.import_graph_def(graph_def, name='')
with tf.Session() as sess:
softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
predictions = sess.run(softmax_tensor, \
{'DecodeJpeg/contents:0': image_data})
As @giser_yugang said, you should put init = tf.global_variables_initializer() at the end of the graph-construction part, and then, at execution time, run sess.run(init) right after starting the session.
However, if it were a local variable, you would have to add the variable to the local-variables collection, create the local-variables initializer, and then run it. For example:
a = tf.Variable(..., collections=[tf.GraphKeys.LOCAL_VARIABLES])
local_init = tf.local_variables_initializer()
...
with tf.Session() as sess:
sess.run(local_init)
That said, some parts of the TensorFlow library create local variables directly, for example tf.metrics (if they have not changed this); in that case you just have to define local_init = tf.local_variables_initializer() and run sess.run(local_init).
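Here is a minimal sketch of that second case, using tf.metrics.mean as the example (the placeholder is just for illustration):
import tensorflow as tf
values = tf.placeholder(tf.float32, shape=[None])
# tf.metrics.mean creates its internal accumulators as local variables,
# so the global initializer alone would leave them uninitialized.
mean, update_mean = tf.metrics.mean(values)
local_init = tf.local_variables_initializer()
with tf.Session() as sess:
    sess.run(local_init)
    sess.run(update_mean, feed_dict={values: [1.0, 2.0, 3.0]})
    print(sess.run(mean))  # prints 2.0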
I am trying to use tf_hub for universal-sentence-encoder-large, but I run into the following problem:
FailedPreconditionError (see above for traceback): Table not initialized.
It seems that TensorFlow thinks I did not run the init op, but actually I did:
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")
embeddings = embed([
"The quick brown fox jumps over the lazy dog."])
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
embeddings = sess.run(embeddings)
print(embeddings)
The same code structure has been fine with other tf_hub models like elmo.
Looks like, to use this TensorFlow Hub module, I need to run an additional initializer: the module creates lookup tables internally, and those are initialized separately by tf.tables_initializer():
init = tf.global_variables_initializer()
table_init = tf.tables_initializer()
with tf.Session() as sess:
sess.run([init, table_init])
embeddings_ = sess.run(embeddings)
print(embeddings_)
You could try
with tf.train.SingularMonitoredSession() as sess:
...
which does all standard initializations by itself (including "shared resources", for which there was no public API last time I checked).
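For example, a minimal sketch with the same hub module as above:
import tensorflow as tf
import tensorflow_hub as hub
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")
embeddings = embed(["The quick brown fox jumps over the lazy dog."])
# SingularMonitoredSession runs the standard global, local and table
# initializers itself before the first run call, so no explicit init is needed.
with tf.train.SingularMonitoredSession() as sess:
    print(sess.run(embeddings))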
I am new to TensorFlow. When I read the TensorFlow guide on saving and restoring variables, I encountered a problem. I saved a variable initialized by a constant, but I cannot restore it. The code is as follows:
a = tf.get_variable("name_a", initializer=[1,2,3])
op1 = a.assign(a+1)
saver = tf.train.Saver()
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
op1.op.run()
print(a.eval())
saver.save(sess,"log1/model.ckpt")
Then I try to restore it:
a = tf.get_variable("name_a", shape=[3])
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, "log1/model.ckpt")
print(a.eval())
I want to get output like [2,3,4], but I got [2.80259693e-45 4.20389539e-45 5.60519386e-45], which is essentially all zeros.
However, when I modify the first line in the first code snippet to
a = tf.get_variable("name_a", initializer=tf.zeros([3]))
I can get the right restored variable: [ 1. 1. 1.]
I wonder what the reason for this is.
I'm not 100% sure, but it looks like the reason is that your two variables:
tf.get_variable("name_a", initializer=[1,2,3])
tf.get_variable("name_a", shape=[3])
are not equivalent and can't be used interchangeably that easily (Update: the dtype is different, thanks @BlueSun for noticing this).
You will get a stable output if you define the variable in the restore code exactly as in the saving code: a = tf.get_variable("name_a", initializer=[1,2,3]). However, it is even better to work with the saved graph directly:
saver = tf.train.import_meta_graph('log1/model.ckpt.meta')
with tf.Session() as sess:
saver.restore(sess, "log1/model.ckpt")
saved = sess.graph.get_tensor_by_name('name_a:0')
print(sess.run(saved))
This works correctly no matter how you defined the initializer.
You have to define the variable a with the same data type. If you don't specify it and don't have any initializer, the dtype will be tf.float32 by default, and loading the tf.int32 values will fail. Simply setting the data type to int32 will solve the problem:
a = tf.get_variable("name_a", shape=[3], dtype=tf.int32)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, "log1/model.ckpt")
print(a.eval())
Using a = tf.get_variable("name_a", initializer=tf.zeros([3])) worked because tf.zeros([3]) has dtype tf.float32, which matches the default dtype that a = tf.get_variable("name_a", shape=[3]) gets on the restoring side. It is safer to always set the dtype whenever you create a variable.
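If you are not sure what a checkpoint actually contains, you can inspect it before building the restore graph. A small sketch, using the checkpoint path from the question:
import tensorflow as tf
# List the variable names and shapes stored in the checkpoint.
for name, shape in tf.train.list_variables("log1/model.ckpt"):
    print(name, shape)
# Reading a stored tensor directly also reveals its dtype, which tells you
# how to declare the variable on the restoring side.
reader = tf.train.load_checkpoint("log1/model.ckpt")
print(reader.get_tensor("name_a"))  # e.g. [2 3 4], dtype int32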
I have the following code for reading file names from a directory:
directory = "C:/pics/*.csv"
file_names=tf.train.match_filenames_once(directory)
print(file_names)
which prints:
<tf.Variable 'matching_filenames_1:0' shape=<unknown> dtype=string_ref>
with tf.Session() as sess:
tf.global_variables_initializer().run()
print(sess.run(file_names))
When I run the session, I get the following error:
"Attempting to use uninitialized value matching_filenames"
Please tell me what I am doing wrong.
There is a subtle distinction between what TF considers global and local variables. This code works as you expect:
import tensorflow as tf
directory = "*.*"
file_names = tf.train.match_filenames_once(directory)
init = (tf.global_variables_initializer(), tf.local_variables_initializer())
with tf.Session() as sess:
sess.run(init)
print(sess.run(file_names))
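The reason is that tf.train.match_filenames_once creates its variable in the local variables collection, so tf.global_variables_initializer() alone never touches it. You can verify this yourself (the exact variable name may differ):
print(tf.global_variables())  # does not include the matching_filenames variable
print(tf.local_variables())   # [<tf.Variable 'matching_filenames:0' ...>]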