Tensorflow: InvalidArgumentError for placeholder - tensorflow

I tried to run the following code in a Jupyter Notebook; however, I got an InvalidArgumentError for the placeholder.
But when I wrote a Python script and ran it in a command window, it worked. I want to know how I can run my code in the Notebook successfully. Thanks.
OS: Ubuntu 16.04 LTS
Tensorflow version: 0.12rc (installed from source)
Programs and Output:
Command window:
Actual code:
import tensorflow as tf
import numpy as np
raw_data = np.random.normal(10, 1, 100)
# Define alpha as a constant
alpha = tf.constant(0.05)
# A placeholder is just like a variable, but the value is injected from the
# session
curr_value = tf.placeholder(tf.float32)
# Initialize the previous average to zero
prev_avg = tf.Variable(0.)
# Compute the exponential moving average of the incoming values
update_avg = alpha * curr_value + (1 - alpha) * prev_avg
avg_hist = tf.summary.scalar("running_average", update_avg)
value_hist = tf.summary.scalar("incoming_values", curr_value)
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("./logs")
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(len(raw_data)):
        summary_str, curr_avg = sess.run([merged, update_avg], feed_dict={curr_value: raw_data[i]})
        sess.run(tf.assign(prev_avg, curr_avg))
        print(raw_data[i], curr_avg)
        writer.add_summary(summary_str, i)

Your raw_data is float64 (the default NumPy float type), whereas your placeholder is float32 (the default TensorFlow float type). You should explicitly cast your data to float32.
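A minimal sketch of that cast, using only NumPy (the array name matches raw_data from the question):

```python
import numpy as np

# np.random.normal returns float64 by default; cast so the dtype matches
# the tf.float32 placeholder
raw_data = np.random.normal(10, 1, 100).astype(np.float32)
print(raw_data.dtype)  # float32
```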

Related

Tensorflow: FailedPreconditionError: Error while reading resource variable from Container: localhost. When running sess.run() on custom loss function

I have code running Keras with TensorFlow 1. The code modifies the loss function in order to do deep reinforcement learning:
import os
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
from tensorflow import keras
import random
from tensorflow.keras import layers as L
import tensorflow as tf
from tensorflow.python.keras.backend import set_session
sess = tf.compat.v1.Session()
graph = tf.compat.v1.get_default_graph()
init = tf.global_variables_initializer()
sess.run(init)
network = keras.models.Sequential()
network.add(L.InputLayer(state_dim))
# let's create a network for approximate q-learning following guidelines above
network.add(L.Dense(5, activation='elu'))
network.add(L.Dense(5, activation='relu'))
network.add(L.Dense(n_actions, activation='linear'))
s = env.reset()
# Create placeholders for the <s, a, r, s'> tuple and a special indicator for game end (is_done = True)
states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)
actions_ph = keras.backend.placeholder(dtype='int32', shape=[None])
rewards_ph = keras.backend.placeholder(dtype='float32', shape=[None])
next_states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)
is_done_ph = keras.backend.placeholder(dtype='bool', shape=[None])
#get q-values for all actions in current states
predicted_qvalues = network(states_ph)
#select q-values for chosen actions
predicted_qvalues_for_actions = tf.reduce_sum(predicted_qvalues * tf.one_hot(actions_ph, n_actions),
                                              axis=1)
gamma = 0.99
# compute q-values for all actions in next states
predicted_next_qvalues = network(next_states_ph)
# compute V*(next_states) using predicted next q-values
next_state_values = tf.math.reduce_max(predicted_next_qvalues, axis=1)
# compute "target q-values" for loss - it's what's inside square parentheses in the above formula.
target_qvalues_for_actions = rewards_ph + tf.constant(gamma) * next_state_values
# at the last state we shall use simplified formula: Q(s,a) = r(s,a) since s' doesn't exist
target_qvalues_for_actions = tf.where(is_done_ph, rewards_ph, target_qvalues_for_actions)
#mean squared error loss to minimize
loss = (predicted_qvalues_for_actions - tf.stop_gradient(target_qvalues_for_actions)) ** 2
loss = tf.reduce_mean(loss)
# training function that resembles agent.update(state, action, reward, next_state) from tabular agent
train_step = tf.compat.v1.train.AdamOptimizer(1e-4).minimize(loss)
a = 0
next_s, r, done, _ = env.step(a)
sess.run(train_step, {
    states_ph: [s], actions_ph: [a], rewards_ph: [r],
    next_states_ph: [next_s], is_done_ph: [done]
})
When I run a sess.run() training step, I get the following error:
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable beta1_power from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/beta1_power)
Any ideas on what might be the problem?
The initialization operation should be fetched and run (only once) after the variables (i.e. the model) have been created and the computation graph has been defined. Therefore, it should be placed right before running the training step:
# Define and create the computation graph/model
# ...
# Initialize variables in the graph/model
init = tf.global_variables_initializer()
sess.run(init)
# Start training
sess.run(train_step, ...)

Tensorflow freeze_graph unable to initialize local_variables

When freezing a graph with a local variable, freeze_graph has an error stating "Attempting to use uninitialized value...". The local variable in question was initialized via:
with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
    b_init = tf.constant(10.0, shape=[2, 1], dtype="float32", name='bi')
    b = tf.get_variable('b', initializer=b_init, collections=[tf.GraphKeys.LOCAL_VARIABLES])
I'm able to create a saved model and run the saved model. However, I'm trying to freeze another graph for optimization. This error will go away if I remove the 'LOCAL_VARIABLES' flag. However, this variable then becomes global, which causes an issue with reloading my checkpoint (Tensorflow is unable to find the variable in the checkpoint).
Normally, I'd expect freeze_graph to initialize 'b' using 'b_init'.
Code to reproduce the issue:
import os, sys, json
import tensorflow as tf
from tensorflow.python.lib.io import file_io
from tensorflow.core.framework import variable_pb2
from tensorflow.python.framework import ops
from tensorflow.python.ops import variables
from tensorflow.python.framework.ops import register_proto_function
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.tools import freeze_graph
from tensorflow.python import ops
from tensorflow.tools.graph_transforms import TransformGraph
#flags
tf.app.flags.DEFINE_integer('model_version', 1, 'Model version number.')
tf.app.flags.DEFINE_string('export_model_dir','../model_batch/versions', 'Directory where model will be exported to')
FLAGS = tf.app.flags.FLAGS
def main(_):
    '''main function'''
    a = tf.placeholder(dtype=tf.float32, shape=[2, 1])
    with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
        b_init = tf.constant(10.0, shape=[2, 1], dtype="float32", name='bi')
        b = tf.get_variable('b', initializer=b_init, collections=[tf.GraphKeys.LOCAL_VARIABLES])
    b = tf.assign(b, a)
    c = []
    for d in range(5):
        b = b * 1.1
        c.append(b)
    c = tf.identity(c, name='c')
    init = tf.group(tf.global_variables_initializer(),
                    tf.local_variables_initializer())
    with tf.Session() as sess:
        #init
        sess.run(init)
        print(tf.get_default_graph().get_collection(tf.GraphKeys.LOCAL_VARIABLES))
        #create saved model builder class
        export_path_base = FLAGS.export_model_dir
        export_path = os.path.join(
            tf.compat.as_bytes(export_path_base),
            tf.compat.as_bytes(str(FLAGS.model_version)))
        if tf.gfile.Exists(export_path):
            print('Removing previous artifacts')
            tf.gfile.DeleteRecursively(export_path)
        #inputs
        tensor_info_a = tf.saved_model.utils.build_tensor_info(a)
        #outputs
        tensor_info_c = tf.saved_model.utils.build_tensor_info(c)
        print('Exporting trained model to', export_path)
        builder = tf.saved_model.builder.SavedModelBuilder(export_path)
        #define signatures
        prediction_signature = (
            tf.saved_model.signature_def_utils.build_signature_def(
                inputs={'cameras': tensor_info_a},
                outputs={'depthmap': tensor_info_c},
                method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={'predict_batch': prediction_signature})
        #export model
        builder.save(as_text=True)
        writer = tf.summary.FileWriter("output_batch", sess.graph)
        writer.close()
    #load graph from saved model
    print('Freezing graph')
    initializer_nodes = ''
    output_node_names = 'c'
    saved_model_dir = os.path.join(FLAGS.export_model_dir, str(FLAGS.model_version))
    output_graph_filename = os.path.join(saved_model_dir, 'frozen_graph.pb')
    freeze_graph.freeze_graph(
        input_saved_model_dir=saved_model_dir,
        output_graph=output_graph_filename,
        saved_model_tags=tag_constants.SERVING,
        output_node_names=output_node_names,
        initializer_nodes=initializer_nodes,
        input_graph=None,
        input_saver=False,
        input_binary=False,
        input_checkpoint=None,
        restore_op_name=None,
        filename_tensor_name=None,
        clear_devices=False)

if __name__ == '__main__':
    tf.app.run()
I wasn't able to include local_variables in my frozen graph, but I did come up with a workaround.
The initial problem was that my checkpoint was created from a graph that contained local_variables. Unfortunately, freezing the graph produced the error:
Attempting to use uninitialized value...
My workaround was to change the local variables to untrainable global variables, and then filter out the global variables that are not in my checkpoint using the following solution:
https://stackoverflow.com/a/39142780/6693924
With that change I'm able to create a SavedModel and freeze its graph.
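The filtering step from that linked answer can be sketched without TensorFlow: build the restore list from the intersection of the graph's global variable names and the names actually stored in the checkpoint. The variable names below are purely illustrative; in TF 1.x the resulting list would be used to build a tf.train.Saver(var_list=...).

```python
# Variable names stored in the (hypothetical) checkpoint
ckpt_var_names = {"conv1/weights", "conv1/biases"}

# Global variables in the new graph; 'b' is the former local variable,
# so it is absent from the checkpoint
graph_var_names = ["conv1/weights", "conv1/biases", "b"]

# Restore only the variables the checkpoint actually contains
vars_to_restore = [name for name in graph_var_names if name in ckpt_var_names]
print(vars_to_restore)  # ['conv1/weights', 'conv1/biases']
```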

Run importing TensorFlow graph fails for uninitialized variables

I'm attempting to run TensorFlow training in Java by using javacpp-presets for TensorFlow. I've generated a .pb file with tf.train.write_graph(sess.graph_def, '.', 'example.pb', as_text=False), as shown below.
import tensorflow as tf
import numpy as np
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3
Weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='Weights')
biases = tf.Variable(tf.zeros([1]), name='biases')
y = Weights * x_data + biases
loss = tf.reduce_mean(tf.square(y - y_data)) #compute the loss
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss, name='train')
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(Weights), sess.run(biases))
    tf.train.write_graph(sess.graph_def, '.', 'example.pb', as_text=False)
I got:
Exception in thread "main" java.lang.Exception: Attempting to use uninitialized value Weights
when I run:
tensorflow.Status s = session.Run(new StringTensorPairVector(new String[] {}, new Tensor[] {}), new tensorflow.StringVector(), new tensorflow.StringVector("train"), outputs);
after loading the graph: tensorflow.ReadBinaryProto(Env.Default(), "./example.pb", def);
Is there any javacpp-presets API that does the same work as init = tf.global_variables_initializer()?
Or any C++ TensorFlow API I can use to initialize all variables?
In your Python program, init (the result of tf.global_variables_initializer()) is a tf.Operation that runs the initializers for all variables when passed to sess.run(). If you capture the value of init.name when building the Python graph, you can pass that name to session.Run() in your Java program before running the training step.
I'm not 100% sure what the API for javacpp-presets looks like, but I think you would be able to do this as:
tensorflow.Status s = session.Run(
    new StringTensorPairVector(new String[] {}, new Tensor[] {}),
    new tensorflow.StringVector(),
    new tensorflow.StringVector(value_of_init_dot_name),
    outputs);
...where value_of_init_dot_name is the value of init.name you obtained from the Python program.

Can I use TensorBoard also with jupyter notebooks

I am experimenting with (learning) TensorBoard and use the following code I got from the internet (a simple regression function):
import tensorflow as tf
import numpy as np
#sess = tf.InteractiveSession() #define a session
# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype("float32")
y_data = x_data * 0.1 + 0.3
# Try to find values for W and b that compute y_data = W * x_data + b
# (We know that W should be 0.1 and b 0.3, but Tensorflow will
# figure that out for us.)
with tf.name_scope("calculatematmul") as scope:
    W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
    b = tf.Variable(tf.zeros([1]))
    y = W * x_data + b
# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
# Before starting, initialize the variables. We will 'run' this first.
init = tf.initialize_all_variables()
# Launch the graph.
sess = tf.Session()
sess.run(init)
#### ----> ADD THIS LINE <---- ####
writer = tf.train.SummaryWriter('mnist_logs', sess.graph_def)
# Fit the line.
for step in xrange(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))
The code runs fine when I create a Python file and run it with
python test.py
It also runs fine in the Jupyter notebook.
However, while running the Python file produces the information TensorBoard needs (that is to say, it creates the xyz....home file), the interactive version does not create any info usable for TensorBoard.
Can somebody explain to me why, please?
Thanks
Peter
Be sure that you use the full path when starting tensorboard.
tensorboard --logdir='./somedirectory/mnist_logs'
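To see which directory the relative path actually resolves to, you can print the absolute log path from the notebook itself; a notebook resolves relative paths against the directory the kernel was started from, which may not be where you launch tensorboard. The directory name here mirrors the example above:

```python
import os

# Resolve the log directory relative to the current working directory;
# pass the printed path to: tensorboard --logdir=<that path>
log_dir = os.path.abspath("mnist_logs")
print(log_dir)
```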

Initializing tensorflow Variable with an array larger than 2GB

I am trying to initialize a tensorflow Variable with pre-trained word2vec embeddings.
I have the following code:
import tensorflow as tf
from gensim import models
model = models.Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
X = model.syn0
embeddings = tf.Variable(tf.random_uniform(X.shape, minval=-0.1, maxval=0.1), trainable=False)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
sess.run(embeddings.assign(X))
And I am receiving the following error:
ValueError: Cannot create an Operation with a NodeDef larger than 2GB.
The array (X) I am trying to assign is of shape (3000000, 300), and its size is 3.6GB.
I get the same error if I try tf.convert_to_tensor(X) as well.
I know that it fails because the array is larger than 2GB. However, I do not know how to assign an array larger than 2GB to a TensorFlow Variable.
It seems like the only option is to use a placeholder. The cleanest way I can find is to initialize to a placeholder directly:
X_init = tf.placeholder(tf.float32, shape=(3000000, 300))
X = tf.Variable(X_init)
# The rest of the setup...
sess.run(tf.initialize_all_variables(), feed_dict={X_init: model.syn0})
The easiest solution is to feed it through a placeholder node that you tf.assign to the variable.
X = tf.Variable([0.0])
place = tf.placeholder(tf.float32, shape=(3000000, 300))
set_x = X.assign(place)
# set up your session here....
sess.run(set_x, feed_dict={place: model.syn0})
As Joshua Little noted in a separate answer, you can also use it in the initializer:
X = tf.Variable(place) # place as defined above
...
init = tf.initialize_all_variables()
... create sess ...
sess.run(init, feed_dict={place: model.syn0})
try this:
import tensorflow as tf
from gensim import models
model = models.KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin', binary=True)
X = model.syn0
embeddings = tf.Variable(tf.random_uniform(X.shape, minval=-0.1, maxval=0.1), trainable=False)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
embeddings.load(model.syn0, sess)