Output node for tensorflow graph created with tf.layers - tensorflow

I have built a TensorFlow neural net and now want to run the graph_util.convert_variables_to_constants function on it. However, this requires an output_node_names parameter. The last layer in the net has the name logit and is built as follows:
logits = tf.layers.dense(inputs=dropout, units=5, name='logit')
However, there are many nodes in that scope:
gd = sess.graph_def
for n in gd.node:
    if 'logit' in n.name:
        print(n.name)
prints:
logit/kernel/Initializer/random_uniform/shape
logit/kernel/Initializer/random_uniform/min
logit/kernel/Initializer/random_uniform/max
logit/kernel/Initializer/random_uniform/RandomUniform
logit/kernel/Initializer/random_uniform/sub
logit/kernel/Initializer/random_uniform/mul
logit/kernel/Initializer/random_uniform
logit/kernel
logit/kernel/Assign
logit/kernel/read
logit/bias/Initializer/zeros
logit/bias
logit/bias/Assign
logit/bias/read
logit/Tensordot/Shape
logit/Tensordot/Rank
logit/Tensordot/axes
...
logit/Tensordot/Reshape_1
logit/Tensordot/MatMul
logit/Tensordot/Const_2
logit/Tensordot/concat_2/axis
logit/Tensordot/concat_2
logit/Tensordot
logit/BiasAdd
...
How do I work out which of these nodes is the output node?

In this graph the output is most likely logit/BiasAdd, since tf.layers.dense with no activation ends with the bias add. If the graph is complex, though, a more robust approach is to add an identity node at the end:
output = tf.identity(logits, name='output')
# the node can now be referenced by the name 'output'
For example, the following code should work:
logits = tf.layers.dense(inputs=dropout, units=5, name='logit')
output = tf.identity(logits, name='output')
output_graph_def = tf.graph_util.convert_variables_to_constants(
    sess, tf.get_default_graph().as_graph_def(), ['output'])
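Once output_graph_def has been produced, it can be serialized for later inference. A minimal sketch; the directory and file name are placeholders:

# Write the frozen GraphDef to disk as a binary protobuf.
tf.train.write_graph(output_graph_def, '/tmp/model', 'frozen_graph.pb', as_text=False)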

Related

How to implement Gaussian Mixture for VAE?

I feel like I don't really know what I'm doing, so I will describe what I think I'm doing, what I want to do, and where that fails.
Given a normal variational autoencoder:
...
net = tf.layers.dense(net, units=code_size * 2, activation=None)
mean = net[:, :code_size]
std = net[:, code_size:]
posterior = tfd.MultivariateNormalDiagWithSoftplusScale(mean, std)
net = posterior.sample()
net = tf.layers.dense(net, units=input_size, ...)
...
What I think I'm doing: Let the neural network find a "mean" and "std" value and use it to create a Normal distribution (Gaussian).
Sample from that distribution and use that for the decoder.
In other words: learn a Gaussian distribution of the encoding
Now I would like to do the same for a mixture of Gaussians.
...
net = tf.layers.dense(net, units=code_size * 2 * code_size, activation=None)
means, stds = tf.split(net, 2, axis=-1)
means = tf.split(means, code_size, axis=-1)
stds = tf.split(stds, code_size, axis=-1)
components = [tfd.MultivariateNormalDiagWithSoftplusScale(means[i], stds[i]) for i in range(code_size)]
probs = [1.0 / code_size] * code_size
gauss_mix = tfd.Mixture(cat=tfd.Categorical(probs=probs), components=components)
net = gauss_mix.sample()
net = tf.layers.dense(net, units=input_size, ...)
...
That seemed relatively straightforward to me, except that it fails with the following error:
Shapes () and (?,) are not compatible
This seems to come from probs, which doesn't have the batch dimension (I didn't think it would need one).
I thought that probs defines the probability between the components.
If I define probs with the batch dimension as well, I get the following cryptic error, and I don't know what it means:
Dimension -1796453376 must be >= 0
Do I generally misunderstand some concepts?
Or what do I need to do differently?
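For reference, a self-contained sketch of the batch-shape convention tfd.Categorical follows (the sizes are made up); tfd.Mixture requires the cat distribution's batch shape to match the components' batch shape, which is where mismatches like the one above usually originate:

import tensorflow as tf
tfd = tf.contrib.distributions

batch_size, code_size = 4, 3
# Rank-1 probs: a single categorical shared by the whole batch, batch_shape = ().
cat_shared = tfd.Categorical(probs=[1.0 / code_size] * code_size)
# Rank-2 probs: one categorical per example, batch_shape = (batch_size,),
# matching components whose batch shape is also (batch_size,).
probs = tf.fill([batch_size, code_size], 1.0 / code_size)
cat_per_example = tfd.Categorical(probs=probs)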

How to apply a function to the value of a tensor and then assign the output to the same tensor

I want to project the updated weights of my network (after an optimization step) onto a special space, and the function which applies the projection takes a numpy array as input, so I need the tensor's value to be passed to it. Is there a way I can do this?
I tried tf.assign() as a solution, but it failed because my function accepts arrays, not tensors.
Here is a sketch of what I want to do:
W = tf.Variable(...)
...
opt = tf.train.AdamOptimizer(learning_rate).minimize(loss, var_list=[W])
W = my_function(W)
It seems that tf.control_dependencies is what you need. One simple example:
import tensorflow as tf

var = tf.get_variable('var', initializer=0.0)

# replace `tf.add` with your custom function
addop = tf.add(var, 1)
with tf.control_dependencies([addop]):
    updateop = tf.assign(var, addop)

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # pylint: disable=no-member
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    updateop.eval()
    print(var.eval())
    updateop.eval()
    print(var.eval())
    updateop.eval()
    print(var.eval())
output:
1.0
2.0
3.0
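If the projection really has to run as a numpy function, as the question states, one option is tf.py_func. A sketch, assuming my_function maps an array to an array of the same shape and dtype (np.clip here is a made-up stand-in for the asker's projection):

import numpy as np
import tensorflow as tf

# Hypothetical numpy projection standing in for the asker's my_function.
def my_function(w):
    return np.clip(w, -1.0, 1.0)

W = tf.get_variable('W', shape=[3, 3], initializer=tf.random_normal_initializer())
projected = tf.py_func(my_function, [W], tf.float32)
projected.set_shape(W.get_shape())  # tf.py_func drops static shape information
project_op = tf.assign(W, projected)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run the optimizer step here, then apply the projection:
    sess.run(project_op)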

In what order does TensorFlow evaluate nodes in a computation graph?

I am having a strange bug in TensorFlow. Consider the following code, part of a simple feed-forward neural network:
output = tf.matmul(layer_3, w_out) + b_out
prob = tf.nn.sigmoid(output)
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=output, labels=y_))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss, var_list=model_variables)
(Notice that prob is not used to define the loss function. This is because sigmoid_cross_entropy_with_logits applies the sigmoid internally in its definition.)
I later run the optimizer in the following line:
result, step_loss, _ = sess.run(fetches=[output, loss, optimizer], feed_dict={x_: np.array([[x, y, x*x, y*y, x*y]]), y_: [[1, 0]]})
The above works just fine. However, if I instead run the following line to run the code, the network seems to perform terribly, even though there shouldn't be any difference!
result, step_loss, _ = sess.run(fetches=[prob, loss, optimizer], feed_dict={x_: np.array([[x, y, x*x, y*y, x*y]]), y_: [[1, 0]]})
I have a feeling it has something to do with the order in which TF computes the nodes in the graph during a session, but I'm not sure. What could the issue be?
It's not an issue with the graph; it's just that you are looking at different things.
In the first example you provide:
result, step_loss, _ = sess.run(fetches=[output, loss, optimizer], feed_dict={x_: np.array([[x, y, x*x, y*y, x*y]]), y_: [[1, 0]]})
you are saving the result of running the output op in the result Python variable.
In the second one:
result, step_loss, _ = sess.run(fetches=[prob, loss, optimizer], feed_dict={x_: np.array([[x, y, x*x, y*y, x*y]]), y_: [[1, 0]]})
you are saving the result of the prob op in the result Python variable.
Since the two ops are different, it is to be expected that the values they return differ.
You could run
logits, activation, step_loss, _ = sess.run(fetches=[output, prob, loss, optimizer], ...)
to check your results.
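Since prob is defined as tf.nn.sigmoid(output), the two fetched arrays should agree up to that transform. A quick numpy sanity check, a sketch assuming the four fetches above:

import numpy as np

# prob is the element-wise sigmoid of output, so these must match closely.
np.testing.assert_allclose(activation, 1.0 / (1.0 + np.exp(-logits)), rtol=1e-5)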

"Output 0 of type double does not match declared output type string" while running the iris sample program in TensorFlow Serving

I am running the sample iris program in TensorFlow Serving. Since it is a TF.Learn model, I am exporting the model using classifier.export(export_dir=model_dir, signature_fn=my_classification_signature_fn), where the signature_fn is defined as shown below:
def my_classification_signature_fn(examples, unused_features, predictions):
    """Creates classification signature from given examples and predictions.

    Args:
        examples: `Tensor`.
        unused_features: `dict` of `Tensor`s.
        predictions: `Tensor` or dict of tensors that contains the classes tensor
            as in {'classes': `Tensor`}.

    Returns:
        Tuple of default classification signature and empty named signatures.

    Raises:
        ValueError: If examples is `None`.
    """
    if examples is None:
        raise ValueError('examples cannot be None when using this signature fn.')
    if isinstance(predictions, dict):
        default_signature = exporter.classification_signature(
            examples, classes_tensor=predictions['classes'])
    else:
        default_signature = exporter.classification_signature(
            examples, classes_tensor=predictions)
    named_graph_signatures = {
        'inputs': exporter.generic_signature({'x_values': examples}),
        'outputs': exporter.generic_signature({'preds': predictions})}
    return default_signature, named_graph_signatures
The model gets exported successfully using the code above.
I have created a client which makes real-time predictions using TensorFlow Serving.
The following is the code for the client:
flags.DEFINE_string("model_dir", "/tmp/iris_model_dir", "Base directory for output models.")
tf.app.flags.DEFINE_integer('concurrency', 1,
                            'maximum number of concurrent inference requests')
tf.app.flags.DEFINE_string('server', '', 'PredictionService host:port')

# connection
host, port = FLAGS.server.split(':')
channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

# Classify a new flower sample.
new_samples = np.array([5.8, 3.1, 5.0, 1.7], dtype=float)
request = predict_pb2.PredictRequest()
request.model_spec.name = 'iris'
request.inputs["x_values"].CopyFrom(
    tf.contrib.util.make_tensor_proto(new_samples))
result = stub.Predict(request, 10.0)  # 10 secs timeout
However, on making the predictions, the following error is displayed:
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INTERNAL, details="Output 0 of type double does not match declared output type string for node _recv_input_example_tensor_0 = _Recv[client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=2016246895612781641, tensor_name="input_example_tensor:0", tensor_type=DT_STRING, _device="/job:localhost/replica:0/task:0/cpu:0"]()")
The iris model is defined in the following manner:
# Specify that all features have real-value data.
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]

# Build a 3-layer DNN with 10, 20, and 10 units respectively.
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3,
                                            model_dir=model_dir)

# Fit the model.
classifier.fit(x=training_set.data,
               y=training_set.target,
               steps=2000)
Kindly suggest a solution for this error.
I think the problem is that your signature_fn is taking the else branch and passing predictions as the output to the classification signature, which expects a string output rather than a double output. Either use a regression signature function or add something to the graph to convert the output into a string.
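Following the second option, one possibility (a sketch only; it assumes the exporter.classification_signature call from the question and is not otherwise verified against the Serving setup) is to cast the classes tensor to strings with tf.as_string before building the signature:

# Hedged sketch: cast the class predictions to DT_STRING so they match the
# string type the classification signature declares.
classes = predictions['classes'] if isinstance(predictions, dict) else predictions
default_signature = exporter.classification_signature(
    examples, classes_tensor=tf.as_string(classes))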

Permanently Inject Constant into Tensorflow Graph for Inference

I train a model with a placeholder for is_training:
is_training_ph = tf.placeholder(tf.bool)
However, once training and validation are done, I would like to permanently inject a constant of False for this value and then "re-optimize" the graph (i.e. using optimize_for_inference). Is there something along the lines of freeze_graph that will do this?
One possibility is to use the tf.import_graph_def() function and its input_map argument to rewrite the value of that tensor in the graph. For example, you could structure your program as follows:
with tf.Graph().as_default() as training_graph:
    # Build model.
    is_training_ph = tf.placeholder(tf.bool, name="is_training")
    # ...
training_graph_def = training_graph.as_graph_def()

with tf.Graph().as_default() as temp_graph:
    tf.import_graph_def(training_graph_def,
                        input_map={is_training_ph.name: tf.constant(False)})
temp_graph_def = temp_graph.as_graph_def()
After building temp_graph_def, you can use it as the input to freeze_graph.
An alternative, which might be more compatible with the freeze_graph and optimize_for_inference scripts (which make assumptions about variable names and checkpoint keys), would be to modify TensorFlow's graph_util.convert_variables_to_constants() function so that it converts placeholders instead:
def convert_placeholders_to_constants(input_graph_def,
                                      placeholder_to_value_map):
    """Replaces placeholders in the given tf.GraphDef with constant values.

    Args:
        input_graph_def: GraphDef object holding the network.
        placeholder_to_value_map: A map from the names of placeholder tensors in
            `input_graph_def` to constant values.

    Returns:
        GraphDef containing a simplified version of the original.
    """
    output_graph_def = tf.GraphDef()
    for node in input_graph_def.node:
        output_node = tf.NodeDef()
        if node.op == "Placeholder" and node.name in placeholder_to_value_map:
            output_node.op = "Const"
            output_node.name = node.name
            dtype = node.attr["dtype"].type
            data = np.asarray(placeholder_to_value_map[node.name],
                              dtype=tf.as_dtype(dtype).as_numpy_dtype)
            output_node.attr["dtype"].type = dtype
            output_node.attr["value"].CopyFrom(tf.AttrValue(
                tensor=tf.contrib.util.make_tensor_proto(data,
                                                         dtype=dtype,
                                                         shape=data.shape)))
        else:
            output_node.CopyFrom(node)
        output_graph_def.node.extend([output_node])
    return output_graph_def
...then you could build training_graph_def as above, and write:
temp_graph_def = convert_placeholders_to_constants(training_graph_def,
                                                   {is_training_ph.op.name: False})
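As with the input_map approach, the resulting temp_graph_def can then be fed to freeze_graph or optimize_for_inference as before. For instance, a sketch assuming a live session for the training graph and an output node named 'output' (both are placeholders here, not names from the question):

# Freeze the rewritten graph as usual (sess and the node name are placeholders).
frozen_graph_def = tf.graph_util.convert_variables_to_constants(
    sess, temp_graph_def, ['output'])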