I want to visualize a complex model with the TensorBoard graph view. I run the code below to restore the graph from a ckpt.meta file. However, data flows are not displayed properly: I can't get useful information like input channels or image size, components are not connected explicitly, and the inputs are represented as mul0-704, which looks like an intermediate variable. I didn't run a training session, and the original training was run with tf.train.MonitoredTrainingSession(), which is something I don't have a firm grasp of currently. Why does this graph look like a mess? I am a rookie with TensorBoard and I want to refactor the code with PyTorch. Any help would be appreciated. Thanks in advance.
[Screenshot: disconnected graph, mul0-704]
import tensorflow as tf

# Rebuild the graph structure from the .meta file.
tf.train.import_meta_graph('meta_file_dir')
export_dir = 'export_log_dir'

# Print every node in the restored graph (very verbose).
for n in tf.get_default_graph().as_graph_def().node:
    print(n)

# Write the graph so TensorBoard can render it.
with tf.Session() as sess:
    writer = tf.summary.FileWriter(export_dir, sess.graph)
    writer.close()
Right now, I am using the default retrain.py from TensorFlow to train an image classification model. But when I serve the model on Google AI Platform and try to call the API, I get an error saying that the image is too large, since it is a float32 array. I'm thinking the best approach would be to change retrain.py to take in a b64-encoded image instead of a float32 array, but I have no idea how to do that. Any suggestions?
Any help is appreciated! Thanks!
UPDATE
def export_model(module_spec, class_count, saved_model_dir):
    sess, in_image, _, _, _, _ = build_eval_session(module_spec, class_count)
    image = tf.placeholder(shape=[None], dtype=tf.string)
    export_dir = "/tmp/save/"
    inputs = {'image_bytes': image}
    with sess.graph.as_default() as graph:
        tf.saved_model.simple_save(sess, export_dir, inputs,
                                   {'prediction': graph.get_tensor_by_name('final_result:0')})
This is what I have updated my code to, but it still doesn't work.
Take a look at this post; it contains the info you need. If not, reply and I'll help you prepare some code. You probably need the URL-safe b64 variant, though.
EDIT
Your code is a bit confusing; I don't think the input is actually connected to your graph. Have you looked at the graph with tf.summary.FileWriter('output folder', sess.graph)?
I'm going to gradually explain how you build some layers in front of your model, with some examples. This code should not be in retrain.py and can be run after you have trained the model.
1) Load your TensorFlow model. If it was built with SavedModelBuilder or simple_save, you can do it like this:
def loader(path):
    # Load the SavedModel into a fresh graph and return its GraphDef.
    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.TRAINING], path)
        return tf.get_default_graph().as_graph_def()
The tag constants can be checked with the saved_model_cli tool; it is possible that this list has to be empty ([]) in your case.
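For instance, something along these lines (the directory path is a placeholder for wherever your SavedModel lives):
saved_model_cli show --dir /path/to/saved_model --all
This prints the available tag-sets and the signature defs stored under each one.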
2) Add the layers/tensors you need. You need something that accepts a byte string (base64 in this case), decodes it, and transforms it into a 3D image:
# Accepts a batch of base64-encoded byte strings.
image_str_tensor = tf.placeholder(dtype=tf.string, shape=(None,), name='input_image_bytes')
input_image = tf.decode_base64(image_str_tensor)
# Decode the first image in the batch into an HxWx3 uint8 tensor.
decoder = tf.image.decode_jpeg(input_image[0], channels=3)
The other ops, like converting to float, expanding dimensions, and reshaping, should already be in the graph if you got it from retrain.py.
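If they are not, a minimal sketch of that preprocessing could look like this (the 299x299 target size is an assumption for an Inception-style module; use your model's actual input size):
# Cast uint8 pixels to float32 in [0, 1], resize, and add a batch dimension.
image_float = tf.image.convert_image_dtype(decoder, dtype=tf.float32)
image_resized = tf.image.resize_images(image_float, [299, 299])  # assumed input size
image_batch = tf.expand_dims(image_resized, 0)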
3) Splice them into your graph by feeding them in via input_map:
graph_def_inception = loader('path to your saved model')
output_prediction, = tf.import_graph_def(graph_def_inception, input_map={"DecodeJpeg:0": decoder}, return_elements=['final_result:0'], name="")
4) Create a SavedModel and check that everything is the way you want it to be!
builder = tf.saved_model.builder.SavedModelBuilder('output/model/path')
with tf.Session() as sess:
    tf.summary.FileWriter('output/graph_log/files', sess.graph)
    input_tensor_info = tf.saved_model.utils.build_tensor_info(input_image)
    output_tensor_info = tf.saved_model.utils.build_tensor_info(output_prediction)
    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'input_image': input_tensor_info},
        outputs={'output_prediction': output_tensor_info},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
    # save as SavedModel
    builder.add_meta_graph_and_variables(sess,
                                         [tf.saved_model.tag_constants.SERVING],
                                         signature_def_map={'serving_default': signature})
    builder.save()
5) If you get errors, try to debug them with TensorBoard:
tensorboard --logdir=output/graph_log/files
I hope this helps a bit. This code will not work on the first try; you need to puzzle with some parts. If you truly cannot succeed, then share the model; maybe I can do it and share the code with you, if I have time.
Everyone!
I have a question related to reusing a trained model (TensorFlow).
I have a trained model, and I want to predict new data with it.
I use DNNClassifier.
I have a model.ckpt-200000.meta, model.ckpt-200000.index, checkpoint, and eval folder,
but I don't know how to reuse this model.
Please help me.
First, you need to import your graph:
with tf.Session() as sess:
    # import_meta_graph rebuilds the graph structure from the .meta file,
    # and restore() loads the trained variable values into it.
    new_saver = tf.train.import_meta_graph('model.ckpt-200000.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('./'))
Then you can give input to the graph and get the output.
graph = tf.get_default_graph()
input = graph.get_tensor_by_name("input:0")  # input tensor, looked up by name
feed_dict = {input: your_input_data}  # your_input_data is a hypothetical placeholder for your actual input
# Now, access the output operation.
op_to_restore = graph.get_tensor_by_name("y_:0")  # output tensor
print(sess.run(op_to_restore, feed_dict))  # get output here
A few things to note:
- You can use the above code in place of the training part of your graph (i.e., you can get the output without training).
- However, you still have to construct your graph as before and only replace the training part.
- The above method only loads the weights into the constructed graph. Therefore, you have to construct the graph first.
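To illustrate that last point, a minimal sketch of rebuilding the graph and restoring only the weights (the shapes and names below are made-up placeholders and must match your training script exactly):
import tensorflow as tf

# Reconstruct the same graph as during training: same variable names and shapes.
x = tf.placeholder(tf.float32, shape=[None, 784], name="input")  # assumed shape
# ... build the rest of the model here, ending in your output tensor y_ ...

saver = tf.train.Saver()
with tf.Session() as sess:
    # Load the checkpointed weights into the variables of the rebuilt graph.
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    # output = sess.run(y_, feed_dict={x: new_data})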
A good tutorial on this can be found here: http://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/
If you don't want to construct the graph again, you can follow this tutorial: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
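The gist of that freezing approach, as a rough sketch (the output node name 'y_' is taken from the code above and may differ in your model):
import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model.ckpt-200000.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    # Bake the variable values into the graph as constants.
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['y_'])
    tf.train.write_graph(frozen_graph_def, './', 'frozen_model.pb', as_text=False)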
I've searched around the internet for a few days and cannot seem to find an example of someone feeding a single image into a graph created using Inception. Please let me know if I have grossly overlooked something obvious. To put the problem in context, I've
1) Trained a model and produced the relevant checkpoint files
model.ckpt-10000.data-00000-of-00001
model.ckpt-10000.index
model.ckpt-10000.meta
2) I then load the model
tf.reset_default_graph()
sess = tf.Session()
saver = tf.train.import_meta_graph(checkpoint_path + "/model.ckpt-10000.meta", clear_devices=True)
#<tensorflow.python.training.saver.Saver object at 0x11eea89e8>
saver.restore(sess, checkpoint_path + "/model.ckpt-10000")  # restore() runs the restore op itself; no sess.run() wrapper needed
3) This works correctly, so I load the default graph,
graph = tf.get_default_graph()
Here is where I am lost. As seen by this example, we must identify the layers of the graph by name to pass our image data into -- http://cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/.
So, what are the names of these layers? I suppose they are something like "DecodeJpeg" and "/tower1/predictions/logits", but those are no better than guesses.
Thank you for your help.
The standard way of mapping between operations before and after save/restore is by adding them to collections. Search for tf.add_to_collection and tf.get_collection in https://www.tensorflow.org/api_guides/python/meta_graph. These examples save training_op and logits, but you can save your input placeholders as well.
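For example (the tensor names here are illustrative, not from your graph): in the training script you register the tensors you will need, and after restoring you fetch them back by key:
# In the training script, before saving:
tf.add_to_collection('inputs', input_placeholder)
tf.add_to_collection('logits', logits)

# After tf.train.import_meta_graph(...) and saver.restore(...):
input_placeholder = tf.get_collection('inputs')[0]
logits = tf.get_collection('logits')[0]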
If you cannot re-save the meta graph def and it does not have any collections, looking at node names and types (inputs are typically placeholder ops) might be the best you can do.
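A quick heuristic for that inspection, as a sketch (it only narrows down candidates; it cannot tell you which placeholder is the image input):
graph = tf.get_default_graph()
# Feedable inputs are typically Placeholder ops.
placeholders = [n.name for n in graph.as_graph_def().node if n.op == 'Placeholder']
print(placeholders)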
I have a Keras model that I would like to convert to a Tensorflow protobuf (e.g. saved_model.pb).
This model comes from transfer learning on the VGG-19 network, in which the head was cut off and retrained with fully-connected + softmax layers while the rest of the VGG-19 network was frozen.
I can load the model in Keras, and then use keras.backend.get_session() to run the model in tensorflow, generating the correct predictions:
frame = preprocess(cv2.imread("path/to/img.jpg"))
keras_model = keras.models.load_model("path/to/keras/model.h5")
keras_prediction = keras_model.predict(frame)
print(keras_prediction)
with keras.backend.get_session() as sess:
    tvars = tf.trainable_variables()
    output = sess.graph.get_tensor_by_name('Softmax:0')
    input_tensor = sess.graph.get_tensor_by_name('input_1:0')
    tf_prediction = sess.run(output, {input_tensor: frame})
    print(tf_prediction)  # this matches keras_prediction exactly
If I don't include the line tvars = tf.trainable_variables(), then the tf_prediction variable is completely wrong and doesn't match the output from keras_prediction at all. In fact all the values in the output (single array with 4 probability values) are exactly the same (~0.25, all adding to 1). This made me suspect that weights for the head are just initialized to 0 if tf.trainable_variables() is not called first, which was confirmed after inspecting the model variables. In any case, calling tf.trainable_variables() causes the tensorflow prediction to be correct.
The problem is that when I try to save this model, the variables from tf.trainable_variables() don't actually get saved to the .pb file:
from tensorflow.python.framework import graph_util, graph_io

with keras.backend.get_session() as sess:
    tvars = tf.trainable_variables()
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), ['Softmax'])
    graph_io.write_graph(constant_graph, './', 'saved_model.pb', as_text=False)
What I am asking is: how can I save a Keras model as a TensorFlow protobuf with the tf.trainable_variables() intact?
Thanks so much!
So your approach of freezing the variables in the graph (converting them to constants) should work, but it isn't necessary and is trickier than the other approaches (more on this below). If you want graph freezing for some reason (e.g. exporting to a mobile device), I'd need more details to help debug, as I'm not sure what implicit stuff Keras is doing behind the scenes with your graph. However, if you just want to save and load a graph later, I can explain how to do that (though no guarantees that whatever Keras is doing won't screw it up... happy to help debug that).
So there are actually two formats at play here. One is the GraphDef, which is used for checkpointing, as it does not contain metadata about inputs and outputs. The other is a MetaGraphDef, which contains metadata and a GraphDef; the metadata is useful for prediction and for running a ModelServer (from tensorflow/serving).
In either case you need to do more than just call graph_io.write_graph, because the variables are usually stored outside the GraphDef.
There are wrapper libraries for both these use cases. tf.train.Saver is primarily used for saving and restoring checkpoints.
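For completeness, a checkpoint round trip with tf.train.Saver looks roughly like this (paths are placeholders; this writes variable files plus a .meta graph, not a single servable .pb):
saver = tf.train.Saver()
with keras.backend.get_session() as sess:
    # Writes model.ckpt.data-*, model.ckpt.index and model.ckpt.meta.
    save_path = saver.save(sess, './checkpoints/model.ckpt')
# Later, with the same graph constructed (or imported from the .meta file):
# saver.restore(sess, save_path)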
However, since you want prediction, I would suggest using a tf.saved_model.builder.SavedModelBuilder to build a SavedModel binary. I've provided some boilerplate for this below:
from tensorflow.python.saved_model.signature_constants import DEFAULT_SERVING_SIGNATURE_DEF_KEY as DEFAULT_SIG_DEF

builder = tf.saved_model.builder.SavedModelBuilder('./mymodel')
with keras.backend.get_session() as sess:
    output = sess.graph.get_tensor_by_name('Softmax:0')
    input_tensor = sess.graph.get_tensor_by_name('input_1:0')
    sig_def = tf.saved_model.signature_def_utils.predict_signature_def(
        {'input': input_tensor},
        {'output': output}
    )
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            DEFAULT_SIG_DEF: sig_def
        }
    )
builder.save()
After running this code you should have a mymodel/saved_model.pb file as well as a directory mymodel/variables/ with protobufs corresponding to the variable values.
Then to load the model again, simply use tf.saved_model.loader:
# Does Keras give you the ability to start with a fresh graph?
# If not you'll need to do this in a separate program to avoid
# conflicts with the old default graph
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        './mymodel'
    )
    # From this point variables and graph structure are restored
    sig_def = meta_graph_def.signature_def[DEFAULT_SIG_DEF]
    # The signature's inputs/outputs are TensorInfo protos, so feed/fetch by their .name
    print(sess.run(sig_def.outputs['output'].name,
                   feed_dict={sig_def.inputs['input'].name: frame}))
Obviously, more efficient serving of predictions is available for this model through tensorflow/serving or Cloud ML Engine, but this should work.
It's possible that Keras is doing something under the hood which will interfere with this process as well, and if so we'd like to hear about it (and I'd like to make sure that Keras users are able to freeze graphs as well, so if you want to send me a gist with your full code or something, maybe I can find someone who knows Keras well to help me debug it).
EDIT: You can find an end to end example of this here: https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/keras/trainer/model.py#L85
I am a beginner in TensorFlow, currently training a CNN.
I am using Saver to save the parameters used by the model, but I am concerned whether this by itself stores all the Variables used by the model, and whether it is sufficient to restore the values so the program can be re-run to perform classification/testing on the trained network.
Let us look at the famous MNIST example given by TensorFlow.
In the example, we have a bunch of convolutional blocks, all of which have weight and bias variables that get initialized when the program is run.
W_conv1 = init_weight([5,5,1,32])
b_conv1 = init_bias([32])
After having processed several layers, we create a session, and initialise all the variables added to the graph.
sess = tf.Session()
sess.run(tf.initialize_all_variables())
saver = tf.train.Saver()
Here, is it possible to comment out the saver.save code and replace it with saver.restore(sess, file_path) after the training, in order to restore the weight, bias, etc. parameters back to the graph? Is this how it should be done?
for i in range(1000):
    ...
    if i % 500 == 0:
        saver.save(sess, "model%d.ckpt" % (i))
I am currently training on a large dataset, so terminating and restarting the training is a waste of time and resources; I'd request that someone please clarify before I start the training.
If you want to save the final result only once, you can do this:
with tf.Session() as sess:
    for i in range(1000):
        ...
    path = saver.save(sess, "model.ckpt")  # out of the loop
    print("Saved:", path)
In other programs, you can load the model using the path returned from saver.save for prediction or something. You can see some examples at https://github.com/sugyan/tensorflow-mnist.
Based on the explanation here and Sung Kim's solution, I wrote a very simple model exactly for this problem. Basically, you need to create an object from the same class and restore its variables from the saver. You can find an example of this solution here.
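A minimal sketch of that pattern (the class, shapes, and checkpoint path are made up for illustration):
import tensorflow as tf

class Model(object):
    def __init__(self):
        # Build the same graph as in the training script.
        self.x = tf.placeholder(tf.float32, [None, 784], name='x')
        W = tf.Variable(tf.zeros([784, 10]), name='W')
        b = tf.Variable(tf.zeros([10]), name='b')
        self.y = tf.nn.softmax(tf.matmul(self.x, W) + b)

model = Model()  # an object of the same class used during training
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'model.ckpt')  # load the trained variables into this graph
    # predictions = sess.run(model.y, feed_dict={model.x: new_data})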