How to change retrain.py to take in a b64 image - tensorflow

Right now, I am using the default retrain.py from tensorflow to train an image classification model. But when I serve the model on Google AI Platform and try to call the API, I get an error saying that the image is too large, since it is sent as a float32 array. I'm thinking the best fix would be to change retrain.py to take in a b64-encoded image instead of a float32 array, but I have no idea how to do that. Any suggestions?
Any help is appreciated! Thanks!
UPDATE
def export_model(module_spec, class_count, saved_model_dir):
    sess, in_image, _, _, _, _ = build_eval_session(module_spec, class_count)
    image = tf.placeholder(shape=[None], dtype=tf.string)
    export_dir = "/tmp/save/"
    inputs = {'image_bytes': image}
    with sess.graph.as_default() as graph:
        tf.saved_model.simple_save(sess, export_dir, inputs, {'prediction': graph.get_tensor_by_name('final_result:0')})
This is what I have updated my code to be, but it still doesn't work

Take a look at this post; it contains the info you need. If not, reply and I'll help you prepare some code. You probably need the URL-safe b64 variant, though.
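For reference, here is a minimal sketch of what the client side could look like, assuming the exported input is a string tensor that receives the base64 text directly (the image_bytes alias is just an example name and must match whatever you export); tf.decode_base64 expects the URL-safe alphabet, hence urlsafe_b64encode:
import base64
import json

with open('image.jpg', 'rb') as f:
    # URL-safe variant, because tf.decode_base64 uses the web-safe alphabet
    img_b64 = base64.urlsafe_b64encode(f.read()).decode('utf-8')

# one instance per request row; 'image_bytes' must match your exported input alias
payload = json.dumps({'instances': [{'image_bytes': img_b64}]})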
EDIT
Your code is a bit confusing; I don't think the input is actually connected to your graph. Have you looked at the graph with tf.summary.FileWriter('output folder', sess.graph)?
I'm going to gradually explain how to build some layers in front of your model, with some examples. This code should not be in retrain.py and can be run after you have trained the model.
1) Load your tensorflow model. If it was built with SavedModelBuilder or simple_save, you can do it like this:
from tensorflow.python.saved_model import tag_constants

def loader(path):
    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(sess, [tag_constants.TRAINING], path)
        return tf.get_default_graph().as_graph_def()
The tag constants can be checked with the saved_model_cli tool; it is possible that this has to be empty ([]) in your case.
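For example (assuming the model was exported to /tmp/save/ as in the question's update), you can inspect the tags and signatures with:
saved_model_cli show --dir /tmp/save/ --all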
2) Add the layers/tensors you need: something that accepts a byte string (base64 in this case), decodes it, and transforms it into a 3D image:
image_str_tensor = tf.placeholder(dtype=tf.string, shape=(None,), name='input_image_bytes')
input_image = tf.decode_base64(image_str_tensor)
decoder = tf.image.decode_jpeg(input_image[0], channels=3)
The other ops, like converting to float, expanding dims and resizing, should already be in the graph if you got it from retrain.py.
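If they turn out not to be in the graph, a rough sketch of those extra ops would look like this (the 299x299 size is an assumption; check what your retrained graph actually expects):
# only add these if the retrained graph does not already contain them
image_float = tf.image.convert_image_dtype(decoder, dtype=tf.float32)  # uint8 -> [0, 1] float
image_resized = tf.image.resize_images(image_float, [299, 299])
image_batched = tf.expand_dims(image_resized, 0)  # add the batch dimension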
3) Connect them to your graph by feeding them in via input_map:
graph_def_inception = loader('path to your saved model')
output_prediction, = tf.import_graph_def(graph_def_inception, input_map={"DecodeJpeg:0": decoder}, return_elements=['final_result:0'], name="")
4) Create a SavedModel and check that everything is the way you want it to be:
builder = tf.saved_model.builder.SavedModelBuilder('output/model/path')
with tf.Session() as sess:
    tf.summary.FileWriter('output/graph_log/files', sess.graph)
    input_tensor_info = tf.saved_model.utils.build_tensor_info(input_image)
    output_tensor_info = tf.saved_model.utils.build_tensor_info(output_prediction)
    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'input_image': input_tensor_info},
        outputs={'output_prediction': output_tensor_info},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
    # save as SavedModel
    builder.add_meta_graph_and_variables(sess,
                                         [tf.saved_model.tag_constants.SERVING],
                                         signature_def_map={'serving_default': signature})
builder.save()
5) If you get errors, try to debug them with tensorboard:
tensorboard --logdir=output/graph_log/files
I hope this helps a bit. This code will not work on the first try; you will need to puzzle with some parts. If you truly cannot succeed, then you should share the model; maybe I can do it then and share the code with you, if I have time.

Related

How to load .ckpt and .meta output files from a TF model, input prediction data and make a prediction?

I followed a tutorial to use tf.estimator.DNNLinearCombinedRegressor to train a TF model to predict Chicago taxi fare. The model generates the following outputs in a Google Cloud Storage bucket.
checkpoint
eval_estimator-eval/
events.out.tfevents.1590293793.taxi-train-job-5-5qj5l
graph.pbtxt
model.ckpt-0.data-00000-of-00002
model.ckpt-0.data-00001-of-00002
model.ckpt-0.index
model.ckpt-0.meta
model.ckpt-15165.data-00000-of-00002
model.ckpt-15165.data-00001-of-00002
model.ckpt-15165.index
model.ckpt-15165.meta
model.ckpt-25000.data-00000-of-00002
model.ckpt-25000.data-00001-of-00002
model.ckpt-25000.index
model.ckpt-25000.meta
Question: How can I load the model using these files, input a new data point, and make a prediction?
Suppose these are the variables and corresponding values:
hour_of_day: 8
pickup_latitude: 41.5
pickup_longitude: -87.2
dropoff_latitude: 41.9
dropoff_longitude: -87.1
I searched for answers online, and this is what I have now. It gave me an error message. Can anyone help and let me know how to correct it? Thank you!
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('gs://folder_name/model.ckpt-15165.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('gs://folder_name/'))
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("Placeholder:0")
    feed_dict = {x: [8, 41.5, -87.2, 41.9, -87.1]}
    print(sess.run(feed_dict))
I figured it out.
There's a built-in function that makes it easy to make predictions.
pred_input_func = tf.estimator.inputs.pandas_input_fn(x=x_test, shuffle=False)
predictions = estimator.predict(pred_input_func)
for result in predictions:
    print(result)
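Note that this assumes you still have the estimator object around. If you are starting from a fresh session, one option (a sketch; the feature columns and hidden units below are assumptions and must match what was used for training) is to recreate the estimator with model_dir pointing at the bucket, so that predict() restores the latest checkpoint by itself:
feature_columns = [
    tf.feature_column.numeric_column('hour_of_day'),
    tf.feature_column.numeric_column('pickup_latitude'),
    tf.feature_column.numeric_column('pickup_longitude'),
    tf.feature_column.numeric_column('dropoff_latitude'),
    tf.feature_column.numeric_column('dropoff_longitude'),
]
estimator = tf.estimator.DNNLinearCombinedRegressor(
    model_dir='gs://folder_name/',           # folder that holds the checkpoints
    linear_feature_columns=feature_columns,  # must match the training setup
    dnn_feature_columns=feature_columns,
    dnn_hidden_units=[64, 32])               # must match the training setup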

How do I reuse a trained model in DNN?

Hi everyone!
I have a question related to reusing a trained model (TensorFlow).
I have trained a model and I want to predict new data with it.
I use DNNClassifier.
I have model.ckpt-200000.meta, model.ckpt-200000.index, a checkpoint file, and an eval folder,
but I don't know how to reuse this model.
Please help me.
First, you need to import your graph,
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    new_saver = tf.train.import_meta_graph('model.ckpt-200000.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('./'))
Then you can give input to the graph and get the output.
graph = tf.get_default_graph()
input = graph.get_tensor_by_name("input:0")  # input tensor by name
feed_dict = {input: }  # your input to the model goes here
# Now, access the output operation.
op_to_restore = graph.get_tensor_by_name("y_:0")  # output tensor
print(sess.run(op_to_restore, feed_dict))  # get output here
A few things to note:
You can replace the training part of your graph with the above code (i.e. you can get the output without training).
However, you still have to construct your graph as before and only replace the training part.
The above method only loads the weights into the constructed graph; therefore, you have to construct the graph first.
A good tutorial on this can be found here, http://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/
If you don't want to construct the graph again you can follow this tutorial, https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
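Since the question mentions DNNClassifier, another option is to skip the raw graph surgery and let the Estimator API restore the checkpoint for you. A rough sketch (the feature columns, hidden units and class count are assumptions and must match the trained model):
import numpy as np
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column('x', shape=[4])]  # hypothetical feature spec
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 20, 10],  # must match the trained model
    n_classes=3,                # must match the trained model
    model_dir='./')             # folder containing model.ckpt-200000.* and checkpoint

predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'x': np.array([[5.9, 3.0, 4.2, 1.5]], dtype=np.float32)},
    shuffle=False)
for prediction in classifier.predict(input_fn=predict_input_fn):
    print(prediction)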

Feeding a single image into model trained with inception v3

I've searched around the internet for a few days and cannot seem to find an example of someone feeding a single image into a graph created using inception. Please let me know if I have grossly overlooked something obvious. To put the problem in context, I've
1) Trained a model and produced the relevant checkpoint files
model.ckpt-10000.data-00000-of-00001
model.ckpt-10000.index
model.ckpt-10000.meta
2) I then load the model
tf.reset_default_graph()
sess = tf.Session()
saver = tf.train.import_meta_graph(checkpoint_path + "/model.ckpt-10000.meta", clear_devices=True)
#<tensorflow.python.training.saver.Saver object at 0x11eea89e8>
sess.run(saver.restore(sess, checkpoint_path + "/model.ckpt-10000"))
3) This works correctly, so I load the default graph,
graph = tf.get_default_graph()
Here is where I am lost. As seen in this example, we must identify the layers of the graph by name in order to pass our image data into them -- http://cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/.
So, what are the names of these layers? I suppose they are something like "DecodeJpeg" and "/tower1/preditions/logits", but those are no better than guesses.
Thank you for your help.
The standard way of mapping between operations before and after save/restore is by adding them to collections. Search for tf.add_to_collection and tf.get_collection in https://www.tensorflow.org/api_guides/python/meta_graph. These examples save training_op and logits, but you can save your input placeholders as well.
If you cannot re-save the meta graph def and it does not have any collections, looking at node names and types (inputs are typically placeholder ops) might be the best you can do.
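A small sketch of the collections approach (the tensor variable names here are placeholders for whatever your graph actually defines):
# At graph-construction / export time, tag the tensors you will need later:
tf.add_to_collection('serving_inputs', jpeg_data_tensor)
tf.add_to_collection('serving_outputs', logits)

# After import_meta_graph(...) + restore(...) in a new session:
jpeg_data_tensor = tf.get_collection('serving_inputs')[0]
logits = tf.get_collection('serving_outputs')[0]

# If you cannot re-save and there are no collections, list the placeholder ops:
print([n.name for n in tf.get_default_graph().as_graph_def().node
       if n.op == 'Placeholder'])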

Saving tf.trainable_variables() using convert_variables_to_constants

I have a Keras model that I would like to convert to a Tensorflow protobuf (e.g. saved_model.pb).
This model comes from transfer learning on the VGG-19 network, in which the head was cut off and retrained with fully-connected+softmax layers while the rest of the VGG-19 network was frozen.
I can load the model in Keras, and then use keras.backend.get_session() to run the model in tensorflow, generating the correct predictions:
frame = preprocess(cv2.imread("path/to/img.jpg"))
keras_model = keras.models.load_model("path/to/keras/model.h5")
keras_prediction = keras_model.predict(frame)
print(keras_prediction)
with keras.backend.get_session() as sess:
    tvars = tf.trainable_variables()
    output = sess.graph.get_tensor_by_name('Softmax:0')
    input_tensor = sess.graph.get_tensor_by_name('input_1:0')
    tf_prediction = sess.run(output, {input_tensor: frame})
    print(tf_prediction)  # this matches keras_prediction exactly
If I don't include the line tvars = tf.trainable_variables(), then the tf_prediction variable is completely wrong and doesn't match the output from keras_prediction at all. In fact all the values in the output (single array with 4 probability values) are exactly the same (~0.25, all adding to 1). This made me suspect that weights for the head are just initialized to 0 if tf.trainable_variables() is not called first, which was confirmed after inspecting the model variables. In any case, calling tf.trainable_variables() causes the tensorflow prediction to be correct.
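(As a side note, one quick way to check this kind of thing is to ask the session which variables it still considers uninitialized; a small sketch:)
with keras.backend.get_session() as sess:
    # prints the names of variables never initialized or restored in this session
    print(sess.run(tf.report_uninitialized_variables()))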
The problem is that when I try to save this model, the variables from tf.trainable_variables() don't actually get saved to the .pb file:
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io

with keras.backend.get_session() as sess:
    tvars = tf.trainable_variables()
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), ['Softmax'])
    graph_io.write_graph(constant_graph, './', 'saved_model.pb', as_text=False)
What I am asking is: how can I save a Keras model as a Tensorflow protobuf with the tf.trainable_variables() intact?
Thanks so much!
So your approach of freezing the variables in the graph (converting them to constants) should work, but it isn't necessary and is trickier than the other approaches (more on this below). If you want graph freezing for some reason (e.g. exporting to a mobile device), I'd need more details to help debug, as I'm not sure what implicit stuff Keras is doing behind the scenes with your graph. However, if you just want to save and load a graph later, I can explain how to do that (though no guarantees that whatever Keras is doing won't screw it up; happy to help debug that).
So there are actually two formats at play here. One is the GraphDef, which is used for Checkpointing, as it does not contain metadata about inputs and outputs. The other is a MetaGraphDef which contains metadata and a graph def, the metadata being useful for prediction and running a ModelServer (from tensorflow/serving).
In either case you need to do more than just call graph_io.write_graph because the variables are usually stored outside the graphdef.
There are wrapper libraries for both these use cases. tf.train.Saver is primarily used for saving and restoring checkpoints.
However, since you want prediction, I would suggest using a tf.saved_model.builder.SavedModelBuilder to build a SavedModel binary. I've provided some boiler plate for this below:
from tensorflow.python.saved_model.signature_constants import DEFAULT_SERVING_SIGNATURE_DEF_KEY as DEFAULT_SIG_DEF

builder = tf.saved_model.builder.SavedModelBuilder('./mymodel')
with keras.backend.get_session() as sess:
    output = sess.graph.get_tensor_by_name('Softmax:0')
    input_tensor = sess.graph.get_tensor_by_name('input_1:0')
    sig_def = tf.saved_model.signature_def_utils.predict_signature_def(
        {'input': input_tensor},
        {'output': output}
    )
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            DEFAULT_SIG_DEF: sig_def
        }
    )
    builder.save()
After running this code you should have a mymodel/saved_model.pb file as well as a directory mymodel/variables/ with protobufs corresponding to the variable values.
Then to load the model again, simply use tf.saved_model.loader:
# Does Keras give you the ability to start with a fresh graph?
# If not you'll need to do this in a separate program to avoid
# conflicts with the old default graph
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        './mymodel'
    )
    # From this point variables and graph structure are restored
    sig_def = meta_graph_def.signature_def[DEFAULT_SIG_DEF]
    print(sess.run(sig_def.outputs['output'].name,
                   feed_dict={sig_def.inputs['input'].name: frame}))
Obviously there's a more efficient prediction available with this code through tensorflow/serving, or Cloud ML Engine, but this should work.
It's possible that Keras is doing something under the hood which will interfere with this process as well, and if so we'd like to hear about it (and I'd like to make sure that Keras users are able to freeze graphs as well, so if you want to send me a gist with your full code or something maybe I can find someone who knows Keras well to help me debug.)
EDIT: You can find an end to end example of this here: https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/keras/trainer/model.py#L85
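If you do deploy the SavedModel to Cloud ML Engine, calling it looks roughly like this (the model and version names, and the instances file, are hypothetical):
gcloud ml-engine predict --model my_model --version v1 --json-instances instances.json
where instances.json holds one JSON object per line whose keys match the signature's input aliases (here 'input').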

tensorflow image retraining with serving

I am trying to serve a retrained Inception graph using TensorFlow Serving. For retraining, I am using this example. However, I need to make changes to this graph to get it working with the serving export code.
Since in tensorflow serving, you will receive serialized images as input, graph input should start with this:
serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
feature_configs = {
    'image/encoded': tf.FixedLenFeature(shape=[], dtype=tf.string),
}
tf_example = tf.parse_example(serialized_tf_example, feature_configs)
jpegs = tf_example['image/encoded']
images = tf.map_fn(preprocess_image, jpegs, dtype=tf.float32)
This images tensor should be the input to the retrained Inception graph. However, I don't know if it's possible to prepend one graph to another in tensorflow the way you can easily append using placeholder_with_input (which has been done in the retraining code).
graph, bottleneck_tensor, jpeg_data_tensor, resized_image_tensor = (
create_inception_graph())
Ideally, in the image retraining code, I receive a placeholder tensor jpeg_data_tensor. I need to append the images tensor to this placeholder tensor and export everything as a single graph using the exporter so that it can be served with tensorflow serving. However, I don't know of any tensorflow instruction that does this. Are there any other alternatives apart from this method?
One way of going about it is:
from tensorflow.python.platform import gfile

model_path = 'trained/export.pb'
with tf.Graph().as_default():
    with tf.Session() as sess:
        with gfile.FastGFile(model_path, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
        # Your prepending ops here
        images_placeholder = tf.placeholder(tf.string, name='tf_example')
        ...
        images = tf.map_fn(preprocess_image, jpegs, dtype=tf.float32)
        tf.import_graph_def(graph_def, name='inception', input_map={'ResizeBilinear:0': images})
Notice especially the input_map argument. ResizeBilinear:0 is likely not the correct name of the operation you need - you can list the ops by:
[n.name for n in tf.get_default_graph().as_graph_def().node]
I realize this is not a full answer and perhaps not the most efficient but hopefully it can get you started. Just a heads-up, there is also this blogpost.
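Both the question's snippet and the one above assume a preprocess_image function; a rough sketch of one (the 299x299 size is an assumption for Inception-style models) could be:
def preprocess_image(jpeg_bytes):
    # Decode a single serialized JPEG and scale pixel values to [0, 1]
    image = tf.image.decode_jpeg(jpeg_bytes, channels=3)
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)
    # Resize to whatever spatial size the retrained graph expects
    image = tf.image.resize_images(image, [299, 299])
    return image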
So, since you have already retrained the model, I'm assuming it is a Protobuf; you can just load it into a Python object and serve from that object using a custom function that processes either a batch or a single example.
As for your graph question: as far as I know, when you load a tf.Graph() object you are only working with that object and can't work with other graphs. That said, if you have another graph that is an extension of the existing Inception-V3 graph, you can easily add it to the computation graph of your custom graph.