Feeding a single image into a model trained with Inception v3 - tensorflow

I've searched around the internet for a few days and cannot seem to find an example of someone feeding a single image into a graph created using Inception. Please let me know if I have grossly overlooked something obvious. To put the problem in context, I've:
1) Trained a model and produced the relevant checkpoint files
model.ckpt-10000.data-00000-of-00001
model.ckpt-10000.index
model.ckpt-10000.meta
2) I then load the model
tf.reset_default_graph()
sess = tf.Session()
saver = tf.train.import_meta_graph(checkpoint_path + "/model.ckpt-10000.meta", clear_devices=True)
#<tensorflow.python.training.saver.Saver object at 0x11eea89e8>
saver.restore(sess, checkpoint_path + "/model.ckpt-10000")  # restore runs the restore ops itself; no sess.run needed
3) This works correctly, so I load the default graph,
graph = tf.get_default_graph()
Here is where I am lost. As seen in this example, we must identify the layers of the graph by name in order to pass our image data in: http://cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/.
So, what are the names of these layers? I suppose they are something like "DecodeJpeg" and "/tower1/predictions/logits", but those are no better than guesses.
Thank you for your help.

The standard way of mapping between operations before and after save/restore is by adding them to collections. Search for tf.add_to_collection and tf.get_collection in https://www.tensorflow.org/api_guides/python/meta_graph. These examples save training_op and logits, but you can save your input placeholders as well.
If you cannot re-save the meta graph def and it does not have any collections, looking at node names and types (inputs are typically placeholder ops) might be the best you can do.
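As an illustration, here is a minimal sketch of both approaches; the names images_placeholder, logits and my_image_batch are just stand-ins for whatever your training code actually defines:
# At training time: tag the tensors you will need later
tf.add_to_collection('inputs', images_placeholder)
tf.add_to_collection('logits', logits)
saver = tf.train.Saver()
saver.save(sess, checkpoint_path + "/model.ckpt", global_step=10000)

# After import_meta_graph + restore: fetch them back by collection name
images_placeholder = tf.get_collection('inputs')[0]
logits = tf.get_collection('logits')[0]
predictions = sess.run(logits, feed_dict={images_placeholder: my_image_batch})

# Fallback when the meta graph has no collections: look for placeholder ops
graph = tf.get_default_graph()
print([op.name for op in graph.get_operations() if op.type == 'Placeholder'])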

Related

Is there a linter for model(inputs) of PyTorch like model.predict(inputs) of TensorFlow?

My goal is to do object detection. However, the YOLOv7 and (hack to create bounding boxes with feature maps) tutorials use PyTorch.
The problem is that model(inputs) does not have typings.
The code at L148-L150 is:
out = model(inputs)
probs, class_preds = torch.max(out[0], dim=-1)
feature_maps = out[1].to("cpu")
This forced me to debug the helper.py file to understand what out[0] and out[1] are. Currently, I assume that out[0] is the softmax probabilities and out[1] is the feature maps.
I think the answer is no; in general it is non-trivial to automatically infer the semantic meaning of the outputs of a neural network, since it is a product of the semantic meaning of the inputs and of the model structure itself. You could reference the YOLO model architecture provided in model.py (though, as an aside, you should not link to external code but rather provide the relevant code in your question itself) and investigate the structure of the outputs, then reference the structure of the labeled inputs (as the model, by definition, is learning to replicate the structure of the labels).
That being said, in your case the output is quite obviously per-class probabilities and class indexes, as shown in line 149:
probs, class_preds = torch.max(out[0], dim=-1)
since the outputs of torch.max, per the PyTorch documentation, are (maximum value, maximum index).
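A tiny self-contained example of that behaviour (the numbers are made up):
import torch

out0 = torch.tensor([[0.1, 0.7, 0.2],   # detection 1: per-class scores
                     [0.5, 0.3, 0.2]])  # detection 2: per-class scores
probs, class_preds = torch.max(out0, dim=-1)
print(probs)        # tensor([0.7000, 0.5000]) -> maximum value along the last dim
print(class_preds)  # tensor([1, 0])           -> index of that maximum, i.e. the predicted class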

I need to load weights that were saved in tensorflow 1.x into an identical model in tensorflow 2.x

So I have an old model written in tensorflow 1.x code, and it includes too much stuff I don't need; all I need is the model itself. I created the new model in a way that I'm almost certain is identical to the previous one (I checked a bunch of things).
I have the .data, .index and .meta files, and I tried very many different things; typically it says that "a few things weren't saved" and then lists all of the weights (but not really the entire thing, because when the weights are too big it just adds three dots (...)).
I would LOVE to have someone tell me how I can use that in my new model
I tried:
model.load_weights
I tried:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.import_meta_graph('checkpoints/pix2pix-60.meta')
saver.restore( "checkpoints/pix2pix-60")
I tried:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.Checkpoint(model=gen)
saver.restore(tf.train.latest_checkpoint('checkpoints')).assert_consumed()
I tried:
ck_path = tf.train.latest_checkpoint('checkpoints')
gen.load_weights(ck_path)
I tried:
from tensorflow.python.training import checkpoint_utils as cp
ckpt = cp.load_checkpoint('checkpoints/pix2pix--60')
and then tried to see what I can do with that
and I think I honestly tried a bunch more stuff.
I honestly won't mind if someone can even just tell me how I can read the .index or .data files, so that I can copy the weights out and deal with it from there.
I would again really love some help,
Thanks!
It seems that your TF1.x model is saved in ckpt format, and to restore a ckpt model you need to build the graph before loading the weights.
To convert it to a TF2.x model, you can instantiate the original model and then save it in the recommended saved_model format using the 2.x API.
You can continue with your second attempt: use compat.v1 to instantiate a default Session, load the graph from the meta file, then load the weights. After this, your Session will contain your graph and the loaded weights.
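For example, a minimal sketch of that sequence, assuming the checkpoint prefix 'checkpoints/pix2pix-60' from your attempt (note that restore takes the session as its first argument):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.import_meta_graph('checkpoints/pix2pix-60.meta')
saver.restore(sess, 'checkpoints/pix2pix-60')  # pass the session first, then the ckpt prefix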
To convert it to a 2.x model, you need to get the input and output tensors from the graph:
# you have loaded the graph and weights into sess
with sess.as_default():
    g = sess.graph
    # assuming that your input/output tensor names are "input:0" and "output:0"
    input_tensor = g.get_tensor_by_name("input:0")
    output_tensor = g.get_tensor_by_name("output:0")
    # then use the tf2.x API to save a saved_model format model
    model = tf.keras.Model(input_tensor, output_tensor, name="tf2_model")
    model.save("your_saved_dir")
A saved_model format model stores the whole graph and the weights; you can simply use
model = tf.saved_model.load("your_model_dir")
to instantiate the model for use.
OK, so I think I figured it out, although it was quite tedious.
In the tensorflow 1.x model all variables were created with tf.name_scope, and in tensorflow 2.x there is no such thing, so the variable names didn't match. I pretty much had to manually change the names so they would fit, and then it really did load the weights, as such:
checkpoint = tf.train.Checkpoint(model=gen)
checkpoint.restore('checkpoints/pix2pix--60').assert_consumed()
this also seemed to work:
gen.load_weights('checkpoints/pix2pix--60')
However, something is still not working correctly, since the output is not what I expect (i.e. what the output is in the tensorflow 1.x model).
It may have something to do with the batch_normalization weights that aren't being loaded, but I checked, and in my current tf 2.x model they are untrainable and are exactly equal to the weights that aren't being loaded.
Another weird thing is that when I do gen.predict(x) it gives me a different outcome each time, so I guess the weights aren't being frozen or something...
So I have yet to understand what went wrong previously, but I do know that there have been many changes in the tf2 API compared to tf1, including default parameters and more. What I eventually did, and what worked perfectly, was this:
tf_upgrade_v2 \
  --intree my_project/ \
  --outtree my_project_v2/ \
  --reportfile report.txt
as explained here
you just put all the pieces of code you want to convert in a folder called my_project, and it creates a folder named my_project_v2 with the tf1 code converted to tf2
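As a side note, the names and values stored in a ckpt can be read directly, which is handy when figuring out which variable names don't match; a minimal sketch, assuming the checkpoint prefix 'checkpoints/pix2pix--60':
import tensorflow as tf

ckpt_prefix = 'checkpoints/pix2pix--60'

# list (name, shape) for every variable stored in the checkpoint
for name, shape in tf.train.list_variables(ckpt_prefix):
    print(name, shape)

# read a single variable's value as a numpy array
reader = tf.train.load_checkpoint(ckpt_prefix)
first_name = tf.train.list_variables(ckpt_prefix)[0][0]
print(reader.get_tensor(first_name))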

Can I save a graph with its values without saving the inputs?

I have a network with weights filled by manual tf.assign, and now I want to save the network with the weight values but without the placeholder inputs. It seems tf.train.Saver works only when I have the feed_dict available, and tf.train.export_meta_graph only saves the network structure. I tried pickle and dill but they both have errors. Are there any better solutions for this kind of saving?
Placeholders convert the input data into Tensors, so I guess they are an important part of the Graph, and I don't understand why you don't want to include them.
Even if you use tf.assign, you can freeze the graph, which means combining the structure with the weights. What freezing does is to convert Tensorflow variables into constants.
You have to save the structure of your graph:
gdef = g.as_graph_def()
tf.train.write_graph(gdef, ".", "graph.pb", False)
Then save the weights (after training)
saver.save(sess, 'tmp/my-weights')
And freeze the graph according to the tutorial in https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite
After that, you can use the Graph.
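If you would rather do the freezing step in code than with the tutorial's script, a rough sketch using graph_util could look like this (the output node name 'output' is only an assumption; use your real one):
from tensorflow.python.framework import graph_util

# sess is the session that holds the trained variable values
frozen_gdef = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), ['output'])  # variables become constants
tf.train.write_graph(frozen_gdef, ".", "frozen_graph.pb", False)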

Saving tf.trainable_variables() using convert_variables_to_constants

I have a Keras model that I would like to convert to a Tensorflow protobuf (e.g. saved_model.pb).
This model comes from transfer learning on the VGG-19 network, in which the head was cut off and replaced with fully-connected+softmax layers that were trained while the rest of the VGG-19 network was kept frozen.
I can load the model in Keras, and then use keras.backend.get_session() to run the model in tensorflow, generating the correct predictions:
frame = preprocess(cv2.imread("path/to/img.jpg"))
keras_model = keras.models.load_model("path/to/keras/model.h5")
keras_prediction = keras_model.predict(frame)
print(keras_prediction)
with keras.backend.get_session() as sess:
    tvars = tf.trainable_variables()
    output = sess.graph.get_tensor_by_name('Softmax:0')
    input_tensor = sess.graph.get_tensor_by_name('input_1:0')
    tf_prediction = sess.run(output, {input_tensor: frame})
    print(tf_prediction)  # this matches keras_prediction exactly
If I don't include the line tvars = tf.trainable_variables(), then the tf_prediction variable is completely wrong and doesn't match the output from keras_prediction at all. In fact all the values in the output (single array with 4 probability values) are exactly the same (~0.25, all adding to 1). This made me suspect that weights for the head are just initialized to 0 if tf.trainable_variables() is not called first, which was confirmed after inspecting the model variables. In any case, calling tf.trainable_variables() causes the tensorflow prediction to be correct.
The problem is that when I try to save this model, the variables from tf.trainable_variables() don't actually get saved to the .pb file:
with keras.backend.get_session() as sess:
    tvars = tf.trainable_variables()
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), ['Softmax'])
    graph_io.write_graph(constant_graph, './', 'saved_model.pb', as_text=False)
What I am asking is, how can I save a Keras model as a TensorFlow protobuf with the tf.trainable_variables() intact?
Thanks so much!
So your approach of freezing the variables in the graph (converting them to constants) should work, but it isn't necessary and is trickier than the other approaches (more on this below). If you want graph freezing for some reason (e.g. exporting to a mobile device), I'd need more details to help debug, as I'm not sure what implicit stuff Keras is doing behind the scenes with your graph. However, if you just want to save and load a graph later, I can explain how to do that (though no guarantees that whatever Keras is doing won't screw it up... happy to help debug that).
So there are actually two formats at play here. One is the GraphDef, which is used for checkpointing, as it does not contain metadata about inputs and outputs. The other is a MetaGraphDef, which contains metadata plus a graph def; the metadata is useful for prediction and for running a ModelServer (from tensorflow/serving).
In either case you need to do more than just call graph_io.write_graph because the variables are usually stored outside the graphdef.
There are wrapper libraries for both these use cases. tf.train.Saver is primarily used for saving and restoring checkpoints.
However, since you want prediction, I would suggest using a tf.saved_model.builder.SavedModelBuilder to build a SavedModel binary. I've provided some boilerplate for this below:
from tensorflow.python.saved_model.signature_constants import DEFAULT_SERVING_SIGNATURE_DEF_KEY as DEFAULT_SIG_DEF

builder = tf.saved_model.builder.SavedModelBuilder('./mymodel')
with keras.backend.get_session() as sess:
    output = sess.graph.get_tensor_by_name('Softmax:0')
    input_tensor = sess.graph.get_tensor_by_name('input_1:0')
    sig_def = tf.saved_model.signature_def_utils.predict_signature_def(
        {'input': input_tensor},
        {'output': output}
    )
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],  # tags are passed as a list
        signature_def_map={
            DEFAULT_SIG_DEF: sig_def
        }
    )
builder.save()
After running this code you should have a mymodel/saved_model.pb file as well as a directory mymodel/variables/ with protobufs corresponding to the variable values.
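If you want to sanity-check what was written, the saved_model_cli tool that ships with TensorFlow can print the tags and signatures, for example:
saved_model_cli show --dir ./mymodel --all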
Then to load the model again, simply use tf.saved_model.loader:
# Does Keras give you the ability to start with a fresh graph?
# If not you'll need to do this in a separate program to avoid
# conflicts with the old default graph
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        './mymodel'
    )
    # From this point variables and graph structure are restored
    sig_def = meta_graph_def.signature_def[DEFAULT_SIG_DEF]
    # the signature entries are TensorInfo protos, so run them by tensor name
    print(sess.run(sig_def.outputs['output'].name,
                   feed_dict={sig_def.inputs['input'].name: frame}))
Obviously there's a more efficient prediction available with this code through tensorflow/serving, or Cloud ML Engine, but this should work.
It's possible that Keras is doing something under the hood which will interfere with this process as well, and if so we'd like to hear about it (I'd also like to make sure that Keras users are able to freeze graphs, so if you want to send me a gist with your full code or something, maybe I can find someone who knows Keras well to help me debug).
EDIT: You can find an end to end example of this here: https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/keras/trainer/model.py#L85

Reusing part of a tensorflow trained graph

So, I trained a tensorflow model with a few layers, more or less like this:
with tf.variable_scope('model1') as scope:
    inputs = tf.placeholder(tf.int32, [None, num_time_steps])
    embeddings = tf.get_variable('embeddings', (vocab_size, embedding_size))
    lstm = tf.nn.rnn_cell.LSTMCell(lstm_units)
    embedded = tf.nn.embedding_lookup(embeddings, inputs)
    _, state = tf.nn.dynamic_rnn(lstm, embedded, dtype=tf.float32, scope=scope)
    # more stuff on the state
Now, I wanted to reuse the embedding matrix and the lstm weights in another model, which is very different from this one except for these two components.
As far as I know, if I load them with a tf.Saver object, it will look for
variables with the exact same names, but I'm using different variable_scopes in the two graphs.
In this answer, it is suggested to create the graph where the LSTM is trained as a superset of the other one, but I don't think it is possible in my case, given the differences in the two models. Anyway, I don't think it is a good idea to make one graph dependent on the other, if they do independent things.
I thought about changing the variable scope of the LSTM weights and embeddings in the serialized graph. I mean, where it originally read model1/Weights:0 or something, it would be another_scope/Weights:0. Is it possible and feasible?
Of course, if there is a better solution, it is also welcome.
I found out that the Saver can be initialized with a dictionary mapping variable names (without the trailing :0) in the serialized file to the variable objects I want to restore in the graph. For example:
varmap = {'model1/some_scope/weights': variable_in_model2,
          'model1/another_scope/weights': another_variable_in_model2}

saver = tf.train.Saver(varmap)
saver.restore(sess, path_to_saved_file)
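If there are many variables, the same mapping can also be built programmatically rather than by hand, for example by swapping the scope prefix; a small sketch, assuming the new model lives under the scope 'another_scope':
# collect the variables of the new model and rewrite their keys to use the old scope name
model2_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='another_scope')
varmap = {v.op.name.replace('another_scope', 'model1', 1): v for v in model2_vars}

saver = tf.train.Saver(varmap)
saver.restore(sess, path_to_saved_file)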