TensorFlow simple_save(): why are input/output dicts necessary?

Since the whole graph and all variables are loaded anyway, why am I required to provide inputs and outputs to tf.saved_model.simple_save()?
I tried loading a variable with get_tensor_by_name() that I didn't specify in the inputs/outputs dictionaries, and it worked! So why won't it let me pass blank/None inputs/outputs and just grab my variables by their names?

When you specify the input and output tensors of your model, the inference graph is fully specified. Imagine a model that has a single input but two outputs. For instance, the model predicts the temperature for tomorrow and whether it will rain or not. Maybe I want to save an inference graph for a model that only gives me the temperature.
When you specify the ins and outs, TensorFlow knows which layers connect them. The reason get_tensor_by_name() worked in your case is probably that you fetched a tensor that lies on a path connecting your inputs to your outputs.
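To illustrate, here is a minimal sketch (not from the original question; the tensor names and layer sizes are made up) of exporting two different inference graphs from the same session, depending on which outputs are listed:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 10], name='x')
hidden = tf.layers.dense(x, 32, activation=tf.nn.relu)
temperature = tf.layers.dense(hidden, 1, name='temperature')
will_rain = tf.layers.dense(hidden, 1, activation=tf.nn.sigmoid, name='will_rain')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Export only the temperature head: the resulting SavedModel does not
    # need to include the rain head at all.
    tf.saved_model.simple_save(
        sess, 'export/temperature_only',
        inputs={'x': x},
        outputs={'temperature': temperature})

    # Export both heads as a second, different inference graph.
    tf.saved_model.simple_save(
        sess, 'export/both_heads',
        inputs={'x': x},
        outputs={'temperature': temperature, 'will_rain': will_rain})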

Related

Tensorflow Saving Error (from Tensorflow Example)

I am trying to use the Basic Text Classification example from TensorFlow on my own dataset. Training and verification have gone well, and I am at the point in the tutorial where the model is exported. The model compiles and works on an array of strings.
After that, I'd like to save the model in h5 format for use in other projects. At this point, the tutorial refers you to the save and load Keras models tutorial.
This second tutorial essentially says to do this:
model.save('path/saved_model.h5')
This fails with
ValueError: Weights for model sequential_X have not yet been created. Weights are created when the Model is first called on inputs or build() is called with an input_shape.
So next I attempt to do this:
model.build((None, max_features))
model.save('path/saved_model.h5')
There are several errors with this:
ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: <tf.Tensor 'Placeholder:0' shape=(None, 45000) dtype=float32>
TypeError: Input 'input' of 'StringLower' Op has type float32 that does not match expected type of string.
ValueError: You cannot build your model by calling build if your layers do not support float type inputs. Instead, in order to instantiate and build your model, call your model on real tensor data (of the correct dtype).
I think this essentially means that the input I defined to pass into model.build defaults to float and needs to be string. I think I have two options:
Somehow define my input layer to be string, which I cannot see how to do. This feels like the correct thing to do.
Use model.call. However, I am not sure how to 'call my model on real tensor data', because tensors can't be strings and that is the input to the network (my rough guess at what this means is sketched below).
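For what it's worth, my best guess at what option 2 would look like is something like this (the sample strings are just placeholders, and I'm not sure this is even valid):
import tensorflow as tf

# My guess at "calling the model on real tensor data": wrap a small batch of
# raw text in tf.constant and call the model once so the weights get created,
# then save.
sample_batch = tf.constant(["this is a sample review", "another sample review"])
model(sample_batch)
model.save('path/saved_model.h5')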
I've seen one other person with this issue here, with no solution other than to rebuild the model in functional style with mixed results. I am not sure of the point of rebuilding in the functional style since I don't fully understand the problem.
I'd prefer to have the TextVectorization layer built into the final model to simplify deployment. This is exactly the reason the docs give for doing this in the example in the first place. (The model will save without it.)
I am a novice with this so I might be making a simple mistake. How can I get this model to save?

Training Tensorflow only one object

Corresponding TensorFlow documentation. I trained the model on 3 objects and it works (it can recognize these objects). But when I show it other objects (not those 3), it doesn't behave correctly.
I want to train on only one object (for example, a cup) and recognize only this object. Is that possible with TensorFlow?
Your question doesn't provide enough details, but my guess is that you trained the network with a softmax activation and a categorical or sparse-categorical cross-entropy loss. If that guess is right, such a network always assigns its prediction to one of the three classes, regardless of the actual data, i.e. there is no "none of the above" option.
To train a network to recognize only one class of objects, use a single output with one channel and a sigmoid activation. Use a BinaryCrossentropy loss to train your model for the specific object, and provide a dataset that includes examples both with and without this object.
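A minimal sketch of such a single-output setup (the layer sizes and input shape are placeholders, not taken from the question):
import tensorflow as tf

# Hypothetical single-class ("cup" vs. "not cup") classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(128, 128, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # single channel, sigmoid
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=['accuracy'])

# train_images / train_labels: images containing the object labelled 1,
# everything else labelled 0 (placeholder names).
# model.fit(train_images, train_labels, epochs=10)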

How to feed input into one layer in a tensorflow pre-trained model?

The pretrained model has many layers, and I want to feed my input directly into one intermediate layer (and discard the results of the previous layers).
I only have the .pb file and the ckpt files of that model, so how can I modify the computation flow without the source code?
This is the only code file I have, but I don't know how to use it. Is the graph generated by this file? (It looks very different from normal TensorFlow files.) https://github.com/tensorflow/models/blob/master/research/object_detection/models/ssd_mobilenet_v2_feature_extractor.py
Here is what you need to do:
Load the model
Find the name of the layer or retrieve the tensor of the layer you want to feed values to (let's name it 'Z' for the sake of the explanation)
Find the name of the layer or retrieve the tensor of the layer you want to get results from ('Y')
Run this code snippet: results = sess.run('Y:0', {'Z:0': your_value})
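Put together, that could look roughly like the sketch below (it assumes a frozen .pb graph and the placeholder tensor names 'Z:0' and 'Y:0' from above; if you only have ckpt files you would restore the variables with a tf.train.Saver instead):
import tensorflow as tf
import numpy as np

# 1. Load the model: read the frozen GraphDef from the .pb file.
with tf.gfile.GFile('model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# 2./3. Inspect the graph to find the tensor names you want to feed and fetch.
# for op in graph.get_operations():
#     print(op.name)

with tf.Session(graph=graph) as sess:
    your_value = np.zeros((1, 10, 10, 256), dtype=np.float32)  # placeholder shape/dtype for 'Z:0'
    # 4. Feed the intermediate tensor 'Z:0' directly and fetch 'Y:0';
    #    the ops that would normally compute 'Z:0' are simply bypassed.
    results = sess.run('Y:0', feed_dict={'Z:0': your_value})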

Can I save a graph with its values without saving the inputs?

I have a network with weights filled by manual tf.assign, and now I want to save the network with the weight values but without the placeholder inputs. It seems tf.train.Saver works only when I have the feed_dict available, and tf.train.export_meta_graph only saves the network structure. I tried pickle and dill, but both gave errors. Are there any better solutions for this kind of saving?
Placeholders convert the input data into tensors, so I guess they are an important part of the graph, and I don't understand why you don't want to include them.
Even if you use tf.assign, you can freeze the graph, which means combining the structure with the weights. What freezing does is convert TensorFlow variables into constants.
You have to save the structure of your graph:
gdef = g.as_graph_def()
tf.train.write_graph(gdef, ".", "graph.pb", False)
Then save the weights (after training):
saver.save(sess, 'tmp/my-weights')
And freeze the graph according to the tutorial in https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite
After that, you can use the Graph.
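As an alternative to the command-line tool described in that tutorial, the same freezing step can be done in code with tf.graph_util.convert_variables_to_constants (a rough sketch; 'output' is a placeholder for your real output node name):
import tensorflow as tf

with tf.Session(graph=g) as sess:
    saver.restore(sess, 'tmp/my-weights')  # load the trained weights
    # Replace every variable reachable from the listed output nodes with a constant.
    frozen_gdef = tf.graph_util.convert_variables_to_constants(
        sess, g.as_graph_def(), output_node_names=['output'])
    tf.train.write_graph(frozen_gdef, ".", "frozen_graph.pb", False)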

Tensorflow Transfer Learning with Input Pipeline

I want to use transfer learning with Google's Inception network for an image recognition problem. I am using retrain.py from the TensorFlow example source for inspiration.
In retrain.py, the Inception graph is loaded and a feed dict is used to feed the new images into the model's input layer. However, I have my data serialized in TFRecord files and have been using an input pipeline to feed in my inputs, as demonstrated here.
So I have a tensor images which returns my input data in batches when run. But how can I feed these images into Inception? I can't use a feed dict since my inputs are tensors, not NumPy arrays. My two ideas are
1) simply call sess.run() on each batch to convert it to a NumPy array, and then use a feed dict to pass it to Inception.
2) replace the input node in the Inception graph with my own batch input tensor
I think (1) would work, but it seems a little inelegant. (2) seems more natural to me, but I can't do exactly that because TensorFlow graphs can only be appended to and not otherwise modified.
Is there a better approach?
You can implement option (2), replacing the input node, but you will need to modify retrain.py to do so. The tf.import_graph_def() function supports a limited form of modification to the imported graph, by remapping tensors in the imported graph to existing tensors in the target graph.
This line in retrain.py calls tf.import_graph_def() to import the Inception model, where jpeg_data_tensor becomes the tensor that you feed with input data:
bottleneck_tensor, jpeg_data_tensor, resized_input_tensor = (
    tf.import_graph_def(graph_def, name='', return_elements=[
        BOTTLENECK_TENSOR_NAME, JPEG_DATA_TENSOR_NAME,
        RESIZED_INPUT_TENSOR_NAME]))
Instead of retrieving jpeg_data_tensor from the imported graph, you can remap it to an input pipeline that you construct yourself:
# Output of a training pipeline, returning a `tf.string` tensor containing
# a JPEG-encoded image.
jpeg_data_tensor = ...
bottleneck_tensor, resized_input_tensor = (
    tf.import_graph_def(
        graph_def,
        input_map={JPEG_DATA_TENSOR_NAME: jpeg_data_tensor},
        return_elements=[BOTTLENECK_TENSOR_NAME, RESIZED_INPUT_TENSOR_NAME]))
Wherever you previously fed jpeg_data_tensor, you no longer need to feed it, because the inputs will be read from the input pipeline you constructed. (Note that you might need to handle resized_input_tensor as well... I'm not intimately familiar with retrain.py, so some restructuring might be necessary.)
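For completeness, a pipeline that produces such a jpeg_data_tensor from TFRecord files might look roughly like the sketch below (the filename and the feature key 'image/encoded' are assumptions about how your records were written):
import tensorflow as tf

# Queue-based input pipeline (TF 1.x style) that yields one JPEG-encoded
# image string at a time, suitable for remapping onto JPEG_DATA_TENSOR_NAME.
filename_queue = tf.train.string_input_producer(['train-00000-of-00001.tfrecord'])
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)

features = tf.parse_single_example(
    serialized_example,
    features={'image/encoded': tf.FixedLenFeature([], tf.string)})

# Scalar tf.string tensor holding the raw JPEG bytes.
jpeg_data_tensor = features['image/encoded']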