Loading file from path contained in tf.Tensor - tensorflow

I built a simple model using tf.keras and a tf.data.Dataset for efficient loading, as the dataset is a couple of GB in size.
The images are in TIFF format and therefore need to be loaded directly as a numpy.array.
I have a dataset of labels and file paths and want to map a loading function over that dataset. To do that, I somehow have to get the Python string representation out of the tensor.
I tried using the usual tf.Tensor.eval() and then joining the chars into a full string, but I get the error ValueError: Cannot evaluate tensor using eval(): No default session is registered., which makes sense as there is no session before the Keras model is executed.
Then I tried putting tf.enable_eager_execution() right below my tensorflow import in the dataset file (and changing .eval() to .numpy()), but I get the error AttributeError: 'Tensor' object has no attribute 'numpy', hinting that tf.enable_eager_execution() did not take effect.
Basically, I'm trying to read a string contained in a tensor as below:
path = tf.decode_raw(path, tf.uint8)
path = ''.join(map(chr, path.eval())) # with session
path = ''.join(map(chr, path.numpy())) # with eager execution
image = PIL.Image.open(path)
image = numpy.array(image)
Both approaches work fine when 'prototyping' in a single file, but not when, e.g., my model lives in model.py and the dataset in dataset.py, even with tf.enable_eager_execution() in both.

This issue is quite old, but as it could be useful for someone else, here is my approach:
You can use tf.io.read_file to read the file and then convert it to an image with tf.image.decode_png if your image is a PNG. Several other formats are available.
With tf_filepath being the path to your RGB image contained in a tensor:
image = tf.io.read_file(tf_filepath)
image = tf.image.decode_png(image, channels=3, dtype=tf.uint8)
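Since the original question was about TIFF files, for which tf.image has no decoder, here is a hedged sketch of an alternative: drop into Python with tf.py_function (available in newer TF 1.x and in TF 2) and let PIL do the decoding. The helper names are illustrative, and the float32 cast and RGB shape are assumptions:
import numpy as np
import tensorflow as tf
from PIL import Image

def _load_tiff(path):
    # inside tf.py_function the argument is an eager tensor, so .numpy()
    # works here even though it fails in graph mode (the error above)
    image = Image.open(path.numpy().decode("utf-8"))
    return np.asarray(image, dtype=np.float32)

def _parse(path, label):
    image = tf.py_function(_load_tiff, [path], tf.float32)
    image.set_shape([None, None, 3])  # py_function drops static shape info
    return image, label

# dataset is assumed to be the (path, label) dataset from the question
dataset = dataset.map(_parse)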

Related

Tensorflow frozen_graph to tflite for android app

I am a newbie with the TensorFlow object detection library. I have a specific dataset that I produced and labeled myself, consisting of thousands of JPGs, and I have run the detection on these images. At the end of the process I got a frozen_graph, and from it I exported a model.ckpt file to the inference graph folder. Everything went fine, and I have tested the model.ckpt model in the object_detection.ipynb file, where it works. Up to this step, there is no problem.
However, I am not able to understand how to convert that model.ckpt file to a model.tflite file for use in an Android Studio app.
I have seen many snippets like
input_tensors = [...]
output_tensors = [...]
but I have no idea what these actually should be.
Could you show me how to convert it?
Use TensorBoard to find out your input and output layers. For reference, follow these links:
https://heartbeat.fritz.ai/intro-to-machine-learning-on-android-how-to-convert-a-custom-model-to-tensorflow-lite-e07d2d9d50e3
Tensorflow Convert pb file to TFLITE using python
If you don't know your inputs and outputs, use the summarize_graph tool and feed it your frozen model.
See the command here:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#inspecting-graphs
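If building the summarize_graph tool is inconvenient, a small Python sketch can print the node names so you can spot inputs and outputs yourself (the file name frozen_model.pb is an assumption; inputs typically show up as Placeholder ops):
import tensorflow as tf

# load the frozen GraphDef and print every node's name and op type
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)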
If you have trained your model from scratch, you should have the .meta file. You also need to specify the output node names, with which you can create a .pb file. Please refer to the link below for the steps to create this file:
Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file
Once this is created, you can further convert your .pb to tflite as below:
import tensorflow as tf
graph_def_file = "model.pb"
input_arrays = ["model_inputs"]
output_arrays = ["model_outputs"]
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

What is the use of a *.pb file in TensorFlow and how does it work?

I am using an implementation for face recognition which uses this file:
"facenet.load_model("20170512-110547/20170512-110547.pb")"
What is the use of this file? I am not sure how it works.
console log :
Model filename: 20170512-110547/20170512-110547.pb
distance = 0.72212267
GitHub link to the actual owner of the code:
https://github.com/arunmandal53/facematch
pb stands for protobuf. In TensorFlow, the protobuf file contains the graph definition as well as the weights of the model. Thus, a pb file is all you need to be able to run a given trained model.
Given a pb file, you can load it as follows.
def load_pb(path_to_pb):
    with tf.gfile.GFile(path_to_pb, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        return graph
Once you have loaded the graph, you can basically do anything. For instance, you can retrieve tensors of interest with
input = graph.get_tensor_by_name('input:0')
output = graph.get_tensor_by_name('output:0')
and use regular TensorFlow routines like:
sess.run(output, feed_dict={input: some_data})
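Putting it together, a minimal sketch; the tensor names 'input:0' and 'output:0' and the input shape are hypothetical and depend on the actual model:
import numpy as np
import tensorflow as tf

graph = load_pb("20170512-110547/20170512-110547.pb")
input_t = graph.get_tensor_by_name('input:0')    # hypothetical name
output_t = graph.get_tensor_by_name('output:0')  # hypothetical name
with tf.Session(graph=graph) as sess:
    result = sess.run(output_t,
                      feed_dict={input_t: np.zeros([1, 160, 160, 3])})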
Explanation
The .pb format is the protocol buffer (protobuf) format, and in TensorFlow, this format is used to hold models. Protobuf is a general-purpose serialization format from Google that is much nicer to transport, as it encodes data more compactly and enforces a structure on it. When used in TensorFlow, it's called a SavedModel protocol buffer, which is the default format when saving Keras/TensorFlow 2.0 models. More information about this format can be found here and here.
For example, the following code (specifically, m.save) will create a folder called my_new_model and save in it the saved_model.pb, an assets/ folder, and a variables/ folder.
import tensorflow as tf
import tensorflow_hub as hub

# first download a SavedModel from TFHub.dev, a website with models
m = tf.keras.Sequential([
    hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/classification/4")
])
m.build([None, 224, 224, 3])  # batch input shape
m.save("my_new_model")  # defaults to the SavedModel format in TensorFlow 2
In some places, you may also see .h5 models, which was the default format for TF 1.X (source).
Extra information: In TensorFlow Lite, the library for running models on mobile and IoT devices, flatbuffers are used instead of protocol buffers. This is what the TensorFlow Lite Converter converts into (the .tflite format). This is another Google format which is also very efficient: it allows access to any part of the message without deserialization (unlike JSON or XML). For devices with less memory (RAM), it makes more sense to load what you need from the model file, instead of loading the entire thing into memory to deserialize it.
Loading SavedModels in TensorFlow 2
I noticed BiBi's answer showing how to load a model was popular, and there is a shorter way to do this in TF2:
import tensorflow as tf
model_path = "/path/to/directory/inception_v1_224_quant_20181026"
model = tf.saved_model.load(model_path)
Note:
the directory (i.e. inception_v1_224_quant_20181026) has to contain a saved_model.pb or saved_model.pbtxt, otherwise the code will crash; you cannot specify the .pb path, only the directory.
you might get TypeError: 'AutoTrackable' object is not callable for older models; a fix is linked here.
if you load a TF1 model, I found that I don't get any errors, but the loaded object doesn't behave as expected (e.g. it doesn't have any functions on it, like predict).
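If a model loaded this way doesn't expose predict, you can usually call it through its serving signature instead; a hedged sketch, where the 'serving_default' key and the input shape are assumptions about the model:
import tensorflow as tf

# SavedModels loaded with tf.saved_model.load are called via signatures
infer = model.signatures["serving_default"]
outputs = infer(tf.zeros([1, 224, 224, 3]))
print(outputs)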

regarding transforming an ndarray(image input via cv2 or skimage) to a tensor

I have read an image as follows using OpenCV:
image = cv2.imread('/data/TestImages/cat.jpg',cv2.IMREAD_UNCHANGED)
This image caused an error when it was passed to sess.run in segmentation, np_image, np_logits = sess.run([pred, image, logits]).
The error message is TypeError: Can not convert a ndarray into a Tensor or Operation.
Are there any mechanisms that can transform an image represented as an ndarray into a TensorFlow tensor? Thanks.
You have to read up on the sess.run function. The list you pass as its first argument specifies what you want to get OUT of your run call. In your case, you probably only want pred and logits.
If you want to put something IN to the network, you have to specify a tf.placeholder in your graph and feed your image like this:
np_pred,np_logits = sess.run([pred, logits],feed_dict={image_placeholder: image})
Hope this helps!
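A minimal self-contained sketch of that pattern; the graph here is a toy stand-in, not the asker's segmentation network:
import cv2
import numpy as np
import tensorflow as tf

# toy TF1-style graph: the placeholder receives the decoded ndarray
image_placeholder = tf.placeholder(tf.float32, shape=[None, None, 3])
logits = tf.reduce_mean(image_placeholder, axis=[0, 1])  # stand-in computation
pred = tf.argmax(logits)

image = cv2.imread('/data/TestImages/cat.jpg', cv2.IMREAD_UNCHANGED)
with tf.Session() as sess:
    np_pred, np_logits = sess.run(
        [pred, logits],
        feed_dict={image_placeholder: image.astype(np.float32)})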

Tensorflow Transfer Learning with Input Pipeline

I want to use transfer learning with Google's Inception network for an image recognition problem. I am using retrain.py from the TensorFlow example source for inspiration.
In retrain.py, the Inception graph is loaded and a feed dict is used to feed the new images into the model's input layer. However, I have my data serialized in TFRecord files and have been using an input pipeline to feed in my inputs, as demonstrated here.
So I have a tensor images which returns my input data in batches when run. But how can I feed these images into Inception? I can't use a feed dict since my inputs are tensors, not NumPy arrays. My two ideas are
1) simply call sess.run() on each batch to convert it to a NumPy array, and then use a feed dict to pass it to Inception.
2) replace the input node in the Inception graph with my own batch input tensor
I think (1) would work, but it seems a little inelegant. (2) seems more natural to me, but I can't do exactly that because TensorFlow graphs can only be appended to and not otherwise modified.
Is there a better approach?
You can implement option (2), replacing the input node, but you will need to modify retrain.py to do so. The tf.import_graph_def() function supports a limited form of modification to the imported graph, by remapping tensors in the imported graph to existing tensors in the target graph.
This line in retrain.py calls tf.import_graph_def() to import the Inception model, where jpeg_data_tensor becomes the tensor that you feed with input data:
bottleneck_tensor, jpeg_data_tensor, resized_input_tensor = (
    tf.import_graph_def(graph_def, name='', return_elements=[
        BOTTLENECK_TENSOR_NAME, JPEG_DATA_TENSOR_NAME,
        RESIZED_INPUT_TENSOR_NAME]))
Instead of retrieving jpeg_data_tensor from the imported graph, you can remap it to an input pipeline that you construct yourself:
# Output of a training pipeline, returning a `tf.string` tensor containing
# a JPEG-encoded image.
jpeg_data_tensor = ...
bottleneck_tensor, resized_input_tensor = (
    tf.import_graph_def(
        graph_def,
        input_map={JPEG_DATA_TENSOR_NAME: jpeg_data_tensor},
        return_elements=[BOTTLENECK_TENSOR_NAME, RESIZED_INPUT_TENSOR_NAME]))
Wherever you previously fed jpeg_data_tensor, you no longer need to feed it, because the inputs will be read from the input pipeline you constructed. (Note that you might need to handle resized_input_tensor as well... I'm not intimately familiar with retrain.py, so some restructuring might be necessary.)
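For concreteness, a hedged sketch of what jpeg_data_tensor might look like coming out of a queue-based TFRecord pipeline of that era; the feature key "image/encoded" and the file name are assumptions about how the records were serialized:
import tensorflow as tf

filename_queue = tf.train.string_input_producer(["train.tfrecords"])
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(
    serialized_example,
    features={"image/encoded": tf.FixedLenFeature([], tf.string)})
# scalar tf.string tensor holding one JPEG-encoded image, as required above
jpeg_data_tensor = features["image/encoded"]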

Feeding both jpeg and png images to the pre-trained inception3 model?

I gather from this question and its answer [feeding image data in tensorflow for transfer learning] that adding a new op to the imported graph will help, but it isn't clear to me whether the resulting graph will handle both PNG and JPEG inputs automatically, and at the same time.
The answer to the above question suggests the following:
png_data = tf.placeholder(tf.string, shape=[])
decoded_png = tf.image.decode_png(png_data, channels=3)
# ...
graph_def = ...
softmax_tensor = tf.import_graph_def(
    graph_def,
    input_map={'DecodeJpeg:0': decoded_png},
    return_elements=['softmax:0'])
sess.run(softmax_tensor, {png_data: ...})
Does this mean that a PNG input must be passed in as
sess.run(softmax_tensor, {png_data: image_array})
And a JPEG input must be given to the graph as
sess.run(softmax_tensor, {'DecodeJpeg:0': image_array})
Would the second statement work after the graph has been modified and an op added at the bottom?
The answers to the previous question center around switching the graph from taking JPEGs to PNGs. With the network as specified, there's no way for it to handle both.
You have a few options if you need to deal with both types:
1) Handle the decoding yourself, either with PIL or TensorFlow, and feed the decoded image bytes into the graph at the output of the existing decode node.
2) If you're happy feeding the network, do a two-step operation where you re-plumb the input to read from a variable, and create two new nodes that write decoded output to that variable:
sess.run(feed_jpeg, feed_dict={in_jpg: my_jpg})
sess.run(the_network)
or
sess.run(feed_png, feed_dict={in_png: my_png})
sess.run(the_network)
3) Create a more complex conditional input path where you can feed a flag value that tells it what data type it is, and use TF conditionals to only pull on the specified decode node.
4) Write a new op that dispatches to either decode_png or decode_jpeg as necessary, based upon the format string at the start of the data.
I'm hoping we'll expose some string comparison ops so that you could write (4) in pure TensorFlow, but I don't have a timeline for any of that.
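For what it's worth, later TensorFlow releases added tf.image.decode_image, which performs exactly this header-based dispatch (JPEG, PNG, GIF, BMP), so option (4) no longer requires a custom op:
import tensorflow as tf

image_data = tf.placeholder(tf.string, shape=[])
# decode_image inspects the leading bytes and picks the matching decoder
decoded = tf.image.decode_image(image_data, channels=3)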