Predicting Tensorflow base64 encoded models directly from python - tensorflow2.0

Does anyone know how to predict Tensorflow base64 encoded models from Python directly for object detection?
If I were using a model exported to expect an input_tensor, I would use:
def convert_image_to_tensor_batch(image):
    input_tensor = tf.convert_to_tensor(image)
    input_tensor = input_tensor[tf.newaxis, ...]
    return input_tensor
to convert my image (once loaded with cv2.imread(image_path)).
When it comes to models that are exported to expect base64 encoding though, I have just wasted a whole day failing to work out the magic setup.
I've tried:
def load_image(image_path: str):
    image = cv2.imread(image_path)
    encoded = base64.b64encode(cv2.imencode(".png", image)[1].tobytes())
    return image, encoded.decode("utf-8")

def convert_to_tensor_batch(input_string):
    input_tensor = tf.convert_to_tensor(input_string, dtype="string")
    input_tensor = input_tensor[tf.newaxis, ...]
    return input_tensor
But the resulting tensor is not recognised, and I get:
INVALID_ARGUMENT: Unknown image file format. One of JPEG, PNG, GIF, BMP required.
Which basically means it doesn't recognise the base64-encoded PNG I'm sending it.
What's frustrating is that it works perfectly in TF Serving if I send it the base64-encoded string produced by the exact same code shown above, just inside a JSON request.
As I don't know exactly how TF Serving converts JSON to tensors, my next step will otherwise have to be to make a dummy model that outputs whatever it is converting them to.
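For reference, this is roughly how I am invoking the model once I have the string tensor; the saved-model path and the signature key are placeholders on my part, but this is the call that produces the error above:

loaded = tf.saved_model.load("path/to/exported_model")  # directory containing saved_model.pb
detect_fn = loaded.signatures["serving_default"]        # assumed signature key
image, encoded = load_image("test.png")
input_tensor = convert_to_tensor_batch(encoded)         # shape (1,), dtype=string
detections = detect_fn(input_tensor)                    # fails with the INVALID_ARGUMENT error above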

Related

Loading file from path contained in tf.Tensor

I built a simple model using tf.keras and a tf.data.Dataset for efficient loading, as the dataset is a couple of GB in size.
The images are in TIFF format and therefore need to be loaded directly as a numpy.array.
I have a dataset of labels and file paths and want to map a loading function onto that dataset. Therefore, I somehow have to get the Python string representation out of the tensor.
I tried using the usual tf.Tensor.eval() and then joining the chars to a full string, but I get the error ValueError: Cannot evaluate tensor using eval(): No default session is registered., which does make sense, as there is no session before the Keras model is executed.
Then I tried putting tf.enable_eager_execution() right below my tensorflow import in the dataset file (and changing from .eval() to .numpy()), but I get the error AttributeError: 'Tensor' object has no attribute 'numpy', hinting that tf.enable_eager_execution() did not work.
Basically I'm trying to read a string contained in a tensor as below:
path = tf.decode_raw(path, tf.uint8)
path = ''.join(map(chr, path.eval())) # with session
path = ''.join(map(chr, path.numpy())) # with eager execution
image = PIL.Image.open(path)
image = numpy.array(image)
Both work fine when 'prototyping' in a single file on its own, but not when, for example, my model lives in model.py and my dataset in dataset.py, even with tf.enable_eager_execution() in both.
This issue is quite old, but as it could be useful for someone else, this would be my approach:
You can use tf.io.read_file to read the file and then convert it to an image with tf.image.decode_png if your image is a png. Several other formats are available.
where tf_filepath is the path to your RGB image, contained in a tensor:
image = tf.io.read_file(tf_filepath)
image = tf.image.decode_png(image, channels=3, dtype=tf.uint8)
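If you then want to map this over your dataset of file paths and labels, a minimal sketch (assuming PNG images; filenames and labels are placeholder names for your Python lists):

def load_and_decode(tf_filepath, label):
    image = tf.io.read_file(tf_filepath)
    image = tf.image.decode_png(image, channels=3, dtype=tf.uint8)
    return image, label

dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
dataset = dataset.map(load_and_decode).batch(32)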

How to change retrain.py to take in a b64 image

Right now, I am using the default retrain.py from TensorFlow to train an image classification model. But when I serve the model on Google AI Platform and try to call the API, I get an error saying that the image is too large since it is a float32 array. I'm thinking the best thing would be to change retrain.py to take in a b64 image instead of a float32 array, but I have no idea how to do that. Any suggestions?
Any help is appreciated! Thanks!
UPDATE
def export_model(module_spec, class_count, saved_model_dir):
    sess, in_image, _, _, _, _ = build_eval_session(module_spec, class_count)
    image = tf.placeholder(shape=[None], dtype=tf.string)
    export_dir = "/tmp/save/"
    inputs = {'image_bytes': image}
    with sess.graph.as_default() as graph:
        tf.saved_model.simple_save(sess, export_dir, inputs, {'prediction': graph.get_tensor_by_name('final_result:0')})
This is what I have updated my code to be, but it still doesn't work
Take a look at this post, it contains the info you need. If not, then reply and I'll help you prepare some code; you probably need the URL-safe b64 variant, though.
EDIT
Your code is a bit confusing; I don't think the input is actually connected to your graph. Have you looked at the graph with tf.summary.FileWriter('output folder', sess.graph)?
I'm going to gradually explain how you build some layers in front of your model, with some examples. This code should not be in retrain.py and can be run after you have trained the model.
1) Load your TensorFlow model. If it was built with SavedModelBuilder or simple_save, you can do it like this:
def loader(path):
    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.TRAINING], path)
        return tf.get_default_graph().as_graph_def()
The tag constants can be checked with the saved_model_cli tool; it is possible that this has to be empty [] in your case.
2) Add the layers/tensors you need: something that accepts a byte string (base64 in this case), decodes it, and transforms it into a 3D image:
image_str_tensor = tf.placeholder(dtype=tf.string, shape=(None,), name='input_image_bytes')
input_image = tf.decode_base64(image_str_tensor)
decoder = tf.image.decode_jpeg(input_image[0], channels=3)
The other tensors like converting to float, dim_expanding and reshaping should already be in the graph if you got it from retrain.py.
3) Wire them into your graph by mapping them onto its existing input:
graph_def_inception = loader('path to your saved model')
output_prediction, = tf.import_graph_def(graph_def_inception, input_map={"DecodeJpeg:0": decoder}, return_elements=['final_result:0'], name="")
4) Create a SavedModel and check that everything is the way you want it to be!
builder = tf.saved_model.builder.SavedModelBuilder('output/model/path')
with tf.Session() as sess:
    tf.summary.FileWriter('output/graph_log/files', sess.graph)
    # the serving input should be the string placeholder, so clients can send base64 strings
    input_tensor_info = tf.saved_model.utils.build_tensor_info(image_str_tensor)
    output_tensor_info = tf.saved_model.utils.build_tensor_info(output_prediction)
    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'input_image': input_tensor_info},
        outputs={'output_prediction': output_tensor_info},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
    # save as SavedModel
    builder.add_meta_graph_and_variables(sess,
                                         [tf.saved_model.tag_constants.SERVING],
                                         signature_def_map={'serving_default': signature})
    builder.save()
5) If you get errors, try to debug them with TensorBoard:
tensorboard --logdir=output/graph_log/files
I hope this helps a bit. This code will not work on the first try; you will need to puzzle with some parts. If you truly cannot succeed, then you should share the model; maybe I can do it and share the code with you, if I have time.
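As a rough sanity check of the exported model, you could load it back and feed a base64 string straight into the placeholder. The tensor names below are the ones from the snippets above (adjust if yours differ), and note that tf.decode_base64 expects the URL-safe alphabet:

import base64
import tensorflow as tf

with open('some_image.jpg', 'rb') as f:
    b64_string = base64.urlsafe_b64encode(f.read()).decode('utf-8')

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING],
                               'output/model/path')
    prediction = sess.run('final_result:0',
                          feed_dict={'input_image_bytes:0': [b64_string]})
    print(prediction)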

What is the use of a *.pb file in TensorFlow and how does it work?

I am using an implementation for face recognition which uses this file:
"facenet.load_model("20170512-110547/20170512-110547.pb")"
What is the use of this file? I am not sure how it works.
Console log:
Model filename: 20170512-110547/20170512-110547.pb
distance = 0.72212267
Github link of the actual owner of the code
https://github.com/arunmandal53/facematch
pb stands for protobuf. In TensorFlow, the protobuf file contains the graph definition as well as the weights of the model. Thus, a pb file is all you need to be able to run a given trained model.
Given a pb file, you can load it as follows.
def load_pb(path_to_pb):
    with tf.gfile.GFile(path_to_pb, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        return graph
Once you have loaded the graph, you can basically do anything. For instance, you can retrieve tensors of interest with
input = graph.get_tensor_by_name('input:0')
output = graph.get_tensor_by_name('output:0')
and use regular TensorFlow routines like:
sess.run(output, feed_dict={input: some_data})
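Putting the two steps together, a minimal usage sketch (the tensor names 'input:0' and 'output:0' are placeholders; use the names that actually exist in your graph):

graph = load_pb('20170512-110547/20170512-110547.pb')
input = graph.get_tensor_by_name('input:0')    # replace with your model's input name
output = graph.get_tensor_by_name('output:0')  # replace with your model's output name

with tf.Session(graph=graph) as sess:
    # some_data: a numpy array shaped like the model's expected input
    result = sess.run(output, feed_dict={input: some_data})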
Explanation
The .pb format is the protocol buffer (protobuf) format, and in TensorFlow, this format is used to hold models. Protobuf is a general-purpose data-serialization format from Google that is much nicer to transport, as it compacts the data more efficiently and enforces a structure on the data. When used in TensorFlow, it's called a SavedModel protocol buffer, which is the default format when saving Keras/TensorFlow 2.0 models. More information about this format can be found here and here.
For example, the following code (specifically, m.save) will create a folder called my_new_model and save in it a saved_model.pb, an assets/ folder, and a variables/ folder.
import tensorflow as tf
import tensorflow_hub as hub

# first download a SavedModel from TFHub.dev, a website with models
m = tf.keras.Sequential([
    hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/classification/4")
])
m.build([None, 224, 224, 3])  # Batch input shape.
m.save("my_new_model")  # defaults to save as SavedModel in TensorFlow 2
In some places, you may also see .h5 models, which was the default save format for Keras models in TF 1.X.
Extra information: In TensorFlow Lite, the library for running models on mobile and IoT devices, instead of protocol buffers, flatbuffers are used. This is what the TensorFlow Lite Converter converts into (.tflite format). This is another Google format which is also very efficient: it allows access to any part of the message without deserialization (unlike json, xml). For devices with less memory (RAM), it makes more sense to load what you need from the model file, instead of loading the entire thing into memory to deserialize it.
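For illustration, converting the SavedModel folder created above into a .tflite flatbuffer is a short sketch in TF 2.x (assuming the "my_new_model" directory from earlier):

converter = tf.lite.TFLiteConverter.from_saved_model("my_new_model")
tflite_model = converter.convert()  # the flatbuffer, returned as bytes
with open("model.tflite", "wb") as f:
    f.write(tflite_model)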
Loading SavedModels in TensorFlow 2
I noticed BiBi's answer showing how to load a model was popular, and there is a shorter way to do this in TF2:
import tensorflow as tf
model_path = "/path/to/directory/inception_v1_224_quant_20181026"
model = tf.saved_model.load(model_path)
Note,
the directory (i.e. inception_v1_224_quant_20181026) has to have a saved_model.pb or saved_model.pbtxt, otherwise the code will crash. You cannot specify the .pb path, specify the directory.
you might get TypeError: 'AutoTrackable' object is not callable for older models; there is a known fix for this.
If you load a TF1 model, I found that I don't get any errors, but the loaded file doesn't behave as expected (e.g. it doesn't have any functions on it, like predict); see the sketch below.
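For that last case, a hedged sketch of where the callable usually lives on such loaded models (the signature key is an assumption; print list(model.signatures) to see what your model actually exposes):

infer = model.signatures["serving_default"]  # assumed key
outputs = infer(tf.constant(some_input))     # some_input: data matching the signature's input spec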

Feeding both jpeg and png images to the pre-trained inception3 model?

I gather from this question and its answer [ feeding image data in tensorflow for transfer learning ] that adding a new op to the imported graph will help, but it isn't clear to me if the resulting graph will handle both png and jpeg inputs automatically, and at the same time.
The answer to the above question suggests the following:
png_data = tf.placeholder(tf.string, shape=[])
decoded_png = tf.image.decode_png(png_data, channels=3)
# ...
graph_def = ...
softmax_tensor = tf.import_graph_def(
    graph_def,
    input_map={'DecodeJpeg:0': decoded_png},
    return_elements=['softmax:0'])
sess.run(softmax_tensor, {png_data: ...})
Does this mean that a PNG input must be passed in as
sess.run(softmax_tensor, {png_data: image_array})
And a JPEG input must be given to the graph as
sess.run(softmax_tensor, {'DecodeJpeg:0': image_array})
Would the second statement work after the graph has been modified and an op added at the bottom?
The answers in the previous question center around switching the graph from taking JPEGs to PNGs. With the network as specified, there's no way for it to handle both.
You have a few options if you need to deal with both types:
1) Handle the decoding yourself, either with PIL or TensorFlow, and feed the decoded image bytes into the graph at the output of the existing decode node (see the sketch at the end of this answer).
2) If you're happy feeding the network, then do a two-step operation where you re-plumb the input to read from a variable, and create two new nodes that write decoded output to that variable:
sess.run(feed_jpeg, feed_dict={in_jpg: my_jpg})
sess.run(the_network)
or
sess.run(feed_png, feed_dict={in_png: my_png})
sess.run(the_network)
3) Create a more complex conditional input path where you can feed a flag value that tells it what data type it is, and use TF conditionals to only pull on the specified decode node.
4) Write a new op that dispatches to either decode_png or decode_jpeg as necessary, based upon the format string at the start of the data.
I'm hoping we'll expose some string comparison ops so that you could write (4) in pure TensorFlow, but I don't have a timeline for any of that.
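For illustration, a sketch of option (1): decode outside the graph with PIL and feed the result in at the output of the existing decode node. 'DecodeJpeg:0' and 'softmax:0' are the usual Inception v3 names; check your graph if they differ.

import numpy as np
import PIL.Image

image = np.array(PIL.Image.open('example.png').convert('RGB'))  # works for PNG or JPEG alike
predictions = sess.run('softmax:0', feed_dict={'DecodeJpeg:0': image})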

Effective reading of own images in tensorflow

I've skimmed over all the TensorFlow tutorials, in which all datasets were loaded into RAM due to their small size. However, my own data (~30 GB of images) cannot be loaded into memory, so I'm looking for effective ways of reading images for further processing. Could anyone provide me with examples of how I can do that?
P.S. I have two files train_images and validation_images that contain:
<path/to/img> <label>
This is what you're looking for: Tensorflow read images with labels
The exact code snippet is like this:
def read_labeled_image_list(image_list_file):
    """Reads a .txt file containing paths and labels.
    Args:
       image_list_file: a .txt file with one /path/to/image per line
       label: optionally, if set label will be pasted after each line
    Returns:
       List with all filenames in file image_list_file
    """
    f = open(image_list_file, 'r')
    filenames = []
    labels = []
    for line in f:
        filename, label = line[:-1].split(' ')
        filenames.append(filename)
        labels.append(int(label))
    return filenames, labels

def read_images_from_disk(input_queue):
    """Consumes a single filename and label as a ' '-delimited string.
    Args:
      filename_and_label_tensor: A scalar string tensor.
    Returns:
      Two tensors: the decoded image, and the string label.
    """
    label = input_queue[1]
    file_contents = tf.read_file(input_queue[0])
    example = tf.image.decode_png(file_contents, channels=3)
    return example, label

# Reads paths of images together with their labels
image_list, label_list = read_labeled_image_list(filename)

images = tf.convert_to_tensor(image_list, dtype=tf.string)
labels = tf.convert_to_tensor(label_list, dtype=tf.int32)

# Makes an input queue
input_queue = tf.train.slice_input_producer([images, labels],
                                            num_epochs=num_epochs,
                                            shuffle=True)

image, label = read_images_from_disk(input_queue)

# Optional Preprocessing or Data Augmentation
# tf.image implements most of the standard image augmentation
image = preprocess_image(image)
label = preprocess_label(label)

# Optional Image and Label Batching
image_batch, label_batch = tf.train.batch([image, label],
                                          batch_size=batch_size)
The recommended way is to put the data into sharded protobuf files, where the encoded JPEG and the label(s) are features of a tf.Example. build_image_data.py in the tensorflow/models repository shows how to create such a database of image/label pairs from a directory structure; you'll need to adapt it a bit to your case (it's straightforward). Then, at training time, you can look at image_processing.py, which shows how to go from the tf.Example proto to image/label tensors (extract the encoded JPEG and label from the Example record, decode the JPEG, resize, apply augmentations as needed, then enqueue).
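As a rough illustration of what those sharded files contain, here is a sketch of writing one encoded JPEG plus its label as a tf.Example record (the feature names are just an example; build_image_data.py defines its own set):

import tensorflow as tf

label = 3  # example class label
with open('/path/to/img.jpg', 'rb') as f:
    encoded_jpeg = f.read()

example = tf.train.Example(features=tf.train.Features(feature={
    'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_jpeg])),
    'image/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
}))

with tf.python_io.TFRecordWriter('train-00000-of-00001') as writer:
    writer.write(example.SerializeToString())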
The Udacity tutorial explains a stochastic method in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/udacity/4_convolutions.ipynb. You can use the same approach with one change: instead of saving all images in a single pickle file, save them in chunks of the batch_size you are using. That way you only load as much data at a time as is used in one batch.
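A more recent alternative that avoids pickle files entirely is to stream the train_images / validation_images files with tf.data. A sketch, assuming one '<path> <label>' pair per line and PNG images (swap decode_png for your format):

import tensorflow as tf

def parse_line(line):
    # split each "<path> <label>" line into a string path and an int label
    path, label = tf.io.decode_csv(line, record_defaults=[[''], [0]], field_delim=' ')
    image = tf.image.decode_png(tf.io.read_file(path), channels=3)
    return image, label

dataset = (tf.data.TextLineDataset('train_images')
           .map(parse_line, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .shuffle(1000)
           .batch(32)
           .prefetch(1))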