TensorFlow DecodePng throws ValueError

To decode a PNG image, we normally use the following snippet:
image_placeholder = tf.placeholder(tf.string)
image_tensor = tf.read_file(image_placeholder)
image_tensor = tf.image.decode_png(image_tensor, channels=1)
To deploy a model with TensorFlow Serving, I followed the Inception_saved_model example for my own model. Below is the code used in that program to read the incoming TensorProto.
image_placeholder = tf.placeholder(tf.string, name='images')
feature_configs = {'images': tf.FixedLenFeature(shape=[], dtype=tf.string), }
tf_example = tf.parse_example(image_placeholder, feature_configs)
image_tensor = tf_example['images']
image_tensor = tf.image.decode_png(image_tensor, channels=1)
When I use this code, decode_png throws a ValueError:
ValueError: Shape must be rank 0 but is rank 1 for 'DecodePng' (op: 'DecodePng') with input shapes: [?].
Can someone help me figure out where I am going wrong? The code I presented here is similar to the one in the Inception example.

tf.parse_example operates on a batch ("rank 1"), and decode_png expects a single image (a scalar string, "rank 0"). I'd either use tf.parse_single_example or add a reshape to scalar (shape=[]) before using decode_png.
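A minimal sketch of both fixes, reusing the names from the question and assuming one serialized example per request:
# Fix 1: parse a single tf.Example so the serialized string stays rank 0
tf_example = tf.parse_single_example(image_placeholder, feature_configs)
image_tensor = tf.image.decode_png(tf_example['images'], channels=1)
# Fix 2: keep tf.parse_example and reshape the length-1 batch to a scalar
tf_example = tf.parse_example(image_placeholder, feature_configs)
image_tensor = tf.reshape(tf_example['images'], [])
image_tensor = tf.image.decode_png(image_tensor, channels=1)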

Related

RoBERTa example from tfhub produces error "During Variant Host->Device Copy: non-DMA-copy attempted of tensor type: string"

I would like to use the roberta-base model from TF Hub. I am trying to run the example below, but I get an error when I feed sentences to the model as input: Invalid argument: During Variant Host->Device Copy: non-DMA-copy attempted of tensor type: string. I am using Python 3.7, TensorFlow 2.5, and tensorflow_hub 0.12.
If I replace the preprocessor and encoder with the corresponding BERT versions, the code works:
preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4", trainable=True)
However, I would like it to work for RoBERTa as well, as in the example below.
# define a text embedding model
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
preprocessor = hub.KerasLayer("https://tfhub.dev/jeongukjae/roberta_en_cased_preprocess/1")
encoder_inputs = preprocessor(text_input)
encoder = hub.KerasLayer("https://tfhub.dev/jeongukjae/roberta_en_cased_L-12_H-768_A-12/1", trainable=True)
encoder_outputs = encoder(encoder_inputs)
pooled_output = encoder_outputs["pooled_output"] # [batch_size, 768].
sequence_output = encoder_outputs["sequence_output"] # [batch_size, seq_length, 768].
model = tf.keras.Model(text_input, pooled_output)
# You can embed your sentences as follows
sentences = tf.constant(["(your text here)"])
print(model(sentences))
Additionally, the code above with the RoBERTa preprocessor/encoder seems to work if I use the CPU instead of the GPU (by wrapping it in with tf.device('/cpu:0')), but this is not feasible because I need to fine-tune the model on a lot of data.
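For reference, the CPU workaround mentioned above looks roughly like this (only a workaround, since it rules out GPU fine-tuning):
# Pin the embedding call to the CPU to avoid the string Host->Device copy
with tf.device('/cpu:0'):
    sentences = tf.constant(["(your text here)"])
    print(model(sentences))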

TensorFlow Model to TFLite

I have this code for building a semantic search engine using the pre-trained Universal Sentence Encoder from TensorFlow Hub. I am not able to convert it to TFLite. I have saved the model to my directory.
Importing the model:
module_path ="/content/drive/My Drive/4"
%time model = hub.load(module_path)
#print ("module %s loaded" % module_url)
# Create a function for embedding inputs with the model
def embed(input):
    return model(input)
Training the model on data:
## training the model
Model_USE = embed(data)
Saving the model:
exported = tf.train.Checkpoint(v=tf.Variable(Model_USE))
exported.f = tf.function(
    lambda x: exported.v * x,
    input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
export_dir = "/content/drive/My Drive/"
tf.saved_model.save(exported, export_dir)
Saving works fine, but when I convert to TFLite it throws an error.
Conversion code:
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Error:
as_list() is not defined on an unknown TensorShape.
First, you need to add a data generator that provides representative inputs for the converter, like this:
def representative_data_gen():
    for input_value in dataset.take(100):
        yield [input_value]
Each input value must have shape (1, your_input_shape), as if it had a batch size of 1, and it must be yielded as a list.
You should also declare which type of optimization you want, for example:
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
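The generator also has to be attached to the converter; as far as I know this is done through the representative_dataset attribute (a sketch, assuming dataset is a tf.data.Dataset of model inputs):
# Wire the generator into the converter before calling convert()
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()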
Nevertheless, I have also run into problems with the various converter options depending on the network structure, which in this case I do not know. So, for a clean run of the converter, I would just do:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tfmodel = converter.convert()
Setting converter.experimental_new_converter = True addresses problems when converting RNNs, as in https://github.com/tensorflow/tensorflow/issues/34813
EDIT:
As explained in ValueError: None is only supported in the 1st dimension. Tensor 'flatbuffer_data' has invalid shape '[None, None, 1, 512]', TFLite only allows the first dimension of your data (the batch) to be None; all other dimensions must be fixed. Try padding them with, for example, tf.keras.preprocessing.sequence.pad_sequences.
Then mask your sequences in the network as described in tensorflow.org/guide/keras/masking_and_padding, using Embedding or Masking layers.
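A rough sketch of that padding/masking idea (sequences is assumed to be a list of token-id lists, and the sequence length, vocabulary size, and embedding width below are placeholders, not values from the question):
import tensorflow as tf
# Pad variable-length token-id sequences to a fixed length so that every
# dimension except the batch is fully defined for TFLite
padded = tf.keras.preprocessing.sequence.pad_sequences(
    sequences, maxlen=128, padding='post')
# mask_zero=True makes downstream layers ignore the padded positions
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=512, mask_zero=True),
    tf.keras.layers.GlobalAveragePooling1D(),
])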

Tensorflow Serving: InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got unknown format starting with 'AAAAAAAAAAAAAAAA'

I'm trying to prepare my custom Keras model for deployment with TensorFlow Serving, but I'm running into issues preprocessing my images.
When I train my model, I use the following functions to preprocess my images:
def process_image_from_tf_example(self, image_str_tensor, n_channels=3):
    image = tf.image.decode_image(image_str_tensor)
    image.set_shape([256, 256, n_channels])
    image = tf.cast(image, tf.float32) / 255.0
    return image

def read_and_decode(self, serialized):
    parsed_example = tf.parse_single_example(serialized=serialized, features=self.features)
    input_image = self.process_image_from_tf_example(parsed_example["image_raw"], 3)
    ground_truth_image = self.process_image_from_tf_example(parsed_example["gt_image_raw"], 1)
    return input_image, ground_truth_image
My images are PNGs saved locally, and when I write them to the .tfrecord files I use
tf.gfile.GFile(str(image_path), 'rb').read()
This works: I'm able to train my model and use it for local predictions.
Now I want to deploy my model for use with TensorFlow Serving. My serving_input_receiver_fn function looks like this:
def serving_input_receiver_fn(self):
    input_ph = tf.placeholder(dtype=tf.string, shape=[None], name='image_bytes')
    images_tensor = tf.map_fn(self.process_image_from_tf_example, input_ph, back_prop=False, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver({'input_1': images_tensor}, {'image_bytes': input_ph})
where process_image_from_tf_example is the same function as above, but I get the following error:
InvalidArgumentError (see above for traceback): assertion failed: [Unable to decode bytes as JPEG, PNG, GIF, or BMP]
Reading here, it looks like this error is because I'm not using
tf.gfile.GFile(str(image_path), 'rb').read()
as I did with my training/test files, but I can't use it because I need to send encoded bytes formatted as
{"image_bytes": {'b64': base64.b64encode(image).decode()}}
as requested by TF Serving.
Examples online send JPEG encoded bytes and preprocess the image starting with
tf.image.decode_jpeg(image_buffer, channels=3)
but if I use a different preprocessing function in my serving_input_receiver_fn (different from the one used for training) that starts with
tf.image.decode_png(image_buffer, channels=3)
I get the following error:
InvalidArgumentError (see above for traceback): Expected image (JPEG, PNG, or GIF), got unknown format starting with 'AAAAAAAAAAAAAAAA'
(the same happens with decode_jpeg, by the way)
What am I doing wrong? Do you need more code from me to answer? Thanks a lot!
Edit: changed the title because it was not clear enough.
OK, I solved it.
image was a NumPy array, but I had to do the following:
buffer = cv2.imencode('.jpg', image)[1].tostring()
bytes_image = base64.b64encode(buffer).decode('ascii')
{"image_bytes": {"b64": bytes_image}}
Also, my preprocessing and serving_input_receiver_fn functions changed:
def process_image_from_buffer(self, image_buffer):
    image = tf.image.decode_jpeg(image_buffer, channels=3)
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)
    image = tf.expand_dims(image, 0)
    image = tf.image.resize_bilinear(image, [256, 256], align_corners=False)
    image = tf.squeeze(image, [0])
    image = tf.cast(image, tf.float32) / 255.0
    return image

def serving_input_receiver_fn(self):
    input_ph = tf.placeholder(dtype=tf.string, shape=[None])
    images_tensor = tf.map_fn(self.process_image_from_buffer, input_ph, back_prop=False, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver({'input_1': images_tensor}, {'image_bytes': input_ph})
process_image_from_buffer is different from the process_image_from_tf_example used above for training.
I also removed name='image_bytes' from input_ph above.
Hope it's clear enough to help someone else.
An excellent guide that I partially used to solve it.

Tensorflow Serving, online predictions: How to build a signature_def that accepts 'image_bytes' as input tensor name?

I have successfully trained a Keras model and used it for predictions on my local machine; now I want to deploy it using TensorFlow Serving. My model takes images as input and returns a mask prediction.
According to the documentation here my instances need to be formatted like this:
{'image_bytes': {'b64': base64.b64encode(jpeg_data).decode()}}
Now, the saved_model.pb file automatically saved by my Keras model has the following tensor names:
input_tensor = graph.get_tensor_by_name('input_image:0')
output_tensor = graph.get_tensor_by_name('conv2d_23/Sigmoid:0')
Therefore I need to save a new saved_model.pb file with a different signature_def.
I tried the following (see here for reference), which works:
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'path/to/saved/model/')
    graph = tf.get_default_graph()
    input_tensor = graph.get_tensor_by_name('input_image:0')
    output_tensor = graph.get_tensor_by_name('conv2d_23/Sigmoid:0')
    tensor_info_input = tf.saved_model.utils.build_tensor_info(input_tensor)
    tensor_info_output = tf.saved_model.utils.build_tensor_info(output_tensor)
    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'image_bytes': tensor_info_input},
            outputs={'output_bytes': tensor_info_output},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
    builder = tf.saved_model.builder.SavedModelBuilder('path/to/saved/new_model/')
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={'predict_images': prediction_signature})
    builder.save()
but when I deploy the model and request predictions from the AI Platform, I get the following error:
RuntimeError: Prediction failed: Error processing input: Expected float32, got {'b64': 'Prm4OD7JyEg+paQkPrGwMD7BwEA'} of type 'dict' instead.
Adapting the answer here, I also tried to rewrite
input_tensor = graph.get_tensor_by_name('input_image:0')
as
image_placeholder = tf.placeholder(tf.string, name='b64')
graph_input_def = graph.as_graph_def()
input_tensor, = tf.import_graph_def(
    graph_input_def,
    input_map={'b64:0': image_placeholder},
    return_elements=['input_image:0'])
with the (wrong) understanding that this would add a layer on top of my input tensor, with a matching 'b64' name (as per the documentation), that accepts a string and connects it to the original input tensor, but the error from the AI Platform is the same.
(The relevant code I use for requesting a prediction is:
instances = [{'image_bytes': {'b64': base64.b64encode(image).decode()}}]
response = service.projects().predict(
    name=name,
    body={'instances': instances}
).execute()
where image is a numpy.ndarray of dtype('float32'))
I feel I'm close, but I'm definitely missing something. Can you please help?
After the b64 payload is encoded and then decoded, the image buffer arrives as a string, which does not match your model's float32 input type.
You may try adding some preprocessing to your model (decoding the bytes back into a float tensor) and then sending the b64 request again.
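A sketch of what that preprocessing could look like in the serving input function, along the lines of the accepted fix in the previous question (the feature key and the 256x256 size are assumptions, not the asker's actual model):
def serving_input_receiver_fn():
    input_ph = tf.placeholder(dtype=tf.string, shape=[None], name='image_bytes')

    def decode_and_preprocess(image_bytes):
        # Decode the raw JPEG/PNG bytes that arrive via the 'b64' field and
        # convert them to the float32 tensor the graph expects
        image = tf.image.decode_image(image_bytes, channels=3)
        image.set_shape([256, 256, 3])
        return tf.cast(image, tf.float32) / 255.0

    images = tf.map_fn(decode_and_preprocess, input_ph, back_prop=False, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {'input_image': images}, {'image_bytes': input_ph})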

How to find the Input and Output Nodes of a Frozen Model

I want to use TensorFlow's optimize_for_inference.py script on a frozen model from the model zoo: ssd_mobilenet_v1_coco.
How do I find/determine the names of the model's input and output nodes?
High-resolution version of the graph generated by TensorBoard
This question might help: Given a TensorFlow model graph, how to find the input node and output node names (for me it did not).
I think you can do this with the following code. I downloaded the ssd_mobilenet_v1_coco frozen model from here and was able to get the input and output names as shown below.
!pip install tensorflow==1.15.5
import tensorflow as tf
tf.__version__  # TF1.15.5

gf = tf.GraphDef()
m_file = open('/content/frozen_inference_graph.pb', 'rb')
gf.ParseFromString(m_file.read())

with open('somefile.txt', 'a') as the_file:
    for n in gf.node:
        the_file.write(n.name + '\n')

file = open('somefile.txt', 'r')
data = file.readlines()
print("output name = ")
print(data[len(data)-1])
print("Input name = ")
file.seek(0)
print(file.readline())
Output is
output name =
detection_classes
Input name =
image_tensor
Please check the gist here.
All models saved using the TensorFlow Object Detection API have image_tensor as the input node name.
The object detection model has 4 outputs:
num_detections: the number of detections for a given image
detection_classes: the predicted class index for each detection
detection_boxes: the predicted (ymin, xmin, ymax, xmax) coordinates for each detection
detection_scores: the confidence score for each detection; the detection with the highest score should be selected
Code for saved_model inference:
from io import BytesIO
import numpy as np
import tensorflow as tf
from PIL import Image

def load_image_into_numpy_array(path):
    """Converts an image file into a numpy array."""
    img_data = tf.io.gfile.GFile(path, 'rb').read()
    image = Image.open(BytesIO(img_data))
    im_width, im_height = image.size
    return np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)

# Load saved_model
model = tf.saved_model.load('custom_mode/saved_model')
# Convert image into numpy array
numpy_image = load_image_into_numpy_array('Image_path')
# Expand dimensions to add a batch axis
input_tensor = np.expand_dims(numpy_image, 0)
# Send image to the model
model_output = model(input_tensor)
# Use the output nodes to read the predictions
num_detections = int(model_output.pop('num_detections'))
detections = {key: value[0, :num_detections].numpy()
              for key, value in model_output.items()}
detections['num_detections'] = num_detections
detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
boxes = detections['detection_boxes']
scores = detections['detection_scores']
pred_class = detections['detection_classes']
You can also just call model.summary() to see all the layer names (and their types); the name is the first column.
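For example, with a small functional Keras model (the layer names here are made up for illustration):
import tensorflow as tf
inputs = tf.keras.Input(shape=(224, 224, 3), name='image_input')
outputs = tf.keras.layers.Conv2D(8, 3, name='conv_1')(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()  # the first column lists the layer names, e.g. 'image_input' and 'conv_1'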