TF Object Detection input type for exported model

The TF Object Detection API creates an export/Servo folder for use with TF Serving. I'm trying to figure out how I should send an image once I have the server running. Does anybody know what type(s) it expects? I could send a numpy array, JPEG bytes, a base64-encoded JPEG string, etc.

Related

How to train custom object detection with tfrecord file

I want to train an object detection model. I have annotated the data using Roboflow, exported it as TFRecords, and also got the label map (.pbtxt file). After that I don't have any clue how to train a CNN model from scratch with just 2-3 hidden layers, and I'm not getting how to use that TFRecord to fit the model I have created. Please help me out.
tfrecord files are usually used with the TensorFlow Object Detection API. It's pretty old and I haven't seen it used in practice recently, but there's a TensorFlow Object Detection tutorial here that uses these tfrecord files.
If there isn't a particular reason you need to use TF Object Detection, I'd recommend using a newer and better-supported model like YOLOv5 or YOLOv7.
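To see what's actually in one of those files, you can parse it with tf.data. Here's a minimal sketch that assumes the records use the standard TF Object Detection API feature keys (image/encoded, image/object/bbox/*, image/object/class/label); the file path is a placeholder.

```python
import tensorflow as tf

# Assumed feature keys, following the TF Object Detection API TFRecord convention.
feature_spec = {
    "image/encoded": tf.io.FixedLenFeature([], tf.string),
    "image/object/bbox/xmin": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/ymin": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/xmax": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/ymax": tf.io.VarLenFeature(tf.float32),
    "image/object/class/label": tf.io.VarLenFeature(tf.int64),
}

dataset = tf.data.TFRecordDataset("train.tfrecord")  # placeholder path
for raw in dataset.take(1):
    example = tf.io.parse_single_example(raw, feature_spec)
    image = tf.io.decode_jpeg(example["image/encoded"])           # decode the JPEG bytes
    labels = tf.sparse.to_dense(example["image/object/class/label"])
    print(image.shape, labels.numpy())
```

Once the examples are decoded this way, you can map them into whatever (image, target) format your own small CNN expects.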

Facing issues converting from TensorFlow core to TensorFlow Lite

I am facing issues converting TensorFlow to TensorFlow Lite. From my research, I first need to save the model as a .pb file and then use that file to convert it into TensorFlow Lite, but I am facing an error.
Among the TF graph representations, exporting as a SavedModel is recommended. The TFLiteConverter.from_saved_model API is more capable than the other conversion APIs. For example, the signature def API is only available through the SavedModel path, and the SavedModel path has better support for resource and variant types.
https://www.tensorflow.org/hub/exporting_tf2_saved_model
https://www.tensorflow.org/lite/convert
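A minimal sketch of that recommended path, assuming the model has already been exported with tf.saved_model.save to a directory called saved_model_dir (paths are placeholders):

```python
import tensorflow as tf

# Convert a SavedModel directory to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
tflite_model = converter.convert()

# Write the flatbuffer to disk for deployment.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```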

Load image array from GPU memory into keras/tensorflow without numpy

I have a trained Keras model for image classification. Now I am using my own encoder-decoder (not OpenCV) for image processing and streaming on the GPU. This encoder-decoder code is written in C++. I am able to transfer the image array generated by the encoder-decoder to Python using numpy and ctypes, and use that numpy array for image classification. This process incurs a huge overhead, as it involves copying data from the GPU to a numpy array on the CPU and then copying the numpy array back to the GPU for image classification.
How can I use the GPU image array directly in Keras for inference? I have the address of the GPU image array from the C++ encoder-decoder code.
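For context, here is a rough sketch of the round trip being described; the library name, function name, model path, and frame shape are all hypothetical placeholders for the actual C++ encoder-decoder:

```python
import ctypes
import numpy as np
import tensorflow as tf

# Hypothetical bindings to the C++ encoder-decoder ("libdecoder.so" / get_frame are placeholders).
lib = ctypes.CDLL("libdecoder.so")
lib.get_frame.restype = ctypes.POINTER(ctypes.c_uint8)

H, W, C = 224, 224, 3                                   # placeholder frame shape
ptr = lib.get_frame()                                   # frame copied from GPU to host by the C++ side
frame = np.ctypeslib.as_array(ptr, shape=(H, W, C))     # CPU-side numpy view of that buffer

model = tf.keras.models.load_model("classifier.h5")     # placeholder model path
batch = frame[np.newaxis].astype(np.float32) / 255.0
preds = model.predict(batch)                            # numpy batch is copied back to the GPU here
```

The two copies flagged in the comments are the overhead the question is trying to eliminate.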

TF Lite object detection only returning 10 detections

I am using a custom object detection model with TensorFlow Lite on native Android. At this point I'm only detecting 2 custom objects. I am using the TensorFlow Object Detection API, and I have a pipeline in place that produces optimized .tflite files.
However, at inference time, the model only returns up to 10 individual detections. According to https://www.tensorflow.org/lite/models/object_detection/overview, this is expected. The problem is that my images have a relatively large object density. I need to be able to detect up to 30 individual objects per image/inference call.
If I change NUM_DETECTIONS in the sample Android app from the TF repo from 10 to, say, 20, I get a runtime exception due to shape mismatch. How can I produce .tflite files capable of yielding more than 10 object detection instances?
Thank you!
Unfortunately, since TFLite prefers static-shaped inputs/outputs, you would need to re-export a TFLite SSD graph with the required number of outputs. Instructions are here. While invoking object_detection/export_tflite_ssd_graph.py, you would need to pass in the parameter --max_detections=20. Then your change of NUM_DETECTIONS should work as expected.
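A sketch of that invocation, with placeholder paths for the pipeline config, checkpoint, and output directory:

```python
import subprocess

# Re-export the SSD graph for TFLite with a larger detection cap (paths are placeholders).
subprocess.run([
    "python", "object_detection/export_tflite_ssd_graph.py",
    "--pipeline_config_path=pipeline.config",
    "--trained_checkpoint_prefix=model.ckpt-50000",
    "--output_directory=tflite_export",
    "--max_detections=20",              # raise the cap from the default of 10
    "--add_postprocessing_op=true",
], check=True)
```

After converting the re-exported graph to .tflite, setting NUM_DETECTIONS to 20 in the Android sample should match the new output shape.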

TFRecords / TensorFlow Serving: Converting TFRecords into (GRPC or RESTFul) TensorFlow Serving requests?

I have a bunch of TFRecords that I used to train a model. I'd like to use them with TensorFlow Serving as well. So far, I've just been using the RESTful TensorFlow serving endpoint and have been turning TFRecords into JSON request bodies.
Is there some special way I can do inference on TFRecords directly without manually munging individual TFRecords into TF serving requests?
TFRecords are a binary format, so they would be hard to pass through the RESTful API directly.
The alternative is to use the gRPC endpoint of TF Serving, but it may not save you much.
A gRPC request requires a tensor_proto as input; see here for an example call in Python. In this case, your tensor proto could be one-dimensional data containing a serialized tf.Example object that comes from the TFRecord. When you save your model during the training phase, you can define a custom serving input processing function that accepts serialized tf.Example data as input for serving. Refer to tf.estimator.Estimator.export_saved_model for how to define your custom serving_input_receiver_fn for processing inputs at serving time.
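A minimal sketch of such an export-time input function, assuming an Estimator-based model and a placeholder feature spec (replace image/encoded with whatever features your model actually expects):

```python
import tensorflow as tf

# Placeholder feature spec; must match the features used at training time.
feature_spec = {
    "image/encoded": tf.io.FixedLenFeature([], tf.string),
}

def serving_input_receiver_fn():
    # Serving requests arrive as a batch of serialized tf.Example protos,
    # i.e. the same records that are stored inside a TFRecord file.
    serialized = tf.compat.v1.placeholder(
        dtype=tf.string, shape=[None], name="input_example_tensor")
    features = tf.io.parse_example(serialized, feature_spec)
    return tf.estimator.export.ServingInputReceiver(
        features, {"examples": serialized})

# estimator.export_saved_model("export_dir", serving_input_receiver_fn)
```

The equivalent can also be built with tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec). With such an export, the tensor proto in the gRPC request is just the serialized tf.Example bytes pulled straight out of the TFRecord.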