Using .config files to load models in TensorFlow

I have a .config file named ssd_mobilenet_v1.config, which I understand is supposed to load a pretrained model in TensorFlow. However, I cannot find out how to do that.
I have searched the internet, and there are instructions for doing so from a dict, but not directly from a .config file.
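If it is a pipeline config from the TF Object Detection API (which the name ssd_mobilenet_v1.config suggests), a minimal sketch of how that API builds a model from it, assuming the object_detection package from tensorflow/models is installed, would be:

    from object_detection.utils import config_util
    from object_detection.builders import model_builder

    # Parse the pipeline .config into its component protos
    configs = config_util.get_configs_from_pipeline_file('ssd_mobilenet_v1.config')

    # Build the detection model described by the config; the pretrained
    # weights still have to be restored separately from a checkpoint
    model = model_builder.build(model_config=configs['model'], is_training=False)

Note that the .config file only describes the architecture and training setup; the actual weights come from the checkpoint it points at.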

Related

Exported object detection model not saved in referenced directory

I am trying to run the TensorFlow object detection notebook from the Google Colab example that can be found here. I am running the example locally, and when I try to create the exported model using the model.export(export_dir) function, the model is not saved in the referenced export_dir option. Instead, the code seems to ignore the option and saves the model in a temporary directory in /tmp/something. I tried both full and relative paths, but the model still goes to /tmp. My environment is Ubuntu in WSL2 and I am using a /mnt/n drive.
Any idea how to fix the issue?
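Until the export path issue is tracked down, one possible workaround sketch, assuming the underlying model object is a regular TF module (the path here is just an example):

    import tensorflow as tf

    export_dir = '/mnt/n/models/exported'  # hypothetical target directory

    # Bypass model.export() and write the SavedModel directly; alternatively,
    # let model.export() run and copy the result out of /tmp afterwards
    tf.saved_model.save(model, export_dir)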

libnvinfer.so.5: cannot open shared object file: No such file or directory

I'm using Ubuntu 16.04 and TensorFlow 1.13.1. Now I want to integrate TensorRT to improve my model's inference time. I downloaded and extracted the TensorRT 7 tar and installed the uff and graphsurgeon wheels from it. I also added its path to the system's LD_LIBRARY_PATH.
However, when I try to import tensorflow.contrib.tensorrt, it gives me a file-not-found error: there is no libnvinfer.so.5 in my TensorRT 7 folder, only libnvinfer.so.7.
Does this mean that TensorFlow 1.13.1 doesn't support TensorRT 7? Should I use TensorRT 5 instead?
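As far as I know, the prebuilt TensorFlow 1.13 GPU binaries are built against TensorRT 5, which is why the import looks specifically for libnvinfer.so.5. A quick sketch to confirm what the dynamic loader can actually see:

    import ctypes

    # Check which libnvinfer major versions the dynamic loader can resolve;
    # TF 1.13's tensorflow.contrib.tensorrt needs version 5
    for version in ('5', '7'):
        name = 'libnvinfer.so.' + version
        try:
            ctypes.CDLL(name)
            print(name, '-> found')
        except OSError:
            print(name, '-> not found')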

TensorFlow handpose model offline loading

I am trying to download TensorFlow's handpose model files to run offline.
So I have downloaded the handpose model files from here:
https://tfhub.dev/mediapipe/tfjs-model/handskeleton/1/default/1
But how can we use these files offline for prediction in JavaScript as well as in React Native code?
Just change all URLs in the handpose package to point to the URL where you put your model (in your localhost/public_dir).
That works well for me :)
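For the localhost part, a minimal sketch of a local server for the downloaded files (the port and directory name are arbitrary; the directory argument needs Python 3.7+):

    import functools
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serve the downloaded model files so the patched package URLs can
    # point at http://localhost:8000/ instead of tfhub.dev
    handler = functools.partial(SimpleHTTPRequestHandler, directory='public_dir')
    HTTPServer(('localhost', 8000), handler).serve_forever()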

TensorFlow Lite models for Android and instructions

I am trying to follow the instructions here to generate models for Android:
https://github.com/tensorflow/examples/tree/master/lite/examples/gesture_classification/ml
But when I run the commands in the codelab, it asks me to upload model.json and the binary weights file model-weights.bin. I am not sure what this means.
If I skip this step, the second-to-last step fails with:
No such file or directory: 'model.json'
Where can I find these files?
You first need to use the gesture classification web app to generate a TensorFlow.js model trained on your gestures.
https://github.com/tensorflow/examples/tree/master/lite/examples/gesture_classification/web
Once the model is trained in the web app, you can download the model.json and model-weights.bin files, which are needed in the later steps.
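For context, the conversion step that consumes those two files does roughly the following (a sketch, assuming the tensorflowjs pip package is installed and model-weights.bin sits next to model.json):

    import tensorflow as tf
    import tensorflowjs as tfjs

    # Load the TF.js layers model downloaded from the web app
    model = tfjs.converters.load_keras_model('model.json')

    # Convert the Keras model to TensorFlow Lite for the Android app
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    with open('model.tflite', 'wb') as f:
        f.write(converter.convert())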

TensorFlow Serving: impossible proto file structure for gRPC interface

I'm trying to compile pb files for gRPC calls to TensorFlow Serving (in PHP, but the question is not PHP specific).
The file serving/tensorflow_serving/apis/predict.proto has:
import "tensorflow/core/framework/tensor.proto";
import "tensorflow_serving/apis/model.proto";
However, in a normal setup, tensorflow and tensorflow_serving are not located in a hierarchy with a common root folder from which the two imports can both resolve.
Assuming that compiling the proto files to pb files for gRPC keeps the hierarchy, this cannot work without locating tensorflow_serving under /tensorflow/. What am I missing here?
What is the best practice for compiling the pb files for a gRPC client?
Another issue: if the pb files are created, they include the imports with the same hierarchy, so the same folder structure will be forced on the client. This goes against the point of gRPC, which is isolation and separation between the entities.
I don't know anything about TensorFlow, but I'm approaching the problem from a just-another-protobuf-compilation point of view. At https://github.com/tensorflow/serving I see both tensorflow_serving and a tensorflow submodule, which is the root of your desired dependency (i.e. it has another tensorflow subfolder in it). So I guess that you either missed some configuration step that would have copied the folder into the right relative location, or you are running an incomplete/incorrect protoc command line, i.e. you are missing some -I <path>.
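Concretely, if the serving repo is checked out with its submodules, a sketch of a protoc invocation with both include roots (using grpc_tools from the grpcio-tools package; the output directory and file list are just examples):

    import os
    from grpc_tools import protoc

    os.makedirs('gen', exist_ok=True)

    # Two -I roots: one where tensorflow_serving/... lives, and the
    # tensorflow submodule, which contains the tensorflow/... protos
    protoc.main([
        'protoc',
        '-Iserving',
        '-Iserving/tensorflow',
        '--python_out=gen',
        '--grpc_python_out=gen',
        'serving/tensorflow_serving/apis/predict.proto',
    ])

The generated stubs keep the proto package hierarchy under gen/, but that only affects the layout of the generated code, not where tensorflow itself has to live on the client.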