I'm building an app in C++ and I want to include an ML classifier model that I've built with TensorFlow. The app will be built for different operating systems. Is there any way to ship TensorFlow with my app so people don't have to install TensorFlow themselves on their machines?
My other option is to write my own neural network implementation in C++ and just read the weights and biases from the saved TensorFlow model.
I recommend using freeze_graph to package your GraphDef and weights into a single file, and then following the label_image C++ example to see how to load and run the resulting GraphDef file:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/label_image/main.cc#L140
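If you prefer to do the freezing step from a Python script rather than the freeze_graph command-line tool, a minimal sketch of the same operation in TF 1.x looks roughly like this (the checkpoint path and output node name are placeholders, not something from the original answer):

```python
import tensorflow as tf  # TF 1.x

with tf.Session() as sess:
    # rebuild the graph and restore the weights from a checkpoint (paths are placeholders)
    saver = tf.train.import_meta_graph("model.ckpt.meta")
    saver.restore(sess, "model.ckpt")

    # fold the variable values into the graph as constants
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output_node"])  # use your real output node name(s)

    # write a single self-contained GraphDef that the label_image C++ code can load
    with tf.gfile.GFile("frozen_graph.pb", "wb") as f:
        f.write(frozen_graph_def.SerializeToString())
```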
Related
I am trying to standardize our deployment workflow for machine vision systems, so we came up with the following workflow.
(image: deployment workflow diagram)
We wanted to build a prototype of this, so we followed the workflow. There is no problem with the GCP side whatsoever, but when we export the models we trained on Vertex AI, we get the three model formats mentioned in the workflow:
SavedModel
TFLite
TFJS
We then tried to convert these models to ONNX, but the conversions failed with different errors.
SavedModel - We always get the same error, regardless of the parameters we pass:
(screenshot: error from the SavedModel conversion)
I tried to track down the error and found that the model does not even load inside TensorFlow itself, which is weird since it was exported from GCP Vertex AI, which is built on TensorFlow.
TFLite - Successfully converted (after some trouble with the ONNX opset; with opset 15 it does convert), but then the NVIDIA TensorRT ONNX parser doesn't recognize the model during the ONNX-to-TRT conversion (a representative conversion call is sketched after this list).
TFJS - not tried yet.
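For reference, a representative TFLite-to-ONNX conversion via tf2onnx looks roughly like the sketch below (one common route, not necessarily the exact invocation we used; file paths are placeholders and the tf2onnx.convert.from_tflite API is assumed to be present in the installed tf2onnx version):

```python
import tf2onnx

# sketch: convert the exported TFLite model to ONNX at opset 15
# (file names are placeholders; requires a tf2onnx release that ships the TFLite front end)
model_proto, _ = tf2onnx.convert.from_tflite(
    "model.tflite",
    opset=15,
    output_path="model.onnx",
)
```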
So we are blocked here due to these problems.
We can run the models exported directly from Vertex AI on a Jetson Nano device, but the problem is that TF-TRT and TensorFlow are not memory-optimized on the GPU, so the system freezes after 3 to 4 hours of running.
We tried this workflow once with Google Teachable Machine and it worked well; all the steps ran perfectly fine. So I am really confused about how to make sense of this, since the workflow works with a Teachable Machine model, which is created by Google, but not with a Vertex AI model, which is developed by the same company.
Or am I doing something wrong in this workflow?
For background: we are developing this workflow inside a C++ framework for a real-time application in an industrial environment.
Is it possible to load TensorFlow Lite models with TensorFlow for Java?
I've tested SavedModelBundle and org.tensorflow.Graph.importGraphDef, but it doesn't work.
When loading the GraphDef, I get a java.lang.IllegalArgumentException: Invalid GraphDef exception.
It looks like the TFLite interpreter is not implemented in TensorFlow for Java.
Best regards
To use a TensorFlow model in standalone Java (not in Android), you have to use SavedModelBundle and compile with the Java compiler as described here. For that you need the TensorFlow Java Archive (JAR) and the Java Native Interface (JNI) file from TensorFlow.
It is not possible to use a TFLite model in standalone Java applications. TensorFlow Lite is specifically intended for mobile and embedded devices.
I'm trying to convert a model from TensorFlow to ONNX. The process for doing this is as follows.
Save a graph_def and a checkpoint (ckpt) for the weights in TensorFlow.
Inspect the graph_def to check whether its structure is valid and to find out what the inputs and outputs are.
Freeze both of them together into a frozen TensorFlow graph.
Convert that graph to an ONNX model.
The problem is in step 2. To inspect the graph definition, I tried to invoke summarize_graph from the Graph Transform Tool, but it didn't work properly. Next, I found the documentation for the Graph Transform Tool. According to the documentation, it uses Bazel, which is a build-and-test tool like Maven. Does that mean I cannot use this function with a TensorFlow installed from the pip package manager? Is the only way to use it to install TensorFlow from source and build it with Bazel?
You should be perfectly able to use these features with TensorFlow installed from pip. Bazel is used to manage build procedures; you don't need it unless you want to compile TensorFlow from source yourself.
Try removing it and reinstalling from pip, paying attention to choose the right Python setup in case you have multiple Python distributions on your machine.
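If summarize_graph itself is not available in your pip install, you can pull the same information (input placeholders, candidate outputs, and a basic validity check) out of the graph_def with plain Python. A minimal sketch, assuming a serialized TF 1.x GraphDef at a placeholder path:

```python
import tensorflow as tf  # TF 1.x

# load the serialized GraphDef (path is a placeholder)
graph_def = tf.GraphDef()
with tf.gfile.GFile("graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# importing it also checks that the structure is valid
tf.import_graph_def(graph_def, name="")

# inputs are usually Placeholder ops
inputs = [n.name for n in graph_def.node if n.op == "Placeholder"]

# candidate outputs are nodes that no other node consumes
consumed = {i.split(":")[0].lstrip("^") for n in graph_def.node for i in n.input}
outputs = [n.name for n in graph_def.node if n.name not in consumed]

print("inputs:", inputs)
print("possible outputs:", outputs)
```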
I want to convert my TensorFlow 1.1 based model to TensorFlow Lite in order to serve the model locally and remotely for a PWA. The official guide only offers Python APIs for 1.11 at the earliest, and the command-line tools only seem to work starting at 1.7. Is it possible to convert a 1.1 model to TensorFlow Lite? Has anyone had experience with this?
The TensorFlow model is an out-of-the-box pre-trained model using BiDAF. I am having difficulty serving the full TensorFlow app on Heroku, which is unable to run it. I would like to try a TF Lite app to see if hosting it locally will make it faster and easier to set up as a PWA.
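For context, the newer Python converter API that the official guide refers to looks roughly like this when run on a frozen graph (a sketch for recent TF 1.x releases; the file path and tensor names are placeholders):

```python
import tensorflow as tf  # recent TF 1.x release that ships the tf.lite converter

# convert a frozen GraphDef to a TFLite flatbuffer
# (the path and array names below are placeholders for the real model)
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="frozen_graph.pb",
    input_arrays=["input"],
    output_arrays=["output"],
)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Whether a graph exported from TensorFlow 1.1 goes through this converter cleanly will depend on the ops it uses; the snippet only sketches the API, it is not a confirmation that 1.1 models convert.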
In Nvidia's blog, they introduced TensorRT as follows:
NVIDIA TensorRT™ is a high performance neural network inference engine for production deployment of deep learning applications. TensorRT can be used to rapidly optimize, validate and deploy trained neural network for inference to hyperscale data centers, embedded, or automotive product platforms.
So I am wondering: if I have a pre-trained TensorFlow model, can I use it with TensorRT on the Jetson TX1 for inference?
As of JetPack 3.1, NVIDIA has added TensorRT support for TensorFlow as well, so a trained TF model can be deployed directly on the Jetson TX1/TK1/TX2.
UPDATE (2020.01.03): Both TensorFlow 1.X and 2.0 are now supported by TensorRT (tested on TRT 6 & 7; see this tutorial: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html).
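As a concrete example of that support, the TF-TRT conversion of a TensorFlow 2.x SavedModel looks roughly like this (a sketch based on the linked user guide; the directory names are placeholders):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# convert a TF 2.x SavedModel into a TensorRT-optimized SavedModel
# (directory names are placeholders)
converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model")
converter.convert()
converter.save("saved_model_trt")
```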
Based on this post on the Nvidia forum, it seems that you can currently use TensorRT for inference with a Caffe model but not with a TensorFlow model.
Besides TensorRT, building TensorFlow on the TX1 is another issue (see: https://github.com/ugv-tracking/cfnet).