Suppose I have two tf.keras models that work with images, and I want to use these models inside a standalone C++ program that continuously reads images from two cameras and runs a model on each frame. Each camera has a different frame rate, so both models must run in parallel, and they also need to utilize the GPU.
Each model predicts different things.
The C++ app runs on a single desktop PC; it does not need to serve predictions to many people, so TensorFlow Serving might be overkill, or not fast enough? I suspect it is not fast because I saw that it uses JSON to encode predictions, which might be slow to send and receive.
I have saved both models in the SavedModel format, and they predict fine in Python.
How can I load these two models and run predictions in parallel on one GPU, as fast as possible, in a C++ app? I only care about the case where a prediction covers a single image, or two images, not a batch of 32 images. If there is a way to make TensorFlow Serving or TensorFlow Lite work fast, I am also fine with that.
Bonus: the method should not require a difficult installation on the user's side. In the end, users will have to install my app; expecting them to also manually install TensorFlow dependencies is impractical.
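For reference, here is a minimal Python sketch of the two-model, two-thread pattern described above (model paths, input sizes and the frame-grabbing stub are placeholders, not the real pipeline); a C++ implementation against TensorFlow's C/C++ API would follow the same structure of loading each SavedModel once and driving it from its own camera thread.

```python
# Minimal sketch: two SavedModels, one worker thread per camera, one shared GPU.
# Model paths, input sizes and the frame-grabbing stub are placeholders.
import threading
import numpy as np
import tensorflow as tf

model_a = tf.keras.models.load_model("/path/to/saved_model_a")
model_b = tf.keras.models.load_model("/path/to/saved_model_b")

def camera_loop(model, height, width, name, n_frames=10):
    for _ in range(n_frames):
        # Stand-in for grabbing one frame from this camera.
        frame = np.random.rand(1, height, width, 3).astype(np.float32)
        outputs = model(frame, training=False)  # assuming a single output tensor
        print(name, outputs.shape)

threads = [
    threading.Thread(target=camera_loop, args=(model_a, 224, 224, "cam0")),
    threading.Thread(target=camera_loop, args=(model_b, 224, 224, "cam1")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```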
Related
I would like to serve about 600 models with TensorFlow Serving.
I am trying to find a solution to eventually reduce the number of models:
My models have the same architecture; only the weights change.
Is it possible to load only one model and change the weights?
Would it be possible to aggregate all those models together so that, effectively, the first layer of the model would take a model ID along with the input features for that model?
Has anyone tried running a couple hundred models on one machine? I have found this Cortex solution, but wanted to avoid using another tech.
https://towardsdatascience.com/how-to-deploy-1-000-models-on-one-cpu-with-tensorflow-serving-ec4297bff54b
If the models have the same architecture but different weights, you can try merging all those models into a "super model". However, I would need to know more about the task to see if that's possible.
To serve 600 models, you would need a very powerful machine and a lot of memory (depending on how big your models are and how much you use them in parallel).
You can either run TF Serving yourself, or use a provider such as Inferrd.com, Google, or AWS.
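For the same-architecture, different-weights case, here is a minimal sketch of the "load one model and swap the weights" idea from the question, assuming all models are Keras models whose weights have been exported to separate files (the architecture and file names below are purely illustrative):

```python
# Minimal sketch, assuming all models share one Keras architecture and only
# their weights differ. One model object stays in memory; per request we load
# the weight set that matches the requested model ID.
import tensorflow as tf

def build_architecture():
    # Placeholder for the shared architecture.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])

model = build_architecture()
weight_files = {
    "model_001": "weights/model_001.h5",
    "model_002": "weights/model_002.h5",
    # ... up to model_600
}

def predict(model_id, features):
    model.load_weights(weight_files[model_id])  # swap weights in place
    return model.predict(features)
```

Loading weights from disk on every request adds latency, so in practice you would cache the most recently used weight sets in memory (for example via get_weights()/set_weights()).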
I realize this is not the intended usage of TensorRT, but I am a bit stuck so maybe there are some ideas out there. Currently I have been provided some neural network models as TensorRT serialized engines, so-called .trt files. These are basically models compiled and optimized from PyTorch to run on a specific GPU.
Now, this works fine since I do have a compatible GPU for development, however, for setting up CI/CD, I am having some trouble because the cloud servers on which it will be running for testing purposes only do not have adequate GPUs for this CUDA-compiled "engine".
So, I would like to force these models to run on CPU, or otherwise find some other way to make them run. On CPU would be just fine, because I only need to run a handful of inferences to check the output; it is fine if it's slow. Again, I know this is not the intended usage of TensorRT, but I need some output from the models for integration testing.
Alternative approach
The other idea I had was maybe to convert the .trt files back to .onnx or another format that I could load into another runtime engine, or just into PyTorch or TensorFlow, but I cannot find any TensorRT tools that load an engine and write a model file. Presumably because it is "compiled" and no longer convertible; yet, the model parameters must be in there, so does anyone know how to do such a thing?
I am running in the following scenario:
Single Node Kubernetes Cluster (1x i7-8700K, 1x RTX 2070, 32GB RAM)
1 Tensorflow Serving Pod
4 Inference Client Pods
What the inference clients do is grab images from 4 separate cameras (1 each) and pass them to TF-Serving for inference, in order to understand what is seen on the video feeds.
I had previously been doing inference inside the Inference Client Pods individually, by calling TensorFlow directly, but that wasn't good for the graphics card's RAM. TensorFlow Serving was introduced to the mix quite recently in order to optimize RAM, since we no longer load duplicate models onto the graphics card.
And the performance is not looking good; for 1080p images it looks like this:
Direct TF: 20ms for input tensor creation, 70ms for inference.
TF-Serving: 80ms for gRPC serialization, 700-800ms for inference.
The TF-Serving pod is the only one that has access to the GPU and it is bound exclusively. Everything else operates on CPU.
Are there any performance tweaks I could do?
The model I'm running is Faster R-CNN Inception V2 from the TF Model Zoo.
Many thanks in advance!
This is from the TF Serving documentation:
Please note, while the average latency of performing inference with TensorFlow Serving is usually not lower than using TensorFlow directly, where TensorFlow Serving shines is keeping the tail latency down for many clients querying many different models, all while efficiently utilizing the underlying hardware to maximize throughput.
From my own experience, I've found TF Serving useful in providing a consistent abstraction over model serving that does not require implementing custom serving functionality. Model versioning and multi-model support, which come out of the box, save you lots of time and are great additions.
Additionally, I would recommend batching your requests if you haven't already. I would also suggest playing around with the TENSORFLOW_INTER_OP_PARALLELISM, TENSORFLOW_INTRA_OP_PARALLELISM, and OMP_NUM_THREADS arguments to TF Serving. Here is an explanation of what they are.
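To illustrate the batching suggestion, here is a minimal sketch of a gRPC client that sends several frames in a single PredictRequest; the host, model name, signature and tensor names are placeholders and depend on how the model was exported:

```python
# Minimal sketch of a batched gRPC Predict call to TensorFlow Serving.
# Host/port, model name, signature and input/output names are placeholders.
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Raise gRPC message limits, since batched 1080p frames exceed the 4MB default.
channel = grpc.insecure_channel(
    "tf-serving:8500",
    options=[("grpc.max_send_message_length", 100 * 1024 * 1024),
             ("grpc.max_receive_message_length", 100 * 1024 * 1024)],
)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Batch several frames into one request instead of one request per frame.
frames = np.random.randint(0, 255, size=(4, 1080, 1920, 3), dtype=np.uint8)

request = predict_pb2.PredictRequest()
request.model_spec.name = "faster_rcnn"
request.model_spec.signature_name = "serving_default"
request.inputs["inputs"].CopyFrom(tf.make_tensor_proto(frames))

response = stub.Predict(request, timeout=10.0)
boxes = tf.make_ndarray(response.outputs["detection_boxes"])
```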
Maybe you could try OpenVINO? It's a heavily optimized toolkit for inference. You could utilize your i7-8700K and run some frames in parallel. Here are some performance benchmarks for the very similar i7-8700T.
There is even OpenVINO Model Server which is very similar to Tensorflow Serving.
Disclaimer: I work on OpenVINO.
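For what a CPU inference path could look like, here is a minimal sketch with the OpenVINO Python runtime, assuming the model has already been converted to OpenVINO IR; the exact API surface depends on the OpenVINO version, and the paths and shapes below are placeholders:

```python
# Minimal sketch of CPU inference with the OpenVINO runtime (2022+ API).
# The IR path and input shape are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model/faster_rcnn.xml")   # IR produced by the model converter
compiled = core.compile_model(model, "CPU")

frame = np.random.rand(1, 3, 600, 600).astype(np.float32)
result = compiled([frame])                         # dict keyed by model outputs
output = result[compiled.output(0)]
```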
I'm actually new to machine learning, but this topic is very interesting to me, so I'm using TensorFlow to classify some images from the MNIST dataset. I run this code on a Compute Engine VM at Google Cloud, because my computer is too weak for this. The code actually runs well, but the problem is that every time I log into my VM and run the same code, I have to wait while my CNN model trains before I can run some tests or experiment with my data, plot results, or import some external images to improve my accuracy, etc.
Is there some way to save the result of training my model just once, somewhere, so that if I decide, for example, to log into the same VM tomorrow, I don't have to wait for my model to train again? Is it possible to do this?
Or is there maybe another way to do something similar?
You can save a trained model in TensorFlow and then use it later by loading it; that way you only have to train your model once and can use it as many times as you want. To do that, you can follow the TensorFlow documentation on that topic, where you can find information on how to save and load the model. In short, you will have to use the SavedModelBuilder class to define the type and location of your saved model, and then add the MetaGraphs and variables you want to save. Loading the saved model for later use is even easier, as you will only have to run a command pointing to the location where the model was exported.
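As a minimal sketch of that save/load cycle with the TF1-style SavedModelBuilder and loader APIs mentioned above (the export directory and the stand-in variable are placeholders for the real CNN):

```python
# Minimal sketch: save the trained model once, reload it later without retraining.
# Uses the TF1-style SavedModelBuilder / loader API; the export dir is a placeholder.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

export_dir = "/tmp/mnist_saved_model"

# --- After training, in the same session ---
with tf.Session() as sess:
    # Stand-in for the real CNN: build the graph and train it here.
    w = tf.get_variable("w", shape=[784, 10])
    sess.run(tf.global_variables_initializer())

    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()

# --- On a later day, in a fresh session: no retraining needed ---
tf.reset_default_graph()
with tf.Session() as sess:
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    # ... run inference with sess.run(...) ...
```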
On the other hand, I would strongly recommend that you change your working environment to something more productive for you. In Google Cloud you have the Cloud ML Engine service, which might be a good fit for the kind of work you are doing. It allows you to train your models and perform predictions without needing an instance running all the required software. I happen to have worked a little with TensorFlow recently; at first I was also using a virtualized instance, but after following some tutorials I was able to save some money by migrating my work to ML Engine, as you are only charged for what you use. If you are using your VM only for that purpose, take a look at it.
You can of course consult all the available documentation, but as a first quickstart, if you are interested in ML Engine, I recommend you to have a look at how to train your models and how to get your predictions.
I have just started experimenting with deep learning and computer vision technologies. I came across this awesome tutorial. I have set up the TensorFlow environment using Docker and trained it on my own sets of objects, and it provided good accuracy when I tested it out.
Now I want to make the same more real-time. For example, instead of giving an image of an object as the input, I want to utilize a webcam and make it recognize the object with the help of TensorFlow. Can you guys guide me with the right place to start with this work?
You may want to look at TensorFlow Serving so that you can decouple compute from sensors (and distribute the computation), or our C++ API. Beyond that, TensorFlow was written with an emphasis on throughput rather than latency, so batch samples as much as you can. You don't need to run TensorFlow on every frame, so input from a webcam should definitely be within the realm of possibility. Making the network smaller and buying better hardware are popular options.
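As an illustration of that last point, here is a minimal sketch of a webcam loop that only runs the model on every Nth frame, assuming a TF2-style detection SavedModel that can be called directly on a uint8 image batch; the model path, input size and frame stride are placeholders:

```python
# Minimal sketch: read webcam frames with OpenCV and run the model only on
# every Nth frame. Model path, input size and stride are placeholders.
import cv2
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("/path/to/exported_model")
cap = cv2.VideoCapture(0)          # default webcam
stride = 5                         # run inference on every 5th frame only
frame_idx, last_outputs = 0, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % stride == 0:
        rgb = cv2.cvtColor(cv2.resize(frame, (320, 320)), cv2.COLOR_BGR2RGB)
        last_outputs = detect_fn(tf.constant(rgb[np.newaxis, ...], dtype=tf.uint8))
    frame_idx += 1
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```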