Is it possible to train an H2O model with a GPU and predict with a CPU?

For training speed, it would be nice to be able to train an H2O model with GPUs, take the model file, and then predict on a machine without GPUs.
It seems like that should be possible in theory, but with H2O release 3.13.0.341 it doesn't seem to happen, except for the XGBoost model.
When I run gpustat -cup I can see the GPUs kick in when I train H2O's XGBoost model. This doesn't happen with DL, DRF, GLM, or GBM.
I wouldn't be surprised if a difference in floating-point precision (16, 32, or 64 bits) caused some inconsistency, not to mention the vagaries of multiprocessor modeling, but I think I could live with that.
(This is related to my question here, but now that I understand the environment better I can see that the GPUs aren't used all the time.)
How can I tell if H2O 3.11.0.266 is running with GPUs?

The new XGBoost integration in H2O is the only GPU-capable algorithm in H2O (proper) at this time. So you can train an XGBoost model on GPUs and score on CPUs, but that's not true for the other H2O algorithms.
There is also the H2O Deep Water project, which provides integration between H2O and three third-party deep learning backends (MXNet, Caffe and TensorFlow), all of which are GPU-capable. So you can train those models using a GPU and score on a CPU as well. You can download the H2O Deep Water jar file (or R package, or Python module) at the Deep Water link above, and you can find out more info in the Deep Water GitHub repo README.
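To make the XGBoost path concrete, here is a minimal Python sketch, assuming the h2o package is installed and a GPU is available; the dataset, response column, and paths are placeholders. The GPU is only used for training, and the exported MOJO can be scored on a CPU-only machine.
import h2o
from h2o.estimators.xgboost import H2OXGBoostEstimator

h2o.init()
train = h2o.import_file("train.csv")  # placeholder dataset

# backend="gpu" requests GPU training; H2O falls back to CPU if no GPU is found
model = H2OXGBoostEstimator(backend="gpu", ntrees=100)
model.train(y="response", training_frame=train)  # "response" is a placeholder column

# Download the MOJO; it can be loaded and scored on a machine without GPUs
mojo_path = model.download_mojo(path="/tmp")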

Yes: you do the heavy work of training on a GPU, save the weights, and then your CPU only has to do the matrix multiplications for predictions.
In Keras you can train your model and save the neural network weights:
model.save_weights('your_model_weights.h5')
model.load_weights('your_model_weights.h5')
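A minimal round-trip sketch, assuming a small dense model and random data as placeholders (the same script runs with or without a GPU; Keras uses whatever device is available):
import numpy as np
from tensorflow import keras

# Build the same architecture on both machines
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# On the GPU machine: train and save the weights
model.fit(np.random.rand(100, 10), np.random.rand(100, 1), epochs=5)
model.save_weights("your_model_weights.h5")

# On the CPU machine: rebuild the architecture, load the weights, predict
model.load_weights("your_model_weights.h5")
predictions = model.predict(np.random.rand(5, 10))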

Related

YOLOv3 deployment on Jetson TX2

I am facing a problem with deploying YOLO object detection on the TX2.
I use a pre-trained YOLOv3 (trained on the COCO dataset) to detect a limited set of objects (I mostly care about five classes, not all of them). The speed is too low for real-time detection, and the accuracy is not perfect (but acceptable) on my laptop. I'm thinking of making it faster with multithreading or multiprocessing on my laptop; is that possible for YOLO?
But my main problem is that the algorithm does not run on the Raspberry Pi or the NVIDIA TX2.
Here are my questions:
In general, is it possible to run YOLOv3 on the TX2 without any modifications such as accelerators or model compression techniques?
I cannot run the model on the TX2. First I got an error regarding the camera, so I decided to run the model on a video; this time I got a 'cannot allocate memory in static TLS block' error. What is the reason for this error? The model is too big: it uses 16 GB of GPU memory on my laptop, while the Raspberry Pi and the TX2 have less than 8 GB. As far as I know there are two solutions: using a smaller model, or using TensorRT or pruning. Do you have any idea if there is another way?
If I use tiny-YOLO I will get lower accuracy, and that is not what I want. Is there any way to run an object detection model with high real-time performance, in terms of both accuracy and speed (FPS), on the Raspberry Pi or the NVIDIA TX2?
If I clean the COCO data down to just the objects I care about and then train the same model, I should get higher accuracy and speed, but the size would not change. Am I correct?
In general, what is the best model in terms of accuracy for real-time detection, and what is the best in terms of speed?
What about MobileNet? Is it better than the YOLO models in terms of both accuracy and speed?
1- Yes, it is possible. I have already run YOLOv3 on a Jetson Nano.
2- It depends on the model and the input resolution of the data. You can decrease the input resolution. Input images are transferred to GPU VRAM for the model, so large input sizes allocate a lot of memory. As far as I remember, I ran regular YOLOv3 on a Jetson Nano (which is weaker than the TX2) two years ago. Also, you can use YOLOv3-tiny and TensorRT, as you mention. There are many sources on the web, like this & this.
3- I suggest you have a look here. With this repo you can do transfer learning with your own dataset, optimize the model with TensorRT, and run it on the Jetson.
4- The size does not depend on the dataset; it depends on the model architecture (because that determines the weights). Speed probably does not change. Accuracy depends on your dataset; it can be better or worse. If any class in COCO is similar to a class in your dataset, I suggest transfer learning.
5- You have to find the right model with a small size, enough accuracy, and acceptable speed. There is no single best model; there is only the best model for your case, which also depends on your dataset. You can compare the accuracy and FPS of some of the models here.
6- Most people use MobileNet as the feature extractor. Read this paper. You will see that YOLOv3 has better accuracy, while SSD with a MobileNet backbone has better FPS. I suggest you use the jetson-inference repo.
Using the jetson-inference repo, I get enough accuracy with an SSD model and get 30 FPS. Also, I suggest using a MIPI-CSI camera on the Jetson; it is faster than USB cameras.
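For reference, a detection loop with the jetson-inference Python bindings boils down to a few calls; this sketch follows the pattern of the repo's detectnet example and assumes a recent version of the library and a MIPI-CSI camera at csi://0:
import jetson.inference
import jetson.utils

# Load a pretrained SSD-MobileNet-v2 detector (optimized with TensorRT on first run)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")      # MIPI-CSI camera
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)                  # list of detected objects
    display.Render(img)
    display.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))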
I fixed problems 1 and 2 just by swapping the import order of OpenCV and TensorFlow inside the script (see the sketch below). Now I can run YOLOv3 without any modification on the TX2. I get an average of 3 FPS.
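For anyone hitting the same 'cannot allocate memory in static TLS block' error, the fix was only a change in import order inside the script, something along these lines; which order works can depend on how OpenCV and TensorFlow were built for the board, so treat this as a sketch:
# Original order that triggered the static TLS error on the TX2 (illustrative):
# import cv2
# import tensorflow as tf

# Swapped order, no other changes to the script:
import tensorflow as tf
import cv2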

Tensorflow Serving Performance Very Slow vs Direct Inference

I am running in the following scenario:
Single Node Kubernetes Cluster (1x i7-8700K, 1x RTX 2070, 32GB RAM)
1 Tensorflow Serving Pod
4 Inference Client Pods
Each inference client grabs images from one of 4 separate cameras (one camera each) and passes them to TF-Serving for inference, in order to understand what is seen on the video feeds.
Previously I was doing inference inside the inference client pods individually, calling TensorFlow directly, but that was hard on the graphics card's RAM. TensorFlow Serving was introduced to the mix quite recently to save RAM, since we no longer load duplicate models onto the graphics card.
And the performance is not looking good; for 1080p images it looks like this:
Direct TF: 20ms for input tensor creation, 70ms for inference.
TF-Serving: 80ms for gRPC serialization, 700-800ms for inference.
The TF-Serving pod is the only one that has access to the GPU and it is bound exclusively. Everything else operates on CPU.
Are there any performance tweaks I could do?
The model I'm running is Faster R-CNN Inception V2 from the TF Model Zoo.
Many thanks in advance!
This is from TF Serving documentation:
Please note, while the average latency of performing inference with TensorFlow Serving is usually not lower than using TensorFlow directly, where TensorFlow Serving shines is keeping the tail latency down for many clients querying many different models, all while efficiently utilizing the underlying hardware to maximize throughput.
From my own experience, I've found TF Serving to be useful in providing an abstraction over model serving which is consistent, and does not require implementing custom serving functionalities. Model versioning and multi-model which come out-of-the-box save you lots of time and are great additions.
Additionally, I would recommend batching your requests if you haven't already. I would also suggest playing around with the TENSORFLOW_INTER_OP_PARALLELISM, TENSORFLOW_INTRA_OP_PARALLELISM, and OMP_NUM_THREADS arguments to TF Serving. Here is an explanation of what they are.
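On the batching point, one client-side option in this setup would be to send all four camera frames in a single Predict call instead of one call per frame. A rough sketch using the TF Serving gRPC API; the service address, model name, signature name, and input tensor name are assumptions that depend on your exported SavedModel:
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("tf-serving:8500")            # assumed service address
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

frames = np.random.randint(0, 255, (4, 1080, 1920, 3), dtype=np.uint8)  # stand-in for 4 camera frames

request = predict_pb2.PredictRequest()
request.model_spec.name = "detector"                          # assumed model name
request.model_spec.signature_name = "serving_default"
request.inputs["inputs"].CopyFrom(tf.make_tensor_proto(frames))  # assumed input tensor name

response = stub.Predict(request, 10.0)                        # 10-second deadline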
Maybe you could try OpenVINO? It's a heavily optimized toolkit for inference. You could utilize your i7-8700K and run some frames in parallel. Here are some performance benchmarks for the very similar i7-8700T.
There is even OpenVINO Model Server which is very similar to Tensorflow Serving.
Disclaimer: I work on OpenVINO.

Can H2O AutoML benefit from a GPU instance on the SageMaker platform?

I'm running some projects with H2O AutoML using SageMaker notebook instances, and I would like to know if H2O AutoML can benefit from a GPU SageMaker instance. If so, how should I configure the notebook?
H2O AutoML contains a handful of algorithms, and one of them is XGBoost, which has been part of H2O AutoML since H2O version 3.22.0.1. XGBoost is the only GPU-capable algorithm inside of H2O AutoML; however, a lot of the models trained in AutoML are XGBoost models, so it can still be useful to utilize a GPU. Keep in mind that you must use H2O 3.22 or above to use this feature.
My suggestion is to test it on a GPU-enabled instance and compare the results to a non-GPU instance and see if it's worth the extra cost.
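Configuration-wise, there is nothing special to set in the notebook itself: if the instance has working CUDA drivers and a visible GPU, H2O's XGBoost should pick it up automatically (its backend defaults to auto). A minimal sketch for the comparison, with the dataset and response column as placeholders:
import time
import h2o
from h2o.automl import H2OAutoML

h2o.init()
train = h2o.import_file("train.csv")     # placeholder dataset

start = time.time()
aml = H2OAutoML(max_models=20, seed=1)
aml.train(y="response", training_frame=train)   # "response" is a placeholder column
print("AutoML runtime: %.1f s" % (time.time() - start))
print(aml.leaderboard.head())

# Run the same script on a GPU instance and a CPU-only instance and compare runtimes.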

Why is MobileNetV2 faster than MobileNetV1 only on mobile devices?

I am studying Google's brand-new MobileNetV2 architecture.
While studying, I read this statement in the TensorFlow model zoo on GitHub:
'For example Mobilenet V2 is faster on mobile devices than Mobilenet V1, but is slightly slower on desktop GPU.'
So, my question is,
How could that be possible? I really want to know why.
From https://arxiv.org/abs/1903.08469v1 :
"However, MobileNet V2 uses depthwise separable convolutions which are not directly supported in GPU firmware (the cuDNN library). Therefore, MobileNet V2 tends to be slower than ResNet18 in most experimental setups. Note that the same issue disqualifies usage of the DenseNet architecture [12], since it requires efficient convolution over a non-contiguous tensor, which is still not supported in cuDNN."
From their published paper, MobileNetV2: Inverted Residuals and Linear Bottlenecks, under subtopic 5 (Implementation Notes), section 5.1, Memory efficient inference:
"The inverted residual bottleneck layers allow a particularly memory efficient implementation which is very important for mobile applications." (and more in the paper)
According to the TensorFlow team, it's optimized to be smaller in size and can also be used with TF Lite, which, as far as we know, is indeed meant for mobile use. It's slower on a desktop GPU probably because V2 has more conv layers compared to V1, which would make sense if training takes more time to finish. For now, we don't do training and inference on mobile because of the hunger for computational speed, which leads to power hunger as well.
Hope this answers the question.

Training time of Tensorflow Object Detection API on MSCOCO

How long does it typically take to train a Faster R-CNN model on MS COCO on a single GPU (e.g. a K40 or a 1080) using the TensorFlow Object Detection API? It would be great if you could provide training times for other models (SSD and R-FCN), too.
I have never trained a model on COCO using a single GPU --- we typically train using ~10 K40 GPUs with asynchronous SGD, which takes 3-4 days to converge on COCO. SSD and R-FCN take about the same amount of time.