eGPU with USB3.0 to train CNN - tensorflow

I want to use Keras with tensorflow-gpu to train a CNN at work.
Since I cannot use a cloud GPU server such as AWS (the data cannot leave our intranet), I want to try an eGPU.
The problem is that my office computer has no Thunderbolt 3 interface, only USB 3.0 Type-A.
So I would have to use an adapter that converts Thunderbolt 3 to USB 3.0.
Thunderbolt 3 offers about 5 GB/s of bandwidth, while USB 3.0 offers only about 500 MB/s.
Does this bandwidth disadvantage cause a serious performance hit when training a CNN?
Currently it takes 24 hours to train MobileNetV2 on 1400 photos of size 244*244.
I would be happy if an eGPU could bring this down to 30 minutes.

You would bottleneck your GPU by a factor of roughly 32 compared to PCIe 3.0 x16, so you will take quite a large performance hit. I would recommend buying a PC with the GPU on a proper PCIe slot instead.
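That said, whichever link the GPU sits behind, an input pipeline that overlaps data preparation (and the host-to-device copy of each batch) with the training step keeps transfer stalls from making things even worse. A minimal tf.data sketch, where the photo directory, image size and batch size are placeholders for your setup:

```python
import tensorflow as tf

# Hypothetical image folder; adjust path, image size and batch size to your data.
dataset = tf.keras.utils.image_dataset_from_directory(
    "photos/", image_size=(244, 244), batch_size=32)

dataset = (dataset
           .cache()                      # keep decoded images in host RAM after the first epoch
           .prefetch(tf.data.AUTOTUNE))  # prepare the next batch while the current step runs

# model.fit(dataset, epochs=...) would then consume this pipeline.
```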

Related

YOLOv3 deployment on Jetson TX2

I am facing a problem with deploying YOLO object detection on a TX2.
I use a pre-trained YOLOv3 (trained on the COCO dataset) to detect a limited set of objects (I mostly care about five classes, not all of them). On my laptop the speed is too low for real-time detection and the accuracy is not perfect (but acceptable). I am thinking of making it faster with multithreading or multiprocessing on my laptop; is that possible for YOLO?
But my main problem is that the algorithm does not run on the Raspberry Pi or the NVIDIA TX2.
Here are my questions:
In general, is it possible to run YOLOv3 on the TX2 without any modification such as accelerators or model compression techniques?
I cannot run the model on the TX2. First I got an error regarding the camera, so I decided to run the model on a video instead; this time I got the error 'cannot allocate memory in static TLS block'. What is the reason for this error? The model is too big: it uses 16 GB of GPU memory on my laptop, while the Raspberry Pi and the TX2 have less than 8 GB of GPU memory. As far as I know there are two solutions: using a smaller model, or using TensorRT or pruning. Do you have any idea whether there is another way?
If I use Tiny-YOLO I will get lower accuracy, which is not what I want. Is there any way to run an object detection model in real time with good accuracy and speed (FPS) on a Raspberry Pi or an NVIDIA TX2?
If I filter the COCO data down to just the objects I care about and then train the same model, I would get higher accuracy and speed but the model size would not change. Am I correct?
In general, what is the best model for real-time detection in terms of accuracy, and what is the best in terms of speed?
How about MobileNet? Is it better than the YOLO models in terms of both accuracy and speed?
1- Yes, it is possible. I have already run YOLOv3 on a Jetson Nano.
2- It depends on the model and the input resolution of the data. You can decrease the input resolution. Input images are transferred to GPU VRAM for the model, and large input sizes can allocate a lot of memory. As far as I remember, I ran the full YOLOv3 on a Jetson Nano (which is weaker than the TX2) two years ago. You can also use YOLOv3-tiny and TensorRT, as you mention. There are many sources on the web like this & this.
3- I suggest you have a look here. In this repo you can do transfer learning with your dataset, optimize the model with TensorRT, and run it on the Jetson.
4- The size does not depend on the dataset; it depends on the model architecture (because it contains the weights). Speed probably will not change either. Accuracy depends on your dataset; it can be better or worse. If any COCO class is similar to a class in your dataset, I suggest transfer learning.
5- You have to find the right model with a small size, enough accuracy, and acceptable speed. There is no single best model, only the best model for your case, which also depends on your dataset. You can compare the accuracy and FPS of some models here.
6- Most people use MobileNet as the feature extractor. Read this paper: you will see that YOLOv3 has better accuracy, while SSD with a MobileNet backbone has better FPS. I suggest you use the jetson-inference repo.
Using the jetson-inference repo, I get sufficient accuracy with an SSD model at 30 FPS. I also suggest using a MIPI-CSI camera on the Jetson; it is faster than USB cameras.
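For reference, a minimal detection loop with the jetson-inference Python bindings might look like the sketch below. The model name, camera URI and API names follow the repo's detectnet.py example and may differ between releases, so treat this as a starting point rather than the definitive usage:

```python
# Minimal sketch based on the jetson-inference Python examples.
# "ssd-mobilenet-v2" and the CSI camera URI are example values.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2")   # TensorRT-optimized SSD
camera = jetson.utils.videoSource("csi://0")           # MIPI-CSI camera
display = jetson.utils.videoOutput("display://0")      # on-screen window

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)                       # run detection on the frame
    display.Render(img)
    display.SetStatus("detected {:d} objects".format(len(detections)))
```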
I fixed problems 1 and 2 simply by swapping the import order of OpenCV and TensorFlow inside the script. Now I can run YOLOv3 without any modification on the TX2, at an average of 3 FPS.
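For anyone hitting the same 'cannot allocate memory in static TLS block' error, the fix above amounts to changing nothing but the top of the script. The comment does not say which order worked, so you may need to try both; one commonly reported workaround on ARM boards is to import OpenCV first:

```python
# Swapping the import order of OpenCV and TensorFlow avoided the
# 'cannot allocate memory in static TLS block' error on the TX2.
# Which order works can depend on the platform, so try both.
import cv2
import tensorflow as tf

# ... rest of the detection script is unchanged.
```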

Why is the output video too slow in darknet?

I trained my own dataset for YOLOv2 in darknet. I am using Ubuntu 18.04 and have no GPU. When I run detection on a test video (taken on my smartphone), it is too slow. Is it because I don't have a GPU, or is there some other reason?
Can someone help?
Without a GPU, YOLOv2 is going to be very slow, and if you have a modern smartphone the video is likely high resolution with a high frame rate. I'm not sure of your implementation, but you're probably processing every frame in the video instead of skipping every other frame or only processing every 10th frame.
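A minimal frame-skipping loop with OpenCV might look like this, where detect_frame and the video path are placeholders for your own detector call and file:

```python
# Sketch of frame skipping: run detection only on every Nth frame and
# reuse the last detections in between. detect_frame is a placeholder.
import cv2

SKIP = 10                            # process every 10th frame
cap = cv2.VideoCapture("test_video.mp4")

frame_idx = 0
detections = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % SKIP == 0:
        detections = detect_frame(frame)   # hypothetical darknet/YOLO call
    # ... draw `detections` on `frame`, display, write out, etc.
    frame_idx += 1

cap.release()
```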
If you don't have a GPU available (and aren't going to get one), another way to get GPU-like performance is Intel's OpenVINO, if you have a recent Core i-series processor. You would be able to convert your YOLOv2 model to OpenVINO and run it on a CPU with really fast inference times (likely <100 ms per frame). That said, when I ran YOLOv3 through OpenVINO it was quite slow compared to other object detectors, and especially compared to a MobileNet.
I also have some demos set up to compare YOLOv3 on a CPU against OpenVINO on a CPU; you can check those out on SugarKubes.
One big reason is of course that you don't have a GPU. The other reason is the model you use: YOLOv2 is faster than YOLOv3, but still slower than Tiny-YOLO or Tiny-YOLOv3.
This is the trade-off between accuracy and speed: the faster the model, the lower the accuracy. If you are going for speed, there are three solutions I can think of:
Use a GPU (I know it's expensive, but worth the price; an NVIDIA GTX 1060 or better would be great).
Change your model to Tiny-YOLO or Tiny-YOLOv3. I recommend Tiny-YOLOv3 for higher FPS:
Tiny-YOLOv3: 220 FPS
Tiny-YOLO: 207 FPS
YOLOv2: 67 FPS
Use OpenVINO, as Andrew Pierno said.
Download the model from here: https://pjreddie.com/darknet/yolo/
YOLOv2's link: https://pjreddie.com/darknet/yolov2/
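If you want to try Tiny-YOLOv3 on the CPU without rebuilding darknet, OpenCV's dnn module can load the cfg/weights files from the links above. A rough sketch; the file names and the 416x416 input size are the usual darknet defaults, so adjust them to the files you actually download:

```python
# Rough sketch: Tiny-YOLOv3 inference on the CPU with OpenCV's dnn module.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

img = cv2.imread("frame.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())  # raw YOLO output maps
# ... decode `outputs` into boxes/scores and apply non-max suppression.
```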

Is there a workaround for running out of memory on GPU with tensorflow?

I am currently building a 3D convolutional network for video classification. The main problem is that I run out of memory too easily: even with batch_size set to 1, there is not enough memory to train the CNN the way I want.
I am using a GTX 970 with 4 GB of VRAM (3.2 GB free for TensorFlow). I was expecting it to still train my network, perhaps using system RAM as a backup or doing the calculations in parts, but so far I can only run it by making the CNN simpler, which directly affects performance.
I could run it on the CPU, but that is significantly slower, so it is not a good solution either.
Is there a better solution than to buy a better GPU?
Thanks in advance.
Using gradient checkpointing will help with memory limits: intermediate activations are recomputed during the backward pass instead of being stored for the whole forward pass, trading extra compute for lower memory use.
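In TensorFlow 2 the usual entry point for this is tf.recompute_grad. A minimal sketch; the 3-D conv block and input shape are placeholders, and the behaviour of recompute_grad has varied across TF versions, so verify the memory savings on your own setup:

```python
# Minimal sketch of gradient checkpointing with tf.recompute_grad.
import tensorflow as tf

conv_block = tf.keras.Sequential([
    tf.keras.layers.Conv3D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv3D(64, 3, padding="same", activation="relu"),
])
conv_block.build((None, 16, 112, 112, 3))   # create the variables up front

# Activations inside the block are recomputed during the backward pass
# instead of being kept in GPU memory.
checkpointed = tf.recompute_grad(lambda x: conv_block(x, training=True))

x = tf.random.normal((1, 16, 112, 112, 3))
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(checkpointed(x))
grads = tape.gradient(loss, conv_block.trainable_variables)
```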

What's the impact of using a GPU in the performance of serving a TensorFlow model?

I trained a neural network using a GPU (1080 Ti). Training on the GPU is far faster than on the CPU.
Now I want to serve this model using TensorFlow Serving. I am interested to know whether using a GPU in the serving process has the same impact on performance.
Since training works on batches but inference (serving) handles asynchronous requests, do you suggest using a GPU when serving a model with TensorFlow Serving?
You still need to do a lot of tensor operations on the graph to make a prediction, so a GPU still provides a performance improvement for inference. Take a look at this NVIDIA paper; they did not test on TF, but it is still relevant:
Our results show that GPUs provide state-of-the-art inference performance and energy efficiency, making them the platform of choice for anyone wanting to deploy a trained neural network in the field. In particular, the Titan X delivers between 5.3 and 6.7 times higher performance than the 16-core Xeon E5 CPU while achieving 3.6 to 4.4 times higher energy efficiency.
The short answer is yes, you'll get roughly the same speedup running on the GPU after training, with a few minor qualifications.
In training you run two passes over the data (forward and backward), all of which happens on the GPU; during feed-forward inference you do less work, so relatively more time is spent transferring data into GPU memory compared to computation. This is probably a minor difference, though, and you can now load the GPU asynchronously if that becomes an issue (https://github.com/tensorflow/tensorflow/issues/7679).
Whether you actually need a GPU for inference depends on your workload. If it isn't overly demanding you might get away with the CPU anyway; after all, the compute per sample is less than half of what training needs. So consider the number of requests per second you need to serve and test whether your CPU can keep up. If it can't, it's time to get the GPU out!
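A rough way to run that test is to time single-sample predictions on the hardware you plan to serve from. A sketch with Keras, where the model file and input shape are placeholders for your own model:

```python
# Rough throughput check: how many single-sample predictions per second
# can this machine sustain? Model path and input shape are placeholders.
import time
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_model.h5")
sample = np.random.rand(1, 224, 224, 3).astype("float32")

model.predict(sample, verbose=0)          # warm-up run

n = 200
start = time.perf_counter()
for _ in range(n):
    model.predict(sample, verbose=0)
elapsed = time.perf_counter() - start

print(f"{n / elapsed:.1f} requests/sec, {1000 * elapsed / n:.1f} ms/request")
# Compare this against your expected peak requests/sec before deciding on a GPU.
```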

Why do we need GPU for Deep Learning?

As the question already suggests, I am new to deep learning. I know that training a model will be slow without a GPU. If I am willing to wait, is it OK to use only a CPU?
Many of the operations in deep learning (and neural networks in general) can be run in parallel, meaning they can be calculated independently and aggregated later. This is, in part, because most of the operations are on vectors.
A typical consumer CPU has 4 to 8 cores, and hyperthreading allows them to be treated as 8 or 16 threads respectively. Server CPUs can have 4 to 24 cores (8 to 48 threads). Additionally, most modern CPUs have SIMD (single instruction, multiple data) extensions that let a single thread perform vector operations in parallel. Depending on the data type you're working with, an 8-core CPU can perform 8 * 2 * 4 = 64 to 8 * 2 * 8 = 128 vector calculations at once (cores * threads per core * SIMD lanes).
NVIDIA's 1080 Ti has 3584 CUDA cores, which essentially means it can perform 3584 vector calculations at once (hyperthreading and SIMD don't come into play here). That's roughly 28 to 56 times more operations at once than an 8-core CPU. So whether you're training a single network or several to tune hyperparameters, it will probably be significantly faster on a GPU than a CPU.
Depending on what you are doing, it might take a lot longer. I have had 20x speedups by using a GPU. If you read some computer vision papers, they train their networks on ImageNet for about 1-2 weeks. Now imagine if that took 20x longer...
Having said that, there are much simpler tasks. For example, for my HASY dataset you can train a reasonable network without a GPU in probably 3 hours. Similar small datasets are MNIST, CIFAR-10, and CIFAR-100.
The computationally intensive part of a neural network is the many matrix multiplications. How do we make them faster? By doing the operations at the same time instead of one after the other. That, in a nutshell, is why we use GPUs (graphics processing units) instead of CPUs (central processing units).
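You can see this directly by timing a large matrix multiplication on each device. A small sketch with TensorFlow; the matrix sizes are arbitrary, and the GPU branch assumes a machine where TensorFlow can see a GPU:

```python
# Time the same large matrix multiplication on the CPU and, if available, the GPU.
import time
import tensorflow as tf

def time_matmul(device, n=4000, repeats=10):
    with tf.device(device):
        a = tf.random.normal((n, n))
        b = tf.random.normal((n, n))
        tf.matmul(a, b)                      # warm-up
        start = time.perf_counter()
        for _ in range(repeats):
            c = tf.matmul(a, b)
        _ = c.numpy()                        # force pending work to finish
        return (time.perf_counter() - start) / repeats

print("CPU:", time_matmul("/CPU:0"), "s per matmul")
if tf.config.list_physical_devices("GPU"):
    print("GPU:", time_matmul("/GPU:0"), "s per matmul")
```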
Google used to have a powerful system that they had specially built for training huge nets. This system cost $5 billion and used multiple clusters of CPUs.
A few years later, researchers at Stanford built a system with the same computational power to train their deep nets, but using GPUs. They reduced the cost to $33K while delivering the same processing power as Google's system.
Source: https://www.analyticsvidhya.com/blog/2017/05/gpus-necessary-for-deep-learning/
Deep learning is about building a mathematical model of reality, or of some part of it, for a specific use, by using a lot of training data. You collect a lot of data from the real world and train the model so it can predict outcomes when you give it new data as input. That training needs a lot of data and a lot of computation, with many computationally heavy operations. That is why companies such as NVIDIA, which traditionally made gaming GPUs for graphics, now earn a large part of their revenue from AI and machine learning, and why companies like Google and Facebook currently use GPUs to train their ML models.
If you ask this question you probably need a GPU/TPU (Tensor Processing Unit).
You can get one for "free" with Google Colab; they have pretty good cloud GPU infrastructure.
You can start working with your Google account in a Jupyter notebook: https://colab.research.google.com/notebooks/intro.ipynb
Kaggle (a Google-owned data science competition site) also offers Jupyter notebooks with a GPU, but only within certain limits:
notebooks: https://www.kaggle.com/kernels
Documentation for it: https://www.kaggle.com/docs/notebooks
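On either platform, a quick way to confirm that the notebook runtime actually has a GPU attached and that TensorFlow can see it is:

```python
# Quick check that TensorFlow can see a GPU in the notebook runtime.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus if gpus else "none - enable a GPU runtime")
```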