Why is multi-GPU TensorFlow retraining not working?

I have been training my TensorFlow retraining algorithm on a single GTX Titan and it works just fine, but when I try to use multiple GPUs in the flowers retraining example, nvidia-smi shows that only one GPU is being utilized.
Why is this happening? Multiple GPUs do work when training the Inception model from scratch, but not during retraining.

TensorFlow's flowers retraining example does not work with multiple GPUs at all, even if you set --num_gpus > 1. It does support a single GPU, as you noted.
The model would need to be modified to utilize multiple GPUs in parallel. Unfortunately, TensorFlow cannot automatically split a single model like the one in the flowers retraining example across multiple GPUs at this time.
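To give a sense of what such a modification involves, here is a minimal sketch (my own, not part of retrain.py) of the TF 1.x "tower" pattern: each GPU gets a replica of the graph over its slice of the batch, and the per-tower gradients are averaged by hand. The toy model, shapes, and the NUM_GPUS value are placeholders.

```python
import tensorflow as tf  # TF 1.x, matching retrain.py's era

NUM_GPUS = 2  # hypothetical; retrain.py has no such flag
x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 1])
opt = tf.train.GradientDescentOptimizer(0.01)

# One "tower" per GPU: each gets a slice of the batch and shares variables.
tower_grads = []
for i, (xs, ys) in enumerate(zip(tf.split(x, NUM_GPUS), tf.split(y, NUM_GPUS))):
    with tf.device('/gpu:%d' % i):
        logits = tf.layers.dense(xs, 1, name='fc', reuse=(i > 0))
        loss = tf.losses.mean_squared_error(ys, logits)
        tower_grads.append(opt.compute_gradients(loss))

# Average the per-tower gradients and apply them once.
avg_grads = [(tf.reduce_mean(tf.stack([g for g, _ in gv]), 0), gv[0][1])
             for gv in zip(*tower_grads)]
train_op = opt.apply_gradients(avg_grads)
```

This is essentially the pattern the old TF 1.x CIFAR-10 multi-GPU tutorial used.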

Related

How to do inference with TensorFlow 2 on multiple GPUs

I have a large dataset to run inference on. There are 10 GPUs in my machine, but when I do inference only one GPU does any work. The framework I am using is TensorFlow 2.6. I used to use PyTorch, but for various reasons I now have to use TensorFlow, which I am not familiar with.
I want to know how to use all of the GPUs during inference while keeping the order of the Dataset at the same time.
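For reference, one common approach (a sketch of my own, not from this thread, with a placeholder model): build the model under tf.distribute.MirroredStrategy and call Model.predict, which splits each batch across the replicas and returns the outputs in the original dataset order.

```python
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()      # picks up every visible GPU
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Placeholder model; a real one would come from tf.keras.models.load_model(...).
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(32,))])

# predict() is distribution-aware: each batch is split across the replicas
# and the outputs are reassembled in the original dataset order.
data = tf.data.Dataset.from_tensor_slices(
    np.random.rand(1000, 32).astype("float32")).batch(256)
preds = model.predict(data)                      # shape (1000, 10)
```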

Suggestions on how to train pre-trained models using YOLOv4?

I am looking for the optimal way to train pre-trained models for YOLOv4.
My local environment is Debian 10 with a
GeForce RTX 2060 SUPER
GeForce GTX 750 Ti
I am planning to train the models on different image sizes, and I am going to use the trained models as part of microservices developed in Java or Python.
Should I use any third-party services like Google Colab?
And the second question: which framework is better to use (PyTorch, TensorFlow, etc.)?
Thank you for any suggestions.
You can use this repo: https://github.com/AlexeyAB/darknet. It has full instructions for custom training and transfer learning, as well as Colab training and inference scripts.
You can use the provided convolutional layer weights to get better results faster, even on a small dataset. Images of different dimensions can be used for training.
Custom training instructions:
https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects
Training and inference in Colab:
https://colab.research.google.com/drive/1_GdoqCJWXsChrOiY8sZMr_zbr_fH-0Fg

Mirroring a multi-GPU model across multiple GPUs

I have a TensorFlow (TF 2.0)/Keras model that uses multiple GPUs for its computations. There are two branches in the model, and each branch runs on a separate GPU.
I have a 4-GPU system that I want to use for training, and I would like to mirror this model so that GPUs 1 and 2 hold one copy of the model and GPUs 3 and 4 hold the mirrored copy.
Will tf.distribute.MirroredStrategy handle this mirroring automatically? Or does it assume that my model will be a single GPU model?
If tf.distribute.MirroredStrategy will not handle this, does anyone have any suggestions for how to customise MirroredStrategy to achieve this training structure?
This sounds a lot like you are going to need a custom training loop.
MirroredStrategy replicates the model on each GPU, but since your model already spans two GPUs, I don't think it is going to work properly.
But you can try it out and check with nvidia-smi what TensorFlow is doing.
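To illustrate what that custom route might look like, here is a rough, untested sketch with placeholder layer sizes (assuming 4 visible GPUs): each branch is pinned with tf.device inside a subclassed model, one copy is built per GPU pair, and the gradients are averaged by hand to keep the two copies in sync.

```python
import tensorflow as tf

class TwoBranchModel(tf.keras.Model):
    """Placeholder two-branch model; each branch is pinned to its own GPU."""
    def __init__(self, dev_a, dev_b):
        super().__init__()
        self.dev_a, self.dev_b = dev_a, dev_b
        self.branch_a = tf.keras.layers.Dense(128, activation="relu")
        self.branch_b = tf.keras.layers.Dense(128, activation="relu")
        self.head = tf.keras.layers.Dense(1)

    def call(self, x):
        with tf.device(self.dev_a):
            a = self.branch_a(x)
        with tf.device(self.dev_b):
            b = self.branch_b(x)
        return self.head(tf.concat([a, b], axis=-1))

# Two copies of the model, one per GPU pair, started with identical weights.
replicas = [TwoBranchModel("/gpu:0", "/gpu:1"), TwoBranchModel("/gpu:2", "/gpu:3")]
for m in replicas:
    m(tf.zeros([1, 64]))                             # create the variables
replicas[1].set_weights(replicas[0].get_weights())
opt = tf.keras.optimizers.SGD(0.01)

@tf.function
def train_step(x_shards, y_shards):
    per_replica_grads = []
    for model, x, y in zip(replicas, x_shards, y_shards):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.keras.losses.MSE(y, model(x)))
        per_replica_grads.append(tape.gradient(loss, model.trainable_variables))
    # Average the gradients across the copies and apply to both, which is
    # what MirroredStrategy would do for a single-GPU model.
    avg = [tf.reduce_mean(tf.stack(gs), 0) for gs in zip(*per_replica_grads)]
    for model in replicas:
        opt.apply_gradients(zip(avg, model.trainable_variables))
```

Plain SGD is used here because it keeps no optimizer slot variables, so one optimizer instance can be shared across both copies without extra bookkeeping.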

Using GPU capabilities for retraining images using retrain.py from TensorFlow Hub

I am new to TensorFlow and am using retrain.py to train on some images. Suppose I have a larger database of 10,000 images and a GPU-capable system: how can I run retrain.py on my Nvidia GPU so that training is done faster?
I am following the steps from the link below
https://www.tensorflow.org/hub/tutorials/image_retraining
To get GPU support, be sure to install the pip package tensorflow-gpu instead of plain tensorflow. You should see some performance benefit from that for retrain.py. That said, retrain.py shows its age (it far predates TF Hub) and does not utilize GPUs very well, because it does not properly batch images when extracting bottleneck values.
If you are ready to live on the cutting edge of TF 2.0.0alpha0 (announced last week), take a look at Hub's
examples/colab/tf2_image_retraining.ipynb which is considerably smaller, faster (if you use a GPU), and even supports fine-tuning the image module.
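As a quick sanity check that the GPU build is actually being used (these are standard TF 1.x test utilities, not part of retrain.py):

```python
import tensorflow as tf

# Both calls come from the stable TF 1.x API.
print(tf.test.is_gpu_available())   # True once tensorflow-gpu sees a CUDA device
print(tf.test.gpu_device_name())    # e.g. '/device:GPU:0'
```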

Can we run training and validation on separate GPUs using the TensorFlow Object Detection API on TensorFlow 1.12?

I have two Nvidia Titan X cards in my machine and want to fine-tune a COCO-pretrained Inception V2 model on a single specific class. I have created the train/val tfrecords and changed the config to run the TensorFlow Object Detection training pipeline.
I am able to start the training, but it hangs (without any OOM) whenever it tries to evaluate a checkpoint. Currently it uses only GPU 0, with other resources (RAM, CPU, IO, etc.) in the normal range, so I am guessing that the GPU is the bottleneck. I wanted to try splitting training and validation across separate GPUs to see if that works.
I tried to look for a place where I could do something like setting "CUDA_VISIBLE_DEVICES" differently for the two processes, but unfortunately the latest TensorFlow Object Detection API code (using TensorFlow 1.12) makes it very difficult to do so. I am also unable to verify my assumption that training and validation run in the same process, as my machine hangs. Could someone please suggest where to look to solve this?
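For reference, the generic pattern the question alludes to (not specific to the Object Detection API) is to set CUDA_VISIBLE_DEVICES before TensorFlow is imported and to launch training and evaluation as two separate processes:

```python
import os

# Training process: pin to GPU 0. An evaluation process started separately
# would set this to "1" so checkpoints are evaluated on the second Titan X.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf  # must be imported only after the variable is set
```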