I have an RTX 3070 GPU, which only supports CUDA 11.x and therefore TF 2.x. Which Mask R-CNN implementation should I use with that TF version?
I am trying to train a custom object detection model using a pre-trained model from the TensorFlow 1 Model Zoo.
I am using the model ssd_mobilenet_v2_coco_2018_03_29.
I created a suitable environment for training by following this tutorial: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/tensorflow-1.14/training.html
The thing is, when I tried to train the model using tensorflow-gpu==1.14.0, I always got an error saying Model diverged with loss = NaN.
Then I uninstalled tensorflow-gpu==1.14.0 and installed tensorflow==1.14.0 (so it did not use my GPU), and all of a sudden it started to work!
I have no idea how that is possible...
The command I am using:
python model_main.py --alsologtostderr --model_dir=models\ssd_mobilenet_v2_coco_2018_03_29\export --pipeline_config_path=models\ssd_mobilenet_v2_coco_2018_03_29\pipeline.config --num_train_steps=2000
Python version is 3.7
OS is Windows 10
My graphics card is an Nvidia GeForce RTX 3050; I used CUDA v10.0 and cuDNN v7.4.1.
Any ideas?
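In case it helps anyone debugging a similar setup, here is a minimal check (a sketch, using the TF 1.x tf.test API) of whether TensorFlow was built with CUDA and actually sees the GPU:
import tensorflow as tf
print(tf.__version__)                # e.g. 1.14.0
print(tf.test.is_built_with_cuda())  # False for the CPU-only "tensorflow" package
print(tf.test.is_gpu_available())    # True only if a compatible CUDA/cuDNN stack was found
If the GPU is visible but training still diverges, a CUDA/compute-capability mismatch is worth checking first, which is what the answer below points to.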
This is because RTX 30-series cards don't support CUDA 10. If you need TF 1.x (1.15), you can install NVIDIA's TensorFlow build (1.15), which runs on CUDA 11:
pip install nvidia-pyindex
pip install nvidia-tensorflow[horovod]
Note: Only supports Python 3.6 or 3.8 [Not 3.7]
https://developer.nvidia.com/blog/accelerating-tensorflow-on-a100-gpus/
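After installing, a quick sanity check (a sketch; NVIDIA's build reports itself as 1.15.x):
import tensorflow as tf
print(tf.__version__)                            # should report 1.15.x
print(tf.test.is_gpu_available(cuda_only=True))  # should be True on an RTX 30-series card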
Does anyone know if Google Colab's GPUs are only compatible with TensorFlow 2.x? I'm trying to run TensorFlow 1 code, so I am pip installing tensorflow==1.15 (and tensorflow-gpu==1.15) and enabling the GPU in my notebook settings, but I don't see any GPU speed-up.
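A minimal way to confirm whether the installed TF 1.15 actually sees Colab's GPU (a sketch; an empty string from gpu_device_name() means everything runs on the CPU):
import tensorflow as tf
print(tf.__version__)             # should be 1.15.x
print(tf.test.gpu_device_name())  # e.g. '/device:GPU:0'; empty string means CPU only
Also note that after pip installing a different TensorFlow version in Colab, you need to restart the runtime before the new version is picked up.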
I discovered that the TensorFlow version for LabVIEW IMAQ DL is:
TensorFlow 1.4.0 on Windows, with NumPy 1.4.1, Python 3.6.8 and Keras 2.1.5.
I hope it helps.
My PC configuration:
Ubuntu 18.04
TensorFlow 1.15.0
CUDA 10.0
cuDNN 7.6.5
GPU: Nvidia GTX 1080
I want to train the ssd-mobilenet-v1/v2-quant-coco models from the TFOD model zoo on an NVIDIA GPU, targeting the Coral TPU.
I have some questions:
1- Is it possible to train these TFOD API models on NVIDIA GPUs?
2- Is there a difference between training results on NVIDIA GPUs and Google TPUs under the same conditions? If so, please explain the differences in more detail.
3- Is it possible to do quantization-aware training for INT8 on NVIDIA GPUs, with the same accuracy as TPU quantization-aware training? (See the sketch below.)
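For question 3, a rough sketch of what quantization-aware training looks like in TF 1.x, using the tf.contrib.quantize rewriter (build_model_and_loss is a hypothetical stand-in for your model code; I believe the TFOD quantized configs apply the same rewrite via their graph_rewriter option):
import tensorflow as tf
train_graph = tf.Graph()
with train_graph.as_default():
    loss = build_model_and_loss()  # hypothetical helper: builds the network and returns its loss
    # Insert fake-quantization ops so weights/activations learn INT8-friendly ranges
    tf.contrib.quantize.create_training_graph(input_graph=train_graph, quant_delay=2000)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
This runs on an ordinary NVIDIA GPU; the fake-quant ops only simulate INT8 during training, and the actual INT8 conversion happens at export time.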
I installed tensorflow-gpu first:
pip install tensorflow-gpu
pip install keras
but when I'm running a GPU task, it does not run on the GPU.
It runs on the CPU.
import keras
import tensorflow as tf
print(keras.__version__)
print(tf.__version__)
Output:
2.3.1
2.1.0
Try using tensorflow-gpu==2.0.0 and Keras==2.3.1; this should solve your problem.
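After reinstalling, a quick check that TF 2.0 can see the GPU (a sketch; list_physical_devices lives under tf.config.experimental in 2.0):
import tensorflow as tf
print(tf.__version__)                                       # should be 2.0.0
print(tf.config.experimental.list_physical_devices('GPU'))  # non-empty list if the GPU is visible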