Tensorflow model slower than in the documentation - tensorflow

I am using the code at https://keras.io/examples/imdb_fasttext/ to test the performance of my PC. I have an RTX 2060, Ubuntu 18.04, TensorFlow 2.0, CUDA 10.1, and cuDNN 7.6. I get 22 secs/epoch using the bi-grams, but according to that page only 2 secs/epoch are needed on a GTX 980M GPU. I was hoping for about a second per epoch with my configuration.
Can anyone help me understand what could be the issue?
Many thanks,
Roxana
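
One quick sanity check in this situation (a minimal sketch, assuming the TensorFlow 2.0 setup described above) is to confirm that TensorFlow actually sees the GPU; if the list below comes back empty, training is running on the CPU, which would explain the gap:

import tensorflow as tf

# If this prints an empty list, TensorFlow has fallen back to the CPU and the
# epoch times cannot be compared with the GPU numbers in the Keras example.
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.experimental.list_physical_devices('GPU'))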

Related

Using the RTX 3070 laptop GPU for CNN model training with a windows system

I'm trying to use my laptop's RTX 3070 GPU for CNN model training because I have to run an exhaustive grid search to tune the hyperparameters. I tried many different methods; however, I could not get it done. Can anyone kindly point me in the right direction?
I followed this procedure:
Installed the NVIDIA CUDA Toolkit 11.2.
Installed NVIDIA cuDNN 8.1 by downloading and copying the files (bin, include, lib) into NVIDIA GPU Computing Toolkit/CUDA/v11.2.
Set up the environment variables by adding both the bin and libnvvm paths to the system path.
Installed TensorFlow 2.11 and Python 3.8 in a new conda environment.
However, I was unable to get the system to use the available GPU. The code seems to be using only the CPU, and when I run the following query I get the output below.
query:
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
Output:
TensorFlow version: 2.11.0
Num GPUs Available: 0
Am I missing something here, or has anyone else run into the same issue?
You should use the DirectML plugin. Starting with TensorFlow 2.11, GPU support has been dropped for native Windows, so the DirectML plugin is needed.
You can follow the tutorial here to install it.
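A minimal sketch of how to verify the plugin picks up the GPU, assuming the tensorflow-directml-plugin package is installed alongside the CPU build (pip install tensorflow-cpu==2.11.0 tensorflow-directml-plugin):

import tensorflow as tf

# With the DirectML plugin installed, the GPU is registered as a pluggable
# device and should appear in the physical device list.
print(tf.config.list_physical_devices('GPU'))

# If a GPU is listed, ops placed on it should now run there.
if tf.config.list_physical_devices('GPU'):
    with tf.device('/GPU:0'):
        x = tf.random.normal((1024, 1024))
        print(tf.reduce_sum(tf.matmul(x, x)))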

Model returns only NaN values on GTX A5000 but not on 1080TI

I have replaced a GTX 1080 Ti graphics card with a GTX A5000 in a desktop machine and reinstalled Ubuntu, upgrading from 16.04 to 20.04 in order to meet the requirements.
But now I can't retrain or predict with our current model; when loading the model, Keras hangs for a very long time, and all predicted results are NaN values.
We use Keras 2.2.4 with TensorFlow 2.1.0 and CUDA 10.1.243, which I installed using Conda, and I have tried different drivers.
If I put the old GTX 1080 Ti card back into the machine, the code works fine.
Any idea what could be wrong? Could it be that the A5000 does not support the same models as the old 1080 Ti card?
OK, I can confirm that this setup works on the GTX A5000:
CUDA: 11.6.0
TensorFlow: 2.7.0
Driver version: 510.47.03
Thanks to #talonmies for his comment.
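A quick way to check for this kind of mismatch (a minimal sketch, assuming a TensorFlow 2.x install) is to compare the CUDA version the TensorFlow wheel was built against with the compute capability of the detected GPU; the A5000 is Ampere (compute capability 8.6), which needs CUDA 11.1 or newer:

import tensorflow as tf

# CUDA/cuDNN versions the installed TensorFlow wheel was built against
info = tf.sysconfig.get_build_info()
print("Built for CUDA:", info["cuda_version"], "cuDNN:", info["cudnn_version"])

# Compute capability of each visible GPU (8.6 for the A5000, 6.1 for a 1080 Ti)
for gpu in tf.config.list_physical_devices('GPU'):
    print(tf.config.experimental.get_device_details(gpu))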

Tensorflow 1.14 performance issue on rtx 3090

I am running a model written with TensorFlow 1.x on 4x RTX 3090, and it takes much longer to start training than on 1x RTX 3090. However, once training starts, it finishes earlier on 4x than on 1x. I am using CUDA 11.1 and TensorFlow 1.14 on both setups.
Secondly, when I use 1x RTX 2080 Ti with CUDA 10.2 and TensorFlow 1.14, it takes less time to start training than 1x RTX 3090 with CUDA 11.1 and TensorFlow 1.14. Roughly, it takes 5 minutes on 1x RTX 2080 Ti, 30-35 minutes on 1x RTX 3090, and 1.5 hours on 4x RTX 3090 to start training for one of the datasets.
I'll be grateful if anyone can help me resolve this issue.
I am using Ubuntu 16.04, a Core i9-10980XE CPU, and 32 GB RAM in both the 2080 Ti and 3090 machines.
EDIT: I found out that TF takes a long time to start up on Ampere-architecture GPUs, according to this, but I'm still unclear whether that is what's happening here; and if it is, does any solution exist?
TF 1.x does not have binaries built for CUDA 11.1, so it takes time to compile at start-up: the kernels for the RTX 3090 have to be JIT-compiled from PTX, which takes a long time.
A general workaround is to increase the JIT compilation cache size using export CUDA_CACHE_MAXSIZE=2147483648 (here 2147483648 is the cache size in bytes; you can set any number, taking the memory limit and its use by other processes into account). Refer to https://www.tensorflow.org/install/gpu for clarification. With the cache in place, start-up time in subsequent runs will be small. But even after this, the binaries produced at that first start will still not be compatible with CUDA 11.1.
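As a minimal sketch (assuming the variables are set before TensorFlow initializes CUDA), the same cache settings can also be applied from Python at the very top of the training script:

import os

# Enlarge the CUDA JIT compilation cache (~2 GB here) and keep caching enabled,
# so the PTX kernels compiled on the first run are reused on later runs.
os.environ["CUDA_CACHE_MAXSIZE"] = "2147483648"
os.environ["CUDA_CACHE_DISABLE"] = "0"

import tensorflow as tf  # first run still pays the JIT cost; later runs start fast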
The best option is to migrate the code from TF 1.x to 2.x (2.4+) to make it run on the RTX 30xx series, or to try compiling TF 1.x from source with CUDA 11.1 (not sure about this).
As Thunder explained, TensorFlow 1.x is not supported on Nvidia Ampere GPUs, and it looks like it never will be, since the Ampere streaming multiprocessors (SM_86) are only supported from CUDA 11.1 onwards, see https://forums.developer.nvidia.com/t/can-rtx-3080-support-cuda-10-1/155849/2. TensorFlow 1.x has not been fully supported on new versions of CUDA for a while now, probably for a similar reason to the one described in the link above. Unfortunately, TensorFlow 1.x is no longer supported or maintained, see https://github.com/tensorflow/tensorflow/issues/43629#issuecomment-700709796
However, if you have to use the StyleGAN2 model, you might have some luck with Nvidia TensorFlow, which apparently has support for version 1.15 on Ampere GPUs, see https://developer.nvidia.com/blog/accelerating-tensorflow-on-a100-gpus/
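A rough sketch of that route (assuming Linux and NVIDIA's pip index, as described in the blog post above; the install commands are shell steps, shown here as comments):

# Install NVIDIA's TensorFlow 1.15 build from their pip index (shell, not Python):
#   pip install nvidia-pyindex
#   pip install nvidia-tensorflow[horovod]
import tensorflow as tf

print(tf.__version__)                # expected to report a 1.15.x build
print(tf.test.is_built_with_cuda())  # True for the CUDA 11-enabled build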
Here's the proposed solution on Linux:
https://www.pugetsystems.com/labs/hpc/How-To-Install-TensorFlow-1-15-for-NVIDIA-RTX30-GPUs-without-docker-or-CUDA-install-2005/
On Windows, I managed to get my RTX 3080 Ti working with TF 1.15 using WSL2 with DirectML:
https://learn.microsoft.com/en-us/windows/ai/directml/gpu-tensorflow-wsl
The result is about 1.5 times faster compared to my RTX 2080 Ti.

Do CUDA 11 and the RTX 3080 support TensorFlow and Keras?

I attached an RTX 3080 to my computer, but when training on Keras 2.3.1 and TensorFlow 1.15 I got an error: "failed to run cuBLAS_STATUS_EXECUTION_FAILED, did not mem zero GPU location . . . check failed: start_event != nullptr && stop_event != nullptr". I think the problem is that the recently released RTX 3080 and CUDA 11 are not yet supported by Keras 2.xx and TensorFlow 1.xx. Is this right? And what causes that problem?
At the moment of writing this, the Nvidia 30xx series only fully supports CUDA version 11.x, see https://forums.developer.nvidia.com/t/can-rtx-3080-support-cuda-10-1/155849/2
TensorFlow 1.15 has not been fully supported on CUDA 10.1 and newer, probably for a similar reason to the one described in the link above. Unfortunately, TensorFlow 1.x is no longer supported or maintained, see https://github.com/tensorflow/tensorflow/issues/43629#issuecomment-700709796
TensorFlow 2.4 is your best bet with an Ampere GPU. It now has a stable release, and it has official support for CUDA 11.0, see https://www.tensorflow.org/install/source#gpu
As TensorFlow 1.x is never going to be updated or maintained by the TensorFlow team, I would strongly suggest moving to TensorFlow 2.x. Personal preferences aside, it's better in almost every way, and it has the tf.compat module for backwards compatibility with TensorFlow 1.x code if rewriting your code base is not an option. However, even that module is no longer maintained, which really shows that version 1.x is dead, see https://www.tensorflow.org/guide/versions#what_is_covered
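A minimal sketch of that compatibility path (assuming TensorFlow 2.4 is installed), running legacy graph-mode code through tf.compat.v1:

import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # restores TF 1.x-style graphs, sessions and placeholders

x = tf.placeholder(tf.float32, shape=(None, 3))
y = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # prints [6.]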
However, if you're dead set on using TensorFlow 1.15, you might have a chance with Nvidia Tensorflow, which apparently has support for version 1.15 on Ampere GPUs, see https://developer.nvidia.com/blog/accelerating-tensorflow-on-a100-gpus/

Tensorflow quantization

I would like to optimize a graph using TensorFlow's transform_graph tool. I tried optimizing the graph from MultiNet (and others with similar encoder-decoder architectures). However, the optimized graph is actually slower when using quantize_weights, and even much slower when using quantize_nodes. According to TensorFlow's documentation, there may be no improvement, or it may even be slower, when quantizing. Is this normal with the graph/software/hardware below?
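For reference, a minimal sketch of the kind of invocation involved (hedged: it assumes the Python wrapper for graph_transforms available in later TF 1.x releases, and the file paths and input/output node names are placeholders):

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

# Load a frozen graph, apply the quantization transforms, and write it back out.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:  # placeholder path
    graph_def.ParseFromString(f.read())

transformed = TransformGraph(
    graph_def,
    ["input"],                                      # placeholder input node names
    ["output"],                                     # placeholder output node names
    ["quantize_weights", "quantize_nodes"],
)

with tf.gfile.GFile("frozen_model_quantized.pb", "wb") as f:
    f.write(transformed.SerializeToString())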
Here is my system information for your reference:
OS Platform and Distribution: Linux Ubuntu 16.04
TensorFlow installed from: using TF source code (CPU) for graph conversion, using binary-python(GPU) for inference
TensorFlow version: both using r1.3
Python version: 2.7
Bazel version: 0.6.1
CUDA/cuDNN version: 8.0/6.0 (inference only)
GPU model and memory: GeForce GTX 1080 Ti
I can post all the scripts used to reproduce if necessary.
It seems like quantization in Tensorflow only happens on CPUs. See: https://github.com/tensorflow/tensorflow/issues/2807
I had the same problem in a PC environment. My quantized model is 9 times slower than the non-quantized one.
But when I ported my quantized model into an Android application, it did speed up.
It seems like this currently only works on CPUs, and only ARM-based CPUs such as Android phones.