The question is straightforward, but I haven't found a real answer to it anywhere.
Quite simply: how do I know that when I build a Sequential() model in TensorFlow via Keras, it is going to use my GPU?
In Torch this is easy: you just pass the 'device' parameter and can verify it via the volatile GPU utilization metric in nvidia-smi. I tried the same while building a model in TF, but nvidia-smi shows 0% utilization across all GPU devices.
TensorFlow uses the GPU for most operations by default when:
It detects at least one GPU
Its GPU support is installed and configured properly. For information on how to install and configure it properly for GPU support, see: https://www.tensorflow.org/install/gpu
One requirement worth emphasizing is that a specific version of the CUDA library has to be installed, e.g. TensorFlow 2.5 requires CUDA 11.2. Check here for the CUDA version required by each version of TF.
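To see which CUDA and cuDNN versions your installed TensorFlow binary was actually built against, you can query the build info (a minimal sketch, assuming TensorFlow 2.3 or newer, where tf.sysconfig.get_build_info() is available):
import tensorflow as tf
# Build metadata of the installed binary; on a GPU build this includes the CUDA/cuDNN versions.
build = tf.sysconfig.get_build_info()
print("CUDA build:", build.get("is_cuda_build"))
print("CUDA version:", build.get("cuda_version"))
print("cuDNN version:", build.get("cudnn_version"))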
To know whether it detects GPU devices:
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
By default, it will also print debug messages to stderr that indicate whether GPU support is configured properly and whether it detects any GPU devices.
To validate using nvidia-smi that it is really using the GPU:
You have to define a sufficiently deep and complex neural network model such that the bottleneck is on the GPU side. This can be achieved by increasing the number of layers and the number of channels in each layer, as in the sketch below.
When training or evaluating the model with model.fit() or model.evaluate(), the GPU utilization reported by nvidia-smi should be high.
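For example, a model along these lines is usually heavy enough to show clear GPU utilization in nvidia-smi during model.fit() (a rough sketch; the layer widths and the random data are arbitrary and only there to generate load):
import numpy as np
import tensorflow as tf

# Deliberately wide CNN so that the GPU, not the input pipeline, is the bottleneck.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(256, 3, activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(256, 3, activation='relu'),
    tf.keras.layers.Conv2D(256, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Random data, only to keep the GPU busy; watch nvidia-smi while this runs.
x = np.random.rand(512, 64, 64, 3).astype('float32')
y = np.random.randint(0, 10, size=(512,))
model.fit(x, y, batch_size=64, epochs=3)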
To know exactly where each operation will be executed, you can add the following line at the beginning of your code:
tf.debugging.set_log_device_placement(True)
For more information: https://www.tensorflow.org/guide/gpu
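A minimal example of what this looks like in practice (the placement of each op, e.g. /device:GPU:0, is logged to stderr):
import tensorflow as tf

tf.debugging.set_log_device_placement(True)

# The log shows on which device this matmul is executed.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [1.0, 1.0]])
print(tf.matmul(a, b))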
Related
I have a low GPU and a high CPU usage on MNIST dataset with this model. I installed CUDA for the GPU, but nothing has changed. Can you help me?
(Model definition and training code were posted here.)
tensorflow-gpu requires cuDNN in addition to CUDA. cuDNN requires an NVIDIA developer account; download it from https://developer.nvidia.com/cudnn.
You can refer to the page below,
https://www.tensorflow.org/install/gpu
and install cuDNN and set the required paths, depending on your OS, as described there.
If that does not help, you should post the output log (terminal) of launching your training script.
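As a quick sanity check, you can also ask TensorFlow directly whether the installed build has CUDA support and whether it can see a GPU (a small sketch using the TF 1.x-era test APIs, which are deprecated but still present in early TF 2.x):
import tensorflow as tf

# True only if the installed package was built with CUDA (i.e. tensorflow-gpu).
print("Built with CUDA:", tf.test.is_built_with_cuda())
# True only if a GPU device is actually usable at runtime.
print("GPU available:", tf.test.is_gpu_available())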
TensorFlow came out with the XLA compiler, which compiles the C++ TensorFlow backend by targeting LLVM. My understanding of XLA was that it was a step towards supporting generic accelerator devices, so long as there was LLVM -> device support.
TensorFlow Lite was released more recently, replacing TensorFlow Mobile, and appears to be where the work on targeting embedded and mobile devices is focused, with an apparent emphasis on the embedded DSPs and GPUs that are common optional processors in these environments. TensorFlow Lite appears to hand operations off to the Android NNAPI (Neural Networks API) and supports a subset of the TensorFlow ops.
So this raises the question: which direction is Google going in to support non-CUDA devices? And are there use cases for XLA beyond what I described?
I work on XLA. The XLA compiler has three backends: for CPU, GPU, and TPU. The CPU and GPU ones are based on LLVM and are open source, and the TPU one is closed source.
I don't know what the plans are for XLA for mobile devices, so I can't comment on that.
A benefit you get by using XLA with your TF model, instead of executing the model directly, is that XLA fuses a lot of ops for you. See this post for instance.
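If you want to try it, XLA can be enabled per function in TF 2.x roughly like this (a sketch; the argument is called jit_compile in recent releases and experimental_compile in older ones):
import tensorflow as tf

# Ask XLA to compile this function; the matmul, bias add and relu can be fused.
@tf.function(jit_compile=True)
def fused_step(x, w):
    return tf.nn.relu(tf.matmul(x, w) + 1.0)

x = tf.random.normal((64, 128))
w = tf.random.normal((128, 32))
print(fused_step(x, w).shape)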
I have installed CUDA and cuDNN, but the latter was not working and gave a lot of error messages in Theano. Now I am training moderately sized deep conv nets in Keras/TensorFlow without getting any cuDNN error messages. How can I check whether cuDNN is now being used?
tl;dr: If tensorflow-gpu works, then CuDNN is used.
The prebuilt binaries of TensorFlow (at least since version 1.3) link to the CuDNN library. If CuDNN is missing, an error message will tell you ImportError: Could not find 'cudnn64_7.dll'. TensorFlow requires that this DLL be installed....
According to the TensorFlow install documentation for version 1.5, CuDNN must be installed for GPU support even if you build it from source. There are still a lot of fallbacks in the TensorFlow code for the case of CuDNN not being available -- as far as I can tell it used to be optional in prior versions.
Here are two lines from the TensorFlow source that explicitly state and enforce that CuDNN is required for GPU acceleration.
There is a special GPU version of TensorFlow that needs to be installed in order to use the GPU (and CuDNN). Make sure the installed python package is tensorflow-gpu and not just tensorflow.
You can list the packages containing "tensorflow" with conda list tensorflow (or just pip list, if you do not use anaconda), but make sure you have the right environment activated.
When you run your scripts with GPU support, they will start like this:
Using TensorFlow backend.
2018- ... C:\tf_jenkins\...\gpu\gpu_device.cc:1105] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7845
To test it, just type into the console:
import tensorflow as tf
tf.Session()
To check if you "see" the CuDNN from your python environment and therewith validate a correct PATH variable, you can try this:
import ctypes
ctypes.WinDLL("cudnn64_7.dll") # use the file name of your cudnn version here.
You might also want to look into the GPU-optimized Keras layers:
CuDNNLSTM
CuDNNGRU
They are significantly faster:
https://keras.io/layers/recurrent/#cudnnlstm
We saw a 10x improvement going from the LSTM to CuDNNLSTM Keras layers.
Note:
We also saw a 10x increase in VMS (virtual memory) usage on the machine. So there are tradeoffs to consider.
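A minimal sketch of the swap (this assumes the standalone Keras 2.x API, where these layers live in keras.layers and need the TensorFlow backend plus a GPU; in current tf.keras the plain LSTM layer dispatches to the cuDNN kernel automatically when its constraints are met):
from keras.models import Sequential
from keras.layers import CuDNNLSTM, Dense

# Drop-in replacement for LSTM(128, ...): runs only on GPU and supports a
# restricted set of options (e.g. no recurrent_dropout), but is much faster.
model = Sequential([
    CuDNNLSTM(128, input_shape=(100, 32)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()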
I would like to optimize a graph using Tensorflow's transform_graph tool. I tried optimizing the graph from MultiNet (and others with similar encoder-decoder architectures). However, the optimized graph is actually slower when using quantize_weights, and even much slower when using quantize_nodes. From Tensorflow's documentation, there may be no improvements, or it may even be slower, when quantizing. Any idea if this is normal with the graph/software/hardware below?
Here is my system information for your reference:
OS Platform and Distribution: Linux Ubuntu 16.04
TensorFlow installed from: source (CPU build) for the graph conversion, binary Python package (GPU build) for inference
TensorFlow version: both using r1.3
Python version: 2.7
Bazel version: 0.6.1
CUDA/cuDNN version: 8.0/6.0 (inference only)
GPU model and memory: GeForce GTX 1080 Ti
I can post all the scripts used to reproduce if necessary.
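For reference, the timings come from a script roughly like this (a sketch only, TF 1.x API; 'image:0' and 'output:0' are hypothetical tensor names and the input shape is a placeholder, to be replaced with the real ones from the graph):
import time
import numpy as np
import tensorflow as tf  # TF 1.x API

def time_frozen_graph(pb_path, input_name, output_name, runs=50):
    # Load a (possibly quantized) frozen GraphDef and time repeated inference.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        x = graph.get_tensor_by_name(input_name)
        y = graph.get_tensor_by_name(output_name)
        with tf.Session(graph=graph) as sess:
            feed = {x: np.random.rand(1, 224, 224, 3).astype('float32')}
            sess.run(y, feed_dict=feed)  # warm-up run
            start = time.time()
            for _ in range(runs):
                sess.run(y, feed_dict=feed)
            return (time.time() - start) / runs

print('seconds per run:', time_frozen_graph('optimized_graph.pb', 'image:0', 'output:0'))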
It seems like quantization in Tensorflow only happens on CPUs. See: https://github.com/tensorflow/tensorflow/issues/2807
I ran into the same problem in a PC environment: my quantized model is about 9 times slower than the non-quantized one.
But when I ported the quantized model into an Android application, it did speed things up.
It seems that quantization currently only helps on CPUs, and specifically ARM-based CPUs such as those in Android phones.
I have previously asked whether it is possible to run TensorFlow with GPU support on a CPU. I was told that it is possible and was shown the basic code for switching which device I want to use, but not how to get the initial code working on a computer that doesn't have a GPU at all. For example, I would like to train on a computer that has an NVIDIA GPU but program on a laptop that only has a CPU. How would I go about doing this? I have tried just writing the code as normal, but it crashes before I can even switch which device I want to use. I am using Python on Linux.
This thread might be helpful: Tensorflow: ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory
I've tried to import TensorFlow with tensorflow-gpu loaded on my university's HPC login node, which does not have GPUs, and it works fine. I don't have an NVIDIA GPU in my laptop, so I never went through the installation process there, but I think the cause of your crash is that it cannot find the relevant CUDA and cuDNN libraries.
That said, why don't you just use the CPU version? As @Finbarr Timbers mentioned, the same code will still run a model on a computer with a GPU.
What errors are you getting? It is very possible to train on a GPU but develop on a CPU; many people do it, including myself. In fact, TensorFlow will automatically put your code on a GPU if one is available.
If you add the following code to your model, you can see which devices are being used:
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
This should change when you run your model on a computer with a GPU.
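If you want to pin ops to the CPU explicitly while developing on the laptop, here is a sketch in the same TF 1.x style as the snippet above:
import tensorflow as tf

# Force these ops onto the CPU; on the GPU machine you can drop the tf.device
# block (or use '/gpu:0') and the same code will run there.
with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))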