I am able to do this with their GPU, but with their TPU it gives me an error...
Does anybody around here know what I'm missing, please?
Does it make sense to actually use the TPU with CuDNNLSTM? Or is CuDNNLSTM just tailored for GPU?
Thanks a lot in advance.
keras.layers.CuDNNLSTM is only supported on GPUs. But in TensorFlow 2, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available.
Below are the details from Performance optimization and CuDNN kernels:
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available.
With this change, the prior keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your model without worrying about the hardware it will run on.
You can just use the built-in LSTM layer, tf.keras.layers.LSTM, and it will work on both TPUs and GPUs.
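For example, a model built with the plain tf.keras.layers.LSTM layer (a minimal sketch; the layer sizes are arbitrary) picks the CuDNN kernel automatically on a GPU and also compiles for a TPU:

import tensorflow as tf

model = tf.keras.Sequential([
    # With default arguments (tanh activation, sigmoid recurrent activation, etc.)
    # this layer uses the fused CuDNN kernel when a GPU is available
    tf.keras.layers.LSTM(64, input_shape=(100, 32)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

Note that the CuDNN kernel is only used when the layer keeps its default arguments (e.g. activation='tanh', recurrent_activation='sigmoid', unroll=False).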
Related
I am using tensorflow 1.15 due to a dependency on multiple other modules and am struggling to do quantization-aware training. I came across tensorflow_model_optimization, but it works with tensorflow 2.x. Is there any way quantization can be performed during training with tensorflow 1.15?
You cannot: quantization-aware training via tensorflow_model_optimization was introduced for TF 2.x, and there is no workaround for it in 1.x, so please upgrade to 2.x and let us know if you face any issues.
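For reference, in TF 2.x quantization-aware training wraps an existing Keras model; below is a minimal sketch (the tiny model and random data are placeholders, not anyone's actual code):

import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# A tiny stand-in Keras model; replace with your own
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Wrap the model with fake-quantization nodes for quantization-aware training
q_aware_model = tfmot.quantization.keras.quantize_model(model)

q_aware_model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

# Fine-tune as usual; random data here only to keep the sketch self-contained
x = np.random.rand(32, 8).astype("float32")
y = np.random.randint(0, 2, size=(32,))
q_aware_model.fit(x, y, epochs=1)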
I had a custom CNN implementation in Keras running with the TensorFlow backend. To improve generalizability I was working on adding regularization to the CNN model. The model works fine without any activity/kernel regularization. The moment I add an activity/kernel regularization, the model freezes partway through: training typically stops between batches/iterations of a single epoch (e.g. at batch 67/172). The issue is very repeatable and reproducible on my system, and I was able to localize it to the addition of regularization. It was strange to see this behavior and I could not find similar issues reported by others. I am not sure if I need to provide any additional information; if someone can guide me on what is lacking, I would be more than happy to provide it, and guidance on the issue would be greatly appreciated.
The following is some helpful information about the libraries/dependencies and setup:
Keras 2.4.3
Tensorflow 2.3.1
GPU: NVIDIA 1070 TI (8GB)
cudart64_101.dll was successfully opened
The code was written in Spyder running on Python 3.8
Input: batch size 32, input shape (32, 256, 64, 1)
Using the model.fit function to train the model
100,277 parameters, 99,523 trainable
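For reference, this is roughly what adding a kernel/activity regularizer to a Keras convolutional layer looks like (a minimal sketch; the regularizer types and factors are placeholders, not the asker's actual settings):

from tensorflow.keras import layers, regularizers

# A Conv2D layer with both kernel and activity regularization added
conv = layers.Conv2D(
    32, (3, 3), activation="relu",
    kernel_regularizer=regularizers.l2(1e-4),    # L2 penalty on the weights
    activity_regularizer=regularizers.l1(1e-5),  # L1 penalty on the layer output
)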
Actually, I think this issue was fixed after I updated the NVIDIA software to the latest version (CUDA 11.1) and added the most recent versions to the PATH.
Okay, so I've worked on a bunch of Deep Learning projects and internships now and I've never had to do heavy training. But lately I've been thinking of doing some Transfer Learning for which I'll need to run my code on a GPU. Now I have a system with Windows 10 and a dedicated NVIDIA GeForce 940M GPU. I've been doing a lot of research online, but I'm still a bit confused. I haven't installed the NVIDIA Cuda Toolkit or cuDNN or tensorflow-gpu on my system yet. I currently use tensorflow and pytorch to train my DL models. Here are my queries -
When I define a tensor in tf or pytorch, it is a CPU tensor by default, so all the training I've been doing so far has been on the CPU. So, if I make sure to install the correct versions of CUDA and cuDNN and tensorflow-gpu (specifically for tensorflow), I can run my models on my GPU using tf-gpu and pytorch and that's it? (I'm aware of torch.cuda.is_available() in pytorch to ensure pytorch can access my GPU, and of the device_lib module in tf to check if my GPU is visible to tensorflow; a sketch of these checks follows this list.) (I'm also aware that tf doesn't support all NVIDIA GPUs.)
Why does tf have a separate module for GPU support? PyTorch doesn't seem to have that, and all you need to do is cast your tensor from cpu() to cuda() to switch between them.
Why install cuDNN? I know it is a GPU-accelerated library of deep-neural-network primitives built on top of CUDA. But do tf-gpu and torch use it in the backend while training on the GPU?
After tf == 1.15, did they combine CPU and GPU support all into one package?
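For reference, the visibility checks mentioned in the first query look roughly like this (a minimal sketch, assuming TF 2.x):

import torch
print(torch.cuda.is_available())               # True if PyTorch can see a CUDA GPU

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))  # non-empty list if TensorFlow 2.x can see the GPU

# TF 1.x equivalent of the device_lib check mentioned above:
# from tensorflow.python.client import device_lib
# print(device_lib.list_local_devices())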
First of all, unfortunately the 940M is a rather weak GPU for training. I suggest you use Google Colab for faster training, though of course the 940M would still be faster than the CPU. So here are my answers to your four questions.
1-) Yes, if you install the requirements correctly, then you can run on the GPU. You can also manually place your data on the GPU; you can check how in the TensorFlow documentation. In PyTorch, you should specify the device you want to use: as you said, do device = torch.device("cuda" if args.cuda else "cpu"), then always call .to(device) on your models and data, and it will automatically use the GPU if one is available (see the sketch after this list).
2-) PyTorch also needs a CUDA-enabled build for GPU support. However, with recent updates both TF and PyTorch are easy to use for GPU-compatible code.
3-) Both TensorFlow and PyTorch rely on cuDNN. As far as I know you can use them without cuDNN, but it hurts performance; I'm not sure about this topic, though.
4-) Starting with tensorflow==1.15, the tensorflow pip package includes GPU support by default, so the separate tensorflow-gpu==1.15 package is no longer strictly needed (though it still exists). What they did with TF 2 was make TensorFlow more like Keras, so it is more streamlined than 1.15 and earlier.
The rest was already answered above. Regarding 3): cuDNN optimizes layers and similar operations at the hardware level, and those implementations are pure black magic. It is incredibly hard to write CUDA code that properly utilizes your GPU (how to load data onto the GPU, how to actually perform the operations using matrices, etc.).
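A minimal sketch of the device placement described in 1-) (the tiny linear model and random data are placeholders):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)              # move the model's parameters to the chosen device
inputs = torch.randn(4, 10).to(device)           # move each batch of inputs the same way
targets = torch.tensor([0, 1, 0, 1]).to(device)  # and the labels

outputs = model(inputs)
loss = nn.functional.cross_entropy(outputs, targets)
print(loss.item())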
I want to run inference on the CPU, although my machine has a GPU.
I wonder if it's possible to force TensorFlow to use the CPU rather than the GPU?
By default, TensorFlow will automatically use the GPU for inference, but since my GPU is weak (it runs out of memory), I wonder if there's a setting to force TensorFlow to use the CPU for inference?
For inference, I used:
tf.contrib.predictor.from_saved_model("PATH")
Assuming you're using TensorFlow 2.0, please check out this issue on GitHub:
[TF 2.0] How to globally force CPU?
The solution seems to be to hide the GPU devices from TensorFlow. You can do that using one of the methods described below:
TensorFlow 2.0:
my_devices = tf.config.experimental.list_physical_devices(device_type='CPU')  # list only the CPU devices
tf.config.experimental.set_visible_devices(devices=my_devices, device_type='CPU')  # make these the only visible devices
TensorFlow 2.1:
tf.config.set_visible_devices([], 'GPU')  # hide all GPU devices
(Credit to @ymodak and @henrysky, who answered the question on the GitHub issue.)
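Another option, not from the linked issue but commonly used and version-independent, is to hide the GPU from the CUDA runtime via an environment variable before TensorFlow is imported:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"      # hide all GPUs from the CUDA runtime
import tensorflow as tf                        # must be imported after the variable is set

print(tf.config.list_physical_devices("GPU"))  # TF 2.x check: should print an empty list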
As you know, the SVM in scikit-learn doesn't support GPU. By the way, TensorFlow has an SVM module, but I don't know whether TensorFlow's SVM works on the GPU or not. If it doesn't support the GPU, can I force it to use the GPU in code?