# tensorflow/tfjs-node-gpu works with NVIDIA P4 but fails with V100 on GKE

My tfjs-node-gpu code works great on an NVIDIA P4 on GKE (and using WebGL in a browser), but it fails on a V100 and a T4.
Node crashes in the first predict call inside my warmup. I'm using small 128x128 tiles to predict a 4x image upscale using the idealo-gans. The V100 initializes fine: it shows up in nvidia-smi, is displayed as a TF device, and the NUMA messages are all fine. It just hard-crashes my Node Express server. I'm having trouble finding the crash stack, since this is started in a Docker container, and my last attempt to log the crash from stderr failed.
I've tried both the latest tfjs-node-gpu 3.0 and 2.8.5. GKE is configured to install the NVIDIA drivers, currently 410.104, and CUDA 10.0.
I've tried enabling debug mode and passing {verbose: true} to the failing model.predict() call in my warmup function. Neither added any output to the warmup call, which is odd, since I do see output in the actual, non-warmup call to model.predict().
Any suggestions on how to debug further?

## Related

### Google Cloud Deep Learning On Linux VM throws Unknown Cuda Error

I am trying to set up a deep learning VM on Google Cloud, but I keep running into the same issue over and over again.
I follow all the steps: set up an N1-highmem-8 (8 vCPUs, 52 GB memory) instance, add a single T4 GPU, and select the Deep Learning image TensorFlow 2.4 (m69, CUDA 11.0). That's it.
After that, I SSH into the VM, run the script that installs all the NVIDIA drivers, and... when I begin using it, by simply running:
```python
from tensorflow.keras.layers import Input, Dense

i = Input((100,))
x = Dense(500)(i)
```
I keep getting failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error. By that point I haven't installed anything and haven't done anything custom, just the vanilla image from GCP.
What is more concerning is that, even if I delete the VM and then create a new one with the same config, sometimes the error won't happen immediately and sometimes it's present right off the bat.
Has anyone encountered this? I've googled around to see if anyone has faced this issue, and while I came across suggestions, all of them are old and have not worked for me. Moreover, suggestions on NVIDIA support forums tell me to reinstall everything, and the whole point of using a pre-built GCP image specifically for deep learning is so that I don't have to enter the hell of installing and resolving NVIDIA driver issues.
The issue is fixed with the M74 image, but you are using M69, so follow one of the two fixes provided in the Google Cloud public forum. You can mitigate the issue by:
Fix #1: Use the latest DLVM image (M74 or later) in a new VM instance. A fix was released with the M74 DLVM image, so new instances are no longer affected by this issue.
Fix #2: Patch your existing instance running an image older than M74. Run the following via an SSH session on the affected instance:
```bash
gsutil cp gs://dl-platform-public-nvidia/b191551132/restart_patch.sh /tmp/restart_patch.sh
chmod +x /tmp/restart_patch.sh
sudo /tmp/restart_patch.sh
sudo service jupyter restart
```
This only needs to be done once, and does not need to be rerun each time the instance is rebooted.
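After patching (or recreating the VM on an M74 image), a quick way to confirm that cuInit now succeeds is a minimal check like the following; an empty list means CUDA initialization is still failing:
```python
import tensorflow as tf

# The T4 should appear here once the driver initializes correctly.
print(tf.config.list_physical_devices('GPU'))
```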

### Stopping and starting a deep learning Google Cloud VM instance causes TensorFlow to stop recognizing the GPU

I am using the pre-built deep learning VM instances offered by Google Cloud, with an NVIDIA Tesla K80 GPU attached. I chose to have TensorFlow 2.5 and CUDA 11.0 automatically installed. When I start the instance, everything works great; I can run:
```python
import tensorflow as tf

tf.config.list_physical_devices()
```
and the call returns the CPU, the accelerated CPU, and the GPU. Similarly, if I run tf.test.is_gpu_available(), the function returns True.
However, if I log out, stop the instance, and then restart it, running the exact same code only sees the CPU, and tf.test.is_gpu_available() returns False. I get an error that looks like the driver initialization is failing:
```
E tensorflow/stream_executor/cuda/cuda_driver.cc:355] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
```
Running nvidia-smi shows that the machine still sees the GPU, but TensorFlow can't.
Does anyone know what could be causing this? I don't want to have to reinstall everything every time I restart the instance.
Some people (sadly not me) are able to resolve this by setting the following at the beginning of their script/main:
```python
import os

# This must run before TensorFlow initializes CUDA to have any effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```
I had to reinstall the CUDA drivers, and from then on it worked even after restarting the instance. You can configure your system settings on NVIDIA's website and it will give you the commands you need to run to install CUDA. It also asks whether you want to uninstall the previous CUDA version (yes!). This is luckily also very fast.
I fixed the same issue with the commands below, taken from https://issuetracker.google.com/issues/191612865?pli=1:
```bash
gsutil cp gs://dl-platform-public-nvidia/b191551132/restart_patch.sh /tmp/restart_patch.sh
chmod +x /tmp/restart_patch.sh
sudo /tmp/restart_patch.sh
sudo service jupyter restart
```
Option 1: Upgrade the Notebooks instance's environment; refer to the linked upgrade guide. Notebooks instances that can be upgraded are dual-disk, with one boot disk and one data disk; the upgrade process upgrades the boot disk to a new image while preserving your data on the data disk.
Option 2: Connect to the notebook VM via SSH and run the commands from the issue tracker link above. After the commands finish, the CUDA version will be updated to 11.3 and the NVIDIA driver to 465.19.01. Then restart the notebook VM.
Note: the issue has been solved in the GPU images, and new notebooks will be created with image version M74. The new image version is not yet mentioned in the Google public issue tracker, but you can find image version M74 in the console.

### Tensorboard Interactive Debugger Not Connecting

I am running the latest version of TensorFlow (1.7.0) on Windows. It added a new feature for debugging in TensorBoard. However, I am not able to get it to connect. I was previously using the LocalCLIDebugWrapperSession debugger, but I wanted to play with the new features in TensorBoardDebugWrapperSession.
When I run TensorBoard with tensorboard --logdir=dir --debugger_port 6064, it tells me "Creating InteractiveDebuggerPlugin at port 6064".
However, when I run my model in TensorFlow (after swapping out the LocalCLIDebugWrapperSession for the TensorBoardDebugWrapperSession), it never seems to connect to the interactive debugger. I don't get an error in the output from either TensorBoard or TensorFlow. Nothing happens, and the TensorBoard debugger page just shows: "Debugger is waiting for Session.run() connections..."
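For reference, the swap described above normally looks like this minimal sketch (the host:port string must match the --debugger_port passed to TensorBoard):
```python
import tensorflow as tf
from tensorflow.python import debug as tf_debug

sess = tf.Session()
# Route every Session.run() through the TensorBoard debugger plugin
# listening on the port given to --debugger_port.
sess = tf_debug.TensorBoardDebugWrapperSession(sess, "localhost:6064")
```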
It seems that TensorFlow does not support gRPC connections on Windows. A workaround is to use the Windows Subsystem for Linux. Note that there is no direct access to the system GPU from the subsystem, so it cannot be used with the tensorflow-gpu version. From the issue below: "grpc:// debug URL scheme is not implemented on Windows yet."
https://github.com/tensorflow/tensorflow/issues/17933

### After switching from GPU to CPU in TensorFlow with tf.device("/cpu:0"), the GPU undocks every time TF is imported

I am using Windows 7. After I tested my GPU in TensorFlow, which was awkwardly slow on a model already tested on the CPU, I switched to the CPU with:
```python
tf.device("/cpu:0")
```
I was assuming that I could switch back to the GPU with:
```python
tf.device("/gpu:0")
```
However, I got the following error message from Windows when I tried to rerun with this configuration:
The device "NVIDIA Quadro M2000M" is not exchange device and can not be removed.
With "nvida-smi" i looked for my GPU, but the system said the GPU is not there.
I restarted my laptop, tested if the GPU is there with "nvida-smi" and the GPU was recogniced.
I imported tensorflow again and started my model again, however the same error message pops up and my GPU vanished.
Is there something wrong with the configuration in one of the tensorflow configuration files? Or Keras files? What can i change to get this work again? Do you know why the GPU is so much slower that the 8 CPUs?
Solution: Reinstalling tensorflow-gpu worked for me.
However there is still the question why that happened and how i can switch between gpu and cpu? I dont want to use a second virtual enviroment.
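On the switching question: tf.device() is a context manager that scopes where the ops created inside it are placed; it is not a global switch, and calling it bare has no effect. A minimal TF 1.x sketch of per-device placement:
```python
import tensorflow as tf

# Ops are placed per device by building them inside a device scope;
# nothing is switched globally and no driver state is touched.
with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0]])          # pinned to the CPU

with tf.device("/gpu:0"):
    b = tf.matmul(a, a, transpose_b=True)  # pinned to the GPU

# allow_soft_placement falls back to the CPU when no GPU is available.
config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(b))
```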

### Tensorflow doesn't see GPU and uses CPU instead, how come?

I'm running a Python script using GPU-enabled TensorFlow. However, the program doesn't seem to recognize any GPU and starts using the CPU straight away. What could be the cause of this?
Just to add to the discussion: TensorFlow may stop seeing a GPU due to a CUDA initialization failure. In other words, TensorFlow detects the GPU but can't dispatch any op onto it, so it falls back to the CPU. In this case you should see an error in the log like this:
```
E tensorflow/stream_executor/cuda/cuda_driver.cc:481] failed call to cuInit: CUDA_ERROR_UNKNOWN
```
The cause is likely a conflict between different processes using the GPU simultaneously. When that is the case, the most reliable way I have found to get TensorFlow working again is to restart the machine. In the worst case, reinstall TensorFlow and/or the NVIDIA driver.
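To tell this apart from TensorFlow simply running as a CPU-only build, you can list the devices TensorFlow itself managed to initialize (a TF 1.x-style check; in TF 2.x, tf.config.list_physical_devices('GPU') serves the same purpose):
```python
from tensorflow.python.client import device_lib

# If cuInit failed, only CPU devices appear here even though
# nvidia-smi still shows the GPU.
print(device_lib.list_local_devices())
```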
See also one more case where the GPU suddenly stops working.