nvidia-smi shows the following output, indicating 3.77 GB utilized on GPU 0, yet no processes are listed for GPU 0:
(base) ~/.../fast-autoaugment$ nvidia-smi
Fri Dec 20 13:48:12 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50 Driver Version: 430.50 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 TITAN Xp Off | 00000000:03:00.0 Off | N/A |
| 23% 34C P8 9W / 250W | 3771MiB / 12196MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 TITAN Xp Off | 00000000:84:00.0 On | N/A |
| 38% 62C P8 24W / 250W | 2295MiB / 12188MiB | 8% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 1 1910 G /usr/lib/xorg/Xorg 105MiB |
| 1 2027 G /usr/bin/gnome-shell 51MiB |
| 1 3086 G /usr/lib/xorg/Xorg 1270MiB |
| 1 3237 G /usr/bin/gnome-shell 412MiB |
| 1 30593 G /proc/self/exe 286MiB |
| 1 31849 G ...quest-channel-token=4371017438329004833 164MiB |
+-----------------------------------------------------------------------------+
Similarly, nvtop shows the same GPU RAM utilization, but the processes it lists have TYPE=Compute, and if you try to kill the PIDs it shows, you get an error:
(base) ~/.../fast-autoaugment$ kill 27761
bash: kill: (27761) - No such process
How can I reclaim the GPU RAM occupied by these apparently ghost processes?
Use the following command to get insight into the ghost processes occupying GPU RAM:
sudo fuser -v /dev/nvidia*
In my case, the output is:
(base) ~/.../fast-autoaugment$ sudo fuser -v /dev/nvidia*
USER PID ACCESS COMMAND
/dev/nvidia0: shitals 517 F.... nvtop
root 1910 F...m Xorg
gdm 2027 F.... gnome-shell
root 3086 F...m Xorg
shitals 3237 F.... gnome-shell
shitals 27808 F...m python
shitals 27809 F...m python
shitals 27813 F...m python
shitals 27814 F...m python
shitals 28091 F...m python
shitals 28092 F...m python
shitals 28096 F...m python
This shows processes that both nvidia-smi and nvtop fail to show. After I killed all of the python processes, the GPU RAM was freed up.
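If there are several such leaked workers, you can kill them in one go; for example, for the python PIDs listed above (substitute the PIDs from your own fuser output):
sudo kill -9 27808 27809 27813 27814 28091 28092 28096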
Another thing to try is to reset the GPU using the command:
sudo nvidia-smi --gpu-reset -i 0
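Either way, you can verify afterwards that the memory was actually released, for example with:
nvidia-smi --query-gpu=memory.used --format=csv -i 0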
Related
I've been using Google Colab to produce Blender renders for a few months, but today my scripts stopped working without any changes. I run my scripts with sudo and for whatever reason Google Colab is not giving GPU access to commands run with sudo.
This is the output for nvidia-smi:
/content# nvidia-smi
Fri Jun 3 13:46:21 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 40C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
But the same command throws an error if run with sudo:
/content# sudo nvidia-smi
Failed to initialize NVML: Driver/library version mismatch
sudo is important for me because my Blender commands, for whatever reason, don't work without sudo.
/content# ./blender-3.1.0-linux-x64/blender -b --python-console -noaudio
src/tcmalloc.cc:283] Attempt to free invalid pointer 0x7fe46122b000
Aborted (core dumped)
I'm connected to Google Colab through SSH (using this method). I get the following error when trying to use the GPU.
python lstm_example.py
Num GPUs Available: 1
(25000,)
(25000,)
2022-03-21 12:43:53.301917: W tensorflow/stream_executor/cuda/cuda_driver.cc:374] A non-primary context 0x559ed434d210 for device 0 exists before initializing the StreamExecutor. The primary context is now 0. We haven't verified StreamExecutor works with that.
2022-03-21 12:43:53.302331: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 0: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_UNKNOWN: unknown error
Aborted (core dumped)
GPU info
nvidia-smi
Mon Mar 21 13:00:24 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.54 Driver Version: 460.32.03 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 50C P0 59W / 149W | Function Not Found | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
/usr/local/cuda/bin/nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:09:46_PDT_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.TC455_06.29190527_0
I've added the following lines:
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
The same code works when run in a notebook cell. I also notice that Memory-Usage is reported when running nvidia-smi from a notebook, and the CUDA version used is different (11.2).
Tue Mar 22 10:52:20 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 43C P8 31W / 149W | 3MiB / 11441MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
I'm running on a server with an A100 GPU. When I try to run TensorFlow code after a server reset, TensorFlow does not recognize the GPU. Running tf.config.list_physical_devices('GPU') yields CUDA_ERROR_NOT_INITIALIZED:
2021-09-09 07:41:42.956917: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2021-09-09 07:41:43.899014: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: CUDA_ERROR_NOT_INITIALIZED: initialization error
2021-09-09 07:41:43.899148: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: f42a3aa12bd1
2021-09-09 07:41:43.899169: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: f42a3aa12bd1
2021-09-09 07:41:43.899890: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 460.32.3
2021-09-09 07:41:43.899955: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 460.32.3
2021-09-09 07:41:43.899969: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 460.32.3
Running nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-PCIE-40GB Off | 00000000:00:06.0 Off | On |
| N/A 46C P0 40W / 250W | 0MiB / 40536MiB | N/A Default |
| | | Enabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| MIG devices: |
+------------------+----------------------+-----------+-----------------------+
| GPU GI CI MIG | Memory-Usage | Vol| Shared |
| ID ID Dev | BAR1-Usage | SM Unc| CE ENC DEC OFA JPG|
| | | ECC| |
|==================+======================+===========+=======================|
| No MIG devices found |
+-----------------------------------------------------------------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Why do I get CUDA_ERROR_NOT_INITIALIZED? The server ran perfectly well before the reset, and nvidia-smi is clearly working.
It seems NVIDIA Multi-Instance GPU (MIG) is enabled on your GPU, but you haven't defined any GPU instances. This can be seen from the fact that nvidia-smi shows a MIG devices table, but it's empty (No MIG devices found).
The MIG documentation states:
Without creating GPU instances (and corresponding compute instances),
CUDA workloads cannot be run on the GPU. In other words, simply
enabling MIG mode on the GPU is not sufficient. Also note that, the
created MIG devices are not persistent across system reboots. Thus,
the user or system administrator needs to recreate the desired MIG
configurations if the GPU or system is reset.
You probably had a MIG configuration defined before the reset, but the server reset removed that configuration. You need to re-configure the GPU instances to get the GPU working again. If you just want a basic configuration, in which you have only one GPU instance that uses all the resources, you can run:
sudo nvidia-smi mig -cgi 0 -C
If you need a fancier configuration than that, you should consult the documentation.
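For a fancier configuration, a useful first step is listing the instance profiles your GPU supports (profile IDs differ between GPU models, so check the output rather than assuming ID 0):
sudo nvidia-smi mig -lgip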
After configuring the GPU instances, nvidia-smi should show a populated MIG devices table. In our case, it should have one entry:
+-----------------------------------------------------------------------------+
| MIG devices: |
+------------------+----------------------+-----------+-----------------------+
| GPU GI CI MIG | Memory-Usage | Vol| Shared |
| ID ID Dev | BAR1-Usage | SM Unc| CE ENC DEC OFA JPG|
| | | ECC| |
|==================+======================+===========+=======================|
| 0 0 0 0 | 0MiB / 40536MiB | 98 0 | 7 0 5 1 1 |
| | 1MiB / 65536MiB | | |
+------------------+----------------------+-----------+-----------------------+
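Since, as the documentation quoted above notes, MIG devices are not persistent across reboots, you may also want to recreate the configuration automatically at boot. A minimal sketch using a systemd oneshot unit (the unit name, ordering, and nvidia-smi path are assumptions; adjust them for your system):
# /etc/systemd/system/mig-setup.service
[Unit]
Description=Recreate MIG instances after boot
After=nvidia-persistenced.service
[Service]
Type=oneshot
ExecStart=/usr/bin/nvidia-smi mig -cgi 0 -C
[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable mig-setup.service.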
My gpucompute nodes are in a down state and I can't send jobs to the GPU nodes.
I couldn't bring my down GPU nodes back after following all the solutions I found online. Before this problem, I had an error in the NVIDIA driver configuration that prevented nvidia-smi from detecting the GPUs. After solving that error by running 'NVIDIA-Linux-x86_64-410.79.run --no-drm', I ran into this error, which is caused by the down state of the nodes. Any help is much appreciated!
command: sbatch md1.s
sbatch: error: Batch job submission failed: Requested node configuration is not available
command: sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
gpucompute* up infinite 1 down* fwb-lab-tesla1
command: sinfo -R
REASON USER TIMESTAMP NODELIST
Not responding slurm 2020-09-25T13:13:19 fwb-lab-tesla1
command: sinfo -Nl
Fri Sep 25 16:35:25 2020
NODELIST NODES PARTITION STATE CPUS S:C:T MEMORY TMP_DISK WEIGHT AVAIL_FE REASON
fwb-lab-tesla1 1 gpucompute* down* 32 32:1:1 64000 0 1 (null)Not responding
command: vim /etc/slurm/slurm.conf
# slurm.conf file generated by configurator easy.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=FWB-Lab-Tesla
#ControlAddr=137.72.38.102
#
MailProg=/bin/mail
MpiDefault=none
#MpiParams=ports=#-#
ProctrackType=proctrack/cgroup
ReturnToService=1
SlurmctldPidFile=/var/run/slurmctld.pid
#SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
#SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
#SlurmUser=slurm
SlurmdUser=root
StateSaveLocation=/var/spool/slurm/StateSave
SwitchType=switch/none
TaskPlugin=task/cgroup
#
#
# TIMERS
#KillWait=30
command: ls /etc/init.d
functions livesys livesys-late netconsole network README
command: nvidia-smi
Fri Sep 25 16:35:01 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79 Driver Version: 410.79 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 TITAN V Off | 00000000:02:00.0 Off | N/A |
| 24% 32C P8 N/A / N/A | 0MiB / 12036MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 TITAN V Off | 00000000:03:00.0 Off | N/A |
| 23% 35C P8 N/A / N/A | 0MiB / 12036MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 TITAN V Off | 00000000:83:00.0 Off | N/A |
| 30% 44C P8 N/A / N/A | 0MiB / 12036MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 TITAN V Off | 00000000:84:00.0 Off | N/A |
| 31% 42C P8 N/A / N/A | 0MiB / 12036MiB | 6% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
The problem you mentioned probably prevented the slurmd daemon on gpucompute from starting. You should be able to confirm that by running systemctl status slurmd or the equivalent command for your Linux distribution.
The slurmd logs probably contain a line similar to
slurmd[1234]: fatal: can't stat gres.conf file /dev/nvidia0: No such file or directory
Try restarting it with
systemctl start slurmd
once you have made sure nvidia-smi responds correctly.
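Putting it together, a typical sequence on a systemd-based node (a sketch; adjust the commands to your distribution) is:
nvidia-smi                      # confirm the driver now sees the GPUs
sudo systemctl status slurmd    # see why the daemon is down
sudo journalctl -u slurmd -b    # look for the fatal gres.conf message in the log
sudo systemctl start slurmd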
My problem was solved with the instructions below. Remember that you need to enter these commands any time you restart the system. Thanks to Joan Bryan for resolving this!
slurmd -Dcvvv
reboot
ps -ef | grep slurm
kill xxxx (the process ID from the output of the previous ps -ef command)
nvidia-smi
systemctl start slurmctld
systemctl start slurmd
scontrol update nodename=fwb-lab-tesla1 state=idle
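To confirm the node has come back, you can check its state from the controller, for example:
scontrol show node fwb-lab-tesla1 | grep -i state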
Now you can run jobs on the GPU nodes!
Cheers
I have access to a cluster that's run by Slurm, in which each node has 4 GPUs.
I have code that needs 8 GPUs.
So the question is: how can I request 8 GPUs on a cluster where each node has only 4 GPUs?
So this is the job that I tried to submit via sbatch:
#!/bin/bash
#SBATCH --gres=gpu:8
#SBATCH --nodes=2
#SBATCH --mem=16000M
#SBATCH --time=0-01:00
But then I get the following error:
sbatch: error: Batch job submission failed: Requested node configuration is not available
Then I changed the settings to this and submitted again:
#!/bin/bash
#SBATCH --gres=gpu:4
#SBATCH --nodes=2
#SBATCH --mem=16000M
#SBATCH --time=0-01:00
nvidia-smi
and the result shows only 4 GPUs, not 8.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66 Driver Version: 375.66 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P100-PCIE... Off | 0000:03:00.0 Off | 0 |
| N/A 32C P0 31W / 250W | 0MiB / 12193MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla P100-PCIE... Off | 0000:04:00.0 Off | 0 |
| N/A 37C P0 29W / 250W | 0MiB / 12193MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla P100-PCIE... Off | 0000:82:00.0 Off | 0 |
| N/A 35C P0 28W / 250W | 0MiB / 12193MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla P100-PCIE... Off | 0000:83:00.0 Off | 0 |
| N/A 33C P0 26W / 250W | 0MiB / 12193MiB | 4% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Thanks.
Slurm does not support what you need: it can only assign GPUs to your job per node, not per cluster.
So, unlike CPUs or other consumable resources, GPUs are not consumable and are bound to the node where they are hosted.
If you are interested in this topic, there is a research effort to turn GPUs into consumable resources; check this paper.
There you'll find how to do it using GPU virtualization technologies.
Job script: you are requesting 2 nodes with 4 GPUs each, so a total of 8 GPUs is assigned to you. You then run nvidia-smi, which is aware of neither Slurm nor MPI; it runs only on the first node assigned to you, so it shows only 4 GPUs. That result is normal.
If you run a GPU-based engineering application like Ansys HFSS or CST, it can use all 8 GPUs.
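In other words, the second script already allocates 8 GPUs; to see all of them you would have to run nvidia-smi on every node, for example with srun (a sketch of the same job script):
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --gres=gpu:4
#SBATCH --mem=16000M
#SBATCH --time=0-01:00
# one task per node, so each of the two nodes reports its own 4 GPUs
srun --ntasks=2 --ntasks-per-node=1 nvidia-smi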