TensorFlow installed with PDM not detecting GPU with M1 MacBook - tensorflow

I tried to install dependencies with PDM for a project and ran into the problem that TensorFlow does not detect the M1's GPU. When I create a virtual environment with Poetry, I do not have this problem.
Does anyone know why this happens?
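One way to narrow this down is to compare what each environment's interpreter actually sees. On Apple Silicon, TensorFlow only exposes the M1 GPU when the tensorflow-metal plugin is installed in the same environment, so a minimal check (guarded so it also runs where TensorFlow is absent) might look like:

```python
import importlib.util

def visible_devices():
    """Report which devices TensorFlow can see, or note that it is missing."""
    if importlib.util.find_spec("tensorflow") is None:
        return "tensorflow is not installed in this environment"
    import tensorflow as tf
    # On Apple Silicon the GPU only shows up here if the tensorflow-metal
    # plugin is installed alongside the tensorflow / tensorflow-macos package.
    return [d.device_type for d in tf.config.list_physical_devices()]

print(visible_devices())
```

Running this under `pdm run python` and then inside the Poetry venv would show whether the PDM environment is simply missing tensorflow-metal while the Poetry one has it.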

Related

Error importing tensorflow as tf

While importing tensorflow I get:
Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-08-28 00:21:19.206030: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
System: HP 245 G5 notebook
Operating system: Ubuntu 18.04
How do I solve this problem?
It seems you are trying to use the GPU build of TensorFlow, and you have installed conflicting software versions for it.
Note: GPU support is available for Ubuntu and Windows with CUDA-enabled cards only.
If you have a CUDA-enabled card, follow the instructions below.
As stated in the TensorFlow documentation, the software requirements are as follows:
NVIDIA GPU drivers - 418.x or higher
CUDA - 10.1 (TensorFlow >= 2.1.0)
cuDNN - 7.6
Make sure you have these exact versions of the software mentioned above. See this
Also, check the system requirements here.
For downloading the software mentioned above, see here.
To install TensorFlow itself, follow the instructions provided here to correctly install the necessary packages.
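As a sanity check after installing those versions, recent TensorFlow releases can report the CUDA and cuDNN versions the wheel was built against via tf.sysconfig.get_build_info(); a guarded sketch (the dictionary keys shown are what GPU wheels typically expose, and CPU-only wheels may omit them):

```python
import importlib.util

def cuda_build_info():
    """Show the CUDA/cuDNN versions this TensorFlow wheel was built against."""
    if importlib.util.find_spec("tensorflow") is None:
        return "tensorflow is not installed"
    import tensorflow as tf
    info = tf.sysconfig.get_build_info()  # available in recent TF releases
    # Compare these against the driver/CUDA/cuDNN versions actually installed
    # on the machine; a mismatch is the usual cause of the dlerror above.
    return {k: info.get(k) for k in ("is_cuda_build", "cuda_version", "cudnn_version")}

print(cuda_build_info())
```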

Why does tf.test.is_gpu_available() get stuck instead of returning True or False?

After installing tensorflow-gpu==2.0.0, it got stuck right after detecting the GPU.
The environment settings for this project are:
Ubuntu 18.04
CUDA 10.0
cuDNN 7.4.1
I created a virtual environment
and installed tensorflow-gpu==2.0.0.
When trying to check the GPU with tf.test.is_gpu_available(), the call got stuck, as shown below.
Changing the cuDNN version to 7.6.2 fixed it; everything works well now.
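Worth noting: tf.test.is_gpu_available() actually initializes a GPU context (and is deprecated in TF 2.x), which is why a CUDA/cuDNN version mismatch can make it hang rather than return False. tf.config.list_physical_devices('GPU') only enumerates devices, so it is a safer first check; a guarded sketch:

```python
import importlib.util

def gpu_count():
    """Count visible GPUs without creating a GPU context."""
    if importlib.util.find_spec("tensorflow") is None:
        return None  # tensorflow is not installed here
    import tensorflow as tf
    # Unlike tf.test.is_gpu_available(), this only enumerates devices,
    # so it is much less likely to hang on a cuDNN version mismatch.
    return len(tf.config.list_physical_devices("GPU"))

print(gpu_count())
```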

Tensorflow see only XLA_GPU and not GPU

I have had a problem for a few days.
I installed the NVIDIA drivers and cuDNN using a step-by-step tutorial from the Internet.
The installation succeeded, since the tests on the CUDA samples passed.
I then installed Python 3.7, Jupyter, and tensorflow-gpu.
However, TensorFlow doesn't see my 2 GPUs and sees only XLA_GPU devices.
I tried recommendations from other posts (such as uninstalling and reinstalling TensorFlow), but this did not solve my problem.
Does anyone have an idea how to solve this?
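When only XLA_GPU devices show up, the usual cause is that the CUDA runtime libraries failed to load, and TensorFlow logs which shared library is missing at import time. Raising log verbosity before the import surfaces those messages; a guarded sketch:

```python
import importlib.util
import os

# Show all TensorFlow C++ startup logs (0 = everything, 3 = errors only).
# This must be set before tensorflow is imported.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "0"

def list_devices():
    """List every physical device TensorFlow registers."""
    if importlib.util.find_spec("tensorflow") is None:
        return "tensorflow is not installed"
    import tensorflow as tf
    # If a CUDA library is missing, the import above prints lines like
    # "Could not load dynamic library 'libcu...'" naming the culprit.
    return [(d.name, d.device_type) for d in tf.config.list_physical_devices()]

print(list_devices())
```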

Is it possible to compile TensorFlow on a Mac?

So I started to build TensorFlow on a Mac, and it doesn't seem possible to build it on the macOS platform.
After following the instructions here, I get this package directory.
It seems like the Bazel build settings are only for Linux distros. I think so because there is a .so file in the package directory that needs to be linked after importing TensorFlow using the Python binary.
This is the result I get after importing TensorFlow using Python.
Is there any other way I can build TensorFlow on macOS?
It seems like there is no option but to install TensorFlow with pip. So I created a new virtual machine, installed Ubuntu 16.04, and used it as my Docker host. That way I can create a new Docker container which can link and execute the Linux library.

GKE - NVIDIA GPU - CUDA drivers don't work

I have set up a Kubernetes node with an NVIDIA Tesla K80 and followed this tutorial to try to run a PyTorch Docker image with the NVIDIA and CUDA drivers working.
I have managed to install the NVIDIA DaemonSets, and I can now see the following pods:
nvidia-driver-installer-gmvgt
nvidia-gpu-device-plugin-lmj84
The problem is that even while using the recommended image nvidia/cuda:10.0-runtime-ubuntu18.04, I still can't find the NVIDIA drivers inside my pod:
root@pod-name-5f6f776c77-87qgq:/app# ls /usr/local/
bin cuda cuda-10.0 etc games include lib man sbin share src
But the tutorial mentions:
CUDA libraries and debug utilities are made available inside the container at /usr/local/nvidia/lib64 and /usr/local/nvidia/bin, respectively.
I have also tried to test whether CUDA was working via torch.cuda.is_available(), but I get False as a return value.
Thanks in advance for your help.
OK, so I finally made the NVIDIA drivers work.
It is mandatory to set a resource limit to access the NVIDIA driver, which is weird considering my pod was already on the right node with the NVIDIA drivers installed.
This made the nvidia folder accessible, but I'm still unable to make the CUDA install work with PyTorch 1.3.0. [ issue here ]
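For reference, the resource limit in question is the nvidia.com/gpu entry on the container spec; requesting it is what triggers GKE's device plugin to mount the driver into the pod. A sketch of the relevant fragment (pod and container names are illustrative):

```yaml
# Illustrative pod spec fragment: the nvidia.com/gpu limit is what makes
# /usr/local/nvidia/lib64 and /usr/local/nvidia/bin appear in the container.
apiVersion: v1
kind: Pod
metadata:
  name: pytorch-gpu            # hypothetical name
spec:
  containers:
  - name: app                  # hypothetical name
    image: nvidia/cuda:10.0-runtime-ubuntu18.04
    resources:
      limits:
        nvidia.com/gpu: 1     # without this limit the driver is not exposed
```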