TensorFlow doesn't seem to see my GPU

I've tried TensorFlow on both CUDA 7.5 and 8.0, without cuDNN (my GPU is old; cuDNN doesn't support it).
When I execute device_lib.list_local_devices(), there is no GPU in the output. Theano sees my GPU and works fine with it, and the examples in /usr/share/cuda/samples run fine as well.
I installed TensorFlow through pip install. Is my GPU (a GTX 460) too old for TF to support it?

I came across this same issue in Jupyter notebooks. This could be an easy fix.
$ pip uninstall tensorflow
$ pip install tensorflow-gpu
You can check if it worked with:
tf.test.gpu_device_name()
Update 2020
It seems like TensorFlow 2.0+ comes with GPU support built in, so
pip install tensorflow should be enough
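For the TF 2.x case you can verify that the GPU is visible with the standard device-listing call (a minimal check):
import tensorflow as tf
# an empty list means TensorFlow 2.x found no usable GPU
print(tf.config.list_physical_devices('GPU'))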

Summary:
check if tensorflow sees your GPU (optional)
check if your videocard can work with tensorflow (optional)
find versions of CUDA Toolkit and cuDNN SDK, compatible with your tf version
install CUDA Toolkit
install cuDNN SDK
pip uninstall tensorflow; pip install tensorflow-gpu
check if tensorflow sees your GPU
* source - https://www.tensorflow.org/install/gpu
Detailed instruction:
check if tensorflow sees your GPU (optional)
from tensorflow.python.client import device_lib
def get_available_devices():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos]

print(get_available_devices())
# my output was => ['/device:CPU:0']
# good output must be => ['/device:CPU:0', '/device:GPU:0']
check if your card can work with tensorflow (optional)
my PC: GeForce GTX 1060 notebook (driver version 419.35), Windows 10, Jupyter notebook
tensorflow needs Compute Capability 3.5 or higher. (https://www.tensorflow.org/install/gpu#hardware_requirements)
https://developer.nvidia.com/cuda-gpus
select "CUDA-Enabled GeForce Products"
result - "GeForce GTX 1060 Compute Capability = 6.1"
my card can work with tf!
find versions of CUDA Toolkit and cuDNN SDK, that you need
a) find your tf version
import sys
print (sys.version)
# 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]
import tensorflow as tf
print(tf.__version__)
# my output was => 1.13.1
b) find right versions of CUDA Toolkit and cuDNN SDK for your tf version
https://www.tensorflow.org/install/source#linux
* it is written for Linux, but it worked in my case
Note that tensorflow_gpu-1.13.1 needs CUDA Toolkit 10.0 and cuDNN SDK 7.4
install CUDA Toolkit
a) install CUDA Toolkit 10.0
https://developer.nvidia.com/cuda-toolkit-archive
select: CUDA Toolkit 10.0 and download base installer (2 GB)
installation settings: select only CUDA
(my installation path was: D:\Programs\x64\Nvidia\Cuda_v_10_0\Development)
b) add environment variables:
system variables / path must have:
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\bin
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\libnvvp
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\extras\CUPTI\libx64
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\include
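To sanity-check in a new console that the PATH changes took effect (a tiny sketch, assuming the example installation path above):
import os, shutil
# nvcc.exe should resolve once <install path>\bin is on PATH (open a fresh console first)
print(shutil.which('nvcc'))
# and the CUDA entries should show up in PATH itself
print([p for p in os.environ['PATH'].split(os.pathsep) if 'Cuda_v_10_0' in p])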
install cuDNN SDK
a) download cuDNN SDK v7.4
https://developer.nvidia.com/rdp/cudnn-archive (needs registration, but it is simple)
select "Download cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0"
b) add path to 'bin' folder into "environment variables / system variables / path":
D:\Programs\x64\Nvidia\cudnn_for_cuda_10_0\bin
pip uninstall tensorflow
pip install tensorflow-gpu
check if tensorflow sees your GPU
- restart your PC
- print(get_available_devices())
- # now this code should return => ['/device:CPU:0', '/device:GPU:0']

If you are using conda, you might have installed the CPU version of TensorFlow. Check the package list of the environment (conda list) to see if this is the case. If so, remove the package with conda remove tensorflow and install keras-gpu instead (conda install -c anaconda keras-gpu). This will install everything you need to run your machine learning code on the GPU. Cheers!
P.S. You should first check that you have installed the drivers correctly using nvidia-smi. By default, this is not on your PATH, so you may need to add its folder to your PATH. The .exe file can be found at C:\Program Files\NVIDIA Corporation\NVSMI

When I look up your GPU, I see that it only supports CUDA Compute Capability 2.1 (this can be checked at https://developer.nvidia.com/cuda-gpus). Unfortunately, TensorFlow needs a GPU with a minimum CUDA Compute Capability of 3.0.
https://www.tensorflow.org/get_started/os_setup#optional_install_cuda_gpus_on_linux
You might see some logs from TensorFlow checking your GPU, but ultimately the library will avoid using an unsupported GPU.
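If you prefer to read the compute capability straight from the card instead of looking it up online, here is a small sketch (assuming the third-party pycuda package is installed; it queries the driver directly, so it works even when TensorFlow ignores the GPU):
import pycuda.driver as drv
drv.init()
dev = drv.Device(0)
# for a GTX 460 this prints a compute capability of (2, 1), below TensorFlow's minimum
print(dev.name(), dev.compute_capability())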

The following worked for me on an HP laptop with an NVIDIA card of CUDA Compute Capability 3.0, running Windows 7.
pip3.6.exe uninstall tensorflow-gpu
pip3.6.exe uninstall tensorflow-gpu
pip3.6.exe install tensorflow-gpu

I had a problem because I didn't specify the TensorFlow version, so I got 2.11. After many hours I found that my problem is described in the install guide:
Caution: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin
Before that, I read most of the answers to this and similar questions. I followed @AndrewPt's answer. I already had CUDA installed but updated the version just in case, installed cuDNN, and restarted the computer.
The easiest solution for me was to downgrade to 2.10 (you can try different options mentioned in the install guide). I first uninstalled all of these packages (probably it's not necessary, but I didn't want to see how pip messed up versions at 2 am):
pip uninstall keras
pip uninstall tensorflow-io-gcs-filesystem
pip uninstall tensorflow-estimator
pip uninstall tensorflow
pip uninstall Keras-Preprocessing
pip uninstall tensorflow-intel
because I wanted only the packages required for the old version, and I didn't do it for all of the packages required by 2.11. After that I installed tensorflow 2.10:
pip install "tensorflow<2.11"
and it worked.
I used this code to check if GPU is visible:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
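On TF 2.3+ you can also ask the installed wheel which CUDA/cuDNN versions it was built against, which takes some guesswork out of version matching (a small sketch using the public tf.sysconfig API):
import tensorflow as tf
print(tf.__version__)
# on GPU-enabled builds this dict includes entries such as 'cuda_version' and 'cudnn_version'
print(tf.sysconfig.get_build_info())
print(tf.config.list_physical_devices('GPU'))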

As of 2022-04, the conda tensorflow package provides both CPU and GPU builds. To install a GPU build, search to see what's available:
λ conda search tensorflow
Loading channels: done
# Name Version Build Channel
tensorflow 0.12.1 py35_1 conda-forge
tensorflow 0.12.1 py35_2 conda-forge
tensorflow 1.0.0 py35_0 conda-forge
…
tensorflow 2.5.0 mkl_py39h1fa1df6_0 pkgs/main
tensorflow 2.6.0 eigen_py37h37bbdb1_0 pkgs/main
tensorflow 2.6.0 eigen_py38h63d3545_0 pkgs/main
tensorflow 2.6.0 eigen_py39h855417c_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
tensorflow 2.6.0 mkl_py37h9623b36_0 pkgs/main
tensorflow 2.6.0 mkl_py38hdc16138_0 pkgs/main
tensorflow 2.6.0 mkl_py39h31650da_0 pkgs/main
You can see that there are builds of TF 2.6.0 that support Python 3.7, 3.8 and 3.9, and that are built for MKL (Intel CPU), Eigen, or GPU.
To narrow it down, you can use wildcards in the search. This will find any Tensorflow 2.x version that is built for GPU, for instance:
λ conda search tensorflow=2*=gpu*
Loading channels: done
# Name Version Build Channel
tensorflow 2.0.0 gpu_py36hfdd5754_0 pkgs/main
tensorflow 2.0.0 gpu_py37h57d29ca_0 pkgs/main
tensorflow 2.1.0 gpu_py36h3346743_0 pkgs/main
tensorflow 2.1.0 gpu_py37h7db9008_0 pkgs/main
tensorflow 2.5.0 gpu_py37h23de114_0 pkgs/main
tensorflow 2.5.0 gpu_py38h8e8c102_0 pkgs/main
tensorflow 2.5.0 gpu_py39h7dc34a2_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
To install a specific version in an otherwise empty environment, you can use a command like:
λ conda activate tf
(tf) λ conda install tensorflow=2.6.0=gpu_py39he88c5ba_0
…
The following NEW packages will be INSTALLED:
_tflow_select pkgs/main/win-64::_tflow_select-2.1.0-gpu
…
cudatoolkit pkgs/main/win-64::cudatoolkit-11.3.1-h59b6b97_2
cudnn pkgs/main/win-64::cudnn-8.2.1-cuda11.3_0
…
tensorflow pkgs/main/win-64::tensorflow-2.6.0-gpu_py39he88c5ba_0
tensorflow-base pkgs/main/win-64::tensorflow-base-2.6.0-gpu_py39hb3da07e_0
…
As you can see, if you install a GPU build, it will automatically also install compatible cudatoolkit and cudnn packages. You don't need to manually check versions for compatibility, or manually download several gigabytes from Nvidia's website, or register as a developer, as it says in other answers or on the official website.
After installation, confirm that it worked and it sees the GPU by running:
λ python
Python 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'2.6.0'
>>> tf.config.list_physical_devices()
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Getting conda to install a GPU build together with the other packages you want to use is another story, however, because for me there were a lot of package incompatibilities. I think the best you can do is specify the installation criteria using wildcards and cross your fingers.
This tries to install any TF 2.x version that's built for GPU and that has dependencies compatible with Spyder and matplotlib's dependencies, for instance:
λ conda install tensorflow=2*=gpu* spyder matplotlib
For me, this ended up installing a two year old GPU version of tensorflow:
matplotlib pkgs/main/win-64::matplotlib-3.5.1-py37haa95532_1
spyder pkgs/main/win-64::spyder-5.1.5-py37haa95532_1
tensorflow pkgs/main/win-64::tensorflow-2.1.0-gpu_py37h7db9008_0
I had previously been using the tensorflow-gpu package, but that doesn't work anymore. conda typically grinds forever trying to find compatible packages to install, and even when it's installed, it doesn't actually install a gpu build of tensorflow or the CUDA dependencies:
λ conda list
…
cookiecutter 1.7.2 pyhd3eb1b0_0
cryptography 3.4.8 py38h71e12ea_0
cycler 0.11.0 pyhd3eb1b0_0
dataclasses 0.8 pyh6d0b6a4_7
…
tensorflow 2.3.0 mkl_py38h8557ec7_0
tensorflow-base 2.3.0 eigen_py38h75a453f_0
tensorflow-estimator 2.6.0 pyh7b7c402_0
tensorflow-gpu 2.3.0 he13fc11_0

I have had an issue where I needed the latest TensorFlow (2.8.0 at the time of writing) with GPU support running in a conda environment. The problem was that it was not available via conda. What I did was
conda install cudatoolkit==11.2
pip install tensorflow-gpu==2.8.0
Although I had checked that the CUDA toolkit version was compatible with the TensorFlow version, it still returned an error where libcudart.so.11.0 was not found. As a result, GPUs were not visible. The remedy was to set the environment variable LD_LIBRARY_PATH to point to your anaconda3/envs/<your_tensorflow_environment>/lib with this command
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/<user>/anaconda3/envs/<your_tensorflow_environment>/lib
Unless you make it permanent, you will need to set this variable every time you start a terminal before a session (e.g. a Jupyter notebook). It can be conveniently automated by following this procedure from conda's official website.
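A quick way to confirm the variable is actually set in the session you launch Jupyter from (a minimal sketch; the path is the example path used above):
import os
import tensorflow as tf
# should contain .../anaconda3/envs/<your_tensorflow_environment>/lib
print(os.environ.get('LD_LIBRARY_PATH', ''))
# should now list the GPU instead of an empty list
print(tf.config.list_physical_devices('GPU'))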

In my case, I had a working tensorflow-gpu version 1.14 but suddenly it stopped working. I fixed the problem using:
pip uninstall tensorflow-gpu==1.14
pip install tensorflow-gpu==1.14

I experienced the same problem on my Windows OS. I followed TensorFlow's instructions on installing CUDA, cuDNN, etc., and tried the suggestions in the answers above, with no success.
What solved my issue was updating my GPU drivers. You can update them via:
Pressing the Windows key + R
Entering devmgmt.msc
Right-clicking your adapter under "Display adapters" and clicking the "Properties" option
Going to the "Driver" tab and selecting "Update Driver"
Finally, clicking "Search automatically for updated driver software"
Restart your machine and run the following check again:
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
[x.name for x in local_device_protos]
Sample output:
2022-01-17 13:41:10.557751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.189
pciBusID: 0000:01:00.0
2022-01-17 13:41:10.558125: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2022-01-17 13:41:10.562095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2022-01-17 13:45:11.392814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-01-17 13:45:11.393617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2022-01-17 13:45:11.393739: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2022-01-17 13:45:11.401271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 1391 MB memory) -> physical GPU (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0, compute capability: 5.0)
>>> [x.name for x in local_device_protos]
['/device:CPU:0', '/device:GPU:0']

Related

Tensorflow after 1.15 - No need to install tensorflow-gpu package

Question
Please confirm that to use both CPU and GPU with TensorFlow after 1.15, installing the tensorflow package is enough and tensorflow-gpu is no longer required.
Background
I still see articles stating to install tensorflow-gpu, e.g. pip install tensorflow-gpu==2.2.0, and the PyPI repository for the tensorflow-gpu package is active, with the latest being tensorflow-gpu 2.4.1.
The Anaconda documentation also still refers to the tensorflow-gpu package.
Working with GPU packages - Available packages - TensorFlow
TensorFlow is a general machine learning library, but most popular for deep learning applications. There are three supported variants of the tensorflow package in Anaconda, one of which is the NVIDIA GPU version. This is selected by installing the meta-package tensorflow-gpu:
However, according to the TensorFlow v2.4.1 (as of Apr 2021) Core document GPU support - Older versions of TensorFlow
For releases 1.15 and older, CPU and GPU packages are separate:
pip install tensorflow==1.15 # CPU
pip install tensorflow-gpu==1.15 # GPU
According to the TensorFlow Core Guide Use a GPU.
TensorFlow code, and tf.keras models will transparently run on a single GPU with no code changes required.
According to Difference between installation libraries of TensorFlow GPU vs CPU.
Just a quick (unnecessary?) note... from TensorFlow 2.0 onwards these are not separated, and you simply install tensorflow (as this includes GPU support if you have an appropriate card/CUDA installed).
Hence I would like a definite confirmation that the tensorflow-gpu package is now for convenience only (e.g. legacy scripts that specify tensorflow-gpu) and is no longer required, i.e. that there is no difference between the tensorflow and tensorflow-gpu packages now.
It's reasonable to get confused here about the package naming. However, here is my understanding. For tf 1.15 or older, the CPU and GPU packages are separate:
pip install tensorflow==1.15 # CPU
pip install tensorflow-gpu==1.15 # GPU
So, if I want to work entirely with the CPU version of TF, I would go with the first command; otherwise, if I want to work with the GPU version of TF, I would go with the second command.
Now, in TF 2.0 or above, we only need one command that conveniently works on both kinds of hardware. So, on both CPU-only and GPU-based systems, we install TF with the same command, and that is:
pip install tensorflow
Now we can test it on a CPU-only system (no GPU):
import tensorflow as tf
print(tf.__version__)
print('1: ', tf.config.list_physical_devices('GPU'))
print('2: ', tf.test.is_built_with_cuda)
print('3: ', tf.test.gpu_device_name())
print('4: ', tf.config.get_visible_devices())
2.4.1
1: []
2: <function is_built_with_cuda at 0x7f2ce91415f0>
3:
4: [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]
or test it on a GPU-based system (CPU + GPU):
2.4.1
1: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
2: <function is_built_with_cuda at 0x7fb6affd0560>
3: /device:GPU:0
4: [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
So, as you can see, this is just a single command for both the CPU and GPU cases. Hopefully it's clearer now. That said, even in TF >= 2 we can still use the -gpu / -cpu suffixes when installing TF to explicitly target the GPU / CPU builds:
!pip install tensorflow-gpu
....
Installing collected packages: tensorflow-gpu
Successfully installed tensorflow-gpu-2.4.1
# -------------------------------------------------------------
!pip install tensorflow-cpu
....
Installing collected packages: tensorflow-cpu
Successfully installed tensorflow-cpu-2.4.1
Check: Similar response from tf-team.

Anaconda reading wrong CUDA version

I have a conda environment with PyTorch and TensorFlow, which both require CUDA 9.0 (cudatoolkit 9.0 from conda). After installing pytorch with torchvision and the cudatoolkit (as described on their website), I wanted to install TensorFlow; the problem is that I get this error:
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: /
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
Specifications:
- tensorflow==1.12.0 -> python[version='2.7.*|3.6.*']
- tensorflow==1.12.0 -> python[version='>=2.7,<2.8.0a0|>=3.6,<3.7.0a0']
Your python: python=3.5
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__cuda==10.2=0
- feature:|#/linux-64::__cuda==10.2=0
Your installed version is: 10.2
If I run nvcc or nvidia-smi on my host or in the activated conda environment, they report CUDA 10.2, even though conda list shows that cudatoolkit 9.0 is installed. Any solution to this?
EDIT:
When running this code sample:
# setting device on GPU if available, else CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
print()
#Additional Info when using cuda
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB')
    print('Cached:   ', round(torch.cuda.memory_cached(0)/1024**3,1), 'GB')
print(torch.version.cuda)
I get this output:
GeForce GTX 1050
Memory Usage:
Allocated: 0.0 GB
Cached: 0.0 GB
9.0.176
So PyTorch does pick up the correct CUDA version; I just can't get tensorflow-gpu installed.
If I run nvcc or nvidia-smi on my host or the activated conda environment, I get that I have installed CUDA 10.2, even though conda list shows me that cudatoolkit 9.0 is installed. Any solution to this?
The conda cudatoolkit package doesn't ship with the compiler (nvcc), so when you run nvcc you start the compiler from the system-wide installation. That's why it prints 10.2 instead of 9.0, while PyTorch sees the conda cudatoolkit.
anaconda / packages / cudatoolkit :
This CUDA Toolkit includes GPU-accelerated libraries, and the CUDA runtime for the Conda ecosystem. For the full CUDA Toolkit with a compiler and development tools visit https://developer.nvidia.com/cuda-downloads
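You can see this directly from inside the activated environment (a small illustration; the exact nvcc path depends on your system-wide install):
import shutil
# nvcc still resolves to the system-wide toolkit (e.g. /usr/local/cuda/bin/nvcc),
# because the conda cudatoolkit package does not include the compiler
print(shutil.which('nvcc'))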
From your comment above I understood that you are using python=3.5.6. So, first of all you should search for available tensorflow py35 builds using:
conda search tensorflow | grep py35
I have the following output:
tensorflow 1.9.0 eigen_py35h8c89287_1 pkgs/main
tensorflow 1.9.0 gpu_py35h42d5ad8_1 pkgs/main
tensorflow 1.9.0 gpu_py35h60c0932_1 pkgs/main
tensorflow 1.9.0 gpu_py35hb39db67_1 pkgs/main
tensorflow 1.9.0 mkl_py35h5be851a_1 pkgs/main
tensorflow 1.10.0 eigen_py35h5ed898b_0 pkgs/main
tensorflow 1.10.0 gpu_py35h566a776_0 pkgs/main
tensorflow 1.10.0 gpu_py35ha6119f3_0 pkgs/main
tensorflow 1.10.0 gpu_py35hd9c640d_0 pkgs/main
tensorflow 1.10.0 mkl_py35heddcb22_0 pkgs/main
As you can see, there are no tensorflow 1.12.0 builds for py35, and that's why you are getting that error. You can try inspecting other conda channels, for example conda-forge:
conda search tensorflow -c conda-forge | grep py35
But that wasn't helpful:
tensorflow 0.9.0 py35_0 conda-forge
tensorflow 0.10.0 py35_0 conda-forge
tensorflow 0.11.0rc0 py35_0 conda-forge
tensorflow 0.11.0rc2 py35_0 conda-forge
tensorflow 0.11.0 py35_0 conda-forge
tensorflow 0.12.1 py35_0 conda-forge
tensorflow 0.12.1 py35_1 conda-forge
tensorflow 0.12.1 py35_2 conda-forge
tensorflow 1.0.0 py35_0 conda-forge
tensorflow 1.1.0 py35_0 conda-forge
tensorflow 1.2.0 py35_0 conda-forge
tensorflow 1.2.1 py35_0 conda-forge
tensorflow 1.3.0 py35_0 conda-forge
tensorflow 1.4.0 py35_0 conda-forge
tensorflow 1.5.0 py35_0 conda-forge
tensorflow 1.5.1 py35_0 conda-forge
tensorflow 1.6.0 py35_0 conda-forge
tensorflow 1.8.0 py35_0 conda-forge
tensorflow 1.8.0 py35_1 conda-forge
tensorflow 1.9.0 eigen_py35h8c89287_1 pkgs/main
tensorflow 1.9.0 gpu_py35h42d5ad8_1 pkgs/main
tensorflow 1.9.0 gpu_py35h60c0932_1 pkgs/main
tensorflow 1.9.0 gpu_py35hb39db67_1 pkgs/main
tensorflow 1.9.0 mkl_py35h5be851a_1 pkgs/main
tensorflow 1.9.0 py35_0 conda-forge
tensorflow 1.10.0 eigen_py35h5ed898b_0 pkgs/main
tensorflow 1.10.0 gpu_py35h566a776_0 pkgs/main
tensorflow 1.10.0 gpu_py35ha6119f3_0 pkgs/main
tensorflow 1.10.0 gpu_py35hd9c640d_0 pkgs/main
tensorflow 1.10.0 mkl_py35heddcb22_0 pkgs/main
tensorflow 1.10.0 py35_0 conda-forge
So, the possible solutions are:
Install one of the older available tensorflow 1.10.0 gpu_py35 builds.
Switch to python 3.6.
conda search tensorflow | grep py36
...
tensorflow 1.11.0 gpu_py36h4459f94_0 pkgs/main
tensorflow 1.11.0 gpu_py36h9c9050a_0 pkgs/main
...
tensorflow 1.12.0 gpu_py36he68c306_0 pkgs/main
tensorflow 1.12.0 gpu_py36he74679b_0 pkgs/main
...
Note that versions >=1.13.1 don't support CUDA 9.
Use pip install inside the conda env to install a missing tensorflow build, because pip hosts more build combinations: Tested build configurations
Here are some best practices from Anaconda on how to use pip with conda: Using Pip in a Conda Environment
The last option is to build your own missing conda package with conda-build
My experience is that even though conda detects the wrong CUDA version, what matters is the cudatoolkit version.
The actual problem for me was an incompatible Python version.
E.g. you might have Python 3.7/3.8, but this version of tensorflow doesn't support that:
tensorflow==1.12.0 -> python[version='>=2.7,<2.8.0a0|>=3.6,<3.7.0a0']
Try an older Python version in your conda environment or a newer tensorflow; the latest version before 2.0 is 1.15.
If I haven't misunderstood, you installed the packages using the conda install command from https://pytorch.org/get-started/locally/
I once had problems with the conda installation of PyTorch and CUDA: I solved them by removing the packages installed with conda and reinstalling them via pip.
If you are afraid of messing up the conda environment with pip, I suggest you create another environment in order to test this solution.
Use conda to remove pytorch and cuda. See Removing Packages at Conda Managing packages
Install the CUDA toolkit you need. Note that PyTorch supports only CUDA 9.2, 10.1 and 10.2, as you can see on the PyTorch download page.
If your OS is Ubuntu 19, follow the CUDA instructions for Ubuntu 18. Also note that not all GPUs support the latest version of the toolkit for driver reasons (the 1050 should be recent enough to support them all; up to 10.1 for sure, because I used it).
Follow the instructions to install PyTorch with the appropriate CUDA support via pip on the PyTorch download page. See "Using pip in an environment" at Conda Managing packages
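After the pip reinstall, a short sanity check with standard PyTorch calls confirms the expected CUDA build is in use:
import torch
print(torch.__version__, torch.version.cuda)  # the CUDA version this PyTorch build targets
print(torch.cuda.is_available())              # True once the driver and build are compatible
print(torch.cuda.get_device_name(0))          # e.g. 'GeForce GTX 1050'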

Tensorflow 1.15 cannot detect gpu with Cuda10.1

I have installed both tensorflow 2.2.0 and tensorflow 1.15.0 (by pip install tensorflow-gpu==1.15.0). TensorFlow 2 is installed in the base environment of Anaconda 3, while TensorFlow 1 is installed in a separate environment.
TensorFlow 2.2.0 can recognize the GPU, based on a simple test:
if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
# output: Default GPU Device: /device:GPU:0
But TensorFlow 1.15.0 cannot detect the GPU.
For your information, my system environment is Python + CUDA 10.1 + VS 2015.
The TensorFlow versions 1.15.0 to 1.15.3 (the latest 1.15 release) are all compiled against CUDA 10.0. Downgrading from CUDA 10.1 to CUDA 10.0 solved the problem.
Also be aware of the Python version. It is recommended to install the tensorflow .whl file (as listed at https://nero-mirror.stanford.edu/pypi/simple/tensorflow-gpu/) for your specific Python version. For installation, see How do I install a Python package with a .whl file?
TensorFlow 1.15 expects CUDA 10.0, but I managed to make it work with CUDA 10.1 by installing the following packages with Anaconda: cudatoolkit (10.0) and cudnn (7.6.5). So, after running
conda install cudatoolkit=10.0
conda install cudnn=7.6.5
TensorFlow 1.15 was able to find and use the GPU (on a system with CUDA 10.1 installed).
PS: I understand your environment is Windows-based, but this question pops up on Google for the same problem happening on Linux (where I tested this solution). It might be useful on Windows as well.
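After installing those two packages, you can verify inside the same environment with the TF 1.x test helpers (a minimal sketch; tf.test.is_gpu_available is the TF 1.x-era API):
import tensorflow as tf
print(tf.__version__)              # 1.15.x
print(tf.test.is_gpu_available())  # True once cudatoolkit 10.0 / cudnn 7.6.5 are picked up
print(tf.test.gpu_device_name())   # '/device:GPU:0'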
It might have to do with version compatibility between TF, CUDA and cuDNN. This post discusses it thoroughly.
Have you tried installing Anaconda? It downloads all the requirements and makes things easy with just a few clicks.

Is it compulsory to have GPU and CUDA to run Keras/Autokeras in Windows 10? Can it run only on CPU?

I have tried to install keras, tensorflow, pytorch and all other dependencies in order to run the simple toy example using autokeras explained at https://autokeras.com/start/
After a lot of version changes and googling, I found a typical error which prompts me to ask this question:
ImportError: Could not find 'nvcuda.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Typically it is installed in 'C:\Windows\System32'. If it is not present, ensure that you have a CUDA-capable GPU with the correct driver installed.
I don't have a GPU or CUDA installed. Can I still run a toy example using the CPU only?
Dependencies are as listed below:
tensorboard 1.10.0
tensorflow 1.13.1
tensorflow-estimator 1.13.0
tensorflow-gpu 1.10.0
Keras 2.2.4
Keras-Applications 1.0.7
Keras-Preprocessing 1.0.9
autokeras 0.4.0
torch 1.0.1
torchvision 0.2.1
Uninstall tensorflow-gpu and use only tensorflow if you don't have a GPU.
The tensorflow package is the CPU-only version; you don't need to install both of them, and if you have both, the GPU version will be chosen.
It's probably best to reinstall TensorFlow: uninstall both of them and install only the CPU version.
pip[3] uninstall tensorflow-gpu tensorflow
pip[3] install tensorflow
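With only the CPU build installed, a toy check like the following should run without the nvcuda.dll complaint (a minimal sketch in the TF 1.x style, matching the versions listed above):
import tensorflow as tf
print(tf.__version__)
# runs entirely on the CPU; no GPU or CUDA required
with tf.Session() as sess:
    print(sess.run(tf.constant('CPU-only TensorFlow works')))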

cannot access GPU with tensorflow-gpu 1.8.0 conda package

I have the tensorflow and tensorflow-gpu 1.8.0 conda (not pip) packages installed in a conda environment on Ubuntu 16.04.4:
conda list t.*flow
# packages in environment at /home/lebedov/miniconda3/envs/TF:
#
# Name Version Build Channel
_tflow_180_select 1.0 gpu
tensorflow 1.8.0 py36_1 conda-forge
tensorflow-gpu 1.8.0 h7b35bdc_0
I have CUDA 9.0 installed on my system, which has a Quadro M2200 GPU. I can see the GPU listed in the output of nvidia-smi and can also access the GPU using other deep learning frameworks such as PyTorch 0.4.0, but for some reason TensorFlow doesn't seem to see it:
Python 3.6.5 | packaged by conda-forge | (default, Apr 6 2018, 13:39:56)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import tensorflow as tf
...: sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
2018-07-11 23:21:11.827064: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Device mapping: no known devices.
2018-07-11 23:21:11.827942: I tensorflow/core/common_runtime/direct_session.cc:284] Device mapping:
If I downgrade to tensorflow-gpu 1.7.0, however, I can see the GPU. Any thoughts as to why the GPU isn't being detected by TensorFlow 1.8.0?
The tensorflow 1.8.0 packages from Anaconda appear to support GPUs properly, but those from conda-forge do not; see this issue.
I also had similar instances where tensorflow-gpu wouldn't run by default.
In my case, I solved it by uninstalling tensorflow and keeping only tensorflow-gpu. That way it would always run on the GPU, as there was no other option.
As far as compatibility is concerned, I would recommend the new conda way of installing tensorflow-gpu, which automatically installs the relevant cudnn files for you. The code would be as follows:
conda create -n [EnvironmentName] python=3.6
conda activate [EnvironmentName]
conda install -c conda-forge tensorflow-gpu==1.14
It will assess which versions (CUDA, cuDNN, etc.) you require and download and install them directly into your environment. Then run your Python file from this environment.
Afterwards, check with "conda list" to ensure that tensorflow-gpu is installed.
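To confirm from inside that environment that the GPU build is actually in use, a short check in the TF 1.x API (mirroring the device-mapping check from the question) is:
import tensorflow as tf
print(tf.test.is_built_with_cuda())   # True for the gpu build
# the device mapping should now include /device:GPU:0 instead of "no known devices"
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))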