Will CUDA10 + CUDNN + tensorflow work on Ubuntu14.04? - tensorflow

It is now Oct 29, 2018
After much googling, I have not found a definitive answer or any examples of people using the latest cuda10 for tensorflow on ubuntu 14.04.
My dilemma is whether to upgrade my OS (currently at 14.04) in order to run cuda9 so I can use the latest tensorflow version or use CUDA10 on my existing 14.04 install.
Note that CUDA 9 does not support 14.04; however, Nvidia has indicated that 14.04 will be supported by CUDA 10.
So, any examples/experiences of people using tensorflow with cuda10 on ubuntu14.04 are keenly sought after!
Also note cuda10 is not specifically supported by tensorflow...yet...they say "soon". But TF can be built from source with cuda10.
This is a link for cuda10+tensorflow on ubuntu16.04:
https://github.com/tensorflow/tensorflow/issues/22706
The short answer, I realize, is "try building it myself". Before I do that, I thought I'd ask around. Thanks.
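If you do end up building from source, a quick sanity check in Python can confirm that the resulting wheel was actually compiled with CUDA support and can see the GPU. This is only a verification sketch (not specific to CUDA 10), using TensorFlow's own test helpers:

# Quick sanity check after installing a TensorFlow wheel built from source.
import tensorflow as tf

# True only if the wheel was compiled with CUDA support.
print("Built with CUDA:", tf.test.is_built_with_cuda())

# Returns something like '/device:GPU:0' when a GPU is visible, '' otherwise.
print("GPU device:", tf.test.gpu_device_name())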

I don't know whether CUDA 10 works well on Ubuntu 14.04, but I managed to build TensorFlow with CUDA 10 on Ubuntu 18.04 using an NVIDIA-released Docker image.
You can pull the 'TensorFlow Release 18.09' image and try it on your current system.
If that does not work, consider upgrading your OS to 18.04.
I wrote up my installation experience on this page; you can read it for details if you need to.
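Once the container is running, a short check like the one below confirms that the GPU is visible inside it. This uses TensorFlow's device-listing API; the exact device names will vary on your machine:

# Inside the running TensorFlow container: list the devices TF can see.
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
for d in devices:
    # Expect at least one entry with device_type == 'GPU' if CUDA is working.
    print(d.device_type, d.name)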

Related

How can I run Mozilla TTS/Coqui TTS training with CUDA on a Windows system in 2023

There is a post, "How can I run Mozilla TTS/Coqui TTS training with CUDA on a Windows system?", answered by GuyPaddock, but I have an RTX A5000 graphics card and am running Windows 10. I'm not a programmer, but I think this card needs CUDA version 11.x. Would someone be kind enough to write out, step by step, what I should install to be able to run it and train models? (A complete beginner's guide.) It's best not to mess with the webUI from AUTOMATIC1111, which requires Python 3.10.6. Thanks in advance.
I am trying to install it from the link above and also from YouTube. I am trying to install it on Python 3.10.8, because Stable Diffusion needs Python 3.10.6, and version 3.10.8 is from October, like CUDA 11.8. If possible, I'd like a step-by-step explanation of what I need to do to make it work.
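As a first step before any TTS-specific setup, it may help to confirm that PyTorch (which Coqui TTS uses) can actually see the card under a CUDA 11.x build. A minimal check, assuming a CUDA-enabled PyTorch is installed:

# Minimal check that the RTX A5000 is visible to a CUDA-enabled PyTorch build.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))   # e.g. 'NVIDIA RTX A5000'
    print("Built against CUDA:", torch.version.cuda)  # e.g. '11.8'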

How to deal with CUDA version?

How to set up different versions of CUDA in one OS?
Here is my problem: the latest TensorFlow with GPU support requires CUDA 11.2, whereas PyTorch works with 11.3. So what is the solution for installing both libraries on Windows and Ubuntu?
One solution is to use a Docker container environment, which only needs the NVIDIA driver to be of version XYZ.AB; this way, you can use both the PyTorch and TensorFlow versions you need.
A very good starting point for your problem would be ML-Workspace: https://github.com/ml-tooling/ml-workspace
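Note that recent pip/conda builds of both frameworks bundle their own CUDA runtime, so inside a given environment you can simply check which CUDA version each framework was built against. A small diagnostic sketch (tf.sysconfig.get_build_info is available in recent TF 2.x builds; on CPU-only builds the CUDA keys may be absent):

# Report which CUDA version each framework's binaries were built against.
import tensorflow as tf
import torch

# Returns a dict with keys like 'cuda_version' and 'cudnn_version' on GPU builds.
build = tf.sysconfig.get_build_info()
print("TensorFlow CUDA:", build.get("cuda_version"))

# The CUDA version the installed PyTorch wheel was compiled with (None for CPU-only).
print("PyTorch CUDA:", torch.version.cuda)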

Is there a way to run RAPIDS on windows pc?

I am trying to run NVIDIA RAPIDS on a Windows computer but haven't had any luck. I have installed Docker Desktop for Windows and downloaded the RAPIDS image. CUDA 10.0 is installed, and nvidia-container-toolkit isn't. I haven't been able to make it run. Any thoughts or guidance?
I'm not sure if anyone has given a more definite 'updated' answer to the original question. At this point (August 2020) the answer is "Yes!". You definitely can run RAPIDS in WSL2 on Windows 10 subject to a few conditions:
Requirements
You must use RAPIDS in the Windows Subsystem for Linux version 2 (WSL2).
Windows 10 Version 2004 (OS Build 202001.1000 or later). You have to sign up to get Windows Insider Preview versions, specifically the Developer Channel. This is required for the WSL2 VM to have GPU access. https://insider.windows.com/en-us/
CUDA driver version 455.41 in CUDA SDK v11.1. You must be using a special version of the NVIDIA CUDA drivers, which you get via a special download from NVIDIA's site. You must join the NVIDIA Developer Program to get access to that version -- then search for 'WSL2 CUDA Driver' and it should lead you to it.
Setup
Install the developer preview version of Windows. Make sure to tick the checkbox in 'update' that installs other recommended updates too.
Install the Windows CUDA driver from the NVIDIA Developer Program.
Enable WSL 2 by enabling the "Virtual Machine Platform" optional feature. You can find more steps here: https://learn.microsoft.com/en-us/windows/wsl/install-win10
Install WSL from the Windows Store (Ubuntu-20.04 confirmed working).
Install Python on the WSL VM (tested with Anaconda).
Install RAPIDS AI. (It's best to install this right away, before you have hundreds of other packages for conda to try to reconcile self-consistently with the RAPIDS dependency graphs -- you can always install additional Python packages via pip or conda later.)
After doing this, if you launch ipython...
Python 3.8.3 (default, May 19 2020, 18:47:26)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.17.0 -- An enhanced Interactive Python. Type '?' for help.
>>> import cuml
>>> cuml.__version__
'0.15.0'
>>> import cudf
>>> cudf.__version__
'0.15.0'
>>> import dask_cudf
>>> dask_cudf.__version__
'0.15.0'
>>> import cupy
>>> cupy.__version__
'7.8.0'
...and you're good to go with RAPIDS AI.
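Beyond the version checks, a tiny cuDF computation is a quick way to confirm that the GPU is actually being exercised inside WSL2. A minimal sketch:

# Tiny end-to-end check: build a GPU DataFrame and run a groupby on it.
import cudf

df = cudf.DataFrame({"key": ["a", "b", "a", "b"], "value": [1, 2, 3, 4]})
print(df.groupby("key").sum())  # runs on the GPU via libcudf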
Update 9/6/20: The answer written by Wesley is accurate with the latest Windows Insider Preview with WSL2. Rather than revising this answer, I've just made the edits to his. https://stackoverflow.com/a/59364773/6779504
No. As it exists now, RAPIDS requires a Linux host. This came up in a recent workshop by NVIDIA, where it was also mentioned that RAPIDS won't work with WSL. It may work with WSL version 2, but I haven't tried it, nor am I aware of someone who has.
The only option would be if you could assign a GPU to a Linux VM on the Windows host. This is possible, but sufficiently complex that dual-booting is a better solution.

Support for Nvidia CUDA Toolkit 9.2

What is the reasoning behind tensorflow-gpu being bound to a specific version of NVIDIA's CUDA Toolkit? The current version appears to look for 9.0 specifically and will not work with anything greater. For example, I installed the latest Toolkit 9.2 and added it to PATH, but tensorflow-gpu will not work with it and complains that it is looking for 9.0.
I can see major version updates not being supported but a minor release?
That's a good question. According to NVidia's website,
The CUDA driver is backward compatible, meaning that applications compiled against a particular version of the CUDA will continue to work on subsequent (later) driver releases.
So technically, it should not be a problem to support later iterations of a CUDA driver. And in practice, you will find working non-official pre-built binaries with later versions of CUDA and CuDNN on the net [1], [2]. Even easier to install, the tensorflow-gpu package installed from conda currently comes bundled with CUDA 9.2.
When asked on the topic, a dev answered,
The answer to why is driver issues in the ones required by 9.1, not many new features we need in cuda 9.1, and a few more minor issues.
So the reason looks rather vague -- he might mean that CUDA 9.1 (and 9.2) require graphics card drivers that are perhaps a bit too recent to be really convenient, but that is an uneducated guess.
If NVidia is right about binary compatibility, you may try to simply rename or link your CUDA 9.2 library as a CUDA 9.0 library and it should work. But I would save all my work before attempting this... and the fact that people go as far as recompiling tensorflow to support later CUDA versions may be a hint on how this could end.
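If you do try the rename/symlink route, you can at least verify from Python whether the dynamic loader now resolves the 9.0 soname before touching TensorFlow itself. A small sketch using ctypes (the soname is the one the prebuilt TF wheel expects):

# Check whether the loader can resolve the CUDA runtime soname that
# the prebuilt TensorFlow wheel expects (libcudart.so.9.0).
import ctypes

try:
    ctypes.CDLL("libcudart.so.9.0")
    print("libcudart.so.9.0 resolves; TF should at least be able to load it.")
except OSError as e:
    print("Not found:", e)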
When you download TF, you download a pre-built binary file.
In the build process, TF is hard-linked against a specific version of CUDA, so you cannot use it with a different CUDA version.
If you want to work with a newer (or sometimes older) version of CUDA, you will need to build TF from source (check how here).
Or, if you really don't want to build it yourself, check these repos where others publish specific TF binaries; a few examples:
https://github.com/mind/wheels
https://github.com/yaroslavvb/tensorflow-community-wheels
https://github.com/fo40225/tensorflow-windows-wheel
For your convenience, here are the CUDA + cuDNN versions required by each prebuilt TensorFlow version
(covering only the TF versions I have worked with; older TF versions may use older CUDA versions as well):
Before TF v1.5: CUDA 8.0 and cuDNN 6
From TF 1.5 onward: prebuilt binaries are built against CUDA 9 and cuDNN 7
The issue is not with NVIDIA drivers but with TensorFlow itself. I spent an hour trying to make it work, and finally realized that if you download the pre-built binary from googleapi.com, it is hard-coded to load libcudart.so.9.0! If you have both CUDA 9.0 and 9.2 installed, TensorFlow will work (but it's actually loading the dynamic libraries from 9.0). (BTW, I installed TF using Anaconda.)
A cleaner approach is to build TF from source. It's not too complicated.
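As noted above, the prebuilt wheel loads libcudart.so.9.0; on Linux you can confirm which libcudart the process actually mapped after importing TensorFlow by scanning /proc/self/maps. A small, Linux-only diagnostic sketch:

# Linux-only: after importing TensorFlow, see which CUDA runtime it really loaded.
import tensorflow as tf  # the import triggers the shared-library load

with open("/proc/self/maps") as maps:
    loaded = {line.split()[-1] for line in maps if "libcudart" in line}
print(loaded)  # e.g. {'/usr/local/cuda-9.0/lib64/libcudart.so.9.0'}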

Is it time-saving to load a saved TensorFlow model?

The question is: I cannot get tensorflow-gpu working on my Ubuntu system, because I cannot get the NVIDIA driver installed on Ubuntu. So I run tensorflow-gpu on Windows 10, but Windows does not support TensorFlow Serving.
I know Docker can help me do it, and I did install it, but only tensorflow-cpu. It would be very slow if I just ran the tensorflow-cpu version.
Given that, I came up with the idea of installing two TensorFlows: a GPU version on the system and a CPU version in Docker. The GPU version would train and save a model, and the CPU version would load the saved model.
What I want to know is: does this approach work, and is it time-saving? Put simply, does it take less time than just running tensorflow-cpu in Docker?
TensorFlow GPU with NVIDIA GPUs on Ubuntu is supported, and there are drivers available. Check this tutorial.
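For what it's worth, the save-on-GPU / load-on-CPU workflow described in the question does work: a saved checkpoint or HDF5 file is device-agnostic. A minimal Keras sketch (model architecture and file name are purely illustrative; the question predates TF 2, but the same idea applies with tf.train.Saver):

# On the GPU machine: build, train, and save a model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train)        # training data omitted in this sketch
model.save("my_model.h5")            # illustrative file name

# In the CPU-only Docker container: load the same file and run inference.
restored = tf.keras.models.load_model("my_model.h5")
# restored.predict(x_new)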