Can I run TensorFlow, Keras, and PyTorch for deep learning projects on the latest, highly specced iMac Pro without Nvidia GPUs? - tensorflow

I love my iMac and do not mind paying top dollar for it. However, I need to run TensorFlow, Keras, and PyTorch for deep learning projects. Can I run them on the latest, maxed-out-spec iMac Pro?

TensorFlow 1.8 supports ROCm; I don't know how it performs next to Nvidia's CUDA,
but it means that if you have a (Radeon) GPU that supports ROCm, you can use TensorFlow with GPU acceleration.
Running TensorFlow on the CPU alone is possible but extremely slow, and could be added to the definition of torture.

Related

Do TensorFlow and Keras support CUDA 11 with an RTX 3080?

I attached an RTX 3080 to my computer, but when training with Keras 2.3.1 and TensorFlow 1.15 I got an error: "failed to run cuBLAS_STATUS_EXECUTION_FAILED, did not mem zero GPU location . . . check failed: start_event != nullptr && stop_event != nullptr". I think the problem is that the recently released RTX 3080 and CUDA 11 are not yet supported by Keras 2.x and TensorFlow 1.x. Is this right? And what causes the problem?
At the time of writing, the Nvidia 30xx series only fully supports CUDA 11.x; see https://forums.developer.nvidia.com/t/can-rtx-3080-support-cuda-10-1/155849/2
TensorFlow 1.15 isn't fully supported on CUDA 10.1 or newer, probably for reasons similar to those described in the link above. Unfortunately, TensorFlow 1.x is no longer supported or maintained; see https://github.com/tensorflow/tensorflow/issues/43629#issuecomment-700709796
TensorFlow 2.4 is your best bet with an Ampere GPU. It now has a stable release, and it officially supports CUDA 11.0; see https://www.tensorflow.org/install/source#gpu
Since TensorFlow 1.x is never going to be updated or maintained by the TensorFlow team, I would strongly suggest moving to TensorFlow 2.x. Personal preferences aside, it is better in almost every way, and it provides the tf.compat module for backwards compatibility with TensorFlow 1.x code if rewriting your code base is not an option. However, even that module is no longer maintained, which really shows that version 1.x is dead; see https://www.tensorflow.org/guide/versions#what_is_covered
However, if you're dead set on using TensorFlow 1.15, you might have a chance with Nvidia's TensorFlow build, which apparently supports version 1.15 on Ampere GPUs; see https://developer.nvidia.com/blog/accelerating-tensorflow-on-a100-gpus/
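The version constraints above boil down to a small lookup against the tested-build table at https://www.tensorflow.org/install/source#gpu. A minimal sketch of that check (the version pairs below are an abridged copy of that table; treat it as illustrative and consult the page for your exact release):

```python
# Tested TensorFlow GPU build configurations, abridged from
# https://www.tensorflow.org/install/source#gpu
TESTED_BUILDS = {
    "1.15": {"cuda": "10.0", "cudnn": "7.4"},
    "2.3":  {"cuda": "10.1", "cudnn": "7.6"},
    "2.4":  {"cuda": "11.0", "cudnn": "8.0"},
}

def supported_cuda(tf_version: str) -> str:
    """Return the CUDA version a given TensorFlow release was tested against."""
    try:
        return TESTED_BUILDS[tf_version]["cuda"]
    except KeyError:
        raise ValueError(f"no tested build recorded for TensorFlow {tf_version}")

# An Ampere card (RTX 3080) needs CUDA 11.x, so TF 1.15 (CUDA 10.0) won't do:
print(supported_cuda("1.15"))  # 10.0 -- too old for an RTX 3080
print(supported_cuda("2.4"))   # 11.0 -- works with Ampere
```

This is why the error above appears even though the card and the framework each work fine in isolation: the tested CUDA version for TF 1.15 predates the 30xx series.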

More than one GPU on vast.ai

Does anyone with experience using vast.ai for cloud GPU computing know whether, when renting more than one GPU, you need to do some setup to take advantage of the extra GPUs?
I ask because I can't notice any speed difference when renting 6 or 8 GPUs instead of just one. I'm new to using vast.ai for cloud GPU computing.
I am using this default docker:
Official docker images for deep learning framework TensorFlow (http://www.tensorflow.org)
Successfully loaded tensorflow/tensorflow:nightly-gpu-py3
And just installing keras afterwards:
pip install keras
I have also checked the available GPUs using the snippet below, and all the GPUs are detected correctly:
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
cheers
Solution:
I finally found the solution myself: I just used another Docker image with an older version of TensorFlow (2.0.0), and the problem disappeared.
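For context, renting N GPUs doesn't speed anything up unless the training code actually distributes work across them; in TF 2.x that is usually done with tf.distribute.MirroredStrategy, which splits each batch across the devices and averages the resulting gradients. A pure-Python sketch of that data-parallel idea (the helper names here are made up for illustration; they are not TensorFlow APIs):

```python
# Data parallelism in miniature: split a batch across "devices",
# compute a per-shard result, then average -- the same scheme
# tf.distribute.MirroredStrategy applies across real GPUs.

def split_batch(batch, num_devices):
    """Divide a batch into one shard per device (last shard may be shorter)."""
    shard = (len(batch) + num_devices - 1) // num_devices
    return [batch[i * shard:(i + 1) * shard] for i in range(num_devices)]

def fake_gradient(shard):
    """Stand-in for a per-device gradient: here, just the shard mean."""
    return sum(shard) / len(shard)

def data_parallel_step(batch, num_devices):
    shards = split_batch(batch, num_devices)
    grads = [fake_gradient(s) for s in shards]   # runs in parallel on real GPUs
    return sum(grads) / len(grads)               # all-reduce (average)

batch = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(data_parallel_step(batch, 1))  # 4.5
print(data_parallel_step(batch, 4))  # 4.5 -- same update, ~4x the throughput
```

The point is that the final update is the same either way; the extra GPUs only buy throughput, and only if the training loop is written to shard the batch in the first place.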

Tensorflow quantization

I would like to optimize a graph using TensorFlow's transform_graph tool. I tried optimizing the graph from MultiNet (and others with similar encoder-decoder architectures). However, the optimized graph is actually slower when using quantize_weights, and even much slower when using quantize_nodes. According to TensorFlow's documentation, quantizing may bring no improvement, or may even make things slower. Any idea if this is normal with the graph/software/hardware below?
Here is my system information for your reference:
OS Platform and Distribution: Linux Ubuntu 16.04
TensorFlow installed from: built from source (CPU) for graph conversion; binary Python package (GPU) for inference
TensorFlow version: both using r1.3
Python version: 2.7
Bazel version: 0.6.1
CUDA/cuDNN version: 8.0/6.0 (inference only)
GPU model and memory: GeForce GTX 1080 Ti
I can post all the scripts used to reproduce if necessary.
It seems like quantization in Tensorflow only happens on CPUs. See: https://github.com/tensorflow/tensorflow/issues/2807
I got the same problem in a PC environment: my quantized model was 9 times slower than the unquantized one.
But when I ported my quantized model into an Android application, it did speed things up.
It seems like quantization currently only works on CPUs, and only on ARM-based CPUs such as those in Android phones.
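The slowdown is easier to understand given what quantize_weights actually does: it stores weights as 8-bit integers plus a scale and offset, and on hardware without fast 8-bit kernels each use of the weights first requires dequantizing back to float, which is pure overhead. A rough pure-Python sketch of affine (min/max) quantization, not TensorFlow's exact implementation:

```python
def quantize(weights):
    """Map floats to integers in 0..255 using affine (min/max) quantization."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # avoid division by zero for constant weights
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate floats. This extra pass is the runtime cost
    when an op has no native 8-bit kernel on the target hardware."""
    return [v * scale + lo for v in q]

w = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, lo = quantize(w)
w_back = dequantize(q, scale, lo)
print(q)       # integers in 0..255
print(w_back)  # close to w, up to rounding error bounded by the scale
```

On ARM CPUs (as in the Android report above) TensorFlow had optimized 8-bit kernels via gemmlowp, so the quantized ops run directly on the integers; on a typical x86 build of that era they mostly did not, hence the slowdown.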

How do I install tensorflow with gpu support for Mac?

My MacBook Pro doesn't have an NVIDIA GPU, so it's not possible to run CUDA. I'm wondering which of the earlier versions of TensorFlow have GPU support for Mac OS, and how I can install one with Anaconda.
As stated on the official site:
Note: As of version 1.2, TensorFlow no longer provides GPU support on
Mac OS X.
...so installing any earlier version should be fine. But since your hardware does not have an NVIDIA graphics card with CUDA support, it doesn't matter anyway.
In terms of installing TensorFlow on Mac OS X using Anaconda, you can just follow the steps nicely described in the official docs.
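As a concrete sketch of those steps (the version numbers are my assumption based on the note above that GPU support was dropped in 1.2; check PyPI for the wheels that actually exist for your macOS and Python versions):

```shell
# Create an isolated Anaconda environment for an old TensorFlow release.
conda create -n tf-mac python=3.5
conda activate tf-mac

# Last releases before macOS GPU support was dropped were <= 1.1:
pip install "tensorflow-gpu==1.1.0"   # GPU build; requires an Nvidia card + CUDA
# or, for CPU-only (what an AMD-equipped Mac will need anyway):
pip install "tensorflow==1.1.0"
```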
TensorFlow relies on CUDA for GPU use, so you need an Nvidia GPU. There's experimental work on adding OpenCL support to TensorFlow, but it's not supported on macOS.
On an anecdotal note, I've heard bad things from people trying to use AMD cards for deep learning. Basically, AMD doesn't care about deep learning; they change their interfaces without notice, so things break or run slower than on the CPU.

Do I have to reinstall TensorFlow after changing GPUs?

I'm using TensorFlow with a GPU. My computer has an NVIDIA GeForce 750 Ti, and I'm going to replace it with a 1080 Ti. Do I have to reinstall TensorFlow (or other drivers, etc.)? If so, what exactly do I have to reinstall?
One more question: can I speed up the training process by installing one more GPU in the computer?
As far as I know, the only things you may need to reinstall are the GPU drivers (and/or CUDA and cuDNN). If you install the exact same versions with the exact same bindings, TensorFlow should not notice you changed the GPU and will continue working.
And yes, you can speed up the training process with multiple GPUs, but explaining how to install and manage that is a bit too broad for a Stack Overflow answer.