I am trying to implement TensorFlow OpKernels on the Raspberry Pi 3 GPU (QPU) for operations like Conv2D, Pooling, ReLU, etc.
The operations are mainly targeted at improving performance during inference; I do not care about training (hence no back propagation or gradients).
Is using XLA the right approach to achieve this, or is there a better way to do it?
Related
So TensorFlow.js can use WebGL to do GPU computations and train deep learning models. Why isn't this more popular than using CUDA with an NVIDIA GPU? Most people just trying to prototype machine learning models would love to do so on their personal computer, but many of us resort to expensive cloud services like AWS (although more recently Google Colab helps) for ML training if we don't have a computer with an NVIDIA GPU. I'm sure NVIDIA GPUs are faster than whatever GPU is in my MacBook, but probably any GPU will offer at least an order-of-magnitude speedup over even a fast CPU and allow for model prototyping, so why aren't we all using WebGL GPGPU? There must be a catch I just don't know about.
The WebGL backend uses the GLSL language to define functions and uploads data as shaders. It "works", but you pay a huge cost to compile the GLSL and upload the shaders: warmup time for semi-complex models is immense (we're talking minutes just to start up). And then the memory overhead is 100-200% of what the model would normally need; for larger models you're GPU-memory bound, and you don't want to waste that.
By the way, actual inference time using WebGL is fine once the model is warmed up and fits in memory.
On the other hand, NVIDIA's CUDA libraries provide direct access to the GPU, so TF compiled to use them is always going to be much more efficient.
Unfortunately, not many GPU vendors provide libraries like CUDA, so most ML is done on NVIDIA GPUs.
Then there is the next level, when you're using a TPU instead of a GPU; there, there is no WebGL to start with.
If I select WebGPU in the TFJS benchmark (https://tensorflow.github.io/tfjs/e2e/benchmarks/local-benchmark/index.html), it responds with "WebGPU is not supported. Please use Chrome Canary browser with flag "--enable-unsafe-webgpu" enabled...."
So when that's ready, will it be competitive with CUDA? On my laptop it is about 15% faster than WebGL on that benchmark.
By default, TensorFlow will use all available GPU devices. That said, does TensorFlow use GPUs and CPUs simultaneously for computing, or GPUs for computing and CPUs for job handling (either way, the CPUs are always active, I think)?
Generally it uses both the CPU and the GPU (assuming you are using a GPU-enabled TensorFlow build). What actually gets used depends on the operations your code runs.
For each operation available in TensorFlow there are several "implementations", generally a CPU implementation and a GPU one. Some operations only have a CPU implementation because a GPU implementation makes no sense, but overall most operations are available on both devices.
If you write custom operations, you need to provide the implementations you want.
TensorFlow operations come packaged with a list of devices they can execute on and a list of associated priorities.
For example, a convolution is very well suited to computation on a GPU but can still be done on a CPU, whereas a scalar addition should definitely be done on a CPU. You can override this selection using tf.device and the name of the device of interest, as in the sketch below.
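A minimal sketch of manual device placement, written in the TF 1.x style used elsewhere in this thread (the device strings and whether a GPU is actually available depend on your machine):

import tensorflow as tf

with tf.device('/cpu:0'):
    a = tf.random_uniform([1024, 1024])
    b = tf.random_uniform([1024, 1024])

with tf.device('/gpu:0'):   # pin the matmul to the GPU; needs a visible GPU
    c = tf.matmul(a, b)

# allow_soft_placement falls back to the CPU if the requested device is missing;
# log_device_placement prints where each op actually ran.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                        log_device_placement=True))
print(sess.run(c))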
Someone correct me if I'm wrong, but as far as I'm aware TensorFlow only uses either the GPU or the CPU depending on which installation you ran. For example, if you used pip install tensorflow for Python 2 or python3 -m pip install tensorflow for Python 3, you'll only get the CPU version.
Vice versa for the GPU.
If you still have any questions or if this did not correctly answer your question, feel free to ask me more.
Usually one layer in a DNN consists of MatMul, BiasAdd, and ReLU. cuBLAS provides GEMM for the MatMul, and we can do the BiasAdd and ReLU in another GPU kernel. That makes two kernel launch calls; is there any way to fuse them all together into just one? I looked into cuBLAS and cuDNN but did not find anything. I think it shouldn't be difficult, because BiasAdd and ReLU are just element-wise operations, and fusing them would be more efficient.
Here is the background:
I am working on an online prediction service that is an ensemble of multiple DNN models. By profiling my program, I found that neither my CPU nor my GPU is fully utilized, and requests block on GPU-related function calls (like launchKernel). It seems like there is a big lock in libcuda. I am using TensorFlow with XLA enabled, so I used nvprof and TensorFlow's HLO dump to visualize the GPU calls, and there are only dot and fused (BiasAdd plus ReLU) operations. Although kernel fusion is done, there are still too many launchKernel calls, and GPU utilization is only 60%. I tried multiple CUDA contexts in one process, but the improvement was trivial.
By the way, I am using a single GPU, a Tesla P100.
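For reference, a minimal sketch of the layer pattern described above under an XLA JIT scope, which is roughly the setup that produces the dot plus fused(BiasAdd, ReLU) kernels seen in the HLO dump (this assumes a TF 1.x build with XLA and the tf.xla.experimental.jit_scope API; it does not by itself merge the GEMM with the element-wise ops):

import tensorflow as tf

# One dense layer: MatMul -> BiasAdd -> ReLU compiled as a single XLA cluster.
# XLA fuses the element-wise BiasAdd and ReLU into one kernel, while the MatMul
# still lowers to a separate GEMM ("dot"), so at least two launches remain.
with tf.xla.experimental.jit_scope():
    x = tf.random_uniform([128, 1024])
    w = tf.Variable(tf.random_normal([1024, 512]))
    b = tf.Variable(tf.zeros([512]))
    y = tf.nn.relu(tf.matmul(x, w) + b)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(y)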
I am developing a robot with computer vision on a Raspberry Pi 3 with TensorFlow. Can I use the GPU for deep learning on the Raspberry Pi 3?
UPDATE:
Here is an alternative backend for Keras called PlaidML that is not TensorFlow. Its major selling feature is a speedup on non-Nvidia graphics cards. It still isn't TensorFlow, but it may be a viable option.
HERE IS MY OLD ANSWER, PRIOR TO 2018-09:
The short answer is no, it isn't possible at this time, since TensorFlow leverages Nvidia drivers to power Nvidia GPUs and the Raspberry Pi does not have Nvidia hardware.
One of two things has to change for you to have GPU access for small-form-factor computing: either TensorFlow has to support OpenCL (tracked here), or you have to switch platforms to something that has an Nvidia GPU, like this.
Sorry to be the bringer of bad news.
Reading https://www.tensorflow.org/versions/r0.10/resources/faq.html, it states:
Does TensorFlow make use of all the devices (GPUs and CPUs) available on my machine?
TensorFlow supports multiple GPUs and CPUs. See the how-to documentation on using GPUs with TensorFlow for details of how TensorFlow assigns operations to devices, and the CIFAR-10 tutorial for an example model that uses multiple GPUs.
Note that TensorFlow only uses GPU devices with a compute capability greater than 3.5.
Does this mean TensorFlow can automatically make use of all the CPUs on a given machine, or does it need to be explicitly configured?
CPUs are used via a "device" which is just a threadpool. You can control the number of threads if you feel like you need more:
sess = tf.Session(config=tf.ConfigProto(
    intra_op_parallelism_threads=NUM_THREADS))
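For completeness, a slightly fuller sketch that sets both TensorFlow thread pools (the thread counts are only illustrative, and NUM_INTRA_THREADS / NUM_INTER_THREADS are names introduced for this example):

import tensorflow as tf

NUM_INTRA_THREADS = 8  # threads used inside a single op (e.g. a large matmul)
NUM_INTER_THREADS = 2  # threads used to run independent ops in parallel

config = tf.ConfigProto(
    intra_op_parallelism_threads=NUM_INTRA_THREADS,
    inter_op_parallelism_threads=NUM_INTER_THREADS)
sess = tf.Session(config=config)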