What is SYCL 1.2? - tensorflow

I am trying to install tensorflow
Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is /usr/local/computecpp]:
Invalid SYCL 1.2 library path. /usr/local/computecpp/lib/libComputeCpp.so cannot be found
What should I do?

SYCL is a C++ abstraction layer for OpenCL. TensorFlow's experimental support for OpenCL uses SYCL, in conjunction with a SYCL-aware C++ compiler.
As Yaroslav pointed out in his comment, SYCL is only required if you are building TensorFlow with OpenCL support. The following question during the execution of ./configure asks about OpenCL support:
Do you wish to build TensorFlow with OpenCL support? [y/N]
If you answer N, you will not have to supply a SYCL path.

This is an optional step, so you can skip it if you want.
OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators.
So if you do want to install it, you have to set up TensorFlow with OpenCL using SYCL. The link below provides step-by-step information about it:
https://developer.codeplay.com/computecppce/latest/getting-started-with-tensorflow
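If you do want OpenCL support, the first thing to verify is that the library the configure script asks about actually exists. A minimal sketch, assuming the default /usr/local/computecpp prefix; the helper name and the COMPUTECPP_DIR variable are made up for illustration:

```shell
# check_computecpp PREFIX: report whether the ComputeCpp runtime library
# is present under PREFIX, i.e. PREFIX/lib/libComputeCpp.so.
check_computecpp() {
  if [ -f "$1/lib/libComputeCpp.so" ]; then
    echo "found"
  else
    echo "missing"
  fi
}

# Use COMPUTECPP_DIR if set, otherwise the default path ./configure suggests.
check_computecpp "${COMPUTECPP_DIR:-/usr/local/computecpp}"
```

If this prints "missing", either install ComputeCpp under that prefix (or give the prompt the real install location), or simply answer N to the OpenCL question.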

Related

Why does SYCL support only OpenCL 1.2 or above?

I am a student. My question may be very silly, but I want to clear it up. I have a device with a Vivante GPU that supports OpenCL 1.1. I want to run the TensorFlow sample code with SYCL support on the GPU. But before trying the TensorFlow sample code, I want to try SYCL sample code with OpenCL 1.1 on the GPU.
I have seen several SYCL implementations, like ComputeCpp, triSYCL, and sycl-gtx. All of these implementations support OpenCL 1.2 or above.
Does anyone know the reason why SYCL doesn't support OpenCL 1.1?
And how feasible would it be to modify the SYCL open-source code to support OpenCL 1.1?
The main reason SYCL 1.2 requires OpenCL 1.2 is that the Khronos intermediate representation, SPIR 1.2, requires it. Without SPIR, or another intermediate representation, a SYCL implementation cannot compile C++ code into device binaries and would need to convert C++ to OpenCL C, which is quite problematic.
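The OpenCL version a device reports can be checked against that 1.2 minimum. A sketch under stated assumptions: the helper below is hypothetical, and in practice you would feed it the CL_DEVICE_VERSION string reported by clinfo or the OpenCL API.

```shell
# meets_opencl_12 VERSION_STRING: succeed if a CL_DEVICE_VERSION string such as
# "OpenCL 1.2 <vendor info>" reports version 1.2 or newer, the minimum that
# SPIR 1.2 (and therefore SYCL 1.2) requires. Illustrative helper only.
meets_opencl_12() {
  ver=$(printf '%s\n' "$1" | sed -n 's/^OpenCL \([0-9][0-9]*\.[0-9][0-9]*\).*/\1/p')
  [ -n "$ver" ] || return 1
  major=${ver%%.*}
  minor=${ver##*.}
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 2 ]; }
}

# The Vivante device from the question reports OpenCL 1.1:
meets_opencl_12 "OpenCL 1.1 Vivante" \
  && echo "usable with SYCL 1.2" \
  || echo "too old for SYCL 1.2"   # prints: too old for SYCL 1.2
```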

Tensorflow installation from source for tensorflow.examples.learn modules

The modules mentioned above are not built into site-packages. I am using Python 3.5 and followed all the steps for building from source that are given on the website.
I did search on the Internet, but there is no apparent solution found.
The following is the configuration used during ./configure:
./configure
..................
You have bazel 0.5.2 installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3
Found possible Python library paths:
/usr/lib/python3/dist-packages
/usr/local/lib/python3.5/dist-packages
Please input the desired Python library path to use. Default is [/usr/lib/python3/dist-packages]
Using python library path: /usr/lib/python3/dist-packages
Do you wish to build TensorFlow with MKL support? [y/N] y
MKL support will be enabled for TensorFlow
Do you wish to download MKL LIB from the web? [Y/n] Y
mklml_lnx_2018.0.20170425.tgz
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? [Y/n] y
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] y
Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] y
Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] y
XLA JIT support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N] y
VERBS support will be enabled for TensorFlow
Do you wish to build TensorFlow with OpenCL support? [y/N] n
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] n
No CUDA support will be enabled for TensorFlow
Do you wish to build TensorFlow with MPI support? [y/N] n
MPI support will not be enabled for TensorFlow
Configuration finished
After that, I proceeded with the steps given on the following website: https://www.tensorflow.org/install/install_sources#ConfigureInstallation
I am using Ubuntu 16.04 LTS 64 bit.
There are no error messages; the modules are simply not getting built. These are the modules: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/learn
I want to understand the text_classification code by running it, so I had to install TensorFlow from source, but these modules are not there in site-packages/tensorflow.
I want to understand why this happened, even though I followed the steps correctly.
It seems you only finished the configure step, but you didn't actually build anything yet. Try running e.g. bazel build //tensorflow/examples/learn:boston.

How does TensorFlow use cuDNN

I'm currently reading the TensorFlow source code, and I'm curious about the implementation of the kernels. I found that most of the GPU implementations point to Eigen. Could anyone tell me how TensorFlow uses cuDNN, via Eigen or something else?
Yes, most basic kernels use Eigen, which uses plain CUDA. Kernels that use cuDNN (e.g. convolution) go through this integration: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/stream_executor/cuda
Here is an example Conv kernel that retrieves the supported Conv algorithms (including cuDNN ones, if it is linked and available), runs them and chooses the best one, and finally uses it.

Does Google Tensorflow support OpenCL

Does Google Tensorflow support OpenCL... or is it still only Cuda?
OpenCL does not appear to be supported yet (April 2017) per this open issue - https://github.com/tensorflow/tensorflow/issues/22 but I keep reading that support exists -- I might be missing something.
tf-coriander is an implementation of TensorFlow for OpenCL 1.2 GPUs. It's based on coriander, which is a general compiler that runs NVIDIA® CUDA™ code on OpenCL 1.2 devices. Disclosure: I'm the author of both projects.
There is OpenCL support via SYCL in TensorFlow; some features are in, others are in progress: https://github.com/tensorflow/tensorflow/issues/22#issuecomment-266050835

bazel build Tensorflow from source

I have many big deep learning tasks in Python 3.6 ahead and wanted to build TensorFlow (CPU only) from source, as my 13" MacBook Pro with Touch Bar noted that TensorFlow would run faster if it were built with SSE4.1, SSE4.2, AVX, AVX2, and FMA support. There are quite a lot of questions on StackOverflow and GitHub regarding that topic and I read them all. None of them addresses why it is not working for me.
I strictly followed the instructions provided by https://www.tensorflow.org/install/install_sources
My configure looks like this:
./configure
Please specify the location of python. [Default is /anaconda/bin/python]: /anaconda/python.app/Contents/MacOS/python
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] n
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] n
No XLA JIT support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N] n
No VERBS support will be enabled for TensorFlow
Found possible Python library paths:
/anaconda/python.app/Contents/lib/python3.6/site-packages
Please input the desired Python library path to use. Default is [/anaconda/python.app/Contents/lib/python3.6/site-packages]
Using python library path: /anaconda/python.app/Contents/lib/python3.6/site-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] n
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] n
No CUDA support will be enabled for TensorFlow
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.
Configuration finished
With bazel 0.4.5 I then try to do the build as described in the instructions:
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
This executes without error, but it gives literally hundreds of warnings. I could provide one as an example, but there are hardly any snippets that compile without a warning.
I appreciate every bit of help; thank you all very much.
Unfortunately compiler warnings are a fact of life. However, many of these come from external libraries which are pulled into the build. These can be filtered out with the "output_filter" argument to Bazel:
bazel build --config=opt --output_filter='^//tensorflow' //tensorflow/tools/pip_package:build_pip_package
This limits the output to warnings generated by TensorFlow code (you can also turn warnings off entirely this way, but that takes all the fun out of compiling). Since TensorFlow's own code more closely matches the tooling it is developed with, there are fewer warnings (I get some about multi-line comment continuations, a bunch of signed/unsigned integer comparisons, and some about variables which "may" be uninitialized).
None of these indicate definite bugs, just patterns of code which are sometimes bug-prone. If the compiler knew something was wrong, it would emit an error instead. Which is a long way of saying there's nothing to worry about.
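To see why the filter above works: Bazel matches the --output_filter regex against the label of the target whose compilation produced the output, so '^//tensorflow' keeps only first-party targets. A small sketch of that matching, using made-up labels and a hypothetical helper:

```shell
# filter_label LABEL: mimic --output_filter='^//tensorflow' by showing output
# only for targets whose label starts with //tensorflow. The labels below are
# hypothetical examples of what a TensorFlow build might contain.
filter_label() {
  case "$1" in
    //tensorflow*) echo "shown: $1" ;;
    *)             echo "hidden: $1" ;;
  esac
}

filter_label "//tensorflow/core:framework"  # first-party: warnings shown
filter_label "@protobuf//:protobuf_lite"    # external dependency: warnings hidden
```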