julia> print(tf.VERSION)
1.4.2
I've found methods to upgrade in Python but can't find any solution for Julia.
There's no official TensorFlow 2.0 API/package available for Julia as of now. Note that even the existing TensorFlow implementation for Julia, TensorFlow.jl, was not official: it was a community-maintained Julia wrapper around TensorFlow, built following the approach recommended by the TensorFlow maintainers.
As stated in the official documentation, API stability for TensorFlow 2.0 is promised for Python only.
If you're looking for deep-learning frameworks/packages in Julia, use Flux.jl or Knet.jl, as they're actively maintained, pure-Julia solutions. If you want to write GPU code directly in Julia, the JuliaGPU packages are available; you can also refer to generic GPU kernels.
If your intent is to use TensorFlow 2.0, then I suggest you use the Python package.
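For reference, a minimal sanity check from the Python side (a sketch, assuming TensorFlow 2.x has been installed, e.g. via pip):

# Minimal sketch: confirming a TensorFlow 2.x installation from Python.
import tensorflow as tf

print(tf.__version__)          # e.g. "2.4.1"; the old tf.VERSION attribute is gone in 2.x
print(tf.executing_eagerly())  # True by default in TF 2.x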
What is the difference between tf.keras.models.Sequential() and tf.keras.Sequential()? I don't quite understand the difference between them. Can somebody explain it to me? I am new to TensorFlow but have some basic understanding of machine learning.
>>> tf.keras.models.Sequential==tf.keras.Sequential
True
Both are the same as of TF v2. You could use the latter.
Added in this commit.
tf.keras.models.Sequential and tf.keras.Sequential do the same thing, but they come from different versions of TensorFlow. Per the documentation (TensorFlow 2.0), tf.keras.Sequential is the most recent way to refer to this class.
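Since both names resolve to the same class, a model built either way is identical. A minimal sketch (assuming TF 2.x):

import tensorflow as tf

assert tf.keras.models.Sequential is tf.keras.Sequential  # same class object

# Build the model with either name; the result is the same
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")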
Keras (keras.io) is a library that is available on its own; it specifies the high-level API.
tf.keras (https://www.tensorflow.org/guide/keras) implements the Keras API specification within TensorFlow.
If you intend to stick to the TensorFlow implementation, I would use tf.keras. Otherwise, standalone Keras gives you the advantage of being backend-agnostic.
=====
Update for the updated question: the renaming of tf.keras.models.Sequential to tf.keras.Sequential must have happened between 1.15 and 2.x. You can either downgrade your TensorFlow version or update the code; I'd go for the latter.
I have been working with TensorFlow for about a year now and am transitioning from TF 1.x to TF 2.0, and I am looking for some guidance on how to use the tf.keras.backend library in TF 2.0. I understand that the transition to TF 2.0 is supposed to remove a lot of redundancy in modeling and building graphs, since there were many ways to create equivalent layers in earlier TensorFlow versions (and I'm insanely grateful for that change!), but I'm getting stuck on understanding when to use tf.keras.backend, because its operations appear redundant with other TensorFlow libraries.
I see that some of the functions in tf.keras.backend are redundant with other TensorFlow libraries. For instance, tf.keras.backend.abs and tf.math.abs are not aliases (or at least, they're not listed as aliases in the documentation), but both take the absolute value of a tensor. After examining the source code, it looks like tf.keras.backend.abs calls the tf.math.abs function, so I really do not understand why they are not aliases. Other tf.keras.backend operations don't appear to be duplicated in TensorFlow libraries, but it looks like there are TensorFlow functions that can do equivalent things. For instance, tf.keras.backend.cast_to_floatx can be substituted with tf.dtypes.cast as long as you explicitly specify the dtype (a quick check is sketched after the list below). I am wondering two things:
when is it best to use the tf.keras.backend library instead of the equivalent TensorFlow functions?
is there a difference in these functions (and other equivalent tf.keras.backend functions) that I am missing?
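To make the redundancy concrete, here is the kind of quick check I mean (a minimal sketch, assuming TF 2.x with eager execution):

import tensorflow as tf
from tensorflow.keras import backend as K

x = tf.constant([-1.5, 2.0, -3.0])

# K.abs dispatches to the native op, so the results match exactly
tf.debugging.assert_equal(K.abs(x), tf.math.abs(x))

# cast_to_floatx is a cast with the dtype taken from K.floatx() ("float32" by default)
y = tf.constant([1, 2, 3])
tf.debugging.assert_equal(K.cast_to_floatx(y), tf.cast(y, K.floatx()))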
Short answer: Prefer TensorFlow's native API, such as tf.math.*, to the tf.keras.backend.* API wherever possible.
Longer answer:
The tf.keras.backend.* API can mostly be viewed as a remnant of the keras.backend.* API. The latter served the "exchangeable backend" design of the original (non-TF-specific) Keras. This relates to the history of Keras, which supported multiple backend libraries, among which TensorFlow was just one. Back in 2015 and 2016, other backends such as Theano and MXNet were quite popular too, but going into 2017 and 2018, TensorFlow became the dominant backend for Keras users. Eventually Keras became part of the TensorFlow API (in 2.x and in later minor versions of 1.x). In the old multi-backend world, the backend.* API provided a backend-independent abstraction over the myriad of supported backends. But in the tf.keras world, the value of the backend API is much more limited.
The various functions in tf.keras.backend.* can be divided into a few categories:
Thin wrappers around the equivalent or mostly-equivalent tensorflow native API. Examples: tf.keras.backend.less, tf.keras.backend.sin
Slightly thicker wrappers around TensorFlow native APIs, with more features included. Examples: tf.keras.backend.batch_normalization, tf.keras.backend.conv2d (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/backend.py#L4869). They often perform preprocessing and implement other logic, which makes your life easier than using the native TensorFlow API.
Unique functions that don't have an equivalent in the native TensorFlow API. Examples: tf.keras.backend.rnn, tf.keras.backend.set_learning_phase
For category 1, use the native TensorFlow APIs. For categories 2 and 3, you may want to use the tf.keras.backend.* API, as long as you can find the function on the documentation page (https://www.tensorflow.org/api_docs/python/): the documented ones come with backward-compatibility guarantees, so you don't need to worry about a future version of TensorFlow removing them or changing their behavior.
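To illustrate the first two categories, a minimal sketch (assuming TF 2.x; the shapes are toy values):

import tensorflow as tf
from tensorflow.keras import backend as K

# Category 1: thin wrapper -- prefer the native op
a = tf.constant([1.0, 2.0])
b = tf.constant([2.0, 1.0])
print(tf.math.less(a, b))  # tf.Tensor([ True False], ...)
print(K.less(a, b))        # identical result

# Category 2: K.conv2d accepts Keras-style data_format strings
# ("channels_first"/"channels_last") and handles the layout for you,
# unlike tf.nn.conv2d's "NCHW"/"NHWC" arguments.
img = tf.random.normal([1, 8, 8, 3])     # NHWC toy image batch
kernel = tf.random.normal([3, 3, 3, 16])
out_k = K.conv2d(img, kernel, padding="same", data_format="channels_last")
out_tf = tf.nn.conv2d(img, kernel, strides=1, padding="SAME")
print(out_k.shape, out_tf.shape)         # both (1, 8, 8, 16)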
I can see that TensorFlow Lite uses FlatBuffers by default, and the documentation notes that it's in fact more efficient.
Why isn't TensorFlow using it by default?
Probably because the team didn't know of its existence when they started it. FlatBuffers is a relatively new technology, whereas Protocol Buffers has been in use at Google almost since the start, and is used for everything by default.
The PyPy project is currently adding support for NumPy.
My impression is that the sklearn library is mainly based on NumPy.
Would I be able to use most of this library, or are there other requirements that are not supported yet?
Officially, none of it. If you want to do a port, go ahead (and please report results on the mailing list), but PyPy is simply not supported because scikit-learn uses many, many parts of NumPy and SciPy as well as having a lot of C, C++ and Cython extension code.
From the official sklearn FAQ (https://scikit-learn.org/stable/faq.html):
Do you support PyPy?
In case you didn’t know, PyPy is an alternative Python implementation with a built-in just-in-time compiler. Experimental support for PyPy3-v5.10+ has been added, which requires Numpy 1.14.0+, and scipy 1.1.0+.
Also see what PyPy has to say (https://www.pypy.org/):
Compatibility: PyPy is highly compatible with existing python code. It supports cffi, cppyy, and can run popular python libraries like twisted, and django. It can also run NumPy, Scikit-learn and more via a c-extension compatibility layer.
Numpy can be "linked/compiled" against different BLAS implementations (MKL, ACML, ATLAS, GotoBlas, etc). That's not always straightforward to configure but it is possible.
Is it also possible to "link/compile" numpy against NVIDIA's CUBLAS implementation?
I couldn't find any resources on the web, and before I spend too much time trying it I wanted to make sure that it is possible at all.
In a word: no, you can't do that.
There is a rather good scikit called scikits.cuda, which provides access to CUBLAS and is built on top of PyCUDA. PyCUDA provides a numpy.ndarray-like class that seamlessly allows manipulation of numpy arrays in GPU memory with CUDA. So you can use CUBLAS and CUDA with numpy, but you can't just link against CUBLAS and expect it to work.
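A minimal sketch of that stack (assumes a working CUDA install plus the pycuda and scikits.cuda packages, the latter imported as skcuda in recent releases):

import numpy as np
import pycuda.autoinit              # creates a CUDA context on import
import pycuda.gpuarray as gpuarray
import skcuda.linalg as linalg

linalg.init()                       # initialises the CUBLAS handle

a = np.random.rand(4, 4).astype(np.float32)
b = np.random.rand(4, 4).astype(np.float32)

a_gpu = gpuarray.to_gpu(a)          # copy the numpy arrays into GPU memory
b_gpu = gpuarray.to_gpu(b)
c_gpu = linalg.dot(a_gpu, b_gpu)    # matrix product via CUBLAS sgemm

print(np.allclose(c_gpu.get(), a @ b))  # True, up to float32 tolerance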
There is also a commercial library that provides numpy and cublas like functionality and which has a Python interface or bindings, but I will leave it to one of their shills to fill you in on that.
Here is another possibility:
http://www.cs.toronto.edu/~tijmen/gnumpy.html
This is basically a gnumpy + cudamat environment that can be used to harness a GPU. The same code can also be run without the GPU using npmat. Refer to the link above to download all these files.
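A minimal sketch of what gnumpy code looks like (assumes gnumpy and cudamat are installed; per the link above, substituting npmat lets the same code run on the CPU):

import numpy as np
import gnumpy as gpu

x = gpu.garray(np.random.rand(3, 3))  # move a numpy array onto the GPU
y = gpu.dot(x, x.T) + 1.0             # numpy-like operations run on the GPU
print(y.as_numpy_array())             # copy the result back to host memory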