Understand relation between different tensorflow AdagradOptimizer APIs - tensorflow

I am new to TF, and while reading some model code I noticed it used 1), but most documentation I can find uses 2) and 3). So what is the tensorflow.python library used for? It seems it is not in the official documentation. And what is the relation between 1) and 2), 3)?
1) from tensorflow.python.training.adagrad import AdagradOptimizer
2) from tensorflow.compat.v1.train import AdagradOptimizer
3) from tensorflow.keras.optimizers import Adagrad

Basically:
1) tensorflow.python is essentially "internal" code and not part of the public API. You should never use anything in there directly. It may work, but it can also lead to instabilities, break completely if you update your TF version, etc.
2) This is a hold-over from old TF versions, before Keras was tightly integrated with TensorFlow. IMHO you should just forget this exists, and they should have removed it completely. It would be used with the outdated layers interface (tf.compat.v1.layers).
3) This is what you should use with tf.keras (models and/or layers), and it should be your go-to interface (not necessarily Adagrad specifically, but all the optimizers in tf.keras.optimizers).
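Route 3) can be sketched as follows, assuming TensorFlow 2.x; the tiny model and random data below are made up purely for illustration:

```python
import tensorflow as tf

# Build a tiny model and attach the Keras Adagrad optimizer to it.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(
    optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.01),
    loss="mse",
)

# One short training call on random data, just to show the optimizer in use.
x = tf.random.normal((16, 4))
y = tf.random.normal((16, 1))
model.fit(x, y, epochs=1, verbose=0)
```

The same pattern works with any optimizer from tf.keras.optimizers, which is the point of the answer above.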

Related

Why do we use the Keras backend import in code?

import tensorflow as tf
from tensorflow import keras
from keras import backend as K
What is the reason behind using the line
from keras import backend as K
What does it do? I would appreciate it if anyone could explain it simply, so that it does not get confusing.
You can find more information on what the Keras backend actually is in the Keras documentation.
In simpler terms, to understand what the Keras backend actually is:
Keras is a model-level library that provides high-level building blocks for developing deep learning models. Keras does not provide low-level operations such as tensor multiplication and convolution. Instead, it relies on a specialized, well-optimized tensor library that serves as Keras' "backend engine". Instead of choosing one single tensor library and tying your Keras implementation to that library, Keras handles the problem in a modular way, allowing you to seamlessly connect multiple different backend engines to Keras.
The Keras backend allows you to write custom code, or in a particular case a new "Keras module" for your use case, that can support both Theano and TensorFlow. For example, instead of tf.placeholder() you could write keras.backend.placeholder(), which works across both of the libraries mentioned earlier.
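A minimal sketch of writing against the backend module rather than a specific library: the safe_log_loss function below is an invented example, built on the backend's configuration helper K.epsilon(), and assumes TensorFlow 2.x with the bundled tf.keras:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

# K.epsilon() is the backend's small fuzz constant; using it here keeps
# log() away from zero without hard-coding a magic number.
def safe_log_loss(y_true, y_pred):
    y_pred = tf.clip_by_value(y_pred, K.epsilon(), 1.0 - K.epsilon())
    return -tf.reduce_mean(y_true * tf.math.log(y_pred))

loss = safe_log_loss(tf.constant([1.0]), tf.constant([0.5]))
```

In the old multi-backend world, code written purely with K.* calls would have run unchanged on Theano as well; with tf.keras the backend module mostly survives as this kind of convenience layer.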

Redundancies in tf.keras.backend and tensorflow libraries

I have been working in TensorFlow for about a year now, and I am transitioning from TF 1.x to TF 2.0, and I am looking for some guidance on how to use the tf.keras.backend library in TF 2.0. I understand that the transition to TF 2.0 is supposed to remove a lot of redundancies in modeling and building graphs, since there were many ways to create equivalent layers in earlier TensorFlow versions (and I'm insanely grateful for that change!), but I'm getting stuck on understanding when to use tf.keras.backend, because the operations appear redundant with other TensorFlow libraries.
I see that some of the functions in tf.keras.backend are redundant with other TensorFlow libraries. For instance, tf.keras.backend.abs and tf.math.abs are not aliases (or at least, they're not listed as aliases in the documentation), but both take the absolute value of a tensor. After examining the source code, it looks like tf.keras.backend.abs calls the tf.math.abs function, and so I really do not understand why they are not aliases. Other tf.keras.backend operations don't appear to be duplicated in TensorFlow libraries, but it looks like there are TensorFlow functions that can do equivalent things. For instance, tf.keras.backend.cast_to_floatx can be substituted with tf.dtypes.cast as long as you explicitly specify the dtype. I am wondering two things:
when is it best to use the tf.keras.backend library instead of the equivalent TensorFlow functions?
is there a difference in these functions (and other equivalent tf.keras.backend functions) that I am missing?
Short answer: Prefer TensorFlow's native API, such as tf.math.*, to the tf.keras.backend.* API wherever possible.
Longer answer:
The tf.keras.backend.* API can mostly be viewed as a remnant of the keras.backend.* API. The latter serves the "exchangeable backend" design of the original (non-TF-specific) Keras. This relates to the history of Keras, which supported multiple backend libraries, of which tensorflow used to be just one. Back in 2015 and 2016, other backends, such as Theano and MXNet, were quite popular too. But going into 2017 and 2018, tensorflow became the dominant backend for Keras users. Eventually Keras became a part of the tensorflow API (in 2.x and later minor versions of 1.x). In the old multi-backend world, the backend.* API provided a backend-independent abstraction over the myriad of supported backends. But in the tf.keras world, the value of the backend API is much more limited.
The various functions in tf.keras.backend.* can be divided into a few categories:
Thin wrappers around the equivalent or mostly-equivalent tensorflow native API. Examples: tf.keras.backend.less, tf.keras.backend.sin
Slightly thicker wrappers around tensorflow native APIs, with more features included. Examples: tf.keras.backend.batch_normalization, tf.keras.backend.conv2d (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/backend.py#L4869). They often perform preprocessing and implement other logic, which makes your life easier than using the native tensorflow API.
Unique functions that don't have an equivalent in the native tensorflow API. Examples: tf.keras.backend.rnn, tf.keras.backend.set_learning_phase
For category 1, use the native tensorflow APIs. For categories 2 and 3, you may want to use the tf.keras.backend.* API, as long as you can find the function on the documentation page https://www.tensorflow.org/api_docs/python/, because the documented ones have backward-compatibility guarantees, so you don't need to worry about a future version of tensorflow removing them or changing their behavior.
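For the category-1 case, the native side of the comparison looks like this (a sketch: tf.cast, with an explicit dtype, stands in for cast_to_floatx, and tf.math.abs is the documented native op):

```python
import tensorflow as tf

x = tf.constant([-1, 2, -3])

# tf.cast needs an explicit dtype, unlike cast_to_floatx, which reads
# the default float type from the Keras configuration.
y = tf.cast(x, tf.float32)

# tf.math.abs is the documented native op; tf.abs is an alias for it.
z = tf.math.abs(y)
```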

Why I got different training results from using keras and tf.keras?

I was using TensorFlow 1.13 and Keras for my research projects. Recently, due to some FutureWarnings, I installed TensorFlow 2.0 and tried to use it.
Instead of using Keras as I did before, I used tf.keras and built the same RNN model, i.e.
from keras.layers import Dense (what I used before)
vs.
from tensorflow.keras.layers import Dense (what I tried now)
All other code is the same. However, I get somewhat worse results with the tensorflow.keras.layers import, and I am pretty sure it's not a coincidence: I tried cross-validation and ran the models many times.
Does anyone have any ideas about why this happens? Are there any differences between tf.keras.layers and keras.layers? If so, how can we be careful in case we get such "worse" results?
tf.keras is tensorflow's implementation of the Keras API. Ideally, using tf.keras should not give you worse results. However, there might be a mismatch between the versions of the two Keras implementations, which may or may not give you different results. You can check the version via the tf.keras.__version__ attribute and see whether it is the same version of Keras that you used before.
For more details refer:
https://www.tensorflow.org/guide/keras/overview
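A quick way to check, assuming TensorFlow 2.x (the printed version strings will vary per installation):

```python
import tensorflow as tf

# The versions of TensorFlow and of its bundled Keras implementation.
print(tf.__version__)
print(tf.keras.__version__)
```

Compare the second string against the version reported by your standalone keras package before concluding the layers themselves behave differently.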

What's the difference between tensorflow.python.keras and tensorflow.keras?

As the title says, are they the same API? When I print the layers module in Keras, the results are as follows:
from tensorflow.keras import layers
print(layers)
from tensorflow.python.keras import layers
print(layers)
result
<module 'tensorflow.python.keras.api._v1.keras.layers' from '/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/api/_v1/keras/layers/__init__.py'>
<module 'tensorflow.python.keras.layers' from '/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/__init__.py'>
We can see that the two modules come from different sources.
And when I look for the api module in the source code, there is only a BUILD file.
Is there a relation between the two modules? What is the mechanism of the API generator?
Anything under tensorflow.python.* is private, intended for development only, rather than for public use.
Importing from tensorflow.python or any other internal modules (including import tensorflow_core...) is not supported and can break without notice.
So it is suggested not to use anything under tensorflow.python.*.
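For instance, only the first import below is covered by TensorFlow's compatibility guarantees; the private one is left commented out because it may not even exist in newer releases:

```python
# Public, supported: goes through the stable tf.keras namespace.
from tensorflow.keras import layers

# Private, unsupported: may break or disappear between releases.
# from tensorflow.python.keras import layers
```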

What is the difference between keras and tf.keras?

I'm learning TensorFlow and Keras. I'd like to try https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438/, and it seems to be written in Keras.
Would it be fairly straightforward to convert code to tf.keras?
I'm less interested in the portability of the code than in the true difference between the two.
The difference between tf.keras and keras is the TensorFlow-specific enhancements to the framework.
keras is an API specification that describes how a deep learning framework should implement certain parts related to model definition and training.
It is framework-agnostic and supports different backends (Theano, Tensorflow, ...).
tf.keras is the TensorFlow-specific implementation of the Keras API specification. It adds framework support for many TensorFlow-specific features, such as first-class support for tf.data.Dataset as input objects, support for eager execution, and more.
In TensorFlow 2.0, tf.keras will be the default, and I highly recommend starting to work with tf.keras.
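The tf.data.Dataset support mentioned above can be sketched as follows, assuming TensorFlow 2.x; the toy dataset and model are invented for illustration:

```python
import tensorflow as tf

# A tf.data.Dataset can be passed straight to fit(); no manual batching loop.
ds = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((32, 4)), tf.random.normal((32, 1)))
).batch(8)

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
history = model.fit(ds, epochs=1, verbose=0)
```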
At this point tensorflow has pretty much entirely adopted the keras API, and for good reason: it's simple, easy to use, and easy to learn, whereas "pure" tensorflow comes with a lot of boilerplate code. And yes, you can use tf.keras without any issues, though you might have to re-work your imports. For instance:
from keras.layers.pooling import MaxPooling2D
Would turn into:
from tensorflow.keras.layers import MaxPooling2D
The history of Keras vs. tf.keras is long and twisted.
Keras: a high-level (easy-to-use) API, built by Google AI developer/researcher François Chollet. It is written in Python and capable of running on top of backend engines like TensorFlow, CNTK, or Theano.
TensorFlow: a library, also developed by Google, for the deep learning developer community, making deep learning applications accessible and usable to the public. It is open-sourced and available on GitHub.
With the release of Keras v1.1.0, TensorFlow was made the default backend engine. That meant: if you installed Keras on your system, you were also installing TensorFlow.
Later, with TensorFlow v1.10.0, the tf.keras submodule was introduced into TensorFlow for the first time: the first step in integrating Keras within TensorFlow.
With the release of Keras 2.3.0:
it was the first release of Keras in sync with tf.keras,
and the last major release to support other multi-backend engines.
Most importantly, going forward, it was recommended to switch code from keras to TensorFlow 2.0 and the tf.keras package.
Refer to this tweet from François Chollet on using tf.keras.
That means,
Change Everywhere
From
from keras.models import Sequential
from keras.models import load_model
To
from tensorflow.keras.models import Sequential
from tensorflow.keras.models import load_model
And in requirements.txt:
tensorflow==2.3.0
*Disclaimer: it might cause conflicts if you were using an older version of Keras. In that case, do pip uninstall keras.