How to perform quantization-aware training with TensorFlow 1.15? - tensorflow

I am using TensorFlow 1.15 due to dependencies on multiple other modules, and I am struggling to do quantization-aware training. I came across tensorflow_model_optimization, but it works only with TensorFlow 2.x. Is there any way quantization can be performed during training with TensorFlow 1.15?

No: quantization-aware training (via tensorflow_model_optimization) was introduced in TF 2.0, and there is no workaround for it in 1.x, so please upgrade to 2.x and let us know if you face any issues.

Related

How to make predictions with Mask R-CNN and python 3.10

My problem:
I have weights for a Mask R-CNN Model, which has been trained using python 3.7 and tensorflow 1.13.1. I can use this environment to make predictions.
I am able to reproduce those predictions using python 3.8 and loading the weights with the Mask R-CNN for tensorflow 2 and tensorflow 2.4.1.
When I use python 3.10 and tensorflow 2.9.1, the prediction process runs without any errors, but the results do not make any sense. The class instances are just a few randomly distributed specks. The results look similar for python 3.8 and tensorflow 2.9.1.
Where I'm at
I need to use python 3.10; I don't care about the tensorflow version. I found requirements for an environment that should work for python 3.9 using tensorflow 2.7, but to my understanding, for python 3.10 I need tensorflow 2.8 or higher.
What I need
I have no experience with tensorflow or Mask R-CNN, so I don't really know where to start. Has someone already encountered this kind of problem? Is it something typical, and does it point in a certain direction?

Is it possible to use CuDNNLSTM with Google Colab's TPU?

I am able to do this with their GPU, but with their TPU it gives me an error...
Does anybody around here know what I'm missing, please?
Does it make sense to actually use the TPU with CuDNNLSTM? Or is CuDNNLSTM just tailored for GPU?
Thanks a lot in advance.
keras.layers.CuDNNLSTM is only supported on GPUs. But in TensorFlow 2, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available.
Below are the details from Performance optimization and CuDNN kernels:
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available.
With this change, the prior keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your model without worrying about the hardware it will run on.
You can just use the built-in LSTM layer, tf.keras.layers.LSTM, and it will work on both TPUs and GPUs.
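A minimal sketch of the portable replacement: the built-in layer picks the CuDNN kernel automatically on GPU (as long as the CuDNN-compatible default arguments such as activation="tanh" are kept) and falls back to a generic kernel on CPU/TPU. The toy shapes here are illustrative:

```python
import numpy as np
import tensorflow as tf

# Built-in LSTM: CuDNN-backed on GPU with default args, works on CPU/TPU too.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(10, 4)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Batch of 8 sequences, 10 timesteps, 4 features each.
x = np.random.rand(8, 10, 4).astype("float32")
out = model.predict(x, verbose=0)
print(out.shape)  # (8, 1)
```

Note that changing arguments like activation or recurrent_activation away from their defaults disables the CuDNN fast path, but the layer still runs correctly everywhere.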

What is the difference between keras and tf.keras?

I'm learning TensorFlow and Keras. I'd like to try https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438/, and it seems to be written in Keras.
Would it be fairly straightforward to convert code to tf.keras?
I'm less interested in the portability of the code than in the true difference between the two.
The difference between tf.keras and keras is the TensorFlow-specific enhancements to the framework.
keras is an API specification that describes how a deep-learning framework should implement certain parts related to model definition and training.
It is framework-agnostic and supports different backends (Theano, TensorFlow, ...).
tf.keras is the TensorFlow-specific implementation of the Keras API specification. It adds support for many TensorFlow-specific features, such as first-class support for tf.data.Dataset as input objects, support for eager execution, and more.
In TensorFlow 2.0, tf.keras will be the default, and I highly recommend starting to work with tf.keras.
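As a small illustration of the eager-execution support mentioned above: under TF 2.x a tf.keras layer can be called directly on tensors and the result inspected immediately, with no Session or graph-building boilerplate. The layer and shapes are arbitrary examples:

```python
import tensorflow as tf

# Under eager execution (the TF 2.x default) a layer is just a callable.
layer = tf.keras.layers.Dense(3)
out = layer(tf.ones((2, 4)))  # runs immediately, no tf.Session needed
print(out.shape)  # (2, 3)
```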
At this point tensorflow has pretty much entirely adopted the keras API, and for good reason: it's simple, easy to use, and easy to learn, whereas "pure" tensorflow comes with a lot of boilerplate code. And yes, you can use tf.keras without any issues, though you might have to rework the imports in your code. For instance:
from keras.layers.pooling import MaxPooling2D
Would turn into:
from tensorflow.keras.layers import MaxPooling2D
The history of Keras vs tf.keras is long and twisted.
Keras: Keras is a high-level (easy-to-use) API, built by Google AI developer/researcher François Chollet. It is written in Python and capable of running on top of backend engines like TensorFlow, CNTK, or Theano.
TensorFlow: A library, also developed by Google, for the deep-learning developer community, making deep-learning applications accessible and usable to the public. It is open source and available on GitHub.
With the release of Keras v1.1.0, TensorFlow was made the default backend engine. That meant: if you installed Keras on your system, you were also installing TensorFlow.
Later, with TensorFlow v1.10.0, the tf.keras submodule was introduced in TensorFlow for the first time; this was the first step in integrating Keras within TensorFlow.
With the release of Keras 2.3.0:
It was the first release of Keras in sync with tf.keras.
It was the last major release to support other multi-backend engines.
And most importantly, going forward, it is recommended to switch code from keras to the TensorFlow 2.0 and tf.keras packages.
Refer to this tweet from François Chollet about using tf.keras.
That means,
Change Everywhere
From
from keras.models import Sequential
from keras.models import load_model
To
from tensorflow.keras.models import Sequential
from tensorflow.keras.models import load_model
And In requirements.txt,
tensorflow==2.3.0
*Disclaimer: it might cause conflicts if you were using an older version of Keras. Do pip uninstall keras in that case.

Should I use the standalone Keras library or tf.keras?

As Keras becomes an API for TensorFlow, there are lots of old versions of Keras code, such as https://github.com/keiserlab/keras-neural-graph-fingerprint/blob/master/examples.py
from keras import models
With the current version of TensorFlow, do we need to change every Keras import as follows?
from tensorflow.keras import models
You are mixing things up:
Keras (https://keras.io/) is a library independent from TensorFlow, which specifies a high-level API for building and training neural networks and is capable of using one of multiple backends (among which, TensorFlow) for low-level tensor computation.
tf.keras (https://www.tensorflow.org/guide/keras) implements the Keras API specification within TensorFlow. In addition, the tf.keras API is optimized to work well with other TensorFlow modules: you can pass a tf.data Dataset to the .fit() method of a tf.keras model, for instance, or convert a tf.keras model to a TensorFlow estimator with tf.keras.estimator.model_to_estimator. Currently, the tf.keras API is the high-level API to look for when building models within TensorFlow, and the integration with other TensorFlow features will continue in the future.
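The tf.data integration mentioned above can be sketched in a few lines: a tf.data.Dataset is passed straight to Model.fit with no conversion step. The model and random data are illustrative stand-ins:

```python
import numpy as np
import tensorflow as tf

# Dummy data wrapped in a tf.data pipeline (shuffle + batch).
x = np.random.rand(64, 5).astype("float32")
y = np.random.rand(64, 1).astype("float32")
ds = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(64).batch(16)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(5,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# The Dataset is consumed directly; batching is handled by the pipeline.
model.fit(ds, epochs=1, verbose=0)
```

With the standalone multi-backend Keras, by contrast, .fit() traditionally expected arrays or Python generators, which is one concrete way the two implementations diverge.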
So to answer your question: no, you don't need to convert Keras code to tf.keras code. Keras code uses the Keras library, potentially even running on top of a different backend than TensorFlow, and will continue to work just fine in the future. Moreover, it's important not to mix Keras and tf.keras objects within the same script, since this might produce incompatibilities, as you can see for example in this question.
Update: Keras will be abandoned in favor of tf.keras: https://twitter.com/fchollet/status/1174019423541157888

How do I identify Keras version which is merged to Tensorflow current?

I am trying to use Keras/TensorFlow, but some options are not supported (e.g. TensorBoard embeddings_freq). I want to know TensorFlow's merging policy for Keras, especially the synchronization schedule, and how to check which Keras version was merged.
The Keras in tf.keras is a reimplementation of Keras, not a merge of a particular version. File issues if features you need are missing.