Why doesn't TensorFlow use FlatBuffers by default? - tensorflow

I can see that TensorFlow Lite uses FlatBuffers by default, and the documentation notes that it is in fact more efficient.
Why isn't TensorFlow using it by default?

Probably because the team didn't know of its existence when they started it. FlatBuffers is a relatively new technology, whereas Protocol Buffers has been in use at Google almost since the start, and is used for everything by default.

Related

Is there any way to make stable-baselines work with TensorFlow 2.0?

I am trying to run a program that was written a while ago and uses stable-baselines. My laptop has an M1 chip, so TensorFlow 1.x doesn't work on it (I found tutorials on how to install TensorFlow 2.x, but not TensorFlow 1.x). And if I switch to stable-baselines3, I will have to change a lot of code in the program, since there are differences between PPO2 and PPO. So I wonder: is there any way I can either use TensorFlow 1.x on my Mac, or use stable-baselines with TensorFlow 2.x?

Redundancies in tf.keras.backend and tensorflow libraries

I have been working in TensorFlow for about a year now, and I am transitioning from TF 1.x to TF 2.0. I am looking for some guidance on how to use the tf.keras.backend library in TF 2.0. I understand that the transition to TF 2.0 is supposed to remove a lot of redundancies in modeling and building graphs, since there were many ways to create equivalent layers in earlier TensorFlow versions (and I'm insanely grateful for that change!), but I'm getting stuck on understanding when to use tf.keras.backend, because its operations appear redundant with other TensorFlow libraries.
I see that some of the functions in tf.keras.backend are redundant with other TensorFlow libraries. For instance, tf.keras.backend.abs and tf.math.abs are not aliases (or at least, they're not listed as aliases in the documentation), but both take the absolute value of a tensor. After examining the source code, it looks like tf.keras.backend.abs calls the tf.math.abs function, and so I really do not understand why they are not aliases. Other tf.keras.backend operations don't appear to be duplicated in TensorFlow libraries, but it looks like there are TensorFlow functions that can do equivalent things. For instance, tf.keras.backend.cast_to_floatx can be substituted with tf.dtypes.cast as long as you explicitly specify the dtype. I am wondering two things:
when is it best to use the tf.keras.backend library instead of the equivalent TensorFlow functions?
is there a difference in these functions (and other equivalent tf.keras.backend functions) that I am missing?
Short answer: Prefer tensorflow's native APIs such as tf.math.* to the tf.keras.backend.* API wherever possible.
Longer answer:
The tf.keras.backend.* API can mostly be viewed as a remnant of the keras.backend.* API. The latter served the "exchangeable backend" design of the original (non-TF-specific) keras. This relates to the historical aspect of keras, which supported multiple backend libraries, of which tensorflow used to be just one. Back in 2015 and 2016, other backends such as Theano and MXNet were quite popular too. But going into 2017 and 2018, tensorflow became the dominant backend for keras users. Eventually keras became a part of the tensorflow API (in 2.x and later minor versions of 1.x). In the old multi-backend world, the backend.* API provided a backend-independent abstraction over the myriad of supported backends. But in the tf.keras world, the value of the backend API is much more limited.
The various functions in tf.keras.backend.* can be divided into a few categories:
Thin wrappers around the equivalent or mostly-equivalent tensorflow native API. Examples: tf.keras.backend.less, tf.keras.backend.sin
Slightly thicker wrappers around tensorflow native APIs, with more features included. Examples: tf.keras.backend.batch_normalization, tf.keras.backend.conv2d (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/backend.py#L4869). They often perform preprocessing and implement other logic, which makes your life easier than using the native tensorflow API.
Unique functions that don't have equivalent in the native tensorflow API. Examples: tf.keras.backend.rnn, tf.keras.backend.set_learning_phase
For category 1, use native tensorflow APIs. For categories 2 and 3, you may want to use the tf.keras.backend.* API, as long as the function appears on the documentation page https://www.tensorflow.org/api_docs/python/, because the documented functions have backward-compatibility guarantees, so you don't need to worry about a future version of tensorflow removing them or changing their behavior.
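As a quick illustration of the category 1 point (a minimal sketch against the TF 2.x API, not part of the original answer), the backend wrappers and the native ops are interchangeable here:

# Minimal TF 2.x sketch: backend wrappers vs. native ops for the functions named above.
import numpy as np
import tensorflow as tf

x = tf.constant([-1.5, 2.0, -3.0])

# Category 1: tf.keras.backend.abs is a thin wrapper around the same op -- prefer tf.math.abs.
print(tf.math.abs(x))
print(tf.keras.backend.abs(x))

# cast_to_floatx casts to the global Keras float type (float32 by default),
# whereas tf.dtypes.cast needs the target dtype spelled out explicitly.
ints = np.array([1, 2, 3])
print(tf.keras.backend.cast_to_floatx(ints))
print(tf.dtypes.cast(ints, tf.keras.backend.floatx()))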

Has Microsoft abandoned CNTK?

I want to know whether CNTK is dead. The release notes on GitHub, dated 03/31/2019, state: "Today's 2.7 release will be the last main release of CNTK." I've spent months developing software using CNTK and now it appears to have been a waste of time and money. I've searched for an answer on numerous sites and still have no answer. Stack Overflow is one of the sites recommended by Microsoft.
From KedengMS, one of the maintainers of CNTK. Reposted from GitHub:
Thanks to all the CNTK supporters; I am privileged to have worked on it, and I learned a lot in the process. You can continue to use CNTK for training and inference the way it currently is, as other Microsoft internal teams still run old models, even in BrainScript or NDL. Stopping adding new features does not mean CNTK is no longer open source; it just means that, going forward, there will be no new GPU support (say, CUDA 11+) and no major new features added.
For different user scenarios, I think you may have different choices:
Deep learning newcomers: IMO CNTK is still a good entry point for understanding the basics of deep learning, if you find the CNTK documents/tutorials/examples useful. Once you have learned the basics, it won't be too hard to switch between frameworks. However, the DL field is changing rapidly and CNTK has already lagged behind in a lot of ways, so if you need more advanced features like dynamic graphs, PyTorch would be a better choice.
Model maintainers: If you already have CNTK models working, and maintaining them just means training with new data, you can continue to use CNTK the way you currently use it. Actually, teams inside Microsoft are doing this too. If there are serious bugs preventing productivity, they will still be fixed. For inference, you can continue to use the CNTK C/C++/Python/C#/Java APIs, or you may export CNTK models in ONNX format and use ONNX Runtime (ORT) as a slimmer and faster inference engine. You'll be surprised to find how much faster it is compared to CNTK, and how much slimmer the setup is (forget about OpenMPI when you just need inference!). ORT currently provides C/C++/Python/C# interfaces.
Model builders: If you have a CNTK model and want to use features that are not currently supported in CNTK, please consider switching to other frameworks like TensorFlow/PyTorch/etc. Our team has done a lot of data reader work inside PyTorch to ensure teams in Microsoft can switch from CNTK to PyTorch. Besides, we are also in the process of migrating CNTK-specific distributed trainers like BMUF to PyTorch. Hopefully you'll find that useful too when migrating your model.
The good thing about open source is that the community can continue to fork/evolve it if needed, unlike other Microsoft products that only ship binaries (Win7, I am looking at you).

how to serve pytorch or sklearn models using tensorflow serving

I have found tutorials and posts which only explain how to serve tensorflow models using tensorflow serving.
In the model.conf file, there is a parameter model_platform in which tensorflow or any other platform can be specified. But how do we export models from other platforms in the tensorflow way so that they can be loaded by tensorflow serving?
I'm not sure if you can. The tensorflow platform is designed to be flexible, but if you really want to use it, you'd probably need to implement a C++ library that loads your saved model (in protobuf) and exposes a servable to the tensorflow serving platform. Here's a similar question.
I haven't seen such an implementation, and the efforts I've seen usually go in two other directions:
Pure Python code serving a model over HTTP or gRPC, such as what's being developed in Pipeline.AI (see the sketch after this list).
Dumping the model in PMML format and serving it with Java code.
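For the first direction, a minimal sketch of serving a scikit-learn model over HTTP could look like this (the endpoint and file names are illustrative, assuming Flask and joblib are installed):

# Hedged sketch: plain-Python HTTP serving of a scikit-learn model.
# "model.joblib" and the /predict endpoint are placeholders, not a standard.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # a previously saved sklearn estimator

@app.route("/predict", methods=["POST"])
def predict():
    instances = request.get_json()["instances"]  # e.g. [[5.1, 3.5, 1.4, 0.2], ...]
    predictions = model.predict(instances).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)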
Not answering the question, but since no better answers exist yet: as an addition to the alternative directions mentioned by adrin, these might be helpful:
Clipper (Apache License 2.0) is able to serve PyTorch and scikit-learn models, among others
Further reading:
https://www.andrey-melentyev.com/model-interoperability.html
https://medium.com/@vikati/the-rise-of-the-model-servers-9395522b6c58
Now you can serve your scikit-learn model with Tensorflow Extended (TFX):
https://www.tensorflow.org/tfx/guide/non_tf

How to deploy parsey's cousins with tensorflow serving

Are there instructions or documentation somewhere, or could somebody describe, how to deploy the models available as "Parsey's Cousins" (see https://github.com/tensorflow/models/blob/master/syntaxnet/universal.md) with SyntaxNet under Tensorflow Serving? Even deploying just Parsey is a rather complex undertaking that is not really documented anywhere, but how do you do this for the additional 40 languages?
This pull request partially addresses your request, but it still has some issues: https://github.com/tensorflow/models/pull/250.
We do have some tentative plans to provide easier integration between SyntaxNet and Tensorflow Serving, but no precise timeline.
Just for the benefit of anyone else who finds this question, after some digging around on GitHub, one can find the following issue started by Johann Petrak:
https://github.com/dsindex/syntaxnet/issues/7
a model from Parsey's Cousins cannot be exported with that patch due to a version mismatch
So whilst some people have been able to modify syntaxnet so that it works with Tensorflow Serving, this seems to be at the cost of using a version which is not compatible with Parsey's Cousins.
Currently the only way to get Tensorflow Serving working with languages other than English is to use something like dsindex's code and train your own models.