What does 'opaque object' mean in TensorFlow Serving?

I've looked through the official documentation of TensorFlow Serving to understand how it is structured. However, I've encountered one term: opaque object.
It states: "TensorFlow Serving Core treats servables and loaders as opaque objects."
What does that mean, and how does it relate to the structure of TensorFlow Serving?
Here is the link to the official documentation.

Related

Can't manage to open TensorFlow SavedModel for usage in Keras

I'm kinda new to TensorFlow and Keras, so please excuse any accidental stupidity, but I have an issue. I've been trying to load in models from the TensorFlow Detection Zoo, but haven't had much success.
I can't figure out how to read these saved_model folders (they contain a saved_model.pb file, and an assets and variables folder) so that they're accepted by Keras. Nor can I figure out a way to convert these models so that they may be loaded in. I've tried converting the SavedModel to ONNX and then converting the ONNX model to Keras, but that didn't work. Trying to load the original model as a saved_model and then trying to save this loaded model in another format gave me no success either.
Since you are new to TensorFlow (and, I guess, deep learning), I would suggest you stick with the Object Detection API, because the detection zoo models interface best with it. If you have already downloaded the model, you just need to export it using the exporter_main_v2.py script. This article explains it very well: link.
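As a quick sanity check, such a SavedModel folder can also be loaded and inspected with the plain TensorFlow API; a minimal sketch, assuming TF 2.x and a placeholder path:

```python
import tensorflow as tf

# Load the folder that contains saved_model.pb (placeholder path).
model = tf.saved_model.load("path/to/saved_model")

# Inspect the serving signature to see what inputs and outputs it expects.
infer = model.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
```

This confirms the model loads at all and shows which tensors the exported signature expects, which helps when wiring it into other code.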

TensorFlow 2 documentation for graph-mode

When I check the TensorFlow documentation (Python API docs or guides), it all seems to be written exclusively for eager mode. Almost all the examples don't even mention the distinction.
For some specific operation/function like tf.nn.relu, this does not really make any difference.
However, for more complex things like tf.data (Dataset API, guide), it likely makes a difference. In particular, all the examples would look different in graph mode.
Where can I find recent documentation (API references, guides, tutorials, examples) for graph mode?
(My current fallback is to check latest TF 1 documentation. But at some point, this will become more and more outdated.)
Or is graph mode so deprecated by now that documentation for it no longer seems necessary?
Graph mode in TensorFlow 2 is different from graph mode in TensorFlow 1. Instead of using sessions and placeholders, TensorFlow 2 uses functions annotated with tf.function. The eager mode examples you see can be executed in graph mode by wrapping them within a tf.function.
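For example, a minimal sketch of wrapping eager-style code in a tf.function (the function name is illustrative):

```python
import tensorflow as tf

@tf.function  # traced into a graph on the first call
def scaled_relu(x, scale=2.0):
    return tf.nn.relu(x) * scale

x = tf.constant([-1.0, 0.0, 3.0])
print(scaled_relu(x))  # executes as a graph: [0. 0. 6.]
```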
If you prefer to use the TensorFlow 1 style of graph mode with sessions and placeholders, you can still do so in TensorFlow 2 by using the tf.compat.v1 module. The API docs in that module describe the TensorFlow 1 style of graph mode. You can find archived guides about TensorFlow 1 graph mode at https://github.com/tensorflow/docs/tree/master/site/en/r1/guide
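And a minimal sketch of the TensorFlow 1 style running inside TensorFlow 2, assuming eager execution is disabled first:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # opt back into TF1-style graph mode

x = tf.compat.v1.placeholder(tf.float32, shape=[None])
y = tf.nn.relu(x)

with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [-1.0, 2.0]}))  # [0. 2.]
```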

"Using TensorFlow backend". Is this an error?

I am new to deep learning on Jupyter Notebook. I ran this first cell and got this reply. It says "Using TensorFlow backend". Is this an error?
No, it's not an error. Keras is a model-level library, providing high-level building blocks for developing deep learning models. It does not itself handle low-level operations such as tensor products, convolutions, and so on. Instead, it relies on a specialized, well-optimized tensor manipulation library to do so, which serves as the "backend engine" of Keras. Rather than picking one single tensor library and tying the implementation of Keras to that library, Keras handles the problem in a modular way, and several different backend engines can be plugged seamlessly into Keras.
At this time, Keras has three backend implementations available: the TensorFlow backend, the Theano backend, and the CNTK backend.
In your case it is TensorFlow backend.
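If you want to confirm which backend is active, a quick check (this applies to standalone multi-backend Keras, not tf.keras):

```python
# Print the name of the backend engine Keras is using.
from keras import backend as K
print(K.backend())  # e.g. 'tensorflow'
```

The backend can be switched via the KERAS_BACKEND environment variable or the "backend" field in ~/.keras/keras.json.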

How to serve PyTorch or sklearn models using TensorFlow Serving

I have found tutorials and posts that only explain how to serve TensorFlow models using TensorFlow Serving.
In the model.conf file, there is a parameter model_platform, in which tensorflow or any other platform can be mentioned. But how do we export models from other platforms in the TensorFlow way so that they can be loaded by TensorFlow Serving?
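For reference, a model config for TensorFlow Serving typically looks roughly like this (protobuf text format; the name and path are placeholders):

```
model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model"
    model_platform: "tensorflow"
  }
}
```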
I'm not sure if you can. The TensorFlow Serving platform is designed to be flexible, but if you really want to use it, you'd probably need to implement a C++ library that loads your saved model (in protobuf) and provides a servable to the TensorFlow Serving platform. Here's a similar question.
I haven't seen such an implementation, and the efforts I've seen usually go towards two other directions:
Pure Python code serving a model over HTTP or gRPC, for instance, such as what's being developed in Pipeline.AI (see the sketch after this list).
Dump the model in PMML format and serve it with Java code.
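A minimal sketch of the first direction, serving a pickled scikit-learn model over HTTP with Flask (all names, paths, and the port are illustrative):

```python
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
# Assumes the model was saved earlier with joblib.dump(model, "model.joblib").
model = joblib.load("model.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    instances = request.get_json()["instances"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
    predictions = model.predict(instances).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8501)
```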
Not answering the question, but since no better answers exist yet: As an addition to the alternative directions by adrin, these might be helpful:
Clipper (Apache License 2.0) is able to serve PyTorch and scikit-learn models, among others
Further reading:
https://www.andrey-melentyev.com/model-interoperability.html
https://medium.com/@vikati/the-rise-of-the-model-servers-9395522b6c58
Now you can serve your scikit-learn model with TensorFlow Extended (TFX):
https://www.tensorflow.org/tfx/guide/non_tf

TFLearn, tf.contrib.learn or tf.estimator?

I've been tooling around with Tensorflow and TFLearn for a few months. I've made some progress. However, I was expecting to be able to construct a functioning scikit-learn type Estimator as a TFLearn.DNN. I can fit, and I can predict, but I can't do cross-validation because evaluate is failing for me. TensorFlow is throwing:
ValueError: Cannot use the given session to evaluate tensor: the tensor's graph is different from the session's graph.
when I call evaluate. I thought the whole point of the TFLearn API was to abstract things like session management away from my code.
I have asked questions about problems I've had with TFLearn in several forums, including on the project's GitHub page. Unfortunately, I'm not getting any answers.
A few days ago, I suddenly encountered the tf.contrib.learn namespace. I'm seeing a lot of overlap between those classes and TFLearn. Then, I also found the tf.estimator class.
Finally, I just figured out that tensorflow.contrib sub-packages are third-party contributions. This leads me to wonder whether the original TFLearn is simply being absorbed into the larger TensorFlow package. Which direction is the code flowing?
I don't care what I use, as long as I get all the functionality of a scikit-learn estimator object.
I think it's best to use the official sub-modules of TensorFlow like tf.data and tf.estimator. They should be well maintained and features are added quickly.
For instance, @mrry seems to be in charge of tf.data, and the module is very clean, easy to use, and compatible with tf.estimator.
The tf.estimator module is a bit less clear, and it comes from tf.contrib.learn. Don't take my word for it, but I think tf.estimator will slowly replace tf.contrib.learn, and it should become the official high-level API for TensorFlow (along with tf.keras).
You can find more information in the official blog post, where they explain the relationship between all modules.
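For illustration, a minimal tf.estimator sketch with a tf.data input function (TF 1.x-era canned-estimator APIs; the data and names are toy examples):

```python
import numpy as np
import tensorflow as tf

# Toy regression data: y = 2x + 1.
x_train = np.random.rand(100, 1).astype(np.float32)
y_train = 2 * x_train + 1

def input_fn():
    ds = tf.data.Dataset.from_tensor_slices(({"x": x_train}, y_train))
    return ds.shuffle(100).batch(16).repeat(10)

feature_columns = [tf.feature_column.numeric_column("x", shape=[1])]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)

estimator.train(input_fn=input_fn)            # roughly sklearn's fit()
print(estimator.evaluate(input_fn=input_fn))  # roughly sklearn's score()
```

The train/evaluate/predict trio is the tf.estimator counterpart of the scikit-learn fit/score/predict workflow.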