What are the differences between tensorflow_serving classification, predict and regression SignatureDefs - tensorflow-serving

I am trying to serve a TensorFlow Object Detection API model with TensorFlow Serving, and I am confused by the 3 different SignatureDefs. What are the differences, and when should I choose one over another?

TensorFlow Serving handles model weights and signatures through its own mechanism: to save a model for serving, you export it as a SavedModel. SavedModel provides a language-neutral format to save machine-learned models that is recoverable and hermetic. It enables higher-level systems and tools to produce, consume and transform TensorFlow models.
SavedModel supports SignatureDefs.
Graphs that are used for inference tasks typically have a set of inputs and outputs. This is called a Signature.
SavedModel uses SignatureDefs to allow generic support for signatures that may need to be saved with the graphs.
For those who previously used TF-Exporter/SessionBundle, Signatures in TF-Exporter will be replaced by SignatureDefs in SavedModel.
A SignatureDef requires specification of:
inputs as a map of string to TensorInfo.
outputs as a map of string to TensorInfo.
method_name (which corresponds to a supported method name in the loading tool/system).
Classification SignatureDefs support structured calls to TensorFlow Serving's Classification API. These prescribe that there must be an inputs Tensor, and that there are two optional output Tensors: classes and scores, at least one of which must be present.
Predict SignatureDefs support calls to TensorFlow Serving's Predict API. These signatures allow you to flexibly support arbitrarily many input and output Tensors. For example, a signature my_prediction_signature might have a single logical input Tensor images that is mapped to the actual Tensor x:0 in your graph (see the sketch below).
Regression SignatureDefs support structured calls to TensorFlow Serving's Regression API. These prescribe that there must be exactly one inputs Tensor, and one outputs Tensor.
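Here is a minimal sketch of how such signatures are built and exported with the TF1-style tf.compat.v1 SavedModel APIs; the graph, tensor names and export path are placeholders, and only the Predict signature is spelled out (Classification and Regression signatures use the same build_signature_def call with their fixed input/output keys and method-name constants):
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

graph = tf1.Graph()
with graph.as_default():
    # Placeholder graph: x:0 is the actual input tensor, scores:0 the output.
    x = tf1.placeholder(tf.float32, shape=[None, 784], name="x")
    w = tf1.get_variable("w", shape=[784, 10])
    scores = tf1.matmul(x, w, name="scores")

with tf1.Session(graph=graph) as sess:
    sess.run(tf1.global_variables_initializer())

    # Predict: arbitrary named inputs and outputs.
    predict_sig = tf1.saved_model.signature_def_utils.build_signature_def(
        inputs={"images": tf1.saved_model.utils.build_tensor_info(x)},
        outputs={"scores": tf1.saved_model.utils.build_tensor_info(scores)},
        method_name=tf1.saved_model.signature_constants.PREDICT_METHOD_NAME,
    )
    # Classification/Regression signatures are built the same way, but with
    # the fixed keys ("inputs", "classes"/"scores", "outputs") and
    # CLASSIFY_METHOD_NAME / REGRESS_METHOD_NAME instead.

    builder = tf1.saved_model.builder.SavedModelBuilder("/tmp/my_model/1")
    builder.add_meta_graph_and_variables(
        sess,
        [tf1.saved_model.tag_constants.SERVING],
        signature_def_map={"my_prediction_signature": predict_sig},
    )
    builder.save()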
Please refer to:
https://www.tensorflow.org/serving/signature_defs

https://github.com/tensorflow/serving/issues/599
The Classify API is higher-level and more specific than the Predict API. Classify accepts tensorflow.serving.Input (which wraps a list of tf.Examples) as input and produces classes and scores as output. It is used for classification problems. Predict, on the other hand, accepts tensors as input and outputs tensors. It can be used for regression, classification and other types of inference problems.
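To make that concrete, here is a rough sketch of a gRPC client call to the Predict API; the host, model name, signature name and input key are placeholders, and a Classify request would instead be a classification_pb2.ClassificationRequest carrying serialized tf.Example protos:
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Predict takes raw tensors keyed by the signature's input names.
request = predict_pb2.PredictRequest()
request.model_spec.name = "my_model"                       # placeholder model name
request.model_spec.signature_name = "my_prediction_signature"
batch = np.random.rand(1, 784).astype(np.float32)          # placeholder input
request.inputs["images"].CopyFrom(tf.make_tensor_proto(batch))

response = stub.Predict(request, timeout=10.0)
print(response.outputs["scores"])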

Related

Inspecting functional Keras model structure

I would like to inspect the layers and connections in a model, after having created a model using the Functional API in Keras. Essentially, I want to start at the output and recursively enumerate the inputs of each layer instance. Is there a way to do this in the Keras or TensorFlow API?
The purpose is to create a more detailed visualisation than the ones provided by Keras (tf.keras.utils.plot_model). The model is generated procedurally based on a parameter file.
I have successfully used attributes of the KerasTensor objects to do this inspection:
from tensorflow.keras.layers import Dense

output = Dense(1)(...)                   # a KerasTensor
print(output)
print(output.node)                       # the Node that produced this tensor
print(output.node.keras_inputs)          # the tensors feeding into that Node
print(output.node.keras_inputs[0].node)
This wasn't available in TF 2.6, only 2.7, and I realise it's not documented anywhere.
Is there a proper way to do this?
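One way to get the same connectivity information from a documented API, sketched below: for a Functional model, model.get_config() records each layer's inbound_nodes, so the graph can be walked without touching KerasTensor internals (the model here is just a stand-in, and the exact structure of inbound_nodes varies between Keras versions):
import tensorflow as tf

# A small stand-in Functional model.
inp = tf.keras.Input(shape=(8,), name="inp")
hidden = tf.keras.layers.Dense(4, name="hidden")(inp)
out = tf.keras.layers.Dense(1, name="out")(hidden)
model = tf.keras.Model(inp, out)

# For a Functional model, get_config() lists every layer together with the
# names of the layers that feed into it ("inbound_nodes").
config = model.get_config()
for layer_cfg in config["layers"]:
    print(layer_cfg["class_name"], layer_cfg["name"], "<-", layer_cfg["inbound_nodes"])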

Tensorflow: Does a tflite file contain data about the model architecture? (graph?)

Does a tflite file contain data about the model architecture? A graph that shows what operations there were between the weights, features and biases, what kinds of layers there are (linear, convolutional, etc.), the sizes of the layers, and what activation functions sit in between the layers?
For example, a graph like the ones you get with Graphviz that contains all this information, or does a tflite file only contain the final weights of the model after training?
I am working on a project with image style transfer. I wanted to do some research on an existing project, and see what parameters work better. The project I am looking at is here:
https://tfhub.dev/sayakpaul/lite-model/arbitrary-image-stylization-inceptionv3-dynamic-shapes/int8/transfer/1
I can download a tflite file, but I don't know much about these files. If they have the architecture I need, how do I read it?
TFLite flatbuffer files contain the model structure as well. For example, there is a subgraph concept in TFLite, which corresponds to the function concept in a programming language, and the operator nodes form a graph in which each node takes inputs and produces outputs. The model architecture can be visualized with the Netron application.
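If you prefer to inspect the structure programmatically rather than in Netron, a rough sketch with the TFLite Python interpreter (the file path is a placeholder):
import tensorflow as tf

# Load the .tflite flatbuffer; the path is a placeholder.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Inputs and outputs of the main subgraph.
print(interpreter.get_input_details())
print(interpreter.get_output_details())

# Every tensor in the graph: name, shape, dtype, quantization parameters.
for t in interpreter.get_tensor_details():
    print(t["index"], t["name"], t["shape"], t["dtype"])

# Recent TF releases also have tf.lite.experimental.Analyzer.analyze(
# model_path=...), which prints the operators of each subgraph.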

How to feed hidden state vectors from one transformer directly into a layer of a different transformer

Transformer models take token IDs as input, which are converted into embeddings. I am wondering how to input embeddings directly.
I am asking for both the PyTorch and Keras versions of the models.
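Assuming the Hugging Face transformers library is meant, both the PyTorch and the TF/Keras model classes accept an inputs_embeds argument in place of input_ids, so vectors produced elsewhere can be fed in directly; a rough sketch for the PyTorch side (the Keras classes such as TFBertModel take the same inputs_embeds keyword):
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# Embeddings produced elsewhere (e.g. by another model); the shape must be
# (batch, sequence_length, hidden_size). Random values here as a placeholder.
embeds = torch.randn(1, 16, model.config.hidden_size)

# Pass embeddings directly instead of token IDs.
outputs = model(inputs_embeds=embeds)
print(outputs.last_hidden_state.shape)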

Extracting representations from different layers of a network in TensorFlow 2

I have the weights of a custom pre-trained model. I need to extract the representations for different inputs that I pass through the model, across its different layers. What would be the best way of doing this?
I am using TensorFlow 2.1.0 and currently load in the weights of the model using either hub.KerasLayer() or tf.saved_model.load()
Any help would be greatly appreciated! I am very new to TensorFlow and have no choice but to use it since the weights were acquired from another source.
tf.saved_model.load() and its wrapper hub.KerasLayer load both the computation graph and the pre-trained weights. I suppose you're dealing with a TF2-style SavedModel that has its computation packaged in TensorFlow functions. If so, there's no easy way to extract intermediate results from within a function. If possible, you could ask the model creator to provide more outputs, or, if you have the model's Python source, build the model from source and initialize its weights with those from the SavedModel (some plumbing required).
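If you do end up with a Keras model rebuilt from source (as suggested above), intermediate representations are easy to get with a multi-output wrapper model; a sketch using a stand-in model, with the layer names as placeholders for your own:
import tensorflow as tf

# Stand-in for a model rebuilt from source and loaded with pre-trained
# weights; replace with your own model and real layer names.
model = tf.keras.applications.MobileNetV2(weights=None)
layer_names = ["block_3_expand_relu", "block_6_expand_relu"]

extractor = tf.keras.Model(
    inputs=model.inputs,
    outputs=[model.get_layer(name).output for name in layer_names],
)

images = tf.random.uniform((1, 224, 224, 3))
block3_act, block6_act = extractor(images)
print(block3_act.shape, block6_act.shape)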

Customize Input to Tensorflow Hub module

I know how to load pre-trained image models from TensorFlow Hub, like so:
import tensorflow_hub as hub

# load the module (TF1-style hub.Module API)
image_module = hub.Module('https://tfhub.dev/google/imagenet/mobilenet_v2_035_128/feature_vector/2')
# compute feature vectors for a batch of RGB images
features = image_module(batch_images)
I also know how to customize the output of this model (fine-tune it on a new dataset). The existing Modules expect the input batch_images to be an RGB image tensor.
My question: instead of the input being an RGB image of certain dimensions, I would like to use a tensor (of dimensions 20x20x128, coming from a different model) as input to the Hub model. This means I need to bypass the initial layers of the tf-hub model definition (I don't need them). Is this possible with the tf-hub module APIs? The documentation is not clear on this aspect.
P.S.: I can do this easily by defining my own layers, but I am trying to see whether I can use the TF-Hub APIs.
The existing https://tfhub.dev/google/imagenet/... modules do not support this.
Generally speaking, the hub.Module format allows multiple signatures (that is, combinations of input/output tensors; think feeds and fetches as in tf.Session.run()). So module publishers can arrange for that if there is a common usage pattern they want to support.
But for free-form experimentation at this level of sophistication, you are probably better off directly using and tweaking the code that defines the models, such as TF Slim (for TF1.x) or Keras Applications (also for TF2). Both provide Imagenet-pretrained checkpoints for downloading and restoring on the side.
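As a side note on the multiple-signature mechanism mentioned above, here is a hedged TF1.x sketch of how a hub.Module's published signatures and outputs can be listed and called; whether a module exposes anything beyond its default signature is entirely up to the publisher:
import tensorflow as tf
import tensorflow_hub as hub

module = hub.Module('https://tfhub.dev/google/imagenet/mobilenet_v2_035_128/feature_vector/2')

# List whatever signatures the publisher chose to expose, and their outputs.
for name in module.get_signature_names():
    print(name, module.get_output_info_dict(signature=name))

# Calling the module with as_dict=True returns all outputs of the signature,
# not just the default one (which outputs exist depends on the publisher).
images = tf.placeholder(tf.float32, shape=[None, 128, 128, 3])
outputs = module(dict(images=images), as_dict=True)
print(outputs.keys())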