Accessing intermediate layers from a loaded saved_model in Tensorflow 2.0 - tensorflow

When using SavedModels in TensorFlow 2.0, is it possible to access activations from intermediate layers? For example, with one of the models here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md, I can run
model = tf.saved_model.load('faster_rcnn_inception_v2_coco_2018_01_28/saved_model').signatures['serving_default']
outputs = model(input_tensor)
to get output predictions and bounding boxes. I would like to be able to access layers other than the outputs, but there doesn't seem to be any TensorFlow 2.0 documentation on how to do this. The downloaded models also include checkpoint files, but documentation on how to load those with TensorFlow 2.0 is just as sparse...

If you are generating saved models using TensorFlow 2.0, it is possible to extract individual layers. But the model you are referring to was saved in TensorFlow 1.x, and with TF 1.x saved models you cannot extract individual layers.
Here is an example of how you can extract layers from a saved model in TensorFlow 2.0:
import tensorflow as tf
import numpy as np

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(100,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# Compile (and fit) the model, then save it in the TensorFlow SavedModel format
model.compile(optimizer='adam', loss='binary_crossentropy')
model.save('save_model', save_format='tf')
Then load the model as shown.
model = tf.keras.models.load_model('save_model')
layer1 = model.get_layer(index=1)
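If you want the activations of an intermediate layer (not just the layer object and its weights), one common approach is to build a new Keras model that reuses the loaded model's graph. A minimal sketch along those lines; the layer index and the dummy input are just for illustration:
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('save_model')

# Map the original input to the output of an intermediate layer
intermediate_model = tf.keras.Model(inputs=model.input,
                                    outputs=model.get_layer(index=1).output)

# Activations of layer 1 for a batch of random data
activations = intermediate_model(np.random.rand(4, 100).astype('float32'))
print(activations.shape)  # (4, 10)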

Related

Convert Inception model with include_top=False from Keras to Pytorch

I'm trying to convert an old project written in Keras to PyTorch.
The Keras create_model() contains the following code. The input is a (129, 500, 1) grayscale image and the output has shape (None, 2, 14, 2038); that output tensor is fed into a BiLSTM later.
from tensorflow.python.keras.applications.inception_v3 import InceptionV3
inception_model = InceptionV3(include_top=False, weights=None, input_tensor=input_tensor)
for layer in inception_model.layers:
    layer.trainable = False

x = inception_model.output
How can I convert this code to PyTorch? The main problem is include_top=False, which does not exist in PyTorch's torchvision inception_v3 model. This flag lets the Keras model work with a non-standard 1-channel input and return the 4-dimensional output of the last convolutional block.
Actually, the InceptionV3 model is available in PyTorch.
You can try the code below.
import torchvision
torchvision.models.inception_v3()
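torchvision's inception_v3 has no include_top argument, but you can approximate it by swapping the first convolution for a 1-channel one and grabbing the output of the last convolutional block with a forward hook. A rough sketch, not a drop-in equivalent of the Keras graph; the layer names are those of torchvision's Inception3 module:
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.inception_v3(aux_logits=False)  # no pre-trained weights

# Accept single-channel input instead of RGB
model.Conv2d_1a_3x3.conv = nn.Conv2d(1, 32, kernel_size=3, stride=2, bias=False)

# Capture the last convolutional block's output, which is roughly what
# include_top=False returns in Keras
features = {}
model.Mixed_7c.register_forward_hook(
    lambda module, inputs, output: features.update(out=output))

model.eval()
with torch.no_grad():
    model(torch.randn(1, 1, 129, 500))
print(features['out'].shape)  # (1, 2048, 2, 14), channels-first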

keras.get_session().graph is not working in tensorflow 2.x

I could get the graph of a Keras model with the code below in TensorFlow 1.x:
from tensorflow.python.keras import backend as K
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10)
])
...
graph=K.get_session().graph
graph_def=graph.as_graph_def()
print(graph_def)
However, after switching to TensorFlow 2.x this no longer works; in TensorFlow 2.1 I get the result shown in the screenshot below.
How can I get the graph of a Keras model in TensorFlow 2.x?
The TensorFlow migration guide would be the place to start:
Migrate your TensorFlow 1 code to TensorFlow 2
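Beyond the migration guide: there is no global session to query in TF 2.x, but you can still get a GraphDef by tracing the Keras model with tf.function and asking the concrete function for its graph. A minimal sketch, assuming the same (28, 28) input as in the question:
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10)
])

# Trace the model into a concrete function to obtain a tf.Graph
concrete_func = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([None, 28, 28], tf.float32))

graph_def = concrete_func.graph.as_graph_def()
print(len(graph_def.node), 'nodes in the traced graph')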

Upgrade Tensorflow model or Retrain for SavedModel

I followed "Tensorflow for poets" in 2017 and retrained my own collection of images and created "retrained_graph.pb" and "retrained_labels.txt"
Today I need to run this model on Tensorflow Serving.
There are two options to accomplish this:
Upgrade the old model to save it as under the "saved_model" format and use it on Tensorflow Serving - I found some SO postings to acccomplish it (this or
that).
Use the latest tensorflow Hub with Keras (https://www.tensorflow.org/tutorials/images/hub_with_keras)
I am looking for the best option among these, or a new one.
In my opinion, using either TensorFlow Hub or the pre-trained models inside tf.keras.applications is preferable because, in either case, few code changes are needed to save the model in a form compatible with TensorFlow Serving.
The code for reusing the pre-trained MobileNet V2 model from tf.keras.applications is shown below:
# Import MobileNet V2 with pre-trained weights and exclude the fully connected layers
import tensorflow as tf
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras import Model

IMG_SIZE = 224
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)

# Create the base model from the pre-trained MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')

# Add a Global Average Pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)

# Add an output layer
my_mobilenetv2_output = Dense(5, activation='softmax')(x)

# Combine into a single model
my_mobilenetv2_model = Model(inputs=base_model.input, outputs=my_mobilenetv2_output)
You can save the model in the SavedModel format using the code below:
import os

version = 1
MODEL_DIR = 'Image_Classification_Model'
export_path = os.path.join(MODEL_DIR, str(version))
tf.keras.models.save_model(model=my_mobilenetv2_model, filepath=export_path)
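To check that the export is ready for TensorFlow Serving, you can reload the SavedModel and inspect the serving signature it will expose; a quick sketch, assuming the export_path from above:
import tensorflow as tf

# Reload the exported SavedModel and look at its default serving signature
loaded = tf.saved_model.load(export_path)
serving_fn = loaded.signatures['serving_default']
print(serving_fn.structured_input_signature)
print(serving_fn.structured_outputs)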

Using a Tensorflow feature_column in a Keras model

How can a Tensorflow feature_column be used in conjunction with a Keras model?
E.g. for a Tensorflow estimator, we can use an embedding column from Tensorflow Hub:
embedded_text_feature_column = hub.text_embedding_column(
    key="sentence",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

estimator = tf.estimator.DNNClassifier(
    hidden_units=[100],
    feature_columns=[embedded_text_feature_column],
    n_classes=2,
    optimizer=tf.train.AdamOptimizer(learning_rate=0.001))
However, I would like to use the TF Hub text_embedding_column as input to a Keras model. E.g.
net = tf.keras.layers.Input(...)  # use embedding column here
net = tf.keras.layers.Flatten()(net)
net = Dense(100, activation='relu')(net)
net = Dense(2)(net)
Is this possible?
The answer seems to be that you don't use feature columns. Keras comes with its own set of preprocessing functions for images and text, so you can use those.
So basically, tf.feature_column is aimed at the high-level Estimator API, while the tf.keras.preprocessing utilities are meant to be used with tf.keras models.
Here is a link to the text-preprocessing section of the Keras documentation:
https://keras.io/preprocessing/text/
Here is another Stack Overflow post with an example of this approach:
Add Tensorflow pre-processing to existing Keras model (for use in Tensorflow Serving)
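To make the preprocessing route concrete, here is a minimal sketch using tf.keras.preprocessing for text; the vocabulary size, sequence length, and toy sentences are arbitrary choices for illustration:
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentences = ["the movie was great", "the movie was terrible"]

# Turn raw text into padded integer sequences
tokenizer = Tokenizer(num_words=10000)
tokenizer.fit_on_texts(sentences)
sequences = pad_sequences(tokenizer.texts_to_sequences(sentences), maxlen=20)

# Feed the integer sequences into a small Keras model with its own Embedding layer
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=128, input_length=20),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(2)
])
print(model(sequences).shape)  # (2, 2)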
The Keras functional API is a viable way to do this, but if you do want to use feature_columns, this tutorial shows you how:
https://www.tensorflow.org/beta/tutorials/keras/feature_columns
Basically, it's the DenseFeatures layer that does the job:
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
model = tf.keras.Sequential([
    feature_layer,
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])
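For completeness, a small self-contained sketch of how the feature_columns and the input dict fit together; the column names and values are made up for illustration:
import tensorflow as tf

# Define feature columns for a numeric and a categorical feature
feature_columns = [
    tf.feature_column.numeric_column('age'),
    tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            'color', ['red', 'green', 'blue']))
]

feature_layer = tf.keras.layers.DenseFeatures(feature_columns)

# DenseFeatures consumes a dict mapping feature name -> tensor
batch = {'age': tf.constant([[25.0], [32.0]]),
         'color': tf.constant([['red'], ['blue']])}
print(feature_layer(batch))  # dense tensor of shape (2, 4)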

Keras model to tensorflow

Is it possible to convert a Keras model (an h5 file with the network architecture and weights) into a TensorFlow model? Or is there an equivalent of Keras's model.save in TensorFlow?
Yes, it is possible, because Keras uses TensorFlow as its backend and therefore also builds a computational graph. You just need to get this graph from your Keras model.
"Keras only uses one graph and one session. You can access the session
via: K.get_session(). The graph associated with it would then be:
K.get_session().graph."
(from fchollet: https://github.com/keras-team/keras/issues/3223#issuecomment-232745857)
Or you can save this graph in checkpoint format (https://www.tensorflow.org/api_docs/python/tf/train/Saver):
import tensorflow as tf
from keras import backend as K

# Save the TensorFlow session that Keras is using as a checkpoint
saver = tf.train.Saver()
sess = K.get_session()
retval = saver.save(sess, ckpt_model_name)  # ckpt_model_name is the checkpoint path prefix
By the way, since TensorFlow 1.3 you can use Keras right from it:
from tensorflow.python.keras import models, layers
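For example, a model built with the Keras API that ships inside TensorFlow can be saved with the familiar model.save; a minimal sketch, assuming h5py is installed for the HDF5 format:
from tensorflow.python.keras import models, layers

# Build the model with the Keras bundled in TensorFlow
model = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(100,)),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# model.save works the same as in standalone Keras:
# one HDF5 file containing both the architecture and the weights
model.save('model.h5')
restored = models.load_model('model.h5')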