Keras feature extractor clarification - which layers does an input go through - tensorflow

When extracting a model layer's output as in the TensorFlow Sequential model documentation example below, does the input x in the code go through my_first_layer before going into my_intermediate_layer? Or does it go directly into my_intermediate_layer without passing through my_first_layer?
If it goes directly into my_intermediate_layer, its input would not have the transformation applied by the my_first_layer Conv2D. That seems wrong to me, though, because the input should go through all the preceding layers.
Please help me understand which layers x goes through.
Feature extraction with a Sequential model
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

initial_model = keras.Sequential(
    [
        keras.Input(shape=(250, 250, 3)),
        layers.Conv2D(32, 5, strides=2, activation="relu", name="my_first_layer"),
        layers.Conv2D(32, 3, activation="relu", name="my_intermediate_layer"),
        layers.Conv2D(32, 3, activation="relu"),
    ]
)
# The model goes through training.
...
# Feature extractor
feature_extractor = keras.Model(
    inputs=initial_model.inputs,
    outputs=initial_model.get_layer(name="my_intermediate_layer").output,
)
# Call feature extractor on test input.
x = tf.ones((1, 250, 250, 3))
features = feature_extractor(x)

Keras offers a higher-level API that runs on top of the TensorFlow machine learning platform. It provides two kinds of classes for defining a neural network model, namely the 'Sequential' class and the 'Model' class.
Sequential Class:
It groups a linear stack of layers into a model, such that each layer has exactly one input and one output tensor. As shown below (schema-1), you can add the required layers to the model and, as the name suggests, they are executed sequentially (see the Keras Sequential class documentation):
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
model.add(tf.keras.layers.Dense(4))
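As a quick check of the stacked shapes, calling summary() on this model shows the two Dense outputs (a small usage sketch following the two add() calls above):
model.summary()
# Expected output shapes: (None, 8) for the first Dense layer and (None, 4) for the second.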
The schema for defining a Sequential model directly from a list of layers (see the Keras Sequential class definition) is shown below (schema-2):
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential(
    [
        layers.Dense(2, activation="relu", name="layer1"),
        layers.Dense(3, activation="relu", name="layer2"),
        layers.Dense(4, name="layer3"),
    ]
)
# Call model on a test input
x = tf.ones((3, 3))
y = model(x)
Model Class
It allows the user to build a custom model from arbitrary layers, as shown below:
import tensorflow as tf
inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
It also lets one create a new functional-API model with additional layers (see the Keras Model class documentation), as follows:
inputs = keras.Input(shape=(None, None, 3))
processed = keras.layers.RandomCrop(width=32, height=32)(inputs)
conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)
pooling = keras.layers.GlobalAveragePooling2D()(conv)
feature = keras.layers.Dense(10)(pooling)
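The snippet above only wires the layers together; to obtain a callable model you would still wrap the input and output tensors in keras.Model, roughly like this (a small completion sketch, reusing the tf/keras imports from the earlier snippets):
full_model = keras.Model(inputs, feature)
# RandomCrop reduces any input to 32x32, so a larger dummy image works here.
images = tf.ones((4, 64, 64, 3))
print(full_model(images).shape)  # expected: (4, 10)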
Note: the input tensors support only dicts, lists, or tuples, but not lists of lists or dicts of dicts.
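Coming back to the original question: yes, x passes through every layer between the model input and my_intermediate_layer, so my_first_layer is applied first. A quick way to convince yourself is to check the extractor's output shape, which reflects the stride-2 convolution of my_first_layer (a small sketch based on the question's code above):
features = feature_extractor(tf.ones((1, 250, 250, 3)))
print(features.shape)
# Expected: (1, 121, 121, 32). The 250x250 input is reduced to 123x123 by my_first_layer (5x5 kernel, stride 2)
# and then to 121x121 by my_intermediate_layer (3x3 kernel), so the input clearly went through both layers.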
I hope that this helps.

Related

How can I have multiple duplicate input layers from the first input layer in tensorflow library?

I want multiple duplicate input layers from the first input layer, so that I don't have to pass the input to the fit function twice.
[image of the desired architecture]
You can reuse the instance of the input layer when creating your two models. I can see in the image that you want to concatenate the output of the two separate layers, so I also included that in my code snippet.
Firstly, I create the input layer. Then I create two sub-models that use the same instance of the input. I stack the output of both sub-models. You can also use tf.concat instead of tf.stack.
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model

def get_model(input_layer):
    model = tf.keras.Sequential(
        [
            input_layer,
            layers.Dense(32, activation="relu"),
            layers.Dense(32, activation="relu"),
            layers.Dense(1),
        ]
    )
    return model

num_features = 3
input = tf.keras.Input(shape=(num_features,))
model1 = get_model(input)
model2 = get_model(input)
combined_output = tf.stack([model1.output, model2.output], axis=0)
model = Model(inputs=input, outputs=combined_output)
print(tf.shape(model(tf.ones([32, 3]))))
The batch size is 32, and the number of features is 3. The code snippet prints
tf.Tensor([ 2 32 1], shape=(3,), dtype=int32)
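If you prefer tf.concat over tf.stack, the two (batch, 1) outputs are joined along an existing axis instead of a new one; a small variation on the snippet above:
combined_output = tf.concat([model1.output, model2.output], axis=-1)
model_cat = Model(inputs=input, outputs=combined_output)
print(tf.shape(model_cat(tf.ones([32, 3]))))  # expected shape: [32, 2] (batch size, two concatenated outputs)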

trying to add a layer for transfer learning, getting ValueError: A merge layer should be called on a list of inputs [duplicate]

I am trying to do transfer learning; for that purpose I want to remove the last two layers of the neural network and add two other layers. Here is example code that produces the same error.
from keras.models import Sequential
from keras.layers import Input,Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dropout, Activation
from keras.layers.pooling import GlobalAveragePooling2D
from keras.models import Model
in_img = Input(shape=(3, 32, 32))
x = Convolution2D(12, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(in_img)
x = Activation('relu', name='relu_conv1')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
x = Convolution2D(3, 1, 1, border_mode='valid', name='conv2')(x)
x = Activation('relu', name='relu_conv2')(x)
x = GlobalAveragePooling2D()(x)
o = Activation('softmax', name='loss')(x)
model = Model(input=in_img, output=[o])
model.compile(loss="categorical_crossentropy", optimizer="adam")
#model.load_weights('model_weights.h5', by_name=True)
model.summary()
model.layers.pop()
model.layers.pop()
model.summary()
model.add(MaxPooling2D())
model.add(Activation('sigmoid', name='loss'))
I removed the layers using pop(), but when I try to add new ones it outputs this error:
AttributeError: 'Model' object has no attribute 'add'
I know the most probable reason for the error is improper use of model.add(). What other syntax should I use?
EDIT:
I tried to remove/add layers in Keras, but it does not allow layers to be added after loading external weights.
from keras.models import Sequential
from keras.layers import Input,Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dropout, Activation
from keras.layers.pooling import GlobalAveragePooling2D
from keras.models import Model
in_img = Input(shape=(3, 32, 32))
def gen_model():
    in_img = Input(shape=(3, 32, 32))
    x = Convolution2D(12, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(in_img)
    x = Activation('relu', name='relu_conv1')(x)
    x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
    x = Convolution2D(3, 1, 1, border_mode='valid', name='conv2')(x)
    x = Activation('relu', name='relu_conv2')(x)
    x = GlobalAveragePooling2D()(x)
    o = Activation('softmax', name='loss')(x)
    model = Model(input=in_img, output=[o])
    return model
#parent model
model=gen_model()
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()
#saving model weights
model.save('model_weights.h5')
#loading weights to second model
model2=gen_model()
model2.compile(loss="categorical_crossentropy", optimizer="adam")
model2.load_weights('model_weights.h5', by_name=True)
model2.layers.pop()
model2.layers.pop()
model2.summary()
#editing layers in the second model and saving as third model
x = MaxPooling2D()(model2.layers[-1].output)
o = Activation('sigmoid', name='loss')(x)
model3 = Model(input=in_img, output=[o])
It shows this error:
RuntimeError: Graph disconnected: cannot obtain value for tensor input_4 at layer "input_4". The following previous layers were accessed without issue: []
You can take the output of the last layer you want to keep and create a new model on top of it; the lower layers remain the same.
model.summary()
model.layers.pop()
model.layers.pop()
model.summary()
x = MaxPooling2D()(model.layers[-1].output)
o = Activation('sigmoid', name='loss')(x)
model2 = Model(inputs=in_img, outputs=[o])
model2.summary()
Check: How to use models from keras.applications for transfer learning?
Update on Edit:
The new error occurs because you are trying to create the new model on the global in_img, which is not actually used in the previous model creation; inside gen_model() you define a local in_img. So the global in_img is not connected to the upper layers in the symbolic graph, and the problem has nothing to do with loading weights.
To resolve this, you should instead reference the model's own input (model2.input):
model3 = Model(input=model2.input, output=[o])
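Note that the input=/output= keyword arguments are from the Keras 1 API; on Keras 2.x the equivalent call uses inputs=/outputs=:
model3 = Model(inputs=model2.input, outputs=[o])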
Another way to do it
from keras.models import Model
layer_name = 'relu_conv2'
model2= Model(inputs=model1.input, outputs=model1.get_layer(layer_name).output)
As of Keras 2.3.1 and TensorFlow 2.0, model.layers.pop() does not work as intended (see the related GitHub issue). Two options were suggested instead.
One option is to recreate the model and copy the layers. For instance, if you want to remove the last layer and add another one, you can do:
model = Sequential()
for layer in source_model.layers[:-1]:  # go through until last layer
    model.add(layer)
model.add(Dense(3, activation='softmax'))
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy')
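Note that the copied layers still share their weights with source_model. For transfer learning you would typically also freeze them and re-compile, e.g. (a small sketch following the loop above):
for layer in model.layers[:-1]:  # everything except the newly added head
    layer.trainable = False
model.compile(optimizer='adam', loss='categorical_crossentropy')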
Another option is to use the functional model:
predictions = Dense(3, activation='softmax')(source_model.layers[-2].output)
model = Model(inputs=source_model.input, outputs=predictions)  # reuse the source model's input
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.layers[-1].output is the last layer's output, which is the final output; so in your code you didn't actually remove any layers, you added another head/path.
As an alternative to Wesam Na's answer: if you don't know the layer names, you can simply cut off the last layer via:
from keras.models import Model
model2= Model(inputs=model1.input, outputs=model1.layers[-2].output)

How to unfold Xception layers in TensorFlow

I am following the official Keras transfer learning and fine-tuning tutorial. It consists of loading the Xception model with include_top=False, and adding a new classifier part on top.
I am then saving the model with model.save() and loading with load_model().
So this is what I see when I do model.summary()
My problem is that I would like to iterate through the layers, but the Xception layers are folded into a single entry (in the picture: xception (Functional)). Is there a way to unfold it, to see all the layers (including those that make up Xception)?
For model.summary(), you can unfold that as follows:
from tensorflow import keras
base_model = keras.applications.Xception(
    weights='imagenet',  # Load weights pre-trained on ImageNet.
    input_shape=(150, 150, 3),
    include_top=False)   # Do not include the ImageNet classifier at the top.
inputs = keras.Input(shape=(150, 150, 3))
x = base_model(inputs, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.summary() # full model
model.layers[1].summary() # only xception model
Note: you can also visualize the nested layers using the plot_model utility.
keras.utils.plot_model(model, expand_nested=True)
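If the goal is to iterate over the individual Xception layers rather than just print them, you can reach into the nested sub-model the same way (a small sketch, assuming the model built above):
for layer in model.layers[1].layers:  # model.layers[1] is the nested Xception model
    print(layer.name)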

Tensorboard graph orphan layers

While building a model that includes transfer learning (from VGG-16), I encountered this strange behavior: the TensorBoard graph shows layers that are not part of the new model but were part of the old one, above the point of separation, and they are just dangling there.
When investigating further, model.summary() does not show these layers, model.get_layer("block4_conv1") can't find them either, and tf.keras.utils.plot_model doesn't show them either. But if they are not part of the graph, how would TensorBoard know about them?
To build the new model, I used the recommended method.
Model first stage:
vgg_input_model = tf.keras.applications.VGG16(weights='imagenet', include_top=False, input_tensor=x)
final_vgg_layer = vgg_input_model.get_layer("block3_pool")
input_model = tf.keras.Model(inputs=vgg_input_model.inputs, outputs=final_vgg_layer.output)
input_model.trainable = True
x = tf.keras.layers.Conv2D(512, 1, padding="same", activation='relu', name="stage0_final_conv1")(input_model.output)
x = tf.keras.layers.Conv2D(512, 1, padding="same", activation='relu', name="stage0_final_conv2")(x)
x = tf.keras.layers.Conv2D(256, 1, padding="same", activation='relu', name="stage0_final_conv3")(x)
x = tf.keras.layers.Conv2D(128, 1, padding="same", activation='relu', name="stage0_final_conv4")(x)
TF:2.1 (nightly-2.x)
PY:3.5
Tensorboard: 2.1.0a20191124
After trying multiple methods, I came to the conclusion that the recommended way is wrong. Doing model_b = tf.keras.Model(inputs=model_a.inputs, outputs=model_a.get_layer("some_layer").output) will lead to dangling layers from model_a.
Using tf.keras.backend.clear_session() in between may clean the Keras graph, but then TensorBoard's graph is left empty.
The best solution I found is a config+weights copy of the required layers, one layer at a time, rebuilding the connections in a new model. That way there is no relationship whatsoever in the Keras graph between the two models.
(This is simple for a sequential model like VGG, but may be more difficult for something like ResNet.)
Sample code:
tf.keras.backend.clear_session()
input_shape = (368, 368, 3)  # only the input shape is shared between the models

# transfer learning model definition
input_layer_vgg = tf.keras.layers.Input(shape=input_shape)
vgg_input_model = tf.keras.applications.VGG16(weights='imagenet', include_top=False, input_tensor=input_layer_vgg)
name_last_layer = "block3_pool"  # the last layer to copy

tf.keras.backend.clear_session()  # clean the graph from the transfer learning model
input_layer = tf.keras.layers.Input(shape=input_shape)  # define the input layer for the new model
x = input_layer
for layer in vgg_input_model.layers[1:]:  # copy over layers, without the other input layer
    config = layer.get_config()    # get config
    weights = layer.get_weights()  # get weights
    # print(config)
    copy_layer = type(layer).from_config(config)  # create the new layer from config
    x = copy_layer(x)  # connect to previous layers;
                       # required for the proper sizing of the layer,
                       # set_weights will not work without it
    copy_layer.set_weights(weights)
    if layer.name == name_last_layer:
        break
del vgg_input_model
input_model = tf.keras.Model(inputs=input_layer, outputs=x)  # create the new model;
                                                             # if needed, x can be used further down the line
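A quick way to verify that the rebuilt model contains only the copied VGG layers up to block3_pool, with nothing dangling from the original graph (a small check, assuming the code above ran):
input_model.summary()
print([layer.name for layer in input_model.layers])  # should end at the copied block3_pool layer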

Keras model with several inputs and several outputs

I want to build a Keras model with two inputs and two outputs which both use the same architecture/weights. Both outputs are then used to compute a single loss.
Here is a picture of my desired architecture.
This is my pseudo code:
model = LeNet(inputs=[input1, input2, input3], outputs=[output1, output2, output3])
model.compile(optimizer='adam',
              loss=my_custom_loss_function([output1, output2, output3], target),
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
Can this approach work?
Do I need to use a different Keras API?
The architecture is fine. Here is a toy example with training data of how it can be defined using keras' functional API:
from keras.models import Model
from keras.layers import Dense, Input
# two separate inputs
in_1 = Input((10,10))
in_2 = Input((10,10))
# both inputs share these layers
dense_1 = Dense(10)
dense_2 = Dense(10)
# both inputs are passed through the layers
out_1 = dense_1(dense_2(in_1))
out_2 = dense_1(dense_2(in_2))
# create and compile the model
model = Model(inputs=[in_1, in_2], outputs=[out_1, out_2])
model.compile(optimizer='adam', loss='mse')
model.summary()
# train the model on some dummy data
import numpy as np
i_1 = np.random.rand(10, 10, 10)
i_2 = np.random.rand(10, 10, 10)
model.fit(x=[i_1, i_2], y=[i_1, i_2])
Edit: given that you want to compute the losses together, you can use Concatenate():
from keras.layers import Concatenate
output = Concatenate()([out_1, out_2])
Any loss function you pass into model.compile will be applied to output in its combined state. After you get the output from a prediction you can just split it back up into its original state:
f = model.predict(...)
out_1, out_2 = f[..., :n], f[..., n:]  # split along the concatenation axis (the last axis by default)
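If you go the Concatenate() route, a single loss over both heads can be written against the combined tensor and split inside the loss function. A minimal sketch, assuming mean squared error per head and a hypothetical per-head width n along the last axis:
import keras.backend as K

n = 10  # width of each head along the last axis (hypothetical)

def combined_loss(y_true, y_pred):
    # split the concatenated tensors back into the two heads
    pred_1, pred_2 = y_pred[..., :n], y_pred[..., n:]
    true_1, true_2 = y_true[..., :n], y_true[..., n:]
    # sum the two per-head mean squared errors into a single scalar
    return K.mean(K.square(pred_1 - true_1)) + K.mean(K.square(pred_2 - true_2))

model.compile(optimizer='adam', loss=combined_loss)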