Issue removing a layer from a pretrained model - TensorFlow

I have the following code. I need to remove some layers of the model and perform prediction, but currently I am getting an error.
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
from keras.models import Model
from tensorflow.python.keras.optimizers import SGD
base_model = ResNet50(include_top=False, weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.layers[-2].output)
#model = Model(inputs=base_model.input, outputs=predictions)
#Compiling the model
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
#decode the results into a list of tuples (class, description, probability)
#(one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
Error:
File "C:/Users/learn/remove_layer.py", line 9, in <module>
model = Model(inputs=base_model.input, outputs=base_model.layers[-2].output)
AttributeError: 'Tensor' object has no attribute '_keras_shape'
With my beginner's knowledge of Keras, what I understand is that this is a shape issue. Since it's a ResNet model, I would like to cut the model from one merge layer to another merge layer, because the merge layers don't have dimension issues. How can I accomplish this?

You actually need to visualize what you have done, so let's look at a summary of the last layers of the ResNet50 model:
base_model.summary()
conv5_block3_2_relu (Activation)       (None, None, None, 512)   0         conv5_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_3_conv (Conv2D)           (None, None, None, 2048)  1050624   conv5_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_3_bn (BatchNormalization) (None, None, None, 2048)  8192      conv5_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_add (Add)                 (None, None, None, 2048)  0         conv5_block2_out[0][0]
                                                                           conv5_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_out (Activation)          (None, None, None, 2048)  0         conv5_block3_add[0][0]
==================================================================================================
Total params: 23,587,712
Trainable params: 23,534,592
Non-trainable params: 53,120
__________________________________________________________________________________________________
And now your model after removing the last layer:
model.summary()
conv5_block3_2_relu (Activation)       (None, None, None, 512)   0         conv5_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_3_conv (Conv2D)           (None, None, None, 2048)  1050624   conv5_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_3_bn (BatchNormalization) (None, None, None, 2048)  8192      conv5_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_add (Add)                 (None, None, None, 2048)  0         conv5_block2_out[0][0]
                                                                           conv5_block3_3_bn[0][0]
==================================================================================================
Total params: 23,587,712
Trainable params: 23,534,592
Non-trainable params: 53,120
ResNet50 in Keras with include_top=False outputs the feature maps after the last Conv2D block; it doesn't include the classification part of the model. What you actually did was just remove the last activation layer after the last addition block.
So you need to check more carefully which block you want to cut at, and then add pooling/flatten and fully connected layers for the classification part (see the sketch after the import note below).
Also, as mentioned by Dr. Snoopy, don't mix imports between keras and tensorflow.keras:
# this part
from tensorflow.keras.models import Model
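For completeness, here is a minimal sketch of that idea (not from the original post; the cut point 'conv5_block2_out' and the 10-class head are illustrative assumptions). Note that decode_predictions only makes sense with the original 1000-class ImageNet head, so it is dropped here:
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

base_model = ResNet50(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

# Cut at a block boundary (the activation after an Add merge) rather than an arbitrary layer
cut = base_model.get_layer('conv5_block2_out').output

x = GlobalAveragePooling2D()(cut)             # collapse the spatial dimensions
outputs = Dense(10, activation='softmax')(x)  # hypothetical 10-class head
model = Model(inputs=base_model.input, outputs=outputs)
model.summary()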

Related

Why does TensorFlow add a dimension to my input & output?

Here is my code:
from tensorflow.keras import layers
import tensorflow as tf
from tensorflow import keras
TFDataType = tf.float16
XTrain = tf.cast(tf.ones((10,10)), dtype=TFDataType)
YTrain = tf.cast(tf.ones((10,10)), dtype=TFDataType)
model = tf.keras.models.Sequential()
model.add(layers.Dense(1, dtype=TFDataType, input_shape=(10, 10)))
model.add(layers.Dense(1, dtype=TFDataType, input_shape=(10, 10)))
print(model.summary())
I am feeding it a 2-dimensional matrix. But when I see the model summary, I see:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 10, 1) 11
_________________________________________________________________
dense_1 (Dense) (None, 10, 2) 4
=================================================================
Total params: 15
Trainable params: 15
Non-trainable params: 0
_________________________________________________________________
Why is the model asking for a 3-dimensional (None, 10, 1) array?
How do I pass an array that meets the dimensionality of (None, 10, 1)?
I cannot call numpy.ones(None, 10, 1), and I cannot reshape the array with -1 in the first dimension.
In your first layer, input_shape=(10, 10) declares the shape of a single sample; Keras then prepends the extra None dimension to account for the batch size, which is why the summary shows (None, 10, 1). Note that you only need input_shape for the FIRST layer in your model, so remove input_shape=(10, 10) from your second layer.
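If the intent is that each of the 10 rows of XTrain is one sample with 10 features (an assumption, since the question doesn't say), a minimal sketch would declare the per-sample shape as (10,) so the output stays 2-dimensional:
import tensorflow as tf
from tensorflow.keras import layers

XTrain = tf.ones((10, 10))  # 10 samples x 10 features (assumed)
YTrain = tf.ones((10, 1))   # one target per sample (assumed)

model = tf.keras.models.Sequential()
model.add(layers.Dense(4, input_shape=(10,)))  # per-sample shape only; the batch dim is implicit
model.add(layers.Dense(1))                     # no input_shape on later layers
model.summary()                                # output shapes: (None, 4) and (None, 1)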

How to show all layers in a TensorFlow model with a nested model?

How do I show all layers in a TensorFlow model that contains a nested base model?
base_model = keras.applications.MobileNetV3Small(
    input_shape=model_input_shape,
    include_top=False,
    weights="imagenet",
)

# =================== build model
model = keras.Sequential(
    [
        keras.Input(shape=image_shape),
        preprocessing.Resizing(*model_input_shape[:2]),
        preprocessing.Rescaling(1.0 / 255),
        base_model,
        layers.GlobalAveragePooling2D(),
        # missing dropout
        layers.Dense(1, activation="sigmoid"),
    ]
)
model.summary()
The output is this:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
resizing (Resizing) (None, 224, 224, 3) 0
_________________________________________________________________
rescaling_1 (Rescaling) (None, 224, 224, 3) 0
_________________________________________________________________
MobilenetV3small (Functional) (None, 7, 7, 1024)       1529968   <---------- why can't I see all layers here?
_________________________________________________________________
global_average_pooling2d (Gl (None, 1024) 0
_________________________________________________________________
dense (Dense) (None, 1) 1025
How do I show all layers?
for layer in model.layers:
    print(layer)
The above has the same problem. What am I doing wrong?
In such a setup, the base_model acts as a single layer, i.e., it becomes nested. To inspect it, you can try either:
model.layers[2].summary()

for i, layer in enumerate(model.layers):
    if i == 2:
        for nested_layer in layer.layers:
            print(nested_layer)
or, more intuitively, you can use a small recursive helper:
def summary_plus(layer, i=0):
    if hasattr(layer, 'layers'):
        if i != 0:
            layer.summary()
        for l in layer.layers:
            i += 1
            summary_plus(l, i=i)

summary_plus(model)
or you can use the plot_model function:
keras.utils.plot_model(
    model,
    expand_nested=True  # <- set this to True
)
Update 1: Raised an issue about this: Keras #15239. Hopefully, it will be solved soon.
Update 2: model.summary now has an expand_nested parameter. #15251
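So, on a TensorFlow/Keras version recent enough to include that change (an assumption about your installed version), the nested layers can be printed directly:
model.summary(expand_nested=True)  # also lists the layers inside the nested base_model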

Keras LSTM: How to give a true value for every timestep (many-to-many)

This might be a noob question; I have tried my best to find the answer.
Basically, I want the LSTM to calculate the error at every timestep, i.e., I want to give a true value for every timestep. I have tried giving x with shape (2, 10, 1) and y with shape (2, 10, 1), which doesn't work; the predict function outputs a 3D array instead of a 2D array. What am I doing wrong here?
You should use an LSTM with return_sequences=True followed by a Dense layer, and then flatten the output of the Dense layer.
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
ins = Input(shape=(10, 3)) # considering 3 input features
lstm = LSTM(256, return_sequences=True)(ins)
dense = Dense(1)(lstm)
flat = Flatten()(dense)
model = Model(inputs=ins, outputs=flat)
model.summary()
This will build the following model:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 10, 3)] 0
_________________________________________________________________
lstm_1 (LSTM) (None, 10, 256) 266240
_________________________________________________________________
dense_1 (Dense) (None, 10, 1) 257
_________________________________________________________________
flatten (Flatten) (None, 10) 0
=================================================================
Total params: 266,497
Trainable params: 266,497
Non-trainable params: 0
_________________________________________________________________
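As a usage sketch (dummy data; the shapes are assumptions based on the question), the model can then be trained and evaluated against 2D targets of shape (samples, timesteps):
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense, Flatten
from tensorflow.keras.models import Model

# Rebuild the model from the answer above
ins = Input(shape=(10, 3))
lstm = LSTM(256, return_sequences=True)(ins)
dense = Dense(1)(lstm)
flat = Flatten()(dense)
model = Model(inputs=ins, outputs=flat)

# Dummy data: 2 samples, 10 timesteps, 3 features; one true value per timestep
x_train = np.random.rand(2, 10, 3)
y_train = np.random.rand(2, 10)

model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, epochs=1, verbose=0)
print(model.predict(x_train).shape)  # (2, 10) -- a 2D output, one prediction per timestep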

How to implement a TensorFlow 2 layer, tf.nn.conv1d_transpose, inside a Keras model architecture?

I need to use a transposed Conv1D layer, which Keras doesn't have yet, but TensorFlow 2 does. So far I can only code in Keras. Is there any way to implement a tf.nn.conv1d_transpose layer directly in a Keras model along with other Keras layers?
Please provide some sample code.
Please refer to the sample code below to add tf.nn.conv1d_transpose inside a Keras Sequential model:
%tensorflow_version 1.x
# Importing dependency
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dropout, BatchNormalization, Lambda
# Create a sequential model
model = Sequential()
x = input = [None, 256, 16]  # unused in this snippet

def conv1d_transpose(x):
    return tf.nn.conv1d_transpose(x, filters=[3.0, 8.0, 16.0], output_shape=[100, 1024, 8], strides=4, padding="SAME")
model.add(Conv1D(32,250,padding='same',input_shape=(1500,9)))
model.add(MaxPooling1D(2))
model.add(Dropout(0.5))
model.add(BatchNormalization())
model.add(Lambda(conv1d_transpose, name='conv1d_transpose'))
# Display Model
model.summary()
Output:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, 1500, 32) 72032
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 750, 32) 0
_________________________________________________________________
dropout (Dropout) (None, 750, 32) 0
_________________________________________________________________
batch_normalization (BatchNo (None, 750, 32) 128
_________________________________________________________________
conv1d_transpose (Lambda) (100, 1024, 8) 0
=================================================================
Total params: 72,160
Trainable params: 72,096
Non-trainable params: 64
_________________________________________________________________
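As a side note, newer TensorFlow releases (2.3 and later, to the best of my knowledge) ship a native tf.keras.layers.Conv1DTranspose layer, so the Lambda wrapper is no longer necessary there; a minimal sketch under that assumption:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Conv1DTranspose

model = Sequential([
    Conv1D(32, 250, padding='same', input_shape=(1500, 9)),
    MaxPooling1D(2),
    # Native transposed 1D convolution: 8 filters, kernel width 3, upsampling by a factor of 4
    Conv1DTranspose(8, 3, strides=4, padding='same'),
])
model.summary()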

TensorFlow Keras Sequential: .add is different than inline definition?

Keras is giving different results when I define my model via the declarative (inline) method instead of using .add(). The two models appear to be equivalent, but using the .add() syntax works while the declarative syntax gives errors -- it's a different error each time, but usually something like:
A target array with shape (10, 1) was passed for an output of shape (None, 16) while using as loss `mean_squared_error`. This loss expects targets to have the same shape as the output.
There seems to be something going on with auto-conversion of input shapes, but I can't tell what. Does anyone know what I'm doing wrong? Why aren't these two models exactly equivalent?
import tensorflow as tf
import tensorflow.keras
import numpy as np
x = np.arange(10).reshape((-1,1,1))
y = np.arange(10)
#This model works fine
model = tf.keras.Sequential()
model.add(tf.keras.layers.LSTM(32, input_shape=(1, 1), return_sequences = True))
model.add(tf.keras.layers.LSTM(16))
model.add(tf.keras.layers.Dense(1))
model.add(tf.keras.layers.Activation('linear'))
#This model fails. But shouldn't this be equivalent to the above?
model2 = tf.keras.Sequential(
{
tf.keras.layers.LSTM(32, input_shape=(1, 1), return_sequences = True),
tf.keras.layers.LSTM(16),
tf.keras.layers.Dense(1),
tf.keras.layers.Activation('linear')
})
#This works
model.compile(loss='mean_squared_error', optimizer='adagrad')
model.fit(x, y, epochs=1, batch_size=1, verbose=2)
#But this doesn't! Why not? The error is different each time, but usually
#something about the input size being wrong
model2.compile(loss='mean_squared_error', optimizer='adagrad')
model2.fit(x, y, epochs=1, batch_size=1, verbose=2)
Why aren't those two models equivalent? Why does one handle the input size correctly but the other doesn't? The second model fails with a different error each time (once in a while it even works), so I thought maybe there's some interaction with the first model. But I've tried commenting out the first model, and that doesn't help. So why doesn't the second one work?
UPDATE: Here is the model.summary() for the first and second model. They do seem different, but I don't understand why.
For model.summary():
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm (LSTM) (None, 1, 32) 4352
_________________________________________________________________
lstm_1 (LSTM) (None, 16) 3136
_________________________________________________________________
dense (Dense) (None, 1) 17
_________________________________________________________________
activation (Activation) (None, 1) 0
=================================================================
Total params: 7,505
Trainable params: 7,505
Non-trainable params: 0
For model2.summary():
model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_2 (LSTM) (None, 1, 32) 4352
_________________________________________________________________
activation_1 (Activation) (None, 1, 32) 0
_________________________________________________________________
lstm_3 (LSTM) (None, 16) 3136
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 7,505
Trainable params: 7,505
Non-trainable params: 0
When you create the model with the inline declaration, you put the layers inside curly braces {}, which makes them a set, and a set is inherently unordered. Change the curly braces to square brackets [] to put the layers in an ordered list. This makes sure the layers end up in the correct order in your model.
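For reference, the corrected inline definition would be (same layers, just wrapped in a list instead of a set):
model2 = tf.keras.Sequential(
    [  # square brackets: an ordered list, so the layer order is preserved
        tf.keras.layers.LSTM(32, input_shape=(1, 1), return_sequences=True),
        tf.keras.layers.LSTM(16),
        tf.keras.layers.Dense(1),
        tf.keras.layers.Activation('linear'),
    ]
)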