Issue retrieving value: `decode_predictions` expects a batch of predictions - tensorflow

I have a pretrained model from which I am trying to remove a layer and then run a prediction on the new model. However, I am getting an error.
import numpy as np
from tensorflow.keras import applications
from tensorflow.keras.applications.vgg16 import preprocess_input, decode_predictions
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image

# Load VGG16 without its fully connected classifier head
model = applications.VGG16(include_top=False, input_shape=(224, 224, 3), weights='imagenet')
layers = [l for l in model.layers]

# Rebuild the graph from layer 9's output, skipping layer 10 (the removed layer)
x = layers[9].output
for layer in layers[11:19]:
    x = layer(x)
result_model = Model(inputs=layers[0].input, outputs=x)

# Load and preprocess the test image
img = image.load_img('/content/elephant.jpg', target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

preds = result_model.predict(x)
print('Predicted:', decode_predictions(preds, top=3)[0])
Error:
ValueError: `decode_predictions` expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 14, 14, 512)

Your neural network doesn't have an output layer. decode_predictions can't decode the output of a convolutional layer, which is what you get when you use include_top=False. Do this instead:
model = applications.VGG16(include_top=True, input_shape=(224, 224, 3),
                           weights='imagenet')
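With include_top=True the model ends in the 1000-way softmax classifier, so predict returns the (samples, 1000) array that decode_predictions expects. A minimal end-to-end sketch, reusing the elephant.jpg path from the question:

import numpy as np
from tensorflow.keras import applications
from tensorflow.keras.applications.vgg16 import preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = applications.VGG16(include_top=True, weights='imagenet')

img = image.load_img('/content/elephant.jpg', target_size=(224, 224))
x = np.expand_dims(image.img_to_array(img), axis=0)
x = preprocess_input(x)

preds = model.predict(x)  # shape (1, 1000)
print('Predicted:', decode_predictions(preds, top=3)[0])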

Related

ValueError: Input 0 of layer conv1_pad is incompatible with the layer: expected ndim=4, found ndim=2. Full shape received: (None, 1)

I'm getting this error when I try to replicate Facial Recognition using MobileNet on the VGGFace2 dataset.
ValueError: Input 0 of layer conv1_pad is incompatible with the layer: expected ndim=4, found ndim=2. Full shape received: (None, 1)
The code below is a snippet of the data generator, where X is a dict of NumPy arrays of images:
# Generate data
X = {
    "anchor_input": np.array(anchor_images),
    "positive_input": np.array(positive_images),
    "negative_input": np.array(negative_images),
}
Y = np.zeros((self.batch_size, 2))
return (X, Y)
The output of X["anchor_input"].shape is (20, 224, 224, 3), where 20 is the batch size.
inpShape = (224, 224, 3)
model = tf.keras.applications.MobileNetV2(
    input_shape=inpShape,
    alpha=1.0,
    include_top=False,
    weights="imagenet",
    input_tensor=None,
    pooling=None,
    classes=1000,
    classifier_activation="softmax"
)

outputLayer = model.get_layer('block_14_add').output
x = tf.keras.layers.Conv2D(80, (3, 1), padding="same", name="extra_conv0")(outputLayer)
x = tf.keras.layers.Conv2D(64, (3, 3), padding="same", name="extra_conv1")(x)
x = tf.keras.layers.AveragePooling2D()(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(64)(x)
baseNetwork = tf.keras.Model(inputs=model.input, outputs=x, name="basemodel")

input1 = tf.keras.layers.Input(shape=inpShape, name='anchor_input')
input2 = tf.keras.layers.Input(shape=inpShape, name='positive_input')
input3 = tf.keras.layers.Input(shape=inpShape, name='negative_input')

anchor_out = baseNetwork(input1)
positive_out = baseNetwork(input2)
negative_out = baseNetwork(input3)

output = DistanceLayer()(anchor_out, positive_out, negative_out)
inputs = (input1, input2, input3)
contrastive_model = tf.keras.Model(inputs=inputs, outputs=output)

contrastive_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=contrastive_loss_mutated,
    metrics=[contrastive_loss_mutated, contrastive_accuracy])

history1 = contrastive_model.fit(
    trainGenerator,
    epochs=25,
    validation_data=testGenerator,
    callbacks=my_callbacks)
Please help me out with this.

Keras dropping dimension in LSTM, not sure why

Following up on "Keras LSTM Input 0 of layer sequential_10 is incompatible with the layer", I now have the following code:
def myLSTM(i_shape, o_shape):
    input = keras.layers.Input(i_shape)
    print(input)
    model = Sequential()  # note: this Sequential model is never used
    x = keras.layers.LSTM(128, return_sequences=True,
                          input_shape=[1, x_train.shape[0], x_train.shape[1]])(input)
    x = keras.layers.Dropout(0.2)(x)
    x = keras.layers.LSTM(128, return_sequences=True)(x)
    x = keras.layers.Dropout(0.2)(x)
    x = keras.layers.LSTM(64, return_sequences=True)(x)
    x = keras.layers.Dropout(0.2)(x)
    output = keras.layers.Dense(units=1, activation='softmax')(x)
    print('input: ', input)
    print('output: ', output)
    return keras.Model(input, output)

print(x_train.shape)
my_lstm = myLSTM(x_train.shape, y_train.shape)
my_lstm.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
my_lstm.summary()
But I get the following error:
ValueError: Input 0 is incompatible with layer model_22: expected shape=(None, 25000, 100), found shape=(None, 100)
I'm not sure why the 25000 was dropped from the dimensions. I printed the input variable on the first line and it has dimension (None, 25000, 100).

How can I reduce the dimension of data loaded through the flow_from_directory function of ImageDataGenerator?

Since I load my data (images) from structured folders, I use the flow_from_directory function of Keras's ImageDataGenerator class. I have no issues feeding this data to a CNN model, but when it comes to an LSTM model I get the following error: ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (64, 28, 28, 1). How can I reduce the dimension of the input data while reading it via ImageDataGenerator objects, so that I can use an LSTM model instead of a CNN?
P.S. The input images have shape (28, 28) and are grayscale.
train_valid_datagen = ImageDataGenerator(validation_split=0.2)
train_gen = train_valid_datagen.flow_from_directory(
    directory=TRAIN_IMAGES_PATH,
    target_size=(28, 28),
    color_mode='grayscale',
    batch_size=64,
    class_mode='categorical',
    shuffle=True,
    subset='training'
)
Update: The LSTM model code:
inp = Input(shape=(28, 28, 1))
inp = Lambda(lambda x: squeeze(x, axis=-1))(inp) # from 4D to 3D
x = LSTM(num_units, dropout=dropout, recurrent_dropout=recurrent_dropout, activation=activation_fn, return_sequences=True)(inp)
x = BatchNormalization()(x)
x = Dense(128, activation=activation_fn)(x)
output = Dense(nb_classes, activation='softmax', kernel_regularizer=l2(0.001))(x)
model = Model(inputs=inp, outputs=output)
You start feeding your network with 4D data (your images) for compatibility with ImageDataGenerator, and then you have to reshape it into 3D format for the LSTM.
These are the possibilities:
With only one channel, you can simply squeeze the last dimension:
inp = Input(shape=(28, 28, 1))
x = Lambda(lambda x: tf.squeeze(x, axis=-1))(inp) # from 4D to 3D
x = LSTM(32)(x)
If you have multiple channels (as is the case with RGB images, or if you would like to apply an RNN after a Conv2D), a solution can be this:
inp = Input(shape=(28, 28, 1))
x = Conv2D(32, 3, padding='same', activation='relu')(inp)
x = Reshape((28,28*32))(x) # from 4D to 3D
x = LSTM(32)(x)
The fit can be computed as always with model.fit_generator (newer versions of Keras also accept generators directly in model.fit).
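For example, a minimal sketch assuming one of the models above has been built and compiled, and that train_gen is the generator from the question (the epoch count is arbitrary):

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.fit_generator(train_gen, epochs=10)
# On TF >= 2.1, model.fit(train_gen, epochs=10) works the same way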
UPDATE: model review
inp = Input(shape=(28, 28, 1))
x = Lambda(lambda t: tf.squeeze(t, axis=-1))(inp)  # from 4D to 3D
x = LSTM(32, dropout=dropout, recurrent_dropout=recurrent_dropout, activation=activation_fn, return_sequences=False)(x)
x = BatchNormalization()(x)
x = Dense(128, activation=activation_fn)(x)
output = Dense(nb_classes, activation='softmax', kernel_regularizer=l2(0.001))(x)
model = Model(inputs=inp, outputs=output)
model.summary()
Pay attention when you define the inp variable (don't overwrite it), and set return_sequences=False in the LSTM in order to get a 2D output.
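A quick standalone shape check (a sketch) shows the effect of return_sequences on the LSTM output rank:

import tensorflow as tf

x = tf.keras.Input(shape=(28, 28))  # (timesteps, features)
print(tf.keras.layers.LSTM(32, return_sequences=True)(x).shape)   # (None, 28, 32) -> 3D
print(tf.keras.layers.LSTM(32, return_sequences=False)(x).shape)  # (None, 32)     -> 2D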

When trying to feed variable-length sequences to Keras LSTMs: ValueError: Error when checking input?

My model:
model = Sequential()
model.add(LSTM(25, batch_input_shape=(None, None, 19), return_sequences=True))
model.add(Dense(4, activation='tanh'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
Some examples of input data shapes:
input_list[0].shape = (7,19)
input_list[1].shape = (8,19)
input_list[2].shape = (17,19)
Some examples of output data shapes:
output_list[0].shape = (7,4)
output_list[1].shape = (8,4)
output_list[2].shape = (17,4)
input_list.shape = (233,)
output_list.shape = (233,)
The error occurs during:
d_loss = model.fit(input_list, output_list, validation_split=0.33, nb_epoch=100, verbose=1, shuffle=True, batch_size=1)
error: ValueError: Error when checking input: expected lstm_22_input to have 3 dimensions, but got array with shape (233, 1)
Just increase the dimensions with np.expand_dims(x, axis=0); each sequence will then become three-dimensional (a batch of size 1).
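Because the sequences have different lengths they cannot be stacked into a single array, so one common workaround is to train on one expanded sequence at a time. A minimal sketch, assuming the input_list and output_list from the question:

import numpy as np

for x, y in zip(input_list, output_list):
    x = np.expand_dims(x, axis=0)  # (timesteps, 19) -> (1, timesteps, 19)
    y = np.expand_dims(y, axis=0)  # (timesteps, 4)  -> (1, timesteps, 4)
    model.train_on_batch(x, y)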

Neural network output issue

I built a neural network with TensorFlow; here is the code:
class DQNetwork:
    def __init__(self, state_size, action_size, learning_rate, name='DQNetwork'):
        self.state_size = state_size
        self.action_size = action_size
        self.learning_rate = learning_rate

        with tf.variable_scope(name):
            # We create the placeholders
            self.inputs_ = tf.placeholder(tf.float32, shape=[state_size[1], state_size[0]], name="inputs")
            self.actions_ = tf.placeholder(tf.float32, [None, self.action_size], name="actions_")

            # Remember that target_Q is the R(s,a) + ymax Qhat(s', a')
            self.target_Q = tf.placeholder(tf.float32, [None], name="target")

            self.fc = tf.layers.dense(inputs=self.inputs_,
                                      units=50,
                                      kernel_initializer=tf.contrib.layers.xavier_initializer(),
                                      activation=tf.nn.elu)

            self.output = tf.layers.dense(inputs=self.fc,
                                          units=self.action_size,
                                          kernel_initializer=tf.contrib.layers.xavier_initializer(),
                                          activation=None)

            # Q is our predicted Q value.
            self.Q = tf.reduce_sum(tf.multiply(self.output, self.actions_))

            # The loss is the difference between our predicted Q_values and the Q_target
            # Sum(Qtarget - Q)^2
            self.loss = tf.reduce_mean(tf.square(self.target_Q - self.Q))
            self.optimizer = tf.train.AdamOptimizer(self.learning_rate).minimize(self.loss)
But I have an issue with the output: it should normally be the same size as action_size (whose value is 3), but I get an output of shape (5, 3) instead of just (3,), and I really don't understand why.
This network has 2 dense layers, one with 50 perceptrons and the other with 3 perceptrons (= action_size).
state_size has the format [9, 5].
If someone knows why my output is two-dimensional, I will be very thankful.
Your self.inputs_ placeholder has shape (5, 9). The first dense layer performs matmul(self.inputs_, fc1.w), where fc1.w has shape (9, 50), resulting in shape (5, 50). You then apply another dense layer with weights of shape (50, 3), which results in an output of shape (5, 3).
The same schematically:
matmul(shape(5, 9), shape(9, 50)) ---> shape(5, 50) # output of 1st dense layer
matmul(shape(5, 50), shape(50, 3)) ---> shape(5, 3) # output of 2nd dense layer
Usually, the first dimension of the input placeholder represents the batch size and the second dimension is the size of the input feature vector. So for each sample in the batch (the batch size is 5 in your case) you get an output of size 3.
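The same shape arithmetic in a tiny NumPy sketch (dummy all-ones weights, just to check shapes; biases and activations don't change them):

import numpy as np

batch = np.ones((5, 9))         # 5 samples, 9 features each
w1 = np.ones((9, 50))           # weights of the first dense layer
w2 = np.ones((50, 3))           # weights of the second dense layer
print((batch @ w1 @ w2).shape)  # (5, 3): one row of 3 Q-values per sample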
To get probabilities, use this:
import tensorflow as tf
import numpy as np

inputs_ = tf.placeholder(tf.float32, shape=(None, 9))
actions_ = tf.placeholder(tf.float32, shape=(None, 3))

fc = tf.layers.dense(inputs=inputs_, units=2)
output = tf.layers.dense(inputs=fc, units=3)
reduced = tf.reduce_mean(output, axis=0)
probs = tf.nn.softmax(reduced)  # <-- probabilities

inputs_vals = np.ones((5, 9))
actions_vals = np.ones((1, 3))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(probs.eval({inputs_: inputs_vals,
                      actions_: actions_vals}))
    # [0.01858923 0.01566187 0.9657489 ]