Data Preprocessing - Input Shape for TimeDistributed CNN (LRCN) & ConvLSTM2D for Video Classification - tensorflow

I'm trying to do binary classification of labeled data for 300+ videos. The goal is to extract features using a ConvNet and feed them into an LSTM for sequencing, with a binary output after evaluating all the frames in the video. I've preprocessed each video to have exactly 200 frames, with each image being 256 x 256, so that it would be easier to feed into a DNN, and split the dataset into two folders by label (e.g. dog and cat).
However, after searching Stack Overflow for hours, I'm still unsure how to reshape the dataset of video frames so that the model accounts for the number of frames. I'm trying to feed the video frames into 3D ConvNets and TimeDistributed (2D ConvNets) + LSTM (e.g. (300, 200, 256, 256, 3)) with no luck. I'm able to perform 2D ConvNet classification pretty easily (the data is a 4D tensor; I need to add a time-step dimension to make it a 5D tensor), but now I'm having issues wrangling with the temporal aspect.
I've been using Keras' ImageDataGenerator and train_datagen.flow_from_directory to read in the images, and I've been running into shape mismatch errors when I attempt to feed them to a TimeDistributed ConvNet. I know that hypothetically, if I have an X_train dataset, I can potentially do X_train = X_train.reshape(...). Any example code would be very much appreciated.

I think you could use ConvLSTM2D in Keras for your purpose. ImageDataGenerator is very good for a CNN with images, but it may not be convenient for a CRNN with videos.
You have already transformed your 300 videos to the same shape (200, 256, 256, 3): each video has 200 frames, and each frame is 256x256 RGB. Next, you need to load them into a numpy array of shape (300, 200, 256, 256, 3). For reading videos into numpy arrays, see this answer.
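As a rough sketch (not from the linked answer), here is one possible way to build that array with OpenCV; the directory layout, file naming, and normalization are assumptions:

import os
import cv2
import numpy as np

def load_videos(video_dir, num_frames=200, size=(256, 256)):
    """Read every video file in video_dir into one array of shape
    (num_videos, num_frames, height, width, 3)."""
    clips = []
    for name in sorted(os.listdir(video_dir)):  # assumed: one file per video
        cap = cv2.VideoCapture(os.path.join(video_dir, name))
        frames = []
        while len(frames) < num_frames:
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.resize(frame, size)
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cap.release()
        if len(frames) == num_frames:  # keep only clips with the full 200 frames
            clips.append(np.stack(frames))
    return np.stack(clips).astype('float32') / 255.0  # scale pixels to [0, 1]

# video_data = load_videos('videos/')  # expected shape: (300, 200, 256, 256, 3)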
Then you can feed the data into a CRNN. Its first ConvLSTM2D layer should have input_shape = (200, 256, 256, 3) (the batch dimension is not included in input_shape).
A sample according to your data (only illustrative, not tested):
from keras.models import Sequential
from keras.layers import Dense
from keras.layers.convolutional_recurrent import ConvLSTM2D

model = Sequential()
model.add(ConvLSTM2D(filters=32, kernel_size=(5, 5),
                     input_shape=(200, 256, 256, 3)))
### model.add(...more layers, e.g. pooling/Flatten before the final Dense)
model.add(Dense(units=num_of_categories,  # number of your video categories
                kernel_initializer='Orthogonal', activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# then train it
model.fit(video_data,  # shape (300, 200, 256, 256, 3)
          [list of categories],
          batch_size=20,
          epochs=50,
          validation_split=0.1)
I hope this could be a little helpful.

Related

Tensorflow input shape incompatible with layer

I'm trying to build a Sequential model with tensorflow.
import tensorflow as tf
import keras
from tensorflow.keras import layers
from keras import optimizers
import numpy as np

model = keras.Sequential(name="model")
model.add(keras.Input(shape=(786,)))
model.add(layers.Dense(2048, activation="relu", name="layer1"))
model.add(layers.Dense(786, activation="relu", name="layer2"))
model.add(layers.Dense(786, activation="relu", name="layer3"))
output = model.add(layers.Dense(786, activation="relu", name="output"))
model.summary()

model.compile(
    optimizer=tf.optimizers.Adam(),  # Optimizer
    loss=keras.losses.CategoricalCrossentropy(),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

history = model.fit(
    x_train,
    y_train,
    batch_size=1,
    epochs=5,
)
The input is a vector of length 768 (so the input shape is (768,), right?), representing a chess board:
def get_dataset():
    container = np.load('/content/drive/MyDrive/test_data_vector.npz')
    b, v = container['arr_0'], container['arr_1']
    v = np.asarray(v / abs(v).max() / 2 + 0.5, dtype=np.float32)  # normalization (0 - 1)
    return b, v

xtrain, ytrain = get_dataset()
print(xtrain.shape)
print(ytrain.shape)
>> (37, 786)  # there are 37 samples
>> (37, 786)
But I always get the error:
ValueError: Input 0 of layer model is incompatible with the layer: expected axis -1 of input shape to have value 786 but received input with shape (1, 1, 768)
I tried with np.expand_dims(), which ended in the same Error.
The error is just a typo: as the user mentioned, the issue is resolved by changing 786 to 768 in the model definition so that it matches the actual input length.
One suggestion based on the model structure.
The number of units is not related to your input shape; you don't have to match that number.
Unit counts like 2048 and 786 in the dense layers are quite large, and this may not help the model learn better.
Try smaller numbers like 32 or 64; you can refer to some of the examples in the TensorFlow documentation.
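For illustration only (not part of the original answer), here is a minimal sketch of the corrected setup. It assumes the feature vectors really are 768 long, that the (37, 768) targets are the normalized continuous values returned by get_dataset() (hence mean squared error in place of the original categorical cross-entropy), and it uses the smaller hidden layers suggested above:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(name="model")
model.add(keras.Input(shape=(768,)))                  # 768 to match the data, not 786
model.add(layers.Dense(64, activation="relu", name="layer1"))
model.add(layers.Dense(64, activation="relu", name="layer2"))
model.add(layers.Dense(768, activation="sigmoid", name="output"))  # assumed: 768-wide targets in [0, 1]

model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.MeanSquaredError(),             # assumed regression-style targets
    metrics=[keras.metrics.MeanAbsoluteError()],
)

# history = model.fit(x_train, y_train, batch_size=8, epochs=5)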

How transfer learning on EfficientNets work for grayscale images?

My question is more about how the algorithm works. I have successfully implemented EfficientNet integration and modeling for grayscale images, and now I want to understand why it works.
Here the most important aspect is grayscale and its single channel. When I set channels=1, the algorithm doesn't work because, if I understood correctly, it was built for 3-channel images. When I set channels=3 it works perfectly.
So my question is: when I set channels=3 and feed the model preprocessed images with channels=1, why does it continue to work?
Code for EfficientNetB5
# Variable assignments
num_classes = 9
img_height = 84
img_width = 112
channels = 3
batch_size = 32

# Make the input layer
new_input = Input(shape=(img_height, img_width, channels),
                  name='image_input')

# Download and use EfficientNetB5
tmp = tf.keras.applications.EfficientNetB5(include_top=False,
                                           weights='imagenet',
                                           input_tensor=new_input,
                                           pooling='max')

model = Sequential()
model.add(tmp)  # adding EfficientNetB5
model.add(Flatten())
...
Code of preprocessing into grayscale
data_generator = ImageDataGenerator(
    validation_split=0.2)

train_generator = data_generator.flow_from_directory(
    train_path,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    color_mode="grayscale",  ###################################
    class_mode="categorical",
    subset="training")
I dug into what happens when you give grayscale images to EfficientNet models with three-channel inputs.
Here are the first layers of EfficientNetB5, whose input_shape is (128, 128, 3):
Layer (type)                      Output Shape           Param #   Connected to
==================================================================================================
input_7 (InputLayer)              [(None, 128, 128, 3)]  0         []
rescaling_7 (Rescaling)           (None, 128, 128, 3)    0         ['input_7[0][0]']
normalization_13 (Normalization)  (None, 128, 128, 3)    7         ['rescaling_7[0][0]']
tf.math.truediv_4 (TFOpLambda)    (None, 128, 128, 3)    0         ['normalization_13[0][0]']
stem_conv_pad (ZeroPadding2D)     (None, 129, 129, 3)    0         ['tf.math.truediv_4[0][0]']
And here is the shape of the output of each of these layers when the model has as input a grayscale image:
input_7 (128, 128, 1)
rescaling_7 (128, 128, 1)
normalization_13 (128, 128, 3)
tf.math.truediv_4 (128, 128, 3)
stem_conv_pad (129, 129, 3)
As you can see, the number of channels of the output tensor switches from 1 to 3 when proceeding to the normalization_13 layer, so let's see what this layer is actually doing.
The Normalization layer is performing this operation on the input tensor:
(input_tensor - self.mean) / sqrt(self.var) // see https://www.tensorflow.org/api_docs/python/tf/keras/layers/Normalization
The number of channels changes after the subtraction. As a matter of fact, self.mean looks like this:
<tf.Tensor: shape=(1, 1, 1, 3), dtype=float32, numpy=array([[[[0.485, 0.456, 0.406]]]], dtype=float32)>
So self.mean has three channels, and when you subtract a tensor with three channels from a tensor with one channel, broadcasting produces an output that looks like this: [firstTensor - secondTensorFirstChannel, firstTensor - secondTensorSecondChannel, firstTensor - secondTensorThirdChannel]
And this is how the magic happens: this is why the model can take grayscale images as input!
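To make that broadcasting concrete, here is a small standalone NumPy sketch (illustrative only; the three values are the per-channel means shown in the tensor above):

import numpy as np

gray = np.random.rand(1, 128, 128, 1).astype('float32')  # a one-channel (grayscale) batch
mean = np.array([0.485, 0.456, 0.406], dtype='float32').reshape(1, 1, 1, 3)

out = gray - mean      # broadcasting pairs the single channel with each of the three means
print(out.shape)       # (1, 128, 128, 3)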
I have checked this with EfficientNetB5 and with EfficientNetV2B2. Even though they differ in how the Normalization layer is declared, the process is the same. I suppose that is also the case for the other EfficientNet models.
I hope it was clear enough!
This is interesting. If training still works with channels = 3 even though the input is grayscale, I would check the batch shape of the train_generator (maybe print a couple of batches to get a feel for it). Here is a code snippet to quickly check the batch shape. (plotImages() is available in the TensorFlow docs.)
imgs,labels = next(train_generator)
print('Batch shape: ',imgs.shape)
plotImages(imgs,labels)

How to load MobileNet weights with an input tensor in Keras

I'm trying to apply transfer learning to MNIST using MobileNet weights in Keras. Keras documentation for MobileNet: https://keras.io/applications/#mobilenet
MobileNet accepts 224x224x3 as input, but MNIST is 28x28x1. I'm creating a Lambda layer which can convert a 28x28x1 image into 224x224x3 and send it as input to MobileNet. The following code causes
TypeError: Input layers to a Model must be InputLayer objects. Received inputs: Tensor("lambda_2/ResizeNearestNeighbor:0", shape=(?, 224, 224, 3), dtype=float32). Input 0 (0-based) originates from layer type Lambda.
from keras.layers import Input, Lambda
from keras.applications.mobilenet import MobileNet
from keras import backend as K

height = 28
width = 28
input_image = Input(shape=(height, width, 1))

def resize_image_to_inception(x):
    x = K.repeat_elements(x, 3, axis=3)
    x = K.resize_images(x, 8, 8, data_format="channels_last")
    return x

input_image_ = Lambda(resize_image_to_inception, output_shape=(224, 224, 3))(input_image)
print(type(input_image_))
base_model = MobileNet(input_tensor=input_image_, weights='imagenet', include_top=False)

Visualization of Keras Convolution Layer Outputs

I have written the following code for this question where there are two convolution layers (Conv1 and Conv2 for short) and I would like to plot all the outputs of each layer (it's self-contained). Everything is fine for Conv1, but I am missing something about Conv2.
I am feeding a 1x1x25x25 (num images, num channels, height, width (my convention, neither TF nor Theano convention)) image to Conv1, which has FOUR 5x5 filters. That means its output shape is 4x1x1x25x25 (num filters, num images, num channels, height, width), resulting in 4 plots.
Now, this output is being fed to Conv2, which has SIX 3x3 filters. Hence, the output of Conv2 should be 6x(4x1x1x25x25), but it is not! It's rather 6x1x1x25x25. That means there are only 6 plots rather than 6x4, but why? The following functions also print the shape of each output, which is
(1, 1, 25, 25, 4)
-------------------
(1, 1, 25, 25, 6)
-------------------
but should be
(1, 1, 25, 25, 4)
-------------------
(1, 4, 25, 25, 6)
-------------------
Right?
import numpy as np
# %matplotlib inline  # for Jupyter ONLY
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Conv2D
from keras import backend as K

model = Sequential()

# Conv1
conv1_filter_size = 5
model.add(Conv2D(nb_filter=4, nb_row=conv1_filter_size, nb_col=conv1_filter_size,
                 activation='relu',
                 border_mode='same',
                 input_shape=(25, 25, 1)))

# Conv2
conv2_filter_size = 3
model.add(Conv2D(nb_filter=6, nb_row=conv2_filter_size, nb_col=conv2_filter_size,
                 activation='relu',
                 border_mode='same'))
# The image to be sent through the model
img = np.array([
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.]],
[[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[1.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.]],
[[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[1.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[0.],[1.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.]],
[[1.],[1.],[0.],[1.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[1.],[1.]],
[[1.],[1.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.]],
[[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]],
[[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.],[1.]]])
def get_layer_outputs(image):
    '''This function extracts the numerical output of each layer.'''
    outputs = [layer.output for layer in model.layers]
    comp_graph = [K.function([model.input] + [K.learning_phase()], [output]) for output in outputs]

    # Feeding the image
    layer_outputs_list = [op([[image]]) for op in comp_graph]

    layer_outputs = []
    for layer_output in layer_outputs_list:
        print(np.array(layer_output).shape, end='\n-------------------\n')
        layer_outputs.append(layer_output[0][0])

    return layer_outputs

def plot_layer_outputs(image, layer_number):
    '''This function handles plotting of the layers'''
    layer_outputs = get_layer_outputs(image)

    x_max = layer_outputs[layer_number].shape[0]
    y_max = layer_outputs[layer_number].shape[1]
    n = layer_outputs[layer_number].shape[2]

    L = []
    for i in range(n):
        L.append(np.zeros((x_max, y_max)))

    for i in range(n):
        for x in range(x_max):
            for y in range(y_max):
                L[i][x][y] = layer_outputs[layer_number][x][y][i]

    for img in L:
        plt.figure()
        plt.imshow(img, interpolation='nearest')
plot_layer_outputs(img, 1)
The output of a convolution layer is bundled as one image with multiple channels. These can be thought of as feature channels, in contrast with color channels. For example, if a convolution layer has F filters, it will output an image with F channels, no matter how many (color or feature) channels the input image had. This is why Conv2 produces 6 feature maps rather than 6x4.
In more detail, each convolution filter convolves over all input channels, and the linear combination of those convolutions is fed to the activation function.
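A quick sketch of that behaviour (illustrative only, written with the current Conv2D argument names rather than the older nb_filter/nb_row/nb_col style used above):

from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
model.add(Conv2D(4, (5, 5), padding='same', activation='relu', input_shape=(25, 25, 1)))
model.add(Conv2D(6, (3, 3), padding='same', activation='relu'))

for layer in model.layers:
    print(layer.name, layer.output_shape)
# Expected output (layer names vary by Keras version):
#   conv2d    (None, 25, 25, 4)   <- Conv1: 4 feature channels, regardless of the single input channel
#   conv2d_1  (None, 25, 25, 6)   <- Conv2: 6 feature channels, not 6x4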

Loading weights in TH format when keras is set to TF format

I have Keras' image_dim_ordering property set to 'tf', so I define my models like this:
model = Sequential()
model.add(ZeroPadding2D((1, 1), input_shape=(224, 224, 3)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
But when I call the load_weights method, it crashes because my model was saved using the "th" format:
Exception: Layer weight shape (3, 3, 3, 64) not compatible with provided weight shape (64, 3, 3, 3)
How can I load these weights and automatically transpose them to fix Tensorflow's format?
I asked Francois Chollet about this (he doesn't have an SO account) and he kindly passed along this reply:
"th" format means that the convolutional kernels will have the shape (depth, input_depth, rows, cols)
"tf" format means that the convolutional kernels will have the shape (rows, cols, input_depth, depth)
Therefore you can convert from the former to the latter via np.transpose(x, (2, 3, 1, 0)) where x is the value of the convolution kernel.
Here's some code to do the conversion:
import numpy as np
from keras import backend as K

K.set_image_dim_ordering('th')
# build model in TH mode, as th_model
th_model = ...
# load weights that were saved in TH mode into th_model
th_model.load_weights(...)

K.set_image_dim_ordering('tf')
# build model in TF mode, as tf_model
tf_model = ...

# transfer weights from th_model to tf_model
for th_layer, tf_layer in zip(th_model.layers, tf_model.layers):
    if th_layer.__class__.__name__ == 'Convolution2D':
        kernel, bias = th_layer.get_weights()
        kernel = np.transpose(kernel, (2, 3, 1, 0))  # (depth, input_depth, rows, cols) -> (rows, cols, input_depth, depth)
        tf_layer.set_weights([kernel, bias])
    else:
        tf_layer.set_weights(th_layer.get_weights())
In case the model contains Dense layers downstream of the Convolution2D layers, the weight matrix of the first Dense layer would need to be shuffled as well.
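The answer above doesn't show that shuffle; here is a rough sketch of the idea, assuming the last convolutional feature map before the Flatten layer has shape (channels, rows, cols) in TH ordering. The names th_dense_layer / tf_dense_layer and the example shape are hypothetical:

import numpy as np

c, h, w = 512, 7, 7                    # hypothetical feature-map shape before Flatten (TH order)
W, b = th_dense_layer.get_weights()    # W has shape (c * h * w, units)
W = W.reshape(c, h, w, -1)             # rows were flattened in (channel, row, col) order
W = W.transpose(1, 2, 0, 3)            # reorder to (row, col, channel), i.e. TF flatten order
W = W.reshape(h * w * c, -1)
tf_dense_layer.set_weights([W, b])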
You can use this script, which automatically translates model weights trained with the Theano or TensorFlow backend directly into the other 3 possible combinations of backend / dim ordering.