TensorFlow Lite PReLU Fusion and TransposeConv Bias - tensorflow

When we convert a tf.keras model containing PReLU layers with TF 1.15, the PReLU layers become ReLU and seem to get fused with the preceding operators. As a result, the 28 MB Keras h5 file shrinks to 1.3 MB. The parameter count drops significantly, presumably because I did not use the shared_axes option with PReLU. So, does this conversion work properly without any accuracy loss, or are the PReLU weights discarded altogether? Similarly, does the fusion take the bias of the transpose convolution layers into account (bias is not listed as an input property in Netron)? Do these fusions preserve the trained weight parameters internally, and do they affect the inference accuracy of the tflite model?
PReLU fusion:
input = Input(shape=(512, 512, 3), name='ip')
x = Conv2D(filters=8, kernel_size=2, strides=2, padding='valid')(input)
x = PReLU()(x)  # shared_axes not used
Netron shows prelu/ReLU in the output property.
Transpose conv:
cout1 = Conv2DTranspose(filters=8, kernel_size=2, strides=2, padding='same')(pout1)  # use_bias is True by default
Netron does not show a bias in the output property.
So, does the fusion work properly by combining weights or are they being discarded?

If all the values in a weight tensor are zeros, the converter automatically discards it during fusion/conversion. That is why PReLU became ReLU after fusion and transpose conv + bias became transpose conv. The problem arises when you convert a model to tflite format before training, since those weights still have their default values (zeros).
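A minimal sketch of that workflow, assuming the TF 1.15 conversion path from an .h5 file described in the question (the tiny model and the dummy training data are placeholders): train first so the PReLU slopes and the transpose-conv bias are no longer all-zero, then convert.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, PReLU, Conv2DTranspose
from tensorflow.keras.models import Model

# tiny stand-in for the question's network
inp = Input(shape=(512, 512, 3), name='ip')
x = Conv2D(8, kernel_size=2, strides=2, padding='valid')(inp)
x = PReLU()(x)                      # per-element slopes, initialised to zero by default
out = Conv2DTranspose(8, kernel_size=2, strides=2, padding='same')(x)
model = Model(inp, out)
model.compile(optimizer='adam', loss='mse')

# train first (dummy data here) so the PReLU slopes and the transpose-conv bias
# move away from their zero initial values, then convert the saved .h5 file
model.fit(np.random.rand(2, 512, 512, 3), np.random.rand(2, 512, 512, 8), epochs=1)
model.save('trained.h5')
tflite_model = tf.lite.TFLiteConverter.from_keras_model_file('trained.h5').convert()
open('trained.tflite', 'wb').write(tflite_model)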

Related

Should ReLU be used in LSTM hidden layers if targets contain negative values?

I'm aware that ReLU as an output layer will only produce non-negative values. Should ReLU still be used in the hidden layers if the targets contain both negative and positive values? (This is a linear regression model for time series.)
Simple LSTM example:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(64, activation="relu", return_sequences=True))  # or without ReLU? (return_sequences is needed to stack LSTMs)
model.add(Dropout(0.2))
model.add(LSTM(32, activation="relu"))  # or without ReLU?
model.add(Dropout(0.2))
model.add(Dense(1))
Additional info: the targets are daily percentage changes, so the distribution is mostly centered around 0, with -10 < targets < 10.
Yes, using ReLU is not an error here, but that does not mean another activation function would not give better results; when in doubt I would still try other functions, such as leaky ReLU.
The reason ReLU is not wrong is that by the time the signal reaches the ReLU it has already been transformed by the layer's weights, so no information is lost: the negative input values have already been remapped by the network.
The only thing to keep in mind is that if you use ReLU right before the output you cannot generate negative predictions, for obvious reasons.
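As an illustration of that suggestion, here is a minimal sketch (my own, not from the answer) that builds the same stacked-LSTM regressor with different hidden activations so they can be compared; the input shape of (30, 1) is an assumed placeholder.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

def build_model(activation, timesteps=30, n_features=1):
    """Stacked LSTM regressor; only the hidden activation varies."""
    model = Sequential([
        LSTM(64, activation=activation, return_sequences=True,
             input_shape=(timesteps, n_features)),
        Dropout(0.2),
        LSTM(32, activation=activation),
        Dropout(0.2),
        Dense(1),  # linear output, so negative targets remain reachable
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# candidates to compare; tf.nn.leaky_relu keeps a small gradient for negative pre-activations
for act in ["tanh", "relu", tf.nn.leaky_relu]:
    model = build_model(act)   # train/evaluate each on the same data to compare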

Dropout only on specific column in Keras

I am training an autoencoder using Keras, with the encoder part as:
self.encoder = tf.keras.Sequential()
self.encoder.add(tf.keras.layers.Dropout(rate=0.2))
self.encoder.add(layers.Dense(14, activation='relu'))
self.encoder.add(layers.Dense(10, activation='relu'))
I am using Dropout at the start to create noise. My input is a 14-dimensional dataset. What dropout does now is randomly drop 20% of the nodes each time, meaning it drops 20% of the features each time. What I would like to do is drop a specific feature, say feature_3 (I suppose this means dropping a specific node), with a probability of 20% at each training step.
Could this be done using Keras? If yes, how?
I think you misunderstand how Dropout works.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout
What you expect is exactly what dropout already does. Also, keras.layers.Dropout does not "create noise".
If you'd like to control the dropout mask, the docs describe noise_shape:
noise_shape: 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features).
Note that noise_shape describes the shape of the dropout mask and is not related to adding or subtracting noise to/from your features.
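A small sketch (my own illustration, not from the answer) of what the default mask and a noise_shape-controlled mask look like on a (batch, 14) input; the shapes are assumed from the question.
import tensorflow as tf

x = tf.ones((4, 14))                      # batch of 4 samples, 14 features

# default: every element gets its own Bernoulli mask, so each of the 14 features
# is dropped independently with probability 0.2 during training
drop = tf.keras.layers.Dropout(rate=0.2)
print(drop(x, training=True))             # ~20% zeros, survivors scaled by 1/0.8

# noise_shape=(1, 14): one mask shared across the batch, so the same feature
# columns are dropped for every sample in this batch
drop_shared = tf.keras.layers.Dropout(rate=0.2, noise_shape=(1, 14))
print(drop_shared(x, training=True))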

Bounding Box regression using Keras transfer learning gives 0% accuracy. The output layer with Sigmoid activation only outputs 0 or 1

I am trying to create an object localization model to detect the license plate in an image of a car. I used the VGG16 model, excluded the top layer, and added my own dense layers, with the final layer having 4 nodes and sigmoid activation to get (xmin, ymin, xmax, ymax).
I used the functions provided by Keras to read the image and resize it to (224, 224, 3), and also used the preprocess_input() function to process the input. I also tried to process the image manually by resizing with padding to maintain proportions and normalizing the input by dividing by 255.
Nothing seems to work when I train. I get 0% train and test accuracy. Below is my code for this model.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input, Flatten, Dense
from tensorflow.keras.models import Model

def get_custom(output_size, optimizer, loss):
    vgg = VGG16(weights="imagenet", include_top=False, input_tensor=Input(shape=IMG_DIMS))
    vgg.trainable = False
    flatten = vgg.output
    flatten = Flatten()(flatten)
    bboxHead = Dense(128, activation="relu")(flatten)
    bboxHead = Dense(32, activation="relu")(bboxHead)
    bboxHead = Dense(output_size, activation="sigmoid")(bboxHead)
    model = Model(inputs=vgg.input, outputs=bboxHead)
    model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy'])
    return model
X and y have shapes (616, 224, 224, 3) and (616, 4) respectively. I divided the coordinates by the lengths of the respective sides, so each value in y is in the range (0, 1).
I'll link my Python notebook from GitHub below so you can see the full code. I am using Google Colab to train the model.
https://github.com/gauthamramesh3110/image_processing_scripts/blob/main/License_Plate_Detection.ipynb
Thanks in advance. I am really in need of help here.
If you're doing an object localization task, you shouldn't use 'accuracy' as your metric, because the docs of compile() say:
When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape.
You should use tf.keras.metrics.MeanAbsoluteError, IoU (Intersection over Union), or mAP (Mean Average Precision) instead.
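For instance, a minimal sketch (my own, with a toy head standing in for the VGG16 model from the question) of compiling a 4-coordinate regression output with a regression metric instead of 'accuracy':
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# toy 4-output regression head; in the question this would be the model from get_custom()
inp = Input(shape=(8,))
out = Dense(4, activation="sigmoid")(inp)   # (xmin, ymin, xmax, ymax) scaled to [0, 1]
model = Model(inp, out)

model.compile(
    loss="mse",                                       # or tf.keras.losses.Huber()
    optimizer="adam",
    metrics=[tf.keras.metrics.MeanAbsoluteError()],   # tracks coordinate error directly
)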

efficientnet.tfkeras vs tf.keras.applications.efficientnet

I am trying to use EfficientNet to train on a custom dataset, and I found that with all other code/data/config kept the same, efficientnet.tfkeras.EfficientNetB0 gives ~90% training/prediction accuracy while tf.keras.applications.efficientnet.EfficientNetB0 only gives ~70% accuracy.
I assumed both were the same implementation of EfficientNet, or am I missing something here?
I am using the latest efficientnet package and TensorFlow 2.3.0.
import tensorflow as tf
import efficientnet.tfkeras
from tensorflow.keras import layers as L

with strategy.scope():
    model = tf.keras.Sequential([
        efficientnet.tfkeras.EfficientNetB0(  # or tf.keras.applications.efficientnet.EfficientNetB0
            input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
            weights='imagenet',
            include_top=False
        ),
        L.GlobalAveragePooling2D(),
        L.Dense(1, activation='sigmoid')
    ])
    model.compile(
        optimizer='adam',
        loss='binary_crossentropy',
        metrics=['binary_crossentropy']
    )
    model.summary()
I ran into the same problem with EfficientNetB4 and found the following:
The total number of parameters is not equal. The trainable parameters are equal, but the non-trainable parameters aren't: efficientnet.tfkeras has 7 fewer non-trainable parameters than the tf.keras.applications model.
The number of layers is not equal; efficientnet.tfkeras has fewer layers than the tf.keras.applications model.
The differing layers are at the very beginning; the most noteworthy are the Normalization and Rescaling layers, which are present in the tf.keras.applications model but not in the efficientnet.tfkeras model. You can observe this yourself with the model.summary() method.
When applying these layers directly via model.layers[i](array), it turns out they rescale the image by dividing it by 255 and apply normalization according to:
(input_image - IMAGENET_MEAN) / square_root(IMAGENET_STD)
Thus, the image normalization is built into the model. If you perform this normalization yourself on the input image, the image is normalized twice, resulting in extremely small pixel values, and the model will have a hard time learning.
TL;DR: do not normalize the input image yourself, as normalization is built into the tf.keras.applications model; input images should have values in the range 0-255.
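A small sketch (my own, with an assumed 224x224 input size) of feeding raw 0-255 pixels to the tf.keras.applications variant, relying on its built-in Rescaling/Normalization layers:
import numpy as np
import tensorflow as tf

# EfficientNetB0 from tf.keras.applications contains its own Rescaling/Normalization
# layers, so it should be fed raw pixels in [0, 255]
backbone = tf.keras.applications.efficientnet.EfficientNetB0(
    weights=None,            # the question uses weights='imagenet'; None avoids a download here
    include_top=False,
    input_shape=(224, 224, 3),
)

raw_images = np.random.randint(0, 256, size=(2, 224, 224, 3)).astype("float32")
features = backbone(raw_images)          # no manual /255 or mean/std normalization
print(features.shape)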

Add Batch Normalization immediately before the non-linearity or after it in Keras?

def conv2d_bn(x, nb_filter, nb_row, nb_col,
              border_mode='same', subsample=(1, 1),
              name=None):
    '''Utility function to apply conv + BN.
    '''
    # conv_name, bn_name and bn_axis are derived from `name` and the image
    # data format elsewhere in the original inception_v3 script
    x = Convolution2D(nb_filter, nb_row, nb_col,
                      subsample=subsample,
                      activation='relu',
                      border_mode=border_mode,
                      name=conv_name)(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name)(x)
    return x
When I use the official inception_v3 model in Keras, I find that they apply BatchNormalization after the 'relu' nonlinearity, as in the code above.
But in the Batch Normalization paper, the authors say:
we add the BN transform immediately before the nonlinearity, by normalizing x = Wu + b.
Then I looked at the implementation of Inception in TensorFlow, which adds BN immediately before the nonlinearity, as the paper says. For more details see inception ops.py.
I'm confused. Why do people use the above style in Keras instead of the following?
def conv2d_bn(x, nb_filter, nb_row, nb_col,
              border_mode='same', subsample=(1, 1),
              name=None):
    '''Utility function to apply conv + BN.
    '''
    x = Convolution2D(nb_filter, nb_row, nb_col,
                      subsample=subsample,
                      border_mode=border_mode,
                      name=conv_name)(x)
    x = BatchNormalization(axis=bn_axis, name=bn_name)(x)
    x = Activation('relu')(x)
    return x
In the Dense case:
x = Dense(1024, name='fc')(x)
x = BatchNormalization(axis=bn_axis, name=bn_name)(x)
x = Activation('relu')(x)
I also use it before the activation, which is how it was originally designed, and so do other libraries, for example Lasagne's batch_norm: http://lasagne.readthedocs.io/en/latest/modules/layers/normalization.html#lasagne.layers.batch_norm
However, in practice placing it after the activation seems to work a bit better:
https://github.com/ducha-aiki/caffenet-benchmark/blob/master/batchnorm.md
(though this is just one benchmark)
In addition to the original paper using batch normalization before the activation, Bengio's book Deep Learning, section 8.7.1 gives some reasoning for why applying batch normalization after the activation (or directly before the input to the next layer) may cause some issues:
It is natural to wonder whether we should apply batch normalization to the input X, or to the transformed value XW + b. Ioffe and Szegedy (2015) recommend the latter. More specifically, XW + b should be replaced by a normalized version of XW. The bias term should be omitted because it becomes redundant with the β parameter applied by the batch normalization reparameterization. The input to a layer is usually the output of a nonlinear activation function such as the rectified linear function in a previous layer. The statistics of the input are thus more non-Gaussian and less amenable to standardization by linear operations.
In other words, if we use a relu activation, all negative values are mapped to zero. This will likely result in a mean value that is already very close to zero, but the distribution of the remaining data will be heavily skewed to the right. Trying to normalize that data to a nice bell-shaped curve probably won't give the best results. For activations outside of the relu family this may not be as big of an issue.
Keep in mind that there have been reports of models getting better results when using batch normalization after the activation, so it is probably worthwhile to test your model using both configurations.
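As a minimal sketch of testing both configurations (my own illustration; the filter counts and kernel sizes are arbitrary), here are the two block variants in current Keras syntax, one with BN before the ReLU and one with BN after it:
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation

def conv_bn_relu(x, filters):
    """Conv -> BN -> ReLU: the placement recommended in the BN paper."""
    x = Conv2D(filters, 3, padding='same', use_bias=False)(x)  # bias is redundant with BN's beta
    x = BatchNormalization()(x)
    return Activation('relu')(x)

def conv_relu_bn(x, filters):
    """Conv -> ReLU -> BN: the placement used in the Keras inception_v3 snippet above."""
    x = Conv2D(filters, 3, padding='same', activation='relu')(x)
    return BatchNormalization()(x)

# build two otherwise identical models and compare them on your data
inputs = tf.keras.Input(shape=(32, 32, 3))
model_a = tf.keras.Model(inputs, conv_bn_relu(inputs, 16))
model_b = tf.keras.Model(inputs, conv_relu_bn(inputs, 16))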