I am trying to train an object detection model as described in this paper.
There are 3 fully connected layers with 512, 512, and 25 neurons. The 16x55x55 feature map from the last convolutional layer is fed into the fully connected layers to retrieve the appropriate class. At this stage, every grid described by (16x1x1) is fed into the fully connected layers to classify the grid as belonging to one of the 25 classes. The structure can be seen in the picture below.
(figure: fully connected layers)
I am trying to adapt the code from the TF MNIST classification tutorial, and I would like to know if it is okay to just sum the losses from each grid, as in the code snippet below, and use that total to train the model weights.
flat_fmap = tf.reshape(last_conv_layer, [-1, 16*55*55])
total_loss = 0
for grid in flat_fmap:
    dense1 = tf.layers.dense(inputs=grid, units=512, activation=tf.nn.relu)
    dense2 = tf.layers.dense(inputs=dense1, units=512, activation=tf.nn.relu)
    logits = tf.layers.dense(inputs=dense2, units=25)
    total_loss += tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=total_loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode=tf.estimator.ModeKeys.TRAIN, loss=total_loss, train_op=train_op)
In the code above, I think 3 new layers are being created at every iteration. However, I would like the weights to be shared, i.e. preserved when classifying one grid and then another.
Adding to the total_loss should be ok.
tf.losses.sparse_softmax_cross_entropy is also adding losses together: it computes a sparse softmax cross-entropy from the logits and then reduces the resulting array through a sum using math_ops.reduce_sum (by default that sum is divided by the number of non-zero weights, so it behaves like a mean). So you are combining them one way or another, as you can see in its source.
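If you want explicit control over how the per-grid terms are combined, the loss also takes a reduction argument; for example (a sketch):

# Explicit control over how per-grid losses are combined (optional):
loss_sum = tf.losses.sparse_softmax_cross_entropy(
    labels=labels, logits=logits,
    reduction=tf.losses.Reduction.SUM)    # plain sum over examples
loss_mean = tf.losses.sparse_softmax_cross_entropy(
    labels=labels, logits=logits,
    reduction=tf.losses.Reduction.MEAN)   # weighted mean instead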
The for loop in the network declaration seems unusual; it probably makes more sense to build the network once and loop at run time, passing each grid through the feed_dict:
# X and labels are placeholders for one grid and its labels
# (shapes assumed from the description above: 16 features per grid, 25 classes).
X = tf.placeholder(tf.float32, shape=[None, 16])
labels = tf.placeholder(tf.int64, shape=[None])

dense1 = tf.layers.dense(inputs=X, units=512, activation=tf.nn.relu)
dense2 = tf.layers.dense(inputs=dense1, units=512, activation=tf.nn.relu)
logits = tf.layers.dense(inputs=dense2, units=25)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)

total_loss = 0
with tf.Session() as sess:
    sess.run(init)
    for grid in flat_fmap:
        # grid_labels: the ground-truth class ids for this grid (name used for illustration)
        _, l = sess.run([optimizer, loss], feed_dict={X: grid, labels: grid_labels})
        total_loss += l
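Another option (a sketch, not the only way) is to avoid the loop entirely and share the dense weights by treating every grid cell as a batch element, so a single loss covers all cells:

# Assumes last_conv_layer has shape [batch, 16, 55, 55] and grid_labels has
# shape [batch, 55, 55] with integer class ids (names here are illustrative).
cells = tf.transpose(last_conv_layer, [0, 2, 3, 1])        # [batch, 55, 55, 16]
cells = tf.reshape(cells, [-1, 16])                         # [batch*55*55, 16]

dense1 = tf.layers.dense(cells, units=512, activation=tf.nn.relu)
dense2 = tf.layers.dense(dense1, units=512, activation=tf.nn.relu)
logits = tf.layers.dense(dense2, units=25)                  # [batch*55*55, 25]

flat_labels = tf.reshape(grid_labels, [-1])
total_loss = tf.losses.sparse_softmax_cross_entropy(labels=flat_labels, logits=logits)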
I am trying to solve the Spoken Digit Recognition task using an LSTM model, where the audio files are converted into spectrograms and fed into an LSTM followed by Global Average Pooling. Here is the architecture:
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, GlobalAveragePooling1D, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2

tf.keras.backend.clear_session()

# input layer
input_ = Input(shape=(64, 35))
lstm = LSTM(100, activation='tanh', return_sequences=True, kernel_regularizer=l2(0.000001),
            recurrent_initializer='glorot_uniform')(input_)
lstm = GlobalAveragePooling1D(data_format='channels_first')(lstm)
dense = Dense(20, activation='relu', kernel_regularizer=l2(0.000001), kernel_initializer='glorot_uniform')(lstm)
drop = Dropout(0.8)(dense)
dense1 = Dense(25, activation='relu', kernel_regularizer=l2(0.000001), kernel_initializer='he_uniform')(drop)
drop = Dropout(0.95)(dense1)
output = Dense(10, activation='softmax', kernel_regularizer=l2(0.000001), kernel_initializer='glorot_uniform')(drop)

model_2 = Model(inputs=[input_], outputs=output)
model_2.summary()
(The model summary output is not shown here.)
I need to calculate the F1 score to check the performance of the model. I have implemented a custom callback and also used the TensorFlow Addons F1 score. However, I don't get a sensible result: for every epoch I get the same constant F1 score value.
On further digging, I found out that my model predicts the same class label for the entire epoch, whereas it is supposed to spread its predictions over the 10 classes, since there are 10 class label values present.
Here are my model.compile and model.fit commands. I have used the TensorFlow Addons metric here:
from tensorflow import keras
import tensorflow_addons as tfa

# Assumed metric definition (the exact metric list is not shown above):
metric = [tfa.metrics.F1Score(num_classes=10, average='macro')]

opt = keras.optimizers.Adam(0.001, clipnorm=0.8)
model_2.compile(loss='categorical_crossentropy', optimizer=opt, metrics=metric)
hist = model_2.fit([X_train_spectrogram],
[y_train_converted],
validation_data= ([X_test_spectrogram], [y_test_converted]),
epochs = 10,
verbose =1,
callbacks=[tensorBoard_callbk2, ClearMemory()],
# steps_per_epoch = 3,
batch_size=32)
Here is what I mean by getting the same prediction: the entire prediction array is filled with the same predicted value.
Why is the model predicting the same class label, and how can I rectify it?
I have tried increasing the number of trainable parameters and increasing/decreasing the batch size, but it doesn't help. If anyone knows, can you please help me out?
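For reference, the kind of per-epoch F1 callback I mean looks roughly like this (a sketch, not my exact code; it assumes the validation labels are one-hot encoded):

import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

class F1Callback(tf.keras.callbacks.Callback):
    """Compute macro F1 on a held-out set at the end of every epoch."""
    def __init__(self, x_val, y_val_onehot):
        super().__init__()
        self.x_val = x_val
        self.y_true = np.argmax(y_val_onehot, axis=1)

    def on_epoch_end(self, epoch, logs=None):
        y_pred = np.argmax(self.model.predict(self.x_val), axis=1)
        print("epoch %d - val macro F1: %.4f"
              % (epoch + 1, f1_score(self.y_true, y_pred, average='macro')))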
I am working on a triplet loss based model for this Kaggle competition.
Short description: in this competition, we are challenged to build an algorithm to identify individual whales in images by analyzing a database containing more than 25,000 images, gathered from research institutions and public contributors.
https://www.kaggle.com/c/humpback-whale-identification?rvi=1
I have decided to use a Siamese network architecture and train it to give me encodings which I can then use to calculate the distance between two pictures of whales. If this distance is below a particular threshold, the two pictures belong to the same whale; if it is greater, they aren't the same whale.
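In other words, the check at inference time is roughly this (a sketch; the threshold value is just illustrative):

import numpy as np

def same_whale(enc_a, enc_b, threshold=0.6):
    """Squared L2 distance between two (L2-normalized) encodings vs. a threshold."""
    dist = np.sum(np.square(enc_a - enc_b))
    return dist < threshold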
This is the triplet loss function (learnt from Andrew Ng's deep learning specialization) I used, but I also normalized the encodings to make the loss more interpretable (easier to determine a margin and split point) across different models, if that makes sense. (First I tried it without the normalization, and when that didn't work I tried normalizing.) I have also tried changing alpha (the margin), varying it from 0.2 to 0.6.
import tensorflow as tf
from tensorflow.nn import l2_normalize as norm_l2

def triplet_loss(y_true, y_pred, alpha=0.3):
    """
    Arguments:
    y_true -- true labels, required when you define a loss in Keras; you don't need it in this function.
    y_pred -- python list containing three objects:
            anchor -- the encodings for the anchor images, of shape (None, 128)
            positive -- the encodings for the positive images, of shape (None, 128)
            negative -- the encodings for the negative images, of shape (None, 128)
    Returns:
    loss -- real number, value of the loss
    """
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
    # Normalize each embedding to unit length (axis=-1 normalizes per example).
    anchor, positive, negative = norm_l2(anchor, axis=-1), norm_l2(positive, axis=-1), norm_l2(negative, axis=-1)
    # Step 1: Compute the (encoding) distance between the anchor and the positive.
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
    # Step 2: Compute the (encoding) distance between the anchor and the negative.
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
    # Step 3: Subtract the two previous distances and add alpha.
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
    return loss
This is an example of one of the model architectures I tried out. I have tried using pretrained FaceNet, ResNet, DenseNet and Xception so far, and I have tried freezing different numbers of layers in each.
from tensorflow.keras.layers import Input, Dense, GlobalAveragePooling2D, BatchNormalization as bn  # bn() below is assumed to be this alias
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam

R = tf.keras.applications.ResNet50(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
lr = 0.0001
optimizer = Adam(learning_rate=lr)
R.compile(optimizer=optimizer, loss=triplet_loss)

for layer in R.layers[0:30]:
    layer.trainable = False
em_Rmodel = Sequential([
R,
GlobalAveragePooling2D(),
#tf.keras.layers.GlobalMaxPooling2D(),
Dense(512, activation='relu'),
bn(),
Dense(256, activation = 'sigmoid'),
Dense(128, activation = 'sigmoid')
])
def make_tripletModel(model):
    # I was manually changing the input shape to fit the default shape of pretrained networks.
    A = Input(shape=(224, 224, 3), name='anchor')
    P = Input(shape=(224, 224, 3), name='anchorPositive')
    N = Input(shape=(224, 224, 3), name='anchorNegative')

    enc_A = model(A)
    enc_P = model(P)
    enc_N = model(N)

    tripletModel = Model(inputs=[A, P, N], outputs=[enc_A, enc_P, enc_N])
    return tripletModel
tripletModel = make_tripletModel(em_Rmodel)
I have been training using semi-hard triplets and have also been augmenting data properly to generate more training images.
This is the batch generator that I used for training. crop_batch is a function that crops images to show only the whale's tail, which is what one uses to identify whales. It uses a DenseNet trained on more than 1000 images of whale tails with bounding boxes around them, and it does the job sufficiently well.
def batch_generator_RN(batch_size=batch_size, ishape=(256, 256, 3), model_input_shape=(224, 224, 3)):
    triplet_generator = get_triplets()
    y_val = np.zeros((batch_size, 2, 1))
    anchors = np.zeros((batch_size, ishape[0], ishape[1], ishape[2]))
    positives = np.zeros((batch_size, ishape[0], ishape[1], ishape[2]))
    negatives = np.zeros((batch_size, ishape[0], ishape[1], ishape[2]))

    while True:
        for i in range(batch_size):
            anchors[i], positives[i], negatives[i] = next(triplet_generator)

        anc = crop_batch(anchors, batch_size=batch_size, img_shape=model_input_shape)
        pos = crop_batch(positives, batch_size=batch_size, img_shape=model_input_shape)
        neg = crop_batch(negatives, batch_size=batch_size, img_shape=model_input_shape)

        x_data = {'anchor': anc,
                  'anchorPositive': pos,
                  'anchorNegative': neg
                  }

        yield (x_data, [y_val, y_val, y_val])
And finally, this, in general, is how I have been trying to train these models. I have tried reducing and increasing the learningning rate, with batch_size = 16.
lr = 0.0001
optimizer = Adam(learning_rate=lr)
tripletModel.compile(optimizer = optimizer, loss = triplet_loss)
es = EarlyStopping(monitor='loss', patience=20, min_delta=0.05, restore_best_weights=True)
#mc = ModelCheckpoint('Rmodel.h5', monitor='loss', save_best_only=True, save_weights_only=True)
rlr = ReduceLROnPlateau(monitor='loss',min_delta=0.05,factor = 0.1,patience = 5, verbose = 1, min_lr = 0)
gen = batch_generator_RN(batch_size)
tripletModel.fit(gen, steps_per_epoch=64, epochs = 40, callbacks=[es, rlr])
So after training all these models, in some of them the triplet loss does go down for a while but then plateaus and essentially learns nothing meaningful (which means that just by looking at the distance between two embeddings I can't figure out whether they are the same whale or not). In other models, immediately after the first or second epoch the weights converge, don't change at all, and the model doesn't learn anything.
I have tried a very wide range of learning rates and I am pretty sure that it isn't the problem.
Please tell me if I should add all the code files for you to understand the problem better. The reason I haven't done it yet is that I haven't cleaned it up, but I will gladly do so if required. Thanks.
When you say that it doesn't learn anything, do you mean that the loss reaches a plateau and thus stops decreasing, or that it does decrease significantly but when you predict, the embeddings of both same and different whales are similar in value?
The triplet_loss() and batch_generator_RN() functions are correct; the problem is not related to the data generation.
However, I suspect that your learning rate is too high while you freeze a lot of layers, i.e. a large number of parameters are not trainable, which may lead to your network being unable to converge.
My suggestion is to unfreeze all the layers and decrease the learning rate to 0.00001, then start training again, regardless of the architecture that you use (Xception/ResNet etc.).
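With the ResNet-based model from the question, that would look roughly like this (a sketch):

from tensorflow.keras.optimizers import Adam

# Unfreeze every layer of the backbone and retrain with a much smaller learning rate.
for layer in R.layers:
    layer.trainable = True

tripletModel.compile(optimizer=Adam(learning_rate=1e-5), loss=triplet_loss)
tripletModel.fit(gen, steps_per_epoch=64, epochs=40, callbacks=[es, rlr])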
I am trying to construct a basic "vanilla gradient" saliency heatmap (gradient-based feature attribution) for MNIST using keras. I know there are libraries such as this one to compute saliency heatmaps, but I would like to construct this from scratch since the vanilla gradient approach seems conceptually straightforward to implement. I have trained the following digit classifier in Keras using functional model definition:
input = layers.Input(shape=(28,28,1), name='input')
conv2d_1 = layers.Conv2D(32, kernel_size=(3, 3), activation='relu')(input)
maxpooling2d_1 = layers.MaxPooling2D(pool_size=(2, 2), name='maxpooling2d_1')(conv2d_1)
conv2d_2 = layers.Conv2D(64, kernel_size=(3, 3), activation='relu')(maxpooling2d_1)
maxpooling2d_2 = layers.MaxPooling2D(pool_size=(2, 2))(conv2d_2)
flatten = layers.Flatten(name='flatten')(maxpooling2d_2)
dropout = layers.Dropout(0.5, name='dropout')(flatten)
dense = layers.Dense(num_classes, activation='softmax', name='dense')(dropout)
model = keras.models.Model(inputs=input, outputs=dense)
Now, I want to compute the saliency map for a single MNIST image. Since the final layer has a softmax activation and the denominator is a normalization term (so that the output nodes add up to 1), I believe that I need to either take the pre-softmax output or change the activation of the trained model to linear before computing saliency maps. I will do the latter.
model.layers[-1].activation = tf.keras.activations.linear  # swap activation to linear
input = model.layers[0].input
output = model.layers[-1].output

input_image = x_test[0]  # shape is (28, 28, 1)
pred = np.argmax(model.predict(np.expand_dims(input_image, axis=0)))  # predicted class
However, I am not sure what to do beyond this. I know I can use K.gradients(output, input) to compute gradients. That being said, I believe I should compute the gradient of the predicted class with respect to the input image, rather than the gradient of the entire output. How would I do this? Also, I'm not sure how to evaluate the saliency heatmap for a specific image/prediction. I imagine I will have to use sess = tf.keras.backend.get_session() and sess.run(), but I am not sure exactly how. I would greatly appreciate any help with completing the saliency heatmap code. Thanks!
If you add the softmax activation as a separate layer after the last dense layer, i.e.
keras.layers.Activation('softmax')
then you can build a model that stops at the pre-softmax output:
linear_model = keras.Model(inputs=model.input, outputs=model.layers[-2].output)
You can then compute the gradients with tf.GradientTape:
def get_saliency_map(model, image, class_idx):
    with tf.GradientTape() as tape:
        tape.watch(image)
        predictions = model(image)
        loss = predictions[:, class_idx]

    # Get the gradients of the loss w.r.t. the input image.
    gradient = tape.gradient(loss, image)

    # Take the maximum across channels.
    gradient = tf.reduce_max(gradient, axis=-1)

    # Convert to numpy.
    gradient = gradient.numpy()

    # Normalize between 0 and 1.
    min_val, max_val = np.min(gradient), np.max(gradient)
    smap = (gradient - min_val) / (max_val - min_val + keras.backend.epsilon())

    return smap
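A usage sketch (assuming x_test from the question and the linear_model built above) could look like this:

import numpy as np
import tensorflow as tf

# Pick one test image and add a batch dimension.
image = tf.convert_to_tensor(x_test[0][np.newaxis, ...], dtype=tf.float32)

# Use the predicted class as the class to explain.
class_idx = int(np.argmax(linear_model(image).numpy()))

smap = get_saliency_map(linear_model, image, class_idx)  # shape (1, 28, 28)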
I'm trying to use the VGG16 network to do image classification. I've tried two different ways to do it, which should be approximately equivalent as far as I understand, yet the results are very different.
Method 1: Extract features using VGG16 and fit these features using a custom fully connected network. Here is the code:
# Assuming standalone Keras imports for the code below:
import keras
from keras.applications import vgg16

model = vgg16.VGG16(include_top=False, weights='imagenet',
                    input_shape=(imsize, imsize, 3),
                    pooling='avg')
model_pred = keras.Sequential()
model_pred.add(keras.layers.Dense(1024, input_dim=512, activation='sigmoid'))
model_pred.add(keras.layers.Dropout(0.5))
model_pred.add(keras.layers.Dense(512, activation='sigmoid'))
model_pred.add(keras.layers.Dropout(0.5))
model_pred.add(keras.layers.Dense(num_categories, activation='sigmoid'))
model_pred.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
(xtr, ytr) = tools.extract_features(model, 3000, imsize, datagen,
rootdir + '/train',
pickle_name = rootdir + '/testpredstrain.pickle')
(xv, yv) = tools.extract_features(model, 300, imsize, datagen,
rootdir + '/valid1',
pickle_name = rootdir + '/testpredsvalid.pickle')
model_pred.fit(xtr, ytr, epochs = 10, validation_data = (xv, yv), verbose=1)
(The function extract_features() simply uses a Keras ImageDataGenerator to generate sample images and returns the output of model.predict() on those images.)
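A rough sketch of what extract_features() does, as described above (the real function also pickles the results; the details here are illustrative):

def extract_features(model, n_samples, imsize, datagen, directory, pickle_name=None):
    # Draw n_samples images from the directory and run them through the frozen VGG16 base.
    gen = datagen.flow_from_directory(directory,
                                      target_size=(imsize, imsize),
                                      batch_size=n_samples,
                                      class_mode='categorical')
    x_batch, y_batch = next(gen)
    features = model.predict(x_batch)   # shape (n_samples, 512) with pooling='avg'
    return features, y_batch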
Method 2: Take the VGG16 network without the top part, set all the convolutional layers to non-trainable and add a few densely connected layers that are trainable. Then fit using keras fit_generator(). Here is the code:
model2 = vgg16.VGG16(include_top=False, weights='imagenet',
input_shape=(imsize,imsize,3),
pooling='avg')
for ll in model2.layers:
    ll.trainable = False
out1 = keras.layers.Dense(1024, activation='softmax')(model2.layers[-1].output)
out1 = keras.layers.Dropout(0.4)(out1)
out1 = keras.layers.Dense(512, activation='softmax')(out1)
out1 = keras.layers.Dropout(0.4)(out1)
out1 = keras.layers.Dense(num_categories, activation='softmax')(out1)
model2 = keras.Model(inputs = model2.input, outputs = out1)
model2.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model2.fit_generator(train_gen,
steps_per_epoch = 100,
epochs = 10,
validation_data = valid_gen,
validation_steps = 10)
The number of epochs, samples, etc. are not exactly the same in both methods, but they don't need to be to notice the inconsistency: method 1 yields a validation accuracy of 0.47 after just one epoch and gets as high as 0.7-0.8, and even better when I use a larger number of samples to fit. Method 2, however, gets stuck at a validation accuracy of 0.1-0.15 and never gets any better no matter how much I train.
Also, method 2 is considerably slower than method 1 even though it seems to me that they should be approximately as fast (when taking into account the time it takes to extract the features in method 1).
With your first method you extract features with the pre-trained VGG16 model once and then train/fine-tune your small network on those cached features, while in your second approach you are constantly passing your images through every layer, including VGG's layers, at every epoch. That is what causes your model to run slower with the second method.
I am using tf.estimator.Estimator to develop my model. I wrote a model_fn and trained for 50,000 iterations. Now I want to make a small change to my model_fn, for example add a new layer.
I don't want to start training from scratch; I want to restore all the old variables from the 50,000-step checkpoint and continue training from that point. When I try to do so, I get a NotFoundError.
How can this be done with tf.estimator.Estimator?
TL;DR The easiest way to load variables from a previous checkpoint is to use the function tf.train.init_from_checkpoint(). Just one call to this function inside the model_fn of your Estimator will override the initializers of the corresponding variables.
First model with two hidden layers
In more details, suppose you have trained a first model with two hidden layers on MNIST, named model_fn_1. The weights are saved in directory mnist_1.
def model_fn_1(features, labels, mode):
    images = features['image']
    h1 = tf.layers.dense(images, 100, activation=tf.nn.relu, name="h1")
    h2 = tf.layers.dense(h1, 100, activation=tf.nn.relu, name="h2")
    logits = tf.layers.dense(h2, 10, name="logits")
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
# Estimator 1: two hidden layers
estimator_1 = tf.estimator.Estimator(model_fn_1, model_dir='mnist_1')
estimator_1.train(input_fn=train_input_fn, steps=1000)
Second model with three hidden layers
Now we want to train a new model model_fn_2 with three hidden layers. We want to load the weights for the first two hidden layers, h1 and h2. We use tf.train.init_from_checkpoint() to do this:
def model_fn_2(features, labels, mode, params):
    images = features['image']
    h1 = tf.layers.dense(images, 100, activation=tf.nn.relu, name="h1")
    h2 = tf.layers.dense(h1, 100, activation=tf.nn.relu, name="h2")
    h3 = tf.layers.dense(h2, 100, activation=tf.nn.relu, name="h3")

    assignment_map = {
        'h1/': 'h1/',
        'h2/': 'h2/'
    }
    tf.train.init_from_checkpoint('mnist_1', assignment_map)

    logits = tf.layers.dense(h3, 10, name="logits")
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
# Estimator 2: three hidden layers
estimator_2 = tf.estimator.Estimator(model_fn_2, model_dir='mnist_2')
estimator_2.train(input_fn=train_input_fn, steps=1000)
The assignment_map will load every variable from scope h1/ in the checkpoint into the new scope h1/, and the same for h2/. Don't forget the / at the end, so that TensorFlow knows it's a variable scope.
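The same mechanism also works if the new graph uses different scope names; the keys of the dict are scopes in the checkpoint and the values are scopes in the current graph (a small example with a made-up scope name):

# Load checkpoint scope 'h1/' into a differently named scope 'hidden_1/' in the new graph.
assignment_map = {'h1/': 'hidden_1/'}
tf.train.init_from_checkpoint('mnist_1', assignment_map)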
I couldn't find a way to make this work using pre-made estimators, since you can't change their model_fn.