ResNet-50 adversarial training with CleverHans FGSM accuracy stuck at 5% - tensorflow

I am facing a strange problem when adversarially training a ResNet-50, and I am not sure whether it's a logical error or a bug somewhere in the code/libraries.
I am adversarially training a ResNet-50 that is loaded from Keras, using the FastGradientMethod from CleverHans, and expecting the adversarial accuracy to rise at least above 90% (probably 99.x%). The training algorithm, training parameters, and attack parameters should be visible in the code.
The problem, as stated in the title, is that the accuracy gets stuck at 5% after training on ~3000 of the 39002 training inputs in the first epoch (German Traffic Sign Recognition Benchmark, GTSRB).
When training without the adversarial loss function, the accuracy does not get stuck after 3000 samples but continues to rise above 0.95 in the first epoch.
When substituting the network with LeNet-5, AlexNet, or VGG19, the code works as expected, and an accuracy comparable to that of the non-adversarial categorical_crossentropy loss function is achieved. I have also tried running the procedure using only tf-cpu and different versions of TensorFlow; the result is always the same.
Code for obtaining ResNet-50:
def build_resnet50(num_classes, img_size):
    from tensorflow.keras.applications import ResNet50
    from tensorflow.keras import Model
    from tensorflow.keras.layers import Dense, Flatten

    resnet = ResNet50(weights='imagenet', include_top=False, input_shape=img_size)
    x = Flatten(input_shape=resnet.output.shape)(resnet.output)
    x = Dense(1024, activation='sigmoid')(x)
    predictions = Dense(num_classes, activation='softmax', name='pred')(x)
    model = Model(inputs=[resnet.input], outputs=[predictions])
    return model
Training:
def lr_schedule(epoch):
    # decreasing learning rate depending on epoch
    return 0.001 * (0.1 ** int(epoch / 10))

def train_model(model, xtrain, ytrain, xtest, ytest, lr=0.001, batch_size=32,
                epochs=10, result_folder=""):
    from cleverhans.attacks import FastGradientMethod
    from cleverhans.utils_keras import KerasModelWrapper
    import tensorflow as tf
    from tensorflow.keras.optimizers import SGD
    from tensorflow.keras.callbacks import LearningRateScheduler, ModelCheckpoint

    sgd = SGD(lr=lr, decay=1e-6, momentum=0.9, nesterov=True)

    model(model.input)
    wrap = KerasModelWrapper(model)
    sess = tf.compat.v1.keras.backend.get_session()
    fgsm = FastGradientMethod(wrap, sess=sess)
    fgsm_params = {'eps': 0.01,
                   'clip_min': 0.,
                   'clip_max': 1.}

    loss = get_adversarial_loss(model, fgsm, fgsm_params)
    model.compile(loss=loss, optimizer=sgd, metrics=['accuracy'])

    model.fit(xtrain, ytrain,
              batch_size=batch_size,
              validation_data=(xtest, ytest),
              epochs=epochs,
              callbacks=[LearningRateScheduler(lr_schedule)])
Loss-function:
def get_adversarial_loss(model, fgsm, fgsm_params):
    def adv_loss(y, preds):
        import tensorflow as tf
        tf.keras.backend.set_learning_phase(False)  # turn off dropout during input gradient calculation, to avoid unconnected gradients
        # Cross-entropy on the legitimate examples
        cross_ent = tf.keras.losses.categorical_crossentropy(y, preds)
        # Generate adversarial examples
        x_adv = fgsm.generate(model.input, **fgsm_params)
        # Consider the attack to be constant
        x_adv = tf.stop_gradient(x_adv)
        # Cross-entropy on the adversarial examples
        preds_adv = model(x_adv)
        cross_ent_adv = tf.keras.losses.categorical_crossentropy(y, preds_adv)
        tf.keras.backend.set_learning_phase(True)  # turn back on
        return 0.5 * cross_ent + 0.5 * cross_ent_adv
    return adv_loss
Versions used:
tf+tf-gpu: 1.14.0
keras: 2.3.1
cleverhans: > 3.0.1 - latest version pulled from github

It is a side effect of the way the moving averages are estimated in BatchNormalization.
The mean and variance of the training data you used are different from those of the dataset used to train the ResNet50. Because the momentum of BatchNormalization has a default value of 0.99, with only 10 iterations it does not converge quickly enough to the correct values for the moving mean and variance. This is not obvious during training, when the learning_phase is 1, because BN uses the mean/variance of the batch. Nevertheless, when learning_phase is set to 0, the incorrect mean/variance values learned during training significantly affect the accuracy.
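For intuition, here is a small numeric sketch of the exponential-moving-average update that BatchNormalization applies (the target statistic of 0.5 is made up purely for illustration):

import numpy as np

def simulate_moving_mean(batch_mean, init=0.0, momentum=0.99, steps=10):
    """Keras-style BN update: moving = momentum * moving + (1 - momentum) * batch_stat."""
    moving = init
    for _ in range(steps):
        moving = momentum * moving + (1.0 - momentum) * batch_mean
    return moving

# Suppose the true mean of the new dataset is 0.5 and the stored statistic starts at 0.0.
print(simulate_moving_mean(0.5, momentum=0.99, steps=10))  # ~0.048, still far from 0.5
print(simulate_moving_mean(0.5, momentum=0.5,  steps=10))  # ~0.4995, essentially converged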
You can fix this problem with the approaches below:
More iterations
Reduce the size of the batch from 32 to 16 (to perform more updates per epoch) and increase the number of epochs from 10 to 250. This way the moving mean and variance will converge to the correct values.
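For instance, with the question's train_model signature that would be:

# Approach 1: smaller batches and many more epochs, so the BN moving statistics receive more updates.
train_model(model, xtrain, ytrain, xtest, ytest, lr=0.001, batch_size=16, epochs=250)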
Change the momentum of BatchNormalization
Keep the number of iterations fixed but change the momentum of the BatchNormalization layers so that the rolling mean and variance are updated more aggressively (not recommended for production models).
In the original snippet, add the following code between loading the base_model and defining the new layers:
# ....
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=input_shape)

# PATCH MOMENTUM - START
import json
conf = json.loads(base_model.to_json())
for l in conf['config']['layers']:
    if l['class_name'] == 'BatchNormalization':
        l['config']['momentum'] = 0.5

m = Model.from_config(conf['config'])
for l in base_model.layers:
    m.get_layer(l.name).set_weights(l.get_weights())

base_model = m
# PATCH MOMENTUM - END

x = base_model.output
# ....
I would also recommend you try another hack provided by us here.

Related

XOR problem with 2-2-1 configuration should always predict output accurately?

I am trying to solve the XOR problem using the following code:
import numpy as np
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Input, Concatenate
from tensorflow.keras.utils import plot_model
from tensorflow.keras.optimizers import SGD, Adam

# input data
x = np.array([[0,0], [0,1], [1,0], [1,1]], 'float32')
y = np.array([[0], [1], [1], [0]], 'float32')

### Model
model = Sequential()

# add layers (architecture)
model.add(Dense(2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# compile
model.compile(loss='mean_squared_error',
              optimizer=SGD(learning_rate=0.1, momentum=0.8),
              metrics=['accuracy'])

# train
model.fit(x, y, epochs=25000, batch_size=1)

# evaluate
ev = model.evaluate(x, y)
I already tested:
using different activation functions in the hidden layer (sigmoid and tanh)
using different learning rates and momentum
Also, I am running with a high number of epochs (25000). Still, it only accurately predicts all outputs a few times. Most of the time the accuracy is equal to 0.5 or 0.75.
I have read that this is the minimum configuration to solve this problem. However, it also seems that the error surface presents a number of regions with local minima.
My question is:
Should I assume that the model is correct and can learn the problem, although it sometimes gets 'stuck' in a local minimum, OR do I still need to improve my model somehow to solve XOR more accurately and consistently?
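One way to check this empirically (my suggestion, not part of the question) is to retrain the same 2-2-1 network from several random seeds and count how often it reaches 100% accuracy:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], 'float32')
y = np.array([[0], [1], [1], [0]], 'float32')

def run_once(seed, epochs=2000):
    # same 2-2-1 architecture as in the question, retrained from a fresh initialization
    tf.random.set_seed(seed)
    model = Sequential([
        Dense(2, activation='relu', input_dim=2),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(loss='mean_squared_error',
                  optimizer=SGD(learning_rate=0.1, momentum=0.8),
                  metrics=['accuracy'])
    model.fit(x, y, epochs=epochs, batch_size=1, verbose=0)
    return model.evaluate(x, y, verbose=0)[1]  # accuracy

accs = [run_once(seed) for seed in range(10)]
print("runs that solved XOR:", sum(a == 1.0 for a in accs), "/", len(accs))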

ResNet-based (TensorFlow Keras) Siamese model giving `nan` validation loss in training when using TripletHardLoss (Semi too)

I have a model that I built on top of ResNet. I am using 25k images of a similar type. My images contain text as well as some diagrams. When I used Euclidean distance + binary loss, I got an accuracy of 95% with Inception, but the same setup with Triplet Hard / Semi-Hard Loss gives me nan loss and almost 0 accuracy. Please tell me if there is something wrong with the code structure.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow_addons as tfa
from tensorflow.keras.applications.resnet50 import preprocess_input as res50_pre, ResNet50

shape = (224, 224, 3)
lr = 0.001
loss = tfa.losses.TripletSemiHardLoss()
epochs = 50
batch_size = 128  # 254 gives 'log' referenced before assignment error

datagen = ImageDataGenerator(preprocessing_function=res50_pre, validation_split=0.2)

# df (a DataFrame with 'path' and 'label' columns) and SEED are defined elsewhere
train_data = datagen.flow_from_dataframe(df, x_col='path', y_col='label', class_mode='sparse',
                                         target_size=(224, 224), batch_size=batch_size,
                                         subset='training', seed=SEED)
val_data = datagen.flow_from_dataframe(df, x_col='path', y_col='label', class_mode='sparse',
                                       target_size=(224, 224), batch_size=batch_size,
                                       subset='validation', seed=SEED)

base_model = ResNet50(weights='imagenet', input_shape=shape, include_top=False, pooling='avg')
base_model.trainable = True

inputs = keras.Input(shape=shape)
x = base_model(inputs, training=True)
outputs = keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1))(x)  # L2 normalize embeddings
model = keras.Model(inputs, outputs)

for layer in model.layers:  # set all the parameters trainable
    layer.trainable = True

model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss=loss, metrics=['accuracy'])
history = model.fit(train_data, epochs=epochs, steps_per_epoch=len(train_data)//batch_size,
                    validation_data=val_data, verbose=2)
My labels have values like 1, 2, 3 [not in order and some missing], which represent the same type of data. I used sparse class mode after converting the values to str(1), str(3), etc.
My DataFrame looks like this:
Increase the batch size to reduce the probability of a mini-batch not including any valid triplets (see the sketch below for the intuition).
Edit: I published a package for generating balanced TF/Keras batches to solve this problem: https://github.com/ma7555/kerasgen
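For intuition on why a larger batch helps, a rough Monte-Carlo sketch of the chance that a uniformly sampled batch contains no anchor-positive pair at all (the class count is made up; the real label distribution in the question is unknown):

import numpy as np

def prob_no_positive_pair(num_classes, batch_size, trials=50_000):
    """Estimate the chance that a random batch contains no two samples of the same class
    (and therefore no anchor-positive pair for the triplet loss)."""
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(trials):
        batch = rng.integers(0, num_classes, size=batch_size)
        if len(np.unique(batch)) == batch_size:  # all labels distinct -> no positives
            hits += 1
    return hits / trials

# Illustrative numbers only:
print(prob_no_positive_pair(num_classes=1000, batch_size=32))   # often no positive pair
print(prob_no_positive_pair(num_classes=1000, batch_size=128))  # almost always at least one pair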

How to train a model on multiple GPUs with TensorFlow 2 and Keras?

I have an LSTM model that I want to train on multiple GPUs. I transformed the code to do this, and in nvidia-smi I can see that all the memory of all the GPUs is used and each GPU sits at around 40% utilization, BUT the estimated training time per batch is almost the same as with a single GPU.
Can someone please guide me and tell me how I can train properly on multiple GPUs?
My code:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dropout
import os
from tensorflow.keras.callbacks import ModelCheckpoint

checkpoint_path = "./model/"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = ModelCheckpoint(filepath=checkpoint_path, save_freq='epoch', verbose=1)

# NNET - LSTM
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    regressor = Sequential()
    regressor.add(LSTM(units=180, return_sequences=True, input_shape=(X_train.shape[1], 3)))
    regressor.add(Dropout(0.2))
    regressor.add(LSTM(units=180, return_sequences=True))
    regressor.add(Dropout(0.2))
    regressor.add(LSTM(units=180))
    regressor.add(Dropout(0.2))
    regressor.add(Dense(units=4))
    regressor.compile(optimizer='adam', loss='mean_squared_error')

regressor.fit(X_train, y_train, epochs=10, batch_size=32, callbacks=[cp_callback])
Assuming that your batch_size for a single GPU is N and the time taken per batch is X secs:
You can measure the training speed by measuring the time taken for the model to converge, but you have to make sure that you feed in the right batch_size with 2 GPUs. Since 2 GPUs have twice the memory of a single GPU, you should linearly scale your batch_size to 2N. It might be deceiving to see that the model still takes X secs per batch, but you should know that your model is now seeing 2N samples per batch, which leads to quicker convergence because you can now train with a higher learning rate.
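A minimal sketch of what that scaling can look like with MirroredStrategy (reusing the variable names from the question; the per-replica size of 32 is just the question's value):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

per_replica_batch_size = 32
# Keras with MirroredStrategy takes the *global* batch size, which is split across the GPUs,
# so scale it linearly with the number of replicas (2 GPUs -> 2N).
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync

regressor.fit(X_train, y_train, epochs=10, batch_size=global_batch_size,
              callbacks=[cp_callback])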
If both of your GPUs have their memory allocated but are sitting at 40% utilization, there might be multiple reasons:
The model is too simple and you don't need all that compute.
Your batch_size is small and your GPUs can handle a bigger batch_size.
Your CPU is the bottleneck, making the GPUs wait for the data to be ready; this can be the case when you see spikes in GPU utilization.
In that case you need to write a better, more performant data pipeline; you can find more about efficient data input pipelines here - https://www.tensorflow.org/guide/data_performance (see the sketch below).
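As an illustration of the last point, a sketch of a prefetching tf.data input pipeline (the arrays and batch size are placeholders taken from the question, not the poster's actual data loading):

import tensorflow as tf

# Overlap host-side data preparation with training on the GPUs.
dataset = (tf.data.Dataset.from_tensor_slices((X_train, y_train))
           .shuffle(buffer_size=10_000)
           .batch(64)                      # global batch size for 2 GPUs at 32 per replica
           .prefetch(tf.data.AUTOTUNE))    # tf.data.experimental.AUTOTUNE on older TF 2.x

regressor.fit(dataset, epochs=10, callbacks=[cp_callback])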
You can try using CuDNNLSTM. It's much faster than the usual LSTM layer.
https://www.tensorflow.org/api_docs/python/tf/compat/v1/keras/layers/CuDNNLSTM
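As a side note (my addition, not part of the answer above): in TF 2, the standard tf.keras.layers.LSTM already dispatches to the fused cuDNN kernel when run on a GPU with its default arguments, so the compat CuDNNLSTM layer is usually unnecessary. A sketch of a cuDNN-eligible layer:

import tensorflow as tf

# Eligible for the cuDNN kernel: default activation='tanh', recurrent_activation='sigmoid',
# recurrent_dropout=0, unroll=False, use_bias=True.
layer = tf.keras.layers.LSTM(units=180, return_sequences=True)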

How to fix flatlined accuracy and NaN loss in TensorFlow image classification

I am currently experimenting with TensorFlow and machine learning, and as a challenge I decided to try and code a machine learning program, on the Kaggle website, that can analyze brain MRI scans and predict whether a tumour exists or not. I did so with the code below and began training the model. However, the output shown during training indicated that none of the loss values (training or validation) were proper numbers, and that the accuracies flatlined or fluctuated between the same two numbers each time.
I have looked at other posts but was unable to find anything that gave me tips. I changed my loss function (from sparse_categorical_crossentropy to binary_crossentropy), but none of these changes affected the values.
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import tensorflow as tf
from tensorflow import keras
import cv2
from random import shuffle

IMG_SIZE = 50
data_path = "../input/brain_tumor_dataset"
data = []
folders = os.listdir(data_path)
for folder in folders:
    for file in os.listdir(os.path.join(data_path, folder)):
        if file.endswith("jpg") or file.endswith("jpeg") or file.endswith("png") or file.endswith("JPG"):
            data.append(os.path.join(data_path, folder, file))
shuffle(data)

images = []
labels = []
for file in data:
    img = cv2.imread(file)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    images.append(img)
    if "Y" in file:
        labels.append(1)
    else:
        labels.append(0)

union_list = list(zip(images, labels))
shuffle(union_list)
images, labels = zip(*union_list)
images = np.array(images)
labels = np.array(labels)

train_img = images[:200]
train_lbl = labels[:200]
val_img = images[200:]
val_lbl = labels[200:]
train_img = np.array(train_img)
val_img = np.array(val_img)
train_img = train_img.astype("float32") / 255.0
val_img = val_img.astype("float32") / 255.0

model = keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation=tf.nn.relu, input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.MaxPooling2D((2, 2), strides=2),
    tf.keras.layers.Conv2D(64, (3, 3), padding='same', activation=tf.nn.relu),
    tf.keras.layers.MaxPooling2D((2, 2), strides=2),
    tf.keras.layers.Dropout(0.8),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(train_img, train_lbl, epochs=100, validation_data=(val_img, val_lbl))
This should give a result with increasing accuracy and decreasing loss, but the loss is nan and the accuracy is flatlined.
I managed to solve the problem. I looked at my code again and realized that my output layer only had one node. However, it needed to output the probabilities for two different categories ('yes' or 'no' for whether it is a tumour or not). Once I changed it to 2 nodes, the network began working properly and reached 95% accuracy on both the training and validation sets.
My validation accuracy still fluctuates a little between a few values, but this is most likely because I only have 23 images in the validation set. In order to decrease the fluctuations, however, I also decreased the epoch number to just 10. Everything seems to be great now.
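The answer does not show the exact code change; a minimal sketch of a two-node output consistent with it could look like this (switching the loss to sparse_categorical_crossentropy for the integer 0/1 labels is my assumption, not something the answer states):

model = keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation=tf.nn.relu, input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.MaxPooling2D((2, 2), strides=2),
    tf.keras.layers.Conv2D(64, (3, 3), padding='same', activation=tf.nn.relu),
    tf.keras.layers.MaxPooling2D((2, 2), strides=2),
    tf.keras.layers.Dropout(0.8),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(2, activation=tf.nn.softmax)  # two output nodes: tumour / no tumour
])
# Integer labels (0/1) with a two-way softmax pair with sparse_categorical_crossentropy (assumption).
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])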
It's likely that the cause of the flatlining accuracy is the NaN loss. I'd try to figure out at what point in the computation the loss becomes NaN (in inference? in the optimiser? in the loss calculation?). This post details some methods for outputting these intermediate values.
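Two built-in TensorFlow helpers that can help locate this (my addition, assuming TF 2.x):

import tensorflow as tf

# Stop training as soon as the loss becomes NaN, so the failing batch is easy to identify.
nan_callback = tf.keras.callbacks.TerminateOnNaN()

# Raise an error at the first op that produces NaN/Inf, pointing to where it happened.
tf.debugging.enable_check_numerics()

history = model.fit(train_img, train_lbl, epochs=100,
                    validation_data=(val_img, val_lbl),
                    callbacks=[nan_callback])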

Keras BatchNorm layer moving average for variance

I've been trying to understand the Keras BatchNorm layer behavior in my Keras NN model. One question I encountered was how the BN layer calculates the moving average of the 'variance'. My understanding is that Keras uses an exponentially weighted average to calculate the moving averages of both the mean and the variance from the training mini-batches. Regardless of that, after a really large number of epochs this moving average should approach the mean/variance of the training data set. But in my simple example, the 'variance' moving average is always different from the training data variance. Below is my code and output:
from keras.layers import Input, BatchNormalization
from keras.models import Model
from keras.optimizers import Adam, RMSprop
import numpy as np
X_input = Input(shape=(6,))
X = BatchNormalization(axis=-1)(X_input)
model = Model(inputs=X_input, outputs=X)
model.compile(optimizer=RMSprop(), loss='mean_squared_error')
np.random.seed(3)
train_data = np.random.random((5,6))
train_label = np.random.random((5,6))
model.fit(x=train_data, y=train_label, epochs=10000, batch_size=6, verbose=False)
bn_gamma, bn_beta, bn_mean, bn_var = model.layers[1].get_weights()
train_mean = np.mean(train_data, axis=0)
train_var = np.var(train_data, axis=0)
print("train_mean: {}".format(train_mean))
print("moving_mean: {}".format(bn_mean))
print("train_var: {}".format(train_var))
print("moving_var: {}".format(bn_var))
Below is the output:
train_mean: [0.42588575 0.47785879 0.32170309 0.49151921 0.355046 0.60104636]
moving_mean: [0.4258843 0.47785735 0.32170165 0.49151778 0.35504454 0.60104346]
train_var: [0.03949981 0.05228663 0.04027516 0.02522536 0.10261097 0.0838988 ]
moving_var: [0.04938692 0.06537427 0.05035637 0.03153942 0.12829503 0.10489936]
As you can see, train_mean is the same as the moving mean of the BN layer, but train_var (variance) is not. Can anyone please help here? Thanks.
If you look at the source code of batchnorm, you can see that the unbiased estimator of the population variance is used; here is the relevant line:
variance *= sample_size / (sample_size - (1.0 + self.epsilon))
In your case, the sample size is 5, so you should have train_var * 5. / 4 ≈ moving_var, which is the case (see the quick check below).
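A quick numeric check using the printed values (assuming the BatchNormalization default epsilon of 1e-3 was not changed):

import numpy as np

train_var = np.array([0.03949981, 0.05228663, 0.04027516, 0.02522536, 0.10261097, 0.0838988])
moving_var = np.array([0.04938692, 0.06537427, 0.05035637, 0.03153942, 0.12829503, 0.10489936])

sample_size = 5
epsilon = 1e-3  # Keras BatchNormalization default epsilon (assumption: default not changed)
corrected = train_var * sample_size / (sample_size - (1.0 + epsilon))

print(corrected)                                      # ~[0.0494 0.0654 0.0504 0.0315 0.1283 0.1049]
print(np.allclose(corrected, moving_var, atol=1e-4))  # True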