Arbitrary threshold for sigmoid activation function for CNN binary classification? - tensorflow

I am classifying review sentiment (0 or 1) using gensim Doc2Vec embeddings and a CNN in TensorFlow 2.2.0:
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim,
                              input_length=maxlen,
                              embeddings_initializer=Constant(embedding),
                              trainable=False),
    tf.keras.layers.Conv1D(128, 5, activation='relu'),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
              optimizer=tf.keras.optimizers.Adam(1e-4),
              metrics=['accuracy'])
history = model.fit(X_train, y_train,
                    epochs=8,
                    validation_split=0.3,
                    batch_size=10)
I then make predictions and convert my sigmoid probability to 0 or 1 using np.round():
predicted = model.predict(X_test)
predicted = np.round(predicted).astype(np.int32)
I get great results (~96% accuracy) indicating that the threshold of 0.5 is working as expected...
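For reference, an explicit comparison makes the intended 0.5 cut-off unambiguous (a minimal sketch doing the same thing as the rounding above):

import numpy as np

probs = model.predict(X_test)
# label a review positive (1) when the sigmoid output is at least 0.5
predicted = (probs >= 0.5).astype(np.int32)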
However, when I try to predict on a set of new data, the model still seems to separate bad reviews from good ones, but the split sits around approximately 0.0 rather than 0.5:
# Example sigmoid outputs for new test reviews:
good_review_1: 0.000052
good_review_2: 0.000098
bad_review_1: 0.112334
bad_review_2: 0.214934
Mind you, the model never saw X_test during training, yet it predicts on it just fine. It's only when I introduce a new set of review text strings that I run into incorrect predictions. For new reviews, the only preprocessing I do before calling model.predict() is feeding them through the same tokenizer used for training:
s = 'This is a sample bad review.'
s = tokenizer.texts_to_sequences(pd.Series(s))
s = pad_sequences(s, maxlen=maxlen, padding='pre', truncating='pre')
model.predict(s)
I've been trying to make sense of this conundrum, but I'm making little progress. I ran into this post, and it says:
Some sigmoid functions will have this at 0, while some will have it set to a different 'threshold'.
But this still doesn't explain why my model predicts correctly at np.round()'s 0.5 threshold for the X_test dataset (which it never trained on), yet fails at the same 0.5 threshold on the new dataset...
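(For context, if the linked post means thresholding the raw pre-activation, i.e. the logit, at 0, that is equivalent to thresholding the sigmoid output at 0.5, since sigmoid(0) = 0.5; a quick sanity check:)

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))  # 0.5 -- a logit threshold of 0 equals a probability threshold of 0.5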

Related

Why does model.summary() give shape None when input shape is clear and fixed?

The code below is adapted from the TensorFlow tutorial:
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(len(x_train), -1)
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1, verbose=0)
model.summary()
gives Output Shape (32, 10), whereas this code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1, verbose=0)
model.summary()
gives Output Shape (None, 10).
I'm aware that 32 is the batch size and 10 is the number of output classes. I'd just like to know where the None comes from when the input shape is clear and fixed.
The first dimension is the number of samples (batch_size). Since it should be flexible and work with any number of samples or batch sizes, it is represented as None. So, don't worry about it. Your model does not care about the first dimension.
For example, in your case the input shape is (28, 28) and the output is (10). The model treats (None, 28, 28) and (None, 10) as its input and output shapes. This means you can feed the model any number of samples, but each input sample must be (28, 28), and the model returns that same (unknown) number of samples, each with 10 labels. This is why you don't need to include the batch size in the input_shape parameter of your first layer.
Another example of the varying first dimension is training versus prediction. For training you might pass an input array of shape (10, 28, 28), meaning 10 samples of size (28, 28). But when you want a prediction from model.predict(), you might pass a single sample of shape (1, 28, 28). The first dimension varies over the model's life cycle, so it is set to None.
The first model shows (32, 10) because you called model.summary() after model.fit() and you didn't specify input_shape in the first layer, so Keras infers the shapes from the training procedure. model.fit() uses a default batch_size of 32, so that is the batch size you see.
But if you set input_shape, which should not include the batch size, the model is built with None as the first dimension.
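For instance, giving the first snippet's Dense layer an explicit input_shape (a minimal sketch; 784 = 28*28 flattened MNIST pixels) makes the summary report None instead of the fit-time batch size:

import tensorflow as tf

model = tf.keras.models.Sequential([
    # 784 = 28*28 flattened pixels; with input_shape set, the model is built
    # immediately and the batch dimension is reported as None
    tf.keras.layers.Dense(10, input_shape=(784,))
])
model.summary()  # Output Shape: (None, 10)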

DCNN for Binary Classification Converges to 50%/50%

I am new to Keras and have never asked a question here, so please excuse any rookie mistakes I might make.
What I am trying to do is to implement a binary classifier, operating on images (CTs to be exact).
My model is based on a pretrained net that performed classification on 14 classes (see this wonderful repo: https://github.com/jrzech/reproduce-chexnet).
As the saying goes, "crawl before you walk, walk before you run", my current humble goal is to achieve overfitting of the network on some 100 examples.
My current problem is that the net converges to a strange solution, with the output neuron (I'm using sigmoid) always very close to 50% and 100% of the predictions going to one class (so I'm stuck at about 50% accuracy). My loss and accuracy do not change at all after epoch 1 or so.
Things I tried/considered:
using different optimizers (I used the Adam optimizer and then the SGD shown below).
also trying categorical crossentropy (with a softmax layer at the end instead of sigmoid, since some say it might perform better; see "Keras' fit_generator() for binary classification predictions always 50%"). A sketch of this variant follows this list.
adding an additional dense layer (I thought I might be underfitting somehow).
changing the batch size to 128 (and overfitting on 1000 examples).
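For reference, a rough sketch of the softmax variant mentioned above (it reuses the base_model built in get_model() shown later in the question; values are illustrative, not the exact code that was run, and the generator would need class_mode='categorical' instead of 'binary'):

import keras

x = base_model.output
x = keras.layers.Dense(1024, activation='relu')(x)
x = keras.layers.BatchNormalization(trainable=True)(x)
# two output units with softmax instead of a single sigmoid unit
predictions = keras.layers.Dense(2, activation='softmax')(x)
model = keras.models.Model(inputs=base_model.inputs, outputs=predictions)
model.compile(optimizer=keras.optimizers.SGD(lr=1e-6, momentum=0.9, nesterov=True),
              loss='categorical_crossentropy',
              metrics=['accuracy'])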
All failed miserably, so I'm kind of at a loss here. I would be happy to provide more details if needed, and would appreciate any help or insights you might have. Major parts of my code are attached. Note that the ModelFactory() I'm loading and using is the pretrained one.
Thanks in advance!
data generator code
rescale = 1./255.0
target_size = (224, 224)
batch_size = 128

train_datagen = ImageDataGenerator(
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    rescale=rescale
)
train_generator = train_datagen.flow_from_dataframe(
    train_csv,
    directory=train_path,
    x_col='image_name',
    y_col='class',
    target_size=target_size,
    color_mode='rgb',
    class_mode='binary',
    batch_size=batch_size,
    shuffle=True,
)
my model
def get_model():
    file_name = '/content/brucechou1983_CheXNet_Keras_0.3.0_weights.h5'
    base_model = ModelFactory().get_model(class_names=[str(i) for i in range(14)],
                                          weights_path=file_name)
    x = base_model.output
    x = keras.layers.Dense(1024, activation='relu')(x)
    x = keras.layers.BatchNormalization(trainable=True)(x)
    predictions = keras.layers.Dense(1, activation='sigmoid')(x)
    model = keras.models.Model(inputs=base_model.inputs, outputs=predictions)
    for layer in base_model.layers:
        layer.trainable = False
    model.summary()
    return model
training the model
class_weight = sklearn.utils.class_weight.compute_class_weight('balanced',
                                                               np.unique(train_csv['class']),
                                                               train_csv['class'])
model.compile(keras.optimizers.SGD(lr=1e-6, decay=1e-6, momentum=0.9, nesterov=True),
              loss='binary_crossentropy',
              metrics=['binary_accuracy'])
history = model.fit_generator(
    train_generator,
    steps_per_epoch=len(train_generator),
    epochs=10,
    verbose=1,
    class_weight=class_weight
)

Keras model not learning and predicting only one class out of three classes

I'm new to the field of deep learning and am currently working on this competition for predicting earthquake damage to buildings.
The model I created starts at an accuracy of 0.56 and stays there for however many epochs I let it run. When finished, the model only ever predicts one of the three classes (which I one-hot encoded into a dataframe with three columns). Changing the number of layers, optimizers, data preparation, or dropout won't change anything. Even trying to overfit the model by over-parameterizing the network still gives the same accuracy and a non-learning model.
What am I doing wrong?
This is my code:
model = keras.models.Sequential()
model.add(keras.layers.Dense(64, input_dim = 85, activation = "relu"))
keras.layers.Dropout(0.3)
model.add(keras.layers.Dense(128, activation = "relu"))
keras.layers.Dropout(0.3)
model.add(keras.layers.Dense(256, activation = "relu"))
keras.layers.Dropout(0.3)
model.add(keras.layers.Dense(512, activation = "relu"))
model.add(keras.layers.Dense(3, activation = "softmax"))
adam = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(optimizer=adam,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(traindata, trainlabels,
                    epochs=5,
                    validation_split=0.2,
                    verbose=1)
There's nothing visibly wrong with your model, but it may be too heavy to learn any useful features.
Try normalizing your input with StandardScaler (https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html).
Start with only 2 layers and a small number of neurons.
Increase the batch size and try learning-rate scheduling.
Watch the validation accuracy and stop when it starts to overfit.
For a 3-class classification, 56% accuracy is better than the baseline; remember this is a competition, so the data is not a dummy playground set where you can expect 90% accuracy from an MLP on the first try.
Finally, try hyperparameter optimization with a tuner.
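A minimal sketch of those suggestions, reusing the traindata / trainlabels names and the 85 input features from the question (all hyperparameter values below are illustrative starting points, not tuned):

import keras
from sklearn.preprocessing import StandardScaler

# 1. normalize the inputs
scaler = StandardScaler()
traindata_scaled = scaler.fit_transform(traindata)

# 2. a much smaller model: only 2 hidden layers with few neurons
model = keras.models.Sequential([
    keras.layers.Dense(64, input_dim=85, activation='relu'),
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer=keras.optimizers.Adam(lr=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# 3. larger batch size, learning-rate scheduling, and stopping when validation
#    accuracy stops improving (the metric is named 'val_acc' on older Keras versions)
callbacks = [
    keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3),
    keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5, restore_best_weights=True),
]
history = model.fit(traindata_scaled, trainlabels,
                    epochs=100,
                    batch_size=256,
                    validation_split=0.2,
                    callbacks=callbacks)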

Why I'm getting bad result with Keras vs random forest or knn?

I'm learning deep learning with Keras and trying to compare the results (accuracy) with classic machine learning algorithms from sklearn (i.e. random forest, k-neighbors).
It seems that with Keras I'm getting the worst results.
I'm working on a simple classification problem: the iris dataset.
My Keras code looks like this:
samples = datasets.load_iris()
X = samples.data
y = samples.target
df = pd.DataFrame(data=X)
df.columns = samples.feature_names
df['Target'] = y
# prepare data
X = df[df.columns[:-1]]
y = df[df.columns[-1]]
# hot encoding
encoder = LabelEncoder()
y1 = encoder.fit_transform(y)
y = pd.get_dummies(y1).values
# split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)
# build model
model = Sequential()
model.add(Dense(1000, activation='tanh', input_shape = ((df.shape[1]-1),)))
model.add(Dense(500, activation='tanh'))
model.add(Dense(250, activation='tanh'))
model.add(Dense(125, activation='tanh'))
model.add(Dense(64, activation='tanh'))
model.add(Dense(32, activation='tanh'))
model.add(Dense(9, activation='tanh'))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train)
score, acc = model.evaluate(X_test, y_test, verbose=0)
#results:
#score = 0.77
#acc = 0.711
I have tried adding layers, changing the number of units per layer, and changing the activation function (to relu), but it seems the results never go above 0.85.
With sklearn's random forest or k-neighbors I'm getting results (on the same dataset) above 0.95.
What am I missing?
With sklearn I got good results with little effort, while with Keras I made a lot of tweaks and still can't match sklearn's results. Why is that?
How can I get the same results with Keras?
In short, you need:
ReLU activations
Simpler model
Data normalization
More epochs
In detail:
The first issue here is that nowadays we never use activation='tanh' for the intermediate network layers. In such problems, we practically always use activation='relu'.
The second issue is that you have built quite a large Keras model, and it may very well be that with only 100 iris samples in your training set you have too little data to train such a large model effectively. Try drastically reducing both the number of layers and the number of nodes per layer. Start simpler.
Large neural networks really thrive when we have lots of data, but in cases of small datasets, like here, their expressiveness and flexibility may become a liability instead, compared with simpler algorithms, like RF or k-nn.
The third issue is that, in contrast to tree-based models, like Random Forests, neural networks generally require normalizing the data, which you don't do. Truth is that knn also requires normalized data, but in this special case, since all iris features are in the same scale, it does not affect the performance negatively.
Last but not least, you seem to run your Keras model for only one epoch (the default value if you don't specify anything in model.fit); this is somewhat equivalent to building a random forest with a single tree (which, BTW, is still much better than a single decision tree).
All in all, with the following changes in your code:
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
model = Sequential()
model.add(Dense(150, activation='relu', input_shape = ((df.shape[1]-1),)))
model.add(Dense(150, activation='relu'))
model.add(Dense(y.shape[1], activation='softmax'))
model.fit(X_train, y_train, epochs=100)
and everything else as is, we get:
score, acc = model.evaluate(X_test, y_test, verbose=0)
acc
# 0.9333333373069763
We can do better: use slightly more training data and stratify them, i.e.
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=0.20,  # a few more samples for training
                                                    stratify=y)
And with the same model & training epochs you can get a perfect accuracy of 1.0 in the test set:
score, acc = model.evaluate(X_test, y_test, verbose=0)
acc
# 1.0
(Details might differ due to some randomness imposed by default in such experiments).
Adding some dropout might help you improve accuracy. See TensorFlow's documentation for more information.
Essentially, adding a Dropout layer is very similar to how you added those Dense() layers:
model.add(Dropout(0.2))
Note: the parameter 0.2 means that 20% of the layer's outputs are randomly dropped during training to reduce the interdependencies between them, which reduces overfitting.
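For instance, slotting dropout between the hidden layers of the smaller model above might look like this (a sketch, with df and y as defined in the question and 0.2 just an example rate):

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(150, activation='relu', input_shape=((df.shape[1] - 1),)))
model.add(Dropout(0.2))  # randomly drop 20% of this layer's outputs during training
model.add(Dense(150, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))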

Keras model.load_weights(WEIGHTS) provide inaccurate results

I'm training an LSTM RNN for description generation using Keras (TensorFlow backend) with the MSCOCO dataset. While training, the model reached 92% accuracy with a loss of 0.79. I also tested the description generation at each epoch during training, and the model produced very good predictions with a meaningful description when given a random word.
However, after training I loaded the model using Keras' model.load_weights(WEIGHTS) method and tried to create a description by giving it a random word as I had done before. But now the model does not produce a meaningful description; it just outputs random words with no meaning at all.
Can anyone tell me what the issue could be?
My model parameters are:
10 LSTM layers, Learning rate: 0.04, Activation: Softmax, Loss Function: Categorical Cross entropy, Optimizer: rmsprop
UPDATE:
This is my model:
model = Sequential()
model.add(LSTM(HIDDEN_DIM, input_shape=(None, VOCAB_SIZE), return_sequences=True))
for i in range(LAYER_NUM - 1):
    model.add(LSTM(HIDDEN_DIM, return_sequences=True))
model.add(TimeDistributed(Dense(VOCAB_SIZE)))
model.add(Activation('softmax'))
model.add(Dropout(0.04))
model.compile(loss="categorical_crossentropy", optimizer="rmsprop", metrics=['accuracy'])
This is how I train & save my model weights (I generate a description at each epoch to test the accuracy):
model.fit(X, Y, batch_size=BATCH_SIZE, verbose=1, epochs=EPOCHS)
EPOCHS += 1
generate_description(model, GENERATE_LENGTH, VOCAB_SIZE, index_to_word)
model.save_weights('checkpoint_layer_{}_hidden_{}_epoch_{}.hdf5'.format(LAYER_NUM, HIDDEN_DIM, EPOCHS))
This is how I load my model (WEIGHTS = my saved model):
model.load_weights(WEIGHTS)
desc = generate_description(model, GENERATE_LENGTH, VOCAB_SIZE, index_to_word)
print(desc)
I provide a randomly generated vector to my model for testing. This is how I generate the description:
def generate_description(model, length, vocab_size, index_to_word):
    index = [np.random.randint(vocab_size)]
    Y_word = [index_to_word[index[-1]]]
    X = np.zeros((1, length, vocab_size))
    for i in range(length):
        # Append the last predicted word to the next timestep
        X[0, i, :][index[-1]] = 1
        print(index_to_word[index[-1]])
        index = np.argmax(model.predict(X[:, :i + 1, :])[0], 1)
        Y_word.append(index_to_word[index[-1]])
    Y_word.append(' ')
    return ('').join(Y_word)