Hi, I am applying a NN to tweets using doc2vec. Can anyone tell me why my accuracy stays at 0.0 in every epoch? Might the input vectors be the problem?
Sorry for asking beginner questions here. I'm not that advanced yet :-/
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

np.random.seed(seed)

# simple feed-forward classifier on the 200-dim doc2vec features
model_d2v_01 = Sequential()
model_d2v_01.add(Dense(64, activation='relu', input_dim=200))
model_d2v_01.add(Dense(1, activation='sigmoid'))
model_d2v_01.compile(optimizer='adam',
                     loss='binary_crossentropy',
                     metrics=['accuracy'])
model_d2v_01.fit(train_vecs_ugdbow_tgdmm, y_train,
                 validation_data=(validation_vecs_ugdbow_tgdmm, y_validation),
                 epochs=10, batch_size=32, verbose=2)
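With binary_crossentropy and a sigmoid output, an accuracy stuck at exactly 0.0 usually points at the labels rather than the input vectors, e.g. values that are not 0/1. A quick sanity check, assuming y_train and the vectors are NumPy arrays:

print(np.unique(y_train))              # should print [0 1] for binary_crossentropy
print(train_vecs_ugdbow_tgdmm.shape)   # should be (n_samples, 200) to match input_dim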
I'm trying to combine a DBOW and a DMM model (vocab).
I was more or less following this tutorial: https://github.com/tthustla/twitter_sentiment_analysis_part10/blob/master/Capstone_part10.ipynb
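In tutorials like the one above, the combined vectors are just the per-document vectors of the two Doc2Vec models concatenated. A minimal sketch of that step, with hypothetical model names model_ug_dbow and model_tg_dmm that each produce 100-dimensional vectors (giving the 200-dim input the network expects) and a hypothetical 'all_<i>' tagging scheme:

import numpy as np

def get_concat_vectors(model1, model2, corpus, size):
    # stack each document's DBOW and DMM vectors side by side
    vecs = np.zeros((len(corpus), size))
    for i in range(len(corpus)):
        tag = 'all_' + str(i)  # hypothetical tag scheme used when training the models
        vecs[i] = np.append(model1.docvecs[tag], model2.docvecs[tag])
    return vecs

train_vecs_ugdbow_tgdmm = get_concat_vectors(model_ug_dbow, model_tg_dmm, x_train, 200)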
I have a project to detect faces using tiny YOLOv1 with Keras and TensorFlow, and I have to train the model from scratch. When I train the model on the dataset https://pixeldrain.com/u/wnUrWG2k, the loss value does not decrease much in each epoch, and when I plot y_pred and y_validation they are both straight lines.
model = Model(inputs=inputs, outputs=yolo_outputs)
model.compile(loss=yolo_loss, optimizer='adam')
history = model.fit(X_train, y_train, validation_split=0.33, epochs=5, batch_size=10)
This is my model loss plot: [loss plot image]
Did you
normalize the input features?
double-check your learning rate?
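A minimal sketch of both checks, assuming X_train holds raw pixel values in [0, 255]; the learning rate shown is illustrative, not a recommendation:

from keras.optimizers import Adam

# scale inputs into [0, 1] so gradients are better conditioned
X_train = X_train.astype('float32') / 255.0

# try an explicit, smaller learning rate than the Adam default of 1e-3
model.compile(loss=yolo_loss, optimizer=Adam(lr=1e-4))
history = model.fit(X_train, y_train, validation_split=0.33, epochs=5, batch_size=10)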
I am new to Keras and have never asked a question here, so excuse any rookie mistakes I might make.
What I am trying to do is implement a binary classifier operating on images (CTs, to be exact).
My model is based on a pretrained net that performed classification on 14 classes (see this wonderful repo: https://github.com/jrzech/reproduce-chexnet).
As the saying goes, "crawl before you walk, walk before you run", so my current humble goal is to overfit the network on some 100 examples.
My current problem is that the net converges to a strange solution: the output neuron (I'm using a sigmoid) is always very close to 50%, with 100% of the predictions going to one class (that way I'm stuck at about 50% accuracy). My loss and accuracy do not change at all from epoch 1 or so.
Things I tried/considered:
using different optimizers (I used Adam, and then the SGD shown below);
switching to categorical crossentropy (with a softmax layer at the end instead of a sigmoid), since some say it might perform better (see Keras' fit_generator() for binary classification predictions always 50%); a sketch of this variant follows the list;
adding an additional dense layer (I thought I might somehow be underfitting);
changing the batch size to 128 (and overfitting on 1000 examples).
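A minimal sketch of the softmax variant mentioned above, reusing the keras import and the x tensor and base_model from get_model() shown below; the labels would then need one-hot encoding (class_mode='categorical' in the generator):

# two-way softmax head instead of a single sigmoid neuron
predictions = keras.layers.Dense(2, activation='softmax')(x)
model = keras.models.Model(inputs=base_model.inputs, outputs=predictions)
model.compile(keras.optimizers.SGD(lr=1e-6, momentum=0.9, nesterov=True),
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])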
All failed miserably, so I'm kind of at a loss here. I would be happy to provide more details if needed, and I would appreciate any help or insights you might have. Major parts of my code are attached. Note that the ModelFactory() I'm loading and using is the pretrained one.
Thanks in advance!
data generator code
rescale = 1./255.0
target_size = (224, 224)
batch_size = 128

train_datagen = ImageDataGenerator(
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    rescale=rescale
)

train_generator = train_datagen.flow_from_dataframe(
    train_csv,
    directory=train_path,
    x_col='image_name',
    y_col='class',
    target_size=target_size,
    color_mode='rgb',
    class_mode='binary',
    batch_size=batch_size,
    shuffle=True,
)
my model
def get_model():
    file_name = '/content/brucechou1983_CheXNet_Keras_0.3.0_weights.h5'
    base_model = ModelFactory().get_model(class_names=[str(i) for i in range(14)],
                                          weights_path=file_name)
    x = base_model.output
    x = keras.layers.Dense(1024, activation='relu')(x)
    x = keras.layers.BatchNormalization(trainable=True)(x)
    predictions = keras.layers.Dense(1, activation='sigmoid')(x)
    model = keras.models.Model(inputs=base_model.inputs, outputs=predictions)
    # freeze the pretrained backbone; only the new head trains
    for layer in base_model.layers:
        layer.trainable = False
    model.summary()
    return model
training the model
# Keras expects class_weight as a {class_index: weight} dict, while
# compute_class_weight returns an array, so convert it
class_weights = sklearn.utils.class_weight.compute_class_weight(
    'balanced', np.unique(train_csv['class']), train_csv['class'])
class_weight = dict(enumerate(class_weights))

model.compile(keras.optimizers.SGD(lr=1e-6, decay=1e-6, momentum=0.9, nesterov=True),
              loss='binary_crossentropy',
              metrics=['binary_accuracy'])

history = model.fit_generator(
    train_generator,
    steps_per_epoch=len(train_generator),
    epochs=10,
    verbose=1,
    class_weight=class_weight
)
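Since the stated goal is to overfit ~100 examples, one common sanity check is to drop the augmentation and use a much larger learning rate than 1e-6. A sketch, with the learning rate and epoch count purely illustrative:

plain_datagen = ImageDataGenerator(rescale=1./255.0)  # no augmentation
overfit_generator = plain_datagen.flow_from_dataframe(
    train_csv.head(100), directory=train_path,
    x_col='image_name', y_col='class',
    target_size=(224, 224), class_mode='binary',
    batch_size=16, shuffle=True)

model.compile(keras.optimizers.Adam(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['binary_accuracy'])
model.fit_generator(overfit_generator,
                    steps_per_epoch=len(overfit_generator),
                    epochs=50, verbose=1)

If training accuracy cannot approach 100% even in this setting, the problem is upstream of the hyperparameters (e.g. the labels or the frozen backbone).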
I am working on a document classification problem.
Multi-label classification: 20 different labels, 1920 documents in training and 480 in validation. The model is a CNN with FastText embeddings, and I use a logistic regression model with n-grams as the baseline.
The problem is that the baseline model gives an F1-score of 0.36 while the CNN only gives 0.3.
The architecture I use is from here:
https://www.kaggle.com/vsmolyakov/keras-cnn-with-fasttext-embeddings
I have been doing some parameter tuning, and the current best parameters are: dropout 0.25, learning rate 0.001, trainable embeddings false, 128 filters, prediction threshold 0.15, and kernel size 9.
Do you have ideas for parameters to pay special attention to, changes to the architecture, or anything else that might improve the F1-score?
# Parameters
BATCH_SIZE = 16
DROP_OUT = 0.25
N_EPOCHS = 20
N_FILTERS = 128
TRAINABLE = False
LEARNING_RATE = 0.001
N_DIM = 32
KERNEL_SIZE = 9
# Create model
model = Sequential()
model.add(Embedding(NB_WORDS, EMBED_DIM, weights=[embedding_matrix],
                    input_length=MAX_SEQ_LEN, trainable=TRAINABLE))
model.add(Conv1D(N_FILTERS, KERNEL_SIZE, activation='relu', padding='same'))
model.add(MaxPooling1D(2))
model.add(Conv1D(N_FILTERS, KERNEL_SIZE, activation='relu', padding='same'))
model.add(GlobalMaxPooling1D())
model.add(Dropout(DROP_OUT))
model.add(Dense(N_DIM, activation='relu', kernel_regularizer=regularizers.l2(1e-4)))
model.add(Dense(N_LABELS, activation='sigmoid')) #multi-label (k-hot encoding)
adam = optimizers.Adam(lr=LEARNING_RATE, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
model.summary()
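For reference, a minimal sketch of how the prediction threshold mentioned above (0.15) turns the sigmoid outputs into k-hot predictions and how they can be scored; X_val and y_val are hypothetical stand-ins for the validation data and its k-hot labels:

import numpy as np
from sklearn.metrics import f1_score

y_prob = model.predict(X_val)          # shape (n_docs, N_LABELS), values in (0, 1)
y_pred = (y_prob >= 0.15).astype(int)  # k-hot predictions via the tuned threshold
print(f1_score(y_val, y_pred, average='micro'))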
Edit
I think I got some wrong hyperparameters by fixing the number of epochs to 20 during tuning. I am now trying with a stopping criterion; the model usually converges around 30-35 epochs. It seems a dropout of 0.5 works better, and I am currently tuning the batch size. If somebody has experience with or knowledge about the relationship between epochs and the other hyperparameters, feel free to share.
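A minimal sketch of such a stopping criterion using Keras' built-in callback, assuming validation data is passed to fit; the patience value is illustrative (restore_best_weights requires Keras >= 2.2.3):

from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=100, batch_size=BATCH_SIZE,
          callbacks=[early_stop])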
A thing you should consider in general is whether the data is imbalanced and how your model performs for each class (using, for example, sklearn.metrics.confusion_matrix).
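A minimal per-class check along those lines; since this is a multi-label problem, one 2x2 matrix per label is the natural form (y_val and y_pred as in the threshold sketch above; multilabel_confusion_matrix requires scikit-learn >= 0.21):

from sklearn.metrics import multilabel_confusion_matrix

# one [[tn, fp], [fn, tp]] matrix per label
for label, cm in enumerate(multilabel_confusion_matrix(y_val, y_pred)):
    print(label, cm.ravel())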
I think the dataset (2000 documents over 20 classes) might not be big enough for deep learning to work from scratch. You could consider augmenting your dataset, or you could start by fine-tuning a pretrained language model for your task; see https://github.com/huggingface/pytorch-openai-transformer-lm. That could help you overcome the dataset-size issue in general.
I'm trying to experiment with a simple TensorFlow model built with Keras, but I can't figure out why I'm getting such poor predictions. Here's the model:
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense

x_train = np.asarray([[.5], [1.0], [.4], [5], [25]])
y_train = np.asarray([.25, .5, .2, 2.5, 12.5])

opt = keras.optimizers.Adam(lr=0.01)

model = Sequential()
model.add(Dense(1, activation="relu", input_shape=x_train.shape[1:]))
model.add(Dense(9, activation="relu"))
model.add(Dense(1, activation="relu"))

model.compile(loss='mean_squared_error', optimizer=opt, metrics=['mean_squared_error'])
model.fit(x_train, y_train, shuffle=True, epochs=10)
print(model.predict(np.asarray([[5]])))
As you can see, it should learn to divide the input by two. However, the loss is 32.5705 and over a few epochs it refuses to change whatsoever (even if I do something crazy like 100 epochs, it's always that loss). Is there anything you can see that I'm doing horribly wrong here? The prediction for any value seems to be 0.
It also seems to switch randomly between performing as expected and the weird behavior described above. I re-ran it and got a loss of 0.0019 after 200 epochs, but if I re-run it with all the same parameters a second later, the loss stays at 30 like before. What's going on here?
Some reasons that I can think of:
the training set is too small
the learning rate is high
the last layer should just be a linear layer
for some runs the ReLU units are dying (see the dying ReLU problem) and your network weights don't change after that, so you see the same loss value.
In this case a tanh activation might provide better conditioning for optimization.
I made a few changes to your code based on what I commented, and I get decent results.
import keras
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# large synthetic training set instead of the original 5 points
x_train = np.random.random((50000, 1))
y_train = x_train / 2.  # TODO: add a small amount of noise to y

opt = keras.optimizers.Adam(lr=0.0005, clipvalue=0.5)

model = Sequential()
model.add(Dense(1, activation="tanh", input_shape=x_train.shape[1:]))
model.add(Dense(9, activation="tanh"))
model.add(Dense(1, activation=None))  # linear output layer for regression

model.compile(loss='mean_squared_error', optimizer=opt, metrics=['mean_squared_error'])
model.fit(x_train, y_train, shuffle=True, epochs=10)
print(model.predict(np.asarray([[.4322]])))
Output:
[[0.21410337]]
I am using Keras and an RNN to classify Slack text data on whether the text is reaction-worthy or not (1 = emoji, 0 = no emoji). I have removed usernames and URLs from the text, and dropped duplicates that had different target variables.
I am not able to get the model to generalize to unseen data. The train/validation losses look good and continually decrease, but the accuracy of the validation set only decreases.
I am using pretrained GloVe word embeddings, since my training set is only about 25,000 sentences.
I have added additional layers, changed my regularization value, and increased dropout, but I get similar results. Is my model not complex enough to generalize the data? The times I added additional layers, they were much smaller but deeper, because the training time was about 2 minutes per epoch.
Any insight would be appreciated.
embedding_layer = Embedding(len(word_index) + 1,
                            100,
                            weights=[embeddings_matrix],
                            input_length=max_message_length,
                            embeddings_regularizer=l2(0.001),
                            trainable=True)
# Creating the Model
model = Sequential()
model.add(embedding_layer)
model.add(Convolution1D(filters=32, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.7))
model.add(layers.GRU(128))
model.add(Dropout(0.7))
model.add(Dense(1, activation='sigmoid'))
# Compiling the model with our given Optimizer
optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.000025)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
print(model.summary())
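Since validation accuracy falls while training continues, this reads as overfitting rather than insufficient capacity. One standard experiment is to freeze the pretrained GloVe weights (far fewer trainable parameters for ~25,000 sentences) and stop on validation loss; a sketch, where x_train/y_train are hypothetical stand-ins for the prepared data, the patience value is illustrative, and restore_best_weights requires Keras >= 2.2.3:

from keras.callbacks import EarlyStopping

embedding_layer = Embedding(len(word_index) + 1,
                            100,
                            weights=[embeddings_matrix],
                            input_length=max_message_length,
                            trainable=False)  # keep the GloVe vectors fixed

early_stop = EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)
model.fit(x_train, y_train,
          validation_split=0.2,
          epochs=30, batch_size=64,
          callbacks=[early_stop])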