TFLite 200 MB big - TensorFlow

I am building a model which should classify flowers, so I created a model with TensorFlow:
model = keras.Sequential([
    keras.layers.Conv2D(128, (3, 3), activation='relu', input_shape=(imageShape[0], imageShape[1], 3)),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Dropout(0.5),
    keras.layers.Conv2D(256, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(512, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(280, activation='relu'),
    keras.layers.Dense(4, activation='softmax')
])

opt = tf.keras.optimizers.RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
While training I save checkpoints as .h5
checkpoint = ModelCheckpoint("preSaved" + str(time.time()) + ".h5", monitor='val_loss', verbose=1,
                             save_best_only=True, save_weights_only=False, mode='auto', period=1)
Now I have an epoch with a pretty low loss and want to convert that checkpoint to .tflite to upload it to Firebase (and use it in an Android Studio app).
import tensorflow as tf

new_model = tf.keras.models.load_model(filepath="model.h5")
tflite_converter = tf.lite.TFLiteConverter.from_keras_model(new_model)
tflite_converter.inference_type = tf.uint8
tflite_converter.default_ranges_stats = [min_value, max_value]
tflite_converter.quantized_input_stats = {"conv2d_6_input_6:0": (mean, std)}
tflite_converter.post_training_quantize = True
tflite_model = tflite_converter.convert()
open("tf_lite_model.tflite", "wb").write(tflite_model)
The .h5 file is about 335 MB and the final .tflite is 160 MB. But Firebase only allows .tflite models up to 60 MB, and if I use a local model it takes minutes to load. I read that .tflite files are usually smaller.
Is there a problem with my model or with how I convert it to .tflite?

The model size is largely determined by your model architecture (the different layers that make up the model and the number of parameters in each layer). You can experiment with changing those to get a smaller model.
Here is a much simpler architecture for an image classification model. Keep in mind, of course, that a smaller model may have lower accuracy than a more sophisticated version.
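For illustration, a sketch of what such a smaller model could look like; the exact filter counts and layer sizes are assumptions for demonstration, not taken from the original answer. Using GlobalAveragePooling2D instead of Flatten in particular keeps the first Dense layer's parameter count low.
# Sketch of a smaller flower classifier (filter counts and layer sizes are illustrative assumptions).
model = keras.Sequential([
    keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(imageShape[0], imageShape[1], 3)),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(32, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    # GlobalAveragePooling2D instead of Flatten keeps the following Dense layer small.
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(4, activation='softmax')
])
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])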

Related

OpenCV DNN cannot read the .onnx file if there is a GAP layer in the original Keras model

I can't read a TensorFlow Keras model converted to .onnx with the OpenCV DNN module if the original model has a GlobalAveragePooling2D layer instead of a Flatten layer in the fully connected part.
I'm trying to use a ResNet50 (it doesn't work with simpler models either), importing ImageNet weights, and instead of a Flatten layer I'm using a GlobalAveragePooling2D as described below:
from keras import applications, optimizers
from keras.models import Sequential, Model
from keras.layers import GlobalAveragePooling2D, Flatten, Dense

pretrained_Model = applications.ResNet50(include_top=False, weights="imagenet",
                                         input_shape=(img_rows, img_cols, img_channel))

add_model = Sequential()
add_model.add(GlobalAveragePooling2D(input_shape=pretrained_Model.output_shape[1:]))
# add_model.add(Flatten(input_shape=pretrained_Model.output_shape[1:]))
add_model.add(Dense(256, activation='relu'))
add_model.add(Dense(1, activation='sigmoid'))

model = Model(inputs=pretrained_Model.input, outputs=add_model(pretrained_Model.output))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])
When I convert to .onnx there is no noticeable error, but when trying to read it with cv2.dnn.readNetFromONNX() this is the error that comes up:
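For reference, the conversion and loading path being described presumably looks roughly like the sketch below. This assumes tf2onnx is the converter and uses an illustrative file name; the question does not show the exact calls used.
import cv2
import tf2onnx

# Export the Keras model to ONNX (assuming tf2onnx; the actual converter used is not shown above).
tf2onnx.convert.from_keras(model, output_path="resnet50_gap.onnx")

# Reading the exported file back with OpenCV DNN is where the error reportedly appears.
net = cv2.dnn.readNetFromONNX("resnet50_gap.onnx")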

Why are encoded representations bad for classification?

Given a pre-trained, well-performing auto-encoder, when I train a classifier on the encodings it produces, the classifier does very poorly. In particular, it does much worse than training a classifier on the normal (unencoded) inputs.
However, when I fine-tune the encoder based on the classification loss, the classifier does quite well.
Why are encoded representations bad for classification?
Details: I'm working on CIFAR-100 and trying to classify the coarse image labels, i.e. 20 classes (but I think I had the same problem when doing classification on CIFAR-10). The classifier has 5 layers and I'm using dropout:
classifier = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation='relu', name='classifier_hidden_1'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(256, activation='relu', name='classifier_hidden_2'),
    tf.keras.layers.Dense(128, activation='relu', name='classifier_hidden_3'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation='relu', name='classifier_hidden_4'),
    tf.keras.layers.Dense(num_classes, activation=None, name='classifier_out'),
], name='classifier')
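For context, the two setups described above could be sketched as follows; the encoder model name and the data variables are assumptions, and the sparse loss assumes integer coarse labels.
# `encoder` stands for the pre-trained auto-encoder's encoder half (the name is an assumption).
combined = tf.keras.Sequential([encoder, classifier])

# Setup 1: frozen encoder, train only the classifier on fixed encodings.
encoder.trainable = False
# Setup 2: fine-tuning, let the classification loss also update the encoder weights.
# encoder.trainable = True

combined.compile(optimizer='adam',
                 loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),  # classifier_out has no activation
                 metrics=['accuracy'])
combined.fit(x_train, y_coarse_train, epochs=10, validation_split=0.1)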

Constrained Optimization in TensorFlow

I have a trained classifier neural network in Keras. Let the neural network be f(x). I want to find the vectors x with ||x||^2 = 1 such that f(x) is maximized. I trained my neural network with Keras as follows:
model = Sequential()
model.add(Dense(500, activation='sigmoid'))
model.add(Dense(500, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', auc])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=2, verbose = 1, callbacks=[earlyStopping])
I want to know if there is a way to solve this constrained optimization problem once my neural network has already been trained. There is scipy.optimize, which can do this for general functions. Is there a way to do this for a neural network? Please include a code sample.
If I understand you correctly, you have finished training your neural network and would like to find the input x which maximizes the probability of it being in a certain class (where the output is close to 1.0).
You could write a small function to assess the performance of your network using the predict_proba() method to get classification probabilities on test data, and then optimise this function using scipy:
import numpy as np
import scipy.optimize

model = Sequential()
model.add(Dense(500, activation='sigmoid'))
model.add(Dense(500, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', auc])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=2, verbose=1, callbacks=[earlyStopping])

def f(x):
    # predict expects a batch, so reshape the single input vector to (1, n_features)
    prediction = model.predict_proba(x.reshape(1, -1))
    return -prediction[0, 0]  # negate so that minimizing maximizes the class probability

a = scipy.optimize.minimize(f, x0=np.random.randn(500))
optimal_x = a.x
optimal_x will be the input x which maximises the certainty with which your classifier puts it in one specific class.
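Note that the call above is unconstrained. If the unit-norm condition ||x||^2 = 1 from the question has to be enforced explicitly, one option (an illustrative sketch, not part of the original answer) is to pass an equality constraint and use a solver that supports constraints, such as SLSQP:
# Enforce ||x||^2 = 1 via an equality constraint; SLSQP handles equality constraints.
unit_norm = {'type': 'eq', 'fun': lambda x: np.dot(x, x) - 1.0}
a = scipy.optimize.minimize(f, x0=np.random.randn(500), method='SLSQP', constraints=[unit_norm])
optimal_x = a.x / np.linalg.norm(a.x)  # re-normalize to guard against small numerical drift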

Why is my Neural Net not learning

I have a CNN that I'm trying to train and I can't figure out why it's not learning. It has 32 classes, which are different types of clothes, with about 1000 images in each folder.
The issue is that this is the result at the end of training, which takes about 9 hours on my GPU:
loss: 3.3403 - acc: 0.0542 - val_loss: 3.3387 - val_acc: 0.0534
If anyone could give me directions on how to get this network to train better I would be grateful.
# dimensions of our images.
img_width, img_height = 228, 228

train_data_dir = 'Clothes/train'
validation_data_dir = 'Clothes/test'
nb_train_samples = 25061
nb_validation_samples = 8360
epochs = 20
batch_size = 64

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='tanh', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='tanh'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(32, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True)

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True)

history = model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
Here is the plot of the training & validation loss
A network may not converge/learn for several reasons, but here is a list of tips that I think are relevant in your case (based on my own experience):
Transfer Learning: The first thing you should know is that it's very hard to train an image classifier from scratch for most problems; you need much more computing power and time for that. I strongly recommend using transfer learning. There are multiple trained architectures available in Keras that you can use as initial weights for your network (or other methods).
Training step: For the optimizer, I recommend using Adam first and varying the learning rate to see how the loss responds. Also, since you are using convolutional layers, you should consider adding Batch Normalization layers, which can significantly speed up training, and changing the convolutional activations to 'relu', which makes them much faster to train.
You could also try decreasing the Dropout values, but I don't think that's the main issue here. Also, if you are considering training your network from scratch,
you should start with fewer layers and add more gradually to get a better idea of what's going on.
Train/test split: I see that you are using 8360 observations in your test set. Given the size of your training set, I think that's too much; 1000, for example, is enough. The more training samples you have, the more satisfying your results will be.
Also, before judging the accuracy of your model, you should establish a baseline model to benchmark against. The baseline model depends on your problem, but in general I choose a model that predicts the most common class in the dataset. You should also look at another metric, top-k accuracy, available in Keras, which is interesting when you have a relatively high number of classes to predict. It helps you see how close your model is to the right prediction (see the snippet below).
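As an illustrative snippet, the built-in metric string in Keras is 'top_k_categorical_accuracy' (k defaults to 5), and it can simply be added at compile time:
# Track top-k accuracy alongside plain accuracy (k defaults to 5).
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy', 'top_k_categorical_accuracy'])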
First, in order to keep your sanity, check carefully for any bugs and verify that your data is being fed in as intended.
You might want to add a Top K accuracy metric to get a better idea of whether it's close to getting it, or totally wrong.
Here are some tuning things to try:
Change the kernel size to 3 and activation to relu
model.add(Conv2D(filters=64, kernel_size=3, padding='same', activation='relu'))
If you think your model is underfitting then try increasing the number of Conv layers per pooling to start with. But you could also increase the number of filters or the number of conv + pool repetitions.
The Adam optimizer might learn a bit faster than RMSprop:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Probably the biggest improvement would be to get more data. I think your data set is probably too small for the scope of the problem.
You might want to try transfer learning from a pre-trained image recognition network.
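A minimal sketch of what that could look like with a Keras Applications backbone; the choice of VGG16 and the head sizes are assumptions, not part of the original answer:
from keras.applications import VGG16
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# Frozen pre-trained convolutional base, using the 228x228 RGB inputs from the question.
base = VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
for layer in base.layers:
    layer.trainable = False

# Small trainable head for the 32 clothing classes.
x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation='relu')(x)
out = Dense(32, activation='softmax')(x)

model = Model(inputs=base.input, outputs=out)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])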

RNN Not Generalizing on Text Classification

I am using Keras and an RNN to classify Slack text data by whether the text is reaction-worthy or not (1 - emoji, 0 - no emoji). I have removed usernames and URLs from the text and dropped duplicates that had different target variables.
I am not able to get the model to generalize to unseen data. The loss on the train/val sets looks good and continually decreases, but the accuracy of the val set only decreases.
I am using a pretrained GloVe word embedding since my training size is only about 25,000 sentences.
I have added additional layers, changed my regularization value, and increased dropout, but I get similar results. Is my model not complex enough to generalize the data? The times I added additional layers they were much smaller but deeper, because the training time was about 2 minutes per epoch.
Any insight would be appreciated.
embedding_layer = Embedding(len(word_index) + 1,
                            100,
                            weights=[embeddings_matrix],
                            input_length=max_message_length,
                            embeddings_regularizer=l2(0.001),
                            trainable=True)

# Creating the Model
model = Sequential()
model.add(embedding_layer)
model.add(Convolution1D(filters=32, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.7))
model.add(layers.GRU(128))
model.add(Dropout(0.7))
model.add(Dense(1, activation='sigmoid'))

# Compiling the model with our given Optimizer
optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.000025)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
print(model.summary())
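For completeness, embeddings_matrix above is typically built from the GloVe vectors roughly like this (a sketch; the file name and variable names are assumptions, not from the original post):
import numpy as np

# Load the 100-dimensional GloVe vectors (matching the Embedding layer above) into a dict.
embeddings_index = {}
with open('glove.6B.100d.txt', encoding='utf-8') as f:  # file name is an assumption
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')

# Build the matrix row by row from the tokenizer's word_index; words without a vector stay all-zero.
embeddings_matrix = np.zeros((len(word_index) + 1, 100))
for word, i in word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embeddings_matrix[i] = vector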