I am using a pre-trained model for object detection with YOLO + TensorFlow.
My inference results are great, but now I want to add a new class to the pre-trained set.
There are 80 classes in the pre-trained dataset; how can I add my custom classes to make it 81 or 82 in total?
Inference GitHub repo: https://github.com/thtrieu/darkflow
In transfer learning, weights pre-trained on well-known datasets like ImageNet or Fashion-MNIST are reused. These datasets have a defined number of classes and labels, which may or may not match your dataset. The best practice is to add new layers on top of the pre-trained model's output. For example, in Keras:
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# Load the pre-trained backbone without its ImageNet classification head
base_model = MobileNet(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
flatten = Flatten()(base_model.output)
predictions = Dense(number_of_classes, activation='softmax')(flatten)
model = Model(inputs=base_model.input, outputs=predictions)
In this case you need to train (or better, fine-tune) the model on your dataset. The MobileNet backbone will use the pre-trained weights, and only the last layer will be trained on your dataset with your chosen number of classes.
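A minimal sketch of that setup, continuing the snippet above (the optimizer and loss are just example choices):
# Freeze the pre-trained backbone so only the new Dense head is trained
base_model.trainable = False

model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=5)  # your dataset here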
You may also use:
from tensorflow.keras.applications import MobileNet

model = MobileNet(include_top=True, weights=None, classes=number_of_classes)
Note that the classes argument only takes effect when include_top=True and weights=None, so this builds the architecture with your class count but trains it from scratch rather than from the ImageNet weights.
For more information you can refer to keras-applications, and these blogs: blog1, blog2.
If you have already trained your model for 80 classes and need to add another class, it is better to re-train the model starting from the previously saved checkpoint. Note that the network architecture must be designed for the total number of classes from the beginning, because the output layer has one neuron per class; if that is not the case, you cannot simply add another class, since the network was not designed for it. Re-training from the checkpoint makes use of the initial training done on the previous classes. The data you use for re-training should now contain all the classes (all the previous classes plus the new ones you want to add). In other words, you initialize the weights from the last checkpoint (trained on 80 classes) and then train again on the enlarged dataset (the 80 classes plus the new ones), allowing back-propagation through all the layers.
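A minimal Keras sketch of that idea (the file name and data variables are placeholders; darkflow has its own checkpoint mechanism, but the principle is the same):
import tensorflow as tf

# Resume from the checkpoint trained on the original classes...
model = tf.keras.models.load_model('checkpoint_80_classes.h5')

# ...and re-train on data covering all classes (old + new), letting
# back-propagation update every layer. This assumes the output layer was
# sized for the total class count from the start.
model.fit(all_class_images, all_class_labels, epochs=10)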
As far as I know, a CNN's last layers identify objects as a whole, which is irrelevant to my dataset of signatures. I therefore want to remove them and add additional layers on top of the model, freezing the VGG16 layers during training. How would the removal of layers potentially affect the model's performance, or should I keep the convolutional layers and delete only the dense layers?
I need to add additional layers on top anyway, for a school report about the effect of convolutional layer configurations on the model's performance.
P.S. My dataset is really small: it contains roughly 700 samples, which I know is extremely small (I have tried augmenting the data).
I also have a dataset with Chinese signatures, but I thought it would be better to train on it separately.
I am not proficient in this field and am just starting out with deep learning, so please correct me if you notice any misconception in my explanation.
The easiest way is to use VGG16 with include_top=False, weights='imagenet', and pooling='max'. This instantiates the model with ImageNet weights, removes the top classification layers, and makes the output of the VGG model a flat vector you can feed directly into a dense layer. My typical code for this is shown below; in the final layer, class_count is the number of classes in the training data.
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
from tensorflow.keras.models import Model
from tensorflow.keras import regularizers

base_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet',
                                         input_shape=img_shape, pooling='max')
x = base_model.output
x = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
x = Dense(256, kernel_regularizer=regularizers.l2(0.016),
          activity_regularizer=regularizers.l1(0.006),
          bias_regularizer=regularizers.l1(0.006), activation='relu')(x)
x = Dropout(rate=.45, seed=123)(x)
output = Dense(class_count, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=output)
How would the removal of layers potentially affect the model's performance, or should I just leave and delete only dense layers?
This is hard to answer, because it depends on what performance you mean. VGG16 was originally built for the ImageNet problem with 1000 classes, so if you use it without any modification it probably won't work at all.
Now, if you are talking about transfer learning, then yes, the last dense layers can be replaced to classify your dataset, because the convolutional part of VGG16 is a good pattern recognizer. The fully connected layers at the end act as a classifier on top of those patterns, and you should replace them and train them again for your specific problem. VGG16 has three dense layers at the end (FC1, FC2 and FC3), and Keras only lets you remove all three at once (via include_top=False), so if you want to replace just the last one, you need to remove all three and rebuild FC1 and FC2.
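A sketch of what that rebuild might look like (the 4096 sizes match the original VGG16 head; num_classes is an assumed placeholder):
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

num_classes = 2  # placeholder for your class count

base = tf.keras.applications.VGG16(include_top=False, weights='imagenet',
                                   input_shape=(224, 224, 3))
x = Flatten()(base.output)
x = Dense(4096, activation='relu')(x)               # rebuilt FC1
x = Dense(4096, activation='relu')(x)               # rebuilt FC2
out = Dense(num_classes, activation='softmax')(x)   # replaced FC3
model = Model(inputs=base.input, outputs=out)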
The key is what you are going to train after that. You could:
Use the original (ImageNet) weights in the convolutional layers and start your training from there, just fine-tuning with a small learning rate. A good choice when your dataset is similar to the original and you have a good amount of it.
Use the original (ImageNet) weights in the convolutional layers but freeze them, and train only the weights in the dense layers you replaced (see the sketch after this list). A good choice when your dataset is small.
Don't use the original weights at all and retrain the whole model. Usually not a good choice, because you would need to be an expert at tuning the parameters, and you would need tons of data and computational power to make it work.
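A minimal sketch of the second option, continuing from the base and model built in the snippet above (the optimizer settings are just example choices):
# Freeze the convolutional base so only the rebuilt dense head is trained
for layer in base.layers:
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss='categorical_crossentropy', metrics=['accuracy'])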
I will describe my intention here. I want to import a pre-trained BERT model via the TF-Hub call hub.Module(bert_url, trainable=True) and use it for a text classification task. I plan to use a large corpus to fine-tune the weights of BERT as well as a few dense layers whose inputs are the BERT outputs. I would then like to freeze the BERT layers and train only the dense layers that follow BERT. How can I do this efficiently?
You mention Hub's TF1 API hub.Module, so I suppose you are writing TF1 code and using the TF1-compatible Hub assets google/bert/..., such as https://tfhub.dev/google/bert_cased_L-12_H-768_A-12/1.
Are you going to have separate runs of your program for the two phases of training? If so, maybe you can just drop trainable=True from the hub.Module call in the second run. This doesn't affect variable names, so you can restore the training result from the first run, including BERT's adjusted weights. (To be clear: the pre-trained weights shipped with the hub.Module are only used for initialization at the very start of training; restoring a checkpoint overrides them.)
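A minimal sketch of the two-run approach (assuming TF1 and the TF1-compatible asset above):
import tensorflow_hub as hub

BERT_URL = "https://tfhub.dev/google/bert_cased_L-12_H-768_A-12/1"

# Run 1: fine-tune BERT together with the dense layers, then save a checkpoint.
bert = hub.Module(BERT_URL, trainable=True)

# Run 2 (a separate program run): omit trainable=True so BERT's variables are
# left out of the training ops; variable names are unchanged, so restoring the
# run-1 checkpoint still loads the fine-tuned BERT weights.
bert = hub.Module(BERT_URL)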
The actual problem is generating random layer weights for an existing (already built) model in Keras. There are some solutions using NumPy [2], but they are not a good choice, because Keras has dedicated initializers that use a different distribution for each layer type. If NumPy is used instead of those initializers, the generated weights follow a different distribution than the original ones. An example:
The second layer of my model is a 1D convolutional layer whose initializer is GlorotUniform [1]. If you generate random weights using NumPy, their distribution will not be GlorotUniform.
I have a solution for this problem, but it has issues of its own. Here is what I have:
def set_random_weights(self, tokenizer, config):
    temp_model = build_model(tokenizer, config)
    self.model.set_weights(temp_model.get_weights())
I build the existing model again; during the building process, its weights are re-initialized. Then I take the re-initialized weights and set them on the original model. Building a whole model just to generate new weights involves redundant work, so I need a solution that does not require building a model or using NumPy.
[1] https://keras.io/initializers/
[2] https://www.codementor.io/nitinsurya/how-to-re-initialize-keras-model-weights-et41zre2g
See previous answers to this question here.
Specifically, if you want to use the original weights initializer of a Keras layer, you can do the following:
import tensorflow as tf
import keras.backend as K

def init_layer(layer):
    # Run the layer's own variable initializers inside the active session,
    # so the weights are redrawn from their original distributions
    session = K.get_session()
    weights_initializer = tf.variables_initializer(layer.weights)
    session.run(weights_initializer)

layer = model.get_layer('conv2d_1')
init_layer(layer)
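To re-initialize every layer in the model this way, you can simply loop over them:
for layer in model.layers:
    init_layer(layer)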
I am still relatively new to the world of deep learning. I want to create a deep learning model (preferably using TensorFlow/Keras) for image anomaly detection; by anomaly detection I mean, essentially, a one-class SVM.
I have already tried sklearn's OneClassSVM using HOG features from the images. I was wondering whether there is an example of how to do this with deep learning; I looked around but couldn't find a single code example that handles this case.
One way of doing this in Keras is with the KerasRegressor wrapper module (it wraps scikit-learn's regressor interface). Useful information can also be found in the source code of that module. Basically, you first define your network model, for example:
from keras.layers import Input, Dense
from keras.models import Model

def simple_model():
    # Input layer
    data_in = Input(shape=(13,))
    # First layer, fully connected, ReLU activation
    layer_1 = Dense(13, activation='relu', kernel_initializer='normal')(data_in)
    # Second layer, etc.
    layer_2 = Dense(6, activation='relu', kernel_initializer='normal')(layer_1)
    # Output, single node without activation
    data_out = Dense(1, kernel_initializer='normal')(layer_2)
    # Build and compile the model
    model = Model(inputs=data_in, outputs=data_out)
    # You may choose any loss or optimizer function; be careful which you choose
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
Then pass it to the KerasRegressor builder and fit it with your data:
from keras.wrappers.scikit_learn import KerasRegressor

# Choose your epochs and batch size
regressor = KerasRegressor(build_fn=simple_model, epochs=100, batch_size=64)
# Fit with your data
regressor.fit(data, labels)
You can then make predictions or obtain the model's score:
p = regressor.predict(data_test) #obtain predicted value
score = regressor.score(data_test, labels_test) #obtain test score
In your case, since you need to separate anomalous images from the ones that are OK, one approach is to train your regressor by passing anomalous images labeled 1 and normal images labeled 0.
This will make your model return a value closer to 1 when the input is an anomalous image, so you can threshold the output to get the desired results. You can think of this output as its R^2 coefficient with respect to the "anomalous model" you trained, where 1 is a perfect match.
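For example, a hypothetical thresholding step (images_test is a placeholder, and the 0.5 cut-off is an assumption to tune on validation data):
# Scores near 1 suggest anomalies, scores near 0 suggest normal images
scores = regressor.predict(images_test)
is_anomalous = scores > 0.5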
Also, as you mentioned, autoencoders are another way to do anomaly detection. For this I suggest you take a look at the Keras blog post Building Autoencoders in Keras, which explains their implementation with the Keras library in detail.
It is worth noting that single-class classification is another way of saying regression.
Classification tries to find a probability distribution over the N possible classes, and you usually pick the most probable class as the output (which is why classification networks use sigmoid or softmax activations on their outputs, as these map to the range [0, 1]). Their output is discrete/categorical.
Regression, by contrast, tries to find the model that best represents your data by minimizing the error or some other metric (like the well-known R^2 metric, or coefficient of determination). Its output is a real, continuous number (which is why most regression networks use no activation on their outputs). I hope this helps; good luck with your coding.
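To make the contrast concrete, here is a minimal sketch of the two kinds of output layers (num_classes is a placeholder):
from keras.layers import Dense

num_classes = 10  # placeholder

# Classification head: outputs in [0, 1], interpreted as class probabilities
class_head = Dense(num_classes, activation='softmax')  # or 'sigmoid' for binary

# Regression head: a single unbounded real value, so no output activation
reg_head = Dense(1)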
After training a network using Keras:
I want to access the final trained weights of the network in some order.
I want to know the neuron activation values for every input passed. For example, after training, if I pass X as input to the network, I want to know the activation value of every neuron in the network for that X.
Does Keras provide API access to these things? I want to do further analysis based on the neuron activation values.
Update: I know I can do this using pure Theano, but Theano requires more low-level coding. And since Keras is built on top of Theano, I think there could be a way to do this?
If Keras can't do this, then among TensorFlow and Caffe, which can? Keras is the easiest to use, followed by TensorFlow/Caffe, but I don't know which of these provides the network access I need. The last option for me would be to drop down to Theano, but I think it would be more time-consuming to build a deep CNN there.
This is covered in the Keras FAQ; you basically want to compute the activations for each layer, which you can do with code like this:
from keras import backend as K

# The layer number
n = 3
# With a Sequential model
get_nth_layer_output = K.function([model.layers[0].input],
                                  [model.layers[n].output])
layer_output = get_nth_layer_output([X])[0]
Unfortunately you would need to compile and run a function for each layer, but this should be straightforward.
To get the weights, you can call get_weights() on any layer.
nth_weights = model.layers[n].get_weights()
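Note that get_weights() returns a list of NumPy arrays; for a Dense layer this is typically the kernel followed by the bias:
kernel, bias = model.layers[n].get_weights()
print(kernel.shape, bias.shape)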