I am implementing a face estimation model using a pretrained ResNet50 with some layers added on top. Now I would like to implement a Mean-Variance Loss in Keras, but I am very new to Keras and just couldn't figure out how.
This is what the last layers of my model look like (the predict layer contains a sigmoid):
Last layers: (model summary screenshot not shown)
Right now I train the model with the following loss:
model.compile(tf.keras.optimizers.Adam(learning_rate=1e-4), loss=tf.keras.losses.MeanSquaredError(), metrics=['mae'])
I know that I just have to replace the MeanSquaredError with my custom function, but I don't know how to implement the Mean-Variance Loss.
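For reference, here is a minimal sketch of what such a custom loss could look like (in the spirit of the mean-variance loss of Pan et al., 2018). It assumes the final layer is changed to a softmax over K discrete bins, since the formulation needs a distribution rather than a single sigmoid output; lambda_mean and lambda_var are illustrative weights, not values from the question:

import tensorflow as tf

def mean_variance_loss(y_true, y_pred, lambda_mean=0.2, lambda_var=0.05):
    # y_pred: (batch, K) softmax over K bins; y_true: scalar targets
    k = tf.cast(tf.shape(y_pred)[-1], y_pred.dtype)
    bins = tf.range(0.0, k, dtype=y_pred.dtype)    # bin centers 0..K-1
    mean = tf.reduce_sum(y_pred * bins, axis=-1)   # expected bin per sample
    var = tf.reduce_sum(y_pred * tf.square(bins - mean[:, None]), axis=-1)
    y = tf.reshape(tf.cast(y_true, y_pred.dtype), [-1])
    # penalize a biased mean and a wide (uncertain) distribution
    return lambda_mean * tf.square(mean - y) + lambda_var * var

model.compile(tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=mean_variance_loss, metrics=['mae'])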
Related
I want to create a CNN model by concatenating hidden layers of two pretrained models, ResNet and VGG16.
After you define the model, inspect the pretrained models' layers with model.summary(). Then, for each layer you want, get it with model.get_layer('layer_name') and take its output with layer.output. Finally, concatenate the outputs of the layers you selected.
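For illustration, a minimal sketch of that approach; the input size, layer names, and 10-class head are assumptions, so check each model.summary() for the layer names you actually want:

import tensorflow as tf
from tensorflow.keras.applications import ResNet50, VGG16

inp = tf.keras.Input((224, 224, 3))
resnet = ResNet50(include_top=False, weights='imagenet')
vgg = VGG16(include_top=False, weights='imagenet')

# take the output of a chosen layer from each backbone
r_feat = tf.keras.Model(resnet.input, resnet.get_layer('conv5_block3_out').output)(inp)
v_feat = tf.keras.Model(vgg.input, vgg.get_layer('block5_conv3').output)(inp)

# pool to vectors so the shapes match, then concatenate and add a head
r_vec = tf.keras.layers.GlobalAveragePooling2D()(r_feat)
v_vec = tf.keras.layers.GlobalAveragePooling2D()(v_feat)
merged = tf.keras.layers.Concatenate()([r_vec, v_vec])
out = tf.keras.layers.Dense(10, activation='softmax')(merged)
model = tf.keras.Model(inp, out)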
I was working on an image recognition problem. After training the model, I saved the architecture as well as the weights. Now I want to use the model to extract features from other images and train an SVM on those features. To do this, I want to remove the last two layers of my model and get the values computed by the CNN and fully connected layers up to that point. How can I do that in Keras?
from tensorflow import keras

# a simple model
model = keras.models.Sequential([
    keras.layers.Input((32, 32, 3)),
    keras.layers.Conv2D(16, 3, activation='relu'),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation='softmax'),
])

# after training, cut off the classification head
feature_only_model = keras.models.Model(model.inputs, model.layers[-2].output)
feature_only_model takes a (32, 32, 3) image as input and outputs the feature vector.
If your model is subclassed, just change the call() method.
If not:
- if your model is complicated, wrap it in a subclassed model and change the forward pass in its call() method, or
- if your model is simple, create the model without the last layers and load the weights into every layer separately (a sketch follows below).
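A minimal sketch of that last option, reusing the simple model above; the weights file name is a placeholder, and the layer names are assumed to match those stored in the saved weights:

from tensorflow import keras

# rebuild the architecture without the classification head
feature_model = keras.models.Sequential([
    keras.layers.Input((32, 32, 3)),
    keras.layers.Conv2D(16, 3, activation='relu', name='conv'),
    keras.layers.Flatten(name='flat'),
])
# load weights by layer name; skip_mismatch ignores the missing head
feature_model.load_weights('full_model_weights.h5', by_name=True, skip_mismatch=True)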
I trained a model in TensorFlow that has a dropout layer. I then converted it to TensorFlow.js and load it with loadFrozenModel(). Can I modify the dropout rate after model = tf.loadFrozenModel(...)?
Currently, frozen models cannot be trained further. You can of course use them as a base for a transfer-learning task, but the variables inside such a model are frozen and not marked as updatable.
Using transfer learning, you can take the output of the layer before the dropout layer, attach a new dropout layer with the rate you want, and train further.
I am using Keras to build a multi-output classification model. My dataset looks like this:
[x1,x2,x3,x4,y1,y2,y3]
x1, x2, x3 are the features and y1, y2, y3 are the labels; each of y1, y2, y3 is multi-class.
I have already built a model (I omit some hidden layers):
from tensorflow.keras.layers import Input, Dense, Activation, Dropout
from tensorflow.keras.models import Model

def baseline_model(input_dim=23, output_dim=3):
    model_in = Input(shape=(input_dim,))
    model = Dense(input_dim * 5, kernel_initializer='uniform')(model_in)
    model = Activation(activation='relu')(model)
    model = Dropout(0.5)(model)
    # ... more hidden layers ...
    model = Dense(output_dim, kernel_initializer='uniform')(model)
    model = Activation(activation='sigmoid')(model)
    model = Model(model_in, model)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
Then I try to use Keras's scikit-learn wrapper to turn it into a classifier:
from keras.wrappers.scikit_learn import KerasClassifier

estimator = KerasClassifier(build_fn=baseline_model)
estimator.fit(X, y)   # X: feature columns, y: label columns
estimator.predict(df[0:10])
But I found that the result is not multi-output; only one dimension is output:
[0,0,0,0,0,0,0,0,0,0]
So can we not use the KerasClassifier wrapper to learn a multi-output classification problem?
You do not need to wrap the model in KerasClassifier. That wrapper exists so that you can use the Keras model with scikit-learn. The type of model (classifier, regressor, multiclass classifier, etc.) is ultimately determined by the shape and activation of the final layer of your model.
You can simply use model.fit() function that is part of Keras. Make sure that you pass the data into the function. You can see more info on the fit function here: https://keras.io/models/model/#fit
Also, your loss is set up as binary_crossentropy. For a multi-class problem you will want to use categorical_crossentropy (paired with a softmax activation on the final layer):
model.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['accuracy'])
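For example, a minimal sketch of fitting directly; X and y are placeholders for your feature and label arrays, y is assumed to hold integer labels 0..2, and the final activation is assumed to be switched to softmax:

from tensorflow.keras.utils import to_categorical

model = baseline_model(input_dim=23, output_dim=3)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
y_onehot = to_categorical(y, num_classes=3)   # (n, 3) one-hot targets
model.fit(X, y_onehot, epochs=10, batch_size=32)
probs = model.predict(X[:10])                 # shape (10, 3): one probability per class
preds = probs.argmax(axis=1)                  # integer class predictions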
This model isn't really what Keras refers to as multi-output as far as I can tell. With multi-output you are trying to get the output from several different layers and possibly apply different loss functions to them.
Based on the setup in your question, you could use the Keras Sequential model instead of the functional API if you wanted. Keras recommends using the Sequential model when you can because it's simpler: https://keras.io/getting-started/sequential-model-guide/
I'm building an image-processing network in TensorFlow and I want to make use of a texture loss. A texture loss seems simple to implement if you have a pretrained model loaded.
I'm using TF to build the computational graph for my model and I want to incorporate the keras.applications VGG19 model to get the output of layer 'block4_conv4'.
The problem is: I have two TF tensors, target and result, from my main model. How do I feed them into the Keras VGG19 in the same session to compute their difference and use it in the main loss for my model?
It seems the following code does the trick:

import tensorflow as tf
from keras.applications.vgg19 import VGG19

with tf.variable_scope("") as scope:
    phi_func = VGG19(include_top=False, weights=None, input_shape=(128, 128, 3))
    text_1 = phi_func(predicted)   # predicted: output tensor of the main model
    scope.reuse_variables()
    text_2 = phi_func(x)           # x: the target tensor
    text_loss = tf.reduce_mean((text_1 - text_2)**2)
Right after the session is created, I call phi_func.load_weights(path) to initialize the weights.
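As a variation (not part of the original answer), here is a sketch of restricting phi_func to the 'block4_conv4' activations the question asks about, instead of the full convolutional stack. Since the Keras model is built once and called twice, its weights are shared without the variable-scope trick:

import tensorflow as tf
from keras.applications.vgg19 import VGG19
from keras.models import Model

# build VGG19 once, then expose only the 'block4_conv4' feature maps
base = VGG19(include_top=False, weights=None, input_shape=(128, 128, 3))
phi_func = Model(base.input, base.get_layer('block4_conv4').output)

text_1 = phi_func(predicted)   # features of the generated image
text_2 = phi_func(x)           # features of the target image
text_loss = tf.reduce_mean(tf.square(text_1 - text_2))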