Using different optimizers to train the same layer in tensorflow

I have a model which consists of convolutional layers followed by fully connected layers. I trained this model on the FER dataset. This is a classification problem where the number of outputs is 8.
After training this model, I kept the fully connected layers and replaced only the last layer with a new one that has 3 outputs. The goal was to fine-tune the fully connected layers while training the new output layer.
So I used one optimizer at the beginning to train the whole model, and then created a new optimizer to fine-tune the fully connected layers along with the last layer.
As a result, I got the following error:
ValueError: Variable Dense/dense/bias/Adam/ already exists,
I know the reason for this error: the second optimizer was trying to create the variables it uses for updating the weights under the same names, because variables with those names had already been created by the first optimizer.
Hence, I would like to know how to fix this problem. Is there a way to delete the kernels associated with the first optimizer?
Any help is much appreciated!!

This is probably caused by both optimizers using the (same) default name 'Adam'. To avoid this clash, you can give the second optimizer a different name, e.g.
opt_finetune = tf.train.AdamOptimizer(name='Adam_finetune')
This should make opt_finetune create its variables under different names. Please let us know whether this works!
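A minimal sketch of the two-phase setup in graph mode (the toy inputs and the 'Dense' scope are stand-ins, not code from the question):
import tensorflow as tf

# Toy stand-ins for the real model.
x = tf.placeholder(tf.float32, [None, 64])
y = tf.placeholder(tf.int64, [None])
with tf.variable_scope('Dense'):
    logits = tf.layers.dense(x, 8)
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)

# Phase 1: the default-named optimizer ('Adam') trains everything.
train_full = tf.train.AdamOptimizer().minimize(loss)

# Phase 2: a new name avoids the 'already exists' clash, and var_list
# restricts updates to the fully connected layers.
fc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Dense')
opt_finetune = tf.train.AdamOptimizer(name='Adam_finetune')
train_finetune = opt_finetune.minimize(loss, var_list=fc_vars)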

Related

Functional CNN model written in Keras 2.2.4 not learning in TensorFlow or Keras 2.4

I am dealing with an object detection problem and using a model which is known to work (its results have been published in a paper and I have the original code). Originally, the code was written with Keras 2.2.4 without importing TensorFlow, and it was trained and tested on the same dataset that I am using at the moment. However, when I try to run the same model with TensorFlow 2.x it just won't learn a thing.
I have tried importing everything from TensorFlow 2.4, but I have the same problem if I import everything (layers, models, optimizers...) from Keras 2.4. And I have tried to do so on two different devices, both using a GPU. Namely, what happens is that the loss function decreases ridiculously fast, but the accuracy won't increase a bit (or, if it does, it gets stuck around 10% or so). Also, every now and then this happens from one epoch to the next:
Loss undergoes HUGE jumps between consecutive epochs, and all this without any changes in accuracy
I have tried to train the network on another dataset (had to change the last layers in order to match the required dimensions) and the model seemed to be learning in a normal way, i.e. the accuracy actually increases and the loss doesn't reach 0.0x in one epoch.
I can't post the script, but the model is an Encoder-Decoder network: consecutive Convolutions with an increasing number of filters reduce the dimensions of the image, and a mirrored path of Transposed Convolutions restores the original dimensions. So basically the network only contains (numbered for reference below):
1. Conv2D
2. Conv2DTranspose
3. BatchNormalization
4. Activation("relu")
5. Activation("sigmoid")
6. concatenate
Here, (6) is used to put together outputs from parallel paths or distant layers; (3) and (4) are used after every Conv or ConvTranspose; (5) is used only as the final activation function, i.e. as the output layer.
I think the problem is pretty generic and I am honestly surprised that I couldn't find a single question about it. What could be happening here? The problem must have something to do with TF/Keras versions, but I can't find any documentation about it, and I have been trying to change so many things but nothing changes. It's crazy because, if I didn't know that the model works, I would try to rewrite it from scratch. So I am afraid that this problem may occur with a new network too, and I won't be able to tell whether the issue is in the libraries or in the model itself.
Thank you in advance! :)
EDIT
Code snippets:
Convolutional block:
encoder1 = Conv2D(filters=first_layer_channels, kernel_size=2, strides=2)(input)
encoder1 = BatchNormalization()(encoder1)
encoder1 = Activation('relu')(encoder1)
Decoder block:
decoder1 = Conv2DTranspose(filters=first_layer_channels, kernel_size=2, strides=2)(encoder4)
decoder1 = BatchNormalization()(decoder1)
decoder1 = Activation('relu')(decoder1)
Final layers:
final = Conv2D(filters=total, kernel_size=1)(decoder4)
final = BatchNormalization()(final)
Last_Conv = Activation('sigmoid')(final)
The task is human pose estimation: the network (which, I recall, works on this specific task with Keras 2.2.4) has to predict twenty binary maps containing the positions of specific keypoints.

Training Tensorflow on only one object

Following the corresponding Tensorflow documentation, I trained 3 objects and got results (the model can recognize these objects). But when I show it other objects (not among the 3), it doesn't work correctly.
I want to train on only one object (example: a cup) and recognize only this object. Is it possible to do this via Tensorflow?
Your question doesn't provide enough details, but my guess is that you trained the network with a softmax activation and a categorical or sparse-categorical cross-entropy loss. If my guess is right, such a network always assigns its prediction to one of the three classes, regardless of the actual data, i.e. there is no "none of the above" option.
In order to train the network to recognize only one class of objects, use a single output with only one channel and a sigmoid activation, and train the model with a binary cross-entropy loss for the specific object. Provide a dataset that includes examples both with this object and without it.
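A minimal sketch of such a setup in Keras (the convolutional body and input shape are placeholders):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # one channel: object or not
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=['accuracy'])
# Label images containing the object as 1 and all others as 0.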

Tensorflow serving: getting a different outcome each time

I am using Tensorflow Serving to serve a pre-trained model.
The strange thing is that when I feed the same data to this model, I get a different outcome each time.
I thought it might be a problem with my variable initialization. I am wondering whether there is any way to debug my model, or how I can find the cause. Thanks.
Two common problems:
There's a known issue with main_op in which variables are re-initialized to random.
You left dropout layers in your prediction graph.
To address (1), use this instead:
from tensorflow.python.ops import control_flow_ops, lookup_ops, variables

def main_op():
    # Initialize only local variables and tables; do NOT re-initialize
    # the global (model) variables, which would randomize the weights.
    init_local = variables.local_variables_initializer()
    init_tables = lookup_ops.tables_initializer()
    return control_flow_ops.group(init_local, init_tables)
To address (2), be sure that you aren't directly exporting your training graph; you need to build a new graph for prediction/serving. If you are using the tf.estimator framework, add dropout layers only conditionally, when mode is tf.estimator.ModeKeys.TRAIN.
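For example, a model_fn might gate dropout like this (layer sizes and the feature key 'x' are placeholders):
import tensorflow as tf

def model_fn(features, labels, mode):
    training = (mode == tf.estimator.ModeKeys.TRAIN)
    net = tf.layers.dense(features['x'], 128, activation=tf.nn.relu)
    # Dropout participates only in the training graph; the graph
    # exported for serving gets a clean, deterministic path.
    net = tf.layers.dropout(net, rate=0.5, training=training)
    logits = tf.layers.dense(net, 10)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={'logits': logits})
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)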

Changing a trained network to keep only a subset of its output

Suppose I have a trained TensorFlow classification network for 20 classes as in PASCAL VOC 2007: aeroplane, bicycle, ..., car, cat, ..., person, ..., tvmonitor.
Now, I would like to have a sub-network for only a subset of the classes, e.g., 3 classes: car, cat, person.
Then, I can use this network for testing or for re-training/fine-tuning on a new dataset, only for the 3 classes.
It should be possible to extract this sub-network out of the original network, since it is only the last layer that will change. We need to discard the neurons/weights for the discarded classes.
My question: Is there an easy way to do this in TensorFlow?
It would be great if you could point to some sample code or a similar solution.
I have googled, but have not come across any mention of this.
The symmetric problem, expanding the number of classes without discarding the original weights, can potentially be useful for some people, but my current focus is the one above.
If you want to keep the output for only a few of the classes, you could simply extract the corresponding slices from the last layer.
For example, let's assume the last layer is fully connected. Its weights are a tensor of size num_previous x num_output.
You want to keep only a few of these outputs, say outputs 1, 22, and 42. You can get the weights of your new fully connected layer as:
outputs_to_keep = [1, 22, 42]
# Transpose to num_output x num_previous, gather the rows to keep, transpose back.
new_W = tf.transpose(tf.gather(tf.transpose(old_W), outputs_to_keep))
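The same gather can also be written with an explicit axis, and if the layer has a bias (old_b here is a hypothetical name for it), it is sliced the same way:
new_W = tf.gather(old_W, outputs_to_keep, axis=1)  # equivalent, no transposes
new_b = tf.gather(old_b, outputs_to_keep)  # hypothetical bias vector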
It is possible to extract a pretrained subnet as you said; this is called transfer learning. There are different ways to do it; here is one (a rough sketch follows the steps):
Find the layer you want to start with. You can use Tensorboard to find it and then use graph.get_tensor_by_name(). Usually you keep the convolutional layers and discard the fully connected ones.
Connect your new layers (normally fully connected ones) to the previous layer.
Freeze the variables (weights) of the pretrained layers using trainable=False. Alternatively, you can instruct the optimizer to update only the weights from the new layers.
Train your model with the new classes.
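A rough sketch of these steps in graph mode (the tensor name 'pool5/MaxPool:0' and the scope 'new_head' are hypothetical):
import tensorflow as tf

# Step 1: grab the last layer you want to keep from the pretrained graph.
graph = tf.get_default_graph()
features = graph.get_tensor_by_name('pool5/MaxPool:0')

# Step 2: connect a new fully connected head for the 3 classes.
with tf.variable_scope('new_head'):
    logits = tf.layers.dense(tf.layers.flatten(features), 3)

# Step 3: freeze the pretrained weights by handing the optimizer only
# the new variables to update.
labels = tf.placeholder(tf.int64, [None])
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
new_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='new_head')
train_op = tf.train.AdamOptimizer().minimize(loss, var_list=new_vars)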

pruning tensorflow connections and weights (using cifar10 cnn)

I'm using tensorflow to run a cnn for image classification.
I use the tensorflow cifar10 cnn implementation (tensorflow cifar10).
I want to decrease the number of connections, i.e. I want to prune the low-weight connections.
How can I create a new graph (subgraph) without some of the neurons?
Tensorflow does not allow you to lock/freeze a particular kernel of a particular layer, as far as I have found. The only way I've found to do this is to use the tf.assign() function, as shown in
How to freeze/lock weights of one Tensorflow variable (e.g., one CNN kernel of one layer)
It's fairly cave-man, but I've seen no other solution that works. Essentially, you have to .assign() the values every so often as you iterate through the data. Since this approach is so inelegant and brute-force, it's very slow. I do the .assign() every 100 batches.
Someone please post a better solution and soon!
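For reference, a rough sketch of the .assign() trick described above (the variable name 'conv1/weights:0' and the mask file are hypothetical):
import numpy as np
import tensorflow as tf

# Pick the kernel to mask and build an op that re-applies the mask.
kernel = [v for v in tf.global_variables() if v.name == 'conv1/weights:0'][0]
mask = np.load('prune_mask.npy').astype(np.float32)  # zeros where pruned
reapply_mask = tf.assign(kernel, kernel * mask)

# In the training loop, undo the optimizer's drift every 100 batches:
# if step % 100 == 0:
#     sess.run(reapply_mask)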
The cifar10 model you point to, and for that matter, most models written in TensorFlow, do not model the weights (and hence, connections) of individual neurons directly in the computation graph. For instance, for fully connected layers, all the connections between two layers, say with M neurons in the layer below and N neurons in the layer above, are modeled by one MxN weight matrix. If you wanted to completely remove a neuron and all of its outgoing connections from the layer below, you can simply slice out an (M-1)xN matrix by removing the relevant row, and multiply it with the corresponding M-1 activations of the remaining neurons.
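A toy illustration of that row slicing (all shapes are made up):
import tensorflow as tf

M, N, k = 5, 3, 2  # M neurons below, N above; remove neuron k from below
W = tf.random_normal([M, N])
acts = tf.random_normal([1, M])

keep = [i for i in range(M) if i != k]
W_pruned = tf.gather(W, keep)                # (M-1) x N: drop row k
acts_pruned = tf.gather(acts, keep, axis=1)  # drop the matching activation
out = tf.matmul(acts_pruned, W_pruned)       # neuron k no longer contributes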
Another way is to add an additional mask to control the connections.
The first step involves adding mask and threshold variables to the layers that need to undergo pruning. The variable mask is the same shape as the layer's weight tensor and determines which of the weights participate in the forward execution of the graph.
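In code, the idea looks roughly like this (a generic sketch, not the contrib API itself; shapes and the input placeholder are made up):
import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 256])
W = tf.get_variable('weights', shape=[256, 128])
# A non-trainable mask with the same shape as the kernel; zero entries
# cut the corresponding connections out of the forward pass.
mask = tf.get_variable('mask', shape=[256, 128],
                       initializer=tf.ones_initializer(), trainable=False)
outputs = tf.matmul(inputs, W * mask)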
There is a pruning implementation under tensorflow/contrib/model_pruning that you can use to prune the model. Hope this can help you prune your model quickly.
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/model_pruning
I think Google has an updated answer here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/model_pruning
Removing pruning nodes from the trained graph:
$ bazel build -c opt contrib/model_pruning:strip_pruning_vars
$ bazel-bin/contrib/model_pruning/strip_pruning_vars --checkpoint_path=/tmp/cifar10_train --output_node_names=softmax_linear/softmax_linear_2 --filename=cifar_pruned.pb
I suppose that cifar_pruned.pb will be smaller, since the pruned (i.e. zero-masked) variables are removed.