Training Tensorflow only one object - tensorflow

Following the TensorFlow documentation, I trained a model on 3 objects and got results (it can recognize these objects). But when I show it other objects (not one of the 3), it doesn't work correctly.
I want to train on only one object (for example, a cup) and recognize only that object. Is it possible to do this with TensorFlow?

Your question doesn't provide many details, but my guess is that you trained the network with a softmax activation and a categorical or sparse categorical cross-entropy loss. If that guess is right, such a network always assigns the prediction to one of the three classes, regardless of the actual data, i.e. there is no "none of the above" option.
To train a network to recognize only one class of objects, use a single output with only one channel and a sigmoid activation, and train with a BinaryCrossentropy loss for the specific object. Provide a dataset that includes examples both with this object and without it.
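A minimal sketch of that setup in Keras (the backbone layers and the 128x128x3 input shape are placeholders, not taken from your project):

import tensorflow as tf

# Hypothetical input shape and backbone; adapt to your data.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    # Single output channel with sigmoid: "is this the object (e.g. a cup)?" yes/no.
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=['accuracy'])

# Labels: 1 for images containing the object, 0 for images without it.
# model.fit(train_images, train_labels, epochs=10)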

Related

variational autoencoder with limited data

I'm working on a binary classification project, and I'm using a VAE (variational autoencoder) to handle the imbalance between the 2 classes by generating new samples for the minority class.
The first class (majority class) contains 20000 samples, and the second one (minority class) contains 500 samples.
After training the VAE on the minority class, I generated new samples for this class and added them to the training set. Then I trained two classification models: one trained on the imbalanced data (the training set only) and the second trained on the training set plus the data generated by the VAE. The problem is that the first model gives better results than the second (F1-score, ROC AUC, ...), and I thought that maybe the problem was the limited amount of data the VAE was trained on.
Any help, please.
Though 500 training images are not really enough to generate diverse images from a VAE, you can still try producing some. It's better to take the mean of the latents of 10 different images (or even more) and pass it through the decoder (if you're already doing this, ignore this; if you're using some other method, try this).
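A rough sketch of that latent-averaging idea, assuming you have a trained encoder and decoder; the names minority_images, encoder, and decoder are placeholders for your own VAE parts, and the (z_mean, z_log_var, z) return convention follows the standard Keras VAE example:

import numpy as np

# Take a small batch of minority-class images (here 10, as suggested above).
batch = minority_images[:10]

# Encode them; adapt this line to whatever your encoder actually returns.
z_mean, z_log_var, z = encoder.predict(batch)

# Average the latent means and decode the averaged latent into a new sample.
z_avg = np.mean(z_mean, axis=0, keepdims=True)
new_sample = decoder.predict(z_avg)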
If it's still not working, then I suggest you build a conditional VAE on your entire dataset. In a conditional VAE, you train the VAE using the labels, so that your model learns not only reconstruction but also what class of image it is reconstructing. This lets you generate an image of any particular class.

Limiting probability percentage of irrelevant image in CNN

I am training a CNN model with five classes using the Keras library. Using the model.predict function, I get the prediction percentages for the classes. My problem is that for an image which doesn't belong to these classes and is completely irrelevant, the prediction still assigns percentages across the five classes.
How do I prevent this? How do I identify the image as irrelevant?
I assume you are using a softmax activation on your last layer to generate the probabilities for each class. By definition, the sum of the outputs from the softmax activation must add up to 1. Therefore, it is impossible for the neural net to say that the image does not belong to any of your classes, with your current setup.
There are two potential ways you could address this:
Add another class that represents "other" or "unknown" objects (so you have 6 classes).
Add another output to your neural net (or train a completely independent neural net) that does binary classification on whether or not the image is in one of the 5 classes. That way, if your secondary output says that the image is not in the 5 classes, you can ignore the softmax output.
In both cases, you will need to augment your dataset with images that do not fall in your 5 classes.
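As a sketch of the second option (layer sizes, the 128x128x3 input shape, and the head names are illustrative, not from your code), a Keras model with two heads might look like:

import tensorflow as tf

inputs = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.layers.Conv2D(32, 3, activation='relu')(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)

# Head 1: which of the 5 known classes (softmax sums to 1 over those classes).
class_out = tf.keras.layers.Dense(5, activation='softmax', name='classes')(x)
# Head 2: is the image one of the 5 known classes at all? (binary, sigmoid)
known_out = tf.keras.layers.Dense(1, activation='sigmoid', name='is_known')(x)

model = tf.keras.Model(inputs, [class_out, known_out])
model.compile(optimizer='adam',
              loss={'classes': 'categorical_crossentropy',
                    'is_known': 'binary_crossentropy'})

# At inference time, ignore the softmax output when 'is_known' is below a threshold.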

TensorFlow simple_save() why are input/output dicts necessary?

Since all the variables/graph are loaded anyway, why am I required to provide inputs and outputs to tf.saved_model.simple_save()?
I tried loading a variable with get_tensor_by_name() that I hadn't specified in the inputs/outputs dictionaries, and it worked! So why won't it let me pass blank/None inputs/outputs and just grab my variables by their names?
When you specify the input and output tensors of your model, the inference graph is fully specified. Imagine a model that has a single input but two outputs. For instance, the model predicts the temperature for tomorrow and whether it will rain or not. Maybe I want to save an inference graph for a model that only gives me the temperature.
When you specify the ins and outs, TensorFlow knows which layers connect them. The reason why get_tensor_by_name() worked in your case is probably that you fetched a layer that connects your inputs to your outputs.
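For reference, the dictionaries passed to simple_save are just name-to-tensor mappings that pin down which part of the graph forms the inference path; something like the following sketch, where input_tensor and temp_tensor are placeholder handles for your own tensors (TF 1.x API):

import tensorflow as tf

with tf.Session() as sess:
    # ... build or restore your model here ...
    tf.saved_model.simple_save(
        sess,
        export_dir='./export',
        inputs={'features': input_tensor},     # the tensor to feed at inference time
        outputs={'temperature': temp_tensor})  # only the output you care about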

Using different optimizers to train the same layer in tensorflow

I have a model which consists of convolutional layers followed by fully connected layers. I trained this model on the FER dataset. This is a classification problem where the number of outputs is equal to 8.
After training this model, I kept the fully connected layers and replaced only the last layer with a new one that has 3 outputs. The purpose was to fine-tune the fully connected layers along with training the new output layer.
So I used one optimizer at the beginning to train the whole model. Then I created a new optimizer to fine-tune the fully connected layers along with training the last layer.
As a result, I got the following error:
ValueError: Variable Dense/dense/bias/Adam/ already exists,
I know the reason for this error: the second optimizer was trying to create variables for updating the weights using the same names, because variables with those names were already created by the first optimizer.
Hence, I would like to know how to fix this problem. Is there a way to delete the variables associated with the first optimizer?
Any help is much appreciated!!
This is probably caused by both optimizers using the (same) default name 'Adam'. To avoid this clash, you can give the second optimizer a different name, e.g.
opt_finetune = tf.train.AdamOptimizer(name='Adam_finetune')
This should make opt_finetune create its variables under different names. Please let us know whether this works!
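A sketch of how the two optimizers might be set up, assuming TF 1.x; the loss tensor and the variable scope names 'fully_connected' and 'new_output' are placeholders for your own graph:

import tensorflow as tf

# Optimizer used for the initial end-to-end training (default name 'Adam').
opt_full = tf.train.AdamOptimizer(name='Adam')
train_full = opt_full.minimize(loss)

# Second optimizer with a distinct name, restricted to the layers being fine-tuned.
finetune_vars = (tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='fully_connected')
                 + tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='new_output'))
opt_finetune = tf.train.AdamOptimizer(name='Adam_finetune')
train_finetune = opt_finetune.minimize(loss, var_list=finetune_vars)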

Changing a trained network to keep only a subset of its output

Suppose I have a trained TensorFlow classification network for 20 classes as in PASCAL VOC 2007: aeroplane, bicycle, ..., car, cat, ..., person, ..., tvmonitor.
Now, I would like to have a sub-network for only a subset of the classes, e.g., 3 classes: car, cat, person.
Then, I can use this network for testing or for re-training/fine-tuning on a new dataset, only for the 3 classes.
It should be possible to extract this sub-network out of the original network, since it is only the last layer that will change. We just need to discard the neurons/weights for the removed classes.
My question: Is there an easy way to do this in TensorFlow?
It will be great if you can point to some sample code or similar solution.
I have googled, but have not come across any mention of this.
The symmetric problem, expanding the number of classes without discarding the original weights, can potentially be useful for some people, but my current focus is the one above.
If you want to keep the output for only a few classes, you could simply extract the corresponding slices from the last layer.
For example, let's assume the last layer is fully connected. Its weights are a tensor of size num_previous x num_output.
You want to keep only a few of these outputs, say outputs 1, 22, and 42. You can get the weights of your new fully connected layer as:
outputs_to_keep = [1, 22, 42]
new_W = tf.transpose(tf.gather(tf.transpose(old_W), outputs_to_keep))
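If the last layer also has a bias, the same gather works on it; here old_b and previous_layer are placeholders for the original layer's bias variable and its input activations:

# Keep the biases that match the retained outputs.
new_b = tf.gather(old_b, outputs_to_keep)
# Hook the sliced weights and bias back up to the previous layer's activations.
new_logits = tf.matmul(previous_layer, new_W) + new_b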
It is possible to extract a pretrained subnet as you said. This is called transfer learning. There are different ways to do it; here is one:
Find the layer you want to start with. You can use TensorBoard to find it and then use graph.get_tensor_by_name(). Usually you keep the convolutional layers and discard the fully connected ones.
Connect your new layers (normally fully connected ones) to the previous layer.
Freeze the variables (weights) of the pretrained layers using trainable=False. Alternatively, you can instruct the optimizer to update only the weights from the new layers.
Train your model with the new classes.
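A rough TF 1.x sketch of those steps; the tensor name 'conv5/Relu:0', the scope 'new_head', and the labels tensor are made up for illustration and should be replaced with the names from your own graph:

import tensorflow as tf

graph = tf.get_default_graph()
# 1. Grab the layer you want to build on (find its name with TensorBoard).
features = graph.get_tensor_by_name('conv5/Relu:0')

# 2. Attach new fully connected layers for the 3 new classes.
with tf.variable_scope('new_head'):
    flat = tf.layers.flatten(features)
    logits = tf.layers.dense(flat, 3)

# 'labels' is a placeholder for your new dataset's integer class labels.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# 3. Only update the new head's variables, leaving the pretrained weights frozen.
new_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='new_head')
train_op = tf.train.AdamOptimizer().minimize(loss, var_list=new_vars)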