False prediction from efficientnet transfer learning - tensorflow

I'm new to transfer learning in TensorFlow and I chose TF Hub to simplify finding a pretrained model, but now I'm confused because my model gives a wrong prediction when I try it on an image from the internet. I used the efficientnet_v2_imagenet1k_b0 feature vector without fine-tuning and trained it on the rock-paper-scissors dataset from https://www.kaggle.com/drgfreeman/rockpaperscissors. I used ImageDataGenerator with flow_from_directory for data processing.
This is my model here
This is my train result here
This is my test result here
It's the second time I've gotten something like this when using transfer learning with TF Hub. I want to know why this happens and how to fix it, so this problem doesn't happen again. Thanks a lot for your help, and sorry for my bad English.

I downloaded your code and the dataset to my local machine and had to make a few adjustments to run it locally. I believe the model efficientnet_v2_imagenet1k_b0 differs from the newer EfficientNet models in that this version DOES require pixel values to be scaled between 0 and 1. I ran the model with and without rescaling, and it works well only if the pixels are rescaled. Below is the code I used to test whether the model correctly predicts an image downloaded from the internet. It worked as expected.
import cv2
import numpy as np
import matplotlib.pyplot as plt

class_dict = train_generator.class_indices  # class name -> index, from the training generator
print(class_dict)
rev_dict = {}
for key, value in class_dict.items():
    rev_dict[value] = key  # index -> class name
print(rev_dict)
fpath = r'C:\Temp\rps\1.jpg'  # an image downloaded from the internet that should be the paper class
img = plt.imread(fpath)
print(img.shape)
img = cv2.resize(img, (224, 224))  # resize to 224 x 224, the size the model was trained on
print(img.shape)
plt.imshow(img)
img = img / 255.0  # rescale as was done with the training images
img = np.expand_dims(img, axis=0)
print(img.shape)
p = model.predict(img)
print(p)
index = np.argmax(p)
print(index)
klass = rev_dict[index]
prob = p[0][index] * 100
print(f'image is of class {klass}, with probability of {prob:6.2f}')
the results were
{'paper': 0, 'rock': 1, 'scissors': 2}
{0: 'paper', 1: 'rock', 2: 'scissors'}
(300, 300, 3)
(224, 224, 3)
(1, 224, 224, 3)
[[9.9902594e-01 5.5121275e-04 4.2284720e-04]]
0
image is of class paper, with probability of 99.90
You had this in your code:
uploaded = files.upload()
len_file = len(uploaded.keys())
This did not run locally because files was not defined (it comes from google.colab), so I could not track down what causes your misclassification problem.
Remember that in flow_from_directory, if you do not specify the color mode it defaults to rgb. So even though the training images are 4-channel PNGs, the actual model is trained on 3 channels. Make sure the images you want to predict on are 3-channel as well.
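For example, here is a minimal sketch, assuming Pillow is installed, of forcing a downloaded image down to 3 RGB channels before predicting (the file path is just a placeholder):
from PIL import Image
import numpy as np

img = Image.open(r'C:\Temp\rps\1.png').convert('RGB')  # drop the alpha channel of an RGBA PNG
img = img.resize((224, 224))                           # match the training image size
img = np.asarray(img, dtype=np.float32) / 255.0        # rescale as during training
img = np.expand_dims(img, axis=0)                      # add the batch dimension
print(img.shape)                                       # (1, 224, 224, 3)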

To really help, I would need to see the code for how you provide your data to model.predict. As a guess, though: remember that EfficientNet needs pixels in the range 0 to 255, so do not rescale your images. Make sure your test images are RGB and of the same size as the image size used in training. I would also need to see the code for how you process the predictions.
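If your version of the model does expect raw 0-255 pixels, the preparation step is the same as above except that the image is not divided by 255. A rough sketch (model is the trained model from the question; the file name is a placeholder):
import cv2
import numpy as np

img = cv2.cvtColor(cv2.imread('test.jpg'), cv2.COLOR_BGR2RGB)  # ensure 3-channel RGB
img = cv2.resize(img, (224, 224))                              # same size as the training images
img = np.expand_dims(img.astype('float32'), axis=0)            # keep pixels in 0-255, add batch dim
p = model.predict(img)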

Related

Why we have target_size for DeepLab while CNN can accept any sizes?

I still have not understood a concept. One reason we use a fully convolutional layer at the end of a CNN is to handle different image sizes during training. My question is: if this is the case, why do we always crop or squeeze images to square sizes at the input? Please do not say the question is repeated, that we use square images to make things easier, check pyramid pooling, and so on.
For example, here's a link.
DeepLab can accept images of different sizes, but in its code there is a target_size of 513. Now, if a CNN can accept images of different sizes, why do we need target_size? If this is for converting images into a standard format, why 513?
During training, we have to specify a batch size. What is our batch size in this case: (5, None, None, None)? Is it possible to have images of different sizes in a batch?
I read many posts and I am still confused by these questions:
- How can we train a model on images of different sizes (imagine that the sizes are standard)? I see some code use a batch size of one; I don't think that is a solution.
- Is there any snippet of code that shows how we can define batches for a model like FCN so that it accepts a dataset with different sizes?
- In this paper (here's a link) my problem was explained, but the authors again resized images to a square format. If we can use batches comprising images of different sizes, why did they propose the idea of using square images between 180x180 and 224x224?
Has DeepLab used this part (link) to bring images into a standard format, or is it for another reason?
width, height = image.size
resize_ratio = 1.0 * 513 / max(width, height)
target_size = (int(resize_ratio * width), int(resize_ratio * height))
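For reference, a minimal sketch of how that aspect-ratio-preserving resize could be applied with Pillow, 513 being DeepLab's default crop size (the input file name is a placeholder):
from PIL import Image

image = Image.open('input.jpg').convert('RGB')  # any input size
width, height = image.size
resize_ratio = 1.0 * 513 / max(width, height)
target_size = (int(resize_ratio * width), int(resize_ratio * height))
resized = image.resize(target_size, Image.LANCZOS)
# the longer side is now 513 pixels and the aspect ratio is unchanged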
I could not find the place in their code where they train the model on the PASCAL dataset.
I expected to find a simple Keras or TensorFlow example that clearly shows we can apply a CNN model such as FCN or DeepLab to a dataset like PASCAL VOC2012 (for segmentation) with images of different sizes, without any resizing or cropping. I am still looking.
Thank you in advance for detailed answers. Please do not repeat answers like: you can use a batch size of one, square images are common and better, you can add black margins to the images, the fully connected layer is the problem, you can use global max pooling, and so on. I am looking for code that works on images of different sizes.
I could not find the place in the DeepLab model on TensorFlow's GitHub where it accepts batches of different sizes (here).
Also, here FCN is trained on the COCO dataset with a target_size of 320x320. Why? FCN should accept any size.
Also, could someone explain how we can have a batch of images of different sizes? Could we have a numpy array of differently sized images? Batch = [5, None, None, 3], each of the 5 with a different size.
I also found another confusing part in semantic segmentation: using Keras augmentation, we cannot augment an image with more than 4 channels. Does that mean that, using Keras augmentation, we cannot train on the PASCAL dataset with 21 channels?

Resnet50 image preprocessing

I am using https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/3 to extract image feature vectors. However, I'm confused when it comes to how to preprocess the images prior to passing them through the module.
Based on the related Github explanation, it's said that the following should be done:
import tensorflow as tf

image_path = "path/to/the/jpg/image"
image_string = tf.read_file(image_path)
image = tf.image.decode_jpeg(image_string, channels=3)
image = tf.image.convert_image_dtype(image, tf.float32)
# All other transformations (during training), in my case:
image = tf.random_crop(image, [224, 224, 3])
image = tf.image.random_flip_left_right(image)
# During testing:
image = tf.image.resize_image_with_crop_or_pad(image, 224, 224)
However, using the aforementioned transformation, the results I am getting suggest that something might be wrong. Moreover, the ResNet paper says that the images should be preprocessed by:
A 224×224 crop is randomly sampled from an image or its
horizontal flip, with the per-pixel mean subtracted...
which I can't quite understand. Can someone point me in the right direction?
Looking forward to your answers!
The image modules on TensorFlow Hub all expect pixel values in range [0,1], like you get in your code snippet above. This makes it easy and safe to switch between modules.
Inside the module, the input values are scaled to the range that the network was trained for. The module https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/3 has been published from a TF-Slim checkpoint (see documentation), which uses yet another convention for normalizing inputs than He&al. -- but all this is taken care of.
To demystify the language in He&al.: it refers to the mean R, G and B values aggregated over all pixels of the dataset they studied, following the old wisdom that normalizing inputs to zero mean helps neural networks train better. However, later papers on image classification no longer expended this degree of attention to dataset-specific preprocessing.
The citation from the Resnet paper you mentioned is based on the following explanation from the Alexnet paper:
ImageNet consists of variable-resolution images, while our system requires a constant input dimensionality. Therefore, we down-sampled the images to a fixed resolution of 256×256. Given a rectangular image, we first rescaled the image such that the shorter side was of length 256, and then cropped out the central 256×256 patch from the resulting image. We did not pre-process the images in any other way, except for subtracting the mean activity over the training set from each pixel.
So in the ResNet paper, a similar process consists in taking a 224×224-pixel crop of the image (or of its horizontally flipped version) to ensure the network is given constant-sized inputs, and then centering it by subtracting the per-pixel mean.
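In numpy terms, that recipe boils down to something like the sketch below. The per-channel mean values here are the commonly quoted ImageNet RGB means, used only as an illustration; the paper subtracts the mean computed over its own training set.
import numpy as np

def preprocess(image, crop=224):
    # image: float32 array of shape (H, W, 3), with H and W >= crop, pixels in 0-255
    # during training, randomly sample a crop from the image or its horizontal flip
    if np.random.rand() < 0.5:
        image = image[:, ::-1, :]
    top = np.random.randint(0, image.shape[0] - crop + 1)
    left = np.random.randint(0, image.shape[1] - crop + 1)
    image = image[top:top + crop, left:left + crop, :]
    # subtract the per-channel mean of the training set (illustrative ImageNet values)
    mean = np.array([123.68, 116.78, 103.94], dtype=np.float32)
    return image - mean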

MNIST-like issue. Convolutional Neural Network

This should be easy for some, but I'm a bit new to TensorFlow and all my research has brought me to multi-thousand-line repos, so I'm curious whether there is a simpler alternative for a beginner. I have an idea that inputs a 200x260 color image and outputs a one-hot vector between 1 and 10. I realize it is very similar to MNIST, but TensorFlow does not have any documentation on how the mnist library turns its images into a usable form. Does anybody have any ideas on how to turn a folder of about 200 images (yes, I know, small) into a usable form? I already have my one-hot vectors. Also, I set my placeholder shape as tf.placeholder(tf.float32, [None, 200, 260, 3]). Would that work? I would really prefer to keep the color information as well. Thanks for any tips!
First, you can import all of your images using imread from skimage
For example:
my_image = skimage.io.imread('./path/myimage.png')
Then, if they are already the size you want (200x260), you can normalize them by dividing by 255 (so values fall between 0 and 1). If not, you can use resize from skimage; it will resize and rescale the images for you in one step.
For example
my_image = skimage.transform.resize(my_image, (200, 260))
To visualize it, you can use imshow from matplotlib.pyplot to plot the image.
For a convenient next_batch function that grabs the next batch, like the one built into TensorFlow's MNIST helper, you can use the following code:
i = 0
def next_batch(batch_size):
    global i  # keep the current position across calls
    x = training_images[i:i + batch_size]
    y = training_labels[i:i + batch_size]
    i = (i + batch_size) % len(training_images)
    return x, y
Then you can create your CNN and train on the images. The placeholder you created for X looks right.
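For completeness, a minimal sketch of such a network on that placeholder, written in TF1 style; the layer sizes are arbitrary and only for illustration:
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 200, 260, 3])  # images, as in the question
y = tf.placeholder(tf.float32, [None, 10])           # one-hot labels

conv = tf.layers.conv2d(x, filters=32, kernel_size=3, activation=tf.nn.relu)
pool = tf.layers.max_pooling2d(conv, pool_size=2, strides=2)
flat = tf.layers.flatten(pool)
logits = tf.layers.dense(flat, 10)                   # 10 classes, matching the one-hot vectors

loss = tf.losses.softmax_cross_entropy(onehot_labels=y, logits=logits)
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)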
I also struggled with this in the beginning, but the best way I know of to get data into TensorFlow is to convert your images into the TFRecord format, especially if you have a large dataset that doesn't fit into RAM. That way TensorFlow can load your data as needed (you need to provide input functions to convert the files back).
Although this might not be, and certainly isn't, the easiest way, it would probably be the best in the long run in case you want to add more images.
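If you go the TFRecord route, a minimal TF1-style sketch for writing image/label pairs could look like this; image_paths and labels are placeholder lists of file names and integer class ids:
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

with tf.python_io.TFRecordWriter('train.tfrecords') as writer:
    for path, label in zip(image_paths, labels):  # placeholder lists
        with open(path, 'rb') as f:
            example = tf.train.Example(features=tf.train.Features(feature={
                'image_raw': _bytes_feature(f.read()),
                'label': _int64_feature(label),
            }))
        writer.write(example.SerializeToString())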
The easiest way would just be to load your images using Pillow or any other image library (I'm assuming you're using TensorFlow with Python) and hand them over to TensorFlow when running your session.
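For that simpler route, loading a folder of equally sized images with Pillow into one numpy array could look roughly like this; the directory path is a placeholder:
import os
import numpy as np
from PIL import Image

image_dir = './images'  # placeholder folder containing the ~200 images
images = []
for name in sorted(os.listdir(image_dir)):
    img = Image.open(os.path.join(image_dir, name)).convert('RGB')
    images.append(np.asarray(img, dtype=np.float32) / 255.0)
images = np.stack(images)  # shape (num_images, height, width, 3)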

How to test your model on Retinanet?

I am using the Retinanet model to train a classifier with about 50 classes. Link to the model: https://github.com/fizyr/keras-retinanet
This is what I have done so far:
Installed the model using the suggested steps.
Created a csv of my images with the recommended format for reading.
Used the following script to train my model:
# Using the installed script:
retinanet-train csv <path to csv file containing annotations> <path to csv file containing classes>
The model is currently running and training with about 50 epochs and 10000 steps in each epoch. I see the losses going down and it should take about a day to finish the training.
How do I proceed now with:
a. Testing my model? The example given here:
An example of testing the network can be seen in this notebook (the link on the website is dead; https://github.com/fizyr/keras-retinanet/blob/master/examples/ResNet50RetinaNet.ipynb seems to be the appropriate one). In general, output can be retrieved from the network as follows:
_, _, detections = model.predict_on_batch(inputs)
Where detections are the resulting detections, shaped (None, None, 4 + num_classes) (for (x1, y1, x2, y2, cls1, cls2, ...)).
Loading models can be done in the following manner:
import keras
from keras_retinanet.models.resnet import custom_objects

model = keras.models.load_model('/path/to/model.h5', custom_objects=custom_objects)
Execution time on NVIDIA Pascal Titan X is roughly 55msec for an image of shape 1000x600x3.
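A rough sketch of how the detections array described above could be post-processed, assuming the (None, None, 4 + num_classes) layout quoted here; the 0.5 confidence threshold is just an arbitrary example:
import numpy as np

_, _, detections = model.predict_on_batch(inputs)  # model and inputs as in the snippet above

boxes = detections[0, :, :4]   # (x1, y1, x2, y2) for each detection
scores = detections[0, :, 4:]  # one score per class for each detection

for box, class_scores in zip(boxes, scores):
    label = int(np.argmax(class_scores))  # best-scoring class
    score = class_scores[label]
    if score < 0.5:                       # arbitrary confidence threshold
        continue
    print(label, score, box)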
Now, there is one step listed that I did not do while training my model:
Create generators for training and testing data (an example is shown in keras_retinanet.preprocessing.PascalVocGenerator).
Am I missing something?
Again, sorry for the multi-fold questions and thank you for helping me out.
If by testing you mean running your own image through the network, have a look at the new example. All it does is set up the environment, load the model, load and prepare an image, and visualize the results.
https://github.com/fizyr/keras-retinanet/blob/master/examples/ResNet50RetinaNet.ipynb
Is there an issue with that example? Or is it not clear?

Neural Network with my own dataset

I have downloaded many face images from the web. In order to learn TensorFlow I want to feed those images to a simple fully-connected neural network with a single hidden layer. I have found example code here.
Since I am a beginner, I don't know how to train, evaluate, and test the network with the downloaded images. The code's author used a '.mat' file and a '.pkl' file, and I don't understand how he organized the training and test sets.
In order to run the code with my images;
Do I need to divide my images into training, test, and validation folders and turn each folder into a .mat file? How am I going to provide labels for the training?
Besides, I don't understand why he used a '.pkl' file.
All in all, I would like to change this code so that I can measure test, training, and validation set classification performance with my image dataset.
It might be an easy question, but it is important for me as it is a starting step. Thanks for your understanding.
First, you don't have to use .mat files or pickles. TensorFlow expects numpy arrays.
For instance, let's say you have 70000 images of size 28x28 (=784 dimensions) belonging to 10 classes. Let's also assume that you'd like to train a simple feedforward neural network to classify the images.
The first step would be to split the images between train and test (and validation, but let's put this aside for the sake of simplicity). For the example, let's imagine that you randomly chose 60000 images for your training set and 10000 for your test set.
The second step would be to ensure that your data has the right format. Here, you'd like your training set to consist of one numpy array of shape (60000, 784) for the images and another of shape (60000, 10) for the labels (if you use one-hot encoding to represent your classes). As for your test set, you should have an array of shape (10000, 784) for the images and one of shape (10000, 10) for the labels.
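As a sketch, assuming the images are already flattened into an array X of shape (70000, 784) and the integer class labels are in an array y of shape (70000,), such a split with one-hot labels could be built like this:
import numpy as np

num_classes = 10
perm = np.random.permutation(len(X))              # X and y are assumed to hold the data
train_idx, test_idx = perm[:60000], perm[60000:]

train_images, test_images = X[train_idx], X[test_idx]
train_labels = np.eye(num_classes, dtype=np.int64)[y[train_idx]]  # one-hot, shape (60000, 10)
test_labels = np.eye(num_classes, dtype=np.int64)[y[test_idx]]    # one-hot, shape (10000, 10)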
Once you have these big numpy arrays, you should define placeholders that will allow you to feed data to your network during training and evaluation.
images = tf.placeholder(tf.float32, shape=[None, 784])
labels = tf.placeholder(tf.int64, shape=[None, 10])
The None here means that you can feed a batch of any size, i.e. as many images as you want, as long as your numpy array is of shape (anything, 784).
The third step consists in defining your model as well as the loss function and the optimizer.
The fourth step consists in training your network by feeding it random batches of data using the placeholders created above. As your network trains, you can periodically print its performance, such as the training loss/accuracy as well as the test loss/accuracy.
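Putting the third and fourth steps together, a bare-bones sketch of such a training loop on the placeholders above might look like this; the model here is a single dense softmax layer, purely for illustration:
import numpy as np
import tensorflow as tf

# images, labels are the placeholders defined above;
# train_images, train_labels come from the split sketched earlier
logits = tf.layers.dense(images, 10)
loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
accuracy = tf.reduce_mean(
    tf.cast(tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1)), tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        idx = np.random.choice(len(train_images), 100)  # random batch of 100 images
        feed = {images: train_images[idx], labels: train_labels[idx]}
        sess.run(train_op, feed_dict=feed)
        if step % 100 == 0:
            print(step, sess.run([loss, accuracy], feed_dict=feed))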
You can find a complete and very simple example here.