I am using https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/3 to extract image feature vectors. However, I'm confused about how to preprocess the images before passing them through the module.
Based on the related GitHub explanation, the following should be done:
image_path = "path/to/the/jpg/image"
image_string = tf.read_file(image_path)
image = tf.image.decode_jpeg(image_string, channels=3)
image = tf.image.convert_image_dtype(image, tf.float32)
# All other transformations (during training), in my case:
image = tf.random_crop(image, [224, 224, 3])
image = tf.image.random_flip_left_right(image)
# During testing:
image = tf.image.resize_image_with_crop_or_pad(image, 224, 224)
However, using the aforementioned transformations, the results I am getting suggest that something might be wrong. Moreover, the ResNet paper says that the images should be preprocessed by:
A 224×224 crop is randomly sampled from an image or its
horizontal flip, with the per-pixel mean subtracted...
which I can't quite make sense of. Can someone point me in the right direction?
Looking forward to your answers!
The image modules on TensorFlow Hub all expect pixel values in the range [0,1], like you get in your code snippet above. This makes it easy and safe to switch between modules.
Inside the module, the input values are scaled to the range that the network was trained for. The module https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/3 has been published from a TF-Slim checkpoint (see its documentation), which uses yet another convention for normalizing inputs than He et al. -- but all this is taken care of for you.
To demystify the language in He et al.: it refers to the mean R, G and B values aggregated over all pixels of the dataset they studied, following the old wisdom that normalizing inputs to zero mean helps neural networks train better. However, later papers on image classification no longer expend this degree of attention to dataset-specific preprocessing.
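In practice, that means you can feed the decoded image (scaled to [0,1], as in the snippet above) straight into the module. Here is a minimal sketch, assuming the TF 1.x hub.Module API that this module version uses:
import tensorflow as tf
import tensorflow_hub as hub
# Minimal sketch: the module consumes float images in [0, 1];
# any model-specific rescaling happens inside the module.
module = hub.Module("https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/3")
images = tf.placeholder(tf.float32, [None, 224, 224, 3])  # values already in [0, 1]
features = module(images)  # a [None, 2048] batch of feature vectors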
The citation from the ResNet paper you mentioned is based on the following explanation from the AlexNet paper:
ImageNet consists of variable-resolution images, while our system requires a constant input dimensionality. Therefore, we down-sampled the images to a fixed resolution of 256×256. Given a rectangular image, we first rescaled the image such that the shorter side was of length 256, and then cropped out the central 256×256 patch from the resulting image. We did not pre-process the images in any other way, except for subtracting the mean activity over the training set from each pixel.
So in the ResNet paper, a similar process consists in taking a 224×224-pixel crop of the image (or of its horizontally flipped version) to ensure the network is given constant-sized inputs, and then centering it by subtracting the per-pixel mean.
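For completeness, here is a rough sketch of that classical preprocessing in TF 1.x ops, matching the snippet in the question. It is only relevant if you train a network from scratch; the TF Hub module handles its own input scaling. The MEAN_RGB values below are the commonly quoted ImageNet per-channel means and are an assumption, not something given in the question:
import tensorflow as tf
MEAN_RGB = tf.constant([123.68, 116.78, 103.94])    # assumed ImageNet per-channel means (0-255 scale)
image = tf.image.decode_jpeg(tf.read_file("path/to/the/jpg/image"), channels=3)
image = tf.cast(image, tf.float32)                   # keep the 0-255 scale
image = tf.random_crop(image, [224, 224, 3])         # random 224x224 crop ...
image = tf.image.random_flip_left_right(image)       # ... of the image or its horizontal flip
image = image - MEAN_RGB                             # subtract the mean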
I'm new to transfer learning in TensorFlow, and I chose tfhub to simplify finding a dataset, but now I'm confused because my model gives me a wrong prediction when I try to use an image from the internet. I used the efficientnet_v2_imagenet1k_b0 feature vector without fine-tuning to train on a rock-paper-scissors dataset from https://www.kaggle.com/drgfreeman/rockpaperscissors. I used ImageDataGenerator and flow_from_directory for data processing.
This is my model here
This is my train result here
This is my test result here
It's the second time I have gotten something like this when using transfer learning with tfhub. I want to know why this happened and how to fix it, so this problem doesn't happen again. Thanks a lot for your help, and sorry for my bad English.
I downloaded your code and the dataset to my local machine and had to make a few adjustments to run it locally.
I believe the model efficientnet_v2_imagenet1k_b0 differs from the newer EfficientNet models in that this version DOES require pixel values to be scaled between 0 and 1. I ran the model with and without rescaling, and it works well only if the pixels are rescaled. Below is the code I used to test whether the model correctly predicts an image downloaded from the internet. It worked as expected.
import cv2
import numpy as np
import matplotlib.pyplot as plt
class_dict=train_generator.class_indices  # model and train_generator come from your training code
print (class_dict)
rev_dict={}
for key, value in class_dict.items():
    rev_dict[value]=key
print (rev_dict)
fpath=r'C:\Temp\rps\1.jpg' # an image downloaded from internet that should be paper class
img=plt.imread(fpath)
print (img.shape)
img=cv2.resize(img, (224,224)) # resize to 224 X 224 to be same size as model was trained on
print (img.shape)
plt.imshow(img)
img=img/255.0 # rescale as was done with training images
img=np.expand_dims(img,axis=0)
print(img.shape)
p=model.predict(img)
print (p)
index=np.argmax(p)
print (index)
klass=rev_dict[index]
prob=p[0][index]* 100
print (f'image is of class {klass}, with probability of {prob:6.2f}')
the results were
{'paper': 0, 'rock': 1, 'scissors': 2}
{0: 'paper', 1: 'rock', 2: 'scissors'}
(300, 300, 3)
(224, 224, 3)
(1, 224, 224, 3)
[[9.9902594e-01 5.5121275e-04 4.2284720e-04]]
0
image is of class paper, with probability of 99.90
You had this in your code
uploaded = files.upload()
len_file = len(uploaded.keys())
This did not run because files was not defined, so I could not find what causes your misclassification problem.
Remember that in flow_from_directory, if you do not specify the color mode, it defaults to 'rgb'. So even though the training images are 4-channel PNGs, the actual model is trained on 3 channels. Make sure the images you want to predict are 3-channel as well; a short sketch of this check follows below.
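Here is a minimal sketch of that check (the filename is just a placeholder); it converts a downloaded PNG to the same 3-channel RGB layout and size used during training:
from PIL import Image
import numpy as np
img = Image.open("some_test_image.png").convert("RGB")  # drops a PNG alpha channel
img = np.asarray(img.resize((224, 224)))                # match the training image size
print(img.shape)                                        # (224, 224, 3)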
To really help, I need to see the code for how you provide your data to model.predict. However, as a guess, remember that EfficientNet needs the pixels in the range from 0 to 255, so do not scale your images. Make sure your test images are RGB and of the same size as the image size used in training. I also need to see the code for how you process the predictions.
I still have not understood a concept. One reason we use a fully convolutional layer at the end of a CNN is to handle different image sizes during training. My question is: if this is the case, why do we always crop or squeeze images into square sizes at the input? Please do not answer that the question is a duplicate, that we use square images to make things easier, that I should check pyramid pooling, and so on.
For example, Here's a link
DeepLab can accept images of different sizes, but in its code there is a target_size of 513. Now, if a CNN can accept images of different sizes, why do we need to use target_size? If this is for converting images into a standard format, why 513?
During training, we should specify a batch size. What is our batch_size in this case: (5, None, None, None)? Is it possible to have images of different sizes in a batch?
I read many posts and still, I am confused with these questions:
- How can we train a model on images of different sizes (imagine the sizes are standard)? I see that some code uses a batch size of one; I do not think that is a solution.
- Is there any snippet of code that shows how we can define batches for a model like an FCN so that it accepts a dataset with images of different sizes?
- In this paper: Here's a link my problem is explained, but the authors again resized the images into a square format. If we can use batches comprising images of different sizes, why did they propose the idea of using square images between 180×180 and 224×224?
Has DeepLab used this part: link to convert images into a standard format, or for some other reason?
width, height = image.size
resize_ratio = 1.0 * 513 / max(width, height)
target_size = (int(resize_ratio * width), int(resize_ratio * height))
I could not find the place in their code where they train the model on the PASCAL dataset.
I expected to find simple Keras or TensorFlow code that clearly shows we can apply a CNN model such as FCN or DeepLab to a dataset such as PASCAL VOC2012 (for segmentation) with images of different sizes, without any resizing or cropping. I am still looking.
Thank you in advance for detailed answers. Please do not repeat answers like "you can use a batch size of one", "square images are common and better", "you can add black margins to the images", "the fully connected layer is the problem", "you can use global max pooling", and so on. I am looking for code that works on images of different sizes.
I could not find the place in the DeepLab model on TensorFlow's GitHub where it accepts batches of different sizes (here).
Also, in here FCN is trained on the COCO dataset with a target_size of 320 by 320. Why? An FCN should accept any size.
Also, could someone explain to me how we can have a batch of images with different sizes? Could we have an np array of differently sized images, i.e. a batch of shape [5, None, None, 3] where each of the 5 images has a different size?
I also found another confusing part in semantic segmentation: using Keras augmentation, we cannot augment an image with more than 4 channels. Does that mean that, using Keras augmentation, we cannot train on the PASCAL dataset with 21 channels?
I have found the following expression for the SSD MultiBox loss function:
multibox_loss = confidence_loss + alpha * location_loss
Can someone explain what those terms mean?
SSD MultiBox (short for Single Shot MultiBox Detector) is a neural network that can detect and locate objects in an image in a single forward pass. The network is trained in a supervised manner on a dataset of images where a bounding box and a class label are given for each object of interest. The loss term
multibox_loss = confidence_loss + alpha * location_loss
is made up of two parts:
Confidence loss is a categorical cross-entropy loss for classifying the detected objects. The purpose of this term is to make sure that the correct label is assigned to each detected object.
Location loss is a regression loss (either the smooth L1 or the L2 loss) on the parameters (width, height and corner offsets) of the detected bounding box. The purpose of this term is to make sure that the correct region of the image is identified for the detected objects. The alpha term is a hyperparameter used to scale the location loss.
The precise formulation of the loss is given in Equation 1 of the SSD: Single Shot MultiBox Detector paper.
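To make the combination concrete, here is a minimal sketch (not the reference SSD implementation) in TF 1.x style; conf_logits, class_labels, loc_preds, loc_targets and alpha are assumed names for whatever your matching pipeline produces:
import tensorflow as tf
def multibox_loss(conf_logits, class_labels, loc_preds, loc_targets, alpha=1.0):
    # Confidence loss: categorical cross-entropy over the class predictions.
    confidence_loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=class_labels, logits=conf_logits))
    # Location loss: smooth L1 (Huber) loss on the predicted box parameters.
    location_loss = tf.losses.huber_loss(loc_targets, loc_preds)
    return confidence_loss + alpha * location_loss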
When implementing reinforcement learning with TensorFlow, the inputs are black/white images. Each pixel can be represented as a bit, 1/0.
Can I give the data directly to TensorFlow, with each bit as a feature? Or do I have to expand the bits to bytes before sending them to TensorFlow? I'm new to TensorFlow, so some code example would be nice.
Thanks
You can directly load the image data as you normally would; the image being binary has no effect other than the input channel width becoming 1.
Whenever you put an image through a convnet, each output filter generally learns features across all input channels, so there is a separate 2D kernel defined for each input-channel/output-channel combination; with a binary image there is only 1 input channel in the first layer.
A layer is defined by its number of filters, and there is one 2D kernel per input channel for each filter, so you will have weights/parameters equal to input_channels * number_of_filters * filter_height * filter_width; here input_channels is one for the first layer. A small parameter-count check is sketched below.
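Here is that parameter count, using tf.keras purely for illustration (not part of the original answer):
import tensorflow as tf
layer = tf.keras.layers.Conv2D(filters=6, kernel_size=(3, 3))
layer.build(input_shape=(None, 28, 28, 1))  # 1 input channel, e.g. a binary image
# weights = input_channels * number_of_filters * filter_h * filter_w + biases
#         = 1 * 6 * 3 * 3 + 6 = 60
print(layer.count_params())  # 60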
Since you asked for some sample code.
Let your image batch be in a tensor X; simply use
X_out = tf.layers.conv2d(X, filters=6, kernel_size=[height, width])  # height/width of the filter; TF 1.x layers API
After that you can apply an activation; this will make your output image have 6 channels. If you face any problems or have doubts, feel free to comment. For theoretical clarification, check out https://www.coursera.org/learn/convolutional-neural-networks/lecture/nsiuW/one-layer-of-a-convolutional-network
Edit
Since the question was about a simple neural net, not a convnet, here is the code for that.
X_train_orig is the variable in which the images are stored at (n_x, n_x) resolution; n_x is used later.
You will need to flatten the input.
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
This first flattens each image into a row and then transposes the result so that each image is arranged as a column.
Then you will create placeholder tensor X as :
X = tf.placeholder(tf.float32,[n_x*n_x,None]) # feed the 0/1 bits as float32; the input tensor should have the same dimension as your input layer
Let W1, b1 be the weight and bias of the first layer, respectively.
Z1 = tf.add(tf.matmul(W1,X),b1) #Linear Transformation step
A1 = tf.nn.relu(Z1) #Activation Step
And you keep on building your graph. I think that answers your question; if not, let me know.
I found many methods in tf.image to resize images, but almost all of them crop, pad, or interpolate. Can the interpolation-based methods shrink images? I just want to shrink my images without cropping.
Thanks a lot!
Well, it seems you need tf.image.resize_images:
res = tf.image.resize_images(images, size, method=tf.image.ResizeMethod.BILINEAR, align_corners=False)
The default resize method is BILINEAR.
import numpy as np
import tensorflow as tf
val = np.random.rand(100, 70, 3)
x = tf.constant(val)
y = tf.image.resize_images(x, (30,30))
with tf.Session() as sess:
    a = sess.run(y)  # a has shape (30, 30, 3)
At the end of the day, images are matrices/tensors. In order to shrink an image directly, you will have to do some lossy compression, namely average or max pooling over groups of pixels (or matrix values), and then save the results out as pre-processed images for your model. Try looking into the TensorFlow max-pool and average-pool functions:
tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None)
tf.nn.avg_pool(value, ksize, strides, padding, data_format='NHWC', name=None)
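For example, here is a minimal sketch (TF 1.x style, matching the snippets above) that shrinks an image by a factor of 2 in each spatial dimension with average pooling instead of interpolation:
import numpy as np
import tensorflow as tf
img = tf.constant(np.random.rand(1, 100, 70, 3), dtype=tf.float32)  # NHWC, batch of 1
small = tf.nn.avg_pool(img, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
with tf.Session() as sess:
    out = sess.run(small)
print(out.shape)  # (1, 50, 35, 3)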
Depending on the application, you may just want to "shrink" images using a model that "creates features", such as a convolutional neural network (CNN), which you can then use for your model. However, you still have to process your original-sized images on every run.
Another, more advanced approach is to do some sort of decomposition, such as the SVD, to get a lower-dimensional representation of your images. However, most such methods are linear techniques and might not preserve everything exactly. Another method is to train an autoencoder and then save the lower-dimensional results out as the pre-processed data.
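As a rough illustration of the SVD idea (on a random stand-in for a grayscale image):
import numpy as np
img = np.random.rand(100, 70)                      # placeholder for a real grayscale image
U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 20                                             # keep the 20 largest singular values
img_approx = (U[:, :k] * s[:k]) @ Vt[:k, :]        # same shape, effectively rank 20
print(img_approx.shape)                            # (100, 70)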
Hope this helps!