InceptionV3 and transfer learning with tensorflow

I would like to do transfer learning from the InceptionV3 model provided in the tensorflow examples. Following the classify_image example and the operator and tensor names given here https://github.com/AKSHAYUBHAT/VisualSearchServer/blob/master/notebooks/notebook_network.ipynb I can create my graph. But when I put a batch of images of size (100, 299, 299, 3) into the pre-trained inception graph, I get the following shape error at the pool_3 layer:
ValueError: Cannot reshape a tensor with 204800 elements to shape [1, 2048] (2048 elements)
It seems that this InceptionV3 graph doesn't accept batches of images as input. Am I wrong?

Actually it works for transfer learning if you extract the right thing. There is no problem feeding a batch of images in the shape of [N, 299, 299, 3] as ResizeBilinear:0 and then using the pool_3:0 tensor. It's the reshaping afterwards that breaks, but you can reshape yourself (you'll have your own layers afterwards anyway). If you wanted to use the original classifier with a batch, you could add your own reshaping on top of pool_3:0 and then add the softmax layer, reusing the weights/biases tensors of the original softmax.
TLDR: With double_img being a stack of two images with shape (2, 299, 299, 3) this works:
pooled_2 = sess.graph.get_tensor_by_name("pool_3:0").eval(session=sess, feed_dict={'ResizeBilinear:0':double_img})
pooled_2.shape
# => (2, 1, 1, 2048)
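If you do want batch predictions from the original classifier, here is a minimal sketch of the "reshape yourself and reuse the original softmax" idea described above. The tensor names softmax/weights:0 and softmax/biases:0 are assumptions about the frozen classify_image graph; check them against [op.name for op in sess.graph.get_operations()] before relying on this:
import tensorflow as tf

with sess.graph.as_default():
    pooled = sess.graph.get_tensor_by_name("pool_3:0")      # shape (N, 1, 1, 2048)
    features = tf.reshape(pooled, [-1, 2048])               # shape (N, 2048)

    # Reuse the weights/biases of the original softmax layer.
    w = sess.graph.get_tensor_by_name("softmax/weights:0")  # assumed name, verify first
    b = sess.graph.get_tensor_by_name("softmax/biases:0")   # assumed name, verify first
    batch_logits = tf.nn.xw_plus_b(features, w, b)
    batch_probs = tf.nn.softmax(batch_logits)

probs = sess.run(batch_probs, feed_dict={'ResizeBilinear:0': double_img})
print(probs.shape)  # => (2, number_of_classes)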

You're not wrong. This seems like a very reasonable feature request, so I've opened a ticket for it on github. Follow that for updates.

Something like this should do it:
# These imports assume the TF-Slim image model library (tensorflow/models,
# research/slim) is on the Python path.
import tensorflow as tf
import tensorflow.contrib.slim as slim
from nets import inception

with g.as_default():
    inputs = tf.placeholder(tf.float32, shape=[batch_size, 299, 299, 3],
                            name='input')
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, end_points = inception.inception_v3(inputs,
                                                    num_classes=FLAGS.num_classes,
                                                    is_training=False)
    # exclude: list of variable scopes you do not want to restore
    variables_to_restore = slim.get_variables_to_restore(exclude=exclude)
    sess = tf.Session()
    saver = tf.train.Saver(variables_to_restore)
Then you should be able to call the operation:
sess.run("pool_3:0",feed_dict={'ResizeBilinear:0':images})

etarion made a very good point. However, we don't have to reshape it ourselves; instead, we can change the value of the shape tensor that the final Reshape op takes as input, i.e.:
input_tensor_name = 'import/input:0'
shape_tensor_name = 'import/InceptionV3/Predictions/Shape:0'
output_tensor_name = 'import/InceptionV3/Predictions/Reshape_1:0'

# Re-import the graph, feeding the image batch in place of the input tensor
# and overriding the shape used by the final Reshape op.
output_tensor = tf.import_graph_def(
    graph.as_graph_def(),
    input_map={input_tensor_name: image_batch,
               shape_tensor_name: [batch_size, num_class]},
    return_elements=[output_tensor_name])
These tensor names are based on inception_v3_2016_08_28_frozen.pb.
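For completeness, a rough sketch of how graph and image_batch could be wired up around that snippet (TF 1.x session API; the file path, batch_size=32 and num_class=1001 are illustrative assumptions):
import numpy as np
import tensorflow as tf  # TF 1.x style session API assumed

batch_size, num_class = 32, 1001  # illustrative values; 1001 ImageNet classes assumed

# Load the frozen graph and import it once so the tensors get the usual
# 'import/' prefix used in the names above. The file path is an assumption.
graph_def = tf.GraphDef()
with tf.gfile.GFile('inception_v3_2016_08_28_frozen.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def)  # default name='import'

# Re-import into a fresh graph, mapping in a batched placeholder and the new shape.
with tf.Graph().as_default():
    image_batch = tf.placeholder(tf.float32, [batch_size, 299, 299, 3])
    output_tensor, = tf.import_graph_def(
        graph.as_graph_def(),
        input_map={'import/input:0': image_batch,
                   'import/InceptionV3/Predictions/Shape:0': [batch_size, num_class]},
        return_elements=['import/InceptionV3/Predictions/Reshape_1:0'])

    with tf.Session() as sess:
        dummy = np.zeros((batch_size, 299, 299, 3), dtype=np.float32)
        print(sess.run(output_tensor, feed_dict={image_batch: dummy}).shape)
        # expected: (batch_size, num_class)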

Related

Resizing images in preprocessing for inception, how to use batches?

I'm trying to resize some images to use them with Inception. I want to do it as a separate preprocessing step to speed things up later. Running tf.image.resize on all of the images at once crashes, as does a loop.
I'd like to do it in batches, but I don't know how to do that without making it part of my neural network model - I want to keep it outside as a separate preprocessing step.
I made this:
inception = tf.keras.applications.inception_v3.InceptionV3(include_top=True,
                                                           input_shape=(299, 299, 3))
inception = tf.keras.Model([inception.input], [inception.layers[-2].output])
for layer in inception.layers:
    layer.trainable = False

resize_incept = tf.keras.Sequential([
    tf.keras.layers.Resizing(299, 299),
    inception])
resize_incept.compile()
So can I just call it on my images? But then how do I batch it? When I call it like resize_incept(images) it crashes (too big), but if I call resize_incept(images, batch_size = 25), it doesn't work (TypeError: call() got an unexpected keyword argument 'batch_size').
EDIT: I'm trying to figure out if I can use tf.data.dataset for this
EDIT2: I've put my data (which was an array of shape (batch, 32, 32, 3)) into a tf.data.Dataset so I can try this:
train_dataset = train_dataset.map(lambda x, y: (resize_incept(x), y))
But when I try it, it gives me this error:
ValueError: Input 0 is incompatible with layer model_1: expected shape=(None, 299, 299, 3), found shape=(299, 299, 3)
The problem seems to be that whatever is coming out of the resize layer is somehow wrong for going into the inception layer (there are no complaints about the (32, 32, 3) images I'm putting in at first)? But the inception layer already has input_shape=(299, 299, 3), so I would think that's the shape it would take?
If you want it as a separate preprocessing step, I would recommend using the image_dataset_from_directory() function included in Keras preprocessing.
tf.keras.preprocessing.image_dataset_from_directory(
    directory,
    labels="inferred",
    label_mode="int",
    class_names=None,
    color_mode="rgb",
    batch_size=32,
    image_size=(256, 256),
    shuffle=True,
    seed=None,
    validation_split=None,
    subset=None,
    interpolation="bilinear",
    follow_links=False,
    crop_to_aspect_ratio=False,
    **kwargs)
You can set the batch size and the desired image size, and choose whether to split the data or not. Also, be careful with crop_to_aspect_ratio as you may lose important features from the cropped areas.
If you want to know more about its parameters and see a code example, you could check the image_dataset_from_directory documentation.
Good luck!
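For what it's worth, a small sketch of how that could plug into the feature extractor from the question (the "data_dir" layout, the batch size and the lack of pixel preprocessing are assumptions to adjust). Since image_dataset_from_directory already yields batched (images, labels) pairs, the mapped model always sees 4-D input, so the shape error from EDIT2 should go away:
import tensorflow as tf

# "data_dir" with one sub-folder per class is an assumed layout; adjust to your data.
train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    "data_dir",
    image_size=(299, 299),  # resizing happens here, before the model
    batch_size=25)

inception = tf.keras.applications.inception_v3.InceptionV3(
    include_top=True, input_shape=(299, 299, 3))
feature_model = tf.keras.Model([inception.input], [inception.layers[-2].output])
feature_model.trainable = False

# Each dataset element is already a batch of shape (batch, 299, 299, 3),
# so mapping the model over it runs batch by batch without blowing up memory.
feature_dataset = train_dataset.map(lambda x, y: (feature_model(x), y))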

Issue with feeding value into placeholder tensor for sess.run()

I want to get the value of an intermediate tensor in a convolutional neural network for a specific input. I know how to do this in keras and even though I have trained a model using keras, I'm going to move towards constructing and training the model using only tensorflow. Therefore, I want to move away from something like K.function(input_layer, output_layer) which is fairly simple, and instead use tensorflow. I believe I should use placeholder values, like the following approach:
with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    loaded_model = tf.keras.models.load_model(filepath)
    graph = tf.compat.v1.get_default_graph()
    images = tf.compat.v1.placeholder(tf.float32, shape=(None, 28, 28, 1))  # to specify input as MNIST images
    output_tensor = graph.get_tensor_by_name(tensor_name)  # tensor_name is 'dense_1/MatMul:0'
    output = sess.run([output_tensor], feed_dict={images: x_test[0:1]})  # x_test[0:1] is of shape (1, 28, 28, 1)
    print(output)
However, I get the following error message for the sess.run() line: Invalid argument: You must feed a value for placeholder tensor 'conv2d_2_input' with dtype float and shape [?,28,28,1]. I am unsure why I get this message because the image used for feed_dict is of type float and is what I believe to be the correct shape. Any help would be appreciated.
You must use the input tensor from the Keras model, not make your own new placeholder, which would be disconnected from the rest of the model:
with tf.Graph().as_default(), tf.compat.v1.Session() as sess:
    # Load model
    loaded_model = tf.keras.models.load_model(filepath)
    # Take model input tensor
    images = loaded_model.input
    # Take output of the second layer (index 1)
    output_tensor = loaded_model.layers[1].output
    # Evaluate
    output = sess.run(output_tensor, feed_dict={images: x_test[0:1]})
    print(output)
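If you would rather pick the layer by name than by index (the question's tensor lives under dense_1), here is a small variation on the same sketch; the layer name is an assumption taken from the question and may differ in your model:
import tensorflow as tf

# Same session-based evaluation, but selecting the layer by its Keras name.
with tf.Graph().as_default(), tf.compat.v1.Session() as sess:
    loaded_model = tf.keras.models.load_model(filepath)
    images = loaded_model.input
    output_tensor = loaded_model.get_layer('dense_1').output  # assumed layer name
    output = sess.run(output_tensor, feed_dict={images: x_test[0:1]})
    print(output)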

The input dimension of the LSTM layer in Keras

I'm trying keras.layers.LSTM.
The following code works.
#!/usr/bin/python3
import tensorflow as tf
import numpy as np
from tensorflow import keras
data = np.array([1, 2, 3]).reshape((1, 3, 1))
x = keras.layers.Input(shape=(3, 1))
y = keras.layers.LSTM(10)(x)
model = keras.Model(inputs=x, outputs=y)
print (model.predict(data))
As shown above, the input data shape is (1, 3, 1), while the shape given to the Input layer is (3, 1). I'm a little bit confused by this inconsistency in the dimensions.
If I use the following shape in the Input layer, it doesn't work:
x = keras.layers.Input(shape=(1, 3, 1))
The error message is as follows:
ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 1, 3, 1]
It seems that the rank of the input must be 3, but why should we use a rank-2 shape in the Input layer?
Keras works with "batches" of "samples". Since most models use variable batch sizes that you define only when fitting, for convenience you don't need to care about the batch dimension, but only with the sample dimension.
That said, when you use shape = (3,1), this is the same as defining batch_shape = (None, 3, 1) or batch_input_shape = (None, 3, 1).
The three options mean:
- a variable batch size: None
- samples of shape (3, 1)
It's important to know this distinction especially when you are going to create custom layers, losses or metrics. The actual tensors all have the batch dimension and you should take that into account when making operations with tensors.
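A quick sketch of that point, reusing the model from the question (the printed shapes assume TF 2.x):
import numpy as np
import tensorflow as tf
from tensorflow import keras

# shape=(3, 1) describes one sample; the resulting tensor still carries the
# (variable) batch dimension in front.
x = keras.layers.Input(shape=(3, 1))
print(x.shape)  # (None, 3, 1)

y = keras.layers.LSTM(10)(x)
model = keras.Model(inputs=x, outputs=y)

data = np.array([1, 2, 3], dtype=np.float32).reshape((1, 3, 1))  # batch of one sample
print(model.predict(data).shape)  # (1, 10): the batch dimension is preserved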
Check out the documentation for tf.keras.Input. The syntax is as follows:
tf.keras.Input(
    shape=None,
    batch_size=None,
    name=None,
    dtype=None,
    sparse=False,
    tensor=None,
    **kwargs
)
shape: defines the shape of a single sample, with variable batch size.
Notice that shape does not include the batch size as its first value; if you want a fixed batch size, pass batch_size explicitly as a separate parameter.
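For example, a fixed batch of 4 samples of shape (3, 1) would look like this (a small illustrative sketch):
import tensorflow as tf

# shape covers a single sample; a fixed batch is requested separately.
x = tf.keras.Input(shape=(3, 1), batch_size=4)
print(x.shape)  # (4, 3, 1)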

Changing a pretrained Keras model with fixed input to a flexible one in tensorflow?

I want to use a pre-trained BagNet (https://github.com/wielandbrendel/bag-of-local-features-models) to extract features. The net has a fixed input height and width; the input is (None, 3, 224, 224). Now, I want to build a new model with flexible input sizes. I tried approaches with model.layers.pop()[0] to remove the first layer and replace it with a flexible input, but I get errors:
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input0_6:0", shape=(?, 3, 224, 224), dtype=float32) at layer "input0". The following previous layers were accessed without issue: []
keras_model = bagnets.keras.bagnet8()
keras_model.layers.pop()[0]
x = Input(batch_shape=(None, 3, None, None))
newModel = Model(x, keras_model.output)
How could I resolve this issue, or what are other options?

Oversampling images during inference

It is a common practice in convolutional neural networks to oversample a given image during inference,
i.e. to create a batch from different transformations of the same image (most commonly different crops and mirroring), pass the entire batch through the network, and average (or apply another kind of reducing function) over the results to get a single prediction (caffe example).
How can this approach be implemented in tensorflow?
You can take a look at the TF cnn tutorial. In particular, the function distorted_inputs does the image preprocessing step.
In short, there are a couple of TF functions in the tf.image package that help with distorting the images. You can use either those or regular numpy functions to create an extra distortions dimension, over which you can then average the results:
Before:
input_place = tf.placeholder(tf.float32, [None, 256, 256, 3])
prediction = some_model(input_place) # size: [None]
sess.run(prediction, feed_dict={input_place: batch_of_images})
After:
input_place = tf.placeholder(tf.float32, [None, NUM_OF_DISTORTIONS, 256, 256, 3])
prediction = some_model(input_place)  # make sure it is of size [None, NUM_OF_DISTORTIONS]
new_prediction = tf.reduce_mean(prediction, axis=1)

new_batch = np.zeros((batch_size, NUM_OF_DISTORTIONS, 256, 256, 3))
for i in range(len(batch_of_images)):
    for f in range(len(distortion_functions)):
        new_batch[i, f, :, :, :] = distortion_functions[f](batch_of_images[i])

sess.run(new_prediction, feed_dict={input_place: new_batch})
Take a look at TF's image-related functions. You could apply those transformations at test time to some input image, and stack all of them together to make a batch.
I imagine you could also do this using OpenCV or some other image processing tool. I don't see a need to do it in the computation graph. You could create the batches beforehand, and pass them through in feed_dict.
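A small sketch of that "build the batch outside the graph" approach, assuming a TF1 session with an images placeholder of shape [None, 256, 256, 3] and an existing prediction tensor (all names here are illustrative):
import numpy as np

# Illustrative sketch: `sess`, the placeholder `images` ([None, 256, 256, 3])
# and the model output `prediction` (one row per image) are assumed to exist.
def oversampled_predict(sess, prediction, images, image):
    # Build a batch of simple transformations of the same image:
    # the original, its horizontal mirror and its vertical mirror.
    batch = np.stack([image,
                      image[:, ::-1, :],   # horizontal flip
                      image[::-1, :, :]])  # vertical flip
    preds = sess.run(prediction, feed_dict={images: batch})
    # Reduce the per-transformation predictions to a single one.
    return preds.mean(axis=0)
Crops would work the same way, as long as each crop is resized back to the network's input resolution before stacking.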