Transform 3D Tensor to 4D - tensorflow

I am using the VGG16 model, which expects a 4D tensor as input. When I call model.fit(xtrain, ytrain, ...), my xtrain is a list of 3D tensors of shape [size, size, features], in this case [224, 224, 3].
What I want is a single 4D tensor of shape [len(images), size, size, features].
How can I modify my code to get there?
I tried tf.expand_dims and tf.concat, but it didn't work.
# Transforming my image to a 3D Tensor
image = tf.io.read_file(image)
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
image = image / 255.0
Error msg after model.fit:
Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (224, 224, 3)

It looks like you are reading in only a single image and passing that. If that's the case, you can add a batch dimension of size 1 as the first axis of the image. There are several ways to do that.
Using reshape:
image = image.reshape(1, 224, 224, 3)
Using some fancy numpy slicing notation to add an axis (personal favorite):
image = image[None, ...]
Using numpy.expand_dims() as explained in Abhijit's answer.
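For reference, a minimal sketch showing that all three approaches produce the same (1, 224, 224, 3) shape (this assumes image is a NumPy array; for a TF tensor you would reach for tf.expand_dims or tf.reshape instead):
import numpy as np

image = np.zeros((224, 224, 3), dtype=np.float32)  # stand-in for one preprocessed image
batched_a = image.reshape(1, 224, 224, 3)           # explicit reshape
batched_b = image[None, ...]                         # slicing with a new leading axis
batched_c = np.expand_dims(image, axis=0)            # expand_dims
print(batched_a.shape, batched_b.shape, batched_c.shape)  # (1, 224, 224, 3) each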
I imagine you want to be reading a bunch of images in though. Possibly an issue with your input process? Can you wrap your read in a loop and read multiple files? Something like:
import numpy as np

images = []
for file in image_files:
    image = tf.io.read_file(file)
    # ... decode, resize, and scale as in the question ...
    images.append(image)
images = np.asarray(images)
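If you are loading many files, an alternative sketch is to build the batch with a tf.data pipeline, reusing the preprocessing from the question (the preprocess name and the batch size of 32 are my own choices here):
def preprocess(path):
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
    return image / 255.0

dataset = tf.data.Dataset.from_tensor_slices(image_files)
dataset = dataset.map(preprocess).batch(32)  # each element now has shape [32, IMG_SIZE, IMG_SIZE, 3]
# pair with labels (e.g. via tf.data.Dataset.zip) before passing the dataset to model.fit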

numpy.expand_dims(image, axis=0)

Related

Feeding tf.data Dataset with multidimensional output to Keras model

I want to feed a tf.data Dataset to a Keras model, but I get the following error:
AttributeError: 'DatasetV1Adapter' object has no attribute 'ndim'
This dataset will be used to solve a segmentation problem, so both input and output will be images (3D tensors).
The dataset is created with this code:
dataset = tf.data.Dataset.list_files(TRAIN_PATH + "*.png", shuffle=False)

def process_path(file_path):
    img = tf.io.read_file(file_path)
    img = tf.image.decode_png(img, channels=3)
    train_image_path = tf.strings.regex_replace(file_path, "image", "mask")
    mask = tf.io.read_file(train_image_path)
    mask = tf.image.decode_png(mask, channels=1)
    mask = tf.squeeze(mask)
    mask = tf.one_hot(tf.cast(mask, tf.int32), Num_Classes, axis=-1)
    return img, mask

dataset = dataset.map(process_path)
dataset = dataset.batch(32, drop_remainder=True)
Taking an item from the dataset shows that I get a tuple containing an input tensor and an output tensor, whose dimensions are correct:
Input: (batch-size, image height, image width, 3 channels)
Output: (batch-size, image height, image width, 4 channels)
The error above appears when fitting the model:
model.fit(dataset, epochs=50)
I've solved the problem by moving to Keras 2.4.3 and TensorFlow 2.2.
Everything else was right, but apparently the previous release of Keras did not handle a tf.data dataset correctly.
Here's a tutorial I've found very useful on this.
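For reference, once on TF 2.x the batched dataset can be passed to fit directly; a minimal sketch against the dataset built above (the compile arguments are placeholders):
for images, masks in dataset.take(1):
    print(images.shape)  # (32, image height, image width, 3)
    print(masks.shape)   # (32, image height, image width, Num_Classes)

model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(dataset, epochs=50)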

How to replace the input channel shape from (224, 224, 3) to (224, 224, 1) in VGG16?

I am using VGG16 for transfer learning. My images are grayscale, so I need to change the input channel shape of VGG16 from (224, 224, 3) to (224, 224, 1). I tried the following code and got an error:
TypeError: build() takes from 1 to 2 positional arguments but 4 were given
Can anyone help me see where I am going wrong?
vgg16_model = load_model('Fetched_VGG.h5')
vgg16_model.summary()

# transform the model to Sequential
model = Sequential()
for layer in vgg16_model.layers[1:-1]:
    model.add(layer)

# Freeze the layers (prevent their weights from being updated)
for layer in model.layers:
    layer.trainable = False

model.build(224,224,1)
model.add(Dense(2, activation='softmax', name='predictions'))
You can't. Even if you get rid of the input layer, the model's graph has already been built, and your first conv layer expects an input with 3 channels. I don't think there is an easy workaround to make it accept 1 channel, if there is one at all.
Instead, repeat your data along the third dimension so that the same grayscale image fills all 3 bands in place of RGB; that works just fine.
If your image has shape (224, 224, 1):
import numpy as np
gray_image_3band = np.repeat(gray_img, repeats = 3, axis = -1)
If your image has shape (224, 224):
gray_image_3band = np.repeat(gray_img[..., np.newaxis], repeats = 3, axis = -1)
This way you don't need to call model.build() anymore; keep the input layer. But if you ever wanted to call it, you would need to pass the shape as a tuple, like this:
model.build( (224, 224, 1) ) # this is correct, notice the parentheses
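As a side note, if your grayscale data is already a TensorFlow tensor, tf.image.grayscale_to_rgb performs the same channel repetition; a minimal sketch (the zero tensor is just a stand-in for a real image):
import tensorflow as tf

gray_img = tf.zeros([224, 224, 1])             # stand-in for a (224, 224, 1) grayscale image
rgb_img = tf.image.grayscale_to_rgb(gray_img)  # shape (224, 224, 3), same value in each channel
print(rgb_img.shape)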

Tensorflow Object Detection API 1-channel image

Is there any way to use the pre-trained models in the TensorFlow Object Detection API, which were trained on RGB images, for single-channel grayscale (depth) images?
I tried the following approach to perform object detection on grayscale (1-channel) images using a pre-trained model (faster_rcnn_resnet101_coco_11_06_2017) in TensorFlow. It worked for me.
The model was trained on RGB images, so I just had to modify certain code in object_detection_tutorial.ipynb, available in the TensorFlow repo.
First Change:
Note that the existing code in the ipynb was written for 3-channel images, so change the load_image_into_numpy_array function as shown.
From
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
To
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    channel_dict = {'L': 1, 'RGB': 3}  # 'L' for grayscale, 'RGB' for 3-channel images
    return np.array(image.getdata()).reshape(
        (im_height, im_width, channel_dict[image.mode])).astype(np.uint8)
Second Change: Grayscale images have data in only 1 channel, but to perform object detection we need 3 channels (the inference code was written for 3 channels).
This can be achieved in two ways:
a) Duplicate the single-channel data into the two other channels.
b) Fill the other two channels with zeros.
Both of them will work; I used the first method.
In the ipynb, go to the section where you read the images and convert them into numpy arrays (the for loop at the end of the ipynb).
Change the code From:
for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)
    # the array based representation of the image will be used later in order to prepare the
    # result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
To this:
for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)
    # the array based representation of the image will be used later in order to prepare the
    # result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    if image_np.shape[2] != 3:
        image_np = np.broadcast_to(image_np, (image_np.shape[0], image_np.shape[1], 3)).copy()  # Duplicating the content
        ## Alternative: adding zeros to the other channels
        ## (this adds red color stuff in the background -- not recommended)
        # z = np.zeros(image_np.shape[:-1] + (2,), dtype=image_np.dtype)
        # image_np = np.concatenate((image_np, z), axis=-1)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
That's it. Run the file and you should see the results.
These are my results
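If you want to verify just the channel-duplication step in isolation, here is a minimal sketch on a dummy array (the image size is arbitrary):
import numpy as np

image_np = np.zeros((480, 640, 1), dtype=np.uint8)  # dummy single-channel image
if image_np.shape[2] != 3:
    image_np = np.broadcast_to(image_np, (image_np.shape[0], image_np.shape[1], 3)).copy()
print(image_np.shape)  # (480, 640, 3)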

List of Tensors when single Tensor expected

I use tf.concat to build a batch of tensors as the input to a CNN, but I got the error: List of Tensors when single Tensor expected.
image_raw = img.tobytes()
image = tf.decode_raw(image_raw, tf.uint8)
image = tf.reshape(image, [1, image_height, image_width, 3])
image_val = image
for i in range(batch_size - 1):
    image_val = tf.concat(0, [image_val, image])
return image_val
I have searched the answers to this question and added image_val = tf.stack([image_val], 0) before the return, but I still get the same error. Why?
Build environment:
TensorFlow 0.12
Python 3.5
The error List of Tensors when single Tensor expected comes from the fact that you wrote tf.concat(0, [image_val, image]) instead of tf.concat([image_val, image], 0).
Also, check the types of image_height and image_width again, because sometimes it is necessary to cast them to an integer dtype, e.g. tf.cast(image_height, tf.int32).
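For completeness, a sketch of the batching loop with the argument order fixed (this assumes TF 1.0 or later, where tf.concat takes the list of values first); tf.tile is shown as a one-call alternative:
image = tf.reshape(image, [1, image_height, image_width, 3])
image_val = image
for i in range(batch_size - 1):
    image_val = tf.concat([image_val, image], 0)  # values first, axis second
# or repeat the single image along the batch axis in one call:
# image_val = tf.tile(image, [batch_size, 1, 1, 1])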

TensorFlow MNIST example feeding own images

I am trying to learn TensorFlow, so I wanted to understand their example with smaller dimensions. Suppose I have image1, image2, and image3, three 28x28 matrices which hold grayscale values (0..255). image1 is the training image, image2 is the validation image, and image3 is the test image. I was trying to understand how I can feed my own images into the MNIST example they have here.
I am particularly interested in replacing the following line with my own imageset:
X, Y, testX, testY = mnist.load_data(one_hot=True)
Your help is much appreciated.
Suppose your image is a numpy array, of shape [1, 28, 28, 1].
You can just feed this numpy array to the node X or testX. Even though X is not a placeholder, you can provide its value to TensorFlow.
X_value = ... # numpy array
# ... same for Y_value, testX_value, testY_value
feed_dict = {X: X_value, Y: Y_value, testX: testX_value, testY: testY_value}
sess.run(train_op, feed_dict=feed_dict)
mnist.load_data(one_hot=True) is nothing but some preprocessing of the data. If you have some images in hand, you can just make them an ndarray and feed it into the graph. For example, if you have a node named images, you can feed the images using feed_dict = {images: some_image}.
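For instance, a minimal sketch of turning the three 28x28 grayscale arrays from the question into the shapes the example expects (the label values 3 and 7 are made up for illustration):
import numpy as np

# image1, image2, image3: 28x28 arrays with values in 0..255
X = np.asarray([image1], dtype=np.float32).reshape(-1, 28, 28, 1) / 255.0
Y = np.eye(10)[[3]]  # one-hot label for the training image (class 3 is a placeholder)
testX = np.asarray([image3], dtype=np.float32).reshape(-1, 28, 28, 1) / 255.0
testY = np.eye(10)[[7]]  # one-hot label for the test image (class 7 is a placeholder)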