I am trying to learn TensorFlow, so I was trying to understand their example with smaller dimensions. Suppose I have three 28x28 matrices, image1, image2, and image3, which hold grayscale values (0..255). image1 is the training image, image2 is the validation image, and image3 is the test image. I was trying to understand how I can feed my own images into the MNIST example they have here.
I am particularly interested in replacing the following line with my own imageset:
X, Y, testX, testY = mnist.load_data(one_hot=True)
Your help is much appreciated.
Suppose your image is a numpy array of shape [1, 28, 28, 1].
You can just feed this numpy array to the node X or testX. Even though X is not a placeholder, you can provide its value to TensorFlow.
X_value = ... # numpy array
# ... same for Y_value, testX_value, testY_value
feed_dict = {X: X_value, Y: Y_value, testX: testX_value, testY: testY_value}
sess.run(train_op, feed_dict=feed_dict)
mnist.load_data(one_hot=True) is nothing but some preprocessing of the data. If you have some images in hand, you can just make them an ndarray and feed them into the graph. For example, if you have a node named images, you can feed the images using feed_dict = {images: some_image}.
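For instance, here is a minimal sketch of turning a single 28x28 grayscale image into a [1, 28, 28, 1] array with a matching one-hot label; the image and label values are hypothetical stand-ins:
import numpy as np

# Hypothetical stand-ins: a 28x28 uint8 grayscale image and an integer label.
image1 = np.random.randint(0, 256, (28, 28), dtype=np.uint8)
label1 = 3

# Reshape to [batch, height, width, channels] and scale to [0, 1].
X_value = image1.reshape(1, 28, 28, 1).astype(np.float32) / 255.0

# One-hot encode the label, matching mnist.load_data(one_hot=True).
Y_value = np.zeros((1, 10), dtype=np.float32)
Y_value[0, label1] = 1.0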
I want to feed a tf.data Dataset to a Keras model, but I get the following error:
AttributeError: 'DatasetV1Adapter' object has no attribute 'ndim'
This dataset will be used to solve a segmentation problem, so both input and output will be images (3D tensors)
The dataset is created with this code:
dataset = tf.data.Dataset.list_files(TRAIN_PATH + "*.png", shuffle=False)

def process_path(file_path):
    img = tf.io.read_file(file_path)
    img = tf.image.decode_png(img, channels=3)
    train_image_path = tf.strings.regex_replace(file_path, "image", "mask")
    mask = tf.io.read_file(train_image_path)
    mask = tf.image.decode_png(mask, channels=1)
    mask = tf.squeeze(mask)
    mask = tf.one_hot(tf.cast(mask, tf.int32), Num_Classes, axis=-1)
    return img, mask

dataset = dataset.map(process_path)
dataset = dataset.batch(32, drop_remainder=True)
Taking an item from the dataset shows that I get a tuple containing an input tensor and an output tensor, whose dimensions are correct:
Input: (batch-size, image height, image width, 3 channels)
Output: (batch-size, image height, image width, 4 channels)
When fitting the model I get an error:
model.fit(dataset, epochs = 50)
I've solved the problem by moving to Keras 2.4.3 and TensorFlow 2.2.
Everything was right, but apparently the previous release of Keras did not handle tf.data datasets correctly.
Here's a tutorial I've found very useful on this.
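Once the versions are upgraded, the dataset can be passed straight to fit. A minimal sketch with a hypothetical stand-in segmentation model (the real architecture will differ):
import tensorflow as tf

# Hypothetical stand-in model: per-pixel softmax over 4 classes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                           input_shape=(None, None, 3)),
    tf.keras.layers.Conv2D(4, 1, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# With Keras 2.4.3 / TensorFlow 2.2, a dataset of (image, mask) pairs works directly.
model.fit(dataset, epochs=50)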
I am using the VGG16 model, which expects a 4D tensor as input. When I call model.fit(xtrain, ytrain, ...), my xtrain is a list of 3D tensors [size, size, features] - so in this case: [224, 224, 3]
What I want is a 4D tensor of shape [len(images), size, size, features]
How could I modify my code to get there?
I tried tf.expand_dims and tf.concat, but it didn't work.
# Transforming my image to a 3D Tensor
image = tf.io.read_file(image)
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
image = image / 255.0
Error msg after model.fit:
Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (224, 224, 3)
It looks like you are reading in only a single image and passing that. If that's the case, you can add a dimension of 1 to the first axis of the image. There are lots of ways to do that.
Using reshape:
image = image.reshape(1, 224, 224, 3)
Using some fancy numpy slicing notation to add an axis (personal favorite):
image = image[None, ...]
Using numpy.expand_dims() as explained in Abhijit's answer.
I imagine you want to be reading a bunch of images in though. Possibly an issue with your input process? Can you wrap your read in a loop and read multiple files? Something like:
images = []
for file in image_files:
    image = tf.io.read_file(file)
    # ...
    images.append(image)
images = np.asarray(images)
numpy.expand_dims(image, axis=0)
I am using Tensorflow 2.0 and am able to train a CNN for image classification of 3-channel images. I perform image preprocessing within the data input pipeline (shown below) and would like to include the preprocessing functionality in the served model itself. My model is served with a TF Serving Docker container and the Predict API.
The data input pipeline for training is based on the documentation at https://www.tensorflow.org/alpha/tutorials/load_data/images.
My pipeline image preprocessing function is load_and_preprocess_from_path_label:
def load_and_preprocess_path(image_path):
    # Load image
    image = tf.io.read_file(image_path)
    image = tf.image.decode_png(image)
    # Normalize to [0,1] range
    image /= 255
    # Convert to HSV and resize
    image = tf.image.rgb_to_hsv(image)
    image = tf.image.resize(image, [HEIGHT, WIDTH])
    return image

def load_and_preprocess_from_path_label(image_path, label):
    return load_and_preprocess_path(image_path), label
With lists of image paths, the pipeline prefetches and performs image preprocessing using tf functions within load_and_preprocess_from_path_label:
all_image_paths, all_image_labels = parse_labeled_image_paths()
x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(
    all_image_paths, all_image_labels, test_size=0.2)

# Create a TensorFlow Dataset of training images and labels
ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
image_label_ds = ds.map(load_and_preprocess_from_path_label)

BATCH_SIZE = 32
IMAGE_COUNT = len(all_image_paths)
AUTOTUNE = tf.data.experimental.AUTOTUNE  # missing definition added

ds = image_label_ds.apply(tf.data.experimental.shuffle_and_repeat(buffer_size=IMAGE_COUNT))
ds = ds.batch(BATCH_SIZE)
ds = ds.prefetch(buffer_size=AUTOTUNE)

# Create image pipeline for model
image_batch, label_batch = next(iter(ds))
feature_map_batch = model(image_batch)

# Train model
model.fit(ds, epochs=5)
Previous TensorFlow examples I've found use serving_input_fn() and tf.placeholder, which seems to no longer exist in TensorFlow 2.0.
An example for serving_input_fn in Tensorflow 2.0 is shown on https://www.tensorflow.org/alpha/guide/saved_model. Since I am using the Predict API, it looks like I would need something similar to:
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(...)
# Save the model with the serving preprocessing function
model.export_saved_model(MODEL_PATH, serving_input_fn)
Ideally, the served model would accept a 4D Tensor of 3-channel image samples of any size and would perform the initial image preprocessing on them (decode image, normalize, convert to HSV, and resize) before classifying.
How can I create a serving_input_fn in Tensorflow 2.0 with a preprocessing function similar to my load_and_preprocess_path function?
I faced a similar issue when upgrading. It appears that the way to achieve this in Tensorflow 2 is to provide a function which the saved model can use to make the predictions, something like:
def serve_load_and_preprocess_path(image_paths):
    # image_paths: a 1-D tf.string tensor of file paths.
    # Loaded images may need converting to the tensor shape needed for the model.
    loaded_images = tf.map_fn(load_and_preprocess_path, image_paths, dtype=tf.float32)
    predictions = model(loaded_images)
    return predictions

serve_load_and_preprocess_path = tf.function(serve_load_and_preprocess_path)
serve_load_and_preprocess_path = serve_load_and_preprocess_path.get_concrete_function(
    image_paths=tf.TensorSpec([None, ], dtype=tf.string))

tf.saved_model.save(
    model,
    MODEL_PATH,
    signatures=serve_load_and_preprocess_path
)
# check the models give the same output
loaded = tf.saved_model.load(MODEL_PATH)
loaded_model_predictions = loaded.serve_load_and_preprocess_path(...)
np.testing.assert_allclose(trained_model_predictions, loaded_model_predictions, atol=1e-6)
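Depending on how the function was attached, the signature can also be retrieved through loaded.signatures. A hedged usage sketch, with hypothetical file names:
# Hypothetical usage; the paths are placeholders.
loaded = tf.saved_model.load(MODEL_PATH)
serving_fn = loaded.signatures["serving_default"]
image_paths = tf.constant(["images/example_001.png", "images/example_002.png"])
predictions = serving_fn(image_paths=image_paths)  # returns a dict of output tensors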
Expanding and simplifying harry-salmon's answer. For me, the following worked:
def save_model_with_serving_signature(model, model_path):
    @tf.function(input_signature=[tf.TensorSpec(shape=[None, ], dtype=tf.string)])
    def serve_load_and_preprocess_path(image_paths):
        return model(tf.map_fn(load_and_preprocess_path, image_paths, dtype=tf.float32))

    tf.saved_model.save(
        model,
        model_path,
        signatures=serve_load_and_preprocess_path
    )
Note: dtype=tf.float32 in the map function was important; it didn't work without it. I found the solution here. I also simplified the concrete-function step by simply adding a decorator (see this for details).
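In more recent TensorFlow releases (2.3 and later), the dtype argument of tf.map_fn is deprecated in favor of fn_output_signature; a sketch of the same decorated function under the newer API:
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
def serve_load_and_preprocess_path(image_paths):
    # fn_output_signature replaces the deprecated dtype argument.
    images = tf.map_fn(load_and_preprocess_path, image_paths,
                       fn_output_signature=tf.float32)
    return model(images)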
I read an example in tf-slim-mnist, and read one or two answers on Google, but all of them feed data to an 'images' tensor and a 'labels' tensor from an already filled-up tensor of data. For example, in tf-slim-mnist:
# load batch of dataset
images, labels = load_batch(
    dataset,
    FLAGS.batch_size,
    is_training=True)

def load_batch(dataset, batch_size=32, height=28, width=28, is_training=False):
    data_provider = slim.dataset_data_provider.DatasetDataProvider(dataset)
    image, label = data_provider.get(['image', 'label'])
    image = lenet_preprocessing.preprocess_image(
        image,
        height,
        width,
        is_training)
    images, labels = tf.train.batch(
        [image, label],
        batch_size=batch_size,
        allow_smaller_final_batch=True)
    return images, labels
Another example, from TensorFlow GitHub issue #5987:
graph = tf.Graph()
with graph.as_default():
    image, label = inputs('train', FLAGS.dataset_dir)
    images, labels = tf.train.shuffle_batch(
        [image, label], batch_size=FLAGS.batch_size,
        capacity=1000 + 3 * FLAGS.batch_size, min_after_dequeue=1000)
    images_validation, labels_validation = inputs('validation', FLAGS.dataset_dir, 5000)
    images_test, labels_test = inputs('test', FLAGS.dataset_dir, 10000)
Because my data is of variable size, it is hard to fill up a tensor of data beforehand.
Is there any way to use feed_dict with slim.learning.train()? Is it a proper way to add feed_dict as an argument to the train_step_fn()? If yes, how? Thanks.
I think feed_dict is not a good approach when the input data varies in size and is hard to fit in memory.
Converting your data into TFRecords is a more proper way. Here is an example of converting data. You can then read the output file with TFRecordReader and parse each record with parse_example.
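A minimal sketch of that round trip using the tf.io API; the file name, feature keys, and the image_paths/labels lists are illustrative assumptions, not taken from the linked example:
import tensorflow as tf

# --- Writing: serialize each (image, label) pair as a tf.train.Example ---
with tf.io.TFRecordWriter("train.tfrecords") as writer:
    for path, label in zip(image_paths, labels):  # assumed to exist
        png_bytes = open(path, "rb").read()
        example = tf.train.Example(features=tf.train.Features(feature={
            "image": tf.train.Feature(bytes_list=tf.train.BytesList(value=[png_bytes])),
            "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
        }))
        writer.write(example.SerializeToString())

# --- Reading: parse each serialized record back into tensors ---
def parse_record(serialized):
    features = tf.io.parse_single_example(serialized, {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    image = tf.image.decode_png(features["image"])  # variable-size images are fine here
    return image, features["label"]

dataset = tf.data.TFRecordDataset("train.tfrecords").map(parse_record)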
I have been working on the MNIST dataset to learn how to use TensorFlow and Python for my deep learning course.
I want to resize MNIST to 22x22 using TensorFlow and then train on it, but I do not know how to do this.
Could you help me?
TheRevanchist's answer is correct. However, for the mnist dataset, you first need to reshape the mnist array before you send it to tf.image.resize_images():
import tensorflow as tf
import numpy as np
import cv2
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
batch = mnist.train.next_batch(10)
X_batch = batch[0]
batch_tensor = tf.reshape(X_batch, [10, 28, 28, 1])
resized_images = tf.image.resize_images(batch_tensor, [22,22])
The code above takes a batch of 10 MNIST images, reshapes them from flat arrays into a [10, 28, 28, 1] tensor, and resizes them to 22x22.
If you want to display the images, you can use opencv and the code below. The resized_images.eval() converts the tensorflow image to a numpy array!
with tf.Session() as sess:
    numpy_imgs = resized_images.eval(session=sess)  # mnist images converted to numpy array
    for i in range(10):
        cv2.namedWindow('Resized image #%d' % i, cv2.WINDOW_NORMAL)
        cv2.imshow('Resized image #%d' % i, numpy_imgs[i])
        cv2.waitKey(0)
Did you try tf.image.resize_images?
The method:
resize_images(images, size, method=ResizeMethod.BILINEAR,
align_corners=False)
where images is a batch of images, and size is a vector tensor which determines the new height and width. You can look at the full documentation here: https://www.tensorflow.org/api_docs/python/tf/image/resize_images
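As a quick illustration, a hedged one-liner assuming images is a [batch, 28, 28, 1] float tensor:
# Hypothetical usage: images is a [batch, 28, 28, 1] float tensor.
resized = tf.image.resize_images(images, [22, 22])  # bilinear by default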
Updated: TensorFlow 2.4.1
Short Answer
Use tf.image.resize (instead of resize_images). The link the other answer provided no longer exists. Updated link.
Long Answer
MNIST in tf.keras.datasets.mnist is the following shape
(batch_size, 28 , 28)
Here is the full implementation. Please read the comments attached to the code.
(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()
# expand new axis, channel axis
x_train = np.expand_dims(x_train, axis=-1)
# [optional]: we may need 3 channel (instead of 1)
x_train = np.repeat(x_train, 3, axis=-1)
# it's always better to normalize
x_train = x_train.astype('float32') / 255
# resize the input shape, i.e. old shape: 28, new shape: 32
x_train = tf.image.resize(x_train, [32,32]) # if we want to resize
print(x_train.shape)
# (60000, 32, 32, 3)
You can use the cv2.resize() function of OpenCV.
Use a for loop to iterate through every image.
Inside the for loop, add this line for every image: cv2.resize(source_image, (22, 22))
def resize(mnist):
    train_data = []
    for img in mnist.train._images:
        resized_img = cv2.resize(img, (22, 22))
        train_data.append(resized_img)
    return train_data
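A hedged usage note: the function returns a Python list, so converting it to an array is handy for training; if the images are stored as flat 784-element vectors, they would need reshaping to 28x28 before cv2.resize:
import numpy as np

# Hypothetical usage; assumes the images are already 28x28 arrays.
train_data = np.asarray(resize(mnist))  # shape: (num_images, 22, 22)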