How to use feed_dict in slim.learning.train of tensorflow

I read an example in tf-slim-mnist, and found one or two answers on Google, but all of them feed data to the 'images' and 'labels' tensors from an already filled-up tensor of data. For example, in tf-slim-mnist:
# load batch of dataset
images, labels = load_batch(
    dataset,
    FLAGS.batch_size,
    is_training=True)

def load_batch(dataset, batch_size=32, height=28, width=28, is_training=False):
    data_provider = slim.dataset_data_provider.DatasetDataProvider(dataset)
    image, label = data_provider.get(['image', 'label'])
    image = lenet_preprocessing.preprocess_image(
        image,
        height,
        width,
        is_training)
    images, labels = tf.train.batch(
        [image, label],
        batch_size=batch_size,
        allow_smaller_final_batch=True)
    return images, labels
Another example, from TensorFlow GitHub issue #5987:
graph = tf.Graph()
with graph.as_default():
    image, label = inputs('train', FLAGS.dataset_dir)
    images, labels = tf.train.shuffle_batch(
        [image, label],
        batch_size=FLAGS.batch_size,
        capacity=1000 + 3 * FLAGS.batch_size,
        min_after_dequeue=1000)
    images_validation, labels_validation = inputs('validation', FLAGS.dataset_dir, 5000)
    images_test, labels_test = inputs('test', FLAGS.dataset_dir, 10000)
Because my data is of variable size, it is hard to fill up a tensor of data beforehand.
Is there any way to use feed_dict with slim.learning.train()? Would it be proper to pass a feed_dict through the train_step_fn()? If yes, how? A sketch of what I have in mind follows. Thanks.
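For illustration, this is a hedged sketch of what I mean; it assumes slim's default train_step signature of (sess, train_op, global_step, train_step_kwargs), and images_placeholder / next_image_batch() are my own hypothetical names:

def train_step_fn(sess, train_op, global_step, train_step_kwargs):
    # Build a feed_dict from my variable-size data on every step.
    feed_dict = {images_placeholder: next_image_batch(),
                 labels_placeholder: next_label_batch()}
    total_loss = sess.run(train_op, feed_dict=feed_dict)
    should_stop = False
    return total_loss, should_stop

slim.learning.train(train_op, logdir, train_step_fn=train_step_fn)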

I don't think feed_dict is a good approach when the input data varies in size and is hard to fit in memory.
Converting your data to TFRecords is a more proper way. Here is an example of converting data. You can then read the file back with TFRecordReader and parse the records with parse_example.
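As a rough, hedged sketch of such a conversion (the feature names and layout are my own assumptions, not a fixed schema), variable-size images can be written one record at a time, so nothing has to be pre-batched in memory:

import tensorflow as tf

# Hypothetical schema: flattened float image, its shape, and an integer label.
def write_tfrecords(samples, path):
    with tf.python_io.TFRecordWriter(path) as writer:
        for image, label in samples:  # image: numpy array of any height/width
            example = tf.train.Example(features=tf.train.Features(feature={
                'image': tf.train.Feature(
                    float_list=tf.train.FloatList(value=image.ravel())),
                'shape': tf.train.Feature(
                    int64_list=tf.train.Int64List(value=list(image.shape))),
                'label': tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[label])),
            }))
            writer.write(example.SerializeToString())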

Related

How to create a serving_input_fn in Tensorflow 2.0 for image preprocessing?

I am using Tensorflow 2.0 and am able to train a CNN for image classification of 3-channel images. I perform image preprocessing within the data input pipeline (shown below) and would like to include the preprocessing functionality in the served model itself. My model is served with a TF Serving Docker container and the Predict API.
The data input pipeline for training is based on the documentation at https://www.tensorflow.org/alpha/tutorials/load_data/images.
My pipeline image preprocessing function is load_and_preprocess_from_path_label:
def load_and_preprocess_path(image_path):
    # Load image
    image = tf.io.read_file(image_path)
    image = tf.image.decode_png(image)
    # Normalize to [0,1] range
    image /= 255
    # Convert to HSV and resize
    image = tf.image.rgb_to_hsv(image)
    image = tf.image.resize(image, [HEIGHT, WIDTH])
    return image

def load_and_preprocess_from_path_label(image_path, label):
    return load_and_preprocess_path(image_path), label
With lists of image paths, the pipeline prefetches and performs image preprocessing using tf functions within load_and_preprocess_from_path_label:
all_image_paths, all_image_labels = parse_labeled_image_paths()
x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(
    all_image_paths, all_image_labels, test_size=0.2)

# Create a TensorFlow Dataset of training images and labels
ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
image_label_ds = ds.map(load_and_preprocess_from_path_label)

BATCH_SIZE = 32
IMAGE_COUNT = len(all_image_paths)
AUTOTUNE = tf.data.experimental.AUTOTUNE  # as in the referenced tutorial

ds = image_label_ds.apply(tf.data.experimental.shuffle_and_repeat(buffer_size=IMAGE_COUNT))
ds = ds.batch(BATCH_SIZE)
ds = ds.prefetch(buffer_size=AUTOTUNE)

# Create image pipeline for model
image_batch, label_batch = next(iter(ds))
feature_map_batch = model(image_batch)

# Train model
model.fit(ds, epochs=5)
Previous Tensorflow examples I've found use serving_input_fn() and rely on tf.placeholder, which no longer exists in Tensorflow 2.0.
An example for serving_input_fn in Tensorflow 2.0 is shown on https://www.tensorflow.org/alpha/guide/saved_model. Since I am using the Predict API, it looks like I would need something similar to:
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(...)
# Save the model with the serving preprocessing function
model.export_saved_model(MODEL_PATH, serving_input_fn)
Ideally, the served model would accept a 4D Tensor of 3-channel image samples of any size and would perform the initial image preprocessing on them (decode image, normalize, convert to HSV, and resize) before classifying.
How can I create a serving_input_fn in Tensorflow 2.0 with a preprocessing function similar to my load_and_preprocess_path function?
I faced a similar issue when upgrading. It appears that the way to achieve this in Tensorflow 2 is to provide a function that the saved model can use to make predictions, something like:
def serve_load_and_preprocess_path(image_paths):
    # image_paths: a 1-D string tensor of file paths; loaded images may need
    # converting to the tensor shape needed for the model
    loaded_images = tf.map_fn(load_and_preprocess_path, image_paths, dtype=tf.float32)
    predictions = model(loaded_images)
    return predictions

serve_load_and_preprocess_path = tf.function(serve_load_and_preprocess_path)
serve_load_and_preprocess_path = serve_load_and_preprocess_path.get_concrete_function(
    image_paths=tf.TensorSpec([None], dtype=tf.string))

tf.saved_model.save(
    model,
    MODEL_PATH,
    signatures=serve_load_and_preprocess_path
)

# check that the original and loaded models give the same output
# (trained_model_predictions computed beforehand from the in-memory model)
loaded = tf.saved_model.load(MODEL_PATH)
loaded_model_predictions = loaded.serve_load_and_preprocess_path(...)
np.testing.assert_allclose(trained_model_predictions, loaded_model_predictions, atol=1e-6)
Expanding and simplifying @harry-salmon's answer. For me, the following worked:
def save_model_with_serving_signature(model, model_path):
    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
    def serve_load_and_preprocess_path(image_paths):
        return model(tf.map_fn(load_and_preprocess_path, image_paths, dtype=tf.float32))

    tf.saved_model.save(
        model,
        model_path,
        signatures=serve_load_and_preprocess_path
    )
Note: the dtype=tf.float32 argument in the map function was important; it didn't work without it. I found the solution here. I also simplified the concrete-function step by simply adding a decorator (see this for details).
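As a hedged usage sketch (the 'serving_default' key is what tf.saved_model.save assigns when a single signature is passed; the paths are made up), the saved model can then be called through its signature:

loaded = tf.saved_model.load(model_path)
infer = loaded.signatures['serving_default']
# Signature functions take keyword arguments named after the input spec
# and return a dict of output tensors.
outputs = infer(image_paths=tf.constant(['images/a.png', 'images/b.png']))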

TensorFlow: read a frozen model, add operations, then save to a new frozen model

I am sorry in advance if the title does not exactly reflect my problem (I think it does, but I'm not sure), which I describe below.
I am working on converting a Yolo object detection model to a TensorFlow frozen model (.pb) and then using that model for prediction on mobile phones.
I have successfully obtained a working .pb model (i.e. a frozen graph from Yolo's graph). But since the outputs of the network (there are two of them) are not the bounding boxes, I have to write a function for the conversion (this part is not my question; I already have a working function for this task):
def get_boxes_from_output(outputs_of_the_graph, anchors,
                          num_classes, input_image_shape,
                          score_threshold=score, iou_threshold=iou):
    """
    Apply some operations on outputs_of_the_graph to obtain the bounding box information
    """
    return boxes, scores, classes
So the pipeline is simple: I have to load the .pb model, feed the image data to it to get the two outputs, and then apply the above function (which contains tensor operations) to those outputs to obtain the bounding box information. The code looks like this:
model_path = 'model_data/yolo.pb'
class_names = _get_class('model_data/classes.txt')
anchors = _get_anchors('model_data/yolo_anchors.txt')
score = 0.25
iou = 0.5

# Load the Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(model_path, 'rb') as fid:
        graph_def.ParseFromString(fid.read())
    tf.import_graph_def(graph_def, name='')

    # Get the input and output nodes (there are two outputs)
    l_input = detection_graph.get_tensor_by_name('input_1:0')
    l_output = [detection_graph.get_tensor_by_name('conv2d_10/BiasAdd:0'),
                detection_graph.get_tensor_by_name('conv2d_13/BiasAdd:0')]

    # initialize_all_variables
    tf.global_variables_initializer()

    # Generate output tensor targets for filtered bounding boxes.
    input_image_shape = tf.placeholder(dtype=tf.float32, shape=(2,))
    training = tf.placeholder(tf.bool, name='training')
    boxes, scores, classes = get_boxes_from_output(l_output, anchors,
                                                   len(class_names), input_image_shape,
                                                   score_threshold=score, iou_threshold=iou)

image = Image.open('./data/image1.jpg')
image = preprocess_image(image)
image_data = np.array(image, dtype='float32')
image_data = np.expand_dims(image_data, 0)  # Add batch dimension.

sess = tf.Session(graph=detection_graph)

# Run the session to get the output bounding boxes
out_boxes, out_scores, out_classes = sess.run(
    [boxes, scores, classes],
    feed_dict={
        l_input: image_data,
        input_image_shape: [image.size[1], image.size[0]],
        training: False
    })

# Now how do I save a new model that outputs directly [boxes, scores, classes]?
Now my question is: how do I save a new .pb model from the session, so that I can load it again elsewhere and it directly outputs boxes, scores, and classes?
I hope the question is clear enough.
Thank you very much in advance for your help!
Add nodes and save to a frozen model
Once you have added the new ops, you need to write the new graph using tf.train.write_graph:
boxes, scores, classes = get_boxes_from_output()
tf.train.write_graph(sess.graph_def, save_dir, 'new_cnn_weights.pb')
Then you need to freeze the above graph using the freeze_graph utility. Make sure the output_node_names are set to boxes, scores, classes as shown below:
# Freeze graph
from tensorflow.python.tools import freeze_graph
import os

input_graph_path = os.path.join(save_dir, 'new_cnn_weights.pb')
input_saver_def_path = ''
input_binary = False
output_node_names = 'boxes,scores,classes'  # comma-separated, no spaces
restore_op_name = ''
filename_tensor_name = ''
output_graph_path = os.path.join(save_dir, 'new_frozen_cnn_weights.pb')
clear_devices = False
checkpoint_path = os.path.join(save_dir, 'test_model')

freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
                          input_binary, checkpoint_path, output_node_names,
                          restore_op_name, filename_tensor_name,
                          output_graph_path, clear_devices, '')
Check the optimized graph
# Load the new optimized graph and check that the output is consistent
tf.reset_default_graph()
with tf.gfile.GFile(save_dir + 'new_frozen_cnn_weights.pb', 'rb') as f:
    graph_def_optimized = tf.GraphDef()
    graph_def_optimized.ParseFromString(f.read())

G = tf.Graph()
with tf.Session(graph=G) as sess:
    boxes, scores, classes = tf.import_graph_def(
        graph_def_optimized, return_elements=['boxes:0', 'scores:0', 'classes:0'])
    print('Operations in Optimized Graph:')
    print([op.name for op in G.get_operations()])
    x = G.get_tensor_by_name('import/import/input:0')
    print(sess.run([boxes, scores, classes], feed_dict={x: np.expand_dims(img, 0)}))
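Note that output_node_names and the return_elements above only resolve if the graph actually contains ops named boxes, scores, and classes. If get_boxes_from_output() does not name its outputs, one way to pin the names before writing the graph (a small sketch, under that assumption) is:

# Wrap the conversion outputs in identity ops so the written graph
# contains nodes called 'boxes', 'scores' and 'classes'.
boxes = tf.identity(boxes, name='boxes')
scores = tf.identity(scores, name='scores')
classes = tf.identity(classes, name='classes')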

Tensorflow Object Detection API 1-channel image

Is there any way to use the pre-trained models in the Tensorflow Object Detection API, which were trained on RGB images, with single-channel grayscale (depth) images?
I tried the following approach to perform object detection on grayscale (1-channel) images using a pre-trained model (faster_rcnn_resnet101_coco_11_06_2017) in Tensorflow, and it worked for me.
The model was trained on RGB images, so I just had to modify certain code in object_detection_tutorial.ipynb, available in the Tensorflow repo.
First Change:
Note that the existing code in the ipynb was written for 3-channel images, so change the load_image_into_numpy_array function as shown:
From
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
To
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    channel_dict = {'L': 1, 'RGB': 3}  # 'L' for grayscale, 'RGB' for 3-channel images
    return np.array(image.getdata()).reshape(
        (im_height, im_width, channel_dict[image.mode])).astype(np.uint8)
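As a quick sanity check of the modified function (the file names here are hypothetical):

# A grayscale ('L') image should come back with one channel,
# an RGB image with three.
gray = Image.open('depth_sample.png')   # hypothetical file, mode 'L'
print(load_image_into_numpy_array(gray).shape)  # (im_height, im_width, 1)
rgb = Image.open('color_sample.png')    # hypothetical file, mode 'RGB'
print(load_image_into_numpy_array(rgb).shape)   # (im_height, im_width, 3)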
Second Change: Grayscale images have data in only 1 channel. To perform object detection we need 3 channels (the inference code was written for 3 channels).
This can be achieved in two ways.
a) Duplicate the single-channel data into the two other channels.
b) Fill the other two channels with zeros.
Both will work; I used the first method (see the modified loop below).
In the ipynb, go to the section where you read the images and convert them into numpy arrays (the for loop at the end of the ipynb).
Change the code from:
for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)
    # the array based representation of the image will be used later in order to prepare the
    # result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
To this:
for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)
    # the array based representation of the image will be used later in order to prepare the
    # result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    if image_np.shape[2] != 3:
        image_np = np.broadcast_to(
            image_np, (image_np.shape[0], image_np.shape[1], 3)).copy()  # duplicating the content
        ## Alternative: adding zeros to the other channels
        ## (this adds a red tint to the background -- not recommended)
        # z = np.zeros(image_np.shape[:-1] + (2,), dtype=image_np.dtype)
        # image_np = np.concatenate((image_np, z), axis=-1)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
That's it. Run the notebook and you should see the results.

How to use Tensorflow's tf.cond() with two different Dataset iterators without iterating both?

I want to feed a CNN with the tensor "images". I want this tensor to contain images from the training set (which have a FIXED size) when the placeholder is_training is True, and images from the test set (which do NOT have a fixed size) otherwise.
This is needed because in training I take a random fixed-size crop from the training images, while at test time I want to perform a dense evaluation and feed the entire images into the network (it is fully convolutional, so it will accept them).
My current, NOT working approach is to create two different iterators and try to select the training/test input with tf.cond, switched at session.run(images, {is_training: True/False}).
The problem is that BOTH iterators are evaluated. The training and test datasets are also of different sizes, so I cannot iterate over both of them until the end. Is there a way to make this work, or to rewrite it in a smarter way?
I've seen some questions/answers about this, but they always used tf.assign, which takes a numpy array and assigns it to a tensor. Here I cannot use tf.assign, because I already have a tensor coming from the iterators.
The current code that I have is this one. It simply checks the shape of the tensor "images":
train_filenames, train_labels = list_images(args.train_dir)
val_filenames, val_labels = list_images(args.val_dir)

graph = tf.Graph()
with graph.as_default():
    # Preprocessing (for both training and validation):
    def _parse_function(filename, label):
        image_string = tf.read_file(filename)
        image_decoded = tf.image.decode_jpeg(image_string, channels=3)
        image = tf.cast(image_decoded, tf.float32)
        return image, label

    # Preprocessing (for training)
    def training_preprocess(image, label):
        # Random flip and crop
        image = tf.image.random_flip_left_right(image)
        image = tf.random_crop(image, [args.crop, args.crop, 3])
        return image, label

    # Preprocessing (for validation)
    def val_preprocess(image, label):
        flipped_image = tf.image.flip_left_right(image)
        batch = tf.stack([image, flipped_image], axis=0)
        return batch, label

    # Training dataset
    train_filenames = tf.constant(train_filenames)
    train_labels = tf.constant(train_labels)
    train_dataset = tf.contrib.data.Dataset.from_tensor_slices((train_filenames, train_labels))
    train_dataset = train_dataset.map(_parse_function, num_threads=args.num_workers, output_buffer_size=args.batch_size)
    train_dataset = train_dataset.map(training_preprocess, num_threads=args.num_workers, output_buffer_size=args.batch_size)
    train_dataset = train_dataset.shuffle(buffer_size=10000)
    batched_train_dataset = train_dataset.batch(args.batch_size)

    # Validation dataset
    val_filenames = tf.constant(val_filenames)
    val_labels = tf.constant(val_labels)
    val_dataset = tf.contrib.data.Dataset.from_tensor_slices((val_filenames, val_labels))
    val_dataset = val_dataset.map(_parse_function, num_threads=1, output_buffer_size=1)
    val_dataset = val_dataset.map(val_preprocess, num_threads=1, output_buffer_size=1)

    train_iterator = tf.contrib.data.Iterator.from_structure(batched_train_dataset.output_types, batched_train_dataset.output_shapes)
    val_iterator = tf.contrib.data.Iterator.from_structure(val_dataset.output_types, val_dataset.output_shapes)
    train_images, train_labels = train_iterator.get_next()
    val_images, val_labels = val_iterator.get_next()
    train_init_op = train_iterator.make_initializer(batched_train_dataset)
    val_init_op = val_iterator.make_initializer(val_dataset)

    # Indicates whether we are in training or in test mode
    is_training = tf.placeholder(tf.bool)

    def f_true():
        with tf.control_dependencies([tf.identity(train_images)]):
            return tf.identity(train_images)

    def f_false():
        return val_images

    images = tf.cond(is_training, f_true, f_false)
    num_images = images.shape

with tf.Session(graph=graph) as sess:
    sess.run(train_init_op)
    # sess.run(val_init_op)
    img = sess.run(images, {is_training: True})
    print(img.shape)
The problem is that when I want to use only the training iterator, I comment out the line that initializes val_init_op, and then I get the following error:
FailedPreconditionError (see above for traceback): GetNext() failed because the iterator has not been initialized. Ensure that you have run the initializer operation for this iterator before getting the next element.
[[Node: IteratorGetNext_1 = IteratorGetNext[output_shapes=[[2,?,?,3], []], output_types=[DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/cpu:0"](Iterator_1)]]
If I do not comment out that line, everything works as expected: when is_training is True I get training images, and when is_training is False I get validation images. The issue is that both iterators need to be initialized, and when I evaluate one of them, the other is advanced too. Since, as I said, they are of different sizes, this causes a problem.
I hope there is a way to solve it! Thanks in advance
The trick is to call iterator.get_next() inside the f_true() and f_false() functions:
def f_true():
    train_images, _ = train_iterator.get_next()
    return train_images

def f_false():
    val_images, _ = val_iterator.get_next()
    return val_images

images = tf.cond(is_training, f_true, f_false)
The same advice applies to any TensorFlow op that has a side effect, like assigning to a variable: if you want that side effect to happen conditionally, the op must be created inside the appropriate branch function passed to tf.cond().
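For example (a minimal sketch, not from the original answer), a conditional variable update follows the same pattern: the tf.assign_add op is created inside the branch function, so it only runs when that branch is taken:

v = tf.Variable(0, dtype=tf.int32)
do_update = tf.placeholder(tf.bool)

def increment():
    return tf.assign_add(v, 1)   # side-effecting op created inside the branch

def leave_unchanged():
    return tf.identity(v)

result = tf.cond(do_update, increment, leave_unchanged)
# sess.run(result, {do_update: True}) increments v; with False it does not.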

Online oversampling in Tensorflow input pipeline

I have an input pipeline similar to the one in the Convolutional Neural Network tutorial. My dataset is imbalanced and I want to use minority oversampling to try to deal with this. Ideally, I want to do this "online", i.e. I don't want to duplicate data samples on disk.
Essentially, what I want to do is duplicate individual examples (with some probability) based on the label. I have been reading a bit about control flow in Tensorflow, and it seems tf.cond(pred, fn1, fn2) is the way to go. I am just struggling to find the right parameterisation, since fn1 and fn2 would need to output lists of tensors of the same size.
This is roughly what I have so far:
image = image_preprocessing(image_buffer, bbox, False, thread_id)
pred = tf.reshape(tf.equal(label, tf.convert_to_tensor([2])), [])
r_image = tf.cond(pred, lambda: [tf.identity(image), tf.identity(image)], lambda: [tf.identity(image),])
r_label = tf.cond(pred, lambda: [tf.identity(label), tf.identity(label)], lambda: [tf.identity(label),])
However, this raises an error as I mentioned before:
ValueError: fn1 and fn2 must return the same number of results.
Any ideas?
P.S.: this is my first Stack Overflow question. Any feedback on my question is appreciated.
After doing a bit more research, I found a solution for what I wanted to do. What I forgot to mention in my question is that the code above is followed by a batch method, such as batch() or batch_join().
These functions accept tensors that already carry a batch dimension of varying size, rather than just single examples. The argument that enables this is enqueue_many, which should be set to True.
The following piece of code does the trick for me:
for thread_id in range(num_preprocess_threads):
# Parse a serialized Example proto to extract the image and metadata.
image_buffer, label_index = parse_example_proto(
example_serialized)
image = image_preprocessing(image_buffer, bbox, False, thread_id)
# Convert 3D tensor of shape [height, width, channels] to
# a 4D tensor of shape [batch_size, height, width, channels]
image = tf.expand_dims(image, 0)
# Define the boolean predicate to be true when the class label is 1
pred = tf.equal(label_index, tf.convert_to_tensor([1]))
pred = tf.reshape(pred, [])
oversample_factor = 2
r_image = tf.cond(pred, lambda: tf.concat(0, [image]*oversample_factor), lambda: image)
r_label = tf.cond(pred, lambda: tf.concat(0, [label_index]*oversample_factor), lambda: label_index)
images_and_labels.append([r_image, r_label])
images, label_batch = tf.train.shuffle_batch_join(
images_and_labels,
batch_size=batch_size,
capacity=2 * num_preprocess_threads * batch_size,
min_after_dequeue=1 * num_preprocess_threads * batch_size,
enqueue_many=True)
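For what it's worth, here is a rough sketch of the same oversampling idea using the later tf.data API (not part of the original pipeline; it assumes a dataset of (image, label) pairs and hard-codes a factor of 2 for label 1):

def oversample(image, label):
    # Emit minority-class examples twice, everything else once.
    factor = tf.cond(tf.equal(label, 1),
                     lambda: tf.constant(2, tf.int64),
                     lambda: tf.constant(1, tf.int64))
    return tf.data.Dataset.from_tensors((image, label)).repeat(factor)

dataset = dataset.flat_map(oversample)
dataset = dataset.shuffle(buffer_size=10000).batch(batch_size)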