Image data augmentation in TF 2.0 -- rotation

I am training a TensorFlow model with multiple images as input and a segmentation mask as output, and I want to perform random rotation augmentation in my dataset pipeline.
I have a list of parallel image file names (input and output files aligned), which I convert into a tf.data.Dataset using tf.data.Dataset.from_generator, and then use Dataset.map to load the images with tf.io.read_file and tf.image.decode_png.
How can I perform random rotations on the input-output image pairs? I tried the random_transform method of the ImageDataGenerator class, but it expects NumPy arrays as input and does not work on tensors (and since TensorFlow does not execute eagerly inside the data pipeline, I cannot convert the tensors to NumPy either). I suppose I could use tf.numpy_function, but I expect there is a simpler solution to this simple problem.
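A minimal sketch of one way to do this inside the pipeline, assuming the dataset yields (image, mask) pairs: draw a single random rotation inside the map function and apply it to both tensors so they stay aligned. tf.image.rot90 covers multiples of 90 degrees; for arbitrary angles you would need something like tfa.image.rotate from TensorFlow Addons.

import tensorflow as tf

def random_rotate(image, mask):
    # Draw one random number of quarter turns and apply it to both
    # tensors so the image and its segmentation mask stay aligned.
    k = tf.random.uniform([], minval=0, maxval=4, dtype=tf.int32)
    return tf.image.rot90(image, k=k), tf.image.rot90(mask, k=k)

# `dataset` is assumed to already yield (image, mask) pairs.
dataset = dataset.map(random_rotate, num_parallel_calls=tf.data.experimental.AUTOTUNE)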

Related

How to use a cv2 image augmentation function with tensorflow tf.data.Dataset?

I am using tf.data.Dataset to create my dataset and training a CNN with Keras. I need to apply masks to the images, and the mask depends on the shape of the image; there are no predefined pixel coordinates.
When looking for an answer on the internet, I found that there are two ways of accessing the shapes of images in TensorFlow (at training time):
Using eager execution (which is not enabled by default in my case; I'm using TF 1.12.0)
Using a session
I do not want to use eager execution because it slows down training, and I cannot use a session because I train and test the CNN using Keras (I feed the data to model.fit() using iterators over a tf.data.Dataset).
As a consequence, I have no way of knowing the shapes of images, and thus cannot access specific pixels for data augmentation.
I wrote a function using OpenCV (cv2) that applies the masks. Is there a way to integrate it with the TensorFlow data pipeline?
EDIT: I found a solution. I used tf.py_func to wrap the Python functions.
You can use map to transform the elements of your dataset, and tf.py_function to wrap your cv2 function in a TensorFlow op that executes eagerly. In TensorFlow 1.x you would use tf.py_func instead, though its behavior is slightly different; see the tf.py_function documentation for more info.
So, in TF-2.x it will look something like:
def cv2_func(image, label):
    # your cv2-based augmentation goes here; return the transformed pair
    return image, label

def tf_cv2_func(image, label):
    [image, label] = tf.py_function(cv2_func, [image, label],
                                    [tf.float32, tf.float64])
    return image, label

train_ds = train_ds.shuffle(BUFFER_SIZE).map(tf_cv2_func).batch(BATCH_SIZE)
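One caveat: tensors returned by tf.py_function lose their static shape information, so if later layers need known shapes you may have to restore them yourself. A sketch, with a hypothetical image shape:

def tf_cv2_func(image, label):
    [image, label] = tf.py_function(cv2_func, [image, label],
                                    [tf.float32, tf.float64])
    image.set_shape([224, 224, 3])  # hypothetical shape; use your actual dimensions
    label.set_shape([])
    return image, label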
NOTE: Since you need image augmentation, here is some information on various image-augmentation libraries. This does not show you how to add an OpenCV function into your tf.data pipeline, but if your requirements are standard enough, you may be able to use one of these:
tf.keras.preprocessing.image.ImageDataGenerator
imgaug
albumentations
Data Augmentation in Python
Packages:
- albumentations (external library) -- url: Python albumentations library
- imgaug (external library, recommended) -- url: Python imgaug library
- tf.keras.preprocessing.image.ImageDataGenerator (ships with TensorFlow) -- url: Python - TensorFlow ImageDataGenerator library
Examples:
- Example use of albumentations -- url: Example use-cases of Albumentations
- Example use of imgaug -- url: Data Augmentation for Deep Learning
- Highly recommended article -- url: Data Augmentation techniques in python
- Example use of tf.keras.preprocessing.image.ImageDataGenerator -- url: Official example use-case of tf.keras ImageDataGenerator; url: Building powerful image classification models using very little data
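As a brief illustration of the albumentations route, a minimal sketch with hypothetical file names and transform choices: the library can apply one sampled transform to an image and its mask together, which matches the segmentation use case in the original question.

import albumentations as A
import cv2

# A pipeline that rotates by up to 30 degrees and flips half the time.
transform = A.Compose([A.Rotate(limit=30), A.HorizontalFlip(p=0.5)])

image = cv2.imread("image.png")                       # hypothetical paths
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
augmented = transform(image=image, mask=mask)         # same transform applied to both
aug_image, aug_mask = augmented["image"], augmented["mask"]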

Image feature extraction with TF2 Keras API and TF Dataset

How should I use a tf Dataset in order run model.predict(data) and have access to the other features of the tf Dataset?
For example: my tf dataset has this format:
(tensor<(100,224,224,3)>, tensor<(100,)>) -> preprocessed images as tf.float32, UUIDs of the images as tf.string
If I extract the feature vector like this:
for image_data, uuids in ds.batch(100):
    features = model.predict(image_data)  # -> I get an array of features
At this point, features is a NumPy array of shape (100, 2048) and uuids is a tf.string tensor of shape (100,).
How can I combine them in order to write the feature vectors to disk?
From my understanding, I need to have both of them in the same format: either both as tensors, so I can keep using TensorFlow code and save the feature vectors as a TFRecord, or extract the UUID as a Python string from the uuid tensor, so I can use plain Python code and save the array to a file with numpy.tofile.
So my questions are:
- How can I make the features a tensor?
- Or can I get the string value out of the uuid tensor?
- Does anything sound wrong in what I am trying to do? Is there a more optimal way to create the input pipeline? Or did I misunderstand the usage of the Keras API and tf.data?
If I use a plain Python pipeline I can successfully save the array to a file, but I would like to use tf.data because I think it will be faster and better optimized thanks to its parallel map function, batching, and autotuning of parallel calls.
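For what it's worth, in TF 2.x eager mode the loop above already yields both pieces in usable form: model.predict returns a NumPy array, and calling .numpy() on the string tensor gives Python byte strings. A minimal sketch, where the per-UUID file naming is a hypothetical choice:

for image_data, uuids in ds.batch(100):
    features = model.predict(image_data)                  # NumPy array, shape (100, 2048)
    uuid_strings = [u.decode("utf-8") for u in uuids.numpy()]
    for uuid, vector in zip(uuid_strings, features):
        vector.tofile(uuid + ".bin")                      # one file per image; naming is hypothetical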

Tensorflow Lite for variable sized input

I have a model much like the TensorFlow speech commands demo, except that it takes a variable-sized 1D array as input. I am finding it difficult to convert this model to TF Lite using tflite_convert, which requires an input_shape for the input.
It is said that TF Lite requires fixed-size input for efficiency, and that you can resize the input during inference as part of your model. However, I think that would involve truncating the input, which I don't want. Is there any way to make this work with TF Lite?
You can convert your model using a fixed shape, as in --input_shape=64, and then resize the input tensor at inference time:
interpreter->ResizeInputTensor(interpreter->inputs()[0], {128});
interpreter->AllocateTensors();
// ... populate your input tensors with 128 entries ...
interpreter->Invoke();
// ... read your output tensor ...
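The same pattern is available in the Python interpreter API; a sketch, where the model path and dtype are assumptions:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")   # hypothetical path
input_index = interpreter.get_input_details()[0]["index"]
interpreter.resize_tensor_input(input_index, [128])            # resize before allocating
interpreter.allocate_tensors()
interpreter.set_tensor(input_index, np.zeros([128], dtype=np.float32))
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])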

TensorFlow input pipeline for deployment on CloudML

I'm relatively new to TensorFlow and I'm having trouble modifying some of the examples to use batch/stream processing with input functions. More specifically, what is the 'best' way to modify this script to make it suitable for training and serving deployment on Google Cloud ML?
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/text_classification.py
Something akin to this example:
https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census/estimator/trainer
I can package it up and train it in the cloud, but I can't figure out how to apply even the simple vocab_processor transformations to an input tensor. I know how to do it with pandas, but I can't apply that transformation to batches there (using the chunk_size parameter). I would be very happy if I could reuse my pandas preprocessing pipelines in TensorFlow.
I think you have three options:
1) You cannot reuse pandas preprocessing pipelines in TF. However, you could start TF from the output of your pandas preprocessing: build a vocab, convert the text words to integers, and save a new preprocessed dataset to disk. Then read the integer data (which encodes your text) in TF to do training.
2) You could build a vocab outside of TF in pandas. Then, inside TF, after reading the words, you can make a lookup table that maps the text to integers (see the sketch at the end of this answer). But if you are going to build a vocab outside of TF, you might as well do the transformation at the same time outside of TF, which is option 1.
3) Use tensorflow_transform. You can call tft.string_to_int() on the text column to automatically build the vocab and convert to integers. The output of tensorflow_transform is preprocessed data in tf.Example format, and training can then start from the tf.Example files. This is again option 1, but with tf.Example files. If you want to run prediction on raw text data, this option lets you export a graph with the same text preprocessing built in, so you don't have to manage the preprocessing step at prediction time. However, this option is the most complicated, as it introduces two additional ideas: tf.Example files and Beam pipelines.
For examples of tensorflow_transform, see https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/criteo_tft and https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/reddit_tft
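As a sketch of the lookup-table idea from option 2, assuming a one-word-per-line vocabulary file written out by the pandas step (vocab.txt is a placeholder name), TF 1.x can map words to integer ids inside the graph:

import tensorflow as tf

# Hypothetical vocabulary file produced by the pandas preprocessing step.
table = tf.contrib.lookup.index_table_from_file(
    vocabulary_file="vocab.txt", num_oov_buckets=1)

words = tf.constant(["hello", "world", "some-unknown-word"])
ids = table.lookup(words)

with tf.Session() as sess:
    sess.run(tf.tables_initializer())
    print(sess.run(ids))  # unknown words land in the OOV bucket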

Tensorflow Transfer Learning with Input Pipeline

I want to use transfer learning with Google's Inception network for an image recognition problem. I am using retrain.py from the TensorFlow example source for inspiration.
In retrain.py, the Inception graph is loaded and a feed dict is used to feed the new images into the model's input layer. However, I have my data serialized in TFRecord files and have been using an input pipeline to feed in my inputs, as demonstrated here.
So I have a tensor images which returns my input data in batches when run. But how can I feed these images into Inception? I can't use a feed dict, since my inputs are tensors, not NumPy arrays. My two ideas are:
1) simply call sess.run() on each batch to convert it to a NumPy array, and then use a feed dict to pass it to Inception.
2) replace the input node in the Inception graph with my own batch input tensor
I think (1) would work, but it seems a little inelegant. (2) seems more natural to me, but I can't do exactly that because TensorFlow graphs can only be appended to and not otherwise modified.
Is there a better approach?
You can implement option (2), replacing the input node, but you will need to modify retrain.py to do so. The tf.import_graph_def() function supports a limited form of modification to the imported graph, by remapping tensors in the imported graph to existing tensors in the target graph.
This line in retrain.py calls tf.import_graph_def() to import the Inception model, where jpeg_data_tensor becomes the tensor that you feed with input data:
bottleneck_tensor, jpeg_data_tensor, resized_input_tensor = (
    tf.import_graph_def(graph_def, name='', return_elements=[
        BOTTLENECK_TENSOR_NAME, JPEG_DATA_TENSOR_NAME,
        RESIZED_INPUT_TENSOR_NAME]))
Instead of retrieving jpeg_data_tensor from the imported graph, you can remap it to an input pipeline that you construct yourself:
# Output of a training pipeline, returning a `tf.string` tensor containing
# a JPEG-encoded image.
jpeg_data_tensor = ...

bottleneck_tensor, resized_input_tensor = (
    tf.import_graph_def(
        graph_def,
        input_map={JPEG_DATA_TENSOR_NAME: jpeg_data_tensor},
        return_elements=[BOTTLENECK_TENSOR_NAME, RESIZED_INPUT_TENSOR_NAME]))
Wherever you previously fed jpeg_data_tensor, you no longer need to feed it, because the inputs will be read from the input pipeline you constructed. (Note that you might need to handle resized_input_tensor as well... I'm not intimately familiar with retrain.py, so some restructuring might be necessary.)
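For concreteness, here is a minimal sketch of a pipeline that could stand in for jpeg_data_tensor above, in TF 1.x style, assuming TFRecord files whose examples store the encoded image under an "image/encoded" feature (both the file name and the feature key are assumptions):

import tensorflow as tf

def parse_example(example_proto):
    # Pull the raw JPEG bytes out of each serialized tf.Example.
    features = tf.parse_single_example(
        example_proto,
        {"image/encoded": tf.FixedLenFeature([], tf.string)})
    return features["image/encoded"]

dataset = tf.data.TFRecordDataset(["train.tfrecord"])  # hypothetical file name
jpeg_data_tensor = (dataset.map(parse_example)
                    .make_one_shot_iterator()
                    .get_next())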