How to load an image dataset from CSV with preprocessing? - tensorflow

I have a CSV file with image paths and their associated classes. I would like to import them into a dataset (or a generator) to use in a neural network. Everybody shows Keras's flow_from_dataframe() function, but I would like to apply transformations to some pictures depending on their names. Can I achieve this with the function's parameters? Is it possible to achieve it after calling the function? Is there another Keras function for this? In short, what's the best way to do it?
(The code with flow_from_dataframe():)
import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1./255)
train_generator = datagen.flow_from_dataframe(
    pd.read_csv(dataset_file_path),
    x_col="filename",
    y_col="class")

Related

Is there a way to use keras flow_from_directory with distributed training (mirrored strategy)?

I cannot find any documentation or example code on how to do dynamic file loading with distribute strategies in TensorFlow. I see many examples using Keras's flow_from_directory, which seems very nice because it relies on a generator and does scaling etc. on the fly, so it doesn't require loading hundreds of thousands of images at once. But the distribute strategy examples (mirrored strategy etc.) all show pre-loading the entire dataset, then using dataset.from_tensor_slices() and passing that to strategy.experimental_distribute_dataset(). This is not feasible with large data that exceeds memory. I've tried to combine the dynamic loading from Keras with distributed batching, but there seems to be no trivial way to convert flow_from_directory output into something compatible with what strategy.experimental_distribute_dataset() expects, so these two features are probably not compatible with one another.
Is there some other way to do this with distributed training?
I could probably hand-code dynamic file loading during training, but it would be pretty slow and basic compared to what generators and flow offer. I would be very surprised if TF2 didn't include this functionality for distributed training yet.
tf.data.Dataset.from_tensor_slices() does not load all of your data into memory at once.
It may look that way, but when configured the right way it uses batching, prefetching, caching, etc.
Look at my small example of how I am using the tf.data pipeline: the map function gets executed during training or prefetching, not at the beginning, so not all the data is loaded into memory at the same time. (Only the labels and file paths are materialized up front; the images themselves are read lazily.)
import tensorflow as tf

AUTOTUNE = tf.data.experimental.AUTOTUNE

def parse_img(label, path):
    # Read and decode the image file lazily, inside the pipeline.
    img = tf.io.read_file(path)
    img = tf.image.decode_png(img, channels=0, dtype=tf.uint8)
    img = tf.image.convert_image_dtype(img, tf.float32)
    return img, label

# Only the labels and file paths are sliced into the dataset up front.
ds_train = tf.data.Dataset.from_tensor_slices((list_labels_train,
                                               list_paths_train))
ds_train = ds_train.map(parse_img, num_parallel_calls=AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(len(list_paths_train), seed=42,
                            reshuffle_each_iteration=True)
ds_train = ds_train.batch(BATCH_SIZE)
ds_train = ds_train.repeat()
ds_train = ds_train.prefetch(buffer_size=AUTOTUNE)
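To connect this back to distribution: a minimal sketch, assuming TF2 Keras training (the model and the NUM_CLASSES / STEPS_PER_EPOCH / EPOCHS constants are placeholders, like BATCH_SIZE above). With tf.distribute.MirroredStrategy, model.fit consumes the tf.data dataset directly and shards it across replicas; strategy.experimental_distribute_dataset(ds_train) is only needed for custom training loops.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # Build and compile the model inside the strategy scope.
    model = tf.keras.Sequential([tf.keras.layers.Flatten(),
                                 tf.keras.layers.Dense(NUM_CLASSES)])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(ds_train, steps_per_epoch=STEPS_PER_EPOCH, epochs=EPOCHS)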

Get hidden layer outputs

I made a DenseNet network from gluon.vision:
densenet = vision.densenet121(pretrained=True, ctx=mx.cpu())
I want to get the outputs of each convolutional layer (after a prediction), to plot them afterwards (feature maps).
I can't do densenet.get_internals() (as I saw on the internet and GitHub), since my network is not a Symbol but a HybridBlock.
I found a solution on the MXNet forum:
Gluon, get features maps of a CNN
You have to convert the Gluon model to a Symbol: use export() (a HybridBlock method) to save the symbol and parameters, and mx.sym.load() to load them back.
get_internals()["name_of_the_layer"] gives you the network truncated at that layer, so you can then do feat_maps = net(image) to get all the feature maps up to this layer.
You can then use a SummaryWriter from mxboard to export them to TensorBoard.
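A rough sketch of that procedure (the layer name 'conv0_output' is a hypothetical pick from list_outputs(); everything else uses documented MXNet APIs):
import mxnet as mx
from mxnet import gluon
from mxnet.gluon.model_zoo import vision

# Export the HybridBlock as a Symbol plus a parameter file.
densenet = vision.densenet121(pretrained=True, ctx=mx.cpu())
densenet.hybridize()
densenet(mx.nd.zeros((1, 3, 224, 224)))  # one forward pass to build the graph
densenet.export('densenet121')  # writes densenet121-symbol.json and densenet121-0000.params

# Reload as a Symbol and truncate it at the layer of interest.
sym = mx.sym.load('densenet121-symbol.json')
internals = sym.get_internals()
print(internals.list_outputs())       # inspect the available layer names
feat_sym = internals['conv0_output']  # hypothetical layer name

# Wrap the truncated Symbol back into a Gluon block and reload the weights.
net = gluon.SymbolBlock(feat_sym, mx.sym.var('data'))
net.collect_params().load('densenet121-0000.params', ctx=mx.cpu(), ignore_extra=True)
# feat_maps = net(image)  # `image` is a preprocessed NDArray batch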

Tensorflow Keras GCP ML engine model serving

I'm working on an image classifier with TensorFlow Estimator + Keras, retraining the last layer of the pretrained inception_v3 application on GCP ML Engine.
The Keras model is exported with tf.keras.estimator.model_to_estimator. The input function receives the path of an image stored on GCP Cloud Storage, opens the image with tf.image.decode_jpeg, and returns a dataset in the format dict(zip(['inception_v3_input'], [image])), label.
I'm trying to define the tf.estimator.export.ServingInputReceiver, but I'm having trouble with it.
The model serves predictions correctly with the predict method, using the input function without the labels.
My idea was to reuse the input function to decode the image, passing only the path of the image on Cloud Storage to the prediction for the Google endpoint as well, but I can't see how to do it.
Thanks for your help.
If I'm understanding correctly, your question is how to get the file from Cloud Storage, considering that you want to decode the image this way:
image_decoded = tf.image.decode_jpeg(image_string)
So, in this case, you can use:
image_string = file_io.FileIO(filename, mode='rb').read()  # read the raw bytes for decode_jpeg
after importing file_io first:
from tensorflow.python.lib.io import file_io
According to the comments on this question about reading input data from GCS, using tf.read_file should provide the same results, since "there was a bunch of work done to abstract file io and file systems, so all the io functionality works consistently". So you can also try the read_file function.
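A minimal sketch of such a serving input function, assuming TF1-style Estimator APIs; the 299x299 size and the /255 scaling are assumptions that must match your training preprocessing:
import tensorflow as tf

def serving_input_receiver_fn():
    # The request payload: a batch of gs:// image paths.
    paths = tf.placeholder(dtype=tf.string, shape=[None], name='image_path')

    def decode(path):
        # tf.read_file understands gs:// URIs on ML Engine.
        image = tf.image.decode_jpeg(tf.read_file(path), channels=3)
        image = tf.image.resize_images(image, [299, 299])  # inception_v3 input size
        return image / 255.0  # assumed scaling; match your training input_fn

    images = tf.map_fn(decode, paths, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        features={'inception_v3_input': images},
        receiver_tensors={'image_path': paths})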

How to augment data in tensorflow tfrecords?

I am storing my data in TFRecords, I read them as tensors using the Dataset API, and then I use the Estimator API to perform training. Now, I want to do online data augmentation on each item in the dataset, but after trying for a while I cannot find a way to do it. I want random flipping, random rotation, and other manipulations.
I am following the instructions given in this tutorial with a custom estimator, which is my CNN, and I am not sure where the data augmentation step should occur.
Using TFRecords doesn't prevent you from doing data augmentation.
Following the tutorial you linked in your comment, here is roughly what happens:
You create the dataset from the TFRecords files, and parse the file to get an image and a label
dataset = tf.data.TFRecordDataset(filenames=filenames)
dataset = dataset.map(parse)
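The parse function itself is not shown above; here is a hedged sketch of what it might look like (the feature keys 'image' and 'label' are assumptions about how the TFRecords were written):
def parse(serialized):
    features = tf.parse_single_example(
        serialized,
        features={
            'image': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
        })
    image = tf.image.decode_jpeg(features['image'], channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    return image, features['label']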
You can now apply a new preprocessing function to do some data augmentation during training
# Only do it when we are training
if train:
    dataset = dataset.map(train_preprocess)
The train_preprocess function can be something like this:
def train_preprocess(image, label):
    flip_image = tf.image.random_flip_left_right(image)
    # Other transformations...
    return flip_image, label
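Since the question also asks about rotation: tf.image only provides 90-degree rotations via tf.image.rot90 (arbitrary angles need tf.contrib.image.rotate or, later, TensorFlow Addons). A hedged sketch of a fuller preprocessing function:
def train_preprocess_full(image, label):
    image = tf.image.random_flip_left_right(image)
    # Rotate by a random multiple of 90 degrees.
    k = tf.random_uniform([], minval=0, maxval=4, dtype=tf.int32)
    image = tf.image.rot90(image, k=k)
    image = tf.image.random_brightness(image, max_delta=0.1)
    return image, label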

Feeding a single image into model trained with inception v3

I've searched around the internet for a few days and cannot seem to find an example of someone feeding a single image into a graph created using Inception. Please let me know if I have grossly overlooked something obvious. To put the problem in context, I've:
1) Trained a model and produced the relevant checkpoint files
model.ckpt-10000.data-00000-of-00001
model.ckpt-10000.index
model.ckpt-10000.meta
2) I then load the model
tf.reset_default_graph()
sess = tf.Session()
saver = tf.train.import_meta_graph(checkpoint_path + "/model.ckpt-10000.meta", clear_devices=True)
#<tensorflow.python.training.saver.Saver object at 0x11eea89e8>
# restore() runs the restore ops itself, so it must not be wrapped in sess.run()
saver.restore(sess, checkpoint_path + "/model.ckpt-10000")
3) This works correctly, so I load the default graph,
graph = tf.get_default_graph()
Here is where I am lost. As seen in this example, we must identify the layers of the graph by name in order to pass our image data in: http://cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/.
So, what are the names of these layers? I suppose they are something like "DecodeJpeg" and "/tower1/predictions/logits", but those are no better than guesses.
Thank you for your help.
The standard way of mapping between operations before and after save/restore is by adding them to collections. Search for tf.add_to_collection and tf.get_collection in https://www.tensorflow.org/api_guides/python/meta_graph. These examples save training_op and logits, but you can save your input placeholders as well.
If you cannot re-save the meta graph def and it does not have any collections, looking at node names and types (inputs are typically placeholder ops) might be the best you can do.
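A minimal sketch of the collection approach (the model here is a stand-in; only the collection calls are the point):
import tensorflow as tf

# --- At training/export time: tag the tensors you will need later ---
inputs = tf.placeholder(tf.float32, [None, 299, 299, 3], name='input_images')
logits = tf.layers.dense(tf.layers.flatten(inputs), 10)  # stand-in for the real model
tf.add_to_collection('inputs', inputs)
tf.add_to_collection('logits', logits)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.Saver().save(sess, './model.ckpt')

# --- At inference time: look tensors up by collection, not by guessed names ---
tf.reset_default_graph()
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('./model.ckpt.meta')
    saver.restore(sess, './model.ckpt')
    inputs = tf.get_collection('inputs')[0]
    logits = tf.get_collection('logits')[0]
    # preds = sess.run(logits, feed_dict={inputs: image[None, ...]})  # batch of one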