I'm using Google Colab and Kaggle notebooks. Both are great tools for a beginner like me, but they are limited to about 13 GB of RAM. I want to train pretrained models on an animal-faces dataset; it contains around 14,200 samples divided into 3 classes.
I can't load all of the data, only 1000 samples from each class. After normalization and loading the data into the model, RAM usage jumps from 3 GB to 13 GB.
I think my code is not optimized. How can I add a tf.data input pipeline to my code?
this is my code
import os

import cv2
import numpy as np
import tensorflow as tf
from tqdm import tqdm
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

# X holds the images, Z the string labels (Y, the encoded labels, is built later)
X = []
Z = []
filenames = []
IMG_SIZE = 224
#data path
cat='../input/animal-faces/afhq/train/cat'
dog='../input/animal-faces/afhq/train/dog'
wild='../input/animal-faces/afhq/train/wild'
def assign_label(img, img_type):
    return img_type

def make_train_data(img_type, DIR):
    for img in tqdm(os.listdir(DIR)):
        label = assign_label(img, img_type)
        path = os.path.join(DIR, img)
        filenames.append(str(path))
        img = cv2.imread(path, cv2.IMREAD_COLOR)
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        X.append(np.array(img))
        Z.append(str(label))
n = 1000
make_train_data('dog',dog)
X=X[:n]
Z=Z[:n]
filenames=filenames[:n]
print(len(X))
make_train_data('cat',cat)
X=X[:2*n]
Z=Z[:2*n]
filenames=filenames[:2*n]
print(len(X))
make_train_data('wild',wild)
X=X[:3*n]
Z=Z[:3*n]
filenames=filenames[:3*n]
print(len(X))
le=LabelEncoder()
Y=le.fit_transform(Z)
Y=tf.keras.utils.to_categorical(Y,3)
X=np.array(X)
print(X.dtype)
X=np.float32(X/255)
print(X.dtype)
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=42)
X=[]
print(len(x_train))
print(len(x_test))
batch_size=32
epochs=20
The type of x_train and x_test is numpy.ndarray.
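What I'd like to end up with, I think, is something along these lines (an untested sketch: it reuses filenames, Y, IMG_SIZE and batch_size from above, and decodes each image only when its batch is needed instead of holding everything in RAM):

# Untested sketch: build the pipeline from file paths so images are decoded
# per batch on the fly instead of all at once.
AUTOTUNE = tf.data.experimental.AUTOTUNE

def load_and_preprocess(path, label):
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, [IMG_SIZE, IMG_SIZE])
    img = tf.cast(img, tf.float32) / 255.0
    return img, label

dataset = tf.data.Dataset.from_tensor_slices((filenames, Y))
dataset = dataset.shuffle(len(filenames))
dataset = dataset.map(load_and_preprocess, num_parallel_calls=AUTOTUNE)
dataset = dataset.batch(batch_size).prefetch(AUTOTUNE)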
Related
I'm trying to understand how to use multiple GPUs to train a model on data too large for GPU memory. Using tf.distribute.MirroredStrategy seems to copy the full data set to each GPU. What I'm hoping to do is send a subset of the full dataset to each GPU (2 or 4 GPUs) and use MirroredStrategy to reconcile parameter updates on each epoch.
MirroredStrategy.distribute_datasets_from_function() looks promising.
https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy#distribute_datasets_from_function
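Something like the following is what I have in mind (an untested sketch based on the linked docs; X_dict and y_dict stand for the dictionaries of numpy arrays described below):

# Untested sketch following the distribute_datasets_from_function docs:
# each input pipeline builds only its own shard of the data.
strategy = tf.distribute.MirroredStrategy()
global_batch_size = 256

def dataset_fn(input_context):
    per_replica_batch = input_context.get_per_replica_batch_size(global_batch_size)
    ds = tf.data.Dataset.from_tensor_slices((X_dict, y_dict))
    # keep only this input pipeline's shard of the examples
    ds = ds.shard(input_context.num_input_pipelines, input_context.input_pipeline_id)
    return ds.shuffle(10000).batch(per_replica_batch).prefetch(tf.data.experimental.AUTOTUNE)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)

with strategy.scope():
    train_model = build_model(**model_params)
    train_model.compile(...)
# steps_per_epoch may need to be set explicitly for a sharded dataset
train_model.fit(dist_dataset, epochs=10)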
Problem details:
A fairly complicated multimodal NN with ~200k parameters synthesizing many text, transactional, and structured inputs and with multiple regression and probabilistic outputs. I'm looking at moving development from a single GPU with 24 GB memory to cloud compute with multiple 16 GB cards on a single node.
The input and targets are currently dictionaries of numpy arrays. I'm hoping for a toy example converting those dictionaries into a distributed data set through to training with different subsets of the full data set assigned to each GPU.
I attempted this:
def build_model(**model_params):
    '''
    Builds a model from model_params
    '''
    return tf.keras.Model(
        inputs=[MY_INPUT_TENSORS],
        outputs=[MY_OUTPUT_TENSORS])

distributed_strategy = tf.distribute.MirroredStrategy()
with distributed_strategy.scope():
    train_model = build_model(**model_params)
    train_model.compile(...)
train_model.fit(X_dict, y_dict)
This runs on a 50% sample of the data, but hits OOM on the full sample with 2 GPUs. The full data set appears to be copied to each of the two 16 GB GPUs available. The same model runs with a 100% sample on a single 24 GB GPU.
Here's how I got it working with tf.data.Dataset.from_tensor_slices() and tf.distribute.MirroredStrategy.experimental_distribute_dataset():
# Data exists in the form of dictionaries of large numpy arrays
x_train, y_train, x_validation, y_validation = {}, {}, {}, {}

# Create tensorflow datasets using CPU / system memory
with tf.device("CPU"):
    train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
    valid = tf.data.Dataset.from_tensor_slices((x_validation, y_validation))

batch_size = 1024
epochs = 30

distributed_strategy = tf.distribute.MirroredStrategy()
num_gpu = distributed_strategy.num_replicas_in_sync

# Create a distributed dataset from the tensorflow datasets.
# The data gets streamed to the GPUs, so shuffling, repetition per epoch, and batch
# size need to be manually specified
train = train.shuffle(100 * batch_size).repeat(epochs).batch(num_gpu * batch_size, drop_remainder=True)
train_dist = distributed_strategy.experimental_distribute_dataset(train)

valid = valid.repeat(epochs).batch(num_gpu * batch_size, drop_remainder=True)

# Build and compile the model
with distributed_strategy.scope():
    train_model = build_model(**model_params)
    train_model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
        loss=losses,
        loss_weights=weights)

# Train the model. steps_per_epoch and validation_steps need to be specified.
train_model.fit(
    train_dist,
    validation_data=valid,
    epochs=epochs,
    steps_per_epoch=int(len(train) // epochs),
    validation_steps=int(len(valid) // epochs),
    use_multiprocessing=True,
    verbose=1,
)
Tensorflow Version : 2.1.0
Model built using tf.keras
Graphics Card : Nvidia GTX 1660TI 6GB DDR6
CPU : Intel i7 9th Gen
Ram : 16 GB DDR4
Storage Disk : SSD (NVME)
I wrote code to read audio files in batches in a multithreaded manner using tf.keras.utils.Sequence with multiple workers, but the issue with that code is that the CPU does not read the next set of audio batches concurrently while the GPU is training, so the GPU is only utilized up to 30 percent of its capacity (training time for an epoch is around 25 minutes).
So I decided to move to tf.data.Dataset.from_generator to use the existing generator function to read the batches more efficiently. But that input pipeline performs even worse, taking 47 minutes to train an epoch. I have attached the code that I used to create the input pipeline. I read the file names and their categories from an Excel file, fed them to the generator, and created the pipeline.
Even after applying prefetch, the pipeline still performed poorly.
Since this is the first time I am using the tf.data API, I would like some insight into whether I have made any mistakes.
This is my code to generate the batches.
# Function to read the audio files
def get_x(file):
    data = []
    for i in file:
        audio, fs = sf.read(i, dtype="float32")
        data.append(audio[::2])
    data = np.array(data, dtype=np.float32)
    data = np.expand_dims(data, axis=-1)
    return data

def data_generator(files, labels, batchsize):
    while True:
        start = 0
        end = batchsize
        while start < len(files):
            x = get_x(files[start:end])
            y = np.array(tf.keras.utils.to_categorical(labels[start:end], num_classes=2), dtype=np.float32)
            yield x, y
            start += batchsize
            end += batchsize

# Get the tensorflow data dataset object to generate batches
def tf_data_dataset(files, labels, batch_size):
    autotune = tf.data.experimental.AUTOTUNE
    dataset = tf.data.Dataset.from_generator(
        data_generator,
        output_types=(np.float32, np.float32),
        output_shapes=(tf.TensorShape([None, 16000, 1]),
                       tf.TensorShape([None, 2])),
        args=(files, labels, batch_size))
    dataset = dataset.prefetch(buffer_size=autotune)
    return dataset
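One direction I'm considering (an untested sketch that reuses sf.read as above and assumes a list of file paths with integer labels) is to build the dataset from the file paths and let map parallelize the per-file reads, instead of funneling everything through a single Python generator:

# Untested sketch: dataset of (path, label) pairs, with reads parallelized by map().
import numpy as np
import soundfile as sf
import tensorflow as tf

autotune = tf.data.experimental.AUTOTUNE

def read_audio(path):
    # path arrives as a bytes object inside tf.numpy_function
    audio, _ = sf.read(path.decode("utf-8"), dtype="float32")
    return np.expand_dims(audio[::2], axis=-1)

def load_example(path, label):
    audio = tf.numpy_function(read_audio, [path], tf.float32)
    audio.set_shape([16000, 1])
    return audio, tf.one_hot(label, depth=2)

def make_dataset(files, labels, batch_size):
    ds = tf.data.Dataset.from_tensor_slices((files, labels))
    ds = ds.shuffle(len(files))
    ds = ds.map(load_example, num_parallel_calls=autotune)
    return ds.batch(batch_size).prefetch(autotune)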
I am trying to train some layers of a network whose inputs are an image and a scalar. Please see the figure below for a better understanding.
As you can see only the dark yellow layers will be trained. So I need to freeze the rest, that is for later.
Purpose of this architecture is to map images (chest x-rays) to 14 kinds of diseases.
The images are stored in the following directory: /home/akde/chexnet/CheXNet-Keras/data/images
Names of the images are the image IDs.
A dataframe maps images (Images are named as the Image ID) to classes (diseases)
As you can see an image can be mapped to more than one class (disease).
Another dataframe maps the images (Image IDs) to the patient age. You can see it below.
Image is the first input and patient age is the second.
So in short, for each image id, I have an image and age value which are in 2 separate dataframes.
I can already run inference (it gives absurd results since the network is not trained, but it still proves that the network accepts the inputs and produces a result) using the following code.
res3 = model3.predict( [test_image, a] )
where a is the scalar input while the test_image is the image input.
My training data is stored in multiple dataframes; having read that post, I deduce that flow_from_dataframe should be used.
The first thing I did was read this post, which explains how to use mixed inputs. That gave me some background, but since it does not use fit_generator (it uses fit instead), it did not solve my problem.
Then I read this post, which does not use multiple inputs. Again, no clue.
Afterwards, I saw this post, which takes two images as input (not one image and one scalar). So again, no help.
Even though I haven't found a solution to my problem, I have written the following piece of code, which will be the skeleton of the solution.
datagen = ImageDataGenerator(rescale=1./255., validation_split=0.25)

train_generator = datagen.flow_from_dataframe(traindf,
                                              directory="/home/akde/chexnet/CheXNet-Keras/data/images",
                                              class_mode="other",
                                              x_col="Image Index",
                                              y_col=["Atelectasis", "Cardiomegaly", "Effusion", "Infiltration", "Mass",
                                                     "Nodule", "Pneumonia", "Pneumothorax", "Consolidation", "Edema",
                                                     "Emphysema", "Fibrosis", "Pleural_Thickening", "Hernia"],
                                              color_mode="rgb",
                                              batch_size=32,
                                              target_size=(224, 224))

STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size

model3.compile(optimizers.rmsprop(lr=0.0001, decay=1e-6), loss="categorical_crossentropy", metrics=["accuracy"])

model3.fit_generator(generator=train_generator,
                     steps_per_epoch=STEP_SIZE_TRAIN,
                     epochs=10)
I know this piece of code is far from the solution.
So how can I create a generator that uses the 2 dataframes explained earlier (the one that maps images to diseases and the other that maps image IDs to age)?
In other words, what is the way of writing a generator that takes an image and a scalar value as input, given that both are represented in dataframes? How can I write the generator that is marked in bold below?
model3.fit_generator(**generator=train_generator**,
steps_per_epoch=STEP_SIZE_TRAIN,
epochs=10
)
For your purpose you need to create a custom generator.
I recommend you take a deep look at this link:
https://blog.ml6.eu/training-and-serving-ml-models-with-tf-keras-3d29b41e066c
And especially this code :
import ast
import math
import os
import random

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img

def load_image(image_path, size):
    # data augmentation logic such as random rotations can be added here
    return img_to_array(load_img(image_path, target_size=(size, size))) / 255.

class KagglePlanetSequence(tf.keras.utils.Sequence):
    """
    Custom Sequence object to train a model on out-of-memory datasets.
    """
    def __init__(self, df_path, data_path, im_size, batch_size, mode='train'):
        """
        df_path: path to a .csv file that contains columns with image names and labels
        data_path: path that contains the training images
        im_size: image size
        mode: when in training mode, data will be shuffled between epochs
        """
        self.df = pd.read_csv(df_path)
        self.im_size = im_size
        self.batch_size = batch_size
        self.mode = mode

        # Take labels and a list of image locations in memory
        self.wlabels = self.df['weather_labels'].apply(lambda x: ast.literal_eval(x)).tolist()
        self.glabels = self.df['ground_labels'].apply(lambda x: ast.literal_eval(x)).tolist()
        self.image_list = self.df['image_name'].apply(lambda x: os.path.join(data_path, x + '.jpg')).tolist()

    def __len__(self):
        return int(math.ceil(len(self.df) / float(self.batch_size)))

    def on_epoch_end(self):
        # Shuffles indexes after each epoch
        self.indexes = range(len(self.image_list))
        if self.mode == 'train':
            self.indexes = random.sample(self.indexes, k=len(self.indexes))

    def get_batch_labels(self, idx):
        # Fetch a batch of labels
        return [self.wlabels[idx * self.batch_size: (idx + 1) * self.batch_size],
                self.glabels[idx * self.batch_size: (idx + 1) * self.batch_size]]

    def get_batch_features(self, idx):
        # Fetch a batch of images
        batch_images = self.image_list[idx * self.batch_size: (1 + idx) * self.batch_size]
        return np.array([load_image(im, self.im_size) for im in batch_images])

    def __getitem__(self, idx):
        batch_x = self.get_batch_features(idx)
        batch_y = self.get_batch_labels(idx)
        return batch_x, batch_y
Hope this will help you find your solution!
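For the two-input case specifically (an image plus a scalar age coming from two dataframes), a rough, untested adaptation could merge the dataframes on the image ID and have __getitem__ return a list with one array per model input. Column names like "Image Index" and "Patient Age" are assumptions taken from the question; load_image is the helper defined above.

# Rough, untested sketch for an (image, scalar) two-input model.
import os
import numpy as np
import tensorflow as tf

class ImageAgeSequence(tf.keras.utils.Sequence):
    def __init__(self, disease_df, age_df, data_path, im_size, batch_size, label_cols):
        # Merge the two dataframes on the image ID so every row has image name, age and labels
        self.df = disease_df.merge(age_df, on="Image Index")
        self.data_path = data_path
        self.im_size = im_size
        self.batch_size = batch_size
        self.label_cols = label_cols  # list of the 14 disease columns

    def __len__(self):
        return int(np.ceil(len(self.df) / self.batch_size))

    def __getitem__(self, idx):
        rows = self.df.iloc[idx * self.batch_size:(idx + 1) * self.batch_size]
        images = np.array([load_image(os.path.join(self.data_path, name), self.im_size)
                           for name in rows["Image Index"]])
        ages = rows["Patient Age"].to_numpy(dtype="float32")
        labels = rows[self.label_cols].to_numpy(dtype="float32")
        # A two-input Keras model expects a list with one batch per input
        return [images, ages], labels

Passing an instance of this class to model3.fit_generator (or model3.fit in newer TF versions) then feeds [image_batch, age_batch] as the two inputs.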
Previously I manually trained my model using model.fit() inside a for loop, training on small batches of data due to memory constraints. The problem with this is that I can't access all previous histories through history.history, because it's as if a new model is trained each time, and previous histories aren't stored anywhere.
When I use model.fit() with a batch size of 500, around 7 GB of my RAM gets used. I use Keras with the tensorflow-cpu backend.
But when I use a generator, even a batch size of 50 won't fit in memory, and data gets swapped onto the disk.
I'm performing classification using 224*224 images, and I'm trying to fine-tune VGG-Face. I'm using VGG-Face implemented according to this link:
VGG-Face
I'm using ResNet and SeNet architectures, as described in the link.
I've previously shuffled my data. I've put aside 20% of my data for testing.
My data (image addresses and labels) are stored in a list. 20% of my training data will be used for validation. For example, if the batch size is 50, train_data_generator will create a batch of size 40 from the first 80% of the training data, and val_data_generator will create a batch of size 10 from the last 20%. I've written a class, and by creating an instance and invoking its train method
I perform training. Here are the generator and training parts of my code, excluding model definitions:
def prepare_input_data(self, batch_addresses):
    image = []
    for j in range(len(batch_addresses)):
        img = cv2.imread(batch_addresses[j])
        img = cv2.resize(img, (224, 224))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = img - np.array([103.939, 116.779, 123.68])
        image.append(img)
    data = np.array(image)
    data = data.astype('float32')
    data /= 255
    return data

def train_data_generator(self, addresses, labels, batch_size):
    """Train data generator"""
    # Use the first 80% of data for training.
    addresses = addresses[: int(0.8 * len(addresses))]
    labels = labels[: int(0.8 * len(labels))]
    total_data = len(addresses)
    while 1:
        for i in range(total_data // batch_size):
            batch_addresses = addresses[i * batch_size: (i + 1) * batch_size]
            batch_labels = labels[i * batch_size: (i + 1) * batch_size]
            data = self.prepare_input_data(batch_addresses)
            batch_labels = np_utils.to_categorical(batch_labels, self.nb_class)
            yield data, batch_labels

def val_data_generator(self, addresses, labels, batch_size):
    """Validation data generator"""
    # Use the last 20% of data for validation
    addresses = addresses[int(0.8 * len(addresses)):]
    labels = labels[int(0.8 * len(labels)):]
    total_data = len(addresses)
    while 1:
        for i in range(total_data // batch_size):
            batch_addresses = addresses[i * batch_size: (i + 1) * batch_size]
            batch_labels = labels[i * batch_size: (i + 1) * batch_size]
            data = self.prepare_input_data(batch_addresses)
            batch_labels = np_utils.to_categorical(batch_labels, self.nb_class)
            yield data, batch_labels

def train(self, label_interested_in):
    """Trains the model"""
    # Read training data from json file, and get addresses and labels
    addresses, labels = self.create_address_and_label(label_interested_in)
    batch_size = 50
    train_batch_size = 40
    val_batch_size = 10
    steps = int(len(addresses) / batch_size) + 1
    print(len(addresses), steps)

    # Perform training
    history = self.custom_vgg_model.fit_generator(
        self.train_data_generator(addresses, labels, train_batch_size),
        steps_per_epoch=steps, epochs=self.number_of_epochs,
        verbose=1, validation_data=self.val_data_generator(addresses, labels, val_batch_size),
        validation_steps=steps, initial_epoch=0)
Why am I seeing such high memory usage? Is it because of the way generators work in Keras? I read that generators prepare batches beforehand to speed up the training process by running in parallel with the training. Or am I doing something wrong?
As a side question, since there isn't a batch_size argument in fit_generator(), am I correct in assuming that data gets loaded into the model based on generators and gradient updates are performed after each training and validation batch is loaded?
Try workers=0
This will not invoke any multiprocessing. Multiprocessing is intended to fill up a queue beforehand, up to the max_queue_size argument, using k workers.
What that does is prepare a queue of generated data on the CPU while training is ongoing on the GPU, so no time is lost and bottlenecks are avoided.
For your need, workers=0 will work.
For a deeper inquiry, refer to keras fit_generator.
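As a minimal illustration (the generator, model, and step counts here are placeholders, not from the question), the relevant arguments sit on the fit_generator call itself:

# Placeholder names; the point is the workers / max_queue_size / use_multiprocessing arguments.
model.fit_generator(
    train_generator,
    steps_per_epoch=steps,
    epochs=epochs,
    validation_data=val_generator,
    validation_steps=val_steps,
    workers=0,                 # generate batches on the main thread, no background queue
    max_queue_size=10,         # only relevant when workers >= 1
    use_multiprocessing=False,
)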
I'm new to TensorFlow, but I have already followed and executed the tutorials they promote and many others all over the web.
I made a little convolutional neural network over the MNIST images. Nothing special, but I would like to test it on my own images.
Now my problem comes: I created several folders; the name of each folder is the class (label) the images inside belong to.
The images have different shapes; I mean they have no fixed size.
How can I load them for use with TensorFlow?
I followed many tutorials and answers both here on StackOverflow and on other Q/A sites. But still, I did not figure out how to do this.
The tf.data API (tensorflow 1.4 onwards) is great for things like this. The pipeline will look something like the following:
Create an initial tf.data.Dataset object that iterates over all examples;
(if training) shuffle/repeat the dataset;
map it through some function that makes all images the same size;
batch;
(optionally) prefetch to tell your program to collect and preprocess subsequent batches of data while the network is processing the current batch; and
get inputs.
There are a number of ways of creating your initial dataset (see here for a more in-depth answer).
TFRecords with Tensorflow Datasets
From tensorflow version 1.12 onwards, Tensorflow Datasets provides a relatively straightforward API for creating tfrecord datasets, and also handles data downloading, sharding, statistics generation and other functionality automatically.
See e.g. this image classification dataset implementation. There's a lot of bookkeeping stuff in there (download urls, citations etc.), but the technical part boils down to specifying features and writing a _generate_examples function.
features = tfds.features.FeaturesDict({
    "image": tfds.features.Image(shape=(_TILES_SIZE,) * 2 + (3,)),
    "label": tfds.features.ClassLabel(names=_CLASS_NAMES),
    "filename": tfds.features.Text(),
})

...

def _generate_examples(self, root_dir):
    root_dir = os.path.join(root_dir, _TILES_SUBDIR)
    for i, class_name in enumerate(_CLASS_NAMES):
        class_dir = os.path.join(root_dir, _class_subdir(i, class_name))
        fns = tf.io.gfile.listdir(class_dir)
        for fn in sorted(fns):
            image = _load_tif(os.path.join(class_dir, fn))
            yield {
                "image": image,
                "label": class_name,
                "filename": fn,
            }
You can also generate the tfrecords using lower level operations.
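For example, a bare-bones, untested sketch of writing encoded image files and integer labels into a TFRecord with the lower-level ops (image_paths and labels are assumed to be parallel lists):

import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

with tf.io.TFRecordWriter("images.tfrecord") as writer:
    for path, label in zip(image_paths, labels):
        with tf.io.gfile.GFile(path, "rb") as f:
            encoded = f.read()  # raw encoded bytes (e.g. JPEG/PNG), decoded at read time
        example = tf.train.Example(features=tf.train.Features(feature={
            "image": _bytes_feature(encoded),
            "label": _int64_feature(label),
        }))
        writer.write(example.SerializeToString())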
Load images via tf.data.Dataset.map and tf.py_func(tion)
Alternatively you can load the image files from filenames inside tf.data.Dataset.map as below.
image_paths, labels = load_base_data(...)
epoch_size = len(image_paths)
image_paths = tf.convert_to_tensor(image_paths, dtype=tf.string)
labels = tf.convert_to_tensor(labels)

dataset = tf.data.Dataset.from_tensor_slices((image_paths, labels))
if mode == 'train':
    dataset = dataset.repeat().shuffle(epoch_size)

def map_fn(path, label):
    # path/label represent values for a single example
    image = tf.image.decode_jpeg(tf.read_file(path))
    # some mapping to constant size - be careful with distorting aspect ratios
    image = tf.image.resize_images(image, out_shape)
    # color normalization - just an example
    image = tf.to_float(image) * (2. / 255) - 1
    return image, label

# num_parallel_calls > 1 induces intra-batch shuffling
dataset = dataset.map(map_fn, num_parallel_calls=8)
dataset = dataset.batch(batch_size)

# try one of the following
dataset = dataset.prefetch(1)
# dataset = dataset.apply(
#     tf.contrib.data.prefetch_to_device('/gpu:0'))

images, labels = dataset.make_one_shot_iterator().get_next()
I've never worked in a distributed environment, but I've never noticed a performance hit from using this approach over tfrecords. If you need more custom loading functions, also check out tf.py_func.
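A small, untested sketch of that approach, reusing the dataset from above (the loader body is illustrative only):

import cv2
import numpy as np
import tensorflow as tf

def load_with_opencv(path):
    # arbitrary Python/numpy loading logic; must return numpy arrays
    img = cv2.imread(path.decode('utf-8'))
    return cv2.resize(img, (224, 224)).astype(np.float32) / 255.

def map_fn(path, label):
    image = tf.py_func(load_with_opencv, [path], tf.float32)
    image.set_shape([224, 224, 3])
    return image, label

dataset = dataset.map(map_fn, num_parallel_calls=8)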
More general information here, and notes on performance here
Sample input pipeline script to load images and labels from a directory. You could do preprocessing (resizing images, etc.) after this.
import tensorflow as tf

filename_queue = tf.train.string_input_producer(
    tf.train.match_filenames_once("/home/xxx/Desktop/stackoverflow/images/*/*.png"))

image_reader = tf.WholeFileReader()
key, image_file = image_reader.read(filename_queue)

S = tf.string_split([key], '/')
length = tf.cast(S.dense_shape[1], tf.int32)
# adjust constant value corresponding to your paths if you face issues. It should work for above format.
label = S.values[length - tf.constant(2, dtype=tf.int32)]
label = tf.string_to_number(label, out_type=tf.int32)

image = tf.image.decode_png(image_file)

# Start a new session to show example output.
with tf.Session() as sess:
    # Required to get the filename matching to run.
    tf.initialize_all_variables().run()

    # Coordinate the loading of image files.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    for i in xrange(6):
        # Get an image tensor and print its value.
        key_val, label_val, image_tensor = sess.run([key, label, image])
        print(image_tensor.shape)
        print(key_val)
        print(label_val)

    # Finish off the filename queue coordinator.
    coord.request_stop()
    coord.join(threads)
File Directory
./images/1/1.png
./images/1/2.png
./images/3/1.png
./images/3/2.png
./images/2/1.png
./images/2/2.png
Output:
(881, 2079, 3)
/home/xxxx/Desktop/stackoverflow/images/3/1.png
3
(155, 2552, 3)
/home/xxxx/Desktop/stackoverflow/images/2/1.png
2
(562, 1978, 3)
/home/xxxx/Desktop/stackoverflow/images/3/2.png
3
(291, 2558, 3)
/home/xxxx/Desktop/stackoverflow/images/1/1.png
1
(157, 2554, 3)
/home/xxxx/Desktop/stackoverflow/images/1/2.png
1
(866, 936, 3)
/home/xxxx/Desktop/stackoverflow/images/2/2.png
2
For loading images of equal size just use this:
tf.keras.preprocessing.image_dataset_from_directory(dir)
docs: https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory
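A slightly fuller call (directory name and sizes here are illustrative) might look like:

import tensorflow as tf

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "images/train",           # one sub-folder per class
    labels="inferred",
    label_mode="int",
    image_size=(224, 224),    # all images are resized to this size
    batch_size=32,
    shuffle=True,
)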
To load images with different shapes, tf provides a pipeline implementation (ImageDataGenerator):
from tensorflow.keras.preprocessing.image import ImageDataGenerator

TARGET_SHAPE = (500, 500)
BATCH_SIZE = 32
train_dir = "train_images_directory"  # ex: images/train/
test_dir = "test_images_directory"    # ex: images/test/

# 1- create the image generator  2- load images from the directory with the target shape
train_images_generator = ImageDataGenerator(rescale=1.0/255)
train_data_gen = train_images_generator.flow_from_directory(batch_size=BATCH_SIZE,
                                                            directory=train_dir,
                                                            target_size=TARGET_SHAPE,
                                                            shuffle=True,
                                                            class_mode='sparse')
# do the same for the validation and test datasets
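Once the generator exists, it can be passed straight to fit / fit_generator (a minimal sketch; the model is a placeholder):

# Placeholder model; flow_from_directory iterators can be passed directly to fit.
model.fit(
    train_data_gen,
    steps_per_epoch=train_data_gen.samples // BATCH_SIZE,
    epochs=10,
)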