Should shuffle be set to True during Keras Tuner? - tensorflow2.0

I am using Keras Tuner to hypertune my model. I am setting the parameter value “validation_split = 0.2” in the search() call. Does it still make sense to pass “shuffle = True” or is that redundant / counter-productive?
tuner = RandomSearch(
    hypermodel = build_model,
    objective = kt.Objective("val_loss", direction = "min"),
    max_trials = 100,
    executions_per_trial = 2,
    directory = "V5",
    project_name = "case8",
    seed = RANDOM_SEED,
    overwrite = True
)
tuner.search(
    x = x_train_new,
    y = y_train.values,
    batch_size = 1024,
    epochs = 100,
    validation_split = 0.2,
    shuffle = True,
    callbacks = [model_early_stopping]
)

validation_split = 0.2
This splits the data into 80% training data and 20% validation data.
By default, Keras shuffles the training data before each epoch, so there is no need to pass shuffle = True explicitly; it is redundant but harmless.
For time-series data the tuner should not shuffle the data; in that case set shuffle to False.
When the data is passed as a generator, the shuffle parameter is ignored even if you pass it.
You can find more details here:
https://github.com/tensorflow/tensorflow/blob/80117da12365720167632761a61e0e32e4db2dcc/tensorflow/python/keras/engine/training.py#L1003
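For example, a minimal sketch of the time-series case (reusing the same tuner and data from the question), where shuffle is set to False explicitly:
tuner.search(
    x = x_train_new,
    y = y_train.values,
    batch_size = 1024,
    epochs = 100,
    validation_split = 0.2,
    shuffle = False,  # keep temporal order for time-series data
    callbacks = [model_early_stopping]
)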

Related

evaluating two inputs and one output model tensorflow

I am trying to evaluate a model with 2 inputs and 1 output. Each input goes to a separate pretrained model, and the outputs of both models are averaged. I am using the same data for both inputs.
test_dir = 'D:\Graduation_project\Damage type not collected'
test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255,)
test_set = test_datagen.flow_from_directory(test_dir,
                                            class_mode = 'categorical',
                                            batch_size = 16,
                                            target_size = (150,150))
test_set1 = test_datagen.flow_from_directory(test_dir,
                                             class_mode = 'categorical',
                                             batch_size = 16,
                                             target_size = (150,150))
Loading the first model and renaming its layers:
def load_dense_model():
    densenet = tf.keras.models.load_model('D:\Graduation_project\saved models\damage_type_model.h5', compile=False)
    for i, layer in enumerate(densenet.layers):
        layer._name = 'Densenet_layer' + str(i)
    return densenet
Loading the second model:
def load_vgg19_model():
    vgg19 = tf.keras.models.load_model('D:\Graduation_project\saved models\damage_type_VGG19.h5', compile=False)
    return vgg19
Creating the ensemble model:
def ensamble_model(first_model, second_model):
    densenet = first_model()
    vgg19 = second_model()
    output_1 = densenet.get_layer('Densenet_layer613')
    output_2 = vgg19.get_layer('dense_4')
    avg = tf.keras.layers.Average()([output_1.output, output_2.output])
    model = Model(inputs=[densenet.input, vgg19.input], outputs=avg)
    return model
METRICS = [
    'accuracy',
    tf.metrics.TruePositives(name='tp'),
    tf.metrics.FalsePositives(name='fp'),
    tf.metrics.TrueNegatives(name='tn'),
    tf.metrics.FalseNegatives(name='fn'),
    tf.metrics.Precision(name='precision'),
    tf.metrics.Recall(name='recall'),
    tfa.metrics.F1Score(name='F1_Score', num_classes=5),
    tfa.metrics.MultiLabelConfusionMatrix(num_classes=5)
]
model = ensamble_model(load_dense_model, load_vgg19_model)
Compiling and evaluating the model:
model.compile(optimizer = 'adam', loss = 'binary_crossentropy',
              metrics = 'accuracy')
model.evaluate({'Densenet_layer0': test_set1, 'input_2': test_set})
evaluate() fails to run with the following error:
ValueError: Failed to find data adapter that can handle input: (<class 'dict'> containing {"<class 'str'>"} keys and {"<class 'tensorflow.python.keras.preprocessing.image.DirectoryIterator'>"} values), <class 'NoneType'>
My guess is that your model is complaining because you are feeding it a dict of iterators that each yield one image batch, instead of a single iterator that yields each image batch twice (once for each model input).
What would happen if you wrapped your DirectoryIterator in a generator that feeds the data correctly?
def gen_itertest(test_dir):
    test_set = test_datagen.flow_from_directory(test_dir,
                                                class_mode = 'categorical',
                                                batch_size = 16,
                                                target_size = (150,150))
    for i in range(len(test_set)):
        x = test_set.next()
        yield [x[0], x[0]], x[1]  # twice the input, only once the label
and then you can feed this to evaluate():
testset = gen_itertest('D:\Graduation_project\Damage type not collected')
result = model.evaluate(testset)
I am not sure this will work, but since you haven't provided us with a minimal, reproducible example, I am not going to build one to test it.
Try calling evaluate() like this:
result = model.evaluate(x=[test_set1, test_set])
Then you could get the names of the metrics by doing something like this:
dict(zip(model.metrics_names, result))
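A hedged alternative sketch (not from either answer above): wrap the pairing logic in a tf.data.Dataset so that evaluate() receives an explicit ((input_1, input_2), label) structure. The shapes below assume 150x150 RGB images and 5 one-hot-encoded classes, as in the question:
def paired_gen():
    # yield each image batch twice (once per model input) and the label once
    for _ in range(len(test_set)):
        x, y = test_set.next()
        yield (x, x), y

paired_ds = tf.data.Dataset.from_generator(
    paired_gen,
    output_types=((tf.float32, tf.float32), tf.float32),
    output_shapes=(((None, 150, 150, 3), (None, 150, 150, 3)), (None, 5)))
result = model.evaluate(paired_ds)
print(dict(zip(model.metrics_names, result)))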

Keras evaluation suddenly starts giving unreasonably low accuracy while loss stays the same

A couple of days ago I trained a ResNet in Colab and evaluated it with the following code:
model.compile(loss = "sparse_categorical_crossentropy",
              optimizer = optimizer,
              metrics = ["accuracy"]
              )
checkpoint_cb = keras.callbacks.ModelCheckpoint(
    filepath = checkpoint_path,
    save_weights_only = False,
    monitor = 'val_pred_loss',
    save_best_only = True
)
tensorboard_cb = keras.callbacks.TensorBoard(tensorboard_path)
earlystopping_cb = keras.callbacks.EarlyStopping(patience = 6, monitor="val_pred_loss", min_delta = 0.005)
history = model.fit(
    x = train_set,
    validation_data = val_set,
    validation_steps = 1629//val_b_size,
    epochs = epochs,
    steps_per_epoch = steps_per_epoch,
    callbacks = [checkpoint_cb, tensorboard_cb, earlystopping_cb]
)
best_model = tf.keras.models.load_model(checkpoint_path)
test_set = test_set.batch(17)
print(best_model.evaluate(test_set))
The output was [0.42691513895988464, 0.8850889205932617]
The model does not have any custom components; it's a simple ResNet with a new GAP layer and a dense layer for classification. Upon rerunning the last 3 lines today, I consistently got a nonsensical accuracy: [0.42691513895988464, 0.004352692514657974]. I initially thought that I had changed something in the script by mistake or messed up the file save and load, but the CE loss is the same. How is this possible?
Edit: the issue involves any loaded model; evaluating a trained net directly from RAM works as expected.
Here's how the model is defined:
base_model = keras.applications.ResNet50(include_top = False,
                                         weights = "imagenet",
                                         input_shape = (448, 448, 3),
                                         )
avg = keras.layers.GlobalAveragePooling2D()(base_model.output)  # 14 x 14 x 2048 -> 2048
o = keras.layers.Dense(196, activation = "softmax")(avg)
model = keras.Model(inputs=base_model.input, outputs=[o])
Update: replacing the load_model statement with model.load_weights resolves the issue. I'd still like to know the reason.
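A minimal sketch of that workaround (assuming the same checkpoint_path and the model definition above): rebuild the architecture, then restore only the weights instead of deserializing the whole saved model.
base_model = keras.applications.ResNet50(include_top=False, weights=None, input_shape=(448, 448, 3))
avg = keras.layers.GlobalAveragePooling2D()(base_model.output)
o = keras.layers.Dense(196, activation="softmax")(avg)
best_model = keras.Model(inputs=base_model.input, outputs=[o])
# the optimizer choice is arbitrary here, since we only evaluate
best_model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
best_model.load_weights(checkpoint_path)  # overwrites the freshly initialized weights
print(best_model.evaluate(test_set))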

Preprocessing for TensorFlow Dataset 'cats_vs_dogs'

I am trying to create a preprocessing function so that the training_dataset can be directly fed into a keras sequential neural network. The preprocess function should return features and labels.
def preprocessing_function(data):
    features = ...
    labels = ...
    return features, labels
dataset, info = tfds.load(name='cats_vs_dogs', split=tfds.Split.TRAIN, with_info=True)
training_dataset = dataset.map(preprocessing_function)
How should I write the preprocessing_function? I spent several hours researching and trying to make it happen, but to no avail. Hoping someone can assist.
Here are two functions for preprocessing. The first one will be applied to both the training and validation data to normalize it and resize it to the input size the network expects. The second function, augmentation, will be applied to the training set only. The type of augmentation you want to do depends on your dataset and application, but I provide this as an example.
# Fetching, pre-processing & preparing the data pipeline
def preprocess(ds):
    x = tf.image.resize_with_pad(ds['image'], IMG_SIZE_W, IMG_SIZE_H)
    x = tf.cast(x, tf.float32)
    x = (x - MEAN) / VARIANCE
    y = tf.one_hot(ds['label'], NUM_CLASSES)
    return x, y
def augmentation(image, label):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.resize_with_crop_or_pad(image, IMG_W+4, IMG_W+4)  # zero pad each side with 4 pixels
    image = tf.image.random_crop(image, size=[BATCH_SIZE, IMG_W, IMG_H, 3])  # random crop back to IMG_W x IMG_H
    return image, label
and to load training and validation datasets, do something like this:
def get_dataset(dataset_name, shuffle_buff_size=1024, batch_size=BATCH_SIZE, augmented=True):
    train, info_train = tfds.load(dataset_name, split='train[:80%]', with_info=True)
    val, info_val = tfds.load(dataset_name, split='train[80%:]', with_info=True)
    TRAIN_SIZE = info_train.splits['train'].num_examples * 0.8
    VAL_SIZE = info_train.splits['train'].num_examples * 0.2
    train = train.map(preprocess).cache().repeat().shuffle(shuffle_buff_size).batch(batch_size)
    if augmented == True:
        train = train.map(augmentation)
    train = train.prefetch(tf.data.experimental.AUTOTUNE)
    val = val.map(preprocess).cache().repeat().batch(batch_size)
    val = val.prefetch(tf.data.experimental.AUTOTUNE)
    return train, info_train, val, info_val, TRAIN_SIZE, VAL_SIZE
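A possible usage sketch (assuming the constants above such as IMG_SIZE_W, MEAN and BATCH_SIZE are defined and a compiled model exists); because both datasets call repeat(), steps_per_epoch and validation_steps must be passed to fit():
train, info_train, val, info_val, TRAIN_SIZE, VAL_SIZE = get_dataset('cats_vs_dogs')
model.fit(train,
          epochs=10,
          steps_per_epoch=int(TRAIN_SIZE // BATCH_SIZE),
          validation_data=val,
          validation_steps=int(VAL_SIZE // BATCH_SIZE))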

Tensorflow Federated : Why my iterative Process unable to train rounds

I wrote code with TFF for my own dataset; all of it runs correctly except one line (shown further below).
In train_data, I build 4 datasets, loaded with tf.data.Dataset; they have the type "DatasetV1Adapter".
def client_data(n):
    ds = source.create_tf_dataset_for_client(source.client_ids[n])
    return ds.repeat(10).map(map_fn).shuffle(500).batch(20)
federated_train_data = [client_data(n) for n in range(4)]
batch = tf.nest.map_structure(lambda x: x.numpy(), iter(train_data[0]).next())
def model_fn():
    model = tf.keras.models.Sequential([
        .........
    return tff.learning.from_compiled_keras_model(model, batch)
All of this runs correctly and I get a trainer and a state:
trainer = tff.learning.build_federated_averaging_process(model_fn)
However, when I want to begin the training rounds with this code:
state, metrics = iterative_process.next(state, federated_train_data)
print('round 1, metrics={}'.format(metrics))
I can't; an error occurs. So, where could the error come from? From the type of dataset, or from the way that I make my data federated?
As verified in the comments above, adding a take(N) call for some finite integer N in the client_data function should solve this problem. The issue is that TFF will reduce over all the elements in the dataset you pass it. If you have an infinite dataset, this means "keep running the reduction forever". N here should represent "how much data an individual client has", and can really be anything of your choosing.
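For example, a minimal sketch of that change applied to the client_data function from the question (the value of N, here 50 batches, is an arbitrary choice):
def client_data(n, n_batches=50):
    ds = source.create_tf_dataset_for_client(source.client_ids[n])
    return ds.repeat(10).map(map_fn).shuffle(500).batch(20).take(n_batches)
federated_train_data = [client_data(n) for n in range(4)]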
Here is my code; I use TensorFlow v2.1.0 and TFF 0.12.0:
img_height = 200
img_width = 200
num_classes = 2
batch_size = 10
input_shape = (img_height, img_width, 3)
img_gen = tf.keras.preprocessing.image.ImageDataGenerator()
gen0 = img_gen.flow_from_directory(par1_train_data_dir,(200, 200),'rgb', batch_size=10)
ds_par1 = tf.data.Dataset.from_generator(gen,
    output_types=(tf.float32, tf.float32),
    output_shapes=([None, img_height, img_width, 3], [None, num_classes])
)
ds_par2 = tf.data.Dataset.from_generator(gen0,
    output_types=(tf.float32, tf.float32),
    output_shapes=([None, img_height, img_width, 3], [None, num_classes])
)
dataset_dict={}
dataset_dict['1'] = ds_par1
dataset_dict['2'] = ds_par2
def create_tf_dataset_for_client_fn(client_id):
    return dataset_dict[client_id]
source = tff.simulation.ClientData.from_clients_and_fn(['1','2'],create_tf_dataset_for_client_fn)
def client_data(n):
    ds = source.create_tf_dataset_for_client(source.client_ids[n])
    return ds
train_data = [client_data(n) for n in range(1)]
images, labels = next(img_gen.flow_from_directory(par1_train_data_dir,batch_size=batch_size,target_size=(img_height,img_width)))
sample_batch = (images,labels)
def create_compiled_keras_model():
    .....
def model_fn():
    keras_model = create_compiled_keras_model()
    return tff.learning.from_compiled_keras_model(keras_model, sample_batch)
iterative_process = tff.learning.build_federated_averaging_process(model_fn)
state = iterative_process.initialize()
state, metrics = iterative_process.next(state, train_data)
print('round 1, metrics={}'.format(metrics))

How to use Tensorflow's tf.cond() with two different Dataset iterators without iterating both?

I want to feed a CNN with the tensor "images". I want this tensor to contain images from the training set (which have a FIXED size) when the placeholder is_training is True; otherwise I want it to contain images from the test set (which are of NON-FIXED size).
This is needed because in training I take a random fixed-size crop from the training images, while in testing I want to perform a dense evaluation and feed the entire images into the network (it is fully convolutional, so it will accept them).
The current NOT WORKING way is to create two different iterators and try to select the training/test input with tf.cond at session.run(images, {is_training: True/False}).
The problem is that BOTH iterators are evaluated. The training and test datasets are also of different sizes, so I cannot iterate over both of them until the end. Is there a way to make this work? Or to rewrite this in a smarter way?
I've seen some questions/answers about this, but they always used tf.assign, which takes a numpy array and assigns it to a tensor. In this case I cannot use tf.assign because I already have a tensor coming from the iterators.
The current code that I have is this. It simply checks the shape of the tensor "images":
train_filenames, train_labels = list_images(args.train_dir)
val_filenames, val_labels = list_images(args.val_dir)
graph = tf.Graph()
with graph.as_default():
    # Preprocessing (for both training and validation):
    def _parse_function(filename, label):
        image_string = tf.read_file(filename)
        image_decoded = tf.image.decode_jpeg(image_string, channels=3)
        image = tf.cast(image_decoded, tf.float32)
        return image, label
    # Preprocessing (for training)
    def training_preprocess(image, label):
        # Random flip and crop
        image = tf.image.random_flip_left_right(image)
        image = tf.random_crop(image, [args.crop, args.crop, 3])
        return image, label
    # Preprocessing (for validation)
    def val_preprocess(image, label):
        flipped_image = tf.image.flip_left_right(image)
        batch = tf.stack([image, flipped_image], axis=0)
        return batch, label
    # Training dataset
    train_filenames = tf.constant(train_filenames)
    train_labels = tf.constant(train_labels)
    train_dataset = tf.contrib.data.Dataset.from_tensor_slices((train_filenames, train_labels))
    train_dataset = train_dataset.map(_parse_function, num_threads=args.num_workers, output_buffer_size=args.batch_size)
    train_dataset = train_dataset.map(training_preprocess, num_threads=args.num_workers, output_buffer_size=args.batch_size)
    train_dataset = train_dataset.shuffle(buffer_size=10000)
    batched_train_dataset = train_dataset.batch(args.batch_size)
    # Validation dataset
    val_filenames = tf.constant(val_filenames)
    val_labels = tf.constant(val_labels)
    val_dataset = tf.contrib.data.Dataset.from_tensor_slices((val_filenames, val_labels))
    val_dataset = val_dataset.map(_parse_function, num_threads=1, output_buffer_size=1)
    val_dataset = val_dataset.map(val_preprocess, num_threads=1, output_buffer_size=1)
    train_iterator = tf.contrib.data.Iterator.from_structure(batched_train_dataset.output_types, batched_train_dataset.output_shapes)
    val_iterator = tf.contrib.data.Iterator.from_structure(val_dataset.output_types, val_dataset.output_shapes)
    train_images, train_labels = train_iterator.get_next()
    val_images, val_labels = val_iterator.get_next()
    train_init_op = train_iterator.make_initializer(batched_train_dataset)
    val_init_op = val_iterator.make_initializer(val_dataset)
    # Indicates whether we are in training or in test mode
    is_training = tf.placeholder(tf.bool)
    def f_true():
        with tf.control_dependencies([tf.identity(train_images)]):
            return tf.identity(train_images)
    def f_false():
        return val_images
    images = tf.cond(is_training, f_true, f_false)
    num_images = images.shape
with tf.Session(graph=graph) as sess:
    sess.run(train_init_op)
    #sess.run(val_init_op)
    img = sess.run(images, {is_training: True})
    print(img.shape)
The problem is that when I want to use only the training iterator, I comment out the line that initializes val_init_op, but then I get the following error:
FailedPreconditionError (see above for traceback): GetNext() failed because the iterator has not been initialized. Ensure that you have run the initializer operation for this iterator before getting the next element.
[[Node: IteratorGetNext_1 = IteratorGetNext[output_shapes=[[2,?,?,3], []], output_types=[DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/cpu:0"](Iterator_1)]]
If I do not comment out that line, everything works as expected: when is_training is True I get training images, and when is_training is False I get validation images. The issue is that both iterators need to be initialized, and when I evaluate one of them, the other is advanced too. Since, as I said, they are of different sizes, this causes an issue.
I hope there is a way to solve it! Thanks in advance
The trick is to call iterator.get_next() inside the f_true() and f_false() functions:
def f_true():
    train_images, _ = train_iterator.get_next()
    return train_images
def f_false():
    val_images, _ = val_iterator.get_next()
    return val_images
images = tf.cond(is_training, f_true, f_false)
The same advice applies to any TensorFlow op that has a side effect, like assigning to a variable: if you want that side effect to happen conditionally, the op must be created inside the appropriate branch function passed to tf.cond().
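For example, a small sketch (in the same TF 1.x graph-mode style as the question) of a conditional variable update, where the assign op is created inside the branch so it only runs when is_training is True:
counter = tf.Variable(0, dtype=tf.int32)
def increment_counter():
    # assign op created inside the branch: executed only when is_training is True
    return tf.assign_add(counter, 1)
def keep_counter():
    return tf.identity(counter)
maybe_increment = tf.cond(is_training, increment_counter, keep_counter)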