Concatenating conv layers with different filter sizes in CNTK

In CNTK - how can I use several filter sizes on the same layer (e.g. filter sizes 2,3,4,5)?
Following the work done here (link to the GitHub code below (1)), I want to take text, use an embedding layer, apply four different filter sizes (2, 3, 4, 5), concatenate the results, and feed them to a fully connected layer.
Network architecture figure
Keras sample code:
main_input = Input(shape=(100,))
embedding = Embedding(output_dim=32, input_dim=100, input_length=100, dropout=0)(main_input)

conv1 = getconvmodel(2, 256)(embedding)
conv2 = getconvmodel(3, 256)(embedding)
conv3 = getconvmodel(4, 256)(embedding)
conv4 = getconvmodel(5, 256)(embedding)

merged = merge([conv1, conv2, conv3, conv4], mode="concat")

def getconvmodel(filter_length, nb_filter):
    model = Sequential()
    model.add(Convolution1D(nb_filter=nb_filter,
                            input_shape=(100, 32),
                            filter_length=filter_length,
                            border_mode='same',
                            activation='relu',
                            subsample_length=1))
    model.add(Lambda(sum_1d, output_shape=(nb_filter,)))
    #model.add(BatchNormalization(mode=0))
    model.add(Dropout(0.5))
    return model
(1): /joshsaxe/eXposeDeepNeuralNetwork/blob/master/src/modeling/models.py

You can do something like this:
import cntk as C
import cntk.layers as cl

def getconvmodel(filter_length, nb_filter):
    @C.Function
    def model(x):
        f = cl.Convolution(filter_length, nb_filter, activation=C.relu)(x)
        f = C.reduce_sum(f, axis=0)
        f = cl.Dropout(0.5)(f)
        return f
    return model

main_input = C.input_variable(100)
embedding = cl.Embedding(32)(main_input)

conv1 = getconvmodel(2, 256)(embedding)
conv2 = getconvmodel(3, 256)(embedding)
conv3 = getconvmodel(4, 256)(embedding)
conv4 = getconvmodel(5, 256)(embedding)

merged = C.splice(conv1, conv2, conv3, conv4)

Or with Sequential() and a lambda:
def getconvmodel(filter_length, nb_filter):
    return cl.Sequential([
        cl.Convolution(filter_length, nb_filter, activation=C.relu),
        lambda f: C.reduce_sum(f, axis=0),
        cl.Dropout(0.5)
    ])
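Either way, the merged tensor can then feed the fully connected layer the question asks for. A minimal sketch (the 128 hidden units and the final output dimension are arbitrary choices of mine, not from the original post):

hidden = cl.Dense(128, activation=C.relu)(merged)  # fully connected layer on the spliced features
output = cl.Dense(2)(hidden)                       # e.g. a two-class output; adjust to your task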

Related

Combine keras model horizontally, one of them pretrained

I have two functional Keras models at the same level (same input and output shape); one of them is pre-trained. I would like to combine them horizontally and then retrain the whole model, i.e. initialize the pretrained one with its weights and the other one randomly. How can I combine them horizontally by adding them as branches (not concatenating)?
def define_model_a(input_shape, initializer, outputs=1):
    input_layer = Input(shape=(input_shape))
    # first path
    path10 = input_layer
    path11 = Conv1D(filters=1, kernel_size=3, strides=1, padding="same", use_bias=True, kernel_initializer=initializer)(path10)
    path12 = Lambda(lambda x: abs(x))(path11)
    output = Add()([path10, path12])
    define_model_a = Model(inputs=input_layer, outputs=output)
    define_model_a._name = 'model_a'
    return define_model_a

def define_model_b(input_shape, initializer, outputs=1):
    input_layer = Input(shape=(input_shape))
    # first path
    path10 = input_layer
    path11 = Conv1D(filters=1, kernel_size=3, strides=1, padding="same", use_bias=True, kernel_initializer=initializer)(path10)
    path12 = ReLU()(path11)
    path13 = Dense(1, use_bias=True)(path12)
    output = path13
    define_model_b = Model(inputs=input_layer, outputs=output)
    define_model_b._name = 'model_b'
    return define_model_b

def define_merge_interpretation():
    ????
    ????
    output = Add()(model_a, model_b)
    model = Model(inputs=input_layer, outputs=output)
    return model

initializer = tf.keras.initializers.HeNormal()
model_a = define_model_a(input_shape, initializer, outputs=1)
model_b = define_model_b(input_shape, initializer, outputs=1)
model_a.load_weights(load_path)
merge_interpretation = define_merge_interpretation()
history = merge_interpretation.fit(......
For reference, I am looking for a final structure like the one in the image, but with some pretrained branches.
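One way to do this (a minimal sketch of my own, reusing input_shape and load_path from the question): build a fresh Input, call both models on it as branches, and Add their outputs; the pretrained branch keeps its loaded weights, and it can be frozen via model_a.trainable = False before retraining.

input_layer = Input(shape=(input_shape))
out_a = model_a(input_layer)           # pretrained branch (weights loaded above)
out_b = model_b(input_layer)           # randomly initialized branch
output = Add()([out_a, out_b])         # add the branches instead of concatenating
merge_interpretation = Model(inputs=input_layer, outputs=output)
merge_interpretation.compile(optimizer='adam', loss='mse')  # placeholders; pick your own
history = merge_interpretation.fit(X, y)  # X, y stand in for your training data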

How can I reduce the dimension of data, loaded through the flow_from_directory function of ImageDataGenerator?

Since I load my data (images) from structured folders, I use the flow_from_directory function of the ImageDataGenerator class provided by Keras. I have no issues feeding this data to a CNN model, but when it comes to an LSTM model I get the following error: ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (64, 28, 28, 1). How can I reduce the dimensionality of the input data while reading it via ImageDataGenerator objects, so that I can use an LSTM model instead of a CNN?
p.s. The shape of the input images is (28, 28) and they are grayscale.
train_valid_datagen = ImageDataGenerator(validation_split=0.2)
train_gen = train_valid_datagen.flow_from_directory(
    directory=TRAIN_IMAGES_PATH,
    target_size=(28, 28),
    color_mode='grayscale',
    batch_size=64,
    class_mode='categorical',
    shuffle=True,
    subset='training'
)
Update: The LSTM model code:
inp = Input(shape=(28, 28, 1))
inp = Lambda(lambda x: squeeze(x, axis=-1))(inp) # from 4D to 3D
x = LSTM(num_units, dropout=dropout, recurrent_dropout=recurrent_dropout, activation=activation_fn, return_sequences=True)(inp)
x = BatchNormalization()(x)
x = Dense(128, activation=activation_fn)(x)
output = Dense(nb_classes, activation='softmax', kernel_regularizer=l2(0.001))(x)
model = Model(inputs=inp, outputs=output)
You have to feed your network 4D data (the images) for compatibility with ImageDataGenerator, and then reshape it into the 3D format an LSTM expects.
These are the possibilities:
With only one channel, you can simply squeeze the last dimension:
inp = Input(shape=(28, 28, 1))
x = Lambda(lambda x: tf.squeeze(x, axis=-1))(inp) # from 4D to 3D
x = LSTM(32)(x)
If you have multiple channels (the case of RGB images, or if you would like to apply an RNN after a Conv2D), a solution can be this:
inp = Input(shape=(28, 28, 1))
x = Conv2D(32, 3, padding='same', activation='relu')(inp)
x = Reshape((28,28*32))(x) # from 4D to 3D
x = LSTM(32)(x)
The fit can be computed as always with model.fit_generator.
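For example, with the train_gen defined in the question (a sketch; the step count and epoch number are placeholders of mine):

model.fit_generator(train_gen,
                    steps_per_epoch=train_gen.samples // 64,
                    epochs=10)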
UPDATE: model review
inp = Input(shape=(28, 28, 1))
x = Lambda(lambda x: tf.squeeze(x, axis=-1))(inp)  # from 4D to 3D
x = LSTM(32, dropout=dropout, recurrent_dropout=recurrent_dropout, activation=activation_fn, return_sequences=False)(x)
x = BatchNormalization()(x)
x = Dense(128, activation=activation_fn)(x)
output = Dense(nb_classes, activation='softmax', kernel_regularizer=l2(0.001))(x)
model = Model(inputs=inp, outputs=output)
model.summary()
Pay attention when you define the inp variable (don't overwrite it).
Set return_sequences=False in the LSTM in order to get a 2D output.

Converting keras functional model to keras class in tensorflow 2

I am trying to convert a Keras functional model into a class derived from tensorflow.keras.models.Model, and I'm facing 2 issues.
1. I need to multiply 2 layers using tensorflow.keras.layers.multiply, but it returns a ValueError: A merge layer should be called on a list of inputs.
2. If I remove this layer, thus working with a classical CNN, it returns a tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'patch:0' shape=(None, 64, 64, 3) dtype=float32>].
I would appreciate some guidance to convert my code. I'm using Python 3.7, TensorFlow 2.0rc2 and Keras 2.3.0. The class I have defined is the following:
class TestCNN(Model):
    """
    conv1 > conv2 > fc1 > fc2 > alpha * fc2 > Sigmoid > output
    """
    def __init__(self, input_dimension, n_category, **kwargs):
        """
        Instantiator
        :param input_dimension: tuple of int, theoretically (patch_size x patch_size x channels)
        :param n_category: int, the number of categories to classify,
        :param weight_decay: float, weight decay parameter for all the kernel regularizers
        :return: the Keras model
        """
        super(TestCNN, self).__init__(name='testcnn', **kwargs)
        self.input_dimension = input_dimension
        self.n_category = n_category
        self.conv1 = Conv2D(36, activation='relu', name='conv1/relu')
        self.conv1_maxpooling = MaxPooling2D((2, 2), name='conv1/maxpooling')
        self.conv2 = Conv2D(48, activation='relu', name='conv2/relu')
        self.conv2_maxpooling = MaxPooling2D((2, 2), name='conv2/maxpooling')
        self.flatten1 = Flatten(name='flatten1')
        self.fc1 = Dense(512, activation='relu', name='fc1/relu')
        self.fc2 = Dense(512, activation='relu', name='fc2/relu')
        self.alpha = TestLayer(layer_dim=128, name='alpha')
        self.output1 = TestSigmoid(output_dimension=n_category, name='output_layer')

    @tensorflow.function
    def call(self, x):
        x = self.conv1(x)
        x = self.conv1_maxpooling(x)
        x = self.conv2(x)
        x = self.conv2_maxpooling(x)
        x = self.flatten1(x)
        x = self.fc1(x)
        x = self.fc2(x)
        alpha_times_fc2 = multiply([alpha_output, fc2_output], name='alpha_times_fc2')
        return self.output1(alpha_times_fc2)

    def build(self, **kwargs):
        inputs = Input(shape=self.input_dimension, dtype='float32', name='patch')
        outputs = self.call(inputs)
        super(TestCNN, self).__init__(name="TestCNN", inputs=inputs, outputs=outputs, **kwargs)
Then, in my main loop, I'm creating the instance as follows:
testcnn = TestCNN(input_dimension=input_dimension, n_category=training_set.category_count)
optimizer = tensorflow.keras.optimizers.Adam(
    lr=parameter['training']['adam']['learning_rate'],
    beta_1=parameter['training']['adam']['beta1'],
    beta_2=parameter['training']['adam']['beta2'])
metrics_list = [tensorflow.keras.metrics.TruePositives]
loss_function = tensorflow.keras.losses.categorical_crossentropy
loss_metrics = tensorflow.keras.metrics.Mean()
testcnn.build()
testcnn.summary()
This code raises the tensorflow.python.eager.core._SymbolicException. If I comment out some lines and return the result of the fc2 layer directly, I get the ValueError. I have commented out the build() function in my model and call it in my main script as follows:
testcnn.build(input_dimension)
testcnn.compile(optimizer=adam_optimizer, loss=loss_function, metrics=metrics_list)
testcnn.summary()
Input dimension is a list formatted as following:
input_dimension = (batch_size, image_size, image_size, channels)
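For reference, here is a sketch of how call() could avoid both errors (my suggestion, not the original poster's code): create a Multiply layer in __init__ (e.g. self.multiply = tensorflow.keras.layers.Multiply()), keep the intermediate tensors in local variables, and drop the @tensorflow.function decorator so Keras can trace the symbolic tensors.

def call(self, x):
    x = self.conv1(x)
    x = self.conv1_maxpooling(x)
    x = self.conv2(x)
    x = self.conv2_maxpooling(x)
    x = self.flatten1(x)
    x = self.fc1(x)
    fc2_output = self.fc2(x)
    alpha_output = self.alpha(fc2_output)   # assumes alpha is computed from fc2
    alpha_times_fc2 = self.multiply([alpha_output, fc2_output])
    return self.output1(alpha_times_fc2)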

How can I change dense layer to conv2D layer in keras?

I want to do classification for pictures with different input sizes. I would like to use the ideas from the following paper:
'Fully Convolutional Networks for Semantic Segmentation'
https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf
I changed the dense layers to Conv2D layers like this:
def FullyCNN(input_shape, n_classes):
    inputs = Input(shape=(None, None, 1))
    first_layer = Conv2D(filters=16, kernel_size=(12, 16), strides=1, activation='relu', kernel_initializer='he_normal', name='conv1')(inputs)
    first_layer = BatchNormalization()(first_layer)
    first_layer = MaxPooling2D(pool_size=2)(first_layer)
    second_layer = Conv2D(filters=24, kernel_size=(8, 12), strides=1, activation='relu', kernel_initializer='he_normal', name='conv2')(first_layer)
    second_layer = BatchNormalization()(second_layer)
    second_layer = MaxPooling2D(pool_size=2)(second_layer)
    third_layer = Conv2D(filters=32, kernel_size=(5, 7), strides=1, activation='relu', kernel_initializer='he_normal', name='conv3')(second_layer)
    third_layer = BatchNormalization()(third_layer)
    third_layer = MaxPooling2D(pool_size=2)(third_layer)
    fully_layer = Conv2D(64, kernel_size=8, activation='relu', kernel_initializer='he_normal')(third_layer)
    fully_layer = BatchNormalization()(fully_layer)
    fully_layer = Dropout(0.5)(fully_layer)
    fully_layer = Conv2D(n_classes, kernel_size=1)(fully_layer)
    output = Conv2DTranspose(n_classes, kernel_size=1, activation='softmax')(fully_layer)
    model = Model(inputs=inputs, outputs=output)
    return model
and I made a generator to use with fit_generator():
def data_generator(x_train, y_train):
    while True:
        index = np.asscalar(np.random.choice(len(x_train), 1))
        feature = np.expand_dims(x_train[index], -1)
        feature = np.resize(feature, (-1, feature.shape))
        feature = np.expand_dims(feature, 0)  # make (1, input_height, input_width, 1)
        label = y_train[index]
        yield (feature, label)
These are images of my data.
However, there is a problem with dimensions: since the output layer must have 4 dimensions, unlike the original CNN model, the dimensions do not match the labels.
Model summary:
Original CNN model summary:
How can I handle this problem? I tried to change the label dimensions by expanding them:
label = np.expand_dims(label,0)
label = np.expand_dims(label,0)
label = np.expand_dims(label,0)
I think there is a better way, and I wondered: is the Conv2DTranspose necessary? And should the batch size be 1?
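One common way to handle this for classification (my suggestion, not from the original post) is to end the network with a global pooling layer instead of a Conv2DTranspose, so that any input size collapses to a fixed-length vector and the labels can keep the usual (batch, n_classes) shape:

from keras.layers import GlobalAveragePooling2D, Activation

fully_layer = Conv2D(n_classes, kernel_size=1)(fully_layer)  # per-location class scores
pooled = GlobalAveragePooling2D()(fully_layer)               # (batch, n_classes), for any input size
output = Activation('softmax')(pooled)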

How to replace (or insert) intermediate layer in Keras model?

I have a trained Keras model and I would like:
1) to replace a Conv2D layer with the same layer but without bias;
2) to add a BatchNormalization layer before the first Activation.
How can I do this?
def keras_simple_model():
    from keras.models import Model
    from keras.layers import Input, Dense, GlobalAveragePooling2D
    from keras.layers import Conv2D, MaxPooling2D, Activation

    inputs1 = Input((28, 28, 1))
    x = Conv2D(4, (3, 3), activation=None, padding='same', name='conv1')(inputs1)
    x = Activation('relu')(x)
    x = Conv2D(4, (3, 3), activation=None, padding='same', name='conv2')(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='pool1')(x)
    x = Conv2D(8, (3, 3), activation=None, padding='same', name='conv3')(x)
    x = Activation('relu')(x)
    x = Conv2D(8, (3, 3), activation=None, padding='same', name='conv4')(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='pool2')(x)
    x = GlobalAveragePooling2D()(x)
    x = Dense(10, activation=None)(x)
    x = Activation('softmax')(x)
    model = Model(inputs=inputs1, outputs=x)
    return model

if __name__ == '__main__':
    model = keras_simple_model()
    print(model.summary())
The following function allows you to insert a new layer before or after each layer in the original model whose name matches a regular expression, or to replace that layer; it also works for non-sequential models such as DenseNet or ResNet.
import re
from keras.models import Model

def insert_layer_nonseq(model, layer_regex, insert_layer_factory,
                        insert_layer_name=None, position='after'):
    # Auxiliary dictionary to describe the network graph
    network_dict = {'input_layers_of': {}, 'new_output_tensor_of': {}}

    # Set the input layers of each layer
    for layer in model.layers:
        for node in layer._outbound_nodes:
            layer_name = node.outbound_layer.name
            if layer_name not in network_dict['input_layers_of']:
                network_dict['input_layers_of'].update(
                    {layer_name: [layer.name]})
            else:
                network_dict['input_layers_of'][layer_name].append(layer.name)

    # Set the output tensor of the input layer
    network_dict['new_output_tensor_of'].update(
        {model.layers[0].name: model.input})

    # Iterate over all layers after the input
    model_outputs = []
    for layer in model.layers[1:]:
        # Determine input tensors
        layer_input = [network_dict['new_output_tensor_of'][layer_aux]
                       for layer_aux in network_dict['input_layers_of'][layer.name]]
        if len(layer_input) == 1:
            layer_input = layer_input[0]

        # Insert layer if name matches the regular expression
        if re.match(layer_regex, layer.name):
            if position == 'replace':
                x = layer_input
            elif position == 'after':
                x = layer(layer_input)
            elif position == 'before':
                pass
            else:
                raise ValueError('position must be: before, after or replace')

            new_layer = insert_layer_factory()
            if insert_layer_name:
                new_layer.name = insert_layer_name
            else:
                new_layer.name = '{}_{}'.format(layer.name, new_layer.name)
            x = new_layer(x)
            print('New layer: {} Old layer: {} Type: {}'.format(new_layer.name,
                                                                layer.name, position))
            if position == 'before':
                x = layer(x)
        else:
            x = layer(layer_input)

        # Set new output tensor (the original one, or the one of the inserted layer)
        network_dict['new_output_tensor_of'].update({layer.name: x})

        # Save tensor in output list if it is output in initial model
        if layer.name in model.output_names:
            model_outputs.append(x)

    return Model(inputs=model.inputs, outputs=model_outputs)
The difference with respect to the simpler case of a purely sequential model is that, before iterating over the layers to find the key layer, you first parse the graph and store the input layers of each layer in an auxiliary dictionary. Then, as you iterate over the layers, you also store the new output tensor of each layer, which is used to determine the input layers of each layer when building the new model.
A use case would be the following, where a Dropout layer is inserted after each activation layer of ResNet50:
from keras.applications.resnet50 import ResNet50
from keras.layers import Dropout
from keras.models import load_model

model = ResNet50()

def dropout_layer_factory():
    return Dropout(rate=0.2, name='dropout')

model = insert_layer_nonseq(model, '.*activation.*', dropout_layer_factory)

# Fix possible problems with new model
model.save('temp.h5')
model = load_model('temp.h5')
model.summary()
You can use the following functions:
from keras.models import Model

def replace_intermediate_layer_in_keras(model, layer_id, new_layer):
    layers = [l for l in model.layers]

    x = layers[0].output
    for i in range(1, len(layers)):
        if i == layer_id:
            x = new_layer(x)
        else:
            x = layers[i](x)

    new_model = Model(inputs=layers[0].input, outputs=x)
    return new_model

def insert_intermediate_layer_in_keras(model, layer_id, new_layer):
    layers = [l for l in model.layers]

    x = layers[0].output
    for i in range(1, len(layers)):
        if i == layer_id:
            x = new_layer(x)
        x = layers[i](x)

    new_model = Model(inputs=layers[0].input, outputs=x)
    return new_model
Example:
from keras.layers import Conv2D, BatchNormalization

model = keras_simple_model()
print(model.summary())

model = replace_intermediate_layer_in_keras(
    model, 3,
    Conv2D(
        4, (3, 3),
        activation=None,
        padding='same',
        name='conv2_repl',
        use_bias=False
    )
)
print(model.summary())

model = insert_intermediate_layer_in_keras(
    model, 4, BatchNormalization()
)
print(model.summary())
There are some limitations on replacements due to layer shapes etc.
This is how I did it:
from keras.models import Model

def make_list(X):
    if isinstance(X, list):
        return X
    return [X]

def list_no_list(X):
    if len(X) == 1:
        return X[0]
    return X

def replace_layer(model, replace_layer_subname, replacement_fn, **kwargs):
    """
    args:
        model :: keras.models.Model instance
        replace_layer_subname :: str -- if str in layer name, replace it
        replacement_fn :: fn to call to replace all instances
            > fn output must produce the same shape as the replaced layer's input
    returns:
        new model with replaced layers
    quick examples:
        want to just remove all layers with 'batch_norm' in the name:
            > new_model = replace_layer(model, 'batch_norm', lambda **kwargs: (lambda u: u))
        want to replace all Conv1D(N, m, padding='same') with an LSTM (let's say all have 'conv1d' in the name):
            > new_model = replace_layer(model, 'conv1d', lambda layer, **kwargs: LSTM(units=layer.filters, return_sequences=True))
    """
    model_inputs = []
    model_outputs = []
    tsr_dict = {}

    model_output_names = [out.name for out in make_list(model.output)]

    for i, layer in enumerate(model.layers):
        ### Loop if layer is used multiple times
        for j in range(len(layer._inbound_nodes)):

            ### check layer inputs/outputs
            inpt_names = [inp.name for inp in make_list(layer.get_input_at(j))]
            outp_names = [out.name for out in make_list(layer.get_output_at(j))]

            ### set up model inputs
            if 'input' in layer.name:
                for inpt_tsr in make_list(layer.get_output_at(j)):
                    model_inputs.append(inpt_tsr)
                    tsr_dict[inpt_tsr.name] = inpt_tsr
                continue

            ### set up layer inputs
            inpt = list_no_list([tsr_dict[name] for name in inpt_names])

            ### remake layer
            if replace_layer_subname in layer.name:
                print('replacing ' + layer.name)
                x = replacement_fn(old_layer=layer, **kwargs)(inpt)
            else:
                x = layer(inpt)

            ### reinstantiate outputs into dict
            for name, out_tsr in zip(outp_names, make_list(x)):
                ### check if it is an output
                if name in model_output_names:
                    model_outputs.append(out_tsr)
                tsr_dict[name] = out_tsr

    return Model(model_inputs, model_outputs)
I have a custom layer (taken from someone online) called BatchNormalizationFreeze, so an example of usage is this:
new_model = replace_layer(model, 'batch_normal', lambda **kwargs: BatchNormalizationFreeze())
If you're going to do multiple layers, just replace the replacement function with a pseudo-model that does them all at once.
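For instance, the replacement function could return a small Sequential block (a sketch of mine; the layer choices and the 'dense' match string are arbitrary):

from keras.models import Sequential
from keras.layers import Dense, Dropout

# Hypothetical: replace every matched Dense layer with a Dense + Dropout block.
new_model = replace_layer(
    model, 'dense',
    lambda old_layer, **kwargs: Sequential([
        Dense(old_layer.units, activation='relu'),
        Dropout(0.3),
    ]))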
Unfortunately, replacing a layer is no small feat for models that do not follow the sequential pattern. For sequential patterns it is OK to just do x = layer(x) and substitute a new_layer when you see fit, as in the previous answer.
However, for models that do not have a classic sequential pattern (say you have a simple "concatenation" of two columns), you have to actually parse the graph and use your new_layer (or layers) in the right places. Hope this is not too discouraging, and happy graph parsing and reconstructing :)
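To make the graph-parsing idea concrete, a minimal illustration of mine for the two-column case (the layer names 'branch_a' and 'branch_b' are hypothetical): rebuild each branch tensor separately, swap in the new layer where needed, then re-merge.

from keras.layers import Input, Dense, Concatenate
from keras.models import Model

inp = Input(shape=(16,))
branch_a = model.get_layer('branch_a')(inp)   # reuse the original layer
branch_b = Dense(8, activation='relu')(inp)   # new layer standing in for 'branch_b'
merged = Concatenate()([branch_a, branch_b])
new_model = Model(inputs=inp, outputs=merged)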