I am just trying to use Keras, but I am a bit confused about the Conv2D function when using padding='same'. Can someone help me figure out how the p (padding) value is set when padding="same"?
Here is a code example:
# X.shape = (3, 2, 2, 2) at this point
X = Conv2D(filters=4, kernel_size=(2, 2), strides=(1, 1), padding='same',
           name='apply_conv_2',
           kernel_initializer=glorot_uniform())(X)
X = BatchNormalization(axis = 3, name = 'apply_bn_2')(X)
X = Activation('relu')(X)
# X.shape = (3, 2, 2, 4) at this point
You should read the dimensions as (nr_samples, height, width, nr_channels).
If padding="same", the height and width stay the same. But I am a bit confused about which value p takes here when the dimensions are calculated.
For instance, the height dimension should be calculated as:
height_next = ROUND_DOWN(((height_prev + 2*padding - kernel_size) / stride) + 1)
height_next = height_prev = 2.
And as seen above, kernel_size = 2 and stride = 1.
So:
2 = ROUND_DOWN(((2 + 2*padding - 2) / 1) + 1)
If padding is 2, then the result becomes 5, which is not equal to 2.
If padding is 1, then result is 3 which is not equal to 2.
If padding is 0, then result is 1, which is not equal to 2.
I assume padding needs to be an integer value.
How does Keras calculate the padding value here?
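From what I have read, my understanding is that TensorFlow's 'same' padding does not use a single symmetric integer p at all: it targets an output size of ceil(input_size / stride) and then splits the total padding it needs between the two sides, putting the extra pixel on the bottom/right when that total is odd. Here is my own rough sketch of that calculation (which may be wrong), using the numbers from my example:

import math

def same_padding_1d(in_size, kernel_size, stride):
    out_size = math.ceil(in_size / stride)                            # target output size for 'same'
    pad_total = max((out_size - 1) * stride + kernel_size - in_size, 0)
    pad_before = pad_total // 2                                       # e.g. top / left
    pad_after = pad_total - pad_before                                # e.g. bottom / right (gets the extra pixel)
    return out_size, pad_before, pad_after

# height_prev = 2, kernel_size = 2, stride = 1, as above
print(same_padding_1d(2, 2, 1))   # (2, 0, 1): 0 rows of padding on top, 1 row on the bottom

So in this case the padding would be asymmetric (0 and 1), which would explain why no single integer p makes the formula work.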
in_dim, out_dim = 10, 7
bias = False
activation = None

layer = tfp.layers.weight_norm.WeightNorm(
    tf.keras.layers.Dense(out_dim, input_shape=(None, in_dim,),
                          use_bias=bias, activation=activation),
    input_shape=(None, None, in_dim))
I would like to give an input with variable length in the second dimension.
Suppose I run the below code first.
input = tf.random.normal(shape = (2, 3, 10))
output = layer(input)
output.shape
# [2, 3, 7]
After running the above code, I give another input to the network:
input2 = tf.random.normal(shape = (2, 4, 10))
output2 = layer(input2)
However, this causes an error:
Input 0 of layer "weight_norm_3" is incompatible with the layer:
expected shape=(None, 3, 10), found shape=(2, 4, 10)
I would like the second dimension to have variable length. How can I do it?
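One workaround I am considering (I am not sure it is the intended approach) is to build the wrapped layer through a symbolic Keras Input whose second dimension is left as None, so the layer is never built against a fixed sequence length. A minimal sketch, assuming TF 2.x and tensorflow_probability:

import tensorflow as tf
import tensorflow_probability as tfp

in_dim, out_dim = 10, 7

inputs = tf.keras.Input(shape=(None, in_dim))      # (batch, variable length, features)
wn_layer = tfp.layers.weight_norm.WeightNorm(
    tf.keras.layers.Dense(out_dim, use_bias=False, activation=None))
outputs = wn_layer(inputs)
model = tf.keras.Model(inputs, outputs)

print(model(tf.random.normal((2, 3, in_dim))).shape)   # expecting (2, 3, 7)
print(model(tf.random.normal((2, 4, in_dim))).shape)   # expecting (2, 4, 7)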
I'm looking through some different neural network architectures and trying to piece together how to recreate them on my own.
One issue I'm running into is the functional difference between the Concatenate() and Add() layers in Keras. It seems like they accomplish similar things (combining multiple layers together), but I don't quite see the real difference between the two.
Here's a sample Keras model that takes two separate inputs and then combines them:
inputs1 = Input(shape = (32, 32, 3))
inputs2 = Input(shape = (32, 32, 3))
x1 = Conv2D(kernel_size = 24, strides = 1, filters = 64, padding = "same")(inputs1)
x1 = BatchNormalization()(x1)
x1 = ReLU()(x1)
x1 = Conv2D(kernel_size = 24, strides = 1, filters = 64, padding = "same")(x1)
x2 = Conv2D(kernel_size = 24, strides = 1, filters = 64, padding = "same")(inputs2)
x2 = BatchNormalization()(x2)
x2 = ReLU()(x2)
x2 = Conv2D(kernel_size = 24, strides = 1, filters = 64, padding = "same")(x2)
add = Concatenate()([x1, x2])
out = Flatten()(add)
out = Dense(24, activation = 'softmax')(out)
out = Dense(10, activation = 'softmax')(out)
out = Flatten()(out)
mod = Model([inputs1, inputs2], out)
I can substitute out the Add() layer with the Concatenate() layer and everything works fine, and the models seem similar, but I have a hard time understanding the difference.
For reference, here's the plot of each one with Keras's plot_model function:
[plot: Keras model with Add layers]
[plot: Keras model with Concatenate layers]
I notice that when you concatenate, your model size is bigger than when you add layers. Is it the case that with a concatenation you just stack the weights from the previous layers on top of each other, and with Add() you add the values together?
It seems like it should be more complicated, but I'm not sure.
As you said, both of them combine inputs, but they combine them in different ways.
Their names already suggest their usage.
Add(): the inputs are added together element-wise.
For example (assume batch_size=1)
x1 = [[0, 1, 2]]
x2 = [[3, 4, 5]]
x = Add()([x1, x2])
then x should be [[3, 5, 7]], where the elements are added element-wise.
Notice that the input shapes are (1, 3) and (1, 3), and the output shape is also (1, 3).
Concatenate(): the inputs are concatenated along an axis (the last axis by default).
For example (assume batch_size=1)
x1 = [[0, 1, 2]]
x2 = [[3, 4, 5]]
x = Concatenate()([x1, x2])
then x should be [[0, 1, 2, 3, 4, 5]], where the inputs are stacked side by side.
Notice that the input shapes are (1, 3) and (1, 3), while the output shape is (1, 6).
Even when the tensors have more dimensions, similar behavior applies.
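Here is a runnable version of these two toy examples (assuming TF 2.x with eager execution):

import tensorflow as tf

x1 = tf.constant([[0., 1., 2.]])   # shape (1, 3)
x2 = tf.constant([[3., 4., 5.]])   # shape (1, 3)

added = tf.keras.layers.Add()([x1, x2])                  # [[3., 5., 7.]], shape (1, 3)
concatenated = tf.keras.layers.Concatenate()([x1, x2])   # [[0., 1., 2., 3., 4., 5.]], shape (1, 6)

print(added.numpy(), added.shape)
print(concatenated.numpy(), concatenated.shape)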
Concatenate creates a bigger model for an obvious reason: the output size is simply the sizes of all the inputs summed along the concatenation axis, whereas Add's output has the same size as each of its inputs.
For more information about add/concatenate, and other ways to combine multiple inputs, see this
I am trying to implement and train YOLO on my own, based on this implementation: https://github.com/allanzelener/YAD2K/. The problem I am having is that the width/height values in my prediction tensor are exploding, and I never see an IOU above 0 between my predicted objects and the ground truth objects. This all goes wrong within the first few minibatches of the first epoch. The loss and most of my predicted widths/heights are nan.
My image size is 416x416, I'm using 5 anchors, and have 5 classes. I'm dividing the image into a 13x13 grid for a prediction tensor of [batch_size, 13, 13, 5, 10]. The ground truths for each batch are [batch_size, 13, 13, 5, 5], without one hot for the class probabilities.
Below is my loss function (based on https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py#L152), which passes the image to my model and then calls predict_transform, which reshapes the tensor and transforms the coordinates.
def loss_custom(true_box_grid, x):
    # training=training is needed only if there are layers with different
    # behavior during training versus inference (e.g. Dropout).
    y_ = model(x, training=training)
    # (batch, rows, cols, anchors, vals)
    center_coords, wh_coords, obj_scores, class_probs = DetectNet.predict_transform(y_)
    detector_mask = create_mask(true_box_grid)
    total_loss = 0

    pred_wh_half = wh_coords / 2.
    # bottom left corner
    pred_mins = center_coords - pred_wh_half
    # top right corner
    pred_maxes = center_coords + pred_wh_half

    true_xy = true_box_grid[..., 0:2]
    true_wh = true_box_grid[..., 2:4]
    true_wh_half = true_wh / 2.
    true_mins = true_xy - true_wh_half
    true_maxes = true_xy + true_wh_half

    # max bottom left corner
    intersect_mins = tf.math.maximum(pred_mins, true_mins)
    # min top right corner
    intersect_maxes = tf.math.minimum(pred_maxes, true_maxes)
    intersect_wh = tf.math.maximum(intersect_maxes - intersect_mins, 0.)
    # product of difference between x max and x min, y max and y min
    intersect_areas = intersect_wh[..., 0] * intersect_wh[..., 1]

    pred_areas = wh_coords[..., 0] * wh_coords[..., 1]
    true_areas = true_wh[..., 0] * true_wh[..., 1]
    union_areas = pred_areas + true_areas - intersect_areas
    iou_scores = intersect_areas / union_areas

    # Best IOUs for each location.
    iou_scores = tf.expand_dims(iou_scores, 4)
    best_ious = tf.keras.backend.max(iou_scores, axis=4)  # Best IOU scores.
    best_ious = tf.expand_dims(best_ious, 4)

    # A detector has found an object if IOU > thresh for some true box.
    object_detections = tf.keras.backend.cast(best_ious > 0.6, dtype=tf.float32)

    no_obj_weights = params.noobj_loss_weight * (1 - object_detections) * (1 - detector_mask[..., :1])
    no_obj_loss = no_obj_weights * tf.math.square(obj_scores)
    # could use weight here on obj loss
    obj_conf_loss = params.obj_loss_weight * detector_mask[..., :1] * tf.math.square(1 - obj_scores)
    conf_loss = no_obj_loss + obj_conf_loss

    matching_classes = tf.cast(true_box_grid[..., 4], tf.int32)
    matching_classes = tf.one_hot(matching_classes, params.num_classes)
    class_loss = detector_mask[..., :1] * tf.math.square(matching_classes - class_probs)

    # keras_yolo does a sigmoid on center_coords here but they should already be
    # between 0 and 1 from predict_transform
    pred_boxes = tf.concat([center_coords, wh_coords], axis=-1)
    matching_boxes = true_box_grid[..., :4]
    coord_loss = params.coord_loss_weight * detector_mask[..., :1] * tf.math.square(matching_boxes - pred_boxes)

    confidence_loss_sum = tf.keras.backend.sum(conf_loss)
    classification_loss_sum = tf.keras.backend.sum(class_loss)
    coordinates_loss_sum = tf.keras.backend.sum(coord_loss)

    # not sure why .5 is here, maybe to make sure numbers don't get too large
    total_loss = 0.5 * (confidence_loss_sum + classification_loss_sum + coordinates_loss_sum)

    return total_loss
Below is predict_transform (based on https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py#L66), which reshapes the prediction tensor into a grid in order to compare it with the ground truth objects. For the center coordinates, object scores, and class probabilities it applies a sigmoid or softmax.
For the width/height coordinates it applies the exponential (to make them positive) and multiplies them by the anchors. This seems to be where they start exploding.
def predict_transform(predictions):
    predictions = tf.reshape(predictions, [-1, params.grid_height, params.grid_width, params.num_anchors, params.pred_vec_len])

    conv_dims = predictions.shape[1:3]
    conv_height_index = tf.keras.backend.arange(0, stop=conv_dims[0])
    conv_width_index = tf.keras.backend.arange(0, stop=conv_dims[1])
    conv_height_index = tf.tile(conv_height_index, [conv_dims[1]])  # (169,) tensor with 0-12 repeating
    conv_width_index = tf.tile(tf.expand_dims(conv_width_index, 0), [conv_dims[0], 1])  # (13, 13) tensor with x offset in each row
    conv_width_index = tf.keras.backend.flatten(tf.transpose(conv_width_index))  # (169,) tensor with 13 0's followed by 13 1's, etc (y offsets)
    conv_index = tf.transpose(tf.stack([conv_height_index, conv_width_index]))  # (169, 2)
    conv_index = tf.reshape(conv_index, [1, conv_dims[0], conv_dims[1], 1, 2])  # y offset, x offset
    conv_dims = tf.cast(tf.reshape(conv_dims, [1, 1, 1, 1, 2]), tf.float32)  # grid_height x grid_width, max dims of anchors

    # makes the center coordinate between 0 and 1, each grid cell is normalized to 1 x 1
    center_coords = tf.math.sigmoid(predictions[..., :2])
    conv_index = tf.cast(conv_index, tf.float32)
    center_coords = (center_coords + conv_index) / conv_dims

    # makes the objectness score a probability between 0 and 1
    obj_scores = tf.math.sigmoid(predictions[..., 4:5])

    anchors = DetectNet.get_anchors()
    anchors = tf.reshape(anchors, [1, 1, 1, params.num_anchors, 2])
    # exp to make width and height positive then multiply by anchor dims to resize box to anchor
    # should fit close to anchor, normalizing by conv_dims should make it between 0 and approx 1
    wh_coords = (tf.math.exp(predictions[..., 2:4]) * anchors) / conv_dims

    # apply softmax to class scores to make them probabilities
    class_probs = tf.keras.activations.softmax(predictions[..., 5:5 + params.num_classes])

    # (batch, rows, cols, anchors, vals)
    return center_coords, wh_coords, obj_scores, class_probs
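To show what I mean by exploding, here is a toy check of just the width decoding in isolation (the anchor and grid values here are made up, not my real ones); even moderately large raw outputs blow up quickly through the exp:

import numpy as np

grid_size = 13.0
anchor_w = 3.0   # made-up anchor width in grid units

for t_w in [0.0, 1.0, 3.0, 6.0, 10.0]:   # raw network output for width
    decoded_w = np.exp(t_w) * anchor_w / grid_size
    print(f"t_w = {t_w:>4} -> decoded width ~ {decoded_w:.2f}")
# t_w = 10 already decodes to a width of roughly 5000 in grid-normalized units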
I have another doubt about creating the ground truth data, based on https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py#L352. In the code below, box[0] and box[1] are the center coordinates, i and j are the grid cell coordinates (between 0 and 13), and box[2] and box[3] are the width and height.
They have all been normalized to the grid coordinates (0 to 13). It places the object in the ground truth grid with its corresponding best anchor. box[0] - j and box[1] - i ensure the center coordinates are between 0 and 1.
However, I don't understand np.log(box[2] / anchors[best_anchor][0]), since the anchors are also on the grid coordinate scale and the quotient may be less than 1, which produces a negative number after the log. I often see negative widths and heights in my ground truth data as I am training and don't know what to make of that.
if best_iou > 0:
    adjusted_box = np.array(
        [
            box[0] - j,  # center should be between 0 and 1, like prediction will be
            box[1] - i,
            np.log(box[2] / anchors[best_anchor][0]),  # quotient might be less than one, not sure why log is used
            np.log(box[3] / anchors[best_anchor][1]),
            box[4]  # class label
        ],
        dtype=np.float32
    )
    true_box_grid[i, j, best_anchor] = adjusted_box
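My current understanding (which may be wrong) is that the log here is just the inverse of the exp in predict_transform, so a negative ground-truth width/height only means the true box is smaller than its anchor. A quick round-trip check with made-up numbers:

import numpy as np

anchor_w = 3.0   # anchor width in grid units (made up)
true_w = 1.5     # ground-truth width in grid units, smaller than the anchor

t_w = np.log(true_w / anchor_w)        # about -0.69, negative because the box is smaller than the anchor
recovered_w = np.exp(t_w) * anchor_w   # back to 1.5
print(t_w, recovered_w)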
Also, here is my model, which is very watered down because of my lack of computational resources.
def create_model():
    model = models.Sequential()
    model.add(Conv2D(6, 3, padding='same', data_format='channels_last', kernel_regularizer=l2(5e-4)))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.1))
    model.add(MaxPool2D())
    model.add(Conv2D(8, 3, padding='same', data_format='channels_last', kernel_regularizer=l2(5e-4)))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.1))
    model.add(MaxPool2D())
    model.add(Conv2D(12, 3, padding='same', data_format='channels_last', kernel_regularizer=l2(5e-4)))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.1))
    model.add(Conv2D(8, 1, padding='same', data_format='channels_last', kernel_regularizer=l2(5e-4)))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.1))
    model.add(Conv2D(12, 3, padding='same', data_format='channels_last', kernel_regularizer=l2(5e-4)))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.1))
    model.add(MaxPool2D())
    model.add(Flatten())
    model.add(Dense(params.grid_height * params.grid_width * params.pred_vec_len * params.num_anchors, activation='relu'))
    return model
I'm wondering what I can do to prevent the predicted widths and heights, and thus the loss, from exploding. The exponential is there to ensure they are positive, which makes sense. I could also apply a sigmoid to them, but I don't want to restrict them to be between 0 and 1. In the YOLO paper, they mention that they pretrain their network so that the layer weights are already initialized when the YOLO training begins. Is this a problem of initializing the network properly?
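One thing I have been considering (I am not sure whether it is a legitimate fix) is clamping the raw width/height outputs before the exp so that early-training outliers cannot overflow. A standalone sketch of the idea, with arbitrary clip bounds:

import tensorflow as tf

raw_wh = tf.constant([[12.0, -7.0, 0.5, 3.0]])   # made-up raw width/height outputs
clipped = tf.clip_by_value(raw_wh, -5.0, 5.0)     # keep exp() in a sane range
print(tf.math.exp(clipped).numpy())               # largest value is exp(5) ~ 148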
How do I get the Kernel values from tf.keras.layers.Conv2D?
Here is my code:
# input image is 5 x 5 and 1 channel
input_shape = (1, 1, 5, 5)
x = tf.random.normal(input_shape)
y = tf.keras.layers.Conv2D(
    2, 2, activation=tf.nn.relu, input_shape=input_shape,
    data_format='channels_first')(x)
I am using tf version 2.2.
I have tried y.get_weights(), but this didn't work; I got:
AttributeError: 'tensorflow.python.framework.ops.EagerTensor'
object has no attribute 'get_weights'
You need to actually store the layer in a variable. In your code, y is the result of the convolution. For example:
input_shape = (1, 1, 5, 5)
x = tf.random.normal(input_shape)
conv_layer = tf.keras.layers.Conv2D(
    2, 2, activation=tf.nn.relu, input_shape=input_shape,
    data_format='channels_first')
y = conv_layer(x)
Now you should be able to use conv_layer.get_weights().
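As a continuation of the snippet above (assuming the layer has been called once so its weights are built), something like this should work:

kernel, bias = conv_layer.get_weights()
print(kernel.shape)   # (2, 2, 1, 2): (kernel_h, kernel_w, in_channels, filters)
print(bias.shape)     # (2,)

# The same values are also available as layer attributes:
print(conv_layer.kernel.shape, conv_layer.bias.shape)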
I am relatively new to TF and am wondering how to slice a tensor dynamically when its shape is not fully known.
I want to take the output of the last layer (output_layer), apply a softmax, and then only look at certain indices along the second dimension (of out_reshape). NumPy-style slicing didn't work, so I am using tf.gather instead (after transposing so that the desired axis is the first axis).
And this works:
out_reshape = tf.gather(out_reshape, [1,2,3,4])
This outputs a tensor with shape [4, 3, ?] (as expected). But I want to choose the indices based on the data fed to T (instead of the fixed [1, 2, 3, 4] shown above).
This gives an unexpected result (as shown in the code below):
out_reshape = tf.gather(out_reshape, T)
and
out_reshape.shape
this gives TensorShape(None), but I was expecting to get [?, 3, ?], where the first dimension has the same length as T (the data fed into T is a 1-D ndarray, such as [100, 200, 300, 400]).
What is going on here? Why does its output shape collapse to None?
The entire code is something like this:
graph = tf.Graph()
tf.reset_default_graph()

with graph.as_default():
    y = tf.placeholder(tf.float32, shape=(31, 3, None), name='Y_observed')  # (samples x) ind x seqlen
    T = tf.placeholder(tf.int32, shape=(None), name="T_observed")
    x = tf.placeholder(tf.float32, shape=(None, None, 4), name='X_observed')

    model = Conv1D(filters=16,
                   kernel_size=filter_width,
                   padding="same",
                   activation='relu')(x)
    model = Conv1D(filters=16,
                   kernel_size=filter_width,
                   padding="same",
                   activation='relu')(model)
    model = Conv1D(filters=n_output_channels,
                   kernel_size=1,
                   padding="same",
                   activation='relu')(model)
    model_output = tf.identity(model, name='last_layer')

    output_layer = tf.get_default_graph().get_tensor_by_name('last_layer:0')
    out = output_layer[:, 512:, :]
    out_norm = tf.nn.softmax(out, axis=1)
    out_reshape = tf.transpose(out_norm, (1, 2, 0))  # this gives a [?, 3, ?] tensor
    out_reshape = tf.gather(out_reshape, T)  # --> Problematic part!

    ...
    updates = tf.train.AdamOptimizer(1e-4).minimize....
    ...
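As a side note, I tried to narrow this down with a minimal check of how the static shape of the indices placeholder affects the shape tf.gather infers (this is my own TF1-style sketch, run in graph mode); it looks like shape=(None) is treated as a completely unknown shape, whereas shape=(None,) declares a rank-1 tensor of unknown length:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

params = tf.placeholder(tf.float32, shape=(None, 3, None))

idx_unknown_rank = tf.placeholder(tf.int32, shape=(None))   # rank unknown
idx_rank1 = tf.placeholder(tf.int32, shape=(None,))         # rank 1, unknown length

print(tf.gather(params, idx_unknown_rank).shape)   # unknown, i.e. TensorShape(None)
print(tf.gather(params, idx_rank1).shape)          # (?, 3, ?)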