how does depth_multiplier in tensorflow DepthwiseConv2D work? - tensorflow

Currently I am studying Computer Vision, in particular depthwise convolution (not depthwise separable convolution!).
What I want to know is how the "depth_multiplier" argument works.
First of all,
when I don't use "depth_multiplier" and only use
DepthwiseConv2D(kernel_size=(3,3)),
the shape of the kernel is (3x3x3)
and the output shape becomes (32x32x3).
Here is the question:
when I use "depth_multiplier",
DepthwiseConv2D(kernel_size=(3,3), depth_multiplier=4),
does the shape of the kernel become (3x3x3)x4, i.e. 4 kernels of shape (3x3x3)?
And does the output shape become (32x32x3)x4, or does it become (32x32x12)?
This is important for me because
after this layer I am going to use a normal convolution with 128 kernels and kernel_size=(3,3), and I want to know what the shape of that kernel will be. Will it be (3x3x3)x128, or some other shape?
The training code works, but I want to know how it actually works.
I also want to know how depth_multiplier=# really works.
Thanks in advance.

Yes.
With depth_multiplier=1:
import tensorflow as tf

x = tf.keras.Input(shape=(32, 32, 3))
y = tf.keras.layers.DepthwiseConv2D(kernel_size=(3, 3), padding='SAME')(x)
tf.keras.Model(inputs=x, outputs=y).layers[-1].weights[0].shape
# outputs: TensorShape([3, 3, 3, 1])
With depth_multiplier=4:
y = tf.keras.layers.DepthwiseConv2D(kernel_size=(3, 3), depth_multiplier=4, padding='SAME')(x)
tf.keras.Model(inputs=x, outputs=y).layers[-1].weights[0].shape
# outputs: TensorShape([3, 3, 3, 4])
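So the depthwise kernel becomes (3, 3, 3, 4): 4 separate 3x3 filters per input channel. The outputs are stacked along the channel axis, so the feature map becomes (32, 32, 12), not four separate (32, 32, 3) maps, and a following normal Conv2D with 128 filters and kernel_size=(3,3) will therefore get kernels of shape (3, 3, 12, 128). A minimal sketch to verify (layer and variable names here are just for illustration):

import tensorflow as tf

x = tf.keras.Input(shape=(32, 32, 3))
dw = tf.keras.layers.DepthwiseConv2D(kernel_size=(3, 3), depth_multiplier=4, padding='SAME')(x)
conv = tf.keras.layers.Conv2D(128, (3, 3), padding='SAME')(dw)
model = tf.keras.Model(inputs=x, outputs=conv)

print(model.layers[1].output_shape)      # (None, 32, 32, 12) -> 3 channels * depth_multiplier 4
print(model.layers[2].weights[0].shape)  # (3, 3, 12, 128) -> the next kernel sees 12 input channels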

Related

How to understand shape of weights returned by layer.get_weights() function?

I was manipulating the weights of a CNN model (LeNet-5) for an academic project when I ran into this problem: it was very difficult to understand what the Keras function layer.get_weights() returns. My model is:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, AveragePooling2D, Flatten, Dense

# Create Model LENET-5 Traditional
model = Sequential([
    Conv2D(filters=6, kernel_size=(5, 5), activation='tanh', input_shape=(28, 28, 1), padding="same"),
    AveragePooling2D(),  # pool_size=(2, 2)
    Conv2D(filters=16, kernel_size=(5, 5), activation='tanh', input_shape=(10, 10, 1)),
    AveragePooling2D(),
    Flatten(),
    Dense(units=120, activation='tanh'),
    Dense(units=84, activation='tanh'),
    # Softmax gives probabilities of each output class
    Dense(units=10, activation='softmax')
])
That gives us the usual model summary.
Now, after compiling and training the model, get_weights() for the first Conv2D layer returns a NumPy array of shape (5, 5, 1, 6).
We know that in the first Conv2D layer of LeNet-5 there are 6 kernels with 1 channel (grayscale), each containing 5x5 = 25 parameters, right? But the array is nested: it is 1 array that contains 5 arrays, each of which holds 5 more arrays containing 1 array of 6 elements. How do I map these to the 6 kernels? Which 25 values belong to the 1st kernel? It is hard to interpret, even though the 6 kernels of 5x5 weights must all be in there. Can anyone please explain the mapping of these weights to kernels/filters?
To access a single weight I need weight[0][0][0][0][0], which returns the first element of the first array, but I don't know which kernel it belongs to. And suppose I want to access the weight of the 3rd kernel at the 3rd row and 3rd column, how can I do that? There must be an interpretation for this.
I would appreciate it if someone could explain the shape of the weights returned by get_weights() and how to map them to the kernels of the Conv2D layer. A visualization would help a lot too.
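For reference, Keras stores a Conv2D kernel as (kernel_height, kernel_width, input_channels, filters), so the individual kernels are the slices along the last axis. A small sketch of the mapping, assuming model is the LeNet-5 model above:

import numpy as np

w, b = model.layers[0].get_weights()   # w.shape == (5, 5, 1, 6), b.shape == (6,)

# The k-th 5x5 kernel (single input channel) is a slice along the last axis:
kernel_2 = w[:, :, 0, 2]               # all 25 weights of the 3rd kernel, shape (5, 5)

# The weight at the 3rd row and 3rd column of the 3rd kernel (0-indexed):
single_weight = w[2, 2, 0, 2]

# Reordering to (filters, rows, cols) makes the 6 kernels easier to inspect or plot:
kernels = np.transpose(w[:, :, 0, :], (2, 0, 1))   # shape (6, 5, 5)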

How to multiply tensors with different shapes/dimensions?

I have a convolutional autoencoder model. While an autoencoder typically focuses on reconstructing the input without using any label information, I want to use the class label to perform class conditional scaling/shifting after convolutions. I am curious if utilizing the label in this way might help produce better reconstructions.
num_filters = 32
input_img = layers.Input(shape=(28, 28, 1)) # input image
label = layers.Input(shape=(10,)) # label
# separate scale value for each of the filter dimensions
scale = layers.Dense(num_filters, activation=None)(label)
# conv_0 produces something of shape (None,14,14,32)
conv_0 = layers.Conv2D(num_filters, (3, 3), strides=2, activation=None, padding='same')(input_img)
# TODO: Need help here. Multiply conv_0 by scale along each of the filter dimensions.
# This still outputs something of shape (None,14,14,32)
# Essentially each 14x14x1 slice gets its own scalar multiplier
In the example above, the output of the convolutional layer is (14,14,32) and the scale layer is of shape (32,). I want the convolutional output to be multiplied by the corresponding scale value along each filter dimension. For example, if these were numpy arrays I could do something like conv_0[:, :, i] * scale[i] for i in range(32).
I looked at tf.keras.layers.Multiply, but based on the documentation I believe it takes input tensors of the same shape. How do I work around this?
You don't have to loop. Simply make the two tensors broadcast-compatible:
out = layers.Multiply()([conv_0, tf.expand_dims(tf.expand_dims(scale,axis=1), axis=1)])
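A minimal sketch of how that line fits into the model above; using a Reshape layer to bring scale to (None, 1, 1, 32) is an equivalent alternative to the two expand_dims calls (names as in the question):

import tensorflow as tf
from tensorflow.keras import layers

num_filters = 32
input_img = layers.Input(shape=(28, 28, 1))
label = layers.Input(shape=(10,))

scale = layers.Dense(num_filters, activation=None)(label)           # (None, 32)
conv_0 = layers.Conv2D(num_filters, (3, 3), strides=2,
                       activation=None, padding='same')(input_img)  # (None, 14, 14, 32)

# Broadcast the per-filter scale across the two spatial dimensions
scale_b = layers.Reshape((1, 1, num_filters))(scale)                # (None, 1, 1, 32)
scaled = layers.Multiply()([conv_0, scale_b])                       # (None, 14, 14, 32)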
I don't know if I actually understood what you are trying to achieve, but I did a quick numpy test. I believe it should hold in TensorFlow too:
import numpy as np

conv_0 = np.ones([14, 14, 32])
scale = np.array([i + 1 for i in range(32)])
result = conv_0 * scale   # broadcasts scale across the trailing (channel) axis
Check whether the channel-wise slices were actually scaled element-wise, in this case by the element found at index 1 in scale, which is 2:
conv_0_slice_1 = conv_0[:, :, 1]
result_slice_1 = result[:, :, 1]
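A quick assertion confirming the broadcast did what was intended (my addition, purely for verification):

assert np.allclose(result_slice_1, 2 * conv_0_slice_1)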

Understanding basic Keras Conv2DTranspose example

This is definitely a basic question, but I'm having trouble understanding exactly what is going on with Keras's layers.Conv2DTranspose function. I have the following three lines:
Setup
model = tf.keras.Sequential()
...
model.add(layers.Reshape((10, 10, 256)))
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 10, 10, 128)
The first occurrence of Reshape gets me a tensor of shape [10x10x256].
In the Conv2DTranspose layer, somehow I'm sliding a filter of shape [5x5] along this tensor and ending up with a new tensor of shape [10x10x128].
Question
What mathematically is happening to get me from the first tensor [10x10x256] to the second [10x10x128]?
It's almost the same as a convolution, but with fancy padding to get the effect of a backward convolution.
The sliding window in your picture is correctly positioned.
But it's not really a "window"; it is actually a "sliding block". The block is 256 deep.
So it multiplies and sums across all the channels for each stride.
But there are 128 different sliding blocks (as you defined in your layer with filters=128). Each of these 128 sliding blocks produces a separate output channel.
Great explanations about transposed convolutions: https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers
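You can see the "128 blocks of depth 256" directly in the layer's weights; a small check (Conv2DTranspose stores its kernel as (height, width, output_channels, input_channels)):

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same',
                           use_bias=False, input_shape=(10, 10, 256)),
])

print(model.layers[0].weights[0].shape)  # (5, 5, 128, 256): 128 blocks of size 5x5x256
print(model.output_shape)                # (None, 10, 10, 128)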

CNN features(dimensions) feed to LSTM Tensorflow

So recently I have been working on a project in which I am supposed to take images as input to a CNN, extract the features, and feed them to an LSTM for training. I am using a 2-layer CNN for feature extraction, taking the features from the fully connected layer and trying to feed them to the LSTM. The problem is that when I try to feed the FC layer to the LSTM as input, I get an error about wrong dimensions. My FC layer is a tensor with shape (128, 1024). I tried to reshape it with tf.reshape(fc, [-1]), which gives me a tensor of shape (131072,), and it still won't work. Could anyone give me any ideas on how I am supposed to feed the FC layer to the LSTM? Here is part of my code and the error I get.
# Convolution Layer with 32 filters and a kernel size of 5
conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
# Convolution Layer with 64 filters and a kernel size of 3
conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Flatten the data to a 1-D vector for the fully connected layer
fc1 = tf.contrib.layers.flatten(conv2)
# Fully connected layer (in contrib folder for now)
fc1 = tf.layers.dense(fc1, 1024)
# Apply Dropout (if is_training is False, dropout is not applied)
fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
s = tf.reshape(fc1, [1])
rnn_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, states = rnn.static_rnn(rnn_cell, s, dtype=tf.float32)
return tf.matmul(outputs[-1], rnn_weights['out']) + rnn_biases['out']
here is the error:
ValueError: Cannot reshape a tensor with 131072 elements to shape [1] (1 elements) for 'ConvNet/Reshape' (op: 'Reshape') with input shapes: [128,1024], [1] and with input tensors computed as partial shapes: input[1] = [1].
You have a logical error in how you approach the problem. Collapsing the data to a 1D tensor is not going to solve anything (even if you get it to work correctly).
If you are taking a sequence of images as input, your input tensor should be 5D, (batch, sequence_index, x, y, channel) or some permutation of that. conv2d should complain about the extra dimension, but you are probably missing one of them. You should fix that first.
Next, use conv3d and max_pool3d with a window of 1 for the depth (since you don't want the different frames to interact at this stage).
When you are done you should still have a 5D tensor, but the x and y dimensions should be 1 (you should check this, and fix the operations if that's not the case).
The RNN part expects 3D tensors, (batch, sequence_index, feature_index). You can use tf.squeeze to remove the size-1 dimensions from your 5D tensor and get this 3D tensor. You shouldn't have to reshape anything.
If you don't use batches, that's OK, but the operations will still expect the dimension to be there (for you it will just be 1). Missing dimensions will cause shape problems down the line.
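A minimal shape sketch of that recipe in the same TF 1.x style as the question (layer sizes are placeholders; I use a global average over the spatial dims instead of checking that they are exactly 1, in which case tf.squeeze would do the same job):

import tensorflow as tf

seq_len, n_hidden = 10, 64
x = tf.placeholder(tf.float32, [None, seq_len, 28, 28, 1])        # 5D: (batch, time, h, w, c)

# conv3d / max_pool3d with a window of 1 along the time axis so frames don't mix
conv = tf.layers.conv3d(x, filters=32, kernel_size=(1, 5, 5), activation=tf.nn.relu)
pool = tf.layers.max_pooling3d(conv, pool_size=(1, 2, 2), strides=(1, 2, 2))

# Collapse the remaining spatial dims into one feature vector per frame:
# (batch, time, h', w', c') -> (batch, time, features)
feat = tf.reduce_mean(pool, axis=[2, 3])

cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, state = tf.nn.dynamic_rnn(cell, feat, dtype=tf.float32)  # dynamic_rnn takes the 3D tensor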

autocorrelation of the input in tensorflow/keras

I have a 1D input signal. I want to compute its autocorrelation as part of the neural net, for further use inside the network.
I need to perform convolution of input with input itself.
To perform a convolution in a Keras custom layer / TensorFlow, we need the following parameters:
data shape is "[batch, in_height, in_width, in_channels]",
filter shape is "[filter_height, filter_width, in_channels, out_channels]".
There is no batch dimension in the filter shape, but in my case the filter needs to be the input itself.
TensorFlow now has an auto_correlation function. It should be in release 1.6. If you build from source you can use it right now (see e.g. the github code).
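I believe this is the function that lived in tf.contrib.distributions at the time and later moved to TensorFlow Probability as tfp.stats.auto_correlation. A rough usage sketch, assuming that location and signature:

import tensorflow as tf

# 1D signals of shape (batch, length); autocorrelation is taken along the last axis
signal = tf.random_normal([8, 128])
acf = tf.contrib.distributions.auto_correlation(signal, axis=-1)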
Here is a possible solution.
By self convolution, I understood a regular convolution where the filter is exactly the same as the input (if that's not what you meant, sorry for my misunderstanding).
We need a custom function for that, and a Lambda layer.
At first I used padding = 'same', which gives outputs with the same length as the inputs. I'm not sure exactly what output length you want, but if you want more, you should add the padding yourself before doing the convolution. (In the example with length 7, for a complete convolution from one end to the other, this manual padding would add 6 zeros before and 6 zeros after the input, and use padding = 'valid'. The backend functions can be found here.)
Working example - Input (5,7,2)
from keras.models import Model
from keras.layers import *
import keras.backend as K
import numpy as np

batch_size = 5
length = 7
channels = 2
channels_batch = batch_size * channels

def selfConv1D(x):
    #this function unfortunately needs to know the shapes beforehand,
    #mainly because of the for loop; for the other lines there are workarounds,
    #but those workarounds aren't necessary since we have this limitation anyway

    #original x: (batch_size, length, channels)
    #bring channels to the batch position:
    x = K.permute_dimensions(x, [2, 0, 1])  # (channels, batch_size, length)

    #treat channels as if they were individual samples (since we don't mix channels)
    x = K.reshape(x, (channels_batch, length, 1))

    #here we get a copy of x reshaped to match the filter shape:
    filters = K.permute_dimensions(x, [1, 2, 0])  # (length, 1, channels_batch)

    #now, lacking a suitable available conv function, we make a loop
    allChannels = []
    for i in range(channels_batch):
        f = filters[:, :, i:i+1]
        allChannels.append(
            K.conv1d(
                x[i:i+1],
                f,
                padding='same',
                data_format='channels_last'))
        #although channels_last is my default config, I found this bug:
        #https://github.com/fchollet/keras/issues/8183
        #convolution output: (1, length, 1)

    #concatenate all results as samples
    x = K.concatenate(allChannels, axis=0)  # (channels_batch, length, 1)

    #restore the original form (passing channels to the end)
    x = K.reshape(x, (channels, batch_size, length))
    return K.permute_dimensions(x, [1, 2, 0])  # (batch_size, length, channels)

#input data for the test:
x = np.array(range(70)).reshape((5, 7, 2))

#little model that just performs the convolution
inp = Input((7, 2))
out = Lambda(selfConv1D)(inp)
model = Model(inp, out)

#checking results
p = model.predict(x)
for i in range(5):
    print("x", x[i])
    print("p", p[i])
You can just use tf.nn.conv3d by treating the "batch size" as "depth":
# treat the batch size as depth
data = tf.reshape(input_data, [1, batch, in_height, in_width, in_channels])
# the kernel must be an actual filter tensor of this shape (in your case built from the input),
# not just a list of dimension sizes
kernel = tf.reshape(filter_data, [filter_depth, filter_height, filter_width, in_channels, out_channels])
out = tf.nn.conv3d(data, kernel, strides=[1, 1, 1, 1, 1], padding='SAME')
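A small self-contained version of that trick with concrete shapes, just to show the call (random values here; for autocorrelation the kernel would be built from the input itself):

import tensorflow as tf

batch, in_height, in_width, in_channels, out_channels = 4, 8, 8, 1, 1
filter_depth, filter_height, filter_width = batch, 3, 3

input_data = tf.random_normal([batch, in_height, in_width, in_channels])
filter_data = tf.random_normal([filter_depth, filter_height, filter_width,
                                in_channels, out_channels])

# fold the batch into the depth dimension and run a single 3D convolution
data = tf.reshape(input_data, [1, batch, in_height, in_width, in_channels])
out = tf.nn.conv3d(data, filter_data, strides=[1, 1, 1, 1, 1], padding='SAME')
print(out.shape)  # (1, 4, 8, 8, 1)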