Question about making a custom layer in tensorflow [duplicate] - tensorflow

I am trying to make a customized layer which reduces channels with reduce_sum(axis=-1).
For example, say the input shape is (32,32,128),
i.e. (None,32,32,128) once the batch dimension is included.
If the channels are indexed like [0],[1],[2],[3]………[126],[127],
what my customized layer should do is add groups of 2, 3, 4 … N consecutive channels.
Let's say I want to add groups of 2 channels,
which would be [0]+[1], [2]+[3], …, [126]+[127],
and the output shape would be (32,32,64),
also (None,32,32,64).
For more detail, let's say I want to add groups of 3 channels, which would be [0]+[1]+[2], [3]+[4]+[5], … , [123]+[124]+[125], [126]+[127],
and the output shape would be (32,32,43),
here too (None,32,32,43).
So is it possible to make this?
Is there an index over the channels? If so, it would be kind of easy to make, I think…

import tensorflow as tf

x = tf.random.normal(shape=(32, 32, 128))
sum_dim = 2
# Split the channel axis into groups of `sum_dim` channels, then sum each group
x = tf.reduce_sum(tf.reshape(x, (x.shape[0], x.shape[1], -1, sum_dim)), -1)
# x.shape -> [32, 32, 64]
If you want to write the Keras layer:
import tensorflow as tf
from tensorflow import keras

class ChannelSum(keras.layers.Layer):
    def __init__(self, sum_dim=2):
        super(ChannelSum, self).__init__()
        self.sum_dim = sum_dim

    def call(self, x):
        # Group the channel axis into (channels // sum_dim, sum_dim) and sum each group
        return tf.reduce_sum(tf.reshape(x, (-1, x.shape[1], x.shape[2], x.shape[3] // self.sum_dim, self.sum_dim)), -1)
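A quick shape check of the layer above (my own sketch, not part of the answer; the batch size of 4 is arbitrary and just stands in for the None dimension):
layer = ChannelSum(sum_dim=2)
batch = tf.random.normal(shape=(4, 32, 32, 128))  # stands in for a (None, 32, 32, 128) input
out = layer(batch)
print(out.shape)  # (4, 32, 32, 64)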

Related

how to merge 'Conv-BN-Scale' into a single 'Conv' layer for tensorflow?

For faster inference of a model, I want to merge 'Conv-BN-Scale' into a single 'Conv' layer in my tensorflow model, but I cannot find a useful, complete example of how to do it.
Can anyone give some advice or a complete code example?
Thanks!
To merge the two layers, you will need to pass in a Tensor and get back a tensor that has had both layers applied. Suppose your input tensor is X.
def MlConvBnScale(X, kernel, strides, padding='SAME', scale=False,
                  beta_initializer=tf.constant_initializer(0.1),
                  gamma_initializer=tf.constant_initializer(0.1),
                  moving_mean_initializer=tf.constant_initializer(0.1),
                  moving_variance_initializer=tf.constant_initializer(0.1)):
    # Convolution
    convLout = tf.nn.conv2d(X,
                            filter=kernel,
                            strides=strides,
                            padding=padding)
    # Batch normalization (tf.layers.batch_normalization accepts the initializer arguments)
    return tf.layers.batch_normalization(convLout,
                                         scale=scale,
                                         beta_initializer=beta_initializer,
                                         gamma_initializer=gamma_initializer,
                                         moving_mean_initializer=moving_mean_initializer,
                                         moving_variance_initializer=moving_variance_initializer)
That will return a tensor after performing both operations. I have taken default values for the variables, but you can modify them in your function call. If your input is not already a tensor but a numpy array, you can use tf.convert_to_tensor() (https://www.tensorflow.org/api_docs/python/tf/convert_to_tensor), and if you are struggling with the kernel/filter and its application, check out this thread: What does tf.nn.conv2d do in tensorflow?
If you have any queries or run into trouble implementing it, comment down below and we will see.
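A minimal usage sketch of the function above (my own example; the input shape, kernel shape and strides are illustrative assumptions, not values from the question):
import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 32, 32, 3])
kernel = tf.get_variable('conv_kernel', [3, 3, 3, 16])  # hypothetical 3x3 kernel, 3 -> 16 channels
out = MlConvBnScale(X, kernel, strides=[1, 1, 1, 1], padding='SAME')
print(out.get_shape())  # (?, 32, 32, 16)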

Soft attention from scratch for video sequences

I am trying to implement soft attention for video sequence classification. As there are a lot of implementations and examples for NLP, I tried following this schema [1], but for video: basically an LSTM with an attention model in between.
[1] https://blog.heuritech.com/2016/01/20/attention-mechanism/
My code for the attention layer is the following, and I am not sure it is implemented correctly.
def attention_layer(self, input, context):
    # Input is a Tensor: [batch_size, lstm_units]
    # Input (seq_length, batch_size, lstm_units)
    # Context is an LSTMStateTuple: [batch_size, lstm_units]. Hidden_state, output = StateTuple
    hidden_state, _ = context
    weights_y = tf.get_variable("att_weights_Y", [self.lstm_units, self.lstm_units],
                                initializer=tf.contrib.layers.xavier_initializer())
    weights_c = tf.get_variable("att_weights_c", [self.lstm_units, self.lstm_units],
                                initializer=tf.contrib.layers.xavier_initializer())
    z_ = []
    for feat in input:
        # Equation => M = tanh(Wc c + Wy y)
        Wcc = tf.matmul(hidden_state, weights_c)
        Wyy = tf.matmul(feat, weights_y)
        m = tf.add(Wcc, Wyy)
        m = tf.tanh(m, name='M_matrix')
        # Equation => s = softmax(m)
        s = tf.nn.softmax(m, name='softmax_att')
        z = tf.multiply(feat, s)
        z_.append(z)
    out = tf.stack(z_, axis=1)
    out = tf.reduce_sum(out, 1)
    return out, s
So, adding this layer in between my LSTMs (or at the beginning of my 2 LSTMs) makes the training very slow. More specifically, it takes a lot of time when I declare my optimizer:
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
My questions are:
Is the implementation correct? If it is, is there a way to optimize it in order to make it train properly?
I was not able to make it work with the seq2seq APIs. Is there any API in Tensorflow that allows me to tackle this specific issue?
Does it actually make sense to use this for sequence classification?
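On the speed question, one thing that might be worth trying (this is only my sketch of the same equations, not a verified fix, and it assumes input has shape (seq_length, batch_size, lstm_units)) is replacing the Python loop with a single vectorized computation, so the graph does not grow with the sequence length:
def attention_layer_vectorized(self, input, context):
    hidden_state, _ = context
    weights_y = tf.get_variable("att_weights_Y", [self.lstm_units, self.lstm_units],
                                initializer=tf.contrib.layers.xavier_initializer())
    weights_c = tf.get_variable("att_weights_c", [self.lstm_units, self.lstm_units],
                                initializer=tf.contrib.layers.xavier_initializer())
    Wcc = tf.matmul(hidden_state, weights_c)               # (batch, units)
    Wyy = tf.tensordot(input, weights_y, axes=[[2], [0]])  # (seq, batch, units)
    m = tf.tanh(Wyy + tf.expand_dims(Wcc, 0))              # M = tanh(Wc c + Wy y), broadcast over time
    s = tf.nn.softmax(m, name='softmax_att')               # s = softmax(m)
    out = tf.reduce_sum(input * s, axis=0)                 # weighted sum over the sequence
    return out, s                                          # s now holds the weights for every time step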

How to use maxout activation function in tensorflow?

I want to use the maxout activation function in tensorflow, but I don't know which function I should use.
I sent a pull request for maxout; here is the link:
https://github.com/tensorflow/tensorflow/pull/5528
Code is as follows:
def maxout(inputs, num_units, axis=None):
    shape = inputs.get_shape().as_list()
    if axis is None:
        # Assume that channel is the last dimension
        axis = -1
    num_channels = shape[axis]
    if num_channels % num_units:
        raise ValueError('number of features({}) is not a multiple of num_units({})'
                         .format(num_channels, num_units))
    shape[axis] = -1
    shape += [num_channels // num_units]
    outputs = tf.reduce_max(tf.reshape(inputs, shape), -1, keep_dims=False)
    return outputs
Here is how it works: the channel axis is reshaped into num_units groups of num_channels // num_units features each, and the maximum is taken over each group.
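A quick shape check of the function above (my own sketch; the batch size is fixed because the snippet reshapes using the static shape, and the sizes are arbitrary):
x = tf.placeholder(tf.float32, [8, 100])  # hypothetical: batch of 8, 100 features
y = maxout(x, num_units=50)               # max over 50 groups of 2 features each
print(y.get_shape())                      # (8, 50)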
I don't think there is a maxout activation, but there is nothing stopping you from making it yourself. You could do something like the following.
with tf.variable_scope('maxout'):
    layer_input = ...
    layer_output = None
    for i in range(n_maxouts):
        W = tf.get_variable('W_%d' % i, (n_input, n_output))
        b = tf.get_variable('b_%d' % i, (n_output,))
        y = tf.matmul(layer_input, W) + b
        if layer_output is None:
            layer_output = y
        else:
            layer_output = tf.maximum(layer_output, y)
Note that this is code I just wrote in my browser so there may be syntax errors but you should get the general idea. You simply perform a number of linear transforms and take the maximum across all the transforms.
How about this code?
This seems to work in my test.
def max_out(input_tensor, output_size):
    shape = input_tensor.get_shape().as_list()
    if shape[1] % output_size == 0:
        return tf.transpose(tf.reduce_max(tf.split(input_tensor, output_size, 1), axis=2))
    else:
        raise ValueError("Output size or input tensor size is not right. Please check it. The remainder must be zero.")
I referred to the diagram on the following page.
From version 1.4 on, you can use tf.contrib.layers.maxout.
Maxout is a layer that computes an N*M output for an N*1 input and then returns the maximum value across the columns, i.e. the final output has shape N*1 as well. Basically, it uses multiple linear fits to mimic a complex function.
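For reference, a minimal sketch with the built-in op mentioned above (assuming TensorFlow >= 1.4 with contrib available; the shapes are just illustrative):
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 100])
y = tf.contrib.layers.maxout(x, num_units=50)  # max over 50 groups of 2 features each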

How can I implement a Binarizer Layer in TensorFlow?

I'm trying to implement the binarizer on page 4 of this paper. It's not too difficult a function. It's simply this: b(x) is +1 with probability (1 + x)/2 and -1 otherwise.
No gradients are to be backpropagated through this function. I'm trying to do it in TensorFlow. There are two ways to go about it:
Implementing it in C++ using TensorFlow. However, the instructions are quite unclear to me. It would be great if someone could walk me through it. One thing I was unclear about was why the gradient for ZeroOutOp is implemented in Python.
I decided to go with the pure Python approach.
Here's the code:
import tensorflow as tf
import numpy as np
def py_func(func, inp, out_type, grad):
    grad_name = "BinarizerGradients_Schin"
    tf.RegisterGradient(grad_name)(grad)
    g = tf.get_default_graph()
    with g.gradient_override_map({"PyFunc": grad_name}):
        return tf.py_func(func, inp, out_type)

'''
This is a hackish implementation to speed things up. Doesn't directly follow the formula.
'''
def _binarizer(x):
    probability_matrix = (x + 1) / float(2)
    probability_matrix = np.matrix.round(probability_matrix, decimals=0)
    np.putmask(probability_matrix, probability_matrix == 0.0, -1.0)
    return probability_matrix

def binarizer(x):
    return py_func(_binarizer, [x], [tf.float32], _BinarizerNoOp)

def _BinarizerNoOp(op, grad):
    return grad
The problem happens here. Inputs are 32x32x3 CIFAR images and they get reduced to 4x4x64 in the last layer. My last layer has a shape of (?, 4, 4, 64), where ? is the batch size. After putting it through this by calling:
binarized = binarizer.binarizer(h_pool3)
h_deconv1 = tf.nn.conv2d_transpose(h_pool3, W_deconv1, output_shape=[batch_size, img_height/4, img_width/4, 64], strides=[1,2,2,1], padding='SAME') + b_deconv1
The following error occurs:
ValueError: Shapes (4, 4, 64) and (?, 4, 4, 64) are not compatible
I can kinda guess why this happens. The ? represents the batch size and after putting the last layer through the binarizer, the ? dimension seems to disappear.
I think you can proceed as described in this answer. Applied to our problem:
def binarizer(input):
    prob = tf.truediv(tf.add(1.0, input), 2.0)
    bernoulli = tf.contrib.distributions.Bernoulli(p=prob, dtype=tf.float32)
    return 2 * bernoulli.sample() - 1
Then, where you set up your network:
W_h1, bias_h1 = ...
h1_before_bin = tf.nn.tanh(tf.matmul(x, W_h1) + bias_h1)
# The interesting bits:
t = tf.identity(h1_before_bin)
h1 = t + tf.stop_gradient(binarizer(h1_before_bin) - t)
However, I'm not sure how to verify that this works...
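One possible sanity check (my own sketch, not part of the answer; the shapes are hypothetical and binarizer is the Bernoulli version defined above): with the straight-through trick, the gradient of h1 with respect to the weights should equal the gradient of h1_before_bin, even though the forward pass outputs binary values.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 8])  # hypothetical input size
W_h1 = tf.get_variable("W_h1", [8, 4])
bias_h1 = tf.get_variable("bias_h1", [4], initializer=tf.zeros_initializer())

h1_before_bin = tf.nn.tanh(tf.matmul(x, W_h1) + bias_h1)
t = tf.identity(h1_before_bin)
h1 = t + tf.stop_gradient(binarizer(h1_before_bin) - t)

# Gradients through the binarized and the non-binarized activations should match
g_bin = tf.gradients(tf.reduce_sum(h1), W_h1)[0]
g_ref = tf.gradients(tf.reduce_sum(h1_before_bin), W_h1)[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {x: np.random.uniform(-1, 1, size=(2, 8)).astype(np.float32)}
    print(np.allclose(*sess.run([g_bin, g_ref], feed_dict=feed)))  # expect True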

Visualizing output of convolutional layer in tensorflow

I'm trying to visualize the output of a convolutional layer in tensorflow using the function tf.image_summary. I'm already using it successfully in other instances (e.g. visualizing the input image), but I have some difficulties reshaping the output here correctly. I have the following conv layer:
img_size = 256
x_image = tf.reshape(x, [-1,img_size, img_size,1], "sketch_image")
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
So the output of h_conv1 would have the shape [-1, img_size, img_size, 32]. Just using tf.image_summary("first_conv", tf.reshape(h_conv1, [-1, img_size, img_size, 1])) doesn't account for the 32 different kernels, so I'm basically slicing through different feature maps here.
How can I reshape them correctly? Or is there another helper function I could use for including this output in the summary?
I don't know of a helper function, but if you want to see all the filters, you can pack them into one image with some fancy uses of tf.transpose.
So if you have a tensor that's images x ix x iy x channels
>>> V = tf.Variable()
>>> print V.get_shape()
TensorShape([Dimension(-1), Dimension(256), Dimension(256), Dimension(32)])
So in this example ix = 256, iy=256, channels=32
first slice off 1 image, and remove the image dimension
V = tf.slice(V,(0,0,0,0),(1,-1,-1,-1)) #V[0,...]
V = tf.reshape(V,(iy,ix,channels))
Next add a couple of pixels of zero padding around the image
ix += 4
iy += 4
V = tf.image.resize_image_with_crop_or_pad(V, iy, ix)
Then reshape so that instead of 32 channels you have 4x8 channels; let's call them cy=4 and cx=8.
V = tf.reshape(V,(iy,ix,cy,cx))
Now the tricky part. tf seems to return results in C-order, numpy's default.
The current order, if flattened, would list all the channels for the first pixel (iterating over cx and cy), before listing the channels of the second pixel (incrementing ix). Going across the rows of pixels (ix) before incrementing to the next row (iy).
We want the order that would lay out the images in a grid.
So you go across a row of the image (ix) before stepping along the row of channels (cx); when you hit the end of the row of channels, you step to the next row in the image (iy), and when you run out of rows in the image, you increment to the next row of channels (cy). So:
V = tf.transpose(V,(2,0,3,1)) #cy,iy,cx,ix
Personally I prefer np.einsum for fancy transposes, for readability, but it's not in tf yet.
newtensor = np.einsum('yxYX->YyXx',oldtensor)
anyway, now that the pixels are in the right order, we can safely flatten it into a 2d tensor:
# image_summary needs 4d input
V = tf.reshape(V,(1,cy*iy,cx*ix,1))
Try tf.image_summary on that; you should get a grid of little images.
Below is an image of what one gets after following all the steps here.
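For convenience, here is the whole recipe gathered into one helper (my own consolidation of the steps above, not part of the original answer; it assumes the spatial shape is statically known and that the number of channels equals cy*cx):
def filters_to_grid(V, cy, cx, pad=2):
    # V: one image's activations, shape (1, iy, ix, channels) with channels == cy*cx
    _, iy, ix, channels = V.get_shape().as_list()
    V = tf.reshape(V, (iy, ix, channels))
    # add `pad` pixels of zero padding around each feature map
    V = tf.image.resize_image_with_crop_or_pad(V, iy + 2 * pad, ix + 2 * pad)
    iy += 2 * pad
    ix += 2 * pad
    V = tf.reshape(V, (iy, ix, cy, cx))
    V = tf.transpose(V, (2, 0, 3, 1))  # cy, iy, cx, ix
    return tf.reshape(V, (1, cy * iy, cx * ix, 1))  # 4d input, as image_summary expects

grid = filters_to_grid(h_conv1[:1], cy=4, cx=8)
tf.image_summary("first_conv_grid", grid)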
In case someone would like to "jump" to numpy and visualize "there", here is an example of how to display both the weights and the processing result. All transformations are based on the previous answer by mdaoust.
# to visualize 1st conv layer Weights
vv1 = sess.run(W_conv1)
# to visualize 1st conv layer output
vv2 = sess.run(h_conv1,feed_dict = {img_ph:x, keep_prob: 1.0})
vv2 = vv2[0,:,:,:] # in case of bunch out - slice first img
def vis_conv(v, ix, iy, ch, cy, cx, p=0):
    v = np.reshape(v, (iy, ix, ch))
    ix += 2
    iy += 2
    npad = ((1, 1), (1, 1), (0, 0))
    v = np.pad(v, pad_width=npad, mode='constant', constant_values=p)
    v = np.reshape(v, (iy, ix, cy, cx))
    v = np.transpose(v, (2, 0, 3, 1))  # cy, iy, cx, ix
    v = np.reshape(v, (cy * iy, cx * ix))
    return v
# W_conv1 - weights
ix = 5 # data size
iy = 5
ch = 32
cy = 4 # grid from channels: 32 = 4x8
cx = 8
v = vis_conv(vv1,ix,iy,ch,cy,cx)
plt.figure(figsize = (8,8))
plt.imshow(v,cmap="Greys_r",interpolation='nearest')
# h_conv1 - processed image
ix = 30 # data size
iy = 30
v = vis_conv(vv2,ix,iy,ch,cy,cx)
plt.figure(figsize = (8,8))
plt.imshow(v,cmap="Greys_r",interpolation='nearest')
You may try to get the convolution layer activation image this way:
h_conv1_features = tf.unpack(h_conv1, axis=3)
h_conv1_imgs = tf.expand_dims(tf.concat(1, h_conv1_features), -1)
This gets one vertical stripe with all the images concatenated vertically.
If you want them padded (in my case of relu activations, padded with a white line):
h_conv1_features = tf.unpack(h_conv1, axis=3)
h_conv1_max = tf.reduce_max(h_conv1)
h_conv1_features_padded = map(lambda t: tf.pad(t-h_conv1_max, [[0,0],[0,1],[0,0]])+h_conv1_max, h_conv1_features)
h_conv1_imgs = tf.expand_dims(tf.concat(1, h_conv1_features_padded), -1)
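Note that the snippet above uses pre-1.0 API names; in later 1.x releases the equivalent (my untested rewrite) would use tf.unstack and tf.concat with the values-first argument order:
h_conv1_features = tf.unstack(h_conv1, axis=3)
h_conv1_max = tf.reduce_max(h_conv1)
h_conv1_features_padded = [tf.pad(t - h_conv1_max, [[0, 0], [0, 1], [0, 0]]) + h_conv1_max
                           for t in h_conv1_features]
h_conv1_imgs = tf.expand_dims(tf.concat(h_conv1_features_padded, axis=1), -1)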
Personally, I try to tile every 2d filter into a single image.
To do this (if I'm not terribly mistaken, since I'm quite new to DL), I found it helpful to exploit the depth_to_space function, since it takes a 4d tensor
[batch, height, width, depth]
and produces an output of shape
[batch, height*block_size, width*block_size, depth/(block_size*block_size)]
where block_size is the number of "tiles" along each side of the output image. The only limitation is that the depth should be the square of block_size (an integer), otherwise it cannot "fill" the resulting image correctly.
A possible solution could be padding the depth of the input tensor up to a depth that is accepted by the method, but I still haven't tried this.
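A small sketch of the shape transformation this describes (my own example, not from the answer; the shapes are arbitrary):
import tensorflow as tf

act = tf.placeholder(tf.float32, [1, 64, 64, 16])  # hypothetical activation, depth 16 = 4*4
tiled = tf.depth_to_space(act, block_size=4)
print(tiled.get_shape())  # (1, 256, 256, 1): the 16 channels are folded into 4x4 spatial blocks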
Another way, which I think is very easy, is using the get_operation_by_name function. I had a hard time visualizing the layers with other methods, but this helped me.
import numpy as np
import matplotlib.pyplot as plt

# First, find out the operations; many of those are micro-operations such as add etc.
graph = tf.get_default_graph()
graph.get_operations()

# Choose the relevant operation
op_name = '...'
op = graph.get_operation_by_name(op_name)
out = sess.run([op.outputs[0]], feed_dict={x: img_batch, is_training: False})
# img_batch is a single image whose dimensions are (1,n,n,1).
# out is the output of the layer; do whatever you want with the output.
# In my case, I wanted to see the output of a convolution layer.
out2 = np.array(out)
print(out2.shape)

# Determine rows, cols, fig size etc.
for each_depth in range(out2.shape[4]):
    fig.add_subplot(rows, cols, each_depth + 1)
    plt.imshow(out2[0, 0, :, :, each_depth], cmap='gray')
For example, below is the input (a colored cat) and the output of the second conv layer in my model.
Note that I am aware this question is old and there are easier methods with Keras, but for people who use an old model from other people (such as me), this may be useful.