I am trying to understand Bahdanau's attention using the following tutorial:
https://www.tensorflow.org/tutorials/text/nmt_with_attention
The calculation is the following:
self.attention_units = attention_units
self.W1 = Dense(self.attention_units)
self.W2 = Dense(self.attention_units)
self.V = Dense(1)
score = self.V(tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc)))
I have two problems:
I cannot understand why the shape of tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc)) is (batch_size, max_len, attention_units)?
Using the rules of matrix multiplication I got the following results:
a) Shape of self.W1(last_inp_dec) -> (1, hidden_units_dec) * (hidden_units_dec, attention_units) = (1, attention_units)
b) Shape of self.W2(input_enc) -> (max_len, hidden_units_enc) * (hidden_units_enc, attention_units) = (max_len, attention_units)
Then we add up the quantities from a) and b). How do we end up with dimensionality (max_len, attention_units) or (batch_size, max_len, attention_units)? How can we do the addition with different sizes in the second dimension (1 vs max_len)?
Why do we multiply tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc)) by self.V? Is it because we want the alphas as scalars?
1) I cannot understand why the shape of tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc)) is (batch_size, max_len, attention_units)?
From the comments section of the code in class BahdanauAttention
query_with_time_axis shape = (batch_size, 1, hidden size)
Note that the dimension 1 was added using tf.expand_dims to make the shape compatible with values for the addition. The added dimension of size 1 gets broadcast during the addition operation. Otherwise, the incoming shape was (batch_size, hidden size), which would not have been compatible.
values shape = (batch_size, max_len, hidden size)
The addition happens after W1 and W2 have projected both tensors to attention_units, so the broadcasted sum of (batch_size, 1, attention_units) and (batch_size, max_len, attention_units) gives us a shape of (batch_size, max_len, attention_units).
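To see the broadcasting concretely, here is a small standalone check (the specific sizes are just illustrative):

import tensorflow as tf

batch_size, max_len, attention_units = 4, 19, 10

w1_out = tf.random.normal((batch_size, 1, attention_units))        # self.W1(query_with_time_axis)
w2_out = tf.random.normal((batch_size, max_len, attention_units))  # self.W2(values)

# the size-1 time axis of w1_out is broadcast against max_len during the addition
summed = tf.nn.tanh(w1_out + w2_out)
print(summed.shape)  # (4, 19, 10)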
2) Why do we multiply tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc)) by self.V? Because we want the alphas as scalars?
self.V is the final layer, the output of which gives us the score. The random weight initialization of the self.V layer is handled by Keras behind the scenes in the line self.V = tf.keras.layers.Dense(1).
We are not multiplying tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc)) by self.V.
The construct self.V(tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc))) means that the tanh activations resulting from the operation tf.nn.tanh(self.W1(last_inp_dec) + self.W2(input_enc)) form the input matrix to the single-output layer represented by self.V.
The shapes are slightly different from the ones you have given. It is best understood with a direct example perhaps?
Assuming 10 units in the alignment layer and 128 embedding dimensions on the decoder and 256 dimensions on the encoder and 19 timesteps, then:
last_inp_dec and input_enc shapes would be (?,128) and (?,19,256). We now need to expand last_inp_dec over the time axis to make it (?,1,128) so that addition is possible.
The layer weights for W1, W2 and V will be (128,10), (256,10) and (10,1) respectively (Dense-layer weights carry no batch dimension). Notice how self.W1(last_inp_dec) works out to (?,1,10). This is added, broadcasting over the 19 timesteps, to self.W2(input_enc) to give a shape of (?,19,10). The result is fed to self.V and the output is (?,19,1), which is the shape we want: a set of 19 weights. Softmaxing this gives the attention weights.
Multiplying this attention weight with each encoder hidden state and summing up returns the context.
To your question as to why 'v' is needed: it is needed because Bahdanau provides the option of using 'n' units in the alignment layer (to determine W1 and W2), and we need one more layer on top to massage the tensor back to the shape we want, a set of attention weights, one for each time step.
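Putting the walkthrough together, here is a minimal sketch of the tutorial's attention layer with the example shapes above as comments (the 19 timesteps, 128/256 dimensions and 10 units are just the assumptions from this example):

import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    def __init__(self, attention_units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(attention_units)
        self.W2 = tf.keras.layers.Dense(attention_units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, query, values):
        # query: (?, 128) -> (?, 1, 128) so addition with the encoder output is possible
        query_with_time_axis = tf.expand_dims(query, 1)
        # (?, 1, 10) + (?, 19, 10) broadcasts to (?, 19, 10); self.V maps it to (?, 19, 1)
        score = self.V(tf.nn.tanh(
            self.W1(query_with_time_axis) + self.W2(values)))
        attention_weights = tf.nn.softmax(score, axis=1)                    # (?, 19, 1)
        # weight each encoder hidden state and sum over time to get the context
        context_vector = tf.reduce_sum(attention_weights * values, axis=1)  # (?, 256)
        return context_vector, attention_weights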
I just posted an answer at Understanding Bahdanau's Attention Linear Algebra with all the shapes of the tensors and weights involved.
I have a convolutional autoencoder model. While an autoencoder typically focuses on reconstructing the input without using any label information, I want to use the class label to perform class-conditional scaling/shifting after the convolutions. I am curious whether utilizing the label in this way might help produce better reconstructions.
num_filters = 32
input_img = layers.Input(shape=(28, 28, 1)) # input image
label = layers.Input(shape=(10,)) # label
# separate scale value for each of the filter dimensions
scale = layers.Dense(num_filters, activation=None)(label)
# conv_0 produces something of shape (None,14,14,32)
conv_0 = layers.Conv2D(num_filters, (3, 3), strides=2, activation=None, padding='same')(input_img)
# TODO: Need help here. Multiply conv_0 by scale along each of the filter dimensions.
# This still outputs something of shape (None,14,14,32)
# Essentially each 14x14x1 slice has its own scalar multiplier
In the example above, the output of the convolutional layer is (14,14,32) and the scale layer is of shape (32,). I want the convolutional output to be multiplied by the corresponding scale value along each filter dimension. For example, if these were numpy arrays I could do something like conv_0[:, :, i] * scale[i] for i in range(32).
I looked at tf.keras.layers.Multiply, but based on the documentation I believe it takes in tensors of the same size as input. How do I work around this?
You don't have to loop. Simply make the two tensors broadcast-compatible:
out = layers.Multiply()([conv_0, tf.expand_dims(tf.expand_dims(scale,axis=1), axis=1)])
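Wired into the question's model, it could look something like the sketch below; it reuses the question's variable names and relies on Keras merge layers broadcasting compatible shapes:

import tensorflow as tf
from tensorflow.keras import layers

num_filters = 32
input_img = layers.Input(shape=(28, 28, 1))
label = layers.Input(shape=(10,))

scale = layers.Dense(num_filters, activation=None)(label)                          # (None, 32)
conv_0 = layers.Conv2D(num_filters, (3, 3), strides=2, padding='same')(input_img)  # (None, 14, 14, 32)

# (None, 32) -> (None, 1, 1, 32) so it broadcasts over the two spatial dimensions
scale_b = tf.expand_dims(tf.expand_dims(scale, axis=1), axis=1)
out = layers.Multiply()([conv_0, scale_b])                                          # (None, 14, 14, 32)

model = tf.keras.Model([input_img, label], out)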
I don't know if I actually understood what you are trying to achieve, but I did a quick numpy test. I believe the same should hold in TensorFlow:
import numpy as np

conv_0 = np.ones([14, 14, 32])
scale = np.array([i + 1 for i in range(32)])
result = conv_0 * scale  # broadcasting scales each channel by its own factor
Check whether the channel-wise slices were actually scaled element-wise; the element at index 1 of scale is 2:
conv_0_slice_1 = conv_0[:, :, 1]
result_slice_1 = result[:, :, 1]
print(np.allclose(result_slice_1, conv_0_slice_1 * 2))  # True
Same as the title: in tf.keras.layers.Embedding, why is it important to know the size of the dictionary as the input dimension?
Because internally, the embedding layer is nothing but a matrix of size vocab_size x embedding_size. It is a simple lookup table: row n of that matrix stores the vector for word n.
So, if you have e.g. 1000 distinct words, your embedding layer needs to know this number in order to store 1000 vectors (as a matrix).
Don't confuse the internal storage of a layer with its input or output shape.
The input shape is (batch_size, sequence_length) where each entry is an integer in the range [0, vocab_size). For each of these integers the layer will return the corresponding row (which is a vector of size embedding_size) of the internal matrix, so that the output shape becomes (batch_size, sequence_length, embedding_size).
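A quick demonstration of those shapes (the numbers are arbitrary):

import tensorflow as tf

vocab_size, embedding_size = 1000, 64
layer = tf.keras.layers.Embedding(vocab_size, embedding_size)

# (batch_size=2, sequence_length=3); every entry must lie in [0, vocab_size)
batch = tf.constant([[5, 42, 7], [1, 0, 999]])
vectors = layer(batch)
print(vectors.shape)           # (2, 3, 64)
print(layer.embeddings.shape)  # the internal lookup table: (1000, 64)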
In such setting, the dimensions/shapes of the tensors are the following:
The input tensor has size [batch_size, max_time_steps] such that each element of that tensor can have a value in the range 0 to vocab_size-1.
Then, each of the values from the input tensor passes through an embedding layer, which has shape [vocab_size, embedding_size]. The output of the embedding layer is of shape [batch_size, max_time_steps, embedding_size].
Then, in a typical seq2seq scenario, this 3D tensor is the input of a recurrent neural network.
...
Here's how this is implemented in Tensorflow so you can get a better idea:
inputs = tf.placeholder(shape=(batch_size, max_time_steps), ...)
embeddings = tf.Variable(shape=(vocab_size, embedding_size), ...)
inputs_embedded = tf.nn.embedding_lookup(embeddings, inputs)
Now, the output of the embedding lookup table has the [batch_size, max_time_steps, embedding_size] shape.
In a CNN for binary classification of images, should the shape of output be (number of images, 1) or (number of images, 2)? Specifically, here are 2 kinds of last layer in a CNN:
keras.layers.Dense(2, activation = 'softmax')(previousLayer)
or
keras.layers.Dense(1, activation = 'softmax')(previousLayer)
In the first case, for every image there are 2 output values (probability of belonging to group 1 and probability of belonging to group 2). In the second case, each image has only 1 output value, which is its label (0 or 1, label=1 means it belongs to group 1).
Which one is correct? Is there an intrinsic difference? I don't want to recognize any object in those images, just divide them into 2 groups.
Thanks a lot!
The first one is the correct solution:
keras.layers.Dense(2, activation = 'softmax')(previousLayer)
Usually, we use the softmax activation function for classification tasks, and the output width will be the number of categories. This means that if you want to classify one object into three categories with the labels A, B, or C, you would need the Dense layer to generate an output with a shape of (None, 3). Then you can use the cross-entropy loss function to calculate the loss, automatically compute the gradients, and do the back-propagation.
If you want the Dense layer to generate only one value, you get a tensor with a shape of (None, 1), i.e. a single numeric value, like in a regression task, and you use the value of the output to represent the category. The answer can be correct, but it does not behave like the general solution for a classification task.
The difference is whether the class probabilities are independent of each other (multi-label classification) or not.
When there are 2 classes and you generally have P(c=1) + P(c=0) = 1 then
keras.layers.Dense(2, activation = 'softmax')
keras.layers.Dense(1, activation = 'sigmoid')
both are correct in terms of class probabilities. The only difference is how you supply the labels during training. But
keras.layers.Dense(2, activation = 'sigmoid')
is incorrect in that context. However, it is a correct implementation if you have P(c=1) + P(c=0) != 1. This is the case for multi-label classification, where an instance may belong to more than one correct class.
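As a sketch, the two equivalent setups differ only in output width, loss, and label format (the tiny model body here is a placeholder of my own):

import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(64, 64, 3))
features = layers.GlobalAveragePooling2D()(layers.Conv2D(8, 3)(inputs))

# Option A: 2 softmax outputs, integer labels 0/1, (sparse) categorical cross-entropy
out_a = layers.Dense(2, activation='softmax')(features)
model_a = tf.keras.Model(inputs, out_a)
model_a.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Option B: 1 sigmoid output, labels 0/1, binary cross-entropy
out_b = layers.Dense(1, activation='sigmoid')(features)
model_b = tf.keras.Model(inputs, out_b)
model_b.compile(optimizer='adam', loss='binary_crossentropy')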
I'm working with padded sequences of maximum length 50. I have two types of sequence data:
1) A sequence, seq1, of integers (1-100) that correspond to event types (e.g. [3,6,3,1,45,45,...,3])
2) A sequence, seq2, of integers representing time, in minutes, from the last event in seq1. So the last element is zero, by definition. So for example [100, 96, 96, 45, 44, 12,... 0]. seq1 and seq2 are the same length, 50.
I'm trying to run the LSTM primarily on the event/seq1 data, but have the time/seq2 data strongly influence the forget gate within the LSTM. The reason for this is that I want the LSTM to heavily penalize older events and be more likely to forget them. I was thinking about multiplying the forget weight by the inverse of the current value of the time/seq2 sequence, or maybe by 1/(seq2_element + 1) to handle cases where it's zero minutes.
I see in the keras code (LSTMCell class) where the change would have to be:
f = self.recurrent_activation(x_f + K.dot(h_tm1_f, self.recurrent_kernel_f))
So I need to modify keras' LSTM code to accept multiple inputs. As an initial test, within the LSTMCell class, I changed the call function to look like this:
def call(self, inputs, states, training=None):
    time_input = inputs[1]
    inputs = inputs[0]
So that it can handle two inputs given as a list.
When I try running the model with the Functional API:
# Input 1: event type sequences
# Take the event integer sequences, run them through an embedding layer to get float vectors, then run through LSTM
main_input = Input(shape =(max_seq_length,), dtype = 'int32', name = 'main_input')
x = Embedding(output_dim = embedding_length, input_dim = num_unique_event_symbols, input_length = max_seq_length, mask_zero=True)(main_input)
## Input 2: time vectors
auxiliary_input = Input(shape=(max_seq_length,1), dtype='float32', name='aux_input')
m = Masking(mask_value = 99999999.0)(auxiliary_input)
lstm_out = LSTM(32)(x, time_vector = m)
# Auxiliary loss here from first input
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)
# An arbitrary number of dense, hidden layers here
x = Dense(64, activation='relu')(lstm_out)
# The main output node
main_output = Dense(1, activation='sigmoid', name='main_output')(x)
## Compile and fit the model
model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'], loss_weights=[1., 0.2])
print(model.summary())
np.random.seed(21)
model.fit([train_X1, train_X2], [train_Y, train_Y], epochs=1, batch_size=200)
However, I get the following error:
An `initial_state` was passed that is not compatible with `cell.state_size`. Received `state_spec`=[InputSpec(shape=(None, 50, 1), ndim=3)]; however `cell.state_size` is (32, 32)
Any advice?
You can't pass a list of inputs to the default recurrent layers in Keras. The input_spec is fixed, and the recurrent code is implemented around a single tensor input, as also pointed out in the documentation, i.e. it doesn't magically iterate over 2 inputs with the same timesteps and pass them to the cell. This is partly because of how the iterations are optimised and the assumptions made if the network is unrolled, etc.
If you want 2 inputs, you can pass constants (doc) to the cell, which will pass the tensor through as is. This is mainly intended for implementing attention models in the future. So 1 input will iterate over timesteps while the other will not. If you really want 2 inputs to be iterated like a zip() in Python, you will have to implement a custom layer.
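To illustrate the constants mechanism, here is a minimal sketch with a custom cell of my own (not the questioner's time-aware forget gate); the constants tensor arrives unchanged at every timestep:

import tensorflow as tf
from tensorflow.keras import layers

class ConstantsAwareCell(layers.Layer):
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units

    def build(self, input_shape):
        # when constants are used, Keras passes a list of shapes: [input, constant, ...]
        if isinstance(input_shape, list):
            input_shape = input_shape[0]
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units), name='kernel')
        self.recurrent_kernel = self.add_weight(shape=(self.units, self.units), name='recurrent_kernel')

    def call(self, inputs, states, constants=None):
        # inputs: (batch, features) for a single timestep; states[0]: (batch, units)
        h = tf.tanh(tf.matmul(inputs, self.kernel) + tf.matmul(states[0], self.recurrent_kernel))
        if constants is not None:
            h = h * constants[0]  # gate the new state with the non-iterated tensor
        return h, [h]

seq = layers.Input(shape=(50, 8))
const = layers.Input(shape=(16,))
out = layers.RNN(ConstantsAwareCell(16))(seq, constants=const)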
I would like to throw in a couple of different ideas here. They don't require you to modify the Keras code.
After the embedding layer for the event types, stack the embeddings with the elapsed time. The Keras function is keras.layers.Concatenate(axis=-1). Imagine this: a single event type is mapped to an n-dimensional vector by the embedding layer. You just add the elapsed time as one more dimension after the embedding so that it becomes an (n+1)-dimensional vector.
Another idea, sort of related to your problem/question and which may help here, is 1D convolution. The convolution can happen right after the concatenated embeddings. The intuition for applying convolution to event types and elapsed time comes from 1x1 convolutions: you linearly combine the two together and the parameters are trained. Note that in terms of convolution, the dimensions of the vectors are called channels. Of course, you can also convolve over more than 1 event at a step. Just try it; it may or may not help. A sketch of both ideas follows below.
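A sketch combining both ideas, with layer sizes that are my own assumptions:

import tensorflow as tf
from tensorflow.keras import layers

max_seq_length, num_event_types, embedding_length = 50, 101, 8

events = layers.Input(shape=(max_seq_length,), dtype='int32')
minutes = layers.Input(shape=(max_seq_length, 1), dtype='float32')

emb = layers.Embedding(num_event_types, embedding_length)(events)  # (None, 50, 8)
stacked = layers.Concatenate(axis=-1)([emb, minutes])              # (None, 50, 9)

# 1x1 convolution: a trained linear mix of the 9 channels at every timestep
mixed = layers.Conv1D(filters=16, kernel_size=1)(stacked)          # (None, 50, 16)
main_output = layers.Dense(1, activation='sigmoid')(layers.LSTM(32)(mixed))

model = tf.keras.Model([events, minutes], main_output)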
I have been trying to learn how to code up an RNN and LSTM in TensorFlow. I found an example online in this blog post:
http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html
Below are the snippets which I am having trouble understanding, for an LSTM network to be used eventually for char-rnn generation:
x = tf.placeholder(tf.int32, [batch_size, num_steps], name='input_placeholder')
y = tf.placeholder(tf.int32, [batch_size, num_steps], name='labels_placeholder')
embeddings = tf.get_variable('embedding_matrix', [num_classes, state_size])
rnn_inputs = [tf.squeeze(i) for i in
              tf.split(1, num_steps, tf.nn.embedding_lookup(embeddings, x))]
A different section of the code now, where the weights are defined:
with tf.variable_scope('softmax'):
W = tf.get_variable('W', [state_size, num_classes])
b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))
logits = [tf.matmul(rnn_output, W) + b for rnn_output in rnn_outputs]
y_as_list = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(1, num_steps, y)]
x is the data to be fed, and y is the set of labels. In the LSTM equations we have a series of gates: x(t) gets multiplied by one set of weights, prev_hidden_state gets multiplied by another set of weights, biases are added, and non-linearities are applied.
Here are the doubts I have:
In this case only one weight matrix is defined; does that mean it works for both x(t) and prev_hidden_state as well?
For the embeddings matrix I know it has to be multiplied by the weight matrix, but why is the first dimension num_classes?
For the rnn_inputs we are using squeeze, which removes dimensions of 1, but why would I want to do that in a one-hot encoding?
Also from the splits I understand that we are unrolling the x of dimension (batch_size x num_steps) into discrete (batch_size x 1) vectors and then passing these values through the network. Is this right?
Maybe I can help.
In this case only one weight matrix is defined; does that mean it works for both x(t) and prev_hidden_state as well?
There are more weights, created when you call tf.nn.rnn_cell.LSTMCell. They are the internal weights of the RNN cell, which TensorFlow creates implicitly when you call the cell.
The weight matrix you explicitly defined is the transform from the hidden state to the vocabulary space.
You can view the implicit weights as accounting for the recurrent part: they take the previous hidden state and the current input and produce the new hidden state. The weight matrix you defined transforms the hidden state (e.g. state_size = 200) to the larger vocabulary space (e.g. vocab_size = 2000).
For further information, maybe you can view this tutorial : http://colah.github.io/posts/2015-08-Understanding-LSTMs/
For the embeddings matrix I know it has to be multiplied by the weight matrix, but why is the first dimension num_classes?
The num_classes accounts for the vocab_size; the embedding matrix transforms the vocabulary into the required embedding size (in this example, equal to the state_size).
For the rnn_inputs we are using squeeze, which removes dimensions of 1, but why would I want to do that in a one-hot encoding?
You need to get rid of the extra dimension because tf.nn.rnn takes inputs of shape (batch_size, input_size) instead of (batch_size, 1, input_size).
Also from the splits I understand that we are unrolling the x of dimension (batch_size X num_steps) into discrete (batch_size X 1) vectors and then passing these values through the network is this right?
To be more precise: after embedding, the (batch_size, num_steps, state_size) tensor turns into a list of num_steps elements, each of size (batch_size, 1, state_size).
The flow goes like this :
The embedding matrix embeds each word as a state_size-dimensional vector (a row of the matrix), so its size is (vocab_size, state_size).
Retrieving the rows specified by the indices in the x placeholder gives the RNN input, which has size (batch_size, num_steps, state_size).
tf.split splits the inputs into num_steps tensors of shape (batch_size, 1, state_size).
tf.squeeze squeezes them to (batch_size, state_size), forming the desired input format for tf.nn.rnn.
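In newer TensorFlow (where tf.split takes the value first and an axis keyword), the same reshaping can be checked like this:

import tensorflow as tf

batch_size, num_steps, state_size = 32, 10, 200
embedded = tf.random.normal((batch_size, num_steps, state_size))

# split along the time axis: a list of num_steps tensors of shape (batch_size, 1, state_size)
pieces = tf.split(embedded, num_steps, axis=1)

# squeeze away the singleton time dimension: (batch_size, state_size)
rnn_inputs = [tf.squeeze(p, axis=1) for p in pieces]
print(len(rnn_inputs), rnn_inputs[0].shape)  # 10 (32, 200)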
If there's any problem with the TensorFlow methods, you can look them up in the TensorFlow API documentation for a more detailed introduction.