Reusing a multi-layered LSTM network in TensorFlow

I am trying to use the same LSTM architecture for two different inputs, so I pass the same cells when unrolling the bidirectional LSTM over each input. I am not sure whether this creates two entirely separate LSTM networks; it looks like there are two different nodes in my graph. My code and graph look something like this:
def get_multirnn_cell(self):
    cells = []
    for _ in range(config.n_layers):
        cell = tf.nn.rnn_cell.LSTMCell(config.n_hidden, initializer=tf.glorot_uniform_initializer())
        dropout_cell = tf.nn.rnn_cell.DropoutWrapper(cell=cell,
                                                     input_keep_prob=config.keep_prob,
                                                     output_keep_prob=config.keep_prob)
        cells.append(dropout_cell)
    return cells
def add_lstm_op(self):
    with tf.variable_scope('lstm'):
        cells_fw = self.get_multirnn_cell()
        cells_bw = self.get_multirnn_cell()
        cell_fw = tf.nn.rnn_cell.MultiRNNCell(cells_fw)
        cell_bw = tf.nn.rnn_cell.MultiRNNCell(cells_bw)
        (_, _), (state_one_fw, state_one_bw) = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw,
                                                                               inputs=self.question_one,
                                                                               sequence_length=self.seql_one,
                                                                               dtype=tf.float32)
        self.state_one = tf.concat([state_one_fw[-1].h, state_one_bw[-1].h], name='state_one', axis=-1)
        # self.state_one = tf.concat([state_one_fw, state_one_bw], axis=-1)
        # [batch_size, 2*hidden_size]
        (_, _), (state_two_fw, state_two_bw) = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw,
                                                                               inputs=self.question_two,
                                                                               sequence_length=self.seql_two,
                                                                               dtype=tf.float32)
        self.state_two = tf.concat([state_two_fw[-1].h, state_two_bw[-1].h], name='state_two', axis=-1)

If you want to reuse the MultiRNNCell's variables, you can pass reuse=tf.AUTO_REUSE to the variable_scope:

with tf.variable_scope('lstm', reuse=tf.AUTO_REUSE):
See the doc.
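For illustration, here is a minimal sketch (reusing the question's get_multirnn_cell and the self.question_* / self.seql_* attributes) of add_lstm_op under an AUTO_REUSE scope, plus a loop to verify that each LSTM variable was created only once:

def add_lstm_op(self):
    with tf.variable_scope('lstm', reuse=tf.AUTO_REUSE):
        cell_fw = tf.nn.rnn_cell.MultiRNNCell(self.get_multirnn_cell())
        cell_bw = tf.nn.rnn_cell.MultiRNNCell(self.get_multirnn_cell())
        # Both unrollings run in the same scope, so the second call reuses
        # the variables created by the first one instead of creating new ones.
        _, (state_one_fw, state_one_bw) = tf.nn.bidirectional_dynamic_rnn(
            cell_fw, cell_bw, inputs=self.question_one,
            sequence_length=self.seql_one, dtype=tf.float32)
        self.state_one = tf.concat([state_one_fw[-1].h, state_one_bw[-1].h], axis=-1, name='state_one')
        _, (state_two_fw, state_two_bw) = tf.nn.bidirectional_dynamic_rnn(
            cell_fw, cell_bw, inputs=self.question_two,
            sequence_length=self.seql_two, dtype=tf.float32)
        self.state_two = tf.concat([state_two_fw[-1].h, state_two_bw[-1].h], axis=-1, name='state_two')

# Sanity check: each 'lstm/...' variable name should appear exactly once.
for v in tf.trainable_variables():
    if v.name.startswith('lstm'):
        print(v.name)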

Related

Multilayer LSTM without tf.contrib.rnn.MultiRNNCell

To implement a multilayer LSTM network, I usually use the following code:
def lstm_cell():
    return tf.contrib.rnn.LayerNormBasicLSTMCell(model_settings['rnn_size'])

attn_cell = lstm_cell
def attn_cell():
    return tf.contrib.rnn.DropoutWrapper(lstm_cell(), output_keep_prob=0.7)

cell = tf.contrib.rnn.MultiRNNCell([attn_cell() for _ in range(num_layers)], state_is_tuple=True)
outputs_, _ = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)
But this way I do not have access to the individual hidden-layer outputs, so I cannot rearrange or otherwise manipulate them.
Is there any other way to make a multilayer LSTM network without using tf.contrib.rnn.MultiRNNCell?
You can simply stack several LSTM layers, for example via the Sequential module:
model = Sequential()
model.add(layers.LSTM(..., return_sequences = True, input_shape = (...)))
model.add(layers.LSTM(..., return_sequences = True))
...
model.add(layers.LSTM(...))
In this case the return_sequences keyword is crucial for the intermediate layers.
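If you need direct access to each hidden layer's outputs, a minimal sketch using the Keras functional API (the layer sizes and input shape below are placeholders, not taken from the question) could look like this:

from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(10, 13))                  # (time_steps, features), assumed values
h1 = layers.LSTM(64, return_sequences=True)(inputs)    # outputs of the first hidden layer
h2 = layers.LSTM(64, return_sequences=True)(h1)        # outputs of the second hidden layer
model = Model(inputs=inputs, outputs=[h1, h2])
# model.predict(...) returns both hidden-layer sequences, so they can be
# rearranged or combined however you like before a final output layer.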

Provide Tensorflow Seq2Seq output as input at next step (inference)

I would like to create a Seq2Seq model to forecast time series data. I am using the InferenceHelper and I am struggling with the sample_fn parameter. I would like to pass the decoder output of each cell through a dense layer in order to generate a single output at each time step. So I'm providing a function that does this to the sample_fn parameter.
Later on I would like to concatenate the rnn cell outputs with other non-time-series features and build more dense layers on top of it.
The network does fine at training time but not during inference. I think this is caused by the fact that I'm not sharing the same dense layer between training and inference time.
I tried to set the reuse parameter and used a with tf.variable_scope() environment. However, the sample_fn is already called within a specific scope in dynamic_decode and so I fail to use the same scope as I did during training.
The relevant part of my code looks as follows:
The placeholders:
inputs = tf.placeholder(shape=(None, 100, 1), dtype=tf.float32, name='inputs')
input_lengths = tf.placeholder(shape=(None,), dtype=tf.int32, name='input_lengths')
targets = tf.placeholder(shape=(None, 100), dtype=tf.float32, name='targets')
target_lengths = tf.placeholder(shape=(None,), dtype=tf.int32, name='target_lengths')
The encoder:
encoder_cell = tf.nn.rnn_cell.MultiRNNCell([tf.contrib.rnn.GRUCell(num_units=16, name='encoder_cell_0')])
decoder_cell = tf.nn.rnn_cell.MultiRNNCell([tf.contrib.rnn.GRUCell(num_units=16, name='decoder_cell_0')])
_, final_encoder_states = tf.nn.dynamic_rnn(cell=encoder_cell, inputs=inputs,
                                            sequence_length=input_lengths, dtype=tf.float32)
The decoder (training)
start_tokens = tf.fill([tf.shape(inputs)[0]], start_token)
start_tokens = tf.cast(tf.expand_dims(start_tokens, 1), dtype=tf.float32)
targets_as_inputs = tf.concat([start_tokens, targets], axis=1)
targets_as_inputs = tf.reshape(targets_as_inputs, (-1, targets_as_inputs.shape[1], 1))
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=targets_as_inputs, sequence_length=target_lengths, name='training_helper')
training_decoder = tf.contrib.seq2seq.BasicDecoder(cell=decoder_cell, helper=training_helper, initial_state=final_encoder_states)
train_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder=training_decoder, maximum_iterations=max_target_sequence_length, impute_finished=True)
train_predictions = train_outputs.rnn_output
train_predictions = tf.layers.dense(train_predictions, 1, activation=None, name='output_dense_layer')
The decoder (inference). The incorrect part:
def sample_fn(outputs):
    return tf.layers.dense(outputs, 1, activation=None,
                           name='output_dense_layer', reuse=tf.AUTO_REUSE)
infer_helper = tf.contrib.seq2seq.InferenceHelper(sample_fn=sample_fn, sample_shape=(1),
                                                  sample_dtype=tf.float32, start_inputs=start_tokens,
                                                  end_fn=lambda sample_ids: False, next_inputs_fn=None)
infer_decoder = tf.contrib.seq2seq.BasicDecoder(cell=decoder_cell, helper=infer_helper, initial_state=final_encoder_states)
infer_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder=infer_decoder, maximum_iterations=max_target_sequence_length, impute_finished=True)
infer_predictions = infer_outputs.rnn_output
infer_predictions = sample_fn(infer_predictions)
There is a similar question: How to use tensorflow seq2seq without embeddings?
The author uses sample_fn=lambda outputs: outputs. But this returns a ValueError in my case because the dimensions don't match. How could they with multiple cells? sample_fn should return a single value.
For now, I have solved my problem by writing my own dynamic_decode function. I copied everything from tf.contrib.seq2seq.dynamic_decode except the line

with variable_scope.variable_scope(scope, "decoder", reuse=reuse) as varscope:

the related if condition involving varscope, and another if condition that checks the decoder class.
Not a nice solution but good enough for now.
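Another possible workaround (my own sketch, not the approach described above): build the output projection as a single tf.layers.Dense object, apply it once to the training outputs so its variables are created, and close over the same object inside sample_fn. A layer object reuses its own kernel and bias on later calls, regardless of the scope dynamic_decode happens to run in.

output_layer = tf.layers.Dense(1, activation=None, name='output_dense_layer')

# Training: this first call builds the layer's variables.
train_predictions = output_layer(train_outputs.rnn_output)

# Inference: the closure reuses the already-built variables.
def sample_fn(outputs):
    return output_layer(outputs)

infer_helper = tf.contrib.seq2seq.InferenceHelper(
    sample_fn=sample_fn, sample_shape=(1,), sample_dtype=tf.float32,
    start_inputs=start_tokens, end_fn=lambda sample_ids: False)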

How to use multilayered bidirectional LSTM in Tensorflow?

I want to know how to use multilayered bidirectional LSTM in Tensorflow.
I have already implemented a bidirectional LSTM, but I want to compare that model with a multi-layer version. How should I change the code in this part?
x = tf.unstack(tf.transpose(x, perm=[1, 0, 2]))
# print(x[0].get_shape())

# Define lstm cells with tensorflow
# Forward direction cell
lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
# Backward direction cell
lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)

# Get lstm cell output
try:
    outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                                 dtype=tf.float32)
except Exception:  # Old TensorFlow version only returns outputs not states
    outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                           dtype=tf.float32)

# Linear activation, using rnn inner loop last output
outputs = tf.stack(outputs, axis=1)
outputs = tf.reshape(outputs, (batch_size*n_steps, n_hidden*2))
outputs = tf.matmul(outputs, weights['out']) + biases['out']
outputs = tf.reshape(outputs, (batch_size, n_steps, n_classes))
You can use two different approaches to build a multilayer BiLSTM model:
1) Use the output of the previous BiLSTM layer as the input to the next one. At the start, create lists of forward and backward cells (cell_forw and cell_back), each of length num_layers, and initialize output with the network input. Then:
for n in range(num_layers):
    cell_fw = cell_forw[n]
    cell_bw = cell_back[n]
    state_fw = cell_fw.zero_state(batch_size, tf.float32)
    state_bw = cell_bw.zero_state(batch_size, tf.float32)
    (output_fw, output_bw), last_state = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, output,
                                                                         initial_state_fw=state_fw,
                                                                         initial_state_bw=state_bw,
                                                                         scope='BLSTM_' + str(n),
                                                                         dtype=tf.float32)
    output = tf.concat([output_fw, output_bw], axis=2)
2) Another approach worth a look is the stacked BiLSTM (tf.contrib.rnn.stack_bidirectional_dynamic_rnn), described further below.
This is essentially the same as the first answer, but with a slight variation in how the scope names are used and with dropout wrappers added. It also takes care of the variable-scope error that the first answer can raise.
def bidirectional_lstm(input_data, num_layers, rnn_size, keep_prob):
    output = input_data
    for layer in range(num_layers):
        with tf.variable_scope('encoder_{}'.format(layer), reuse=tf.AUTO_REUSE):
            # By giving a different variable scope to each layer, I've ensured that
            # the weights are not shared among the layers. If you want to share the
            # weights, you can do that by giving variable_scope as "encoder" but do
            # make sure first that reuse is set to tf.AUTO_REUSE
            cell_fw = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.truncated_normal_initializer(-0.1, 0.1, seed=2))
            cell_fw = tf.contrib.rnn.DropoutWrapper(cell_fw, input_keep_prob=keep_prob)
            cell_bw = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.truncated_normal_initializer(-0.1, 0.1, seed=2))
            cell_bw = tf.contrib.rnn.DropoutWrapper(cell_bw, input_keep_prob=keep_prob)
            outputs, states = tf.nn.bidirectional_dynamic_rnn(cell_fw,
                                                              cell_bw,
                                                              output,
                                                              dtype=tf.float32)
            # Concat the forward and backward outputs
            output = tf.concat(outputs, 2)
    return output
On top of Taras's answer, here is another example using just a 2-layer bidirectional RNN with GRU cells:
embedding_weights = tf.Variable(tf.random_uniform([vocabulary_size, state_size], -1.0, 1.0))
embedding_vectors = tf.nn.embedding_lookup(embedding_weights, tokens)

# First BLSTM
cell = tf.nn.rnn_cell.GRUCell(state_size)
cell = tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=1-dropout)
(forward_output, backward_output), _ = \
    tf.nn.bidirectional_dynamic_rnn(cell, cell, inputs=embedding_vectors,
                                    sequence_length=lengths, dtype=tf.float32, scope='BLSTM_1')
outputs = tf.concat([forward_output, backward_output], axis=2)

# Second BLSTM using the output of the previous layer as its input.
cell2 = tf.nn.rnn_cell.GRUCell(state_size)
cell2 = tf.nn.rnn_cell.DropoutWrapper(cell2, output_keep_prob=1-dropout)
(forward_output, backward_output), _ = \
    tf.nn.bidirectional_dynamic_rnn(cell2, cell2, inputs=outputs,
                                    sequence_length=lengths, dtype=tf.float32, scope='BLSTM_2')
outputs = tf.concat([forward_output, backward_output], axis=2)
BTW, don't forget to use a different scope name for each layer. Hope this helps.
As #Taras pointed out, you can use:
(1) tf.nn.bidirectional_dynamic_rnn()
(2) tf.contrib.rnn.stack_bidirectional_dynamic_rnn().
All previous answers only capture (1), so I give some details on (2), in particular since it usually outperforms (1). For an intuition about the different connectivities, see here.
Let's say you want to create a stack of 3 BLSTM layers, each with 64 nodes:
num_layers = 3
num_nodes = 64
# Define LSTM cells
enc_fw_cells = [LSTMCell(num_nodes)for layer in range(num_layers)]
enc_bw_cells = [LSTMCell(num_nodes) for layer in range(num_layers)]
# Connect LSTM cells bidirectionally and stack
(all_states, fw_state, bw_state) = tf.contrib.rnn.stack_bidirectional_dynamic_rnn(
cells_fw=enc_fw_cells, cells_bw=enc_bw_cells, inputs=input_embed, dtype=tf.float32)
# Concatenate results
for k in range(num_layers):
if k == 0:
con_c = tf.concat((fw_state[k].c, bw_state[k].c), 1)
con_h = tf.concat((fw_state[k].h, bw_state[k].h), 1)
else:
con_c = tf.concat((con_c, fw_state[k].c, bw_state[k].c), 1)
con_h = tf.concat((con_h, fw_state[k].h, bw_state[k].h), 1)
output = tf.contrib.rnn.LSTMStateTuple(c=con_c, h=con_h)
In this case, I use the final states of the stacked biRNN rather than the states at all timesteps (saved in all_states), since I was using an encoder-decoder scheme, where the above code was only the encoder.

How to share LSTM unit for 2 separate input in TensorFlow?

Assume I have 2 inputs, q and a. How can I make the 2 inputs share 1 LSTM cell? Part of my code is as below:
def lstmnets(self, sequence, seq_len):
    seq_embeds = self.embeds(sequence)
    # lstm_cell = tf.contrib.rnn.BasicLSTMCell(self.hidden_size)
    lstm_cell = tf.nn.rnn_cell.LSTMCell(self.hidden_size)
    init_state = lstm_cell.zero_state(self.batch_size, dtype=tf.float32)
    lstm_out, final_state = tf.nn.dynamic_rnn(lstm_cell, seq_embeds, initial_state=init_state, sequence_length=seq_len)
    return lstm_out

def inference(self, q, a, q_len, a_len):
    with tf.variable_scope('lstmnets') as scope:
        query_rep = self.lstmnets(q, q_len)
        scope.reuse_variables()
        title_rep = self.lstmnets(a, a_len)
But with this code my graph seems to contain 2 stacked LSTMs, as in the following figure. How can I use just one LSTM for both inputs? In addition, how can I initialize the LSTM weights and add them to a histogram? So far, I have found no related tutorials for this. Thanks.
Your code seems fine, as it uses scope.reuse_variables() to share the LSTM weights. The best way to check is to print the variables in the graph and verify that the lstm_cell is declared only once. So in your inference function, print the variable names:
def inference(self, q, a, q_len, a_len):
    with tf.variable_scope('lstmnets') as scope:
        query_rep = self.lstmnets(q, q_len)
        scope.reuse_variables()
        title_rep = self.lstmnets(a, a_len)

    for v in tf.global_variables():
        print(v.name)
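The histogram part of the question is not covered above; a minimal sketch (assuming the 'lstmnets' scope from the question) that records the LSTM weights with tf.summary.histogram would be as follows. For the weight initialization, note that tf.nn.rnn_cell.LSTMCell accepts an initializer argument (e.g. tf.glorot_uniform_initializer()).

# Record a histogram summary for every variable created under 'lstmnets'.
for v in tf.global_variables():
    if v.name.startswith('lstmnets'):
        tf.summary.histogram(v.name.replace(':', '_'), v)

merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('logs', tf.get_default_graph())
# In the training loop:
#   summary = sess.run(merged, feed_dict=...)
#   writer.add_summary(summary, global_step)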

how to stack LSTM layers using TensorFlow

What I have is the following, which I believe is a network with one hidden LSTM layer:
# Parameters
learning_rate = 0.001
training_iters = 100000
batch_size = 128
display_step = 10

# Network Parameters
n_input = 13
n_steps = 10
n_hidden = 512
n_classes = 13

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Define weights
weights = {
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([n_classes]))
}
However, I am trying to build an LSTM network using TensorFlow to predict power consumption. I have been looking around to find a good example, but I could not find any model with 2 hidden LSTM layers. Here's the model that I would like to build:
1 input layer,
1 output layer,
2 hidden LSTM layers (with 512 neurons in each),
time step (sequence length): 10
Could anyone guide me to build this using TensorFlow? ( from defining weights, building input shape, training, predicting, use of optimizer or cost function, etc), any help would be much appreciated.
Thank you so much in advance!
Here is how I do it in a translation model with GRU cells; you can just replace the GRU with an LSTM. It is really easy: just use tf.nn.rnn_cell.MultiRNNCell with a list of the cells it should wrap. In the code below I am manually unrolling it, but you can pass it to tf.nn.dynamic_rnn or tf.nn.rnn as well.
y = input_tensor
with tf.variable_scope('encoder') as scope:
    rnn_cell = rnn.MultiRNNCell([rnn.GRUCell(1024) for _ in range(3)])
    state = tf.zeros((BATCH_SIZE, rnn_cell.state_size))
    output = [None] * TIME_STEPS
    for t in reversed(range(TIME_STEPS)):
        y_t = tf.reshape(y[:, t, :], (BATCH_SIZE, -1))
        output[t], state = rnn_cell(y_t, state)
        scope.reuse_variables()
    y = tf.pack(output, 1)
First you need some placeholders for your training data (one batch):
x_input = tf.placeholder(tf.float32, [batch_size, truncated_series_length, 1])
y_output = tf.placeholder(tf.float32, [batch_size, truncated_series_length, 1])
An LSTM needs a state, which consists of two components: the hidden state and the cell state (a very good guide: https://arxiv.org/pdf/1506.00019.pdf). For every layer in the LSTM you have one cell state and one hidden state.
The problem is that TensorFlow stores this in an LSTMStateTuple, which you cannot feed through a placeholder. So you need to store it in a tensor and then unpack it into a tuple:
state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])
l = tf.unpack(state_placeholder, axis=0)
rnn_tuple_state = tuple(
    [tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], l[idx][1])
     for idx in range(num_layers)]
)
Then you can use the built-in TensorFlow API to create the stacked LSTM layers:
cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)
cell = tf.nn.rnn_cell.MultiRNNCell([cell]*num_layers, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(cell, x_input, initial_state=rnn_tuple_state)
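One caveat (my note, not part of the original answer): in newer TF 1.x releases, wrapping the same cell object for every layer with [cell]*num_layers can raise a ValueError about reusing the RNNCell with a different variable scope; creating one cell per layer avoids this:

cells = [tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True) for _ in range(num_layers)]
cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=True)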
From here you continue with the outputs to calculate logits and then a loss with respect to y_output.
Then you run each batch with sess.run, using truncated backpropagation (a good explanation here: http://r2rt.com/styles-of-truncated-backpropagation.html):
init_state = np.zeros((num_layers, 2, batch_size, state_size))
...current_state... = sess.run([...state...], feed_dict={x_input:batch_in, state_placeholder:current_state ...})
current_state = np.array(current_state)
You will have to convert the state to a numpy array before feeding it again.
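As a rough sketch of the remaining pieces (the projection to logits, the loss, and the feed loop below are my own illustration under the assumptions above, not part of the original answer):

# Project every timestep to a single value and regress against y_output.
logits = tf.layers.dense(outputs, 1)                      # [batch_size, time, 1]
loss = tf.losses.mean_squared_error(labels=y_output, predictions=logits)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    current_state = np.zeros((num_layers, 2, batch_size, state_size))
    for batch_in, batch_out in batches:   # 'batches' is assumed to yield numpy arrays
        _, current_state = sess.run(
            [train_op, state],
            feed_dict={x_input: batch_in, y_output: batch_out,
                       state_placeholder: current_state})
        current_state = np.array(current_state)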
Perhaps it is better to use a library like TFLearn or Keras instead?
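For reference, a minimal Keras sketch of the network described in the question (2 LSTM layers of 512 units, 10 time steps, 13 input features, 13 classes) might look like this:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.LSTM(512, return_sequences=True, input_shape=(10, 13)),  # first hidden LSTM layer
    layers.LSTM(512),                                               # second hidden LSTM layer
    layers.Dense(13, activation='softmax'),                         # output layer
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, batch_size=128, epochs=...)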