MLP to initialize LSTM cell state in Keras - tensorflow

Can we use output of MLP as cell state in LSTM network and train the MLP too with back propagation?
This is similar to image captioning with CNN & LSTM where the output of CNN is flattened and used as initial hidden/cell state and train the stacked network where even the CNN part is updated through back-propagation.
I tried an architecture in Keras to achieve the same; please find the code here.
But the weights of the MLP are not being updated. I understand this is more straightforward in TensorFlow, where we can explicitly specify which parameters to update with the loss, but can anyone help me with the Keras API?

Yes, we can. Simply pass the MLP output as the initial state. Remember that an LSTM has two hidden states, h and c. You can read more about this here. Note that you also do not have to create multiple Keras models; you can simply connect all the layers:
from keras.layers import Input, Dense, LSTM
from keras.models import Model

# define the MLP
mlp_inp = Input(batch_shape=(batch_size, hidden_state_dim))
mlp_dense = Dense(hidden_state_dim, activation='relu')(mlp_inp)

# define the LSTM model, using the MLP output as both initial states
lstm_inp = Input(batch_shape=(batch_size, seq_len, inp_size))
lstm_layer = LSTM(lstm_dim)(lstm_inp, initial_state=[mlp_dense, mlp_dense])
lstm_out = Dense(10, activation='softmax')(lstm_layer)

model = Model([mlp_inp, lstm_inp], lstm_out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
However, because of the above fact about there being two states, you may want to use a separate Dense layer for each initial state:
# define the MLP with one Dense layer per LSTM state
mlp_inp = Input(batch_shape=(batch_size, hidden_state_dim))
mlp_dense_h = Dense(hidden_state_dim, activation='relu')(mlp_inp)
mlp_dense_c = Dense(hidden_state_dim, activation='relu')(mlp_inp)

# define the LSTM model, initializing h and c separately
lstm_inp = Input(batch_shape=(batch_size, seq_len, inp_size))
lstm_layer = LSTM(lstm_dim)(lstm_inp, initial_state=[mlp_dense_h, mlp_dense_c])
lstm_out = Dense(10, activation='softmax')(lstm_layer)

model = Model([mlp_inp, lstm_inp], lstm_out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
Also, note that when you go about saving this model, use save_weights instead of save, because save/load_model cannot handle the initial-state passing.
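For example, a minimal sketch of that save/restore pattern, assuming the model above has already been built and trained (the file name is arbitrary):
# save only the weights, not the full model graph
model.save_weights('mlp_lstm_weights.h5')

# later: rebuild the exact same architecture as above, then load the weights back
model.load_weights('mlp_lstm_weights.h5')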

Related

How can I properly train a model to predict a moving average using LSTM in keras?

I'm learning how to train RNN models in Keras, and I was expecting that training a model to predict the moving average of the last N steps would be quite easy.
I have a time series with thousands of steps and I'm able to create a model and train it with batches of data.
If I train it with the following model, though, the test-set predictions differ a lot from the real values (batch = 30, moving-average window = 10):
inputs = tf.keras.Input(shape=(batch_length, num_features))
x = tf.keras.layers.LSTM(10, return_sequences=False)(inputs)
outputs = tf.keras.layers.Dense(num_labels)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name="test_model")
To get good predictions, I need to add a TimeDistributed layer, getting 2D predictions instead of 1D ones (one prediction per time step):
inputs = tf.keras.Input(shape=(batch_length, num_features))
x = tf.keras.layers.LSTM(10, return_sequences=True)(inputs)
x = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(num_labels))(x)
outputs = tf.keras.layers.Dense(num_labels)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name="test_model")
If your goal is to take the last 10 timesteps as input and predict the moving average, I suggest trying a regressor model with densely connected layers rather than an RNN (a linear activation with regularization might work well enough).
That option would be cheaper to train and run than an LSTM.
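A minimal sketch of such a dense regressor, assuming each sample is the last 10 timesteps flattened into one vector (window_size is a placeholder name; num_features and num_labels as in the question):
inputs = tf.keras.Input(shape=(window_size * num_features,))
# a single linear layer with L2 regularization is often enough for a moving average
outputs = tf.keras.layers.Dense(num_labels, activation='linear',
                                kernel_regularizer=tf.keras.regularizers.l2(1e-4))(inputs)
regressor = tf.keras.Model(inputs, outputs, name='dense_regressor')
regressor.compile(optimizer='adam', loss='mse')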

Extracting activations from a specific layer of neural network

I was working on an image recognition problem. After training the model, I saved the architecture as well as weights. Now I want to use the model for extracting features from other images and perform SVM on that. For this, I want to remove the last two layers of my model and get the values calculated by the CNN and fully connected layers till then. How can I do that in Keras?
import keras

# a simple model
model = keras.models.Sequential([
    keras.layers.Input((32, 32, 3)),
    keras.layers.Conv2D(16, 3, activation='relu'),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation='softmax')
])

# after training: build a second model that stops at the penultimate layer
feature_only_model = keras.models.Model(model.inputs, model.layers[-2].output)
feature_only_model takes a (32, 32, 3) image as input, and its output is the feature vector.
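As a hypothetical usage example, the extracted features could then be fed to a scikit-learn SVM (images and labels are placeholder names for your data):
from sklearn.svm import SVC

# images: array of shape (n_samples, 32, 32, 3), labels: shape (n_samples,)
features = feature_only_model.predict(images)  # shape (n_samples, feature_dim)
svm = SVC(kernel='rbf')
svm.fit(features, labels)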
If your model is subclassed, just change its call() method.
If not:
if your model is complicated, wrap it in a subclassed model and change the forward pass in the call() method, or
if your model is simple, create a model without the last layers and load the weights into each layer separately (see the sketch below).
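A minimal sketch of that simple-model option, assuming the truncated model repeats the original architecture up to the layer you want features from:
# rebuild the architecture without the final classification layer
truncated = keras.models.Sequential([
    keras.layers.Input((32, 32, 3)),
    keras.layers.Conv2D(16, 3, activation='relu'),
    keras.layers.Flatten(),
])

# copy the trained weights layer by layer from the original model
for src, dst in zip(model.layers, truncated.layers):
    dst.set_weights(src.get_weights())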

How to do Gradient Normalization using Tensorflow LazyAdamOptimizer in functional Keras Model?

I am using a bidirectional RNN in Keras and need to use TensorFlow's LazyAdamOptimizer. I also need to do gradient normalization. How can I implement gradient normalization with TensorFlow's LazyAdamOptimizer and then keep using the functional Keras model?
I am training an unsupervised RNN to predict an input sequence of length 10. The problem is that I am using a Keras functional model. Because of the sparsity of the embedding layer, I need to use TensorFlow's LazyAdamOptimizer, which is not a default optimizer in Keras. With a default Keras optimizer I can do gradient normalization just by setting the argument clipnorm=1 in the optimizer function. Because I am using LazyAdam, I need to do this with TensorFlow and then pass it back to my Keras model, but I can't get the code going.
# model architecture
model_input = Input(shape=(seq_len, ))
embedding_a = Embedding(len(port_fwd_dict), 50, input_length=seq_len, mask_zero=True)(model_input)
lstm_a = Bidirectional(GRU(25, return_sequences=True, implementation=2, reset_after=True, recurrent_activation='sigmoid'), merge_mode="concat")(embedding_a)
dropout_a = Dropout(0.2)(lstm_a)
lstm_b = Bidirectional(GRU(25, return_sequences=False, activation="relu", implementation=2, reset_after=True, recurrent_activation='sigmoid'), merge_mode="concat")(dropout_a)
dropout_b = Dropout(0.2)(lstm_b)
dense_layer = Dense(100, activation="linear")(dropout_b)
dropout_c = Dropout(0.2)(dense_layer)
model_output = Dense(len(port_fwd_dict)-1, activation="softmax")(dropout_c)
# trying to implement gradient normalization
optimizer = tf.contrib.opt.LazyAdamOptimizer()
optimizer = tf.contrib.estimator.clip_gradients_by_norm(optimizer, 1)
loss = tf.reduce_mean(categorical_crossentropy(model_input, model_output))
train_op = optimizer.minimize(loss, tf.train.get_global_step())
model = Model(inputs=model_input, outputs=model_output)
model.compile(optimizer=train_op, loss='categorical_crossentropie', metrics = [ 'categorical_accuracy'])
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size, validation_split=validation_split, class_weight = 'auto')
I get the following error: NameError: name 'categorical_crossentropy' is not defined.
But even if this error is solved, I do not know whether this code will work, because I need to use the Keras function model.compile, which requires a loss to be specified; when I instead define the loss in the TensorFlow part above, it does not work.
Is there a way to do gradient normalization and still use my normal Keras functional model?
Maybe you can try my implementation of a lazy optimizer:
https://github.com/bojone/keras_lazyoptimizer
It is a pure Keras implementation, wrapping an existing optimizer to make it a lazy version.

How to save a TensorFlow dynamic_rnn model and restore it as the decoder in a new encoder-decoder model?

I am trying to train an encoder-decoder model to automatically generate summaries. The encoder part uses a CNN to encode an article's abstract; the decoder part is an RNN that generates the article's title.
So the skeleton looks like:
encoder_state = CNNEncoder(encoder_inputs)
decoder_outputs, _ = RNNDecoder(encoder_state,decoder_inputs)
But I want to pre-train the RNN decoder first, to teach the model how to speak. The decoder part is:
def RNNDecoder(encoder_state, decoder_inputs):
    decoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, decoder_inputs)
    # from tensorflow.models.rnn import rnn_cell, seq2seq
    cell = rnn.GRUCell(memory_dim)
    decoder_outputs, decoder_final_state = tf.nn.dynamic_rnn(
        cell, decoder_inputs_embedded,
        initial_state=encoder_state,
        dtype=tf.float32, scope="plain_decoder1"
    )
    return decoder_outputs, decoder_final_state
So my concern is: how do I save and restore the RNNDecoder part separately?
Here you can take the output of the dynamic RNN first.
decoder_cell = tf.contrib.rnn.LSTMCell(decoder_hidden_units)
decoder_outputs, decoder_final_state = tf.nn.dynamic_rnn(
    decoder_cell, decoder_inputs_embedded,
    initial_state=encoder_final_state,
    dtype=tf.float32, time_major=True, scope="plain_decoder")
Take the decoder_outputs. Then use a softmax layer to fully connect it.
decoder_logits = tf.contrib.layers.linear(decoder_outputs, vocab_size)
Then you can create a softmax loss with decoder_logits and train it in the normal way.
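For instance, a minimal sketch of that loss (TF 1.x style, like the rest of this code), assuming decoder_targets is a placeholder holding the integer token ids of the target sequence:
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=decoder_targets, logits=decoder_logits))
train_op = tf.train.AdamOptimizer().minimize(loss)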
When you want to restore the parameters, you can use this kind of method in a session:
with tf.Session() as session:
    saver = tf.train.Saver()
    saver.restore(session, checkpoint_file)
Here checkpoint_file should be your exact checkpoint file. Then, when you run the main model, it will only restore your decoder weights and train them together with the rest of the model.
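If the checkpoint also contains other variables, one common variant (an assumption here, not part of the original answer) is to restrict the Saver to the decoder's variable scope so only those weights are restored:
# collect only the variables created under the decoder's scope
decoder_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="plain_decoder1")
decoder_saver = tf.train.Saver(var_list=decoder_vars)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())   # initialize everything else
    decoder_saver.restore(session, checkpoint_file)  # then overwrite the decoder weights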

Implementing a many-to-many LSTM in TensorFlow?

I am using TensorFlow to make predictions on time-series data. So it is like I have 50 tags and I want to find out the next possible 5 tags.
As shown in the following picture, I want to make it like the 4th structure.
I went through the tutorial demo: Recurrent Neural Networks
But I found it only provides something like the 5th one in the above picture, which is different.
I am wondering which model could I use? I am thinking of the seq2seq models, but not sure if it is the right way.
You are right that you can use a seq2seq model. For brevity I've written up an example of how you can do it in Keras which also has a Tensorflow backend. I've not run the example so it might need tweaking. If your tags are one-hot you need to use cross-entropy loss instead.
from keras.models import Model
from keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

# The input shape is your sequence length and your token embedding size
inputs = Input(shape=(seq_len, embedding_size))
# Build an RNN encoder
encoder = LSTM(128, return_sequences=False)(inputs)
# Repeat the encoding for every timestep of the decoder
encoding_repeat = RepeatVector(5)(encoder)
# Pass your (5, 128) encoding to the decoder
decoder = LSTM(128, return_sequences=True)(encoding_repeat)
# Output each timestep into a fully connected layer
sequence_prediction = TimeDistributed(Dense(1, activation='linear'))(decoder)
model = Model(inputs, sequence_prediction)
model.compile('adam', 'mse')  # Or categorical_crossentropy
model.fit(X_train, y_train)
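If the tags are one-hot encoded, as the answer notes, the output layer and loss would change along these lines (num_tags is a placeholder for the tag vocabulary size):
# predict a probability distribution over tags at each of the 5 output steps
sequence_prediction = TimeDistributed(Dense(num_tags, activation='softmax'))(decoder)
model = Model(inputs, sequence_prediction)
model.compile('adam', 'categorical_crossentropy')
model.fit(X_train, y_train)  # y_train shape: (n_samples, 5, num_tags)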