Implementing a one-to-many RNN model in TensorFlow

I would like to implement a "none-to-many" RNN of the form described in "Learning to learn without gradient descent by gradient descent", to reproduce that piece of work: https://arxiv.org/pdf/1611.03824.pdf.
The input ("training data") to this model is the function f itself, not a sequence of data as usual. What I would like to do is something like:
x_0 = tf.constant(..)
h_0 = tf.constant(..)
f_params = tf.placeholder(..)

h = h_0
x = x_0
cell = tf.contrib.rnn.LSTMCell(num_units)
for _ in range(seq_length):
    y = f(x, f_params)
    x, h = cell([x, y], h)
But I cannot find a way to get this to work. All the examples I can find online use tf.contrib.rnn.static_rnn() or tf.nn.dynamic_rnn() to implement "many-to-many" or "many-to-one" architectures.
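For what it's worth, a rough sketch of how such a manual unroll could look with a raw LSTMCell (TF 1.x style). Here batch_size, x_dim and the quadratic stand-in for f are illustrative placeholders, not values from the paper:

import tensorflow as tf

batch_size, x_dim, num_units, seq_length = 4, 2, 32, 10

f_params = tf.placeholder(tf.float32, [batch_size, x_dim])

def f(x, params):
    # stand-in black-box function; replace with the function to be optimised
    return tf.reduce_sum((x - params) ** 2, axis=1, keepdims=True)

cell = tf.contrib.rnn.LSTMCell(num_units)
state = cell.zero_state(batch_size, tf.float32)
x = tf.zeros([batch_size, x_dim])

xs = []
for t in range(seq_length):
    y = f(x, f_params)                                # query f at the current x
    with tf.variable_scope("loop", reuse=(t > 0)):    # share weights across time steps
        out, state = cell(tf.concat([x, y], axis=1), state)
        x = tf.layers.dense(out, x_dim, name="proj")  # map the cell output back to x-space
    xs.append(x)

Since the whole loop lives in the graph, gradients with respect to the LSTM weights flow through every evaluation of f, which is the behaviour this kind of "learning to learn" setup relies on.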

Related

Gradient in Pytorch and TensorFlow

I am new to PyTorch and TensorFlow and I would like to use them for solving ODEs and PDEs. My question is how to take the gradient of a 3x1 vector Y = [Y1(X1,X2,X3), Y2(X1,X2,X3), Y3(X1,X2,X3)]^T with respect to the vector X = [X1,X2,X3]^T, in both PyTorch and TensorFlow (Keras), to get the following matrix:
F = [[Y11, Y12, Y13] , [Y21, Y22, Y23], [Y31, Y32, Y33]]
where
Yij = dYi/dXj
Thanks
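A minimal sketch of one way to get this Jacobian in TF 2 with GradientTape.jacobian; the function g below is an illustrative stand-in for Y(X), not something from the question:

import tensorflow as tf

X = tf.Variable([1.0, 2.0, 3.0])

def g(X):
    # illustrative Y = [Y1, Y2, Y3], each a function of X = [X1, X2, X3]
    return tf.stack([X[0] * X[1], tf.sin(X[2]), X[0] + X[1] * X[2]])

with tf.GradientTape() as tape:
    Y = g(X)

F = tape.jacobian(Y, X)   # F[i, j] = dYi/dXj, shape (3, 3)

On the PyTorch side, torch.autograd.functional.jacobian(g, X) (with g written in torch ops and X a torch tensor) returns the same 3x3 matrix.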

LSTM from scratch in tensorflow 2

I'm trying to build an LSTM in tensorflow 2.1 from scratch, without using the one already supplied with Keras (tf.keras.layers.LSTM), just to learn and code something. To do so, I've defined a class "Model" that, when called (as model(input)), computes the matrix multiplications of the LSTM. I'm pasting part of my code here; the other parts are on GitHub (link).
class Model(object):
    [...]
    def __call__(self, inputs):
        assert inputs.shape == (vocab_size, T_steps)
        outputs = []
        for time_step in range(T_steps):
            x = inputs[:, time_step]
            x = tf.expand_dims(x, axis=1)
            # concatenate the previous hidden state with the current input
            z = tf.concat([self.h_prev, x], axis=0)
            # forget gate
            f = tf.matmul(self.W_f, z) + self.b_f
            f = tf.sigmoid(f)
            # input gate
            i = tf.matmul(self.W_i, z) + self.b_i
            i = tf.sigmoid(i)
            # output gate
            o = tf.matmul(self.W_o, z) + self.b_o
            o = tf.sigmoid(o)
            # candidate cell state
            C_bar = tf.matmul(self.W_C, z) + self.b_C
            C_bar = tf.tanh(C_bar)
            # new cell state and hidden state
            C = (f * self.C_prev) + (i * C_bar)
            h = o * tf.tanh(C)
            # output projection and softmax over the vocabulary
            v = tf.matmul(self.W_v, h) + self.b_v
            v = tf.sigmoid(v)
            y = tf.math.softmax(v, axis=0)
            self.h_prev = h
            self.C_prev = C
            outputs.append(y)
        outputs = tf.squeeze(tf.stack(outputs, axis=1))
        return outputs
But this neural network has three problems:
1) It is very slow during training. In comparison, a model that uses tf.keras.layers.LSTM() trains more than 10 times faster. Why is this? Maybe because I didn't use mini-batch training, but stochastic training?
2) The NN doesn't seem to learn anything at all. After just a few training examples, the loss settles down and won't decrease anymore; it just oscillates around the reached value. After training, I tested the NN by making it generate some text, but it only outputs nonsensical gibberish. Why isn't it learning anything?
3) The loss function outputs very high values. I've coded a categorical cross-entropy loss function but, with a 100-character-long sequence, the value of the function is over 370 per training example. Shouldn't it be way lower than this?
I wrote the loss function like this:
def compute_loss(predictions, desired_outputs):
    l = 0
    for i in range(T_steps):
        l -= tf.math.log(predictions[desired_outputs[i], i])
    return l
I know these are open questions, but unfortunately I can't make it work. So any answer, even a short one that helps me solve the problem myself, is fine :)
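On point 3, note that 370 over a 100-character sequence is about 3.7 nats per character, and exp(3.7) ≈ 40, i.e. roughly the loss of a uniform guess over a ~40-symbol vocabulary, so the absolute value is not necessarily a bug; the per-character loss is the more interpretable number. On point 1, a large part of the gap is usually eager-mode Python overhead, which wrapping the training step in @tf.function tends to narrow. As a side note, the per-sequence loss can also be computed without the Python loop; a small sketch, assuming predictions has shape (vocab_size, T_steps) and desired_outputs holds the T_steps integer targets:

import tensorflow as tf

def compute_loss_vectorized(predictions, desired_outputs):
    # predictions: (vocab_size, T_steps) softmax outputs
    # desired_outputs: (T_steps,) integer character indices
    t = tf.range(tf.shape(predictions)[1])
    idx = tf.stack([tf.cast(desired_outputs, tf.int32), t], axis=1)
    picked = tf.gather_nd(predictions, idx)        # probability of the true character at each step
    return -tf.reduce_sum(tf.math.log(picked))     # summed negative log-likelihood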

How to apply class weights in linear classifier for binary classification?

This is the linear classifier that I am using to perform binary classification; here is the code snippet:
my_optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)

# Create a linear classifier object
linear_classifier = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    optimizer=my_optimizer
)
linear_classifier.train(input_fn=training_input_fn, steps=steps)
The dataset is imbalanced and there are only two classes, yes/no. The number of NO examples is 36548, while the number of YES examples is 4640.
How can I apply balancing to this data? I have been searching around and found material about class weights, but I couldn't work out how to create the class weights and how to apply them to the train method in TensorFlow.
Here is how I am calculating losses:
training_probabilities = linear_classifier.predict(input_fn = training_predict_input_fn)
training_probabilities = np.array([item['probabilities'] for item in training_probabilities])
validation_probabilities = linear_classifier.predict(input_fn=validation_predict_input_fn)
validation_probabilities = np.array([item['probabilities'] for item in validation_probabilities])
training_log_loss = metrics.log_loss(training_targets, training_probabilities)
validation_log_loss = metrics.log_loss(validation_targets, validation_probabilities)
I assume that you are using the log_loss function from sklearn to compute your loss. If that is the case, you can add class weights by using the sample_weight argument and passing an array containing the weight for each data point. sample_weight is a rolled-out version of class_weights: you can compute the sample_weight array with sklearn's compute_sample_weight utility, as shown below.
Add the following lines to your code:
from sklearn.utils.class_weight import compute_sample_weight

sample_wts = compute_sample_weight("balanced", training_targets)
training_log_loss = metrics.log_loss(training_targets, training_probabilities, sample_weight=sample_wts)
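The sample_weight above only affects the evaluation metric. If you also want the estimator itself to train on balanced weights, one option is the weight_column argument of tf.estimator.LinearClassifier. A rough sketch, assuming training_examples is the features dict/DataFrame consumed by your input_fn and training_targets holds the 0/1 labels (the names example_weight and class_weights are illustrative):

import numpy as np
import tensorflow as tf

# "balanced" heuristic: weight = n_samples / (n_classes * n_examples_in_class)
n_no, n_yes = 36548, 4640
class_weights = {0: (n_no + n_yes) / (2.0 * n_no),
                 1: (n_no + n_yes) / (2.0 * n_yes)}

# attach one weight per example as an extra column of the features
training_examples["example_weight"] = np.array(
    [class_weights[int(y)] for y in training_targets], dtype=np.float32)

linear_classifier = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    optimizer=my_optimizer,
    weight_column="example_weight",   # per-example weight applied to the training loss
)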

How to implement Gaussian Mixture for VAE?

I feel like I don't really know what I'm doing, so I will describe what I think I'm doing, what I want to do, and where that fails.
Given a normal variational autoencoder:
...
net = tf.layers.dense(net, units=code_size * 2, activation=None)
mean = net[:, :code_size]
std = net[:, code_size:]
posterior = tfd.MultivariateNormalDiagWithSoftplusScale(mean, std)
net = posterior.sample()
net = tf.layers.dense(net, units=input_size, ...)
...
What I think I'm doing: Let the neural network find a "mean" and "std" value and use it to create a Normal distribution (Gaussian).
Sample from that distribution and use that for the decoder.
In other words: learn a Gaussian distribution of the encoding
Now I would like to do the same for a mixture of Gaussians.
...
net = tf.layers.dense(net, units=code_size * 2 * code_size, activation=None)
means, stds = tf.split(net, 2, axis=-1)
means = tf.split(means, code_size, axis=-1)
stds = tf.split(stds, code_size, axis=-1)
components = [tfd.MultivariateNormalDiagWithSoftplusScale(means[i], stds[i]) for i in range(code_size)]
probs = [1.0 / code_size] * code_size
gauss_mix = tfd.Mixture(cat=tfd.Categorical(probs=probs), components=components)
net = gauss_mix.sample()
net = tf.layers.dense(net, units=input_size, ...)
...
That seemed relatively straightforward to me, except that it fails with the following error:
Shapes () and (?,) are not compatible
This seems to come from probs not having the batch dimension (I didn't think it would need that).
I thought that probs defines the probability between the components.
If I define a probs that also has the batch dimension, I get the following cryptic error that I don't know how to interpret:
Dimension -1796453376 must be >= 0
Do I generally misunderstand some concepts?
Or what do I need to do differently?
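One way to sidestep the probs shape bookkeeping is tfd.MixtureSameFamily with a batched MultivariateNormalDiag. A rough sketch, assuming tfd is tensorflow_probability.distributions (MixtureSameFamily also exists in tf.contrib.distributions), equal mixture weights, and an explicit softplus on the scales to mirror MultivariateNormalDiagWithSoftplusScale:

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

net = tf.layers.dense(net, units=code_size * 2 * code_size, activation=None)
means, raw_stds = tf.split(net, 2, axis=-1)
# reshape to [batch, n_components, code_size], with n_components = code_size as in the question
means = tf.reshape(means, [-1, code_size, code_size])
stds = tf.nn.softplus(tf.reshape(raw_stds, [-1, code_size, code_size]))

batch_size = tf.shape(net)[0]
gauss_mix = tfd.MixtureSameFamily(
    mixture_distribution=tfd.Categorical(logits=tf.zeros([batch_size, code_size])),  # equal weights
    components_distribution=tfd.MultivariateNormalDiag(loc=means, scale_diag=stds))

net = gauss_mix.sample()   # [batch, code_size], fed to the decoder as before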

Soft attention from scratch for video sequences

I am trying to implement soft attention for video sequence classification. As there are a lot of implementations and examples for NLP, I tried following this schema, but for video [1]. Basically, an LSTM with an attention model in between.
[1] https://blog.heuritech.com/2016/01/20/attention-mechanism/
My code for the attention layer is the following, and I am not sure it is implemented correctly.
def attention_layer(self, input, context):
    # Input is a Tensor: [batch_size, lstm_units]
    # Input (Seq_length, batch_size, lstm_units)
    # Context is a LSTMStateTuple: [batch_size, lstm_units]. Hidden_state, output = StateTuple
    hidden_state, _ = context
    weights_y = tf.get_variable("att_weights_Y", [self.lstm_units, self.lstm_units], initializer=tf.contrib.layers.xavier_initializer())
    weights_c = tf.get_variable("att_weights_c", [self.lstm_units, self.lstm_units], initializer=tf.contrib.layers.xavier_initializer())
    z_ = []
    for feat in input:
        # Equation => M = tanh(Wc c + Wy y)
        Wcc = tf.matmul(hidden_state, weights_c)
        Wyy = tf.matmul(feat, weights_y)
        m = tf.add(Wcc, Wyy)
        m = tf.tanh(m, name='M_matrix')
        # Equation => s = softmax(m)
        s = tf.nn.softmax(m, name='softmax_att')
        z = tf.multiply(feat, s)
        z_.append(z)
    out = tf.stack(z_, axis=1)
    out = tf.reduce_sum(out, 1)
    return out, s
So, adding this layer in between my LSTMs (or at the beginning of my two LSTMs) makes the training very slow. More specifically, it takes a lot of time when I declare my optimizer:
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
My questions are:
Is the implementation correct (a comparison sketch follows below)? If it is, is there a way to optimize it so that it trains properly?
I was not able to make it work with the seq2seq APIs. Is there any API in TensorFlow that lets me tackle this specific issue?
Does it actually make sense to use this for sequence classification?
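For comparison, here is a minimal sketch of a common soft-attention formulation in which a single scalar score is computed per time step and the softmax is taken over the time axis rather than over the feature dimension. The names attention_over_time and score_vector are illustrative, not from the original post:

import tensorflow as tf

def attention_over_time(self, inputs, context):
    # inputs: sequence of per-time-step features, each [batch_size, lstm_units]
    # context: LSTMStateTuple; the hidden state is used as the query
    hidden_state, _ = context
    W_y = tf.get_variable("att_W_y", [self.lstm_units, self.lstm_units])
    W_c = tf.get_variable("att_W_c", [self.lstm_units, self.lstm_units])
    score_vector = tf.get_variable("att_v", [self.lstm_units, 1])

    scores = []
    for feat in inputs:
        m = tf.tanh(tf.matmul(hidden_state, W_c) + tf.matmul(feat, W_y))
        scores.append(tf.matmul(m, score_vector))      # one scalar score per example
    scores = tf.concat(scores, axis=1)                 # [batch_size, seq_length]
    alphas = tf.nn.softmax(scores, axis=1)             # attention weights over time steps

    feats = tf.stack(inputs, axis=1)                   # [batch_size, seq_length, lstm_units]
    out = tf.reduce_sum(feats * tf.expand_dims(alphas, -1), axis=1)
    return out, alphas

In this form the attended output is a weighted average of the per-frame features, which is the usual reading of soft attention for sequence classification.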