tensorflow neural net with continuous / floating point output?

I'm trying to create a simple neural net in tensorflow that learns some simple relationship between inputs and outputs (for example, y=-x) where the inputs and outputs are floating point values (meaning, no softmax used on the output).
I feel like this should be pretty easy to do, but I must be messing up somewhere. Wondering if there are any tutorials or examples out there that do something similar. I looked through the existing tensorflow tutorials and several other sources of tensorflow examples I found by googling, but didn't see what I was looking for.
Here's a trimmed down version of what I've been trying. In this particular version, I've noticed that my weights and biases always seem to be stuck at zero. Perhaps this is due to my single input and single output?
I've had good luck altering the mnist example for various nefarious purposes, but everything I've gotten to work successfully used softmax on the output for categorization. If I can figure out how to generate a raw floating point output from my neural net, there are several fun projects I'd like to do with it.
Anyone see what I'm missing? Thanks in advance!
- J.
# Trying to define the simplest possible neural net where the output layer of the neural net is a single
# neuron with a "continuous" (a.k.a floating point) output. I want the neural net to output a continuous
# value based off one or more continuous inputs. My real problem is more complex, but this is the simplest
# representation of it for explaining my issue. Even though I've oversimplified this to look like a simple
# linear regression problem (y=m*x), I want to apply this to more complex neural nets. But if I can't get
# it working with this simple problem, then I won't get it working for anything more complex.
import tensorflow as tf
import random
import numpy as np
INPUT_DIMENSION = 1
OUTPUT_DIMENSION = 1
TRAINING_RUNS = 100
BATCH_SIZE = 10000
VERF_SIZE = 1
# Generate two arrays, the first array being the inputs that need trained on, and the second array containing outputs.
def generate_test_point():
    x = random.uniform(-8, 8)
    # To keep it simple, output is just -x.
    out = -x
    return (np.array([x]), np.array([out]))

# Generate a bunch of data points and then package them up in the array format needed by
# tensorflow
def generate_batch_data(num):
    xs = []
    ys = []
    for i in range(num):
        x, y = generate_test_point()
        xs.append(x)
        ys.append(y)
    return (np.array(xs), np.array(ys))
# Define a single-layer neural net. Originally based off the tensorflow mnist for beginners tutorial
# Create a placeholder for our input variable
x = tf.placeholder(tf.float32, [None, INPUT_DIMENSION])
# Create variables for our neural net weights and bias
W = tf.Variable(tf.zeros([INPUT_DIMENSION, OUTPUT_DIMENSION]))
b = tf.Variable(tf.zeros([OUTPUT_DIMENSION]))
# Define the neural net. Note that since I'm not trying to classify digits as in the tensorflow mnist
# tutorial, I have removed the softmax op. My expectation is that 'net' will return a floating point
# value.
net = tf.matmul(x, W) + b
# Create a placeholder for the expected result during training
expected = tf.placeholder(tf.float32, [None, OUTPUT_DIMENSION])
# Same training as used in mnist example
cross_entropy = -tf.reduce_sum(expected*tf.log(tf.clip_by_value(net,1e-10,1.0)))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.Session()
init = tf.initialize_all_variables()
sess.run(init)
# Perform our training runs
for i in range(TRAINING_RUNS):
    print "training run: ", i,

    batch_inputs, batch_outputs = generate_batch_data(BATCH_SIZE)

    # I've found that my weights and bias values are always zero after training, and I'm not sure why.
    sess.run(train_step, feed_dict={x: batch_inputs, expected: batch_outputs})

    # Test our accuracy as we train... I am defining my accuracy as the error between what I
    # expected and the actual output of the neural net.
    #accuracy = tf.reduce_mean(tf.sub(expected, net))
    accuracy = tf.sub(expected, net)  # using just subtract since I made my verification size 1 for debug

    # Uncomment this to debug
    #import pdb; pdb.set_trace()
    batch_inputs, batch_outputs = generate_batch_data(VERF_SIZE)
    result = sess.run(accuracy, feed_dict={x: batch_inputs, expected: batch_outputs})

    print " progress: "
    print " inputs: ", batch_inputs
    print " outputs:", batch_outputs
    print " actual: ", result

Your loss should be the squared difference of output and true value:
loss = tf.reduce_mean(tf.square(expected - net))
This way the network learns to optimize this loss and make the output closer to the real result. Cross entropy should only be used for output values between 0 and 1 i.e. for classification.

If anyone is interested, I got this example to work. Here's the code:
# Trying to define the simplest possible neural net where the output layer of the neural net is a single
# neuron with a "continuous" (a.k.a floating point) output. I want the neural net to output a continuous
# value based off one or more continuous inputs. My real problem is more complex, but this is the simplest
# representation of it for explaining my issue. Even though I've oversimplified this to look like a simple
# linear regression problem (y=m*x), I want to apply this to more complex neural nets. But if I can't get
# it working with this simple problem, then I won't get it working for anything more complex.
import tensorflow as tf
import random
import numpy as np
INPUT_DIMENSION = 1
OUTPUT_DIMENSION = 1
TRAINING_RUNS = 100
BATCH_SIZE = 10000
VERF_SIZE = 1
# Generate two arrays, the first array being the inputs that need trained on, and the second array containing outputs.
def generate_test_point():
    x = random.uniform(-8, 8)
    # To keep it simple, output is just -x.
    out = -x
    return (np.array([x]), np.array([out]))

# Generate a bunch of data points and then package them up in the array format needed by
# tensorflow
def generate_batch_data(num):
    xs = []
    ys = []
    for i in range(num):
        x, y = generate_test_point()
        xs.append(x)
        ys.append(y)
    return (np.array(xs), np.array(ys))
# Define a single-layer neural net. Originally based off the tensorflow mnist for beginners tutorial
# Create a placeholder for our input variable
x = tf.placeholder(tf.float32, [None, INPUT_DIMENSION])
# Create variables for our neural net weights and bias
W = tf.Variable(tf.zeros([INPUT_DIMENSION, OUTPUT_DIMENSION]))
b = tf.Variable(tf.zeros([OUTPUT_DIMENSION]))
# Define the neural net. Note that since I'm not trying to classify digits as in the tensorflow mnist
# tutorial, I have removed the softmax op. My expectation is that 'net' will return a floating point
# value.
net = tf.matmul(x, W) + b
# Create a placeholder for the expected result during training
expected = tf.placeholder(tf.float32, [None, OUTPUT_DIMENSION])
# Same training as used in mnist example
loss = tf.reduce_mean(tf.square(expected - net))
# cross_entropy = -tf.reduce_sum(expected*tf.log(tf.clip_by_value(net,1e-10,1.0)))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
sess = tf.Session()
init = tf.initialize_all_variables()
sess.run(init)
# Perform our training runs
for i in range(TRAINING_RUNS):
    print("training run: ", i)

    batch_inputs, batch_outputs = generate_batch_data(BATCH_SIZE)

    # I've found that my weights and bias values are always zero after training, and I'm not sure why.
    sess.run(train_step, feed_dict={x: batch_inputs, expected: batch_outputs})

    # Test our accuracy as we train... I am defining my accuracy as the error between what I
    # expected and the actual output of the neural net.
    # accuracy = tf.reduce_mean(tf.sub(expected, net))
    accuracy = tf.subtract(expected, net)  # using just subtract since I made my verification size 1 for debug

    # Uncomment this to debug
    # import pdb; pdb.set_trace()
    print("W=%f, b=%f" % (sess.run(W), sess.run(b)))

    batch_inputs, batch_outputs = generate_batch_data(VERF_SIZE)
    result = sess.run(accuracy, feed_dict={x: batch_inputs, expected: batch_outputs})

    print(" progress: ")
    print(" inputs: ", batch_inputs)
    print(" outputs:", batch_outputs)
    print(" actual: ", result)

When using the built-in, high-level (Keras) way of constructing the NN, I used
loss=tf.keras.losses.MeanSquaredError()
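For reference, here is a minimal sketch of my own (assuming TF 2.x and the Keras API, not part of the original answer) that fits the same y = -x relationship with a mean-squared-error loss:
import numpy as np
import tensorflow as tf

# a single Dense unit is equivalent to the W*x + b net above
x_data = np.random.uniform(-8, 8, size=(10000, 1)).astype("float32")
y_data = -x_data

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.SGD(0.01),
              loss=tf.keras.losses.MeanSquaredError())
model.fit(x_data, y_data, epochs=5, batch_size=32, verbose=0)

# the learned weight should be close to -1 and the bias close to 0
print(model.layers[0].get_weights())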

Related

tf.keras.layers.BatchNormalization with trainable=False appears to not update its internal moving mean and variance

I am trying to find out how exactly the BatchNormalization layer behaves in TensorFlow. I came up with the following piece of code, which to the best of my knowledge should be a perfectly valid Keras model; however, the moving mean and variance of BatchNormalization don't appear to be updated.
From the docs: https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization
in the case of the BatchNormalization layer, setting trainable = False on the layer means that the layer will be subsequently run in inference mode (meaning that it will use the moving mean and the moving variance to normalize the current batch, rather than using the mean and variance of the current batch).
I expect the model to return a different value with each subsequent predict call.
What I see, however, are the exact same values returned 10 times.
Can anyone explain to me why the BatchNormalization layer does not update its internal values?
import tensorflow as tf
import numpy as np

if __name__ == '__main__':
    np.random.seed(1)
    x = np.random.randn(3, 5) * 5 + 0.3

    bn = tf.keras.layers.BatchNormalization(trainable=False, epsilon=1e-9)
    z = input = tf.keras.layers.Input([5])
    z = bn(z)

    model = tf.keras.Model(inputs=input, outputs=z)

    for i in range(10):
        print(x)
        print(model.predict(x))
        print()
I use TensorFlow 2.1.0
Okay, I found the mistake in my assumptions. The moving average is being updated during training not during inference as I thought. This makes perfect sense, as updating the moving averages during inference would likely result in an unstable production model (for example a long sequence of highly pathological input samples [e.g. such that their generating distribution differs drastically from the one on which the network was trained] could potentially bias the network and result in worse performance on valid input samples).
The trainable parameter is useful when you're fine-tuning a pretrained model and want to freeze some of the layers of the network even during training, because when you call model.predict(x) (or even model(x) or model(x, training=False)), the layer automatically uses the moving averages instead of the batch statistics.
The code below demonstrates this clearly
import tensorflow as tf
import numpy as np

if __name__ == '__main__':
    np.random.seed(1)
    x = np.random.randn(10, 5) * 5 + 0.3

    z = input = tf.keras.layers.Input([5])
    z = tf.keras.layers.BatchNormalization(trainable=True, epsilon=1e-9, momentum=0.99)(z)

    model = tf.keras.Model(inputs=input, outputs=z)

    # a dummy loss function
    model.compile(loss=lambda x, y: (x - y) ** 2)

    # a dummy fit just to update the batchnorm moving averages
    model.fit(x, x, batch_size=3, epochs=10)

    # first predict uses the moving averages from training
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # outputs the same thing as previous predict
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # here calling the model with training=True results in update of moving averages
    # furthermore, it uses the batch mean and variance as in training,
    # so the result is very different
    pred = model(x, training=True).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()

    # here we see again that the moving averages are used but they differ slightly after
    # the previous call, as expected
    pred = model(x).numpy()
    print(pred.mean(axis=0))
    print(pred.var(axis=0))
    print()
In the end, I found that the documentation (https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) mentions this:
When performing inference using a model containing batch normalization, it is generally (though not always) desirable to use accumulated statistics rather than mini-batch statistics. This is accomplished by passing training=False when calling the model, or using model.predict.
Hopefully this will help someone with a similar misunderstanding in the future.
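As a quick sanity check, here is a small sketch of my own (assuming TF 2.x; not part of the original answer) that confirms the moving statistics change after fit() but not after predict():
import numpy as np
import tensorflow as tf

x = np.random.randn(64, 5).astype("float32")

inp = tf.keras.layers.Input([5])
bn = tf.keras.layers.BatchNormalization()
model = tf.keras.Model(inputs=inp, outputs=bn(inp))
model.compile(loss="mse")

before = bn.moving_mean.numpy().copy()
model.predict(x)                                    # inference: moving statistics untouched
print(np.allclose(before, bn.moving_mean.numpy()))  # True

model.fit(x, x, epochs=1, verbose=0)                # training: moving statistics updated
print(np.allclose(before, bn.moving_mean.numpy()))  # False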

Pytorch how to get the gradient of loss function twice

Here is what I'm trying to implement:
We calculate loss based on F(X), as usual. But we also define "adversarial loss" which is a loss based on F(X + e). e is defined as dF(X)/dX multiplied by some constant. Both loss and adversarial loss are backpropagated for the total loss.
In tensorflow, this part (getting dF(X)/dX) can be coded like below:
grad, = tf.gradients( loss, X )
grad = tf.stop_gradient(grad)
e = constant * grad
Below is my pytorch code:
class DocReaderModel(object):
    def __init__(self, embedding=None, state_dict=None):
        self.train_loss = AverageMeter()
        self.embedding = embedding
        self.network = DNetwork(opt, embedding)
        self.optimizer = optim.SGD(parameters)

    def adversarial_loss(self, batch, loss, embedding, y):
        self.optimizer.zero_grad()
        loss.backward(retain_graph=True)

        grad = embedding.grad
        grad.detach_()
        perturb = F.normalize(grad, p=2) * 0.5

        self.optimizer.zero_grad()
        adv_embedding = embedding + perturb
        network_temp = DNetwork(self.opt, adv_embedding)  # This is how to get F(X)
        network_temp.training = False
        network_temp.cuda()
        start, end, _ = network_temp(batch)  # This is how to get F(X)
        del network_temp  # I even deleted this instance.

        return F.cross_entropy(start, y[0]) + F.cross_entropy(end, y[1])

    def update(self, batch):
        self.network.train()
        start, end, pred = self.network(batch)
        loss = F.cross_entropy(start, y[0]) + F.cross_entropy(end, y[1])
        loss_adv = self.adversarial_loss(batch, loss, self.network.lexicon_encoder.embedding.weight, y)
        loss_total = loss + loss_adv

        self.optimizer.zero_grad()
        loss_total.backward()
        self.optimizer.step()
I have a few questions:
1) I substituted tf.stop_gradient with grad.detach_(). Is this correct?
2) I was getting "RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time." so I added retain_graph=True to the loss.backward() call. That specific error went away.
However, now I'm getting a memory error after a few epochs (RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCStorage.cu:58). I suspect I'm unnecessarily retaining the graph.
Can someone let me know pytorch's best practice on this? Any hint / even short comment will be highly appreciated.
I think you are trying to implement a generative adversarial network (GAN), but from the code I can't quite follow what you are trying to achieve, as there are a few pieces missing for a GAN to work. I can see there's a discriminator network module, DNetwork, but the generator network module is missing.
If I had to guess, when you say 'loss function twice', I assume you mean you have one loss function for the discriminator net and another for the generator net. If that's the case, let me share how I would implement a basic GAN model.
As an example, let's take a look at this Wasserstein GAN Jupyter notebook
I'll skip the less important bits and zoom into the important ones here:
First, import PyTorch libraries and set up
# Set up batch size, image size, and size of noise vector:
bs, sz, nz = 64, 64, 100 # nz is the size of the latent z vector for creating some random noise later
Build a discriminator module
class DCGAN_D(nn.Module):
    def __init__(self):
        ... truncated, the usual neural nets stuffs, layers, etc ...

    def forward(self, input):
        ... truncated, the usual neural nets stuffs, layers, etc ...
Build a generator module
class DCGAN_G(nn.Module):
    def __init__(self):
        ... truncated, the usual neural nets stuffs, layers, etc ...

    def forward(self, input):
        ... truncated, the usual neural nets stuffs, layers, etc ...
Put them all together
netG = DCGAN_G().cuda()
netD = DCGAN_D().cuda()
The optimizer needs to be told which variables to optimize; a module automatically keeps track of its variables.
optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-4)
optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-4)
One forward step and one backward step for Discriminator
Here, the network calculates the gradient during the backward pass, depending on the input to this function. In my case, I have three types of losses: generator loss, discriminator real-image loss, and discriminator fake-image loss. I can get the gradient of the loss function three times, for three different net passes.
def step_D(input, init_grad):
    # input can be from generator's generated image data or input image from dataset
    err = netD(input)
    err.backward(init_grad)  # backward pass net to calculate gradient
    return err  # loss
Control trainable parameters [IMPORTANT]
Trainable parameters in the model are those that require gradients.
def make_trainable(net, val):
    for p in net.parameters():
        p.requires_grad = val  # note: this is later set to False for netD in the netG update step of the train loop
In TensorFlow, this part can be coded like below:
grad = tf.gradients(loss, X)
grad = tf.stop_gradient(grad)
So, I think this will answer your first question, "I substituted tf.stop_gradient with grad.detach_(). Is this correct?"
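To make that concrete, here is a tiny sketch of my own (not from the original answer) showing the usual PyTorch analogue of tf.stop_gradient: compute the gradient, detach() it so it is treated as a constant, and build the perturbed loss from it:
import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()                                     # F(X)

grad, = torch.autograd.grad(loss, x, retain_graph=True)   # dF(X)/dX
e = 0.5 * grad.detach()                                   # constant perturbation; no gradient flows through it

loss_adv = ((x + e) ** 2).sum()                           # F(X + e)
(loss + loss_adv).backward()                              # total loss backpropagates through x only
print(x.grad)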
Train loop
You can see here how the three different loss functions are being called.
def train(niter, first=True):
    for epoch in range(niter):
        # Make iterable from PyTorch DataLoader
        data_iter = iter(dataloader)
        i = 0
        while i < n:
            ###########################
            # (1) Update D network
            ###########################
            make_trainable(netD, True)

            # train the discriminator d_iters times
            d_iters = 100

            j = 0
            while j < d_iters and i < n:
                j += 1
                i += 1

                # clamp parameters to a cube
                for p in netD.parameters():
                    p.data.clamp_(-0.01, 0.01)

                data = next(data_iter)

                ##### train with real #####
                real_cpu, _ = data
                real_cpu = real_cpu.cuda()
                real = Variable(data[0].cuda())
                netD.zero_grad()

                # Real image discriminator loss
                errD_real = step_D(real, one)

                ##### train with fake #####
                fake = netG(create_noise(real.size()[0]))
                input.data.resize_(real.size()).copy_(fake.data)

                # Fake image discriminator loss
                errD_fake = step_D(input, mone)

                # Discriminator loss
                errD = errD_real - errD_fake
                optimizerD.step()

            ###########################
            # (2) Update G network
            ###########################
            make_trainable(netD, False)
            netG.zero_grad()

            # Generator loss
            errG = step_D(netG(create_noise(bs)), one)
            optimizerG.step()

            print('[%d/%d][%d/%d] Loss_D: %f Loss_G: %f Loss_D_real: %f Loss_D_fake %f'
                  % (epoch, niter, i, n,
                     errD.data[0], errG.data[0], errD_real.data[0], errD_fake.data[0]))
"I was getting "RuntimeError: Trying to backward through the graph a second time..."
PyTorch has this behaviour: to reduce GPU memory usage, during the .backward() call all the intermediary results (saved activations, etc.) are deleted when they are no longer needed. Therefore, if you try to call .backward() again, the intermediary results don't exist and the backward pass cannot be performed (and you get the error you see).
It depends on what you are trying to do. You can call .backward(retain_graph=True) to make a backward pass that will not delete intermediary results, and so you will be able to call .backward() again. All but the last call to backward should have the retain_graph=True option.
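A tiny sketch of my own illustrating this (note that the gradients from the two calls accumulate in .grad):
import torch

w = torch.tensor(2.0, requires_grad=True)
loss = (3 * w) ** 2                # d(loss)/dw = 18 * w = 36

loss.backward(retain_graph=True)   # keeps the intermediary buffers alive
loss.backward()                    # second call now works; gradients accumulate
print(w.grad)                      # tensor(72.)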
Can someone let me know pytorch's best practice on this
From the PyTorch code above, and from the way PyTorch tries to stay Pythonic, you can get a sense of its best practices.
If you want to work with higher-order derivatives (i.e. a derivative of a derivative) take a look at the create_graph option of backward.
For example:
loss = get_loss()
loss.backward(create_graph=True)
loss_grad_penalty = loss + loss.grad
loss_grad_penalty.backward()

Linear Regression model On tensorflow can't learn bias

I am trying to train a linear regression model in Tensorflow using some generated data. The model seems to learn the slope of the line, but is unable to learn the bias.
I have tried changing the number of epochs, the weight (slope), and the bias, but every time the bias learnt by the model comes out to be zero. I don't know where I am going wrong and some help would be appreciated.
Here is the code.
import numpy as np
import tensorflow as tf

# assume the linear model to be Y = W*X + b
X = tf.placeholder(tf.float32, [None, 1])
Y = tf.placeholder(tf.float32, [None, 1])

# the weight and biases
W = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))

# the model
prediction = tf.matmul(X, W) + b

# the cost function
cost = tf.reduce_mean(tf.square(Y - prediction))

# Use gradient descent
learning_rate = 0.000001
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

steps = 1000
epochs = 10
Verbose = False

# In the end, the model should learn these values
test_w = 3
bias = 10

for _ in xrange(epochs):
    for i in xrange(steps):
        # make fake data for the model
        # feed one example at a time
        # stochastic gradient descent, because we only use one example at a time
        x_temp = np.array([[i]])
        y_temp = np.array([[test_w * i + bias]])

        # train the model using the data
        feed_dict = {X: x_temp, Y: y_temp}
        sess.run(train_step, feed_dict=feed_dict)

        if Verbose and i % 100 == 0:
            print("Iteration No: %d" % i)
            print("W = %f" % sess.run(W))
            print("b = %f" % sess.run(b))

print("Finally:")
print("W = %f" % sess.run(W))
print("b = %f" % sess.run(b))
# These values should be close to the values we used to generate data
https://github.com/HarshdeepGupta/tensorflow_notebooks/blob/master/Linear%20Regression.ipynb
Outputs are in the last line of code.
The model needs to learn test_w and bias (in the notebook link, it is in the 3rd cell, after the first comment), which are set to 3 and 10 respectively.
The model correctly learns the weight(slope), but is unable to learn the bias. Where is the error?
The main problem is that you are feeding just one sample at a time to the model. This makes your optimizer very unstable, which is why you have to use such a small learning rate. I suggest feeding more samples in each step.
If you insist on feeding one sample at a time, consider using an optimizer with momentum, like tf.train.AdamOptimizer(learning_rate). This way you can increase the learning rate and reach convergence.
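Here is a rough sketch of my own (reusing X, Y, W, b, train_step, sess, epochs, steps, test_w and bias from the question's code) of what feeding a whole batch per step could look like. With batched inputs in a modest range, a larger learning rate such as 0.01 typically converges to W close to 3 and b close to 10:
batch_size = 100
for _ in range(epochs):
    for i in range(steps):
        # sample a whole batch of (x, y) pairs instead of a single point
        x_temp = np.random.uniform(-10, 10, size=(batch_size, 1))
        y_temp = test_w * x_temp + bias
        sess.run(train_step, feed_dict={X: x_temp, Y: y_temp})

print("W = %f" % sess.run(W))
print("b = %f" % sess.run(b))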

How do I share weights across different RNN cells that feed in different inputs in Tensorflow?

I'm curious if there is a good way to share weights across different RNN cells while still feeding each cell different inputs.
The graph that I am trying to build is like this:
where there are three LSTM Cells in orange which operate in parallel and between which I would like to share the weights.
I've managed to implement something similar to what I want using a placeholder (see below for code). However, using a placeholder breaks the gradient calculations of the optimizer and doesn't train anything past the point where I use the placeholder. Is it possible to do this a better way in Tensorflow?
I'm using Tensorflow 1.2 and python 3.5 in an Anaconda environment on Windows 7.
Code:
def ann_model(cls, data, act=tf.nn.relu):
    with tf.name_scope('ANN'):
        with tf.name_scope('ann_weights'):
            ann_weights = tf.Variable(tf.random_normal([1, cls.n_ann_nodes]))
        with tf.name_scope('ann_bias'):
            ann_biases = tf.Variable(tf.random_normal([1]))
        out = act(tf.matmul(data, ann_weights) + ann_biases)
    return out

def rnn_lower_model(cls, data):
    with tf.name_scope('RNN_Model'):
        data_tens = tf.split(data, cls.sequence_length, 1)
        for i in range(len(data_tens)):
            data_tens[i] = tf.reshape(data_tens[i], [cls.batch_size, cls.n_rnn_inputs])

        rnn_cell = tf.nn.rnn_cell.BasicLSTMCell(cls.n_rnn_nodes_lower)
        outputs, states = tf.contrib.rnn.static_rnn(rnn_cell, data_tens, dtype=tf.float32)

        with tf.name_scope('RNN_out_weights'):
            out_weights = tf.Variable(tf.random_normal([cls.n_rnn_nodes_lower, 1]))
        with tf.name_scope('RNN_out_biases'):
            out_biases = tf.Variable(tf.random_normal([1]))

        # Encode the output of the RNN into one estimate per entry in
        # the input sequence
        predict_list = []
        for i in range(cls.sequence_length):
            predict_list.append(tf.matmul(outputs[i], out_weights) + out_biases)
    return predict_list

def create_graph(cls, sess):
    # Initializes the graph
    with tf.name_scope('input'):
        cls.x = tf.placeholder('float', [cls.batch_size, cls.sequence_length, cls.n_inputs])
    with tf.name_scope('labels'):
        cls.y = tf.placeholder('float', [cls.batch_size, 1])
    with tf.name_scope('community_id'):
        cls.c = tf.placeholder('float', [cls.batch_size, 1])

    # Define Placeholder to provide variable input into the
    # RNNs with shared weights
    cls.input_place = tf.placeholder('float', [cls.batch_size, cls.sequence_length, cls.n_rnn_inputs])

    # global step used in optimizer
    global_step = tf.Variable(0, trainable=False)

    # Create ANN
    ann_output = cls.ann_model(cls.c)

    # Combine output of ANN with other input data x
    ann_out_seq = tf.reshape(tf.concat([ann_output for _ in range(cls.sequence_length)], 1),
                             [cls.batch_size, cls.sequence_length, cls.n_ann_nodes])
    cls.rnn_input = tf.concat([ann_out_seq, cls.x], 2)

    # Create 'unrolled' RNN by creating sequence_length many RNN Cells that
    # share the same weights.
    with tf.variable_scope('Lower_RNNs'):
        # Create RNNs
        daily_prediction, daily_prediction1 = [cls.rnn_lower_model(cls.input_place)] * 2
When training, mini-batches are calculated in two steps:
RNNinput = sess.run(cls.rnn_input, feed_dict={cls.x: batch_x,
                                              cls.y: batch_y,
                                              cls.c: batch_c})
_ = sess.run(cls.optimizer, feed_dict={cls.input_place: RNNinput,
                                       cls.y: batch_y,
                                       cls.x: batch_x,
                                       cls.c: batch_c})
Thanks for your help. Any ideas would be appreciated.
You have three different inputs, input_1, input_2, input_3, fed into an LSTM model whose parameters are shared. You then concatenate the outputs of the three LSTMs and pass them to a final LSTM layer. The code should look something like this:
# Create input placeholder for the network
input_1 = tf.placeholder(...)
input_2 = tf.placeholder(...)
input_3 = tf.placeholder(...)
# create a shared rnn layer
def shared_rnn(...):
    ...
    rnn_cell = tf.nn.rnn_cell.BasicLSTMCell(...)

# generate the outputs for each input
with tf.variable_scope('lower_lstm') as scope:
    out_input_1 = shared_rnn(...)
    scope.reuse_variables()  # the variables will be reused.
    out_input_2 = shared_rnn(...)
    scope.reuse_variables()
    out_input_3 = shared_rnn(...)

# verify whether the variables are reused
for v in tf.global_variables():
    print(v.name)

# concat the three outputs
output = tf.concat...

# Pass it to the final_lstm layer and out the logits
logits = final_layer(output, ...)

train_op = ...

# train
sess.run(train_op, feed_dict={input_1: in1, input_2: in2, input_3: in3, labels: ...})
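To make the reuse mechanism concrete, here is a small self-contained sketch of my own (assuming TF 1.x, and using a plain dense layer in place of the LSTM cell purely for illustration) showing that a second call inside the same variable scope maps onto the same variables:
import tensorflow as tf

def shared_layer(x):
    # tf.get_variable creates the variables on the first call and reuses them afterwards
    w = tf.get_variable('w', [4, 8])
    b = tf.get_variable('b', [8], initializer=tf.zeros_initializer())
    return tf.matmul(x, w) + b

x1 = tf.placeholder(tf.float32, [None, 4])
x2 = tf.placeholder(tf.float32, [None, 4])

with tf.variable_scope('shared') as scope:
    y1 = shared_layer(x1)
    scope.reuse_variables()    # the second call maps onto the same 'shared/w' and 'shared/b'
    y2 = shared_layer(x2)

# only one copy of each variable exists
for v in tf.global_variables():
    print(v.name)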
I ended up rethinking my architecture a little and came up with a more workable solution.
Instead of duplicating the middle layer of LSTM cells to create three different cells with the same weights, I chose to run the same cell three times. The results of each run were stored in a tf.Variable acting as a 'buffer', and then that whole variable was used as the input to the final LSTM layer.
I drew a diagram here
Implementing it this way allowed for valid outputs after 3 time steps, and didn't break TensorFlow's backpropagation algorithm (i.e. the nodes in the ANN could still train).
The only tricky thing was to make sure that the buffer was in the correct sequential order for the final RNN.

Custom dropout in tensorflow

I'm training a DNN model on some data, and am hoping to analyze the learned weights to learn something about the true system I am studying (signaling cascades in biology). I guess one could say I am using Artificial NNs to learn about Biological NNs.
For each of my training examples, I have removed a single gene that is responsible for signaling at the top layer.
As I am modeling this signaling cascade as a NN, and removing one of the nodes in the first hidden layer, I realized that I'm doing a real life version of dropout.
I would therefore like to use dropout to train my model; however, the implementations of dropout that I have seen online seem to drop out a node at random. What I need is a way to specify which node to drop out for each training example.
Any advice on how to implement this? I'm open to any package, but right now everything i have already done is in Tensorflow so I'd appreciate a solution that uses that framework.
For those that prefer the details explained:
I have 10 input variables, that are fully connected to 32 relu nodes in the first layer, which are fully connected to a second layer (relu), which is fully connected to the output (linear because I am doing regression).
In addition to the 10 input variables, I also happen to know which of the 28 nodes should be dropped out.
Is there a way I can specify this when training?
Here is the code I currently use:
num_stresses = 10
num_kinase = 32
num_transcription_factors = 200
num_genes = 6692
# Build neural network
# Input variables (10)
# Which Node to dropout (32)
stress = tflearn.input_data(shape=[None, num_stresses])
kinase_deletion = tflearn.input_data(shape=[None, num_kinase])
# This is the layer that I want to perform selective dropout on,
# I should be able to specify which of the 32 nodes should output zero
# based on a 1X32 vector of ones and zeros.
kinase = tflearn.fully_connected(stress, num_kinase, activation='relu')
transcription_factor = tflearn.fully_connected(kinase, num_transcription_factors, activation='relu')
gene = tflearn.fully_connected(transcription_factor, num_genes, activation='linear')
adam = tflearn.Adam(learning_rate=0.00001, beta1=0.99)
regression = tflearn.regression(gene, optimizer=adam, loss='mean_square', metric='R2')
# Define model
model = tflearn.DNN(regression, tensorboard_verbose=1)
I would supply your input variables along with an equal-sized vector of all 1s, with a 0 at the position of the node you want to drop.
The very first operation should then be a multiplication to zero out the gene you want to drop. From there on out, it should be exactly the same as what you have now.
You can either multiply (zero out your gene) before handing it to tensorflow, or add another placeholder and feed it into the graph in the feed_dict like you do your variables. The latter would probably be better.
If you need to drop a hidden node (in layer 2), it's just another vector of 1s and a 0.
Let me know if that works or if you need more help.
Edit:
Ok, so I haven't really worked with tflearn very much (I've just used regular tensorflow), but I think you can combine tensorflow and tflearn. Basically, I added tf.multiply. You might have to add another tflearn.input_data(shape=[num_stresses]) and tflearn.input_data(shape=[num_kinase]) to give you placeholders for the stresses_dropout_vector and kinase_dropout_vector. And of course, you can change the number and positions of zeros in those two vectors.
import tensorflow as tf ###### New ######
import tflearn
num_stresses = 10
num_kinase = 32
num_transcription_factors = 200
num_genes = 6692
stresses_dropout_vector = [1] * num_stresses ###### NEW ######
stresses_dropout_vector[desired_node_to_drop] = 0 ###### NEW ######
kinase_dropout_vector = [1] * num_kinase ###### NEW ######
kinase_dropout_vector[desired_hidden_node_to_drop] = 0 ###### NEW ######
# Build neural network
# Input variables (10)
# Which Node to dropout (32)
stress = tflearn.input_data(shape=[None, num_stresses])
kinase_deletion = tflearn.input_data(shape=[None, num_kinase])
# This is the layer that I want to perform selective dropout on,
# I should be able to specify which of the 32 nodes should output zero
# based on a 1X32 vector of ones and zeros.
stress_dropout = tf.multiply(stress, stresses_dropout_vector) ###### NEW ###### Drops out an input
kinase = tflearn.fully_connected(stress_dropout, num_kinase, activation='relu') ### changed stress to stress_dropout
kinase_dropout = tf.multiply(kinase, kinase_dropout_vector) ###### NEW ###### Drops out a hidden node
transcription_factor = tflearn.fully_connected(kinase_dropout, num_transcription_factors, activation='relu') ### changed kinase to kinase_dropout
gene = tflearn.fully_connected(transcription_factor, num_genes, activation='linear')
adam = tflearn.Adam(learning_rate=0.00001, beta1=0.99)
regression = tflearn.regression(gene, optimizer=adam, loss='mean_square', metric='R2')
# Define model
model = tflearn.DNN(regression, tensorboard_verbose=1)
If adding in tensorflow doesn't work, you just have to find a regular old tflearn.multiply function that does an element-wise multiplication of two given tensors/vectors.
Hope that helps.
For completeness, here is my final implementation:
import numpy as np
import pandas as pd
import tflearn
import tensorflow as tf
meta = pd.read_csv('../../input/nn/meta.csv')
experiments = meta["Unnamed: 0"]
del meta["Unnamed: 0"]
stress_one_hot = pd.get_dummies(meta["train"])
kinase_deletion = pd.get_dummies(meta["Strain"])
kinase_one_hot = 1 - kinase_deletion
expression = pd.read_csv('../../input/nn/data.csv')
genes = expression["Unnamed: 0"]
del expression["Unnamed: 0"] # This holds the gene names just so you know...
expression = expression.transpose()
# Set up data for tensorflow
# Gene expression
target = expression
target = np.array(expression, dtype='float32')
target_mean = target.mean(axis=0, keepdims=True)
target_std = target.std(axis=0, keepdims=True)
target = target - target_mean
target = target / target_std
# Stress information
data1 = stress_one_hot
data1 = np.array(data1, dtype='float32')
data_mean = data1.mean(axis=0, keepdims=True)
data_std = data1.std(axis=0, keepdims=True)
data1 = data1 - data_mean
data1 = data1 / data_std
# Kinase information
data2 = kinase_one_hot
data2 = np.array(data2, dtype='float32')
# For Reference
# data1.shape
# #(301, 10)
# data2.shape
# #(301, 29)
# Build the Neural Network
num_stresses = 10
num_kinase = 29
num_transcription_factors = 200
num_genes = 6692
# Build neural network
# Input variables (10)
# Which Node to dropout (32)
stress = tflearn.input_data(shape=[None, num_stresses])
kinase_deletion = tflearn.input_data(shape=[None, num_kinase])
# This is the layer that I want to perform selective dropout on,
# I should be able to specify which of the 32 nodes should output zero
# based on a 1X32 vector of ones and zeros.
kinase = tflearn.fully_connected(stress, num_kinase, activation='relu')
kinase_dropout = tf.mul(kinase, kinase_deletion)
transcription_factor = tflearn.fully_connected(kinase_dropout, num_transcription_factors, activation='relu')
gene = tflearn.fully_connected(transcription_factor, num_genes, activation='linear')
adam = tflearn.Adam(learning_rate=0.00001, beta1=0.99)
regression = tflearn.regression(gene, optimizer=adam, loss='mean_square', metric='R2')
# Define model
model = tflearn.DNN(regression, tensorboard_verbose=1)
# Start training (apply gradient descent algorithm)
model.fit([data1, data2], target, n_epoch=20000, show_metric=True, shuffle=True)#,validation_set=0.05)
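One small note on this code, in case anyone runs it on a newer TensorFlow 1.x release: tf.mul was renamed to tf.multiply, so the element-wise multiplication above becomes:
kinase_dropout = tf.multiply(kinase, kinase_deletion)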