Tensorflow TPU XLA hangs indefinitely on custom keras layer (121278 conv2d calls on slices of a 15.5 MB tensor, "pairwise conv2d"). How to debug/fix? - tensorflow

I'm using Google's Cloud TPUs (v2-8) to train a Tensorflow/keras model with a custom keras layer, which I call a pairwise conv2d. The Tensorflow/keras code is below. The Keras model compiles fine, but XLA compilation hangs indefinitely. If I scale down or remove pairwise conv2d, everything works normally.
Pairwise conv2d extracts all possible pairs of rows from an "image" and runs conv2d (1 filter) on each pair using a kernel size of (2, x), where x is currently 6. The current "image" size is (493, 28) with one channel. Each pair of rows is extracted from the "image" and then passed to conv2d, so conv2d operates on a tensor of shape (batch_size, 2, 28, 1). There are 493*492/2 = 121278 possible pairs of rows, i.e. 121278 separate conv2d calls, and the output from each conv2d call is then stacked to generate the layer's output.
So yep, that's a lot of conv2d calls, and definitely the source of the problem. If I reduce the number of conv2d calls down to 100, XLA compilation proceeds normally.
The "image" here is not an image -- it's a matrix of binding probabilities for transcription factors binding to DNA sites at different positions. So the rows here are different transcription factors (493) and the columns are different DNA sites (28 positions, maxpooled). We expect that adjacent/nearby transcription factors could interact with one another and so taking all possible pairs of rows is the same as considering all possible pairs of transcription factors.
Are there smart ways of debugging XLA compilation? I can dump the generated files using
XLA_FLAGS="--xla_dump_to=/tmp/generated" TF_XLA_FLAGS="--tf_xla_auto_jit=2" python3 train_model.py
but that doesn't really help me.
Are there better ways of accomplishing the pairwise conv2d that don't split it into 121278 separate conv2d calls? The tensor size is only 15.5 MB (per batch). I tried lowering the batch size to 32, but I don't think that affects XLA compilation. I also don't think this is a memory issue, since model training hasn't even begun yet.
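For what it's worth, one vectorized direction I've been sketching (untested on TPU) is to fold the pairs into the channel dimension and use a single depthwise convolution, so XLA sees one op instead of 121278 conv2d calls. It does materialize a (batch, 2, 28, 121278) intermediate, so memory could become the new bottleneck:

import tensorflow as tf

# Untested sketch: gather both rows of every pair, fold the pairs into the channel
# dimension, and apply one (2, 6) kernel per pair with a single depthwise conv.
# inputs: (batch, 493, 28, 1); indices: (num_pairs, 2); filter_weights: (num_pairs, 2, 6, 1, 1)
def pairwise_conv2d_vectorized(inputs, indices, filter_weights, kernel_size=(2, 6)):
    num_pairs = indices.shape[0]
    rows = tf.squeeze(inputs, axis=-1)                          # (batch, 493, 28)
    a = tf.gather(rows, indices[:, 0], axis=1)                  # (batch, num_pairs, 28)
    b = tf.gather(rows, indices[:, 1], axis=1)                  # (batch, num_pairs, 28)
    pairs = tf.stack([a, b], axis=2)                            # (batch, num_pairs, 2, 28)
    x = tf.transpose(pairs, [0, 2, 3, 1])                       # (batch, 2, 28, num_pairs)
    # One (2, 6) kernel per pair, applied only to "its" channel.
    w = tf.reshape(filter_weights, [num_pairs, kernel_size[0], kernel_size[1]])
    w = tf.reshape(tf.transpose(w, [1, 2, 0]), [kernel_size[0], kernel_size[1], num_pairs, 1])
    y = tf.nn.relu(tf.nn.depthwise_conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME'))
    return tf.expand_dims(tf.transpose(y, [0, 3, 1, 2]), -1)    # (batch, num_pairs, 2, 28, 1)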
Any help would be appreciated! Thanks in advance.
EDIT #1. tf.map_fn is not supported by XLA on TPUs. The code below was edited to replace the map_fn call with a for loop + tf.stack. A few initial observations: [1] The for loop is unrolled by XLA, but there is a limit of 50000 iterations. [2] The layer's call() is invoked several times during model compilation. [3] XLA compilation triggers a segfault (likely out of memory) when running PairwiseConv2D on the 121278 slices of the image (3 separate PairwiseConv2D layers). This was reduced to a single PairwiseConv2D layer (50000 slices of the image), but it still triggered a segfault. Now running at 10000 slices of the image, and memory usage on the TPU v2-8 (64 GiB) is flat at around 60%.
class PairwiseConv2D(layers.Layer):
    """Layer that carries out Conv2D on specified pairs of rows (axis=1) within an input tensor using a specified kernel"""

    def __init__(self, indices, kernel_size, dtype=None, **kwargs):
        super().__init__(dtype=dtype, **kwargs)
        self.indices = indices              # tf.convert_to_tensor(itertools.combinations(range(493),2), dtype=tf.int32)
        self.numFilters = indices.shape[0]  # 493*492/2
        self.kernel_size = kernel_size      # (2, 6)

    def build(self, input_shape=None):
        self.filter_weights = self.add_weight("weights",
                                              shape=[self.numFilters, self.kernel_size[0], self.kernel_size[1], 1, 1],
                                              initializer="zeros",
                                              dtype=self.dtype)

    @tf.function
    def call(self, inputs):
        ylist = []
        for n in range(self.numFilters):
            print('iteration #%s/%s' % (n, self.numFilters))
            y = tf.nn.conv2d(tf.stack([inputs[:, tf.gather(self.indices, n)[0], :, :],
                                       inputs[:, tf.gather(self.indices, n)[1], :, :]], axis=1),
                             tf.reshape(self.filter_weights[n, :, :, :, :],
                                        [self.kernel_size[0], self.kernel_size[1], 1, 1]),
                             strides=1,
                             padding='SAME')
            # ReLU activation
            y = tf.nn.relu(y)
            ylist.append(y)
        x = tf.stack(ylist, axis=1)
        return x

    def get_config(self):
        config = super().get_config()
        config.update({
            "indices": list(self.indices.numpy()),
            "kernel_size": self.kernel_size
        })
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

Related

What would be the output from tensorflow dense layer if we assign itself as input and output while making a neural network?

I have been going through the implementation of the neural network in the OpenAI code for Vanilla Policy Gradient (as a matter of fact, this part is used nearly everywhere). The code looks something like this:
def mlp_categorical_policy(x, a, hidden_sizes, activation, output_activation, action_space):
    act_dim = action_space.n
    logits = mlp(x, list(hidden_sizes) + [act_dim], activation, None)
    logp_all = tf.nn.log_softmax(logits)
    pi = tf.squeeze(tf.random.categorical(logits, 1), axis=1)
    logp = tf.reduce_sum(tf.one_hot(a, depth=act_dim) * logp_all, axis=1)
    logp_pi = tf.reduce_sum(tf.one_hot(pi, depth=act_dim) * logp_all, axis=1)
    return pi, logp, logp_pi
and this multi-layer perceptron network is defined as follows:
def mlp(x, hidden_sizes=(32,), activation=tf.tanh, output_activation=None):
    for h in hidden_sizes[:-1]:
        x = tf.layers.dense(inputs=x, units=h, activation=activation)
    return tf.layers.dense(inputs=x, units=hidden_sizes[-1], activation=output_activation)
My question is: what is the return from this mlp function? I mean the structure or shape. Is it an N-dimensional tensor? If so, how is it given as an input to tf.random.categorical? If not, and it just has the shape [hidden_layer2, output], then what happened to the other layers? As per the documentation, tf.random.categorical only takes a 2-D input. The complete code of OpenAI's VPG algorithm can be found here. The mlp is implemented here. I would be highly grateful if someone could just tell me what this mlp_categorical_policy() is doing.
Note: The hidden size is [64, 64], the action dimension is 3
Thanks and cheers
Note that this is a discrete action space - there are action_space.n different possible actions at every step, and the agent chooses one.
To do this the MLP is returning the logits (which are a function of the probabilities) of the different actions. This is specified in the code by + [act_dim], which appends the size of the action_space as the final MLP layer. Note that the last layer of an MLP is the output layer. The input layer is not specified in tensorflow; it is inferred from the inputs.
tf.random.categorical takes the logits and samples a policy action pi from them, which is returned as a number.
mlp_categorical_policy also returns logp, the log probability of the action a (used to assign credit), and logp_pi, the log probability of the policy action pi.
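For illustration, here is a minimal TF 2.x (eager) sketch of that sampling and log-probability step; the logits values are made up:

import tensorflow as tf

logits = tf.constant([[2.0, 0.5, -1.0]])                    # shape [batch=1, act_dim=3]
logp_all = tf.nn.log_softmax(logits)                        # log-probabilities, same shape
pi = tf.squeeze(tf.random.categorical(logits, 1), axis=1)   # sampled action, shape [1]
logp_pi = tf.reduce_sum(tf.one_hot(pi, depth=3) * logp_all, axis=1)  # log-prob of the sampled action
print(pi.numpy(), logp_pi.numpy())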
It seems your question is more about the return from the mlp.
The mlp creates a series of fully connected layers in a loop. In each iteration of the loop, the mlp creates a new layer using the previous layer x as input and assigns its output to overwrite x, with this line: x = tf.layers.dense(inputs=x, units=h, activation=activation).
So the output is not the same as the input; on each iteration x is overwritten with the value of the new layer. This is the same kind of coding trick as x = x + 1, which increments x by 1. It effectively chains the layers together.
The output of tf.layers.dense is a tensor of size [:, h], where : is the batch dimension (and can usually be ignored). The creation of the last layer happens outside the loop; the number of nodes in this layer is act_dim (so its shape is [:, 3]). You can check the shape by doing this:
import tensorflow.compat.v1 as tf
import numpy as np

def mlp(x, hidden_sizes=(32,), activation=tf.tanh, output_activation=None):
    for h in hidden_sizes[:-1]:
        x = tf.layers.dense(x, units=h, activation=activation)
    return tf.layers.dense(x, units=hidden_sizes[-1], activation=output_activation)

obs = np.array([[1.0, 2.0]])
logits = mlp(obs, [64, 64, 3], tf.nn.relu, None)
print(logits.shape)
result: TensorShape([1, 3])
Note that the observation in this case is [1.,2.], it is nested inside a batch of size 1.

What is the structure of a Keras model if input_shape is omitted and why does it perform better?

I omitted the input_shape in the first layer of my Keras model by mistake. Eventually I noticed this and fixed it – and my model's performance dropped dramatically.
Looking at the structure of the model with and without input_shape, I discovered that the better-performing model has the output shape of multiple. Moreover, plotting it with plot_model shows no connections between the layers:
When it comes to performance, the model I understand (with input_shape) achieves a validation loss of 4.0513 (MSE) after 10 epochs with my test code (below), while the "weird" model manages 1.3218 – and the difference only increases with more epochs.
Model definition:
model = keras.Sequential()
model.add(keras.layers.Dense(64, activation=tf.nn.relu, input_shape=(1001,)))
# add or remove this ^^^^^^^^^^^^^^^^^^^
model.add(keras.layers.Dropout(0.05))
...
(never mind the details, this is just a model that demonstrates the difference in performance with and without input_shape)
So what is happening in the better-performing model? What is multiple? How are the layers really connected? How could I build this same model while also specifying input_shape?
Complete script:
import tensorflow as tf
from tensorflow import keras
import numpy as np
from collections import deque
import math, random
def func(x):
    return math.sin(x)*5 + math.sin(x*1.8)*4 + math.sin(x/4)*5

def get_data():
    x = 0
    dx = 0.1
    q = deque()
    r = 0
    data = np.zeros((100000, 1002), np.float32)
    while True:
        x = x + dx
        sig = func(x)
        q.append(sig)
        if len(q) < 1000:
            continue
        arr = np.array(q, np.float32)
        for k in range(10):
            xx = random.uniform(0.1, 9.9)
            data[r, :1000] = arr[:1000]
            data[r, 1000] = 5*xx  # scale for easier fitting
            data[r, 1001] = func(x + xx)
            r = r + 1
            if r >= data.shape[0]:
                break
        if r >= data.shape[0]:
            break
        q.popleft()
    inputs = data[:, :1001]
    outputs = data[:, 1001]
    return (inputs, outputs)
np.random.seed(1)
tf.set_random_seed(1)
random.seed(1)
model = keras.Sequential()
model.add(keras.layers.Dense(64, activation=tf.nn.relu, input_shape=(1001,)))
# add or remove this ^^^^^^^^^^^^^^^^^^^
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(64, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(64, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(64, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(1))
model.compile(
    loss='mse',
    optimizer=tf.train.RMSPropOptimizer(0.0005),
    metrics=['mae', 'mse'])
inputs, outputs = get_data()
hist = model.fit(inputs, outputs, epochs=10, validation_split=0.1)
print("Final val_loss is", hist.history['val_loss'][-1])
TL;DR
The reason that the results are different is that the two models have different initial weights. The fact that one performs (significantly) better than the other is purely by chance and, as @today mentioned, the results they obtain are approximately similar.
Details
As the documentation for tf.set_random_seed explains, random operations use two seeds, the graph-level seed and the operation specific seed; tf.set_random_seed sets the graph-level seed:
Operations that rely on a random seed actually derive it from two seeds: the graph-level and operation-level seeds. This sets the graph-level seed.
Taking a look at the definition for Dense we see that the default kernel initializer is 'glorot_uniform' (let's only consider the kernel initializer here but the same holds for the bias initializer). Walking farther through the source code we'll eventually find out that this fetches the GlorotUniform with default arguments. Specifically the random number generator seed for that specific operation (namely weight initialization) is set to None. Now if we check where this seed is used, we find it is passed to random_ops.truncated_normal for example. This in turn (as do all random operations) fetches now the two seeds, one being the graph-level seed and the other the operation specific seed: seed1, seed2 = random_seed.get_seed(seed). We can check the definition of the get_seed function and we find that if the operation specific seed is not given (which is our case) then it is derived from properties of the current graph: op_seed = ops.get_default_graph()._last_id. The corresponding part of the tf.set_random_seed docs read:
If the graph-level seed is set, but the operation seed is not: The system deterministically picks an operation seed in conjunction with the graph-level seed so that it gets a unique random sequence.
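As a minimal illustration of this two-seed scheme (TF 1.x graph mode; the printed numbers depend on the TF build):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

tf.set_random_seed(1)           # graph-level seed only; no op-level seeds below
a = tf.random.uniform([2])      # op seed derived from the graph state (_last_id)
b = tf.random.uniform([2])      # later op -> different derived seed -> different values
# Creating any extra ops before `a` shifts _last_id and therefore changes the values
# of `a` and `b`, even though the graph-level seed stays the same.
with tf.Session() as sess:
    print(sess.run(a), sess.run(b))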
Now coming back to the original problem, it makes a difference for the graph structure whether input_shape is defined or not. Again looking at a bit of source code we find that Sequential.add builds the inputs and outputs of the network incrementally only if input_shape was specified; otherwise it just stores a list of layers (model._layers); compare model.inputs, model.outputs for the two definitions. The output is incrementally built by calling the layers directly, which dispatches to Layer.__call__. This wrapper builds the layer, sets the layer's inputs and outputs and adds some metadata to the outputs; it also uses an ops.name_scope to group operations. We can see this from the visualization provided by Tensorboard (example for the simplified model architecture of Input -> Dense -> Dropout -> Dense):
Now in the case where we didn't specify input_shape, all the model has is a list of layers. Even after having called compile, the model is actually not compiled (just attributes such as the optimizer are set). Instead it is compiled "on the fly" when data is passed to the model for the first time. This happens in model._standardize_weights: the model output is obtained via self.call(dummy_input_values, training=training). Checking this method we find that it builds the layers (note that the model is not yet built) and then computes the output incrementally by using Layer.call (not __call__). This leaves out all the metadata and also the grouping of operations, and hence results in a different structure of the graph (though its computational operations are all the same). Again checking Tensorboard we find:
Expanding both graphs we would find that they contain the same operations, grouped differently together. However this has the effect that the keras.backend.get_session().graph._last_id is different for both definitions and hence results in a different seed for the random operations:
# With `input_shape`:
>>> keras.backend.get_session().graph._last_id
303
# Without `input_shape`:
>>> keras.backend.get_session().graph._last_id
7
Performance results
I used the OP's code with some modifications in order to have similar random operations:
Added the steps described here to ensure reproducibility in terms of randomization,
Set random seeds for Dense and Dropout variable initialization,
Removed validation_split since the splitting happens before "on the fly" compilation of the model without input_shape and hence might interfere with the seed,
Set shuffle = False since this might use a separate operation specific seed.
This is the complete code (in addition I performed export PYTHONHASHSEED=0 before running the script):
from collections import deque
from functools import partial
import math
import random
import sys
import numpy as np
import tensorflow as tf
from tensorflow import keras
seed = int(sys.argv[1])
np.random.seed(1)
tf.set_random_seed(seed)
random.seed(1)
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                              inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
keras.backend.set_session(sess)
def func(x):
    return math.sin(x)*5 + math.sin(x*1.8)*4 + math.sin(x/4)*5

def get_data():
    x = 0
    dx = 0.1
    q = deque()
    r = 0
    data = np.zeros((100000, 1002), np.float32)
    while True:
        x = x + dx
        sig = func(x)
        q.append(sig)
        if len(q) < 1000:
            continue
        arr = np.array(q, np.float32)
        for k in range(10):
            xx = random.uniform(0.1, 9.9)
            data[r, :1000] = arr[:1000]
            data[r, 1000] = 5*xx  # scale for easier fitting
            data[r, 1001] = func(x + xx)
            r = r + 1
            if r >= data.shape[0]:
                break
        if r >= data.shape[0]:
            break
        q.popleft()
    inputs = data[:, :1001]
    outputs = data[:, 1001]
    return (inputs, outputs)
Dense = partial(keras.layers.Dense, kernel_initializer=keras.initializers.glorot_uniform(seed=1))
Dropout = partial(keras.layers.Dropout, seed=1)
model = keras.Sequential()
model.add(Dense(64, activation=tf.nn.relu,
                # input_shape=(1001,)
                ))
model.add(Dropout(0.05))
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dropout(0.05))
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dropout(0.05))
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dropout(0.05))
model.add(Dense(1))
model.compile(
    loss='mse',
    optimizer=tf.train.RMSPropOptimizer(0.0005)
)
inputs, outputs = get_data()
shuffled = np.arange(len(inputs))
np.random.shuffle(shuffled)
inputs = inputs[shuffled]
outputs = outputs[shuffled]
hist = model.fit(inputs, outputs[:, None], epochs=10, shuffle=False)
np.save('without.{:d}.loss.npy'.format(seed), hist.history['loss'])
With this code I'd actually expect to obtain similar results for both approaches however it turns out that they are not equal:
for i in $(seq 1 10)
do
python run.py $i
done
Plot the mean loss +/- 1 std. dev.:
Initial weights and initial prediction
I verified that the initial weights and an initial prediction (before fitting) are the same for the two versions:
inputs, outputs = get_data()
mode = 'without'
pred = model.predict(inputs)
np.save(f'{mode}.prediction.npy', pred)
for i, layer in enumerate(model.layers):
    if isinstance(layer, keras.layers.Dense):
        w, b = layer.get_weights()
        np.save(f'{mode}.{i:d}.kernel.npy', w)
        np.save(f'{mode}.{i:d}.bias.npy', b)
and
for i in 0 2 4 8
do
    for data in bias kernel
    do
        diff -q "with.$i.$data.npy" "without.$i.$data.npy"
    done
done
Influence of Dropout
I checked the performance after removing all Dropout layers, and in that case it is actually equal for the two versions. So the crux seems to lie with the Dropout layers. Actually the performance of the models without Dropout layers is the same as for the model with Dropout layers but without specifying input_shape. So it seems that without input_shape the Dropout layers are not effective.
Basically the difference between the two versions is that one uses __call__ and the other uses call to compute the outputs (as explained above). Since performance is similar to that without Dropout layers, a possible explanation could be that the Dropout layers don't drop when input_shape is not specified. This could be caused by training=False, i.e. the layers not recognizing they are in training mode. However I don't see a reason why this would happen. Also we can consider the Tensorboard graphs again.
Specifying input_shape:
Not specifying input_shape:
where the switch also depends on the learning phase (as before):
To verify the training kwarg let's subclass Dropout:
class Dropout(keras.layers.Dropout):
    def __init__(self, rate, noise_shape=None, seed=None, **kwargs):
        super().__init__(rate, noise_shape=noise_shape, seed=1, **kwargs)

    def __call__(self, inputs, *args, **kwargs):
        training = kwargs.get('training')
        if training is None:
            training = keras.backend.learning_phase()
        print('[__call__] training: {}'.format(training))
        return super().__call__(inputs, *args, **kwargs)

    def call(self, inputs, training=None):
        if training is None:
            training = keras.backend.learning_phase()
        print('[call] training: {}'.format(training))
        return super().call(inputs, training)
I obtain similar outputs for both versions, however the calls to __call__ are missing when input_shape is not specified:
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
So I suspect that the problem lies somewhere within __call__ but right now I can't figure out what it is.
System
I'm using Ubuntu 16.04, Python 3.6.7 and Tensorflow 1.12.0 via conda (no GPU support):
$ uname -a
Linux MyPC 4.4.0-141-generic #167-Ubuntu SMP Wed Dec 5 10:40:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ python --version
Python 3.6.7 :: Anaconda, Inc.
$ conda list | grep tensorflow
tensorflow 1.12.0 mkl_py36h69b6ba0_0
tensorflow-base 1.12.0 mkl_py36h3c3e929_0
Edit
I also had keras and keras-base installed (keras-applications and keras-preprocessing are required by tensorflow):
$ conda list | grep keras
keras 2.2.4 0
keras-applications 1.0.6 py36_0
keras-base 2.2.4 py36_0
keras-preprocessing 1.0.5 py36_0
After removing all keras* and tensorflow* packages and then reinstalling tensorflow, the discrepancy vanished. Even after reinstalling keras the results remain similar. I also checked a different virtualenv where tensorflow is installed via pip; there is no discrepancy there either. Right now I can't reproduce this discrepancy anymore. It must have been a broken installation of tensorflow.

Memory error while creating large one hot encoding for lstm

I am trying to build a character-level LSTM model using Keras, and for that I need to create one-hot encodings of the characters to feed into the model. I have around 1000 characters in each line, with around 160,000 lines.
I tried to create a numpy array of zeros and set the corresponding entries to 1, but I am getting a memory error due to the large size of the matrix. Is there any other way to do this?
Sure:
Create batches. Only process, say, 10,000 entries (characters) at a time, computing and feeding them into your neural network just before they're needed (say, by using a generator instead of a list). Keras has a fit_generator training function to do this.
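A rough sketch of that generator approach (the names lines, targets, char_to_idx, vocab_size and seq_len are placeholders for your own data):

import numpy as np

def one_hot_batches(lines, targets, char_to_idx, vocab_size, seq_len, batch_size=128):
    # Yields small one-hot batches forever, so only one batch lives in memory at a time.
    while True:
        for start in range(0, len(lines), batch_size):
            chunk = lines[start:start + batch_size]
            x = np.zeros((len(chunk), seq_len, vocab_size), dtype=np.float32)
            for i, line in enumerate(chunk):
                for t, ch in enumerate(line[:seq_len]):
                    x[i, t, char_to_idx[ch]] = 1.0
            yield x, targets[start:start + batch_size]

# model.fit_generator(one_hot_batches(lines, targets, char_to_idx, vocab_size, seq_len=1000),
#                     steps_per_epoch=len(lines) // 128, epochs=10)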
Group chunks of data together. Say, instead of a line being a matrix of the one-hot encodings of its characters, instead use the sum/max of all those columns to produce a single vector for the line. Now, each line is only a single vector, with dimensionality equal to the number of unique characters in your data set. E.g., instead of [[0, 0, 1], [0, 1, 0], [0, 0, 1]], use, [0, 1, 1] to represent the entire line.
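A tiny numpy illustration of collapsing a line's one-hot matrix into a single vector (made-up character indices):

import numpy as np

line = [2, 1, 2]                              # character indices for one line
one_hot = np.eye(3, dtype=np.float32)[line]   # [[0,0,1], [0,1,0], [0,0,1]]
bag = one_hot.max(axis=0)                     # [0, 1, 1]: one vector for the entire line
print(bag)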
Perhaps an easier and more intuitive solution is to add a custom one-hot encoding layer in your Keras model architecture.
def build_model(self, batch_size, print_summary=False):
    X = Input(shape=(self.sequence_length,), batch_size=batch_size)
    embedding = OneHotEncoding(num_classes=self.vocab_size+1,
                               sequence_length=self.sequence_length)(X)
    encoder = Bidirectional(CuDNNLSTM(units=self.recurrent_units,
                                      return_sequences=True))(embedding)
    ...
where we can define the OneHotEncoding layer as follows:
from tensorflow.keras.layers import Lambda
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Layer # for creating custom layers
class OneHotEncoding(Layer):
    def __init__(self, num_classes=None, sequence_length=None):
        if num_classes is None or sequence_length is None:
            raise ValueError("Can't leave params #num_classes or #sequence_length empty")
        super(OneHotEncoding, self).__init__()
        self.num_classes = num_classes
        self.sequence_length = sequence_length

    def encode(self, inputs):
        return K.one_hot(indices=inputs,
                         num_classes=self.num_classes)

    def call(self, inputs):
        return Lambda(function=self.encode,
                      input_shape=(self.sequence_length,))(inputs)
Here we are utilizing the fact that the Keras model is fed the training samples in appropriate batch sizes (with the standard fit function), which in turn doesn't yield a MemoryError.

Oversampling images during inference

It is a common practice with convolutional neural networks to oversample a given image during inference,
i.e. to create a batch from different transformations of the same image (most commonly different crops and mirroring), pass the entire batch through the network, and average (or apply another kind of reducing function to) the results to get a single prediction (caffe example).
How can this approach be implemented in tensorflow?
You can take a look at the TF cnn tutorial. In particular, the function distorted_inputs does the image preprocessing step.
In short, there are a couple of TF functions in the tf.image package that help with distorting the images. You can use either them or regular numpy functions to create an extra dimension for the distortions, over which you can average the results:
Before:
input_place = tf.placeholder(tf.float32, [None, 256, 256, 3])
prediction = some_model(input_place) # size: [None]
sess.run(prediction, feed_dict={input_place: batch_of_images})
After:
input_place = tf.placeholder(tf.float32, [None, NUM_OF_DISTORTIONS, 256, 256, 3])
prediction = some_model(input_place)  # make sure it is of size [None, NUM_DISTORTIONS]
new_prediction = tf.reduce_mean(prediction, axis=1)

new_batch = np.zeros((batch_size, NUM_OF_DISTORTIONS, 256, 256, 3))  # np.zeros takes a shape tuple
for i in range(len(batch_of_images)):
    for f in range(len(distortion_functions)):
        new_batch[i, f, :, :, :] = distortion_functions[f](batch_of_images[i])

sess.run(new_prediction, feed_dict={input_place: new_batch})
Take a look at TF's image-related functions. You could apply those transformations at test time to some input image, and stack all of them together to make a batch.
I imagine you could also do this using OpenCV or some other image processing tool. I don't see a need to do it in the computation graph. You could create the batches beforehand, and pass it through in feed_dict.
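If you do want to keep the oversampling inside the graph, a rough TF 2.x / Keras-style sketch (assuming a Keras model that takes 224x224 crops of a 256x256 image) could look like this:

import tensorflow as tf

def predict_oversampled(model, image):
    # image: a single (256, 256, 3) float tensor
    crops = [
        tf.image.crop_to_bounding_box(image, 0, 0, 224, 224),              # top-left crop
        tf.image.crop_to_bounding_box(image, 32, 32, 224, 224),            # bottom-right crop
        tf.image.resize(tf.image.central_crop(image, 0.875), [224, 224]),  # center crop
    ]
    crops += [tf.image.flip_left_right(c) for c in crops]   # add mirrored versions
    batch = tf.stack(crops, axis=0)                          # one batch of distortions
    preds = model(batch, training=False)                     # [num_distortions, num_classes]
    return tf.reduce_mean(preds, axis=0)                     # average into a single prediction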

Tensorflow: optimize over input with gradient descent

I have a TensorFlow model (a convolutional neural network) which I successfully trained using gradient descent (GD) on some input data.
Now, in a second step, I would like to provide an input image as initialization and then optimize over this input image with fixed network parameters using GD. The loss function will be a different one, but that is a detail.
So, my main question is how to tell the gradient descent algorithm to
stop optimizing the network parameters, and
optimize over the input image instead.
The first can probably be done with this:
Holding variables constant during optimizer
Do you guys have ideas about the second point?
I guess I can recode the gradient descent algorithm myself using the TF gradient function, but my gut feeling tells me that there should be an easier way, which also allows me to benefit from more complex GD variants (Adam etc.).
No need for your own SGD implementation. TensorFlow provides all the functions:
import tensorflow as tf
import numpy as np

# some input
data_pldhr = tf.placeholder(tf.float32)
img_op = tf.get_variable('input_image', [1, 4, 4, 1], dtype=tf.float32, trainable=True)
img_assign = img_op.assign(data_pldhr)

# your starting image
start_value = (np.ones((4, 4), dtype=np.float32) + np.eye(4))[None, :, :, None]

# override variable_getter
def nontrainable_getter(getter, *args, **kwargs):
    kwargs['trainable'] = False
    return getter(*args, **kwargs)

# all variables in this scope are not trainable
with tf.variable_scope('myscope', custom_getter=nontrainable_getter):
    x = tf.layers.dense(img_op, 10)
    y = tf.layers.dense(x, 10)

# the usual stuff
cost_op = tf.losses.mean_squared_error(x, y)
train_op = tf.train.AdamOptimizer(0.1).minimize(cost_op)

# fire up the training process
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(img_assign, {data_pldhr: start_value})
    print(sess.run(img_op))
    for i in range(10):
        _, c = sess.run([train_op, cost_op])
        print(c)
    print(sess.run(img_op))
represent an image as tf.Variable with trainable=True
initialise this variable with the starting image (initial guess)
recreate the NN graph using TF variables with trainable=False and copy the weights from the trained NN graph using tf.assign
calculate the loss function
plug the loss into any TF optimiser algorithm you want
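In TF 2.x the same recipe boils down to a GradientTape loop over an image variable; a rough sketch, assuming a trained Keras model and some target output you want to match:

import tensorflow as tf

img = tf.Variable(tf.random.uniform([1, 224, 224, 3]))   # the image is the only trainable thing
opt = tf.keras.optimizers.Adam(0.05)

for step in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(img, training=False) - target))
    grads = tape.gradient(loss, [img])        # gradient w.r.t. the input image only
    opt.apply_gradients(zip(grads, [img]))    # the network weights are never updated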
Another alternative is to use ScipyOptimizerInterface, which allows you to use scipy's minimizers. This supports constrained minimization.
I'm looking for a solution to the same problem, but my model is not an easy one, as I have an LSTM network with cells created with MultiRNNCell. I don't think it is possible to get the weights and clone the network. Is there any workaround so that I can compute the gradient with respect to the input?