I use tf.contrib.image.rotate in my loss function, and an error happens:
No gradients provided for any variable, check your graph for ops that do not support gradients.
My code is:
import tensorflow as tf
image_tensor = tf.placeholder(dtype=tf.float32, shape=[None,320,320,1])
target_tensor = tf.placeholder(dtype=tf.float32, shape=[None,320,320,1])
s = tf.concat([image_tensor, target_tensor],axis=3)
s = tf.layers.flatten(s)
w = tf.get_variable(initializer=tf.truncated_normal([204800,1], stddev=0.1),name='w')
b = tf.get_variable(initializer=tf.truncated_normal([1], stddev=0.1),name='b')
a = tf.matmul(s,w)+b
diff = tf.contrib.image.rotate(image_tensor, a[:,0], interpolation='BILINEAR') - target_tensor
loss = tf.reduce_sum(tf.square(diff))
optimizer = tf.train.GradientDescentOptimizer(0.001)
train = optimizer.minimize(loss)
My TensorFlow version is 1.4.0, and my computer runs Windows 10.
By the way, how can I rotate a 3D image in TensorFlow? tf.contrib.image.rotate only works for 2D images.
tf.contrib.image.rotate cannot be optimized: it defines no gradient with respect to the angle argument, so the optimizer receives no gradients at all. I found useful code for a spatial transformer network (https://github.com/kevinzakka/spatial-transformer-network).
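For example, the rotation can be re-expressed as a 2x3 affine matrix and fed through the repo's bilinear sampler, which is differentiable with respect to the angle. A minimal sketch, assuming the repo's spatial_transformer_network(input_fmap, theta) entry point, where theta is a flattened 2x3 affine matrix per example:

import tensorflow as tf
# assumed import path from the linked repository
from stn import spatial_transformer_network as transformer

angle = a[:, 0]  # predicted rotation angle per image, in radians
cos, sin = tf.cos(angle), tf.sin(angle)
zero = tf.zeros_like(angle)
# pure rotation as a flattened 2x3 affine matrix [cos, -sin, 0, sin, cos, 0]
theta = tf.stack([cos, -sin, zero, sin, cos, zero], axis=1)
rotated = transformer(image_tensor, theta)  # bilinear sampling, differentiable in theta
loss = tf.reduce_sum(tf.square(rotated - target_tensor))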
Related
I am trying to convert a TensorFlow (1.15) model to a PyTorch model. Since I was getting very different loss values, I tried comparing the output of the LSTM in the forward pass for the same input. The declaration and initialization of the LSTM are given below:
TensorFlow code
rnn_cell_video_fw = tf.contrib.rnn.LSTMCell(
    num_units=self.options['rnn_size'],
    state_is_tuple=True,
    initializer=tf.orthogonal_initializer()
)
rnn_cell_video_fw = tf.contrib.rnn.DropoutWrapper(
    rnn_cell_video_fw,
    input_keep_prob=1.0 - rnn_drop,
    output_keep_prob=1.0 - rnn_drop
)
sequence_length = tf.expand_dims(tf.shape(video_feat_fw)[1], axis=0)
initial_state = rnn_cell_video_fw.zero_state(batch_size=batch_size, dtype=tf.float32)
rnn_outputs_fw, _ = tf.nn.dynamic_rnn(
    cell=rnn_cell_video_fw,
    inputs=video_feat_fw,
    sequence_length=sequence_length,
    initial_state=initial_state,
    dtype=tf.float32
)
PyTorch code
self.rnn_video_fw = nn.LSTM(self.options['video_feat_dim'], self.options['rnn_size'], dropout = self.options['rnn_drop'])
rnn_outputs_fw, _ = self.rnn_video_fw(video_feat_fw)
Initialization for LSTM in train.py
def init_weight(m):
    if type(m) in [nn.LSTM]:
        for param in m.parameters():
            nn.init.orthogonal_(m.weight_hh_l0)
            nn.init.orthogonal_(m.weight_ih_l0)
The outputs for TensorFlow and for PyTorch differ (screenshots omitted).
The same is pretty much the case for every data item, and my PyTorch model isn't converging. Is my suspicion correct that the difference in LSTM outputs is the reason? If so, where am I going wrong?
Link to the paper
Link to TF code
Let me know if anything else is required.
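One mismatch I noticed while debugging: nn.LSTM's dropout argument only applies between stacked layers, so for this single-layer LSTM it is a no-op, whereas the DropoutWrapper above drops the cell's inputs and outputs. Also, the init_weight hook above only ever touches the layer-0 weights inside its loop. A sketch of an initializer that covers every weight matrix (my attempt, not necessarily what the paper used):

import torch.nn as nn

def init_weight(m):
    # orthogonally initialize every LSTM weight matrix
    # (weight_ih_l* and weight_hh_l*), leaving biases at their defaults
    if isinstance(m, nn.LSTM):
        for name, param in m.named_parameters():
            if 'weight' in name:
                nn.init.orthogonal_(param)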
For a deep learning model I defined with TF 2.0 Keras, I need to write a custom loss function.
As this will depend on things like entropy and the normal log_prob, it would really make my life less miserable if I could use tf.distributions.Normal and use two model outputs as mu and sigma respectively.
However, as soon as I put this into my loss function, I get the Keras error that no gradient is defined for this function.
ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
I tried encapsulating the call in a tf.contrib.eager.Variable, as I read somewhere. That did not help.
What is the trick to using them? I don't see a reason, from the fundamental architecture, why I should not be able to use them in this mixed form.
# This is just an example; it does not give a meaningful result.
import tensorflow as tf
import tensorflow.keras as K
import numpy as np
def custom_loss_fkt(extra_output):
    def loss(y_true, y_pred):
        dist = tf.distributions.Normal(loc=y_pred, scale=extra_output)
        d = dist.entropy()
        return K.backend.mean(d)
    return loss
input_node = K.layers.Input(shape=(1,))
dense = K.layers.Dense(8,activation='relu')(input_node)
#dense = K.layers.Dense(4,activation='relu')(dense)
out1 = K.layers.Dense(4,activation='linear')(dense)
out2 = K.layers.Dense(4,activation ='linear')(dense)
model = K.Model(inputs = input_node, outputs = [out1,out2])
model.compile(optimizer = 'adam', loss = [custom_loss_fkt(out2),custom_loss_fkt(out1)])
model.summary()
x = np.zeros((1,1))
y1 = np.array([[0.,0.1,0.2,0.3]])
y2 = np.array([[0.1,0.1,0.1,0.1]])
model.fit(x,[y1,y2],epochs=1000,verbose=0)
print(model.predict(x))
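One detail I noticed that may matter: the entropy of a Normal distribution is 0.5*log(2*pi*e*sigma^2), which does not depend on loc at all, so a loss built purely from dist.entropy() has no gradient path through y_pred, and that alone can trigger the no-gradient error. A minimal sketch of a loss with gradients through both outputs, assuming tensorflow_probability is installed (the softplus is just my arbitrary way of keeping the scale positive):

import tensorflow as tf
import tensorflow_probability as tfp

def custom_loss_fkt(extra_output):
    def loss(y_true, y_pred):
        # log_prob depends on both loc and scale, so gradients
        # reach y_pred as well as extra_output
        dist = tfp.distributions.Normal(loc=y_pred, scale=tf.nn.softplus(extra_output))
        return -tf.reduce_mean(dist.log_prob(y_true))
    return loss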
I have a pre-trained Keras Sequential model called agent, and I'm trying to fine-tune it with a custom loss function.
json_file = open('model/prior_model_RMSprop.json', 'r')
json_model = json_file.read()
json_file.close()
agent = model_from_json(json_model)
prior = model_from_json(json_model)
# load weights into model
agent.load_weights('model/model_RMSprop.h5')
prior.load_weights('model/model_RMSprop.h5')
agent_output = agent.output
prior_output = prior.output
loss = tf.reduce_mean(tf.square(agent_output - prior_output))
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
So far, everything works fine. However, when I add some basic TensorFlow operations, the error happens:
agent_logits = tf.cast(tf.argmax(agent_output, axis = 2), dtype = tf.float32)
prior_logits = tf.cast(tf.argmax(prior_output, axis = 2), dtype = tf.float32)
loss = tf.reduce_mean(tf.square(agent_logits - prior_logits))
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
ValueError: No gradients provided for any variable
So do the TensorFlow operations break the connection between the model and the loss function? I've been stuck here for almost two weeks, so please help. I'm also not very clear on how to update a Keras model's trainable weights with the loss function I defined. Any hints or related links will be appreciated!
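From what I understand, tf.argmax is piecewise constant, so TensorFlow registers no gradient for it, and everything downstream of it is cut off from the models' weights. A soft-argmax is one common differentiable surrogate; a minimal sketch (the temperature of 10.0 is an arbitrary choice):

import tensorflow as tf

def soft_argmax(logits, temperature=10.0):
    # softmax over the class axis; a higher temperature pushes the
    # soft estimate closer to the hard argmax
    probs = tf.nn.softmax(temperature * logits)
    indices = tf.cast(tf.range(tf.shape(logits)[-1]), tf.float32)
    return tf.reduce_sum(probs * indices, axis=-1)

agent_logits = soft_argmax(agent_output)
prior_logits = soft_argmax(prior_output)
loss = tf.reduce_mean(tf.square(agent_logits - prior_logits))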
So I am trying to write a simple softmax classifier in TensorFlow.
Here is the code:
# Neural network parameters
n_hidden_units = 500
n_classes = 10
# training set placeholders
input_X = tf.placeholder(dtype='float32',shape=(None,X_train.shape[1], X_train.shape[2]),name="input_X")
input_y = tf.placeholder(dtype='int32', shape=(None,), name="input_y")
# hidden layer
dim = X_train.shape[1]*X_train.shape[2] # dimension of each training data point
flatten_X = tf.reshape(input_X, shape=(-1, dim))
weights_hidden_layer = tf.Variable(initial_value=np.zeros((dim,n_hidden_units)), dtype ='float32')
bias_hidden_layer = tf.Variable(initial_value=np.zeros((1,n_hidden_units)), dtype ='float32')
hidden_layer_output = tf.nn.relu(tf.matmul(flatten_X, weights_hidden_layer) + bias_hidden_layer)
# output layer
weights_output_layer = tf.Variable(initial_value=np.zeros((n_hidden_units,n_classes)), dtype ='float32')
bias_output_layer = tf.Variable(initial_value=np.zeros((1,n_classes)), dtype ='float32')
output_logits = tf.matmul(hidden_layer_output, weights_output_layer) + bias_output_layer
predicted_y = tf.nn.softmax(output_logits)
# loss
one_hot_labels = tf.one_hot(input_y, depth=n_classes, axis = -1)
loss = tf.losses.softmax_cross_entropy(one_hot_labels, output_logits)
# optimizer
optimizer = tf.train.MomentumOptimizer(0.01, 0.5).minimize(
    loss, var_list=[weights_hidden_layer, bias_hidden_layer, weights_output_layer, bias_output_layer])
This compiles, and I have checked the shapes of all the tensors; they coincide with what I expect.
However, I tried to run the optimizer using the following code:
# running the optimizer
s = tf.InteractiveSession()
s.run(tf.global_variables_initializer())
for i in range(5):
    s.run(optimizer, {input_X: X_train, input_y: y_train})
    loss_i = s.run(loss, {input_X: X_train, input_y: y_train})
    print("loss at iter %i:%.4f" % (i, loss_i))
And the loss stayed the same across all iterations!
I must have messed something up, but I fail to see what.
Any ideas? I would also appreciate comments on code style and/or TensorFlow tips.
You have made a mistake: you are initializing your weights with np.zeros. Use np.random.normal instead. You can choose the standard deviation of this Gaussian distribution based on the number of inputs going into a particular neuron. You can read more about it here.
The reason you want to initialize with a Gaussian distribution is to break symmetry. If all the weights are initialized to zero, you can work through backpropagation to see that all the weights will evolve identically, so every neuron in a layer learns the same function.
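For example, He initialization scales the standard deviation by the fan-in (one common choice, not the only one):

weights_hidden_layer = tf.Variable(
    initial_value=np.random.normal(0.0, np.sqrt(2.0 / dim), (dim, n_hidden_units)),
    dtype='float32')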
You can visualize the weight histograms with TensorBoard to make this easier. I executed your code to check. A few more lines are needed to set up the TensorBoard logging, but the histogram summary of the weights is easy to add.
Initialized to zeros
weights_hidden_layer = tf.Variable(initial_value=np.zeros((784,n_hidden_units)), dtype ='float32')
tf.summary.histogram("weights_hidden_layer",weights_hidden_layer)
Xavier initialization
initializer = tf.contrib.layers.xavier_initializer()
weights_hidden_layer = tf.Variable(initializer(shape=(784,n_hidden_units)), dtype ='float32')
tf.summary.histogram("weights_hidden_layer",weights_hidden_layer)
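For completeness, the extra logging lines are roughly the following sketch (the log directory is a placeholder):

merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('./logs', s.graph)
for i in range(5):
    # run the optimizer and collect the histogram summaries in one step
    summary, _ = s.run([merged, optimizer], {input_X: X_train, input_y: y_train})
    writer.add_summary(summary, i)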
I recently found a paper where they used a CNN with complex 2D feature maps as input. Their network also outputs a complex vector. They used Keras with the TensorFlow backend.
Here is the link: https://arxiv.org/pdf/1802.04479.pdf
I asked myself whether it is possible to build complex-valued deep neural networks such as CNNs with TensorFlow. As far as I know, it is not possible. Did I miss something?
There are other related questions that address the same problem with no answer: Complex convolution in tensorflow
When building a really trivial model with real-valued inputs and outputs, everything works correctly:
import tensorflow as tf
from numpy import random, empty
n = 10
feature_vec_real = random.rand(1,n)
X_real = tf.placeholder(tf.float64,feature_vec_real.shape)
def model(x):
    out = tf.layers.dense(
        inputs=x,
        units=2
    )
    return out
model_output = model(X_real)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
output = sess.run(model_output,feed_dict={X_real:feature_vec_real})
But when using complex inputs:
import tensorflow as tf
from numpy import random, empty
n = 10
feature_vec_complex = empty(shape=(1,n),dtype=complex)
feature_vec_complex.real = random.rand(1,n)
feature_vec_complex.imag = random.rand(1,n)
X_complex = tf.placeholder(tf.complex128,feature_vec_complex.shape)
def complex_model(x):
    out = tf.layers.dense(
        inputs=x,
        units=2
    )
    return out
model_output = complex_model(X_complex)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
output = sess.run(model_output,feed_dict={X_complex:feature_vec_complex})
I get the following error:
ValueError: An initializer for variable dense_7/kernel of <dtype: 'complex128'> is required
So what is the correct way to initialize the weights of the dense kernel when the inputs are complex?
I know it is possible to handle complex numbers as two separate real-valued parts in the network, but this is not what I want.
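One workaround I sketched is to build the complex kernel myself from two real variables, since the stock initializers only cover floating-point dtypes; I am not sure this is the intended way (the Glorot choice for both parts is arbitrary):

import tensorflow as tf

def complex_dense(x, units, name='cdense'):
    # create real and imaginary parts as separate float64 variables
    # and combine them into a complex128 kernel and bias
    in_dim = int(x.shape[-1])
    init = tf.glorot_uniform_initializer()
    w_re = tf.get_variable(name + '_w_re', shape=(in_dim, units), dtype=tf.float64, initializer=init)
    w_im = tf.get_variable(name + '_w_im', shape=(in_dim, units), dtype=tf.float64, initializer=init)
    b_re = tf.get_variable(name + '_b_re', shape=(units,), dtype=tf.float64, initializer=tf.zeros_initializer())
    b_im = tf.get_variable(name + '_b_im', shape=(units,), dtype=tf.float64, initializer=tf.zeros_initializer())
    w = tf.complex(w_re, w_im)
    b = tf.complex(b_re, b_im)
    return tf.matmul(x, w) + b

model_output = complex_dense(X_complex, units=2)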
Thanks for your help!