I am a PyTorch user and am struggling to understand the syntax of TensorFlow 1. For dependency reasons, I have to use TensorFlow 1.3.0. A lot of the information online seems to conflict, depending on the TensorFlow version.
I have a simple feedforward neural network and would like to pass part of the output as an input into the next iteration (kind of like an RNN). I initialise the tensor as follows:
h = tf.placeholder(tf.float32, shape=[batch_size, 2])
hid = np.zeros((batch_size, 2))
The output looks like this:
(<tf.Tensor 'quad_expectation_1:0' shape=(100,) dtype=float32>, <tf.Tensor 'quad_variance_1:0' shape=(100,) dtype=float32>)
When I define this output as the new "hid", it gives me the following error:
ValueError: setting an array element with a sequence.
I don't understand how to access the values inside the tensor objects. This is probably a stupid question but I'd be so thankful if someone could help me with this.
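For reference, a minimal sketch of this feed-back pattern in TF 1.x (the two-output graph below is a toy stand-in, not the original network): sess.run evaluates the tensors to NumPy arrays, which can then be stacked and fed back into the placeholder. Feeding the Tensor objects themselves back, rather than NumPy values, is the usual cause of the "setting an array element with a sequence" error.

import numpy as np
import tensorflow as tf

batch_size = 100
h = tf.placeholder(tf.float32, shape=[batch_size, 2])

# Toy stand-ins for the network's two (batch_size,) outputs.
expectation = tf.reduce_sum(h, axis=1)
variance = tf.reduce_sum(tf.square(h), axis=1)

hid = np.zeros((batch_size, 2), dtype=np.float32)
with tf.Session() as sess:
    for step in range(5):
        exp_val, var_val = sess.run([expectation, variance], feed_dict={h: hid})
        # exp_val and var_val are now NumPy arrays, so they can be written into
        # the (batch_size, 2) array that is fed on the next iteration.
        hid = np.stack([exp_val, var_val], axis=1)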
Is there a way to access the current Keras training step as a tensor in the tensorflow graph?
I am trying to build a model with an 'epsilon' parameter that is decayed as a function of the current training step.
epsilon = some_fn_of(K.global_step) # <- Something like this?
self.q = K.Sequential([
    K.layers.InputLayer(input_shape),
    K.layers.Dense(n, name='q'),
    K.layers.Lambda(lambda x: tf.cond(tf.random.uniform((), 0, 1) < epsilon,
                                      lambda: tf.constant(0.0),
                                      lambda: x))
], name='q')
FYI: I'm using the Tensorflow bundled Keras.
I don't know if this will work for all purposes, but it looks like you can find the next training step number using model.optimizer.iterations. The variable name appears to have the format "<optimizer name>/iter:0". You can find the iterations property in the Optimizer documentation. Example value:
<tf.Variable 'Adam/iter:0' shape=() dtype=int64, numpy=5978>
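For example (a small sketch, not from the original question; the toy model and the decay schedule are assumptions), once the model is compiled you can either read the counter as a plain integer or keep it symbolic:

import tensorflow as tf
from tensorflow import keras as K

# Toy model, just so an optimizer (and its iteration counter) exists.
model = K.Sequential([K.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=K.optimizers.Adam(), loss='mse')

step_value = K.backend.get_value(model.optimizer.iterations)  # current step as an int
# Or keep it symbolic, e.g. for an epsilon that decays with the step:
epsilon = 0.1 * tf.pow(0.99, tf.cast(model.optimizer.iterations, tf.float32))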
I suspect that Keras does not have any such tensor in the graph and that the only way to access the step is through callbacks (Keras Docs, Tensorflow Docs), especially since Keras is meant to be agnostic to the backend and so would likely maintain the step outside of tensorflow.
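If you go the callback route, one possibility (a sketch under the assumption that epsilon lives in a backend variable, with a made-up decay schedule) is to keep your own step counter and update the variable once per batch:

from tensorflow import keras as K

epsilon = K.backend.variable(1.0, name='epsilon')  # referenced inside the Lambda layer

class EpsilonDecay(K.callbacks.Callback):
    """Keeps its own global step counter and decays epsilon once per batch."""
    def __init__(self, decay=1e-4):
        super(EpsilonDecay, self).__init__()
        self.decay = decay
        self.step = 0

    def on_batch_begin(self, batch, logs=None):
        self.step += 1
        K.backend.set_value(epsilon, 1.0 / (1.0 + self.decay * self.step))

# model.fit(x, y, callbacks=[EpsilonDecay()])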
I'm using Keras/TF with the following model:
conv = Conv2D(4, 3, activation=None, use_bias=True)(inputs)
conv = Conv2D(2, 1, activation=None, use_bias=True)(conv)
model = Model(inputs=inputs, outputs=conv)
model.compile(optimizer=Adam(lr=1e-4), loss=keras.losses.mean_absolute_error)
In model.fit, I get an error saying:
ValueError: Error when checking target: expected conv2d_2 to have
shape (300, 320, 2) but got array with shape (300, 320, 1)
This is as expected because the targets are single channel images whereas the last layer in the model has 2 channels.
What I don't understand is why when I use a custom loss function:
def my_loss2(y_true, y_pred):
    return keras.losses.mean_absolute_error(y_true, y_pred)
and compile the model:
model.compile(optimizer = Adam(lr=1e-4), loss=my_loss2)
it does work (or at least, it does not give the error). Is there some kind of automatic conversion/truncation going on?
I'm using TF (CPU) 1.12.0, and Keras 2.2.2
Sincerely,
Elad
Why is the behavior different for built-in and custom losses?
It turns out that Keras performs an upfront shape check for the built-in loss functions defined in the losses module.
In the source code of Model._standardize_user_data, which is called by fit, I found this comment:
# If `loss_fn` is not a function (e.g. callable class)
# or if it not in the `losses` module, then
# it is a user-defined loss and we make no assumptions
# about it.
In the code around that comment you can see that, depending on the type of loss function (built-in or custom), the output shape either is or is not passed to an inner call of standardize_input_data. If the output shape is passed, standardize_input_data raises the error message you are getting.
And I think this behavior makes some sense: Without knowing the implementation of a loss function, you cannot know its shape requirements. Someone may invent some loss function that needs different shapes. On the other hand, the docs clearly say that the loss function's parameters must have the same shape:
y_true: True labels. TensorFlow/Theano tensor.
y_pred: Predictions. TensorFlow/Theano tensor of the same shape as y_true.
So I find this a little inconsistent...
Why does your custom loss function work with incompatible shapes?
If you provide a custom loss, it may still work, even if the shapes do not perfectly match. In your case, where only the last dimension is different, I'm quite sure that broadcasting is what is happening. The last dimension of your targets will just be duplicated.
In many cases broadcasting is quite useful. Here, however, it likely is not, since it hides a logical error.
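A quick way to convince yourself of the broadcasting (a toy example, not your actual data; Keras 2.x with the TF backend is assumed):

import numpy as np
import keras.backend as KB
from keras.losses import mean_absolute_error

# Targets with 1 channel, predictions with 2 channels (as in the question, but tiny).
y_true = KB.constant(np.zeros((1, 2, 2, 1), dtype=np.float32))
y_pred = KB.constant(np.ones((1, 2, 2, 2), dtype=np.float32))

# The single target channel is broadcast against both prediction channels,
# so no shape error is raised even though the shapes differ.
loss = mean_absolute_error(y_true, y_pred)
print(KB.eval(loss))  # shape (1, 2, 2), every element equal to 1.0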
Actually, I want to generate sequences just like Alex Graves did, and I have a TensorFlow implementation of it. I also want to try an attention-based seq2seq model to generate the handwriting. For the decoder, I used tf.nn.dynamic_rnn and it works. Now I want to use attention in TensorFlow, so I want to change that to tf.contrib.seq2seq.dynamic_decode, but I get the error below:
TypeError: Cannot convert a list containing a tensor of dtype <dtype: 'int32'> to <dtype: 'float32'> (Tensor is: <tf.Tensor 'vector_rnn/DEC_RNN/transpose_1:0' shape=(100, ?) dtype=int32>)
I checked the API documentation of both tf.nn.dynamic_rnn and tf.contrib.seq2seq.dynamic_decode, but from their return values I could not figure out how to solve this error.
If you have any idea, please tell me! I would appreciate it very much.
Actually, it works if I use tf.nn.dynamic_rnn to code the attention layers in the decoder of the VAE; it is only with tf.contrib.seq2seq.dynamic_decode that it behaves differently.
I am trying to create a simple neural net in TensorFlow. The only tricky part is I have a custom operation that I have implemented with py_func. When I pass the output from py_func to a Dense layer, TensorFlow complains that the rank should be known. The specific error is:
ValueError: Inputs to `Dense` should have known rank.
I don't know how to preserve the shape of my data when I pass it through py_func. My question is how do I get the correct shape? I have a simple example below to illustrate the problem.
import numpy as np
import tensorflow as tf

def my_func(x):
    return np.sinh(x).astype('float32')

inp = tf.convert_to_tensor(np.arange(5))
y = tf.py_func(my_func, [inp], tf.float32, False)

with tf.Session() as sess:
    with sess.as_default():
        print(inp.shape)
        print(inp.eval())
        print(y.shape)
        print(y.eval())
The output from this snippet is:
(5,)
[0 1 2 3 4]
<unknown>
[ 0.          1.17520118  3.62686038 10.01787472 27.28991699]
Why is y.shape <unknown>? I want the shape to be (5,) the same as inp. Thanks!
Since py_func can execute arbitrary Python code and output anything, TensorFlow can't figure out the shape (that would require analyzing the Python code of the function body). You can instead set the shape manually:
y.set_shape(inp.get_shape())
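With that one extra line, the same snippet from the question reports the shape (everything else is unchanged):

import numpy as np
import tensorflow as tf

def my_func(x):
    return np.sinh(x).astype('float32')

inp = tf.convert_to_tensor(np.arange(5))
y = tf.py_func(my_func, [inp], tf.float32, False)
y.set_shape(inp.get_shape())  # tell TensorFlow the shape py_func cannot infer

print(y.shape)  # (5,) instead of <unknown>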
I tried creating a tf.Variable with a dynamic shape. The following outlines the problem.
Doing this works.
init_bias = tf.random_uniform(shape=[self.config.hidden_layer_size, tf.shape(self.question_inputs)[0]])
However, when I try to do this:
init_bias = tf.Variable(init_bias)
It throws the error ValueError: initial_value must have a shape specified: Tensor("random_uniform:0", shape=(?, ?), dtype=float32)
Just some context (question_inputs is a placeholder with a dynamic batch size):
self.question_inputs = tf.placeholder(tf.int32, shape=[None, self.config.qmax])
It seems like putting a dynamic value into random_uniform gives shape=(?, ?), which gives an error with tf.Variable.
Thanks and appreciate any help!
This should work:
init_bias = tf.Variable(init_bias, validate_shape=False)
If validate_shape is False, tensorflow allows the variable to be initialized with a value of unknown shape.
However, what you're doing seems a little strange to me. In tensorflow, Variables are generally used to store the weights of a neural net, whose shape remains fixed irrespective of the batch size. A variable batch size is handled by passing a variable-length tensor into the graph (and multiplying/adding it with a fixed-shape bias Variable).
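For example (a sketch with made-up sizes, not your actual graph), the bias gets a fixed shape and broadcasting takes care of the batch dimension:

import tensorflow as tf

hidden_layer_size = 128  # made-up size

# The variable batch size lives in the placeholder, not in the Variable.
activations = tf.placeholder(tf.float32, shape=[None, hidden_layer_size])

# Fixed-shape bias: one value per hidden unit, broadcast across the batch.
bias = tf.Variable(tf.zeros([hidden_layer_size]), name='bias')
output = activations + bias  # (?, hidden) + (hidden,) broadcasts over the batch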