TensorFlow Variable Shape assign

I am trying to create a variable and then assign it the output of my convolution layer.
However, the assignment fails with a shape mismatch, even though I passed validate_shape=False when creating the variable.
The convolution shape is [32,20,20,3]. How do I pass this into the variable?
Here is the code:
conv = tf.layers.conv2d_transpose(conv, filters=3, kernel_size=3, strides=(2,2), padding='same', activation=tf.nn.relu)  # TO ASSIGN LATER
g = tf.Variable(([32,20,20]), dtype=tf.float32, validate_shape=False)  # THE VARIABLE

loss = tf.reduce_mean(tf.square(conv))
opt = tf.train.AdamOptimizer().minimize(loss)
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        _, xx, inp, output, target = sess.run([opt, loss, x, conv, y])
        print(xx)
        print("subtraction result:", output[0] - target[0])
        g = g.assign(conv)
        print(g.eval())
I am getting this error:
InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3] rhs shape= [32,20,20,3]
[[Node: Assign_7 = Assign[T=DT_FLOAT, use_locking=false, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_9, conv2d_transpose_98/Relu)]]
Can someone please help fix this?

I think you want:
import numpy as np
import tensorflow as tf
g = tf.Variable(initial_value=np.zeros((32,20,20,3)), expected_shape=(32,20,20,3), dtype=tf.float32)
If you print g you get the correct shape now:
<tf.Variable 'Variable_3:0' shape=(32, 20, 20, 3) dtype=float32_ref>
What you did was this:
g = tf.Variable(initial_value=(32,20,20), dtype=tf.float32, validate_shape=False)
Because you did not name the argument, it was taken positionally, and the first positional argument of tf.Variable is initial_value, as per the signature below:
__init__(
initial_value=None,
trainable=True,
collections=None,
validate_shape=True,
caching_device=None,
name=None,
variable_def=None,
dtype=None,
expected_shape=None,
import_scope=None,
constraint=None
)
So the initial_value you declared was a vector of length 3, i.e. shape [3], which is exactly the lhs shape the assign operation is complaining about.
Moral of the story: it's generally less buggy to declare arguments by name if you can. :)
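As a quick sketch of how the assign could then be used inside the session (adapting the question's code, so opt, loss and conv are assumed from there; this is not the asker's exact training loop):
assign_op = g.assign(conv)  # build the assign op once, outside the loop

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        _, loss_val = sess.run([opt, loss])
        g_val = sess.run(assign_op)  # copies the current value of conv into g
    print(g_val.shape)  # (32, 20, 20, 3)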

Related

Evaluating Tensorflow Tensors

To get the gradients of the output with respect to the input, one can use
grads = tf.gradients(model.output, model.input)
where grads =
[<tf.Tensor 'gradients_81/dense/MatMul_grad/MatMul:0' shape=(?, 18) dtype=float32>]
This is a model with 18 continuous inputs and 1 continuous output.
I assume this is a symbolic expression, and that one needs to feed a list of 18 entries to the tensor so that it returns the derivatives as floats.
I would use
Test =[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
with tf.Session() as sess:
    alpha = sess.run(grads, feed_dict = {model.input : Test})
    print(alpha)
But I get the error
FailedPreconditionError (see above for traceback): Error while reading resource variable dense_2/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/dense_2/bias)
[[Node: dense_2/BiasAdd/ReadVariableOp = ReadVariableOp[dtype=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](dense_2/bias)]]
What is wrong?
EDIT:
This is what happened before:
def build_model():
    model = keras.Sequential([
        ...])
    optimizer = ...
    model.compile(loss='mse'... )
    return model

model = build_model()
history = model.fit(data_train, train_labels, ...)
loss, mae, mse = model.evaluate(data_eval, ...)
Progress so far:
Test =[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
with tf.Session() as sess:
    tf.keras.backend.set_session(sess)
    tf.initializers.variables(model.output)
    alpha = sess.run(grads, feed_dict = {model.input : Test})
is also not working, giving the error:
TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
You're trying to use an uninitialized variable. All you have to do is add
sess.run(tf.global_variables_initializer())
right after with tf.Session() as sess:
Edit:
You need to register the session with Keras:
with tf.Session() as sess:
    tf.keras.backend.set_session(sess)
And use tf.initializers.variables(var_list) instead of tf.global_variables_initializer()
See https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
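For instance (a sketch assuming the compiled Keras model from the question is called model; the original answer only names tf.initializers.variables(var_list)):
Test = np.ones((1, 18), dtype=np.float32)  # one example with 18 inputs

with tf.Session() as sess:
    tf.keras.backend.set_session(sess)
    # Pass the model's variables as var_list, not model.output (a Tensor).
    sess.run(tf.initializers.variables(model.weights))
    alpha = sess.run(grads, feed_dict={model.input: Test})
    print(alpha)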
Edit:
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

Test = np.ones((1, 18), dtype=np.float32)

inputs = layers.Input(shape=[18,])
layer = layers.Dense(10, activation='sigmoid')(inputs)
model = tf.keras.Model(inputs=inputs, outputs=layer)
model.compile(optimizer='adam', loss='mse')

checkpointer = tf.keras.callbacks.ModelCheckpoint(filepath='path/weights.hdf5')
# Dummy targets matching the Dense(10) output, just to make the example runnable.
model.fit(Test, np.ones((1, 10), dtype=np.float32), epochs=1, batch_size=1, callbacks=[checkpointer])

grads = tf.gradients(model.output, model.input)

with tf.Session() as sess:
    tf.keras.backend.set_session(sess)
    sess.run(tf.global_variables_initializer())
    model.load_weights('path/weights.hdf5')
    alpha = sess.run(grads, feed_dict={model.input: Test})
    print(alpha)
This shows consistent results.

TF one hot encode tensor object

I'm running a simple logistic regression following the simple MNIST example.
My code:
x = np.array(xHotdog + xNotHotdog)
y = np.array(yHotdog + yNotHotdog)
print("y shape before: "+str(y.shape))
y = tf.one_hot(indices=y, depth=2)
print("y shape after: "+str(y.shape))
y.eval()
return x,y
Later I run:
sess.run([optimizer, cost], feed_dict={x: batch_xs,y: batch_ys})
Getting the error:
TypeError: The value of a feed cannot be a tf.Tensor object.
Acceptable feed values include Python scalars, strings, lists, numpy
ndarrays, or TensorHandles. For reference, the tensor object was
Tensor("one_hot:0", shape=(6457, 2), dtype=float32), which was passed
to the feed with key Tensor("Placeholder_1:0", shape=(?, 2),
dtype=float32).
You're passing a Tensor object to feed_dict, and that raises an error. As mentioned in the docs:
The optional feed_dict argument allows the caller to override the
value of tensors in the graph.
feed_dict: A dictionary that maps graph elements to values
So you need some values for the feed_dict. As shown in the error:
Acceptable feed values include Python scalars, strings, lists, numpy
ndarrays, or TensorHandles
Option 1: In your case you pass a Tensor object, which causes this exception. You can resolve it by keeping the placeholder and the one-hot tensor as separate names, and feeding the placeholder:
y_ = tf.placeholder(tf.int32, [None], name="Y")  # integer class labels
y = tf.one_hot(indices=y_, depth=2)
...
sess.run([optimizer, cost], feed_dict={x: batch_xs, y_: batch_ys})
Option 2: You can also evaluate the value first (in this case a feed_dict is not needed):
a = np.random.randint(1, 10, [20])
b = tf.one_hot(a, depth=10)
with tf.Session() as sess:
    print(a)
    print(sess.run([b]))
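Applied to the question (a sketch; yHotdog and yNotHotdog are the asker's variables, and the placeholder names are assumptions), Option 2 amounts to evaluating the one-hot encoding into a NumPy array before feeding:
y_labels = np.array(yHotdog + yNotHotdog)
with tf.Session() as sess:
    y_onehot = sess.run(tf.one_hot(indices=y_labels, depth=2))  # plain ndarray now

# Batches sliced from y_onehot are acceptable feed values, e.g.
# sess.run([optimizer, cost], feed_dict={x_ph: batch_xs, y_ph: batch_ys})
# where x_ph and y_ph are the graph's placeholders (names assumed here).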

Initialize a variable with placeholder as shape

I want to initialize the Weights variable by including the BatchSize dimension, which will be different between the Training and Prediction stages. I tried using a placeholder for that, but it doesn't seem to work:
batchsize = tf.placeholder(tf.int32, name='batchsize', shape=[])
...
output, state = tf.nn.dynamic_rnn(multicell, X, dtype=tf.float32, initial_state=inState)
weights = tf.Variable(tf.truncated_normal([batchsize, CELL_SIZE, 1], 0.0, 1.0), name='weights')
bias = tf.Variable(tf.zeros(1), name='bias')
preds = tf.add(tf.matmul(output, weights), bias, name='preds')
loss = tf.reduce_mean(tf.squared_difference(preds, Y_))
train_step = tf.train.AdamOptimizer(LR).minimize(loss)
I can get it to work by specifying batchsize as a constant for the weights variable dimension, rather than a placeholder, but then I get an error when I try to restore the session for the Prediction stage, because there the batchsize is 1. If I specify the placeholder, I get the error:
ValueError: initial_value must have a shape specified: Tensor("truncated_normal:0", shape=(?, 32, 1), dtype=float32)
Even though I do pass the value for the batchsize placeholder into the feed_dict when running this part of the graph.
If I specify the option validate_shape=False while creating the weights variable, that stage of the graph works, but later I get this error in AdamOptimizer:
ValueError: as_list() is not defined on an unknown TensorShape.
How can I get this to work? My ultimate goal is to reduce the Cell-Size dimension of the dynamic_rnn output down to 1 to predict the output at each time-step of the RNN.
Make the variable at its whole (maximum) size.
Then get the part of the variable corresponding to the current batch size (using tf.gather):
self.model_X = tf.placeholder(dtype=tf.float32, shape=[None, 100], name='X')
real_batch_size = tf.cast(tf.shape(self.model_X)[0],tf.int32)
self.y_dk = tf.get_variable(name="y_dk",initializer=tf.truncated_normal(shape=[self.num_doc, self.num_topic], mean=0, stddev=tf.truediv(1.0,self.lambda_y)), dtype=tf.float32)
batch_y_dk = tf.reshape(tf.gather(self.y_dk, self.model_batch_data_idx), [real_batch_size, self.num_topic])
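Adapted to the question (a sketch; MAX_BATCH is an assumed upper bound on the batch size, while CELL_SIZE and X are taken from the asker's code), the same pattern would look roughly like this:
MAX_BATCH = 128  # assumed maximum batch size
weights_full = tf.Variable(tf.truncated_normal([MAX_BATCH, CELL_SIZE, 1], 0.0, 1.0), name='weights')

batch_size = tf.shape(X)[0]                               # dynamic batch size at run time
weights = tf.gather(weights_full, tf.range(batch_size))   # [batch_size, CELL_SIZE, 1]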

Setting the shape of a tensor as the shape of another tensor

I'm trying to run this piece of code:
def somefunc(x, rows, n_hidden):
    vectors = tf.contrib.layers.embed_sequence(nodes, vocab_size=vocab_size, embed_dim=n_hidden)
    batch_size = tf.shape(vectors)[0]
    state = tf.zeros([batch_size, rows, n_hidden])
    bias = tf.Variable(tf.constant(0.1, shape=[batch_size, 1]))  # Error here!
    ...
x = tf.placeholder(tf.int32, shape=[None, 200])
pred = somefunc(x, 200, 40)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=target))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
I get this error when the function is called (error is for bias shape):
TypeError: int() argument must be a string, a bytes-like object or a number, not 'Tensor'
I tried doing b = tf.Variable(0.1, validate_shape=False), but then I got this error at optimizer:
ValueError: as_list() is not defined on an unknown TensorShape.
If I remove validate_shape=False, I get a shape error.
I'm very sorry if I'm overlooking something obvious, but could someone tell me where I'm going wrong?
Thank you very much!
The shape argument of the tf.constant() op expects a static shape, so you can't use a tf.Tensor as part of the argument.
Fortunately there is another op that will suffice: tf.fill(), which allows the shape (its dims argument) to be a tf.Tensor. This means you can define bias as:
bias = tf.Variable(tf.fill(dims=[batch_size, 1], value=0.1), validate_shape=False)
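As a small self-contained illustration of the difference (a sketch, not the asker's full model):
import tensorflow as tf

x = tf.placeholder(tf.int32, shape=[None, 200])
batch_size = tf.shape(x)[0]                        # a Tensor, only known at run time

# tf.constant(0.1, shape=[batch_size, 1])          # would fail: shape must be static
fill = tf.fill(dims=[batch_size, 1], value=0.1)    # OK: dims may be a Tensor

with tf.Session() as sess:
    print(sess.run(fill, feed_dict={x: [[0] * 200] * 3}).shape)  # (3, 1)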

tensorflow constant with variable size

I have a variable batch size, so all of my inputs are of the form
tf.placeholder(tf.float32, shape=(None, ...))
to accept the variable batch sizes. However, how might you create a constant value with variable batch size? The issue is with this line:
log_probs = tf.constant(0.0, dtype=tf.float32, shape=[None, 1])
It is giving me an error:
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
I'm sure it is possible to initialize a constant tensor with variable batch size, how might I do so?
I've also tried the following:
tf.constant(0.0, dtype=tf.float32, shape=[-1, 1])
I get this error:
ValueError: Too many elements provided. Needed at most -1, but received 1
A tf.constant() has fixed size and value at graph construction time, so it probably isn't the right op for your application.
If you are trying to create a tensor with a dynamic size and the same (constant) value for every element, you can use tf.fill() and tf.shape() to create an appropriately-shaped tensor. For example, to create a tensor t that has the same shape as input and the value 0.5 everywhere:
input = tf.placeholder(tf.float32, shape=(None, ...))
# `tf.shape(input)` takes the dynamic shape of `input`.
t = tf.fill(tf.shape(input), 0.5)
As Yaroslav mentions in his comment, you may also be able to use (NumPy-style) broadcasting to avoid materializing a tensor with dynamic shape. For example, if input has shape (None, 32) and t has shape (1, 32), then computing tf.multiply(input, t) will broadcast t on the first dimension to match the shape of input.
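For example (a sketch of the broadcasting approach, with shapes chosen to match the (None, 32) example above):
import numpy as np
import tensorflow as tf

input = tf.placeholder(tf.float32, shape=(None, 32))
t = tf.constant(0.5, dtype=tf.float32, shape=(1, 32))  # static shape is fine here

product = tf.multiply(input, t)  # t is broadcast across the batch dimension

with tf.Session() as sess:
    batch = np.ones((4, 32), dtype=np.float32)
    print(sess.run(product, feed_dict={input: batch}).shape)  # (4, 32)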
Suppose you want to do something with log_probs. For example, you want to apply a power operation between a tensor v and a constant log_probs, and you want the shape of log_probs to vary with the shape of v.
v = tf.placeholder(tf.float32, shape=(None, 1))
log_probs = tf.constant(0.0, dtype=tf.float32, shape=[None, 1])
result = tf.pow(v, log_probs)
However, you cannot construct the constant log_probs this way. Instead, you can construct log_prob with shape=[1], i.e. log_prob = tf.constant(0.0, dtype=tf.float32, shape=[1]), and then use tf.map_fn() to apply the pow operation to each element of v.
v = tf.placeholder(tf.float32, shape=(None, 1))
log_prob = tf.constant(0.0, dtype=tf.float32, shape=[1])
result = tf.map_fn(lambda ele: tf.pow(ele, log_prob), v)
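A quick check of this (a sketch; the feed value is made up):
import numpy as np

with tf.Session() as sess:
    out = sess.run(result, feed_dict={v: np.full((4, 1), 2.0, dtype=np.float32)})
    print(out)  # 2.0 ** 0.0 == 1.0 for every element, shape (4, 1)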