tensorflow constant with variable size

I have a variable batch size, so all of my inputs are of the form
tf.placeholder(tf.float32, shape=(None, ...))
to accept the variable batch sizes. However, how might you create a constant value with variable batch size? The issue is with this line:
log_probs = tf.constant(0.0, dtype=tf.float32, shape=[None, 1])
It is giving me an error:
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
I'm sure it is possible to initialize a constant tensor with variable batch size, how might I do so?
I've also tried the following:
tf.constant(0.0, dtype=tf.float32, shape=[-1, 1])
I get this error:
ValueError: Too many elements provided. Needed at most -1, but received 1

A tf.constant() has fixed size and value at graph construction time, so it probably isn't the right op for your application.
If you are trying to create a tensor with a dynamic size and the same (constant) value for every element, you can use tf.fill() and tf.shape() to create an appropriately-shaped tensor. For example, to create a tensor t that has the same shape as input and the value 0.5 everywhere:
input = tf.placeholder(tf.float32, shape=(None, ...))
# `tf.shape(input)` takes the dynamic shape of `input`.
t = tf.fill(tf.shape(input), 0.5)
As Yaroslav mentions in his comment, you may also be able to use (NumPy-style) broadcasting to avoid materializing a tensor with a dynamic shape. For example, if input has shape (None, 32) and t has shape (1, 32), then computing tf.multiply(input, t) will broadcast t along the first dimension to match the shape of input.
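For example, a minimal sketch of the broadcasting approach (TF 1.x API; the shapes here are illustrative):
import tensorflow as tf

input = tf.placeholder(tf.float32, shape=(None, 32))
t = tf.constant(0.5, dtype=tf.float32, shape=(1, 32))
# Broadcasting stretches `t` along the batch dimension at run time,
# so no tensor with a dynamic shape has to be materialized.
product = tf.multiply(input, t)  # shape (None, 32)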

Suppose you want to use log_probs in some computation. For example, you want to raise a tensor v to the power of a constant log_probs, and you want the shape of log_probs to vary with the shape of v.
v = tf.placeholder(tf.float32, shape=(None, 1))
log_probs = tf.constant(0.0, dtype=tf.float32, shape=[None, 1])
result = tf.pow(v, log_probs)
However, you cannot construct the constant log_probs this way. Instead, you can construct a tf.constant with shape=[1]: log_prob = tf.constant(0.0, dtype=tf.float32, shape=[1]). Then use tf.map_fn() to apply the pow operation to each element of v.
v = tf.placeholder(tf.float32, shape=(None, 1))
log_prob = tf.constant(0.0, dtype=tf.float32, shape=[1])
result = tf.map_fn(lambda ele: tf.pow(ele, log_prob), v)
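Note that for a scalar exponent, plain broadcasting is a simpler alternative to tf.map_fn() (a minimal sketch under the same setup):
v = tf.placeholder(tf.float32, shape=(None, 1))
log_prob = tf.constant(0.0, dtype=tf.float32)  # a scalar constant
result = tf.pow(v, log_prob)  # broadcasts across the variable batch dimension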

Related

tensorflow multiply two layers

I have two inputs to my network. One input gets fed through a few linear layers, and the result should then be multiplied elementwise with the other input.
input_a = Input(shape=input_a_shape)
x = Dense(side_channel_speed_output_dimension, activation="relu")(input_a)
x = tf.reshape(x, [input_shape_image[0], input_shape_image[1]])
x = tf.expand_dims(x, input_shape_image[2])
x = tf.repeat(x, repeats=input_shape_image[2], axis=2)
input_b = Input(shape=input_shape_b)
At this stage I would like to multiply input_a and input_b. How do I do that?
I tried:
input = keras.layers.multiply([input_b, input_a])
There I got this error:
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_2:0", shape=(None, 60, 40, 2), dtype=float32) at layer "input_2". The following previous layers were accessed without issue: ['input_1', 'dense', 'tf_op_layer_Reshape', 'tf_op_layer_ExpandDims', 'tf_op_layer_Repeat/Shape', 'tf_op_layer_Repeat/strided_slice', 'tf_op_layer_Repeat/strided_slice_1', 'tf_op_layer_Repeat/ExpandDims', 'tf_op_layer_Repeat/Tile', 'tf_op_layer_Repeat/concat']
I also tried just tf.multiply(a, b). It does not work either.
Does someone know how to solve this?
Thanks
I got it now. I need to multiply the processed tensor x (not the raw input_a) with the other input:
x = keras.layers.multiply([input_b, x])
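For reference, here is a minimal self-contained sketch of a two-input functional model (the shapes and layer sizes are illustrative, not the asker's exact values):
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Input

input_a = Input(shape=(8,))            # side-channel input
input_b = Input(shape=(60, 40, 2))     # image-like input
x = Dense(60 * 40, activation="relu")(input_a)
x = tf.reshape(x, [-1, 60, 40])        # -1 preserves the batch dimension
x = tf.expand_dims(x, axis=3)          # (None, 60, 40, 1)
x = tf.repeat(x, repeats=2, axis=3)    # (None, 60, 40, 2), matching input_b
out = keras.layers.multiply([input_b, x])
model = keras.Model(inputs=[input_a, input_b], outputs=out)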

TensorFlow Variable Shape assign

I am trying to create a variable and then assign it the value of my convolution layer.
However, it refuses because it says the shapes are not equal, even though I passed validate_shape=False when creating the variable.
The convolution output shape is [32, 20, 20, 3]. How do I pass this into the variable?
Here is the code:
conv = tf.layers.conv2d_transpose(conv, filters=3, kernel_size=3, strides=(2, 2), padding='same', activation=tf.nn.relu)  # TO ASSIGN LATER
g = tf.Variable(([32, 20, 20]), dtype=tf.float32, validate_shape=False)  # THE VARIABLE
loss = tf.reduce_mean(tf.square(conv))
opt = tf.train.AdamOptimizer().minimize(loss)
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        _, xx, inp, output, target = sess.run([opt, loss, x, conv, y])
        print(xx)
        print("subtraction result:", output[0] - target[0])
        g = g.assign(conv)
        print(g.eval())
I am getting this error:
InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3] rhs shape= [32,20,20,3]
[[Node: Assign_7 = Assign[T=DT_FLOAT, use_locking=false, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_9, conv2d_transpose_98/Relu)]]
Can someone please help fix this?
I think you want:
import numpy as np
import tensorflow as tf
g = tf.Variable(initial_value=np.zeros((32,20,20,3)), expected_shape=(32,20,20,3), dtype=tf.float32)
If you print g you get the correct shape now:
<tf.Variable 'Variable_3:0' shape=(32, 20, 20, 3) dtype=float32_ref>
What you did was this:
g = tf.Variable(initial_value=(32,20,20), dtype=tf.float32, validate_shape=False)
Because you did not name the argument, (32,20,20) was bound positionally, and the first positional argument of tf.Variable is initial_value, as per the documentation below:
__init__(
    initial_value=None,
    trainable=True,
    collections=None,
    validate_shape=True,
    caching_device=None,
    name=None,
    variable_def=None,
    dtype=None,
    expected_shape=None,
    import_scope=None,
    constraint=None
)
The initial_value you declared, (32,20,20), is a vector of shape [3], which is exactly the shape the assign operation is complaining about.
Moral of the story: it's generally less buggy to declare arguments by name if you can. :)
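For completeness, a minimal sketch of creating the variable and actually running the assignment (the conv tensor here is a stand-in for the real layer output):
import numpy as np
import tensorflow as tf

conv = tf.random_normal([32, 20, 20, 3])  # stand-in for the conv2d_transpose output
g = tf.Variable(initial_value=np.zeros((32, 20, 20, 3)), dtype=tf.float32)
assign_op = g.assign(conv)  # build the assign op once, outside any loop

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(assign_op)            # the assignment only happens when it is run
    print(sess.run(g).shape)       # ==> (32, 20, 20, 3)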

Initialize a variable with placeholder as shape

I want to initialize the weights variable with the batch-size dimension included, which will differ between the training and prediction stages. I tried using a placeholder for that, but it doesn't seem to work:
batchsize = tf.placeholder(tf.int32, name='batchsize', shape=[])
...
output, state = tf.nn.dynamic_rnn(multicell, X, dtype=tf.float32, initial_state=inState)
weights = tf.Variable(tf.truncated_normal([batchsize, CELL_SIZE, 1], 0.0, 1.0), name='weights')
bias = tf.Variable(tf.zeros(1), name='bias')
preds = tf.add(tf.matmul(output, weights), bias, name='preds')
loss = tf.reduce_mean(tf.squared_difference(preds, Y_))
train_step = tf.train.AdamOptimizer(LR).minimize(loss)
I can get it to work by specifying batchsize as a constant for the weights variable's dimension instead of a placeholder, but then I get an error when I try to restore the session for the prediction stage, because there the batch size is 1. If I specify the placeholder, I get the error:
ValueError: initial_value must have a shape specified: Tensor("truncated_normal:0", shape=(?, 32, 1), dtype=float32)
This happens even though I do pass the value for the batchsize placeholder into the feed_dict when running this part of the graph.
If I specify the option validate_shape=False while creating the weights variable, that stage of the graph works, but later I get this error in AdamOptimizer:
ValueError: as_list() is not defined on an unknown TensorShape.
How can I get this to work? My ultimate goal is to reduce the Cell-Size dimension of the dynamic_rnn output down to 1 to predict the output at each time-step of the RNN.
One workaround: create the variable with its full (maximum) size, then take the slice corresponding to the actual batch size (using tf.gather):
self.model_X = tf.placeholder(dtype=tf.float32, shape=[None, 100], name='X')
real_batch_size = tf.cast(tf.shape(self.model_X)[0], tf.int32)
self.y_dk = tf.get_variable(name='y_dk', initializer=tf.truncated_normal(shape=[self.num_doc, self.num_topic], mean=0, stddev=tf.truediv(1.0, self.lambda_y)), dtype=tf.float32)
batch_y_dk = tf.reshape(tf.gather(self.y_dk, self.model_batch_data_idx), [real_batch_size, self.num_topic])
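A self-contained sketch of the same idea (the sizes and the batch_idx placeholder are hypothetical):
import tensorflow as tf

num_doc, num_topic = 1000, 50
X = tf.placeholder(tf.float32, shape=[None, 100], name='X')
batch_idx = tf.placeholder(tf.int32, shape=[None], name='batch_idx')  # row indices for this batch
# Full-size variable, created once with a static shape.
y_dk = tf.get_variable(name='y_dk', initializer=tf.truncated_normal([num_doc, num_topic]), dtype=tf.float32)
# Slice out only the rows belonging to the current batch.
batch_y_dk = tf.gather(y_dk, batch_idx)  # shape [actual_batch_size, num_topic]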

Setting the shape of a tensor as the shape of another tensor

I'm trying to run this piece of code:
def somefunc(x, rows, n_hidden):
    vectors = tf.contrib.layers.embed_sequence(x, vocab_size=vocab_size, embed_dim=n_hidden)
    batch_size = tf.shape(vectors)[0]
    state = tf.zeros([batch_size, rows, n_hidden])
    bias = tf.Variable(tf.constant(0.1, shape=[batch_size, 1]))  # Error here!
    ...
x = tf.placeholder(tf.int32, shape=[None, 200])
pred = somefunc(x, 200, 40)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=target))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
I get this error when the function is called (error is for bias shape):
TypeError: int() argument must be a string, a bytes-like object or a number, not 'Tensor'
I tried doing b = tf.Variable(0.1, validate_shape=False), but then I got this error at the optimizer:
ValueError: as_list() is not defined on an unknown TensorShape.
If I remove validate_shape=False, I get a shape error.
I'm very sorry if I'm overlooking something obvious, but could someone tell me where I'm going wrong?
Thank you very much!
The shape argument of the tf.constant() op expects a static shape, so you can't use a tf.Tensor as part of the argument.
Fortunately there is another op that will suffice: tf.fill(), which allows the shape (its dims argument) to be a tf.Tensor. This means you can define bias as:
bias = tf.Variable(tf.fill([batch_size, 1], 0.1), validate_shape=False)
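A quick sketch of how tf.fill() behaves at run time (the placeholder shape is illustrative):
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.int32, shape=[None, 200])
batch_size = tf.shape(x)[0]           # dynamic batch size
bias = tf.fill([batch_size, 1], 0.1)  # value repeated over a runtime-determined shape

with tf.Session() as sess:
    print(sess.run(bias, {x: np.zeros((4, 200), np.int32)}).shape)  # ==> (4, 1)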

get the size of a variable batch dimension

assuming the input to the network is a placeholder with variable batch size, i.e.:
x = tf.placeholder(..., shape=[None, ...])
is it possible to get the shape of x after it has been fed? tf.shape(x)[0] still returns None.
If x has a variable batch size, the only way to get the actual shape is to use the tf.shape() operator. This operator returns a symbolic value in a tf.Tensor, so it can be used as the input to other TensorFlow operations, but to get a concrete Python value for the shape, you need to pass it to Session.run().
x = tf.placeholder(..., shape=[None, ...])
batch_size = tf.shape(x)[0] # Returns a scalar `tf.Tensor`
print x.get_shape()[0] # ==> "?"
# You can use `batch_size` as an argument to other operators.
some_other_tensor = ...
some_other_tensor_reshaped = tf.reshape(some_other_tensor, [batch_size, 32, 32])
# To get the value, however, you need to call `Session.run()`.
sess = tf.Session()
x_val = np.random.rand(37, 100, 100)
batch_size_val = sess.run(batch_size, {x: x_val})
print batch_size_val # ==> "37"
You can also get the static shape of x using x.get_shape().as_list(), whose first element, x.get_shape().as_list()[0], is the batch dimension. Note, however, that for a placeholder with a variable batch size this static value is None; for the actual run-time batch size you still need tf.shape(x)[0].
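To see the difference between the static and dynamic shape (a minimal sketch):
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 100])
print(x.get_shape().as_list())  # ==> [None, 100]  (static shape; batch dim unknown)
batch_size = tf.shape(x)[0]     # dynamic shape; known only once `x` is fed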