tensorflow multiply two layers

I have two inputs to my network. One input is fed through a few linear layers and should then be multiplied elementwise with the other input.
input_a = Input(shape=input_a_shape)
x = Dense(side_channel_speed_output_dimension, activation="relu")(input_a)
x = tf.reshape(x, [input_shape_image[0], input_shape_image[1]])
x = tf.expand_dims(x, input_shape_image[2])
x = tf.repeat(x, repeats=input_shape_image[2], axis=2)
input_b = Input(shape=input_shape_b)
At this stage I would like to multiply input_a and input_b. How do I do that?
I tried:
input = keras.layers.multiply([input_b, input_a])
There I got this error:
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_2:0", shape=(None, 60, 40, 2), dtype=float32) at layer "input_2". The following previous layers were accessed without issue: ['input_1', 'dense', 'tf_op_layer_Reshape', 'tf_op_layer_ExpandDims', 'tf_op_layer_Repeat/Shape', 'tf_op_layer_Repeat/strided_slice', 'tf_op_layer_Repeat/strided_slice_1', 'tf_op_layer_Repeat/ExpandDims', 'tf_op_layer_Repeat/Tile', 'tf_op_layer_Repeat/concat']
I also tried just tf.multiply(a, b). It does not work either.
Does someone know how to solve this?
Thanks

I got it now. The multiplication has to use the processed tensor x (the output of the repeat), not the raw input_a; that is what caused the "Graph disconnected" error. I need to use this function:
x = keras.layers.multiply([input_image, x])
(Here input_image is the second input, input_b above.)
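For reference, here is a minimal self-contained sketch of the whole pattern. The concrete shapes (a 10-dimensional side channel and a 60x40x2 image, the latter taken from the error message) are assumptions for illustration:

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Reshape, multiply
from tensorflow.keras.models import Model

input_a = Input(shape=(10,))         # side-channel input (assumed size)
input_b = Input(shape=(60, 40, 2))   # image-like input, shape taken from the error message

x = Dense(60 * 40, activation="relu")(input_a)
x = Reshape((60, 40, 1))(x)          # Reshape leaves the batch dimension implicit
x = tf.repeat(x, repeats=2, axis=3)  # replicate along the channel axis to (60, 40, 2)
out = multiply([input_b, x])         # elementwise product of the two branches

model = Model(inputs=[input_a, input_b], outputs=out)
model.summary()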

Related

What is wrong with the simple code in Keras below?

I have been struggling for the last hour to understand what I am doing wrong. I am a novice with neural networks, but this is not my first code.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def simple_model(lr=0.1):
    X = Input(shape=(6144,))
    out = Dense(1)(X)
    model = Model(inputs=X, outputs=out)
    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    model.compile(optimizer=opt, loss='mean_squared_error')
    model.summary()
    return model

mod = simple_model()
a = np.zeros(6144)
v = mod.predict(a)
Running this, I get the following error:
WARNING:tensorflow:Model was constructed with shape (None, 6144) for input Tensor("input_1:0", shape=(None, 6144), dtype=float32), but it was called on an input with incompatible shape (32, 1).
......
ValueError: Input 0 of layer dense is incompatible with the layer: expected axis -1 of input shape to have value 6144 but received input with shape [32, 1]
Where does this [32, 1] come from?!
I am sure there is some silly mistake in my code, but I can't see it :(
P.S. It does compile the model and print the summary before throwing the error.
mod = simple_model()
a = np.zeros(6144)
# Add this line:
a = np.expand_dims(a, axis=0)
v = mod.predict(a)
The reason your error appears is that Keras + TensorFlow only allow predictions on batches. A plain array of shape (6144,) is interpreted as 6144 scalar samples, which predict then feeds through in chunks of the default batch size of 32; that is where the [32, 1] comes from. When we use the expand_dims function, we create a batch of size 1, i.e. a single sample with 6144 features.
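A quick illustrative check of the shapes involved:

import numpy as np
a = np.zeros(6144)
print(a.shape)                          # (6144,)   -> treated as 6144 scalar samples
print(np.expand_dims(a, axis=0).shape)  # (1, 6144) -> one sample with 6144 features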

Add Placeholder to layer

I have a Tensorflow layer with 2 nodes. These are the output nodes of another 2 larger hidden layers. Now I want to add 2 new nodes to this layer, so I end up with 4 nodes in total, and do some last computation. The added nodes are implemented as placeholders so far, and have a dynamic shape depending on the batch size. Here is a sketch of the net: [sketch omitted]
Now I want to concatenate Nodes 3 and 4 to the nodes 1 and 2 of the previously computed layer. I know there is tf.concat for this, but I don't understand how to do this correctly.
How do I add placeholders with the same batch size as the original net input to a specific layer?
EDIT:
When I use tf.concat over axis=1, I end up with the following problem:
z = tf.placeholder(tf.float32, shape=[None, 2])
Weight_matrix = weight_variable([4, 2])
bias = bias_variable([4, 2])
concat = tf.concat((dnn_out, z), 1)
h_fc3 = tf.nn.relu(tf.matmul(concat, Weight_matrix) + bias)
Adding the bias to the tf.matmul result throws an InvalidArgumentError: Incompatible shapes: [20,2] vs. [4,2].
Since your data is batched, probably over the first dimension, you need to concatenate over the second (axis=1). Note also that the bias must match the matmul output: tf.matmul(concat, Weight_matrix) with a [4, 2] weight matrix produces shape [batch, 2], so the bias should have shape [2] rather than [4, 2], which is exactly what the Incompatible shapes: [20,2] vs. [4,2] error complains about:
import tensorflow as tf
import numpy as np

dnn_output = tf.placeholder(tf.float32, (None, 2))  # replace with your DNN(input) result
additional_nodes = tf.placeholder(tf.float32, (None, 2))

concat = tf.concat((dnn_output, additional_nodes), axis=1)
print(concat)
# > Tensor("concat:0", shape=(?, 4), dtype=float32)

dense_output = tf.layers.dense(concat, units=2)
print(dense_output)
# > Tensor("dense/BiasAdd:0", shape=(?, 2), dtype=float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(dense_output, feed_dict={dnn_output: np.ones((5, 2)),
                                            additional_nodes: np.zeros((5, 2))}))

How to shape a Tensor array?

I have lately been vexed by the following error message:
ValueError: Cannot feed value of shape (2455040,) for Tensor 'Placeholder:0', which has shape '(2455040, ?)'
Which is being produced from running the following code:
NUMCLASSES = 16
NUMPIXELS = 959 * 640 * 4
# set up to feed an array of images [images, size_of_image]
x = tf.placeholder(tf.float32, [NUMPIXELS, None])
....deletia....
# Define loss and optimizer..why is this 2d?
y_ = tf.placeholder(tf.float32, [None, NUMCLASSES])
sess = tf.InteractiveSession()
tf.global_variables_initializer().run(session=sess)
tl = get_tensor_list()
for f, n in tl:
    str = '/users/me/downloads/train/' + f
    mm = Image.open(str)
    mm = mm.convert('F')
    mma = np.array(mm)
    i = mma.flatten()  # now this is an array of floats of size NUMPIXELS
    sess.run(train_step, feed_dict={x: i, y_: n})  # <<DEATH
Somehow, that array is getting a shape that tf does not like: (x,) when it wants (x, ?). How do I satisfy the tensorgods in this case? The tensor must be what it must be for other mathematical reasons not discussed.
Reshaping the array might help:
i = mma.flatten().reshape((NUMPIXELS, 1))
The error happens because the two tensors have different ranks: a tensor with shape (2455040,) has rank 1, while a tensor with shape (2455040, ?) has rank 2.
You can do this:
x = tf.placeholder(tf.float32, [None])
x = tf.reshape(x, [NUMPIXELS,-1])
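As a sanity check, here is a minimal TF1-style sketch (with a zero array standing in for real image data) showing that the rank-2 feed is accepted while the rank-1 feed is not:

import numpy as np
import tensorflow as tf

NUMPIXELS = 959 * 640 * 4
x = tf.placeholder(tf.float32, [NUMPIXELS, None])
i = np.zeros(NUMPIXELS, dtype=np.float32)  # shape (2455040,): rank 1, rejected
i2 = i.reshape((NUMPIXELS, 1))             # shape (2455040, 1): rank 2, accepted
with tf.Session() as sess:
    print(sess.run(tf.shape(x), feed_dict={x: i2}))  # [2455040       1]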

Initialize a variable with placeholder as shape

I want to initialize the weights variable including the batch-size dimension, which will be different between the training and prediction stages. I tried using a placeholder for that, but it doesn't seem to work:
batchsize = tf.placeholder(tf.int32, name='batchsize', shape=[])
...
output, state = tf.nn.dynamic_rnn(multicell, X, dtype=tf.float32, initial_state=inState)
weights = tf.Variable(tf.truncated_normal([batchsize, CELL_SIZE, 1], 0.0, 1.0), name='weights')
bias = tf.Variable(tf.zeros(1), name='bias')
preds = tf.add(tf.matmul(output, weights), bias, name='preds')
loss = tf.reduce_mean(tf.squared_difference(preds, Y_))
train_step = tf.train.AdamOptimizer(LR).minimize(loss)
I can get it to work by specifying batchsize as a constant for the weights variable dimension, rather than a placeholder, but this way I get an error when I try to restore the session for the prediction stage, because there the batch size is 1. If I specify the placeholder, I get the error:
ValueError: initial_value must have a shape specified: Tensor("truncated_normal:0", shape=(?, 32, 1), dtype=float32)
Even though I do pass the value for the batchsize placeholder into the feed_dict when running this part of the graph.
If I specify the option validate_shape=False while creating the weights variable, that stage of the graph works, but later I get this error in AdamOptimizer:
ValueError: as_list() is not defined on an unknown TensorShape.
How can I get this to work? My ultimate goal is to reduce the Cell-Size dimension of the dynamic_rnn output down to 1 to predict the output at each time-step of the RNN.
Create the variable at its full (maximum) size, then take the slice of it that corresponds to the current batch size (using tf.gather):
self.model_X = tf.placeholder(dtype=tf.float32, shape=[None, 100], name='X')
real_batch_size = tf.cast(tf.shape(self.model_X)[0], tf.int32)
self.y_dk = tf.get_variable(name="y_dk", dtype=tf.float32,
                            initializer=tf.truncated_normal(
                                shape=[self.num_doc, self.num_topic],
                                mean=0, stddev=tf.truediv(1.0, self.lambda_y)))
batch_y_dk = tf.reshape(tf.gather(self.y_dk, self.model_batch_data_idx),
                        [real_batch_size, self.num_topic])
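A stripped-down, self-contained sketch of the same pattern; MAX_ROWS, DIM and the index placeholder are made-up names for illustration:

import tensorflow as tf

MAX_ROWS, DIM = 1000, 32
# Allocate the variable at its full (maximum) size once.
full_weights = tf.get_variable("full_weights",
                               initializer=tf.truncated_normal([MAX_ROWS, DIM]))
# Feed the row indices of the current batch at run time.
batch_idx = tf.placeholder(tf.int32, [None], name="batch_idx")
batch_weights = tf.gather(full_weights, batch_idx)  # shape (batch_size, DIM)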

tensorflow constant with variable size

I have a variable batch size, so all of my inputs are of the form
tf.placeholder(tf.float32, shape=(None, ...))
to accept variable batch sizes. However, how might you create a constant value with a variable batch size? The issue is with this line:
log_probs = tf.constant(0.0, dtype=tf.float32, shape=[None, 1])
It is giving me an error:
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
I'm sure it is possible to initialize a constant tensor with a variable batch size; how might I do so?
I've also tried the following:
tf.constant(0.0, dtype=tf.float32, shape=[-1, 1])
I get this error:
ValueError: Too many elements provided. Needed at most -1, but received 1
A tf.constant() has fixed size and value at graph construction time, so it probably isn't the right op for your application.
If you are trying to create a tensor with a dynamic size and the same (constant) value for every element, you can use tf.fill() and tf.shape() to create an appropriately-shaped tensor. For example, to create a tensor t that has the same shape as input and the value 0.5 everywhere:
input = tf.placeholder(tf.float32, shape=(None, ...))
# `tf.shape(input)` takes the dynamic shape of `input`.
t = tf.fill(tf.shape(input), 0.5)
As Yaroslav mentions in his comment, you may also be able to use (NumPy-style) broadcasting to avoid materializing a tensor with dynamic shape. For example, if input has shape (None, 32) and t has shape (1, 32) then computing tf.mul(input, t) will broadcast t on the first dimension to match the shape of input.
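A short sketch of the broadcasting alternative, with an assumed feature size of 32 (tf.mul was renamed tf.multiply in later TensorFlow versions):

import tensorflow as tf

input = tf.placeholder(tf.float32, shape=(None, 32))
t = tf.constant(0.5, dtype=tf.float32, shape=(1, 32))
scaled = tf.multiply(input, t)  # t broadcasts along the batch dimension; shape (?, 32)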
Suppose you want to do something with log_probs: for example, raise a tensor v to the power log_probs, where the shape of log_probs should vary with the shape of v.
v = tf.placeholder(tf.float32, shape=(None, 1))
log_probs = tf.constant(0.0, dtype=tf.float32, shape=[None, 1])  # fails: None is not a valid constant shape
result = tf.pow(v, log_probs)
However, you cannot construct the constant log_probs this way. What you can do, firstly, is construct the tf.constant with shape=[1]: log_prob = tf.constant(0.0, dtype=tf.float32, shape=[1]). Then use tf.map_fn() to apply the pow operation to each element of v:
v = tf.placeholder(tf.float32, shape=(None, 1))
log_prob = tf.constant(0.0, dtype=tf.float32, shape=[1])
result = tf.map_fn(lambda ele: tf.pow(ele, log_prob), v)
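That said, since log_prob has shape [1], NumPy-style broadcasting (as in the previous answer) should also let you skip tf.map_fn entirely:

result = tf.pow(v, log_prob)  # log_prob of shape [1] broadcasts against v of shape (?, 1)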