tensorflow 1.x reshape placeholder

In TF 1.x, as shown below, x is an input with shape [None, 784] used to train my example model.
It shows up as [?, 784] in TensorBoard.
For some reason I have to reshape x to [1, 784] to predict, i.e. x needs to look like [1, 784] instead of [?, 784] when predicting after the model is trained.
Any suggestions?
with tf.name_scope('Input_Layer'):
    x = tf.placeholder("float", shape=[None, 784], name="x")
    x_image = tf.reshape(x, [-1, 28, 28, 1])
    ...

The "?" in TensorFlow indicates that the dimension is not fixed, so it can vary from call to call. The prediction step expects a tensor of shape [n_examples, 784], where n_examples is the number of examples.
In your case, since you only want to predict one example, you need to reshape it to [1, 784], i.e. n_examples = 1.
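For example, a minimal sketch of feeding a single example (the example data below is hypothetical, and sess, x and the prediction op y are assumed to come from your trained graph):
import numpy as np
# Hypothetical single example: one flattened 28x28 image as a 784-long vector.
single_example = np.random.rand(784).astype(np.float32)
# Add a leading batch dimension so the feed matches the [None, 784] placeholder.
batch_of_one = single_example.reshape(1, 784)  # shape (1, 784)
# prediction = sess.run(y, feed_dict={x: batch_of_one})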

Related

How can I multiply a tensor with an unknown dimension to a tensorflow variable?

I'm working in Keras (TensorFlow 2). I'd like to multiply each element of a tensor by its own trainable weight. Let's say that my input tensor is 1D, with 10 elements; so I define the input as a Keras input tensor and the weights as a tf.Variable, and I try to use the Keras Multiply layer, thus:
import tensorflow as tf
inputs = tf.keras.layers.Input(shape=(10), name='inputs')
weights = tf.Variable(tf.random.normal([10]), name='weights')
outputs = tf.keras.layers.Multiply()([inputs, weights])
Now when I inspect the dimensions they are:
inputs: shape=(None, 10)
weights: shape=(10,)
outputs: shape=(10, 10)
The input has a None dimension for the batch size, which is what I expect and want. However, I expected outputs to have shape=(None, 10). Instead, the initial dimension for the batch size seems to have taken a fixed size of 10. How should I correct this?
You need to broadcast weights along dimension 0, i.e. the weights need an explicit leading dimension of size 1 so that they can be broadcast across the (variable-sized) batch dimension.
That is, weights must have the shape (1, 10), not (10,).
This can be done using:
weights = tf.Variable(tf.random.normal([1, 10]), name='weights')
or
weights = tf.Variable(tf.random.normal([10]), name='weights')
...
weights = tf.expand_dims(weights, axis=0)
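Putting it together, a minimal sketch of the corrected snippet (names reused from the question; the printed shape is what the broadcast is expected to give):
import tensorflow as tf
inputs = tf.keras.layers.Input(shape=(10,), name='inputs')
# A leading dimension of size 1 lets the weights broadcast over the batch.
weights = tf.Variable(tf.random.normal([1, 10]), name='weights')
outputs = tf.keras.layers.Multiply()([inputs, weights])
print(outputs.shape)  # expected: (None, 10)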

TensorFlow post-LSTM fully connected layer outputs return the same values as each other

I was trying to train a sequence-to-sequence LSTM model on a dataset with three labels: [1, 0] for detection of class 1, [0, 1] for detection of class 2, and [0, 0] for detection of nothing. After getting the outputs from the LSTM network, I applied a fully connected layer to each cell's output in the following way:
outputs, state = tf.nn.dynamic_rnn(cell, input, dtype=tf.float32)
# Shape of outputs is [batch_size, n_time_steps, n_hidden]
# As matmul works only on matrices, reshape to get the
# time dimension into the batch dimension
outputs = tf.reshape(outputs, [-1, n_hidden])
# Shape is [batch_size * n_time_steps, n_hidden]
w = tf.Variable(tf.truncated_normal(shape=[n_hidden, 2], stddev=0.1))
b = tf.Variable(tf.constant(0.1, shape=[2]))
logit = tf.add(tf.matmul(outputs, w), b, name='logit')
# Reshape back to [batch_size, n_time_steps, 2]
logit = tf.reshape(logit, [batch_size, -1, 2])
On the output, I apply tf.nn.sigmoid_cross_entropy_with_logits and reduce the mean. The model seems to work just fine, achieving high accuracy and recall, except that in almost all cases it outputs either [0, 0] or [1, 1]. The two logit outputs from the fully connected layer always have very similar values (but not the same). This effectively puts a hard cap of 50% on precision, which the model converges to (but not a fraction of a percent above).
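The loss described above would presumably look something like this (labels is a hypothetical placeholder matching the shape of logit):
labels = tf.placeholder(tf.float32, shape=[batch_size, None, 2], name='labels')
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logit))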
Now, my intuition would tell me that something must be wrong with the training step and both fully connected outputs are trained on the same data, but curiously enough when I replace my own implementation with the prepackaged one from tf.contrib:
outputs, state = tf.nn.dynamic_rnn(cell, input, dtype=tf.float32)
logit = tf.contrib.layers.fully_connected(outputs, 2, activation_fn=None)
without changing a single other thing, the model starts training properly. Now, the obvious solution would be to just use that implementation, but why doesn't the first one work?

regarding reshape a multi-dimensional tensor into [-1, n]

While reading a TensorFlow segmentation implementation, I am trying to figure out what the following code aims to do.
A tensor x is defined as self.x = tf.placeholder("float", shape=[None, None, None, n_label]).
Later, one function uses a transformed tensor x1, which is defined as x1 = tf.reshape(self.x, [-1, n_label]).
My understanding is that tf.reshape(self.x, [-1, n_label]) should reshape the x tensor into a 1-D vector.
But I am somewhat confused about x being defined with shape=[None, None, None, n_label] and x1 being transformed this way. What should x1 actually look like, and why do it?
None means we don't want to specify the dimension when creating the graph, but rather want to determine it at runtime. For instance, this is useful when you want to use different minibatch sizes for training and for inference.
Reshaping with -1 for some dimension just means 'preserve the total size of the tensor'. For example, tf.reshape(x, [-1, 2]) for x of shape [3, 4, 2] would produce a new tensor of shape [12, 2].
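A quick sketch of that example:
import tensorflow as tf
x = tf.zeros([3, 4, 2])      # arbitrary tensor of shape [3, 4, 2]
x1 = tf.reshape(x, [-1, 2])  # the -1 is inferred as 3 * 4 = 12
print(x1.shape)              # (12, 2)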

Tensorflow reshape tensor gives None dimension

I have used the model described here on the 0.6.0 branch. The code can be found here. I have made some minor changes to the linked code.
In my code I create two models, one for training and one for validation, very similar to how it is done in the TensorFlow tutorial.
with tf.variable_scope("model", reuse=None, initializer=initializer):
    m = PTBModel_User(is_training=True, config=config, name='Training model')
with tf.variable_scope("model", reuse=True, initializer=initializer):
    mtest = PTBModel_User(is_training=False, config=config_valid, name='Validation model')
The first model, the one for training, seems to be created just fine, but the second, used for validation, does not. The output gets a None dimension! The line I'm referring to is line 134 in the linked code:
output = tf.reshape(tf.concat(1, outputs), [-1, size])
I've added these lines right after the reshape of the output:
output_shape = output.get_shape()
print("Model num_steps:", num_steps)
print("Model batch_size:", batch_size)
print("Output dims", output_shape[0], output_shape[1])
and that gives me this:
Model num_steps: 400
Model batch_size: 1
Output dims Dimension(None) Dimension(650)
This problem only happens with the 'validation model', not with the 'training model'. For the 'training model' I get expected output:
Model num_steps: 400
Model batch_size: 2
Output dims Dimension(800) Dimension(650)
(Note that with the 'validation model' I use batch_size=1 instead of the batch_size=2 that I use for the training model.)
From what I understand, using -1 as input to the reshape function will figure the output shape out automagically! But then why do I get None? Nothing in my config fed to the model has a None value.
Thank you for all the help and tips!
TL;DR: A dimension being None simply means that shape inference could not determine an exact shape for the output tensor at graph-building time. When you run the graph, the tensor will have the appropriate run-time shape.
If you're not interested in how shape inference works, you can stop reading now.
Shape inference applies local rules, based on a "shape function" that takes the shapes of the inputs to an operation and computes (possibly incomplete) shapes for the outputs of an operation. To figure out why tf.reshape() gives an incomplete shape, we have to look at its inputs, and work backwards:
The shape argument to tf.reshape() includes a [-1], which means "figure the output shape automagically" based on the shape of the tensor input.
The tensor input is the output of tf.concat() on the same line.
The inputs to tf.concat() are computed by a tf.mul() in BasicLSTMCell.__call__(). The tf.mul() op multiplies the result of a tf.tanh() and a tf.sigmoid() op.
The tf.tanh() op produces an output of size [?, hidden_size], and the tf.sigmoid() op produces an output of size [batch_size, hidden_size].
The tf.mul() op performs NumPy-style broadcasting. A dimension will only be broadcast if it has size 1. Consider three cases where we compute tf.mul(x, y):
If x has shape [1, 10], and y has shape [5, 10], then broadcasting will happen, and the output shape will be [5, 10].
If x has shape [1, 10], and y has shape [1, 10], then there will be no broadcasting, and the output shape will be [1, 10].
However, if x has shape [1, 10], and y has shape [?, 10], there is insufficient static information to tell whether broadcasting will happen (even though we happen to know that case 2 applies at runtime).
Therefore, when batch_size is 1, the tf.mul() op produces an output with the shape [?, hidden_size]; but when batch_size is greater than 1, the output shape is [batch_size, hidden_size].
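A small sketch of case 3 in TF 1.x graph mode (the shapes here are illustrative; tf.mul() was renamed tf.multiply() in later releases):
x = tf.placeholder(tf.float32, shape=[1, 10])
y = tf.placeholder(tf.float32, shape=[None, 10])
z = tf.multiply(x, y)
print(z.get_shape())  # (?, 10): broadcasting cannot be resolved statically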
Where shape inference breaks down, it can be appropriate to use the Tensor.set_shape() method to add information. This would potentially be useful in the BasicLSTMCell implementation, where we know more than it is possible to infer about the shapes of the outputs.
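A minimal sketch of set_shape(), with illustrative names matching the numbers above:
# 'output' was inferred as [?, 650], but we know its real shape statically.
output.set_shape([batch_size * num_steps, size])
print(output.get_shape())  # (400, 650) for batch_size=1, num_steps=400, size=650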

InceptionV3 and transfer learning with tensorflow

I would like to do transfer learning from the InceptionV3 graph in the TensorFlow example. Following the classify_image example and the operator and tensor names given here https://github.com/AKSHAYUBHAT/VisualSearchServer/blob/master/notebooks/notebook_network.ipynb I can create my graph. But when I feed a batch of images of size (100, 299, 299, 3) into the pre-computed Inception graph, I get the following shape error at the pool_3 layer:
ValueError: Cannot reshape a tensor with 204800 elements to shape [1, 2048] (2048 elements)
It seems that this InceptionV3 graph doesn't accept a batch of images as input. Am I wrong?
Actually it works for transfer learning if you extract the right thing. There is no problem feeding a batch of images in the shape of [N, 299, 299, 3] as ResizeBilinear:0 and then using the pool_3:0 tensor. It's the reshaping afterwards that breaks, but you can reshape yourself (you'll have your own layers afterwards anyway). If you wanted to use the original classifier with a batch, you could add your own reshaping on top of pool_3:0 and then add the softmax layer, reusing the weights/biases tensors of the original softmax.
TLDR: With double_img being a stack of two images with shape (2, 299, 299, 3) this works:
pooled_2 = sess.graph.get_tensor_by_name("pool_3:0").eval(session=sess, feed_dict={'ResizeBilinear:0':double_img})
pooled_2.shape
# => (2, 1, 1, 2048)
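For instance, a minimal sketch of the "reshape yourself" step (sess is assumed to hold the imported Inception graph, as above; the batch here is hypothetical):
import numpy as np
double_img = np.zeros((2, 299, 299, 3), dtype=np.float32)  # hypothetical batch
pooled = sess.run('pool_3:0', feed_dict={'ResizeBilinear:0': double_img})
# pooled has shape (2, 1, 1, 2048); flatten it for your own layers on top.
features = pooled.reshape(-1, 2048)  # shape (2, 2048)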
You're not wrong. This seems like a very reasonable feature request, so I've opened a ticket for it on github. Follow that for updates.
Something like this should do it:
with g.as_default():
    inputs = tf.placeholder(tf.float32, shape=[batch_size, 299, 299, 3],
                            name='input')
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, end_points = inception.inception_v3(inputs,
            num_classes=FLAGS.num_classes, is_training=False)
    variables_to_restore = slim.get_variables_to_restore(exclude=exclude)
    sess = tf.Session()
    saver = tf_saver.Saver(variables_to_restore)
Then you should be able to call the operation:
sess.run("pool_3:0", feed_dict={'ResizeBilinear:0': images})
etarion made a very good point. However, we don't have to reshape it ourselves; instead, we can change the value of the shape that the reshape op takes as input. I.e.,
input_tensor_name = 'import/input:0'
shape_tensor_name = 'import/InceptionV3/Predictions/Shape:0'
output_tensor_name = 'import/InceptionV3/Predictions/Reshape_1:0'
output_tensor = tf.import_graph_def(
    graph.as_graph_def(),
    input_map={input_tensor_name: image_batch,
               shape_tensor_name: [batch_size, num_class]},
    return_elements=[output_tensor_name])
These tensor names are based on inception_v3_2016_08_28_frozen.pb.