TF one hot encode tensor object - tensorflow

I am running a simple logistic regression following the MNIST simple example.
My code:
x = np.array(xHotdog + xNotHotdog)
y = np.array(yHotdog + yNotHotdog)
print("y shape before: "+str(y.shape))
y = tf.one_hot(indices=y, depth=2)
print("y shape after: "+str(y.shape))
y.eval()
return x,y
Later I run:
sess.run([optimizer, cost], feed_dict={x: batch_xs,y: batch_ys})
Getting the error:
TypeError: The value of a feed cannot be a tf.Tensor object.
Acceptable feed values include Python scalars, strings, lists, numpy
ndarrays, or TensorHandles. For reference, the tensor object was
Tensor("one_hot:0", shape=(6457, 2), dtype=float32) which was passed
to the feed with key Tensor("Placeholder_1:0", shape=(?, 2),
dtype=float32).

You are passing a Tensor object to feed_dict, which raises an error. As mentioned in the docs:
The optional feed_dict argument allows the caller to override the
value of tensors in the graph.
feed_dict: A dictionary that maps graph elements to values
So you need some values for the feed_dict. As shown in the error:
Acceptable feed values include Python scalars, strings, lists, numpy
ndarrays, or TensorHandles
Option 1: In your case you pass a Tensor object, which causes this exception. The issue can be resolved by feeding the raw integer labels into a placeholder and building the one-hot encoding inside the graph:
y_labels = tf.placeholder(tf.int32, [None], name="Y")  # raw class indices
y = tf.one_hot(indices=y_labels, depth=2)              # one-hot encoding happens in the graph
...
sess.run([optimizer, cost], feed_dict={x: batch_xs, y_labels: batch_ys})
Option 2: You can also evaluate the one-hot tensor first to get a NumPy array (in this case feed_dict is not needed):
import numpy as np
import tensorflow as tf

a = np.random.randint(1, 10, [20])
b = tf.one_hot(a, depth=10)
with tf.Session() as sess:
    print(a)
    print(sess.run([b]))
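Applied to the question's setup, a minimal sketch of Option 2 could look like the following; the label values, the y_onehot name, and the commented training call are illustrative assumptions, not the asker's actual code:

import numpy as np
import tensorflow as tf

# Assumed vector of 0/1 class indices (hotdog / not hotdog).
y_indices = np.array([0, 1, 1, 0, 1])

# Either build the one-hot op in the graph and evaluate it once...
y_onehot_op = tf.one_hot(indices=y_indices, depth=2)
with tf.Session() as sess:
    y_onehot = sess.run(y_onehot_op)  # now a numpy ndarray, safe to feed

# ...or skip TensorFlow entirely and one-hot encode with NumPy.
y_onehot_np = np.eye(2, dtype=np.float32)[y_indices]

# Either array is an acceptable feed value, e.g.
# sess.run([optimizer, cost], feed_dict={x: batch_xs, y: y_onehot})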

Related

TensorFlow Variable Shape assign

I am trying to create a variable and then trying to assign it with the value of my convolution layer.
However it is refusing because it is saying shapes are not equal even though I have passed validate_shape=False while creating the variable.
The convolution shape is [32,20,20,3]. How do I pass this into the variable?
Here is the relevant code:
conv = tf.layers.conv2d_transpose(conv, filters=3, kernel_size=3, strides=(2,2), padding='same', activation=tf.nn.relu)  # TO ASSIGN LATER
g = tf.Variable(([32,20,20]), dtype=tf.float32, validate_shape=False)  # THE VARIABLE
loss = tf.reduce_mean(tf.square(conv))
opt = tf.train.AdamOptimizer().minimize(loss)
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        _, xx, inp, output, target = sess.run([opt, loss, x, conv, y])
        print(xx)
        print("subtraction result:", output[0] - target[0])
        g = g.assign(conv)
        print(g.eval())
I am getting this error:
InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3] rhs shape= [32,20,20,3]
[[Node: Assign_7 = Assign[T=DT_FLOAT, use_locking=false, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_9, conv2d_transpose_98/Relu)]]
Can someone please help fix this?
I think you want:
import numpy as np
import tensorflow as tf
g = tf.Variable(initial_value=np.zeros((32,20,20,3)), expected_shape=(32,20,20,3), dtype=tf.float32)
If you print g you get the correct shape now:
<tf.Variable 'Variable_3:0' shape=(32, 20, 20, 3) dtype=float32_ref>
What you did was this:
g = tf.Variable(initial_value=(32,20,20), dtype=tf.float32, validate_shape=False)
By not stating expected_shape, you passed (32,20,20) as a positional argument, and the first positional argument of tf.Variable is initial_value, as per the documentation below:
__init__(
    initial_value=None,
    trainable=True,
    collections=None,
    validate_shape=True,
    caching_device=None,
    name=None,
    variable_def=None,
    dtype=None,
    expected_shape=None,
    import_scope=None,
    constraint=None
)
The initial_value you declared, (32,20,20), is a vector of length 3, which is exactly the lhs shape [3] that the assign operation is complaining about.
Moral of the story: it's generally less buggy to declare arguments by name if you can. :)
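As a follow-up, here is a minimal sketch of how the corrected variable and the assign op could fit together; the random stand-in for conv and the surrounding session code are illustrative assumptions, only the shapes follow the question:

import numpy as np
import tensorflow as tf

# Stand-in for the question's deconvolution output, shape (32, 20, 20, 3).
conv = tf.random_normal([32, 20, 20, 3])

# Variable with a zero initial value of the matching shape.
g = tf.Variable(initial_value=np.zeros((32, 20, 20, 3)), dtype=tf.float32)

# The assignment is itself an op in the graph; running it copies conv into g.
assign_op = g.assign(conv)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(assign_op)
    print(sess.run(g).shape)  # (32, 20, 20, 3)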

How to shape a Tensor array?

I have lately been vexed by the following error message:
ValueError: Cannot feed value of shape (2455040,) for Tensor 'Placeholder:0', which has shape '(2455040, ?)'
Which is being produced from running the following code:
NUMCLASSES=16
NUMPIXELS=959*640*4
# set up to feed an array of images [images, size_of_image]
x = tf.placeholder(tf.float32, [NUMPIXELS,None])
....deletia....
# Define loss and optimizer..why is this 2d?
y_ = tf.placeholder(tf.float32, [None,NUMCLASSES])
sess = tf.InteractiveSession()
tf.global_variables_initializer().run(session=sess)
tl = get_tensor_list()
for f, n in tl:
    str = '/users/me/downloads/train/' + f
    mm = Image.open(str)
    mm = mm.convert('F')
    mma = np.array(mm)
    i = mma.flatten()  # now this is an array of floats of size NUMPIXELS
    sess.run(train_step, feed_dict={x: i, y_: n})  # <<DEATH
Somehow, that array is getting a shape that tf does not like [(x,) when it wants (x,?)]. How to satisfy the tensorgods in this case? The tensor must be what it must be for other mathematical reasons not discussed.
Reshaping the array might help:
i = mma.flatten().reshape((NUMPIXELS,1))
The error happens because the two tensors have different ranks: tensor with shape (2455040,) has rank 1, while tensor with shape (2455040,?) has rank 2.
Alternatively, you can accept a rank-1 feed and reshape it inside the graph:
x_flat = tf.placeholder(tf.float32, [None])
x = tf.reshape(x_flat, [NUMPIXELS, -1])  # feed x_flat, use x in the model
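To make the rank mismatch concrete, a quick NumPy-only sketch; the size is shrunk from 959*640*4 purely for illustration:

import numpy as np

NUMPIXELS = 8  # shrunk stand-in for 959*640*4
flat = np.arange(NUMPIXELS, dtype=np.float32)
print(flat.shape)                      # (8,)   -> rank 1, rejected by a (NUMPIXELS, ?) placeholder
column = flat.reshape((NUMPIXELS, 1))
print(column.shape)                    # (8, 1) -> rank 2, compatible with (NUMPIXELS, ?)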

TensorFlow unexpected behaviour

I am running the following code:
import tensorflow as tf
sess = tf.InteractiveSession()
y = tf.Variable(initial_value=[1,2])
sess.run(y, feed_dict={y: [100,2]})
Gives:
[100,2]
However, after that:
sess.run(y)
Gives the original value of y: [1,2].
Why doesn't
sess.run(y, feed_dict={y: [100,2]})
update the value of y and save it?
Because feed_dict overrides the values of the keys of the dictionary.
With the statement:
sess.run(y, feed_dict={y: [100,2]})
you're telling tensorflow to replace the values of y with [100, 2] for the current computation. This is not an assignment.
Therefore, the next call
sess.run(y)
fetches the original variable and uses it.
If you want to assign a value to a variable, you have to define this operation in the computational graph, using tf.assign.
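For completeness, a minimal sketch of that assign approach, reusing the question's variable y and the value [100, 2]:

import tensorflow as tf

sess = tf.InteractiveSession()
y = tf.Variable(initial_value=[1, 2])
assign_y = tf.assign(y, [100, 2])   # the assignment is an op in the graph

sess.run(tf.global_variables_initializer())
sess.run(assign_y)                  # this actually updates the variable
print(sess.run(y))                  # [100   2]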
If you want to use a feed dictionary, initialize a placeholder instead of a variable and define the output.
As an example (in the same style as your question code),
import tensorflow as tf
import numpy as np
sess = tf.InteractiveSession()
inputs = tf.placeholder(tf.int32, shape = (2,2))
output = tf.matmul(inputs, tf.transpose(inputs))
test_input = np.array([[10,2], [4,4]])
print(test_input.shape)
# (2,2)
sess.run(output, feed_dict = {inputs : test_input})
# array([[104, 48], [48, 32]], dtype=int32)
If you just want to change the value of a variable, look at nessuno's answer.

Feeding dtype np.float32 to TensorFlow placeholder

I am trying to feed a numpy ndarray of type float32 to a TensorFlow placeholder, but it's giving me the following error:
You must feed a value for placeholder tensor 'Placeholder' with dtype float
My placeholders are defined as:
n_steps = 10
n_input = 13
n_classes = 1201
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
And the line that gives me the above error is:
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
where my batch_x and batch_y are numpy ndarrays of dtype('float32'). The following are the types that I printed using pdb:
(Pdb)batch_x.dtype
dtype('float32')
(Pdb)x.dtype
tf.float32
I have also tried type-casting batch_x and batch_y to tf.float32, since x seems to be of dtype tf.float32, but running the code with type-casting:
sess.run(optimizer, feed_dict={x: tf.to_float(batch_x), y: tf.to_float(batch_y)})
gives the following error:
TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays.
How should I feed the placeholders, and what type should I use?
Any help/advice will be much appreciated!
For your first problem, are you sure that batch_y is also float32? You only provide the trace of the batch_x type, and batch_y is more likely to be integer, since it appears to be a one-hot encoding of your classes.
For the second problem, the mistake is that you use tf.to_float, which is a tensor operation, on a regular numpy array. You should use a numpy cast instead:
sess.run(optimizer, feed_dict={x: batch_x.astype(np.float32), y: batch_y.astype(np.float32)})
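A small sketch of the dtype check the first paragraph suggests; the batch size and the way batch_y is built here are made-up stand-ins, only n_steps, n_input and n_classes follow the question:

import numpy as np

n_steps, n_input, n_classes = 10, 13, 1201

batch_x = np.random.rand(4, n_steps, n_input).astype(np.float32)
batch_y = np.eye(n_classes)[np.random.randint(0, n_classes, size=4)]  # one-hot labels

print(batch_x.dtype)   # float32
print(batch_y.dtype)   # float64 here -- or int, depending on how the labels were built

# Cast on the NumPy side before feeding; no TensorFlow ops involved.
batch_y = batch_y.astype(np.float32)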

Cannot make array when using tensorflow

When feeding batch_xs to x, I reshaped batch_xs, since BATCH_SIZE is 1.
Here is my source.
I'm not sure what is causing the ValueError.
with tf.name_scope("input") as scope:
    x = tf.placeholder(tf.float32, shape=[1, 784])

BATCH_SIZE = 1
DROP_OUT_RATE = 0.4
EPOCH = 1
MEMORIZE = 10
accuracy_array = []

loss = tf.nn.l2_loss(y - x) / BATCH_SIZE
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
mnist_list = make_mnist_train_list(55000, 10)
test_list = make_mnist_test_list(5000, 10)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
for i in range(EPOCH):
    for j in range(5500/BATCH_SIZE):
        batch_xs = tf.reshape(mnist_list[0][j*BATCH_SIZE:j*BATCH_SIZE+1], [1, 784])
        sess.run(train_step, feed_dict={x: batch_xs, keep_prob: (1.0 - DROP_OUT_RATE), r_keep_prob: (1.0 - DROP_OUT_RATE)})
    if (i + 1) % MEMORIZE == 0:
        accuracy_array.append(loss.eval(session=sess, feed_dict={x: batch_xs, keep_prob: 1.0, r_keep_prob: 1.0}))
        print(accuracy_array[int(math.floor((i+1)/MEMORIZE - 1))])
This gives me a ValueError, which doesn't make sense to me.
ValueError: Argument must be a dense tensor
From the documentation here:
Each key in feed_dict can be one of the following types:
If the key is a Tensor, the value may be a Python scalar, string, list, or numpy ndarray that can be converted to the same dtype as that tensor. Additionally, if the key is a placeholder, the shape of the value will be checked for compatibility with the placeholder.
If the key is a SparseTensor, the value should be a SparseTensorValue.
The types that you can use as the "value" for a key in feed_dict should be Python primitive types or numpy arrays. You are using the result of tf.reshape, which is a TensorFlow Tensor type. You can simply use np.reshape if you want to feed a reshaped array.
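A minimal sketch of the suggested change, with the surrounding graph stripped away and the MNIST list replaced by a random stand-in:

import numpy as np

BATCH_SIZE = 1
# Stand-in for mnist_list[0]: ten flattened 784-pixel images.
images = np.random.rand(10, 784).astype(np.float32)

j = 0
# np.reshape returns a plain ndarray, which feed_dict accepts;
# tf.reshape would return a Tensor, which feed_dict rejects.
batch_xs = np.reshape(images[j*BATCH_SIZE:j*BATCH_SIZE+1], (1, 784))
print(type(batch_xs), batch_xs.shape)   # <class 'numpy.ndarray'> (1, 784)
# sess.run(train_step, feed_dict={x: batch_xs, ...})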