Pytorch TypeError: eq() received an invalid combination of arguments - numpy

num_samples = 10

def predict(x):
    sampled_models = [guide(None, None) for _ in range(num_samples)]
    yhats = [model(x).data for model in sampled_models]
    mean = torch.mean(torch.stack(yhats), 0)
    return np.argmax(mean.numpy(), axis=1)

print('Prediction when network is forced to predict')
correct = 0
total = 0
for j, data in enumerate(test_loader):
    images, labels = data
    predicted = predict(images.view(-1, 28*28))
    total += labels.size(0)
    correct += (predicted == labels).sum().item()
print("accuracy: %d %%" % (100 * correct / total))
Error:
correct += (predicted == labels).sum().item()
TypeError: eq() received an invalid combination of arguments - got (numpy.ndarray), but expected one of:
 * (Tensor other)
      didn't match because some of the arguments have invalid types: (!numpy.ndarray!)
 * (Number other)
      didn't match because some of the arguments have invalid types: (!numpy.ndarray!)

You are trying to compare predicted and labels, but predicted is an np.ndarray while labels is a torch.Tensor, so eq() (the == operator) cannot compare them.
Replace the np.argmax with torch.argmax:
return torch.argmax(mean, dim=1)
And you should be okay.
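Putting the fix in context, the predict function becomes (unchanged apart from the return line):
num_samples = 10

def predict(x):
    sampled_models = [guide(None, None) for _ in range(num_samples)]
    yhats = [model(x).data for model in sampled_models]
    mean = torch.mean(torch.stack(yhats), 0)
    return torch.argmax(mean, dim=1)   # a torch.Tensor, so (predicted == labels) now works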

Related

while_loop error in Tensorflow

I tried to use while_loop in Tensorflow, but when I try to return the target output from the callable in the while loop, it gives me an error because the shape grows on every iteration.
The output should contain 0/1 values based on the data values (input array): if a data value is larger than 5, return 1, otherwise return 0. The returned value must be appended to output.
This is the code:
import numpy as np
import tensorflow as tf

data = np.random.randint(10, size=(30))
data = tf.constant(data, dtype=tf.float32)

global output
output = tf.constant([], dtype=tf.float32)
i = tf.constant(0)
c = lambda i: tf.less(i, 30)

def b(i):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)),
                   lambda: tf.constant(1.0),
                   lambda: tf.constant([0.0]))
    output = tf.expand_dims(cond, axis=i-1)
    return i, output

r, out = tf.while_loop(c, b, [i])
print(out)
sess = tf.Session()
sess.run(out)
The error:
r, out = tf.while_loop(c, b, [i])
ValueError: The two structures don't have the same number of elements.
First structure (1 elements): [<tf.Tensor 'while/Identity:0' shape=() dtype=int32>]
Second structure (2 elements): [<tf.Tensor 'while/Add:0' shape=() dtype=int32>, <tf.Tensor 'while/ExpandDims:0' shape=unknown dtype=float32>]
I use tensorflow-1.1.3 and python-3.5.
How can I change my code to give me the target result?
EDIT:
I edited the code based on @mrry's answer, but I still have an issue: the output is incorrect. The output should be the summation of the numbers.
a = tf.ones([10, 4])
print(a)
a = tf.reduce_sum(a, axis=1)
i = tf.constant(0)
c = lambda i, _: tf.less(i, 10)

def Smooth(x):
    return tf.add(x, 2)

summ = tf.constant(0.)

def b(i, _):
    global summ
    summ = tf.add(summ, tf.cast(Smooth(a[i]), tf.float32))
    i = tf.add(i, 1)
    return i, summ

r, smooth_l1 = tf.while_loop(c, b, [i, smooth_l1])
print(smooth_l1)
sess = tf.Session()
print(sess.run(smooth_l1))
The output is 6.0 (wrong).
The tf.while_loop() function requires that the following four lists have the same length, and the same type for each element:
The list of arguments to the cond function (c in this case).
The list of arguments to the body function (b in this case).
The list of return values from the body function.
The list of loop_vars representing the loop variables.
Therefore, if your loop body has two outputs, you must add a corresponding argument to b and c, and a corresponding element to loop_vars:
c = lambda i, _: tf.less(i, 30)

def b(i, _):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)),
                   lambda: tf.constant(1.0),
                   lambda: tf.constant([0.0]))
    # NOTE: This line fails with a shape error, because the output of `cond` has
    # a rank of either 0 or 1, but `axis` may be as large as 28.
    output = tf.expand_dims(cond, axis=i-1)
    return i, output

# NOTE: Use a shapeless `tf.placeholder_with_default()` because the shape
# of the output will vary from one iteration to the next.
r, out = tf.while_loop(c, b, [i, tf.placeholder_with_default(0., None)])
As noted in the comments, the body of the loop (specifically the call to tf.expand_dims()) seems to be incorrect and this program won't work as-is, but hopefully this is enough to get you started.
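As a side note (my own sketch, not part of the original answer): since the goal is to build a vector of 0/1 values, one way that keeps the loop-variable structure fixed is to accumulate into a tf.TensorArray instead of creating a tensor whose shape grows on each iteration:
import numpy as np
import tensorflow as tf

data = tf.constant(np.random.randint(10, size=(30,)), dtype=tf.float32)

ta = tf.TensorArray(dtype=tf.float32, size=30)   # one slot per loop iteration
i0 = tf.constant(0)

def cond(i, _):
    return tf.less(i, 30)

def body(i, ta):
    value = tf.cond(tf.greater(data[i], 5.0),
                    lambda: tf.constant(1.0),
                    lambda: tf.constant(0.0))
    return i + 1, ta.write(i, value)             # structure stays (int32, TensorArray)

_, final_ta = tf.while_loop(cond, body, [i0, ta])
output = final_ta.stack()                        # shape (30,), values 0.0 or 1.0

with tf.Session() as sess:
    print(sess.run(output))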
If you see this error:
ValueError: The two structures don't have the same number of elements.
in a while_loop, it means that the loop variables you pass in and the values your loop body returns have different structures.
I solved it by making sure that my loop body returns the same structure as loop_vars; the condition function must also accept the same loop vars.
Here is an example:
loop_vars = [i, loss, batch_size, smaller_str_lens]

def condition(*loop_vars):
    i = loop_vars[0]
    batch_size = loop_vars[2]
    return tf.less(i, batch_size)

def body(*loop_vars):
    i, loss, batch_size, smaller_str_lens = loop_vars
    tf.print("The loop passed here")
    ## logic here
    i = tf.add(i, 1)
    return i, loss, batch_size, smaller_str_lens

loss = tf.while_loop(condition, body, loop_vars)[1]
The body function must return the loop vars, and the condition function must accept the same loop vars.
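For the EDIT code above, here is a minimal rewrite (my own sketch, assuming the intent is to sum Smooth(a[i]) over all ten rows) that carries the running sum as a loop variable instead of a Python global:
import tensorflow as tf

a = tf.reduce_sum(tf.ones([10, 4]), axis=1)   # ten elements, each equal to 4.0

def smooth(x):
    return tf.add(x, 2)

c = lambda i, summ: tf.less(i, 10)

def b(i, summ):
    # Accumulate into the loop variable rather than a global Python name.
    summ = summ + tf.cast(smooth(a[i]), tf.float32)
    return i + 1, summ

_, total = tf.while_loop(c, b, [tf.constant(0), tf.constant(0.)])

with tf.Session() as sess:
    print(sess.run(total))   # 10 * (4.0 + 2.0) = 60.0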

Set values to zero in tensorflow

I'm implementing the SMAPE loss function in TensorFlow, and I need to set some values of the tensor diff to 0 before computing the reduce_mean. Here is my code, but it doesn't work:
def loss(yHat, y):
    denominator = (tf.abs(yHat) + np.abs(y)) / 2.0
    diff = tf.div(tf.abs(yHat - y), denominator)
    other_variable = tf.get_variable("other_variable",
                                     dtype=tf.float32,
                                     initializer=diff)
    comp = tf.equal(denominator, 0)
    cond_diff = tf.scatter_update(other_variable, comp, 0)
    return tf.reduce_mean(cond_diff)
It gives me this error:
ValueError: initial_value must have a shape specified: Tensor("div_49:0", dtype=float32)
Just mask your diff instead of creating another variable.
mask = tf.where(tf.equal(denominator, 0), tf.zeros_like(denominator), tf.ones_like(denominator))
return tf.reduce_mean(tf.multiply(diff, mask))
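As a further option (my own sketch, not from the original answer), you can avoid the division by zero entirely by substituting a safe denominator where it is zero; since |yHat| + |y| = 0 implies |yHat - y| = 0, those entries come out as 0 automatically:
import tensorflow as tf

def smape_loss(yHat, y):
    denominator = (tf.abs(yHat) + tf.abs(y)) / 2.0
    # Replace zero denominators with 1; the corresponding numerators are 0 anyway.
    safe_denominator = tf.where(tf.equal(denominator, 0.0),
                                tf.ones_like(denominator),
                                denominator)
    diff = tf.abs(yHat - y) / safe_denominator
    return tf.reduce_mean(diff)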

Why are these two functions not equivalent?

This is taken from the Keras sample code:
def sample(preds, temperature=1.0):
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)
This is an attempt to simplify the above code:
def sample(p, temperature=1.0):
    p = np.exp(np.log(p) / temperature)
    p = np.random.multinomial(1, p / p.sum(), 1)
    return np.argmax(p)
However the second one fails with this error:
File "z.py", line 75, in sample
p = np.random.multinomial(1, p / p.sum(), 1)
File "mtrand.pyx", line 4593, in mtrand.RandomState.multinomial (numpy/random/mtrand/mtrand.c:37541)
ValueError: sum(pvals[:-1]) > 1.0
How can this be?
My system is 64-bit, so numpy's default dtype is float64, but some of the inputs coming into this function were 32-bit floats. The original version explicitly casts to float64 with astype('float64'), while my simplified version dropped that cast, so the probabilities stayed in float32 and the mixing of the two dtypes caused the error.
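For reference, a sketch of the simplified function with the missing cast restored (the astype('float64') line is taken from the original version):
import numpy as np

def sample(p, temperature=1.0):
    # Cast to float64 first, as the original Keras version does, so the
    # normalized probabilities pass multinomial's float64 sum check.
    p = np.asarray(p).astype('float64')
    p = np.exp(np.log(p) / temperature)
    p = np.random.multinomial(1, p / p.sum(), 1)
    return np.argmax(p)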

Tensorflow: constructing the params tensor for tf.map_fn

import tensorflow as tf
import numpy as np
def lineeqn(slope, intercept, y, x):
    return np.sign(y - (slope*x) - intercept)
# data size
DS = 100000
N = 100
x1 = tf.random_uniform([DS], -1, 0, dtype=tf.float32, seed=0)
x2 = tf.random_uniform([DS], 0, 1, dtype=tf.float32, seed=0)
# line representing the target function
rand1 = np.random.randint(0, DS)
rand2 = np.random.randint(0, DS)
T_x1 = x1[rand1]
T_x2 = x1[rand2]
T_y1 = x2[rand1]
T_y2 = x2[rand2]
slope = (T_y2 - T_y1)/(T_x2 - T_x1)
intercept = T_y2 - (slope * T_x2)
# extracting training samples from the data set
training_indices = np.random.randint(0, DS, N)
training_x1 = tf.gather(x1, training_indices)
training_x2 = tf.gather(x2, training_indices)
training_x1_ex = tf.expand_dims(training_x1, 1)
training_x2_ex = tf.expand_dims(training_x2, 1)
slope_tensor = tf.fill([N], slope)
slope_ex = tf.expand_dims(slope_tensor, 1)
intercept_tensor = tf.fill([N], intercept)
intercept_ex = tf.expand_dims(intercept_tensor, 1)
params = tf.concat(1, [slope_ex, intercept_ex, training_x2_ex, training_x1_ex])
training_y = tf.map_fn(lineeqn, params)
The lineeqn function requires 4 parameters, so params should be a tensor where each element is a 4-element tensor. When I try to run the above code, I get the error TypeError: lineeqn() takes exactly 4 arguments (1 given). Can someone please explain what is wrong with the way I have constructed the params tensor? What does tf.map_fn do to the params tensor?
A similar question has been asked here. The reason you are getting this error is because the function called by map_fn - lineeqn in your case - is required to take exactly one tensor argument.
Rather than a list of arguments to the function, the parameter elems is expected to be a list of items, where the mapped function is called for each item contained in the list.
So in order to take multiple arguments to your function, you would have to unpack them yourself from each item, e.g.
def lineeqn(item):
    slope, intercept, y, x = tf.unstack(item, num=4)
    return tf.sign(y - (slope * x) - intercept)   # tf.sign rather than np.sign, since these are tensors
and call it as
training_y = tf.map_fn(lineeqn, list_of_parameter_tensors)
Here, you call the line equation for each tensor in the list_of_parameter_tensors, where each tensor would describe a tuple (slope, intercept, y, x) of packed arguments.
(Note that, depending on the shape of the actual argument tensors, you might have to use tf.pack (renamed tf.stack in later versions) instead of tf.concat.)
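Alternatively (a sketch of my own, assuming a TensorFlow version whose tf.map_fn accepts a tuple of elems), you can pass the per-example tensors as a tuple and let map_fn slice each of them, which avoids packing them into params at all:
import tensorflow as tf

def lineeqn(args):
    slope, intercept, y, x = args
    return tf.sign(y - slope * x - intercept)

# slope_tensor, intercept_tensor, training_x2 and training_x1 are the (N,)-shaped
# tensors from the question (before the expand_dims calls). map_fn slices each of
# them along the first dimension and passes one (slope, intercept, y, x) tuple per example.
training_y = tf.map_fn(lineeqn,
                       (slope_tensor, intercept_tensor, training_x2, training_x1),
                       dtype=tf.float32)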

How to use maxout activation function in tensorflow?

I want to use the maxout activation function in TensorFlow, but I don't know which function I should use.
I sent a pull request for maxout, here is the link:
https://github.com/tensorflow/tensorflow/pull/5528
Code is as follows:
def maxout(inputs, num_units, axis=None):
    shape = inputs.get_shape().as_list()
    if axis is None:
        # Assume that channel is the last dimension
        axis = -1
    num_channels = shape[axis]
    if num_channels % num_units:
        raise ValueError('number of features({}) is not a multiple of num_units({})'
                         .format(num_channels, num_units))
    shape[axis] = -1
    shape += [num_channels // num_units]
    outputs = tf.reduce_max(tf.reshape(inputs, shape), -1, keep_dims=False)
    return outputs
Here is how it works: [diagram omitted]
I don't think there is a built-in maxout activation, but there is nothing stopping you from making it yourself. You could do something like the following.
with tf.variable_scope('maxout'):
    layer_input = ...
    layer_output = None
    for i in range(n_maxouts):
        W = tf.get_variable('W_%d' % i, (n_input, n_output))
        b = tf.get_variable('b_%d' % i, (n_output,))
        y = tf.matmul(layer_input, W) + b
        if layer_output is None:
            layer_output = y
        else:
            layer_output = tf.maximum(layer_output, y)
Note that this is code I just wrote in my browser, so there may be syntax errors, but you should get the general idea: you simply perform a number of linear transforms and take the maximum across all of them.
How about this code?
This seems to work in my test.
def max_out(input_tensor, output_size):
    shape = input_tensor.get_shape().as_list()
    if shape[1] % output_size == 0:
        return tf.transpose(tf.reduce_max(tf.split(input_tensor, output_size, 1), axis=2))
    else:
        raise ValueError("Output size or input tensor size is not fine. Please check it. The remainder must be zero.")
I referred to the diagram on the following page.
From version 1.4 on you can use tf.contrib.layers.maxout.
Maxout is a layer that computes an N*M output for an N*1 input and then returns the maximum value across the M columns, i.e. the final output has shape N*1 as well. Basically, it uses multiple linear fits to mimic a complex function.
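A minimal usage sketch (my own example, assuming TensorFlow 1.4+ where tf.contrib.layers.maxout is available; the layer sizes are illustrative):
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 128])
h = tf.layers.dense(x, 256)                     # 256 linear features per example
h = tf.contrib.layers.maxout(h, num_units=64)   # max over groups of 4 -> 64 features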