Unable to obtain moments using tensorflow

I want to calculate the moments of a vector x = np.random.normal(0,1,[1,500]). When I do mean, std = tf.nn.moments(x,axes=[0]), it throws this error:
File "/tmp/venv/local/lib/python2.7/site-packages/tensorflow/python/ops/nn.py", line 830, in moments
y = math_ops.cast(x, dtypes.float32) if x.dtype == dtypes.float16 else x
TypeError: data type not understood
I am using tensorflow==0.11.0. What is the correct syntax?

As shown in the documentation for tf.nn.moments, the input x must be a Tensor.
You should use something like the following:
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 500])
# Note: tf.nn.moments returns the mean and the *variance*, not the standard deviation.
mean, variance = tf.nn.moments(x, axes=[0])

sess = tf.Session()
# There are no variables in this graph, so no initializer call is needed.
sample_mean, sample_variance = sess.run(
    [mean, variance], feed_dict={x: np.random.normal(0, 1, [1, 500])})
Note: This particular calculation does not make much sense, since there is only one value along axis 0. You may want to either increase the shape to something like [32, 500], or, more likely, change axes from [0] to [1].
Regardless, the calculation will complete without errors, although the computed variance will be 0, because the moments are calculated along an axis of size one.
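For example, here is a minimal sketch of the axes=[1] variant, which computes one mean/variance pair per 500-element row (variable names are illustrative):
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 500])
# Moments along axis 1: one mean and one variance per row.
mean, variance = tf.nn.moments(x, axes=[1])

sess = tf.Session()
row_mean, row_variance = sess.run(
    [mean, variance], feed_dict={x: np.random.normal(0, 1, [1, 500])})
print(row_mean.shape, row_variance.shape)  # (1,) (1,)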

Related

How to expand the output of GlobalAveragePooling2D() to be suitable for BiSeNet?

I am trying to build the BiSeNet shown in the figure at "https://github.com/Blaizzy/BiSeNet-Implementation".
When I want to use GlobalAveragePooling2D() in Keras (TF backend) to finish the Attention Refinement Module in figure (b), I find that the output shape of GlobalAveragePooling2D() is not suitable for the next convolution.
I checked out many implementations of BiSeNet code on GitHub; however, most of them use AveragePooling2D(size=(1,1)) instead. But AveragePooling2D(size=(1,1)) is complete nonsense.
So I defined a lambda layer to do what I want (the code is shown below). The lambda layer works, but it seems very ugly:
def samesize_globalAveragePooling2D(inputtensor):
    # inputtensor shape: (?, 28, 28, 32)
    x = GlobalAveragePooling2D()(inputtensor)     # x shape: (?, 32)
    divide = tf.divide(inputtensor, inputtensor)  # divide shape: (?, 28, 28, 32)
    x2 = x * divide                               # x2 shape: (?, 28, 28, 32)
    return x2

global_pool = Lambda(function=samesize_globalAveragePooling2D)(conv_0)
I hope to get a suggestion to make this lambda more graceful. Thanks!
This could be done with a Lambda layer wrapping tf.reduce_mean (note: keep_dims was renamed keepdims in newer TensorFlow versions):
tf.keras.layers.Lambda(lambda x: tf.reduce_mean(x, axis=[1, 2], keep_dims=True))
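For instance, here is a minimal sketch of how this layer behaves on a (?, 28, 28, 32) tensor (the names and the multiply step are illustrative, not from the question):
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(28, 28, 32))
# Global average pooling that keeps the spatial dims with size 1.
pooled = tf.keras.layers.Lambda(
    lambda t: tf.reduce_mean(t, axis=[1, 2], keep_dims=True))(inputs)
print(pooled.shape)  # (?, 1, 1, 32)
# (?, 1, 1, 32) broadcasts against (?, 28, 28, 32) in a multiply,
# which is what the attention module needs.
scaled = tf.keras.layers.Lambda(lambda t: t[0] * t[1])([inputs, pooled])
print(scaled.shape)  # (?, 28, 28, 32)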

How to get batch_size if shape method in Keras & TF returns None for the batch_size?

I'm wrapping a function as a layer. In this function, I need to know the shape of the input. The first entry of the shape is the batch size, and I need to know it! The problem is that K.int_shape returns something like (None, 2, 10). This None should be known at runtime, right? But it is still None and causes an error.
Basically, in my function I want to create a constant that is as long as the batch_size.
Here is my function, for what it's worth:
def func(inputs):
    max_iter = 3
    x, y = inputs
    c = tf.complex(x, y)
    print(K.int_shape(c))
    z = tf.zeros(shape=K.int_shape(c), dtype='complex64')
    # b = K.switch(K.greater(tf.abs(c), 4), K.constant(1, shape=(1, 1)), K.constant(0, shape=(1, 1)))
    for i in range(max_iter):
        c = c * c + z
    return c

layer = Lambda(func)
You can see where I created the constant z. I want its shape to be equal to the input shape, but this causes an error with a massive trace. If I replace the shape with a fixed one, it works. I traced the error to this damn None.
Instead of using int_shape, you can use tf.zeros_like to create z
z = tf.zeros_like(c, dtype='complex64')
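Applied to the function from the question, a minimal sketch (only the z line changes):
def func(inputs):
    max_iter = 3
    x, y = inputs
    c = tf.complex(x, y)
    # tf.zeros_like infers the shape from c itself, so the unknown
    # batch dimension is no longer a problem.
    z = tf.zeros_like(c, dtype='complex64')
    for i in range(max_iter):
        c = c * c + z
    return c

layer = Lambda(func)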

TypeError: 'TensorShape' object is not callable

I am new to Tensorflow programming. I was digging into some functions and got this error in this snippet:
with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))
    print(sess_1.run(e.shape()))
Error found:
Traceback (most recent call last):
File "C:/Users/Ashu/PycharmProjects/untitled/Bored.py", line 15, in
print(sess_1.run(e.shape()))
TypeError: 'TensorShape' object is not callable
I didn't find it covered here, so can anyone please clarify this silly doubt, as I am a new learner? Sorry for any typing mistakes!
I have one more doubt: when I simply use the eval() function, it doesn't print anything in PyCharm; I had to use it along with the print() method. My doubt is that when the print() method is used, it doesn't print the dtype of the tensor; it simply prints the tensor or Python object value. (Why am I not getting output in a format like array([1., 1.], dtype=float32)?) Is this the PyCharm way of printing a tensor in the new version, or is it something I am doing wrong? So excited to know the reason behind this; please help, and pardon me if I am wrong anywhere.
One confusing aspect of tensorflow for beginners is that there are two types of shape: the dynamic shape, given by tf.shape(x), and the static shape, given by x.shape (assuming x is a tensor). While they represent the same concept, they are used very differently.
Static shape is the shape of a tensor known at graph construction time. It's a data type in its own right (a TensorShape), but it can be converted to a list using as_list().
x = tf.placeholder(tf.float32, shape=(None, 3, 4))
static_shape = x.shape
shape_list = x.shape.as_list()
print(shape_list) # [None, 3, 4]
y = tf.reduce_sum(x, axis=1)
print(y.shape.as_list()) # [None, 4]
During operations, tensorflow tracks static shapes as best it can. In the above example, y's shape was calculated from the partially known shape of x. Note we haven't even created a session, but the static shape is still known.
Since the batch size is not known, you can't use the static first entry in calculations.
z = tf.reduce_sum(x) / tf.cast(x.shape.as_list()[0], tf.float32) # ERROR
(we could have divided by x.shape.as_list()[1], since that dimension is known statically, but that wouldn't demonstrate anything here)
If we need to use a value which is not known statically - i.e. at graph construction time - we can use the dynamic shape of x. The dynamic shape is a tensor - like other tensors in tensorflow - which is evaluated using a session.
z = tf.reduce_sum(x) / tf.cast(tf.shape(x)[0], tf.float32) # all good!
You can't call as_list on the dynamic shape, nor can you inspect its values without going through a session evaluation.
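For example, a minimal sketch of evaluating the dynamic shape through a session (the feed values are illustrative):
import numpy as np

with tf.Session() as sess:
    print(sess.run(tf.shape(x), feed_dict={x: np.zeros((8, 3, 4))}))  # [8 3 4]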
Your last line calls e.shape() as if it were a method, but e.shape is a property holding a TensorShape object; calling that object is what raises the TypeError, before sess_1.run is even invoked. Note also that, as stated in the documentation, you can only call a session's run method with tensors, operations, or lists of tensors/operations, so a TensorShape could not be executed anyway. Use e.shape directly for the static shape, or run the tensor tf.shape(e) for the dynamic shape.
As for the printing doubt: sess_1.run returns a NumPy value, and print uses its str form, which shows just the content (e.g. 11 or [1. 1.]). The array([1., 1.], dtype=float32) format is the repr form that an interactive console displays; print(repr(value)) would show it, and print(type(value)) shows the type.
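A minimal corrected version of the snippet from the question:
with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))            # 11
    print(e.shape)                  # static shape: ()
    print(sess_1.run(tf.shape(e)))  # dynamic shape: []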

compute Hessians w.r.t higher rank variable not work neither by tf.hessians() nor tf.gradients()

When we need to calculate a double gradient or a Hessian in tensorflow, we may use tf.hessians(F(x), x), or use tf.gradients(tf.gradients(F(x), x)[0], x)[0]. However, when x is not rank one, I get the following error when using tf.hessians():
ValueError: Cannot compute Hessian because element 0 of xs does not
have rank one.. Tensor model_inputs/action:0 must have rank 1.
Received rank 2, shape (?, 1)
in the following code:
with tf.name_scope("1st scope"):
    self.states = tf.placeholder(tf.float32, (None, self.state_dim), name="states")
    self.action = tf.placeholder(tf.float32, (None, self.action_dim), name="action")
with tf.name_scope("2nd scope"):
    with tf.variable_scope("3rd scope"):
        self.policy_outputs = self.policy_network(self.states)

# use tf.gradients twice
self.actor_action_gradients = tf.gradients(self.policy_outputs, self.action)[0]
self.actor_action_hessian = tf.gradients(self.actor_action_gradients, self.action)[0]
# or use tf.hessians
self.actor_action_hessian = tf.hessians(self.policy_outputs, self.action)
Using tf.gradients() also causes an error:
in create_variables self.actor_action_hessian =
tf.gradients(self.actor_action_gradients, self.action)[0]
AttributeError: 'NoneType' object has no attribute 'dtype'
How can I fix this? Can neither tf.gradients() nor tf.hessians() be used in this case?
The second approach is fine; the error is somewhere else, namely that your graph is not connected.
self.actor_action_gradients = tf.gradients(self.policy_outputs, self.action)[0]
self.actor_action_hessian = tf.gradients(self.actor_action_gradients, self.action)[0]
The error is thrown in the second line because self.actor_action_gradients is None, and so you can't compute its gradient. Nothing in your code suggests that self.policy_outputs depends on self.action (and it shouldn't, since it's the action that depends on the policy, not the policy on the action).
Once you fix this, you will notice that this "hessian" is not really a Hessian but a vector; to form a proper Hessian of f w.r.t. x you have to iterate over all the values returned by tf.gradients and compute tf.gradients of each one independently. This is a known limitation in TF, and no simpler way is available right now.
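A minimal sketch of that iteration for a rank-1 x of known size (the function f here is illustrative, not from the question):
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(3,))
f = tf.reduce_sum(x * x * x)

grad = tf.gradients(f, x)[0]  # shape (3,), the gradient 3*x**2
# One tf.gradients call per entry of grad yields one row of the Hessian.
hessian = tf.stack([tf.gradients(grad[i], x)[0] for i in range(3)])

with tf.Session() as sess:
    # For f = sum(x**3) the Hessian is diag(6*x).
    print(sess.run(hessian, feed_dict={x: [1.0, 2.0, 3.0]}))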

Creating new vector in tensorflow from argmax performed on another tensor

I have a tensor with shape (?, 3) that looks like [x, y, z], and I need to create a function that takes its argmax, creates a new vector, and assigns values with respect to the dimension and the argmax.
Example:
def f(y):
    v = tf.Variable(tf.zeros(y.get_shape()))
    index = tf.argmax(y)
    v[index] = 1.0
    return v
Unfortunately, this doesn't work, and I can't figure out how one can do it.
Are you sure that you want to create and assign to a tf.Variable here? It would probably be simpler to use the tf.one_hot() op (available from version 0.8 onwards) to build the result functionally, as you wouldn't have to worry about initialization, etc. For example, you could do the following:
def f(y):
    index = tf.argmax(y, 1)
    return tf.one_hot(index, tf.shape(y)[1], 1.0, 0.0)
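A quick usage sketch (the input values are illustrative):
import tensorflow as tf

y = tf.placeholder(tf.float32, shape=(None, 3))
one_hot_max = f(y)

with tf.Session() as sess:
    print(sess.run(one_hot_max, feed_dict={y: [[0.1, 0.7, 0.2]]}))
    # [[0. 1. 0.]]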