As stated in the title, is there a TensorFlow equivalent of the numpy.all() function to check if all the values in a bool tensor are True? What is the best way to implement such a check?
Use tf.reduce_all, as follows:
import tensorflow as tf

a = tf.constant([True, False, True, True], dtype=tf.bool)
res = tf.reduce_all(a)
sess = tf.InteractiveSession()
res.eval()
This returns False.
On the other hand, this returns True:
import tensorflow as tf

a = tf.constant([True, True, True, True], dtype=tf.bool)
res = tf.reduce_all(a)
sess = tf.InteractiveSession()
res.eval()
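In TF 2.x, where eager execution is on by default, the same check works without a session. A minimal sketch:

import tensorflow as tf

a = tf.constant([True, False, True, True], dtype=tf.bool)
print(tf.reduce_all(a))        # tf.Tensor(False, shape=(), dtype=bool)
print(bool(tf.reduce_all(a)))  # False, as a plain Python bool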
In TF 2.4 you could use tf.experimental.numpy.all:
x = tf.constant([False, False])
tf.experimental.numpy.all(x)
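Note that the call returns a scalar tensor rather than a plain Python bool; in eager mode you can convert it, for example (a small sketch, assuming TF 2.4+ with eager execution):

import tensorflow as tf

x = tf.constant([False, False])
res = tf.experimental.numpy.all(x)
print(bool(res))  # False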
One way of solving this problem would be to do:
def all(bool_tensor):
    bool_tensor = tf.cast(bool_tensor, tf.float32)
    all_true = tf.equal(tf.reduce_mean(bool_tensor), 1.0)
    return all_true
However, it's not a dedicated TensorFlow function, just a workaround.
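For instance, a quick sanity check of this workaround in eager mode (the test values are just for illustration; note that naming it all() shadows Python's built-in):

import tensorflow as tf

def all(bool_tensor):
    # Cast to float and compare the mean against 1.0 (the workaround above).
    bool_tensor = tf.cast(bool_tensor, tf.float32)
    return tf.equal(tf.reduce_mean(bool_tensor), 1.0)

print(all(tf.constant([True, True, True])))   # tf.Tensor(True, shape=(), dtype=bool)
print(all(tf.constant([True, False, True])))  # tf.Tensor(False, shape=(), dtype=bool)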
I got a tensor from Model.pred() whose class is <tf.python.framework.ops.Tensor> (not eager),
but I can't use it in a custom loss function. So I tried to convert that Tensor to <tf.python.framework.ops.EagerTensor>.
If I can convert it, I can use .numpy() for a calculation in the loss function.
Is there a way to convert it?
Or can I get a numpy value from <... ops.Tensor>?
I'm using TensorFlow 2.3.0
You can either:
Try forcing eager execution with tf.config.run_functions_eagerly(True) or tf.compat.v1.enable_eager_execution() at the start of your code.
Or use a session (documentation here) and call .eval() on your Tensor instead of .numpy().
Example code of the second possibility:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

# Build a graph.
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b

# Launch the graph in a session.
sess = tf.compat.v1.Session()
with sess.as_default():
    print(c.eval())
sess.close()
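And a minimal sketch of the first option, forcing eager execution so that .numpy() is available (the loss function here is just a placeholder; in a real Keras training loop the flag matters because the loss runs inside a traced function):

import tensorflow as tf

# Force functions that would normally be traced into graphs to run eagerly,
# so tensors inside them are EagerTensors with a .numpy() method.
tf.config.run_functions_eagerly(True)

def custom_loss(y_true, y_pred):
    # Placeholder computation: .numpy() works only because execution is eager.
    diff = y_true.numpy() - y_pred.numpy()
    return tf.reduce_mean(tf.constant(diff) ** 2)

y_true = tf.constant([1.0, 2.0])
y_pred = tf.constant([1.5, 2.5])
print(custom_loss(y_true, y_pred))  # tf.Tensor(0.25, shape=(), dtype=float32)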
I want to get the scalar value of a function parameter, like the following code does:
import tensorflow as tf

@tf.function
def test(key_value):
    tf.print(key_value.numpy())

a = tf.constant(0)
test(a)
But there is no numpy() method on the tensor when running in autograph.
numpy is only available outside of tf.function, where Tensors have actual values. Within tf.function, you have access to a restricted API. As long as you pass the tensor to a TensorFlow op, you don't need to call numpy:
import tensorflow as tf

@tf.function
def test(key_value):
    tf.print(key_value)

a = tf.constant(0)
test(a)
Have a look at this guide for more info.
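If you do need the actual Python value, one option (a sketch, not part of the original answer) is to return the tensor from the tf.function and call .numpy() on the result outside of it:

import tensorflow as tf

@tf.function
def test(key_value):
    # Inside the traced function key_value is a symbolic Tensor: no .numpy() here.
    return key_value + 1

a = tf.constant(0)
result = test(a)       # outside the function the result is an EagerTensor
print(result.numpy())  # 1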
I would like to benchmark some TensorFlow operations (for example between them or against PyTorch). However most of the time I will write something like:
import numpy as np
import tensorflow as tf

tf_device = '/GPU:0'
shape = [100000]
a = np.random.normal(scale=100, size=shape).astype(np.int64)
b = np.array(7).astype(np.int64)

with tf.device(tf_device):
    a_tf = tf.constant(a)
    b_tf = tf.constant(b)
    %timeit tf.math.floormod(a_tf, b_tf)
The problem with this approach is that it does the computation in eager-mode (I think in particular that it has to perform GPU to CPU placement). Eventually, I want to use those ops in a tf.keras model and therefore would like to evaluate their performance in graph mode.
What is the preferred way to do it?
My Google searches have turned up nothing, and I don't know how to use sessions as in TF 1.x.
What you are looking for is tf.function. Check this tutorial and the docs.
As the tutorial says, in TensorFlow 2, eager execution is turned on by default. The user interface is intuitive and flexible (running one-off operations is much easier and faster), but this can come at the expense of performance and deployability. To get performant and portable models, use tf.function to make graphs out of your programs.
Check this code:
import numpy as np
import tensorflow as tf
import timeit

tf_device = '/GPU:0'
shape = [100000]
a = np.random.normal(scale=100, size=shape).astype(np.int64)
b = np.array(7).astype(np.int64)

@tf.function
def experiment(a_tf, b_tf):
    tf.math.floormod(a_tf, b_tf)

with tf.device(tf_device):
    a_tf = tf.constant(a)
    b_tf = tf.constant(b)
    # warm up
    experiment(a_tf, b_tf)
    print("In graph mode:", timeit.timeit(lambda: experiment(a_tf, b_tf), number=10))
    print("In eager mode:", timeit.timeit(lambda: tf.math.floormod(a_tf, b_tf), number=10))
I am trying to customize the loss function in Keras. I saw this example:
import tensorflow as tf
import keras.backend as K

def mean_pred(y_true, y_pred):
    return K.mean(y_pred)
Can I use something like:
def mean_pred(y_true, y_pred):
    return tf.mean(y_pred)
Is there any difference?
Both do the same thing: they compute the mean of elements across the dimensions of a tensor and are equivalent to NumPy's np.mean. In TensorFlow the function is tf.math.reduce_mean (also exposed as tf.reduce_mean), so use that rather than tf.mean.
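A quick check of the equivalence (a small sketch using tf.keras.backend instead of the standalone keras import):

import tensorflow as tf

y_pred = tf.constant([1.0, 2.0, 3.0, 4.0])
print(tf.keras.backend.mean(y_pred))  # tf.Tensor(2.5, shape=(), dtype=float32)
print(tf.reduce_mean(y_pred))         # tf.Tensor(2.5, shape=(), dtype=float32)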
I'm a beginner in TensorFlow and I want to apply Frobenius normalization to a tensor, but when I searched I didn't find any function for it in TensorFlow, and I couldn't implement it using TensorFlow ops. I can implement it with NumPy operations, but how can I do this using TensorFlow ops only?
My implementation using NumPy in Python:
def Frobenius_Norm(tensor):
    x = np.power(tensor, 2)
    x = np.sum(x)
    x = np.sqrt(x)
    return x
def frobenius_norm_tf(M):
    return tf.reduce_sum(M ** 2) ** 0.5
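A quick sanity check of the TensorFlow version against NumPy (the test matrix is arbitrary; this assumes the frobenius_norm_tf definition above and eager execution). The built-in tf.norm should give the same value here, since its default is the Euclidean/Frobenius norm:

import numpy as np
import tensorflow as tf

M = tf.constant([[3.0, 4.0], [0.0, 0.0]])
print(frobenius_norm_tf(M))       # tf.Tensor(5.0, shape=(), dtype=float32)
print(np.linalg.norm(M.numpy()))  # 5.0
print(tf.norm(M))                 # tf.Tensor(5.0, shape=(), dtype=float32)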