How can I convert a Tensor to an EagerTensor - numpy

I got a tensor from Model.pred() whose class is <tf.python.framework.ops.Tensor> (not eager),
but I can't use it in my custom loss function. So I tried to convert that Tensor to a <tf.python.framework.ops.EagerTensor>.
If I convert it, I can use .numpy() for a calculation in the loss function.
Is there a way to convert it?
Or can I get a numpy array from <... ops.Tensor>?
I'm using TensorFlow 2.3.0

You can either:
Try forcing eager execution with tf.config.run_functions_eagerly(True) or tf.compat.v1.enable_eager_execution() at the start of your code.
Or use a session (documentation here) and call .eval() on your Tensor instead of .numpy().
Example code of the second possibility:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Build a graph.
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b

# Launch the graph in a session.
sess = tf.compat.v1.Session()
with sess.as_default():
    print(c.eval())
sess.close()
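For completeness, here is a minimal sketch of the first option. This is an illustrative example, not code from the question; note that anything you compute in NumPy inside the loss is invisible to autodiff, so keep the returned value a TensorFlow op if you need gradients.

import tensorflow as tf

# Make tf.function-compiled code (including the Keras train step) run eagerly,
# so the tensors inside it are EagerTensors that expose .numpy().
tf.config.run_functions_eagerly(True)

def custom_loss(y_true, y_pred):
    print(y_pred.numpy())  # works now; would fail on a graph-mode Tensor
    return tf.reduce_mean(tf.square(y_true - y_pred))

# When compiling a Keras model you can also pass run_eagerly=True:
# model.compile(optimizer="adam", loss=custom_loss, run_eagerly=True)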

Related

K-Means of Tensorflow - Graph disconnected error

I am trying to write a function that runs KMeans on a dataset and outputs the cluster centroids. My aim is to use this in a custom Keras layer, so I am using TensorFlow's implementation of KMeans, which takes a tensor as the input dataset.
My problem, however, is that I can't make it work even as a standalone function. The problem comes from the fact that KMeans accepts a generator function that provides mini-batches instead of a plain tensor, but when I use a closure to do that, I get a graph disconnected error:
import tensorflow as tf  # version: 2.4.1
from tensorflow.compat.v1.estimator.experimental import KMeans

@tf.function
def KMeansCentroids(inputs, num_clusters, steps, use_mini_batch=False):
    # `inputs` is a 2D tensor
    def input_fn():
        # Each one of the return values below results in the same "Graph
        # disconnected" error. The tuples aren't really needed, but keep the
        # return consistent with the documentation.
        return (inputs, None)
        # return (tf.data.Dataset.from_tensor_slices(inputs), None)
        # return (tf.convert_to_tensor(inputs), None)

    kmeans = KMeans(
        num_clusters=num_clusters,
        use_mini_batch=use_mini_batch)
    kmeans.train(input_fn, steps=steps)  # This is where the error happens
    return kmeans.cluster_centers()
>>> x = tf.random.uniform((100, 2))
>>> c = KMeansCentroids(x, 5, 10)
The exact error is:
ValueError:
Tensor("strided_slice:0", shape=(), dtype=int32)
must be from the same graph as
Tensor("Equal:0", shape=(), dtype=bool)
(graphs are FuncGraph(name=KMeansCentroids, id=..) and <tensorflow.python.framework.ops.Graph object at ...>).
If I were to use a numpy dataset and convert it to a tensor inside the function, the code would work just fine.
Also, making input_fn() directly return tf.random.uniform((100, 2)) (ignoring the inputs argument) would work as well. That's why I am guessing that TensorFlow doesn't support closures, since it needs to build the computation graph at the beginning.
But I don't see how to work around that.
Could it be a version error, due to KMeans being a compat.v1.experimental module?
Note that the documentation of KMeans states for the input_fn():
The function should construct and return one of the following:
A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below.
A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
The problem you're facing is about invoking a tensor outside the graph it was created in. Basically, when you call the .train function, a new graph is created, composed of the graph defined in input_fn and the graph defined in model_fn.
kmeans.train(input_fn, steps=steps)
After that, all the tensors coming from outside these functions are treated as outsiders and are not part of this new graph. That's why you get a graph disconnected error when trying to use an outside tensor. To resolve this, you need to create the necessary tensors within these graphs.
import tensorflow as tf
from tensorflow.compat.v1.estimator.experimental import KMeans

@tf.function
def KMeansCentroids(num_clusters, steps, use_mini_batch=False):
    def input_fn(batch_size):
        # Create the input tensors inside input_fn, i.e. within the new graph.
        pinputs = tf.random.uniform((100, 2))
        dataset = tf.data.Dataset.from_tensor_slices((pinputs))
        dataset = dataset.shuffle(1000).repeat()
        return dataset.batch(batch_size)

    kmeans = KMeans(
        num_clusters=num_clusters,
        use_mini_batch=use_mini_batch)
    kmeans.train(input_fn=lambda: input_fn(5),
                 steps=steps)
    return kmeans.cluster_centers()

c = KMeansCentroids(5, 10)
Here is some more info for reading. FYI, I tested your code with a few versions of tf > 2, and I don't think it's related to a version error or anything like that.
Re-mentioning here for future readers: an alternative for using KMeans within Keras layers:
tf_kmeans.py
ClusteringLayer

Simple way to convert tensor to numpy array without eager mode in TF 2.2

I can't find a simple way to convert a tensor to a NumPy array without enabling eager mode, which gives a nice .numpy() method but also slows down my model training.
I'd be super grateful for your suggestions. For context, I'm writing a custom metric for my TensorFlow model that relies on a scikit-learn function, which only takes numpy arrays.
I've tried wrapping the tensors with np.array(), which throws a NotImplementedError. I also gave sessions and .eval() a go, but didn't get them to work either, and they seemed like too much for this simple job.
My specific error:
NotImplementedError: Cannot convert a symbolic Tensor (model_17/dense_17/Sigmoid:0) to a numpy array.
# Custom metric
def accuracy_ml(y_true, y_pred):
    return accuracy_score(y_true, np.round(y_pred))  # ERROR here, feeding a tensor to the sklearn function

# Model
cnn = simple_model(input_shape=(224, 224, 3),
                   num_classes=10,
                   base_model=base_ResNet101)

lr = 1e-2
loss_fn = tf.keras.losses.BinaryCrossentropy()
metrics = [accuracy_ml]

cnn.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
            loss=loss_fn,
            metrics=metrics)

# Simple baseline eval that fails
validation_steps = 17
loss0, accuracy0 = cnn.evaluate(validation_batches, steps=validation_steps)
Wrapping my NumPy metric with tf.numpy_function() solved it. https://www.tensorflow.org/api_docs/python/tf/numpy_function
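A minimal sketch of that fix, assuming the accuracy_ml metric from the question (the wrapper name is mine, not from the original post):

import numpy as np
import tensorflow as tf
from sklearn.metrics import accuracy_score

# The pure NumPy/sklearn metric from the question.
def accuracy_ml(y_true, y_pred):
    return accuracy_score(y_true, np.round(y_pred))

# tf.numpy_function executes the Python function on the tensors' concrete
# values at run time, so no eager mode is needed in the surrounding graph.
def accuracy_ml_wrapped(y_true, y_pred):
    return tf.numpy_function(accuracy_ml, [y_true, y_pred], tf.double)

# cnn.compile(..., metrics=[accuracy_ml_wrapped])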

how to transfer type of data in Tensorflow code

Assume two models have been established in tensorflow, with model1 followed by model2.
The condition is that the output type of model1 is a "tensor", while the input of model2 requires an "ndarray" when building the structure of the graph's model (the data doesn't flow through the graph). If we don't build two or more Sessions, how can we combine model1 with model2?
(In fact, the library function that requires an "ndarray" input is to be called in model2. I don't want to re-code this process.)
A sample follows:
import tensorflow as tf
import cv2

img = cv2.imread("star_sky.jpg")  # assuming the shape of the image is (256, 256, 3)
x_input = tf.placeholder(shape=(1, 256, 256, 3), dtype=tf.float32)
W = tf.Variable(tf.random_normal([3, 3, 3, 3]), dtype=tf.float32)
x_output_temp = tf.nn.conv2d(x_input, W, [1, 1, 1, 1], padding="SAME")

# The other model wants to use x_output to get the Canny edges of the image
x_output_ = x_output_temp[0]
x_output = cv2.Canny(x_output_, 100, 200)  # the numbers are threshold parameters

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    img = [img]
    x_output.eval({x_input: img})
If you want to use cv2 to process a TensorFlow Tensor, you need to do it inside a tf.py_func (which will convert the tensor to an ndarray at graph execution time and run the Python code you pass on that array).
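A rough sketch of that approach applied to the question's code, assuming TF 1.x graph mode (the cast to uint8 is my addition, since cv2.Canny expects 8-bit input):

import numpy as np
import tensorflow as tf
import cv2

def canny_on_array(x):
    # x arrives here as an ndarray at graph execution time.
    return cv2.Canny(x.astype(np.uint8), 100, 200)

x_input = tf.placeholder(shape=(1, 256, 256, 3), dtype=tf.float32)
W = tf.Variable(tf.random_normal([3, 3, 3, 3]), dtype=tf.float32)
x_output_temp = tf.nn.conv2d(x_input, W, [1, 1, 1, 1], padding="SAME")

# tf.py_func wraps the Python function as an op in the graph.
x_output = tf.py_func(canny_on_array, [x_output_temp[0]], tf.uint8)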

Tensorflow: How to feed a placeholder variable with a tensor?

I have a placeholder variable that expects a batch of input images:
input_placeholder = tf.placeholder(tf.float32, [None] + image_shape, name='input_images')
Now I have 2 sources for the input data:
1) a tensor and
2) some numpy data.
For the numpy input data, I know how to feed data to the placeholder variable:
sess = tf.Session()
mLoss, = sess.run([loss], feed_dict = {input_placeholder: myNumpyData})
How can I feed a tensor to that placeholder variable?
mLoss, = sess.run([loss], feed_dict = {input_placeholder: myInputTensor})
gives me an error:
TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays.
I don't want to convert the tensor into a numpy array using .eval(), since that would slow my program down. Is there any other way?
This was discussed on GitHub in 2016; please check here. Here is the key point by concretevitamin:
One key thing to note is that Tensor is simply a symbolic object. The values of your feed_dict are the actual values, e.g. a NumPy ndarray.
The tensor, as a symbolic object, flows in the graph while the actual values stay outside of it; we can only pass actual values into the graph, and the symbolic object cannot exist outside the graph.
You can use feed_dict to feed data into non-placeholders. So, first, wire up your dataflow graph directly to your myInputTensor tensor data source (i.e. don't use a placeholder). Then, when you want to run with your numpy data, you can effectively mask myInputTensor with myNumpyData, like this:
mLoss, = sess.run([loss], feed_dict={myInputTensor: myNumpyData})
[I'm still trying to figure out how to do this with multiple tensor data sources however.]
One way of solving the problem is to actually remove the placeholder tensor and replace it with your myInputTensor.
You will use myInputTensor as the source for the other operations in the graph, and when you want to run inference on the graph with your np array as input data, you will feed a value to this tensor directly.
Here is a quick example:
import tensorflow as tf
import numpy as np

# Input Tensor
myInputTensor = tf.ones(dtype=tf.float32, shape=1)  # In your case, this would be the result of some ops
output = myInputTensor * 5.0

with tf.Session() as sess:
    print(sess.run(output))  # == 5.0, using the Tensor value
    myNumpyData = np.zeros(1)
    print(sess.run(output, {myInputTensor: myNumpyData}))  # == 0.0 * 5.0 = 0.0, using the np value
This works for me in the latest version... maybe you have an older version of TF?
a = tf.Variable(1)
sess.run(2*a, feed_dict={a:5}) # prints 10

writing a custom cost function in tensorflow

I'm trying to write my own cost function in TensorFlow; however, apparently I cannot 'slice' the tensor object?
import tensorflow as tf
import numpy as np

# Establish variables
x = tf.placeholder("float", [None, 3])
W = tf.Variable(tf.zeros([3, 6]))
b = tf.Variable(tf.zeros([6]))

# Establish model
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Truth
y_ = tf.placeholder("float", [None, 6])

def angle(v1, v2):
    return np.arccos(np.sum(v1 * v2, axis=1))

def normVec(y):
    return np.cross(y[:, [0, 2, 4]], y[:, [1, 3, 5]])

angle_distance = -tf.reduce_sum(angle(normVec(y_), normVec(y)))

# This is the example code they give for cross entropy
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
I get the following error:
TypeError: Bad slice index [0, 2, 4] of type <type 'list'>
At present, tensorflow can't gather on axes other than the first - it's been requested.
But for what you want to do in this specific situation, you can transpose, then gather 0, 2, 4, and then transpose back. It won't be crazy fast, but it works:
tf.transpose(tf.gather(tf.transpose(y), [0,2,4]))
This is a useful workaround for some of the limitations in the current implementation of gather.
(But it is also correct that you can't use a numpy slice on a tensorflow node - you can only run the node and slice its output - and you need to initialize those variables before you run.) You're mixing tf and np in a way that doesn't work.
x = tf.Something(...)
is a tensorflow graph object. Numpy has no idea how to cope with such objects.
foo = sess.run(x)
is back to an object python can handle.
You typically want to keep your loss calculation in pure tensorflow, so do the cross and other functions in tf. You'll probably have to do the arccos the long way, as tf doesn't have a function for it.
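As a rough sketch of what that looks like, here is the loss rewritten in pure tf using the transpose/gather workaround from above (this is my reconstruction, and recent TensorFlow versions do provide tf.acos, tf.linalg.cross, and an axis argument for tf.gather, which would simplify it):

import tensorflow as tf

def gather_cols(y, cols):
    # The workaround from above: transpose, gather rows, transpose back.
    return tf.transpose(tf.gather(tf.transpose(y), cols))

def norm_vec_tf(y):
    # tf counterpart of np.cross on the selected column triples.
    return tf.linalg.cross(gather_cols(y, [0, 2, 4]), gather_cols(y, [1, 3, 5]))

def angle_tf(v1, v2):
    # tf.acos exists in recent versions; on very old ones you would have to
    # approximate arccos manually, as noted above.
    return tf.acos(tf.reduce_sum(v1 * v2, axis=1))

# angle_distance = -tf.reduce_sum(angle_tf(norm_vec_tf(y_), norm_vec_tf(y)))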
I just realized that the following fails:
cross_entropy = -tf.reduce_sum(y_*np.log(y))
You can't use numpy functions on tf objects, and the indexing may be different too.
I think you can use the "Wraps Python function" method in tensorflow. Here's the link to the documentation.
And as for the people who answered "Why don't you just use tensorflow's built-in functions to construct it?" - sometimes the cost function people are looking for cannot be expressed with tf's functions, or is extremely difficult to.
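A hedged sketch of that approach for this question's loss, wrapping the NumPy implementation with tf.py_func (the names are mine; note that py_func has no gradient registered by default, so this works for evaluation but not for training with autodiff):

import numpy as np
import tensorflow as tf

def angle_distance_np(y_true, y_pred):
    # The NumPy version of the cost from the question.
    def norm_vec(y):
        return np.cross(y[:, [0, 2, 4]], y[:, [1, 3, 5]])
    def angle(v1, v2):
        return np.arccos(np.sum(v1 * v2, axis=1))
    return -np.sum(angle(norm_vec(y_true), norm_vec(y_pred))).astype(np.float32)

# Wrap the Python function as a graph op; it receives ndarrays at run time.
angle_distance = tf.py_func(angle_distance_np, [y_, y], tf.float32)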
This is because you have not initialized your variables, and because of this your Tensor does not hold actual values right now (you can read more in my answer here).
Just do something like this:
def normVec(y):
    print(y)
    return np.cross(y[:, [0, 2, 4]], y[:, [1, 3, 5]])

t1 = normVec(y_)
# and comment out everything after it.
You will see that you do not have actual values yet, only Tensor("Placeholder_1:0", shape=TensorShape([Dimension(None), Dimension(6)]), dtype=float32).
Try initializing your variables:
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
and evaluate your variable with sess.run(y). P.S. You have not fed your placeholders up to this point.