I want to change the shape and the content of a tensor in a Keras model. The tensor is the output of a layer and has
shape1=(batch_size, max_sentences_in_doc, max_tokens_in_doc, embedding_size)
and I want to convert to
shape2=(batch_size, max_documents_length, embedding_size)
suitable as input for the next layer. Here sentences are made of tokens and are zero-padded so that every sentence has length=max_tokens_in_sentence.
In detail:
I want to concatenate all the sentences of a document, taking only the non-zero part of each sentence;
then I zero-pad this concatenation to length=max_document_length.
So passing from shape1 to shape2 is not just a reshape, as mathematical operations are involved.
I created the function embedding_to_docs(x) that iterates over the tensor of shape1 to transform it into shape2. I call the function using a Lambda layer in the model; it works in debug mode with fictitious data, but when I try to call it while building the model, the following error is raised:
Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn.
import numpy as np

def embedding_to_docs(x):
    new_output = []
    for doc in x:
        document = []
        for sentence in doc:
            non_zero_indexes = np.nonzero(sentence[:, 0])
            max_index = max(non_zero_indexes[0])
            if max_index > 0:
                document.extend(sentence[0:max_index])
        if MAX_DOCUMENT_LENGTH-len(document) > 0:
            a = np.zeros((MAX_DOCUMENT_LENGTH-len(document), 1024))
            document.extend(a)
        else:
            document = document[0:MAX_DOCUMENT_LENGTH]
        new_output.append(document)
    return np.asarray(new_output)
...
# in the model:
tensor_of_shape2 = Lambda(embedding_to_docs)(tensor_of_shape1)
How to fix this?
You can use py_function, which allows you to switch from graph mode (used by Keras) to eager mode (where it is possible to iterate over tensors as in your function).
def to_docs(x):
    return tf.py_function(embedding_to_docs, [x], tf.float32)

tensor_of_shape2 = Lambda(to_docs)(tensor_of_shape1)
Note that the code that runs within your embedding_to_docs must be written with TensorFlow eager operations instead of NumPy. This means you'd need to replace some of the NumPy calls with TensorFlow ones. You'd certainly need to replace the return line with:
return tf.convert_to_tensor(new_output)
Using NumPy arrays would stop the gradient computation, but you are likely not interested in the gradient flowing through the input data anyway.
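For reference, here is a minimal sketch of what an eager, TensorFlow-only version of embedding_to_docs could look like. It assumes MAX_DOCUMENT_LENGTH is the constant from your code and that padded token rows are entirely zero; inside tf.py_function eager execution is enabled, so Python iteration over the tensor is allowed:

def embedding_to_docs(x):
    # Runs eagerly inside tf.py_function, so iterating over `x` is allowed here.
    docs = []
    for doc in x:
        sentences = []
        for sentence in doc:
            # Keep only the token rows that are not entirely zero padding.
            mask = tf.math.reduce_any(tf.not_equal(sentence, 0.0), axis=-1)
            sentences.append(tf.boolean_mask(sentence, mask))
        flat = tf.concat(sentences, axis=0)[:MAX_DOCUMENT_LENGTH]
        # Zero-pad the concatenated document up to MAX_DOCUMENT_LENGTH rows.
        pad = MAX_DOCUMENT_LENGTH - flat.shape[0]
        docs.append(tf.pad(flat, [[0, pad], [0, 0]]))
    return tf.stack(docs)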
Related
Currently I am trying to write my own loss function, but when returning the result (a tensor consisting of a list of loss values) I get the following error:
ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0', 'dense_2/kernel:0', 'dense_2/bias:0'].
However, the tutorials and the docs also use tf.reduce_mean, and when I use it the way they do (they show how to code an MSE loss function) I don't get the error, so it seems I am missing something.
My code:
import tensorflow as tf
import tensorflow_addons as tfa

gl = tfa.losses.GIoULoss()

def loss(y_true, y_pred):
    batch_size = y_true.shape[0]
    # now contains 32 lists (a batch) of bbxs -> shape is (32, 7876)
    bbx_true = y_true.numpy()
    # now contains 32 lists (a batch) of bbxs; here we have to access [0] twice to get the entry itself
    # -> shape is (32, 1, 1, 7876)
    bbx_pred = y_pred.numpy()
    losses = []
    curr_true = []
    curr_pred = []
    for i in range(batch_size):
        curr_true = bbx_true[i]
        curr_pred = bbx_pred[i][0][0]
        curr_true = [curr_true[x:x+4] for x in range(0, len(curr_true), 4)]
        curr_pred = [curr_pred[x:x+4] for x in range(0, len(curr_pred), 4)]
        if len(curr_true) == 0:
            curr_true.append([0., 0., 0., 0.])
        curr_loss = gl(curr_true, curr_pred)
        losses.append(curr_loss)
    return tf.math.reduce_mean(losses, axis=-1)
Basically I want to achieve bounding box regression, and for that I want to use the GIoU loss function. Because my model outputs 7896 neurons (the maximum number of bounding boxes I want to predict according to my training set, times 4) and the GIoU loss function needs its input as an array of lists with 4 elements each, I have to perform this transformation.
How do I have to change my code in order to also build up a gradient?
NumPy doesn't provide autograd functions, so you need to use TensorFlow tensors exclusively in your loss (otherwise the gradient is lost during backpropagation). So avoid using .numpy() and use TensorFlow operators and slicing on TensorFlow tensors instead.
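As a minimal sketch of that idea (assuming your targets can be reshaped into (batch, n_boxes, 4) groups as your list comprehensions do, and that GIoULoss accepts batched boxes; the shapes are taken from your question and not verified):

import tensorflow as tf
import tensorflow_addons as tfa

gl = tfa.losses.GIoULoss()

def loss(y_true, y_pred):
    # Stay on TensorFlow tensors the whole time so the gradient can flow back to the weights.
    boxes_true = tf.reshape(y_true, (tf.shape(y_true)[0], -1, 4))  # (batch, n_boxes, 4)
    boxes_pred = tf.reshape(y_pred, (tf.shape(y_true)[0], -1, 4))  # drops the extra (1, 1) dims
    return gl(boxes_true, boxes_pred)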
I am trying to write a function that runs KMeans on a dataset and outputs the cluster centroids. My aim is to use this in a custom keras layer, so I am using TensorFlow's implementation of KMeans that takes a tensor as the input dataset.
My problem, however, is that I can't make it work even as a standalone function. The problem comes from the fact that KMeans accepts a generator function that provides mini-batches instead of a plain tensor, but when I use a closure to do that, I get a graph disconnected error:
import tensorflow as tf  # version: 2.4.1
from tensorflow.compat.v1.estimator.experimental import KMeans

@tf.function
def KMeansCentroids(inputs, num_clusters, steps, use_mini_batch=False):
    # `inputs` is a 2D tensor
    def input_fn():
        # Each one of the lines below results in the same "Graph Disconnected" error.
        # The tuples aren't really needed, but they are kept to be consistent with the documentation.
        return (inputs, None)
        return (tf.data.Dataset.from_tensor_slices(inputs), None)
        return (tf.convert_to_tensor(inputs), None)
    kmeans = KMeans(
        num_clusters=num_clusters,
        use_mini_batch=use_mini_batch)
    kmeans.train(input_fn, steps=steps)  # This is where the error happens
    return kmeans.cluster_centers()

>>> x = tf.random.uniform((100, 2))
>>> c = KMeansCentroids(x, 5, 10)
The exact error is:
ValueError:
Tensor("strided_slice:0", shape=(), dtype=int32)
must be from the same graph as
Tensor("Equal:0", shape=(), dtype=bool)
(graphs are FuncGraph(name=KMeansCentroids, id=..) and <tensorflow.python.framework.ops.Graph object at ...>).
If I were to use a numpy dataset and convert to tensor inside the function, the code would work just fine.
Also, making input_fn() return tf.random.uniform((100, 2)) directly (ignoring the inputs argument) would again work. That's why I am guessing that TensorFlow doesn't support closures here, since it needs to build the computation graph up front.
But I don't see how to work around that.
Could it be a version error due to KMeans being a compat.v1.experimental module?
Note that the documentation of KMeans states for the input_fn():
The function should construct and return one of the following:
A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below.
A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
The problem you're facing is about referencing a tensor from outside the graph that gets created. When you call the .train function, a new graph is built, made up of the graph defined in your input_fn and the graph defined in the model_fn.
kmeans.train(input_fn, steps=steps)
Any tensor created outside these functions is treated as an outsider and won't be part of this new graph. That's why you get a graph disconnected error when you try to use such an outside tensor. To resolve this, you need to create the necessary tensors within those functions.
import tensorflow as tf
from tensorflow.compat.v1.estimator.experimental import KMeans

@tf.function
def KMeansCentroids(num_clusters, steps, use_mini_batch=False):
    def input_fn(batch_size):
        pinputs = tf.random.uniform((100, 2))
        dataset = tf.data.Dataset.from_tensor_slices((pinputs))
        dataset = dataset.shuffle(1000).repeat()
        return dataset.batch(batch_size)
    kmeans = KMeans(
        num_clusters=num_clusters,
        use_mini_batch=use_mini_batch)
    kmeans.train(input_fn=lambda: input_fn(5),
                 steps=steps)
    return kmeans.cluster_centers()

c = KMeansCentroids(5, 10)
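If you need to cluster your own data rather than random values, one hedged variant (in line with your observation that a NumPy dataset converted inside the function works) is to pass the data as a plain NumPy array and build all tensors inside input_fn; np_inputs here is just an illustrative parameter name:

import numpy as np
import tensorflow as tf
from tensorflow.compat.v1.estimator.experimental import KMeans

def KMeansCentroids(np_inputs, num_clusters, steps, batch_size=5, use_mini_batch=False):
    # np_inputs is a plain NumPy array; every tensor is created inside input_fn,
    # so it belongs to the graph that kmeans.train() builds.
    def input_fn():
        dataset = tf.data.Dataset.from_tensor_slices(np_inputs)
        return dataset.shuffle(1000).repeat().batch(batch_size)
    kmeans = KMeans(num_clusters=num_clusters, use_mini_batch=use_mini_batch)
    kmeans.train(input_fn, steps=steps)
    return kmeans.cluster_centers()

c = KMeansCentroids(np.random.uniform(size=(100, 2)).astype(np.float32), 5, 10)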
Here is some more info for further reading. FYI, I tested your code with a few versions of TF > 2, and I don't think the problem is related to a version error.
Re-mentioning here for future readers: alternatives for using KMeans within Keras layers:
tf_kmeans.py
ClusteringLayer
Consider that I have a file data.csv which contains:
feature0,feature1,label
True,0.1,class_1
False,2.7,class_2
False,10.1,class_3
I would like to load this as a dataset and transform the label into a boolean such that it is true for class_1 and false otherwise. Here is my code:
import tensorflow as tf
data = tf.data.experimental.make_csv_dataset(
    'data.csv',
    32,
    label_name='label',
    shuffle=False,
    num_epochs=1)
def view(ds, num_batches=1):
    for f, l in ds.take(num_batches):
        print('Features:')
        print(f)
        print('Labels:')
        print(l)

def process_labels(features, label):
    if label == 'class_1':
        label = True
    else:
        label = False
    # label = label=='class_1'
    return features, label
view(data.map(process_labels))
This throws an error: InvalidArgumentError: Input to reshape is a tensor with 3 values, but the requested shape has 1 [[{{node Reshape}}]]. Why is that? This is all the more confusing because when I replace the if/else with the one-liner that's commented out, label = label=='class_1', the problem disappears. What's happening here?
I'm using TensorFlow 2.4.1 and Python 3.8.5.
The second argument of tf.data.experimental.make_csv_dataset is the batch size, which means that the dataset it creates has the following shape: (batch_size, features). Any function that you map over that dataset has to work on a batch of data, not just on a single element of the dataset.
label = label=='class_1' works because the comparison is broadcast elementwise over the whole batch, but your previous function does not handle a batch.
You have two ways of making the function work:
either write a function that handles a batch of data (i.e., your working one-liner; see the sketch further below),
or call unbatch on the dataset. This may have negative performance effects, as the docs state:
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
data.unbatch().map(process_labels).batch(32)
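For the first option, a minimal sketch of a batch-aware process_labels (equivalent to your commented-out one-liner) could look like this:

def process_labels(features, label):
    # The comparison is applied elementwise to the whole batch of string labels,
    # returning a boolean tensor of shape (batch_size,).
    return features, tf.equal(label, 'class_1')

view(data.map(process_labels))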
I am trying to express if...elif...elif...else... logic in TensorFlow, but I ran into errors. I then tried tf.cond, but it only handles a single branch.
labels is defined as a placeholder; it is a tensor that needs to be fed during training. The range of labels and newlogits is [0, 27], but when computing accuracy I want to map the labels and the logits to [0, 3].
def tower_acc(logits, labels, batch_size):
    newlogits = tf.argmax(logits, 1)
    resultlabels = []
    resultlogits = []
    for i in range(batch_size):
        if labels[i] <= 4:
            tmplabel = 0
        elif 5 < labels[i] <= 9:
            tmplabel = 1
        elif 10 < labels[i] <= 14:
            tmplabel = 2
        else:
            tmplabel = 3
        resultlabels.append(tmplabel)
    for i in range(batch_size):
        if newlogits[i] <= 4:
            tmplogit = 0
        elif 5 < newlogits[i] <= 9:
            tmplogit = 1
        elif 10 < newlogits[i] <= 14:
            tmplogit = 2
        else:
            tmplogit = 3
        resultlogits.append(tmplogit)
    correct_pred = tf.equal(resultlogits, resultlabels)
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    return accuracy
The error is the following:
raise TypeError("Using a tf.Tensor as a Python bool is not allowed. "
TypeError: Using a tf.Tensor as a Python bool is not allowed. Use if t is not None: instead of if t: to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
You need to review TensorFlow basics.
As the error says, you cannot treat TensorFlow tensors as Python booleans. labels[i]<=4 is a (boolean) TensorFlow tensor. Think of it as a pointer into your TensorFlow graph: it doesn't have a value by itself (in your case, its value obviously depends on the placeholder you feed). Another problem with your code is that TensorFlow doesn't support the a<x<b notation (you would need tf.logical_and for that).
While in principle it is possible to nest tf.cond operations by using an inner tf.cond within the false_fn of an outer tf.cond, your entire approach to remapping the integers is inappropriate: by using a for loop and ifs, you are trying to force the GPU to work serially.
Instead, define a lookup table with 28 elements, mapping each integer to 0, 1, 2 or 3, and use tf.gather to map all of the labels from their 28-class representation to the 4-class representation. This mapping can be done for all of the labels at once; no loops needed.
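A minimal sketch of that approach (the table below groups the 28 classes into blocks of 5, 5, 5 and 13, which appears to be the intent of the if/elif chain above; adjust the boundaries to your actual grouping, since the original comparisons send labels 5, 10 and 15 to the else branch):

# Lookup table: index = fine-grained class in [0, 27], value = coarse class in [0, 3].
mapping = tf.constant([0]*5 + [1]*5 + [2]*5 + [3]*13, dtype=tf.int64)

def tower_acc(logits, labels):
    newlogits = tf.argmax(logits, 1)
    coarse_labels = tf.gather(mapping, labels)      # remaps the whole batch at once
    coarse_logits = tf.gather(mapping, newlogits)   # no Python loop, no tf.cond
    correct_pred = tf.equal(coarse_logits, coarse_labels)
    return tf.reduce_mean(tf.cast(correct_pred, tf.float32))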
I have a function get_image(...) that performs preprocessing on my input images. I gather all images that belong to the same batch in a list like this:
batch = [get_image(file_path) for file_path in batch_files]
Now I want to convert this list into one single tensor with the first dimension being the batch size dimension, such that I could feed it to the input placeholder of my network.
_ = self.sess.run([loss],feed_dict={ input_placeholder: batch })
Any idea how I could do that?
batch_concat = tf.placeholder(shape=[None] + self.image_shape, dtype=tf.float32)

for i in xrange(0, self.batch_size):
    if i == 0:
        tmp_batch = tf.expand_dims(batch[i], 0)
        batch_concat = tmp_batch
    else:
        tmp_batch = tf.expand_dims(batch[i], 0)
        batch_concat = tf.concat(0, [batch_concat, tmp_batch])
When I try to concatenate all tensors, I get the following error:
TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays.
So maybe it would be enough to convert the tensor back into a numpy array before feeding it to the network?
In TF r1.1, tf.pack has been replaced with tf.stack.
You can use tf.pack to pack a list of tensors into a batch.
image_list = [get_image(file_path) for file_path in batch_files]
image_batch = tf.pack(image_list)
You can also use tf.concat to concatenate the list along the first dimension and reshape it.
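A minimal sketch of both routes, assuming get_image(...) returns tensors of identical shape (names taken from the question) and a TF version where tf.stack and the tf.concat(values, axis) signature are available:

image_list = [get_image(file_path) for file_path in batch_files]

# tf.stack adds the leading batch dimension in one call (tf.pack in older releases):
image_batch = tf.stack(image_list)            # shape: (batch_size,) + image_shape

# Equivalent route via expand_dims + concat along the first dimension:
expanded = [tf.expand_dims(img, 0) for img in image_list]
image_batch_alt = tf.concat(expanded, axis=0)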
The issue here is using a tensor as a value in feed_dict. Instead of feeding batch as the value for input_placeholder, why not use batch directly in place of input_placeholder, assuming batch is your batched tensor?
So, instead of:
input_placeholder = tf.placeholder(tf.int32)
loss = some_function(input_placeholder)
sess.run(loss, feed_dict={input_placeholder: batch})
Do:
loss = some_function(batch)
sess.run(loss)