I'm working with RaggedTensors to manipulate a dense tensor. Something like this:
import tensorflow as tf

# Split the input with a boolean mask and its complement, then reassemble.
out_left = tf.ragged.boolean_mask(input, index)
index = tf.math.logical_not(index)
out_right = tf.ragged.boolean_mask(input, index)
reconstructed_tensor = tf.concat([out_left, out_right], axis=-1)
reconstructed_tensor = reconstructed_tensor.to_tensor()
As you can see, in my example I'm just splitting my input tensor using RaggedTensors and reconstructing it (I know it's useless, but it's just for the sake of simplicity).
The problem I'm having is that I'm getting the following warning:
Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model_14/channel_roll_13/RaggedToTensor/boolean_mask_1/GatherV2:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model_14/channel_roll_13/RaggedToTensor/boolean_mask/GatherV2:0", shape=(None,), dtype=float32), dense_shape=Tensor("gradient_tape/model_14/channel_roll_13/RaggedToTensor/Shape:0", shape=(1,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
Since I know the shape of the output tensor, I would have thought TensorFlow would too. Is there any way I can explicitly specify the shape of the output tensor, since TensorFlow doesn't seem to deduce it on its own?
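A minimal sketch of stating the shape explicitly (this assumes a TF version where RaggedTensor.to_tensor accepts a shape argument, 2.2+, and that input has a known static shape):

# to_tensor(shape=...) states the expected dense shape up front, and
# tf.ensure_shape pins the static shape for downstream ops.
reconstructed_tensor = reconstructed_tensor.to_tensor(shape=input.shape)
reconstructed_tensor = tf.ensure_shape(reconstructed_tensor, input.shape)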
Related
I have a generator that yields tf.sparse.SparseTensors. I want to turn this into a TensorFlow Dataset, but am running into some issues. I am using TF2. First, unlike regular Tensors, you cannot simply yield them from the generator (even when providing the correct data types for output_types). For a sparse tensor of [1,0,0,0,5,0], the error looks like:
tensorflow.python.framework.errors_impl.InvalidArgumentError: TypeError: `generator` yielded an element that could not be converted to the expected type. The expected type was int64, but the yielded element was SparseTensor(indices=tf.Tensor(
[[0]
 [4]], shape=(2, 1), dtype=int64), values=tf.Tensor([1 5], shape=(2,), dtype=int64), dense_shape=tf.Tensor([6], shape=(1,), dtype=int64)).
After looking around online, I found this open issue and tried something similar: https://github.com/tensorflow/tensorflow/issues/16689 - read the indices, values, and shape as separate tensors into a TF Dataset, then map over the dataset to create the sparse tensor. This does not work as shown in some of the examples in the GitHub issue: tf.sparse.SparseTensor(indices, values, shape) does not seem to accept indices and shape in the form of a tf.Tensor. It will happily take a list or NumPy array, but not a Tensor. Since map is not eager, I also cannot call .numpy() on the Tensor. What is the best way to get this to work? I see there are tf.py_function/tf.numpy_function, which could help, but constructing the output type can be tricky (though not impossible) for my use case: the incoming data is not fixed and can contain a mix of sparse and dense tensors.
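For reference, here is a minimal sketch of the approach from the issue, with illustrative component names, the way I tried it:

import tensorflow as tf

# Yield the three SparseTensor components separately.
def gen():
    yield ([[0], [4]],  # indices
           [1, 5],      # values
           [6])         # dense_shape

ds = tf.data.Dataset.from_generator(
    gen, output_types=(tf.int64, tf.int64, tf.int64))

# This is where it breaks for me: inside map(), the components arrive as
# symbolic Tensors rather than lists or NumPy arrays.
ds = ds.map(lambda i, v, s: tf.sparse.SparseTensor(i, v, s))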
I'm using tf.data.experimental.make_csv_dataset to create a dataset from a .csv file. I'm also using tf.keras.layers.DenseFeatures as an input layer of my model.
I'm struggling to create a DenseFeatures layer properly so that it is compatible with my dataset in the case when batch_size parameter of make_csv_dataset is not equal to 1 (in case if batch_size=1 my setup works as expected).
I create the DenseFeatures layer from a list of tf.feature_column.numeric_column elements with shape=(my_batch_size,), but in this case the input layer seems, for some reason, to expect shape [my_batch_size, my_batch_size] instead of [my_batch_size, 1].
With my_batch_size=19 I'm getting the following error when trying to fit the model:
ValueError: Cannot reshape a tensor with 19 elements to shape [19,19] (361 elements) for 'MyModel/Input/MyColumn1/Reshape' (op: 'Reshape') with input shapes: [19,1], [2] and with input tensors computed as partial shapes: input[1] = [19,19].
If I don't specify shape when creating numeric_column it doesn't work either. I'm getting the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: The second input must be a scalar, but it has shape [19]
which suggests that numeric_column expects a scalar but receives the whole batch in one Tensor.
How do I create an input layer of DenseFeatures so that it accepts the dataset produced by make_csv_dataset(batch_size=my_batch_size)?
From the tf.feature_column.numeric_column documentation:
shape: An iterable of integers specifies the shape of the Tensor. An integer can be given which means a single dimension Tensor with given width. The Tensor representing the column will have the shape of [batch_size] + shape.
This means that you must not include the batch size in the shape argument: shape describes a single example, so for a scalar feature use shape=().
Currently, with a batch size of 1, you get shape=(1,), which TF can handle thanks to broadcasting (dimensions of size 1 are easily added by TF when necessary); that's why it works.
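For example, a minimal sketch (the column name is borrowed from the error message above):

import tensorflow as tf

# shape describes a single example, never the batch: a scalar feature is
# declared with shape=() (or the default shape=(1,)).
feature_columns = [tf.feature_column.numeric_column('MyColumn1', shape=())]
input_layer = tf.keras.layers.DenseFeatures(feature_columns)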
Hope this can help. Provide more code if you want more help.
I have a Tensor which needs to be cropped using indices held in another tensor.
For example:
Input: (None, 5x5x10) tensor
BoundingBox: (None, 2) tensor
I want to have an operation that does the following:
Output: (None, 3x2x10) tensor
if BoundingBox[0,0] = 3 and BoundingBox[0,1] = 2.
This is the same as tf.image.crop_to_bounding_box, except that that function does not accept a tensor-typed bounding box as input. Please help.
Unfortunately this isn't possible with 'standard' tensor operations because the dimensions of the output could vary.
Consider the example where bounding_box[0] == [3,2] and bounding_box[1] == [4,2]: your output shape would need to be (None, 3 or 4, 2, 10), and (of course) a dimension of "3 or 4" is not allowed for standard tensors.
TensorFlow does, however, have the concept of a Ragged Tensor which could conceivably be used to represent crops of different dimensions but this is an unusual case and is unlikely to be suited to most mainstream downstream training operations. Still, it could be worth reading up on this to see if it fits your use case: link
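As a rough sketch of that idea (assuming the crops are taken from the top-left corner, and using the shapes from the question):

import tensorflow as tf

# images: (batch, 5, 5, 10); boxes[i] = (height_i, width_i) for example i.
images = tf.random.uniform([2, 5, 5, 10])
boxes = tf.constant([[3, 2], [4, 2]])

heights = boxes[:, 0]                     # rows kept per image
widths = tf.repeat(boxes[:, 1], heights)  # width of each kept row

# lengths=(outer, inner) makes both spatial dimensions ragged, keeping a
# (height_i, width_i) top-left crop of each image.
cropped = tf.RaggedTensor.from_tensor(images, lengths=(heights, widths))
print(cropped.shape)  # (2, None, None, 10)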
The shape method of a sparse tensor returns a tensor object, which seems to be of no use for extracting the actual shape of the sparse tensor without resorting to the run function.
To clarify what I mean, first consider a sparse tensor:
a = tf.SparseTensor(indices=[[0, 0, 0], [1, 2, 1]], values=[1.0+2j, 2.0], shape=[3, 4, 2])
a.shape returns:
<tf.Tensor 'SparseTensor_1/shape:0' shape=(3,) dtype=int64>
This is of little use.
Now, consider a dense tensor:
import numpy as np
a = tf.constant(np.random.normal(0.0, 1.0, (4, 4)).astype(dtype=np.complex128))
a.get_shape() returns:
TensorShape([Dimension(4), Dimension(4)])
I can use this output and cast it into a list or tuple of integers without ever invoking run(). However, I cannot do the same for a sparse tensor, unless I first convert it to a dense tensor (which is not yet implemented for complex sparse tensors) and call get_shape() on that. This is redundant, defeats the purpose of using a sparse tensor in the first place, and also leads to errors down the road if the input sparse tensor is complex.
Is there a way to obtain the shape of a sparse tensor without invoking run() or converting it to a dense tensor first?
tf.SparseTensor is implemented as a triple of dense Tensors under the hood. The shape of a SparseTensor is just a Tensor; if you want to know its value, your best bet is to evaluate it using session.run:
print(sess.run(a.shape))
In general, TensorFlow does not promise to compute an exact shape even for dense tensors at graph construction time: shapes are best effort and may not even have a fixed value. So even for a dense Tensor you may have to evaluate it using run to get a precise shape.
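A self-contained version of that evaluation (TF 1.x style, matching the question; newer APIs call the shape tensor dense_shape):

import tensorflow as tf

a = tf.SparseTensor(indices=[[0, 0, 0], [1, 2, 1]],
                    values=[1.0 + 2j, 2.0],
                    dense_shape=[3, 4, 2])

with tf.Session() as sess:
    # Evaluating the shape tensor yields a NumPy array that can be cast to
    # a plain tuple of ints without ever densifying the sparse tensor.
    shape = tuple(sess.run(a.dense_shape))
print(shape)  # (3, 4, 2)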
I have used the model described here on the 0.6.0 branch. The code can be found here. I have done some minor changes to the linked code.
In my code I create two models, one for training and one for validation, very similar as it is done in the Tensorflow Tutorial.
with tf.variable_scope("model", reuse=None, initializer=initializer):
m = PTBModel_User(is_training=True, config=config, name='Training model')
with tf.variable_scope("model", reuse=True, initializer=initializer):
mtest = PTBModel_User(is_training=False, config=config_valid, name='Validation model')
The first model, the one for training, seems to be created just fine, but the second, used for validation, does not: the output gets a None dimension! The row I'm referring to is row 134 in the linked code:
output = tf.reshape(tf.concat(1, outputs), [-1, size])
I've added these lines right after the reshape of the output:
output_shape = output.get_shape()
print("Model num_steps:", num_steps)
print("Model batch_size:", batch_size)
print("Output dims", output_shape[0], output_shape[1])
and that gives me this:
Model num_steps: 400
Model batch_size: 1
Output dims Dimension(None) Dimension(650)
This problem only happens with the 'validation model', not with the 'training model'. For the 'training model' I get expected output:
Model num_steps: 400
Model batch_size: 2
Output dims Dimension(800) Dimension(650)
(Note that with the 'validation model' I use a batch_size=1 instead of batch_size=2 that I use for the training model)
From what I understand, using -1 as input to the reshape function will figure the output shape out automagically! But then why do I get None? Nothing in the config fed to the model has a None value.
Thank you for all the help and tips!
TL;DR: A dimension being None simply means that shape inference could not determine an exact shape for the output tensor, at graph-building time. When you run the graph, the tensor will have the appropriate run-time shape.
If you're not interested in how shape inference works, you can stop reading now.
Shape inference applies local rules, based on a "shape function" that takes the shapes of the inputs to an operation and computes (possibly incomplete) shapes for the outputs of an operation. To figure out why tf.reshape() gives an incomplete shape, we have to look at its inputs, and work backwards:
The shape argument to tf.reshape() includes a [-1], which means "figure the output shape automagically" based on the shape of the tensor input.
The tensor input is the output of tf.concat() on the same line.
The inputs to tf.concat() are computed by a tf.mul() in BasicLSTMCell.__call__(). The tf.mul() op multiplies the result of a tf.tanh() and a tf.sigmoid() op.
The tf.tanh() op produces an output of size [?, hidden_size], and the tf.sigmoid() op produces an output of size [batch_size, hidden_size].
The tf.mul() op performs NumPy-style broadcasting. A dimension will only be broadcast if it has size 1. Consider three cases where we compute tf.mul(x, y):
If x has shape [1, 10], and y has shape [5, 10], then broadcasting will happen, and the output shape will be [5, 10].
If x has shape [1, 10], and y has shape [1, 10], then there will be no broadcasting, and the output shape will be [1, 10].
However, if x has shape [1, 10], and y has shape [?, 10], there is insufficient static information to tell whether broadcasting will happen (even though we happen to know that case 2 applies at runtime).
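Case 3 is easy to reproduce (using the era-appropriate tf.mul; later versions spell it tf.multiply):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 10])
y = tf.placeholder(tf.float32, shape=[None, 10])

z = tf.mul(x, y)       # tf.multiply in newer versions
print(z.get_shape())   # (?, 10): the first dimension cannot be inferred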
Therefore, when batch_size is 1, the tf.mul() op produces an output with the shape [?, hidden_size]; but when batch_size is greater than 1, the output shape is [batch_size, hidden_size].
Where shape inference breaks down, it can be appropriate to use the Tensor.set_shape() method to add information. This would potentially be useful in the BasicLSTMCell implementation, where we know more than it is possible to infer about the shapes of the outputs.
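For example, a sketch of how that could look right after the reshape on row 134 (names taken from the question's code):

output = tf.reshape(tf.concat(1, outputs), [-1, size])
# We know more than shape inference can prove: the result always has exactly
# batch_size * num_steps rows, so restore that static information.
output.set_shape([batch_size * num_steps, size])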