Tflite: Resize output tensor based on input tensor contents - tensorflow

I am writing a custom op that outputs a tensor whose shape depends on the values of the input tensor. The problem is that we don't have access to the tensor values in the Prepare method. We can get the tensor shapes but the values are not available. How do I implement this?
On a related note, how do I support outputting a tensor with partially specified shape? The tensor would need to be allocated during the eval function, but I don't see an API to allocate tensors at run time.

Related

How can I tell tensorflow about the shape of a parse_tensor operation?

When I decode a tensor using tf.io.parse_tensor, the shape comes out as shape=<unknown>, which makes sense because tensorflow has no way to know the shape of the data that I will pass into this operation. However, if I know that the data I will provide has a certain shape, such as having exactly 2 dimensions, or having 3 rows and an unknown number of columns, how can I "tell" tensorflow this?
I need to do this because I am using functions like padded_batch later on that behave differently for different shapes (producing a ragged tensor versus a dense tensor).
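A minimal sketch of one way to declare the part of the shape that is known, using tf.ensure_shape inside the parsing function (Tensor.set_shape would work too); the serialized tensors here are fabricated purely for illustration:

import tensorflow as tf

serialized = tf.data.Dataset.from_tensor_slices(
    [tf.io.serialize_tensor(tf.zeros([3, n])).numpy() for n in (2, 5)]   # stand-in data: 3 rows, unknown columns
)

def parse(s):
    t = tf.io.parse_tensor(s, out_type=tf.float32)   # static shape: <unknown>
    return tf.ensure_shape(t, [3, None])             # static shape: (3, None)

dataset = serialized.map(parse)
padded = dataset.padded_batch(2, padded_shapes=[3, None])   # now batches as dense, padded tensors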

how to get the shapes of a tensor inside a custom loss in tensorflow

I implemented a custom loss. I want to get the shape of the input parameters, like y_true and y_pred, but whatever I have tried, I can't get a valid shape. The methods I tried, including y_true.shape, int_shape(y_true), and y_true.get_shape(), all returned (None, None).
TensorFlow 1.x does not run eagerly, and even in 2.x a custom Keras loss is compiled into a graph, so the static shape of y_true is not known while the loss function is being traced; it only becomes concrete when the graph actually runs. Use tf.shape(y_true) instead: it returns the shape as a tensor whose values are valid at run time, and implement your loss function based on that.
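A minimal sketch of that idea, assuming a regression-style model; the loss itself (sum of squared errors divided by the dynamic batch size) is only illustrative:

import tensorflow as tf

def my_loss(y_true, y_pred):
    dynamic_shape = tf.shape(y_true)                  # valid values at run time, e.g. [32, 10]
    batch_size = tf.cast(dynamic_shape[0], y_pred.dtype)
    return tf.reduce_sum(tf.square(y_true - y_pred)) / batch_size

# model.compile(optimizer="adam", loss=my_loss)       # `model` is an existing Keras model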

How to batch CsvDataset correctly in Tensorflow 2.0?

I'm using tf.data.experimental.make_csv_dataset to create a dataset from a .csv file. I'm also using tf.keras.layers.DenseFeatures as an input layer of my model.
I'm struggling to create a DenseFeatures layer properly so that it is compatible with my dataset when the batch_size parameter of make_csv_dataset is not equal to 1 (with batch_size=1 my setup works as expected).
I create DenseFeatures layer using a list of tf.feature_column.numeric_column elements with shape=(my_batch_size,), but it seems like in this case for some reason the input layer expects [my_batch_size,my_batch_size] shape instead of [my_batch_size,1].
With my_batch_size=19 I'm getting the following error when trying to fit the model:
ValueError: Cannot reshape a tensor with 19 elements to shape [19,19] (361 elements) for 'MyModel/Input/MyColumn1/Reshape' (op: 'Reshape') with input shapes: [19,1], [2] and with input tensors computed as partial shapes: input[1] = [19,19].
If I don't specify shape when creating numeric_column it doesn't work either. I'm getting the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: The second input must be a scalar, but it has shape [19]
which suggests that numeric_column expects a scalar but receives the whole batch in one Tensor.
How do I create an input layer of DenseFeatures so that it accepts the dataset produced by make_csv_dataset(batch_size=my_batch_size)?
From the tf.feature_column.numeric_column documentation:
shape: An iterable of integers specifies the shape of the Tensor. An integer can be given which means a single dimension Tensor with given width. The Tensor representing the column will have the shape of [batch_size] + shape.
This means that you must not pass the batch size to the shape argument: shape=().
Currently, with a batch size of 1, you are passing shape=(1,), which TF can still handle because dimensions of size 1 are easily added or broadcast away when necessary; that's why that case works.
Hope this can help. Provide more code if you want more help.
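A minimal sketch of that setup, following the answer above: the per-example shape stays scalar, so the batch size never appears in the feature columns. The file name, the second column name and label_name are placeholders rather than details from the question:

import tensorflow as tf

my_batch_size = 19
feature_names = ["MyColumn1", "MyColumn2"]   # "MyColumn2", "my_data.csv" and "label" below are assumed
columns = [tf.feature_column.numeric_column(name, shape=()) for name in feature_names]

dataset = tf.data.experimental.make_csv_dataset(
    "my_data.csv",
    batch_size=my_batch_size,
    label_name="label",
    num_epochs=1,
)

model = tf.keras.Sequential([
    tf.keras.layers.DenseFeatures(columns),  # yields a [batch_size, num_features] tensor
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(dataset)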

Why is "step" argument necessary when predicting using data tensors? what does this error mean?

I am trying to predict() the output for a single data point d, using my trained Keras model loaded from a file. But I get a ValueError: If predicting from data tensors, you should specify the 'step' argument. What does that mean?
I tried setting step=1, but then I get a different error ValueError: Cannot feed value of shape () for Tensor u'input_1:0', which has shape '(?, 600)'.
Here is my code:
d = np.concatenate((hidden[p[i]], hidden[x[i]])).resize((1,600))
hidden[p[i]] = autoencoder.predict(d,steps=)
The model is expecting (?,600) as input. I have concatenated two numpy arrays of shape (300,) each to get (600,), which is resized to (1,600). This (1,600) is my input to predict().
In my case, the input to predict was None (because I had a bug in another part of the code).
In the official doc, steps refers to the total number of steps (batches) before declaring the prediction round finished. So steps=1 means making predictions on one batch, not making a prediction on one record (a single data point).
https://keras.io/models/sequential/
Define the value of the steps argument:
d = np.concatenate((hidden[p[i]], hidden[x[i]])).resize((1,600))
hidden[p[i]] = autoencoder.predict(d,steps=1)
If you are using a test data generator, it is good practice to define the steps, as mentioned in the documentation.
If you are predicting a single instance, there is no need to define steps. Just make sure the argument (i.e. the instance d) is not None, otherwise that error will show up. Some reshaping may also be necessary.
In my case I got the same error; I just reshaped the data to predict with numpy's reshape() function to the shape of the data originally used to train the model.
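A short sketch of that fix, reusing hidden, p, x, i and autoencoder from the question; note that ndarray.resize() works in place and returns None (one way the input to predict can silently become None), whereas reshape() returns a new (1, 600) array:

import numpy as np

d = np.concatenate((hidden[p[i]], hidden[x[i]])).reshape((1, 600))   # reshape, not resize: returns the (1, 600) array
hidden[p[i]] = autoencoder.predict(d, steps=1)                       # d is now a real array matching the (?, 600) input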

Applying tf.abs() on multidimensional tensor (tensorflow)

This should be super simple but I cannot find the right way to do it:
I have a tensor with shape (1, 1, 1024, 66) and I want to apply the operation tf.abs() on all values.
Doing tf.abs(tensor) gives the error:
TypeError: List of Tensors when single Tensor expected
tf.abs() does not have a parameter to specify the dimension.
How can I do this?
The problem was actually a different one: my tensor was not of shape (1,1,1024,66); it was a tensor of shape (1,1024,66) sitting inside a Python list of length 1.
Doing tf.abs(tensor[0]) did the job.
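A tiny sketch of that situation, with a randomly generated tensor standing in for the real one:

import tensorflow as tf

wrapped = [tf.random.normal((1, 1024, 66))]   # a Python list of length 1, not a (1, 1, 1024, 66) tensor
abs_values = tf.abs(wrapped[0])               # index into the list first, then take the element-wise absolute value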