tensorflow: deal with variable length tensors

I'm in a situation where I need to transform things like

[[ 1,  2,  3, -1],
 [ 1,  2, -1, -1],
 [ 1, -1, -1, -1]]

into

[[1, 2, 3],
 [1, 2],
 [1]]

where the -1 values are padding that I need to remove. I have tried map_fn, but it requires each unpacked tensor to have the same shape after processing. I also tried a TF while loop, but the loop variables have to be tensors, and a list of variable-length tensors cannot be passed in. How can I achieve this simple function?
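
Ragged tensors are TensorFlow's native representation for rows of varying length. A minimal sketch, assuming TF 2.x where tf.RaggedTensor is available (the padding value -1 is taken from the question):

import tensorflow as tf

padded = tf.constant([[1, 2, 3, -1],
                      [1, 2, -1, -1],
                      [1, -1, -1, -1]])

# Strip the trailing padding value from each row.
ragged = tf.RaggedTensor.from_tensor(padded, padding=-1)
print(ragged)  # <tf.RaggedTensor [[1, 2, 3], [1, 2], [1]]>

# Equivalent here via an explicit boolean mask:
also_ragged = tf.ragged.boolean_mask(padded, padded != -1)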

Related

Keras custom loss with dynamic variable for slicing

First, I would like to say that I have only a little experience with Keras/TensorFlow and probably lack some understanding of tensor manipulation.
I am using a model whose input is an "oversized" matrix (NxN). That is, I feed it data that can be smaller (i.e. KxK, K <= N), where the "missing" data (needed to fit the NxN shape) is filled with zeros. The output is an encoded version (Nx2) of the input.
I'm using a custom loss function that I would like to be computed only on the first Kx2 values of the model's output. To do so, I think the solution is to slice the y_pred tensor in my loss function, since I don't want to simply mask it with a boolean tensor. However, I can't figure out how to pass K as a dynamic argument to my custom loss.
Wrapping the loss function within another function that takes an argument does not fit my needs, since the value of K changes with each data sample.
Passing K in the model's input and getting it back through a function wrapper (e.g. https://stackoverflow.com/a/55445837/6315123), as mentioned in the first point, does not work either, since slices cannot be computed from a Tensor (as far as I understand), and evaluating the tensor within the loss function doesn't seem possible.
How can I pass such an argument to my loss function?
Thanks!
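
Although the question prefers slicing over masking, per-sample slicing with a K that varies across the batch generally reduces to masking anyway. A minimal sketch, under the assumption (not from the question) that K is packed into y_true as an extra last column, and with flat [batch, N] targets instead of the question's [N, 2] layout for brevity:

import tensorflow as tf

def masked_loss(y_true, y_pred):
    # Hypothetical layout: the last column of y_true carries K for each
    # sample; the remaining columns are the real targets.
    k = tf.cast(y_true[:, -1], tf.int32)
    targets = y_true[:, :-1]
    n = tf.shape(targets)[1]
    # 1.0 for the first K positions of each row, 0.0 elsewhere.
    mask = tf.sequence_mask(k, maxlen=n, dtype=y_pred.dtype)
    squared_error = tf.square(targets - y_pred) * mask
    # Average only over the positions that belong to the sample.
    return tf.reduce_sum(squared_error) / tf.reduce_sum(mask)

The loss is then used as usual, e.g. model.compile(optimizer='adam', loss=masked_loss).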

Tflite: Resize output tensor based on input tensor contents

I am writing a custom op that outputs a tensor whose shape depends on the values of the input tensor. The problem is that we don't have access to the tensor values in the Prepare method. We can get the tensor shapes but the values are not available. How do I implement this?
On a related note, how do I support outputting a tensor with a partially specified shape? The tensor would need to be allocated during the Eval function, but I don't see an API to allocate tensors at run time.

tf.split with a tensor of size not divisible by the number of output tensors

I'm trying to parallelize my TensorFlow code. At one point I need to run this to split the tensors:

for variable_name, variable in kwargs.items():
    input_variables_split[variable_name] = tf.split(variable, number_of_devices)

The problem with this code is that tf.split expects the 0th axis of every variable to be divisible by number_of_devices, which is sometimes not true in my case. What would be the best way to solve this?
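
tf.split also accepts an explicit list of chunk sizes instead of a count, so one option (a sketch, assuming the 0th dimension is statically known) is to compute near-equal sizes that sum to the total length:

import tensorflow as tf

def uneven_split(variable, number_of_devices):
    # Distribute the rows as evenly as possible: the first `remainder`
    # chunks each receive one extra row.
    total = int(variable.shape[0])
    base, remainder = divmod(total, number_of_devices)
    sizes = [base + 1 if i < remainder else base
             for i in range(number_of_devices)]
    return tf.split(variable, sizes, axis=0)

# e.g. 10 rows over 3 devices -> chunks with 4, 3, and 3 rows
chunks = uneven_split(tf.zeros([10, 8]), 3)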

Feeding the input with Tensors instead of numpy arrays in TensorFlow

If the input data is in numpy array format, then we can declare a placeholder in the graph and feed the placeholder with the numpy data. However, if the input data is already in Tensor format (which is the case when we load JPEG files using tf.image.decode_jpeg), we can't feed a Tensor to a placeholder. In this case, should we use non-trainable TF Variables as placeholders and feed the Tensor into these Variables with tf.assign?
Figured it out. You can simply feed batches of Tensors to the model by building the graph directly on the input tensors instead of on placeholders. The model probably has a line that looks similar to op = optimizer.minimize(loss). Then, each time sess.run(op) is called, the model is trained on the batch provided to it, and if the batch comes from tf.train.batch, each sess.run(op) call pulls a different batch.
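
A minimal sketch of that idea using the TF 1.x queue-based pipeline the answer refers to (the file names, model, and loss below are hypothetical stand-ins):

import tensorflow as tf

# Input pipeline that yields tensors, not numpy arrays.
filename_queue = tf.train.string_input_producer(['img0.jpg', 'img1.jpg'])
_, contents = tf.WholeFileReader().read(filename_queue)
image = tf.image.resize_images(
    tf.image.decode_jpeg(contents, channels=3), [64, 64])
batch = tf.train.batch([image], batch_size=32)

# Build the model directly on the batch tensor; no placeholder involved.
logits = tf.layers.dense(tf.layers.flatten(batch), 10)  # toy model
loss = tf.reduce_mean(tf.square(logits))                # toy loss
op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for _ in range(100):
        sess.run(op)  # each call trains on a freshly dequeued batch
    coord.request_stop()
    coord.join(threads)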

tensorflow dynamically create placeholders

At each iteration I want to dynamically choose how many placeholders I need and then feed data to them. Is that possible, and how? I tried to create the whole model (placeholders, loss, optimizer) inside the epoch loop, but that gave an uninitialized variables error.
At present I have n=5 placeholders, each of shape=(1, k), in a list, and I feed data to them. But n needs to be defined dynamically during data feeding inside the epoch loop.
Maybe you have misunderstood what a tensor is.
If you think of a tensor as a multi-dimensional list, you can see that a dynamic number of placeholders, each of shape [1, k], makes no sense.
Instead, you should use a single tensor.
Define your input placeholder as a tensor with shape [None, 1, k]:

placeholder_ = tf.placeholder(tf.float32, [None, 1, k])

This statement defines a placeholder of type tf.float32 holding an undefined number of elements (the None part), each with shape [1, k].
On every iteration you feed the placeholder with the right values, e.g. by running

result = sess.run(defined_op, feed_dict={
    placeholder_: numpy_ndarray_with_N_elements_with_shape_1_k
})

That way you don't need to define new variables in the computational graph (which simply doesn't work); you just feed it the desired values.
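
Putting the answer together, a minimal runnable sketch (TF 1.x API as in the answer; defined_op here is a hypothetical reduction over the batch axis):

import numpy as np
import tensorflow as tf

k = 4
placeholder_ = tf.placeholder(tf.float32, [None, 1, k])
defined_op = tf.reduce_sum(placeholder_, axis=0)  # hypothetical op

with tf.Session() as sess:
    # n varies freely between iterations; no new placeholders are needed.
    for n in [5, 3, 7]:
        batch = np.random.rand(n, 1, k).astype(np.float32)
        result = sess.run(defined_op, feed_dict={placeholder_: batch})
        print(result.shape)  # (1, k) regardless of n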