Concat ragged arrays in Keras - tensorflow

I have several RaggedTensors that I want to concatenate; I am using Keras. Vanilla TensorFlow will happily concatenate them, so I tried the code:
card_feature = layers.concatenate([ragged1, ragged2, ragged3])
but it gave the error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/timeroot/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 925, in __call__
    return self._functional_construction_call(inputs, args, kwargs,
  File "/home/timeroot/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1084, in _functional_construction_call
    base_layer_utils.create_keras_history(inputs)
  File "/home/timeroot/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 191, in create_keras_history
    _, created_layers = _create_keras_history_helper(tensors, set(), [])
  File "/home/timeroot/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 222, in _create_keras_history_helper
    raise ValueError('Tensorflow ops that generate ragged or sparse tensor '
ValueError: Tensorflow ops that generate ragged or sparse tensor outputs are currently not supported by Keras automatic op wrapping. Please wrap these ops in a Lambda layer:
```
weights_mult = lambda x: tf.sparse.sparse_dense_matmul(x, weights)
output = tf.keras.layers.Lambda(weights_mult)(input)
```
so then I tried:
concat_lambda = lambda xs: tf.concat(xs, axis=2)
card_feature = layers.Lambda(concat_lambda)([ragged1, ragged2, ragged3])
but it gave the exact same error, even though I had wrapped it. Is this a bug / is there a workaround?

Code to concatenate 3 Ragged Tensors is shown below:
import tensorflow as tf
print(tf.__version__)
Ragged_Tensor1 = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
Ragged_Tensor2 = tf.ragged.constant([[5, 3]])
Ragged_Tensor3 = tf.ragged.constant([[6,7,8], [9,10]])
print(tf.concat([Ragged_Tensor1, Ragged_Tensor2, Ragged_Tensor3], axis=0))
Output is shown below:
2.3.0
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], [], [5, 3], [6, 7, 8], [9, 10]]>
But it looks like you are trying to concatenate the outputs of ragged tensor ops inside a Keras model. Please share your complete code so that we can try to help you.
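In the meantime, one workaround that may help (a sketch, not a confirmed fix): if the ragged tensors are built from Keras Input layers with ragged=True, they carry Keras history, and wrapping tf.concat in a Lambda should then work. The input shapes and the concat axis below are assumptions, since the original tensors are not shown:
```
import tensorflow as tf

# Hypothetical ragged inputs; ragged=True makes Keras build RaggedTensor
# placeholders, so downstream ops are tracked properly.
ragged1 = tf.keras.Input(shape=[None], ragged=True)
ragged2 = tf.keras.Input(shape=[None], ragged=True)
ragged3 = tf.keras.Input(shape=[None], ragged=True)

# Wrap tf.concat in a Lambda so Keras does not attempt automatic op wrapping.
card_feature = tf.keras.layers.Lambda(
    lambda xs: tf.concat(xs, axis=1))([ragged1, ragged2, ragged3])

model = tf.keras.Model(inputs=[ragged1, ragged2, ragged3], outputs=card_feature)
```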

Related

Doing .to() on a tensorflow EagerTensor

I take in a batch of images and split each image into patches using tf.image.extract_patches. I then want to pass those patches through a model to get embeddings/features. The problem is that I get this error:
File "/home/fingerprint_firstpart.py", line 152, in <module>
main(args)
File "/home/fingerprint_firstpart.py", line 117, in main
patches_embs=get_patches_and_embs(image,fprinter)
File "/home/fingerprint_firstpart.py", line 55, in get_patches_and_embs
feat = fprinter(patches)
File "/home/fingerprinter.py", line 165, in __call__
x = x.to(self.device)
File "/home/kar/anaconda3/envs/styleanalysis/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 401, in __getattr__
self.__getattribute__(name)
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to'
Here's the code in question:
def get_patches_and_embs(images, fprinter):
    patches = tf.image.extract_patches(
        images=images,
        sizes=[1, 384, 384, 1],
        strides=[1, 384, 384, 1],
        rates=[1, 1, 1, 1],
        padding="VALID",
    )
    patches = tf.reshape(patches, [-1, 384, 384, 3])
    # At this point this is a tensor of size batch x 3 x 384 x 384, as the model needs as input.
    patches = tf.transpose(patches, perm=[0, 3, 1, 2])
    feat = fprinter(patches)
    return feat
Now, if I take the patches TensorFlow tensor, convert it to numpy, and then to PyTorch, the program works just fine. However, I'd like to avoid that if at all possible, so is there any way to do .to() on a TensorFlow tensor?
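.to() is a PyTorch tensor method, so it will never exist on a TensorFlow EagerTensor. One way to hand a TensorFlow tensor to PyTorch without the numpy round-trip is DLPack, which both libraries support. A minimal sketch, assuming compatible dtype and device; the helper name tf_to_torch is made up for illustration:
```
import tensorflow as tf
import torch
from torch.utils import dlpack as torch_dlpack

def tf_to_torch(t):
    # DLPack shares the underlying buffer, so no copy through numpy is needed.
    return torch_dlpack.from_dlpack(tf.experimental.dlpack.to_dlpack(t))

# Convert before calling the PyTorch model, e.g.:
# feat = fprinter(tf_to_torch(patches))
```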

How to shuffle a ragged tensor in TensorFlow/Keras, using only tensorflow/keras operations

I am trying to shuffle a ragged tensor.
shuffleTest = tf.random.uniform(
    shape=[5, 8], minval=0, maxval=1000, dtype=tf.dtypes.int32, seed=None, name=None)
tf.random.shuffle(shuffleTest)
Works fine
digits = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
tf.random.shuffle(digits)
Results in
ValueError: TypeError: object of type 'RaggedTensor' has no len()
Is there an elegant way to shuffle a ragged tensor using only tensorflow/keras operations? I ask because I am creating a pipeline that will run on TPUs.
So far I came up with this workaround
digits = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
print(digits)
a = tf.random.shuffle(tf.range(digits.shape[0]))
b = tf.reshape(a, (digits.shape[0], 1))
shuffledDigits = tf.gather_nd(digits, b)
print(shuffledDigits)
Is there a more elegant way?
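A slightly shorter variant (a sketch; not verified on TPU): tf.gather accepts a ragged params tensor directly, so the reshape and gather_nd can collapse into one call:
```
import tensorflow as tf

digits = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
# Shuffle the row indices, then gather the rows in that order.
shuffled = tf.gather(digits, tf.random.shuffle(tf.range(digits.nrows())))
print(shuffled)
```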

Usage of tf.space_to_batch_nd() and tf.batch_to_space_nd()

I am implementing OCR with an RNN on top of CNN features, and I cannot understand the mechanics of tf.space_to_batch_nd() and tf.batch_to_space_nd().
I need to apply a fully connected layer to each time slice (1st dim) of the input layer to reduce the tensor rank from 4 to 3.
I have a working implementation of this via tf.reshape().
The input tensor shape is [1, 84, 7, 128]; the output should be [1, 84, 128].
My current implementation is:
with tf.variable_scope('dim_redux') as scope:
    conv_out_shape = tf.shape(net)
    print("Conv out:", str(net))
    w_fc1 = weight_variable([7 * 128, 128])
    b_fc1 = bias_variable([128])
    conv_layer_flat = tf.reshape(net, [-1, 7 * 128])
    features = tf.matmul(conv_layer_flat, w_fc1) + b_fc1
    features = lrelu(features)
    features = tf.reshape(features, [batch_size, int(84), CONV_FC_OUTPUT])
And with tf.space_to_batch_nd() and tf.batch_to_space_nd():
with tf.variable_scope('dim_redux') as scope:
    net = tf.space_to_batch_nd(net, block_shape=[84, 1], paddings=[[0, 0], [0, 0]])
    print(net)
    net = tf.contrib.layers.flatten(net)
    net = tf.contrib.layers.fully_connected(net, CONV_FC_OUTPUT, biases_initializer=tf.zeros_initializer())
    net = lrelu(net)
    print(net)
    net = tf.batch_to_space_nd(net, block_shape=[84, 128], crops=[[0, 0], [0, 0]])
    features = net
It seems that the block shape should be [1, 7], but only with the values [84, 1] does tf.space_to_batch_nd() return a tensor with the correct shape [84, 1, 7, 128].
With the current params I get the following error:
File "/Users/akislinskiy/tag_price_ocr/ocr.py", line 336, in convolutional_layers
net = tf.batch_to_space_nd(net, block_shape=[84, 128], crops=[[0, 0], [0, 0]])
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 318, in batch_to_space_nd
name=name)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2632, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1911, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1861, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 595, in call_cpp_shape_fn
require_shape_fn)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 659, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Shape must be at least rank 3 but is rank 2 for 'convolutions/dim_redux/BatchToSpaceND' (op: 'BatchToSpaceND') with input shapes: [84,128], [2], [2,2].
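To make the shape arithmetic concrete, here is a minimal sketch of what space_to_batch_nd does to the shapes in this question (the numbers are taken from the question itself):
```
import tensorflow as tf

x = tf.zeros([1, 84, 7, 128])
# space_to_batch_nd divides each spatial dim by block_shape and multiplies the
# batch dim by prod(block_shape): [1, 84, 7, 128] -> [1*84*1, 84/84, 7/1, 128].
y = tf.space_to_batch_nd(x, block_shape=[84, 1], paddings=[[0, 0], [0, 0]])
print(y.shape)  # (84, 1, 7, 128)
```
Note that batch_to_space_nd with a length-2 block_shape needs an input of at least rank 3, while the flattened tensor in the question is only rank 2 ([84, 128]), which is exactly what the ValueError above is complaining about.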

What kind of tensorflow placeholder shape to use to contain such sequence?

This is the sequence I intend to input:
sequence = [[[113, 162, 159], [3, 163, 417], [393, 77, 333], [420, 214, 382], [308, 441, 175], [152, 80, 477], [184, 101, 54], [417, 277, 487], [494, 329, 315], [413, 386, 319]],
            [425, 132, 407],
            [405]]
However, I am unable to determine what shape of placeholder to use for it.
x = tf.placeholder(tf.float32, shape=[None, None, 3], name='probable_solutions')
sess = tf.Session()
init_op = tf.global_variables_initializer()
sess.run(init_op)
sess.run(x, feed_dict={x: [sequence[0], sequence[1], sequence[2]]})
This gives me the following error:
ValueError: setting an array element with a sequence.
Here's the full code:
https://pastebin.com/cq44wcir
(I've also marked a few questions in the pastebin code - you can find them by searching for '#~~#', no quotes, in the text)
First of all, before being passed to feed_dict, sequence should be a numpy array.
Of course you can convert it to a numpy array easily, but that's not the solution.
The problem is that you are trying to create an array from a list which isn't shaped like a multi-dimensional array.
Any list that isn't a "generalized" (rectangular) array cannot be used in feed_dict.
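A minimal sketch of the usual fix, under the assumption that zero-padding is acceptable and that every entry is meant to be a row of [x, y, z] triples. How sequence[1] and sequence[2] should map onto that layout is a guess, since their nesting in the question is ambiguous; pad_rows is a made-up helper name:
```
import numpy as np

def pad_rows(rows, width=3, pad_value=0.0):
    # Zero-pad a ragged list of [n_i, width] rows into a dense
    # [len(rows), max_n, width] array that fits the [None, None, 3] placeholder.
    max_n = max(len(r) for r in rows)
    out = np.full([len(rows), max_n, width], pad_value, dtype=np.float32)
    for i, r in enumerate(rows):
        out[i, :len(r), :] = r
    return out

# Assumption: sequence[1] is one triple, sequence[2] a padded partial triple.
dense = pad_rows([sequence[0], [sequence[1]], [[sequence[2][0], 0, 0]]])
sess.run(x, feed_dict={x: dense})
```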

How can I use tf.train.batch to pad a variable-length constant?

I am practicing with TensorFlow, and my code is as follows:
import tensorflow as tf

a = tf.constant([[1, 2, 3],
                 [1, 2, 0],
                 [1, 2, 4],
                 [1, 2],
                 [1, 3, 4, 2],
                 [1, 2, 3]])
b = tf.reshape(tf.range(12), [6, 2])
num_epochs = 3
batch_size = 2
num_batches = 3
# dequeue ops
a_batched, b_batched = tf.train.slice_input_producer([a, b], num_epochs=num_epochs, capacity=48, shuffle=False)
aa, bb = tf.train.batch([a_batched, b_batched], batch_size=batch_size, dynamic_pad=True)
aa3 = tf.reduce_mean(aa)
bb3 = tf.reduce_mean(bb)
loss = tf.squared_difference(aa3, bb3)
sess = tf.Session()
sess.run([tf.global_variables_initializer(),
          tf.local_variables_initializer()])
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess)
for i in range(num_batches * num_epochs):
    print sess.run(loss)
    print '=' * 30
coord.request_stop()
coord.join(threads)
Since the rows of a have variable length, the code runs into this error:
Traceback (most recent call last):
  File "small_input_with_no_padding.py", line 16, in <module>
    [1,2,3]])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 99, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 376, in make_tensor_proto
    _GetDenseDimensions(values)))
ValueError: Argument must be a dense tensor: [[1, 2, 3], [1, 2, 0], [1, 2, 4], [1, 2], [1, 3, 4, 2], [1, 2, 3]] - got shape [6], but wanted [6, 3].
I want to test how tf.train.batch can pad variable-length input. How can I fix this error? Thank you!
You can't create a constant tensor from a variable-length list, because such a list cannot be converted to a dense tensor:
a = tf.constant([[1,2,3], [1,2,0], [1,2,4], [1,2], [1,3,4,2], [1,2,3]])
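A sketch of the usual fix, assuming zero-padding is acceptable: pad the rows to a common length in plain Python first, so tf.constant receives a dense [6, 4] value; tf.train.batch's dynamic_pad then only has to pad across batch elements.
```
import tensorflow as tf

rows = [[1, 2, 3], [1, 2, 0], [1, 2, 4], [1, 2], [1, 3, 4, 2], [1, 2, 3]]
max_len = max(len(r) for r in rows)
# Right-pad every row with zeros so the list becomes a dense [6, 4] matrix.
a = tf.constant([r + [0] * (max_len - len(r)) for r in rows])
```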