List of Tensors when single Tensor expected - tensorflow

I use tf.concat to build the input tensor for a CNN, but I get the error: List of Tensors when single Tensor expected
image_raw = img.tobytes()
image = tf.decode_raw(image_raw, tf.uint8)
image = tf.reshape(image, [1, image_height, image_width, 3])
image_val = image
for i in range(batch_size - 1):
    image_val = tf.concat(0, [image_val, image])
return image_val
I have searched for answers to this question and added
image_val = tf.stack([image_val], 0) before the return, but I still get the same error. Why?
**Build environment:**
TensorFlow 0.12
Python 3.5

The error List of Tensors when single Tensor expected comes from the fact that you wrote tf.concat(0, [image_val, image]) instead of tf.concat([image_val, image], 0).
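For reference, here is a minimal sketch of the corrected loop with the values-first argument order. It assumes TensorFlow 1.0 or later, where the signature is tf.concat(values, axis), and the build_batch wrapper is only there to make the snippet self-contained:

import tensorflow as tf

def build_batch(img, image_height, image_width, batch_size):
    # Decode the raw bytes and reshape to a single [1, H, W, 3] image.
    image_raw = img.tobytes()
    image = tf.decode_raw(image_raw, tf.uint8)
    image = tf.reshape(image, [1, image_height, image_width, 3])

    # Repeatedly concatenate along axis 0: values first, then the axis.
    image_val = image
    for _ in range(batch_size - 1):
        image_val = tf.concat([image_val, image], 0)
    return image_val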

Also check the types of image_height and image_width, because sometimes they need to be cast to an integer dtype, e.g. tf.cast(image_height, tf.int32).

Related

Feeding tf.data Dataset with multidimensional output to Keras model

I want to feed a tf.data Dataset to a Keras model, but I get the following error:
AttributeError: 'DatasetV1Adapter' object has no attribute 'ndim'
This dataset will be used to solve a segmentation problem, so both the input and the output will be images (3D tensors).
The dataset is created with this code:
dataset = tf.data.Dataset.list_files(TRAIN_PATH + "*.png",shuffle=False)
def process_path(file_path):
    img = tf.io.read_file(file_path)
    img = tf.image.decode_png(img, channels=3)
    train_image_path = tf.strings.regex_replace(file_path, "image", "mask")
    mask = tf.io.read_file(train_image_path)
    mask = tf.image.decode_png(mask, channels=1)
    mask = tf.squeeze(mask)
    mask = tf.one_hot(tf.cast(mask, tf.int32), Num_Classes, axis=-1)
    return img, mask
dataset = dataset.map(process_path)
dataset = dataset.batch(32,drop_remainder=True)
Taking an item from the dataset shows that I get a tuple containing an input tensor and an output tensor, whose dimensions are correct:
Input: (batch-size, image height, image width, 3 channels)
Output: (batch-size, image height, image width, 4 channels)
When fitting the model I get an error:
model.fit(dataset, epochs = 50)
I've solved the problem by moving to Keras 2.4.3 and TensorFlow 2.2.
Everything else was right, but apparently the previous release of Keras did not handle this kind of tf.data dataset correctly.
Here's a tutorial I've found very useful on this.
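For reference, a minimal sketch of what the fit call looks like once the versions line up. The model here is only a placeholder (the real segmentation network isn't shown in the question), and it assumes Num_Classes is 4, matching the output shape above:

import tensorflow as tf

# Placeholder model, just to illustrate fitting on the dataset directly.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, 1, activation="softmax",
                           input_shape=(None, None, 3)),  # 4 = Num_Classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# With TF 2.2 / Keras 2.4.3, a batched tf.data.Dataset of (image, mask) pairs
# can be passed to fit directly; no separate x and y arguments are needed.
model.fit(dataset, epochs=50)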

Transform 3D Tensor to 4D

I am using the VGG16 model, which expects a 4D tensor as input. When I call model.fit(xtrain, ytrain, ...) my xtrain is a list of 3D tensors of shape [size, size, features], so in this case [224, 224, 3].
What I want is a 4D tensor of shape [len(images), size, size, features].
How could I modify my code to get there?
I tried tf.expand_dims and tf.concat but it didn't work.
# Transforming my image to a 3D Tensor
image = tf.io.read_file(image)
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
image = image / 255.0
Error msg after model.fit:
Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (224, 224, 3)
It looks like you are reading in only a single image and passing that. If that's the case, you can add a dimension of 1 to the first axis of the image. There are lots of ways to do that.
Using reshape:
image = image.reshape(1, 224, 224, 3)
Using some fancy numpy slicing notation to add an axis (personal favorite):
image = image[None, ...]
Using numpy.expand_dims() as explained in Abhijit's answer.
I imagine you want to be reading a bunch of images in though. Possibly an issue with your input process? Can you wrap your read in a loop and read multiple files? Something like:
images = []
for file in image_files:
    image = tf.io.read_file(file)
    # ...
    images.append(image)
images = np.asarray(images)
numpy.expand_dims(image, axis=0)
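If the preprocessing stays in TensorFlow, one way to get the 4D [len(images), size, size, features] tensor the question asks for is to stack the per-image tensors. A minimal sketch, assuming image_files and IMG_SIZE are defined as in the question and the answer above:

import tensorflow as tf

def load_image(path):
    # Same per-image preprocessing as in the question.
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
    return image / 255.0

# Stack the list of [224, 224, 3] tensors into one [len(images), 224, 224, 3] batch.
xtrain = tf.stack([load_image(p) for p in image_files], axis=0)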

split tensors based on size not known at graph time

In TensorFlow I want to do the following:
receive N 1D tensors
concat them as a big 1D tensor of shape [m]
call a function that process this tensor and generates a tensor of shape [m]
split the resulting tensor in N 1D tensors
However at graph creation time, I don't know the size of each of the 1D tensors, which creates issues. Here's a snippet of what I'm doing:
def stack(tensors):
    sizes = tf.convert_to_tensor([t.shape[0].value for t in tensors])
    tensor_stacked = tf.concat(tensors, axis=0)
    res = my_function(tensor_stacked)
    return tf.split(res, sizes, 0)

tensor_A = tf.placeholder(
    tf.int32,
    shape=[None],
    name=None
)
tensor_B = tf.placeholder(
    tf.int32,
    shape=[None],
    name=None
)
res = stack([tensor_A, tensor_B])
This will fail on the "concat" line with the message
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, None]. Consider casting elements to a supported type.
Is there any way I can do this in TensorFlow? At graph time the "sizes" variable will always contain unknown sizes, because the length of the 1D tensors is never known.
OK, in the meantime I found the answer.
Apparently it's enough to replace the call to tensor.shape[0] with tf.shape(tensor)[0].
So now I have:
def stack(tensors):
    sizes = tf.convert_to_tensor([tf.shape(t)[0] for t in tensors])
    print(sizes)
    tensor_stacked = tf.concat(tensors, axis=0)
    res = my_function(tensor_stacked)
    return tf.split(res, sizes, 0)
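A minimal usage sketch of the dynamic-shape version, under TF 1.x session semantics; my_function is stood in for by a simple doubling op, since its body isn't shown in the question:

import tensorflow as tf

def my_function(x):
    # Stand-in for the real processing; it only needs to preserve the shape [m].
    return x * 2

def stack(tensors):
    # tf.shape(t)[0] is a runtime value, unlike t.shape[0], which is None here.
    sizes = tf.convert_to_tensor([tf.shape(t)[0] for t in tensors])
    tensor_stacked = tf.concat(tensors, axis=0)
    res = my_function(tensor_stacked)
    return tf.split(res, sizes, 0)

tensor_A = tf.placeholder(tf.int32, shape=[None])
tensor_B = tf.placeholder(tf.int32, shape=[None])
split_A, split_B = stack([tensor_A, tensor_B])

with tf.Session() as sess:
    out_A, out_B = sess.run([split_A, split_B],
                            feed_dict={tensor_A: [1, 2, 3], tensor_B: [4, 5]})
    # out_A -> [2, 4, 6], out_B -> [8, 10]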

TensorFlow DecodePng throws Value Error

For decoding a PNG image, we normally use the following segment of code.
image_placeholder = tf.placeholder(tf.string)
image_tensor = tf.read_file(image_placeholder)
image_tensor = tf.image.decode_png(image_tensor, channels=1)
For deploying a model with TensorFlow Serving, I followed the Inception_saved_model example for my own version of the model. Below is the code used in that program to read the incoming TensorProto.
image_placeholder = tf.placeholder(tf.string, name='images')
feature_configs = {'images': tf.FixedLenFeature(shape=[], dtype=tf.string), }
tf_example = tf.parse_example(image_placeholder, feature_configs)
image_tensor = tf_example['images']
image_tensor = tf.image.decode_png(image_tensor, channels=1)
When I use this code, decode_png throws a ValueError:
ValueError: Shape must be rank 0 but is rank 1 for 'DecodePng' (op: 'DecodePng') with input shapes: [?].
Can someone help me on where I am going wrong? The code I presented here is similar to the one given in the Inception example.
tf.parse_example operates on a batch ("rank 1"), and decode_png expects a single image (a scalar string, "rank 0"). I'd either use tf.parse_single_example or add a reshape to scalar (shape=[]) before using decode_png.
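A minimal sketch of the two options; the feature key and shapes follow the question, the placeholder names are just for illustration, and which option fits depends on whether the serving signature really needs a batched string input:

import tensorflow as tf

feature_configs = {'images': tf.FixedLenFeature(shape=[], dtype=tf.string)}

# Option 1: parse a single serialized Example, so the string stays rank 0.
serialized = tf.placeholder(tf.string, name='serialized_example')
tf_example = tf.parse_single_example(serialized, feature_configs)
image_tensor = tf.image.decode_png(tf_example['images'], channels=1)

# Option 2: keep tf.parse_example (rank 1) and reshape to a scalar before
# decoding; this is only valid when the batch contains exactly one element.
batch = tf.placeholder(tf.string, shape=[None], name='serialized_batch')
parsed = tf.parse_example(batch, feature_configs)
single = tf.reshape(parsed['images'], [])
image_tensor_2 = tf.image.decode_png(single, channels=1)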

How to apply tf.map_fn on a sequence feature? Getting an error: TensorArray dtype is string but Op is trying to write dtype uint8

I am writing a sequence-to-sequence model that maps video to text. I have the frames of the video encoded as JPEG strings in a sequence feature of the SequenceExample proto. When building my input pipeline, I am doing the following to get an array of decoded JPEGs:
encoded_video, caption = parse_sequence_example(
    serialized_sequence_example,
    video_feature="video/frames",
    caption_feature="video/caption_ids")
decoded_video = tf.map_fn(lambda x: tf.image.decode_jpeg(x, channels=3), encoded_video)
However, I am getting the following error:
InvalidArgumentError (see above for traceback): TensorArray dtype is string but Op is trying to write dtype uint8.
My goal is to apply image = tf.image.convert_image_dtype(image, dtype=tf.float32) after decoding, to convert the uint8 pixel values in [0, 255] to floats in [0, 1].
I tried the following:
decoded_video = tf.map_fn(lambda x: tf.image.decode_jpeg(x, channels=3), encoded_video, dtype=tf.uint8)
converted_video = tf.map_fn(lambda x: tf.image.convert_image_dtype(x, dtype=tf.float32), decoded_video)
However, I still get the same error. Does anybody have any idea what might be going wrong? Thanks in advance.
Never mind. I just had to explicitly add dtype=tf.float32 in the following line:
converted_video = tf.map_fn(lambda x: tf.image.convert_image_dtype(x, dtype=tf.float32), decoded_video, dtype=tf.float32)
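For what it's worth, the decode and the dtype conversion can also be fused into a single map_fn, as long as the output dtype is declared; a sketch assuming encoded_video is the 1-D string tensor from above:

# Decode and convert in one pass; the mapped function outputs float32,
# so that is the dtype that has to be declared on map_fn.
converted_video = tf.map_fn(
    lambda x: tf.image.convert_image_dtype(
        tf.image.decode_jpeg(x, channels=3), dtype=tf.float32),
    encoded_video,
    dtype=tf.float32)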
