Incompatible shape between input image and expected model input - tensorflow

I am trying to train an FCN model for segmentation. I have an array of training images of size (256, 256, 3) and another array of ground-truth masks, also of size (256, 256, 3). My model summary is below:
[screenshot: model structure]
I am not sure how best to train the model, so I am using model.fit():
[screenshot: model compile and fit]
But I am getting an incompatible input shape error:
[screenshot: error]
Can anyone please help me rectify the code and train the model? Thank you.
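For reference, a minimal runnable sketch of what compile/fit looks like with the shapes described above. The two-layer FCN here is hypothetical, standing in for the model in the screenshot; a common cause of shape errors like this is a missing batch dimension or a model output shape that does not match the masks.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical stand-in for the FCN in the screenshot; the key point is
# that the per-pixel output shape (256, 256, 3) matches the mask array.
inputs = tf.keras.Input(shape=(256, 256, 3))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
outputs = layers.Conv2D(3, 1, padding="same", activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Dummy arrays with the shapes from the question, plus a leading batch
# dimension: fit() expects (num_samples, 256, 256, 3), not (256, 256, 3).
train_images = np.random.rand(8, 256, 256, 3).astype("float32")
train_masks = np.random.rand(8, 256, 256, 3).astype("float32")

model.fit(train_images, train_masks, epochs=1, batch_size=4)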

Related

Train Custom YOLOv5s Detector

The model should predict some sort of result.
I tried to view the predictions of YOLOv5, but at this stage it is not working.

Can't save model in saved_model format when fine-tuning a BERT model

When training the BERT model, the weights save fine, but the entire model does not.
After model.fit, saving with model.save_weights('bert_xxx.h5') and restoring with load_weights works, but since only the weights are saved, the model architecture has to be rebuilt separately before loading them.
So I want to save the entire model at once.
However, the following error occurs.
The TensorFlow version is 2.4, and the BERT code is from https://qiita.com/namakemono/items/4c779c9898028fc36ff3
Why are only the weights saved and not the entire model? And how can I save the whole model?
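For reference, a hedged sketch of one common cause in TF 2.4: hand-written BERT code like the linked Qiita article defines custom layers, and Keras can only serialize the whole model if each custom layer implements get_config(); otherwise only the weights can be saved. The toy layer below is hypothetical, standing in for the BERT blocks.

import tensorflow as tf

# Toy custom layer standing in for the BERT blocks; without
# get_config(), model.save() may fail while save_weights() still works.
class ScaledDense(tf.keras.layers.Layer):
    def __init__(self, units, scale=1.0, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.scale = scale
        self.dense = tf.keras.layers.Dense(units)

    def call(self, x):
        return self.dense(x) * self.scale

    def get_config(self):
        config = super().get_config()
        config.update({"units": self.units, "scale": self.scale})
        return config

inputs = tf.keras.Input(shape=(16,))
outputs = ScaledDense(4)(inputs)
model = tf.keras.Model(inputs, outputs)

# SavedModel format (a directory, no .h5 extension) saves the whole
# model: architecture, weights, and optimizer state.
model.save("bert_whole_model")
restored = tf.keras.models.load_model(
    "bert_whole_model", custom_objects={"ScaledDense": ScaledDense})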

Weird behaviour with Keras training: 'model' is not defined

I am training a stacked LSTM to classify text into a collection of different labels using TensorFlow Keras, but I am running into an odd issue regarding training. [screenshot: output after trying to train]
I am not sure why the code runs without error, yet the model has not been trained. Furthermore, when I try to evaluate the model with test_loss, test_acc = model.evaluate(val_ds), I get name 'model' is not defined.
Not sure why this is happening. Any suggestions? The notebook is at https://github.com/tkowalski9938/Maureen-Crops.-Message-Predictor/blob/master/model.ipynb. Thanks for the help.
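For context, NameError: name 'model' is not defined usually means the notebook cell that binds the name model was never executed in the current kernel (for example after a restart); evaluate() itself is fine once the name exists. A minimal sketch of the kind of defining cell assumed here; the stacked-LSTM architecture below is hypothetical, not the linked notebook's code.

import tensorflow as tf

# If this defining cell is not run (or the kernel restarted after
# training), a later model.evaluate(val_ds) raises
# NameError: name 'model' is not defined.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),
    tf.keras.layers.LSTM(64, return_sequences=True),  # stacked LSTMs
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])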

How to connect the pretrained model's input to the output of tf.train.shuffle_batch?

In classify_image.py, the model input is fed with a loaded image in:
predictions = sess.run(softmax_tensor,{'DecodeJpeg/contents:0': image_data})
What if I want to add new layers to the Inception model and train the whole model again? Are the variables loaded from classify_image_graph_def.pb trainable? I saw that freeze_graph.py used convert_variables_to_constants to produce the frozen graph, so can those loaded weights be trained again, or are they constants? And how can I connect the input of the Inception model to the output of tf.train.shuffle_batch ('shuffle_batch:0')?
The model used in classify_image.py has its variables frozen into constants, and doesn't have any gradient ops, so it's not easy to turn it back into something trainable. You can see how we remove one layer and replace it with something trainable here:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py
It's hard to generalize though. You'd be better off looking at some examples of fine-tuning here:
https://github.com/tensorflow/models/tree/master/inception#how-to-fine-tune-a-pre-trained-model-on-a-new-task
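For the last part of the question, a minimal TF1-style sketch of splicing your own batch tensor into the imported frozen graph via the input_map argument of tf.import_graph_def. The tensor names 'Mul:0' and 'pool_3:0' are assumptions based on the classify_image inception-v3 graph (decoded-image input and bottleneck, respectively); verify them against your own .pb.

import tensorflow as tf

# Load the frozen graph shipped with classify_image.py.
with tf.gfile.GFile('classify_image_graph_def.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Stand-in for the output of tf.train.shuffle_batch: decoded, resized
# float images. This particular frozen graph assumes batch size 1.
image_batch = tf.placeholder(tf.float32, [1, 299, 299, 3],
                             name='shuffle_batch')

# input_map splices image_batch into the imported graph in place of
# its internal image tensor, bypassing the JPEG-decoding path.
(bottleneck,) = tf.import_graph_def(
    graph_def,
    input_map={'Mul:0': image_batch},
    return_elements=['pool_3:0'])

# New, trainable layers go on top of the bottleneck; the imported
# weights stay fixed because they were frozen into constants.
logits = tf.layers.dense(tf.squeeze(bottleneck, [1, 2]), 10)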

How to retrain inception-v1 model?

I have successfully gone through the official tutorial, which explains how to retrain the inception-v3 model, and later successfully retrained the same model for specific purposes.
The model, however, is complex and slow compared to other, simpler models, such as inception-v1, whose accuracy is good enough for some tasks. Specifically, I would like to retrain the model for use on Android, and ideally its speed should be comparable to the original TensorFlow Android demo. Anyway, I tried to retrain the inception-v1 model from this link with the following modifications in retrain.py:
BOTTLENECK_TENSOR_NAME = 'avgpool0/reshape:0'
BOTTLENECK_TENSOR_SIZE = 2048
MODEL_INPUT_WIDTH = 224
MODEL_INPUT_HEIGHT = 224
MODEL_INPUT_DEPTH = 3
JPEG_DATA_TENSOR_NAME = 'input'
RESIZED_INPUT_TENSOR_NAME = 'input'
As opposed to inception v3, inception v1 does not have any DecodeJpeg or resize nodes:
inception v3 nodes:
DecodeJpeg/contents
DecodeJpeg
Cast
ExpandDims/dim
ExpandDims
ResizeBilinear/size
ResizeBilinear
...
pool_3
pool_3/_reshape/shape
pool_3/_reshape
softmax/weights
softmax/biases
softmax/logits/MatMul
softmax/logits
softmax
inception v1 nodes:
input
conv2d0_w
conv2d0_b
conv2d1_w
conv2d1_b
conv2d2_w
conv2d2_b
...
softmax1_pre_activation
softmax1
avgpool0/reshape/shape
avgpool0/reshape
softmax2_pre_activation/matmul
softmax2_pre_activation
softmax2
output
output1
output2
So I guess the images have to be reshaped before being fed into the graph.
Right now the error occurs when hitting the following function:
def run_bottleneck_on_image(sess, image_data, image_data_tensor,
                            bottleneck_tensor):
  """Runs inference on an image to extract the 'bottleneck' summary layer.

  Args:
    sess: Current active TensorFlow Session.
    image_data: Numpy array of image data.
    image_data_tensor: Input data layer in the graph.
    bottleneck_tensor: Layer before the final softmax.

  Returns:
    Numpy array of bottleneck values.
  """
  bottleneck_values = sess.run(bottleneck_tensor,
                               {image_data_tensor: image_data})
  bottleneck_values = np.squeeze(bottleneck_values)
  return bottleneck_values
Error:
TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a Operation into a Tensor.
I guess the data fed to the input node of the inception v1 graph has to be reshaped to match the data produced after passing through the following nodes in inception v3:
DecodeJpeg/contents
DecodeJpeg
Cast
ExpandDims/dim
ExpandDims
ResizeBilinear/size
ResizeBilinear
If anyone has already managed to retrain the inception v1 model, or has an idea of how to reshape the data in the inception v1 case to match inception v3, I would be very thankful for any tips or suggestions.
Not sure if you have solved this or not, but I am working on a similar problem.
I am trying to use a different model (not Inception-v1 or Inception-v3) with the Inception-v3 transfer learning tutorial. This post seems to be on the right track for remapping the input of the new model (in your case inception-v1) so that it plays nicely with the JPEG encoding used in the rest of the tutorial:
feeding image data in tensorflow for transfer learning
The only problem I am having is an error on my input saying "Cannot convert a tensor of type uint8 to an input of type float32", but this may at least put you on the right track.
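For what it's worth, the usual TF1 fix for that mismatch is an explicit cast in the preprocessing graph before the remapped input; a sketch under those assumptions, not my exact code:

import tensorflow as tf

# decode_jpeg produces uint8, but most imported model inputs expect
# float32, so cast (and batch and resize) before feeding the graph.
jpeg_data = tf.placeholder(tf.string, name='jpeg_data')
decoded = tf.image.decode_jpeg(jpeg_data, channels=3)
as_float = tf.cast(decoded, tf.float32)
batched = tf.expand_dims(as_float, 0)
resized = tf.image.resize_bilinear(batched, [224, 224])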
Good Luck!
(For those who are still interested)
The bottleneck tensor size should be 1024 for inception-v1 (2048, as in the question, is the inception-v3 value). For me, the following setup works with the inception-v1 model mentioned above and this retrain script. There is no need for a JPEG data tensor or anything else:
bottleneck_tensor_name = 'avgpool0/reshape:0'
bottleneck_tensor_size = 1024
input_width = 224
input_height = 224
input_depth = 3
resized_input_tensor_name = 'input:0'
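As a side note on the TypeError in the question: plain 'input' names a tf.Operation, while 'input:0' names that operation's first output tensor, and feed_dict keys must be Tensors; that is also why resized_input_tensor_name = 'input:0' works here. A short sketch, assuming sess and image_data as in run_bottleneck_on_image above:

# 'input' resolves to a tf.Operation and cannot be a feed_dict key.
op = sess.graph.get_operation_by_name('input')

# 'input:0' resolves to that op's first output, a tf.Tensor.
input_tensor = sess.graph.get_tensor_by_name('input:0')

bottleneck_values = sess.run('avgpool0/reshape:0',
                             {input_tensor: image_data})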