Error: incompatible shapes in model.fit() - TensorFlow

I am new to Keras and want to try U-Net. I followed this tutorial from TensorFlow: https://github.com/tensorflow/models/blob/master/samples/outreach/blogs/segmentation_blogpost/image_segmentation.ipynb.
I used its U-Net construction code with my own dataset. The tutorial's images are 256x256x3, and I made my images the same shape.
Now I get this error:
InvalidArgumentError: Incompatible shapes: [1376256] vs. [458752]
[[{{node training/Adam/gradients/loss/conv2d_23_loss/mul_grad/BroadcastGradientArgs}}]]
It occurs in model.fit(). I have 130 training examples and a batch size of 5 (I know those numbers are small...).
Does anybody know what could cause this error in model.fit()?
Thank you for your help.

1376256 is exactly 3 x 458752. I suspect you are not accounting for your channels correctly somewhere. Since this appears to be on your output layer, you may be trying to predict 3 classes when there is only 1.
In the future, or if this doesn't help, please provide more information, including your model code and the number of classes you're trying to predict, so people can help you better.
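For illustration, here is a minimal sketch of a single-channel output head for binary segmentation (a toy model, not your actual U-Net; all names and shapes are illustrative):

import tensorflow as tf

inputs = tf.keras.Input(shape=(256, 256, 3))
x = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
# For a single-class mask the head must emit 1 channel; emitting 3 would
# make the flattened output exactly 3x larger (3 x 458752 = 1376256).
outputs = tf.keras.layers.Conv2D(1, 1, activation='sigmoid')(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')
# The labels passed to model.fit() must then have shape (batch, 256, 256, 1).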

Related

create_training_graph() failed when converting MobileFacenet to a quantize-aware model with TF-Lite

I am trying to quantize MobileFacenet (code from sirius-ai) according to the suggestion, and I think I ran into the same issue as this one.
When I add tf.contrib.quantize.create_training_graph() into the training graph (train_nets.py ln.187, before train_op = train(...), or in train() in utils/common.py ln.38, before the gradients), it does not add quantize-aware ops into the graph to collect the dynamic range max/min.
I assume I should see some additional nodes in TensorBoard, but I do not, so I think I did not successfully add the quantize-aware ops to the training graph. I also tried tracing through TensorFlow and found that _FindLayersToQuantize() returned nothing.
However, when I add tf.contrib.quantize.create_eval_graph() to refine the training graph, I can see some quantize-aware ops such as act_quant. Since I did not successfully add the ops to the training graph, I have no weights to load in the eval graph.
Thus I get error messages such as
Key MobileFaceNet/Logits/LinearConv1x1/act_quant/max not found in checkpoint
or
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value MobileFaceNet/Logits/LinearConv1x1/act_quant/max
Does anyone know how to fix this error, or how to get a quantized MobileFacenet with good accuracy?
Thanks!
Hi,
Unfortunately, the contrib/quantize tool is now deprecated. It won't be able to support newer models, and we are not working on it anymore.
If you are interested in QAT, I would recommend trying the new TF/Keras QAT API. We are actively developing that and providing support for it.
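For reference, a minimal sketch of that newer Keras QAT workflow, using the tensorflow-model-optimization package (the toy model here is illustrative, not MobileFacenet):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation='relu', input_shape=(112, 112, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Wraps layers with fake-quantization ops so ranges are learned during training
qat_model = tfmot.quantization.keras.quantize_model(base_model)
qat_model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# Train with qat_model.fit(...), then convert with tf.lite.TFLiteConverter.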

Bayesian Network in Pomegranate: ValueError: Sample does not have the same number of dimensions as the model

I am trying to model a Bayesian network in Python using the Pomegranate package. The network should be learned from data, so I am using the .from_samples method. However, I am having trouble with the .predict_proba() method; it gives me an error.
This is how I build the model:
model = BayesianNetwork.from_samples(X_train, algorithm='chow-liu')
and this is how I do prediction:
model.predict_proba(X_train)
and this is the error I get:
ValueError: Sample does not have the same number of dimensions as the model
Your help would be highly appreciated.
I got the answer: you should define your state_names when calling the from_samples method.
Another question: how do we do classification using this model?
You should use the predict() method to predict the states of the unobserved (None-valued) nodes.
Check the documentation for more details.
Also, in the repository you can find some interesting tutorials that will help you.
Please add [] around the sample you are passing.
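Putting the two suggestions together, a minimal sketch (with toy data and placeholder column names standing in for your actual X_train):

import numpy as np
from pomegranate import BayesianNetwork

# Toy binary data standing in for the real X_train
X_train = np.random.randint(2, size=(100, 3))
state_names = ['feature_0', 'feature_1', 'feature_2']

model = BayesianNetwork.from_samples(X_train, algorithm='chow-liu',
                                     state_names=state_names)

# Samples are passed as a list of rows (hence the extra []); None marks
# the unobserved nodes whose states should be inferred.
print(model.predict_proba([[0, None, 1]]))
print(model.predict([[0, None, 1]]))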

TensorFlow Wide and Deep Model: how many features can I use?

In this wide and deep model with TensorFlow (https://www.tensorflow.org/tutorials/wide_and_deep), is there a limit on the number of features? I mean, is it possible to use 20 columns for training and prediction?
I tried to train my model with 20 columns and to predict, but I got the error below:
Exception during running the graph: Unable to get element as bytes.
I didn't really understand this error, but I think it is linked to the number of features, because when I tried with 19 columns, prediction worked!
PS: I'm working on GCP with GCS and GCMLE
Here is the model on my GitHub: https://github.com/SofiaAmel/censusTest/blob/master/trainer/model.py
There are no limits on the number of columns. The error you are seeing probably indicates a problem with the specific column you added.
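To illustrate that the column count itself is not the problem, here is a sketch with 20 numeric feature columns (column names and estimator settings are illustrative, not taken from your model):

import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column('col_%d' % i)
                   for i in range(20)]

estimator = tf.estimator.DNNLinearCombinedClassifier(
    linear_feature_columns=feature_columns,  # "wide" part
    dnn_feature_columns=feature_columns,     # "deep" part
    dnn_hidden_units=[100, 50])
# To isolate a bad column, retrain while dropping one column at a time.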

TensorFlow object detection: ValueError: cannot reshape array

I have trained an "SSD with MobileNet" model in TensorFlow, and training went fine.
Now when I try to test the performance of the inference graph by running object_detection_tutorial.ipynb on an image, I get the following error:
ValueError: cannot reshape array of size X into shape (a,b,c)
X, a, b, c are different values for different test images.
I don't think image size is causing the issue, as the model should perform independently of input image size. In fact, I get this error even with an image I used for training.
Please help here.
As suggested by #Mandroid, programmatically converting the input image to 3 channels might be the way to go, but this is how I ended up solving my issue.
Note: I am not sure whether removing the alpha channel from the images has consequences; it is a kind of information loss, however.
Replacing image = Image.open(<image_path>) with image = Image.open(<image_path>).convert('RGB') did the job for me.
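For completeness, here is a small sketch of that fix (the path is illustrative):

from PIL import Image
import numpy as np

# .convert('RGB') forces 3 channels, dropping any alpha channel that
# would otherwise break the (height, width, 3) reshape.
image = Image.open('test_image.png').convert('RGB')
image_np = np.array(image)
print(image_np.shape)  # (height, width, 3)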

TensorFlow Segmentor

I've written the following segmentor and I can't get the accuracy to work: I always get an accuracy of 0.0, whatever the size of my sample.
I think the problem is the sigmoid layer at the end of the U() function: a tensor of continuous values between 0 and 1 (conv10) is compared against a binary tensor, so there is no chance of equality between the two.
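If that is the cause, one common fix is to threshold the sigmoid output before comparing it with the labels; a sketch in TF 1.x style (conv10 mirrors the variable name above, and the shapes are illustrative):

import tensorflow as tf  # TF 1.x-style graph code, matching the question

conv10 = tf.placeholder(tf.float32, [None, 256, 256, 1])  # sigmoid output
labels = tf.placeholder(tf.float32, [None, 256, 256, 1])  # binary masks

# Binarize the continuous predictions, then compare element-wise
predictions = tf.cast(tf.greater(conv10, 0.5), tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predictions, labels), tf.float32))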
UPDATE: The code can be found as a git repo here.
I've resolved the issue. The problem was the conversion of NumPy arrays to placeholders at the feed level. The updated code can be found as a git repo at: https://github.com/JulienBelanger/TensorFlow-Image-Segmentation