Image segmentation with Keras: extract each class from output? - tensorflow

Hey all, I am working on multi-class image segmentation. I trained a model in a forked repo from GitHub: https://github.com/qubvel/segmentation_models
The prediction output always has shape (320, 320, 7), one channel per class. When two classes are present in the output I need to get each class on its own, i.e. extract each channel that represents an individual class. How can I do this?
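A common way to do this is to slice the channel axis or take an argmax over it. A minimal sketch, assuming pred is the (320, 320, 7) prediction array and each of the 7 channels holds per-class scores:
import numpy as np

class_index = 2                        # whichever of the 7 classes you want to inspect
class_prob = pred[..., class_index]    # (320, 320) score map for that single class

labels = np.argmax(pred, axis=-1)      # (320, 320) hard label map with values 0..6
class_mask = (labels == class_index)   # boolean mask covering only that class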

Related

Feed image data without class label

I am trying to implement image super-resolution using SRGAN. In the process, I used the DIV2K dataset (http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip) as my source.
I have worked with image classification using a CNN (I used keras.layers.convolutional.Conv2D), but in this case there are no class labels in my data source.
I unzipped the file and kept it in D:\Unzipped\DIV2K_train_HR. Then I used the following command to read the files.
img_dataset = tensorflow.keras.utils.image_dataset_from_directory("D:\\unzipped")
Then I created the model as follows:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, AveragePooling2D, MaxPooling2D

model = Sequential()
model.add(Conv2D(filters=64, kernel_size=(3,3), activation="relu", input_shape=(256,256,3)))
model.add(AveragePooling2D(pool_size=(2,2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.compile(optimizer='sgd', loss='mse')
model.fit(img_dataset, batch_size=32, epochs=10)
But I am getting the error "Graph execution error". I am unable to find the root cause of this error. Is it appearing because the class label is missing (I think, as per the code, DIV2K_train_HR is treated as one class label)? Or is this happening because the images don't have one specific size?
Note: this code does not match the SRGAN architecture. I am new to GANs and trying to move ahead step by step. I got stuck at the first step itself.
Yes, the error message is because you don't have labels in your dataset.
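To just load the images without labels (and resize them to one common shape), one option is to pass label_mode=None. A small sketch, assuming the folder from your question:
import tensorflow

img_dataset = tensorflow.keras.utils.image_dataset_from_directory(
    "D:\\Unzipped",
    label_mode=None,         # yield images only, no labels
    image_size=(256, 256),   # resize so every batch has a uniform shape
    batch_size=32)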
As a first step in a GAN network you need to create a discriminator model: given some image, it should recognize whether it is a real or a fake image. You can take images from your dataset and label them as 1 ("real images"). Then generate "fake images" by down-sampling and up-sampling images from your dataset and label them as 0. Train your discriminator model so that it can distinguish between original and processed images.
After that, you create a generator model. The generator model takes a down-sampled version of an image as input and creates an up-sampled version in the original resolution. The GAN model combines the generator and discriminator models by passing the output of the generator to the discriminator. The target label is 1, i.e. we want the generator to create up-sampled versions of images which the discriminator can't distinguish from the real ones. Now train the GAN network (set 'trainable' to False for the discriminator model's weights).
After your generator manages to produce images which the discriminator can't distinguish from the real ones, you take them, label them as 0 and train the discriminator again. Then train the generator again, and so on.
The process continues until the discriminator can't distinguish fake images from the real ones anymore (i.e. its accuracy doesn't exceed 0.5).
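A compact sketch of that wiring, assuming 256x256 RGB images and made-up layer sizes; it only shows how the generator and discriminator are combined and how 'trainable' is toggled, not a full SRGAN:
from tensorflow.keras import layers, models

# discriminator: classifies an image as real (1) or fake (0)
discriminator = models.Sequential([
    layers.Conv2D(32, 3, strides=2, activation="relu", input_shape=(256, 256, 3)),
    layers.Conv2D(64, 3, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid")])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# generator: takes a 64x64 down-sampled image and up-samples it back to 256x256
generator = models.Sequential([
    layers.Conv2D(64, 3, padding="same", activation="relu", input_shape=(64, 64, 3)),
    layers.UpSampling2D(2),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.UpSampling2D(2),
    layers.Conv2D(3, 3, padding="same", activation="sigmoid")])

# combined GAN: freeze the discriminator while the generator is being trained
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")  # generator is trained against target label 1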
Please see a simple example in the "Generative Adversarial Networks" notebook:
https://github.com/ageron/handson-ml3/blob/main/17_autoencoders_gans_and_diffusion_models.ipynb
This code is explained in ch. 17 of the book "Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow" (3rd edition) by Aurélien Géron.

False prediction from efficientnet transfer learning

I'm new to transfer learning in TensorFlow and I chose TF Hub to simplify finding a dataset, but now I'm confused because my model gives me a wrong prediction when I try to use an image from the internet. I used the efficientnet_v2_imagenet1k_b0 feature vector without fine-tuning to train on the rock-paper-scissors dataset from https://www.kaggle.com/drgfreeman/rockpaperscissors. I used ImageDataGenerator and flow_from_directory for data processing.
This is my model here
This is my train result here
This is my test result here
It's the second time I get something like this when using transfer learning with tfhub. I want to know why this happened and how to fix it, so this problem doesn't happen again. Thanks a lot for your help and sorry for my bad English.
I downloaded your code and the dataset to my local machine and had to make a few adjustments to make it run locally. I believe the model efficientnet_v2_imagenet1k_b0 is different from the newer EfficientNet models in that this version DOES require pixel levels to be scaled between 0 and 1. I ran the model with and without rescaling, and it works well only if the pixels are rescaled. Below is the code I used to test whether the model correctly predicts an image downloaded from the internet. It worked as expected.
import cv2
import numpy as np
import matplotlib.pyplot as plt

class_dict = train_generator.class_indices   # e.g. {'paper': 0, 'rock': 1, 'scissors': 2}
print(class_dict)
rev_dict = {}                                 # invert the mapping: index -> class name
for key, value in class_dict.items():
    rev_dict[value] = key
print(rev_dict)
fpath = r'C:\Temp\rps\1.jpg'  # an image downloaded from the internet that should be the paper class
img = plt.imread(fpath)
print(img.shape)
img = cv2.resize(img, (224, 224))  # resize to 224 x 224, the same size the model was trained on
print(img.shape)
plt.imshow(img)
img = img / 255.0                  # rescale as was done with the training images
img = np.expand_dims(img, axis=0)  # add the batch dimension
print(img.shape)
p = model.predict(img)
print(p)
index = np.argmax(p)               # index of the highest-probability class
print(index)
klass = rev_dict[index]
prob = p[0][index] * 100
print(f'image is of class {klass}, with probability of {prob:6.2f}')
The results were:
{'paper': 0, 'rock': 1, 'scissors': 2}
{0: 'paper', 1: 'rock', 2: 'scissors'}
(300, 300, 3)
(224, 224, 3)
(1, 224, 224, 3)
[[9.9902594e-01 5.5121275e-04 4.2284720e-04]]
0
image is of class paper, with probability of 99.90
You had this in your code:
uploaded = files.upload()
len_file = len(uploaded.keys())
This did not run because files was not defined, so I could not find what causes your misclassification problem.
Remember that in flow_from_directory, if you do not specify the color mode it defaults to rgb. So even though the training images are 4-channel PNGs, the actual model is trained on 3 channels. So make sure the images you want to predict are 3 channels.
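For example, a sketch of the training-side loading, assuming a hypothetical path to the unzipped Kaggle dataset (rescaling/preprocessing arguments are left out, since they depend on the model used):
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator()
train_generator = datagen.flow_from_directory(
    'rps-cv-images',            # hypothetical path to the unzipped dataset
    target_size=(224, 224),
    color_mode='rgb',           # the default; 4-channel PNGs are read as 3-channel RGB
    class_mode='categorical')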
To help, I really need to see the code for how you provide your data to model.predict. However, as a guess, remember that EfficientNet needs to have the pixels in the range from 0 to 255, so do not scale your images. Make sure your test images are RGB and of the same size as the image size used in training. I also need to see the code for how you process the predictions.
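A minimal sketch of what this answer describes, assuming a hypothetical test-image path and the 224 x 224 training size from above; note that, unlike the earlier answer, pixels are kept in the 0-255 range:
import cv2
import numpy as np

fpath = 'test_image.jpg'                    # hypothetical path to your test image
img = cv2.imread(fpath)                     # loads as BGR with 3 channels
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # convert to RGB to match training
img = cv2.resize(img, (224, 224))           # same size as used in training
img = np.expand_dims(img, axis=0)           # add batch dimension; pixels stay in 0-255
p = model.predict(img)
print(np.argmax(p))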

Output vector given an image for Siamese model

This page (https://keras.io/examples/mnist_siamese/) highlights how to train a Siamese model. The model outputs a score given two input images. What I want to do is, during inference, given an image, have it return a 128-dimensional vector that represents the image. How should I achieve that?
If you run model.summary() you will see a summary of all model layers. In your case the layer named 'model' appears to be the layer of interest. You can then select the layer that contains the 128-D output using the get_layer() method. Finally you can extract the output as below.
model.get_layer('model').output
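To actually get the 128-dimensional vector for a new image, note that the layer named 'model' is itself a Keras Model (the shared embedding network), so it can be used for inference directly. A sketch, assuming img is a preprocessed image batch and that 'model' is indeed the embedding sub-network:
embedding_net = model.get_layer('model')  # the shared sub-network whose output is the 128-D vector
vector = embedding_net.predict(img)       # -> shape (1, 128)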

can not detect the correct class when using Tensorflow object detection API

I am using the TensorFlow Object Detection API to train on and detect my dataset. This dataset has 5 classes (every class has 50 images), and it contains two very similar classes (red and black). After the training process, I ran detection on the test images and found that the model always detects a target of the red class as a target of the black class; the other classes are detected as the correct class.
I trained the model with faster_rcnn_resnet101_breads.config, using fine_tune_checkpoint. I set the learning_rate to 0.003 (the original is 0.0003).
Can you tell me what is wrong with my model, and what learning_rate should I set?
The comparison of my config file and the sample config file: compare result
Train curves: train curves
> black class: https://i.stack.imgur.com/eWrlK.jpg
> red class: https://i.stack.imgur.com/TuRjg.jpg

How to use self trained model in Tensorflow for image classification

I used the following documentation to train my own model to classify flowers as described there:
https://github.com/tensorflow/models/tree/master/inception#how-to-train-from-scratch
bazel-bin/inception/flowers_train --batch_size=32 --train_dir=/tmp/flowers_train --data_dir=/tmp/flowers_data
I specified --max_steps=30 only to see if I can use the model as expected for classification afterwards.
After these training steps I get the following files:
model.ckpt-29.data-00000-of-00001
model.ckpt-29.index
model.ckpt-29.meta
Unfortunately I actually don't know how to use these three files for image classification. Is there any example showing the necessary steps?
There's a section on how to evaluate (https://github.com/tensorflow/models/tree/master/inception#how-to-evaluate). It will use the saved model (those three files) to classify images and test them against the ground-truth labels. You can dig into the code (models/inception/inception/inception_eval.py) to see how it loads the checkpoint and does the raw inference.
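If you just want to restore the checkpoint yourself instead of going through inception_eval.py, a minimal TF1-style sketch looks like this; the checkpoint prefix under /tmp/flowers_train comes from your training command, everything else is an assumption about your graph:
import tensorflow as tf

with tf.Session() as sess:
    # rebuild the graph from the .meta file and load the trained weights
    saver = tf.train.import_meta_graph('/tmp/flowers_train/model.ckpt-29.meta')
    saver.restore(sess, '/tmp/flowers_train/model.ckpt-29')
    graph = tf.get_default_graph()
    # look up your input and output tensors by name (graph.get_tensor_by_name(...)),
    # then run: sess.run(output_tensor, feed_dict={input_tensor: image_batch})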