How to use a self-trained model in TensorFlow for image classification - tensorflow

I used the following documentation to train my own model to classify flowers as described there:
https://github.com/tensorflow/models/tree/master/inception#how-to-train-from-scratch
bazel-bin/inception/flowers_train --batch_size=32 --train_dir=/tmp/flowers_train --data_dir=/tmp/flowers_data
I specified --max_steps=30 only to see if I can use the model as expected for classification afterwards.
After these training steps I get the following files:
model.ckpt-29.data-00000-of-00001
model.ckpt-29.index
model.ckpt-29.meta
Unfortunately I actually don't know how to use these three files for image classification. Is there any example showing the necessary steps?

There's a section on how to evaluate (https://github.com/tensorflow/models/tree/master/inception#how-to-evaluate). It will use the saved model (those three checkpoint files) to classify images and test them against the ground-truth labels. You can dig into the code (models/inception/inception/inception_eval.py) to see how it loads the checkpoint and runs raw inference.
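If you want to run inference yourself rather than through the eval script, a minimal sketch of restoring those three files with the TF 1.x checkpoint API could look like the following (the tensor names in the comments are hypothetical and depend on the Inception graph):
import tensorflow as tf

# The checkpoint prefix is the file name without the .data/.index/.meta suffix.
checkpoint_prefix = "/tmp/flowers_train/model.ckpt-29"

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and restore the trained weights.
    saver = tf.train.import_meta_graph(checkpoint_prefix + ".meta")
    saver.restore(sess, checkpoint_prefix)

    # Look up the input and output tensors by name (the names below are placeholders)
    # and run them on your own image batch:
    # graph = tf.get_default_graph()
    # images = graph.get_tensor_by_name("images:0")   # hypothetical name
    # logits = graph.get_tensor_by_name("logits:0")   # hypothetical name
    # predictions = sess.run(logits, feed_dict={images: my_image_batch})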

Related

Use Tensorflow2 saved model for object detection

I'm quite new to object detection, but I managed to train my first custom TensorFlow model yesterday. I think it worked fine apart from some warnings; at least I got my exported_model folder with a checkpoint, a saved model and pipeline.config. I built it with exporter_main_v2.py from TensorFlow. I just used some images of deer and want to try to detect deer in different pictures.
That's what I would like to test now, but I don't know how. I already did an object detection tutorial with pre-trained models and it worked fine. I tried to just replace config_file_path, saved_model_path and image_path with the paths pointing to my exported model, but it didn't work:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\tensorflow\tf_io.cpp:42: error: (-2:Unspecified error) FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: D:\VSCode\Machine_Learning_Tests\Tensorflow\workspace\exported_models\first_model\saved_model\saved_model.pb in function 'cv::dnn::ReadTFNetParamsFromBinaryFileOrDie'
There are endless tutorials on how to train a custom detection model, but I can't find a good explanation of how to manually test my exported model.
Thanks in advance!
EDIT: I need to know how to build a script that imports a model I saved with TensorFlow's exporter_main_v2.py and an image I want to test this model on, and gives me a result, either as text or as rectangles drawn on the picture. I have seen many tutorials, but none works for me with a model I saved with TensorFlow's exporter_main_v2.py.
From the error, it looks like you have a model saved as a .pb file. If you want to run inference, you can write something like this:
import tensorflow as tf

# load the model from its SavedModel directory
model = tf.keras.models.load_model(my_model_dir)
# run batched inference on the test images
prediction = model.predict(x=x_test, ...)
You'll have to set x, which is the only mandatory argument. It is your test dataset (the images you want to obtain predictions for). Also, predict is useful when you have a large number of images to predict: it handles prediction in batches, avoiding filling up memory. If you have just a few, you can call the model directly via its __call__() method, like this:
prediction = model(x_test, training=False)
More about prediction can be found in the TensorFlow documentation.
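For a model exported with exporter_main_v2.py specifically (as in your EDIT), an alternative sketch is to load the SavedModel directly with tf.saved_model.load and read out the detections; the paths below are placeholders and the output keys are the ones the Object Detection API usually returns:
import numpy as np
import tensorflow as tf
from PIL import Image

saved_model_path = "exported_models/first_model/saved_model"  # placeholder path
image_path = "test_images/deer.jpg"                           # placeholder path

# Load the exported detection model.
detect_fn = tf.saved_model.load(saved_model_path)

# Read the image and add a batch dimension; the exported signature
# expects a uint8 tensor of shape [1, height, width, 3].
image_np = np.array(Image.open(image_path).convert("RGB"))
input_tensor = tf.convert_to_tensor(image_np)[tf.newaxis, ...]

detections = detect_fn(input_tensor)

# Boxes are normalized [ymin, xmin, ymax, xmax]; print them or draw them on the image.
boxes = detections["detection_boxes"][0].numpy()
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy().astype(int)
for box, score, cls in zip(boxes, scores, classes):
    if score > 0.5:
        print("class {} score {:.2f} box {}".format(cls, score, box))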

Load data for Mask RCNN

I want to train Mask-RCNN on my own dataset. I already have the segmented images (ground truths) of leaves which look something like the image below:
How can I load the dataset for training Mask RCNN?
Since Mask RCNN is pre-trained on the COCO dataset, you need to train it with these images. For that purpose you have to label them and train it. Since a mask is involved, use a tool such as the VGG Image Annotator to do the necessary annotation and labelling; it will generate a JSON file depending on your classes. Later, based on your requirements, you have to run the .py files for your classes, train, and then generate the model for testing.
You will have to convert this to TFRecords for the Mask RCNN model to be able to read the images and their annotations. Please refer to this Medium article: https://medium.com/@vijendra1125/custom-mask-rcnn-using-tensorflow-object-detection-api-101149ce0765
You can use COCO annotations (here is an example) to annotate your dataset and then run it just like you would with the COCO dataset.
Also you can check this code : https://github.com/matterport/Mask_RCNN/blob/master/samples/shapes/train_shapes.ipynb
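As a rough sketch of how the matterport repo in that notebook expects a custom dataset to be loaded (modelled on its sample code, assuming a VGG Image Annotator JSON export with polygon regions; file names and the class name are placeholders):
import json
import os
import numpy as np
import skimage.draw
import skimage.io
from mrcnn import utils

class LeafDataset(utils.Dataset):
    def load_leaves(self, dataset_dir):
        # One foreground class besides background.
        self.add_class("leaf", 1, "leaf")
        # VGG Image Annotator export (file name is a placeholder).
        annotations = json.load(open(os.path.join(dataset_dir, "via_annotations.json")))
        for a in annotations.values():
            if not a.get("regions"):
                continue
            regions = a["regions"].values() if isinstance(a["regions"], dict) else a["regions"]
            polygons = [r["shape_attributes"] for r in regions]
            image_path = os.path.join(dataset_dir, a["filename"])
            height, width = skimage.io.imread(image_path).shape[:2]
            self.add_image("leaf", image_id=a["filename"], path=image_path,
                           width=width, height=height, polygons=polygons)

    def load_mask(self, image_id):
        # Build one binary mask per annotated polygon.
        info = self.image_info[image_id]
        mask = np.zeros([info["height"], info["width"], len(info["polygons"])], dtype=np.uint8)
        for i, p in enumerate(info["polygons"]):
            rr, cc = skimage.draw.polygon(p["all_points_y"], p["all_points_x"])
            mask[rr, cc, i] = 1
        # All instances belong to the single "leaf" class.
        return mask.astype(bool), np.ones([mask.shape[-1]], dtype=np.int32)

# Typical usage before training:
# dataset = LeafDataset(); dataset.load_leaves("path/to/leaves"); dataset.prepare()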

Tensorflow image classification example

This is my first time doing image classification, I followed this tutorial:
https://www.tensorflow.org/tutorials/images/classification
I'm wondering, how do I take that model, and actually use it to make predictions?
I would like to just put one image into the model, and ideally get a prediction percentage of whether it thinks it's a dog or a cat.
I saved the model using:
model.save('my_model.h5')
But I'm really lost on the next steps.
There's another TensorFlow tutorial which uses model.predict() specifically: Basic classification: Classify images of clothing
I'm not sure my code is correct all the way, but I tried to extend the prediction part of the cats/dogs tutorial using model.predict_generator(), though I can't entirely understand the results I get. The code is adapted from this second tutorial: Tutorial on using Keras flow_from_directory and generators
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Preparing the testing dataset
test_dir = os.path.join(os.getcwd(), 'cat_dog_testing')    # directory with test images
test_image_generator = ImageDataGenerator(rescale=1./255)  # rescaling pixels 0 to 1
test_generator = test_image_generator.flow_from_directory(batch_size=6,
                                                          directory=test_dir,
                                                          shuffle=False,
                                                          target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                          class_mode=None)
STEP_SIZE_TEST = test_generator.n // test_generator.batch_size
test_generator.reset()
pred = model_new.predict_generator(test_generator, steps=STEP_SIZE_TEST, verbose=1)
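If you just want a percentage for a single image (as the question asks), a small sketch along these lines should work, assuming the tutorial's binary cats/dogs model with one sigmoid output and using a placeholder file name:
import numpy as np
from tensorflow.keras.preprocessing import image

img = image.load_img('my_dog.jpg', target_size=(IMG_HEIGHT, IMG_WIDTH))  # placeholder file
x = image.img_to_array(img) / 255.0   # same rescaling as during training
x = np.expand_dims(x, axis=0)         # add a batch dimension: (1, H, W, 3)

prob = float(model_new.predict(x)[0][0])   # assumes a single sigmoid output for class 1 ("dog")
print("dog: {:.1%}, cat: {:.1%}".format(prob, 1 - prob))
# If the final layer outputs logits instead of probabilities,
# pass the prediction through tf.nn.sigmoid first.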
I built a TensorFlow image classification workflow so that you can both train and classify images with no code. It's on FlyteHub if you want to see it:
https://flytehub.org/trainandclassifyimages
Happy to collaborate if you have improvements you want to make to the codebase :)

Tensorflow : Is it possible to identify the data is used for training?

I have created a text classification model (.pb) using TensorFlow. Prediction works well.
Is it possible to check whether a sentence used for prediction was already part of the training data? I need to retrain the model when a new sentence is given to the model for prediction.
I did some research and couldn't find a way to recover the training data with only the .pb file, because that file only stores the learned weights and not the actual training data (obviously). But if you still have the dataset, then you can easily verify it against that.
I don't think you can ever find the exact training data with only the trained model, because the model only contains the learned parameters and not the actual training data.
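If you still have the original training data, a simple (hypothetical) membership check against it is all you need:
# Assumes the training sentences were kept in a plain-text file, one per line;
# the file name is a placeholder.
def was_used_for_training(sentence, train_file="train_sentences.txt"):
    with open(train_file, encoding="utf-8") as f:
        train_sentences = {line.strip().lower() for line in f}
    return sentence.strip().lower() in train_sentences

new_sentence = "This phone has a great battery life."
if not was_used_for_training(new_sentence):
    print("Sentence not in the training set - consider adding it and retraining.")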

why are my tensorflow events files empty?

I am running the TensorFlow Object Detection API with the SSD_mobilenet model. I have the model.ckpt as well as the graph.pbtxt in my training dir, but I found that the events files in my training dir are empty. It seems that no data was written to them. Could anyone help me, please?
TensorFlow event files are generated based on the summaries we have added in the code.
For example, suppose you are training a convolutional neural network for recognizing MNIST digits. You'd like to record how the learning rate varies over time and how the objective function is changing. Collect these by attaching tf.summary.scalar ops to the nodes that output the learning rate and loss respectively. Then give each scalar summary a meaningful tag, like 'learning rate' or 'loss function'.
For example:
# Add a scalar summary for the snapshot loss.
tf.summary.scalar('loss', loss)
Please refer to the link below:
https://www.tensorflow.org/guide/summaries_and_tensorboard
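As a minimal sketch (TF 1.x, in the spirit of the linked guide) of what has to happen for those event files to contain data: summaries must be created, merged, and written with a FileWriter. The loss here is just a placeholder value and the log directory is a placeholder path.
import tensorflow as tf

loss = tf.placeholder(tf.float32, name="loss_value")
tf.summary.scalar("loss", loss)            # attach a scalar summary to the loss
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("/tmp/training_dir", sess.graph)  # placeholder dir
    for step in range(100):
        fake_loss = 1.0 / (step + 1)       # stand-in for a real training loss
        summary = sess.run(merged, feed_dict={loss: fake_loss})
        writer.add_summary(summary, step)  # this is what fills the events file
    writer.close()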