I'm testing out this object detection implementation on a small subset of the DOTA dataset using Google Colab. The training is going fine, but the images in TensorBoard are washed out and beige. Could there be an issue with how the images were converted to a TFRecord, or is there some TensorBoard/Colab compatibility issue? I'm using tensorflow-gpu 1.13.1 and tensorboard 1.13.1. See the screenshot below for the commands I used to open TensorBoard and the issues with the images.
There was in fact a problem with image normalization, caused by a user error while editing the code.
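For anyone hitting a similar washed-out-image symptom: one plausible failure mode (an assumption on my part, not confirmed in the original post) is feeding float pixel values that are still on a 0-255 scale to a viewer that expects floats in [0, 1]; everything above 1.0 then clips to white, which looks washed out. A minimal sketch of that effect, with hypothetical helper names:

```python
def clip_for_display(pixels):
    """Mimic the [0, 1] clipping a float image may go through for display."""
    return [min(max(p, 0.0), 1.0) for p in pixels]

def normalize(pixels):
    """Rescale 0-255 float pixels into the [0, 1] range expected for display."""
    return [p / 255.0 for p in pixels]

raw = [0.0, 64.0, 128.0, 255.0]          # float pixels still on a 0-255 scale
print(clip_for_display(raw))             # nearly everything clips to 1.0 (white)
print(clip_for_display(normalize(raw)))  # normalized values survive clipping
```

If your conversion code scales or mean-subtracts images before writing the TFRecord, checking the value range at each step is a quick way to catch this class of bug.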
Related
I have this problem with semantic segmentation data in Google Colab and Jupyter Notebook. Here's an example. This is the picture from Jupyter Notebook.
https://i.stack.imgur.com/slBeI.png
This is the picture from Google Colab.
https://i.stack.imgur.com/BNXOw.png
As you can see, the segmentation map differs between these two pictures. Has anyone else run into this problem? And is there any solution for this type of problem? Thanks ahead!
I am attempting to train an object detection model using Tensorflow's Object Detection API 2 and Tensorflow 2.3.0. I have largely been using this article as a resource in preparing the data and training the model.
Most articles which use the Object Detection API download a pre-trained model from the Tensorflow model zoo prior to fine-tuning.
The Tensorflow Model Zoo is a set of links on a Github page set up by the Object Detection team. When I click one such link (using Google Chrome), a new tab opens briefly as if a download is starting, then immediately closes and a download does not occur. Hyperlinks to other models I have found in articles also have not worked.
To anyone who has worked with fine-tuning using the Object Detection API: What method did you use to download a pre-trained model? Did the model zoo links work? If not, what resource did you use instead?
Any help is much appreciated.
I solved this problem on my own, so if anyone else is having a similar issue: try a different browser. The model zoo downloads were not working for me in Google Chrome. However, when I tried the download on Microsoft Edge, it worked immediately and I was able to proceed.
In Google Chrome, you can copy the link of the model from Readme file and then paste that into another tab. The download will start automatically.
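Alternatively, you can skip the browser entirely and fetch the archive from a script. A minimal sketch (the commented-out URL is one real example from the TF2 detection model zoo README, but check the README for current links, as they may change):

```python
import tarfile
import urllib.request

def download_and_extract(url, archive_path, dest_dir="."):
    """Fetch a model-zoo tarball and unpack it into dest_dir."""
    urllib.request.urlretrieve(url, archive_path)
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest_dir)

# Example (uncomment to run; URL copied from the model zoo README):
# download_and_extract(
#     "http://download.tensorflow.org/models/object_detection/tf2/20200711/"
#     "ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz",
#     "ssd_mobilenet_v2_fpnlite.tar.gz",
# )
```

In Colab you can do the same thing with `!wget <url>` and `!tar -xzf <archive>`.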
I am trying to create a mobile app that uses object detection to detect a specific type of object. To do this I am starting with the Tensorflow object detection example Android app, which uses TF2 and ssd_mobilenet_v1.
I'd like to try Few-Shot training (Colab link), so I started by replacing the example app's SSD Mobilenet v1 download with the Colab's output file model.tflite. However, this causes the app to crash with the following error:
java.lang.IllegalStateException: This model does not contain associated files, and is not a Zip file.
at org.tensorflow.lite.support.metadata.MetadataExtractor.assertZipFile(MetadataExtractor.java:313)
at org.tensorflow.lite.support.metadata.MetadataExtractor.getAssociatedFile(MetadataExtractor.java:164)
at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.create(TFLiteObjectDetectionAPIModel.java:126)
at org.tensorflow.lite.examples.detection.DetectorActivity.onPreviewSizeChosen(DetectorActivity.java:99)
I realize the Colab uses ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz - does this mean there are changes needed in the app code - or is there something more fundamentally wrong with my approach?
Update: I also tried the Lite output of the Colab tf2_image_retraining and got the same error.
The fix apparently was https://github.com/tensorflow/examples/compare/master...cachvico:darren/fix-od - .tflite files can now be zip files including the labels, but the example app doesn't work with the old format.
This no longer throws an error when using the Few-Shot Colab output. I'm not getting results yet, though: pointing the app at pictures of rubber ducks does not work yet.
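Based on the error message above, the example app expects the .tflite file to be a zip archive with associated files (labels) appended to it. A quick, hedged way to check which format a given model file is in, using only the standard library (the function name is mine):

```python
import zipfile

def describe_tflite(path):
    """Report whether a .tflite file is a plain flatbuffer or a zip with metadata."""
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            return "zip with associated files: %s" % zf.namelist()
    return "plain flatbuffer (no associated files)"
```

If the Colab's output reports "plain flatbuffer", that explains the `IllegalStateException` from `MetadataExtractor.assertZipFile`.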
I have a couple of TFRecord files that I made myself.
They work perfectly in TF1; I have used them in several projects.
However, if I want to use them with the TensorFlow Object Detection API in TF2 (running the model_main_tf2.py script), I see the following in TensorBoard:
tensorboard images tab
It totally messes up the images.
(Running the /work/tfapi/research/object_detection/model_main.py script, or even legacy_train, the images look fine.)
Does TF2 use a different kind of encoding for TFRecords?
Or what can cause such results?
I am running the sample program that comes packaged with the TensorFlow Object Detection API (object_detection_tutorial.ipynb).
The program runs fine with no errors, but the bounding boxes are not displayed at all.
My environment is as follows:
Windows 10
Python 3.6.3
What can be the reason?
With regards
Manish
It seems that the latest version of the model, ssd_mobilenet_v1_coco_2017_11_08, doesn't work and outputs abnormally low values. Replacing it in the Jupyter Notebook with an older version of the model works fine for me:
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
Ref: https://github.com/tensorflow/models/issues/2773
Please try the updated SSD models in the detection model zoo: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md. This issue should be fixed there.