Google AutoML Vision AI Missing Images - google-vision

I've used the web UI for AutoML; however, the original dataset that was uploaded contained 17,237 images, while the GCP bucket only displays 16,732. What happened during upload that left nearly 500 images missing?
There also seems to be a second discrepancy in how many images were used to train our AutoML model: training used only 16,681. I am not sure where these images are being lost.
I would appreciate any thoughts.
I have confirmed that the above numbers are correct, but I am still unsure where the images are being lost.
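If it helps to double-check the bucket side, the object count can be read directly with the Cloud Storage client library. A small sketch; the bucket name and prefix are placeholders:

from google.cloud import storage

# Count the objects actually present in the bucket (placeholder names).
client = storage.Client()
count = sum(1 for _ in client.list_blobs('my-automl-bucket', prefix='images/'))
print(count)  # compare against the 17,237 images in the original dataset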

Related

Huge size of TF records file to store on Google Cloud

I am trying to modify a TensorFlow project so that it becomes compatible with TPUs.
For this, I started with the code explained on this site.
There, the COCO dataset is downloaded and its features are first extracted using the InceptionV3 model.
I wanted to modify this code so that it supports TPU.
For this, I added the mandatory code for TPU as per this link.
Within the TPU strategy scope, I created the InceptionV3 model using the Keras library and loaded it with ImageNet weights, as in the existing code.
Now, since the TPU needs its data stored on Google Cloud Storage, I created a TFRecord file using tf.Example with the help of this link.
I tried to build this file in several ways so that it would hold the data the TPU will read through TFRecordDataset.
At first I added the raw image data and the image path directly to the file and uploaded it to a GCP bucket, but while reading the data back I realized this image data was not useful: it carried no shape/size information, which the pipeline needs, and I had not resized the images to the required dimensions before storing them. That file came to 2.5 GB, which was fine.
Then I thought I would keep only the image paths in the cloud, so I created another TFRecord file containing just the paths. But that seemed unoptimized, since the TPU would have to open each image individually, resize it to 299×299, and then feed it to the model; it would be better to have the image data available through a .map() call on the TFRecordDataset. So I tried again, this time following this link, storing the R, G, and B channels along with the image path inside the TFRecord file.
However, the resulting TFRecord file is abnormally large, around 40-45 GB, and I ultimately stopped the run because memory was filling up on the Google Colab TPU.
The original COCO dataset is not that large, only about 13 GB, and I am building the dataset from just the first 30,000 records, so 40 GB seems like a strange number.
What is the problem with this way of storing the features? Is there a better way to store image data in a TFRecord file and then read it back through TFRecordDataset?
I think the COCO dataset processed as TFRecords should be around 24-25 GB on GCS. Note that TFRecords aren't meant to act as a form of compression; they represent data as protobufs so it can be loaded efficiently into TensorFlow programs.
You might have more success if you refer to https://cloud.google.com/tpu/docs/coco-setup (the corresponding script can be found here) for converting COCO (or a subset) into TFRecords.
Furthermore, we have implemented detection models for COCO using TF2/Keras, optimized for GPU/TPU, here, which you might find useful for building optimal input pipelines. An example tutorial can be found here. Thanks!
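One note on where the 40-45 GB likely comes from: a 299×299×3 image is about 268,000 pixel values, and writing those as int64/float feature lists costs roughly one to four bytes per value, i.e. on the order of 0.3-1 MB per image and tens of GB for 30,000 images. Storing the compressed JPEG bytes instead (resized and re-encoded once, up front) keeps each record near the size of the JPEG itself. A minimal sketch of that approach, assuming TF2 eager mode; image_paths and the gs:// path are placeholders:

import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def image_to_example(path):
    # Decode, resize to the InceptionV3 input size once, then re-encode as
    # JPEG so the record stores compressed bytes, not raw pixel arrays.
    img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    img = tf.cast(tf.image.resize(img, [299, 299]), tf.uint8)
    return tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes_feature(tf.io.encode_jpeg(img).numpy()),
        'image/path': _bytes_feature(path.encode('utf-8')),
    }))

# Writing; image_paths is a placeholder for your list of local COCO files.
with tf.io.TFRecordWriter('train.tfrecord') as writer:
    for p in image_paths:
        writer.write(image_to_example(p).SerializeToString())

# Reading on the TPU side: the decode happens inside .map().
spec = {'image/encoded': tf.io.FixedLenFeature([], tf.string)}

def parse(record):
    parsed = tf.io.parse_single_example(record, spec)
    img = tf.io.decode_jpeg(parsed['image/encoded'], channels=3)
    img = tf.reshape(img, [299, 299, 3])  # static shape, as TPUs require
    return tf.keras.applications.inception_v3.preprocess_input(
        tf.cast(img, tf.float32))

ds = tf.data.TFRecordDataset('gs://my-bucket/train.tfrecord').map(parse)

The reshape in parse() pins the static shape that TPUs need, and the heavy decode work stays inside .map() as intended.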

Accessing Cloud Storage from Cloud ML Engine during prediction

I am trying to build an image classifier that will recognise the class of a test image based on a similarity measure between the test image and a dataset of labeled images. Basically, I want to use a KNN classifier that takes the bottleneck features of a pretrained CNN model as input.
I would like to store this dataset of labeled images (the bottleneck features) in a separate bucket in Google Cloud Storage and give my model access to it during prediction, since the file size of my saved model would be too big if I added this dataset to it (Google restricts the file size to 250 MB). Unfortunately, I can't find a way to access a bucket from a SavedModel. Does anyone have any idea how to solve this?
Code running on the service currently only has access to public GCS buckets. You can contact us offline (cloudml-feedback@google.com) and we may be able to increase your file size quota.
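If the bucket can be made public, one option is to read the features through TensorFlow's file I/O layer, which understands gs:// paths on the service. A minimal sketch, assuming the bottleneck features are stored as a .npy file; the bucket and file names are hypothetical:

import numpy as np
import tensorflow as tf

# Load precomputed bottleneck features from a (public) GCS object.
with tf.gfile.Open('gs://my-public-bucket/bottleneck_features.npy', 'rb') as f:
    features = np.load(f)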

Is it necessary to copy images and annotations to Cloud bucket when training Object Detection model?

I am training my own dataset, and have successfully trained a few before.
However, I have also been copying the Images and Annotations folders to the cloud bucket. Is this necessary? I already have all the TFRecord and config files; do the images/annotations need to be in the bucket too?
My assumption is that the images are necessary because, when running TensorBoard to view images and steps, TensorBoard needs the images in order to display them.
Is this correct? Thanks.
This image shows that the images are inactive; there is also no Precision/mAP chart.
You don't need to move your images and annotations folders to the cloud bucket. The images are included in the TFRecords when they are created. For example, for the Pascal dataset, you can see it here:
'image/encoded': dataset_util.bytes_feature(encoded_jpg)
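For context, the surrounding feature dict in that conversion script (create_pascal_tf_record.py in the TensorFlow Object Detection API) looks roughly like this, abbreviated here; the encoded image bytes and the annotation-derived boxes go into the same record, which is why the raw folders are not needed in the bucket:

example = tf.train.Example(features=tf.train.Features(feature={
    'image/height': dataset_util.int64_feature(height),
    'image/width': dataset_util.int64_feature(width),
    'image/filename': dataset_util.bytes_feature(data['filename'].encode('utf8')),
    'image/encoded': dataset_util.bytes_feature(encoded_jpg),
    'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
    # Bounding boxes and class labels parsed from the XML annotations:
    'image/object/bbox/xmin': dataset_util.float_list_feature(xmin),
    'image/object/bbox/ymin': dataset_util.float_list_feature(ymin),
    'image/object/bbox/xmax': dataset_util.float_list_feature(xmax),
    'image/object/bbox/ymax': dataset_util.float_list_feature(ymax),
    'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
    'image/object/class/label': dataset_util.int64_list_feature(classes),
}))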

How to train a set of images (apart from the tutorial) and use it to classify in TensorFlow?

I am using code from the TensorPy GitHub repo (built by Michael Mintz) (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py) to easily handle image classification of either individual or multiple images directly from web pages. However, I want to train the system on my own set of images and then run the classifier. Could anyone please point me in the right direction for training a new model, and explain how I would use it to classify a set of images in bulk?
Thank you.
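classify_image.py only runs inference with the pre-trained ImageNet graph. For training on your own images, TensorFlow of that era shipped a transfer-learning script, tensorflow/examples/image_retraining/retrain.py, which retrains the final layer of Inception on a directory of images with one subfolder per class. A sketch of the invocation; all paths are placeholders:

python tensorflow/examples/image_retraining/retrain.py \
    --image_dir /path/to/my_images \
    --output_graph /tmp/output_graph.pb \
    --output_labels /tmp/output_labels.txt

The resulting graph and labels file can then be loaded in a small loop over your bulk images for classification.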

Tensorflow Serving with image input

I'm trying to send image input over HTTP for classification with TensorFlow. I have looked in detail at the C++ code for https://www.tensorflow.org/versions/r0.9/tutorials/image_recognition/index.html
I have implemented the inception-v3 example model using C++ API. It takes image input in the following form:
bazel-bin/tensorflow/examples/label_image/label_image --image=my_image.png
However, I want to add the case of:
bazel-bin/tensorflow/examples/label_image/label_image --image=http://www.somewebsite.com/my_image.png
This is because it only accepts local image files. I want to add the ability to fetch online images and classify them in memory. I'm currently working on this, but so far no luck. Can anyone offer some insight into how I would go about implementing this?
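One way to approach this, sketched in Python for brevity: fetch the bytes into memory and decode them directly, never touching disk. In the label_image C++ example, the analogous change is to replace the file read in its image-loading helper (ReadTensorFromImageFile) with an HTTP fetch, e.g. via libcurl, and feed the downloaded buffer to the decode op. The URL below is the one from the question; the Inception-style scaling constants are an assumption here:

import urllib.request
import tensorflow as tf

url = 'http://www.somewebsite.com/my_image.png'
raw = urllib.request.urlopen(url).read()            # image bytes, held in memory only
img = tf.io.decode_image(raw, channels=3, expand_animations=False)
img = tf.image.resize(img, [299, 299])
batch = tf.expand_dims((img - 128.0) / 128.0, 0)    # scale to [-1, 1] (assumption)
# `batch` can now be fed to the model exactly like a tensor decoded from a local file.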