export_inference_graph.py vs export_tflite_ssd_graph.py - tensorflow

The output of export_inference_graph.py is
- model.ckpt.data-00000-of-00001
- model.ckpt.index
- model.ckpt.meta
- frozen_inference_graph.pb
- saved_model/ (a directory)
while the output of export_tflite_ssd_graph.py is
- tflite_graph.pbtxt
- tflite_graph.pb
What is the difference between the two frozen graph (.pb) files?

I assume you are trying to use your object detection model on mobile devices, in which case you need to convert the model to a TFLite version (see the conversion sketch below).
However, you cannot convert models like Faster R-CNN to TFLite; for mobile deployment you need to use an SSD model.
Another way to deploy a model like Faster R-CNN is to use an AWS EC2 TensorFlow AMI: deploy the model in the cloud and route requests to it from your website or mobile app. When the server receives an image through an HTTP form that the user fills in, the model processes it on the cloud server and sends the result back to the requesting client.
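For reference, here is a minimal sketch of the conversion step that usually follows export_tflite_ssd_graph.py, assuming TF 1.x; the tensor names and the 300x300 input shape are the commonly used defaults for SSD exports and may differ for your model, so verify them against your graph.

import tensorflow as tf

# Convert the frozen tflite_graph.pb produced by export_tflite_ssd_graph.py
# into a .tflite file. The input/output names below are the SSD defaults.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=[
        'TFLite_Detection_PostProcess',
        'TFLite_Detection_PostProcess:1',
        'TFLite_Detection_PostProcess:2',
        'TFLite_Detection_PostProcess:3',
    ],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]},
)
converter.allow_custom_ops = True  # the detection post-processing op is a custom op
tflite_model = converter.convert()

with open('detect.tflite', 'wb') as f:
    f.write(tflite_model)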

Related

TensorFlow Serving returns an error when using a model on S3

I'm experimenting with TensorFlow Serving and I succeeded in requesting the half_plus_two example model in the simplest setting (see the docs here). By simple setting I mean embedding the model (more precisely, the directory that contains a version subdirectory with all the model files under it) in my Docker container and starting tensorflow_model_server either with the model_name and model_base_path parameters or with the model_config_file parameter.
When I try to put the model on S3 (a private S3 store, not AWS), the server starts and finds the model, as seen in the logs:
I tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:403] File-system polling update: Servable:{name: half_plus_two version: 1}; Servable path: s3://tftest/half_plus_two/1; Polling frequency: 1
The request to the model no longer succeeds, though. The error I get is:
Attempting to use uninitialized value b\n\t [[{{node b/read}}]]
It's as if using S3 does not give the model enough time to initialize its values. Does anyone know how to solve this problem?
I finally sorted it out. The problem was that the S3 content was not correct: it contained all the needed files (OK) plus the directory placeholder objects (not OK). The source of the problem was my copy procedure from GCP to S3, which is based on the google.cloud storage client. So when I did:
blobs = storage_client.list_blobs(bucketName, prefix=savedDir)
and looped over the blobs to copy each object to S3, I was also copying the directory entries. Apparently TensorFlow Serving's S3 connector does not like that.
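A minimal sketch of the corrected copy loop, assuming google-cloud-storage and boto3; the bucket names and S3 endpoint below are placeholders. The fix is simply to skip the directory placeholder blobs (names ending in '/') so that only real objects land in S3.

import boto3
from google.cloud import storage

bucketName = 'my-gcp-bucket'   # placeholder GCP bucket
savedDir = 'half_plus_two/'    # placeholder prefix
storage_client = storage.Client()
s3 = boto3.client('s3', endpoint_url='https://my-private-s3.example.com')  # placeholder endpoint

blobs = storage_client.list_blobs(bucketName, prefix=savedDir)
for blob in blobs:
    if blob.name.endswith('/'):
        continue  # skip "directory" placeholder objects
    s3.put_object(Bucket='tftest', Key=blob.name, Body=blob.download_as_string())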

How to train a BERT model with SQuAD 2.0 on Cloud TPU v2?

Disclaimer: I am very new to Neural Network and Tensorflow.
I am trying to create a QA application where the user asks a question and the application gives the answer. Most of the traditional methods I tried did not work, were not accurate enough, or required manual intervention. While researching unsupervised QA approaches, I came across BERT.
BERT, as Google claims, is a state-of-the-art neural network model that achieved the highest score on the SQuAD 2.0 leaderboard. I wish to use this model for my application and test its performance.
I have created a Windows 2012 Datacenter edition virtual machine in Compute Engine, and I have created a Cloud TPU using ctpu.
I have the BERT large uncased model in Cloud Storage.
How do I train the BERT large uncased model with SQUAD 2.0?
Please feel free to correct me if I am wrong: my understanding is that a Cloud TPU is just a device like a CPU or GPU. However, if you read this, it is described as if a Cloud TPU were a virtual machine ("On Cloud TPU you can run with BERT-Large as...").
Where do I run run_squad.py, as mentioned here?
python run_squad.py \
--vocab_file=$BERT_LARGE_DIR/vocab.txt \
--bert_config_file=$BERT_LARGE_DIR/bert_config.json \
--init_checkpoint=$BERT_LARGE_DIR/bert_model.ckpt \
--do_train=True \
--train_file=$SQUAD_DIR/train-v2.0.json \
--do_predict=True \
--predict_file=$SQUAD_DIR/dev-v2.0.json \
--train_batch_size=24 \
--learning_rate=3e-5 \
--num_train_epochs=2.0 \
--max_seq_length=384 \
--doc_stride=128 \
--output_dir=gs://some_bucket/squad_large/ \
--use_tpu=True \
--tpu_name=$TPU_NAME \
--version_2_with_negative=True
How do I access the storage bucket files from the virtual machine for the vocab_file argument?
Is the external IP address the value for $TPU_NAME environment variable?
TPUs currently only read from GCS. The model you've downloaded should be uploaded to a GCS bucket of your own creation; that is how the TPU will access vocab_file and the other files.
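A quick way to sanity-check the setup from the VM, assuming TF 1.x and a hypothetical bucket name: BERT's run_squad.py reads gs:// paths directly through TensorFlow's GCS filesystem, so vocab_file, bert_config_file, init_checkpoint and output_dir can all be gs:// URLs.

import tensorflow as tf

BERT_LARGE_DIR = 'gs://my-bert-bucket/uncased_L-24_H-1024_A-16'  # placeholder bucket
print(tf.gfile.ListDirectory(BERT_LARGE_DIR))           # should list vocab.txt, bert_config.json, ...
print(tf.gfile.Exists(BERT_LARGE_DIR + '/vocab.txt'))   # True if the upload worked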

Can I customize Tensorflow Serving?

I am studying TensorFlow Serving. I am not familiar with TensorFlow and have run into many difficulties, but I am working through the Google documentation and other resources.
For example, after downloading the TensorFlow Serving source and compiling it, running
tensorflow_model_server --port=9000 --model_name=mnist --model_base_path=/tmp/mnist_model
works normally and communicates with clients over gRPC.
However, should I use TensorFlow Serving only through the prebuilt binaries provided by Google, such as tensorflow_model_server?
Or can I include the headers in C++ and link against the library so that I can write my own program?
For serving, you can use the TensorFlow Serving C++ API directly; here is some example code.
Alternatively, Google also provides a Docker image that serves models and exposes a client API in both RESTful and gRPC style, so that you can write a client in any language.
The tensorflow_model_server is part of that Dockerized server, and you will need to write your own client to interact with it; here are some code examples for making RESTful or gRPC calls to the server.
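For illustration, a minimal REST client against that Dockerized server might look like the sketch below, assuming the default REST port 8501 and a model named "mnist"; the input shape is only an example and must match your model's signature.

import json
import requests

# One flattened 28x28 image of zeros, purely as a placeholder input.
payload = json.dumps({"instances": [[0.0] * 784]})

resp = requests.post("http://localhost:8501/v1/models/mnist:predict", data=payload)
print(resp.json())  # {"predictions": [...]} on success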

How to access images directly from Google Cloud Storage (GCS) when using Keras?

I have developed a model in Keras that works perfectly when reading data stored locally. However, I now want to take advantage of Google Cloud Platform's GPUs for training the model. I have set up the GPU on GCP and am working in a Jupyter notebook. I have moved my images to Google Cloud Storage.
My question is:
How can I access these images (specifically the training, validation, and test directories) directly from Cloud Storage using the flow_from_directory method of Keras' ImageDataGenerator class?
Here's my directory structure in Google Cloud Storage (GCS):
mybucketname/
class_1/
img001.jpg
img002.jpg
...
class_2/
img001.jpg
img002.jpg
...
class_3/
img001.jpg
img002.jpg
...
While I haven't yet figured out a way to read the image data directly from GCS, in the meantime I can copy the files from Cloud Storage to the VM and point flow_from_directory at the local copy (sketched below):
import os
os.system('gsutil cp -r gs://mybucketname/ .')
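A minimal sketch of that workaround, assuming the class_1/class_2/... layout shown above; the target size and batch size are arbitrary examples.

import os
from keras.preprocessing.image import ImageDataGenerator

# Copy the bucket to local disk (-m parallelizes the copy), then train from the copy.
os.system('gsutil -m cp -r gs://mybucketname/ .')

datagen = ImageDataGenerator(rescale=1.0 / 255)
train_generator = datagen.flow_from_directory(
    'mybucketname',          # the local copy; subdirectories become class labels
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical',
)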

Access model version number in client when the model server loads the latest model based on incremental model numbers

I am serving two different models from the same model server via the model config file in TensorFlow Serving (1.3.0). Since the model version policy defaults to "latest", as soon as I upload a new version of a model it gets loaded into the model server and the client can use it just fine. However, I would like my client to be aware of which version of the model is serving its requests. How can I propagate the model version number (which is the directory name) from the server to the client? My model server code is similar to main.cc and the client code is inspired by this example from the TensorFlow Serving repository.
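One option to be aware of, sketched below under the assumption of a recent tensorflow-serving-api (older releases such as 1.3.0 may not populate this field): the gRPC PredictResponse echoes back a model_spec with the version that actually served the call, so the client can read it without server-side changes. The host/port, model name, and input tensor are placeholders.

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:9000')          # placeholder address
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'my_model'                       # placeholder model name
request.inputs['x'].CopyFrom(tf.make_tensor_proto([1.0], shape=[1]))  # placeholder input

response = stub.Predict(request, timeout=10.0)
print(response.model_spec.version.value)  # version directory number that served this call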