How to train a Keras model on GCP with fit_generator - tensorflow

I have an ML model developed in Keras and I can train it locally by calling its fit_generator and providing it with my custom generator. Now I want to use GCP to train this model. I've been following this article that shows how to train a Keras model on GCP, but it does not say what I should do if I need to load all my data into memory, process it, and then feed it to the model through a generator.
Does anyone know how I can use GCP if I have a generator?

In the example you are following, the Keras model gets converted into an estimator using the function model_to_estimator; this step is not necessary in order to use GCP, as GCP supports compiled Keras models. If you keep the model as a Keras model, you can call either its fit method (which supports generators since TensorFlow 1.12) or fit_generator, passing your generator as the first argument. If it works locally for you, then it should also work in GCP. I have been able to run models in GCP similar to the one in the URL you shared, using generators, without any problems.
Also be advised that the gcloud ml-engine commands are being replaced by gcloud ai-platform commands. I recommend you follow this guide, as it is more up to date than the one you linked to.
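To illustrate the point about generators: the same generator you pass to fit locally can be used unchanged inside a training job. A minimal sketch, using NumPy and placeholder shapes and batch size (the actual data loading and the model.fit call are assumed to be whatever you already have):

```python
import numpy as np

def batch_generator(features, labels, batch_size=32):
    """Yield (x, y) batches indefinitely, reshuffling each epoch.

    This is the kind of generator you would pass directly to
    model.fit(batch_generator(x, y), steps_per_epoch=...) both
    locally and inside a GCP training job.
    """
    n = len(features)
    while True:
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            yield features[batch], labels[batch]

# Dummy in-memory data: 1000 samples with 10 features each.
x = np.random.rand(1000, 10).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

gen = batch_generator(x, y, batch_size=32)
xb, yb = next(gen)
```

Nothing in the generator itself is GCP-specific; the only difference in a cloud job is where the in-memory arrays come from (e.g. files downloaded from a bucket at startup).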

Related

How to use the models under tensorflow/models/research/object_detection/models?

I'm looking into training an object detection network using TensorFlow, and I had a look at the TF2 Model Zoo. I noticed that there are noticeably fewer models there than in the directory /models/research/models/, including the MobileDet with SSDLite developed for the Jetson Xavier.
To clarify, the readme says that there is a MobileDet GPU with SSDLite, and that the model and checkpoints trained on COCO are provided, yet I couldn't find them anywhere in the repo.
How is one supposed to use those models?
I already have a custom-trained MobileDetv3 for image classification, and I was hoping to see a way to turn the network into an object detection network, in accordance with the MobileDetv3 paper. If this is not straightforward, training one network from scratch could be ok too, I just need to know where to even start from.
If you plan to use the Object Detection API, you can't use your existing model. You have to choose from a list of models: here for v2 and here for v1.
The documentation is very well maintained, and the steps to train, validate, or run inference (test) on custom data are very well explained here by the TensorFlow team. That link is for TensorFlow v2; however, if you wish to use v1, the process is fairly similar, and there are numerous blogs/videos explaining how to go about it.
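For reference, with the TF2 Object Detection API the training step described in that guide boils down to a single script invocation (the paths below are placeholders; model_main_tf2.py ships in the tensorflow/models repository):

```shell
# From a checkout of tensorflow/models with the object_detection
# package installed; the pipeline.config comes with each zoo model
# and is where you point at your own custom dataset.
python models/research/object_detection/model_main_tf2.py \
    --pipeline_config_path=/path/to/pipeline.config \
    --model_dir=/path/to/train_output \
    --alsologtostderr
```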

How to convert a tensorflow hub pretrained model as to be consumable by tensorflow serving

I am trying to use this for my object detection task. The problems I am facing are:
On running the saved_model_cli command, I am getting the following output. There is no signature defined with tag-set "serve", and the method name is empty.
The variables folder in the model directory only contains a few bytes of data, which means the weights are not actually written to disk.
The model format seems to be HubModule V1 which seems to be the issue, any tips on making the above model servable are highly appreciated.
TF2 SavedModels should not have this problem, only hub.Modules from TF1, since hub.Modules use the signatures for other purposes. You can take a hub.Module and build a servable SavedModel, but it's quite complex and involves building the signatures yourself.
Instead, I recommend checking out the list of TF2 object detection models on TFHub.dev for a model you can use instead of the model you are using: https://tfhub.dev/s?module-type=image-object-detection&tf-version=tf2
These models should be servable with TF Serving.
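Before pointing TF Serving at a downloaded model, you can verify it exposes the "serve" tag-set with the same saved_model_cli tool mentioned in the question (the directory path is a placeholder):

```shell
# List all tag-sets and signatures the SavedModel exposes; a servable
# TF2 model should show the "serve" tag-set with a usable signature,
# typically serving_default.
saved_model_cli show --dir /path/to/model --all

# Once the tag-set exists, inspect one signature's inputs/outputs:
saved_model_cli show --dir /path/to/model \
    --tag_set serve --signature_def serving_default
```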

Online Predictions for Keras model via API

I have an image classification deep learning CNN model (.h5 file) trained using Keras and Tensorflow 2 that I want to use online for predictions. I want an API that takes the single input image over HTTP and responds with the predicted class labels using the trained model. Is there an API provided by Keras or Tensorflow to do the same?
There are two basic options:
Use TensorFlow Serving: it provides a ready-to-go REST API server; the only thing you need to do is convert your model to the SavedModel (.pb) format.
Write your own simple REST server (in Flask, for example) which will call model.predict() on the inputs (that approach may be easier to start with, but it will be hard to scale/optimize for heavy load).
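To make the second option concrete, here is a dependency-free sketch using only the Python standard library instead of Flask; the predict function is a stub, and a real server would load the trained model once at startup with tf.keras.models.load_model("model.h5") and run the actual forward pass:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(pixels):
    """Stand-in for the real model call. A production server would do
    something like: probs = model.predict(np.array([pixels])) and map
    the argmax to a class label. The rule below is a placeholder so
    the endpoint can be exercised end to end."""
    return "cat" if sum(pixels) % 2 == 0 else "dog"

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body: {"image": [<pixel values>]}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        label = predict(payload["image"])
        body = json.dumps({"label": label}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for the demo.
        pass

def serve(port=8000):
    HTTPServer(("127.0.0.1", port), PredictHandler).serve_forever()
```

A client then POSTs the image payload and reads back the predicted label as JSON; swapping this skeleton for Flask (or adding batching, input validation, and a model warm-up step) is straightforward, which is exactly why TensorFlow Serving becomes attractive once load grows.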

Can we compile a Tensorflow Model?

Let me be more clear by giving our use case: we developed a service that makes predictions using TensorFlow. One of our clients would like to run it locally (on his on-premise servers), and we don't want that, because it amounts to giving him the model, which he could replicate and train himself (we are billing improvements/maintenance).
If there were a way to compile our TF model, he would not be able to recover the model graph and parameters. Is there a way to compile a TensorFlow model in an irreversible way?
If not, is there another way to protect our model?

Already implemented neural network on Google Cloud Platform

I have implemented a neural network model using Python and Tensorflow, which normally runs on my own computer.
Now I would like to train it on new datasets on the Google Cloud Platform. Do you think it is possible? Do I need to change my code?
Thank you very much for your help!
Google Cloud offers the Cloud ML Engine service, which allows you to train your models and perform predictions without the need to run and maintain an instance with the required software.
In order to run the TensorFlow NN models you already have, you will not need to change your code; you will only have to package the trainer appropriately, as described in the documentation, and run an ML Engine job that performs the training itself. Once you have your model, you can also deploy it in the same service and later get predictions, with different features depending on your requirements (urgency in getting the predictions, data set sources, etc.).
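Concretely, the packaging the documentation asks for is just a small Python package plus one job-submission command; the job name, bucket, region, and version numbers below are placeholders to adapt to your project:

```shell
# Expected layout: a package directory containing the entry point.
#   trainer/
#     __init__.py
#     task.py        # builds the model and calls fit()
#
# Submit the training job to Cloud ML Engine:
gcloud ml-engine jobs submit training my_job_001 \
    --package-path trainer \
    --module-name trainer.task \
    --region us-central1 \
    --job-dir gs://my-bucket/jobs/my_job_001 \
    --runtime-version 1.12 \
    --python-version 3.5
```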
Alternatively, as suggested in the comments, you can always launch a Compute Engine instance and run your TensorFlow model there, as if you were doing it locally on your computer. However, I would strongly recommend the approach I proposed earlier, as you will save some money: you will only be charged for your usage (training jobs and/or predictions) and will not need to configure an instance from scratch.