export_inference_graph with Google Cloud Functions or Cloud ML serverless - tensorflow

I use the TensorFlow Models object detection API to train a model in the cloud, following this tutorial, and I would like to know whether there is also an option to export the model with Cloud ML Engine or with a Google Cloud Function.
Their tutorial only shows a local example.
I have trained the model, and now I don't want to create an instance (or use my laptop) just to create the exported .pb file for inference.
Thanks for the help.

Take a look at these tutorials:
https://cloud.google.com/ai-platform/docs/getting-started-keras
https://cloud.google.com/ai-platform/docs/getting-started-tensorflow-estimator

Related

Does TensorFlow Serving serve/support non-TensorFlow-based libraries like scikit-learn?

We are creating a platform to put AI use cases into production. TFX is the first choice, but what if we want to use non-TensorFlow-based libraries like scikit-learn and include a Python script to create models? Will the output of such a model be served by TensorFlow Serving? How can I make sure I am able to run both TensorFlow-based models and non-TensorFlow-based libraries and models in one system design? Please advise.
Below is the procedure to deploy and serve a scikit-learn model on Google Cloud Platform.
The first step is to save/export the scikit-learn model using the code below:
from sklearn.externals import joblib  # in newer scikit-learn releases, use `import joblib` instead

# Serialize the trained classifier; Cloud ML Engine expects the file to be named model.joblib
joblib.dump(clf, 'model.joblib')
The next step is to upload the model.joblib file to Google Cloud Storage.
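A minimal sketch of that upload using the google-cloud-storage Python client (the bucket and object names are placeholders):

from google.cloud import storage

# Upload model.joblib into the GCS directory that the model version will point at
client = storage.Client()
bucket = client.bucket('my-models-bucket')
bucket.blob('sklearn-model/model.joblib').upload_from_filename('model.joblib')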
After that, we need to create our model and version, specifying that we are loading a scikit-learn model, and select the runtime version of Cloud ML Engine as well as the version of Python that we used to export this model.
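For example (the model name, bucket path, and runtime/Python versions below are placeholders; check the runtime versions currently supported by Cloud ML Engine):

gcloud ml-engine models create my_sklearn_model --regions us-central1
gcloud ml-engine versions create v1 --model my_sklearn_model --origin gs://my-models-bucket/sklearn-model/ --runtime-version 1.13 --framework scikit-learn --python-version 3.5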
Next, we need to present the data to Cloud ML Engine as a simple array encoded in a JSON file, as shown below. We can use the json library as well.
print(list(X_test.iloc[10:11].values))
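For instance, a sketch that writes one JSON-encoded instance per line, which is the format --json-instances expects (the file name input.json is a placeholder):

import json

# .tolist() converts the NumPy values into JSON-serializable Python types
with open('input.json', 'w') as f:
    f.write(json.dumps(X_test.iloc[10:11].values[0].tolist()) + '\n')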
Next, we need to run the command below to perform the inference:
gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances $INPUT_FILE
For more information, please refer to this link.

How to train a Keras model on GCP with fit_generator

I have an ML model developed in Keras, and I can train it locally by calling its fit_generator and providing it with my custom generator. Now I want to use GCP to train this model. I've been following this article that shows how I can train a Keras model on GCP, but it does not say what I should do if I need to load all my data into memory, process it, and then feed it to the model through a generator.
Does anyone know how I can use GCP if I have a generator?
In the example you are following, the Keras model gets converted into an estimator using the function model_to_estimator; this step is not necessary in order to use GCP, as GCP supports compiled Keras models. If you keep the model as a Keras model, you can call either its fit function (which supports the use of generators since TensorFlow 1.12) or fit_generator, and pass your generator as the first argument. If it works locally for you, then it should also work in GCP. I have been able to run models similar to the one in the URL you shared in GCP, using generators, without any problems.
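A minimal sketch, assuming model is a compiled Keras model and my_generator yields (inputs, targets) batches (the step and epoch counts are placeholders):

model.fit_generator(my_generator(),
                    steps_per_epoch=100,
                    epochs=10)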
Also be advised that the gcloud ml-engine commands are being replaced by gcloud ai-platform commands. I recommend you follow this guide, as it is more up to date than the one you linked to.
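For illustration, submitting a training job with the newer command family looks roughly like this (the job name, paths, and region are placeholders):

gcloud ai-platform jobs submit training my_job --module-name trainer.task --package-path ./trainer --region us-central1 --staging-bucket gs://my-staging-bucket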

Can I use AWS SageMaker without S3?

If I am not using the notebook on AWS but instead just the SageMaker CLI and want to train a model, can I specify a local path to read from and write to?
If you use local mode with the SageMaker Python SDK, you can train using local data:
from sagemaker.mxnet import MXNet

# role is required by the estimator; 'SageMakerRole' is a placeholder for your IAM role
mxnet_estimator = MXNet('train.py',
                        role='SageMakerRole',
                        train_instance_type='local',
                        train_instance_count=1)

mxnet_estimator.fit('file:///tmp/my_training_data')
However, this only works if you are training a model locally, not on SageMaker. If you want to train on SageMaker, then yes, you do need to use S3.
For more about local mode: https://github.com/aws/sagemaker-python-sdk#local-mode
As far as I know, you cannot do that. SageMaker's framework and estimator APIs make it easy for SageMaker to feed data to the model at every iteration or epoch; feeding it from local storage would drastically slow down the process.
Which raises the question: why not use S3? It's cheap and fast.

Train Tensorflow on Google Cloud ML

I have a model that I am trying to train on my local machine, but it needs more RAM than I have on my computer.
Because of this, I wish to train this model on Google Cloud ML.
The model I am trying to train uses reinforcement learning: it takes actions and receives rewards from an environment, developed in Python, that takes a CSV file as input.
How can I export these to be trained on Google Cloud ML?
Can these reward files be stored in Google Cloud Storage? TensorFlow reads such files natively if you use tf.gfile.
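A minimal sketch, assuming TensorFlow 1.x (the bucket and file names are placeholders):

import tensorflow as tf

# tf.gfile handles gs:// paths the same way it handles local paths
with tf.gfile.GFile('gs://my-bucket/rewards.csv', 'r') as f:
    csv_contents = f.read()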

How to save a tensorflow model trained in google datalab notebook for offline prediction?

I am using a Google Cloud Datalab notebook to train my TensorFlow model. I want to save the trained model for offline prediction. However, I am clueless about how to save the model. Should I use a TensorFlow model-saving method, or is there a Datalab/Google Cloud Storage-specific method to do so? Any help in this regard is highly appreciated.
You can use any TensorFlow model-saving method, but I would suggest that you save it into a Google Cloud Storage bucket and not to local disk. Most TensorFlow methods accept Google Cloud Storage paths in place of file names, using the gs:// prefix.
I would suggest using the SavedModelBuilder as it is currently the most portable. There is an example here: https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/flowers/trainer/model.py#L393
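The core pattern looks like this (the gs:// export path is a placeholder, and the session is assumed to already contain your trained graph and variables):

import tensorflow as tf

# SavedModelBuilder accepts gs:// paths just like local ones,
# so the model can be exported straight to Cloud Storage
builder = tf.saved_model.builder.SavedModelBuilder('gs://my-bucket/my-model/1')
with tf.Session() as sess:
    # ... build or restore your trained graph and variables here ...
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING])
builder.save()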