Deploying a trained model - tensorflow

I have trained my Neural Style Transfer model and got .ckpt files after training. Now I want to deploy this model using TensorFlow Serving. How can I proceed further?

Install Docker and pull the TensorFlow Serving image:
$ docker pull tensorflow/serving
TensorFlow Serving expects a SavedModel, so first export your .ckpt checkpoint to the SavedModel format, then copy it to the container's model folder:
$ docker cp models/ serving_base:/models/
Follow the instructions at https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/docker.md and you should be able to run the serving image to host your model.
See the link below for more details:
https://www.tensorflow.org/tfx/serving/docker
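Once you have an exported SavedModel, a minimal sketch of running the serving image against it could look like the following (the directory layout and the model name my_model are assumptions, not from the question; TensorFlow Serving expects a numeric version subdirectory such as 1/ under the model directory):
# assumes ./models/my_model/1/ contains saved_model.pb and the variables/ folder
docker run -p 8501:8501 \
  --mount type=bind,source="$(pwd)/models/my_model",target=/models/my_model \
  -e MODEL_NAME=my_model -t tensorflow/serving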

Tensorflow Serving: How to retrain a model that is currently served in production using TensorFlow Serving?

How do I retrain a model, currently served in production using TensorFlow Serving, with new data?
Do we have to train the model manually and serve it again, or is there an automated way of doing this?
I am using tensorflow serving with docker.
Basically the idea is that:
Suppose there is already a model served using TensorFlow Serving, and in the future I get a batch of additional data that I want the model to be fitted with. How can I run this training against the same model?
Question 1: I do have a script to train the model, but does the training have to be done locally/manually?
Answer: As far as I understand, the training can be done locally or on some remote server; do it wherever is convenient. The important step for TensorFlow Serving is to save the model in the SavedModel format that the server can load. Please refer to this link on how to save the model and how to load it in the serving Docker container:
serving tensorflow model
Question 2: Suppose I create an entirely new model (apart from modelA, which is currently served). How can I load it into TensorFlow Serving? Do I have to manually copy it to the Docker target path?
Answer: Yes, if you are loading it without a serving config, you will have to manually shut down the container, remap the path in the run command, and start the container again. That is where the serving config helps: it lets you load models at runtime.
Question 3: The TFX documentation says to update the model.config file when adding new models, but how can I update it while serving is running?
Answer: A basic configuration file would look like this:
model_config_list {
  config {
    name: 'my_first_model'
    base_path: '/tmp/my_first_model/'
    model_platform: 'tensorflow'
  }
  config {
    name: 'my_second_model'
    base_path: '/tmp/my_second_model/'
    model_platform: 'tensorflow'
  }
}
This file needs to be mapped into the container before it starts, along with the directory where the different models are located. When the config file changes, the serving container loads the new models accordingly; you can also maintain several versions of the same model. For more information, please refer to this link: serving config. The model server looks up this file periodically, and as soon as it detects a change it loads the new models without the Docker container needing to be restarted.
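For reference, a minimal sketch of starting the container with such a config file mounted might look like this (the host paths and the poll interval are assumptions, not from the question; --model_config_file_poll_wait_seconds tells the server how often to re-read the file and is available in recent TensorFlow Serving releases):
# assumes models.config sits next to the model folders under /path/to/models on the host
docker run -p 8500:8500 -p 8501:8501 \
  --mount type=bind,source=/path/to/models,target=/models \
  -t tensorflow/serving \
  --model_config_file=/models/models.config \
  --model_config_file_poll_wait_seconds=60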

Heroku large size problem for deploying rasa

I have an issue when trying to deploy my Docker image of Rasa to Heroku.
Here is the screenshot: [terminal output]
How can I avoid this large size?
I used requirements.txt to install Rasa; here is the screenshot: [requirements.txt]
Can you help me please?
One option is to increase the dyno size (this is not free).
Alternatively, you can build a smaller Docker image, for example by not including the TensorFlow or spaCy models (which are quite large).
You typically only need those if you want, for example, to use their NER models (extracting names, locations, etc.).
This is an example of how to build a Rasa instance which can fit in the free tier:
# from rasa base image
FROM rasa/rasa:1.8.0
# copy Rasa config and the Rasa generated model
COPY . /app
# script to run rasa core
COPY startup.sh /app/scripts/startup.sh
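The startup.sh referenced above is not shown in the answer; a possible sketch (assuming the HTTP API should be enabled and that Heroku provides the listening port through the PORT environment variable) could be:
#!/bin/sh
# start the Rasa server on the port Heroku assigns, with the REST API enabled
exec rasa run --enable-api --port "${PORT:-5005}"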

Generate SavedModel from Tensorflow model to serve it on Google Cloud ML

I used TF Hub to retrain a model for image classification. Now I would like to serve it in the cloud. For that I need a SavedModel. The retrain.py script from TF Hub uses tf.saved_model.simple_save to generate the SavedModel after the training is done.
What confuses me is that the .pb file inside the SavedModel folder I get from that method is much smaller than the final .pb saved after the training.
simple_save is also now deprecated, and I tried to get my SavedModel after the training is done by following this SO issue.
But my variables folder is empty. How can I incorporate building the SavedModel inside retrain.py to replace the simple_save method? Tips would be much appreciated.
To deploy your model to Google Cloud ML, you need a SavedModel which can be produced from tf.saved_model api.
Below are the steps for hosting your trained models in cloud with Cloud ML Engine.
Upload your SavedModel to a Cloud Storage bucket. First set a bucket name: BUCKET_NAME="your_bucket_name"
Select a region for your bucket and set a REGION environment variable:
REGION=us-central1
Create a new bucket: gsutil mb -l $REGION gs://$BUCKET_NAME
Upload the model using:
SAVED_MODEL_DIR=$(ls ./your-export-dir-base | tail -1)
gsutil cp -r $SAVED_MODEL_DIR gs://your-bucket
Create a Cloud ML Engine model resource and model version, for example as sketched below.
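A hedged sketch of that last step using the gcloud CLI (the model name, version name, and runtime version below are placeholders, not taken from the question):
# create the model resource, then a version pointing at the uploaded SavedModel
gcloud ml-engine models create my_model --regions $REGION
gcloud ml-engine versions create v1 \
  --model my_model \
  --origin gs://your-bucket/$SAVED_MODEL_DIR \
  --runtime-version 1.13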
Also, for your question on incorporating the SavedModel inside retrain.py, you need to pass the saved model as an argument to the --tfhub_module flag, as below:
python retrain.py --image_dir C:\...\code\<give the path here> --tfhub_module C:\...\<give the path to the saved model directory>

TensorBoard without callbacks for Keras docker image in SageMaker

I'm trying to add TensorBoard functionality to this SageMaker example: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/hyperparameter_tuning/keras_bring_your_own/hpo_bring_your_own_keras_container.ipynb
The issue is that SageMaker's Estimator.fit() does not seem to support Keras models compiled with callbacks.
Now, this GitHub issue describes that what I need to do for TensorBoard functionality is:
"You need your code inside the container to save checkpoints to S3,
and you need to periodically sync your local Tensorboard log directory
with your S3 checkpoints."
So to sum it all up: to enable TensorBoard in SageMaker with this custom Keras Docker image, it looks like I need a way of periodically uploading files to an S3 bucket during training without using callbacks. Is this possible to do? I was considering trying to shove this code into a custom loss function, but I'm not sure that is the way to go about it. Any help is greatly appreciated!
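One way to realize the "periodically sync" idea without touching Keras callbacks is to start a background sync loop in the container before training begins. This is only a sketch under assumptions: the log directory, bucket, and interval are placeholders, the AWS CLI is assumed to be installed in the image, and the SageMaker execution role must be allowed to write to the bucket:
# in the container's entrypoint, before launching training
LOG_DIR=/opt/ml/code/tensorboard_logs
BUCKET=s3://your-bucket/tensorboard-logs
(
  while true; do
    aws s3 sync "$LOG_DIR" "$BUCKET" --quiet
    sleep 60
  done
) &
# ...then start the training process as usual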

How can I serve the Faster RCNN with Resnet 101 model with tensorflow serving

I am trying to serve the Faster RCNN with Resnet 101 model with TensorFlow Serving.
I know I need to use tf.saved_model.builder.SavedModelBuilder to export the model definition as well as the variables, and then I need a script like the inception_client.py provided by tensorflow_serving.
While I am going through the examples and documentation and experimenting, I think someone may have done the same thing. So please help if you have done the same or know how to get it done. Thanks in advance.
Tensorflow Object Detection API has its own exporter script that is more sophisticated than the outdated examples found under Tensorflow Serving.
While building TensorFlow Serving, make sure you pull the latest master commit of tensorflow/tensorflow (>r1.2) and tensorflow/models.
Build Tensorflow Serving for GPU
bazel build -c opt --config=cuda tensorflow_serving/...
If you face errors regarding crosstool and nccl, follow the solutions at
https://github.com/tensorflow/serving/issues/186#issuecomment-251152755
https://github.com/tensorflow/serving/issues/327#issuecomment-305771708
Usage
python tf_models/object_detection/export_inference_graph.py \
--pipeline_config_path=/path/to/ssd_inception_v2.config \
--trained_checkpoint_prefix=/path/to/trained/checkpoint/model.ckpt \
--output_directory /path/to/output/1 \
--export_as_saved_model \
--input_type=image_tensor
Note that during export all variables are converted into constants and baked into the protobuf binary. Don't be alarmed if you don't find any files under the saved_model/variables directory.
To start the server,
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=inception_v2 --model_base_path=/path/to/output --enable_batching=true
As for the client, the examples under TensorFlow Serving work well.
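If you run a newer tensorflow_model_server that also exposes the REST API (via the --rest_api_port flag, which the gRPC-only command above does not set), a quick way to check that the model loaded is to query the model status endpoint; the REST port here is an assumption:
# returns the version and state of the served model
curl http://localhost:8501/v1/models/inception_v2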