I am new to TensorFlow Serving and am looking for a way to dynamically add models to TF Serving running in a Docker container. My goal (if it is possible) is to have a TF Serving Docker image that can pick up new models, or new versions of an existing model, while the container is running.
The ReloadConfig API will help you. The model server exposes a gRPC ModelService with a HandleReloadConfigRequest method that replaces the set of served models without restarting the container. Alternatively, start the server with --model_config_file and --model_config_file_poll_wait_seconds so it re-reads its model config file periodically and picks up changes.
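A minimal sketch of the gRPC call, assuming the tensorflow-serving-api package is installed and the server's gRPC port (8500 by default) is reachable; the model name and base path are placeholders:

import grpc
from tensorflow_serving.apis import model_management_pb2, model_service_pb2_grpc
from tensorflow_serving.config import model_server_config_pb2

channel = grpc.insecure_channel('localhost:8500')
stub = model_service_pb2_grpc.ModelServiceStub(channel)

# Build a full replacement config; the server adjusts its served models to match it.
config = model_server_config_pb2.ModelServerConfig()
model = config.model_config_list.config.add()
model.name = 'my_model'               # placeholder model name
model.base_path = '/models/my_model'  # path as seen inside the container
model.model_platform = 'tensorflow'

request = model_management_pb2.ReloadConfigRequest(config=config)
response = stub.HandleReloadConfigRequest(request, 10)
print(response.status)  # error_code 0 means the reload succeeded

Note that the request replaces the whole served set, so include every model you still want served.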
I have some models running in AI Platform on GCP that are serving predictions without a problem.
Now I am trying to automate this deployment process using Kubernetes pipelines so that model versions get updated periodically. I tried to create pipelines from the available samples, but none of them target AI Platform.
The training of the models has been handled by AI Platform Jobs with the following parameters:
Python: 3.7
Framework: Tensorflow
Framework version: 2.1
ML runtime version: 2.1
Trained models are being created periodically and saved in buckets.
How can I automate this deployment process using pipelines?
If there is an alternative approach to this automation, I would like to try it as well.
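For reference, the step such a pipeline would have to automate is creating a new model version from the SavedModel in the bucket. A minimal sketch using the AI Platform (Cloud ML Engine) v1 REST API through the Python client; project, model, version, and bucket names are placeholders, and application default credentials are assumed to be set up:

from googleapiclient import discovery

# Build a client for the AI Platform (Cloud ML Engine) v1 API.
ml = discovery.build('ml', 'v1')

body = {
    'name': 'v2',                                 # placeholder version name
    'deploymentUri': 'gs://my-bucket/model-dir',  # SavedModel location in the bucket
    'runtimeVersion': '2.1',
    'framework': 'TENSORFLOW',
    'pythonVersion': '3.7',
}
request = ml.projects().models().versions().create(
    parent='projects/my-project/models/my_model', body=body)
print(request.execute())  # returns a long-running operation to poll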
Is there a way of running the Embedding Projector inside my GCP JupyterLab instance (or through any other GCP service) as opposed to using the public https://projector.tensorflow.org ?
The TensorFlow documentation mentions that the Embedding Projector can be run inside TensorBoard, but doesn't provide any links or details.
Unfortunately there is no Google Cloud product that offers the projector functionality specifically, but you can run the TensorBoard projector plugin locally in AI Platform Notebooks (JupyterLab).
Here's the source repository for TensorBoard's projector plugin, and here's a step-by-step guide that uses the projector plugin for the exact use case you mentioned. Bear in mind that the guide is written for TensorFlow 1.x, not 2.x.
If you want to use TensorFlow 2.x, you will need to import the plugin like this:
from tensorboard.plugins import projector
and then migrate all the TensorFlow 1.x code in the guide to >= 2.0 in order to produce the same log files. If you already have the necessary files for your custom projector, you just need to select the plugin inside the TensorBoard UI.
[Screenshot: TensorBoard Projector plugin selection]
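For reference, the migrated log-writing step looks roughly like the following. This is a minimal sketch under TF 2.x, modeled on the official TensorBoard embeddings tutorial; the log directory, embedding shape, and tensor name are placeholders:

import os
import tensorflow as tf
from tensorboard.plugins import projector

log_dir = 'logs/embeddings'
os.makedirs(log_dir, exist_ok=True)

# Save the embedding weights as a TF checkpoint (placeholder random data).
weights = tf.Variable(tf.random.normal([100, 16]))
checkpoint = tf.train.Checkpoint(embedding=weights)
checkpoint.save(os.path.join(log_dir, 'embedding.ckpt'))

# Point the projector plugin at the saved tensor.
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = 'embedding/.ATTRIBUTES/VARIABLE_VALUE'
projector.visualize_embeddings(log_dir, config)

# Then launch TensorBoard with: tensorboard --logdir logs/embeddings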
You can also embed the public projector page in an IFrame (I understand that this is not your case, but it might be helpful to other people searching for an alternative solution). Opening an AI Notebook and pasting the following code would do the job.
import IPython
url = 'https://projector.tensorflow.org/'
IPython.display.IFrame(url, width=1333, height=900)
Remember to change the width and height values if you need to.
Suppose I have used Keras (with the TensorFlow backend) on a Linux VM and have trained and saved a model there. Now I need to use that model on a Windows machine. Is there some way of exporting the model to a cross-platform format so that I can import it on Windows and save it for later use there?
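For what it's worth, Keras model files are already platform independent, so a model saved on Linux loads on Windows as long as the Keras/TensorFlow versions match. A minimal sketch (the filename and stand-in model are placeholders):

from tensorflow import keras

# On the Linux VM: save the trained model to a single HDF5 file.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])  # stand-in model
model.save('model.h5')

# Copy model.h5 to the Windows machine, then:
restored = keras.models.load_model('model.h5')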
I want to convert my TensorFlow 1.1 based model to TensorFlow Lite in order to serve the model locally and remotely for a PWA. The official guide only offers Python APIs starting at 1.11, and the command-line tools only seem to work starting at 1.7. Is it possible to convert a 1.1 model to TensorFlow Lite? Has anyone had experience with this?
The model is an out-of-the-box pre-trained BiDAF model. I am having difficulty serving the full TF app on Heroku, which is unable to run it. I would like to try a TF Lite app to see if hosting it locally will make it faster and easier to set up as a PWA.
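For anyone attempting this, one possible workaround (untested here) is to load and re-export the old model under a newer TF 1.x release, since the converter only ships with newer versions, and then convert the re-exported SavedModel. A sketch assuming the model can be re-saved to export/saved_model:

import tensorflow as tf  # TF 1.12+ provides tf.lite.TFLiteConverter

converter = tf.lite.TFLiteConverter.from_saved_model('export/saved_model')
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)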
The tutorial on the GitHub page for the TensorFlow Object Detection API also has information on running the training on Google Cloud Platform.
But I need to run the training on an AWS instance. I have the TFRecord files with me. Is there any tutorial available for this? Googling doesn't help much; I am new to AWS.
You need to launch an instance that already has TensorFlow installed on it. AWS provides prebuilt AMIs (the Deep Learning AMIs) for that.
See here: https://aws.amazon.com/tensorflow/
Then you just upload your TFRecords and training config to the instance and run the training script.
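For reference, once the TFRecords and a pipeline config are on the instance, training with the Object Detection API comes down to invoking the training script it ships with; a rough sketch (paths are placeholders):

python object_detection/model_main.py --pipeline_config_path=pipeline.config --model_dir=training/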