Deploying recommendation model from Tensorflow/models after training? - tensorflow

I followed the small tutorial here: https://github.com/tensorflow/models/tree/master/official/recommendation
to train a recommendation model based on the ml-1m movielens dataset. How would I go about deploying this to start using it?
I've tried adding my own code to convert the Keras model into TFLite to put on Firebase, but converter.convert() raises a ValueError. I've looked into TensorFlow Serving, but the checkpoint it outputs does not appear to follow the required format. I'm not even sure how to format the input data to get recommendations.
I am new to ml and tensorflow, so I appreciate details. Thank you.

The content recommendation codelab from Firebase ML codelabs has detailed steps on how to train, convert, and deploy a TFLite recommendations model to a mobile app.
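For reference, here is a minimal sketch of the Keras-to-TFLite conversion step, using a tiny stand-in model (in practice you would load the model you actually trained from the tutorial). A common cause of the ValueError from converter.convert() is ops that are outside the builtin TFLite op set, which allowing SELECT_TF_OPS can work around:

```python
import tensorflow as tf

# Stand-in for the trained NCF Keras model; load your own trained
# model here instead. The layer sizes are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,), dtype="int32"),
    tf.keras.layers.Embedding(input_dim=6040, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# If convert() raises a ValueError about unsupported ops, allowing
# SELECT_TF_OPS makes the converter fall back to full TensorFlow
# kernels for those ops.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

with open("recommender.tflite", "wb") as f:
    f.write(tflite_model)
```

Note the SELECT_TF_OPS fallback grows the binary, since the app then has to link the TensorFlow ops runtime.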

Related

Can't manage to open TensorFlow SavedModel for usage in Keras

I'm kinda new to TensorFlow and Keras, so please excuse any accidental stupidity, but I have an issue. I've been trying to load in models from the TensorFlow Detection Zoo, but haven't had much success.
I can't figure out how to read these saved_model folders (they contain a saved_model.pb file, and an assets and variables folder) so that they're accepted by Keras, nor can I figure out a way to convert these models so that they can be loaded in. I've tried converting the SavedModel to ONNX and then converting the ONNX model to Keras, but that didn't work. Loading the original model as a saved_model and then trying to save this loaded model in another format gave me no success either.
Since you are new to TensorFlow (and, I guess, deep learning), I would suggest you stick with the Object Detection API, because the Detection Zoo models interface best with it. If you have already downloaded the model, you just need to export it using the exporter_main_v2.py script. This article explains it very well.
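On the loading question itself: tf.keras.models.load_model() only accepts SavedModels that were exported from Keras, while Detection Zoo exports are generic TF SavedModels, which tf.saved_model.load() can read. A minimal sketch, using a tiny stand-in module in place of a real zoo download:

```python
import tensorflow as tf

# Stand-in for a Detection Zoo export: save a tiny tf.Module the same
# generic way the zoo models are saved, then load it back.
class TinyDetector(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        return {"detection_scores": tf.reduce_sum(x, axis=1)}

tf.saved_model.save(TinyDetector(), "/tmp/tiny_saved_model")

# tf.keras.models.load_model() would reject this folder, but the
# generic loader reads it fine:
loaded = tf.saved_model.load("/tmp/tiny_saved_model")
scores = loaded(tf.ones([2, 4]))["detection_scores"]
print(scores.numpy())  # [4. 4.]
```

For a real zoo model you would point tf.saved_model.load at the saved_model/ folder and call its serving signature with a batch of uint8 images.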

Convert PoseNet TensorFlow.js params to TensorFlow Lite

I'm fairly new to TensorFlow so I apologize if I'm saying something absurd.
I've been playing with the PoseNet model in the browser using TensorFlow.js. In this project, I can change the algorithm and parameters so I can get better results on the detection of certain poses. The most important params in my use case are the Multiplier, Quant Bytes and Output Stride.
So far so good, I have the results I want. However, I want to convert these results to TensorFlow Lite so I can use them in an iOS application. I managed to find the PoseNet model as a TensorFlow Lite file (tflite), and I even found an iOS app example provided by TensorFlow, so I'm able to load up the model file and have it working on iOS.
The problem is...I'm unable to change the params (Multiplier, Quant Bytes and Output Stride) on the iOS app. I can't find it anywhere how I can do this. I've tried searching for these params in the iOS app source code, I've tried to find ways to convert a TensorFlow.js model to TensorFlow Lite so I can load the model with the params I want in the app but no luck.
I'm writing this post so maybe you guys can point me in the right direction so I'm able to "translate" what I have on TensorFlow.js to TensorFlow Lite.
EDIT:
This is what I've learned in the last couple of days:
TFLite is designed for serving a fixed model with a lightweight runtime, so modifying model parameters on demand is not one of its design goals.
I looked at the TF.js code for PoseNet and found a similar design. It only seems like you can modify the parameters: they actually ship a different model for each parameter combination. https://github.com/tensorflow/tfjs-models/blob/b72c10bdbdec6b04a13f780180ed904736fa52a5/posenet/src/checkpoints.ts#L37
TFLite models generally don't support dynamic parameters. Output Stride, Multiplier, and Quant Bytes are fixed when the neural network is created.
So what I want to do is extract the weights from the TF.js model and put them into existing MobileNet code.
And that's where I need help now. Could anyone point me in the direction to load and change the model so I can then convert it to tflite with my own params?
EDIT2:
I found a repo that is helping me convert TF.js models to TF Lite: Griffin98/posenet_tfjs2tflite. I still can't define the Quant Bytes, though.
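For what it's worth, the closest TFLite analogue of the TF.js quantBytes setting is post-training quantization applied at conversion time. A minimal sketch with a stand-in model (the real PoseNet/MobileNet weights you extracted are assumed to be loaded here instead):

```python
import tensorflow as tf

# Stand-in model; load the converted PoseNet/MobileNet model here
# instead. The layer shapes are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Post-training quantization is the TFLite counterpart of quantBytes:
# Optimize.DEFAULT alone gives ~1-byte (int8) weights; adding a
# float16 supported type gives 2-byte weights instead.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # ~ "quantBytes: 2"
tflite_model = converter.convert()
```

Dropping the supported_types line approximates "quantBytes: 1"; dropping the optimizations line keeps full float32 weights.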

Batch prediction for object detection fails on Google Cloud platform

I exported a faster_rcnn_resnet101 model with custom classes for serving predictions and deployed it on Cloud ML platform so that I can use Cloud ML prediction engine. Online prediction works but the results fail when I try batch prediction. It appears that the official documentation is outdated and needs an update.
I tried formatting my data in both ways mentioned here. In addition I also tried the request format mentioned here.
I also tried the steps mentioned in the google cloud blog.
Local prediction and online prediction work but the batch prediction fails. Any help will be much appreciated.
Error Log:
('Exception during running the graph: assertion failed: [Unable to decode bytes as JPEG, PNG, GIF, or BMP]\n\t [[node map/while/decode_image/cond_jpeg/cond_png/cond_gif/Assert_1/Assert (defined at /usr/local/lib/python2.7/dist-packages/google/cloud/ml/prediction/frameworks/tf_prediction_lib.py:210) = Assert[T=[DT_STRING], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/while/decode_image/cond_jpeg/cond_png/cond_gif/is_bmp, map/while/decode_image/cond_jpeg/cond_png/cond_gif/Assert_1/Assert/data_0)]]', 1)
Our apologies, but batch prediction is not currently supported for custom models.
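For reference on the input-format side (which is what the JPEG-decoding assertion in the log complains about), prediction input files are newline-delimited JSON, with the image bytes wrapped in a {"b64": ...} marker so the service decodes them before running the graph. A stdlib-only sketch; the field names here are assumptions and must match your exported serving signature:

```python
import base64
import json

def make_instance(image_bytes, key):
    # The {"b64": ...} wrapper tells the service to base64-decode the
    # field before it reaches the graph; sending raw or plain-base64
    # bytes is what triggers "Unable to decode bytes as JPEG...".
    return {
        "key": key,  # field names are assumptions; match your signature
        "image_bytes": {"b64": base64.b64encode(image_bytes).decode("utf-8")},
    }

# Stand-in for real JPEG data read from disk.
fake_jpeg = b"\xff\xd8\xff\xe0 not a real image"
# One JSON object per line in the batch input file.
line = json.dumps(make_instance(fake_jpeg, "0"))
print(line)
```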

Use Google Cloud Machine Learning service to predict with a locally retrained Inception model

I have locally retrained the Inception model using the retrain.py file from Google Code Lab TensorFlow for Poets and want to use Google Cloud machine Learning service to make predictions.
Specifically, I want to modify the retrain.py file, so my TensorFlow application is prepared for
gcloud beta ml predict --instances=INSTANCES --model=MODEL
(i.e., prediction only; no need for Google Cloud ML training a la gcloud beta ml jobs submit training).
I understand conceptually that the retrain.py file must be modified as described in Preparing a Model.
But there is no complete answer showing all the lines of code in the retrain.py file after being modified. And the popularity of Google Code Lab TensorFlow for Poets and Pete Warden's screencasts about retraining Inception makes one expect this to be a very common example of image classification among the TensorFlow community, so an answer will benefit many in the community.
Will someone please answer with their version of the retrain.py file after being modified as described in Preparing a Model?
Note 1:
I have researched my question to confirm it has not been answered…
… The question asked by Davide Biraghi and answered by JoshGC “Q: How predict an image in google machine learning” does not show any modifications to the retrain.py file that retrains the Inception model in Google Code Lab TensorFlow for Poets.
… The question asked by KlezFromSpace and answered by rhaertel80 (with helpful comments by Robert Lacok) “Q: Deploy Retrained inception model on Google cloud machine learning” does not show all the lines of code in the retrain.py file after being modified for: Defining outputs; Creating inputs; Supporting variable batch sizes; Using instance keys; Adding input and output collections to the graph; and Exporting (saving) the final model. (See above Preparing a Model.)
… The question asked by Vinkeet Kaushik and answered by Robert Lacok (with helpful comments by mrry) “Q: Export a basic Tensorflow model to Google Cloud ML” is not specific to the retrain.py file that retrains the Inception model in Google Code Lab TensorFlow for Poets.
Note 2:
I assume the jpeg image for which prediction is to be made is supplied via
gcloud beta ml predict --instances=INSTANCES --model=MODEL
where INSTANCES is the path to a JSON file with information about the image as per the question asked by Davide Biraghi and answered by rhaertel80 “Q: How convert a jpeg image into json file in Google machine learning”
Note 3:
I assume I will manually store the EXPORT and EXPORT.META files saved by the modified retrain.py file at the URL I use to create MODEL in Google Cloud Console.
This posting yesterday by Google's Slaven Bilac appears to be the answer.
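For anyone landing here, the "Preparing a Model" steps of that era (naming the input/output tensors, recording them in the inputs/outputs collections, and exporting with a Saver) can be sketched roughly as below. The tensor names and the stand-in graph body are assumptions, not retrain.py's actual code:

```python
import json
import os
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    # Hypothetical input tensors; retrain.py's real graph goes here.
    jpeg_input = tf.compat.v1.placeholder(tf.string, name="input")
    keys = tf.compat.v1.placeholder(tf.string, shape=[None], name="key")
    dummy = tf.compat.v1.get_variable("dummy", shape=[5])  # stand-in weight
    scores = tf.identity(tf.reshape(dummy, [1, 5]), name="final_result")

    # Record the input/output tensor names in the collections the old
    # Cloud ML prediction service read them from.
    tf.compat.v1.add_to_collection(
        "inputs", json.dumps({"image": jpeg_input.name, "key": keys.name}))
    tf.compat.v1.add_to_collection(
        "outputs", json.dumps({"scores": scores.name, "key": keys.name}))

    saver = tf.compat.v1.train.Saver()
    os.makedirs("/tmp/export_dir", exist_ok=True)
    with tf.compat.v1.Session(graph=g) as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        # Writes the export* checkpoint files plus export.meta.
        saver.save(sess, "/tmp/export_dir/export")

meta_exists = os.path.exists("/tmp/export_dir/export.meta")
```

The export and export.meta files written here are the ones Note 3 above refers to.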

Exporting graphdef protobuf during training of inception model

Has anyone incorporated the ability to save a graphdef protobuf along with each checkpoint in the inception-v3 model?
Lively discussions at https://github.com/tensorflow/tensorflow/issues/616 provided some solutions to export graph protobufs, but I cannot get any of them working using the inception training model (inception_train.py).
I am trying to implement graph_util.convert_variables_to_constants, but I am failing to capture trainable variables, as I always get assertion errors: [variable] is not in graph. I am using output names from TensorBoard, which may be an incorrect procedure.
Any solutions to this issue would definitely be of interest to the general community, not only for learning but deployment.
Thanks!
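A minimal sketch of convert_variables_to_constants on a toy graph: the "[variable] is not in graph" assertion usually comes from passing a tensor name ("logits:0", as copied from TensorBoard) where the op name ("logits") is expected. The graph below is a stand-in, not the Inception model:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    # Toy graph standing in for the Inception training graph.
    x = tf.compat.v1.placeholder(tf.float32, [None, 3], name="input")
    w = tf.compat.v1.get_variable("w", shape=[3, 2])
    logits = tf.identity(tf.matmul(x, w), name="logits")

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        # Pass the OP name "logits", not the tensor name "logits:0".
        frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
            sess, g.as_graph_def(), ["logits"])

# Every variable is now baked into the GraphDef as a Const node,
# so the protobuf can be shipped alongside each checkpoint.
with tf.io.gfile.GFile("/tmp/frozen_toy_graph.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```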