How to deploy a TensorFlow model to the cloud?

I want to deploy my deep learning model to the cloud so that I can upload photos from an iPhone app, have the model run the detection, and return the output.
What would be the best way to do this?
Thank you!

There are multiple ways to deploy a TensorFlow model. Read through the following official documentation:
TensorFlow - Serving Models
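As a minimal sketch of that pattern (assuming a trained Keras model and TensorFlow Serving's REST API; the model name, paths, and host below are placeholders):

    import tensorflow as tf

    # Export the trained model in the SavedModel format that
    # TensorFlow Serving expects (the version subdirectory "1" matters).
    model = tf.keras.models.load_model("my_detector.h5")  # placeholder
    tf.saved_model.save(model, "export/my_detector/1")

    # Serve it, e.g. with the official Docker image:
    #   docker run -p 8501:8501 \
    #     -v "$PWD/export/my_detector:/models/my_detector" \
    #     -e MODEL_NAME=my_detector tensorflow/serving

    # The iPhone app (or its backend) can then POST inputs to the
    # REST endpoint that TensorFlow Serving exposes:
    import requests

    payload = {"instances": [[0.0] * 784]}  # replace with real image data
    resp = requests.post(
        "http://<your-cloud-host>:8501/v1/models/my_detector:predict",
        json=payload,
    )
    print(resp.json()["predictions"])

Managed services such as Google AI Platform serve the same SavedModel format behind a hosted endpoint, if you'd rather not run the server yourself.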

Related

CreateML: what kind of ObjectDetector network is trained?

I used Create ML to train a new custom ObjectDetector.
Everything worked well so far.
Now I am just wondering: what kind of network is trained in the background?
Is it something like YOLO or MobileNet?
I did not find anything in the official documentation:
https://developer.apple.com/documentation/createml#overview
There are two options:
1. TinyYOLOv2
2. Transfer learning, using a built-in feature extractor model (VisionFeaturePrint.Objects). This is available with Create ML in Xcode 12.

TF Lite Retraining on Mobile

Let's assume I made an app that has machine learning in it using a tflite file.
Is it possible that I could retrain this model right inside the app?
I have tried the Model Maker provided by TensorFlow, but apart from that I don't think there is any other way to retrain the model from within the app I made.
Do you mean training on the device after the app is deployed? If yes, TFLite currently doesn't support training in general, but there is some experimental work in this direction with limited support, as shown by https://github.com/tensorflow/examples/blob/master/lite/examples/model_personalization.
Currently, the retraining of a TFLite model, as you found out with Model Maker, has to happen offline with TF before the app is deployed.
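For that offline route, a minimal Model Maker sketch (assuming an image-classification task and the tflite-model-maker package; the data folder is a placeholder with one subfolder per class):

    # pip install tflite-model-maker
    from tflite_model_maker import image_classifier
    from tflite_model_maker.image_classifier import DataLoader

    # Load new training images collected since the last release.
    data = DataLoader.from_folder("new_training_images/")
    train_data, test_data = data.split(0.9)

    # Retrain (transfer learning on a default backbone) and evaluate.
    model = image_classifier.create(train_data)
    loss, accuracy = model.evaluate(test_data)

    # Export a fresh .tflite file to ship with the next app update.
    model.export(export_dir="export/")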

export_inference_graph with Google Cloud Functions or Cloud ML (serverless)

I used the TensorFlow Object Detection API to train a model on the cloud with this tutorial, and I would like to know if there is an option to also export the model with the Cloud ML Engine or with a Google Cloud Function.
Their tutorial only shows a local export example.
I have trained the model, and now I don't want to create an instance (or use my laptop) just to create the exported .pb file for inference.
Thanks for the help
Take a look at these tutorials:
https://cloud.google.com/ai-platform/docs/getting-started-keras
https://cloud.google.com/ai-platform/docs/getting-started-tensorflow-estimator
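If it helps, the export step itself is just the Object Detection API's export script pointed at your checkpoint; you could run it from a short-lived Colab or Cloud Shell session instead of a dedicated instance. A sketch of the invocation (the bucket and checkpoint paths are placeholders, and I haven't verified that gs:// output paths work in every version):

    # Run from the models/research directory of the TF models repo.
    python object_detection/export_inference_graph.py \
        --input_type image_tensor \
        --pipeline_config_path gs://my-bucket/train/pipeline.config \
        --trained_checkpoint_prefix gs://my-bucket/train/model.ckpt-50000 \
        --output_directory gs://my-bucket/export/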

How to use a custom model with Tensorflow Hub?

My goal is to test out Google's BERT algorithm in Google Colab.
I'd like to use a pre-trained custom model for Finnish (https://github.com/TurkuNLP/FinBERT). The model cannot be found in the TF Hub library, and I have not found a way to load it with TensorFlow Hub.
Is there a neat way to load and use a custom model with Tensorflow Hub?
Fundamentally: yes. Anyone can create the kind of models that TF Hub hosts, and I hope authors of interesting models do consider that.
For TF1 and the hub.Module format tailored to it, see
https://www.tensorflow.org/hub/tf1_hub_module#creating_a_new_module
For TF2 and its revised SavedModel format, see
https://www.tensorflow.org/hub/tf2_saved_model#creating_savedmodels_for_tf_hub
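Once a model has been exported in one of those formats, it does not need to be listed on tfhub.dev to be usable; hub.load and hub.KerasLayer also accept a local path or your own URL. A minimal TF2 sketch (the path is a placeholder):

    import tensorflow_hub as hub

    # Load a custom SavedModel exported per the guide above, from a
    # local directory or a URL to your own hosted archive.
    model = hub.load("my_exported_finbert/")

    # Or wrap it as a Keras layer for fine-tuning:
    layer = hub.KerasLayer("my_exported_finbert/", trainable=True)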
That said, a sophisticated model like BERT requires a bit of attention to export it with all bells and whistles, so it helps to have some tooling to build on. The BERT reference implementation for TF2 at https://github.com/tensorflow/models/tree/master/official/nlp/bert comes with an open-sourced export_tfhub.py script, and anyone can use that to export custom BERT instances created from that code base.
However, I understand from https://github.com/TurkuNLP/FinBERT/blob/master/nlpl_tutorial/training_bert.md#general-info that you are using Nvidia's fork of the original TF1 implementation of BERT. There are Hub modules created from the original research code, but the tooling to that end has not been open-sourced, and Nvidia doesn't seem to have added their own either.
If that's not changing, you'll probably have to resort to doing things the pedestrian way: getting acquainted with their codebase and loading their checkpoints into it yourself.

Chatbot: generative model using Python framework deployed in Slack

Could somebody tell me whether it is possible to develop a chatbot using Python ML frameworks such as TensorFlow and deploy it in Slack using Slack apps?
As far as I have read, we could develop a retrieval-based model using Node.js, but I'm looking for a generative model.
Anything to help me get started is much appreciated.
Thanks!
To build a generative model, you can use a TensorFlow sequence-to-sequence (seq2seq) architecture; there is an official TensorFlow tutorial on sequence-to-sequence models.
After training the model, you can expose it as an API, which is straightforward, and then wire it up to the Slack API.
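As a minimal sketch of that glue layer (assuming Flask and Slack's Events API; generate_reply is a hypothetical stand-in for your trained seq2seq model's inference function):

    # pip install flask
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def generate_reply(text):
        # Placeholder: run your trained seq2seq model here and
        # return the decoded response string.
        return "..."

    @app.route("/slack/events", methods=["POST"])
    def slack_events():
        payload = request.get_json()
        # Slack sends a one-time challenge when you register the URL.
        if payload.get("type") == "url_verification":
            return jsonify({"challenge": payload["challenge"]})
        event = payload.get("event", {})
        if event.get("type") == "message" and "bot_id" not in event:
            reply = generate_reply(event.get("text", ""))
            # Post `reply` back with Slack's chat.postMessage Web API
            # using your bot token (omitted here).
        return "", 200

    if __name__ == "__main__":
        app.run(port=3000)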