ExampleGen in production - tensorflow

I was wondering how ExampleGen is used in production? I understand that its outputs can be fed into the TFDV components of TFX to validate the schema, detect skew, and so on.
But I get lost because ExampleGen generates a train & eval split, and I don't see why you would split the data into train & eval in production.
As far as I know, TFX is better suited to deploying models into production; if I'm going to build a non-production model, maybe just using TensorFlow would work.
So my questions are:
Is TFX used for the modeling/dev part? i.e. before deploying your model.
Is it suitable to develop a model in Tensorflow and then migrate it to TFX for the production part?
Thanks!

The ExampleGen TFX Pipeline component ingests data into TFX pipelines.
In simple terms, ExampleGen fetches data from external sources such as CSV, TFRecord, Avro, Parquet, and BigQuery, and generates tf.Example and tf.SequenceExample records that can be read by other TFX components. For more info, please refer to The ExampleGen TFX Pipeline Component.
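As a minimal sketch (assuming a recent TFX release; the CSV directory path is a placeholder), ExampleGen can be instantiated and its default 2:1 train/eval split made explicit like this:
from tfx.components import CsvExampleGen
from tfx.proto import example_gen_pb2

# Make the default train/eval split explicit; the ratios (or the number of
# splits) can be adjusted here if a different split is wanted in production.
output_config = example_gen_pb2.Output(
    split_config=example_gen_pb2.SplitConfig(splits=[
        example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=2),
        example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=1),
    ]))

# Reads CSV files from the given directory and emits tf.Example records.
example_gen = CsvExampleGen(input_base='/path/to/csv_dir', output_config=output_config)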
Is TFX used for the modeling/dev part? i.e. before deploying your model.
Yes, TFX can be used for modeling, training, serving inference, and managing deployments to online, native mobile, and JavaScript targets. Once your model is trained with TFX, you can deploy it using TF Serving and other deployment targets.
Is it suitable to develop a model in Tensorflow and then migrate it to TFX for the production part?
Once you have developed and trained a model using a TFX pipeline, you can deploy it with the TF Serving system. You can also serve standalone TensorFlow models with TF Serving. Please refer to Serving a TensorFlow Model.
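For instance, a trained SavedModel can be served with the stock TF Serving Docker image; the model name and paths below are placeholders:
# /path/to/saved_model_dir must contain numbered version subdirectories, e.g. 1/
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/saved_model_dir,target=/models/my_model \
  -e MODEL_NAME=my_model -t tensorflow/serving
# Predictions are then available over REST, e.g.:
# curl -d '{"instances": [[1.0, 2.0]]}' http://localhost:8501/v1/models/my_model:predict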

Related

How to use the models under tensorflow/models/research/object_detection/models?

I'm looking into training an object detection network using TensorFlow, and I had a look at the TF2 Model Zoo. I noticed that there are noticeably fewer models there than in the directory /models/research/models/, including the MobileDet with SSDLite developed for the Jetson Xavier.
To clarify, the readme says that there is a MobileDet GPU with SSDLite, and that the model and checkpoints trained on COCO are provided, yet I couldn't find them anywhere in the repo.
How is one supposed to use those models?
I already have a custom-trained MobileDetv3 for image classification, and I was hoping to see a way to turn the network into an object detection network, in accordance with the MobileDetv3 paper. If this is not straightforward, training one network from scratch could be ok too, I just need to know where to even start from.
If you plan to use the Object Detection API, you can't use your existing model. You have to choose from a list of models here for v2 and here for v1.
The documentation is very well maintained, and the steps to train, validate, or run inference (test) on custom data are very well explained here by the TensorFlow team. The link is for TensorFlow v2. However, if you wish to use v1, the process is fairly similar, and there are numerous blogs/videos explaining how to go about it.
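For orientation, training a model from the v2 zoo on custom data ultimately comes down to a command along these lines (the config and output paths are placeholders):
# Run from the models/research directory after installing the Object Detection API
python object_detection/model_main_tf2.py \
  --pipeline_config_path=path/to/pipeline.config \
  --model_dir=path/to/training_output \
  --alsologtostderr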

Does TensorFlow Serving serve/support non-TensorFlow-based libraries like scikit-learn?

We are creating a platform to put AI use cases into production. TFX is the first choice, but what if we want to use non-TensorFlow libraries like scikit-learn, etc., and want to include a Python script to create models? Will the output of such a model be served by TensorFlow Serving? How can I make sure I can run both TensorFlow-based models and non-TensorFlow libraries and models in one system design? Please suggest.
Mentioned below is the procedure to deploy and serve a scikit-learn model on Google Cloud Platform.
The first step is to save/export the scikit-learn model using the code below:
# Export the trained estimator (clf) to model.joblib
# (on scikit-learn >= 0.23, use "import joblib" instead of sklearn.externals)
from sklearn.externals import joblib
joblib.dump(clf, 'model.joblib')
Next step is to upload the model.joblib file to Google Cloud Storage.
After that, we need to create our model and version, specifying that we are loading a scikit-learn model, and select the runtime version of the Cloud ML Engine as well as the Python version that was used to export the model.
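As a rough sketch, this can be done with gcloud roughly as follows (model name, version, bucket, region, and runtime version are placeholders):
# Create the model resource, then a version pointing at the directory holding model.joblib
gcloud ml-engine models create my_sklearn_model --regions us-central1
gcloud ml-engine versions create v1 \
  --model my_sklearn_model \
  --origin gs://my-bucket/model_dir/ \
  --runtime-version 1.15 \
  --framework scikit-learn \
  --python-version 3.7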
Next, we need to present the data to Cloud ML Engine as a simple array, encoded as a JSON file, as shown below. We can use the JSON library as well.
# Print one test row as a plain Python list (each prediction instance is a list of feature values)
print(list(X_test.iloc[10:11].values))
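A small illustrative way to write that row to the input file (the file name is an assumption) would be:
import json

# One JSON instance per line; cast to float so NumPy values serialize cleanly.
with open('input.json', 'w') as f:
    for row in X_test.iloc[10:11].values:
        f.write(json.dumps([float(v) for v in row]) + '\n')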
Next, we need to run the command below to perform inference:
gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances $INPUT_FILE
For more information, please refer to this link.

How to train a Keras model on GCP with fit_generator

I have an ML model developed in Keras, and I can train it locally by calling its fit_generator and providing it with my custom generator. Now I want to use GCP to train this model. I've been following this article, which shows how I can train a Keras model on GCP, but it does not say what I should do if I need to load all my data into memory, process it, and then feed it to the model through a generator.
Does anyone know how I can use GCP if I have a generator?
In the example you are following, the Keras model gets converted into an estimator using the function model_to_estimator; this step is not necessary in order to use GCP, as GCP supports compiled Keras models. If you keep the model as a Keras model, you can call either its fit function (which supports generators since TensorFlow 1.12) or fit_generator and pass your generator as the first argument. If it works locally for you, then it should also work on GCP. I have been able to run models in GCP similar to the one in the URL you shared, using generators, without any problems.
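As a minimal, self-contained sketch (the data and model are illustrative stand-ins for your own), the pattern looks like this:
import numpy as np
import tensorflow as tf

# Illustrative generator yielding (features, labels) batches indefinitely;
# in practice this would load and preprocess your own data.
def batch_generator(batch_size=32):
    while True:
        x = np.random.rand(batch_size, 10).astype('float32')
        y = np.random.randint(0, 2, size=(batch_size, 1))
        yield x, y

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# fit() accepts the generator directly; steps_per_epoch is required because
# the generator is infinite. The same call runs unchanged in an AI Platform job.
model.fit(batch_generator(), steps_per_epoch=100, epochs=2)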
Also be advised that the gcloud ml-engine commands are being replaced by gcloud ai-platform. I recommend you follow this guide, as it is more up to date than the one you linked to.
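For reference, submitting the training package with the newer commands looks roughly like this (job name, package path, module name, bucket, region, and versions are placeholders):
gcloud ai-platform jobs submit training my_keras_job \
  --package-path trainer/ \
  --module-name trainer.task \
  --staging-bucket gs://my-bucket \
  --region us-central1 \
  --runtime-version 2.1 \
  --python-version 3.7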

What exactly is "Tensorflow Distibuted", now that we have Tensorflow Serving?

I don't understand why "TensorFlow Distributed" still exists, now that we have TensorFlow Serving. It seems to be a way to use core TensorFlow as a serving platform, but why would we want that when TensorFlow Serving and TFX are a much more robust platform? Is it just legacy support? If so, then the TensorFlow Distributed pages should make that clear and point people towards TFX.
Distributed TensorFlow supports training a single model across many machines by implementing a parameter server, with either data parallelism or model parallelism. It addresses distributed training, whereas TensorFlow Serving addresses serving trained models for inference, so the two are complementary rather than redundant.
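As a minimal sketch of the data-parallel case using the current tf.distribute API (this particular strategy uses synchronous all-reduce rather than a parameter server; the model is illustrative, and each worker reads its cluster layout from the TF_CONFIG environment variable):
import tensorflow as tf

# Each worker process discovers the cluster from the TF_CONFIG environment variable.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Variables created here are replicated and kept in sync across workers.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
    model.compile(optimizer='sgd', loss='mse')

# model.fit(dataset) then runs the same training step on every worker,
# each consuming its shard of the data (data parallelism).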

Deploying a custom built TensorFlow model within H2O

I am looking into using H2O to create a client-facing application into which users will be able to import data and run ML models. As H2O only offers a limited number of models at the moment, is there any way to build custom models (an LSTM in TensorFlow, for example) and import them into H2O, where they can then be run just like any of H2O's included models?
It seems as though H2O's Deep Water was the nearest solution to this, but they have now discontinued its development.
In other words, is there any way to facilitate for different types of models that H2O does not support? (SVM, RNN, CNN, GAN, etc.)
Sorry, deploying non-H2O-3 models within H2O-3 is unsupported.