CNTK deployment for real-time predictions

TensorFlow has a separate project for production use, called TensorFlow Serving, as noted here.
How should I use CNTK in a production environment, and how should I handle its deployment? Hopefully one could deploy trained models onto a server/cluster that serves the model over RPC or an HTTP REST API.
If no such tool exists, what would be the first steps toward developing one, and what would be a good architecture to roll out on my own?

We do support serving CNTK models in a production environment. You can find information about model evaluation/inference here: https://github.com/Microsoft/CNTK/wiki/CNTK-Evaluation-Overview. For deployment, you need to deploy the DLLs specified here. A tutorial for deploying CNTK in Azure is available here.

No such tool exists from the CNTK team, but the new APIs in C++ or Python should make the task pretty easy once you have a trained model.
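For example, here is a minimal sketch of wrapping a trained model in an HTTP endpoint with the Python API. Flask, the model file name, and the input format are assumptions for illustration, not an official CNTK serving tool:

    # Minimal sketch of an HTTP prediction endpoint around a trained CNTK
    # model, using Flask. Model path and input shape are placeholders.
    import numpy as np
    import cntk as C
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    model = C.load_model("model.dnn")  # a model saved after training

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a JSON body like {"input": [[...feature vector...]]}
        features = np.asarray(request.json["input"], dtype=np.float32)
        result = model.eval({model.arguments[0]: features})
        return jsonify({"output": np.asarray(result).tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)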

Related

YOLOv4 object detection with Ant Media live stream

I am new to YOLOv4 and Ant Media, but I would like to do object detection on a live Ant Media stream. Can anyone help me with this, please?
I have tried Darknet, but it is not working with a live stream for me.
Ant Media Server developers are working on a plugin architecture, and a solution for this may come in the next version, which you can test yourself. If you buy the Enterprise version, you can see from the Ant Media Server source code that it uses the TensorFlow library for object detection, and you can integrate YOLOv4 into your Enterprise project.
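In the meantime, a common workaround is to pull the live stream out of Ant Media over RTMP or HLS and run YOLOv4 on each frame with OpenCV's DNN module (OpenCV 4.x). This is a rough sketch; the stream URL and the config/weights paths are assumptions:

    # Rough sketch: consume an Ant Media live stream and run YOLOv4 on
    # each frame via OpenCV's DNN module. URL and paths are placeholders.
    import cv2

    net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    # Ant Media exposes streams over RTMP/HLS; adjust server and stream id.
    cap = cv2.VideoCapture("rtmp://your-server:1935/LiveApp/stream1")

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        class_ids, scores, boxes = model.detect(frame, confThreshold=0.5,
                                                nmsThreshold=0.4)
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break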

Is TensorFlow Federated (TFF) still a simulation environment?

TensorFlow Federated (TFF) is an open-source framework for ML and other computations on decentralized data.
As per this Stack Overflow link:
"TFF only provides a simulation environment for use in Federated Learning (FL) research. There is not yet a 'real world' FL deployment platform."
However, the TensorFlow Federated release history shows that there are now many releases for TF 2.x as well:
https://github.com/tensorflow/federated/releases
Can anybody comment on whether TFF is still only a simulation environment, or whether it can now be used as a "real world" FL deployment platform?
At this time, TensorFlow Federated does not have out-of-the-box support for what would generally be considered production federated learning. A production-level deployment runtime still needs to be built.
For different flavors of federated learning, this may be easier or harder.
It may be possible to create a system for cross-silo FL using the executor components already available in TensorFlow Federated. In particular, it may be possible to extend and build something on top of the remote execution stack (e.g. tff.framework.RemoteExecutor); see the sketch below.
However, for cross-device FL it may be significantly more work, as there are no integrations or components yet for deploying and executing computations on mobile operating systems (iOS, Android, etc.).
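As a very rough illustration of the cross-silo direction (the executor APIs have moved between TFF releases, so the exact function name and worker addresses below are assumptions, not a stable recipe):

    # Rough sketch: point TFF at remote executor services instead of the
    # local simulation runtime. set_remote_execution_context and the
    # worker addresses are assumptions tied to a particular TFF release.
    import grpc
    import tensorflow_federated as tff

    channels = [
        grpc.insecure_channel("silo-a.example.com:8000"),
        grpc.insecure_channel("silo-b.example.com:8000"),
    ]

    # Route subsequent tff computations through RemoteExecutor instances
    # that talk to the executor service running in each silo.
    tff.backends.native.set_remote_execution_context(channels)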

Can you serve models from different TensorFlow versions in the same tensorflow/serving binary?

Say I have two saved models, one from TensorFlow 1.8 and the other from TensorFlow 2.2. Serving both of those could run into compatibility issues.
Would it be possible to serve both of those in the same tensorflow/serving binary?
My intuition says no, one cannot, at least not easily.
I am not an expert in Bazel files, but I presume that compiling tensorflow/serving requires building and linking the TensorFlow core library, and I am not sure whether one could link two different versions of that library together.
I guess one could compile the tensorflow/serving binary at two different release points, 1.8.0 and 2.2.0, and deploy both binaries separately in your infrastructure. Then you need to manage, at the model discovery and request routing layers, which model is loaded in which tensorflow/serving binary and which predict request should talk to which tensorflow/serving endpoint.
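As a toy illustration of that routing layer, assuming the two binaries are exposed as separate TF Serving REST endpoints (the host names and model names here are made up):

    # Toy routing layer: send each predict request to the
    # tensorflow/serving endpoint built against the TF version that
    # produced the model. Endpoints and model names are hypothetical.
    import requests

    MODEL_TO_ENDPOINT = {
        "legacy_model": "http://tf-serving-18:8501",  # 1.8.0 build
        "new_model": "http://tf-serving-22:8501",     # 2.2.0 build
    }

    def predict(model_name, instances):
        url = "{}/v1/models/{}:predict".format(
            MODEL_TO_ENDPOINT[model_name], model_name)
        # TF Serving's REST API expects a JSON body with an "instances" list.
        resp = requests.post(url, json={"instances": instances})
        resp.raise_for_status()
        return resp.json()["predictions"]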
I'm definitely not an expert on the deep inner workings of TensorFlow, so take this with a grain of salt. But I think what you want to do may actually be pretty easy.
My very approximate (and possibly completely incorrect) understanding is that the TensorFlow APIs are a sort of wrapper that creates a graph representing whatever computation you'd like to do, and that the compiled graph is cross-compatible between at least some versions, even if the APIs used to create and manipulate it aren't.
Empirically, I've been able to take models built with TensorFlow 1.15.x and put them into TensorFlow Serving on 2.3.0 with absolutely no problems at all.

How to set up an environment for TensorFlow development

I cloned the TensorFlow repository to my PC. How should I set up my environment for development? I have no idea about the available files.
If you want to start simple (not build TensorFlow from source yourself), you can follow this link to install it.
Then you can go through this tutorial to get familiar with how TensorFlow works.
I believe the best documentation for TensorFlow is all on its official site (as you can see, the two links above are both from the official TensorFlow site).
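For example, after a plain pip install (assuming a TF 2.x release), a quick smoke test confirms the environment works:

    # Quick smoke test after `pip install tensorflow` (assumes TF 2.x).
    import tensorflow as tf

    print(tf.__version__)
    # A tiny eager computation to confirm the runtime is functional.
    print(tf.reduce_sum(tf.random.normal([2, 2])).numpy())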

TensorFlow + Cloud ML: deploy a custom native op / reader

I was wondering if it is possible to deploy TensorFlow custom ops or a custom reader written in C++ inside Cloud ML.
It looks like Cloud ML does not accept running native code in its standard mode (I'm not really interested in using a virtualized environment); at least for Python packages, they only accept pure Python with no C dependencies.
Likely the easiest way to do this is to include, as an extra package, a build of the entire custom TensorFlow wheel that includes the op. For specifying extra packages, see: https://cloud.google.com/ml-engine/docs/how-tos/packaging-trainer#to_include_custom_dependencies_with_your_package
For building a TF wheel from source see: https://www.tensorflow.org/install/install_sources#build_the_pip_package
You could also try to download/install just the .so file for the new op, but that would require downloading it either inside the setup.py of your training package or inside the training Python code itself.
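If you go the .so route, the training code would then load the op library explicitly; a minimal sketch (the path and the op name in the comment are placeholders):

    # Minimal sketch of loading a prebuilt custom-op shared library at
    # training time. The .so must first be downloaded or shipped as
    # package data; the path here is a placeholder.
    import tensorflow as tf

    custom_ops = tf.load_op_library("/tmp/my_custom_op.so")
    # The returned module exposes the ops defined in the library, e.g.:
    # outputs = custom_ops.my_custom_op(inputs)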
Note that you can currently only upload custom packages during Training, and not during Batch or Online Prediction, so a model trained using a custom TF version may not work with the prediction service.