Is TensorFlow Federated (TFF) still a simulation environment?

TensorFlow Federated (TFF) is an open-source framework for ML and other computations on decentralized data.
As per a Stack Overflow link:
TFF only provides a simulation environment for use in Federated
Learning (FL) research. There is not yet a "real world" FL deployment
platform.
But the TFF release history shows that there are now many releases targeting TF 2.x as well:
https://github.com/tensorflow/federated/releases
Can anybody comment on whether TFF is still only a simulation environment, or whether it can be used as a "real world" FL deployment platform?

At this time, TensorFlow Federated does not have out-of-the-box support for what would generally be considered production federated learning. A production-level deployment runtime still needs to be built.
For different flavors of federated learning this may be easier or harder.
It may be possible to create a system for cross-silo FL using the executor components already available in TensorFlow Federated. In particular, it may be possible to extend and build something on top of the remote execution stack (e.g. tff.framework.RemoteExecutor).
However, for cross-device FL it may be significantly more work, as there are no integrations or components for deploying and executing computations on mobile operating systems (iOS, Android, etc.) yet.
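Coming back to the cross-silo case: a rough sketch of what pointing TFF at remote workers could look like is below. It assumes a TFF release (around 0.17) in which tff.backends.native.set_remote_execution_context accepted a list of gRPC channels; the worker addresses are placeholders, and the executor APIs have been reshuffled between releases, so treat this as an assumption and check the docs of the version you actually run.

# Hedged sketch: route federated computations in this process to remote
# worker services over gRPC instead of the local simulation runtime.
# Assumes a ~TFF 0.17 API surface; names have changed in later releases.
import grpc
import tensorflow_federated as tff

# Hypothetical silo endpoints, each running a TFF executor service.
worker_addresses = ["silo-a.example.com:8000", "silo-b.example.com:8000"]
channels = [grpc.insecure_channel(address) for address in worker_addresses]

# After this call, federated computations built in this process are
# dispatched to the remote executors rather than simulated locally.
tff.backends.native.set_remote_execution_context(channels)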

Related

Can you serve models from different tensorflow versions in the same binary of tensorflow/serving?

Say I have two saved models, one from TensorFlow 1.8 and the other from TensorFlow 2.2. Serving both of those could run into compatibility issues.
Would it be possible to serve both of those in the same tensorflow/serving binary?
My intuition suggests no, one cannot, at least not easily.
I am not an expert in bazel files but I presume compiling tensorflow/serving needs to build and link the tensorflow core lib. I am not sure whether one could link two different versions of the tensorflow core library together.
I guess one could compile the tensorflow/serving binary at two different release points (1.8.0 and 2.2.0) and deploy both binaries in your infrastructure separately. Then the model discovery and request routing layers need to track which model is loaded in which tensorflow/serving binary, and which predict request should talk to which tensorflow/serving endpoint.
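As an illustration of that routing layer, a thin shim in front of the two Serving deployments could be as small as the sketch below. The hostnames, ports, and model-to-binary mapping are assumptions made up for the example; the /v1/models/<name>:predict path is the standard TensorFlow Serving REST endpoint.

# Minimal sketch of a request-routing shim in front of two separately
# compiled TensorFlow Serving binaries (one built at 1.8.0, one at 2.2.0).
# Hostnames, ports, and the model mapping are illustrative assumptions.
import requests

MODEL_ENDPOINTS = {
    "legacy_model": "http://tf-serving-1-8:8501",  # binary built from r1.8
    "new_model": "http://tf-serving-2-2:8501",     # binary built from r2.2
}

def predict(model_name, instances):
    # Forward the request to whichever Serving binary owns this model.
    base_url = MODEL_ENDPOINTS[model_name]
    url = f"{base_url}/v1/models/{model_name}:predict"
    response = requests.post(url, json={"instances": instances}, timeout=10)
    response.raise_for_status()
    return response.json()["predictions"]

# e.g. predict("legacy_model", [[1.0, 2.0, 3.0]])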
I'm definitely not an expert on the deep inner workings of TensorFlow, so take this with a grain of salt. But I think what you want to do may actually be pretty easy.
My very approximate (and possibly completely incorrect) understanding is that the TensorFlow APIs are a sort of wrapper that creates a graph representing whatever computation you'd like to do, and that the compiled graph is cross-compatible between at least some versions, even if the APIs used to create and manipulate it aren't.
Empirically, I've been able to take models built with TensorFlow 1.15.x and put them into TensorFlow Serving on 2.3.0 with absolutely no problems at all.
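As a quick sanity check along those lines, you can load the 1.x SavedModel with a 2.x runtime and inspect its signatures before handing it to Serving; the export directory below is a placeholder.

# Hedged sketch: verify that a SavedModel exported from TF 1.x loads under
# a TF 2.x runtime and exposes the expected serving signature.
import tensorflow as tf

loaded = tf.saved_model.load("export_dir")    # TF2 loader also reads TF1 SavedModels
print(list(loaded.signatures.keys()))         # typically ['serving_default']
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)               # names/shapes of the output tensors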

Deploying Tensorflow models as Windows exe

I want to use Tensorflow 1.4 for my ML modeling needs. My use case requires:
Training the model on GPU <--- I know how to do this with TF
Deploying the trained model on an ordinary box - as an .exe on CPU running Windows (for inference) <--- I don't know how to do this.
Can somebody tell me if TF 1.4 supports this and, if so, point me to a guide or explain how it's done?
This is a little late, but this video on YouTube covers it pretty well.
He uses PyInstaller, which grabs everything needed and puts it all either into a single self-contained executable, or into a folder containing the exe alongside everything else.
I've tried this myself and it works pretty well, although the output gets really huge: PyInstaller bundles the entire TensorFlow library and the Python interpreter, and if you use tensorflow-gpu it also includes the cuDNN files, which are around 600 MB, so you end up with over 1 GB worth of files.
That can be reduced by excluding modules you don't need; I recommend creating a virtual environment and starting with a clean installation of Python.
Hope this helps in any way.
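For illustration, a minimal inference entry point plus the PyInstaller invocation might look like the sketch below. It is written against the TF 2.x Keras SavedModel API, and the model path and input shape are placeholders; on TF 1.4 the loading code would differ.

# predict.py -- minimal inference script to freeze with PyInstaller.
# The SavedModel directory and input shape are placeholder assumptions.
import numpy as np
import tensorflow as tf

def main():
    model = tf.keras.models.load_model("saved_model_dir")  # exported Keras model
    example = np.zeros((1, 28, 28, 1), dtype=np.float32)   # dummy input
    print(model.predict(example))

if __name__ == "__main__":
    main()

The executable would then be produced with something like pyinstaller --onefile predict.py, optionally adding --exclude-module flags for packages you don't need.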

Why does Bazel's rules_closure download platform-specific binaries instead of sources?

I noticed on the rules_closure repository (used by tensorflow when building it with //tensorflow/tools/pip_package:build_pip_package) that there are rules to build some dependencies like nodejs and protoc through the filegroup_external interface.
What is the reason for not building them from source like the other dependencies?
I ask because this approach compromises portability, as it needs to list the binaries for each platform that tries to build tensorflow (and it is even worse when there is no binary ready for your platform).
This build configuration works deterministically, out of the box, with no system dependencies, on recent Linux/Mac/Windows systems with Intel CPUs, and incurs no additional build latency. Our goal has been to optimize for the best build experience for what's in our support matrix. I agree with you that an escape hatch should exist for other systems. Feel free to open an issue with the rules_closure project and CC @jart so we can discuss further how to solve that.

CNTK deployment for real time predictions

TensorFlow has a separate project for its production usage, as noted here, called TensorFlow Serving.
How should I use CNTK in a production environment, and how should I handle its deployment? Hopefully one could deploy trained models to a server/cluster that serves the model with an RPC or HTTP REST API.
If no such tool exists, what should be the first steps to develop it and a good architecture to roll out on my own?
We do support serving CNTK models in a production environment. You can find information about model evaluation/inference here: https://github.com/Microsoft/CNTK/wiki/CNTK-Evaluation-Overview. For deployment, you need to deploy the DLLs specified here. A tutorial for deploying CNTK in Azure is available here.
No such tool exists from the CNTK team, but the new APIs in C++ or python should make the task pretty easy once you have a trained model.
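As an illustration of how small such a service can be, here is a hedged sketch that wraps a trained CNTK model in an HTTP prediction endpoint with Flask; the model filename, input layout, and port are assumptions, not an official CNTK serving recipe.

# Hedged sketch: expose a trained CNTK model over a simple HTTP REST API.
# Model path and input format are placeholders; adapt them to your network.
import numpy as np
import cntk as C
from flask import Flask, jsonify, request

app = Flask(__name__)
model = C.load_model("trained_model.dnn")  # model saved after training

@app.route("/predict", methods=["POST"])
def predict():
    # Expect JSON like {"instances": [[...feature vector...], ...]}
    instances = np.asarray(request.get_json()["instances"], dtype=np.float32)
    outputs = model.eval({model.arguments[0]: instances})
    return jsonify({"predictions": np.asarray(outputs).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)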

Embedded Linux and Cross Compiling

I'm just now starting to learn embedded Linux system development and I'm wondering how to go about a few things. Mainly, I have questions about cross compiling. I know what cross compiling is, but I'm wondering how to actually go about the whole process when it comes to writing the makefile and deploying the application to the board (mainly the makefile part, though).
I've researched a good amount online and found a ton of different things have to be set whether it's in regards to the toolchain, the processor, etc. Are there any good resources to learn this topic and master it or could anyone explain the best way to go about it?
EDIT:
I'm not wondering about how to cross compile in general. I'm wondering about cross compiling already existing applications (e.g. OpenCV, Samba, etc.) for a target system from the host system (especially when the application offers no guidance on the process, which is common).
Basically you just need a special embedded Linux distribution that will take care of the cross-compilation process. Take a look at, for example, Buildroot. In the package folder you'll find package recipe examples.
For your own software's build process you can take a look at CMake. The libuci recipe shows how to use CMake-based projects in Buildroot.
This answer is based on my own experience, so judge for yourself whether it suits your needs.
I learned everything about embedded Linux with these guys: http://free-electrons.com/
They not only offer free docs but also courses for successfully running your box with custom Linux distros. In my case, I managed to embed uClinux on a board with an MMU-less 32-bit CPU and 32 MB of RAM. The Linux image occupied just 1 MB.