Single management system covering several ML frameworks - TensorFlow

Question: is there any open source project which covers the management of all ML frameworks in a single system?
Scenario description: in some education scenarios, many students and teachers would like to use different ML frameworks such as TensorFlow, Caffe, MXNet, etc. It's hard for the people maintaining the environment to prepare all of them one by one.

Maybe you can use the AWS Deep Learning AMI. The AMI has all the frameworks you mentioned pre-installed for you.
The AMI itself is free of charge; you only pay for the EC2 instances you run it on.
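If you script your infrastructure, a minimal sketch of launching such an instance with boto3 could look like the following (the AMI ID and key pair name are placeholders; look up the current Deep Learning AMI ID for your region):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",    # placeholder: current Deep Learning AMI ID for your region
        InstanceType="p2.xlarge",  # a GPU instance type; pick one that fits your budget
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",     # placeholder: your existing EC2 key pair
    )
    print(response["Instances"][0]["InstanceId"])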


Is there any way to do federated learning with multiple real machines using the tensorflow-federated API?

I am studying the tensorflow-federated API in order to do federated learning with multiple real machines.
But I found an answer on this site saying that it does not support federated learning across multiple real machines.
Is there really no way to do federated learning with multiple real machines?
Even if I build a network for federated learning with 2 client PCs and 1 server PC, is it impossible to set up that system using the tensorflow-federated API?
Or even if I adapt the code, can't I build the system I want?
If the code can be modified to configure this, can you give me a tip? If not, when will there be an example of configuring it on real machines?
In case you are still looking for something: if you're not bound to TensorFlow, you could have a look at PySyft, which uses PyTorch. Here is a practical example of an FL system built with one server and two Raspberry Pis as clients.
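For a feel of the programming model, here is a minimal sketch using PySyft's 0.2.x-era API (the API has changed substantially across releases, so treat this as illustrative; the Raspberry Pi tutorial uses websocket-based workers rather than the in-process VirtualWorker shown here):

    import torch
    import syft as sy

    hook = sy.TorchHook(torch)  # extends torch tensors with .send()/.get()

    # In-process stand-ins for the two client machines; a real deployment
    # would use websocket workers running on the Raspberry Pis instead.
    client_1 = sy.VirtualWorker(hook, id="client_1")
    client_2 = sy.VirtualWorker(hook, id="client_2")

    x = torch.tensor([1.0, 2.0, 3.0]).send(client_1)  # data now "lives" on client_1
    y = (x + x).get()                                  # compute remotely, pull result back
    print(y)  # tensor([2., 4., 6.])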
TFF is really about expressing the federated computations you wish to execute. In terms of physical deployments, TFF includes two distinct runtimes: one "reference executor" which simply interprets the syntactic artifact that TFF generates, serially, all in Python and without any fancy constructs or optimizations; another still under development, but demonstrated in the tutorials, which uses asyncio and hierarchies of executors to allow for flexible executor architectures. Both of these are really about simulation and FL research, and not about deploying to devices.
In principle, this may address your question (in particular, see tff.framework.RemoteExecutor). But I assume that you are asking more about deployment to "real" FL systems, e.g. data coming from sources that you don't control. This is really out of scope for TFF. From the FAQ:
Although we designed TFF with deployment to real devices in mind, at this stage we do not currently provide any tools for this purpose. The current release is intended for experimentation uses, such as expressing novel federated algorithms, or trying out federated learning with your own datasets, using the included simulation runtime.
We anticipate that over time the open source ecosystem around TFF will evolve to include runtimes targeting physical deployment platforms.
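To make the simulation-focused workflow concrete, here is a hedged sketch of the FedAvg simulation pattern from the TFF tutorials (API details shift between TFF releases, so treat this as the shape of the code rather than a version-exact example):

    import tensorflow as tf
    import tensorflow_federated as tff

    # Two simulated "clients", each holding a tiny synthetic local dataset.
    def make_client_data():
        features = tf.random.normal([16, 4])
        labels = tf.random.uniform([16], maxval=2, dtype=tf.int32)
        return tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)

    federated_train_data = [make_client_data(), make_client_data()]

    def model_fn():
        keras_model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
        return tff.learning.from_keras_model(
            keras_model,
            input_spec=federated_train_data[0].element_spec,
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        )

    process = tff.learning.build_federated_averaging_process(
        model_fn,
        client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.1),
    )
    state = process.initialize()
    for _ in range(3):  # a few simulated federated rounds
        state, metrics = process.next(state, federated_train_data)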

Using google compute engine for tensorflow project

Google is offering $300 of free credit for a trial registration on Google Cloud. I want to use this opportunity to pursue a few projects using TensorFlow. But unlike with AWS, I am not able to find much information on the web about how to configure a Google Compute Engine instance. Can anyone suggest or point me to resources that will help?
I already looked into the Google Cloud documentation; while it is clear, it doesn't give any suggestions as to what kind of CPUs to use, and for that matter I cannot see any GPU instances when I try to create a VM instance. I want something along the lines of an AWS g2.2xlarge instance.
GPUs on Google Cloud are in alpha:
https://cloud.google.com/gpu/
The timeline given for public availability is 2017:
https://cloudplatform.googleblog.com/2016/11/announcing-GPUs-for-Google-Cloud-Platform.html
I would suggest that you think carefully about whether you want to "scale up" (getting a single very powerful machine to do your training) or "scale out" (distributing your training). In many cases, scaling out works out better and cheaper, and TensorFlow/Cloud ML are set up to help you do that.
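As a sketch of what "scaling out" looked like with the TF 1.x-era distributed API (current when this question was asked; the host:port pairs below are placeholders for your own VMs, and each process runs with its own job name and task index):

    import tensorflow as tf

    cluster = tf.train.ClusterSpec({
        "ps":     ["10.0.0.1:2222"],                   # parameter server (placeholder address)
        "worker": ["10.0.0.2:2222", "10.0.0.3:2222"],  # training workers (placeholder addresses)
    })

    # e.g. the process running on the first worker:
    server = tf.train.Server(cluster, job_name="worker", task_index=0)

    with tf.device(tf.train.replica_device_setter(cluster=cluster)):
        # Build the model here; variables are placed on the ps, ops on the worker.
        global_step = tf.train.get_or_create_global_step()

    with tf.Session(server.target) as sess:
        sess.run(tf.global_variables_initializer())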
Here are directions on how to get Tensorflow going in a Jupyter notebook on a Google Compute Engine VM:
https://codelabs.developers.google.com/codelabs/cpb102-cloudml/#0
The first few steps cover TensorFlow; the last steps cover Cloud ML.

Orleans grains similar to NServiceBus Sagas?

I just watched the video on how Orleans was used to build Halo 4's distributed cloud services:
http://channel9.msdn.com/Events/Build/2014/3-641
I suggest you read through both sets of documentation and see which features most closely match your requirements:
http://docs.particular.net/nservicebus/sagas-in-nservicebus
https://dotnet.github.io/orleans/Documentation/Introduction.html
After going through Richard's course on Pluralsight, I think the two overlap in functionality. In my understanding, grains are virtual, single-threaded, and live in a distributed environment such as the cloud.

What is a good FAT file system for ARM7-TDMI

I'm using the ARM7TDMI-S (NXP processor) and I need a file system capable of reading/writing to an SD card. There are so many available; what have people used and been happy with? One that requires the least amount of setup is best, so the less I have to do to get it started (i.e., writing device drivers for NXP's hardware) the better.
I am currently using CMX's RTOS as the OS for this project.
I suggest that you use either EFSL or Chan's FAT File System Module. I have used both on MMC/SD cards without problems. The choice between them may come down to the license terms and pre-existing ports to your target. Martin Thomas's ARM Projects site has examples for both libraries.
FAT is popular precisely because it's so simple. The main problems with FAT are performance (because of its simplicity, it's not very fast) and its size limits (2 GB for FAT16, 2 TB for FAT32).
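For concreteness, a quick back-of-the-envelope on where those limits come from (assuming 32 KB clusters for FAT16 and a 32-bit sector count with 512-byte sectors for FAT32; exact usable-cluster counts differ slightly in practice):

    KB, GB, TB = 1024, 1024**3, 1024**4

    fat16_max = 2**16 * (32 * KB)  # 16-bit cluster numbers x 32 KB clusters
    fat32_max = 2**32 * 512        # 32-bit sector count x 512-byte sectors

    print(fat16_max // GB, "GB")   # -> 2 GB
    print(fat32_max // TB, "TB")   # -> 2 TB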

Cfengine vs Chef

What are the differences in terms of features between Cfengine and Chef?
Chef has much greater integration with "cloud" VM hosting providers, and a greater amount of recipe sharing than CFEngine.
CFEngine uses fewer resources when it runs, and runs on a much greater range of computing environments, from embedded devices to supercomputers, and on many more operating systems -- it's just a few small C binaries and a couple of C libraries, so it is more portable.
From the Chef FAQ:
How is it different than Cfengine?
It bears very little in common with Cfengine, other than embracing Single Copy Nirvana.
I don't know much about Chef; I've just been reading their web site, and I'm quite familiar with Cfengine, so take my answer with a grain of salt.
From what I gathered, the main difference is that Cfengine runs on both Linux/Unixes and Windows, while Chef only supports Linux/Unixes.