Orleans grains similar to NServiceBus Sagas?

I just watched the video on how Orleans was used to build Halo 4's distributed cloud services:
http://channel9.msdn.com/Events/Build/2014/3-641

I suggest you read through both sets of documentation and see which features most closely match your requirements:
http://docs.particular.net/nservicebus/sagas-in-nservicebus
https://dotnet.github.io/orleans/Documentation/Introduction.html

After going through Richard's course on Pluralsight, I think the two overlap in functionality. In my understanding, grains are virtual, single-threaded, and live in a distributed environment such as the cloud.

Related

Is there any way to do federated learning with multiple real machines using the tensorflow-federated API?

I am studying the tensorflow-federated API in order to do federated learning across multiple real machines.
But I found an answer on this site saying that it does not support federated learning across multiple real machines.
Is there really no way to do federated learning with multiple real machines?
Even if I set up a network for federated learning with two client PCs and one server PC, is it impossible to build that system using the tensorflow-federated API?
Or even if I adapt the code, can I not build the system I want?
If the code can be modified to make this work, can you give me a tip? If not, when will there be an example of running it on real machines?
In case you are still looking for something: if you're not bound to TensorFlow, you could have a look at PySyft, which uses PyTorch. Here is a practical example of an FL system built with one server and two Raspberry Pis as clients.
TFF is really about expressing the federated computations you wish to execute. In terms of physical deployments, TFF includes two distinct runtimes: one, the "reference executor", simply interprets the syntactic artifact that TFF generates, serially, all in Python and without any fancy constructs or optimizations; the other, still under development but demonstrated in the tutorials, uses asyncio and hierarchies of executors to allow for flexible executor architectures. Both of these are really about simulation and FL research, not about deploying to devices.
In principle, this may address your question (in particular, see tff.framework.RemoteExecutor). But I assume that you are asking more about deployment to "real" FL systems, e.g. data coming from sources that you don't control. This is really out of scope for TFF. From the FAQ:
Although we designed TFF with deployment to real devices in mind, at this stage we do not currently provide any tools for this purpose. The current release is intended for experimentation uses, such as expressing novel federated algorithms, or trying out federated learning with your own datasets, using the included simulation runtime.
We anticipate that over time the open source ecosystem around TFF will evolve to include runtimes targeting physical deployment platforms.
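To make the RemoteExecutor pointer above concrete, here is a rough sketch of the client side of remote execution as shown in the TFF simulation tutorials. The worker addresses are hypothetical, and the exact factory names (tff.framework.RemoteExecutor, tff.backends.native.set_remote_execution_context) have moved between TFF releases, so treat this as illustrative rather than copy-paste ready:

    # Sketch only: pointing a TFF simulation at remote worker machines.
    # Assumes a TFF release exposing the remote execution stack discussed
    # above; names have shifted across versions.
    import grpc
    import tensorflow_federated as tff

    # One gRPC channel per worker machine running a TFF executor service
    # (hypothetical addresses for the two client PCs in the question).
    channels = [
        grpc.insecure_channel('10.0.0.11:8000'),
        grpc.insecure_channel('10.0.0.12:8000'),
    ]

    # Route subsequent federated computations through the remote workers
    # instead of the in-process simulation runtime.
    tff.backends.native.set_remote_execution_context(channels)

Note this still runs the simulation runtime on those machines; as the FAQ quoted above says, it is not a production cross-device deployment.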

Single management system covering several ML frameworks

Question: is there any open-source project that covers management of all ML frameworks in a single system?
Scenario description: in an education setting, many students and teachers would like to use different ML frameworks such as TensorFlow, Caffe, MXNet, etc. It's hard for the people managing the environments to prepare all of them one by one.
Maybe you can use the AWS Deep Learning AMI. The AMI has all the frameworks you mentioned pre-installed for you.
The AMI itself is free of charge; you only pay for the EC2 instances you use.
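If you go the AMI route, provisioning is an ordinary EC2 launch. A minimal boto3 sketch, where the AMI ID, key pair, and instance type are placeholders you would replace with the current Deep Learning AMI ID for your region:

    # Sketch: launching an EC2 instance from the AWS Deep Learning AMI.
    # ImageId is a placeholder; look up the current DLAMI ID for your region.
    import boto3

    ec2 = boto3.resource('ec2', region_name='us-east-1')

    instances = ec2.create_instances(
        ImageId='ami-0123456789abcdef0',  # placeholder DLAMI ID
        InstanceType='p3.2xlarge',        # GPU instance; size to your needs
        MinCount=1,
        MaxCount=1,
        KeyName='my-key-pair',            # hypothetical key pair
    )
    print(instances[0].id)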

zookeeper vs redis server sync

I have a small cluster of servers I need to keep in sync. My initial thought was to have one server be the "master" and publish updates using Redis's pub/sub functionality (since we are already using Redis for storage), letting the other servers in the cluster, the slaves, poll for updates in a long-running task. This seemed like a simple way to keep everything in sync, but then I thought of the obvious issue: what if my "master" goes down?

That is where I started looking into techniques for making sure there is always a master, which led me to reading about ideas like leader election. Finally, I stumbled upon Apache Zookeeper (through the Python binding "pettingzoo"), which apparently takes care of a lot of the fault-tolerance logic for you. I may be able to write my own leader-election code, but I figure it wouldn't be close to as good as something proven and tested, like Zookeeper.
My main issue with using zookeeper is that it is just another component that I may be adding to my setup unnecessarily when I could get by with something simpler. Has anyone ever used redis in this way? Or is there any other simple method I can use to get the type of functionality I am trying to achieve?
More info about pettingzoo (slideshare)
I'm afraid there is no simple method to achieve high availability. This is usually tricky to set up and tricky to test. There are multiple ways to achieve HA, falling into two categories: physical clustering and logical clustering.
Physical clustering is about using hardware-, network-, and OS-level mechanisms to achieve HA. On Linux, you can have a look at Pacemaker, a full-fledged open-source solution shipping with all enterprise distributions. If you want to embed clustering capabilities directly in your application (in C), you may want to check the Corosync cluster engine (also used by Pacemaker). If you plan to use commercial software, Veritas Cluster Server is a well-established (but expensive) cross-platform HA solution.
Logical clustering is about using distributed algorithms (leader election, Paxos, etc.) to achieve HA without relying on specific low-level mechanisms. This is what things like Zookeeper provide.
Zookeeper is a consistent, ordered, hierarchical store built on top of the ZAB protocol (quite similar to Paxos). It is quite robust and can be used to implement some HA facilities, but it is not trivial, and you need to install the JVM on all nodes. For good examples, you may have a look at some recipes and the excellent Curator library from Netflix. These days, Zookeeper is used well beyond pure Hadoop contexts, and IMO, it is the best solution for building an HA logical infrastructure.
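Since the question mentions a Python binding, it is worth noting that in Python the kazoo library ships a ready-made leader-election recipe on top of Zookeeper. A minimal sketch, with placeholder hosts and paths:

    # Sketch: leader election with kazoo, a Python Zookeeper client.
    # Host list and election path are placeholders.
    from kazoo.client import KazooClient

    def lead():
        # Only the elected leader runs this; publish updates to the
        # slaves here. Returning gives up leadership.
        print('I am the master now')

    client = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181')
    client.start()

    # Blocks until this node wins the election, then calls lead().
    # If the current leader dies, a waiting candidate takes over.
    election = client.Election('/cluster/election', 'server-1')
    election.run(lead)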
Redis's pub/sub mechanism is not reliable enough to implement a logical cluster, because unread messages are simply lost (there is no queuing of items with pub/sub). To achieve HA for a collection of Redis instances, you can try Redis Sentinel, but it does not extend to your own software.
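To see why, consider the basic redis-py pattern below: a subscriber only receives messages published while it is connected, so anything published while a slave is down or reconnecting is simply gone (channel name is illustrative):

    # Sketch: Redis pub/sub with redis-py, illustrating its
    # fire-and-forget semantics. Channel name is illustrative.
    import redis

    r = redis.Redis(host='localhost', port=6379)
    pubsub = r.pubsub()
    pubsub.subscribe('cluster-updates')

    # Messages published before subscribe(), or while this process was
    # down, are never delivered: Redis does not queue pub/sub messages.
    for message in pubsub.listen():
        if message['type'] == 'message':
            print(message['data'])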
If you are ready to program in C, an HA framework which is often forgotten (but can be quite useful IMO) is the one that comes with BerkeleyDB. It is quite basic, but it supports off-the-shelf leader election and can be integrated into any environment. Documentation can be found here and here. Note: you do not have to store your data with BerkeleyDB to benefit from the HA mechanism (only the topology data, the same data you would put in Zookeeper).

Experiences with message based master-worker frameworks (Java/Python/.Net)

I am designing a distributed master-worker system which, from 10,000 feet, consists of:
Web-based UI
a master component, responsible for generating jobs according to a configurable set of algorithms
a set of workers running on regular PCs, an HPC cluster, or even the cloud
a digital repository
messaging based middleware
different categories of tasks, with running times ranging from under 1 s to ~6 hrs. Tasks are computation-heavy rather than data/IO-heavy. The volume of tasks is not expected to be great (as far as I can see now), probably maxing out around 100/min.
Strictly speaking, there is no need to move outside the Windows ecosystem, but I would be more comfortable with a cross-platform solution to keep my options open (NB: some tasks are Windows-only).
I have pretty much settled on RabbitMQ as the messaging layer, and Fedora Commons seems to be the most mature off-the-shelf repository. As for the master/worker logic, I am evaluating:
Java-based: Grails + Postgres + DOSGi, or GridGain with Zookeeper
Python-based: Django + Postgres + Celery
.NET-based: ASP.NET MVC + SQL Server + NServiceBus, with SharePoint or Zentity as the repository
I have looked at various IoC/DI containers, but I doubt they are really the best fit for a task-execution container, and they add extra layers/complexity. But maybe I'm wrong.
Currently I am leaning towards the Python solution (keep it lightweight), but I would be interested in any experiences/suggestions people have to share, particularly with the .NET stack. Open-source, scalability, and resilience features are plus points.
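For what it's worth, the Celery side of the Python stack is small; a hedged sketch of a worker task with RabbitMQ as the broker, where the broker URL and the task body are placeholders:

    # Sketch: a Celery task queue using RabbitMQ as the broker, as in
    # the Django + Postgres + Celery option above. Broker URL and task
    # body are placeholders.
    from celery import Celery

    app = Celery('jobs', broker='amqp://guest@rabbit-host//')

    @app.task
    def run_job(algorithm, params):
        # Computation-heavy work runs on whichever worker picks this up;
        # workers can live on regular PCs, the HPC cluster, or the cloud.
        return compute(algorithm, params)  # hypothetical compute function

    # The master enqueues work without knowing which worker will run it:
    #   run_job.delay('algo-x', {'n': 42})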
PS: A more advanced future requirement will be the ability for the user to connect directly to a running task (using a web UI) and influence its behaviour (real-time steering). A direct communication channel will be needed to do this (doing this over AMQP does not seem like a good idea).
Dirk
With respect to the master/worker logic and the Java option:
Nimble (see http://www.paremus.com/products/products_nimble.html) with its OSGi Remote Services stack might provide an interesting/agile pure-OSGi approach. You still have to decide on a specific distribution mechanism, but given that the use case is computationally heavy and data-light, using the Essence RMI transport that ships with Nimble's RSA, with a simple front-end load-balancer function, might work really well.
A good approach to the "direct communication channel" would be to leverage DDS, a low-latency publish/subscribe peer-to-peer messaging standard used in distributed command/control-type environments. I think there is a bare-bones OSS project somewhere, but we (Paremus) work with RTI in this area.
Hope the above is of background interest.

distributed caching on mono

I'm searching for a distributed caching solution on Mono, similar to Java's Terracotta and Infinispan. I want to use it as a level 2 cache for NHibernate. Velocity and SharedCache have no Mono support, and memcached isn't distributed nor does it have high availability.
Best Regards,
sirmak
You are looking for a more sophisticated data grid solution that will provide scaling and high availability; memcached, I find, is a bit too primitive for such requirements. I would advise looking into GigaSpaces XAP or VMware GemFire. Both are Java products that have .NET clients, and both are very strong. GigaSpaces may offer a bit more co-location capability.
I think you meant "replicated" instead of "distributed". Memcached is indeed distributed, but not replicated. However, you can make it replicated with this patch.
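The distinction is visible in the client API itself: with python-memcached, for example, the client hashes each key to exactly one server in the list, so the cache is spread across nodes but nothing is stored twice (addresses are placeholders):

    # Sketch: client-side distribution with python-memcached. Each key
    # hashes to exactly one of the listed servers (distributed), but no
    # key is stored on more than one (not replicated).
    import memcache

    mc = memcache.Client(['cache1:11211', 'cache2:11211', 'cache3:11211'])

    mc.set('user:42', 'some cached value')  # lands on one server only
    print(mc.get('user:42'))                # fetched from that same server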