Dominant Resource Fairness in YARN - hadoop-yarn

I would like to implement Dominant Resource Fairness (DRF) or other scheduling algorithms in apache yarn. Does anybody know how to implement it? Is there any source?
Cheers

Yes, you can refer to DominantResourceFairnessPolicy.java in the Fair Scheduler source for a reference implementation.
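The core idea behind DRF is small: each user's dominant share is the maximum of their per-resource shares, and the scheduler offers the next allocation to the user with the smallest dominant share. Here is a minimal, self-contained sketch of that comparison in plain Java (the class and method names are hypothetical, not YARN's API):

```java
// Minimal DRF sketch: a user's dominant share is the max of its per-resource
// shares, and the scheduler picks the user with the smallest dominant share.
// Class and method names are illustrative, not YARN's actual API.
public class DrfSketch {

    // usage[i] / capacity[i] for each resource type; the dominant share
    // is the largest of those ratios
    static double dominantShare(double[] usage, double[] capacity) {
        double max = 0.0;
        for (int i = 0; i < usage.length; i++) {
            max = Math.max(max, usage[i] / capacity[i]);
        }
        return max;
    }

    // index of the user who should receive the next allocation:
    // the one with the smallest dominant share
    static int nextUser(double[][] usages, double[] capacity) {
        int best = 0;
        for (int u = 1; u < usages.length; u++) {
            if (dominantShare(usages[u], capacity)
                    < dominantShare(usages[best], capacity)) {
                best = u;
            }
        }
        return best;
    }
}
```

For example, with a cluster of 9 CPUs and 18 GB, a user holding {3 CPUs, 2 GB} has dominant share 3/9, while a user holding {1 CPU, 8 GB} has dominant share 8/18, so the first user is offered the next container. DominantResourceFairnessPolicy.java wraps essentially this comparison in a Comparator over schedulable queues.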

Related

Orleans grains similar to NServiceBus Sagas?

I just watched the video of how Orleans was used to build Halo 4's distributed cloud services:
http://channel9.msdn.com/Events/Build/2014/3-641
I suggest you read through both sets of documentation and see which features most closely match your requirements:
http://docs.particular.net/nservicebus/sagas-in-nservicebus
https://dotnet.github.io/orleans/Documentation/Introduction.html
After going through Richard's course on Pluralsight, I think the two overlap in functionality. In my understanding, grains are virtual, single-threaded, and live in a distributed environment such as the cloud.

What are the main differences between MapReduce and Apache Hama?

Hi, I am finding it difficult to compare MapReduce with Hama. I understand that Hama uses the Bulk Synchronous Parallel (BSP) model and that its worker nodes can communicate with one another, whereas in Apache Hadoop the worker nodes only communicate with the namenode. Is that correct? If so, I don't understand the benefits Hama would have over standard MapReduce in Hadoop. Thanks!
Can you go through this PDF link? It explains the difference between MapReduce and BSP (Apache Hama offers a Bulk Synchronous Parallel computing engine).
The MapReduce framework has been used to solve a number of non-trivial problems in academia, and putting MapReduce on strong theoretical foundations is crucial to understanding its capabilities. Hama, by contrast, uses the BSP model of computation, underlining the relevance of BSP to modern parallel algorithm design and defining a subclass of BSP algorithms that can be efficiently implemented in MapReduce.
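To make the BSP model concrete: each worker computes locally, sends messages to its peers, then waits at a barrier before the next superstep, which is the role of BSPPeer.sync() in Hama. This toy example simulates that pattern with plain Java threads and a CyclicBarrier, no Hama dependency; it is a sketch of the model, not Hama's API:

```java
import java.util.*;
import java.util.concurrent.*;

// Toy BSP superstep in plain Java: every worker broadcasts its value to all
// peers, waits at a barrier (analogous to Hama's BSPPeer.sync()), then folds
// the messages it received. In MapReduce, by contrast, workers can only
// exchange data through the shuffle between the map and reduce phases.
public class BspToy {

    // compute the global max of the workers' values in one superstep
    static int globalMax(int[] values) throws Exception {
        int n = values.length;
        // one message inbox per worker
        List<Queue<Integer>> inbox = new ArrayList<>();
        for (int i = 0; i < n; i++) inbox.add(new ConcurrentLinkedQueue<>());
        CyclicBarrier barrier = new CyclicBarrier(n);
        int[] result = new int[n];

        ExecutorService pool = Executors.newFixedThreadPool(n);
        List<Future<Object>> futures = new ArrayList<>();
        for (int w = 0; w < n; w++) {
            final int me = w;
            futures.add(pool.submit(() -> {
                // superstep 1: send my value to every peer
                for (int p = 0; p < n; p++) inbox.get(p).add(values[me]);
                barrier.await();          // barrier sync between supersteps
                // superstep 2: fold the received messages
                int max = Integer.MIN_VALUE;
                for (int v : inbox.get(me)) max = Math.max(max, v);
                result[me] = max;
                return null;
            }));
        }
        for (Future<Object> f : futures) f.get();  // propagate any failure
        pool.shutdown();
        return result[0];
    }
}
```

The point of the comparison: an iterative algorithm (PageRank, graph traversal) is one barrier per superstep in BSP, whereas in MapReduce each iteration is a full job with its data rewritten to HDFS, which is where Hama's advantage lies.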

How is batch processing done in Hazelcast?

I am just a beginner in Hazelcast. I am still learning the basics of it. How is batch processing being done in Hazelcast? Is there any related architecture for it?
See the Hazelcast ExecutorService component:
http://www.hazelcast.com/docs/2.5/manual/single_html/#ExecutorService

inteldrmfb api?

I have a kernel 2.6.31 booting from a USB stick, using Intel 915-based KMS to get to graphics mode. It appears to be setting itself to the native resolution, and it's booting nicely into a framebuffer console with a beautiful Tux logo!
Question is, how do I access the inteldrmfb? How do I get it into /dev? Will udev do this for me?
What is the API for programming the framebuffer directly?
Thanks,
FM
Try the directfb library?
edit:
The kernel API is documented in linux/Documentation/fb. See Documentation/fb/framebuffer.txt in the kernel git tree, for starters. Doesn't seem to have changed for a long time. Probably still accurate. Kernel<->user APIs tend to be stable.
http://www.linux-fbdev.org/HOWTO/index.html may also be useful, but probably not as useful as the kernel docs.
Probably, as you say, the library source would be the best reference on how to do things.
If you're not seeing /dev/fb0, even though you have the module loaded, then maybe you need to configure udev for it? Or just mknod yourself.

distributed caching on mono

I'm searching for a distributed caching solution on Mono similar to Java's Terracotta and Infinispan. I want to use it as a second-level cache for NHibernate. Velocity and SharedCache have no Mono support, and memcached isn't distributed, nor does it offer high availability.
Best Regards,
sirmak
You are looking for a more sophisticated data grid solution that will provide scaling and high availability; memcached, I find, is a bit too primitive for such requirements. I would advise looking into GigaSpaces XAP or VMware GemFire. Both are Java products with very strong .NET clients. GigaSpaces may offer a bit more co-location capability.
I think you meant "replicated" instead of "distributed". Memcached is indeed distributed, but not replicated. However, you can make it replicated with this patch.