Replicated message queue

I am looking for a message queue which would replicate messages across a cluster of servers. I am aware that this will cause a performance hit, but that's what the requirements are - message persistence is very important.
The replication can be asynchronous, but it should be there - if there's a large backlog of messages waiting for processing, they shouldn't be lost.
So far I haven't managed to find anything among the well-known MQs. HornetQ, for example, supported message replication in 2.0, but it seems to have been removed in 2.2. RabbitMQ doesn't replicate messages at all, etc.
Is there anything out there that could meet my requirements?

There are at least three ways of tackling this that come to mind, depending upon how robust you need the solution to be.
One: pick any messaging tech, then replicate your disk-storage. Using something like DRBD you can have the file-backed storage copied to another machine under the covers. If your primary box dies, you should be able to restart on your second machine from the replicated files.
Two: keep looking. There are various commercial systems that definitely do this; two such (no financial interest on my part) are Informatica Ultra Messaging (formerly 29West) and Solace. These are commonly used in the financial community.
Three: build your own. ZeroMQ is one such toolkit that you could use to roll your own system from pre-built messaging blocks. Even a system that does not officially support replication could fairly easily be configured to publish all messages to two queues (a rough sketch of that idea follows below). Your reader would have to drain both somehow, so this may well be a non-starter, but it is possible in any case.
Overall: do test your performance assumptions, as all of these will have various performance implications in various scenarios.
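To make the two-queue idea concrete, here is a minimal sketch using pyzmq (ZeroMQ's Python binding). The endpoints and file names are illustrative assumptions, and plain PUB/SUB drops messages when no subscriber is connected, so treat this as the shape of the approach rather than a durability guarantee by itself.

```python
# A minimal sketch of "publish everything to two queues" with pyzmq.
# Endpoints and file names are hypothetical; ZeroMQ PUB sockets drop
# messages when no subscriber is connected, so this shows the pattern,
# not a complete durability solution.
import zmq

def publisher(endpoint="tcp://*:5556"):
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(endpoint)
    return pub          # every connected mirror receives every message sent here

def mirror(endpoint="tcp://primary-host:5556", logfile="mirror.log"):
    """Run one of these on each backup machine."""
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect(endpoint)
    sub.setsockopt(zmq.SUBSCRIBE, b"")      # subscribe to everything
    with open(logfile, "ab") as log:
        while True:
            msg = sub.recv()                # blocks for the next message
            log.write(msg + b"\n")          # naive file-backed copy
            log.flush()
```

Running two mirror processes on two machines gives you two independent copies of the stream; recovering from and de-duplicating after a failure is still your problem, which is why this may well be the "non-starter" mentioned above.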

Amazon SQS is designed with this very thing in mind, but because of the consistency model (which is a part of messaging anyway), you're responsible for de-duplicating messages on the consumer side. Granted, SQS may be somewhat slow and the costs can add up for lots of messages, but if you want to guarantee that no messages are lost, then it's a pretty solid way to go.
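As a hedged illustration of that consumer-side de-duplication, here is a small boto3 sketch; the queue URL is a placeholder, and the in-memory "seen" set stands in for whatever durable store you would really use.

```python
# Hypothetical consumer-side de-duplication for SQS with boto3.
# The queue URL is a placeholder; a real system would persist seen
# message IDs (e.g. in a database) instead of a process-local set.
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder
sqs = boto3.client("sqs")
seen_ids = set()

def process(body):
    print("handling:", body)                 # stand-in for real work

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)       # long polling
    for msg in resp.get("Messages", []):
        if msg["MessageId"] not in seen_ids:             # drop at-least-once duplicates
            seen_ids.add(msg["MessageId"])
            process(msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])
```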

Newer Kafka (0.8.1 and later) offers replication!
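A small sketch of leaning on that replication from the producer side, assuming the kafka-python client (which postdates the 0.8.1-era answer) and a topic that was created with a replication factor greater than one; acks="all" makes the send wait until the in-sync replicas have the message.

```python
# Assumes kafka-python and a topic (here "durable-events") created with
# replication-factor > 1 on the broker side; names are placeholders.
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         acks="all")            # wait for the in-sync replicas
future = producer.send("durable-events", b"payload")
metadata = future.get(timeout=10)               # raises if the write wasn't acknowledged
producer.flush()
```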

Related

How to do performance testing of RabbitMQ Cluster to do further fine tuning?

I have created a RabbitMQ cluster which is successfully queuing messages generated by the application. I need to do performance testing of the cluster to find out its overall efficiency and make decisions on further fine-tuning to enhance performance. We tried the PerfTest Java tool but could not achieve much with it.
I guess the questions begin with: which interface are you looking to test? That will decide which tool(s) you can use, since they have to support that interface.
Are you looking to both push and pop?
How many queues?
How many producers and how many consumers? Will you create a slight vacuum with the consumers, so that the queue set stays always or nearly empty?
How will you define efficiency? Is it defined by the number of items in the queue, the time to push or pop from the queue, or some combination of the previous?
And so on - once those are answered, even a small script can measure raw throughput against the cluster; a rough sketch follows below.
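For example, here is a rough, hedged throughput sketch using pika (a Python RabbitMQ client), as an alternative or complement to PerfTest. The host, queue name, message count, and message size are assumptions, and it only measures one producer and one consumer on one queue.

```python
# Rough publish/consume throughput measurement with pika.
# Host, queue name, message count and size are placeholders.
import time
import pika

params = pika.ConnectionParameters(host="localhost")
conn = pika.BlockingConnection(params)
ch = conn.channel()
ch.queue_declare(queue="bench", durable=True)

N, body = 10_000, b"x" * 1024                     # 10k messages of 1 KiB

start = time.time()
for _ in range(N):
    ch.basic_publish(exchange="", routing_key="bench", body=body)
print(f"publish: {N / (time.time() - start):.0f} msg/s")

start, consumed = time.time(), 0
while consumed < N:
    method, props, payload = ch.basic_get(queue="bench", auto_ack=True)
    if method is not None:
        consumed += 1
print(f"consume: {N / (time.time() - start):.0f} msg/s")

conn.close()
```

Run it against different nodes, queue counts, and durability settings to see how each choice moves the numbers.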

Architecture of distributed tasks execution with priorities

We are looking into creating a distributed system for task execution, where the tasks have priorities in .NET (C#). There are a lot of options, I would like to get your take on it. The options & their disadvantages are:
1) Amazon's SWF (Simple Workflow) - in .NET we can't use a framework such as Java's Flow Framework, which simplifies things; this means a lot of boilerplate code. In addition, this offering from Amazon doesn't seem to be very popular (so: no community support, and it might eventually disappear)
2) Building our own on top of a queuing system
2.a) SQS - not really a FIFO, and using 2 queues (normal and high priority) won't give us granular control over the priorities (we might be able to live with that)
2.b) RabbitMQ - administrative overhead (setting it up, configuring it in cluster mode for reliability, etc)
3) I have received another suggestion to use an "event driven" approach without queues. I can't see how that's possible - maybe someone can clarify it for me? (Oh, and is it related to a technology called Akka (actor-based)?)
Thank you
SQS is probably going to be the simplest - very little code is required, the cost is extremely low, and the setup time is minimal.
If 2 queues and high/low priority are not enough, then create 3 queues, or 5, or 10 - you can be as granular as you need to be.
You can have multiple worker machines scanning all the queues in priority order, or have some machines dedicated to processing the high-priority queues; those machines could be bigger/faster if you want to process even more quickly.
Another option is to have separate auto-scaling policies that spin up more/faster machines based on a small increase in the length of the high-priority queues, but only scale up smaller/cheaper machines when the low-priority queue gets very long... lots of options to choose from to fine-tune your solution.
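A hedged sketch of the "scan queues in priority order" idea with boto3; the queue URLs are placeholders and the loop is deliberately minimal.

```python
# Hypothetical priority polling across several SQS queues with boto3.
# Queue URLs are placeholders; highest priority is listed first.
import time
import boto3

sqs = boto3.client("sqs")
QUEUES_BY_PRIORITY = [
    "https://sqs.us-east-1.amazonaws.com/123456789012/tasks-high",
    "https://sqs.us-east-1.amazonaws.com/123456789012/tasks-medium",
    "https://sqs.us-east-1.amazonaws.com/123456789012/tasks-low",
]

def handle(body):
    print("working on:", body)                # stand-in for the real task

def fetch_and_run_one():
    """Always drain higher-priority queues before looking at lower ones."""
    for url in QUEUES_BY_PRIORITY:
        resp = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1)
        for msg in resp.get("Messages", []):
            handle(msg["Body"])
            sqs.delete_message(QueueUrl=url, ReceiptHandle=msg["ReceiptHandle"])
            return True
    return False                              # every queue was empty

while True:
    if not fetch_and_run_one():
        time.sleep(1)                         # back off when idle
```

Dedicated high-priority workers would simply use a shorter QUEUES_BY_PRIORITY list containing only the high-priority URL.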

What pub/sub protocols have subscriber based data propagation?

I'm trying to evaluate different pub/sub messaging protocols on their ability to horizontally scale without producing unnecessary cross chatter.
My architecture will have NodeJS servers with web socket clients connected. I plan on using a consistent hashing based router to direct clients to servers based off of the topics they're interested in subscribing to. This would mean that for a given topic, only a subset of servers will have clients subscribing to that topic. Messages will then be published to a pub/sub broker, which would be responsible for fanning out that data to servers that have subscribers.
The situation I want to avoid is one in which every broker receives every request and the network becomes saturated. This is a clear issue with scaling Redis Pub/Sub. Adding servers shouldn't create an n-squared problem.
The number of clients on the pub/sub protocol would be the number of servers. Ideally, each server would be able to have a local broker to fan out data efficiently to multiple NodeJS processes, so as to avoid unnecessary network bandwidth. In most cases, for a given topic, all subscribers would be on the same server.
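(For what it's worth, a minimal sketch of the topic-to-server mapping described above - a bare consistent-hash ring in Python, with hypothetical server names and without virtual nodes or membership changes - looks something like this.)

```python
# A bare consistent-hash ring mapping topics to servers. Server names
# are placeholders; a production ring would add virtual nodes and
# handle servers joining and leaving.
import bisect
import hashlib

SERVERS = ["node-a", "node-b", "node-c"]

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

ring = sorted((_hash(name), name) for name in SERVERS)
points = [point for point, _ in ring]

def server_for_topic(topic: str) -> str:
    idx = bisect.bisect(points, _hash(topic)) % len(ring)
    return ring[idx][1]

# server_for_topic("stock.AAPL") always lands on the same node, so only
# that node's broker needs the "stock.AAPL" subscription.
```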
What pub/sub protocols offer this sort of topic based data propagation?
The protocols I'm evaluating are: MQTT, RabbitMQ, ZMQ, nanomsg. This list isn't exhaustive, and SaaS options are acceptable.
The delivery-guarantee constraints are easy: at-most-once or at-least-once are both adequate. Acknowledgment isn't important. Event order isn't important. We're looking for fire-and-forget, with an emphasis on horizontal scalability.
First, let me address a risk of misunderstanding.
In many cases, similar words do not mean the same thing - all the more so with abbreviations.
Having said that, let me review PUB/SUB as a terminus technicus.
Martin SUSTRIK's and Pieter HINTJENS' teams at iMatix & 250bpm have developed a few smart messaging frameworks over the past decades, so these guys know a lot about the architecture benefits, constraints and implementation compromises.
That background helps me state that these fathers of modern messaging do not consider PUB/SUB to be a protocol.
It is, at least in nanomsg & ZeroMQ, rather a smart, distributed, scalability-focused Formal Communication Pattern - i.e. a behaviour emulated by all involved parties.
Both ZeroMQ and nanomsg are broker-less.
In this sense, asking "what protocols" does not have solid grounds.
Let's start from the "data propagation" side
In the initial ZeroMQ implementations, PUB had no other choice but to distribute all messages to all SUB-s that were in a connected state. Pieter HINTJENS explained this design decision numerous times: the actual subscription-based filtering was performed on the SUB side (messages were distributed in a 1:all-connected manner).
PUB-side subscription-based filtering came much later; you can check the revision history to find the version since which this has avoided 1:all-connected broadcasts of data.
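For a concrete picture of what that subscription filter is, here is a tiny pyzmq sketch; the endpoint and topic prefix are illustrative, and whether the filtering happens on the PUB side or the SUB side depends on your ZeroMQ version, as described above.

```python
# Subscription filtering in ZeroMQ PUB/SUB with pyzmq. The endpoint and
# topic names are illustrative; messages are matched by byte prefix.
import zmq

ctx = zmq.Context.instance()

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5557")

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5557")
sub.setsockopt(zmq.SUBSCRIBE, b"orders.")   # prefix filter, not a regex

# pub.send(b"orders.eu payload")    -> delivered to this subscriber
# pub.send(b"metrics.cpu payload")  -> filtered out (on the PUB side in newer versions)
```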
Similarly, you may check the nanomsg remarks from Martin SUSTRIK, who has given many in-depth posts on the performance improvements designed into his fabulous nanomsg project.
Scalability as priority No. 1
If scalability is the focus of your post, and if this were a serious Project, my question number one would be: what is the quantitative metric for comparing feasible candidates against that Project goal - i.e. how is feasibility translated into a utility function that scores candidates across all the parallel attributes your Project is interested in?

Can I use lpop/rpop to create a simple queue system with Redis?

I tried several message/job queue systems but they all seem to add unnecessary complexity and I always end up with the queue process dying for no reason and cryptic log messages.
So now I want to make my own queue system using Redis. How would you go about doing this?
From what I have read, Redis is good because it has lpop and rpush methods, and also a pub/sub system that could be used to notify the workers that there are new messages to be consumed. Is this correct?
Yes, you can. In fact, there are a number of packages which do exactly this, including Celery and RQ for Python, Resque for Ruby, and ports of Resque to Java (Jesque) and JavaScript (Coffee-resque).
There's also RestMQ, which is implemented in Python but designed for use with any RESTful system.
There are MANY others.
Note that Redis LISTs are about the simplest possible network queuing system. However, making things robust over the simple primitives offered by Redis is non-trivial (and may be impossible for some values of "robust" - at least on the server side). So many of these libraries for using Redis as a queue add features and protocols intended to minimize the chances of lost messages while ensuring "at-least-once" semantics. Many of these use the RPOPLPUSH Redis primitive with some other processing on the secondary LIST to handle acknowledgement of completed work and re-dispatch of "lost" units. (Consider the case where some client has "popped" a work unit off your queue and died before the work results were posted; how do you detect and mitigate that scenario?)
In some cases people have cooked up elaborate bits of server-side (Redis Lua EVAL) scripting to handle more reliable queuing. For example, implementing something like RPOPLPUSH but replacing the "push" with a ZADD (thus adding the item and a timestamp to a "sorted set" representing work that's "in progress"). In such systems the work is completed with a ZREM, and "lost" work is found by scanning with ZRANGEBYSCORE.
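A minimal sketch of the RPOPLPUSH pattern described above, using redis-py; the key names are illustrative, and recovery of items stranded on the processing list is left as a comment because that is exactly the non-trivial part being warned about.

```python
# Hypothetical key names; requires a running Redis and the redis-py client.
import redis

r = redis.Redis()

def handle(item: bytes):
    print("processing:", item)                   # stand-in for real work

def enqueue(item: bytes):
    r.lpush("queue:pending", item)

def work_one() -> bool:
    # Atomically move one item onto an "in progress" list.
    item = r.rpoplpush("queue:pending", "queue:processing")
    if item is None:
        return False
    handle(item)
    r.lrem("queue:processing", 1, item)          # acknowledge completed work
    return True

# A reaper process would periodically re-queue items that have sat on
# queue:processing too long (e.g. by tracking timestamps in a sorted set,
# as the ZADD/ZREM/ZRANGEBYSCORE variant above does).
```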
Here are some thoughts on the topic of implementing a robust queuing system by Salvatore Sanfilippo (a.k.a. antirez, the author of Redis): "Adventures in message queues", where he discusses the considerations and forces which led him to work on Disque.
I'm sure you'll find some detractors who argue that Redis is a poor substitute for a "real" message bus and queuing system (such as RabbitMQ). Salvatore says as much in his blog entry, and I'd welcome others here to spell out cogent reasons for preferring such systems.
My advice is to start with Redis during your early prototyping; but to keep your use of the system abstracted into some consolidated bit of code. Celery, among others, actually does this for you. You can start using Celery with a Redis backend and readily replace the backend with RabbitMQ or others with little effect on the bulk of your code.
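To illustrate that abstraction with Celery (since the answer mentions it), here is a hedged sketch; the broker URLs and the task are placeholders, and the point is only that swapping Redis for RabbitMQ is a one-line configuration change.

```python
# The task code is unaffected by which broker URL is configured.
from celery import Celery

# Prototype with Redis as the broker...
app = Celery("jobs", broker="redis://localhost:6379/0")
# ...then later switch by changing only the URL, e.g.:
# app = Celery("jobs", broker="amqp://guest:guest@localhost:5672//")

@app.task
def resize_image(path: str) -> str:
    return path          # placeholder work

# Callers simply do: resize_image.delay("/tmp/photo.jpg")
```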
For a catalog of alternatives, consider perusing: http://queues.io/

zookeeper vs redis server sync

I have a small cluster of servers I need to keep in sync. My initial thought on this was to have one server be the "master" and publish updates using Redis's pub/sub functionality (since we are already using Redis for storage), letting the other servers in the cluster, the slaves, poll for updates in a long-running task. This seemed to be a simple method to keep everything in sync, but then I thought of the obvious issue: what if my "master" goes down? That is where I started looking into techniques to make sure there is always a master, which led me to reading about ideas like leader election. Finally, I stumbled upon Apache Zookeeper (through a Python binding, "pettingzoo"), which apparently takes care of a lot of the fault-tolerance logic for you. I may be able to write my own leader election code, but I figure it wouldn't be nearly as good as something that has been proven and tested, like Zookeeper.
My main issue with using zookeeper is that it is just another component that I may be adding to my setup unnecessarily when I could get by with something simpler. Has anyone ever used redis in this way? Or is there any other simple method I can use to get the type of functionality I am trying to achieve?
More info about pettingzoo (slideshare)
I'm afraid there is no simple method to achieve high availability. This is usually tricky to set up and tricky to test. There are multiple ways to achieve HA, which can be classified into two categories: physical clustering and logical clustering.
Physical clustering is about using hardware, network, and OS-level mechanisms to achieve HA. On Linux, you can have a look at Pacemaker, which is a full-fledged open-source solution shipping with all enterprise distributions. If you want to directly embed clustering capabilities in your application (in C), you may want to check the Corosync cluster engine (also used by Pacemaker). If you plan to use commercial software, Veritas Cluster Server is a well-established (but expensive) cross-platform HA solution.
Logical clustering is about using fancy distributed algorithms (like leader election, Paxos, etc.) to achieve HA without relying on specific low-level mechanisms. This is what things like Zookeeper provide.
Zookeeper is a consistent, ordered, hierarchical store built on top of the ZAB protocol (quite similar to Paxos). It is quite robust and can be used to implement some HA facilities, but it is not trivial, and you need to install the JVM on all nodes. For good examples, you may have a look at some recipes and the excellent Curator library from Netflix. These days, Zookeeper is used well beyond pure Hadoop contexts, and IMO it is the best solution for building an HA logical infrastructure.
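Since the question mentions Python, here is a short sketch of leader election using kazoo (an assumed alternative Python ZooKeeper client to the pettingzoo binding mentioned above); the hosts, election path, and identifier are placeholders.

```python
# Requires a reachable ZooKeeper ensemble; values below are placeholders.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

def lead():
    # Only the currently elected leader runs this function.
    print("I am the master: publishing updates for the slaves...")

election = zk.Election("/myapp/election", identifier="server-1")
election.run(lead)      # blocks, contends for leadership, then calls lead()
```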
Redis pub/sub mechanism is not reliable enough to implement a logical cluster, because unread messages will be lost (there is no queuing of items with pub/sub). To achieve HA of a collection of Redis instances, you can try Redis Sentinel, but it does not extend to your own software.
If you are ready to program in C, an HA framework which is often forgotten (but can be quite useful IMO) is the one coming with BerkeleyDB. It is quite basic but supports off-the-shelf leader election, and it can be integrated into any environment. Documentation can be found here and here. Note: you do not have to store your data with BerkeleyDB to benefit from the HA mechanism (only the topology data - the same data you would put in Zookeeper).