NIO disadvantages in ActiveMQ

I've been working on configuring an ActiveMQ broker, and one thing that confuses me is that everything I've read describes NIO as "a good choice if you need to scale" or "something to look at if you need more speed", so my question is: why don't they just say "always use NIO"? All I've read is advantages, but presumably there are reasons not to use it (otherwise it would just be the default). What are they?

Complexity. It is usually simpler to code for one thread per connection.
Also, I think NIO may be slightly slower in the small-volume case (1, 2, or 3 connections). Generally you wouldn't design a system to perform well in the small-volume case, but if you know you are never going to have more than 2 connections for an application, maybe NIO is overkill, or even actually harmful.

The NIO transport scales better because it is more efficient and does not spawn a thread per connection. Also, the NIO transport extends the TCP transport, so all the options for the underlying socket still apply. To my knowledge, there is no downside to using NIO because overall it should be more efficient than the TCP transport. There's no good reason that I can recall for NIO not being the default transport.
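For reference, switching the broker to NIO is just a change to the transport connector URI in activemq.xml; 61616 is the usual default port, and the connector name here is illustrative:

    <transportConnectors>
        <transportConnector name="nio" uri="nio://0.0.0.0:61616"/>
    </transportConnectors>

Clients keep connecting with a plain tcp:// URL; NIO only changes how the broker services the sockets.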

By adding more threads you increase complexity, and with it the risk of potential non-delivery of messages.
While you increase the number of messages that can be handled in parallel, you may see a delay in the transfer of each individual message.
More complexity brings less reliability, so keep an eye on bugs like this one, which has been unresolved since 2019:
https://issues.apache.org/jira/browse/AMQ-7343

Related

Reliable transport protocol on top of UDP

UDP has one good feature: it is connectionless. But it has many bad features: packets can be lost, packets can arrive multiple times, and there is no packet ordering, so packet 2 can arrive before packet 1. How do you keep the good and remove the bad? Are there any good implementations that provide a reliable transport protocol on top of UDP, so that we are still connectionless but without the mentioned problems? One example of what can be done with it is mosh.
What you describe as bad isn't really bad, depending on the context.
For example, UDP is used a lot in realtime streaming, where delivery confirmation and resending are useless.
That being said, there are a few implementations that you might want to look at:
ENet (http://enet.bespin.org/)
RUDP (https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol)
UDT (https://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol)
I work in an embedded context:
CoAP (https://en.wikipedia.org/wiki/Constrained_Application_Protocol) also implements a lot of these features, so it's worth a look.
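To make the reliability mechanisms concrete, here is a minimal stop-and-wait sketch in Java of the acknowledge-and-retransmit logic that protocols like RUDP build on; the class name, timeout, and framing are all illustrative, and real protocols add windowing and congestion control:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketTimeoutException;
    import java.nio.ByteBuffer;

    public class StopAndWaitSender {
        // Sends one payload reliably: prefix it with a sequence number,
        // then retransmit until the receiver echoes that number back as an ACK.
        public static void send(byte[] payload, InetAddress host, int port, int seq)
                throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(500);  // retransmit if no ACK within 500 ms
                ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
                buf.putInt(seq).put(payload);
                DatagramPacket packet =
                        new DatagramPacket(buf.array(), buf.capacity(), host, port);
                byte[] ackBytes = new byte[4];
                DatagramPacket ack = new DatagramPacket(ackBytes, ackBytes.length);
                while (true) {
                    socket.send(packet);            // (re)send the numbered datagram
                    try {
                        socket.receive(ack);        // wait for an ACK
                        if (ByteBuffer.wrap(ackBytes).getInt() == seq) {
                            return;                 // receiver confirmed this packet
                        }
                    } catch (SocketTimeoutException e) {
                        // packet or ACK was lost: loop and retransmit
                    }
                }
            }
        }
    }

The sequence number is what lets the receiver discard duplicates and detect reordering; the receiver (not shown) ACKs every packet it sees but delivers each sequence number only once.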
What is your reason for not choosing TCP?

Distributed Locking for Device

We have a distributed, clustered WebLogic setup.
Our use case is that whenever a device contacts our system, we need to compute parameters and provision them to the device. There can be concurrent requests from devices, and we can't reject any request, so we are going with an async processing approach.
The problem we are facing is that whenever a device contacts us, we need to lock the source device as well as its neighbor devices in order to provision optimized parameters.
Since we have a clustered system, we require a distributed locking system which provides high performance.
Could you suggest any framework in Java for distributed locking which suits our requirements?
Regards,
Sakumar
Typically, when you sense a need for distributed locking, that indicates a design flaw. Distributed locking is usually either slow or unsafe. It's slow when done correctly because strong consistency guarantees are required to ensure two processes can't hold the same lock at the same time, and unsafe when consistency constraints are relaxed in favor of performance gains.
Often you can find a better solution than distributed locking by doing something like consistent hashing to ensure related requests are handled by the same process. Similarly, leader election can be a more performant alternative to distributed locking if you can elect a leader and route related requests to the leader. But certainly there must be some cases where these solutions are not possible, and so I'd better answer your question...
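To make the consistent hashing alternative concrete, here is a minimal sketch that maps each device ID to one owning node, so all requests for that device are serialized on a single process without any distributed lock; the node names and the virtual-node count are made up:

    import java.util.SortedMap;
    import java.util.TreeMap;

    public class Ring {
        private final SortedMap<Integer, String> ring = new TreeMap<>();

        public Ring(String... nodes) {
            for (String node : nodes) {
                // several virtual points per node smooth the distribution
                for (int i = 0; i < 100; i++) {
                    ring.put((node + "#" + i).hashCode(), node);
                }
            }
        }

        // All requests for the same deviceId land on the same node.
        public String ownerOf(String deviceId) {
            SortedMap<Integer, String> tail = ring.tailMap(deviceId.hashCode());
            return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
        }
    }

A node that owns a device can then process its requests on one thread (or with a cheap local lock), which is far less costly than coordinating across the cluster.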
Assuming fault tolerance is a requirement, and considering the performance and safety concerns mentioned above, Hazelcast may be a good option for your use case. It's a fast, embedded, in-memory data grid that has a distributed Lock implementation. It is often nice to use an embedded system like Hazelcast rather than relying on another cluster, but Hazelcast does have the potential for consistency issues in certain partition scenarios, and that could result in two processes acquiring the same lock. TBH I've heard more than a few complaints about locks in Hazelcast, but no doubt others have had positive experiences.
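For illustration, acquiring a distributed lock looks roughly like this with the Hazelcast 3.x API; the lock name is made up, and newer Hazelcast versions moved locks into the CP Subsystem:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.ILock;

    public class DeviceLockExample {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            ILock lock = hz.getLock("device-42");  // one lock per device, keyed by ID
            lock.lock();
            try {
                // compute and provision parameters for the device and its neighbors
            } finally {
                lock.unlock();
            }
        }
    }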
Alternatively, ZooKeeper is likely the most common system for distributed locking in Java. However, ZooKeeper tends to be significantly slower for writes than reads since it's quorum-based (though it is relatively fast and very mature), and locking is a write-heavy workload. Also, in contrast to Hazelcast, one major downside to ZooKeeper is that it's a separate cluster and thus a dependency on another external system. I still think ZooKeeper's stability and maturity make it worth a look.
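With ZooKeeper you would typically not write the lock recipe by hand but use Apache Curator's InterProcessMutex. A sketch, with a placeholder connect string and lock path:

    import java.util.concurrent.TimeUnit;
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class ZkLockExample {
        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();

            InterProcessMutex lock = new InterProcessMutex(client, "/locks/device-42");
            if (lock.acquire(5, TimeUnit.SECONDS)) {  // bound the wait to avoid pile-ups
                try {
                    // provision parameters while holding the lock
                } finally {
                    lock.release();
                }
            }
        }
    }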
There don't currently seem to be many proven projects in between Hazelcast (an embedded, eventually consistent framework) and ZooKeeper (a strongly consistent external service), which is why (disclaimer: self-promotion incoming) I created Atomix to provide safe distributed locking and leader elections as an embedded system for Java. It's a decent option if you need a framework like Hazelcast that has the same (actually stronger) consistency guarantees as ZooKeeper.
If performance and scalability are paramount and you're expecting high rates of requests, you will likely have to sacrifice consistency and look at Hazelcast or something similar.
Alternatively, if fault tolerance is not a requirement (I don't think you specified that it is), you can even just use a Redis instance :-)

Can I use lpop/rpop to create a simple queue system with Redis?

I tried several message/job queue systems, but they all seem to add unnecessary complexity, and I always end up with the queue process dying for no apparent reason and leaving only cryptic log messages.
So now I want to make my own queue system using Redis. How would you go about doing this?
From what I have read, Redis is good because it has lpop and rpush methods, and also a pub/sub system that could be used to notify the workers that there are new messages to be consumed. Is this correct?
Yes you can. In fact there are a number of packages which do exactly this, including Celery and RQ for Python, resque for Ruby, and ports of resque to Java (Jesque) and JavaScript (Coffee-resque).
There's also RestMQ, which is implemented in Python but designed for use with any RESTful system.
There are MANY others.
Note that Redis LISTs are about the simplest possible network queuing system. However, making things robust over the simple primitives offered by Redis is non-trivial (and may be impossible for some values of "robust", at least on the server side). So many of these libraries for using Redis as a queue add features and protocols intended to minimize the chances of lost messages while ensuring "at-most-once" semantics. Many of these use the RPOPLPUSH Redis primitive with some other processing on the secondary LIST to handle acknowledgement of completed work and re-dispatch of "lost" units. (Consider the case where some client has "popped" a work unit off your queue and died before the work results were posted; how do you detect and mitigate that scenario?)
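As a rough sketch of that RPOPLPUSH pattern using the Jedis client (the key names here are made up): a producer does jedis.lpush("jobs", payload), and each worker loops like this:

    import redis.clients.jedis.Jedis;

    public class ReliableWorker {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                while (true) {
                    // Atomically move one job onto a backup list;
                    // timeout 0 means block until a job arrives.
                    String job = jedis.brpoplpush("jobs", "jobs:processing", 0);
                    try {
                        process(job);
                        // Acknowledge: drop the completed job from the backup list.
                        jedis.lrem("jobs:processing", 1, job);
                    } catch (Exception e) {
                        // Leave the job on jobs:processing so a reaper can re-queue it.
                    }
                }
            }
        }

        static void process(String job) { /* do the work */ }
    }

If a worker dies mid-job, the job survives on jobs:processing, which is exactly the re-dispatch problem described above.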
In some cases people have cooked up elaborate bits of server-side scripting (Redis Lua EVAL) to handle more reliable queuing. For example, implementing something like RPOPLPUSH but replacing the "push" with a ZADD (thus adding the item and a timestamp to a "sorted set" representing work that's "in progress"). In such systems the work is completed with a ZREM, and "lost" work is found by scanning the sorted set with ZRANGEBYSCORE.
Here are some thoughts on the topic of implementing a robust queuing system from Salvatore Sanfilippo (a.k.a. antirez, the author of Redis): Adventures in message queues, where he discusses the considerations and forces which led him to work on Disque.
I'm sure you'll find some detractors who argue that Redis is a poor substitute for a "real" message bus and queuing system (such as RabbitMQ). Salvatore says as much in his blog entry, and I'd welcome others here to spell out cogent reasons for preferring such systems.
My advice is to start with Redis during your early prototyping; but to keep your use of the system abstracted into some consolidated bit of code. Celery, among others, actually does this for you. You can start using Celery with a Redis backend and readily replace the backend with RabbitMQ or others with little effect on the bulk of your code.
For a catalog of alternatives, consider perusing: http://queues.io/

How does Redis achieve the high throughput and performance?

I know this is a very generic question. But I wanted to understand the major architectural decisions that allow Redis (or caches like Memcached, Cassandra) to work at amazing performance limits.
How are connections maintained?
Are connections TCP or HTTP?
I know that it is completely written in C. How is the memory managed?
What are the synchronization techniques used to achieve high throughput in spite of competing reads/writes?
Basically, what is the difference between a plain vanilla implementation of a machine with an in-memory cache and a server that can respond to commands, and a Redis box? I also understand that a complete answer would have to be huge and include very complex details. But what I'm looking for are some general techniques used, rather than all the nuances.
There is a wealth of information in the Redis documentation to understand how it works. Now, to answer your questions specifically:
1) How are connections maintained?
Connections are maintained and managed using the ae event loop (designed by the Redis author). All network I/O operations are non blocking. You can see ae as a minimalistic implementation using the best network I/O demultiplexing mechanism of the platform (epoll for Linux, kqueue for BSD, etc ...) just like libevent, libev, libuv, etc ...
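For a feel of what such a demultiplexing loop looks like, here is the Java NIO analogue of what ae does in C; this is an analogy for illustration, not Redis code, and the port number is just Redis's for flavor:

    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.util.Iterator;

    public class EventLoop {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();  // wraps epoll/kqueue/... per platform
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(6379));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);
            while (true) {
                selector.select();  // block until at least one socket is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    // dispatch on this one thread: accept new connections,
                    // read/parse a command if readable, write a reply if writable
                }
            }
        }
    }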
2) Are connections TCP or HTTP?
Connections are TCP, using the Redis protocol, which is a simple, telnet-compatible, text-oriented protocol supporting binary data. This protocol is typically more efficient than HTTP.
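For illustration, the command SET foo bar goes over the wire as an array of three length-prefixed bulk strings, each line terminated by \r\n, and the reply is a one-line status:

    *3      <- array of 3 elements
    $3      <- next string is 3 bytes
    SET
    $3
    foo
    $3
    bar

    +OK     <- server reply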
3) How is the memory managed?
Memory is managed by relying on a general purpose memory allocator. On some platforms, this is actually the system memory allocator. On some other platforms (including Linux), jemalloc has been selected since it offers a good balance between CPU consumption, concurrency support, fragmentation and memory footprint. jemalloc source code is part of the Redis distribution.
Contrary to other products (such as memcached), there is no implementation of a slab allocator in Redis.
A number of optimized data structures have been implemented on top of the general purpose allocator to reduce the memory footprint.
4) What are the synchronization techniques used to achieve high throughput in spite of competing read/writes?
Redis is a single-threaded event loop, so there is no synchronization to be done since all commands are serialized. Now, some threads also run in the background for internal purposes. In the rare cases they access the data managed by the main thread, classical pthread synchronization primitives are used (mutexes for instance). But 100% of the data accesses made on behalf of multiple client connections do not require any synchronization.
You can find more information here:
Redis is single-threaded, then how does it do concurrent I/O?
What is the difference between a plain vanilla implementation of a machine with an in-memory cache and a server that can respond to commands, and a Redis box?
There is no difference. Redis is a plain vanilla implementation of a machine with an in-memory cache and a server that can respond to commands. But it is an implementation which is done right:
using the single threaded event loop model
using simple and minimalistic data structures optimized for their corresponding use cases
offering a set of commands carefully chosen to balance minimalism and usefulness
constantly targeting the best raw performance
well adapted to modern OS mechanisms
providing multiple persistence mechanisms, because the "one size fits all" approach is only a dream.
providing the building blocks for HA mechanisms (replication system for instance)
avoiding stacking up useless abstraction layers like pancakes
resulting in a clean and understandable code base that any good C developer can be comfortable with

Replicated message queue

I am looking for a message queue which would replicate messages across a cluster of servers. I am aware that this will cause a performance hit, but that's what the requirements are - message persistence is very important.
The replication can be asynchronous, but it should be there - if there's a large backlog of messages waiting for processing, they shouldn't be lost.
So far I haven't managed to find anything among the well-known MQs. HornetQ, for example, supported message replication in 2.0, but in 2.2 it seems to have been removed. RabbitMQ doesn't replicate messages at all, etc.
Is there anything out there that could meet my requirements?
There are at least three ways of tackling this that come to mind, depending upon how robust you need the solution to be.
One: pick any messaging tech, then replicate your disk-storage. Using something like DRBD you can have the file-backed storage copied to another machine under the covers. If your primary box dies, you should be able to restart on your second machine from the replicated files.
Two: Keep looking. There are various commercial systems that definitely do this, two such (no financial benefit on my part) are Informatica Ultra Messaging (formerly 29West) and Solace. These are commonly used in the financial community.
Three: build your own. ZeroMQ is one such toolkit that you could use to roll your own system from pre-built messaging blocks. Even a system that does not officially support replication could fairly easily be configured to publish all messages to two queues. Your reader would have to drain both somehow, so this may well be a non-starter, but it is possible in any case.
Overall: do test your performance assumptions, as all of these will have various performance implications in various scenarios.
Amazon SQS is designed with this very thing in mind, but because of its consistency model (which is a part of messaging anyway), you're responsible for de-duplicating messages on the consumer side. Granted, SQS may be somewhat slow and the costs can add up for lots of messages, but if you want to guarantee that no messages are lost, it's a pretty solid way to go.
The new Kafka 0.8.1 offers replication!
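For example, with Kafka 0.8.1 and later you ask for replication when creating a topic; the topic name and counts here are placeholders:

    bin/kafka-topics.sh --create --zookeeper localhost:2181 \
        --topic events --partitions 8 --replication-factor 3

Each partition is then kept on 3 brokers, and a producer can require acknowledgement from the replicas before considering a message sent.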