I have two questions regarding ActiveMQ.
In my environment, I set up three ActiveMQ brokers on three servers, sharing one database. Is it possible for three brokers on three servers to share the same database? I tried to set it up, but it looks like three brokers cannot share the same database. Is that correct?
Also, I did some failover testing, and it looks like ActiveMQ cannot guarantee message order. For example, I set up the three brokers on ServerA, ServerB and ServerC, configured as failover servers. I then published MessageA and MessageB to ServerA and published MessageC to ServerB. When I shut down ServerA, only MessageC could be consumed, but the messages should be consumed in the order MessageA, MessageB, MessageC. I need to keep this message order even when ServerA is down. Is it possible to configure ActiveMQ to guarantee message order across failover?
Thank you!
You can point all 3 brokers at the same DB. They will act as master/slave failover: only one instance runs, while the other two wait on the database lock so they can take over.
If you follow #1, it will guarantee the order, but you'll be using one server at a time (with the centralized DB as storage).
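For reference, a minimal sketch of this shared-JDBC ("JDBC Master Slave") setup, assuming a MySQL datasource (the hostname, credentials and bean id below are placeholders, not taken from the question):

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
      <persistenceAdapter>
        <!-- all three brokers point at the same database; the first
             to acquire the database lock becomes the master -->
        <jdbcPersistenceAdapter dataSource="#mysql-ds"/>
      </persistenceAdapter>
    </broker>

    <bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource">
      <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
      <property name="url" value="jdbc:mysql://dbhost/activemq?relaxAutoCommit=true"/>
      <property name="username" value="activemq"/>
      <property name="password" value="activemq"/>
    </bean>

The same activemq.xml can then be deployed on all three servers.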
We have a setup with a number of Redis (2.8) servers (let's say 4) and as many Redis sentinels. On startup of each machine, we set a pre-selected machine as master through the command line and all the rest as slaves of it, and the sentinels all monitor these machines. The clients first connect to the local sentinel, retrieve the master's IP address and then connect to it.
This setup is trouble-free most of the time, but sometimes the sentinels go out of sync with the servers. If I name the machines A, B, C and D, the sentinels will think B is master while the Redis servers are all connected to A as the master. Bringing down the Redis server on B doesn't help either; I had to bring it down and manually run SENTINEL FAILOVER on A to fix the issue. My questions are:
1. What causes this to happen, and what is the easiest and quickest way to fix it?
2. What is the best configuration? Is there something better than this?
The only time you should set a master manually is the first time. Once sentinel has taken over management of replication, you should let it do so, including on restarts. Don't use the command line to set replication; let sentinel and Redis manage it. This is why you're getting issues: you've told sentinel it is authoritative, but you are also telling the Redis servers to ignore it.
Sentinel stores its state in its config file, so when it restarts it can resume the last configuration. So even on a restart, let sentinel do its job.
Also, if you have 4 servers (be specific, not "let's say"), you should be running a quorum of three on your monitor statement in sentinel. With a quorum of two you can wind up with two masters.
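For illustration, the relevant monitor statement in each sentinel.conf, assuming a placeholder master name and address:

    # sentinel.conf on each of the four sentinels
    # sentinel monitor <name> <master-ip> <master-port> <quorum>
    sentinel monitor mymaster 10.0.0.1 6379 3
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000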
I have the following Redis/Sentinel configuration:
Redis master A + N slaves
M sentinels watching A, named masterA
the client application queries the sentinels for masterA, then queries and modifies A
Now say A is outdated and I want to replace it with a new Redis master called B (with minimal downtime / data loss). At the end of the operation, I want this:
Redis master B + N slaves
the client application querying and modifying B
I could proceed as follows:
Have the sentinels start watching B, named masterB
Have each slave of A become a slave of B (a sketch of both steps follows)
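For concreteness, those steps might look like this with redis-cli (hostnames, ports and the quorum value are placeholders):

    # step 1: tell each sentinel to start watching B under a new name
    redis-cli -p 26379 SENTINEL MONITOR masterB hostB 6379 2
    # step 2: repoint each slave of A at B
    redis-cli -h slave1 SLAVEOF hostB 6379
    redis-cli -h slave2 SLAVEOF hostB 6379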
From there, I am stuck because the client application still asks for masterA when talking to the sentinels. I have two questions:
Is there a way to switch masters names, such that B becomes known as masterA for the sentinels, and therefore for the client application as well?
Is it better to modify the client application code to handle the switch from an old master to a new master?
One way of achieving your aim is to follow the age-old solution of "adding another level of indirection".
A particularly effective method is to have your clients talk to a TCP proxy (e.g. HAProxy) and have it pass the traffic to the current master.
To keep the TCP proxy in sync, you can do something similar to http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-health-check/ which makes HAProxy sentinel-aware.
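The heart of that check is asking each Redis server for its replication role and only routing to the one that answers role:master. A sketch along the lines of the linked post (server names and addresses are placeholders):

    backend redis
        mode tcp
        option tcp-check
        tcp-check send PING\r\n
        tcp-check expect string +PONG
        tcp-check send info\ replication\r\n
        tcp-check expect string role:master
        tcp-check send QUIT\r\n
        tcp-check expect string +OK
        server redis-a 10.0.0.1:6379 check inter 1s
        server redis-b 10.0.0.2:6379 check inter 1s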
The major plus for this solution is that it makes your clients very simple - they only connect to one place and the traffic is always forwarded to the correct Redis instance.
One issue with this solution is that HAProxy's configuration DSL cannot deal with the period when a restarting Redis server initially announces itself as a master, before the sentinels demote it to a slave. This can lead to missed writes and inconsistent state, which may or may not be acceptable depending on your application.
To deal with this I have started to develop a "smarter" daemon to keep HAProxy in sync with the current master. My solution is at https://github.com/mdevilliers/redishappy.
I am trying to set up a cluster of brokers with the same features as a RabbitMQ cluster, but over a WAN (my machines are in different locations), so a RabbitMQ cluster does not work.
I am looking at alternatives. RabbitMQ federation just backs up the messages in the downstream; it cannot make sure the brokers have exactly the same messages available at any time (the downstream still keeps old messages already consumed in the upstream).
What about ActiveMQ Master/Slave? I have found:
http://activemq.apache.org/how-do-distributed-queues-work.html
"queues and topics are all replicated between each broker in the cluster (so often to a master and maybe a single slave). So each broker in the cluster has exactly the same messages available at any time so if a master fails, clients failover to a slave and you don't loose a message."
My concern is whether it automatically updates to make sure master and slaves always have the same messages, meaning that messages consumed on the master also disappear from the slaves.
Thanks :)
ActiveMQ has various clustering features.
First there is high availability, "Master/Slave". The idea is that several physical servers act as a single logical ActiveMQ broker; if one goes down, another takes its place without losing data. You can do that by sharing the message store (shared file system or shared JDBC), or you can set up a replicated cluster, which replicates reads/writes on the master down to all slaves (you need three or more servers). ActiveMQ uses LevelDB and Apache ZooKeeper to achieve this.
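As an illustration, a replicated LevelDB store is configured roughly like this (the ZooKeeper addresses are placeholders):

    <persistenceAdapter>
      <!-- needs three or more brokers; ZooKeeper elects the master,
           and writes are replicated from the master to the slaves -->
      <replicatedLevelDB directory="activemq-data"
                         replicas="3"
                         bind="tcp://0.0.0.0:61619"
                         zkAddress="zk1:2181,zk2:2181,zk3:2181"
                         zkPath="/activemq/leveldb-stores"/>
    </persistenceAdapter>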
The other form of clustering available in ActiveMQ is distributing load and separating security over several logical brokers, which are then connected in a network of brokers. By default, messages are passed around to the broker that has available consumers for them. However, ActiveMQ has a rich toolbox of features for tweaking a network of brokers to do things such as always sending a copy of a message to a specific broker. It takes some messing with the more advanced features, though (static network connectors and queue mirroring, maybe more).
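For example, forcing a copy of certain destinations to a specific broker regardless of consumer demand looks roughly like this (the URI and queue name are placeholders):

    <networkConnectors>
      <networkConnector uri="static:(tcp://remote-broker:61616)">
        <!-- forward these destinations even when the remote broker
             has no active consumers for them -->
        <staticallyIncludedDestinations>
          <queue physicalName="orders.mirror"/>
        </staticallyIncludedDestinations>
      </networkConnector>
    </networkConnectors>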
Maybe there is a better way to solve your requirements, which are not really specified in the question?
I have two ActiveMQ (5.6.0) brokers. They use a shared KahaDB store, so only one can be "running" at once.
I have an ASP.NET web service that puts a message on a queue. Locally, if I start and stop the brokers, the web service fails over correctly.
When I test with the brokers on separate machines, it sometimes works, but often I get "SocketException: Connection reset" errors and the message is lost.
The connection string I am using is below. Note that I am aware NMS does not understand the priorityBackup option, but I have left it there for the future.
failover:(tcp://MACHINE1:61616,tcp://MACHINE2:62616)?transport.initialReconnectDelay=1000&transport.timeout=10000&randomize=false&priorityBackup=true
How can I make the failover between brokers foolproof?
The shared KahaDB store was on a simple network share. Currently, ActiveMQ (or Windows) cannot reliably acquire or release the lock in this configuration. The shared store must sit on a "real" SAN so that both broker instances see it as a local filestore, not a network location.
See this page for more info http://activemq.apache.org/shared-file-system-master-slave.html
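With the store on a proper SAN mount, the configuration itself stays trivial: both brokers point at the same directory (the path below is a placeholder):

    <persistenceAdapter>
      <!-- identical on both brokers; whichever holds the file lock runs -->
      <kahaDB directory="/san/activemq/kahadb"/>
    </persistenceAdapter>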
I'm trying to set up three brokers in a network for load balancing -- clients and producers can connect to any of these brokers.
Questions:
What is the recommended topology to use to network these brokers? More specifically, what networkConnector configuration should be used on each of these brokers? Should the duplex setting be enabled? (I guess the duplex setting depends on the topology we choose.)
A->B->C->A or A<-->B<-->C<-->A
Clients should use the failover protocol to connect to these brokers, right? e.g. failover://(tcp://b1:6161,tcp://b2:6161,tcp://b3:6161)
Is any duplicate-message handling required on the client side in case of restarts? See http://forum.springsource.org/showthread.php?108461-Failover-issue-in-ActiveMQ -- it's not clear why the duplicate message issue exists here.
Ideally we want to set up the topology shown in this post http://edelsonmedia.com/?p=143 -- it's not clear how to set up the networkConnectors on the masters and slaves.
1.) I can't actually recommend a topology. The choice depends on the number of hops (between the broker where a message enters the cluster and the broker the consumer connects to) you can accept. In a heavy-traffic scenario, every hop adds to the network load.
In my company we use a hypercube network (every broker knows every other broker) and it works great.
Generally, you should make sure that your node configurations are as similar as possible. Using duplex connections means fewer connections to configure (since the connection from B to A is already part of the duplex connection from A to B), but it introduces differences between your config files.
Personally, I created my own start script for ActiveMQ that auto-generates the connection config based on the DNS names of my cluster (mycluster-01 to 06).
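As a sketch, a duplex connector from mycluster-01 to mycluster-02 (following the naming scheme above) would look like:

    <!-- on mycluster-01 only; duplex="true" carries traffic both ways,
         so mycluster-02 needs no matching connector back -->
    <networkConnectors>
      <networkConnector name="to-mycluster-02"
                        uri="static:(tcp://mycluster-02:61616)"
                        duplex="true"/>
    </networkConnectors>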
2.) Yes. You might want to add ?randomize=false if you want to make sure a client uses the first entry in the list.
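That is, using the brokers from your example:

    failover:(tcp://b1:6161,tcp://b2:6161,tcp://b3:6161)?randomize=false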
3.) Duplicate messages can happen if there are failures during message transport, or as race conditions during heavy load. In general, a message is owned by only one broker at a time.
4.) Don't set up network connectors between masters and slaves (really, don't). Use the pure Master/Slave feature of ActiveMQ and configure the master on each slave (you don't have to configure anything on the masters). On all the masters, configure network connectors to the other masters, with failover to their slaves.
http://activemq.apache.org/pure-master-slave.html
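As a rough sketch of those rules (hostnames are placeholders; note that pure master/slave was deprecated and later removed from ActiveMQ, so check your version):

    <!-- on each slave: follow its master and keep running if it fails -->
    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="slaveA"
            masterConnectorURI="tcp://masterA:61616"
            shutdownOnMasterFailure="false">
      <transportConnectors>
        <transportConnector uri="tcp://0.0.0.0:61616"/>
      </transportConnectors>
    </broker>

    <!-- on masterA: connect to the B pair, failing over to B's slave -->
    <networkConnectors>
      <networkConnector uri="masterslave:(tcp://masterB:61616,tcp://slaveB:61616)"/>
    </networkConnectors>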