Can anyone recommend a good load balancer for TCP messages sent to a Netty server?
Messages are encoded/decoded using Google Protobuf.
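For context, the server side of such a setup typically has a pipeline along the lines of this sketch, using Netty's built-in Protobuf codecs (MyProto.MyMessage stands in for a generated Protobuf class):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoder;
import io.netty.handler.codec.protobuf.ProtobufEncoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

// Sketch only: a Netty pipeline for varint-length-prefixed Protobuf messages.
// MyProto.MyMessage is a placeholder for the generated Protobuf class.
public class ProtobufServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          // Inbound: split the byte stream into frames, then decode Protobuf.
          .addLast(new ProtobufVarint32FrameDecoder())
          .addLast(new ProtobufDecoder(MyProto.MyMessage.getDefaultInstance()))
          // Outbound: prepend the varint length, then encode Protobuf.
          .addLast(new ProtobufVarint32LengthFieldPrepender())
          .addLast(new ProtobufEncoder())
          .addLast(new SimpleChannelInboundHandler<MyProto.MyMessage>() {
              @Override
              protected void channelRead0(ChannelHandlerContext ctx, MyProto.MyMessage msg) {
                  // Handle the decoded message here.
              }
          });
    }
}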
You can take a look at Bruno de Carvalho's Netty load balancer, which will give you some perspective on the issue.
I need your help in suggesting how best I can achieve load balancing using the diagram below. Here I am trying to create two machines with Master, expecting that the consumer/publisher application will use one common (load-balanced) URL, where I should not expose the individual VM machine info and port ID; the load balancer should take care of the routing.
This is typically what we do with the help of an F5 load balancer or an HTTP load balancer. I'm just wondering whether this can be achieved with ActiveMQ, and whether it is advisable?
On the other side, I also tried configuring WebLogic this way to consume data from the ActiveMQ queue:
failover://(tcp://localhost:61616,tcp://localhost:61617)?randomize=true
but this does not help, or WebLogic does not understand this format.
Messaging connections are stateful. They are not stateless like HTTP connections, and therefore cannot be load-balanced in the same way as HTTP connections. It may be possible to configure an F5 to deal with stateful messaging connections, but I can't say for sure. I'm not an expert on F5.
Both the ActiveMQ Artemis broker itself and the JMS client shipped with the broker have load-balancing functionality built in. There's too much to cover here, so I recommend you review the clustering documentation for the relevant details.
You might also try using the broker balancer feature. It's currently experimental, but it should be ready to use in the 2.21.0 release coming in the March/April time-frame. It can act like an F5 for your messaging connections, but it can do some more intelligent things like always sending certain clients to the same node which can facilitate certain use-cases which are not possible in a traditional cluster.
The URL failover://(tcp://localhost:61616,tcp://localhost:61617)?randomize=true which you are using is for the OpenWire JMS client shipped with ActiveMQ 5.x. If you're using the core JMS client shipped with ActiveMQ Artemis, then you should use a URL like this instead:
(tcp://localhost:61616,tcp://localhost:61617)?ha=true
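For example, a minimal sketch of connecting with the Artemis core JMS client using that HA URL:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Minimal sketch: the Artemis core JMS client connecting with an HA URL.
public class ArtemisHaClient {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "(tcp://localhost:61616,tcp://localhost:61617)?ha=true");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            // Create sessions, producers, and consumers as usual; with ha=true
            // the client fails over to the other broker if one goes down.
        }
    }
}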
I am designing a system where a huge amount of real-time data generated from devices is to be transferred to subscribers, preferably over websockets. I have decided to use Spring STOMP websockets as they were quicker to set up and understand, and a few things are supported out of the box, like RabbitMQ and security. Also, the plan is to use Spring for another REST API, so Spring is the tech stack of choice. RabbitMQ is the message broker I have decided on. However, I cannot find much guidance on how to scale such a system.
The possible solution I am thinking of is to add HAProxy in front of the STOMP broker instances, and also between the STOMP brokers and a RabbitMQ cluster; HAProxy will act as a load balancer in both cases. The Spring STOMP broker will then point to HAProxy as the broker relay host. The requirement is to have high availability and no data loss.
As I do not have prior experience with websockets, I would like to get guidance on whether this solution sounds correct, or whether there is anything I am missing here.
Note: In this system, both the message producers and consumers are actually websocket Java clients. I took the sample code from https://github.com/nickebbutt/stomp-websockets-java-client and created two separate clients: one that only sends the messages, i.e. device data (the producer), and another that subscribes to these messages (the consumer). Thus both connect using the same websocket URL to the same STOMP broker. With the above system implementation the clients will point to HAProxy for the websocket connection.
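For illustration, a client along the lines of that sample might look roughly like this with Spring's STOMP client (the URL and destination are placeholders):

import org.springframework.messaging.converter.StringMessageConverter;
import org.springframework.messaging.simp.stomp.StompSession;
import org.springframework.messaging.simp.stomp.StompSessionHandlerAdapter;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.messaging.WebSocketStompClient;

// Sketch only: producer and consumer both connect to the same
// (HAProxy-fronted) URL; "ws://haproxy.internal/ws" is a placeholder.
public class StompClientSketch {
    public static void main(String[] args) throws Exception {
        WebSocketStompClient stompClient =
                new WebSocketStompClient(new StandardWebSocketClient());
        stompClient.setMessageConverter(new StringMessageConverter());

        StompSession session = stompClient
                .connect("ws://haproxy.internal/ws", new StompSessionHandlerAdapter() {})
                .get();

        // Consumer: subscribe to the device-data topic.
        session.subscribe("/topic/device-data", new StompSessionHandlerAdapter() {});

        // Producer: publish a device reading.
        session.send("/topic/device-data", "temperature=21.5");
    }
}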
Just an update on this: I experimented by creating the above set-up and it worked, i.e. I was able to connect to the websocket STOMP server, send and receive data via the RabbitMQ broker, and use HAProxy load balancing as described. The broker host/port configured in Spring pointed to HAProxy, which in turn forwarded requests to the RabbitMQ backend. Similarly, the websocket clients connected to the Spring STOMP websocket server application via HAProxy.
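For reference, the Spring side of such a set-up might look roughly like this sketch, with the broker relay pointed at HAProxy rather than at an individual RabbitMQ node (the host name and port are placeholders):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // Clients connect here, normally through the HAProxy in front of this app.
        registry.addEndpoint("/ws").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Relay to HAProxy, which forwards to the RabbitMQ cluster's STOMP
        // listeners; the host name and port here are placeholders.
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("haproxy.internal")
                .setRelayPort(61613);
        registry.setApplicationDestinationPrefixes("/app");
    }
}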
I'm trying to put a load balancer in front of a ZooKeeper 3.4.6 cluster. When I do, the cluster works well but an exception is thrown:
WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
This means that ZooKeeper is treating the load balancer as a client and trying to establish a connection with it, but the load balancer just pings TCP port 2181 and drops the connection.
You are trying to use a load balancer between your ZooKeeper cluster and clients?
When you give your clients a ZooKeeper connection string in the form of multiple endpoints like this: "server1,server2,server3...", the clients will pick one of the servers and switch over in case of failure. This way, if all your clients have the same ZooKeeper endpoints string, you will end up with a balanced pool.
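For example, a client connection along those lines (the server names are placeholders):

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Sketch: list every server in the connection string and let the client
// balance and fail over by itself (host names are placeholders).
public class ZkConnect {
    public static void main(String[] args) throws Exception {
        String connectString = "server1:2181,server2:2181,server3:2181";
        ZooKeeper zk = new ZooKeeper(connectString, 10000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                System.out.println("ZooKeeper event: " + event);
            }
        });
        // The client picks one server from the list and switches to another
        // on failure, so no external load balancer is needed.
        // ... use zk ...
        zk.close();
    }
}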
If you put a standard load balancer between the clients and the server, it can cause failures like this. A load balancer doesn't play well with the way ZooKeeper expects its clients to behave. A client needs to maintain an open TCP connection to a specific server it has a session on, sending periodic heartbeats.
There are certain limitations to the way ZooKeeper clients load balance themselves (e.g. connections won't rebalance in case of server restarts), but fixing these limitations would require a ZooKeeper protocol aware load balancing logic, probably as part of the client implementation.
We have WCF service X deployed on server A and server B, with host address:
http://127.0.0.1:8777/ServiceX/
And we load-balance the two servers. We access the service via http://myappserver/ServiceX.
We need to use per-session mode, and we set reliable sessions to true.
We haven't found any issue so far in our testing. But the MSDN article linked below says not to use reliable sessions for load balancing with the WSHttpBinding. Can someone please give more details? Thanks a lot.
WCF Load Balancing http://msdn.microsoft.com/en-us/library/ms730128.aspx
Reliable Messaging means all your messages from your established client reach the same endpoint behind any intermediaries like routers and load balancers.
Load balancing means your calls will be distributed across all nodes as the load balancer sees fit.
Those two goals are mutually exclusive. You can have one or the other, not both.
I have not had time to try this myself yet, but I found this old blog entry (https://blogs.msdn.microsoft.com/drnick/2007/07/13/sticky-sessions/):
This division according to groups would allow a feature like reliable messaging to work because the same server would be used to process all of the messages in the reliable session. The feature that this division method represents is typically called “sticky sessions” or some other phrase for affinitization in the load balancer.
Given that you mention that your firewall supports sticky sessions, I suspect/hope you will be fine.
I have a Glassfish v2u2 cluster with two instances and I want to fail over between them. Every document I have read on this subject says that I should use a load balancer in front of Glassfish, like Apache httpd. In this scenario failover works, but I again have a single point of failure.
Is Glassfish able to do that fail-over without a load balancer in front?
The way we solved this is that we have two IP addresses which both respond to the URL. The DNS provider (DNS Made Easy) will round-robin between the two. Setting the timeout low will ensure that if one server fails the other will answer. When one server stops responding, DNS Made Easy will only send the other host as the server to respond to this URL. You will have to trust the DNS provider, but you can buy service with extremely high availability of DNS lookups.
As for high availability, you can have a cluster set-up which allows for session replication, so that the user won't lose more than, potentially, the one request that fails.
Hmm.. JBoss can do failover without a load balancer according to the docs (http://docs.jboss.org/jbossas/jboss4guide/r4/html/cluster.chapt.html), Chapter 16.1.2.1, "Client-side interceptor".
As far as I know, the GlassFish cluster provides in-memory session replication between nodes. If I use Sun's GlassFish Enterprise Application Server I can use HADB, which promises 99.999% availability.
No, you can't do it at the application level.
Your options are:
Round-robin DNS - expose both your servers to the internet and let the client do the load-balancing - this is quite attractive as it will definitely enable fail-over.
Use a different layer-3 load balancing system - such as "Windows Network Load Balancing", "Linux Network Load Balancing", or the one I wrote called "Fluffy Linux cluster"
Use a separate load-balancer that has a failover hot spare
In any of these cases you still need to ensure that your database, session data, etc. are available and in sync between the members of your cluster, which in practice is much harder.