Spring Cloud Bus with RabbitMQ

We're using Spring Cloud Config Server. Spring config clients get updates via Spring Cloud Bus (RabbitMQ).
It looks like every config client instance creates a queue bound to the 'spring.cloud.bus' exchange.
Are there any scalability limits on how many app instances can connect to a 'spring.cloud.bus' exchange?
I suppose RabbitMQ could be scaled to handle this.
I'm looking for any guidelines on this.
Many thanks.

The Spring Cloud Config Server can have multiple instances since it is stateless. That, coupled with a RabbitMQ cluster, should scale to a very large number of client instances.
A viable setup is Spring Cloud Config Server behind a load balancer, together with a RabbitMQ cluster.
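To illustrate the client side, here is a minimal sketch of a bus-refreshable bean in a config client. The property name demo.message and the endpoint path are illustrative assumptions, not part of the setup described above; the bus refresh actuator endpoint is /actuator/busrefresh on recent Spring Cloud releases (/actuator/bus-refresh on older ones).

```java
// Illustrative only: a typical config-client bean that picks up changes
// propagated over the bus. The property name "demo.message" is an assumption.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RefreshScope   // bean is rebuilt when a RefreshRemoteApplicationEvent arrives over the bus
@RestController
public class MessageController {

    @Value("${demo.message:default}")
    private String message;

    @GetMapping("/message")
    public String message() {
        // After the bus refresh endpoint is POSTed on any one instance, every
        // instance bound to the bus exchange re-reads this value.
        return message;
    }
}
```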

Related

Load balancing with a multi-broker ActiveMQ Artemis instance

I need your help on how best to achieve load balancing for the setup below. I am trying to create two machines, each running a master broker, and I expect the consumer/publisher applications to use one common, load-balanced URL, so that I do not expose the individual VM host and port. The load balancer should take care of the routing.
This is typically what we do with an F5 or HTTP load balancer. I am wondering whether the same can be achieved with ActiveMQ, and whether it is advisable.
On the other side, I also tried configuring WebLogic to consume data from an ActiveMQ queue with
failover://(tcp://localhost:61616,tcp://localhost:61617)?randomize=true, but this does not help; WebLogic does not seem to understand this format.
Messaging connections are stateful. They are not stateless like HTTP connections, and therefore cannot be load-balanced in the same way as HTTP connections. It may be possible to configure an F5 to deal with stateful messaging connections, but I can't say for sure. I'm not an expert on F5.
Both the ActiveMQ Artemis broker itself as well as the JMS client shipped with the broker have load-balancing functionality built in. There's too much to cover here so I recommend you review the clustering documentation for the relevant details.
You might also try using the broker balancer feature. It's currently experimental, but it should be ready to use in the 2.21.0 release coming in the March/April time-frame. It can act like an F5 for your messaging connections, but it can do some more intelligent things like always sending certain clients to the same node which can facilitate certain use-cases which are not possible in a traditional cluster.
The URL failover://(tcp://localhost:61616,tcp://localhost:61617)?randomize=true which you are using is for the OpenWire JMS client shipped with ActiveMQ 5.x. If you're using the core JMS client shipped with ActiveMQ Artemis, then you should use a URL like this instead:
(tcp://localhost:61616,tcp://localhost:61617)?ha=true
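For illustration, a minimal sketch of the core JMS client connecting with that HA URL; the queue name exampleQueue and the reconnectAttempts parameter are assumptions added for the example.

```java
// Minimal sketch: connecting with the core JMS client shipped with ActiveMQ Artemis,
// using the HA URL from the answer above. Queue name "exampleQueue" is an assumption.
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ArtemisHaClient {
    public static void main(String[] args) throws Exception {
        // ha=true lets the client fail over between the listed brokers;
        // reconnectAttempts=-1 retries indefinitely.
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
                "(tcp://localhost:61616,tcp://localhost:61617)?ha=true&reconnectAttempts=-1");
        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("exampleQueue");
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("hello");
            producer.send(message);
        }
    }
}
```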

MuleSoft on-premise cluster vs CloudHub worker scale-out

CloudHub workers are NOT clustered; however, we get message-loss protection and workload distribution across Mule instances using persistent queues. We can also use the default persistent object store (_defaultUserObjectStore) for distributed caching (with a tweak). Correct me if I am wrong here.
With the above features present, what are we missing in CloudHub compared to on-premise clusters? (Is it concurrency / once-only message-delivery protection?)
And first of all, why did MuleSoft not enable the clustering feature on CloudHub?
I would say that with the above features present you do not miss out on anything. Also keep in mind that even in an on-premise HA cluster the shared queues and states (object stores) are by default kept in shared memory, and there is no persistence if the complete cluster goes down. To get persistence you need to make tweaks for an on-premise cluster as well. So for true message reliability I would suggest you look at an external message broker or service such as Anypoint MQ.
As for why MuleSoft did not enable clustering, I cannot answer since I'm not a MuleSoft employee. However, best practice in integration and API design is to keep the application stateless. When this is followed and you use an external message broker, such as Anypoint MQ, to implement the reliable messaging pattern, the need for the Mule runtime HA cluster capabilities is small.

Spring STOMP broker relay + RabbitMQ cluster with HAProxy fronting each for load balancing

I am designing a system where a huge volume of real-time data generated by devices is to be transferred to subscribers, preferably over WebSockets. I have decided to use Spring STOMP WebSockets as it was quicker to set up and understand, and it supports a few things out of the box, like RabbitMQ and security. The plan is also to use Spring for another REST API, so Spring is the tech-stack choice. RabbitMQ is the message broker I have decided on. However, I cannot find much guidance on how to scale such a system.
The possible solution I am thinking of is: add HAProxy in front of the STOMP broker instances and also between the STOMP brokers and a RabbitMQ cluster; HAProxy will act as a load balancer in both cases. The Spring STOMP broker relay will then point to HAProxy as the relay host. The requirement is high availability and no data loss.
As I have no prior experience with WebSockets, I would like guidance on whether this solution sounds correct or whether there is anything I am missing.
Note: In this system, both the message producers and the consumers are actually WebSocket Java clients. I took the sample code from https://github.com/nickebbutt/stomp-websockets-java-client and created two separate clients: one that only sends messages, i.e. device data (producer), and another that subscribes to these messages (consumer). Both connect to the same STOMP broker using the same WebSocket URL. With the above implementation the clients will point to HAProxy for the WebSocket connection.
Just an update on this: I experimented by creating the above set-up and it worked, i.e. I was able to connect to the WebSocket STOMP server and send/receive data via the RabbitMQ broker, using HAProxy load balancing as described. The broker host/port configured in Spring pointed to HAProxy, which in turn forwarded requests to the RabbitMQ backend. Similarly, the WebSocket clients connected to the Spring STOMP WebSocket server application via HAProxy.
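For reference, a minimal sketch of the Spring configuration this describes, assuming a hypothetical HAProxy front-end at haproxy.internal:61613 (forwarding to the RabbitMQ STOMP listeners) and default guest credentials; all host, port, and credential values here are assumptions to adjust for your environment.

```java
// Sketch: the STOMP broker relay points at HAProxy rather than at an individual
// RabbitMQ node. Host/port and credentials below are placeholder assumptions.
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableStompBrokerRelay("/topic", "/queue")
                // HAProxy front-end for the RabbitMQ STOMP listeners
                .setRelayHost("haproxy.internal")
                .setRelayPort(61613)
                .setClientLogin("guest")
                .setClientPasscode("guest")
                .setSystemLogin("guest")
                .setSystemPasscode("guest");
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // WebSocket clients connect here (again via HAProxy in front of the app instances)
        registry.addEndpoint("/ws").withSockJS();
    }
}
```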

Service grid in a microservices environment

We are using Apache Ignite as an IMDG in our microservices environment.
For scalability and load balancing we are considering using a service registry like Eureka or Consul, supported by Spring Cloud, for the deployed microservices.
There is a concept of a service grid in Apache Ignite, providing support for node singletons and cluster singletons.
I also see WCF, WebLogic and JBoss having the same sort of features.
I am trying to understand what these service grids are and whether I can use them to achieve the same benefits as the Eureka service registry provided by Netflix and supported by Spring Cloud.
Can someone advise whether I can achieve the same using the service grid in Apache Ignite?
No, you cannot use the Apache Ignite Service Grid for the same purposes as Eureka. Eureka is used for load balancing and service discovery over the WAN; spanning Ignite clusters across multiple AWS zones and remote client machines is not an efficient way of using it.
More information on Ignite Service Grid can be found here - http://apacheignite.gridgain.org/docs/service-grid
Thanks!
UPD (for the 1st comment):
You cannot (in most cases) span and effectively use Ignite over WAN networks with high latencies and lower throughput.
As for local clusters in non-cloud environments: go ahead! That is the best environment for systems of this kind.
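To make the distinction concrete, here is a minimal sketch of what the Ignite Service Grid actually does: deploying and supervising a service inside the data grid (for example, a cluster singleton), which is a different concern from Eureka-style service discovery. The class and service names are illustrative assumptions.

```java
// Sketch: deploying a cluster-singleton service on the Ignite Service Grid.
// Names are illustrative; this supervises services inside the grid, it does not
// provide Eureka-style external service discovery.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class ServiceGridExample {

    // A trivial Ignite service; the grid guarantees exactly one instance cluster-wide.
    public static class CounterService implements Service {
        @Override public void init(ServiceContext ctx) { }
        @Override public void execute(ServiceContext ctx) { }
        @Override public void cancel(ServiceContext ctx) { }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Ignite keeps the singleton alive, redeploying it on another node if its host fails.
            ignite.services().deployClusterSingleton("counterService", new CounterService());
        }
    }
}
```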

How to use Kubernetes replication controllers to replicate message-based services

We usually use message passing to send messages to decoupled services. This makes service discovery a non-issue, because (with AMQP in RabbitMQ for instance) you can use the broker's routing capability to dispatch messages to the right queues that feed the correct services. Load balancing is also handled by the message broker.
Enter kubernetes.
The use case that is usually laid out when talking about service replication and re-spawning failing services is one where clients use some active protocol like HTTP to contact a service, even if that service handles requests asynchronously. In this context, it is a natural fit to have replication controllers that manage a group of services and a single entry point to load-balance between them.
I like Kubernetes' intuitive concepts, like rolling deployments, but how do you control these beasts that don't have an HTTP interface?
UPDATE:
I am not trying to set up a cluster of message brokers. I am looking at message consumers as services. Service clients don't connect directly to the services; they send messages to the message broker. The message broker acts as a load balancer of sorts, dispatching the messages to the subscribed queue consumers. These consumers implement the service.
My question is motivated by the fact that most usage patterns in demos cover services that are called via HTTP, and Kubernetes does a good job here of creating a service proxy and a replication controller for them. Is it possible to create replication controllers for my kind of service, which does not have an HTTP interface per se, and get all the benefits of rolling updates and a minimum number of instances?
I'm not sure I entirely understand the question. Are you asking how to use RabbitMQ with Kubernetes? Or how to set up a RabbitMQ cluster: https://www.rabbitmq.com/clustering.html? Or how rolling updates interact with RabbitMQ? Or something else?
I think you should be able to create one service and one replication controller per server, and then use the service DNS names in the cluster configuration file. This is the current approach used to run ZooKeeper as well. We have a long-standing TODO to make this less verbose (https://github.com/GoogleCloudPlatform/kubernetes/issues/260), but the current approach should be straightforward. You do lose the ability to use a single kubectl rolling-update command to update the cluster, but it's also straightforward to update the instances individually.
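To make the "consumer as a service" idea concrete, here is a minimal sketch of the kind of headless worker a replication controller could replicate: it has no HTTP interface, and the broker spreads deliveries across however many replicas are running. The queue name work-queue and the use of Spring AMQP are assumptions for the example, not something stated in the question.

```java
// Illustrative sketch: a headless message consumer that a replication controller can
// replicate like any other pod. There is no HTTP interface; RabbitMQ distributes
// messages across the running replicas. Queue name "work-queue" is an assumption.
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class WorkerApplication {

    public static void main(String[] args) {
        SpringApplication.run(WorkerApplication.class, args);
    }

    @Component
    static class WorkQueueConsumer {

        // Each replica opens its own consumer on the same queue; RabbitMQ round-robins
        // deliveries between them, so scaling the controller up or down just changes
        // how many consumers compete for messages.
        @RabbitListener(queues = "work-queue")
        public void handle(String payload) {
            System.out.println("processing: " + payload);
        }
    }
}
```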