Spring Reactor RSocket connection limits - spring-webflux

I'm managing a socket client using Spring Integration TCP and am trying to use RSocket.
My target server has a connection limit, so I need something like a max-connections setting on the client side.
Does RSocket support this?
If it doesn't, should I use a rate limiter (third-party library)?
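I don't know of a built-in max-connections option on the RSocket client side, so one workaround is to gate connection attempts in your own code. Below is a minimal sketch in plain Java, assuming you enforce the cap yourself; the ConnectionGate class and its methods are hypothetical, not part of RSocket:

import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

public class ConnectionGate {

    private final Semaphore permits;

    public ConnectionGate(int maxConnections) {
        this.permits = new Semaphore(maxConnections);
    }

    // Blocks until a connection slot is free, then runs the connect logic.
    // On success the permit stays held until release() is called.
    public <T> T connect(Callable<T> connector) throws Exception {
        permits.acquire();
        try {
            return connector.call();
        } catch (Exception e) {
            permits.release(); // free the slot if the connect attempt failed
            throw e;
        }
    }

    // Call when a connection is closed, to free its slot.
    public void release() {
        permits.release();
    }
}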

Related

Load Balancing with multi-broker ActiveMQ Artemis instances

I need suggestions on how best to achieve load balancing using the diagram below. I am trying to set up two machines, each running a master, and I expect the consumer/publisher applications to use one common (load-balanced) URL; I should not have to expose the individual VM host and port, and the load balancer should take care of the routing.
This is typically what we do with an F5 or HTTP load balancer. I'm wondering whether the same can be achieved with ActiveMQ, and whether it is advisable.
On the other side, I also tried configuring WebLogic this way to consume data from an ActiveMQ queue:
failover://(tcp://localhost:61616,tcp://localhost:61617)?randomize=true
but this does not help; either it is wrong or WebLogic does not understand this format.
Messaging connections are stateful. They are not stateless like HTTP connections, and therefore cannot be load-balanced in the same way as HTTP connections. It may be possible to configure an F5 to deal with stateful messaging connections, but I can't say for sure. I'm not an expert on F5.
Both the ActiveMQ Artemis broker itself as well as the JMS client shipped with the broker have load-balancing functionality built in. There's too much to cover here so I recommend you review the clustering documentation for the relevant details.
You might also try using the broker balancer feature. It's currently experimental, but it should be ready to use in the 2.21.0 release coming in the March/April time-frame. It can act like an F5 for your messaging connections, but it can do some more intelligent things like always sending certain clients to the same node which can facilitate certain use-cases which are not possible in a traditional cluster.
The URL failover://(tcp://localhost:61616,tcp://localhost:61617)?randomize=true which you are using is for the OpenWire JMS client shipped with ActiveMQ 5.x. If you're using the core JMS client shipped with ActiveMQ Artemis, then you should be using a URL like this instead:
(tcp://localhost:61616,tcp://localhost:61617)?ha=true
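For reference, a minimal sketch of using that URL with the Artemis core JMS client (assuming artemis-jms-client is on the classpath; reconnectAttempts=-1 simply retries forever, and the session usage is a placeholder):

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;

public class ArtemisHaClient {
    public static void main(String[] args) throws Exception {
        // ha=true lets the client fail over between the listed brokers
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "(tcp://localhost:61616,tcp://localhost:61617)?ha=true&reconnectAttempts=-1");

        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // ... create producers/consumers against the clustered brokers
        }
    }
}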

Spring STOMP Broker Relay + RabbitMQ Cluster with HA Proxy fronting each for load balancing

I am designing a system where a huge volume of real-time data generated by devices is to be transferred to subscribers, preferably over WebSockets. I decided to use Spring STOMP WebSockets because it was quicker to set up and understand, and it supports a few things out of the box, like RabbitMQ and security. The plan is also to use Spring for another REST API, so Spring fits the tech stack. RabbitMQ is the message broker I have decided on. However, I cannot find much guidance on how to scale such a system.
The solution I am considering is:
Add HAProxy in front of the STOMP broker instances, and also between the STOMP brokers and a RabbitMQ cluster; HAProxy acts as a load balancer in both cases. The Spring STOMP broker relay will then point to HAProxy as the broker relay host. The requirement is high availability and no data loss.
As I do not have prior experience with WebSockets, I would like to know whether this solution sounds correct or whether there is anything I am missing.
Note: In this system, both the message producers and the consumers are actually WebSocket Java clients. I took the sample code from https://github.com/nickebbutt/stomp-websockets-java-client and created two separate clients: one that only sends the messages, i.e. device data (producer), and another that subscribes to those messages (consumer). Both therefore connect to the same STOMP broker using the same WebSocket URL. With the above implementation, the clients will point to HAProxy for the WebSocket connection.
Just an update on this: I experimented with the set-up above and it worked, i.e. I was able to connect to the WebSocket STOMP server, send and receive data through the RabbitMQ broker, and use HAProxy load balancing as described. The broker host/port configured in Spring pointed to HAProxy, which in turn forwarded requests to the RabbitMQ backend. Similarly, the WebSocket clients connected to the Spring STOMP WebSocket server application via HAProxy.
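For anyone trying the same thing, here is a minimal sketch of the Spring configuration described above; the HAProxy host name, ports, and destination prefixes are placeholders:

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Point the relay at HAProxy, which forwards to the STOMP port
        // of the RabbitMQ cluster nodes behind it
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("haproxy.example.com")  // placeholder host
                .setRelayPort(61613);                 // STOMP port exposed by HAProxy
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // Producers and consumers both connect here, also via HAProxy
        registry.addEndpoint("/ws").withSockJS();
    }
}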

How to modify spring-websocket to interface with broker via MQTT instead of STOMP?

I'm building a spring-websocket application that currently uses RabbitMQ as a message broker via the STOMP protocol. The rest of our organization mostly uses IBM WebSphere MQ as a message broker, so we'd like to move away from RabbitMQ. However, WebSphere MQ doesn't support the STOMP protocol, which is spring-websocket's default. MQTT seems like the easiest supported protocol to use instead. Ideally our front-end web clients will continue to use STOMP, but I'm also OK with migrating them to MQTT if needed.
What classes do I need to override to make spring-websocket interface with the broker via MQTT instead of STOMP? This article provides some general guidance that I should extend AbstractMessageBrokerConfiguration, but I'm unclear on where to begin.
Currently I'm using the standard configuration methods registry.enableStompBrokerRelay and registerStompEndpoints in AbstractWebSocketMessageBrokerConfigurer.
Ryan has some good pointers.
The main work is going to be creating a replacement for StompBrokerRelayMessageHandler with an MqttBrokerMessageHandler that not only talks to an MQTT broker but also adapts client STOMP frames to MQTT and vice versa. The protocols are similar enough that it may be possible to find common ground but you won't know until you try.
Note that we did have plans for MQTT support (https://jira.spring.io/browse/SPR-12581), but the key issue was that SockJS, which is required over the web for fallback support, does not support binary messages.
Here's my stab at this after reviewing the spring-websocket source code:
Change WebSocketConfig:
Remove @EnableWebSocketMessageBroker
Add a new annotation: @EnableMqttWebSocketMessageBroker
Create an MqttBrokerMessageHandler that extends AbstractBrokerMessageHandler -- I suggest copying and editing StompBrokerRelayMessageHandler (a rough skeleton is sketched below, after the server.xml note)
Create a new class that EnableMqttWebSocketMessageBroker imports: DelegatingMqttWebSocketMessageBrokerConfiguration
DelegatingMqttWebSocketMessageBrokerConfiguration extends AbstractMessageBrokerConfiguration directly and routes to MqttBrokerMessageHandler
Add this to server.xml on WebSphere Liberty:
<feature>websocket-1.1</feature>
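To make the shape of that handler concrete, here is a rough skeleton of the MqttBrokerMessageHandler mentioned above. This is a sketch only: Eclipse Paho is one assumed choice of MQTT client library, and the real work of mapping STOMP destinations and subscriptions to MQTT topics (and back) is omitted.

import java.util.Collection;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.simp.SimpMessageHeaderAccessor;
import org.springframework.messaging.simp.broker.AbstractBrokerMessageHandler;

// Hypothetical replacement for StompBrokerRelayMessageHandler, as described above.
public class MqttBrokerMessageHandler extends AbstractBrokerMessageHandler {

    private final String brokerUrl;
    private MqttClient mqttClient;

    public MqttBrokerMessageHandler(SubscribableChannel inboundChannel,
                                    MessageChannel outboundChannel,
                                    SubscribableChannel brokerChannel,
                                    Collection<String> destinationPrefixes,
                                    String brokerUrl) {
        super(inboundChannel, outboundChannel, brokerChannel, destinationPrefixes);
        this.brokerUrl = brokerUrl;
    }

    @Override
    protected void startInternal() {
        try {
            mqttClient = new MqttClient(brokerUrl, "spring-websocket-relay");
            mqttClient.connect();
        } catch (MqttException ex) {
            throw new IllegalStateException("Could not connect to MQTT broker", ex);
        }
    }

    @Override
    protected void stopInternal() {
        try {
            mqttClient.disconnect();
        } catch (MqttException ex) {
            // log and ignore on shutdown
        }
    }

    @Override
    protected void handleMessageInternal(Message<?> message) {
        // Translate a client STOMP SEND frame into an MQTT publish.
        // Adapting subscriptions and broker-to-client traffic is left out here.
        String destination = SimpMessageHeaderAccessor.getDestination(message.getHeaders());
        if (destination == null) {
            return;
        }
        try {
            mqttClient.publish(destination, new MqttMessage((byte[]) message.getPayload()));
        } catch (MqttException ex) {
            throw new IllegalStateException("Publish failed", ex);
        }
    }
}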

Spring Cloud Bus with RabbitMQ

We're using Spring Cloud Config Server, and Spring config clients get updates via Spring Cloud Bus (RabbitMQ).
It looks like every config client instance creates a queue bound to the 'spring.cloud.bus' exchange.
Are there any scalability limits on how many app instances can connect to the 'spring.cloud.bus' exchange?
I suppose RabbitMQ could be scaled to handle this.
Looking for any guidelines on this.
Many thanks.
The Spring Cloud Config Server can have multiple instances since it is stateless. That, coupled with a RabbitMQ cluster, should scale to a very large number of instances.
A viable solution would be Spring Cloud Config Server behind a load balancer, with a RabbitMQ cluster.
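As a small illustration of what each client instance on the bus looks like (a sketch; the property name and endpoint are hypothetical), a @RefreshScope bean is rebuilt with the new configuration when a refresh event arrives over the 'spring.cloud.bus' exchange:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical config client bean: after a bus refresh event, the bean is
// recreated and picks up the new value from the config server.
@RefreshScope
@RestController
public class MessageController {

    @Value("${demo.message:default}")  // property name is a placeholder
    private String message;

    @GetMapping("/message")
    public String message() {
        return message;
    }
}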

When connecting to a JMS server, does the client have to be using the same API that the server is using?

For example, since our server is using TIBCO EMS, would I be able to connect to it using OpenJMS or WeblogicJMS?
JMS standardizes the API but not the wire protocol. All JMS implementations are based on the same API interfaces, but you need different implementation libraries/jar files on your classpath to match the server you're connecting to. In the TIBCO case, if you're connecting to an EMS server, you'll need tibjms.jar and possibly other jars; you cannot substitute something from OpenJMS etc., since they use different wire protocols.
JMS is pretty much the same as JDBC in this regard.
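To make the JDBC analogy concrete, here is a sketch that uses only the standard javax.jms API; switching brokers means switching the ConnectionFactory implementation and jars, not the messaging code. The TIBCO class below comes from tibjms.jar, and the URL, credentials, and queue name are placeholders:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

public class EmsSender {
    public static void main(String[] args) throws Exception {
        // Vendor-specific implementation class from tibjms.jar; everything
        // below this line is plain, portable JMS API.
        ConnectionFactory factory =
                new com.tibco.tibjms.TibjmsConnectionFactory("tcp://ems-host:7222");

        Connection connection = factory.createConnection("user", "password");
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("demo.queue"));
            TextMessage msg = session.createTextMessage("hello");
            producer.send(msg);
        } finally {
            connection.close();
        }
    }
}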