I am trying to find an option in ActiveMQ (5.13) that would allow me to configure the broker with a maximum number of AMQP connections from one client. The goal is to prevent one malicious or malfunctioning client from consuming all the connections available on the broker, thereby preventing other clients from connecting.
I am aware of the possibility to set maximumConnections on the transportConnector but, as far as I understand, this is a global limit on all connections, so it would not help in this case.
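For reference, the global limit I am referring to is the transport-level option on the connector URI; a sketch (the port and value are illustrative):

```xml
<transportConnectors>
    <!-- Caps the total number of connections on this connector,
         regardless of which client they come from -->
    <transportConnector name="amqp"
        uri="amqp://0.0.0.0:5672?maximumConnections=1000"/>
</transportConnectors>
```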
Is my understanding of the maximumConnections correct?
Is there a way to configure maxConnections per client on the broker?
No, there is no such property for a per-client limit in ActiveMQ. You'd first need to narrow down what you define as a single client, as that can be defined differently by different people. IP address might not work, as there can be several different applications coming from a single address, depending on network topology or application co-location within a single box, etc.
Related
As explained in this question, we have a network of brokers consisting of three brokers on different servers.
The network connectors are configured as follows:
<networkConnectors>
<networkConnector uri="static:(ssl://broker2:61616,ssl://broker3:61616)" networkTTL="5"/>
</networkConnectors>
We are also considering adding the following parameters to the network connector, as we think this might improve the behavior (due to advice in this blog post):
dynamicOnly="true"
decreaseNetworkConsumerPriority="true"
suppressDuplicateQueueSubscriptions="true"
However, it is also scary to do, as we feel we do not fully understand what is happening right now and so cannot really be sure of the effect these settings will have on the behavior. The official documentation is not really clear on this (neither on this point nor on many others, by the way).
UPDATE:
What we want to achieve is that messages are as much as possible handled on the broker where they first arrive. Clients (as shown in the other post) are connected via Wifi, but have a fallback to 4G. In practice, we see that they regularly switch network and therefore connect to a different broker. We want to limit the traffic over the network connectors.
These settings should get you that 'prefer local' behavior you want:
decreaseNetworkConsumerPriority="true"
suppressDuplicateQueueSubscriptions="true"
Also, add messageTTL="4" and consumerTTL="1". This allows messages to hop n + 1 times (where n is the number of brokers in your cluster). consumerTTL="1" means brokers will only see consumers from their immediate peers, not across multiple hops.
In your use case, drop the networkTTL setting; messageTTL and consumerTTL replace it and give you more control over message hops and consumer awareness.
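Putting the recommended attributes together, the connector from the question would look something like this (a sketch; the hostnames and port are from the original config, and the TTL values assume a three-broker cluster):

```xml
<networkConnectors>
    <!-- prefer local consumers; suppress duplicate subscriptions across the mesh -->
    <networkConnector uri="static:(ssl://broker2:61616,ssl://broker3:61616)"
        dynamicOnly="true"
        decreaseNetworkConsumerPriority="true"
        suppressDuplicateQueueSubscriptions="true"
        messageTTL="4"
        consumerTTL="1"/>
</networkConnectors>
```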
A server listens on a UDP port, and many clients can connect to it, organized into many groups. Within a group, one client sends a message and the server routes the message to the rest of the group. Many such groups can run simultaneously. How can we test the maximum number of connections the server can handle without inducing a visible lag in the response time?
Firstly, let me describe your network topology again: there is a server and many clients, and the clients are divided into several groups. A client sends a message to the server, and the server then sends something to the other clients in that group.
If the topology is as described above, is the connection limit you want to find about how many clients the server can send to at the same time, or about how many clients can send to the server at the same time?
One way to test these two circumstances is to use multiple threads (or goroutines, if you can write the test in Go). Each case needs a different criterion for deciding when the limit has been reached.
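As a minimal sketch of the multi-threaded approach (in Java; the in-process echo server, the client count, and the "ping" payload are all stand-ins for your real server and protocol), each thread records its round-trip time, and you ramp the client count up until the worst-case latency becomes visibly large:

```java
import java.net.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class UdpLoadTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical in-process echo server standing in for the real one.
        DatagramSocket server = new DatagramSocket(0);
        int port = server.getLocalPort();
        Thread echo = new Thread(() -> {
            byte[] buf = new byte[64];
            try {
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    server.receive(p);
                    server.send(p); // echo back to the sender
                }
            } catch (Exception e) { /* socket closed: test over */ }
        });
        echo.setDaemon(true);
        echo.start();

        int clients = 50; // ramp this up until worst-case latency degrades
        ExecutorService pool = Executors.newFixedThreadPool(clients);
        AtomicLong worstNanos = new AtomicLong();
        CountDownLatch done = new CountDownLatch(clients);
        for (int i = 0; i < clients; i++) {
            pool.execute(() -> {
                try (DatagramSocket sock = new DatagramSocket()) {
                    sock.setSoTimeout(2000); // treat 2 s as "no response"
                    byte[] msg = "ping".getBytes();
                    DatagramPacket out = new DatagramPacket(msg, msg.length,
                            InetAddress.getLoopbackAddress(), port);
                    byte[] in = new byte[64];
                    long t0 = System.nanoTime();
                    sock.send(out);
                    sock.receive(new DatagramPacket(in, in.length));
                    long rtt = System.nanoTime() - t0;
                    worstNanos.accumulateAndGet(rtt, Math::max);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        server.close();
        System.out.println("worst RTT ms: " + worstNanos.get() / 1_000_000);
    }
}
```

The "visible lag" criterion is the worst RTT printed at the end; for the other question (how many clients can send at once), you would instead count how many threads time out.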
I'm facing an issue where one client subscribed (by mistake) 4000+ times to the same topic (through ~100 connections). This resulted in ActiveMQ running very low on resources and becoming very slow.
Is there some kind of mechanism to prevent this? Like one client/user could subscribe X times maximum on a topic?
I'm not aware of any feature in ActiveMQ 5.x that would provide the functionality you're looking for.
However, ActiveMQ Artemis has per-user resource limits. Therefore, if your broker is secured such that clients have to connect with a username and password (which it should be), then you can enforce a per-user connection limit using something like this in broker.xml:
<resource-limit-settings>
<resource-limit-setting match="myUser">
<max-connections>5</max-connections>
</resource-limit-setting>
</resource-limit-settings>
It's also worth noting that when a consumer creates a subscription on a topic, a queue is created which holds all the messages for that subscription. You can limit the number of queues a user can create (and thereby the number of subscriptions) by using the max-queues config parameter, e.g.:
<resource-limit-settings>
<resource-limit-setting match="myUser">
<max-queues>3</max-queues>
</resource-limit-setting>
</resource-limit-settings>
I have a middleware based on Apache Camel which does a transaction like this:
from("amq:job-input")
    .to("inOut:businessInvoker-one") // into business processor
    .to("inOut:businessInvoker-two")
    .to("amq:job-out");
Currently it works perfectly. But I can't scale it up, let's say from 100 TPS to 500 TPS. I have already
raised the concurrent consumers setting and used an empty businessProcessor
configured JAVA_XMX and PERMGEN
to speed up the transaction.
According to the ActiveMQ web console, there are many messages waiting to be processed in the 500 TPS scenario. I guess one of the solutions is to scale ActiveMQ up, so I want to use multiple brokers in a cluster.
According to http://fuse.fusesource.org/mq/docs/mq-fabric.html (section "Topologies"), configuring ActiveMQ in clustering mode is suitable for non-persistent messages. IMHO, it is true that it's not suitable, because all running brokers use the same store file. But what about separating the store files? That is possible now, right?
Could anybody explain this? If it's not possible, what is the best way to load balance persistent message?
Thanks
You can share the load of persistent messages by creating 2 master/slave pairs. The master and slave share their state either through a database or a shared filesystem, so you need to duplicate that setup.
Create 2 master/slave pairs, and configure so-called "network connectors" between the 2 pairs. This will double your performance without the risk of losing messages.
See http://activemq.apache.org/networks-of-brokers.html
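A sketch of the network connector configured on pair one, pointing at pair two (the hostnames are hypothetical; the masterslave: scheme keeps the bridge attached to whichever broker of the remote pair currently holds the master lock):

```xml
<networkConnectors>
    <!-- bridge from pair 1 to whichever broker in pair 2 is currently master -->
    <networkConnector name="pair1-to-pair2"
        uri="masterslave:(tcp://pair2-master:61616,tcp://pair2-slave:61616)"
        duplex="true"/>
</networkConnectors>
```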
This answer relates to a version of the question before the Camel details were added.
It is not immediately clear what exactly it is that you want to load balance and why. Messages across consumers? Producers across brokers? What sort of concern are you trying to address?
In general you should avoid using networks of brokers unless you are trying to address some sort of geographical use case, have too many connections for a single broker to handle, or if a single broker (which could be a pair of brokers configured in HA) is not giving you the throughput that you require (in 90% of cases it will).
In a broker network, each node has its own store and passes messages around by way of a mechanism called store-and-forward. Have a read of Understanding broker networks for an explanation of how this works.
ActiveMQ already works as a kind of load balancer by distributing messages evenly in a round-robin fashion among the subscribers on a queue. So if you have 2 subscribers on a queue, and send it a stream of messages A, B, C, D, one subscriber will receive A & C, while the other receives B & D.
If you want to take this a step further and group related messages on a queue so that they are processed consistently by only one subscriber, you should consider Message Groups.
Adding consumers might help up to a point (depending on the number of cores/CPUs your server has). Adding threads beyond the point where your "Camel server" is utilizing all available CPU for the business processing makes no sense and can be counterproductive.
Adding more ActiveMQ machines is probably needed. You can use an ActiveMQ "network" to communicate between instances that have separate persistence files. It should be straightforward to add more brokers and put them into a network.
Make sure you performance-test along the way so you know what kind of load the broker can handle and what load the Camel processor can handle (if on different machines).
When you do persistent messaging, you likely also want transactions. Make sure you are using them.
If all running brokers use the same store file or tx-supported database for persistence, then only the first broker to start will be active, while the others are in standby mode until the first one loses its lock.
If you want to load-balance your persistence, there are two approaches you could try:
First, configure several brokers in network-bridge mode, then send messages to any one of them and consume messages from more than one of them. This load-balances both the brokers and their persistence stores.
Second, override the persistenceAdapter and use database-sharding middleware (such as tddl: https://github.com/alibaba/tb_tddl) to store the messages by partitions.
Your first step is to increase the number of workers consuming from ActiveMQ. The way to do this is to add the ?concurrentConsumers=10 attribute to the starting URI. The default behaviour is that only one thread consumes from that endpoint, leading to a pile-up of messages in ActiveMQ. Adding more brokers won't help.
Secondly, what you appear to be doing could benefit from a Staged Event-Driven Architecture (SEDA). In a SEDA, processing is broken down into a number of stages which can have different numbers of consumers on them to even out throughput. Your threads consuming from ActiveMQ only do one step of the process, hand off the Exchange to the next phase, and go back to pulling messages from the input queue.
Your route can therefore be rewritten as 2 smaller routes:
from("activemq:input?concurrentConsumers=10").id("FirstPhase")
.process(businessInvokerOne)
.to("seda:invokeSecondProcess");
from("seda:invokeSecondProcess?concurrentConsumers=20").id("SecondPhase")
.process(businessInvokerTwo)
.to("activemq:output");
The two stages can have different numbers of concurrent consumers so that the rate of message consumption from the input queue matches the rate of output. This is useful if one of the invokers is much slower than another.
The seda: endpoint can be replaced with another intermediate activemq: endpoint if you want message persistence.
Finally to increase throughput, you can focus on making the processing itself faster, by profiling the invokers themselves and optimising that code.
The book Essential WCF claims that NetTcpBinding.MaxConnections limits the number of connections to an endpoint. Thus, if the property is set to a value of 10, then only 10 concurrent connections will be allowed to that endpoint.
But the following blog post, http://kennyw.com/work/indigo/181, claims this property doesn't limit the number of concurrent connections, but instead only specifies the maximum number of connections that will be cached and reused by another channel:
MaxConnections for TCP is not a hard and fast limit, but rather a knob on the connections that we will cache in our connection pool. That is, if you set MaxConnections=2, you can still open 4 client channels on the same factory simultaneously. However, when you close all of these channels, we will only keep two of these connections around (subject to IdleTimeout of course) for future channel usage. This helps performance in cases where you are creating and disposing client channels. This knob will also apply to the equivalent usage on the server-side as well (that is, when a server-side channel is closed, if we have less than MaxConnections in our server-side pool we will initiate I/O to look for another new client channel).
So which is true?
EDIT:
First of all, you mean NetTcpBinding.MaxConnections, right?
Yes, thank you ... I've corrected the typo
See official docs at http://msdn.microsoft.com/en-us/library/system.servicemodel.nettcpbinding.maxconnections.aspx and especially http://msdn.microsoft.com/en-us/library/ms731078.aspx - the behavior is actually different depending if it's the server or the client, but in no case is it a hard limit on the number of connections. (On the client, it's a limit on the connections that are pooled, and on the server it's a limit on connections that haven't been accepted yet by the ServiceModel layer).
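For illustration, the client-side pool limit is set on the binding; a sketch of an app.config fragment (the binding name and value are hypothetical):

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- caps the pooled (cached) connections, not the concurrent ones -->
      <binding name="pooledTcp" maxConnections="10" />
    </netTcpBinding>
  </bindings>
</system.serviceModel>
```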
a) I assume by "pooled" you mean the number of connections that will be reused by other channels. But the blog says this is the case for both the client and the server, while if I understand you correctly, you're saying that on the server it means the number of connections waiting to be accepted by the ServiceModel layer?
Thus, if the property is set to 10, then only 10 connections will be allowed to wait to be accepted, and if another connection tries to wait, it will immediately be rejected?