Can RabbitMQ suggest a prefetch/QoS value based on its metrics?

RabbitMQ allows setting QoS (prefetch):
https://www.rabbitmq.com/consumer-prefetch.html
The question is more about the optimal values.
Can RabbitMQ propose an optimal value based on its own metrics?

No, it doesn't. However, you can estimate it based on your use case.
I suggest you read this blog post from CloudAMQP: https://www.cloudamqp.com/blog/2017-12-29-part1-rabbitmq-best-practice.html
This is really well written and provides a lot of useful advice. In their section "How to set correct prefetch value?" they describe three cases:
Single/few consumers and short processing time: prefetch ~= round_trip / processing_time
Many consumers and short processing time: prefetch < round_trip / processing_time
Many consumers and long processing time: prefetch set to 1
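The three heuristics above can be sketched as a small Python helper. The function name, the consumer-count adjustment, and the 1000 ms "long processing" cutoff are illustrative assumptions, not anything RabbitMQ itself provides:

```python
# Rough sketch of the CloudAMQP prefetch heuristics; all names and
# thresholds here are illustrative, not a RabbitMQ API.
def suggest_prefetch(round_trip_ms, processing_ms, consumers, long_processing_ms=1000):
    """Estimate a prefetch count from measured round-trip and processing times."""
    if processing_ms >= long_processing_ms and consumers > 1:
        return 1  # many consumers, long processing: prefetch 1 keeps work spread out
    base = max(1, round(round_trip_ms / processing_ms))
    if consumers > 1:
        # many consumers, short processing: stay below the single-consumer optimum
        return max(1, base // consumers)
    return base

print(suggest_prefetch(round_trip_ms=100, processing_ms=5, consumers=1))     # 20
print(suggest_prefetch(round_trip_ms=100, processing_ms=5, consumers=4))     # 5
print(suggest_prefetch(round_trip_ms=100, processing_ms=2000, consumers=8))  # 1
```

You would still need to measure round-trip and processing times yourself, e.g. from client-side timings or the management plugin's message rates.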

Related

"max allowed size 128000 bytes" reached when there are a lot of publisher/subscribers

I'm using distributed pub/sub in an Akka.NET cluster and I've begun seeing this error when pub/sub grows to approx. 1000 subscribers and 3000 publishers.
max allowed size 128000 bytes, actual size of encoded Akka.Cluster.Tools.PublishSubscribe.Internal.Delta was 325691 bytes
I don't know, but I'm guessing distributed pub/sub is trying to pass the pub/sub list to other actor systems on the cluster?
Anyway, I'm a little hesitant about boosting size limits because of this post. So what would be a reasonable approach to correcting this?
You may want to tweak the distributed pub/sub HOCON settings. Messages in Akka.Cluster.DistributedPubSub are grouped together and sent as deltas. Two settings may interest you:
akka.cluster.pub-sub.max-delta-elements = 3000 sets the maximum number of items a single delta message may contain. 3000 is the default value, and you may want to lower it in order to reduce the size of the delta message (which seems to be the issue in your case).
akka.cluster.pub-sub.gossip-interval = 1s indirectly affects how often gossip messages are sent. The more often they are sent, the smaller they tend to be, assuming a continuously saturated channel.
If these don't help, you may also consider reducing the size of your custom messages by introducing custom serializers with a smaller payload footprint.
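For illustration, both settings could be lowered together in your HOCON configuration. The values below are arbitrary examples to show the shape, not tested recommendations:

```hocon
akka.cluster.pub-sub {
  # default 3000; lower it to shrink each Delta message
  max-delta-elements = 500
  # default 1s; gossiping more often means each delta carries fewer changes
  gossip-interval = 500ms
}
```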

Optimization techniques

In the above diagram, we have a producer and a consumer. The producer takes about 1 unit of time to produce something and the consumer takes about 9 units of time (4 to read and compute the data and 5 to write it back to the database). From a design standpoint, what might be my options to ensure that the consumer does not start lagging behind? What can I do (like caching, or ensuring proper indexing in the DB) to make this better?
I don't know the hidden details of your system, but the first suggestion that comes to mind is to create multiple threads for both consumer and producer, using a thread pool to reuse them. You will need more consumer threads than producer threads, since the consumer is slower and the control flow is synchronous. Tune the ratio of consumer to producer threads so that there are always consumer threads available to pick up the events the producer threads create.
Again, I don't know your exact requirements. For example, using multiple threads affects the order in which events are processed, which can introduce inconsistency. So if you don't require events to be processed and persisted in the exact order they arrive, you can certainly boost performance through parallelization (using a thread pool).
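A minimal Python sketch of the thread-pool idea, assuming a bounded queue and roughly the 1:9 producer-to-consumer time ratio from the question. All names and counts here are illustrative:

```python
# One producer feeding N consumer workers through a bounded queue.
# The bound applies back-pressure if consumers ever fall behind.
import queue
import threading

q = queue.Queue(maxsize=100)
NUM_CONSUMERS = 9  # producer:consumer work is ~1:9, so ~9 workers per producer

def producer(n_items):
    for i in range(n_items):
        q.put(i)                  # ~1 unit of work per item
    for _ in range(NUM_CONSUMERS):
        q.put(None)               # poison pills to stop the workers

results = []
results_lock = threading.Lock()

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        with results_lock:
            results.append(item * 2)  # stand-in for read/compute/write (~9 units)

workers = [threading.Thread(target=consumer) for _ in range(NUM_CONSUMERS)]
for w in workers:
    w.start()
producer(50)
for w in workers:
    w.join()
print(len(results))  # 50, though ordering is not guaranteed with multiple workers
```

Note how the unordered `results` list makes the consistency trade-off visible: with more than one consumer thread, completion order is no longer arrival order.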
Good luck!

Hazelcast distributed lock fairness

Is there any way to achieve Hazelcast distributed lock fairness?
It doesn't seem to be supported at the moment.
Please advise.
Thank you.
Hazelcast distributed ILocks do not support fairness, as stated in the docs. Blocking operations are put in a wait set and picked up randomly, so it can be quite unfair in some situations.
Implementing fairness with distributed locks would decrease performance greatly. Even if it would satisfy your use-case, it might not meet your performance requirements.
In most situations, a Hazelcast EntryProcessor achieves what an ILock would offer. It has a FIFO-based work queue, so processor requests going to the same partition are guaranteed to run in FIFO order.
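The per-partition FIFO behavior described above can be illustrated with a toy Python model: one single-threaded executor per "partition", so operations that hash to the same key always run in submission order. This is a simulation of the idea, not Hazelcast client code:

```python
# Toy model of per-partition FIFO execution: each partition is served by a
# single-threaded executor, so same-key operations run in submission order.
from concurrent.futures import ThreadPoolExecutor

PARTITIONS = 4
executors = [ThreadPoolExecutor(max_workers=1) for _ in range(PARTITIONS)]
log = {"a": [], "b": []}

def entry_processor(key, value):
    log[key].append(value)  # stand-in for the processor's work on the entry

def submit(key, value):
    part = hash(key) % PARTITIONS          # route the key to its partition
    return executors[part].submit(entry_processor, key, value)

futures = [submit("a", i) for i in range(10)] + [submit("b", i) for i in range(10)]
for f in futures:
    f.result()
for ex in executors:
    ex.shutdown()

print(log["a"])  # [0, 1, ..., 9] — FIFO order preserved per key
```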
Hazelcast has a variety of distributed data structures. I am sure with the right combination of usage, you can achieve fairness for your use case.

rabbitmq hundred queue bindings to a topic exchange with hundred unique key

Let's say I have 200 events which are going to be placed in multiple queues (or not), and I was thinking of binding each queue to a topic exchange with 200 unique keys. Am I going to see a performance bottleneck from adding 200 unique bindings between one queue and one exchange?
if yes, do I have an alternative?
Thanks in advance
In general, it is unlikely (like snow on July 4th) that routing will be the most resource-consuming part. For further reading on routing, see Very fast and scalable topic routing – part 1 and Very fast and scalable topic routing – part 2.
As to your particular case, it depends on the resources available to the RabbitMQ server(s), message flow, bindings, binding-key complexity, etc. Anyway, it is always better to run some load tests first to figure out bottlenecks, but again, it is unlikely that routing will be the cause of significant performance degradation.
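To get a feel for what topic routing costs per message, here is a toy Python matcher for AMQP-style patterns (`*` matches exactly one word, `#` matches zero or more words). A real broker uses trie-based structures rather than a linear scan, which is one reason a couple hundred bindings is cheap:

```python
# Toy AMQP-style topic matcher; a broker does this with a trie, not a scan.
def topic_match(pattern, key):
    """'*' matches exactly one dot-separated word, '#' matches zero or more."""
    p, k = pattern.split("."), key.split(".")

    def rec(i, j):
        if i == len(p):
            return j == len(k)
        if p[i] == "#":
            # '#' may absorb any number of remaining words
            return any(rec(i + 1, j2) for j2 in range(j, len(k) + 1))
        if j < len(k) and (p[i] == "*" or p[i] == k[j]):
            return rec(i + 1, j + 1)
        return False

    return rec(0, 0)

bindings = [f"event.{i}" for i in range(200)]  # 200 unique binding keys
key = "event.42"
matches = [b for b in bindings if topic_match(b, key)]
print(matches)  # ['event.42'] — even a naive scan over 200 bindings is trivial
```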

How to limit/control sampling rate in Apache Jmeter?

Ok, so I have control over the below parameters in Apache JMeter:
Number of Threads (users)
Ramp-up period (in seconds)
How do I test for varying sampling rate and not varying user addition rate? Even a fixed sampling rate would do.
Thanks in advance :)
Got my answer :)
Sampling rate can be limited by adding a timer.
Right-click on your test plan, then Add, then Timer.
There is a variety of timers to cater to various needs. I used the Constant Timer in my case.
The particular timer you need is the Throughput Shaping Timer: http://code.google.com/p/jmeter-plugins/wiki/ThroughputShapingTimer
The Constant Throughput Timer is excellent for regulating the number of hits without changing the threads and/or delays.
http://jmeter.apache.org/usermanual/component_reference.html#Constant_Throughput_Timer
It can also be controlled from outside (though not easily).
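One way to drive the Constant Throughput Timer from outside is to put a JMeter property function in its "Target throughput" field and set that property on the command line. The property name `throughput` below is my own choice, and the values are examples:

```
# In the Constant Throughput Timer's "Target throughput (in samples per minute)"
# field, use a property with a default of 60:
${__P(throughput,60)}

# Then override it when launching in non-GUI mode:
jmeter -n -t test_plan.jmx -Jthroughput=1200
```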