We are using Ignite's distributed data structure, IgniteQueue. Please find the server details below:
Server 1: Initializes the queue and continuously runs.
Server 2: Producer. Puts items into the queue. Started intermittently.
Server 3: Consumer. Takes items from the queue. Started intermittently.
Issue: When there is a gap of about 10 minutes between the producer and consumer runs, data in the queue is lost.
Could you please provide the correct (eviction) configuration that persists the queue contents until Server 1 is stopped?
Ultimately there shouldn't be any data loss.
There is no eviction for queues. And by default there are no backups, so most likely when you start and stop servers you trigger rebalancing and the eventual loss of some entries. I suggest the following:
Start the consumer and producer as clients rather than servers. The server topology that holds the data should always be as stable as possible.
Use CollectionConfiguration#setBackups to configure one or more backups for the underlying cache used by the queue. This will help preserve the state even if one of the servers fails.
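To illustrate, here is a minimal Java sketch of what that server-side setup could look like (the queue name "eventQueue", the single backup, and the unbounded capacity are assumptions for illustration, not values from the question):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CollectionConfiguration;

public class QueueServer {
    public static void main(String[] args) {
        // Server node (default mode); clients call Ignition.setClientMode(true) first.
        Ignite ignite = Ignition.start();

        CollectionConfiguration colCfg = new CollectionConfiguration();
        colCfg.setCacheMode(CacheMode.PARTITIONED);
        colCfg.setBackups(1); // keep one backup copy of every entry on another node

        // Capacity 0 means an unbounded queue; "eventQueue" is a hypothetical name.
        IgniteQueue<String> queue = ignite.queue("eventQueue", 0, colCfg);
        queue.add("event-1");
    }
}
```

With at least one backup configured, losing a single server node no longer loses the partition's entries, which addresses the data-loss symptom described above.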
Done as per Valentin Kulichenko's answer, as below:
Server 1: Initializes the queue and continuously runs.
Client 1: Producer. Puts items into the queue. Started intermittently.
Client 2: Consumer. Takes items from the queue. Started intermittently.
Code to start an Ignite client:
Ignition.setClientMode(true)
val ignite = Ignition.start()
We are currently using a service bus in Azure and, for various reasons, we are switching to RabbitMQ.
Under heavy load, and when specific backend tasks are having problems, one of our queues can have up to 1 million messages waiting to be processed.
We understood that RabbitMQ can have a maximum of 50 000 messages per queue.
The question is: how can we design the RabbitMQ infrastructure so that it continues to work while messages are temporarily accumulating?
Note: we want to host our RabbitMQ server in a Docker image inside a Kubernetes cluster.
We imagine an exchange that would load-balance messages between queues on the nodes behind it.
But what is unclear to us is how to dynamically add new queues on demand when we detect that queues are getting full.
RabbitMQ can have a maximum of 50 000 messages per queue.
There is no such limit.
RabbitMQ can handle more messages than that using quorum queues, or classic queues in lazy mode.
With stream queues, RabbitMQ can handle millions of messages per second.
we imagine an exchange that would load balance messages between queues in nodes behind.
You can do that using different bindings.
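As one sketch of such a layout with the RabbitMQ Java client (the exchange name, queue names, and routing keys are made up for illustration), a direct exchange bound to several queues lets publishers spread messages by choosing a routing key:

```java
import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ShardedBindings {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: a locally reachable broker

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // One durable direct exchange in front of several queues.
            channel.exchangeDeclare("jobs", BuiltinExchangeType.DIRECT, true);

            // Two queues bound under different routing keys; a publisher picks
            // a key (e.g. hash of a job id modulo 2) to spread the load.
            channel.queueDeclare("jobs-0", true, false, false, null);
            channel.queueDeclare("jobs-1", true, false, false, null);
            channel.queueBind("jobs-0", "jobs", "shard-0");
            channel.queueBind("jobs-1", "jobs", "shard-1");
        }
    }
}
```

If you would rather not manage the key choice yourself, the consistent-hash exchange plugin does the spreading on the broker side.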
Kubernetes cluster.
I would suggest using the k8s Operator.
But what is unclear to us is how to dynamically add new queues on demand if we detect that queues are getting full.
There is no concept of "full" in RabbitMQ. There are limits you can impose using max-length or a TTL.
A RabbitMQ queue will never be "full" (no such limitation exists in the software). A queue's practical maximum length depends instead on:
Queue settings (e.g. max-length/max-length-bytes)
Message expiration settings such as x-message-ttl
Underlying hardware & cluster setup (available RAM and disk space).
Unless you are using Streams (a new feature in v3.9), you should always try to keep your queues short (if possible). The entire idea of a message queue (in its classical sense) is that a message should be passed along as soon as possible.
Therefore, if you find yourself with long queues, you should rather match the load of your producers by adding more consumers.
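If you do want an explicit bound with back-pressure instead of silent dropping, a declaration along these lines (RabbitMQ Java client; the queue name and the limit are assumptions) creates a replicated quorum queue with a max-length and reject-publish overflow behavior:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class DeclareBoundedQueue {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: a locally reachable broker

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-queue-type", "quorum");       // replicated queue type
            queueArgs.put("x-max-length", 1_000_000);      // optional upper bound
            queueArgs.put("x-overflow", "reject-publish"); // push back on publishers
                                                           // instead of dropping

            // durable = true, exclusive = false, autoDelete = false
            // (quorum queues must be durable and non-exclusive)
            channel.queueDeclare("work-queue", true, false, false, queueArgs);
        }
    }
}
```

With reject-publish, publishers receive a negative acknowledgement when the limit is hit, which is usually preferable to losing the oldest messages.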
I set up an ActiveMQ cluster following http://activemq.apache.org/replicated-leveldb-store.html.
It works fine with persistent messages.
But I find that non-persistent messages won't sync from master to slave. Is there any way to solve this?
The simple answer is to use persistent messages if you want them to survive a failover.
Non-persistent messages are not expected to survive broker failovers, and the system should not rely on them being there.
Typical scenarios for non persistent messages are
Periodic updates at high frequency where the last message carries the current status (e.g. stock exchange rates, time before the bus arrives at a stop, etc.)
Messages with a (short) expiry time
Messages that can be resent in case of timeout. Typical with request/response - if no response arrives within X seconds, request again.
Unimportant data such as real time statistics that you can live without.
The benefit is performance: the message does not have to be synced to slaves or stored on disk, so you get much higher throughput.
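For completeness, here is a sketch of a JMS producer that marks its messages persistent against a replicated ActiveMQ cluster (the broker URLs and the queue name are placeholders, not values from the question):

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentProducer {
    public static void main(String[] args) throws Exception {
        // The failover: transport lets the client reconnect to whichever
        // broker becomes master after a failover.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "failover:(tcp://broker1:61616,tcp://broker2:61616)");

        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("important.queue"); // hypothetical name
            MessageProducer producer = session.createProducer(queue);

            // PERSISTENT messages are written to the replicated store and
            // therefore survive a master/slave failover.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.send(session.createTextMessage("order-42"));
        } finally {
            connection.close();
        }
    }
}
```

Note that PERSISTENT is already the JMS default delivery mode; setting it explicitly just guards against a NON_PERSISTENT override elsewhere.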
I would like to set up a JMS queue on a GlassFish v3 server for saving some protocol information to a SQL Server database.
My first try ended up in lots of deadlocks on the SQL Server.
My first question is: are the messages in a queue processed one after another or in parallel? How do I set it up to process the messages one after another? Time does not play a role; I want to put only a minimal load on the SQL Server.
The second: where can I see how many messages are waiting in the queue for processing?
I had a look at the monitoring in GlassFish and also at
http://server:adminport/__asadmin/get?monitor=true&pattern=server.applications.ear.test.war.TestMessageDrivenBean.*
but I could not see a "tobeprocessed" value or anything like that.
Many thanks,
Hasan
The listener you bind to the queue will process messages as they arrive; it responds to an onMessage event. You don't have to set anything up.
You do have to worry about what happens if the queue backs up because the listener(s) can't keep up.
You should also configure an error queue where messages that can't be processed go.
Have you thought about making the queue and database operation transactional? That way the message is put back on the queue if the database INSERT fails. You'll need an XA JDBC driver and a transaction manager to do it.
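A sketch of such a message-driven bean is below. The bean and queue names are hypothetical, and the serial-processing part relies on the GlassFish-specific pool setting (e.g. <max-pool-size>1</max-pool-size> in glassfish-ejb-jar.xml) rather than anything in the code itself:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Limiting the bean pool to a single instance (container-specific, set in
// glassfish-ejb-jar.xml, not here) makes the container deliver messages
// one at a time, minimizing concurrent load on the database.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class ProtocolMessageBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            String body = ((TextMessage) message).getText();
            // INSERT the protocol record into SQL Server here. With
            // container-managed transactions and an XA JDBC driver, a
            // failed INSERT rolls the message back onto the queue.
        } catch (Exception e) {
            throw new RuntimeException(e); // triggers redelivery by the container
        }
    }
}
```

The pending-message count is then visible on the broker side (the physical destination's message count in the Message Queue admin tooling) rather than on the bean.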
I am using Celery with RabbitMQ. Lately, I have noticed that a large number of temporary queues are being created.
So I experimented and found that when a task fails (that is, a task raises an exception), a temporary queue with a random name (like c76861943b0a4f3aaa6a99a6db06952c) is created, and the queue remains.
Some properties of the temporary queue, as shown by rabbitmqadmin, are as follows:
auto_delete : True
consumers : 0
durable : False
messages : 1
messages_ready : 1
And one such temporary queue is created every time a task fails (that is, raises an exception). How can I avoid this? In my production environment a large number of such queues get created.
It sounds like you're using amqp as the results backend. From the docs, here are the pitfalls of that particular setup:
Every new task creates a new queue on the server; with thousands of tasks the broker may be overloaded with queues and this will affect performance in negative ways. If you're using RabbitMQ then each queue will be a separate Erlang process, so if you're planning to keep many results simultaneously you may have to increase the Erlang process limit, and the maximum number of file descriptors your OS allows.
Old results will not be cleaned automatically, so you must make sure to consume the results or else the number of queues will eventually go out of control. If you're running RabbitMQ 2.1.1 or higher you can take advantage of the x-expires argument to queues, which will expire queues after a certain time limit after they are unused. The queue expiry can be set (in seconds) by the CELERY_AMQP_TASK_RESULT_EXPIRES setting (not enabled by default).
From what I've read in the changelog, this is no longer the default backend in versions >= 2.3.0 because users were getting bitten by this behavior. I'd suggest changing the results backend if this is not the functionality you need.
Well, Philip is right there. The following describes how I solved it. It is a configuration in celeryconfig.py.
I am still using CELERY_BACKEND = "amqp", as Philip said. But in addition to that, I am now using CELERY_IGNORE_RESULT = True. This setting ensures that extra queues are not created for every task.
I was already using this configuration, but the extra queue was still created when a task failed. Then I noticed that I was using another setting that needed to be removed: CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True. What it did was store results not for all tasks, but only for errors (tasks that failed), hence one extra queue per failed task.
The CELERY_TASK_RESULT_EXPIRES setting dictates the time-to-live of the temporary queues. The default is 1 day. You can modify this value.
The reason this is happening is that the celery workers' remote control is enabled (it is enabled by default).
You can disable it by setting CELERY_ENABLE_REMOTE_CONTROL to False.
However, note that you will then lose the ability to do things like add_consumer and cancel_consumer using the celery command.
The amqp backend creates a new queue for each task. If you want to avoid this, you can use the rpc backend, which keeps results in a single queue.
In your config, set
CELERY_RESULT_BACKEND = 'rpc'
CELERY_RESULT_PERSISTENT = True
You can read more about this on celery docs.
We have a system using SQL server service broker inside a single database.
This database is mirrored using high-safety mode and a witness.
We have a routing application that receives messages from a queue and forwards those messages to a node's queue.
On each of the 8 nodes we have another application that receives that message, processes it, and sends the status back to the routing queue.
For some unknown reason, that application did not see that its queue was already present in the system and re-created that queue again and again. Now I have 20,000 queues and 20,000 associated services in the system instead of 8.
I started to delete them, but it is really slow (about 3 minutes to delete 50 queues). Is this normally so slow? Does the mirroring interfere with SSB? Is there another method to delete all those queues?
Thanks