Configuring a JMS queue for saving low-priority information to SQL Server (GlassFish)

I would like to set up a JMS queue on a GlassFish v3 server for saving some protocol information to a SQL Server database.
My first try ended up in lots of deadlocks on the SQL Server.
My first question is: are the messages in a queue processed one after another or in parallel? How do I set it up so that the messages are processed one after another? Time does not play a role; I only want to put a minimal load on the SQL Server.
The second: where can I see how many messages are waiting in the queue for processing?
I had a look into the monitoring of GlassFish and also at
http://server:adminport/__asadmin/get?monitor=true&pattern=server.applications.ear.test.war.TestMessageDrivenBean.*
but I could not see a "tobeprocessed" value or something like that.
Many thanks,
Hasan

The listener you bind to the queue will process messages as they arrive; it responds to the onMessage event. You don't have to set anything up.
You do have to worry about what happens if the queue backs up because the listener(s) can't keep up.
You should also configure an error queue where messages that can't be processed go.
Have you thought about making the queue and database operation transactional? That way the message is put back on the queue if the database INSERT fails. You'll need an XA JDBC driver and a transaction manager to do it.
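If the consumer is a message-driven bean with container-managed transactions, the standard ejb-jar.xml deployment descriptor can declare that onMessage runs inside a transaction; the bean name ProtocolLoggerMDB below is made up for illustration. Combined with an XA-capable JMS connection factory and JDBC connection pool, a failed INSERT then rolls the message back onto the queue:

```xml
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>ProtocolLoggerMDB</ejb-name>
      <method-name>*</method-name>
    </method>
    <!-- onMessage runs in a container-managed transaction spanning
         the JMS receive and the database INSERT -->
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
</assembly-descriptor>
```

With this in place the container, not your code, decides whether the message is consumed or redelivered, based on whether the transaction commits.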

Related

Ignite: Persist until server stops

We are using Ignite's distributed data structure IgniteQueue. Please find the server details below:
Server 1: initializes the queue and runs continuously.
Server 2: producer. Produces contents to the queue; started now and then.
Server 3: consumer. Consumes contents from the queue; started now and then.
Issue: when there is a time gap of 10 minutes between the producer and the consumer, the data in the queue gets lost.
Could you please provide the correct [eviction] configuration that persists the contents of the queue until Server 1 is stopped?
Ultimately there should not be any data loss.
There is no eviction for queues. And by default there are no backups, so most likely when you start and stop servers, you cause rebalancing and eventual loss of some entries. I suggest the following:
Start the consumer and producer as clients rather than servers. The server topology that holds the data should always be as stable as possible.
Use CollectionConfiguration#setBackups to configure one or more backups for the underlying cache used by the queue. This will help preserve the state even if one of the servers fails.
Done as per Valentin Kulichenko's comment, as below:
Server 1: initializes the queue and runs continuously.
Client 1: producer. Produces contents to the queue; started now and then.
Client 2: consumer. Consumes contents from the queue; started now and then.
Code to make an Ignite client:
// Join the cluster as a client node, so no queue data is stored here
Ignition.setClientMode(true)
val ignite = Ignition.start()
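The backup setting mentioned above can be sketched in the same Scala style, continuing from the `ignite` instance in the snippet (the queue name and capacity are illustrative; a capacity of 0 means unbounded, and the configuration only takes effect when the queue is first created):

```scala
import org.apache.ignite.configuration.CollectionConfiguration

val colCfg = new CollectionConfiguration()
// Keep one backup copy of every queue entry on another server node,
// so losing a single server does not lose queue data
colCfg.setBackups(1)

val queue = ignite.queue[String]("protocolQueue", 0 /* unbounded */, colCfg)
```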

Using Message Broker for database replications (currently RabbitMQ )

When my system's data changes, I publish every single change to at least 4 different consumers (around 3,000 messages a second), so I want to use a message broker.
Most of the consumers are responsible for updating their database tables with the change.
(The DBs are different: Couch, MySQL, etc., therefore solutions such as using their own replication mechanisms or DB triggers are not possible.)
Questions:
Does anyone have experience with data replication between DBs using a message broker? Is it a good practice?
What do I do in case of failures? Let's say, using RabbitMQ, the client removed 10,000 messages from the queue, acked them, and threw an exception each time before handling them. Now they are lost. Is there a way to go back in the queue?
(Re-queueing them would mess up their order.)
Is using RabbitMQ a good practice? Isn't the ability to go back in the queue, as in Kafka, important for failure scenarios?
Thanks.
I don't have experience with DB replication using message brokers, but maybe this can help put you on the right track:
2. What do I do in case of failures?
Let's say, using RabbitMQ, the client removed 10,000 messages from the
queue, acked, and threw an exception each time before handling them.
Now they are lost. Is there a way to go back in the queue?
You can use dead-lettering to avoid losing messages. I'd suggest not acking until you are sure the consumers have processed the messages successfully, unless it is a long-running task. In case of failure, use basic.reject instead of basic.ack to send them to a dead-letter queue. You have a medium throughput, so you have to be careful with that.
However, the order is not guaranteed. You'll need to implement a manual mechanism to recover them in the order they were published, maybe by using message headers with some sort of timestamp or sequence-id mechanism, to re-process them in the correct order.
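A minimal sketch of that recovery idea, assuming each message carries a publisher-assigned sequence number in its headers (the `x-seq` header name and the message shape are illustrative, not part of RabbitMQ's API):

```python
def replay_in_order(dead_lettered):
    """Sort dead-lettered messages by a publisher-assigned sequence
    header before re-processing, so the original publish order is
    restored regardless of the order they landed in the DLQ."""
    return sorted(dead_lettered, key=lambda msg: msg["headers"]["x-seq"])

# Hypothetical messages pulled from a dead-letter queue, out of order:
dlq = [
    {"headers": {"x-seq": 3}, "body": "update c"},
    {"headers": {"x-seq": 1}, "body": "update a"},
    {"headers": {"x-seq": 2}, "body": "update b"},
]

for msg in replay_in_order(dlq):
    pass  # re-process each change in its original publish order
```

The publisher has to stamp the sequence number itself; the broker will not do it for you.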

NServiceBus ServiceInsight - Monitor Multiple Error and Audit

I have a couple of questions regarding ServiceInsight that I was hoping someone could shed some light on.
Can I monitor multiple error queues and audit queues? If so, how do I configure it to monitor those queues?
I understand that messages processed in the error queue are moved to the error.log queue. What happens to the messages processed in the audit queue, i.e. where do they go after the management service processes them?
Where are the messages ultimately stored by the management process, i.e. are they stored in RavenDB, and if so, under what database?
In addition, how do I remove or delete message conversations in the endpoint explorer? For example, let's say I just want to clear everything out.
Any additional insight (no pun intended) you can provide regarding the management and use of ServiceInsight would be greatly appreciated.
Question: Can I monitor multiple error queues and audit queues? If so, how do I configure it to monitor those queues?
Answer: ServiceInsight receives its data from a management service (AKA "ServiceControl") that collects its data from audit (and error) queues. A single instance of ServiceControl can connect to a single audit queue and a single error queue (in a single transport type). If you install multiple ServiceControl instances that collect auditing and error data from multiple queues, you can use ServiceInsight to connect to each of the ServiceControl instances. Currently (in beta) ServiceInsight supports one connection at a time, but you can easily switch between connections or open multiple instances of ServiceInsight, each connecting to a different ServiceControl instance.
Question: I understand that messages processed in the error queue are moved to the error.log queue. What happens to the messages processed in the audit queue, i.e where do they go after the management service processes them.
Answer: Audit messages are consumed, processed, and stored in the ServiceControl instance's auditing database (RavenDB).
Question: Where are the messages ultimately stored by the management process, i.e. are they stored in RavenDB and if so under what database.
Answer: Yes, they are stored (by default) in the embedded RavenDB database that is used by the management service (AKA "ServiceControl"). You can locate it under "C:\ProgramData\Particular\ServiceBus.Management"
Question: In addition, how do I remove or delete message conversations in the endpoint explorer. For example, let’s say I just want to clear everything out.
Answer: We will be adding full purge/delete support for this purpose in an upcoming beta update. For immediate purging of old messages, you can use the RavenDB Studio at the path specified above.
Please let me know whether these answer your questions, and do not hesitate to raise any other questions you may have!
Best regards,
Danny Cohen
Particular Software (NServiceBus Ltd.)

Is there a way for workers in Celery to tell the broker "don't send me the next message until I tell you so"

I am creating an app where order of execution is important. My tasks involve persisting data in a DB. I want to make sure that the next message on the queue is never processed until the currently executing task has been committed successfully. If there's an exception, just keep retrying the current task. But I'm not sure how retry works in Celery.
Does it re-queue the message and put it at the front of the queue, making sure this message will be executed first,
or
does it give the next messages in the queue a chance and retry later?
Sounds like a Celery chain is what you need.
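A minimal sketch of that idea, assuming a Redis broker on localhost; the task name, the save_to_db stub, the pending_changes list, and the retry settings are all illustrative:

```python
from celery import Celery, chain

app = Celery("changes", broker="redis://localhost:6379/0")

def save_to_db(change):
    ...  # hypothetical DB write + commit

@app.task(bind=True, max_retries=None, default_retry_delay=5)
def persist_change(self, change):
    try:
        save_to_db(change)
    except Exception as exc:
        # Retrying here keeps this task (and thus the whole chain)
        # from moving on until the commit succeeds.
        raise self.retry(exc=exc)

pending_changes = [{"id": 1}, {"id": 2}]  # illustrative change events

# A chain runs tasks strictly one after another: each task is only
# dispatched once the previous one has returned successfully, which
# gives the ordering the question asks for.
workflow = chain(persist_change.s(change) for change in pending_changes)
workflow.apply_async()
```

The ordering guarantee comes from the chain itself, not from worker concurrency settings, since the next signature is only sent to the broker after the current task finishes.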

Deletion of SQL Service Broker queue is slow

We have a system using SQL Server Service Broker inside a single database.
This database is mirrored using high-safety mode and a witness.
We have a routing application that receives messages from a queue and forwards those messages to a node's queue.
On each of the 8 nodes we have another application that receives the message, processes it, and sends the status back to the routing queue.
For some unknown reason, that application did not see that its queue was already present in the system and re-created the queue again and again. Now I have 20,000 queues and 20,000 associated services in the system instead of 8.
I started to delete them, but it is really slow (+/- 3 minutes to delete 50 queues). Is it normally this slow? Does the mirroring interfere with SSB? Is there another method to delete all those queues?
Thanks