Deletion of SQL Service Broker queues is slow - sql-server-2005

We have a system using SQL Server Service Broker inside a single database.
This database is mirrored using high-safety mode with a witness.
We have a routing application that receives messages from a queue and forwards those messages to a node's queue.
On each of the 8 nodes we have another application that receives the message, processes it, and sends the status back to the routing queue.
For some unknown reason, that application did not detect that its queue was already present in the system and re-created the queue again and again. Now I have 20,000 queues and 20,000 associated services in the system instead of 8.
I started to delete them, but it is really slow (roughly 3 minutes to delete 50 queues). Is this normally so slow? Does the mirroring interfere with Service Broker? Is there another method to delete all those queues?
Thanks
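A minimal JDBC sketch of one way to script the cleanup, assuming the stray queues and services can be matched by a naming pattern (the NodeQueue% pattern, connection string, default dbo schema, and batch size of 50 below are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class DropStrayQueues {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection string; any SQL client that can run dynamic SQL works as well.
        String url = "jdbc:sqlserver://dbserver;databaseName=BrokerDb;integratedSecurity=true";
        try (Connection con = DriverManager.getConnection(url)) {
            // Collect the runaway services and their queues (the name pattern is an assumption).
            List<String[]> targets = new ArrayList<>();
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT s.name AS service_name, q.name AS queue_name " +
                     "FROM sys.services s " +
                     "JOIN sys.service_queues q ON s.service_queue_id = q.object_id " +
                     "WHERE q.name LIKE 'NodeQueue%'")) {
                while (rs.next()) {
                    targets.add(new String[] { rs.getString(1), rs.getString(2) });
                }
            }
            // Drop the service before its queue (a queue cannot be dropped while a service
            // still references it) and commit in small batches so each transaction that is
            // shipped to the mirror stays small.
            con.setAutoCommit(false);
            int dropped = 0;
            try (Statement st = con.createStatement()) {
                for (String[] t : targets) {
                    st.executeUpdate("DROP SERVICE [" + t[0] + "]");
                    st.executeUpdate("DROP QUEUE [dbo].[" + t[1] + "]");
                    if (++dropped % 50 == 0) con.commit();
                }
            }
            con.commit();
        }
    }
}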

Related

Redistributing messages from a dead letter queue in RabbitMQ

I need a solution like a consistent hash exchange on a dead letter queue.
Background:
I'm processing IoT device sensor data; first the data goes to server1,
then the data goes to server2 (all using RabbitMQ).
Now we are trying to horizontally scale the server2's; for this example we will have a single server1 and three server2's:
Server1
Server2A
Server2B
Server2C
Each IoT device is identified by its unique IotDeviceId (similar to an IMEI number).
After processing on server1, each IoT device's messages need to stick to the same server2. For this we decided to use the RabbitMQ consistent hash exchange on IotDeviceId, which seems to work well.
I have the following queues on server1:
Server2AQueue
Server2BQueue
Server2CQueue
Each IotDeviceId will only ever go to the same server2 queue, which is exactly what I wanted.
However, I now need to handle a server failure (for example while I'm asleep), where the other servers take the load evenly until I fix the problematic server.
If, for example, Server2A goes down, the messages will stay in Server2AQueue for a certain amount of time and then eventually end up in the DeadLetterQueue.
I can write an app, or shovel these messages from the DeadLetterQueue into one of the other queues, but I would like to distribute the load evenly (by IotDeviceId) across the remaining two queues rather than push everything onto one of the healthy queues.
It needs to STICK: I cannot have the same device sending to different server2's, so each device needs to stick to its failover queue.
Is there a way to do this with RabbitMQ (or another solution)?
To recap: I need a similar consistent hash exchange on IotDeviceId for the dead letter queue.
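One way this can be wired up, as a sketch using the RabbitMQ Java client with the rabbitmq_consistent_hash_exchange plugin enabled (the exchange and queue names, the TTL, and the localhost broker are assumptions):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class DeadLetterHashTopology {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker on localhost
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            // A second consistent-hash exchange used only as the dead-letter exchange,
            // same plugin and type as the main routing exchange.
            ch.exchangeDeclare("server2.dlx.hash", "x-consistent-hash", true);

            // Each server2 queue dead-letters into that exchange after a TTL
            // instead of going to a single plain DeadLetterQueue.
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-dead-letter-exchange", "server2.dlx.hash");
            queueArgs.put("x-message-ttl", 600_000); // hypothetical: 10 minutes to let the server recover
            for (String q : new String[] { "Server2AQueue", "Server2BQueue", "Server2CQueue" }) {
                ch.queueDeclare(q, true, false, false, queueArgs);
                // Binding weight "1" gives every bound queue an equal share of the hash space.
                ch.queueBind(q, "server2.dlx.hash", "1");
            }
            // Dead-lettering keeps the original routing key (assumed to be the IotDeviceId),
            // so each device still hashes to exactly one of the bound queues.
            // When Server2A fails, its binding would have to be removed (e.g. via the
            // management API); only the devices that hashed to A are then re-spread
            // over B and C, and they stick to whichever queue they land on.
        }
    }
}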

How to have more than 50 000 messages in a RabbitMQ Queue

We are currently using a service bus in Azure, and for various reasons we are switching to RabbitMQ.
Under heavy load, and when specific tasks on the backend are having problems, one of our queues can have up to 1 million messages waiting to be processed.
RabbitMQ can have a maximum of 50 000 messages per queue.
The question is how we can design the RabbitMQ infrastructure so it continues to work when messages are temporarily accumulating.
Note: we want to host our RabbitMQ server in a Docker image inside a Kubernetes cluster.
We imagine an exchange that would load-balance messages between queues on the nodes behind it.
But what is unclear to us is how to dynamically add new queues on demand if we detect that queues are getting full.
RabbitMQ can have a maximum of 50 000 messages per queue.
There is no such limit.
RabbitMQ can handle more messages by using quorum queues or classic queues in lazy mode.
With stream queues, RabbitMQ can handle millions of messages per second.
We imagine an exchange that would load-balance messages between queues on the nodes behind it.
You can do that using different bindings.
Kubernetes cluster.
I would suggest using the RabbitMQ Kubernetes Operator.
But what is unclear to us is how to dynamically add new queues on demand if we detect that queues are getting full.
There is no concept of "full" in RabbitMQ. There are limits that you can set using max-length or a TTL.
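To illustrate those points, a short Java-client sketch declaring a quorum queue and a capped lazy classic queue (the queue names, the one-million cap, and the localhost broker are arbitrary):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class QueueLimitsDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker on localhost
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            // A quorum queue: replicated, keeps its backlog on disk, no hard message-count ceiling.
            Map<String, Object> quorumArgs = new HashMap<>();
            quorumArgs.put("x-queue-type", "quorum");
            ch.queueDeclare("work.backlog", true, false, false, quorumArgs);

            // A classic queue in lazy mode with an explicit cap: once a million messages
            // accumulate, new publishes are rejected instead of the queue growing unbounded.
            Map<String, Object> lazyArgs = new HashMap<>();
            lazyArgs.put("x-queue-mode", "lazy");
            lazyArgs.put("x-max-length", 1_000_000);
            lazyArgs.put("x-overflow", "reject-publish");
            ch.queueDeclare("work.capped", true, false, false, lazyArgs);
        }
    }
}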
A RabbitMQ queue will never be "full" (no such limitation exists in the software). A queue's maximum length rather depends on:
Queue settings (e.g. max-length / max-length-bytes)
Message expiration settings such as x-message-ttl
Underlying hardware & cluster setup (available RAM and disk space).
Unless you are using streams (a new feature in v3.9), you should always try to keep your queues short if possible. The entire idea of a message queue (in its classical sense) is that a message should be passed along as soon as possible.
Therefore, if you find yourself with long queues, you should instead try to match the load of your producers by adding more consumers.
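Draining a backlog is then mostly a matter of starting more copies of a consumer with a sane prefetch; a minimal sketch (the queue name is reused from the previous sketch, the handler is hypothetical):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class BacklogConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker on localhost
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();

        // Limit unacknowledged messages per consumer so the work is spread across
        // however many copies of this process you start.
        ch.basicQos(50);
        ch.basicConsume("work.backlog", false, (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            process(body); // hypothetical handler
            ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, consumerTag -> { /* consumer cancelled */ });
    }

    private static void process(String body) {
        System.out.println("processed: " + body);
    }
}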

Ignite: Persist until server stops

We are using Ignite's distributed data structure, IgniteQueue. Please find the server details below:
Server 1: Initializes the queue and continuously runs.
Server 2: Producer. Produces contents to the queue. Started now and then
Server 3: Consumer. Consumes contents from the queue. Started now and then
Issue: when there is a time gap of 10 minutes between the producer and the consumer, the data in the queue gets lost.
Could you please provide the correct [eviction] configuration that persists the contents of the queue until Server 1 is stopped?
Ultimately there shouldn't be any data loss.
There is no eviction for queues, and by default there are no backups, so most likely when you start and stop servers you cause rebalancing and the eventual loss of some entries. I suggest the following:
Start the consumer and producer as clients rather than servers. The server topology that holds the data should always be as stable as possible.
Use CollectionConfiguration#setBackups to configure one or more backups for the underlying cache used by the queue. This will help preserve the state even if one of the servers fails (see the sketch below).
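A minimal sketch of the queue creation on the server side with one backup configured, as suggested above (the queue name and the unbounded capacity are assumptions):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CollectionConfiguration;

public class QueueOwner {
    public static void main(String[] args) {
        // Server 1: a server node that creates the queue with one backup copy,
        // so the queue contents survive the loss of a single server node.
        Ignite ignite = Ignition.start();

        CollectionConfiguration colCfg = new CollectionConfiguration();
        colCfg.setBackups(1);

        // Capacity 0 = unbounded queue; the name is arbitrary.
        IgniteQueue<String> queue = ignite.queue("sensorQueue", 0, colCfg);
        System.out.println("Queue ready, size = " + queue.size());
    }
}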
Done as per Valentin Kulichenko's comment, as below:
Server 1: Initializes the queue and continuously runs.
Client 1: Producer. Produces contents to the queue. Started now and then
Client 2: Consumer. Consumes contents from the queue. Started now and then
Code to make an Ignite client:
import org.apache.ignite.Ignition
// Join the cluster in client mode so this node holds no queue data itself
Ignition.setClientMode(true)
val ignite = Ignition.start()

Can non-persistent messages sync between master and slave in ActiveMQ master/slave mode with ZooKeeper?

Guys,
I set up an ActiveMQ cluster following http://activemq.apache.org/replicated-leveldb-store.html.
It works fine with persistent messages.
But I find that non-persistent messages won't sync from master to slave. Is there any way to solve this?
The simple answer is to use persistent messages if you want them to survive a failover.
Non-persistent messages are not expected to survive broker failovers, and the system should not rely on them being there.
Typical scenarios for non-persistent messages are:
Periodic updates with high frequency where the last message carries the current status (e.g. stock exchange rates, the time before the bus arrives at a stop, etc.)
Messages with a (short) expiry time
Messages that can be resent in case of a timeout. Typical with request/response: if no response arrives within X seconds, request again.
Unimportant data such as real-time statistics that you can live without.
The benefit is performance: since the message does not have to be synced with slaves or stored on disk, you get much higher throughput.
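For completeness, a minimal JMS producer sketch that sends persistent messages against the replicated broker pair (the broker URLs and queue name are assumptions):

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentSender {
    public static void main(String[] args) throws Exception {
        // "failover:" lets the client reconnect to whichever broker is the current master.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://broker1:61616,tcp://broker2:61616)");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("orders"));

        // PERSISTENT is the JMS default, but set it explicitly: only persistent messages
        // are written to the replicated store and therefore survive a master/slave failover.
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);

        TextMessage message = session.createTextMessage("hello");
        producer.send(message);
        connection.close();
    }
}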

Configuration of a JMS queue for saving low-priority information to SQL Server

I would like to set up a JMS queue on a GlassFish v3 server for saving some protocol information to a SQL Server database.
My first try ended up with lots of deadlocks on the SQL server.
My first question is: are the messages in a queue processed one after another or in parallel? How do I set it up so the messages are processed one after another? Time does not play a role; I just want to put as little load as possible on the SQL server.
The second: where can I see how many messages are waiting in the queue for processing?
I had a look at the monitoring in GlassFish and also at
http://server:adminport/__asadmin/get?monitor=true&pattern=server.applications.ear.test.war.TestMessageDrivenBean.*
but I could not see a "tobeprocessed" value or something like that.
Many thanks,
Hasan
The listener you bind to the queue will process messages as they arrive; it responds to an onMessage event. You don't have to set anything up.
You do have to worry about what happens if the queue backs up because the listener(s) can't keep up.
You should also configure an error queue where messages that can't be processed go.
Have you thought about making the queue and database operation transactional? That way the message is put back on the queue if the database INSERT fails. You'll need an XA JDBC driver and a transaction manager to do it.
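A rough sketch of such a message-driven bean with a container-managed transaction around the INSERT (the table, JNDI name, and data source are assumptions; with an XA data source a failed INSERT rolls back and the message is redelivered):

import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;

// Container-managed transaction: if the INSERT throws, the transaction rolls back
// and the JMS message is put back on the queue for redelivery.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class ProtocolLogBean implements MessageListener {

    @Resource(lookup = "jdbc/protocolDs") // hypothetical XA data source
    private DataSource dataSource;

    @Override
    public void onMessage(Message message) {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO protocol_log (payload) VALUES (?)")) {
            ps.setString(1, ((TextMessage) message).getText());
            ps.executeUpdate();
        } catch (Exception e) {
            throw new RuntimeException(e); // triggers rollback and redelivery
        }
    }
}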