NServiceBus input queue is overflowing - nservicebus

I have an NServiceBus host whose job is to send an HTTP request to our customers for each new incoming MSMQ message.
Recently one customer began to "return" HTTP timeouts, which causes:
1) the input queue to fill up with new messages
2) starvation of all the other customers :-(
My solution is to split the host and install a new host for each customer.
Any other ideas?

You could specify an acceptable timeout so the queue does not fill up with new messages, then catch the timeout and defer the message for a while, on the assumption that the client will respond more quickly later.
To avoid starving the other customers while waiting for requests, you could also increase the number of threads your worker uses so it processes more than one message at a time.
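A minimal sketch of that defer-on-timeout idea (plain Python, not the NServiceBus API; `send_http`, `defer`, and the retry limits are all assumptions for illustration):

```python
def handle(message, send_http, defer, max_attempts=5):
    """On an HTTP timeout, defer the message with exponential backoff
    instead of letting it pile up and block the input queue."""
    try:
        send_http(message)                       # may raise TimeoutError
    except TimeoutError:
        attempt = message.get("attempt", 0) + 1
        if attempt >= max_attempts:
            raise                                # give up; let the error queue take it
        # Back off 30 s, 60 s, 120 s, ... so one slow customer stops hogging workers.
        defer({**message, "attempt": attempt}, delay_s=30 * 2 ** (attempt - 1))
```

With NServiceBus itself you would express the same idea with its message deferral / delayed-retry facilities rather than hand-rolled backoff.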

Related

ActiveMQ persistent store is full and consumer is blocked

I'm doing a test to see how flow control behaves. I created a fast producer and slow consumers, and set my destination queue's policy high-water mark to 60 percent.
The queue did reach 60%, so messages then went to the store; now the store is full and blocking, as expected.
But now I cannot get my consumer to connect and pull from the queue. It seems the blocking also prevents the consumer from getting in to start pulling from the queue.
Is this the correct behavior?
The consumer should not be blocked by flow-control. Otherwise messages could not be consumed to free up space on the broker for producers to send additional messages.
So this issue surfaced when I was using an on-demand JMS service. The service queues or dequeues via a REST service, and the consumers are created on demand. If the broker is blocked, as in my case from being out of resources, then you cannot create a new consumer.
I've since modified the JMS service to use a consumer pool (implementing the object pool pattern). The pool is initialized when the application starts, and this resolved the blocking issue.
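The object-pool fix described above can be sketched like this (generic Python rather than JMS; the factory and pool size are assumptions):

```python
import queue

class ConsumerPool:
    """Pre-create all consumers at application startup, so none has to be
    created later while the broker is blocking new connections under
    producer flow control."""
    def __init__(self, factory, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())

    def acquire(self, timeout=None):
        # Blocks until a pooled consumer is free; never creates a new one.
        return self._idle.get(timeout=timeout)

    def release(self, consumer):
        self._idle.put(consumer)
```

Each REST dequeue call would then wrap its work in `acquire()`/`release()` instead of constructing a fresh consumer per request.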

Requests failing with a timeout, why?

I have configured MassTransit with RabbitMQ as the transport, and I use an instance of the generic IRequestClient to send requests to a consumer that should then return a response.
My problem is that every other request fails with a TimeoutException. Execute it once, the next time it fails, and then it works again.
The consumer is not even invoked when it fails.
What can be the reason for this?
I have other services that share a similar name in their requests and consumers, and I have tried to figure out whether that is the problem.
You should post the configuration code of your application using the request client and the one configuring the consumer.
If you have other consumers with the same name, it's likely they're on the same queue if you're using ConfigureEndpoints, which could be the root cause of the issue.
Since it's every-other-message that times out, that would make sense since RabbitMQ will load balance the queue across the different services with the same queue name.
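The alternating failure pattern can be reproduced with a toy round-robin dispatcher (the service names here are made up):

```python
from itertools import cycle

def dispatch(requests, consumers):
    """Round-robin delivery, as RabbitMQ does for competing consumers on
    one queue: each message goes to the next consumer in turn, so if a
    second service is accidentally bound to the same queue name, only
    every other request reaches the consumer that can actually answer."""
    turn = cycle(consumers)
    return [(request, next(turn)) for request in requests]
```

Giving each consumer its own endpoint (queue) name removes the unintended competition.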

WCF stops response for requests

When maxConcurrentCalls reaches 100%, WCF stops responding to incoming requests over existing connections. However, in the same situation in our test environment, requests over existing connections go through fine. Requests over new connections time out, as expected. The binding is net.tcp. What can stop request processing over existing connections in production?
After a week of investigation the cause was found. It was an application that issued many requests, using all available workers, so maxConcurrentCalls reached 100%. That app then hung before receiving its replies. Because our app has a long send timeout (1 hour), all the workers could not send a reply within that timeout and just sat waiting. New requests could not be processed because all workers were busy.
All in all, a long send timeout is evil. If it were short (the default is 1 minute), the hung requests could have been aborted much earlier and other requests could have been processed as normal.
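The effect of the timeout length can be put in back-of-the-envelope numbers (a sketch; the worker count is an assumption):

```python
def wait_for_free_worker_s(workers, hung_sends, send_timeout_s):
    """If every worker is stuck in a send that will never complete, the
    next request waits until the first send timeout fires and frees a
    worker; otherwise a free worker picks it up immediately."""
    return send_timeout_s if hung_sends >= workers else 0
```

With a 1-hour send timeout the service looks dead for an hour; with the 1-minute default it recovers after a minute.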

Check if BeginPeek is still Subscribed

I am using BeginPeek() (no parameters) to subscribe to messages coming in to my private queue. This is done in a service hosted by the NServiceBus host. When NServiceBus encounters a transport connection timeout exception (I'm seeing "circuit breaker armed" logs and timeout exception logs), the peek event subscription seems to get lost. When database connectivity becomes stable again and new messages come in to my queue, the service is no longer notified.
Any ideas or suggestions on how to address this?
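A common workaround for this class of problem is to re-arm the peek yourself after a transient failure instead of relying on a one-shot subscription. A language-agnostic sketch (Python; `begin_peek`, `is_transient`, and `backoff` are stand-ins for MessageQueue.BeginPeek, your transient-error check, and a wait):

```python
def peek_loop(begin_peek, handle, is_transient, max_messages, backoff=lambda: None):
    """Keep re-subscribing after transient transport errors so the peek
    notification is not silently lost when connectivity drops."""
    handled = 0
    while handled < max_messages:
        try:
            handle(begin_peek())
            handled += 1
        except Exception as exc:
            if not is_transient(exc):
                raise
            backoff()   # wait out the outage, then peek again
```

In the real handler this means calling BeginPeek again from the PeekCompleted callback, including on the error path, rather than only on success.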

WAS net.msmq service messages stuck in retry queue

We are hosting a net.msmq service in IIS7.
The queue is transactional.
Messages arrive in the queue and are picked up correctly by the service.
If an exception occurs, the message is put into the retry queue.
The retry delay is set to 1 hour; however, when this time elapses the message is not retried.
If we browse to the .svc or send another message to the main queue, then the retry messages are also picked up.
So basically, messages get stuck in the retry queue until something "boots up" the site's app pool again.
Has anyone come across this same problem?
Sounds like your service's AppDomain is getting unloaded due to inactivity. That's always a pain in the neck with anything hosted in IIS, and usually the solution is to create something that keeps the AppDomain alive by pinging it periodically (you could easily expose a second MSMQ-based endpoint on your service and just send a message to it every ten seconds or so to keep it alive).