Testing with in-memory NServiceBus

I'm attempting to create a high level test in my solution, and I want to 'catch' messages sent to the bus.
Here's what I do:
NUnit [SetUp] spins up the WebAPI project in IIS Express
SetUp also creates the bus
Send an HTTP request to the API
Verify whatever I want to verify
The WebAPI part of the whole test works fine. The creation of the bus and kicking it off seems fine too; it even finds my fake message handler. The problem is that the handler never receives the commands from the queue; they just stay in the RabbitMQ queue forever.
Here's how the bus is being configured:
var bus = Configure.With()
    .DefineEndpointName("Local")
    .Log4Net()
    .UseTransport<global::NServiceBus.RabbitMQ>()
    .UseInMemoryTimeoutPersister()
    .RijndaelEncryptionService()
    .UnicastBus()
    .CreateBus();
In the log from NServiceBus starting up, I see that my fake handler is being associated with the command:
2014-09-24 15:29:59,007 [Runner thread] DEBUG NServiceBus.Unicast.MessageHandlerRegistry
[(null)] <(null)> - Associated 'Bloo.MyCommand' message with 'Blah.FakeMyCommandHandler' handler
So, seeing as the message lands in the correct RabbitMQ queue, I'm assuming everything up to the handler is working fine.
I've tried putting waits in my [TearDown] so that the bus lives a little longer, hoping to give the handler time to receive the message. I've also tried spinning off the in-memory bus for the consumer part of the interaction onto a new thread, with no luck.
Has anyone else tried this?
This is only the first step; what I would love to do is create a fake bus that records messages being sent to it. The need for RabbitMQ is just to get myself going (the bounds of my solution are WebAPI on the front and the bus at the back).
Cheers

You forgot to call .Start() on the bus; that's why it isn't listening for messages.
See here for more info: http://docs.particular.net/nservicebus/hosting-nservicebus-in-your-own-process-v4.x
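For example, with the v4-era API used above the chain has to end with Start() on the IStartableBus that CreateBus() returns; a sketch based on the configuration from the question:

var bus = Configure.With()
    .DefineEndpointName("Local")
    .Log4Net()
    .UseTransport<global::NServiceBus.RabbitMQ>()
    .UseInMemoryTimeoutPersister()
    .RijndaelEncryptionService()
    .UnicastBus()
    .CreateBus()
    .Start();   // without this the endpoint never begins consuming from the queue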
Also, consider using NServiceBus.Testing for unit testing your handlers and sagas:
https://www.nuget.org/packages/NServiceBus.Testing
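For example, a minimal sketch against the v4-era NServiceBus.Testing API, assuming a hypothetical MyMessageHandler that sends MyCommand when it handles MyMessage (the handler and MyMessage names are placeholders, not taken from the question):

using NServiceBus;
using NServiceBus.Testing;
using NUnit.Framework;

// Hypothetical handler under test: sends MyCommand whenever it handles MyMessage.
public class MyMessageHandler : IHandleMessages<MyMessage>
{
    public IBus Bus { get; set; }   // injected by the testing framework

    public void Handle(MyMessage message)
    {
        Bus.Send(new MyCommand());
    }
}

[Test]
public void Handler_sends_MyCommand()
{
    Test.Initialize();

    Test.Handler<MyMessageHandler>()
        .ExpectSend<MyCommand>(cmd => cmd != null)          // assert on the outgoing command
        .OnMessage<MyMessage>(msg => { /* populate the incoming message here */ });
}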

I'm guessing your messages are just sitting in your queue forever because your endpoint is listening on the "Local.MachineName" queue instead of "Local".
If you set the ScaleOut to be SingleBrokerQueue this should sort the issue.
Configure.ScaleOut(s => s.UseSingleBrokerQueue());

var bus = Configure.With()
    .DefineEndpointName("Local")
    ...

If you are attempting to do full integration tests, using actual queues, then this answer won't help you.
If you are doing more focused tests, i.e. testing individual components that rely on the bus, I would recommend that you use a mocking framework (I like Moq) and mock out IBus. You can then verify that messages you expected to be sent to the bus were indeed sent.
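As a rough sketch of that approach (OrderService, PlaceOrder and the OrderId property are hypothetical, and the exact IBus.Send overload differs slightly between NServiceBus versions):

using System;
using Moq;
using NServiceBus;
using NUnit.Framework;

// Hypothetical component under test: takes IBus and sends a command.
public class OrderService
{
    private readonly IBus _bus;
    public OrderService(IBus bus) { _bus = bus; }

    public void PlaceOrder(Guid orderId)
    {
        _bus.Send(new MyCommand { OrderId = orderId });
    }
}

[Test]
public void PlaceOrder_sends_MyCommand_to_the_bus()
{
    var bus = new Mock<IBus>();
    var service = new OrderService(bus.Object);

    service.PlaceOrder(Guid.NewGuid());

    // Verify that the expected command was sent exactly once.
    bus.Verify(b => b.Send(It.IsAny<MyCommand>()), Times.Once());
}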

Related

ActiveMQ CMS: Can messages be lost between creating a consumer and setting a listener?

Setting up a CMS consumer with a listener involves two separate calls: first, acquiring a consumer:
cms::MessageConsumer* cms::Session::createConsumer( const cms::Destination* );
and then, setting a listener on the consumer:
void cms::MessageConsumer::setMessageListener( cms::MessageListener* );
Could messages be lost if the implementation subscribes to the destination (and receives messages from the broker/router) before the listener is activated? Or are such messages queued internally and delivered to the listener upon activation?
Why isn't there an API call to create the consumer with a listener as a construction argument? (Is it because the JMS spec doesn't have it?)
(Addendum: this is probably a flaw in the API itself. A more logical order would be to instantiate a consumer from a session, and have a cms::Consumer::subscribe( cms::Destination*, cms::MessageListener* ) method in the API.)
I don't think the API is flawed necessarily. Obviously it could have been designed a different way, but I believe the solution to your alleged problem comes from the start method on the Connection object (inherited via Startable). The documentation for Connection states:
A CMS client typically creates a connection, one or more sessions, and a number of message producers and consumers. When a connection is created, it is in stopped mode. That means that no messages are being delivered.
It is typical to leave the connection in stopped mode until setup is complete (that is, until all message consumers have been created). At that point, the client calls the connection's start method, and messages begin arriving at the connection's consumers. This setup convention minimizes any client confusion that may result from asynchronous message delivery while the client is still in the process of setting itself up.
A connection can be started immediately, and the setup can be done afterwards. Clients that do this must be prepared to handle asynchronous message delivery while they are still in the process of setting up.
This is the same pattern that JMS follows.
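As an illustration of that pattern, here is a rough sketch using the .NET NMS client (Apache.NMS), which mirrors the CMS API; the broker URI and queue name are placeholders. The connection stays in stopped mode while the consumer and listener are wired up, and Start() is the last call:

using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
using (IConnection connection = factory.CreateConnection())   // created in stopped mode
using (ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
{
    IDestination destination = session.GetQueue("example.queue");
    IMessageConsumer consumer = session.CreateConsumer(destination);

    // Attach the listener while the connection is still stopped, so nothing
    // can be delivered before setup is complete.
    consumer.Listener += message => Console.WriteLine("received: " + message);

    connection.Start();   // only now do messages begin arriving at the consumer
    Console.ReadLine();
}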
In any case, I don't think there's any risk of message loss regardless of when you invoke start(). If the consumer is using an auto-acknowledge mode, then messages should only be acknowledged once they are delivered synchronously via one of the receive methods or asynchronously through the listener's onMessage; to do otherwise would be a bug, in my estimation. I've worked with JMS for the last 10 years on various implementations and I've never seen any condition where messages were lost because of this.
If you want to add consumers after you've already invoked start() you could certainly call stop() first, but I don't see any problem with simply adding them on the fly.

RabbitMQ as both Producer and Consumer in a single application

I am currently learning RabbitMQ and AMQP in general. I started working with some tutorials I found online, and all of them show more or less the same example: a Spring Boot web app that, upon a REST call, produces a message and puts it onto a RabbitMQ queue, and then another class from the same app, configured as the consumer of that message, consumes it and runs the handler method.
I can't wrap my head around why this is beneficial in any way. The upside I understand is that the handler is executed in a separate thread, while the controller method can return right after sending the message to the queue. However, why would this be in any way better than just using Spring's @Async annotation on that handler method and calling it explicitly? In that case I suppose we would achieve the same thing, while not having to host and manage a separate instance of a message broker like RabbitMQ.
Can someone please explain? Thanks.
Very simply:
With RabbitMQ you get persistent messages and much safer, more consistent exception management. If the machine crashes, messages that have already been pushed are not lost.
A message can be pushed to an exchange and consumed by multiple parallel consumers, which helps scale the application when the consumer code is too slow.
...and a lot of other reasons.

WCF service hosted on Azure App service never seems to finish threads opened for processing

I have deployed a WCF service to Azure App Service that performs just one task: send a message to a Service Bus topic. Although the app works fine under normal load, the thread count starts climbing as soon as load on the app increases.
The app instance becomes unhealthy when the thread count limit is reached.
Those threads stay in the waiting state forever. We tried the scale-out option on the thread count metric, but the app just keeps adding more instances, since the earlier instances still have almost all of their threads waiting and remain unhealthy forever.
This is performed in the following sequence.
Accept a request.
Initialize a Service Bus topic client.
Send the requested message to the topic.
Close the topic client.
While sending a burst of 1000 requests, the app works, but the threads it starts always stay in the waiting state. While these threads are waiting, CPU stays at 0%. The average response time from this service is also under 100 ms.
After sending 1000 requests to this service, I see a similar number of open threads.
What could be the potential root cause of this issue? Is there any issue with my code to send the message to the topic?
public async Task SendAsync(Message message)
{
    try
    {
        await _topicClient.SendAsync(message);
    }
    catch (Exception exc)
    {
        throw new Exception(exc.Message);
    }
    finally
    {
        await _topicClient.CloseAsync();
    }
}
(screenshot of the metrics referred to below omitted)
The code sample you provided does not really tell us much; we do not know how SendAsync(Message message) is being invoked. Is your image the queue count that drops to 0 before accepting more messages? I'm assuming a client calls your WCF app service, which tells it to send the message to Service Bus?
It does sound like you are hitting the 1000-connection maximum. Your _topicClient should be a singleton for your app domain that all clients use. You also should only need one App Service instance if all you're doing is message forwarding; there is no need for scaling unless there's more processing that you haven't alluded to.
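A minimal sketch of that suggestion, assuming the Microsoft.Azure.ServiceBus client that the question's SendAsync/CloseAsync calls point to (the connection string and topic name are placeholders): create one TopicClient for the lifetime of the app domain and stop closing it per request.

using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class ServiceBusClients
{
    // One shared client per app domain; TopicClient is intended to be reused across requests.
    public static readonly TopicClient Topic =
        new TopicClient("<connection-string>", "<topic-name>");
}

public class MessageSender
{
    public Task SendAsync(Message message)
    {
        // Reuse the shared client; close it only on application shutdown.
        return ServiceBusClients.Topic.SendAsync(message);
    }
}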
Have a look at the Service Bus messaging best practices doc for more suggestions.
Thanks for responding. These are good suggestions and I will review my implementation in line with them.
The good news is that I was able to resolve the issue; it wasn't related to the topic client as I first thought. It was due to how I was registering dependency injection.
I am implementing a WCF service based on .NET Framework 4.8. Initially we did not include a Global.asax but registered DI in the service constructor. The implementation worked until, as part of performance testing, we realized it was adding additional threads once we added an ILogger dependency. Those additional threads never cooled down; they kept adding up as the service received more requests.
To resolve this, I moved the DI registration into Application_Start in Global.asax.
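The question doesn't name the DI container used, but the shape of that change is roughly the following sketch (Microsoft.Extensions.DependencyInjection, ILogger and MyLogger are stand-ins for whatever is actually registered):

using System;
using Microsoft.Extensions.DependencyInjection;

public class Global : System.Web.HttpApplication
{
    // Built once at startup and shared by every service instance.
    public static IServiceProvider Container { get; private set; }

    protected void Application_Start(object sender, EventArgs e)
    {
        var services = new ServiceCollection();
        services.AddSingleton<ILogger, MyLogger>();   // registered once, not per request
        Container = services.BuildServiceProvider();
    }
}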

NServiceBus worker unregistration not working

I'm trying to unregister a worker node as described here, but the procedure doesn't seem to work correctly. If the distributor is holding any control messages related to the disconnecting worker when the unregistration script runs, then the next time messages come in it will consume those control messages (effectively sending more work to the node). Only after that does the distributor reject control messages coming from the node.
Has anyone got this to work correctly, i.e. so that the worker does not receive any new messages right after unregistration?
What you describe is the intended behaviour for the unregister operation. It was designed and implemented like that.
It does not actively remove the existing ready messages for the worker from the distributor's storage queue. It only makes sure that the worker will not send any new ready messages back to the distributor, and thus will not request more work, after the unregister.

How to detect alarm-based blocking RabbitMQ producer?

I have a producer sending durable messages to a RabbitMQ exchange. If the RabbitMQ memory or disk exceeds the watermark threshold, RabbitMQ will block my producer. The documentation says that it stops reading from the socket, and also pauses heartbeats.
What I would like is a way to know in my producer code that I have been blocked. Currently, even with a heartbeat enabled, everything just pauses forever. I'd like to receive some sort of exception so that I know I've been blocked and I can warn the user and/or take some other action, but I can't find any way to do this. I am using both the Java and C# clients and would need this functionality in both. Any advice? Thanks.
Sorry to tell you, but with RabbitMQ (at least with 2.8.6) this isn't possible :-(
I had a similar problem, which centred around trying to establish a channel when the connection was blocked. The result was the same as what you're experiencing.
I did some investigation into the core of the RabbitMQ C# .NET library and discovered that the root cause of the problem is that it goes into an infinite blocking state.
You can see more details on the RabbitMQ mailing list here:
http://rabbitmq.1065348.n5.nabble.com/Net-Client-locks-trying-to-create-a-channel-on-a-blocked-connection-td21588.html
One suggestion (which we didn't implement) was to do the work inside of a thread and have some other component manage the timeout and kill the thread if it is exceeded. We just accepted the risk :-(
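In the .NET client, the same idea looks roughly like this sketch (connection is an already-open RabbitMQ.Client IConnection, and the 5-second timeout is an arbitrary choice):

using System;
using System.Threading.Tasks;
using RabbitMQ.Client;

// Run the potentially blocking call on another thread and give up after a timeout.
Task<IModel> channelTask = Task.Run(() => connection.CreateModel());

if (!channelTask.Wait(TimeSpan.FromSeconds(5)))
{
    // CreateModel never returned - the connection is probably blocked by a memory
    // or disk alarm, so warn the user / fail fast instead of hanging forever.
    throw new TimeoutException("Channel creation timed out; the broker may have blocked the connection.");
}

IModel channel = channelTask.Result;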
The RabbitMQ client uses a blocking RPC call that waits for a reply indefinitely.
If you look at the Java client API, what it does is:
AMQChannel.BlockingRpcContinuation k = new AMQChannel.SimpleBlockingRpcContinuation();
k.getReply(-1);
Passing -1 as the argument blocks until a reply is received.
The good thing is you could pass in your timeout in order to make it return.
The bad thing is you will have to update the client jars.
If you are OK with doing that, you could pass in a timeout wherever a blocking call like above is made.
The code would look something like:
try {
    return k.getReply(200);
} catch (TimeoutException e) {
    throw new MyCustomRuntimeorTimeoutException("RabbitTimeout ex", e);
}
And in your code you could handle this exception and perform your logic in this event.
Some related classes that might require this fix would be:
com.rabbitmq.client.impl.AMQChannel
com.rabbitmq.client.impl.ChannelN
com.rabbitmq.client.impl.AMQConnection
FYI: I have tried this and it works.