How can we use @RabbitListener and @JmsListener alternately based on the environment?

I have an on-premises Spring Boot application that consumes RabbitMQ messages using @RabbitListener, and I have migrated the same application to Azure, where it consumes Azure Service Bus messages using @JmsListener.
We maintain the same code for both on-premises and Azure. So, because of these two listeners, I'm planning to replicate the same consumer code in two different classes, identical except for the listener annotation.
Consumer with @JmsListener:
@JmsListener(destination = "${queue}", concurrency = "${threads}", containerFactory = "defaultContainer")
public Message processMessage(@Payload final String message) {
    // do stuff with same content
}
Consumer with @RabbitListener:
@RabbitListener(queues = "${app.rabbitmq.queue}")
public Message processMessage(@Payload final String message) {
    // do stuff with same content
}
Is there any way to avoid duplicating the code in two classes? How can we handle both listeners on the fly with only one consumer? Can anyone please advise?

You can add both annotations to the same method with the autoStartup property set according to which Spring profile is active.
For @RabbitListener there is an autoStartup property on the annotation itself but, in both cases, there are also Spring Boot auto-startup properties to control whether the container starts or not.
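A minimal sketch of what that could look like, assuming an illustrative property placeholder app.rabbitmq.listener.enabled (any name works) to drive the @RabbitListener autoStartup attribute, while the JMS container is switched per environment via the Spring Boot property spring.jms.listener.auto-startup; the class name is made up:
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

@Component
public class SharedConsumer {

    // Both annotations on the same method; per environment only one container is started.
    // On Azure: spring.jms.listener.auto-startup=true and app.rabbitmq.listener.enabled=false
    // (e.g. in the profile-specific properties file); on premises, invert the two.
    @JmsListener(destination = "${queue}", concurrency = "${threads}", containerFactory = "defaultContainer")
    @RabbitListener(queues = "${app.rabbitmq.queue}", autoStartup = "${app.rabbitmq.listener.enabled:false}")
    public void processMessage(@Payload final String message) {
        // do stuff with same content - only one copy of the logic
    }
}
Which side is active then comes down to profile-specific property files rather than two consumer classes.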

Related

Using Multiple Topic Configs on One Producer in Spring Reactive Kafka

I am new to Kafka, and we are using Spring WebFlux in the application. We have a requirement to push two different messages to two different topics, say T1 and T2. The Kafka broker is the same.
We are using ReactiveKafkaProducerTemplate and it is working fine.
@Bean
public ReactiveKafkaProducerTemplate<String, Object> reactiveKafkaProducerTemplate(
        KafkaProperties properties) {
    final Map<String, Object> props = properties.buildProducerProperties();
    return new ReactiveKafkaProducerTemplate<String, Object>(SenderOptions.create(props));
}
Now we have a requirement to compress the content of only one topic (T1), as the message size is larger on T1.
Is there something like RoutingKafkaTemplate in Reactive Kafka or Project Reactor where we can modify the producer config per topic?
No; there is no equivalent; you need to configure two templates with different producer configs.
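For illustration, a sketch of that two-template setup; the bean names, the @Configuration class and the lz4 codec are arbitrary choices for the example, and only the template carrying the compression property would be injected where T1 is written:
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.reactive.ReactiveKafkaProducerTemplate;
import reactor.kafka.sender.SenderOptions;

@Configuration
public class KafkaSenderConfig {

    @Bean
    public ReactiveKafkaProducerTemplate<String, Object> defaultProducerTemplate(KafkaProperties properties) {
        final Map<String, Object> props = properties.buildProducerProperties();
        return new ReactiveKafkaProducerTemplate<>(SenderOptions.create(props));
    }

    @Bean
    public ReactiveKafkaProducerTemplate<String, Object> compressedProducerTemplate(KafkaProperties properties) {
        final Map<String, Object> props = properties.buildProducerProperties();
        // Compression is a producer-level setting, so it applies to everything this template sends.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        return new ReactiveKafkaProducerTemplate<>(SenderOptions.create(props));
    }
}
The code that publishes to T1 is then given the compressed template, while everything else keeps using the default one.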

Project Reactor Schedulers.elastic() using old ThreadLocal value

I am using Spring WebFlux to call one service from another via Schedulers.elastic():
Mono<Integer> anaNotificationCountObservable = wrapWithRetryForFlux(wrapWithTimeoutForFlux(
        notificationServiceMediatorFlux.getANANotificationCountForUser(userId)
                .subscribeOn(reactor.core.scheduler.Schedulers.elastic())
)).onErrorReturn(0);
In the main thread I am setting an InheritableThreadLocal variable, and in the child thread I am able to access it; that works fine.
This is my class for storing the thread local:
@Component
public class RequestCorrelation {

    public static final String CORRELATION_ID = "correlation-id";

    private InheritableThreadLocal<String> id = new InheritableThreadLocal<>();

    public String getId() {
        return id.get();
    }

    public void setId(final String correlationId) {
        id.set(correlationId);
    }

    public void removeCorrelationId() {
        id.remove();
    }
}
Now the issue is that the first time it works fine, meaning the value I set in the thread local is passed to the other services.
But on the second request it is still using the old id (generated during the previous request).
I tried using Schedulers.newSingle() instead of elastic(), and then it works fine.
So I think that because elastic() re-uses threads, the value is not cleared / the old one is re-used.
How should I resolve this issue?
I am setting the thread local in my filter and clearing it in the same filter:
requestCorrelation.setId(UUID.randomUUID().toString());
chain.doFilter(req, res);
requestCorrelation.removeCorrelationId();
You should never tie resources or information to a particular thread when leveraging a Reactor pipeline. Reactor is itself scheduling-agnostic; developers using your library can choose to schedule work on another scheduler, and if you force a scheduling model you might lose performance benefits.
Instead you can store data inside the Reactor Context. This is a map-like structure that's tied to the subscriber and independent of the scheduling arrangement.
This is how projects like Spring Security and Micrometer store information that would usually belong in a ThreadLocal.
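As an illustration of the context-based approach (not the asker's exact filter), a sketch assuming a WebFlux WebFilter; the key "correlation-id" mirrors the question's constant, the rest of the names are made up:
import java.util.UUID;

import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

@Component
public class CorrelationWebFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        final String correlationId = UUID.randomUUID().toString();
        // The value travels with the subscription rather than with a thread, so it is
        // unaffected by which scheduler runs the work and needs no per-request cleanup.
        return chain.filter(exchange)
                .contextWrite(ctx -> ctx.put("correlation-id", correlationId));
    }
}
Downstream code reads the value back with Mono.deferContextual(...) (on the older Reactor versions that still ship Schedulers.elastic(), the equivalents are subscriberContext / Mono.subscriberContext()).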

NServiceBus Removing IBus - Utilising IPipelineContext and IMessageSession

I am in the process of migrating NServiceBus up to v6 and have hit a roadblock in removing references to IBus.
We build upon a common library for many of our applications (website, microservices, etc.), and this library has the concept of an IEventPublisher, which is essentially a Send and Publish interface. The library has no knowledge of NSB.
We can then supply the implementation of this IEventPublisher using DI from the application, which allows the library's message passing to be replaced with another technology very easily.
So what we end up with is an implementation similar to:
public class NsbEventPublisher : IEventPublisher
{
    IEndpointInstance _instance;

    public NsbEventPublisher(IEndpointInstance endpoint)
    {
        _instance = endpoint;
    }

    public void Send(object message)
    {
        _instance.Send(message, new SendOptions());
    }

    public void Publish(object message)
    {
        _instance.Publish(message, new PublishOptions());
    }
}
This is a simplification of what actually happens but illustrates my problem.
Now when the DI container is asked for an IEventPublisher it knows to return a NsbEventPublisher and it knows to resolve the IEndpointInstance as we bind this in the bootstrapper for the website to the container as a singleton.
All is fine and my site runs perfect.
I am now migrating the microservices (running in NSB.Host), and the DI container is refusing to resolve IEndpointInstance when resolving the dependencies within a message handler. Reading the docs, this is intentional, and I should be using IMessageHandlerContext when in a message handler.
https://docs.particular.net/nservicebus/upgrades/5to6/moving-away-from-ibus
The docs even allude to the issue I have, in the bottom example around the class MyContextAccessingDependency. The suggestion is to pass the message context through the method, which puts a hard dependency on the code running in the context of a message handler.
What I would like to do is have access to a sender/publisher and the DI container can give me the correct implementation. The code does not need any concept of the caller and if it was called from a message handler or from a self hosted application that just wants to publish.
I see that there are two interfaces for communicating with the "bus", IPipelineContext and IMessageSession, which IMessageHandlerContext and IEndpointInstance extend respectively.
What I am wondering is whether there is some unification of the two interfaces that gets bound by NSB into the container, so I can accept an interface that sends/publishes messages. In a handler it would be an IMessageHandlerContext, and in my self-hosted application an IEndpointInstance.
For now I am looking to change my implementation of IEventPublisher depending on how the application is hosted. I was just hoping there might be some discussion about how to model this approach without a reliable interface to send/publish irrespective of what initiated the execution of the code path.
A few things to note before I get to the code:
The abstraction-over-abstraction promise never works. I have never seen the argument "I'm going to abstract ESB/messaging/database/ORM so that I can swap it in the future" actually work out. Ever.
When you abstract message-sending functionality like that, you lose some of the features the library provides. In this case you can't use 'Conversations' or 'Sagas', which hinders your overall experience; e.g. when using monitoring tools and watching diagrams in ServiceInsight, you won't see the whole picture but only nuggets of messages passing through the system.
Now in order to make that work, you need to register IEndpointInstance in your container when your endpoint starts up. Then that interface can be used in your dependency injection e.g. in NsbEventPublisher to send the messages.
Something like this (depending on which IoC container you're using; here I assume Autofac):
static async Task AsyncMain()
{
    IEndpointInstance endpoint = null;

    var builder = new ContainerBuilder();
    builder.Register(x => endpoint)
           .As<IEndpointInstance>()
           .SingleInstance();

    // Endpoint configuration goes here...

    endpoint = await Endpoint.Start(busConfiguration)
                             .ConfigureAwait(false);
}
The issues with using IEndpointInstance / IMessageSession are mentioned here.

How to get the source address in Rebus?

How do I get the source address of a received message?
The context is that I'm designing a monitor for a service bus implemented with Rebus. I use the publish-subscribe pattern, so a message is always published on a topic. The monitor subscribes to all topics in order to verify that each service has sent something and is therefore alive and healthy. However, in a message handler the received message doesn't contain a source address or any other information identifying the publishing service. This means it's not possible to tell which services are alive and healthy. Of course I could add a "Service" attribute identifying the publisher to all messages, but that means each service has to set the attribute before publishing a message, which I find a bit cumbersome. The source address is there and can identify the publishing service.
When you're in a Rebus message handler, you can access the IMessageContext - either by having it injected by your IoC container (which is the preferred way, because of the improved testability), or by accessing the static MessageContext.Current property.
The message context gives you access to a couple of things, where the headers of the incoming transport message can be used to get the return address of the message (which, by default, is set to the sender's input queue).
Something like this should do the trick:
public class SomeHandler : IHandleMessages<SomeMessage>
{
    readonly IMessageContext _messageContext;

    public SomeHandler(IMessageContext messageContext)
    {
        _messageContext = messageContext;
    }

    public async Task Handle(SomeMessage message)
    {
        var headers = _messageContext.TransportMessage.Headers;
        var returnAddress = headers[Headers.ReturnAddress];

        // .. have fun with the return address here
    }
}

Ninject DependencyCreation and EventBroker extensions. Ensuring a one-to-one subscription

I'm using the Ninject Event Broker extensions and I have two services. ServiceOne is the publisher of an event. ServiceTwo is the subscriber. ServiceOne doesn't have a hard dependency on ServiceTwo; I'm creating the dependency using the DependencyCreation extension.
Here are the requirements:
I want to define a one-to-one event between these two objects. Only the ServiceTwo instance created by DependencyCreation should receive the event.
If there are other instances of ServiceTwo further down in the object graph, they shouldn't receive the event. (This shouldn't be the case, but I want to account for it.)
ServiceTwo should be disposed of when ServiceOne is disposed.
This is a web application, and ServiceOne should live only for the duration of one request.
Basically I'm just trying to recreate the behaviour of me writing:
var publisher = new Publisher();
var subscriber = new Subscriber();
var subscriber2 = new Subscriber();
publisher.MyEvent += subscriber.MyEventHandler;
One publisher. One subscriber. Subscriber2 doesn't get the event.
Here's my code:
this.Bind<IServiceOne, ServiceOne>().To<ServiceOne>().Named("ServiceOne").OwnsEventBroker("ServiceOne").RegisterOnEventBroker("ServiceOne");
this.Kernel.DefineDependency<IServiceOne, IServiceTwo>();
this.Bind<IServiceTwo>().To<ServiceTwo>().WhenParentNamed("ServiceOne").InDependencyCreatorScope().RegisterOnEventBroker("ServiceOne");
Two questions.
Does this fulfill my requirements?
Is there a better way?
I don't normally like to answer my own question, but seeing as this has been quiet for a while: I've been testing my code sample and it appears to work fine. To clean up the creation of these dependencies and the whole event broker registration process, I created some extension methods. First off, an IsPublisher extension that creates a scoped event broker:
public static ISubscriberBuildingSyntax IsPublisher<TPublisher>(this IBindingWhenInNamedWithOrOnSyntax<TPublisher> syntax)
{
    string name = Guid.NewGuid().ToString();

    syntax.Named(name);
    syntax.OwnsEventBroker(name).RegisterOnEventBroker(name);

    return new SubscriberBuildingSyntax<TPublisher>(syntax, name);
}
Secondly, a generic CreateSubscriberDependency method that creates a dependency using Dependency Creator:
public ISubscriberBuildingSyntax CreateSubscriberDependency<TSubscriber>() where TSubscriber : class
{
    this.syntax.Kernel.DefineDependency<TPublisher, TSubscriber>();
    this.syntax.Kernel.Bind<TSubscriber>().ToSelf().WhenParentNamed(this.name).InDependencyCreatorScope().RegisterOnEventBroker(this.name);

    return this;
}
I can then call this like so:
this.Bind<IRegistrationService>().To<RegistrationService>()
    .IsPublisher()
    .CreateSubscriberDependency<RoleService>();
This creates an Event Broker scoped to the RegistrationService instance with a RoleService dependency that is tied to the life of RegistrationService.
I can then register RegistrationService with InRequestScope to limit this to the life of one request.