Is there a PubSubHubbub Server implementation in java? - websub

Is there a PubSubHubbub server (not publisher) implementation in Java?

Check this out: a straightforward subscriber implementation written in Java for the PubSubHubbub 0.3 protocol.
Usage:
Subscriber subscriber = new SubscriberImpl("subscriber-host", 8888);
Subscription subscription = subscriber.subscribe(URI.create("http://feed-host/my-push-enabled-feed.xml"));
subscription.setNotificationCallback(new NotificationCallback()
{
    @Override
    public void handle(SyndFeed feed)
    {
        // TODO: Do something with the feed
    }
});

Related

MassTransit 6.3.2 Consumer is never called when using IBusControl.ConnectConsumer

When I attach consumers during initial message bus config, the consumers are called as expected.
When I attach the consumers after bus config using ConnectConsumer, the consumers are never called; the temporary queue/exchange is created, but it doesn't seem to know about the consumers that are supposed to be attached to that queue.
There is another service/consumer on the bus that receives the request messages published here and publishes the response messages that should be consumed here.
Any idea why this is not working?
NOTE: I know that the "preferred" way is to connect consumers in the bus config (as in the working example); this is not an option for me because, in practice, the bus is created/configured in a referenced assembly, and the end-user programmers who add consumers to the bus do not have access to the bus configuration method. This used to be trivial in version 2; later versions seem to make such use cases much more difficult - not all use cases have easy access to the bus creation/config methods.
Ex.
public class TestResponseConsumer : IConsumer<ITestResponse>
{
    public Task Consume(ConsumeContext<ITestResponse> context)
    {
        Console.WriteLine("TestResponse received");
        return Task.CompletedTask;
    }
}
...
This works (consumer gets called):
public IBusControl ServiceBus;

public IntegrationTestsBase()
{
    ServiceBus = Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        cfg.Host("vmdevrab-bld", "/", h => {
            h.Username("guest");
            h.Password("guest");
        });
        cfg.ReceiveEndpoint("Int_Test", e =>
        {
            e.Consumer<TestResponseConsumer>();
        });
        cfg.AutoStart = true;
    });
    ServiceBus.Start();
}

~IntegrationTestsBase()
{
    ServiceBus.Stop();
}
This does not work:
[TestMethod]
public void Can_Receive_SampleResponse()
{
    try
    {
        ITestRequest request = new TestRequest(Guid.NewGuid(), Guid.NewGuid(), Guid.NewGuid());
        ServiceBus.ConnectConsumer<TestResponseConsumer>();
        ServiceBus.Publish<ITestRequest>(request);
        mre.WaitOne(60000);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        Console.WriteLine(ex.StackTrace);
        Assert.Fail();
    }
}
It doesn't work because, as explained in the documentation, when you connect a consumer to the bus endpoint, no exchange bindings are created. Published messages will not be delivered to the bus endpoint.
If you want to connect consumers to the bus after it has been started, you should use ConnectReceiveEndpoint() instead, which is also covered in the documentation.
var handle = bus.ConnectReceiveEndpoint("secondary-queue", x =>
{
    x.Consumer<TestResponseConsumer>();
});

var ready = await handle.Ready;
The endpoint can be stopped when it is no longer needed, otherwise it will be stopped when the bus is stopped.

Spring Integration testing a Files.inboundAdapter flow

I have this flow that I am trying to test, but nothing works as expected. The flow itself works well, but testing seems a bit tricky.
This is my flow:
@Configuration
@RequiredArgsConstructor
public class FileInboundFlow {

    private final ThreadPoolTaskExecutor threadPoolTaskExecutor;
    private String filePath;

    @Bean
    public IntegrationFlow fileReaderFlow() {
        return IntegrationFlows.from(Files.inboundAdapter(new File(this.filePath))
                        .filterFunction(...)
                        .preventDuplicates(false),
                endpointConfigurer -> endpointConfigurer.poller(
                        Pollers.fixedDelay(500)
                                .taskExecutor(this.threadPoolTaskExecutor)
                                .maxMessagesPerPoll(15)))
                .transform(new UnZipTransformer())
                .enrichHeaders(this::headersEnricher)
                .transform(Message.class, this::modifyMessagePayload)
                .route(Map.class, this::channelsRouter)
                .get();
    }

    private String channelsRouter(Map<String, File> payload) {
        boolean isZip = payload.values()
                .stream()
                .anyMatch(file -> isZipFile(file));

        return isZip ? ZIP_CHANNEL : XML_CHANNEL; // ZIP_CHANNEL and XML_CHANNEL are PublishSubscribeChannel
    }

    @Bean
    public SubscribableChannel xmlChannel() {
        var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
        channel.setBeanName(XML_CHANNEL);
        return channel;
    }

    @Bean
    public SubscribableChannel zipChannel() {
        var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
        channel.setBeanName(ZIP_CHANNEL);
        return channel;
    }

    // There is a @ServiceActivator on each channel
    @ServiceActivator(inputChannel = XML_CHANNEL)
    public void handleXml(Message<Map<String, File>> message) {
        ...
    }

    @ServiceActivator(inputChannel = ZIP_CHANNEL)
    public void handleZip(Message<Map<String, File>> message) {
        ...
    }

    // Plus a @Transformer on the XML_CHANNEL
    @Transformer(inputChannel = XML_CHANNEL, outputChannel = BUS_CHANNEL)
    private List<BusData> xmlFileToIngestionMessagePayload(Map<String, File> xmlFilesByName) {
        return xmlFilesByName.values()
                .stream()
                .map(...)
                .collect(Collectors.toList());
    }
}
I would like to test multiple cases; the first one is checking the message payload published on each channel at the end of fileReaderFlow.
So I defined this test class:
@SpringBootTest
@SpringIntegrationTest
@ExtendWith(SpringExtension.class)
class FileInboundFlowTest {

    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @TempDir
    static Path localWorkDir;

    @BeforeEach
    void setUp() {
        copyFileToTheFlowDir(); // here I copy a file to trigger the flow
    }

    @Test
    void checkXmlChannelPayloadTest() throws InterruptedException {
        Thread.sleep(1000); // waiting for the flow execution
        PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class); // I extract the channel to listen to the messages sent to it
        xmlChannel.subscribe(message -> {
            assertThat(message.getPayload()).isInstanceOf(Map.class); // This is never executed
        });
    }
}
As expected, that test does not work because the assertThat(message.getPayload()).isInstanceOf(Map.class); is never executed.
After reading the documentation I didn't find any hint to help me solve that issue. Any help would be appreciated! Thanks a lot.
First of all, that channel.setBeanName(XML_CHANNEL); does not affect the target bean. You do this in the bean creation phase, and the dependency injection container knows nothing about this setting: it simply does not consult it. If you really want to dictate XML_CHANNEL as the bean name, you'd better look into the @Bean(name) attribute.
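For example, a minimal sketch (assuming XML_CHANNEL is a String constant, as in the question):
@Bean(name = XML_CHANNEL) // the container registers the bean under this name
public SubscribableChannel xmlChannel() {
    // no setBeanName() needed; the router can now resolve XML_CHANNEL
    return new PublishSubscribeChannel(this.threadPoolTaskExecutor);
}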
The problem in the test is that you are missing the async nature of the flow. That Files.inboundAdapter() works on an entirely different thread and emits messages outside of your test method. So, even if you could subscribe to the channel in time, before any message is emitted to it, that doesn't mean your test would work correctly: the assertThat() would be performed on a different thread, so its failure would never be reported to the JUnit context of your test method.
So, what I'd suggest is:
Have the Files.inboundAdapter() stopped at the beginning of the test, before any setup you'd like to do. Or, at least, don't place files into that filePath, so the channel adapter doesn't emit messages.
Take the channel from the application context and, if you wish, subscribe to it or use a ChannelInterceptor.
Have an async barrier, e.g. a CountDownLatch, to pass to that subscriber.
Start the channel adapter or put a file into the dir for scanning.
Wait for the async barrier before verifying some value or state, as in the sketch below.
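A minimal sketch of such a test, assuming the helpers from the question (copyFileToTheFlowDir(), getBean()) and moving the file copy out of @BeforeEach so the interceptor is registered first:
@Test
void checkXmlChannelPayloadTest() throws Exception {
    CountDownLatch latch = new CountDownLatch(1);
    AtomicReference<Message<?>> received = new AtomicReference<>();

    PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class);
    // A ChannelInterceptor sees every message without stealing it from the real subscribers
    xmlChannel.addInterceptor(new ChannelInterceptor() {

        @Override
        public Message<?> preSend(Message<?> message, MessageChannel channel) {
            received.set(message);
            latch.countDown(); // async barrier: release the test thread
            return message;
        }

    });

    copyFileToTheFlowDir(); // only now trigger the flow

    // wait for the async barrier, then verify on the test thread so JUnit sees any failure
    assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
    assertThat(received.get().getPayload()).isInstanceOf(Map.class);
}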

spring.rabbitmq.listener.simple.retry.enabled=true is ignored if I configure the DirectMessageListenerContainer manually

I'm trying to activate a dead letter queue on RabbitMQ with the properties
spring.rabbitmq.listener.simple.retry.enabled=true
spring.rabbitmq.listener.simple.retry.max-attempts=10
It works fine when I use the annotation:
public class SimpleConsumer {

    @RabbitListener(queues = "messages.queue")
    public void handleMessage(String message) {
        throw new RuntimeException();
    }
}
but if I configure the MessageListenerContainer manually, it doesn't work.
Below is my configuration:
@Bean
SimpleMessageListenerContainer directMessageListenerContainer(
        ConnectionFactory connectionFactory,
        Queue simpleQueue,
        MessageConverter jsonMessageConverter,
        SimpleConsumer simpleConsumer) {

    return new SimpleMessageListenerContainer(connectionFactory) {{
        setQueues(simpleQueue);
        setMessageListener(new MessageListenerAdapter(simpleConsumer, jsonMessageConverter));
        // setDefaultRequeueRejected(false);
    }};
}
If I set setDefaultRequeueRejected to true, it retries the consumer indefinitely (if an exception is thrown).
If I set setDefaultRequeueRejected to false, it tries the consumer once and then uses the dead letter consumer.
What does @RabbitListener(queues = "messages.queue") do under the hood to apply the spring.rabbitmq.listener configuration?
Below is my code on GitHub:
https://github.com/crakdelpol/dead-letter-spike.git
See branch "retry-by-configuration".
It adds a retry interceptor to the container's advice chain. See the documentation.
Spring Retry provides a couple of AOP interceptors and a great deal of flexibility to specify the parameters of the retry (number of attempts, exception types, backoff algorithm, and others). Spring AMQP also provides some convenience factory beans for creating Spring Retry interceptors in a convenient form for AMQP use cases, with strongly typed callback interfaces that you can use to implement custom recovery logic. See the Javadoc and properties of StatefulRetryOperationsInterceptor and StatelessRetryOperationsInterceptor for more detail.
...
@Bean
public StatefulRetryOperationsInterceptor interceptor() {
    return RetryInterceptorBuilder.stateful()
            .maxAttempts(5)
            .backOffOptions(1000, 2.0, 10000) // initialInterval, multiplier, maxInterval
            .build();
}
Then add the interceptor to the container adviceChain.
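For example, a minimal sketch wiring the interceptor into the container bean from the question:
@Bean
SimpleMessageListenerContainer directMessageListenerContainer(
        ConnectionFactory connectionFactory,
        Queue simpleQueue,
        MessageConverter jsonMessageConverter,
        SimpleConsumer simpleConsumer) {

    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueues(simpleQueue);
    container.setMessageListener(new MessageListenerAdapter(simpleConsumer, jsonMessageConverter));
    container.setAdviceChain(interceptor()); // the retry interceptor defined above
    return container;
}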
EDIT
See the documentation I pointed you to; you need to add the recoverer to the interceptor:
The MessageRecoverer is called when all retries have been exhausted. The RejectAndDontRequeueRecoverer does exactly that. The default MessageRecoverer consumes the errant message and emits a WARN message.
Here is a complete example:
@SpringBootApplication
public class So67433138Application {

    public static void main(String[] args) {
        SpringApplication.run(So67433138Application.class, args);
    }

    @Bean
    Queue queue() {
        return QueueBuilder.durable("so67433138")
                .deadLetterExchange("")
                .deadLetterRoutingKey("so67433138.dlq")
                .build();
    }

    @Bean
    Queue dlq() {
        return new Queue("so67433138.dlq");
    }

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory cf) {
        SimpleMessageListenerContainer smlc = new SimpleMessageListenerContainer(cf);
        smlc.setQueueNames("so67433138");
        smlc.setAdviceChain(RetryInterceptorBuilder.stateless()
                .maxAttempts(5)
                .backOffOptions(1_000, 2.0, 10_000)
                .recoverer(new RejectAndDontRequeueRecoverer())
                .build());
        smlc.setMessageListener(msg -> {
            System.out.println(new String(msg.getBody()));
            throw new RuntimeException("test");
        });
        return smlc;
    }

    @RabbitListener(queues = "so67433138.dlq")
    void dlq(String in) {
        System.out.println("From DLQ: " + in);
    }
}
test
test
test
test
test
2021-05-12 11:19:42.034 WARN 70667 ---[ container-1] o.s.a.r.r.RejectAndDontRequeueRecoverer : Retries exhausted for message ...
...
From DLQ: test

Is there a way to configure the number of listeners for a queue via a configuration file in AMQP?

I have published 50K objects to a specific queue. I have one listener which picks up each object and processes it, but obviously it will take a long time to process all 50K objects. So I want to add three more listeners that can process those objects in parallel. For this purpose, do I need to write more listener classes with the same code? That would be duplicated code. Is there an approach where we can configure the number of listeners we want, so that internally it creates instances of the same listener to handle the load? Can anyone suggest a better way to stand up three more listeners and increase processing throughput?
==== RabbitMQ configuration file (excerpt) ====
@Bean
public SubscriberGeneralQueue1 SubscriberGeneralQueue1() {
    return new SubscriberGeneralQueue1();
}

@Bean
public SimpleMessageListenerContainer rpcGeneralReplyMessageListenerContainer(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter1) {
    SimpleMessageListenerContainer simpleMessageListenerContainer = new SimpleMessageListenerContainer(connectionFactory);
    simpleMessageListenerContainer.setQueues(replyQueueRPC());
    simpleMessageListenerContainer.setTaskExecutor(taskExecutor());
    simpleMessageListenerContainer.setMessageListener(listenerAdapter1);
    simpleMessageListenerContainer.setMaxConcurrentConsumers(60);
    return simpleMessageListenerContainer;
}

@Bean
@Qualifier("listenerAdapter1")
MessageListenerAdapter listenerAdapter1(SubscriberGeneralQueue1 generalReceiver) {
    return new MessageListenerAdapter(generalReceiver, "receivegeneralQueueMessage");
}
==== Listener code ====
@EnableRabbit
public class SubscriberGeneralQueue1 {

    /*@Autowired
    @Qualifier("asyncGeneralRabbitTemplate")
    private AsyncRabbitTemplate asyncGeneralRabbitTemplate;*/

    @Autowired
    private ExecutorService executorService;

    @Autowired
    private GeneralProcess generalProcess;

    List<RequestPojo> requestPojoGeneral = new ArrayList<RequestPojo>();

    @RabbitHandler
    @RabbitListener(containerFactory = "simpleMessageListenerContainerFactory", queues = "BulkSolve_GeneralrequestQueue")
    public void subscribeToRequestQueue(@Payload RequestPojo sampleRequestMessage, Message message) throws InterruptedException {
        long startTime = System.currentTimeMillis();
        //requestPojoGeneral.add(sampleRequestMessage);
        //System.out.println("List size issssss:" + requestPojoGeneral.size());
        //generalProcess.processRequestObjectslist(requestPojoGeneral);
        generalProcess.processRequestObjects(sampleRequestMessage);
        System.out.println("message in general listener is:" + sampleRequestMessage.getDistance());
        System.out.println("Message payload is:" + sampleRequestMessage);
        System.out.println("Message payload1111 is:" + message);
        //return requestPojoGeneral;
    }
}
==== simpleMessageListenerContainerFactory configuration ====
@Bean
public SimpleRabbitListenerContainerFactory simpleMessageListenerContainerFactory(ConnectionFactory connectionFactory,
        SimpleRabbitListenerContainerFactoryConfigurer configurer) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setTaskExecutor(taskExecutor());
    factory.setMaxConcurrentConsumers(60);
    configurer.configure(factory, connectionFactory);
    return factory;
}
==== Suggested changes ====
@RabbitHandler
@Async
@RabbitListener(containerFactory = "simpleMessageListenerContainerFactory", queues = "BulkSolve_GeneralrequestQueue")
public void subscribeToRequestQueue(@Payload RequestPojo sampleRequestMessage, Message message) throws InterruptedException {
    long startTime = System.currentTimeMillis();
    //requestPojoGeneral.add(sampleRequestMessage);
    //System.out.println("List size issssss:" + requestPojoGeneral.size());
    //generalProcess.processRequestObjectslist(requestPojoGeneral);
    generalProcess.processRequestObjects(sampleRequestMessage);
    System.out.println("message in general listener is:" + sampleRequestMessage.getDistance());
    System.out.println("Message payload is:" + sampleRequestMessage);
    System.out.println("Message payload1111 is:" + message);
    //return requestPojoGeneral;
}
configuration:
@Bean
public SimpleRabbitListenerContainerFactory simpleMessageListenerContainerFactory(ConnectionFactory connectionFactory,
        SimpleRabbitListenerContainerFactoryConfigurer configurer) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setTaskExecutor(taskExecutor());
    factory.setMaxConcurrentConsumers(60);
    factory.setConsecutiveActiveTrigger(1);
    configurer.configure(factory, connectionFactory);
    return factory;
}
@Bean
public SimpleMessageListenerContainer rpcGeneralReplyMessageListenerContainer(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter1) {
    SimpleMessageListenerContainer simpleMessageListenerContainer = new SimpleMessageListenerContainer(connectionFactory);
    simpleMessageListenerContainer.setQueues(replyQueueRPC());
    simpleMessageListenerContainer.setTaskExecutor(taskExecutor());
    simpleMessageListenerContainer.setMessageListener(listenerAdapter1);
    simpleMessageListenerContainer.setMaxConcurrentConsumers(100);
    simpleMessageListenerContainer.setConsecutiveActiveTrigger(1);
    return simpleMessageListenerContainer;
}
That can be done with the concurrency options of the listener container:
Threads from the TaskExecutor configured in the SimpleMessageListenerContainer are used to invoke the MessageListener when a new message is delivered by RabbitMQ Client. If not configured, a SimpleAsyncTaskExecutor is used. If a pooled executor is used, ensure the pool size is sufficient to handle the configured concurrency. With the DirectMessageListenerContainer, the MessageListener is invoked directly on a RabbitMQ Client thread. In this case, the taskExecutor is used for the task that monitors the consumers.
Please, start reading from here: https://docs.spring.io/spring-amqp/docs/current/reference/html/_reference.html#receiving-messages
And also see here: https://docs.spring.io/spring-amqp/docs/current/reference/html/_reference.html#containerAttributes
concurrentConsumers (concurrency) - The number of concurrent consumers to initially start for each listener.
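For example, a minimal sketch of how the factory from the question could start several consumers up front instead of scaling one by one (the numbers are illustrative):
factory.setConcurrentConsumers(4);     // start four competing consumers immediately
factory.setMaxConcurrentConsumers(60); // allow scaling up to 60 under sustained load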
UPDATE
Alright! I see what's going on.
There, we have code like this:
boolean receivedOk = receiveAndExecute(this.consumer); // At least one message received
if (SimpleMessageListenerContainer.this.maxConcurrentConsumers != null) {
    if (receivedOk) {
        if (isActive(this.consumer)) {
            consecutiveIdles = 0;
            if (consecutiveMessages++ > SimpleMessageListenerContainer.this.consecutiveActiveTrigger) {
                considerAddingAConsumer();
                consecutiveMessages = 0;
            }
        }
    }
}
So, we check for possible parallelism only after the first message is processed. In your case, that is going to happen after 1 minute.
Another gate for considerAddingAConsumer() is the consecutiveActiveTrigger option, which defaults to:
private static final int DEFAULT_CONSECUTIVE_ACTIVE_TRIGGER = 10;
So, in your case, to allow a new consumer to start for the very next message, you should also configure:
/**
 * If {@link #maxConcurrentConsumers} is greater than {@link #concurrentConsumers}, and
 * {@link #maxConcurrentConsumers} has not been reached, specifies the number of
 * consecutive cycles when a single consumer was active, in order to consider
 * starting a new consumer. If the consumer goes idle for one cycle, the counter is reset.
 * This is impacted by the {@link #txSize}.
 * Default is 10 consecutive messages.
 * @param consecutiveActiveTrigger The number of consecutive receives to trigger a new consumer.
 * @see #setMaxConcurrentConsumers(int)
 * @see #setStartConsumerMinInterval(long)
 * @see #setTxSize(int)
 */
public final void setConsecutiveActiveTrigger(int consecutiveActiveTrigger) {
    Assert.isTrue(consecutiveActiveTrigger > 0, "'consecutiveActiveTrigger' must be > 0");
    this.consecutiveActiveTrigger = consecutiveActiveTrigger;
}
to 1, because 0 is not going to work anyway.
For better performance you may also consider making your subscribeToRequestQueue() @Async, to really hand off the processing from the consumer thread to another thread and avoid that 1 minute wait for one more consumer to start.
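Note that @Async only takes effect when asynchronous execution is enabled on a configuration class; a minimal sketch, assuming the taskExecutor() bean referenced in the question looks something like this:
@Configuration
@EnableAsync // required for @Async methods to be proxied and executed off-thread
public class AsyncConfig {

    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(60); // size the pool to match the listener concurrency
        return executor;
    }
}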

Spring WebFlux (Flux): how to publish dynamically

I am new to reactive programming and Spring WebFlux. I want to make my App 1 publish Server-Sent Events through a Flux and my App 2 listen to it continuously.
I want the Flux to publish on demand (e.g. when something happens). All the examples I found use Flux.interval to publish events periodically, and there seems to be no way to append/modify the content of a Flux once it is created.
How can I achieve my goal? Or am I totally wrong conceptually?
Publish "dynamically" using FluxProcessor and FluxSink
One of the techniques for supplying data manually to a Flux is using the FluxProcessor#sink method, as in the following example:
@SpringBootApplication
@RestController
public class DemoApplication {

    final FluxProcessor processor;
    final FluxSink sink;
    final AtomicLong counter;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    public DemoApplication() {
        this.processor = DirectProcessor.create().serialize();
        this.sink = processor.sink();
        this.counter = new AtomicLong();
    }

    @GetMapping("/send")
    public void test() {
        sink.next("Hello World #" + counter.getAndIncrement());
    }

    @RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent> sse() {
        return processor.map(e -> ServerSentEvent.builder(e).build());
    }
}
Here, I created a DirectProcessor in order to support multiple subscribers that will listen to the data stream. I also applied FluxProcessor#serialize, which provides safe support for multiple producers (invocation from different threads without violating the Reactive Streams spec rules, especially rule 1.3). Finally, by calling "http://localhost:8080/send" we will see the message Hello World #1 (of course, only if you have connected to "http://localhost:8080" previously).
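To observe those messages, a client has to be subscribed to the SSE endpoint before /send is called; a minimal sketch using WebClient (the URL matches the example above):
WebClient.create("http://localhost:8080")
        .get()
        .uri("/")
        .accept(MediaType.TEXT_EVENT_STREAM)
        .retrieve()
        .bodyToFlux(String.class)        // each SSE data line arrives as one element
        .subscribe(System.out::println); // prints the "Hello World #n" messages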
Update for Reactor 3.4
With Reactor 3.4 you have a new API called reactor.core.publisher.Sinks. The Sinks API offers a fluent builder for manual data sending, which lets you specify things like the number of elements in the stream, backpressure behavior, the number of supported subscribers, and replay capabilities:
@SpringBootApplication
@RestController
public class DemoApplication {

    final Sinks.Many sink;
    final AtomicLong counter;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    public DemoApplication() {
        this.sink = Sinks.many().multicast().onBackpressureBuffer();
        this.counter = new AtomicLong();
    }

    @GetMapping("/send")
    public void test() {
        EmitResult result = sink.tryEmitNext("Hello World #" + counter.getAndIncrement());

        if (result.isFailure()) {
            // do something here, since emission failed
        }
    }

    @RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent> sse() {
        return sink.asFlux().map(e -> ServerSentEvent.builder(e).build());
    }
}
Note, message sending via the Sinks API introduces a new concept of emission and its result. The reason for such an API is the fact that Reactor extends Reactive Streams and has to follow backpressure control. That said, if you emit more signals than were requested, and the underlying implementation does not support buffering, your message will not be delivered. Therefore, tryEmitNext returns an EmitResult, which indicates whether the message was sent or not.
Also, note that by default the Sinks API gives a serialized version of the Sink, which means you don't have to care about concurrency. However, if you know in advance that emission of messages is serial, you may build a Sinks.unsafe() version, which does not serialize the given messages.
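If you would rather not handle the EmitResult yourself, there is also emitNext, which takes a Sinks.EmitFailureHandler deciding whether a failed emission should be retried; a minimal sketch:
// FAIL_FAST never retries: a failed emission throws Sinks.EmissionException instead
sink.emitNext("Hello World #" + counter.getAndIncrement(), Sinks.EmitFailureHandler.FAIL_FAST);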
Just another idea: using EmitterProcessor as a gateway to the Flux:
import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;

public class MyEmitterProcessor {

    EmitterProcessor<String> emitterProcessor;

    public static void main(String args[]) {
        MyEmitterProcessor myEmitterProcessor = new MyEmitterProcessor();
        Flux<String> publisher = myEmitterProcessor.getPublisher();

        myEmitterProcessor.onNext("A");
        myEmitterProcessor.onNext("B");
        myEmitterProcessor.onNext("C");
        myEmitterProcessor.complete();

        publisher.subscribe(x -> System.out.println(x));
    }

    public Flux<String> getPublisher() {
        emitterProcessor = EmitterProcessor.create();
        return emitterProcessor.map(x -> "consume: " + x);
    }

    public void onNext(String nextString) {
        emitterProcessor.onNext(nextString);
    }

    public void complete() {
        emitterProcessor.onComplete();
    }
}
For more info, see the Reactor docs. There is a recommendation in the documentation itself: "Most of the time, you should try to avoid using a Processor. They are harder to use correctly and prone to some corner cases." BUT I don't know which kind of corner cases.
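For reference, a minimal sketch of the same gateway rewritten with the Sinks API from the update above, which is the recommended replacement for processors (names are illustrative):
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

public class MySinkGateway {

    private final Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();

    public Flux<String> getPublisher() {
        return sink.asFlux().map(x -> "consume: " + x);
    }

    public void onNext(String nextString) {
        sink.tryEmitNext(nextString); // ignoring the EmitResult for brevity
    }

    public void complete() {
        sink.tryEmitComplete();
    }
}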