Spring - RabbitTemplate - Bulk operation - RabbitMQ

Does anyone know if it is possible to send a collection of messages to a queue using RabbitTemplate?
Obviously I can send them one at a time, but I want to do it in a single bulk operation (to gain performance).
Thanks!

You can create a BatchingRabbitTemplate bean and use it. Here is a working example:
@Bean
public BatchingRabbitTemplate batchingRabbitTemplate(ConnectionFactory connectionFactory) {
    // batch up to 500 messages or 25,000 bytes, whichever comes first;
    // send a partial batch after 3,000 ms of inactivity
    BatchingStrategy strategy = new SimpleBatchingStrategy(500, 25_000, 3_000);
    TaskScheduler scheduler = new ConcurrentTaskScheduler();
    BatchingRabbitTemplate template = new BatchingRabbitTemplate(strategy, scheduler);
    template.setConnectionFactory(connectionFactory);
    // ... other settings
    return template;
}
Now you can inject the BatchingRabbitTemplate into another bean and use it:
@Bean
public ApplicationRunner runner(BatchingRabbitTemplate template) {
    MessageProperties props = //...
    return args -> template.send(new Message("Test".getBytes(), props));
}

See the Reference Manual section on batching support:
Starting with version 1.4.2, the BatchingRabbitTemplate has been introduced. This is a subclass of RabbitTemplate with an overridden send method that batches messages according to the BatchingStrategy; only when a batch is complete is the message sent to RabbitMQ.
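On the consuming side, listener containers de-batch such messages by default, so a plain listener receives the original messages one at a time. A minimal sketch, assuming default container settings (the queue name is illustrative):
@RabbitListener(queues = "batchQueue")
public void listen(String body) {
    // invoked once per original message; the container splits the batch apart
    System.out.println("received: " + body);
}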

Related

JdbcTemplate separate transactions created for each query

I'm using JDBC for some SQL queries and I wanted to execute all the separate queries in one method in one transaction. I tried to set a transaction-scoped configuration setting in one query and read it back in another:
@Transactional
public void testJDBC() {
    SqlRowSet rowSet = jdbcTemplate.queryForRowSet("select set_config('transaction_test','im_here',true)");
    String result;
    while (rowSet.next()) {
        result = rowSet.getString("set_config");
        System.out.println("Result1: " + result);
    }
    SqlRowSet rowSet2 = jdbcTemplate.queryForRowSet("select current_setting('transaction_test',true)");
    String result2;
    while (rowSet2.next()) {
        result2 = rowSet2.getString("current_setting");
        System.out.println("Result2: " + result2);
    }
}
But my second query uses another transaction, or both queries are not transactional, because the result looks like this:
Result1: im_here
Result2:
I don't understand what is wrong here: despite the @Transactional annotation, it is still not transactional.
Here are my bean settings:
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setEntityManagerFactory(emf);
    return transactionManager;
}

public BasicDataSource getApacheDataSource() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName(environment.getRequiredProperty("jdbc.driverClassName"));
    dataSource.setUrl(getUrl());
    dataSource.setUsername(getEnvironmentProperty("spring.datasource.username"));
    dataSource.setPassword(getEnvironmentProperty("spring.datasource.password"));
    return dataSource;
}

@Bean
public JdbcTemplateExtended jdbc() {
    return new JdbcTemplateExtended(getApacheDataSource());
}
I think making sure the @Transactional annotation is being handled correctly is the first step in troubleshooting. To do this, add the following settings to your application.yml file (the equivalent property form works in application.properties). I assume you are using Spring Boot.
logging:
  level:
    org:
      springframework:
        transaction:
          interceptor: trace
If you run the logic after applying the above settings, you can see the following log messages:
2020-10-02 14:45:07,162 TRACE - Getting transaction for [com.Class.method]
2020-10-02 14:45:07,273 TRACE - Completing transaction for [com.Class.method]
Make sure the @Transactional annotation is handled by the TransactionInterceptor.
Note: @Transactional works through proxy objects. If you call the method from another method of the same class, or instantiate the class directly instead of autowiring it, the proxy is bypassed and the expected @Transactional behavior is not applied.
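A minimal sketch of the self-invocation pitfall (class and method names are illustrative):
@Service
public class ReportService {

    @Transactional
    public void outer() {
        // self-invocation: this call does NOT go through the Spring proxy,
        // so no transactional advice is applied to inner()
        inner();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void inner() {
        // expected to run in a new transaction, but will not
        // when called directly from outer()
    }
}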

Spring AMQP RPC - copy headers from request to response

I'm looking for a way to copy some headers from the request message to the response message when I use RabbitMQ in RPC mode.
So far I have tried setBeforeSendReplyPostProcessors, but I can only access the response and add headers to it; I don't have access to the request to get the values I need.
I have also tried the advice chain, but the returnObject is null after proceeding, so I can't modify it (I admit I don't understand why it is null... I thought I could get the object and modify it):
@Bean
public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(SimpleRabbitListenerContainerFactoryConfigurer simpleRabbitListenerContainerFactoryConfigurer, ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory = new SimpleRabbitListenerContainerFactory();
    simpleRabbitListenerContainerFactory.setAdviceChain(new MethodInterceptor() {
        @Override
        public Object invoke(MethodInvocation invocation) throws Throwable {
            Object returnObject = invocation.proceed();
            // returnObject is null here
            return returnObject;
        }
    });
    simpleRabbitListenerContainerFactoryConfigurer.configure(simpleRabbitListenerContainerFactory, connectionFactory);
    return simpleRabbitListenerContainerFactory;
}
A working way is to change my method annotated with @RabbitListener so it returns a Message; there I can access both the request message (via the arguments of the annotated method) and the response.
But I would like to do it automatically, since I need this feature in different places.
Basically, I want to copy one header from the request message to the response.
This code does the job, but I want to do it through an aspect or an interceptor:
@RabbitListener(queues = "myQueue",
        containerFactory = "simpleRabbitListenerContainerFactory")
public Message<MyResponseObject> execute(MyRequestObject myRequestObject, @Header("HEADER_TO_COPY") String headerToCopy) {
    MyResponseObject myResponseObject = compute(myRequestObject);
    return MessageBuilder.withPayload(myResponseObject)
            .setHeader("HEADER_RESPONSE", headerToCopy)
            .build();
}
The Message<?> return type support was added for this reason, but we could add an extension point to allow this; please open a GitHub issue.
Contributions are welcome.
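For later Spring AMQP versions (2.2.5 and above added a ReplyPostProcessor extension point that receives both the request and the reply), a sketch along these lines may work; treat the bean wiring and names as assumptions to adapt to your code:
@Bean
public ReplyPostProcessor copyHeaderReplyPostProcessor() {
    return (request, reply) -> {
        // copy the header from the inbound request onto the outbound reply
        Object value = request.getMessageProperties().getHeaders().get("HEADER_TO_COPY");
        reply.getMessageProperties().setHeader("HEADER_RESPONSE", value);
        return reply;
    };
}

@RabbitListener(queues = "myQueue",
        containerFactory = "simpleRabbitListenerContainerFactory",
        replyPostProcessor = "copyHeaderReplyPostProcessor")
public MyResponseObject execute(MyRequestObject myRequestObject) {
    return compute(myRequestObject);
}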

Streaming objects from S3 using Spring AWS Integration

I am working on a use case where I am supposed to poll S3 -> read the stream for the content -> do some processing and upload it to another bucket, rather than writing the file to my server.
I know I can achieve this using S3StreamingMessageSource in Spring Integration AWS, but the problem I am facing is that I do not know how to process the message stream received by polling:
public class S3PollerConfigurationUsingStreaming {

    @Value("${amazonProperties.bucketName}")
    private String bucketName;

    @Value("${amazonProperties.newBucket}")
    private String newBucket;

    @Autowired
    private AmazonClientService amazonClient;

    @Bean
    @InboundChannelAdapter(value = "s3Channel", poller = @Poller(fixedDelay = "100"))
    public MessageSource<InputStream> s3InboundStreamingMessageSource() {
        S3StreamingMessageSource messageSource = new S3StreamingMessageSource(template());
        messageSource.setRemoteDirectory(bucketName);
        messageSource.setFilter(new S3PersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
                "streaming"));
        return messageSource;
    }

    @Bean
    @Transformer(inputChannel = "s3Channel", outputChannel = "data")
    public org.springframework.integration.transformer.Transformer transformer() {
        return new StreamTransformer();
    }

    @Bean
    public S3RemoteFileTemplate template() {
        return new S3RemoteFileTemplate(new S3SessionFactory(amazonClient.getS3Client()));
    }

    @Bean
    public PollableChannel s3Channel() {
        return new QueueChannel();
    }

    @Bean
    IntegrationFlow fileStreamingFlow() {
        return IntegrationFlows
                .from(s3InboundStreamingMessageSource(),
                        e -> e.poller(p -> p.fixedDelay(30, TimeUnit.SECONDS)))
                .handle(streamFile())
                .get();
    }
}
Can someone please help me with the code to process the stream?
Not sure what your problem is, but I see that you have a mix of concerns. If you use messaging annotations (see @InboundChannelAdapter in your config), what is the point of using the same s3InboundStreamingMessageSource in the IntegrationFlow definition?
Anyway, it looks like you have already explored StreamTransformer for yourself. It has a charset property to convert your InputStream from the remote S3 resource to a String; otherwise it returns a byte[]. Everything else (what to do with this converted content, and how) is up to you.
Also, I don't see a reason to make s3Channel a QueueChannel, since the start of your flow is already pollable via the @InboundChannelAdapter.
At a high level, I would say we have more questions for you than vice versa...
UPDATE
It is not clear what your idea for the InputStream processing is, but it is a fact that after the S3StreamingMessageSource you get exactly an InputStream as the payload for the next handler.
Also, I'm not sure what your streamFile() is, but it must really expect an InputStream as input from the payload of the request message.
You can also use the mentioned StreamTransformer there:
@Bean
IntegrationFlow fileStreamingFlow() {
    return IntegrationFlows
            .from(s3InboundStreamingMessageSource(),
                    e -> e.poller(p -> p.fixedDelay(30, TimeUnit.SECONDS)))
            .transform(Transformers.fromStream("UTF-8"))
            .get();
}
And the next .handle() will be ready for a String payload.
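To close the loop on the original use case (uploading the processed content to the other bucket), here is a minimal sketch; the uppercase step is a placeholder for your processing, and the putObject call against an injected AmazonS3 client is just one way to write the result:
@Bean
IntegrationFlow fileStreamingFlow(AmazonS3 amazonS3) {
    return IntegrationFlows
            .from(s3InboundStreamingMessageSource(),
                    e -> e.poller(p -> p.fixedDelay(30, TimeUnit.SECONDS)))
            .transform(Transformers.fromStream("UTF-8"))
            .handle((payload, headers) -> {
                // the streaming source populates FileHeaders.REMOTE_FILE with the object key
                String key = (String) headers.get(FileHeaders.REMOTE_FILE);
                String processed = ((String) payload).toUpperCase(); // placeholder processing
                amazonS3.putObject(newBucket, key, processed);
                return null; // one-way endpoint: ends the flow
            })
            .get();
}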

Send MessageProperties [priority=anyInteger] while publishing a message in RabbitMQ

We are using RabbitMQ and Spring Integration in our project. Every message has a delivery mode, headers, properties, and a payload part.
We want to add a property, i.e. priority with value 2 (any integer), a payload of "test message 3", and publish the message to the queue named OES.
How do we add message properties, i.e. priority = 2 (or any value), in the outbound-channel-adapter below (Spring Integration)? I know we can add headers via "mapped-request-headers", but I would like to add the properties. There are no properties defined for the MessageProperties in outbound-channel-adapter. Is there a way to overcome this?
We have no issues with the payload, it is going through already; we only want to add the MessageProperties with priority=2 (any value). How do we add that in the outbound-channel-adapter (no hardcoding, it should be generic)?
<!-- the mapped-request-headers should be symmetric with
the list on the consumer side defined in consumerbeans.consumerHeaderMapper() -->
<int-amqp:outbound-channel-adapter id="publishingAmqpAdapter"
channel="producer-processed-event-channel"
amqp-template="amqpPublishingTemplate"
exchange-name="events_forwarding_exchange"
routing-key-expression="headers['routing-path']"
mapped-request-headers="X-CallerIdentity,routing-path,content-type,route_to*,event-type,compression-state,STANDARD_REQUEST_HEADERS"
/>
Other configuration:
<!-- chain routes and transforms the ApplicationEvent into a json string -->
<int:chain id="routingAndTransforming"
input-channel="producer-inbound-event-channel"
output-channel="producer-routed-event-channel">
<int:transformer ref="outboundMessageTracker"/>
<int:transformer ref="messagePropertiesTransformer"/>
<int:transformer ref="eventRouter"/>
<int:transformer ref="eventToJsonTransformer"/>
</int:chain>
<int:transformer id="messagePayloadCompressor"
input-channel="compress-message-payload"
output-channel="producer-processed-event-channel"
ref="payloadCompressor"/>
@Configuration("amqpProducerBeans")
@ImportResource(value = "classpath:com/apple/store/platform/events/si/event-producer-flow.xml")
public class AmqpProducerBeans {

    @Bean(name = { "amqpPublishingTemplate" })
    public AmqpTemplate amqpTemplate() {
        logger.debug("creating amqp publishing template");
        RabbitTemplate rabbitTemplate = new RabbitTemplate(producerConnectionFactory());
        SimpleMessageConverter converter = new SimpleMessageConverter();
        // following needed for retry logic
        converter.setCreateMessageIds(true);
        rabbitTemplate.setMessageConverter(converter);
        return rabbitTemplate;
    }

    /* Other code commented */
}
Other Code:
import org.springframework.integration.Message;
import org.springframework.integration.annotation.Transformer;
import org.springframework.integration.message.GenericMessage;

public class PayloadCompressor {

    @Transformer
    public Message<byte[]> compress(Message<String> message) {
        /* some code commented */
        Map<String, Object> headers = new HashMap<String, Object>();
        headers.putAll(message.getHeaders());
        headers.remove("compression-state");
        headers.put("compression-state", CompressionState.COMPRESSED);
        Message<byte[]> compressedMessage = new GenericMessage<byte[]>(compressedPayload, headers);
        return compressedMessage;
    }
}
If we were not using Spring Integration, we could use channel.basicPublish the following way and send the MessageProperties:
ConnectionFactory factory = new ConnectionFactory();
factory.setVirtualHost("/");
factory.setHost("10.102.175.30");
factory.setUsername("rahul");
factory.setPassword("rahul");
factory.setPort(5672);
Connection connection = factory.newConnection();
System.out.println("got connection " + connection);
Channel channel = connection.createChannel();
// build basic properties with priority 3
AMQP.BasicProperties properties = new AMQP.BasicProperties.Builder()
        .priority(3)
        .build();
String exchangeName = "HeaderExchange";
String routingKey = "testkey";
byte[] messageBodyBytes = "Message having priority value 3".getBytes();
channel.basicPublish(exchangeName,
                     routingKey,
                     true,    // mandatory
                     properties,
                     messageBodyBytes);
Please let me know if you need more details.
Properties are already mapped automatically - see the header mapper.
Simply use a <header-enricher/> to set the appropriate header and it will be mapped to the correct property. In the case of priority, set the AmqpHeaders.PRIORITY header (amqp_priority); see the AmqpHeaders class for the AMQP-specific header constants.
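For example, in the flow from the question something like this should work; a sketch, with channel names that must be adapted to your actual wiring:
<int:header-enricher input-channel="producer-routed-event-channel"
                     output-channel="compress-message-payload">
    <!-- amqp_priority is the value of the AmqpHeaders.PRIORITY constant -->
    <int:header name="amqp_priority" value="2"/>
</int:header-enricher>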

Apache Camel nested routes

I am new to Apache Camel. I have a very common use case that I am struggling to configure a Camel route for. The use case is to take an execution context and then:

1. Update the database using the execution context.
2. Using an event on the execution context, create a byte message and send it over MQ.
3. In the next step, again use the execution context and perform event processing.
4. Update the database using the execution context.

So basically it is a kind of nested route. In the configuration below I need access to the executionContext that executionController has created, in updateSchedulerState, sendNotification, processEvent and updateSchedulerState, i.e. the steps annotated as (1), (2), (3) and (4) respectively:
from("direct:processMessage")
.routeId("MessageExecutionRoute")
.beanRef("executionController", "getEvent", true)
.beanRef("executionController", "updateSchedulerState", true) (1)
.beanRef("executionController", "sendNotification", true) (2)
.beanRef("messageTransformer", "transform", true)
.to("wmq:NOTIFICATION")
.beanRef("executionController", "processEvent", true) (3)
.beanRef("eventProcessor", "process", true)
.beanRef("messageTransformer", "transform", true)
.to("wmq:EVENT")
.beanRef("executionController", "updateSchedulerState", true); (4)
Kindly let me know how I should configure the route for the above use case.
Thanks,
Vaibhav
So you need to access this executionContext in your beans at various points in the route?
If I understand correctly, you can put this executionContext in an exchange property, and it will persist throughout the route.
Setting the exchange property can be done via the Exchange.setProperty() method or various Camel DSL functions, like this:
from("direct:xyz)
.setProperty("awesome", constant("YES"))
//...
You can access exchange properties from a bean by adding a method argument of type Exchange, like this:
public class MyBean {
    public void foo(Something something, Exchange exchange) {
        if ("YES".equals(exchange.getProperty("awesome"))) {
            // ...
        }
    }
}
Or via @Property, like this:
public class MyBean {
    public void foo(Something something, @Property("awesome") String awesome) {
        if ("YES".equals(awesome)) {
            // ...
        }
    }
}
This presumes you are using a recent version of Camel.
Does this help?
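Applied to the route from the question, a minimal sketch; it assumes the execution context is the message body after getEvent, and the property name "executionContext" is illustrative:
from("direct:processMessage")
    .routeId("MessageExecutionRoute")
    .beanRef("executionController", "getEvent", true)
    // stash the execution context so later steps can read it back
    .setProperty("executionContext", simple("${body}"))
    .beanRef("executionController", "updateSchedulerState", true);  // (1)
Then, in the bean, read it back via the annotation:
public class ExecutionController {
    public void updateSchedulerState(Object body, @Property("executionContext") Object ctx) {
        // ctx is the same object stored at the start of the route
    }
}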