How to correlate request & reply when using raw Spring Integration (not using a Gateway)? - rabbitmq

I am learning about Spring Integration and have a basic understanding of Gateways and Service Activators. I love the concept of the Gateway. Spring Integration generates the proxy for a gateway at run-time. This proxy hides all the messaging details from the consumer of the gateway. In addition, the generated proxy may also be correlating request and reply.
With the objective of learning, I set out to implement request and reply correlation using raw Spring Integration features and not using a Gateway. I am able to set the correlation identifier in the request header, but I am not able to specify the correlation identifier when receiving a reply from the channel. The code snippet for this is below (at the end of the question). Also, how does correlation work against a message broker (e.g. RabbitMQ)? Does RabbitMQ provide an ability to retrieve a message with a specific header (correlation identifier) in it?
public class RemoteProxyCalculatorService implements CalculatorService
{
    public int Square(int n)
    {
        UUID uuid = SendRequest(n, "squareRequestChannel");
        int squareOfn = ReceiveReply("squareReplyChannel", uuid);
        return squareOfn;
    }

    private <T> UUID SendRequest(T payload, String requestChannel)
    {
        UUID requestID = UUID.randomUUID();
        Message<T> inputMessage = MessageBuilder.withPayload(payload)
                .setCorrelationId(requestID)
                .build();
        MessageChannel channel = context.getBean(requestChannel, MessageChannel.class);
        channel.send(inputMessage);
        return requestID;
    }

    @SuppressWarnings("unchecked")
    private <T> T ReceiveReply(String replyChannel, UUID requestID)
    {
        // How to consume requestID so as to receive only the reply related to the request posted by this thread?
        PollableChannel channel = context.getBean(replyChannel, PollableChannel.class);
        Message<?> groupMessage = channel.receive();
        return (T) groupMessage.getPayload();
    }

    private ClassPathXmlApplicationContext context;
}
Thanks.

The simplest way to correlate within an app doesn't even require a correlationId header. Instead you can create a QueueChannel instance (that you don't share) and provide it as the replyChannel header on the Message you send. Whatever downstream component ultimately responds will find that header in the Message.
Regarding RabbitMQ, our outbound-gateway simply applies a similar technique, but using the replyTo property of the AMQP message.
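For illustration, the same correlationId/replyTo pattern with the plain RabbitMQ Java client looks roughly like this (a minimal sketch; connection setup, queue names and error handling are assumed). Note that RabbitMQ does not let a consumer fetch a message by header; the requester consumes from its own reply queue and matches the correlationId itself:
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;

import java.nio.charset.StandardCharsets;
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RawRabbitRpcClient {

    public String call(Channel channel, String requestQueue, String message) throws Exception {
        String corrId = UUID.randomUUID().toString();
        // Exclusive, auto-delete reply queue private to this requester
        String replyQueue = channel.queueDeclare().getQueue();

        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .correlationId(corrId)
                .replyTo(replyQueue)
                .build();
        channel.basicPublish("", requestQueue, props, message.getBytes(StandardCharsets.UTF_8));

        BlockingQueue<String> response = new ArrayBlockingQueue<>(1);
        String consumerTag = channel.basicConsume(replyQueue, true,
                (tag, delivery) -> {
                    // The client, not the broker, matches the correlationId
                    if (corrId.equals(delivery.getProperties().getCorrelationId())) {
                        response.offer(new String(delivery.getBody(), StandardCharsets.UTF_8));
                    }
                },
                tag -> { });
        String result = response.take();
        channel.basicCancel(consumerTag);
        return result;
    }
}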
Hope that helps.
-Mark

The problem is with the common reply channel. The solution (Mark suggested something similar) will look like this.
public class RemoteProxyCalculatorService
{
    public int Square(int n)
    {
        PollableChannel replyChannel = SendRequest(n, "squareRequestChannel");
        int squareOfn = ReceiveReply(replyChannel);
        return squareOfn;
    }

    private <T> PollableChannel SendRequest(T payload, String requestChannel)
    {
        UUID requestID = UUID.randomUUID();
        QueueChannel replyQueueChannel = new QueueChannel();
        Message<T> inputMessage = MessageBuilder.withPayload(payload)
                .setCorrelationId(requestID)
                .setReplyChannel(replyQueueChannel)
                .build();
        MessageChannel channel = context.getBean(requestChannel, MessageChannel.class);
        channel.send(inputMessage);
        return replyQueueChannel;
    }

    @SuppressWarnings("unchecked")
    private <T> T ReceiveReply(PollableChannel replyChannel)
    {
        Message<?> groupMessage = replyChannel.receive();
        return (T) groupMessage.getPayload();
    }

    private ClassPathXmlApplicationContext context;
}

If you want to use a common reply channel, then I think this is what you are looking for.
public class RemoteProxyCalculatorService
{
    public int Square(int n)
    {
        PollableChannel replyChannel = SendRequest(n, "squareRequestChannel");
        int squareOfn = ReceiveReply(replyChannel);
        return squareOfn;
    }

    private <T> PollableChannel SendRequest(T payload, String requestChannel)
    {
        UUID requestID = UUID.randomUUID();
        Message<T> inputMessage = MessageBuilder.withPayload(payload)
                .setCorrelationId(requestID)
                .setReplyChannel(myMessageHandler.getSubscribedChannel())
                .build();
        // Create a pollable channel for two things:
        // 1. The pollable channel is where this thread should look for the reply.
        QueueChannel replyQueueChannel = new QueueChannel();
        // 2. The message handler will send the reply to this pollable channel once it receives the reply, using the correlation id.
        myMessageHandler.add(requestID, replyQueueChannel);
        MessageChannel channel = context.getBean(requestChannel, MessageChannel.class);
        channel.send(inputMessage);
        return replyQueueChannel;
    }

    @SuppressWarnings("unchecked")
    private <T> T ReceiveReply(PollableChannel replyChannel)
    {
        Message<?> groupMessage = replyChannel.receive();
        return (T) groupMessage.getPayload();
    }

    private ClassPathXmlApplicationContext context;

    @Autowired
    private MyMessageHandler myMessageHandler;
}
/**
 * Message Handler
 */
public class MyMessageHandler implements MessageHandler
{
    private final Map<Object, MessageChannel> idChannelsMap = new TreeMap<>();
    private final Object lock = new Object();
    private final SubscribableChannel subscribedChannel;

    public MyMessageHandler(SubscribableChannel subscribedChannel)
    {
        this.subscribedChannel = subscribedChannel;
    }

    @Override
    public void handleMessage(Message<?> message) throws MessagingException
    {
        synchronized (lock)
        {
            this.idChannelsMap.get(message.getHeaders().getCorrelationId()).send(message);
            this.idChannelsMap.remove(message.getHeaders().getCorrelationId());
        }
    }

    public void add(Object correlationId, MessageChannel messageChannel)
    {
        synchronized (lock)
        {
            this.idChannelsMap.put(correlationId, messageChannel);
        }
    }

    public SubscribableChannel getSubscribedChannel()
    {
        return subscribedChannel;
    }
}
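For this to work, myMessageHandler has to be subscribed to the shared reply channel that the requests point their replies at. A minimal wiring sketch (Java config and bean names are my assumptions; the original uses a ClassPathXmlApplicationContext, where the equivalent XML would declare the channel and subscribe the handler):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.messaging.SubscribableChannel;

@Configuration
public class ReplyCorrelationConfig {

    @Bean
    public SubscribableChannel commonReplyChannel() {
        return new DirectChannel();
    }

    @Bean
    public MyMessageHandler myMessageHandler(SubscribableChannel commonReplyChannel) {
        MyMessageHandler handler = new MyMessageHandler(commonReplyChannel);
        // Every reply lands on the shared channel; the handler then dispatches it
        // to the per-request QueueChannel registered under its correlationId.
        commonReplyChannel.subscribe(handler);
        return handler;
    }
}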

Related

Is there a way to get a LifecycleOwner in FirebaseMessagingService

I'm developing a chat app and I'm using Firebase Cloud Messaging for notifications.
I found that it was best to save my notifications (notification info) in a local database, i.e. Room, so it helps me handle the badge counts and the clearing of specific chat notifications.
Steps:
Set up my FirebaseMessagingService and tested it (getting my notifications successfully);
Set up the Room database and tested inserting and getting all data (LiveData) (working well);
I want to observe the LiveData inside MyFirebaseMessagingService, but to do so I need a LifecycleOwner, and I have no idea where to get it from.
I searched on Google but the only solution was to use a LifecycleService, and I need FirebaseMessagingService for my notification purposes.
This is my code:
// Room Database class
private static volatile LocalDatabase INSTANCE;
private static final int NUMBER_OF_THREADS = 4;
public static final ExecutorService taskExecutor =
        Executors.newFixedThreadPool(NUMBER_OF_THREADS);

public static LocalDatabase getDatabase(final Context context) {
    if (INSTANCE == null) {
        synchronized (RoomDatabase.class) {
            if (INSTANCE == null) {
                INSTANCE = Room.databaseBuilder(context.getApplicationContext(),
                        LocalDatabase.class, "local_database")
                        .build();
            }
        }
    }
    return INSTANCE;
}

public abstract NotificationDao dao();

// DAO interface
@Insert
void insert(NotificationEntity notificationEntity);

@Query("DELETE FROM notificationentity WHERE trade_id = :tradeId")
int clearByTrade(String tradeId);

@Query("SELECT * FROM notificationentity")
LiveData<List<NotificationEntity>> getAll();
// Repository class
private LiveData<List<NotificationEntity>> listLiveData;

public Repository() {
    firestore = FirebaseFirestore.getInstance();
    storage = FirebaseStorage.getInstance();
}

public Repository(Application application) {
    LocalDatabase localDb = LocalDatabase.getDatabase(application);
    dao = localDb.dao();
    listLiveData = dao.getAll();
}
...
public void saveNotificationInfo(@NonNull NotificationEntity entity) {
    LocalDatabase.taskExecutor.execute(() -> {
        try {
            dao.insert(entity);
            H.debug("NotificationData saved in local db");
        } catch (Exception e) {
            H.debug("Failed to save NotificationData in local db: " + e.getMessage());
        }
    });
}

public LiveData<List<NotificationEntity>> getNotifications() { return listLiveData; }

public void clearNotificationInf(@NonNull String tradeId) {
    LocalDatabase.taskExecutor.execute(() -> {
        try {
            H.debug("trying to delete rows for id :" + tradeId + "...");
            int n = dao.clearByTrade(tradeId);
            H.debug("Cleared: " + n + " notification info from localDatabase");
        } catch (Exception e) {
            H.debug("Failed clear NotificationData in local db: " + e.getMessage());
        }
    });
}

// ViewModel class
private Repository rep;
private LiveData<List<NotificationEntity>> list;

public VModel(@NonNull Application application) {
    super(application);
    rep = new Repository(application);
    list = rep.getNotifications();
}

public void saveNotificationInfo(Context context, @NonNull NotificationEntity entity) {
    rep.saveNotificationInfo(entity);
}

public LiveData<List<NotificationEntity>> getNotifications() {
    return rep.getNotifications();
}

public void clearNotificationInf(Context context, @NonNull String tradeId) {
    rep.clearNotificationInf(tradeId);
}
And finally the FirebaseMessagingService class:
private static final String TAG = "MyFireBaseService";
private static final int SUMMARY_ID = 999;
private SoundManager sm;
private Context context;
private final String GROUP_KEY = "com.opendev.xpresso.group_xpresso_group_key";
private Repository rep;
private NotificationDao dao;

@Override
public void onCreate() {
    super.onCreate();
    context = this;
    rep = new Repository();
}

/**
 * Called if the InstanceID token is updated. This may occur if the security of
 * the previous token had been compromised. Note that this is also called when the
 * InstanceID token is initially generated, so this is where you would retrieve the token.
 */
@Override
public void onNewToken(@NonNull String s) {
    super.onNewToken(s);
}

@Override
public void onMessageReceived(@NonNull RemoteMessage remoteMessage) {
    super.onMessageReceived(remoteMessage);
    H.debug("OnMessageReceived...");
    try {
        Map<String, String> data = remoteMessage.getData();
        if (Objects.requireNonNull(data.get("purpose")).equals("notify_message")) {
            String chatId;
            if ((chatId = data.get("chatId")) == null) {
                H.debug("onMessageReceived: chatId null! Aborting...");
                return;
            }
            FirebaseFirestore db = FirebaseFirestore.getInstance();
            Task<DocumentSnapshot> chatTask = db.collection("activeTrades").document(chatId).get();
            Task<DocumentSnapshot> userTask = db.collection("users")
                    .document(FirebaseAuth.getInstance().getCurrentUser().getUid()).get();
            Tasks.whenAllSuccess(chatTask, userTask).addOnSuccessListener(objects -> {
                if (!((DocumentSnapshot) objects.get(0)).exists() || !((DocumentSnapshot) objects.get(1)).exists()) {
                    H.debug("OnMessageReceived: querying data failed: NOT EXISTS");
                    return;
                }
                Chat chat = ((DocumentSnapshot) objects.get(0)).toObject(Chat.class);
                MainActivity.USER = ((DocumentSnapshot) objects.get(1)).toObject(User.class);
                // Now we've got all the needed info, we can process the notification:
                // save the notification locally and update the badge count,
                // then notify for all the notifications in the local database
                NotificationEntity entity = new NotificationEntity();
                entity.setNotificationId(getNextNotificationId());
                entity.setTradeId(chatId);
                entity.setChanelId(context.getResources().getString(R.string.channel_id));
                entity.setTitle(data.get("title"));
                entity.setMessage(data.get("message"));
                entity.setPriority(NotificationCompat.PRIORITY_HIGH);
                entity.setCategory(NotificationCompat.CATEGORY_MESSAGE);
                rep.saveNotificationInfo(entity);
                rep.getNotifications().observe(HOW_TO_GET_THE_LIFECYCLE_OWNER, new Observer<List<NotificationEntity>>() {
                    @Override
                    public void onChanged(List<NotificationEntity> notificationEntities) {
                        //
                    }
                });
            }).addOnFailureListener(e -> H.debug("OnMessageReceived: querying data failed: " + e.getMessage()));
        }
    } catch (Exception e) { H.debug(e.getMessage()); }
}
Update:
It is not recommended to use a LiveData object inside a FirebaseMessagingService, because a FirebaseMessagingService is not part of the Android activity lifecycle and therefore does not have a LifecycleOwner. Instead of trying to use LiveData inside the FirebaseMessagingService, you can use a different approach to handle the badge count and the clearing of specific chat notifications.
So I used a broadcast receiver to receive the notifications. I set the broadcast receiver in my FirebaseMessagingService, and it receives the notifications and updates the badge count in the local Room database.
I created a BroadcastReceiver for this, and in its onReceive method I send an Intent to a service and handle the badge logic in the service.
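A minimal sketch of such a receiver (the action name and BadgeService are hypothetical; the original post does not show this code):
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class NotificationBroadcastReceiver extends BroadcastReceiver {

    // Hypothetical action name
    public static final String ACTION_NOTIFICATION_SAVED =
            "com.example.app.ACTION_NOTIFICATION_SAVED";

    @Override
    public void onReceive(Context context, Intent intent) {
        if (ACTION_NOTIFICATION_SAVED.equals(intent.getAction())) {
            // Hand the badge work off to a service (BadgeService is hypothetical)
            Intent serviceIntent = new Intent(context, BadgeService.class);
            serviceIntent.putExtras(intent);
            context.startService(serviceIntent);
        }
    }
}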
I'm answering my own question just to show my alternative workaround.
I believe the LiveData observer is still the best way for me, but until someone helps me with a solution to get a LifecycleOwner in FirebaseMessagingService, I'm going to use custom listeners for my insert() and my getAll(), like the following:
public interface RoomInsertListener {
    void onInsert();
}

public interface RoomGetListener {
    void onGet(List<NotificationEntity> list);
}
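The listener-accepting Repository methods are not shown in the answer; a minimal sketch of how they might look (the synchronous getAllSync() DAO query is an assumed addition, since getAll() returns LiveData):
// Inside the Repository class
public void saveNotificationInfo(@NonNull NotificationEntity entity, RoomInsertListener listener) {
    LocalDatabase.taskExecutor.execute(() -> {
        dao.insert(entity);
        // Callback fires on the executor thread once the insert has completed
        listener.onInsert();
    });
}

public void getNotifications(RoomGetListener listener) {
    LocalDatabase.taskExecutor.execute(() -> {
        // Assumes a synchronous DAO variant of getAll(), e.g.
        // @Query("SELECT * FROM notificationentity") List<NotificationEntity> getAllSync();
        listener.onGet(dao.getAllSync());
    });
}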
Then use it in FirebaseMessagingService as follows:
NotificationEntity entity = new NotificationEntity();
entity.setNotificationId(getNextNotificationId());
entity.setTradeId(tradeId);
entity.setChanelId(context.getResources().getString(R.string.channel_id));
entity.setTitle(data.get("title"));
entity.setMessage(data.get("message"));
entity.setPriority(NotificationCompat.PRIORITY_HIGH);
entity.setCategory(NotificationCompat.CATEGORY_MESSAGE);
rep.saveNotificationInfo(entity, () -> rep.getNotifications(list -> {
    ShortcutBadger.applyCount(context, list.size());
    H.debug(list.size() + " notifications in Database: applied badge count...");
    for (NotificationEntity e : list) {
        H.debug("id:" + e.getNotificationId() + " trade: " + e.getTradeId());
    }
}));

Categorizing messages in Chronicle Queue for reading

I want to use Chronicle Queue to store messages using the high-level API, as mentioned in the answer to this question. But I also want some kind of key for my messages, as mentioned here.
1.) First, is this the right/efficient way to read/write using the high-level API? (Code samples below.)
2.) How do I separate different categories of messages? For example, "get me all messages for a particular key", the key in the code sample below being ric. Maybe use different topics in the same queue? But how would I do that?
Here's my test code to write to the queue:
public void saveHighLevel(MyInterface obj)
{
    try (ChronicleQueue queue = ChronicleQueue.singleBuilder(_location).build()) {
        ExcerptAppender appender = queue.acquireAppender();
        MyInterface trade = appender.methodWriter(MyInterface.class);
        // Write
        trade.populate(obj);
    }
}
And here's one to read:
public void readHighLevel()
{
    try (ChronicleQueue queue = ChronicleQueue.singleBuilder(_location).build()) {
        ExcerptTailer tailer = queue.createTailer();
        MyInterface container = new MyData();
        MethodReader reader = tailer.methodReader(container);
        while (reader.readOne()) {
            System.out.println(container);
        }
    }
}
MyInterface:
public interface MyInterface
{
    public double getPrice();
    public int getSize();
    public String getRic();
    public void populate(MyInterface obj);
}
Implementation of populate:
public void populate(MyInterface obj)
{
    this.price = obj.getPrice();
    this.ric = obj.getRic();
    this.size = obj.getSize();
}
I found the answer to part (2) of my question in the question of this post.
Essentially, by doing:
ChronicleQueue queue = ChronicleQueue.singleBuilder("Topic/SubTopic").build();
where Topic can be substituted with the key I'm looking for.
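For example (a minimal sketch reusing MyInterface and MyData from the question; the "trades/" path layout is an assumption):
// Write each key's (ric's) messages to its own sub-queue
public void saveHighLevel(MyInterface obj) {
    try (ChronicleQueue queue = ChronicleQueue.singleBuilder("trades/" + obj.getRic()).build()) {
        MyInterface writer = queue.acquireAppender().methodWriter(MyInterface.class);
        writer.populate(obj);
    }
}

// "All messages for a particular key" then means reading that key's sub-queue
public void readHighLevel(String ric) {
    try (ChronicleQueue queue = ChronicleQueue.singleBuilder("trades/" + ric).build()) {
        MyInterface container = new MyData();
        MethodReader reader = queue.createTailer().methodReader(container);
        while (reader.readOne()) {
            System.out.println(container);
        }
    }
}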

WebFlux filter: a Flux accesses Redis; if OK, run the next filter, but it behaves inconsistently

In the project, I verify that the app_key is valid via Redis.
I use ReactiveRedisTemplate to access the Redis data, and in a filter I verify that the app_key is valid. If the app_key is valid, then jump to the next filter; otherwise output the exception to the client.
Actually: if the Redis connection times out, the error callback should run. But when Redis is running normally, the program does not execute the app_key verification; it jumps directly to the next filter.
Please tell me what to do. Thanks!
@Resource
private AppKeyProvider appKeyProvider;

public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
    try {
        String app_key = exchange.getRequest().getQueryParams().getFirst("app_key");
        // app_key verification
        Flux.just(app_key).flatMap(key -> appKeyProvider.getAppKey(key)).subscribe(
                appKey -> {
                    if (appKey == null) {
                        // app_key is not valid
                        throw new AppException(ErrorCode.ILLEGAL_APP_KEY);
                    } else {
                        // do... jump to next filter
                    }
                },
                ex -> {
                    throw new AppException(ErrorCode.SERVICE_BASIC_ERROR, ex);
                }
        );
    } catch (AppException ex) {
        exchange.getResponse().setStatusCode(HttpStatus.BAD_REQUEST);
        exchange.getResponse().getHeaders().setContentType(MediaType.APPLICATION_JSON);
        String result = RestHelper.build(ex, exchange).toString();
        return exchange.getResponse().writeWith(Mono.just(exchange.getResponse().bufferFactory().wrap(result.getBytes(Charsets.UTF_8))));
    }
    return chain.filter(exchange);
}
AppKeyProvider.java
@Component
public class AppKeyProvider {

    @Resource
    private ReactiveRedisTemplate reactiveRedisTemplate;

    private final static Logger logger = LoggerFactory.getLogger(AppKeyProvider.class);
    private final static AppKeyProvider instance = new AppKeyProvider();
    private static ConcurrentHashMap<String, Api> apiMap = new ConcurrentHashMap<String, Api>();
    private final static Lock lock = new ReentrantLock(true);

    /**
     * Get AppKey
     *
     * @param app_key
     * @return
     */
    public Mono<AppKey> getAppKey(String app_key) {
        ReactiveValueOperations<String, AppKey> operations = reactiveRedisTemplate.opsForValue();
        Mono<AppKey> appKey = operations.get(RedisKeypPrefix.APP_KEY + app_key);
        return appKey;
    }
}
This happens because you've manually subscribed to the key lookup part. Doing so decouples the main filter processing from that operation, meaning they can happen concurrently on different threads, so they can't track each other's result.
Also, in reactive programming, errors happen within the pipeline and should be dealt with using operators; try/catch blocks won't work in this case.
Here's an attempt at fixing this code snippet:
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
    String app_key = exchange.getRequest().getQueryParams().getFirst("app_key");
    return appKeyProvider.getAppKey(app_key)
            .switchIfEmpty(Mono.error(new AppException(ErrorCode.ILLEGAL_APP_KEY)))
            .flatMap(key -> chain.filter(exchange))
            .onErrorResume(AppException.class, exc -> {
                exchange.getResponse().setStatusCode(HttpStatus.BAD_REQUEST);
                exchange.getResponse().getHeaders().setContentType(MediaType.APPLICATION_JSON);
                String result = RestHelper.build(exc, exchange).toString();
                return exchange.getResponse().writeWith(Mono.just(exchange.getResponse().bufferFactory().wrap(result.getBytes(Charsets.UTF_8))));
            });
}

Stop polling files when RabbitMQ is down: Spring Integration

I'm working on a project where we are polling files from an SFTP server and streaming them out as an object onto the RabbitMQ queue. Currently, when RabbitMQ is down, it still polls and deletes the file from the server, and loses the file while sending it onto the queue. I'm using ExpressionEvaluatingRequestHandlerAdvice to remove the file on successful transformation. My code looks like this:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpProperties.getSftpHost());
    factory.setPort(sftpProperties.getSftpPort());
    factory.setUser(sftpProperties.getSftpPathUser());
    factory.setPassword(sftpProperties.getSftpPathPassword());
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
@InboundChannelAdapter(channel = TransformerChannel.TRANSFORMER_OUTPUT, autoStartup = "false",
        poller = @Poller(value = "customPoller"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(sftpRemoteFileTemplate,
            null);
    messageSource.setRemoteDirectory(sftpProperties.getSftpDirPath());
    messageSource.setFilter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
            "streaming"));
    messageSource.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
    return messageSource;
}

@Bean
@Transformer(inputChannel = TransformerChannel.TRANSFORMER_OUTPUT,
        outputChannel = SFTPOutputChannel.SFTP_OUTPUT,
        adviceChain = "deleteAdvice")
public org.springframework.integration.transformer.Transformer transformer() {
    return new SFTPTransformerService("UTF-8");
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpressionString(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
    advice.setPropagateEvaluationFailures(false);
    return advice;
}
I don't want the files to get removed/polled from the remote SFTP server when the RabbitMQ server is down. How can I achieve this?
UPDATE
Apologies for not mentioning that I'm using the Spring Cloud Stream Rabbit binder. And here is the transformer service:
public class SFTPTransformerService extends StreamTransformer {

    public SFTPTransformerService(String charset) {
        super(charset);
    }

    @Override
    protected Object doTransform(Message<?> message) throws Exception {
        String fileName = message.getHeaders().get("file_remoteFile", String.class);
        Object fileContents = super.doTransform(message);
        return new customFileDTO(fileName, (String) fileContents);
    }
}
UPDATE-2
I added a TransactionSynchronizationFactory on the customPoller as suggested. Now it doesn't poll the file when the Rabbit server is down, but when the server is up, it keeps polling the same file over and over again! I cannot figure out why. I guess I cannot use PollerSpec because I'm on version 4.3.2.
@Bean(name = "customPoller")
public PollerMetadata pollerMetadataDTX(StartStopTrigger startStopTrigger,
        CustomTriggerAdvice customTriggerAdvice) {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setAdviceChain(Collections.singletonList(customTriggerAdvice));
    pollerMetadata.setTrigger(startStopTrigger);
    pollerMetadata.setMaxMessagesPerPoll(Long.valueOf(sftpProperties.getMaxMessagePoll()));

    ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
            new ExpressionEvaluatingTransactionSynchronizationProcessor();
    syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
    syncProcessor.setBeforeCommitChannel(
            applicationContext.getBean(TransformerChannel.TRANSFORMER_OUTPUT, MessageChannel.class));
    syncProcessor.setAfterCommitChannel(
            applicationContext.getBean(SFTPOutputChannel.SFTP_OUTPUT, MessageChannel.class));
    syncProcessor.setAfterCommitExpression(new SpelExpressionParser().parseExpression(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])"));

    DefaultTransactionSynchronizationFactory defaultTransactionSynchronizationFactory =
            new DefaultTransactionSynchronizationFactory(syncProcessor);
    pollerMetadata.setTransactionSynchronizationFactory(defaultTransactionSynchronizationFactory);
    return pollerMetadata;
}
I don't know if you need this info, but my CustomTriggerAdvice and StartStopTrigger look like this:
@Component
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {

    @Autowired private StartStopTrigger startStopTrigger;

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            if (startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        } else {
            if (!startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        }
        return result;
    }
}

public class StartStopTrigger implements Trigger {

    private PeriodicTrigger startTrigger;
    private boolean start;

    public StartStopTrigger(PeriodicTrigger startTrigger, boolean start) {
        this.startTrigger = startTrigger;
        this.start = start;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        if (!start) {
            return null;
        }
        start = true;
        return startTrigger.nextExecutionTime(triggerContext);
    }

    public void stop() {
        start = false;
    }

    public void start() {
        start = true;
    }

    public boolean getStart() {
        return this.start;
    }
}
Well, it would be great to see your SFTPTransformerService, to determine how it is possible for the onSuccessExpression to be evaluated when there should be an exception in case of a down broker.
You also should not only throw an exception to avoid performing the delete, but also consider adding a RequestHandlerRetryAdvice to re-send the file to RabbitMQ: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice
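A minimal sketch of such an advice bean (the retry policy and back-off values are illustrative assumptions); it would then be referenced from the @Transformer's adviceChain together with deleteAdvice:
@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    // Illustrative policy: up to 3 attempts with a fixed 2-second back-off
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
    FixedBackOffPolicy backOff = new FixedBackOffPolicy();
    backOff.setBackOffPeriod(2000);
    retryTemplate.setBackOffPolicy(backOff);
    advice.setRetryTemplate(retryTemplate);
    return advice;
}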
UPDATE
So, since Gary guessed that you use Spring Cloud Stream to send messages to the Rabbit binder after your internal process (a pity that you didn't share that information originally), you need to take a look at the binder's error handling on the matter: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_retry_with_the_rabbitmq_binder
And it is true that the ExpressionEvaluatingRequestHandlerAdvice is applied only to the SFTPTransformerService and nothing more. The downstream error (in the binder) is not included in this process.
UPDATE 2
Yeah... I think Gary is right, and we have no choice but to configure a TransactionSynchronizationFactory on the customPoller level instead of that ExpressionEvaluatingRequestHandlerAdvice.
The DefaultTransactionSynchronizationFactory can be configured with an ExpressionEvaluatingTransactionSynchronizationProcessor, which has a similar goal to the mentioned ExpressionEvaluatingRequestHandlerAdvice, but on the transaction level, which will include your whole flow, starting with the SFTP channel adapter and ending on the Rabbit binder level with the send-to-AMQP attempts.
See the Reference Manual for more information: https://docs.spring.io/spring-integration/reference/html/transactions.html#transaction-synchronization
The point with the ExpressionEvaluatingRequestHandlerAdvice (and any AbstractRequestHandlerAdvice) is that it has a boundary only around the handleRequestMessage() method, therefore only around the component on which it is declared.

Storm KafkaSpout Kryo serialization issue for a Java bean from a Kafka topic

Hi, I am new to Storm and Kafka.
I am using Storm 1.0.1 and Kafka 0.10.0.
We have a KafkaSpout that receives a Java bean from a Kafka topic.
I have spent several hours digging to find the right approach for this.
I found a few useful articles, but none of the approaches have worked for me so far.
Following is my code:
StormTopology:
public class StormTopology {

    public static void main(String[] args) throws Exception {
        // Topo test / zkroot test
        if (args.length == 4) {
            System.out.println("started");
            BrokerHosts hosts = new ZkHosts("localhost:2181");
            SpoutConfig kafkaConf1 = new SpoutConfig(hosts, args[1], args[2],
                    args[3]);
            kafkaConf1.zkRoot = args[2];
            kafkaConf1.useStartOffsetTimeIfOffsetOutOfRange = true;
            kafkaConf1.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
            kafkaConf1.scheme = new SchemeAsMultiScheme(new KryoScheme());
            KafkaSpout kafkaSpout1 = new KafkaSpout(kafkaConf1);
            System.out.println("started");

            ShuffleBolt shuffleBolt = new ShuffleBolt(args[1]);
            AnalysisBolt analysisBolt = new AnalysisBolt(args[1]);
            TopologyBuilder topologyBuilder = new TopologyBuilder();
            topologyBuilder.setSpout("kafkaspout", kafkaSpout1, 1);
            //builder.setBolt("counterbolt2", countbolt2, 3).shuffleGrouping("kafkaspout");
            // This is for field grouping; we need two bolts for field grouping or it won't work
            topologyBuilder.setBolt("shuffleBolt", shuffleBolt, 3).shuffleGrouping("kafkaspout");
            topologyBuilder.setBolt("analysisBolt", analysisBolt, 5).fieldsGrouping("shuffleBolt", new Fields("trip"));

            Config config = new Config();
            config.registerSerialization(VehicleTrip.class, VehicleTripKyroSerializer.class);
            config.setDebug(true);
            config.setNumWorkers(1);

            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology(args[0], config, topologyBuilder.createTopology());
            // StormSubmitter.submitTopology(args[0], config,
            //         builder.createTopology());
        } else {
            System.out
                    .println("Insufficent Arguements - topologyName kafkaTopic ZKRoot ID");
        }
    }
}
I am serializing the data to Kafka using Kryo.
KafkaProducer:
public class StreamKafkaProducer {

    private static Producer producer;
    private final Properties props = new Properties();
    private static final StreamKafkaProducer KAFKA_PRODUCER = new StreamKafkaProducer();

    private StreamKafkaProducer() {
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "com.abc.serializer.MySerializer");
        producer = new org.apache.kafka.clients.producer.KafkaProducer(props);
    }

    public static StreamKafkaProducer getStreamKafkaProducer() {
        return KAFKA_PRODUCER;
    }

    public void produce(String topic, VehicleTrip vehicleTrip) {
        ProducerRecord<String, VehicleTrip> producerRecord = new ProducerRecord<>(topic, vehicleTrip);
        producer.send(producerRecord);
        //producer.close();
    }

    public static void closeProducer() {
        producer.close();
    }
}
Kryo serializer:
public class DataKyroSerializer extends Serializer<Data> implements Serializable {

    @Override
    public void write(Kryo kryo, Output output, Data data) {
        output.writeLong(data.getStartedOn().getTime());
        output.writeLong(data.getEndedOn().getTime());
    }

    @Override
    public Data read(Kryo kryo, Input input, Class<Data> aClass) {
        Data data = new Data();
        data.setStartedOn(new Date(input.readLong()));
        data.setEndedOn(new Date(input.readLong()));
        return data;
    }
}
I need to get the data back into the Data bean.
As per a few articles, I need to provide a custom scheme and make it part of the topology, but so far I have had no luck.
Code for Bolt and Scheme
Scheme:
public class KryoScheme implements Scheme {

    private ThreadLocal<Kryo> kryos = new ThreadLocal<Kryo>() {
        protected Kryo initialValue() {
            Kryo kryo = new Kryo();
            kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
            return kryo;
        };
    };

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        return Utils.tuple(kryos.get().readObject(new ByteBufferInput(ser.array()), Data.class));
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("data");
    }
}
and bolt:
public class AnalysisBolt implements IBasicBolt {

    private static final long serialVersionUID = 1L;
    private String topicname = null;

    public AnalysisBolt(String topicname) {
        this.topicname = topicname;
    }

    public void prepare(Map stormConf, TopologyContext topologyContext) {
        System.out.println("prepare");
    }

    public void execute(Tuple input, BasicOutputCollector collector) {
        System.out.println("execute");
        Fields fields = input.getFields();
        try {
            JSONObject eventJson = (JSONObject) JSONSerializer.toJSON((String) input
                    .getValueByField(fields.get(1)));
            String StartTime = (String) eventJson.get("startedOn");
            String EndTime = (String) eventJson.get("endedOn");
            String Oid = (String) eventJson.get("_id");
            int V_id = (Integer) eventJson.get("vehicleId");
            // call method getEventForVehicleWithinTime(Long vehicleId, Date startTime, Date endTime)
            System.out.println("===========" + Oid + "| " + V_id + "| " + StartTime + "| " + EndTime);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
But if I submit the Storm topology, I am getting this error:
java.lang.IllegalStateException: Spout 'kafkaspout' contains a
non-serializable field of type com.abc.topology.KryoScheme$1, which
was instantiated prior to topology creation.
com.minda.iconnect.topology.KryoScheme$1 should be instantiated within
the prepare method of 'kafkaspout at the earliest.
I would appreciate help debugging the issue and guidance towards the right path.
Thanks
Your ThreadLocal is not Serializable. The preferable solution would be to make your serializer both Serializable and thread-safe. If this is not possible, then I see two alternatives, since there is no prepare method as you would get in a bolt:
1. Declare it as static, which is inherently transient.
2. Declare it transient and access it via a private get method, initializing the variable on first access (see the sketch after this list).
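A minimal sketch of the second alternative applied to the KryoScheme above:
public class KryoScheme implements Scheme {

    // transient: excluded from topology serialization; rebuilt lazily on the worker
    private transient ThreadLocal<Kryo> kryos;

    private ThreadLocal<Kryo> getKryos() {
        if (kryos == null) {
            kryos = ThreadLocal.withInitial(() -> {
                Kryo kryo = new Kryo();
                kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
                return kryo;
            });
        }
        return kryos;
    }

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        return Utils.tuple(getKryos().get().readObject(new ByteBufferInput(ser.array()), Data.class));
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("data");
    }
}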
Within the Storm lifecycle, the topology is instantiated and then serialized to byte format to be stored in ZooKeeper, prior to the topology being executed. Within this step, if a spout or bolt within the topology has an initialized unserializable property, serialization will fail.
If there is a need for a field that is unserializable, initialize it within the bolt or spout's prepare method, which is run after the topology is delivered to the worker.
Source: Best Practices for implementing Apache Storm