How do I use ActiveMQ in Apache Flink? - activemq

I am getting my data through ActiveMQ which I want to process in real time with Apache Flink DataStreams. There is support for many messaging services like RabbitMQ and Kafka but I can't see any support for ActiveMQ. How can I use it?

Since there is no support for ActiveMQ, I would recommend implementing a custom source.
You basically have to implement the SourceFunction interface.
If you want to have exactly-once semantics, you can base your implementation on the MultipleIdsMessageAcknowledgingSourceBase class.
I would recommend starting by implementing a plain SourceFunction first.
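For illustration, here is a minimal sketch of such a source using the plain JMS API against ActiveMQ. The broker URL and queue name are assumptions, and this version does not participate in checkpointing, so it only gives at-most-once guarantees:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

// Sketch of a custom ActiveMQ source for Flink (no checkpointing, at-most-once).
public class ActiveMQSource implements SourceFunction<String> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        // Assumed broker URL and queue name -- adjust to your setup.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("flink.input"));

        try {
            while (running) {
                Message message = consumer.receive(1000); // poll with a timeout so cancel() is honored
                if (message instanceof TextMessage) {
                    synchronized (ctx.getCheckpointLock()) {
                        ctx.collect(((TextMessage) message).getText());
                    }
                }
            }
        } finally {
            connection.close();
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}

You can then attach it with StreamExecutionEnvironment#addSource. For exactly-once semantics you would instead extend MultipleIdsMessageAcknowledgingSourceBase as mentioned above and acknowledge the JMS message IDs once a checkpoint completes.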

Found a JMS connector for Flink:
https://github.com/jkirsch/senser/blob/master/src/main/java/edu/tuberlin/senser/images/flink/io/FlinkJMSStreamSource.java

Related

How to modify spring-websocket to interface with broker via MQTT instead of STOMP?

I'm building a spring-websocket application that currently uses RabbitMQ as a message broker via the STOMP protocol. The rest of our organization mostly uses IBM WebSphere MQ as a message broker, so we'd like to convert it away from RabbitMQ. However, WebSphere MQ doesn't support the STOMP protocol, which is spring-websocket's default. MQTT seems like the easiest supported protocol to use instead. Ideally our front-end web clients will continue to use STOMP, but I'm also OK with migrating them to MQTT if needed.
What classes do I need to overwrite to make spring-websocket interface with the broker via MQTT instead of STOMP? This article provides some general guidance that I should extend AbstractMessageBrokerConfiguration, but I'm unclear where to begin.
Currently I'm using the standard configuration methods, registry.enableStompBrokerRelay and registerStompEndpoints, in AbstractWebSocketMessageBrokerConfigurer.
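For reference, a minimal sketch of that standard STOMP setup; the relay host, port, and endpoint paths here are just placeholder assumptions:

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Current setup: relay /topic destinations to an external STOMP broker (RabbitMQ).
        registry.enableStompBrokerRelay("/topic")
                .setRelayHost("localhost")   // assumed broker host
                .setRelayPort(61613);        // RabbitMQ STOMP plugin default port
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws").withSockJS();
    }
}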
Ryan has some good pointers.
The main work is going to be creating a replacement for StompBrokerRelayMessageHandler with an MqttBrokerMessageHandler that not only talks to an MQTT broker but also adapts client STOMP frames to MQTT and vice versa. The protocols are similar enough that it may be possible to find common ground but you won't know until you try.
Note that we did have plans for MQTT support (https://jira.spring.io/browse/SPR-12581), but the key issue was that SockJS, which is required over the web for fallback support, does not support binary messages.
Here's my stab at this after reviewing the spring-websocket source code:
Change WebSocketConfig:
Remove @EnableWebSocketMessageBroker
Add the new annotation @EnableMqttWebSocketMessageBroker (a rough sketch of this annotation appears after this list)
Create MqttBrokerMessageHandler that extends AbstractBrokerMessageHandler -- suggest we copy and edit StompBrokerRelayMessageHandler
Create a new class that EnableMqttWebSocketMessageBroker imports: DelegatingMqttWebSocketMessageBrokerConfiguration
DelegatingMqttWebSocketMessageBrokerConfiguration extends AbstractMessageBrokerConfiguration directly and routes to MqttBrokerMessageHandler
Add this to server.xml on WebSphere Liberty:
<feature>websocket-1.1</feature>
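If it helps, here is a rough sketch of what the custom enabling annotation could look like, mirroring how @EnableWebSocketMessageBroker imports its delegating configuration. DelegatingMqttWebSocketMessageBrokerConfiguration (and the MqttBrokerMessageHandler it would wire up) are the hypothetical classes described above, not existing Spring classes:

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.springframework.context.annotation.Import;

// Hypothetical annotation: pulls in the custom configuration that registers
// MqttBrokerMessageHandler instead of the STOMP broker relay.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@Documented
@Import(DelegatingMqttWebSocketMessageBrokerConfiguration.class)
public @interface EnableMqttWebSocketMessageBroker {
}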

Is Apache Kafka another API for JMS?

Isn't Apache Kafka just another implementation of JMS?
I am using JMS+AMQ in my application and am migrating to Apache Kafka. Do I have to change all of my JMS code?
No, Kafka is different from JMS systems such as ActiveMQ.
see ActiveMQ vs Apollo vs Kafka
Kafka has fewer features than ActiveMQ, as the emphasis has been put on performance. So before migrating, check that the features you use in AMQ are also available in Kafka.
However, there is an open suggestion for a bridge between JMS and Kafka, to allow exactly what you need. Maybe the link below can help you:
https://issues.apache.org/jira/browse/KAFKA-1995
Actually, the two are not the same. And with a little more time seeing the two co-exist, and listening to the problems and happy points from those deploying each in the field, there is a little more to say about each one.
Firstly, JMS supports both point-to-point messaging (where messages are sent to single consumers; the consumers themselves maintain their message queues) and the publish-and-subscribe (pub/sub) model (where messages are written to a single topic, and consumers, independently, decide which messages to consume).
In a point-to-point messaging architecture, message producers and consumers know each other, whereas in a pub/sub model they do not. Apache Kafka focuses on the pub/sub model, maintaining a separate log/topic from which consumers read from offsets. Kafka is also built for the cloud, with high throughput as a core consideration.
Many in our community and at meetups throw their hands up in frustration at MOMs (message-oriented middlewares) like JMS and switch to Kafka, for, what boils down to one reason: scalability. They argue that Kafka is better suited for scale than other MOMs because Kafka maintains a partitioned topic log. In so doing, Kafka can split up message flow to groups of consumers by partition and batch transmit the messages.
This concept also allows Kafka to have more granular control over ACLs (access control) to Kafka Consumers, although there are some issues there, which Apache Pulsar is addressing.
Finally, on Kafka, since the client/consumer decides which messages to consume (by offset in the topic), this removes some of the producer-side complexity of routing rules built into MOMs like JMS.
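To make that concrete, here is a small sketch with the standard Kafka Java client in which the consumer itself picks the partition and offset it reads from; the broker address, topic name, and offset are made-up values:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OffsetControlledConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition partition = new TopicPartition("events", 0); // hypothetical topic
            consumer.assign(Collections.singletonList(partition));
            consumer.seek(partition, 42L); // the consumer, not the broker, decides where to start reading

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}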
There are more differences than that, but this is a distillation of some of the ones that keep coming up! Hope this helps.
No, Kafka uses its own non-standard protocol and clients.
However, there's a 3rd-party JMS Client for Kafka from Confluent.

What is the difference between ActiveMQ and Apache Camel

Both are open source ESBs. So what distinct features does each of them have, and in which scenarios/use cases should each be used?
Apache Camel is basically a mediation engine that helps you define mediation and routing rules for your application, whereas ActiveMQ is a message broker that lets you send messages between producers and consumers using the Java Message Service (JMS).
Apache Camel can use ActiveMQ for forwarding messages.
The two are altogether different types of systems, with different functionality, and cannot be compared directly.
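As a small illustration of how the two fit together, here is a sketch of a Camel route that consumes from one ActiveMQ queue and forwards to another. The broker URL, queue names, and the camel-activemq package name depend on your Camel version, so treat these as assumptions:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.activemq.ActiveMQComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class CamelActiveMQExample {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        // Register the ActiveMQ component pointing at a local broker (assumed URL).
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Mediation rule: consume from one queue, log, and forward to another.
                from("activemq:queue:orders.incoming")
                    .log("Received: ${body}")
                    .to("activemq:queue:orders.processed");
            }
        });

        context.start();
        Thread.sleep(10_000); // keep the route running briefly for the demo
        context.stop();
    }
}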
hope this helps!
Good luck!

Consuming Spring Boot "metricsChannel" via Apache Camel/RabbitMQ

Spring Boot publishes all metrics events to a message channel "metricsChannel" when a dependency on spring-messaging is present. In my project I am using Apache Camel along with RabbitMQ as the broker. Is there any way to consume these metrics messages using purely Camel and not spring integration?
I can see Apache Camel has a SpringIntegration component; however, I would like to know if there is a way to directly access these messages via the RabbitMQ component, or do I have to add a dependency on the camel spring-integration component as well?
It would be far easier for you to implement your own MetricWriter and do whatever you want with Camel instead of trying to bridge the result with Spring Integration. Look at MessageChannelMetricWriter; the implementation is quite straightforward.
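A rough sketch of that idea, assuming the Spring Boot 1.x actuator MetricWriter API and a hypothetical Camel endpoint name:

import org.apache.camel.ProducerTemplate;
import org.springframework.boot.actuate.metrics.Metric;
import org.springframework.boot.actuate.metrics.writer.Delta;
import org.springframework.boot.actuate.metrics.writer.MetricWriter;

// Sketch: a MetricWriter that hands metric updates straight to a Camel
// ProducerTemplate instead of going through the Spring Integration "metricsChannel".
public class CamelMetricWriter implements MetricWriter {

    private final ProducerTemplate producer;

    public CamelMetricWriter(ProducerTemplate producer) {
        this.producer = producer;
    }

    @Override
    public void increment(Delta<?> delta) {
        producer.sendBody("direct:metrics", delta); // hypothetical Camel endpoint
    }

    @Override
    public void set(Metric<?> value) {
        producer.sendBody("direct:metrics", value);
    }

    @Override
    public void reset(String metricName) {
        // no-op for this sketch
    }
}

In Boot 1.3+, exposing it as a bean annotated with @ExportMetricWriter should be enough for metric updates to flow into it; from the direct: endpoint you can route to RabbitMQ or anywhere else with plain Camel.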

ActiveMQ, STOMP, Java example

Can anyone point me to a decent example where a Java STOMP client is used to connect to ActiveMQ?
Also I am interested in following:
Is failover supported over STOMP?
How do I create a durable subscription?
Does STOMP support asynchronous messaging? Examples? I think I have to implement the MessageListener interface for it, but I wasn't able to find an example of this.
If you really want to use STOMP from Java then you could look at StompJMS, which maps quite a bit of the JMS API onto STOMP. It doesn't support failover, but there aren't a lot of STOMP clients that do. When using Java you are better off using the native JMS client from the ActiveMQ broker, as it is going to be the most robust and feature-complete client library you will find.
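To illustrate that last point, here is a short sketch using the native ActiveMQ JMS client with the failover transport and an asynchronous MessageListener; the broker URL and queue name are assumptions:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class NativeJmsListenerExample {
    public static void main(String[] args) throws Exception {
        // failover:// gives automatic reconnect, which plain STOMP clients generally lack.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://localhost:61616)");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("example.queue"));

        // Asynchronous delivery: the client pushes incoming messages to this listener.
        consumer.setMessageListener(new MessageListener() {
            @Override
            public void onMessage(Message message) {
                try {
                    if (message instanceof TextMessage) {
                        System.out.println("Got: " + ((TextMessage) message).getText());
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });

        Thread.sleep(10_000); // keep the JVM alive briefly for the demo
        connection.close();
    }
}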