Would it be possible to create a server that publishes a client's request to a Kafka topic, and have a consumer that is subscribed to that topic respond directly to the client?
client -> server -> kafka -> consumer -> client
It sounds like you are looking for an HTTP bridge, i.e. something able to bridge the Apache Kafka protocol over HTTP.
I would suggest taking a look at the Strimzi Kafka HTTP bridge here: https://github.com/strimzi/strimzi-kafka-bridge
It's open source and Apache 2.0 licensed. It works not only when you have Kafka running on Kubernetes via the Strimzi project, but also with a bare-metal Kafka installation. Details are in the official documentation: https://strimzi.io/docs/bridge/latest/
Another option is the Confluent REST Proxy here: https://github.com/confluentinc/kafka-rest
It is also open source, but under the custom Confluent Community License. Official documentation: https://docs.confluent.io/platform/current/kafka-rest/index.html
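For illustration, here's a minimal sketch of the first leg of your diagram (client -> server -> Kafka) going through such a bridge: it POSTs a JSON record to a topic over HTTP. The host, port and topic name ("requests") are placeholders made up for the example; check the bridge's documentation for the exact endpoints and content types it supports.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BridgeProduceExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical bridge/REST proxy running at localhost:8080, producing to topic "requests"
        String body = "{\"records\":[{\"key\":\"client-42\",\"value\":{\"action\":\"doSomething\"}}]}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/topics/requests"))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Keep in mind that a bridge only covers producing and consuming over HTTP; the consumer -> client reply leg of your diagram still has to be implemented by your application, e.g. with a reply topic plus a correlation id that the original server (or the client) consumes.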
This is a follow-up question to How to implement HTTP request/reply when the response comes from a rabbitMQ reply queue using Spring Integration DSL?.
We were able to build the Spring Integration application and the SCDF stream successfully locally. We could send an HTTP request to the RabbitMQ request queue, which was bound to the SCDF stream's rabbit source, and we could receive the response back from the RabbitMQ response queue, which was bound to the SCDF stream's rabbit sink.
We have deployed the SCDF stream into a PCF environment that has an internal RabbitMQ broker bound to it. Now we need to specify the Spring RabbitMQ connection information in the Spring Integration application properties - currently it uses the default localhost:5672, which is no longer valid. Does anyone know how to get these RabbitMQ connection properties? We already checked the SCDF stream rabbit source/sink log files but couldn't find the information. I know we probably need to check internally with whoever set up SCDF/RabbitMQ in the PCF environment, but so far we haven't heard back from them.
Also, it appears we could take a different approach and bind both the SCDF stream and the integration application to a separate RabbitMQ instance (instead of using the existing one bundled with the SCDF configuration). Is that a recommended solution?
Thanks,
It is unclear whether you're using the SCDF tile or the SCDF OSS distribution (via manifest.yml) on PCF.
If you're using the OSS distribution and you provide the right RMQ service-instance configuration (one that you pre-created) in the manifest.yml, then SCDF will automatically propagate that RMQ service instance and bind it to the apps it deploys to your org/space. You don't need to muck around with connection credentials manually.
On the other hand, if you are using the SCDF Tile, the SCDF service broker will auto-create the RMQ SI and automatically bind it to the apps it deploys.
In summary, there's no reason to manually pass connection credentials or package them as application properties inside your apps. All of this can be automated, provided the deployment is configured correctly.
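To make the OSS route concrete, here is a rough sketch of the manifest.yml binding; the application name, jar path and service-instance name are placeholders, not values from your environment:

```yaml
---
applications:
  - name: scdf-server                      # hypothetical name for the SCDF server app
    path: spring-cloud-dataflow-server.jar # placeholder path
    services:
      - my-rabbit                          # RMQ service instance you pre-created with `cf create-service`
```

With the service instance bound like this, the connection credentials come from the binding itself, and SCDF propagates the same service instance to the stream apps it deploys, so nothing RabbitMQ-specific needs to be hard-coded in application properties.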
I have a Kafka cluster running on K8s. I am using the Confluent Kafka image, and I have an EXTERNAL listener that is working.
How can I add SSL encryption? Should I use an ingress? Where can I find good documentation?
Thank you
There is a manual way described in this gist, which does not use the Confluent image.
But for Confluent and its Helm chart (see "Confluent Operator: Getting Started with Apache Kafka and Kubernetes" by Rohit Bakhshi), you can follow:
"Encryption, authentication and external access for Confluent Kafka on Kubernetes" by Ryan Morris
Out of the box, the helm chart doesn’t support SSL configurations for encryption and authentication, or exposing the platform for access from outside the Kubernetes cluster.
To implement these requirements, a few modifications to the installation are needed.
In summary, they are:
Generate some private keys/certificates for brokers and clients
Create Kubernetes Secrets to provide them within your cluster
Update the broker StatefulSet with your Secrets and SSL configuration
Expose each broker pod via an external service
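Once the brokers expose TLS listeners externally, the client side mainly needs a truststore containing the CA that signed the broker certificates. Here is a minimal sketch of a producer configured for SSL encryption; the bootstrap address, file paths, password and topic name are placeholders for the example:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SslProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // External bootstrap address exposed by the per-broker services (placeholder)
        props.put("bootstrap.servers", "kafka.example.com:9093");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Encrypt traffic with TLS and trust the CA that signed the broker certificates
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "hello over TLS"));
        }
    }
}
```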
I recommend using the Strimzi Kafka operator to deploy Kafka to Kubernetes. I've been using it in production for a year now.
It supports SSL, external load balancers, the Kafka exporter, etc.
Strimzi Kafka Operator
EDIT2: My issue here was caused by an insufficient understanding of how transport connectors work in ActiveMQ. TL;DR is that ActiveMQ will implicitly "transform" or "relay" messages between your transport connector configurations defined in activemq.xml.
EDIT: Additional info, the STOMP messages received by the Angular application are used for debugging and demo purposes. Hence, simply converting the OpenWire message to a blob of readable text is sufficient.
I'm creating an Angular application (preferably a website, avoiding native applications) whose objective is to "tap into" an ActiveMQ server via WebSockets and subscribe to OpenWire messages. How do I let ActiveMQ transform OpenWire messages into STOMP messages and send these to any clients (i.e. my Angular application) connected to the ActiveMQ WebSocket connector?
In addition, it would be nice to have the ability to transform STOMP to OpenWire as well.
It must be Angular
Avoiding the use of native applications on the client-side is preferable although not a deal-breaker.
Adding extra processing stress on the ActiveMQ server must be done with caution.
To the best of my knowledge, the only way to let Angular "talk directly" with the ActiveMQ server, if I am to avoid native applications, is via STOMP messages sent over a WebSocket.
I already have an Angular application capable of STOMP communication by web sockets (e.g. something like https://github.com/stomp-js/ng2-stompjs-angular7).
I am missing information on how to configure the ActiveMQ server to transform OpenWire-->STOMP through its transport connectors.
In my understanding, what I am trying to do should be possible. Other users note that it is possible, but not how; e.g. users hint that what I want is possible in ActiveMQ but not in Apollo: ActiveMQ to Apollo transition, Openwire to Stomp protocol configuration.
I expect (preferably) to need something like an ActiveMQ transformer (e.g. adding a transformer to the connector configuration: AMQP & Openwire - Activemq broker and 2 different consumers), or maybe to write an ActiveMQ plugin (http://activemq.apache.org/developing-plugins.html). On ActiveMQ's website, an existing transformer is mentioned (http://activemq.apache.org/stomp.html, Message Transformations section):
Currently, ActiveMQ comes with a transformer that can transform XML/JSON text to Java objects
... but there is no mention of how to use it, and I am unsure whether I can benefit from it, or whether this means there are no transformers for OpenWire-->STOMP or vice versa.
I expect I might have misunderstood some of the concepts, and a "you're going in the wrong direction, do this instead" would also work as a good answer for me. At the time of writing, I expect I will have to create an ActiveMQ plugin using their Message Transformer interface (http://activemq.apache.org/message-transformation.html), although its sub-links are 404. I hope for a simpler solution, e.g. an existing OpenWire-->STOMP transformer:
<transportConnector name="openwire" uri="{some-openwire-uri}?transport.transformer=stomp"/>
ActiveMQ will "transform" any OpenWire message into a STOMP message (and vice versa) as needed, based on client connections. If an OpenWire-based JMS client connects and places a message onto a queue, and a STOMP-based client comes along and subscribes to that queue, the message will be converted into a STOMP message before being sent to that client.
Without knowing more about what issue you are having, it is hard to provide more insight than that. There are some cases where the transformation from OpenWire to STOMP might not yield exactly what you want, such as a MapMessage or StreamMessage (and definitely an ObjectMessage), so some care needs to be taken with cross-protocol messaging.
You do of course need to add a transport connector for each of the protocols you want to support (OpenWire, STOMP, AMQP, etc.). The clients need something to connect to; once they connect, the broker manages the message transformations among subscriptions on topics and queues.
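As an illustration, a minimal transport-connector section of activemq.xml covering this setup could look like the following; the port numbers are the ActiveMQ defaults, so adjust them to your installation. The ws connector is the one a browser-based STOMP-over-WebSocket client (such as the Angular app) connects to, while the JMS producers keep using the openwire connector, and the broker converts messages between them as subscriptions demand.

```xml
<transportConnectors>
    <!-- OpenWire for the JMS producers -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
    <!-- STOMP over plain TCP (optional) -->
    <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
    <!-- WebSockets (STOMP/MQTT over ws) for browser clients such as the Angular app -->
    <transportConnector name="ws" uri="ws://0.0.0.0:61614"/>
</transportConnectors>
```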
I am using ActiveMQ embedded in Glassfish with both the default 61616 communication port and a port with MQTT enabled.
Is there a way to publish to both of these MQs in one call if ActiveMQ is configured a certain way?
If not, is the only way to connect to the MQTT server from the J2EE server through a 3rd party MQTT client?
If so, is there an MQTT lib that can take advantage of the J2EE container's connection pools?
All protocols in ActiveMQ will share the same topics and queues.
You can subscribe and publish as you wish from Java/JMS, and the data will be accessible on the same topic using MQTT.
Of course, there will be some issues if you use JMS-only features, such as ObjectMessage and whatnot, but that is pretty obvious. Stick to text messages on topics and you should be fine.
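For example, a JMS publisher as simple as the following sketch produces messages that an MQTT subscriber on the equivalent topic can read; the broker URL and topic name are placeholders. ActiveMQ maps the JMS separator "." to the MQTT separator "/", so the JMS topic "sensors.temperature" is visible to MQTT clients as "sensors/temperature".

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsTopicPublisher {
    public static void main(String[] args) throws Exception {
        // Default OpenWire port; MQTT clients connect to the separate MQTT connector
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("sensors.temperature");
            MessageProducer producer = session.createProducer(topic);

            // A plain TextMessage keeps the payload readable for non-JMS consumers
            producer.send(session.createTextMessage("22.5"));
        } finally {
            connection.close();
        }
    }
}
```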
RabbitMQ supports multiple protocols: AMQP, MQTT, STOMP, ....
When using PHP, for example, it's easier to publish using the STOMP library, since the PHP AMQP libraries require compiled C code and are somewhat of a mission to set up if you don't have to.
On the Java side, Apache Camel with AMQP on Spring is pretty straightforward.
Is it possible to set up a queue, publish to it via STOMP and then consume via AMQP, and then also publish via AMQP and consume via STOMP, if the message broker is RabbitMQ?
Yes, this should work, given that you have installed RabbitMQ's STOMP plugin on your RabbitMQ node(s).
The protocol only defines the communication between client and server and has no impact on the message itself.
You should note that using protocols other than AMQP will most likely come with limitations and/or worse performance.
There also exist native PHP libraries for RabbitMQ that don't require compiling C code. Unfortunately, I cannot tell you which one is the best, because I am a Java guy ;-).
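For the Java/AMQP side of the flow you describe, a minimal sketch with the RabbitMQ Java client could look like this; the broker host and the queue name "orders" are assumptions for the example. With the STOMP plugin enabled, a STOMP client that SENDs to the destination "/queue/orders" puts its message into the same AMQP queue this consumer reads from.

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class AmqpQueueConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // adjust for your broker

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Durable queue shared by the STOMP publisher and this AMQP consumer
        channel.queueDeclare("orders", true, false, false, null);

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            System.out.println("Received via AMQP: " + body);
        };
        channel.basicConsume("orders", true, onMessage, consumerTag -> { });
        // The client's threads are non-daemon, so the consumer keeps running until the connection is closed
    }
}
```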