Spring Cloud Bus, renaming MQ in a more readable way - RabbitMQ

My client has 2 instances and I am using the snippet below to rename the queue. I can see that testExchange.testQueue is created, with 2 consumers under it, i.e. my two client instances. But on /bus/refresh only a single instance gets refreshed, so I am not getting the Cloud Bus behaviour where /bus/refresh refreshes all instances. Please let me know if I am missing any configuration to rename the queue in a readable format.
spring:
  cloud:
    stream:
      bindings:
        springCloudBusInput:
          destination: testExchange
          group: testQueue
    config:
      bus:
        enabled: true
      uri: https://Config-Server-offshore.com/
      name: ClientApp

With spring-cloud-stream, using a group creates competing consumers on the same queue.
If you remove the group, each instance will get its own queue.
You can use a placeholder in the group to make it unique...
spring.cloud.stream.bindings.input.group=${instanceIndex}
instanceIndex=1
...if you are running on Cloud Foundry, you can use the VCAP instance index.
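For example, applied to the YAML above (a sketch; instanceIndex is just a placeholder you would supply per instance, e.g. as a system property or environment variable, so that each instance ends up on its own queue such as testExchange.testQueue-1 and testExchange.testQueue-2):
spring:
  cloud:
    stream:
      bindings:
        springCloudBusInput:
          destination: testExchange
          # unique group per instance -> unique queue per instance
          group: testQueue-${instanceIndex}
Start one instance with -DinstanceIndex=1 and the other with -DinstanceIndex=2, or drop the group entirely if anonymous auto-delete queue names are acceptable.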

Related

NestJS integration with RabbitMQ - specify exchange name

I am integrating a NestJS application with RabbitMQ and I followed this tutorial.
From the tutorial I can see that the message consumer can connect to the RabbitMQ broker using a queue name, but I actually want to create a temporary queue that connects to an exchange. I cannot find an option to specify the exchange name in the queue options shown in the tutorial. Can someone please point me to the right configuration to use an exchange instead of a queue?
Is it possible to specify an exchange name inside the 'queueOptions' structure?
From what I understand from your tutorial (I don't know NestJS), createMicroservice only connects to a queue (or creates it if needed). In other words, it performs the assertQueue operation:
var queue = 'task_queue';

channel.assertQueue(queue, {
  durable: false
});
If you want to bind a queue to an existing exchange, you need to perform the bindQueue operation:
channel.bindQueue(queue, exchange, '');
Thus you can't bind to an exchange using queueOptions. Take a look at the official RabbitMQ documentation, especially the "Temporary queues" and the following "Bindings" sections.
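If you drop down to amqplib directly (the library the RabbitMQ tutorials use), a temporary queue bound to an exchange looks roughly like the sketch below; the connection URL and the exchange name 'logs' are assumptions, not something NestJS gives you out of the box:
const amqp = require('amqplib');

async function consumeFromExchange() {
  const connection = await amqp.connect('amqp://localhost'); // assumed broker URL
  const channel = await connection.createChannel();

  const exchange = 'logs'; // assumed exchange name
  await channel.assertExchange(exchange, 'fanout', { durable: false });

  // an empty queue name plus exclusive: true yields a server-named temporary queue
  const q = await channel.assertQueue('', { exclusive: true });
  await channel.bindQueue(q.queue, exchange, '');

  await channel.consume(q.queue, (msg) => {
    console.log(msg.content.toString());
  }, { noAck: true });
}

consumeFromExchange();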

Serverless framework for Kafka trigger

I am looking for a (free) serverless framework where I can create a Kafka trigger and, when it fires, a Kubernetes function (Python) is invoked.
I have tried Nuclio, but the problem is that my Kafka version is newer and Nuclio does not support anything higher than 2.4.
I want something like:
apiVersion: "nuclio.io/v1beta1"
kind: "NuclioFunction"
spec:
runtime: "python:3.6"
handler: NuclioKafkaHandler:consumer
minReplicas: 1
maxReplicas: 1
triggers:
myKafkaTrigger:
kind: kafka-cluster
attributes:
initialOffset: earliest
topics:
- nuclio
brokers:
- kafka-bootstrap:9092
consumerGroup: Consumer
And a kube function like:
def consumer(context, event):
    context.logger.debug(event.body)
    print(event.trigger.kind)
It is as simple as these two files, and I already have an existing Kafka cluster, so I just want a trigger on it.
What are the possible alternatives apart from Nuclio? I looked into Kubeless, but it seemed complicated. Fission does not support Python.
I don't know much about Nuclio, but the scenario you described looks possible with Knative.
The simplest way is to create a Knative Service for your consumer. For the Kafka part, you can use a KafkaSource to get the events into the Knative Eventing system. In your KafkaSource, you tell it to call the Knative Service whenever an event comes in from Kafka.
That is the simplest setup. If you need more advanced features, there is also support for filtering based on event types, for having multiple consumers subscribed to the events, and more.
Red Hat's Knative Tutorial has a section for serverless eventing with Kafka.
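As a rough sketch (assuming the sources.knative.dev/v1beta1 KafkaSource API and a Knative Service named consumer; the broker, topic and consumer group are taken from the Nuclio YAML above), the wiring could look like this:
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: Consumer
  bootstrapServers:
    - kafka-bootstrap:9092
  topics:
    - nuclio
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: consumer   # the Knative Service wrapping your consumer code (assumed name)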
The exact same use case is possible with Fission, which is an open-source serverless framework for Kubernetes.
You can create a Message Queue trigger for Kafka and associate it with a serverless function like this:
fission mqt create --name kafkatest --function consumer --mqtype kafka --mqtkind keda --topic request-topic --resptopic response-topic --errortopic error-topic
This would trigger a function called consumer whenever there's a message in the request-topic queue of Kafka.
You can also associate metadata such as authentication information (as secrets) or flags like polling intervals, max retries, etc.
Reference: https://fission.io/docs/usage/triggers/message-queue-trigger-kind-keda/kafka/
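The function itself can stay minimal. As a sketch (assuming Fission's Flask-based Python environment, where the Kafka message arrives as the HTTP request body and main is the default entry point):
from flask import request

def main():
    # the Kafka message from request-topic is delivered as the HTTP request body
    message = request.get_data(as_text=True)
    print(message)
    return "handled"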

Spring Cloud Stream DLQ, Producer and Consumer Residing in Multiple Applications

I have a producer in, say, Application A with the below configuration:
Producer Properties:
spring.cloud.stream.bindings.packageVersionUpdatesPublishChannel.destination=fabric-exchange
spring.cloud.stream.bindings.packageVersionUpdatesPublishChannel.producer.requiredGroups=version-updates
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.routingKeyExpression='package-version'
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.bindingRoutingKey=package-version
And I have a consumer for the same queue in another application, say B:
#Consumer Properties:
spring.cloud.stream.bindings.packageVersionUpdatesConsumerChannel.destination=fabric-exchange
spring.cloud.stream.bindings.packageVersionUpdatesConsumerChannel.group=package-version-updates
spring.cloud.stream.bindings.packageVersionUpdatesConsumerChannel.consumer.max-attempts=1
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.durableSubscription=true
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.bindingRoutingKey=package-version
#DLQ
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.autoBindDlq=true
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.dlqDeadLetterExchange=
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.dlq-ttl=30000
#Error Exchange Creation and Bind the Same to Error Queue
spring.cloud.stream.bindings.packageVersionUpdatesErrorPublishChannel.destination=fabric-error-exchange
spring.cloud.stream.bindings.packageVersionUpdatesErrorPublishChannel.producer.requiredGroups=package-version-updates-error
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesErrorPublishChannel.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesErrorPublishChannel.producer.routingKeyExpression='packageversionupdateserror'
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesErrorPublishChannel.producer.bindingRoutingKey=packageversionupdateserror
Now say, for example, that Application A boots first; then the queue version-updates would be created without any dead-letter queue associated with it.
When Application B then starts, this is the exception I get and the channel gets shut down. I think this is because app B is trying to re-declare the queue with a different configuration:
inequivalent arg 'x-dead-letter-exchange' for queue 'fabric-exchange.version-updates' in vhost '/': received the value 'DLX' of type 'longstr' but current is none
Can anyone please let me know how I can solve this? My requirement is to create a queue in App A, where App A simply produces the messages onto this queue,
and App B consumes them, with retries supported after X amount of time through a DLQ.
required-groups is simply a convenience to provision the consumer queue when the producer starts, to avoid losing messages if the producer starts first.
You must use identical exchange/queue/binding configuration on both sides.
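For example (a sketch, assuming the Rabbit binder's producer-side DLQ properties mirror the consumer-side ones used above), Application A's required-group declaration would need the same dead-letter settings that Application B declares, and the requiredGroups value would have to match the consumer group so both sides describe the same queue:
#Producer side (Application A) - mirror the consumer's DLQ settings for the required group
spring.cloud.stream.bindings.packageVersionUpdatesPublishChannel.producer.requiredGroups=package-version-updates
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.autoBindDlq=true
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.dlqDeadLetterExchange=
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.dlq-ttl=30000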

Dynamically consume and sink Kafka topics with Flink

I haven't been able to find much information about this online. I'm wondering if it's possible to build a Flink app that can dynamically consume all topics matching a regex pattern and sync those topics to S3. Also, each topic being dynamically synced would carry Avro messages, and the Flink app would use Confluent's Schema Registry.
Lucky you! Flink 1.4 was released just a few days ago, and it is the first version that supports consuming Kafka topics using a regex. According to the Javadocs, here is how you can use it:
FlinkKafkaConsumer011
public FlinkKafkaConsumer011(Pattern subscriptionPattern, DeserializationSchema<T> valueDeserializer, Properties props)
Creates a new Kafka streaming source consumer for Kafka 0.11.x. Use this constructor to subscribe to multiple topics based on a regular expression pattern. If partition discovery is enabled (by setting a non-negative value for FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS in the properties), topics with names matching the pattern will also be subscribed to as they are created on the fly.
Parameters:
subscriptionPattern - The regular expression for a pattern of topic names to subscribe to.
valueDeserializer - The de-/serializer used to convert between Kafka's byte messages and Flink's objects.
props - The properties used to configure the Kafka consumer client, and the ZooKeeper client.
Just note that a running Flink streaming application fetches topic data from ZooKeeper at intervals specified by the consumer config:
FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS
This means every consumer should re-sync its metadata, including the topic list, at the specified interval. The default value is 5 minutes, so after adding a new topic you should expect the consumer to start consuming it within at most 5 minutes. Set this configuration on the Flink consumer to your desired time interval.
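A minimal sketch of that constructor in use (the topic pattern, broker address, group id and discovery interval are assumptions; the S3/Avro sink is left out):
import java.util.Properties;
import java.util.regex.Pattern;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase;

public class RegexTopicsJob {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
        props.setProperty("group.id", "regex-topics-consumer");   // assumed group id
        // enable topic/partition discovery so topics created later are also picked up
        props.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "30000");

        // subscribe to every topic whose name matches the pattern
        FlinkKafkaConsumer011<String> consumer = new FlinkKafkaConsumer011<>(
                Pattern.compile("mytopics-.*"),   // assumed topic name pattern
                new SimpleStringSchema(),
                props);

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> stream = env.addSource(consumer);
        stream.print(); // replace with a file-system sink to actually write to S3

        env.execute("Consume Kafka topics matching a regex");
    }
}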
Subscribing to Kafka topics with a regex pattern was added in Flink 1.4. See the documentation here.
S3 is one of the file systems supported by Flink. For reliable, exactly-once delivery of a stream into a file system, use the flink-connector-filesystem connector.
You can configure Flink to use Avro, but I'm not sure what the status is of interop with Confluent's schema registry.
For searching on these and other topics, I recommend the search on the Flink doc page. For example: https://ci.apache.org/projects/flink/flink-docs-release-1.4/search-results.html?q=schema+registry
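For the sink side, a rough sketch continuing the consumer above, using the BucketingSink from flink-connector-filesystem (the bucket path and batch size are assumptions, and Flink's S3 file system must be configured separately):
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;

// write the regex-matched topics' records to S3, rolling files at roughly 128 MB
BucketingSink<String> sink = new BucketingSink<>("s3://my-bucket/flink-output"); // assumed path
sink.setBatchSize(1024 * 1024 * 128);
stream.addSink(sink);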

Akka.net cluster sharding: Unable to register coordinator

I am trying to set up Akka.NET cluster sharding by creating a simple project.
Project layout:
Actors - a class library that defines one actor and the message. It is referenced by the other projects.
Inbound - starts the shard region and is the only node participating in cluster sharding, so it should also be the one hosting the coordinator.
MessageProducer - hosts only the shard region proxy, used to send messages to the ProcessorActor.
Lighthouse - seed node
The uploaded images show that the coordinator singleton is not initialized and that messages sent through the shard region proxy are not delivered.
Based on the Petabridge blog post, petabridge.com/blog/cluster-sharding-technical-overview-akkadotnet/, I have excluded Lighthouse from participating in cluster sharding (by setting akka.cluster.sharding.role) so that the coordinator is not created on it.
I am not sure what I am missing to get this to work.
This was already answered on gitter, but here's the tl;dr:
The shard region proxy needs to share the same role as the corresponding shard region. Otherwise the proxy may not be able to find the shard coordinator, and therefore cannot resolve the initial location of the shard it wants to send a message to.
The IMessageExtractor.GetMessage method is used to extract the actual message that is going to be sent to the sharded actor. In the example, the message extractor was used to extract a string property from the enveloping message, yet the receiver actor had its Receive handler set up for the envelope, not for a string.