How do I set the 'x-message-ttl' property on this queue?
rabbit:
  bindings:
    TEST_RESPONSE:
      consumer:
        bindingRoutingKey: "'${routing}'"
        prefetch: ${prefetch}
        acknowledge-mode: MANUAL
bindings:
  TEST_RESPONSE:
    destination: TEST_RESPONSE
    content-type: application/json
    group: test
I ask because I get this error:
Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'x-message-ttl' for queue 'TEST_RESPONSE.test' in vhost '/': received none but current is the value '60000' of type 'long', class-id=50, method-id=10)
Queue definitions are immutable; you can't change a queue argument.
You either need to disable queue declaration
...rabbit.bindings.foo.consumer.bindQueue: false
or add
...rabbit.bindings.foo.consumer.ttl: 60000
to match the existing definition.
See consumer properties.
https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-rabbit/3.0.3.RELEASE/reference/html/spring-cloud-stream-binder-rabbit.html#_rabbitmq_consumer_properties
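For the binding in the question, that would look something like the following (assuming the standard spring.cloud.stream prefix, which the property names above abbreviate as "..."; 60000 is the value reported for the existing queue in the error):

spring.cloud.stream.rabbit.bindings.TEST_RESPONSE.consumer.ttl: 60000

or, to disable queue declaration altogether:

spring.cloud.stream.rabbit.bindings.TEST_RESPONSE.consumer.bindQueue: false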
I have registered a receive endpoint in SingleActiveConsumer mode. However, I can't find a way to send a message directly to the queue using a send endpoint. I receive the following error:
The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text='PRECONDITION_FAILED - inequivalent arg 'x-single-active-consumer' for queue 'test' in vhost '/': received none but current is the value 'true' of type 'bool'',
I tried setting the "x-single-active-consumer" header to true using the bus configurator:
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host("localhost", "/", h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    cfg.ConfigureSend(a => a.UseSendExecute(c => c.Headers.Set("x-single-active-consumer", true)));
});
and directly on the send endpoint:
await sendEndpoint.Send(msg, context => {
    context.Headers.Set("x-single-active-consumer", true);
});
If you want to send directly to a receive endpoint in MassTransit, you can use the short address exchange:test instead, which will send to the exchange without trying to create/bind the queue to the exchange with the same name. That way, you decouple the queue configuration from the message producer.
Or, you could just use Publish, and let the exchange bindings route the message to the receive endpoint queue.
I am looking to add SASL/PLAINTEXT authentication to Banzai Cloud Kafka. I have added the following settings in my readOnlyConfig section:
readOnlyConfig: |
  auto.create.topics.enable=false
  cruise.control.metrics.topic.auto.create=true
  cruise.control.metrics.topic.num.partitions=1
  cruise.control.metrics.topic.replication.factor=2
  delete.topic.enable=true
  offsets.topic.replication.factor=2
  group.initial.rebalance.delay.ms=3000
  sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
  sasl.enabled.mechanisms=SCRAM-SHA-256
  listener.name.external.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="testuser";
I have the following in the listeners config:
listenersConfig:
  externalListeners:
    - type: "sasl_plaintext"
      name: "external"
      externalStartingPort: 51985
      containerPort: 29094
      accessMethod: LoadBalancer
  internalListeners:
    - type: "plaintext"
      name: "internal"
      containerPort: 29092
      usedForInnerBrokerCommunication: true
    - type: "plaintext"
      name: "controller"
      containerPort: 29093
      usedForInnerBrokerCommunication: false
      usedForControllerCommunication: true
When I try to connect a producer or consumer, Kafka returns an Authentication/Authorization failed error.
I am setting the following client properties:
session.timeout.ms=60000
partition.assignment.strategy=org.apache.kafka.clients.consumer.StickyAssignor
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="testuser";
Can anyone suggest what might be wrong?
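For reference, here is roughly how those client properties are applied in a plain Java consumer (a minimal sketch; the bootstrap address, group id, and topic name are placeholders, the credentials are the ones from the question):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ScramConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address: the external listener is exposed via the LoadBalancer,
        // starting at port 51985.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.com:51985");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "60000");
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                "org.apache.kafka.clients.consumer.StickyAssignor");

        // Security settings from the question. Note that the SCRAM user must actually
        // exist on the brokers for authentication to succeed.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"user\" password=\"testuser\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%s -> %s%n", record.key(), record.value());
            }
        }
    }
}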
I provide a gRPC service that unfortunately has to maintain node affinity between the BeginTransaction and Commit API calls.
The consumer's API call sequence is typically:
BeginTransaction() returns txnID
DoStuff(txnID, moreParams...)
DoStuff(txnID, moreParams...)
...
Commit(txnID)
Consumers can be multithreaded processes that make simultaneous calls to my API, so they might be using hundreds of Transactions at any point in time.
If I use Envoy proxy as my Service entry point, BeginTransaction should be routed to any healthy node in the cluster, but it must ensure that subsequent calls that use the returned txnID are routed to the same node.
Passing context info in HTTP headers, or in any other part of the messages, is acceptable in my case.
I made some progress using the ring hash load balancer.
In the Envoy proxy server config (look for "hash"):
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: http2
          stat_prefix: ingress_http # just for statistics
          route_config:
            name: local_route
            virtual_hosts:
            - name: samplefront_virtualhost
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/mycompany.sample.v1"
                  grpc: {}
                route:
                  cluster: sampleserver
                  hash_policy:
                  - header:
                      header_name: "x-session-hash"
              - match:
                  prefix: "/bbva.sample.admin"
                  grpc: {}
                route:
                  cluster: sampleadmin
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: sampleserver
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: ring_hash
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: sampleserver
        port_value: 80 # connect to the sidecar Envoy
  - name: sampleadmin
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: sampleadmin
        port_value: 80 # connect to the sidecar Envoy
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
In my consumers, I create a random hash just before BeginTransaction() and I make sure it is sent in the x-session-hash header on every single call until Commit(txnId) (see the client-side sketch below).
It works but it has some limitations:
When I scale up the service, adding more nodes, some operations fail with the error upstream connect error or disconnect/reset before headers. Failures are absolutely OK when a node is lost, but they are hardly acceptable when a node is added! The good news is that the load gets rebalanced in both cases.
The client must generate the hash before the first call (BeginTransaction) is made, so it is the client that inadvertently dictates which node will serve the requests for this transaction.
I will keep investigating.
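For the client side, this is roughly how the hash gets attached to every call (a minimal Java gRPC sketch; the Envoy address and the commented-out service stub are placeholders, only the x-session-hash metadata key matters):

import java.util.UUID;

import io.grpc.Channel;
import io.grpc.ClientInterceptors;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Metadata;
import io.grpc.stub.MetadataUtils;

public class SessionHashClientSketch {
    public static void main(String[] args) {
        // Placeholder address of the front Envoy listener.
        ManagedChannel channel = ManagedChannelBuilder.forAddress("envoy.example.com", 80)
                .usePlaintext()
                .build();

        // One random hash per transaction; every call carrying the same txnID must reuse it
        // so Envoy's ring_hash policy keeps routing to the same upstream node.
        Metadata headers = new Metadata();
        Metadata.Key<String> sessionHash =
                Metadata.Key.of("x-session-hash", Metadata.ASCII_STRING_MARSHALLER);
        headers.put(sessionHash, UUID.randomUUID().toString());

        // Attach the header to every call made through this decorated channel.
        Channel hashedChannel = ClientInterceptors.intercept(
                channel, MetadataUtils.newAttachHeadersInterceptor(headers));

        // Hypothetical generated stub; the real class name depends on the .proto:
        // SampleServiceGrpc.SampleServiceBlockingStub stub =
        //         SampleServiceGrpc.newBlockingStub(hashedChannel);
        // String txnId = stub.beginTransaction(...).getTxnId();
        // ... DoStuff(txnId, ...) over the same hashedChannel ...
        // stub.commit(...);

        channel.shutdown();
    }
}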
I have read http://www.rabbitmq.com/firehose.html and managed to trace some messages into a queue. I was looking to find out the "delivery-mode" (non-persistent (1) or persistent (2)) of the messages. However, I can't see it in the firehose notification format. Is it supposed to be there?
Example:
Properties
  headers:
    exchange_name: myresults
    routing_keys:
    properties:
      headers:
        x-received-from:
          uri: amqp://obscured1.net/%2F
          exchange: myresults
          redelivered: false
          cluster-name: rabbit#obscured2.net
    node: rabbit#b7
    vhost: /
    connection: rabbit#b7.3.351.0
    channel: 1
    user: none
    routed_queues: myresults-c-v2
Payload: {.............}
Delivery mode is one of the content properties (a.k.a. message properties, basic properties).
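For reference, the delivery mode is something the publisher sets in those basic properties; if the publisher never set it, there is nothing for the tracer to copy. A minimal sketch with the RabbitMQ Java client (host and routing key are placeholders, the exchange name is taken from the trace above):

import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class DeliveryModeSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker address

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // deliveryMode is part of the AMQP basic (content) properties:
            // 1 = non-persistent, 2 = persistent.
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .deliveryMode(2)
                    .contentType("application/json")
                    .build();
            channel.basicPublish("myresults", "some.routing.key", props,
                    "{}".getBytes(StandardCharsets.UTF_8));

            // MessageProperties.PERSISTENT_BASIC is a ready-made properties object
            // with deliveryMode already set to 2.
            channel.basicPublish("myresults", "some.routing.key",
                    MessageProperties.PERSISTENT_BASIC, "{}".getBytes(StandardCharsets.UTF_8));
        }
    }
}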
I'm using the spark-rabbitmq_1.6 library to connect to RabbitMQ through Spark Streaming.
The queue that I'm trying to connect to has a limit of x-max-length = 1000.
I set the RabbitMQ config params as below:
Map<String, String> rabbitMqConParams = new HashMap<String, String>();
rabbitMqConParams.put("hosts", "rabbit.host.com");
...
rabbitMqConParams.put("x-max-length", "1000");

JavaReceiverInputDStream<String> receiverStream =
        RabbitMQUtils.createJavaStream(streamCtx, String.class, rabbitMqConParams, messageHandler);
Although x-max-length is set, it throws the error below.
16/11/28 15:20:27 WARN ReceiverSupervisorImpl: Restarting receiver with delay 2000 ms: Could not connect
java.io.IOException
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:106)
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:102)
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:124)
at com.rabbitmq.client.impl.ChannelN.queueDeclare(ChannelN.java:844)
at com.rabbitmq.client.impl.ChannelN.queueDeclare(ChannelN.java:61)
at org.apache.spark.streaming.rabbitmq.consumer.Consumer.declareQueue(Consumer.scala:136)
at org.apache.spark.streaming.rabbitmq.consumer.Consumer.setQueue(Consumer.scala:110)
at org.apache.spark.streaming.rabbitmq.consumer.Consumer.setQueue(Consumer.scala:82)
at org.apache.spark.streaming.rabbitmq.receiver.RabbitMQReceiver$$anonfun$2.apply(RabbitMQInputDStream.scala:64)
at org.apache.spark.streaming.rabbitmq.receiver.RabbitMQReceiver$$anonfun$2.apply(RabbitMQInputDStream.scala:58)
....
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'x-max-length' for queue 'aeon.output' in vhost '/': received '1000' but current is '1000', class-id=50, method-id=10)
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:67)
at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:33)
at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:361)
at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:226)
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:118)
Any suggestions as to why this could occur?
Any help is greatly appreciated.
Thanks.
Looks like this is an issue with the library itself:
https://github.com/Stratio/spark-rabbitmq/issues/75
Thanks.