I have registered a receive endpoint in SingleActiveConsumer mode. However, I can't find a way to send a message directly to the queue using a send endpoint. I receive the following error:
The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text='PRECONDITION_FAILED - inequivalent arg 'x-single-active-consumer' for queue 'test' in vhost '/': received none but current is the value 'true' of type 'bool'',
I tried setting the header "x-single-active-consumer" = true using the bus configurator:
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host("localhost", "/", h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    cfg.ConfigureSend(a => a.UseSendExecute(c => c.Headers.Set("x-single-active-consumer", true)));
});
and directly on the send endpoint:
await sendEndpoint.Send(msg, context =>
{
    context.Headers.Set("x-single-active-consumer", true);
});
If you want to send directly to a receive endpoint in MassTransit, use the short address exchange:test instead, which sends to the exchange without trying to create the queue or bind it to the exchange of the same name. That way, the queue configuration is decoupled from the message producer.
Or, you could just use Publish and let the exchange bindings route the message to the receive endpoint queue.
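For example, a minimal sketch of sending via the short address, assuming the bus instance from the configuration above and a placeholder TestMessage contract consumed by the test endpoint:
// Resolve a send endpoint from the "exchange:" short address; MassTransit sends
// to the exchange only and does not try to declare or bind the "test" queue.
var endpoint = await bus.GetSendEndpoint(new Uri("exchange:test"));

// TestMessage is a placeholder for whatever contract the receive endpoint consumes.
await endpoint.Send(new TestMessage { Text = "hello" });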
Related
Is it possible to configure MassTransit so that it does not create a RabbitMQ exchange for a consumer host? My RabbitMQ user does not have enough rights to declare an exchange on the host where the consuming queue is located, so MassTransit fails to start with the following error:
Unhandled Exception: MassTransit.RabbitMqTransport.RabbitMqConnectionException:
Operation interrupted ---> RabbitMQ.Client.Exceptions.OperationInterruptedException:
The AMQP operation was interrupted: AMQP close-reason, initiated by Peer,
code=403, text="ACCESS_REFUSED - access to exchange '***' in vhost '***'
refused for user '***'", classId=40, methodId=10, cause=
Here is the code that I use:
var bus = Bus.Factory.CreateUsingRabbitMq(sbc =>
{
    var host = sbc.Host(host: "***", port: 5671, virtualHost: "***", configure: configurator =>
    {
        configurator.UseSsl(sslConfigurator =>
        {
            sslConfigurator.Certificate = certificate;
            sslConfigurator.UseCertificateAsAuthenticationIdentity = true;
            sslConfigurator.ServerName = "***";
        });
    });

    sbc.ReceiveEndpoint(host, "***", endpointConfigurator =>
    {
        endpointConfigurator.Consumer<UpdateCustomerConsumer>();
    });
});
I have an Azure IoT solution where data from two devices goes to the same IoT hub. From my computer I need to read the messages from only one of the devices. I implemented the ReadDeviceToCloudMessages.js from https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-node-node-getstarted
var client = EventHubClient.fromConnectionString(connectionString);

client.open()
    .then(client.getPartitionIds.bind(client))
    .then(function (partitionIds) {
        return partitionIds.map(function (partitionId) {
            return client.createReceiver('todevice', partitionId, { 'startAfterTime': Date.now() }).then(function (receiver) {
                console.log('Created partition receiver: ' + partitionId);
                receiver.on('errorReceived', printError);
                receiver.on('message', printMessage);
            });
        });
    })
    .catch(printError);
But I am getting all the messages in the IoT hub. How do I get messages from only one device?
You can route the expected device's messages to the built-in endpoint, events. Then the code above will receive only the selected device's messages.
Create the route:
Turn "Device messages which do not match any rules will be written to the 'Events (messages/events)' endpoint." to off and make sure the route is enabled.
I am using amqplib to transfer messages in my Node.js server.
Here is my code to listen on the queue:
channel.consume(queue, handler1, { noAck: true })
Now I want to update the consumer listening on the same queue, like this:
channel.consume(queue, handler2, { noAck: true })
I tried unbindQueue and deleteQueue, but I don't know why the error Unhandled rejection IllegalOperationError: Channel closing is thrown.
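One possible approach, sketched below assuming amqplib's promise API: keep the consumerTag returned by consume, cancel just that consumer, and consume again with the new handler, without unbinding or deleting the queue.
// Start the first consumer and remember its consumerTag.
const { consumerTag } = await channel.consume(queue, handler1, { noAck: true });

// Later: stop only this consumer (the channel and queue stay open)...
await channel.cancel(consumerTag);

// ...and attach the new handler to the same queue.
await channel.consume(queue, handler2, { noAck: true });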
The code below shows how I am setting a header and the content type on an AMQP message.
MessageProperties properties = new MessageProperties();
properties.setHeader("KEY", "HOUSE");
properties.setContentType(MessageProperties.CONTENT_TYPE_JSON);
Message message = new Message("1234567;Branch A;SALES;3000.50;Pending approval".getBytes(), properties);
rabbitTemplate.sendAndReceive("", QUEUE_NAME, message);
After the message is sent to the queue, it is received by a Transformer:
@Transformer(inputChannel = "inboundChannel", outputChannel = "toutboundChannel")
public Property buildProperty(Message<String> property) {
    LOGGER.info("message received :: HEADERS: {}, PAYLOAD: {}", property.getHeaders(), property.getPayload());
    ....
}
In the logs, the header "KEY: HOUSE" is missing, and the content type is not JSON but "text/plain" instead.
LOGS:
[SimpleAsyncTaskExecutor-1] INFO com.demo.maven.spring.integration.endpoint.TransformerRequestBuilder - message received :: HEADERS: {amqp_receivedRoutingKey=mobile.queue, amqp_deliveryTag=2, amqp_replyTo=amq.rabbitmq.reply-to.g2dkABByYWJiaXRAbG9jYWxob3N0AAAW9QAAAAAD.tTIFOS2gsM7qIlGYaybfrg==, amqp_deliveryMode=PERSISTENT, amqp_redelivered=true, id=399dda4f-4ba1-7cf4-2310-03dbfbac82b6, contentType=text/plain, timestamp=1421649922840}, PAYLOAD :1234567;Branch A;SALES;3000.50;Pending approval
The MessagePropertiesBuilder class is for that.
By default, the Spring Integration AMQP inbound endpoints (AmqpInboundChannelAdapter and AmqpInboundGateway) map only the standard AMQP headers. That is the default behaviour of DefaultAmqpHeaderMapper. To accept user-specific headers, you should inject an AmqpHeaderMapper (via setHeaderMapper) into the inbound endpoint with the option setRequestHeaderNames("*"), or provide the full list of names of the desired custom headers.
Re. contentType=text/plain: I think something between the AMQP inbound endpoint and that @Transformer(inputChannel = "inboundChannel") overrides the contentType header received from AMQP, because RabbitTemplate doesn't do that when you send a Message rather than any other Object. Please share DEBUG logs for the org.springframework.integration category on the message receiver side; we need the part of the logs from when the message is received up to that @Transformer.
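A minimal sketch of that header-mapper configuration, assuming Spring Integration 4.3+ and an AmqpInboundChannelAdapter wired to a listener container and to the inboundChannel used above (the bean and container names are placeholders):
@Bean
public AmqpInboundChannelAdapter inboundAdapter(SimpleMessageListenerContainer listenerContainer) {
    AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(listenerContainer);

    // Map all AMQP headers, including custom ones such as "KEY", into the Spring Integration message.
    DefaultAmqpHeaderMapper headerMapper = DefaultAmqpHeaderMapper.inboundMapper();
    headerMapper.setRequestHeaderNames("*");
    adapter.setHeaderMapper(headerMapper);

    adapter.setOutputChannelName("inboundChannel");
    return adapter;
}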
This will work; you have to build the MessageProperties correctly, using MessagePropertiesBuilder:
MessageProperties properties = MessagePropertiesBuilder.newInstance()
        .setContentType(MessageProperties.CONTENT_TYPE_JSON)
        // custom headers here
        .setHeader("KEY", "HOUSE")
        .build();
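Then the message is built and sent the same way as in the question:
Message message = new Message("1234567;Branch A;SALES;3000.50;Pending approval".getBytes(), properties);
rabbitTemplate.sendAndReceive("", QUEUE_NAME, message);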
I've got Logstash running and successfully reading in a file.
RabbitMQ is running, I'm watching the log, and I can see the web interface.
I've configured Logstash to output to a RabbitMQ exchange... I think!
Here's the problem: nothing ever gets posted to the exchange, as seen in the web interface.
Any ideas?
My output config:
output {
rabbitmq {
codec => plain
host => localhost
exchange => yomtvraps
exchange_type => direct
}
file { path => "/tmp/heartbeat-from-logstash.log" }
}
UPDATE: I'm watching the rabbit log with
tail -F /usr/local/var/log/rabbitmq/rabbit\#localhost.log
As it turns out, the problem was that there was no routing key set for the exchange and queue.
A working config is:
output {
  rabbitmq {
    codec => plain
    host => localhost
    exchange => yomtvraps
    exchange_type => direct
    key => yomtvraps
    # these are defaults but you never know...
    durable => true
    port => 5672
    user => "guest"
    password => "guest"
  }
}
Here's some sample receiver code (using the Ruby "Bunny" gem):
require "bunny"
conn = Bunny.new(:automatically_recover => false)
conn.start
ch = conn.create_channel
q = ch.queue("yomtvraps")
exchange = ch.direct("yomtvraps", :durable => true)
begin
puts " [*] Waiting for messages. To exit press CTRL+C"
q.bind(exchange, :routing_key => "yomtvraps").subscribe(:block => true) do |delivery_info, properties, body|
puts " [x] Received #{body}"
end
rescue Interrupt => _
conn.close
exit(0)
end
Your rabbitmq output's parameters seem incomplete: the username, password, and port have not been configured.
You can configure two outputs, one to rabbitmq and the other to a file, to verify that the log is being created and that Logstash is OK.
Pay attention to the versions involved (Logstash itself and the rabbitmq output plugin); they gave me a lot of trouble in earlier trials (Logstash to another Redis server, etc.).
You could also check RabbitMQ's log; with ps -ef | grep erl you can find the log file's path in the process arguments.
Be sure that RabbitMQ's management plugin is enabled and the firewall is configured correctly, then open the RabbitMQ management UI at ipaddress:15672.
Check that the exchange's type is right (in this case 'direct' is probably the correct choice), that your message consumer is configured properly, and that your consumer's queue has been bound to the exchange correctly.
Try to post a message to your consumer through the management UI and make sure the consumer works well.
Monitor your queue while Logstash pushes logs to your consumer.