How can I log all AMQP commands that go through a RabbitMQ broker, including service commands like basic.ack, confirm.select, etc.?
The standard Java client library com.rabbitmq:amqp-client:3.5.4 contains a Tracer tool that works as a standalone proxy between your client and the broker. It logs all AMQP commands that pass through it to System.out.
It's described here: http://www.rabbitmq.com/java-tools.html
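To use it, start the Tracer as a proxy and point your client at the Tracer's listen port instead of the broker's; by default it should listen on 5673 and forward to localhost:5672 (check the page above for the exact invocation). A minimal client-side sketch with the Java client, assuming those default ports:

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class TracedClient {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setPort(5673); // the Tracer's listen port (assumed default), not the broker's 5672
        Connection conn = factory.newConnection();
        // ... open channels, publish and consume as usual; every AMQP method now appears in the Tracer output
        conn.close();
    }
}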
Here's an example of its output:
1441190584927: <Tracer-0> ch#1 -> {#method<channel.open>(out-of-band=), null, ""}
1441190584968: <Tracer-0> ch#1 <- {#method<channel.open-ok>(channel-id=), null, ""}
1441190585008: <Tracer-0> ch#1 -> {#method<confirm.select>(nowait=false), null, ""}
1441190585047: <Tracer-0> ch#1 <- {#method<confirm.select-ok>(), null, ""}
1441190585090: <Tracer-0> ch#1 -> {#method<basic.publish>(ticket=0, exchange=node.confirm.publish, routing-key=, mandatory=false, immediate=false), #contentHeader<basic>(content-type=string/utf8, content-encoding=null, headers=null, delivery-mode=2, priority=null, correlation-id=null, reply-to=null, expiration=null, message-id=null, timestamp=null, type=null, user-id=null, app-id=null, cluster-id=null), "some message"}
1441190585128: <Tracer-0> ch#1 <- {#method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'node.confirm.publish' in vhost '/', class-id=60, method-id=40), null, ""}
You'll need to modify the RabbitMQ config; see this page for configuration settings.
Specifically, you'll want to set something like info or debug for whichever categories you're interested in (in the classic rabbitmq.config file, log_levels lives under the rabbit application):
[
  {rabbit, [
    {log_levels, [
      {channel, debug},
      {connection, debug}
    ]}
  ]}
].
From that page, look for log_levels and you'll find this configuration information:
Controls the granularity of logging. The value is a list of log event category and log level pairs.
The level can be one of 'none' (no events are logged), 'error' (only errors are logged), 'warning' (only errors and warnings are logged), 'info' (errors, warnings and informational messages are logged), or 'debug' (errors, warnings, informational messages and debugging messages are logged).
At present there are four categories defined. Other, currently uncategorized, events are always logged.
The categories are:
channel - for all events relating to AMQP channels
connection - for all events relating to network connections
federation - for all events relating to federation
mirroring - for all events relating to mirrored queues
Default: [{connection, info}]
Related
I'm new to RabbitMQ and I have an application that uses RabbitMQ as the message broker. Up until now, I've been using the default settings - no log rotation. I wanted to use the log rotation feature, so I set it up using:
{log, [
    {file, [{file, "MyAppLogs.log"},
            {level, info},
            {date, "$D0"},
            {size, 1073741824},
            {count, 30}
    ]}
]}
Of course, testing with a 1 GB file size would take a while, so for testing purposes I changed it to 1024 bytes instead. I expected the log to rotate when it reached 1 KB, but it did not. I've noticed that the log files only rotate once the file size reaches 5 KB.
So my question is - is the minimum log file size for RabbitMQ file-based log rotation 5KB?
I've looked around the web - especially the RabbitMQ documentation site: https://www.rabbitmq.com/logging.html - however, there's no mention of any minimum size.
Here are the test settings that I used:
{log, [
    {file, [{file, "rabbit.log"},
            {level, info},
            {date, "$D0"},
            {size, 1024},
            {count, 3}
    ]}
]}
https://groups.google.com/d/topic/rabbitmq-users/wJGMVGB1cAk/discussion
Hi Renya,
Please always let us know what version of RabbitMQ and Erlang you are using. I can tell you're using Windows - what version?
Log rotation is not necessarily precise due to when it happens in the logging process, as well as buffering.
Thanks -
Luke
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
This requires RabbitMQ 3.7 or later. Put the log rotation settings in your configuration file like below; the comments show the equivalent new-style rabbitmq.conf keys:
{log, [
    {file, [{file, "/var/log/rabbitmq/rabbitmq.log"},  %% log.file
            {level, info},                             %% log.file.level
            {date, "$D0"},                             %% log.file.rotation.date
            {size, 1024},                              %% log.file.rotation.size
            {count, 15}                                %% log.file.rotation.count
    ]}
]},
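For reference, here is a sketch of the same settings in the new-style rabbitmq.conf (ini) format that 3.7+ supports; the key names come straight from the comments above, and the values simply mirror this example:

log.file = /var/log/rabbitmq/rabbitmq.log
log.file.level = info
log.file.rotation.date = $D0
log.file.rotation.size = 1024
log.file.rotation.count = 15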
Using celery, is it possible to listen for new messages using RabbitMQ and schedule worker(s) to process them?
A lot of the celery documentation is about using it as a task producer with a broker (say RabbitMQ), where you execute a task and it will be delivered via the broker.
I would like to consume messages from a broker (generated by other services) and process the messages using celery.
Yes. All you have to do is format the message that gets put into RabbitMQ in a way that Celery recognizes as a task. I have done this using NiFi. I currently use JSON, and the message looks as follows:
{"expires": null, "utc": true, "args": ["${absolute.path}${filename}", "nifihost"], "chord": null, "callbacks": null, "errbacks": null, "taskset": null, "id": "${uuid}", "retries": 0, "task": "taskmanager.tasks.nifi", "timelimit": [null, null], "eta": null, "kwargs": {}}
I'm not 100% positive which keys are required other than the "task" key.
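For illustration, here is a minimal sketch of publishing such a message from plain Java with the RabbitMQ client. It assumes a Celery worker is already running and has declared its default "celery" queue, and it uses the version-1 JSON layout shown above; the file path and host in args are just placeholders:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class CeleryTaskPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker on localhost
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        // Same layout as the JSON above; id must be a fresh UUID per task.
        String body = "{\"id\": \"" + UUID.randomUUID() + "\","
                + " \"task\": \"taskmanager.tasks.nifi\","
                + " \"args\": [\"/tmp/input/file.csv\", \"nifihost\"],"
                + " \"kwargs\": {}, \"retries\": 0, \"eta\": null}";

        // content-type/encoding let Celery (kombu) pick the JSON deserializer.
        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .contentType("application/json")
                .contentEncoding("utf-8")
                .build();

        // Publish to Celery's default queue via the default exchange.
        channel.basicPublish("", "celery", props, body.getBytes(StandardCharsets.UTF_8));

        channel.close();
        conn.close();
    }
}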
I am a little bit confused when interpreting the action part of the following rule:
cookie=0x2b000000000000a5, duration=528.939s, table=0, n_packets=176, n_bytes=33116, idle_age=0, priority=2,in_port=1 actions=output:4,output:2
We have multiple action ports in a certain order. When checking "restconf/operational/opendaylight-inventory:nodes/" in the ODL controller, we see a different order for the ports:
"action": [
{ "order": 0,"output-action": {
"max-length": 65535,
"output-node-connector": "2" }
{"order": 1, "output-action": {
"max-length": 65535,
"output-node-connector": "4" }
}
I am not sure how packets hitting such an entry will be forwarded. Are they replicated and sent over both ports? Are they load-balanced across all ports?
What does max-length refer to?
Is there any documentation explaining all fields in detail?
It seems this is a group-table flow.
You can use the group table functionality to support multiple output ports in the action part. See the OpenFlow 1.3 spec for details (sections 5.6 and 5.6.1).
For max-length, again from the same document (section A.2.5):
An Output action uses the following structure and fields:
Action structure for OFPAT_OUTPUT, which sends packets out 'port'. When the
'port' is the OFPP_CONTROLLER, 'max_len' indicates the max number of
bytes to send. A 'max_len' of zero means no bytes of the packet should
be sent. A 'max_len' of OFPCML_NO_BUFFER means that the packet is not
buffered and the complete packet is to be sent to the controller.
We're going to use RabbitMQ in our project, but we're facing a problem: we want to debug on our dev machines, so the response message has to be sent back to the machine that originally sent the request message. How can we achieve that? Is there an existing solution in the spring-rabbitmq framework?
We have considered several solutions, such as declaring a set of queues for each machine, with the queue names prefixed by the machine name. Is that feasible?
Define a set of queues (debug queues A-Z) and bind them to a headers exchange with the arguments x-match=any, from=[A-Z], to=[A-Z] respectively. Then bind the headers exchange to your main working exchange (one or more) to receive all messages you are interested in, so when your consumer publishes a response it is duplicated to your debug exchange and then routed to the appropriate queue.
 [sender X]                  [worker]                       [consumer on queue X]
     |                        ^    |                                 ^
 [request]    [request from=X]|    |[response from=X, to=X]          |[duplicated request from=X]
     \                        |    |                                 |[duplicated response from=X, to=X]
      v                       |    v                                 |
     [working topic exchange] -----------------------> [debug headers exchange]
         /       |       \                                /       |       \
  {bindings by routing key mask}            {bindings by any headers from=[A-Z], to=[A-Z]}
         /       |       \                                /       |       \
 [working queue 1] ... [working queue N]       [debug queue A]  ...  [debug queue Z]
To correlate request and response messages you can use the applicationId and correlationId message attributes.
Note that both request and response messages will be duplicated to the debug queues. You may also use separate queues for request and response messages by binding queues that match only specific headers, something like x-match=all, from=[A-Z] or x-match=all, to=[A-Z], and publishing response and request messages with only those headers (only from or only to), but that is up to you.
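As an illustration, here is a minimal sketch of that plumbing with the plain Java client (all exchange and queue names are made up for the example; with Spring AMQP you would declare the equivalent Queue, HeadersExchange and Binding beans instead):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class DebugTopologySetup {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel ch = conn.createChannel();

        // Hypothetical names: "work.topic" stands for your existing working exchange.
        ch.exchangeDeclare("work.topic", "topic", true);
        ch.exchangeDeclare("debug.headers", "headers", true);

        // Exchange-to-exchange binding: everything published to the working
        // exchange is also copied to the debug headers exchange.
        ch.exchangeBind("debug.headers", "work.topic", "#");

        // One debug queue per machine, matching either the from or the to header.
        String machine = "X";
        String debugQueue = "debug.queue." + machine;
        ch.queueDeclare(debugQueue, true, false, false, null);
        Map<String, Object> bindArgs = new HashMap<String, Object>();
        bindArgs.put("x-match", "any");
        bindArgs.put("from", machine);
        bindArgs.put("to", machine);
        ch.queueBind(debugQueue, "debug.headers", "", bindArgs);

        ch.close();
        conn.close();
    }
}

Your publishers then just set from and to message headers with the machine name; turning debugging off is simply a matter of removing the exchange-to-exchange binding.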
The pros:
easy to implement
requires minimal code changes
easy to turn on/off
may be safely run in a production environment
Cons:
uses more resources on the RabbitMQ side
Alternatively, you can use the RPC pattern if your debugging process requires receiving responses in real time. But this will block the publisher until the response is processed, which may differ from real-world app usage and break business logic.
Pros:
step-by-step debugging process
Cons:
hard to implement
may require a lot of code changes
breaks business logic
hard to enable/disable
not safe for a production environment
P.S.: sorry for the ASCII graph.
Using Mule 3.4 with the AMQP Transport plugin and RabbitMQ, I am trying to send a message to the default AMQP exchange. The documentation for the exchangeName attribute states "leave blank or omit for the default exchange". However if I (a) omit it, like so:
<amqp:outbound-endpoint routingKey="my.queue" connector-ref="amqpDefaultConnector" />
Then I get the error message:
Element amqp:outbound-endpoint{connector-ref=amqpDefaultConnector,
name=.test:outbound-endpoint.17, routingKey=process.task.complete}
must have all attributes for one of the sets: [address] [ref]
[queueName] [exchangeName] [exchangeName, queueName].
Which seems to indicate that it is not valid to omit the attribute. However, if I (b) provide it but leave it blank, like so:
<amqp:outbound-endpoint exchangeName="" routingKey="my.queue" connector-ref="amqpDefaultConnector" />
then I get the error message:
java.net.URISyntaxException: Expected authority at index 7: amqp://
I believe that the rest of my configuration and setup is correct, as using a named exchange works as expected. Any help would be appreciated.
To dispatch to the default exchange, you need to pass the queue name in queueName not routingKey:
<amqp:outbound-endpoint exchangeName=""
queueName="my.queue"
connector-ref="amqpDefaultConnector" />