I'm new to RabbitMQ and I have an application that uses RabbitMQ as the message broker. Up until now, I've been using the default settings, i.e. no log rotation. I wanted to use the log rotation feature, so I configured it with:
{log, [
  {file, [{file, "MyAppLogs.log"},
          {level, info},
          {date, "$D0"},
          {size, 1073741824},
          {count, 30}
  ]}
]}
Of course testing would take a while with a 1GB file size, so for testing purposes I changed it to 1024 instead. I expected the log to rotate when it reached 1KB, but it did not. I noticed that the log files would only rotate once the file size reached 5KB.
So my question is - is the minimum log file size for RabbitMQ file-based log rotation 5KB?
I've looked around the web - especially the RabbitMQ documentation: https://www.rabbitmq.com/logging.html - however there's no mention of any minimum size.
Here are the test settings that I've used:
Test Settings:
{log, [
  {file, [{file, "rabbit.log"},
          {level, info},
          {date, "$D0"},
          {size, 1024},
          {count, 3}
  ]}
]}
https://groups.google.com/d/topic/rabbitmq-users/wJGMVGB1cAk/discussion
Hi Renya,
Please always let us know what version of RabbitMQ and Erlang you are using. I can tell you're using Windows - what version?
Log rotation is not necessarily precise due to when it happens in the logging process, as well as buffering.
Thanks -
Luke
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
This requires RabbitMQ version 3.7 or later. Put the log rotation logic in your configuration file like below (the comments show the equivalent new-style rabbitmq.conf keys):
{log, [
  {file, [{file, "/var/log/rabbitmq/rabbitmq.log"}, %% log.file
          {level, info},                            %% log.file.level
          {date, "$D0"},                            %% log.file.rotation.date
          {size, 1024},                             %% log.file.rotation.size
          {count, 15}                               %% log.file.rotation.count
  ]}
]}
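For reference, a sketch of the equivalent settings in the new-style rabbitmq.conf (sysctl) format available since 3.7; the path and values mirror the example above and should be adjusted to your environment:
log.file = /var/log/rabbitmq/rabbitmq.log
log.file.level = info
log.file.rotation.date = $D0
log.file.rotation.size = 1024
log.file.rotation.count = 15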
Related
I am new to RabbitMQ and I am having trouble with my RabbitMQ cluster.
The topology is like:
At first, everything is OK. RabbitMQ node1 and RabbitMQ node2 are in a cluster.
They are interconnected by a RabbitMQ plugin called autocluster.
Then I deleted pod rabbitmq-1 with kubectl delete pod rabbitmq-1, and I found that the RabbitMQ application on node1 stopped. I don't understand why RabbitMQ stops the application when it detects another node's failure. It does not make sense to me. Is this behaviour designed by RabbitMQ or by autocluster? Can you enlighten me?
My config is like:
[
  {rabbit, [
    {tcp_listen_options, [
      {backlog, 128},
      {nodelay, true},
      {linger, {true, 0}},
      {exit_on_close, false},
      {sndbuf, 12000},
      {recbuf, 12000}
    ]},
    {loopback_users, [<<"guest">>]},
    {log_levels, [{autocluster, debug}, {connection, debug}]},
    {cluster_partition_handling, pause_minority},
    {vm_memory_high_watermark, {absolute, "3276MiB"}}
  ]},
  {rabbitmq_management, [
    {load_definitions, "/etc/rabbitmq/rabbitmq-definitions.json"}
  ]},
  {autocluster, [
    {dummy_param_without_comma, true},
    {autocluster_log_level, debug},
    {backend, etcd},
    {autocluster_failure, ignore},
    {cleanup_interval, 30},
    {cluster_cleanup, false},
    {cleanup_warn_only, false},
    {etcd_ttl, 30},
    {etcd_scheme, http},
    {etcd_host, "etcd.kube-system.svc.cluster.local"},
    {etcd_port, 2379}
  ]}
]
In my case, x-ha-policy is enabled.
You set cluster_partition_handling to pause_minority. One out of two nodes isn't a majority, so the remaining node pauses itself, as configured. You either have to add an additional node or set cluster_partition_handling to ignore (see the sketch after the quoted docs below).
From the docs:
In pause-minority mode RabbitMQ will automatically pause cluster nodes
which determine themselves to be in a minority (i.e. fewer or equal
than half the total number of nodes) after seeing other nodes go down.
It therefore chooses partition tolerance over availability from the
CAP theorem. This ensures that in the event of a network partition, at
most the nodes in a single partition will continue to run. The
minority nodes will pause as soon as a partition starts, and will
start again when the partition ends.
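If you go the second route, a minimal sketch of the change in the classic rabbitmq.config format used above (merge this into your existing rabbit section rather than replacing it):
[
  {rabbit, [
    %% with only two nodes, pause_minority pauses the surviving node when its
    %% peer goes away; 'ignore' (or adding a third node) keeps it running
    {cluster_partition_handling, ignore}
  ]}
].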
I am a little bit confused when interpreting the action part of the following rule:
cookie=0x2b000000000000a5, duration=528.939s, table=0, n_packets=176, n_bytes=33116, idle_age=0, priority=2,in_port=1 actions=output:4,output:2
We have multiple output ports in a certain order. When checking "restconf/operational/opendaylight-inventory:nodes/" in the ODL controller, the ports appear in a different order:
"action": [
{ "order": 0,"output-action": {
"max-length": 65535,
"output-node-connector": "2" }
{"order": 1, "output-action": {
"max-length": 65535,
"output-node-connector": "4" }
}
I am not sure how packets hitting such an entry will be forwarded. Are they replicated and sent over both ports? Are they load-balanced across all ports?
What does max-length refer to?
Is there any documentation explaining all fields in detail?
It seems this is a group-table flow.
You can use the group table functionality to support multiple ports in the action part. See the OpenFlow 1.3 spec for details (sections 5.6 and 5.6.1).
For max_len, again from the same document (section A.2.5):
An Output action uses the following structure and fields:
Action structure for OFPAT_OUTPUT, which sends packets out 'port'. When the
'port' is the OFPP_CONTROLLER, 'max_len' indicates the max number of
bytes to send. A 'max_len' of zero means no bytes of the packet should
be sent. A 'max_len' of OFPCML_NO_BUFFER means that the packet is not
buffered and the complete packet is to be sent to the controller.
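For illustration, here is a sketch of how such a group can be created by hand with Open vSwitch's ovs-ofctl (this assumes an OVS bridge named br0 speaking OpenFlow 1.3; in an ODL setup the controller programs these entries for you):
# an "all"-type group replicates each packet to every bucket, here ports 2 and 4
ovs-ofctl -O OpenFlow13 add-group br0 "group_id=1,type=all,bucket=output:2,bucket=output:4"
# the flow then points at the group instead of listing output actions directly
ovs-ofctl -O OpenFlow13 add-flow br0 "priority=2,in_port=1,actions=group:1"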
Our RabbitMQ service crashed twice with the following report in the $RABBITMQ_NODENAME-sasl.log:
=CRASH REPORT==== 7-Jun-2016::14:37:25 ===
crasher:
initial call: gen:init_it/6
pid: <0.223.0>
registered_name: []
exception exit: {{badmatch,
{[{msg_location,
<<162,171,39,113,226,229,228,92,227,253,48,186,
45,48,29,98>>,
1,357,0,583},
******************
16000 similar msg_location lines snipped
******************
1795219}},
[{rabbit_msg_store,combine_files,3,[]},
{rabbit_msg_store_gc,attempt_action,3,[]},
{rabbit_msg_store_gc,handle_cast,2,[]},
{gen_server2,handle_msg,2,[]},
{proc_lib,wake_up,3,
[{file,"proc_lib.erl"},{line,250}]}]}
in function gen_server2:terminate/3
ancestors: [msg_store_persistent,rabbit_sup,<0.159.0>]
messages: [{'$gen_cast',{combine,394,380}}]
links: [#Port<0.86370>,<0.218.0>,#Port<0.86369>]
dictionary: [{{"/var/lib/rabbitmq/mnesia/$RABBITMQ_NODENAME/msg_store_persistent/357.rdq",
fhc_file},
{file,1,false}},
{{"/var/lib/rabbitmq/mnesia/$RABBITMQ_NODENAME/msg_store_persistent/340.rdq",
fhc_file},
{file,1,true}},
{fhc_age_tree,{2,
{{1465,346244,764691},
#Ref<0.0.3145729.257998>,nil,
{{1465,346244,891543},
#Ref<0.0.3145729.258001>,nil,nil}}}},
{{#Ref<0.0.3145729.257998>,fhc_handle},
{handle,{file_descriptor,prim_file,{#Port<0.86369>,59}},
0,false,0,1048576,[],false,
"/var/lib/rabbitmq/mnesia/$RABBITMQ_NODENAME/msg_store_persistent/357.rdq",
[raw,binary,read_ahead,read],
[{write_buffer,1048576}],
false,true,
{1465,346244,764691}}},
{{#Ref<0.0.3145729.258001>,fhc_handle},
{handle,{file_descriptor,prim_file,{#Port<0.86370>,64}},
14212552,false,0,1048576,[],false,
"/var/lib/rabbitmq/mnesia/$RABBITMQ_NODENAME/msg_store_persistent/340.rdq",
[raw,binary,read_ahead,read,write],
[{write_buffer,1048576}],
true,true,
{1465,346244,891543}}}]
trap_exit: false
status: running
heap_size: 121536
stack_size: 27
reductions: 835024
neighbours:
We'd like to understand what this crash report means. Does it signify a bad message, RMQ can't find a message, or something completely different? We're using RabbitMQ 3.1.5 with Erlang 18, and while we know we're using an old version, we want to first know what's causing the crash before dedicating resources to an upgrade.
This message means that the RabbitMQ message store process failed to combine files during garbage collection of the message store. This can in theory cause message loss.
Note that 3.1.5 is not supported and has not been tested with OTP 18. This issue may already be fixed in newer versions.
How can I log all AMQP commands that go through the RabbitMQ broker, including service commands like basic.ack, confirm.select, etc.?
The standard Java client library com.rabbitmq:amqp-client:3.5.4 contains a Tracer tool that works as a standalone proxy between your client and the broker. It logs all AMQP commands that pass through it to System.out.
It's described here: http://www.rabbitmq.com/java-tools.html
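A sketch of how the Tracer is typically started from the command line (the argument order - listen port, broker host, broker port - and the jar name are assumptions for illustration; check the tool's usage output for your client version):
# the proxy listens on 5673 and forwards to the real broker on localhost:5672
java -cp amqp-client-3.5.4.jar com.rabbitmq.tools.Tracer 5673 localhost 5672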
Here's an example of its output:
1441190584927: <Tracer-0> ch#1 -> {#method<channel.open>(out-of-band=), null, ""}
1441190584968: <Tracer-0> ch#1 <- {#method<channel.open-ok>(channel-id=), null, ""}
1441190585008: <Tracer-0> ch#1 -> {#method<confirm.select>(nowait=false), null, ""}
1441190585047: <Tracer-0> ch#1 <- {#method<confirm.select-ok>(), null, ""}
1441190585090: <Tracer-0> ch#1 -> {#method<basic.publish>(ticket=0, exchange=node.confirm.publish, routing-key=, mandatory=false, immediate=false), #contentHeader<basic>(content-type=string/utf8, content-encoding=null, headers=null, delivery-mode=2, priority=null, correlation-id=null, reply-to=null, expiration=null, message-id=null, timestamp=null, type=null, user-id=null, app-id=null, cluster-id=null), "some message"}
1441190585128: <Tracer-0> ch#1 <- {#method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'node.confirm.publish' in vhost '/', class-id=60, method-id=40), null, ""}
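To capture your own traffic, point the client at the proxy instead of the broker. A minimal Java sketch, assuming the Tracer defaults above (localhost:5673):
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class TracerDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setPort(5673);              // the Tracer's listen port, not the broker's 5672
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();  // channel.open / open-ok show up in the Tracer output
        ch.confirmSelect();                 // confirm.select / select-ok show up as well
        ch.close();
        conn.close();
    }
}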
You'll need to modify the RabbitMQ config; see this page for configuration settings.
Specifically, you'll want to set something like info or debug for whatever categories you're interested in:
[
  {rabbit, [
    {log_levels, [
      {channel, debug},
      {connection, debug}
    ]}
  ]}
].
From that page, look for log_levels and you'll find this configuration information:
Controls the granularity of logging. The value is a list of log event category and log level pairs.
The level can be one of 'none' (no events are logged), 'error' (only errors are logged), 'warning' (only errors and warning are logged), 'info' (errors, warnings and informational messages are logged), or 'debug' (errors, warnings, informational messages and debugging messages are logged).
At present there are four categories defined. Other, currently uncategorized, events are always logged.
The categories are:
channel - for all events relating to AMQP channels
connection - for all events relating to network connections
federation - for all events relating to federation
mirroring - for all events relating to mirrored queues
Default: [{connection, info}]
I use guard for test automation, and it sends notifications to tmux when test runs complete.
However, some of my tests are fairly long-running, and if the tmux pane guard runs in is hidden, I have no clear way to know whether the tests have completed. This is especially true if the tests complete with the same status two runs in a row.
Does guard have support for a different notification which shows that there are running tests?
If so, what's an example configuration if, say, I wanted the tmux session title to turn white while tests are running and then red/green/yellow when they complete?
If not, where should I look in the guard source code if I wanted to develop that feature and submit a pull request?
Check out all the tmux options here:
https://github.com/guard/guard/blob/45ac8e1013767e1d84fcc590418f9a8469b0d3b2/lib/guard/notifiers/tmux.rb#L24-L38
There's a display_on_all_clients option, which should flash in any other tmux clients you have created.
There's also a color_location option (see the tmux man page for possible values).
Here's some example settings you can place in your ~/.guard.rb file:
notification(:tmux, {
timeout: 0.5,
display_message: true,
display_title: true,
default_message_color: 'black',
display_on_all_clients: true,
success: 'colour150',
failure: 'colour174',
pending: 'colour179',
color_location: %w[status-left-bg pane-active-border-fg pane-border-fg],
}) if ENV['TMUX']
I had this issue today and solved it by creating ~/.guard.rb and adding:
# ~/.guard.rb
notification :tmux,
  display_message: true,
  timeout: 5, # in seconds
  default_message_format: '%s >> %s',
  # the first %s will show the title, the second the message
  # Alternately you can also configure *success_message_format*,
  # *pending_message_format*, *failed_message_format*
  line_separator: ' > ', # since we are single line we need a separator
  # which tmux element(s) change color; accepts a single value or an array,
  # e.g. %w[status-left-bg pane-active-border-fg pane-border-fg]
  color_location: 'status-left-bg',
  # Other options:
  default_message_color: 'black',
  success: 'colour150',
  failure: 'colour174',
  pending: 'colour179',
  # Notify on all tmux clients
  display_on_all_clients: false