Logstash - Storing RabbitMQ Logs - Multiline - rabbitmq

I have been using ELK for about six months now, and it's been great so far. I'm on logstash version 6.2.3.
RabbitMQ makes up the heart of my distributed system (RabbitMQ is itself distributed), and as such it is very important that I track the logs of RabbitMQ.
Most other conversations on this forum seem to use RabbitMQ as an input/output stage, but I just want to monitor the logs.
The only problem I'm finding is that RabbitMQ has multiline logging, like so:
=WARNING REPORT==== 19-Nov-2017::06:53:14 ===
closing AMQP connection <0.27161.0> (...:32799 -> ...:5672, vhost: '/', user: 'worker'):
client unexpectedly closed TCP connection
=WARNING REPORT==== 19-Nov-2017::06:53:18 ===
closing AMQP connection <0.22410.0> (...:36656 -> ...:5672, vhost: '/', user: 'worker'):
client unexpectedly closed TCP connection
=WARNING REPORT==== 19-Nov-2017::06:53:19 ===
closing AMQP connection <0.26045.0> (...:55427 -> ...:5672, vhost: '/', user: 'worker'):
client unexpectedly closed TCP connection
=WARNING REPORT==== 19-Nov-2017::06:53:20 ===
closing AMQP connection <0.5484.0> (...:47740 -> ...:5672, vhost: '/', user: 'worker'):
client unexpectedly closed TCP connection
I found a brilliant code example here, which I have stripped down to just the filter stage, so that it looks like this:
filter {
  if [type] == "rabbitmq" {
    codec => multiline {
      pattern => "^="
      negate => true
      what => "previous"
    }
    grok {
      type => "rabbit"
      patterns_dir => "patterns"
      pattern => "^=%{WORD:report_type} REPORT=+ %{RABBIT_TIME:time_text} ===.*$"
    }
    date {
      type => "rabbit"
      time_text => "dd-MMM-yyyy::HH:mm:ss"
    }
    mutate {
      type => "rabbit"
      add_field => [
        "message",
        "%{@message}"
      ]
    }
    mutate {
      gsub => [
        "message", "^=[A-Za-z0-9: =-]+=\n", "",
        # interpret message header text as "severity"
        "report_type", "INFO", "1",
        "report_type", "WARNING", "3",
        "report_type", "ERROR", "4",
        "report_type", "CRASH", "5",
        "report_type", "SUPERVISOR", "5"
      ]
    }
  }
}
But when I save this to a conf file and restart logstash I get the following error:
[2018-04-04T07:01:57,308][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-04-04T07:01:57,316][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-04-04T07:01:57,841][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-04-04T07:01:57,973][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-04T07:01:58,037][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, { at line 3, column 15 (byte 54) after filter {\n if [type] == \"rabbitmq\" {\n codec ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:51:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in `converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
Any ideas what the issue could be?
Thanks,

In case you are sending your logs from the RabbitMQ server to Logstash with Filebeat, you should configure the multiline handling there.

The answer is indeed multiline. The goal is to merge lines that start with something other than a date into the previous line that did start with a date. This is how:
multiline.pattern: '^\d{4}-\d{2}-\d{2}'
multiline.negate: true
multiline.match: after
Note: I previously tried to merge any lines starting with whitespace (^\s+), but that did not work because not all warning or error messages start with a space.
Complete Filebeat input (7.5.2 format):
filebeat:
  inputs:
    - exclude_lines:
        - 'Failed to publish events caused by: EOF'
      fields:
        type: rabbitmq
      fields_under_root: true
      paths:
        - /var/log/rabbitmq/*.log
      tail_files: false
      timeout: 60s
      type: log
      multiline.pattern: '^\d{4}-\d{2}-\d{2}'
      multiline.negate: true
      multiline.match: after
Logstash patterns:
# RabbitMQ
RABBITMQDATE %{MONTHDAY}-%{MONTH}-%{YEAR}::%{HOUR}:%{MINUTE}:%{SECOND}
RABBITMQLINE (?m)=%{DATA:severity} %{DATA}==== %{RABBITMQDATE:timestamp} ===\n%{GREEDYDATA:message}
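A minimal Logstash filter sketch that applies these patterns (a sketch only: it assumes the patterns above are saved in a file under ./patterns, that Filebeat has already merged the multiline entries, and that the logs use the old-style "=WARNING REPORT====" format shown in the question):
filter {
  if [type] == "rabbitmq" {
    grok {
      patterns_dir => ["./patterns"]
      match => { "message" => "%{RABBITMQLINE}" }
    }
    date {
      match => ["timestamp", "dd-MMM-yyyy::HH:mm:ss"]
    }
  }
}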
I am sure they had good reasons to log in this odd way in RMQ 3.7.x but without knowing them, it really makes our life hard.

You can't use a codec as a filter plugin. Codecs can only be used in input or output plugins (see the doc), with the codec configuration option.
You'll have to put your multiline codec in the input plugin that's producing your rabbitmq logs.
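For example, a minimal sketch of a file input carrying the multiline codec (the log path is the usual RabbitMQ location and is an assumption here; the pattern is the same "^=" one from the question):
input {
  file {
    path => "/var/log/rabbitmq/*.log"
    type => "rabbitmq"
    codec => multiline {
      # any line that does not start with "=" is appended to the previous event
      pattern => "^="
      negate => true
      what => "previous"
    }
  }
}
The filter block then only needs the grok/date/mutate stages, without the codec.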

Related

NestJS RabbitMQ with @golevelup/nestjs-rabbitmq: no connection when using the new connectionInitOptions

Using @golevelup/nestjs-rabbitmq, I tried to make the connection manager not wait for a connection. According to the readme, it can handle reconnections and wait for a connection without crashing the app. However, when I use connectionInitOptions as stated and set wait to false, I get a connection error. When I don't use it (the default behavior, with wait set to true), it connects to the RabbitMQ server. Below are examples importing the RabbitMQModule in a NestJS module.
This works and connects to the RabbitMQ server
RabbitMQModule.forRoot(RabbitMQModule, {
  exchanges: [{ type: 'topic', name: 'main' }],
  uri: 'amqp://guest:guest@localhost:5672',
})
This doesn't work and won't connect
RabbitMQModule.forRoot(RabbitMQModule, {
  exchanges: [{ type: 'topic', name: 'main' }],
  uri: 'amqp://guest:guest@localhost:5672',
  connectionInitOptions: {
    wait: false,
  },
})
With the second option I get the following error:
Error: AMQP connection is not available
  at AmqpConnection.publish (/home/xxx/node_modules/@golevelup/nestjs-rabbitmq/src/amqp/connection.ts:424:13)
  at BootstrapService.onApplicationBootstrap (/home/xxx/src/bootstrap/bootstrap.service.ts:20:25)
  at MapIterator.iteratee (/home/xxx/node_modules/@nestjs/core/hooks/on-app-bootstrap.hook.js:22:43)
  at MapIterator.next (/home/xxx/node_modules/iterare/src/map.ts:9:39)
  at IteratorWithOperators.next (/home/xxx/node_modules/iterare/src/iterate.ts:19:28)
  at Function.from (<anonymous>)
  at IteratorWithOperators.toArray (/home/xxx/node_modules/iterare/src/iterate.ts:227:22)
  at callOperator (/home/xxx/node_modules/@nestjs/core/hooks/on-app-bootstrap.hook.js:23:10)
  at callModuleBootstrapHook (/home/xxx/node_modules/@nestjs/core/hooks/on-app-bootstrap.hook.js:43:23)
  at NestApplication.callBootstrapHook (/home/xxx/node_modules/@nestjs/core/nest-application-context.js:199:55)
  at NestApplication.init (/home/xxx/node_modules/@nestjs/core/nest-application.js:98:9)
  at NestApplication.listen (/home/xxx/node_modules/@nestjs/core/nest-application.js:155:33)
at bootstrap (/home/xxx/src/main.ts:12:3)
The last line (main.ts:12:3) is the app.listen(3000) statement.
There are other options you can set with the connectionInitOptions (reject and timeout) and I've tried the combinations but still no connection.
RabbitMQ is running in a docker container on Linux but that should be no problem. I posted the same question on NestJS discord but got no reply, so hopefully someone on SO has an idea.
Any idea what could be the cause?
Found the problem: I was using the connection in an onApplicationBootstrap method, and at that point the connection is apparently not present yet.
You can wait for the connection asynchronously, either in onApplicationBootstrap or in onModuleInit:
async onModuleInit() {
  await this.amqpConnection.managedChannel.waitForConnect(async () => {
    await this.assertQueueAndBindToExchange(
      transferRequestQueueName,
      transferRequestExchangeName,
      createdRoutingKey
    );
  });
}

RabbitMQ new consumer hangs

I'm using RabbitMQ 3.6.6 via the Docker image "rabbitmq:3".
Whenever I add a new consumer to my RabbitMQ queue, it hangs for anywhere from 10 seconds to 10 hours.
Below is an example of the code that triggers the problem. I also see the same behaviour in Go, so it's not library-dependent.
<?php
include(__DIR__ . "/vendor/autoload.php");

print "Start" . PHP_EOL;

$connection = new \PhpAmqpLib\Connection\AMQPStreamConnection('xxxx', 5697, 'guest', 'guest');
$channel = $connection->channel();

$callback = function($msg) {
    echo " [x] Received ", $msg->body, "\n";
};

$channel->basic_consume('repositories', '', false, false, false, false, $callback);

while (count($channel->callbacks)) {
    $channel->wait();
}
When I look at the logs I see
=INFO REPORT==== 31-Jan-2017::21:14:33 ===
accepting AMQP connection <0.891.0> (10.32.0.1:54216 -> 10.44.0.3:5672)
=INFO REPORT==== 31-Jan-2017::21:14:34 ===
accepting AMQP connection <0.902.0> (10.32.0.1:54247 -> 10.44.0.3:5672)
When I run list_consumers via rabbitmqctl, I see the consumer in the list, yet no messages are processed by it.
It turns out I needed to set the QoS (consumer prefetch) setting.
Some more information can be found at:
http://www.rabbitmq.com/consumer-prefetch.html
https://github.com/streadway/amqp/blob/master/channel.go#L576
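A minimal sketch of that fix with php-amqplib, placed before basic_consume (the prefetch count of 1 is just an example value):
// basic_qos(prefetch_size, prefetch_count, global):
// deliver at most one unacknowledged message to this consumer at a time
$channel->basic_qos(null, 1, null);
$channel->basic_consume('repositories', '', false, false, false, false, $callback);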

Logstash to Elasticsearch Bulk Request, SSL peer shut down incorrectly - Manticore::ClientProtocolException

ES version 2.3.5, Logstash 2.4.
Attempted to send bulk request to Elasticsearch, configured at ["xxxx.com:9200"],
An error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?
Error: "SSL peer shut down incorrectly", Manticore::ClientProtocolException
My Logstash output section:
output {
  stdout { codec => rubydebug }
  stdout { codec => json }
  elasticsearch {
    user => "xxxx"
    password => "xxx"
    index => "wrike_jan"
    document_type => "data"
    hosts => ["xxxx.com:9200"]
    ssl => true
    ssl_certificate_verification => false
    truststore => "elasticsearch-2.3.5/config/truststore.jks"
    truststore_password => "83dfcdddxxxxx"
  }
}
The Logstash file is executed, but it fails to send the data to ES.
Could you please suggest what might be wrong? Thank you.
Be careful about http vs. https in the URL: in the above case I was sending data to https, but my ES was using http.
Later, an upgrade of the Logstash version solved the problem of sending data to ES.
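For what it's worth, a sketch of what the output looks like when ES is serving plain http (same placeholders as the question; the ssl and truststore options are simply dropped, or ssl is set to false explicitly):
output {
  elasticsearch {
    hosts => ["xxxx.com:9200"]
    user => "xxxx"
    password => "xxx"
    index => "wrike_jan"
    document_type => "data"
    ssl => false
  }
}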

RabbitMQ STOMP connection

I am working on a fun project which requires me to learn message queues and WebSockets. I am trying to connect browsers to an instance of RabbitMQ using SockJS rather than plain WebSockets. On Rabbit I have activated the plugins for stomp and web_stomp (web_stomp is required when using SockJS).
The problem I am running into is that, while the call from the browser seems to be working (a very brief connection to Rabbit is made through the web_stomp/STOMP connection), after 2 or 3 seconds the connection is dropped by Rabbit.
This is confirmed by the rabbitmq logs:
=INFO REPORT==== 11-Jul-2016::23:01:54 ===
accepting STOMP connection (192.168.1.10:49746 -> 192.168.1.100:55674)
=INFO REPORT==== 11-Jul-2016::23:02:02 ===
closing STOMP connection (192.168.1.10:49746 -> 192.168.1.100:55674)
This is the browser code that connects to RabbitMQ via the webstomp plugin:
var url = "http://192.168.1.100:55674/stomp";
var ws = new SockJS(url);
var client = Stomp.over(ws);
var header = {
    login: 'test',
    passcode: 'test'
};
client.connect(header,
    function() {
        console.log('Hooray! Connected');
    },
    function(error) {
        console.log('Error connecting to WS via stomp:' + JSON.stringify(error));
    }
);
Here is the Rabbit config:
[
  {rabbitmq_stomp, [
    {default_user, [
      {login, "test"},
      {passcode, "test"}
    ]},
    {tcp_listeners, [{"192.168.1.100", 55674}]},
    {heartbeat, 0}
  ]}
].
I have been over the Rabbit docs a million times but this feels like something simple that I am overlooking.
Resolved. After combing through the logs I realized that web_stomp was listening on port 15674, so I changed the configuration to reflect that. I swear I had made that change at some point, but it did not seem to make a difference then.
One of the last changes I made before posting my question was to turn off heartbeats. Everything I have read states that SockJS does not support heartbeats, and the suggestion was to turn them off rather than use the default. In addition to turning off heartbeats in the config file, I also added this to the browser code:
client.heartbeat.outgoing=0;
client.heartbeat.incoming=0;
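For reference, a sketch of the browser-side change this implies, pointing SockJS at the web_stomp listener (15674 is the plugin's default port; the IP address is the one from the question):
var url = "http://192.168.1.100:15674/stomp"; // web_stomp port, instead of 55674
var ws = new SockJS(url);
var client = Stomp.over(ws);
client.heartbeat.outgoing = 0;
client.heartbeat.incoming = 0;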

logstash rabbitmq output never posts to exchange

I've got logstash running, and successfully reading in a file
rabbitmq is running, I'm watching the log, and I can see the web interface
I've configured logstash to output to a rabbitmq exchange... I think!
Here's the problem: nothing ever gets posted to the exchange, as seen in the web interface.
Any ideas?
My output config:
output {
  rabbitmq {
    codec => plain
    host => localhost
    exchange => yomtvraps
    exchange_type => direct
  }
  file { path => "/tmp/heartbeat-from-logstash.log" }
}
UPDATE: I'm watching the rabbit log with
tail -F /usr/local/var/log/rabbitmq/rabbit\#localhost.log
As it turns out, the problem was that there was no routing key set for the exchange and queue.
A working config is:
output {
  rabbitmq {
    codec => plain
    host => localhost
    exchange => yomtvraps
    exchange_type => direct
    key => yomtvraps
    # these are defaults but you never know...
    durable => true
    port => 5672
    user => "guest"
    password => "guest"
  }
}
Here's some sample receiver code (using the Ruby "Bunny" gem):
require "bunny"
conn = Bunny.new(:automatically_recover => false)
conn.start
ch = conn.create_channel
q = ch.queue("yomtvraps")
exchange = ch.direct("yomtvraps", :durable => true)
begin
puts " [*] Waiting for messages. To exit press CTRL+C"
q.bind(exchange, :routing_key => "yomtvraps").subscribe(:block => true) do |delivery_info, properties, body|
puts " [x] Received #{body}"
end
rescue Interrupt => _
conn.close
exit(0)
end
Your rabbitmq output's parameters seem incomplete: username, password and port have not been configured.
You can configure two outputs, one to rabbitmq and the other to a file, to verify that the log events are being created and that Logstash itself is OK.
Pay attention to the versions of Logstash and the rabbitmq plugin; they gave me a lot of trouble in earlier trials (e.g. Logstash to another Redis server).
You can also check rabbitmq's own log:
ps -ef | grep erl will show the log file's path in the arguments.
Be sure that rabbitmq's web management plugin is enabled and the firewall is configured correctly, then open the web manager at ipaddress:15672.
Check that the exchange's type is right (in this case 'direct' is probably the correct choice), that your message consumer is configured correctly, and that the consumer's queue has been bound to the exchange correctly (see the rabbitmqctl commands below).
Try to post a message to your consumer through the web manager and make sure the consumer works well.
Monitor your queue while Logstash pushes logs to your consumer.
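If you prefer the command line to the web manager, the exchange and binding checks above can be done with standard rabbitmqctl commands (a sketch; run them on the RabbitMQ host):
rabbitmqctl list_exchanges name type
rabbitmqctl list_queues name messages consumers
rabbitmqctl list_bindings source_name destination_name routing_key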