I am trying to set up a Logstash configuration to push lines from a file to RabbitMQ. I installed both Logstash 2.1.1 and RabbitMQ 3.6.0 on my local machine to test it. The output configuration is:
output {
  rabbitmq {
    exchange => "test_exchange"
    key => "test"
    exchange_type => "direct"
    host => "127.0.0.1"
    port => "15672"
    user => "logstash"
    password => "logstashpassword"
  }
}
But when I now start Logstash, it fails to start up with the following error (the output is only shown when Logstash is started in debug mode):
Worker threads expected: 4, worker threads started: 4 {:level=>:info, :file=>"logstash/pipeline.rb", :line=>"186", :method=>"start_filters"}
Connecting to RabbitMQ. Settings: {:vhost=>"/", :host=>"127.0.0.1", :port=>15672, :user=>"logstash", :automatic_recovery=>true, :pass=>"logstashpassword", :timeout=>0, :heartbeat=>0} {:level=>:debug, :file=>"logstash/plugin_mixins/rabbitmq_connection.rb", :line=>"135", :method=>"connect"}
The error reported is:
com.rabbitmq.utility.BlockingCell.get(com/rabbitmq/utility/BlockingCell.java:77)
com.rabbitmq.utility.BlockingCell.uninterruptibleGet(com/rabbitmq/utility/BlockingCell.java:111)
com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(com/rabbitmq/utility/BlockingValueOrException.java:37)
com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(com/rabbitmq/client/impl/AMQChannel.java:367)
com.rabbitmq.client.impl.AMQConnection.start(com/rabbitmq/client/impl/AMQConnection.java:293)
com.rabbitmq.client.ConnectionFactory.newConnection(com/rabbitmq/client/ConnectionFactory.java:621)
com.rabbitmq.client.ConnectionFactory.newConnection(com/rabbitmq/client/ConnectionFactory.java:648)
java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)
RUBY.new_connection_impl(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:505)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.converting_rjc_exceptions_to_ruby(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:467)
RUBY.new_connection_impl(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:500)
RUBY.initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:136)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:109)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare.rb:20)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-mixin-rabbitmq_connection-2.2.0-java/lib/logstash/plugin_mixins/rabbitmq_connection.rb:137)
RUBY.connect!(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-mixin-rabbitmq_connection-2.2.0-java/lib/logstash/plugin_mixins/rabbitmq_connection.rb:94)
RUBY.register(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-rabbitmq-3.0.6-java/lib/logstash/outputs/rabbitmq.rb:40)
org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
RUBY.start_outputs(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:192)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:102)
RUBY.execute(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/agent.rb:165)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/runner.rb:90)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/runner.rb:95)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/task.rb:24)
java.lang.Thread.run(java/lang/Thread.java:745)
And then Logstash "crashes", i.e. stops. Does anyone know whether this has to do with the Logstash config, or is it a problem in the RabbitMQ setup?
Thanks in advance.
Daniel
Found my error: the port for RabbitMQ has to be 5672, not 15672. Port 15672 is the management UI; 5672 is the AMQP port that clients connect to.
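For reference, the corrected output block; a sketch of my setup with only the port changed to point at the AMQP listener:
output {
  rabbitmq {
    exchange => "test_exchange"
    key => "test"
    exchange_type => "direct"
    host => "127.0.0.1"
    port => "5672"
    user => "logstash"
    password => "logstashpassword"
  }
}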
I'm trying to migrate a Docker-based Redis container to AWS ElastiCache. I have the Redis instance running and can connect via the Redis CLI, but when I set up Logstash with the following:
input {
  redis {
    host => "redis<domain>.cache.amazonaws.com"
    data_type => "list"
    key => "logstash"
    codec => msgpack
  }
}
It explodes with this:
[2022-02-02T13:52:27,575][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/logstash.conf"], :thread=>"#<Thread:0x547a32a1 run>"}
[2022-02-02T13:52:28,685][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.11}
[2022-02-02T13:52:28,701][INFO ][logstash.inputs.redis ][main] Registering Redis {:identity=>"redis://#redis<domain>.cache.amazonaws.com:6379/0 list:logstash"}
[2022-02-02T13:52:28,709][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-02-02T13:52:28,823][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-02-02T13:52:28,837][ERROR][logstash.inputs.redis ][main][08c8cf37082e202fd617f2bc3c642b630c437b5e58521b08cd412f29ed9a10e1] Unexpected error {:message=>"invalid uri scheme ''", :exception=>ArgumentError, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.5.1/lib/redis/client.rb:473:in `_parse_options'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.5.1/lib/redis/client.rb:94:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.5.1/lib/redis.rb:65:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:129:in `new_redis_instance'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:134:in `connect'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:186:in `list_runner'", "org/jruby/RubyMethod.java:131:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:87:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:409:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:400:in `block in start_input'"]}
But when I then use this configuration to provide the URI:
input {
  redis {
    host => "redis://redis<domain>.cache.amazonaws.com"
    data_type => "list"
    key => "logstash"
    codec => msgpack
  }
}
I get this:
[2022-02-02T13:57:10,475][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/logstash.conf"], :thread=>"#<Thread:0x20738737 run>"}
[2022-02-02T13:57:11,586][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.11}
[2022-02-02T13:57:11,600][INFO ][logstash.inputs.redis ][main] Registering Redis {:identity=>"redis://#redis://redis<domain>.cache.amazonaws.com:6379/0 list:logstash"}
[2022-02-02T13:57:11,605][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-02-02T13:57:11,724][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-02-02T13:57:11,843][WARN ][logstash.inputs.redis ][main][08c8cf37082e202fd617f2bc3c642b630c437b5e58521b08cd412f29ed9a10e1] Redis connection error {:message=>"Error connecting to Redis on redis://redis<domain>.cache.amazonaws.com:6379 (SocketError)", :exception=>Redis::CannotConnectError}
The latter error looks saner, but the Registering Redis line looks messed up. Neither provides any insight as to why they can't connect, yet I can connect to the Redis instance from the pod. What am I missing here?
It turned out that, on top of the config, I also had an environment variable called REDIS_URL set, which was gazumping the config because the Redis client reads it.
From the readme, I finally discovered:
By default, the client will try to read the REDIS_URL environment variable and use that as URL to connect to. The above statement is therefore equivalent to setting this environment variable and calling Redis.new without arguments.
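So the working setup was simply the original config with the plain hostname and no scheme; a sketch, assuming REDIS_URL is no longer set in the container environment:
input {
  redis {
    # REDIS_URL must NOT be set in the environment, or the Redis
    # client will use it and override these settings.
    host => "redis<domain>.cache.amazonaws.com"
    data_type => "list"
    key => "logstash"
    codec => msgpack
  }
}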
I'm trying to connect a Logstash instance, running in a docker container, to an Amazon MQ Broker.
My ultimate goal is to insert the MQ message bodies into Elasticsearch.
Based on the logs, I think Logstash is able to reach the MQ Queue, but the error message doesn't give any other info:
[2021-05-21T23:30:53,226][ERROR][logstash.inputs.rabbitmq ][instance_journal_pipeline][rmq_instance_source]
RabbitMQ connection error, will retry. {
:error_message=>"An unknown error occurred. RabbitMQ gave no hints as to the cause. Maybe this is a configuration error (invalid vhost, for example). I recommend checking the RabbitMQ server logs for clues about this failure.",
:exception=>"Java::JavaIo::IOException"
}
My input config is as follows:
input {
  rabbitmq {
    id => "rmq_instance_source"
    ack => true
    durable => true
    passive => true
    exchange => "events"
    exchange_type => "topic"
    host => "${AWS_MQ_URL}"
    user => "${AWS_MQ_USER}"
    port => "${AWS_MQ_PORT}"
    password => "${AWS_MQ_PASSWORD}"
    queue => "outbound_task_queue_name"
    key => "outbound_task_key"
    arguments => {
      # arguments reproduced from the RMQ Queue's admin page
    }
  }
}
It turns out that this was an SSL configuration issue. The RabbitMQ Logstash plugin requires you to specify that the server is using SSL, and which SSL version. The error message does not tell you that this is the case. Verify your SSL version with your MQ broker.
I added the following params to my config (to match the Amazon MQ configuration) and that solved it:
ssl => true
ssl_version => "TLSv1.2"
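For context, this is roughly where those two settings sit in the input block; a sketch based on my config above, with only the SSL lines added:
input {
  rabbitmq {
    id => "rmq_instance_source"
    ack => true
    durable => true
    passive => true
    exchange => "events"
    exchange_type => "topic"
    host => "${AWS_MQ_URL}"
    user => "${AWS_MQ_USER}"
    port => "${AWS_MQ_PORT}"
    password => "${AWS_MQ_PASSWORD}"
    queue => "outbound_task_queue_name"
    key => "outbound_task_key"
    # Amazon MQ brokers only accept TLS connections
    ssl => true
    ssl_version => "TLSv1.2"
    arguments => {
      # arguments reproduced from the RMQ Queue's admin page
    }
  }
}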
Can you help me with the RabbitMQ input in Logstash?
My application sends code versions to RabbitMQ, which are then stored in the Elastic Stack.
For the app, a queue was created in RabbitMQ:
name: app_version_queue;
type: classic;
durable: true
Then Logstash was configured with this config:
input {
  rabbitmq {
    id => "rabbitmyq_id"
    # connect to rabbit
    host => "localhost"
    port => 5672
    vhost => "/"
    # INPUT - PRODUCERS
    key => "app_version_queue"
    # OUTPUT - CONSUMER
    # queue for logstash
    queue => "logstash"
    auto_delete => false
    # Exchange for logstash
    exchange => logstash
    exchange_type => direct
    durable => "true"
    # No ack will boost your perf
    ack => false
  }
}

output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
    index => "app_version-%{+YYYY.MM.dd}"
  }
}
It worked, but now, in the RabbitMQ console, I see this in the Queued messages table:
Ready: 914,444
Unacked: 0
Total: 914,444
And the disk space on my RabbitMQ cluster fills up completely in 3 days.
After rebooting the RabbitMQ server, all the space is freed.
UPDATED:
The whole reason I am doing this is that I want to remove NiFi from the chain app => rabbit => nifi => elastic.
I want to do: app => rabbit => logstash => elastic
Queue1 - app_version_queue: my application sends messages here, and NiFi consumes them into Elastic.
Queue2 - logstash: the queue I created with Logstash.
I tried to stop NiFi from consuming, but the messages are not leaving.
It sounds like what's happened is you've created the infrastructure twice:
Once manually in RabbitMQ
Once in the configuration options to LogStash
What you need is just three things:
An exchange for the application to publish messages to.
A queue for LogStash to consume messages from.
A binding between that exchange and that queue; the queue will get a copy of every message published to the exchange with a matching routing key.
What you have is all of this:
An exchange called logs (created manually) which your application publishes messages to.
A queue called app_version_queue (created manually) which nothing consumes from.
A binding (created manually) delivering copies of messages from logs into app_version_queue, which then sit there forever.
An exchange called logstash (created by LogStash) which nothing publishes messages to.
A queue called logstash (created by LogStash) which LogStash consumes messages from.
A binding (created by LogStash) from the logstash exchange to the logstash queue which doesn't do anything, because no messages are published to that exchange.
A binding (created manually) from the logs exchange to the logstash queue which is actually delivering the messages from your application.
So, for each of the three things (the exchange, the queue, and the binding) you need to:
Decide a name
Decide if you're creating it, or letting LogStash create it
Configure everything to use the same name
For instance, you could keep the names logs and app_version_queue, and create everything manually.
Then your LogStash configuration would look something like this:
input {
  rabbitmq {
    id => "rabbitmyq_id"
    # connect to rabbit
    host => "localhost"
    port => 5672
    vhost => "/"
    # Consume from existing queue
    queue => "app_version_queue"
    # No ack will boost your perf
    ack => false
  }
}
On the other hand, you could create just the logs exchange, and let LogStash create the queue and binding, like this:
input {
  rabbitmq {
    id => "rabbitmyq_id"
    # connect to rabbit
    host => "localhost"
    port => 5672
    vhost => "/"
    # Create a new queue
    queue => "logstash_processing_queue"
    durable => "true"
    # Take a copy of all messages with the "app_version_queue" routing key from the existing exchange
    exchange => "logs"
    key => "app_version_queue"
    # No ack will boost your perf
    ack => false
  }
}
Or you could let LogStash create all of it, and make sure your application publishes to the right exchange:
input {
  rabbitmq {
    id => "rabbitmyq_id"
    # connect to rabbit
    host => "localhost"
    port => 5672
    vhost => "/"
    # Create a new queue
    queue => "logstash_processing_queue"
    durable => "true"
    # Create a new exchange; point your application to publish here!
    exchange => "log_exchange"
    exchange_type => "direct"
    # Take a copy of all messages with the "app_version_queue" routing key from the new exchange
    key => "app_version_queue"
    # No ack will boost your perf
    ack => false
  }
}
I'd probably go with the middle option: the exchange is a part of the application's deployment requirements (it will produce errors if it can't publish there), but any number of queues might bind to it for different reasons (maybe none at all in a test environment, where you don't need ElasticSearch set up).
I am trying to link Logstash to read messages from a queue that will get indexed in Elasticsearch. I initially had it working with a shipper sending messages to the Logstash port, but now even that is not working. This is the error when trying to run the Logstash conf file:
RabbitMq connection error: . Will reconnect in 10 seconds... {:level=>error}
//not sure if the next piece is related:
WARN: org.elasticsearch.discovery.zen.ping.unicast: [Hellstrom, Damion] failed to send ping
to [[#zen_unicast_2#][inet[localhost/127.0.0.1:9301]]]
org.elasticsearch.transport.ReceiveTimeoutTransportException: []
[inet[localhost/127.0.0.1:9301]][discovery/zen/unicast] request_id [0] timed out after [3752ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
log4j, [2014-03-17T14:48:20.197] WARN: org.elasticsearch.discovery.zen.ping.
unicast: [Hellstrom, Damion] failed to send ping to
[[#zen_unicast_4#] [inet[localhost/127.0.0.1:9303]]]
org.elasticsearch.transport.ReceiveTimeoutTransportException:
[] [inet[localhost/127.0.0.1:9303]][discovery/zen/unicast]
request_id [3]
timed out after [3752ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
log4j, [2014-03-17T14:48:20.198] WARN: org.elasticsearch.discovery.zen.ping.unicast:
[Hellstrom, Damion] failed to send ping to
[[#zen_unicast_3#] [inet[localhost/127.0.0.1:9302]]]
I would really appreciate help on this. I have spent all weekend trying to get it to work. I even tried Redis initially, but that had its own set of errors.
Oh yes, here is my conf file:
input {
  rabbitmq {
    queue => "input.queue"
    host => "192.xxx.x.xxx"
    exchange => "exchange.output"
    vhost => "myhost"
  }
}

output {
  elasticsearch {
    embedded => true
    index => "board-feed"
  }
}
The problem is related to authentication with the RabbitMQ server. For the RabbitMQ transport, the default values for user/password are guest/guest, which by default in Rabbit will only work when connecting locally (to 127.0.0.1), whereas you are connecting to 192.xxx.x.xxx. (https://www.rabbitmq.com/access-control.html)
My guess is that when it worked before, you were running the Logstash Server on the same machine as RabbitMQ.
To fix the problem, set up an account in RabbitMQ and fill in the user/password fields of the RabbitMQ input to match, as in the sketch below.
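For example, a sketch of the input with credentials added; the user and password values are placeholders for whatever account you create:
input {
  rabbitmq {
    queue => "input.queue"
    host => "192.xxx.x.xxx"
    exchange => "exchange.output"
    vhost => "myhost"
    # credentials for the RabbitMQ account (placeholder values)
    user => "logstash"
    password => "changeme"
  }
}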
I'm trying to get logstash working in a centralised setup using the docs as an example:
http://logstash.net/docs/1.2.2/tutorials/getting-started-centralized
I've got logstash (as indexer), redis, elasticsearch and standalone kibana3 running on my web server. I then need to run logstash as an agent on another server to collect apache logs and send them to the web server via redis. The number of agents will increase and the logs will vary, but for now I just want to get this working!
I need everything to run as a service so that all is well after reboots etc. All servers are running Ubuntu.
For all logstash instances (indexer and agent), I'm using the following init script (Ubuntu version, second gist):
https://gist.github.com/shadabahmed/5486949#file-logstash-ubuntu
For running redis as a service, I followed the instructions here:
http://redis.io/topics/quickstart (Installing redis more properly)
Elasticsearch is also running as a service.
On the web server, running redis-cli returns PONG correctly. Navigating to the correct Elasticsearch URL returns the correct JSON response. Navigating to the Kibana3 url gives me the dashboard, but no data. UFW is set to allow the redis port (at the moment from everywhere).
On the web server, my logstash.conf is:
input {
  file {
    path => "/var/log/apache2/access.log"
    type => "apache-access"
    sincedb_path => "/etc/logstash/.sincedb"
  }
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}

filter {
  grok {
    type => "apache-access"
    pattern => "%{COMBINEDAPACHELOG}"
  }
}

output {
  elasticsearch {
    embedded => true
  }
  statsd {
    # Count one hit every event by response
    increment => "apache.response.%{response}"
  }
}
From the agent server, I can telnet successfully to the web server IP and redis port. logstash is running. The logstash.conf file is:
input {
  file {
    path => "/var/log/apache2/shift.access.log"
    type => "apache"
    sincedb_path => "/etc/logstash/since_db"
  }
  stdin {
    type => "example"
  }
}

filter {
  if [type] == "apache" {
    grok {
      pattern => "%{COMBINEDAPACHELOG}"
    }
  }
}

output {
  stdout { codec => rubydebug }
  redis { host => ["xx.xx.xx.xx"] data_type => "list" key => "logstash" }
}
If I comment out the stdin and stdout lines, I still don't get a result. The logstash logs do not give me any connection errors - only warnings about the deprecated grok settings format.
I have also tried running logstash from the command line (making sure to stop the demonised service first). The apache log file is correctly outputted in the terminal, so I know that logstash is accessing the log correctly. And I can write random strings and they are output in the correct logstash format.
The redis logs on the web server show no sign of trouble......
The frustrating thing is that this has worked once. One message from stdin made it all the way through to elastic search. That was this morning just after getting everything setup. Since then, I have had no luck and I have no idea why!
Any tips/pointers gratefully received... Solving my problem will stop me tearing out more of my hair which will also make my wife happy......
UPDATE
Rather than filling the comments....
Thanks to @Vor and @rutter, I've confirmed that the user running logstash can read/write to the logstash.log file.
I've run the agent with -vv and the logs are populated with e.g.:
{:timestamp=>"2013-12-12T06:27:59.754000+0100", :message=>"config LogStash::Outputs::Redis/#host = [\"XX.XX.XX.XX\"]", :level=>:debug, :file=>"/opt/logstash/logstash.jar!/logstash/config/mixin.rb", :line=>"104"}
I then input random text into the terminal and get stdout results. However, I do not see anything in the logs until AFTER terminating the logstash agent. After the agent is terminated, I get lines like these in the logstash.log:
{:timestamp=>"2013-12-12T06:27:59.835000+0100", :message=>"Pipeline started", :level=>:info, :file=>"/opt/logstash/logstash.jar!/logstash/pipeline.rb", :line=>"69"}
{:timestamp=>"2013-12-12T06:29:22.429000+0100", :message=>"output received", :event=>#<LogStash::Event:0x77962b4d #cancelled=false, #data={"message"=>"test", "#timestamp"=>"2013-12-12T05:29:22.420Z", "#version"=>"1", "type"=>"example", "host"=>"Ubuntu-1204-precise-64-minimal"}>, :level=>:info, :file=>"(eval)", :line=>"16"}
{:timestamp=>"2013-12-12T06:29:22.461000+0100", :level=>:debug, :host=>"XX.XX.XX.XX", :port=>6379, :timeout=>5, :db=>0, :file=>"/opt/logstash/logstash.jar!/logstash/outputs/redis.rb", :line=>"230"}
But while I do get messages in stdout, I get nothing in redis on the other server. I can however telnet to the correct port on the other server, and I get "ping/PONG" in telnet, so redis on the other server is working..... And there are no errors etc in the redis logs.
It looks to me very much like the redis plugin on the logstash shipper agent is not working as expected, but for the life of me, I can't see where the breakdown is coming from.....