Logstash Redis Input connection refused - redis

I'm trying to migrate a Docker-based Redis container to AWS ElastiCache. I have the Redis instance running and can connect via the redis CLI, but when I set up Logstash with the following input:
input {
  redis {
    host => "redis<domain>.cache.amazonaws.com"
    data_type => "list"
    key => "logstash"
    codec => msgpack
  }
}
It explodes with this:
[2022-02-02T13:52:27,575][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/logstash.conf"], :thread=>"#<Thread:0x547a32a1 run>"}
[2022-02-02T13:52:28,685][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.11}
[2022-02-02T13:52:28,701][INFO ][logstash.inputs.redis ][main] Registering Redis {:identity=>"redis://#redis<domain>.cache.amazonaws.com:6379/0 list:logstash"}
[2022-02-02T13:52:28,709][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-02-02T13:52:28,823][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-02-02T13:52:28,837][ERROR][logstash.inputs.redis ][main][08c8cf37082e202fd617f2bc3c642b630c437b5e58521b08cd412f29ed9a10e1] Unexpected error {:message=>"invalid uri scheme ''", :exception=>ArgumentError, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.5.1/lib/redis/client.rb:473:in `_parse_options'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.5.1/lib/redis/client.rb:94:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.5.1/lib/redis.rb:65:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:129:in `new_redis_instance'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:134:in `connect'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:186:in `list_runner'", "org/jruby/RubyMethod.java:131:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.7.0/lib/logstash/inputs/redis.rb:87:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:409:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:400:in `block in start_input'"]}
but when I instead use this configuration to provide the URI:
input {
  redis {
    host => "redis://redis<domain>.cache.amazonaws.com"
    data_type => "list"
    key => "logstash"
    codec => msgpack
  }
}
I get this:
[2022-02-02T13:57:10,475][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/logstash.conf"], :thread=>"#<Thread:0x20738737 run>"}
[2022-02-02T13:57:11,586][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.11}
[2022-02-02T13:57:11,600][INFO ][logstash.inputs.redis ][main] Registering Redis {:identity=>"redis://#redis://redis<domain>.cache.amazonaws.com:6379/0 list:logstash"}
[2022-02-02T13:57:11,605][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-02-02T13:57:11,724][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-02-02T13:57:11,843][WARN ][logstash.inputs.redis ][main][08c8cf37082e202fd617f2bc3c642b630c437b5e58521b08cd412f29ed9a10e1] Redis connection error {:message=>"Error connecting to Redis on redis://redis<domain>.cache.amazonaws.com:6379 (SocketError)", :exception=>Redis::CannotConnectError}
The latter error looks saner, but the Registering Redis line looks messed up. Neither provides any insight into why Logstash can't connect, yet I can connect to the Redis instance from the pod. What am I missing here?

It turned out that, on top of the config, I also had an environment variable called REDIS_URL set, which was gazumping the config because it is read directly by the Redis client.
From the readme, I finally discovered:
By default, the client will try to read the REDIS_URL environment variable and use that as URL to connect to. The above statement is therefore equivalent to setting this environment variable and calling Redis.new without arguments.
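For anyone hitting the same thing: once the stray REDIS_URL variable is removed from the container environment, the plain-hostname input from the question should be all that is needed. A minimal sketch (placeholder host kept from the question):
input {
  redis {
    host => "redis<domain>.cache.amazonaws.com"
    data_type => "list"
    key => "logstash"
    codec => msgpack
  }
}
The config itself was fine; it was the environment variable, not the host setting, that produced the "invalid uri scheme ''" error.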

Related

MassTransit ignores environment variables

I'm trying to utilize Docker and environment variables but MassTransit seems to ignore them completely.
In my Dockerfile (amongst other things), I have this:
ENV MT_RMQ__HOST "rabbitmq"
ENV MT_RMQ__USER "root"
ENV MT_RMQ__PASS "root"
The variables get applied just fine and show up in the Docker UI, but MassTransit still uses the default connection credentials:
Connect: guest#localhost:5672/
warn: MassTransit[0]
Retrying 00:00:08.2900000: Broker unreachable: guest#localhost:5672/
My configuration is simple:
services.AddMassTransit(x =>
{
    x.UsingRabbitMq((context, cfg) => cfg.ConfigureEndpoints(context));
});
services.AddMassTransitHostedService();

Logstash s3 output writes a format the Logstash s3 input doesn't understand

I'm moving data between two ES clusters which are separated. I've added S3 as a common area and have two Logstash instances, one that writes to S3 from Elasticsearch and another that reads S3 and loads Elasticsearch.
The problem is that only one document from each index is loaded. The output file written by the s3 output plugin is a single long line, with many JSON documents all run together without commas or the opening and closing square brackets of an array. For example, instead of [{"id":1},{"id":2},{"id":3}] the output writes files which read {"id":1}{"id":2}{"id":3}. As a result, only {"id":1} is read by Logstash when using s3 as an input.
The configuration to go to s3 is:
input {
  elasticsearch {
    hosts => ["${ES_HOST}:${ES_PORT}"]
    index => "${ES_INDEX}"
    password => "${ES_PASS}"
    ssl => "true"
    user => "${ES_USER}"
  }
}
output {
  s3 {
    bucket => "${S3_BUCKET}"
    encoding => "gzip"
    codec => "json"
    prefix => "${S3_PREFIX}/${ES_INDEX}"
    region => "ap-southeast-2"
  }
}
The configuration reading S3 is:
input {
  s3 {
    bucket => "${S3_BUCKET}"
    codec => "json"
    prefix => "${S3_PREFIX}/${ES_INDEX}/"
    region => "ap-southeast-2"
    watch_for_new_files => false
  }
}
output {
  stdout { }
}
In both cases the ${} variables are set in the environment (bash shell).
Both servers are running Logstash 7.6.0.
PS: I don't think they are important, but the stdout log from logstash says:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/home/ec2-user/logstash-7.6.0/logstash-core/lib/jars/jruby-complete-9.2.9.0.jar) to method sun.nio.ch.NativeThread.signal(long)
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /home/ec2-user/logstash-7.6.0/logs which is now configured via log4j2.properties
[2020-03-09T01:10:35,168][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-03-09T01:10:35,353][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.0"}
[2020-03-09T01:10:37,813][INFO ][org.reflections.Reflections] Reflections took 48 ms to scan 1 urls, producing 20 keys and 40 values
[2020-03-09T01:10:53,476][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-03-09T01:10:53,515][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/home/ec2-user/kibana/from_s3.conf"], :thread=>"#<Thread:0x1364485f run>"}
[2020-03-09T01:10:54,561][INFO ][logstash.inputs.s3 ][main] Registering s3 input {:bucket=>"my-bucket-here", :region=>"ap-southeast-2"}
[2020-03-09T01:10:55,334][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-03-09T01:10:55,435][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-03-09T01:10:55,833][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-03-09T01:10:57,507][INFO ][logstash.inputs.s3 ][main] Using default generated file for the sincedb {:filename=>"/home/ec2-user/logstash-7.6.0/data/plugins/inputs/s3/sincedb_1906e463a09b003733b719c08277c793"}
/home/ec2-user/logstash-7.6.0/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
my-document-here
}
[2020-03-09T01:10:59,587][INFO ][logstash.runner ] Logstash shut down.
PPS: deleting the sincedb allows the one row to load again; the file itself is not changing.
Use the below in your s3 output plugin:
codec => "json_lines"
This delimits each event with a newline.
The s3 input plugin can still use the "json" codec.
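Applied to the output block from the question, that would look something like this (everything else unchanged, including the environment-variable placeholders):
output {
  s3 {
    bucket => "${S3_BUCKET}"
    encoding => "gzip"
    codec => "json_lines"
    prefix => "${S3_PREFIX}/${ES_INDEX}"
    region => "ap-southeast-2"
  }
}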

Logstash fails to connect to rabbitmq

I am trying to set up a Logstash configuration to push lines from a file to RabbitMQ. I installed both Logstash 2.1.1 and RabbitMQ 3.6.0 on my local machine to test it. The output configuration is:
output {
  rabbitmq {
    exchange => "test_exchange"
    key => "test"
    exchange_type => "direct"
    host => "127.0.0.1"
    port => "15672"
    user => "logstash"
    password => "logstashpassword"
  }
}
But when I now start Logstash it fails to start up with the following error (the output is only shown when Logstash is started in debug mode):
Worker threads expected: 4, worker threads started: 4 {:level=>:info, :file=>"logstash/pipeline.rb", :line=>"186", :method=>"start_filters"}
Connecting to RabbitMQ. Settings: {:vhost=>"/", :host=>"127.0.0.1", :port=>15672, :user=>"logstash", :automatic_recovery=>true, :pass=>"logstashpassword", :timeout=>0, :heartbeat=>0} {:level=>:debug, :file=>"logstash/plugin_mixins/rabbitmq_connection.rb", :line=>"135", :method=>"connect"}
The error reported is:
com.rabbitmq.utility.BlockingCell.get(com/rabbitmq/utility/BlockingCell.java:77)
com.rabbitmq.utility.BlockingCell.uninterruptibleGet(com/rabbitmq/utility/BlockingCell.java:111)
com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(com/rabbitmq/utility/BlockingValueOrException.java:37)
com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(com/rabbitmq/client/impl/AMQChannel.java:367)
com.rabbitmq.client.impl.AMQConnection.start(com/rabbitmq/client/impl/AMQConnection.java:293)
com.rabbitmq.client.ConnectionFactory.newConnection(com/rabbitmq/client/ConnectionFactory.java:621)
com.rabbitmq.client.ConnectionFactory.newConnection(com/rabbitmq/client/ConnectionFactory.java:648)
java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)
RUBY.new_connection_impl(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:505)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.converting_rjc_exceptions_to_ruby(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:467)
RUBY.new_connection_impl(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:500)
RUBY.initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:136)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:109)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare.rb:20)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-mixin-rabbitmq_connection-2.2.0-java/lib/logstash/plugin_mixins/rabbitmq_connection.rb:137)
RUBY.connect!(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-mixin-rabbitmq_connection-2.2.0-java/lib/logstash/plugin_mixins/rabbitmq_connection.rb:94)
RUBY.register(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-rabbitmq-3.0.6-java/lib/logstash/outputs/rabbitmq.rb:40)
org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
RUBY.start_outputs(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:192)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:102)
RUBY.execute(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/agent.rb:165)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/runner.rb:90)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/runner.rb:95)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/task.rb:24)
java.lang.Thread.run(java/lang/Thread.java:745)
Logstash then "crashes", i.e. stops. Does anyone know whether this has to do with the Logstash config or whether it is a problem in the RabbitMQ setup?
Thanks in advance.
Daniel
Found my error: the port for RabbitMQ has to be 5672 (the AMQP port), not 15672 (the management UI port).
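For reference, this is the output block from the question with only the port changed:
output {
  rabbitmq {
    exchange => "test_exchange"
    key => "test"
    exchange_type => "direct"
    host => "127.0.0.1"
    port => 5672        # AMQP port; 15672 is the HTTP management UI
    user => "logstash"
    password => "logstashpassword"
  }
}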

How can I use the Chef JSON to set a redis and sidekiq configuration

I'm using AWS OpsWorks for a Rails application with Redis and Sidekiq and would like to do the following:
Override the maxmemory config for redis
Only run Redis & Sidekiq on a selected EC2 instance
My current JSON config only has the database.yml overrides:
{
  "deploy": {
    "appname": {
      "database": {
        "username": "user",
        "password": "password",
        "database": "db_production",
        "host": "db.host.com",
        "adapter": "mysql2"
      }
    }
  }
}
Override the maxmemory config for redis
Take a look and see if your Redis cookbook of choice gives you an attribute to set that / provide custom config values. I know the main redisio one lets you set config values, as I do it on my stacks (I set the path to the on-disk cache, I believe).
Only run Redis & Sidekiq on a selected EC2 instance
This part is easy: create a Layer for Redis (or Redis/Sidekiq) and add an instance to that layer.
Now, because Redis is on a different instance than your Rails server, you won't necessarily know what the IP address for your Redis server is, especially since you'll probably want to use the internal EC2 IP address rather than the public IP address for the box (using the internal address means you're already inside the default firewall).
Sooo... what you'll probably need to do is to write a custom cookbook for your app, if you haven't already. In your attributes/default.rb write some code like this:
redis_instance_details = nil
redis_stack_name = "REDIS"
# Look up the first instance in the Redis layer exposed by OpsWorks
redis_instance_name, redis_instance_details = node["opsworks"]["layers"][redis_stack_name]["instances"].first
# Fall back to localhost if that layer has no instances
redis_server_dns = "127.0.0.1"
if redis_instance_details
  redis_server_dns = redis_instance_details["private_dns_name"]
end
Then later in the attributes file, use redis_server_dns to set your Redis config, for example:
default[:deploy][:appname][:environment_variables][:REDIS_URL] = "redis://#{redis_server_dns}:#{redis_port_number}"
Hope this helps!

logstash and centralised redis problems

I'm trying to get logstash working in a centralised setup using the docs as an example:
http://logstash.net/docs/1.2.2/tutorials/getting-started-centralized
I've got logstash (as indexer), redis, elasticsearch and standalone kibana3 running on my web server. I then need to run logstash as an agent on another server to collect apache logs and send them to the web server via redis. The number of agents will increase and the logs will vary, but for now I just want to get this working!
I need everything to run as a service so that all is well after reboots etc. All servers are running Ubuntu.
For all logstash instances (indexer and agent), I'm using the following init script (Ubuntu version, second gist):
https://gist.github.com/shadabahmed/5486949#file-logstash-ubuntu
For running redis as a service, I followed the instructions here:
http://redis.io/topics/quickstart (Installing redis more properly)
Elasticsearch is also running as a service.
On the web server, running redis-cli returns PONG correctly. Navigating to the correct Elasticsearch URL returns the correct JSON response. Navigating to the Kibana3 url gives me the dashboard, but no data. UFW is set to allow the redis port (at the moment from everywhere).
On the web server, my logstash.conf is:
input {
  file {
    path => "/var/log/apache2/access.log"
    type => "apache-access"
    sincedb_path => "/etc/logstash/.sincedb"
  }
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
    codec => json
  }
}
filter {
  grok {
    type => "apache-access"
    pattern => "%{COMBINEDAPACHELOG}"
  }
}
output {
  elasticsearch {
    embedded => true
  }
  statsd {
    # Count one hit every event by response
    increment => "apache.response.%{response}"
  }
}
From the agent server, I can telnet successfully to the web server IP and redis port. logstash is running. The logstash.conf file is:
input {
  file {
    path => "/var/log/apache2/shift.access.log"
    type => "apache"
    sincedb_path => "/etc/logstash/since_db"
  }
  stdin {
    type => "example"
  }
}
filter {
  if [type] == "apache" {
    grok {
      pattern => "%{COMBINEDAPACHELOG}"
    }
  }
}
output {
  stdout { codec => rubydebug }
  redis { host => ["xx.xx.xx.xx"] data_type => "list" key => "logstash" }
}
If I comment out the stdin and stdout lines, I still don't get a result. The logstash logs do not give me any connection errors - only warnings about the deprecated grok settings format.
I have also tried running Logstash from the command line (making sure to stop the daemonised service first). The apache log file is correctly output in the terminal, so I know that Logstash is accessing the log correctly. And I can write random strings and they are output in the correct Logstash format.
The redis logs on the web server show no sign of trouble......
The frustrating thing is that this has worked once. One message from stdin made it all the way through to elastic search. That was this morning just after getting everything setup. Since then, I have had no luck and I have no idea why!
Any tips/pointers gratefully received... Solving my problem will stop me tearing out more of my hair which will also make my wife happy......
UPDATE
Rather than filling the comments....
Thanks to @Vor and @rutter, I've confirmed that the user running logstash can read/write to the logstash.log file.
I've run the agent with -vv and the logs are populated with e.g.:
{:timestamp=>"2013-12-12T06:27:59.754000+0100", :message=>"config LogStash::Outputs::Redis/#host = [\"XX.XX.XX.XX\"]", :level=>:debug, :file=>"/opt/logstash/logstash.jar!/logstash/config/mixin.rb", :line=>"104"}
I then input random text into the terminal and get stdout results. However, I do not see anything in the logs until AFTER terminating the logstash agent. After the agent is terminated, I get lines like these in the logstash.log:
{:timestamp=>"2013-12-12T06:27:59.835000+0100", :message=>"Pipeline started", :level=>:info, :file=>"/opt/logstash/logstash.jar!/logstash/pipeline.rb", :line=>"69"}
{:timestamp=>"2013-12-12T06:29:22.429000+0100", :message=>"output received", :event=>#<LogStash::Event:0x77962b4d #cancelled=false, #data={"message"=>"test", "#timestamp"=>"2013-12-12T05:29:22.420Z", "#version"=>"1", "type"=>"example", "host"=>"Ubuntu-1204-precise-64-minimal"}>, :level=>:info, :file=>"(eval)", :line=>"16"}
{:timestamp=>"2013-12-12T06:29:22.461000+0100", :level=>:debug, :host=>"XX.XX.XX.XX", :port=>6379, :timeout=>5, :db=>0, :file=>"/opt/logstash/logstash.jar!/logstash/outputs/redis.rb", :line=>"230"}
But while I do get messages in stdout, I get nothing in redis on the other server. I can however telnet to the correct port on the other server, and I get "ping/PONG" in telnet, so redis on the other server is working..... And there are no errors etc in the redis logs.
It looks to me very much like the redis plugin on the logstash shipper agent is not working as expected, but for the life of me, I can't see where the breakdown is coming from.....
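One way to narrow this down is to strip the shipper pipeline back to the bare minimum so that the redis output is the only moving part. A sketch of such a test config (host placeholder and key as in the question, no file input, no filters):
input {
  stdin { }
}
output {
  redis {
    host => "xx.xx.xx.xx"
    data_type => "list"
    key => "logstash"
  }
}
If events typed into stdin still never appear on the other server, running LLEN logstash in redis-cli there shows whether anything is reaching the list at all, which separates a shipper-side problem from an indexer-side one.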