Variables in logstash config not substituted - amazon-s3

None of the variables in prefix are substituted. Why?
It was working with an old version of Logstash (1.5.4) but doesn't anymore with 2.3.
Part of the output section in logstash.cfg (it dumps to S3):
output {
  if [bucket] == "bucket1" {
    s3 {
      bucket => "bucket1"
      access_key_id => "****"
      secret_access_key => "****"
      region => "ap-southeast-2"
      prefix => "%{env}/%{year}/%{month}/%{day}/"
      size_file => 50000000 # 50 MB
      time_file => 1
      codec => json_lines # newline-delimited JSON, one event per line
      temporary_directory => "/var/log/temp-logstash"
      tags => ["bucket1"]
    }
  }
  ..
}
Example dataset (taken from stdout):
{
  "random_person" => "Kenneth Cumming 2016-04-14 00:53:59.777647",
  "@timestamp" => "2016-04-14T00:53:59.917Z",
  "host" => "192.168.99.1",
  "year" => "2016",
  "month" => "04",
  "day" => "14",
  "env" => "dev",
  "bucket" => "bucket1"
}
Just in case, here is the filter:
filter {
  mutate {
    add_field => {
      "request_uri" => "%{[headers][request_path]}"
    }
  }
  grok {
    break_on_match => false # default behaviour is to stop matching after the first match; we don't want that
    match => { "@timestamp" => "%{NOTSPACE:date}T%{NOTSPACE:time}Z" } # split timestamp into date and time
    match => { "date" => "%{INT:year}-%{INT:month}-%{INT:day}" } # split date into year, month and day fields
    match => { "request_uri" => "/%{WORD:env}/%{NOTSPACE:bucket}" } # split request uri into environment and bucket fields
  }
  mutate {
    remove_field => ["request_uri", "headers", "@version", "date", "time"]
  }
}

It's a known issue: field references (%{...}) are not substituted in the s3 output's prefix option, so the value is used literally.
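If that limitation is the blocker, there are two practical routes, neither of which is from the original post, so treat them as suggestions: upgrade to a newer logstash-output-s3 release and check its changelog for prefix interpolation support, or route events to static prefixes with conditionals. A minimal sketch of the second option, assuming env only takes a small, known set of values (here just dev and prod; credentials omitted for brevity):
output {
  if [bucket] == "bucket1" and [env] == "dev" {
    s3 {
      bucket => "bucket1"
      region => "ap-southeast-2"
      prefix => "dev/" # static prefix, no field substitution needed
    }
  }
  if [bucket] == "bucket1" and [env] == "prod" {
    s3 {
      bucket => "bucket1"
      region => "ap-southeast-2"
      prefix => "prod/"
    }
  }
}
This only covers low-cardinality fields like env; the year/month/day part of the original prefix cannot be reproduced with static conditionals, which is why upgrading the plugin is usually the better fix.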

Related

Create field from file name condition in logstash

I have several logs with the following names, where [E-1].[P-28], [E-1].[P-45] and [E-1].[P-51] are the operators that generate these logs (they do not appear within the data; I can only identify them from the file name):
p2sajava131.srv.gva.es_11101.log.online.[E-1].[P-28].21.01.21.log
p1sajava130.srv.gva.es_11101.log.online.[E-1].[P-45].21.03.04.log
p1sajava130.srv.gva.es_11101.log.online.[E-1].[P-51].21.03.04.log
...
Is it possible to use the translate filter to create a new field? Something like:
translate {
  field => "[log.file.path]"
  destination => "[operator_name]"
  dictionary => {
    if contains "[E-1].[P-28]" => "OPERATOR-1"
    if contains "[E-1].[P-45]" => "OPERATOR-2"
    if contains "[E-1].[P-51]" => "OPERATOR-3"
  }
}
Thanks
I don't have ELK here so I can't test it, but this should work:
# [ ] and . are regex metacharacters, so they are escaped to match the literal text in the path
if [log][file][path] =~ /\[E-1\]\.\[P-28\]/ {
  mutate {
    add_field => { "[operator][name]" => "OPERATOR-1" }
  }
}
if [log][file][path] =~ /\[E-1\]\.\[P-45\]/ {
  mutate {
    add_field => { "[operator][name]" => "OPERATOR-2" }
  }
}
if [log][file][path] =~ /\[E-1\]\.\[P-51\]/ {
  mutate {
    add_field => { "[operator][name]" => "OPERATOR-3" }
  }
}
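To answer the translate part of the question directly: the translate filter can also do this if its regex option is enabled, so the dictionary keys are treated as regular expressions instead of literal lookups. This is a sketch rather than a tested config; check the option names against the translate filter version you have installed:
filter {
  translate {
    field => "[log][file][path]"
    destination => "[operator][name]"
    regex => true # dictionary keys are regular expressions
    dictionary => {
      ".*\[E-1\]\.\[P-28\].*" => "OPERATOR-1"
      ".*\[E-1\]\.\[P-45\].*" => "OPERATOR-2"
      ".*\[E-1\]\.\[P-51\].*" => "OPERATOR-3"
    }
    fallback => "UNKNOWN-OPERATOR" # value assigned when no key matches
  }
}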

Error when I try to create index in elastic search from logstash

Hi, I'm getting the following error when I try to create an index in Elasticsearch from Logstash:
[Converge PipelineAction::Create] agent - Failed to execute action
{:action=>LogStash::PipelineAction::Create/pipeline_id:main,
:exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, input, filter, output at
line 1, column 1 (byte 1)"
Can you tell me if I got something wrong in my .conf file?
iput {
  file {
    path => "/opt/sis-host/process/uptime_test*"
    # start_position => "beginning"
    ignore_older => 0
  }
}
filter {
  grok {
    match => { "message" => "%{DATA:hora} %{DATA:fecha} %{DATA:status} %{DATA:server} %{INT:segundos}" }
  }
  date {
    match => ["horayfecha", "HH:mm:ss MM/dd/YYYY"]
    target => "@timestamp"
  }
}
output {
elasticsearch {
hosts => ["host:9200"]
index => "uptime_test-%{+YYYY.MM.dd}"
}
stdout { codec => rubydebug }
}
The configuration file should start with input, not "iput":
input { # not iput
  file {
    path => "/opt/sis-host/process/uptime_test*"
    # start_position => "beginning"
    ignore_older => 0
  }
}
filter {
  grok {
    match => { "message" => "%{DATA:hora} %{DATA:fecha} %{DATA:status} %{DATA:server} %{INT:segundos}" }
  }
  date {
    match => ["horayfecha", "HH:mm:ss MM/dd/YYYY"]
    target => "@timestamp"
  }
}
output {
  elasticsearch {
    hosts => ["host:9200"]
    index => "uptime_test-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
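Once the iput typo is fixed, there is a second problem worth flagging (not part of the original answer): the grok pattern creates separate hora and fecha fields, but the date filter looks for horayfecha, which is never created. A hedged sketch of one way to line them up, assuming hora holds the time and fecha the date, as the pattern suggests:
filter {
  # build the combined field the date filter expects
  mutate {
    add_field => { "horayfecha" => "%{hora} %{fecha}" }
  }
  date {
    match => ["horayfecha", "HH:mm:ss MM/dd/yyyy"]
    target => "@timestamp"
    remove_field => ["horayfecha"] # optional cleanup after a successful parse
  }
}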

Using RabbitMQ fields in Logstash output

I want to use some fields from RabbitMQ messages in the Logstash Elasticsearch output (as an index name, etc.).
If I use [@metadata][rabbitmq_properties][timestamp] in a filter it works fine, but not in the output statement (config below).
What am I doing wrong?
input {
  rabbitmq {
    host => "rabbitmq:5672"
    user => "user"
    password => "password"
    queue => "queue "
    durable => true
    prefetch_count => 1
    threads => 3
    ack => true
    metadata_enabled => true
  }
}
filter {
  if [@metadata][rabbitmq_properties][timestamp] {
    date {
      match => ["[@metadata][rabbitmq_properties][timestamp]", "UNIX"]
    }
  }
}
output {
  elasticsearch {
    hosts => ['http://elasticsearch:9200']
    index => "%{[@metadata][rabbitmq_properties][IndexName]}_%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
Check it with the mutate replace option, as shown below:
input {
  rabbitmq {
    host => "rabbitmq:5672"
    user => "user"
    password => "password"
    queue => "queue "
    durable => true
    prefetch_count => 1
    threads => 3
    ack => true
    metadata_enabled => true
  }
}
filter {
  if [@metadata][rabbitmq_properties][timestamp] {
    date {
      match => ["[@metadata][rabbitmq_properties][timestamp]", "UNIX"]
    }
  }
  mutate {
    replace => {
      "[@metadata][index]" => "%{[@metadata][rabbitmq_properties][IndexName]}_%{+YYYY.MM.dd}"
    }
  }
}
output {
  elasticsearch {
    hosts => ['http://elasticsearch:9200']
    index => "%{[@metadata][index]}" # the date suffix is already part of [@metadata][index]
  }
  stdout { codec => rubydebug }
}
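A debugging tip that is not part of the original answer: by default the rubydebug codec hides @metadata, so the stdout output above will not show whether [@metadata][rabbitmq_properties][IndexName] is actually populated. Enabling its metadata option makes that visible while testing:
output {
  stdout {
    codec => rubydebug {
      metadata => true # also print the @metadata subtree
    }
  }
}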

Logstash sprintf formatting for elasticsearch output plugin not working

I am having trouble using sprintf to reference the event fields in the elasticsearch output plugin and I'm not sure why. Below is the event received from Filebeat and sent to Elasticsearch after filtering is complete:
{
  "beat" => {
    "hostname" => "ca86fed16953",
    "name" => "ca86fed16953",
    "version" => "6.5.1"
  },
  "@timestamp" => 2018-12-02T05:13:21.879Z,
  "host" => {
    "name" => "ca86fed16953"
  },
  "tags" => [
    [0] "beats_input_codec_plain_applied",
    [1] "_grokparsefailure"
  ],
  "fields" => {
    "env" => "DEV"
  },
  "source" => "/usr/share/filebeat/dockerlogs/logstash_DEV.log",
  "@version" => "1",
  "prospector" => {
    "type" => "log"
  },
  "bgp_id" => "42313900",
  "message" => "{<some message here>}",
  "offset" => 1440990627,
  "input" => {
    "type" => "log"
  },
  "docker" => {
    "container" => {
      "id" => "logstash_DEV.log"
    }
  }
}
I am trying to index the files based on Filebeat's environment. Here is my config file:
input {
  http { }
  beats {
    port => 5044
  }
}
filter {
  grok {
    patterns_dir => ["/usr/share/logstash/pipeline/patterns"]
    break_on_match => false
    match => { "message" => ["%{RUBY_LOGGER}"] }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "%{[fields][env]}-%{+yyyy.MM.dd}"
  }
  stdout { codec => rubydebug }
}
I would think the referenced event fields would already be populated by the time the event reaches the elasticsearch output plugin. However, on the Kibana end, the index does not show up under the formatted name.
What have I done wrong?
In the Elasticsearch output plugin docs:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-manage_template
Should you require support for other index names, or would like to change the mappings in the template in general, a custom template can be specified by setting template to the path of a template file. Setting manage_template to false disables this feature. If you require more control over template creation, (e.g. creating indices dynamically based on field names) you should set manage_template to false and use the REST API to apply your templates manually.
By default, the elasticsearch output expects you to provide a custom template if you use index names other than logstash-%{+YYYY.MM.dd}. To disable that behaviour, include the manage_template => false setting.
So with this new set of info, the working config should be:
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "%{[fields][env]}-%{+yyyy.MM.dd}"
    manage_template => false
  }
  stdout { codec => rubydebug }
}
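One more thing to watch out for (an assumption on my part, not from the original post): if an event arrives without [fields][env], the sprintf reference is left unresolved and the literal %{[fields][env]} ends up in the index name. A small guard sketch, with a hypothetical fallback index name:
output {
  if [fields][env] {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "%{[fields][env]}-%{+yyyy.MM.dd}"
      manage_template => false
    }
  } else {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "unknown-env-%{+yyyy.MM.dd}" # hypothetical fallback index name
      manage_template => false
    }
  }
}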

only strings in influxdb

I have this config file in Logstash:
input {
  redis {
    host => "localhost"
    data_type => "list"
    key => "vortex"
    threads => 4
    type => "testrecord"
    codec => "plain"
  }
}
filter {
  kv {
    add_field => {
      "test1" => "yellow"
      "test" => "ife"
      "feild" => "pink"
    }
  }
}
output {
  stdout { codec => rubydebug }
  influxdb {
    db => "toast"
    host => "localhost"
    measurement => "myseries"
    allow_time_override => true
    use_event_fields_for_data_points => true
    exclude_fields => ["@version", "@timestamp", "sequence", "message", "type", "host"]
    send_as_tags => ["bar", "feild", "test1", "test"]
  }
}
And a list in Redis with the following data:
foo=10207 bar=1 sensor2=1 sensor3=33.3 time=1489686662
Everything works fine, but every field in InfluxDB is defined as a string regardless of its value.
Does anybody know how to get around this issue?
The mutate filter may be what you're looking for here.
filter {
  mutate {
    convert => {
      "value" => "integer"
      "average" => "float"
    }
  }
}
It means you need to know what your fields are beforehand, but it will convert them into the right data types.
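Applied to the fields from the example Redis record above (the target types are guesses based on the sample values, so adjust them as needed):
filter {
  kv { }
  mutate {
    convert => {
      "foo" => "integer"
      "bar" => "integer"
      "sensor2" => "integer"
      "sensor3" => "float"
      "time" => "integer"
    }
  }
}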