I'm trying to upload a file from Logstash to S3, so I want to replace all special characters in the field that will be used as the S3 key.
The filter I'm using in my config:
filter {
  mutate {
    gsub => [ "log.file.path", "[=#&<>{}:,~#`%\;\$\+\?\\\^\[\]\|\s+]", "_" ]
  }
}
I also added a file output to test the gsub:
output {
  file {
    codec => rubydebug
    path => "/tmp/test_gsub"
  }
  s3 {
    ....
  }
}
An example of output in /tmp/test_gsub that shows that the gsub didn't work:
"#timestamp" => 2020 - 06 - 04T08: 40: 17.564Z,
"log" => {
"offset" => 1784971,
"file" => {
"path" => "/var/log/AVI1:VM_B30/app.log"
}
},
"message" => "just random message",
The log.file.path field still has the : in the path; I would expect it to change to /var/log/AVI1_VM_B30/app.log.
Update
I also tried the following regex, but still got the same result:
filter {
  mutate {
    gsub => [ "log.file.path", "[:]", "_" ]
  }
}
What worked for me in the end was using Logstash's field reference syntax for the nested field ([log][file][path] instead of log.file.path):
filter {
  mutate {
    gsub => [ "[log][file][path]", "[=#&<>{}:,~#`%\;\$\+\?\\\^\[\]\|\s+]", "_" ]
  }
}
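With the bracketed field reference the substitution is applied, so the file output then shows the sanitized path (illustrative, based on the sample event above):

"log" => {
    "offset" => 1784971,
    "file" => {
        "path" => "/var/log/AVI1_VM_B30/app.log"
    }
},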
Related
I am using the Amazon PHP SDK to upload a folder on my server to a bucket. This is working great:
$skip = ['index.html', '_metadata.txt', '_s3log.txt'];

$meta = [
    'key'       => $options->EWRbackup_s3_key,
    'region'    => $options->EWRbackup_s3_region,
    'bucket'    => $options->EWRbackup_s3_bucket,
    'directory' => 's3://'.$options->EWRbackup_s3_bucket.'/'.$subdir,
];
$client = new S3Client([
    'version'     => 'latest',
    'region'      => $meta['region'],
    'credentials' => [
        'key'    => $meta['key'],
        'secret' => $options->EWRbackup_s3_secret,
    ]
]);

$s3log = fopen($subpath.'/_s3log.txt', 'w+');
fwrite($s3log, "-- Connecting to ".$meta['region'].":".$meta['bucket']."...\n");

$manager = new Transfer($client, $subpath, $meta['directory'], [
    'before' => function ($command) use ($s3log) {
        $filename = basename($command->get('SourceFile'));
        fwrite($s3log, "-- Sending file $filename...\n");
    },
]);
$manager->transfer();

fwrite($s3log, "-- Disconnecting from ".$meta['key'].":".$meta['bucket']."...");
fclose($s3log);
However, in the folder I am uploading using the Transfer method, there are 3 files I want to skip. They are defined in the $skip variable on line one. I was wondering if there was a way I could get the Transfer to skip these 3 files...
I modified the AWS client in a WordPress plugin I created. The AWS/S3/Transfer.php file is where the uploads are managed, in this case. I modified the private function upload($filename) to look for a boolean return value from the before function:
private function upload($filename)
{
    $args = $this->s3Args;
    $args['SourceFile'] = $filename;
    $args['Key'] = $this->createS3Key($filename);
    $command = $this->client->getCommand('PutObject', $args);

    if ($this->before) {
        if (false !== call_user_func($this->before, $command)) {
            return $this->client->executeAsync($command);
        }
    } else {
        return $this->client->executeAsync($command);
    }
}
This replaces the original lines:
$this->before and call_user_func($this->before, $command);
return $this->client->executeAsync($command);
with
if ($this->before) {
    if (false !== call_user_func($this->before, $command)) {
        return $this->client->executeAsync($command);
    }
} else {
    return $this->client->executeAsync($command);
}
Then, in your declared before function, you can return false if you do not want this particular file uploaded.
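For example, a before callback along these lines (my sketch, built from the question's code rather than taken from the original answer) would skip the files listed in $skip:

$skip = ['index.html', '_metadata.txt', '_s3log.txt'];

$manager = new Transfer($client, $subpath, $meta['directory'], [
    'before' => function ($command) use ($skip, $s3log) {
        $filename = basename($command->get('SourceFile'));
        if (in_array($filename, $skip)) {
            // With the modified upload() above, returning false skips this file.
            return false;
        }
        fwrite($s3log, "-- Sending file $filename...\n");
    },
]);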
I was able to do this because I can control when the AWS PHP SDK is updated and can therefore modify the code it contains. I have not found any hooks in the PHP SDK to do this in a better way.
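As an alternative that avoids patching the SDK (my suggestion, not part of the original answer, so treat the details as assumptions), the Transfer class also accepts an iterator of file paths as its source together with a base_dir option, which lets you filter the files up front:

$skip = ['index.html', '_metadata.txt', '_s3log.txt'];

// Yield only the files we actually want to upload.
$files = Aws\filter(
    Aws\recursive_dir_iterator($subpath),
    function ($path) use ($skip) {
        return !in_array(basename($path), $skip);
    }
);

// With an iterator source, 'base_dir' tells Transfer how to build the S3 keys.
$manager = new Transfer($client, $files, $meta['directory'], [
    'base_dir' => $subpath,
]);
$manager->transfer();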
I am using Logstash, Elasticsearch and Kibana.
The input file is in .csv format.
I first created the following mapping through the Dev Tools > console in Kibana:
PUT /defects
{
  "mappings": {
    "type_name": {
      "properties": {
        "Detected on Date": {
          "type": "date"
        },
        "DEFECTID": {
          "type": "integer"
        }
      }
    }
  }
}
It was successful. Then I created a Logstash conf file and ran it.
Here is my logstash.conf file:
input {
  file {
    path => ["E:/d drive/ELK/data/sample.csv"]
    start_position => "beginning"
    sincedb_path => "/dev/nul"
  }
}
filter {
  csv {
    columns => ["DEFECTID","Detected on Date","SEVERITY","STATUS"]
    separator => ","
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => "localhost:9200"
    manage_template => false
    template_overwrite => true
    index => "defects"
  }
}
I created the index pattern defects* in Kibana. When I look at the type of the fields, all of them are shown as string. Please let me know what I am missing.
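One thing worth checking (my note, not from the original post): the csv filter parses every column as a string unless you convert it, so a filter along these lines may be closer to what the mapping expects; the date format shown is an assumption and must match the actual values in the CSV:

filter {
  csv {
    columns   => ["DEFECTID","Detected on Date","SEVERITY","STATUS"]
    separator => ","
    # without convert, DEFECTID is indexed as a string
    convert   => { "DEFECTID" => "integer" }
  }
  date {
    # assumed format -- adjust to the actual values in the file
    match  => ["Detected on Date", "MM/dd/yyyy"]
    target => "Detected on Date"
  }
}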
I'm centralizing logs with the ELK Stack (Elasticsearch, Logstash and Kibana). It works fine, but...
There are some types of logs in my S3 bucket:
elasticbeanstalk-access-logs
error logs
tomcat7 access-logs
stacktrace logs
I'm using the S3 input plugin in my Logstash config file:
input {
  s3 {
    secret_access_key => "..."
    access_key_id => "..."
    region => "eu-central-1"
    bucket => "bucket_name"
    prefix => "resources/environments/logs/publish"
    codec => "plain"
  }
}
And I'm using some filter plugins:
filter {
  if [type] == "access" {
    mutate { replace => { type => "apache_access" } }
    grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
    date { match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ] }
  } else {
    multiline {
      #type => "all" # no type means for all inputs
      pattern => "(^.+Exception: .+)|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)"
      what => "previous"
    }
    grok {
      match => [ "message", "(?m)%{TIMESTAMP_ISO8601:timestamp} \[%{HOSTNAME:thread}\] %{LOGLEVEL:severity} %{GREEDYDATA:message}" ]
      overwrite => [ "message" ]
    }
    date {
      match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" ]
    }
  }
}
The problem: there are 4 types of logs. How can I use the 'if's to filter them? I used http://grokconstructor.appspot.com to test my grok filter, and it works for one type of log.
The solution should be something like this:
if [type] == "access" {
#my grok filter
} else if [type] == "stacktrace" {
#my grok filter
} else if [type] == "tomcat7" {
#my grok filter
} ...
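One way to get a type onto each event (my sketch, not from the original post; the prefixes are assumptions) is to declare one s3 input per log location and set type there, so the conditionals above have something to match:

input {
  s3 {
    ....
    prefix => "resources/environments/logs/publish/tomcat7/"
    type   => "tomcat7"
  }
  s3 {
    ....
    prefix => "resources/environments/logs/publish/httpd/"
    type   => "access"
  }
}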
Tomcat Catalina out log:
2016-04-07 15:27:28,459 [http-bio-8080-exec-33] ERROR v1.PaymentTxController - Cannot get property 'attrs' on null object
java.lang.NullPointerException: Cannot get property 'attrs' on null object
    at com.b2boost.payment.provider.paybox.PayboxPaymentProviderService.createSubscriptionAndPay(PayboxPaymentProviderService.groovy:206)
    at com.b2boost.payment.provider.paybox.PayboxPaymentProviderService$__tt__pay_closure9.doCall(PayboxPaymentProviderService.groovy:82)
    at com.b2boost.commons.error.AppError.safe(AppError.groovy:53)
    at com.b2boost.commons.error.AppError.safe(AppError.groovy:60)
    at com.b2boost.payment.provider.paybox.PayboxPaymentProviderService.$tt__pay(PayboxPaymentProviderService.groovy:73)
    at com.b2boost.payment.PaymentService$__tt__pay_closure8.doCall(PaymentService.groovy:52)
    at com.b2boost.commons.error.AppError.safeWithEither(AppError.groovy:70)
    at com.b2boost.commons.error.AppError.safeWithEither(AppError.groovy:64)
    at com.b2boost.payment.PaymentService.$tt__pay(PaymentService.groovy:43)
    at com.b2boost.users.api.v1.PaymentTxController$_save_closure1.doCall(PaymentTxController.groovy:49)
    at com.b2boost.users.api.v1.BaseController.documentWithAuthorization(BaseController.groovy:101)
    at com.b2boost.users.api.v1.PaymentTxController.save(PaymentTxController.groovy:45)
    at grails.plugin.cache.web.filter.PageFragmentCachingFilter.doFilter(PageFragmentCachingFilter.java:177)
    at grails.plugin.cache.web.filter.AbstractFilter.doFilter(AbstractFilter.java:63)
    at com.odobo.grails.plugin.springsecurity.rest.RestTokenValidationFilter.processFilterChain(RestTokenValidationFilter.groovy:99)
    at com.odobo.grails.plugin.springsecurity.rest.RestTokenValidationFilter.doFilter(RestTokenValidationFilter.groovy:66)
    at grails.plugin.springsecurity.web.filter.GrailsAnonymousAuthenticationFilter.doFilter(GrailsAnonymousAuthenticationFilter.java:53)
    at com.odobo.grails.plugin.springsecurity.rest.RestAuthenticationFilter.doFilter(RestAuthenticationFilter.groovy:108)
    at grails.plugin.springsecurity.web.authentication.logout.MutableLogoutFilter.doFilter(MutableLogoutFilter.java:82)
    at com.odobo.grails.plugin.springsecurity.rest.RestLogoutFilter.doFilter(RestLogoutFilter.groovy:63)
    at com.brandseye.cors.CorsFilter.doFilter(CorsFilter.java:82)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Error log:
[Tue Apr 12 10:01:01 2016] [notice] Apache/2.2.29 (Unix) DAV/2 configured -- resuming normal operations
Stacktrace log:
2015-11-13 16:02:28,524 [MonitoringThread-118] ERROR StackTrace - Full Stack Trace:
com.notnoop.exceptions.ApnsDeliveryErrorException: Failed to deliver notification with error code 8
    at com.notnoop.apns.internal.ApnsConnectionImpl$2.run(ApnsConnectionImpl.java:189)
    at java.lang.Thread.run(Thread.java:745)
I am learning how to use Puppet. Right now I am trying to change the config file for nscd. I need to change these lines:
server-user nscd
paranoia yes
And let's suppose the full config looks like this:
$ cat /etc/nscd/nscd.conf
logfile /var/log/nscd.log
threads 4
max-threads 32
server-user nobody
stat-user somebody
debug-level 0
reload-count 5
paranoia no
restart-interval 3600
Previously I wrote this module for replacing the needed lines, and it looks as follows:
include nscd

class nscd {
  define line_replace ($match) {
    file_line { $name:
      path   => '/etc/nscd/nscd.conf',
      line   => $name,
      match  => $match,
      notify => Service["nscd"],
    }
  }

  anchor { 'nscd::begin': }
  ->
  package { 'nscd':
    ensure => installed,
  }
  ->
  line_replace {
    "1": name => "server-user nscd", match => "^\s*server-user.*$";
    "2": name => "paranoia yes",     match => "^\s*paranoia.*$";
  }
  ->
  service { 'nscd':
    ensure => running,
    enable => "true",
  }
  ->
  anchor { 'nscd::end': }
}
Is it possible to do the same in a more efficient way, with arrays or something like that?
I recommend using the inifile Puppet module to easily manage INI-style files like this, but you can also take advantage of the create_resources function:
include nscd

class nscd {
  $server_user_line = 'server-user nscd'
  $paranoia_line    = 'paranoia yes'

  $defaults = {
    'path'   => '/etc/nscd/nscd.conf',
    'notify' => Service["nscd"],
  }

  $lines = {
    $server_user_line => {
      line  => $server_user_line,
      match => "^\s*server-user.*$",
    },
    $paranoia_line => {
      line  => $paranoia_line,
      match => "^\s*paranoia.*$",
    },
  }

  anchor { 'nscd::begin': }
  ->
  package { 'nscd':
    ensure => installed,
  }
  ->
  create_resources(file_line, $lines, $defaults)
  ->
  service { 'nscd':
    ensure => running,
    enable => "true",
  }
  ->
  anchor { 'nscd::end': }
}
So I wrote this code:
class nscd($parameters) {
  define change_parameters() {
    file_line { $name:
      path => '/etc/nscd.conf',
      line => $name,
      # @name.split[0..-2] removes the last element,
      # no matter how many elements are in the line
      match => inline_template('<%= "^\\s*" + (@name.split[0..-2]).join("\\s*") + ".*$" %>'),
    }
  }

  anchor { 'nscd::begin': }
  ->
  package { 'nscd':
    ensure => installed,
  }
  ->
  change_parameters { $parameters: }
  ->
  service { 'nscd':
    ensure     => 'running',
    enable     => true,
    hasrestart => true,
  }
  ->
  anchor { 'nscd::end': }
}
And the class can be invoked by passing a list/array to it:
class { 'nscd':
  parameters => [
    ' server-user nscd',
    ' paranoia yes',
    ' enable-cache hosts yes smth',
    ' shared hosts yes',
  ]
}
Then each element of the array is passed to the change_parameters defined type as the $name argument, and inline_template generates the regexp with a one-line piece of Ruby. The same happens for every element of the list/array.
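For example (my own illustration, not from the original answer), for the element ' paranoia yes' the template drops the last word and builds a match expression that hits the existing "paranoia no" line:

# $name              = ' paranoia yes'
# @name.split[0..-2] = ["paranoia"]
# generated match    = "^\s*paranoia.*$"   # matches the existing "paranoia no" line
# file_line then replaces that line with "paranoia yes"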
But I still think it is better to use an ERB template for changes like this.
So I was testing this config for using metrics from the Logstash website.
input {
  generator {
    type => "generated"
  }
}

filter {
  if [type] == "generated" {
    metrics {
      meter => "events"
      add_tag => "metric"
    }
  }
}

output {
  # only emit events with the 'metric' tag
  if "metric" in [tags] {
    stdout {
      message => "rate: %{events.rate_1m}"
    }
  }
}
But it looks like the "message" setting for the stdout output was deprecated. What is the correct way to do this in Logstash 1.4?
So I figured it out after looking at the JIRA page for Logstash.
NOTE: The metrics only print (or "flush") every 5 seconds, so if you are generating logs for less than 5 seconds, you won't see a metrics print statement.
Looks like it should be:
output {
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "Rate: %{events.rate_1m}"
      }
    }
  }
}
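Regarding the 5-second note above: the flush interval is configurable on the metrics filter (my addition, not part of the original answer), for example:

filter {
  if [type] == "generated" {
    metrics {
      meter => "events"
      add_tag => "metric"
      # emit the metrics event every 10 seconds instead of the default 5
      flush_interval => 10
    }
  }
}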