Logstash configuration to parse 0-size AWS ALB log file - amazon-s3

While parsing AWS ALB logs with the Logstash S3 input plugin, it throws "We cannot uncompress the gzip file" whenever a 0-size .gz file comes up.
Present Logstash config:
input {
  s3 {
    bucket => "bucket-production-logs"
    access_key_id => "${ACCESS_KEY}"
    secret_access_key => "${SECRET_KEY}"
    region => "${REGION}"
    prefix => "web-alb-logs/"
  }
}

Related

Logstash current date logstash.conf as backup_add_prefix (s3 input plugin)

I want to add the current date to every filename that is incoming to my s3 bucket.
My current config looks like this:
input {
  s3 {
    access_key_id => "some_key"
    secret_access_key => "some_access_key"
    region => "some_region"
    bucket => "mybucket"
    interval => "10"
    sincedb_path => "/tmp/sincedb_something"
    backup_add_prefix => '%{+yyyy.MM.dd.HH}'
    backup_to_bucket => "mybucket"
    additional_settings => {
      force_path_style => true
      follow_redirects => false
    }
  }
}
Is there a way to use the current date in backup_add_prefix => '%{+yyyy.MM.dd.HH}'? The current syntax does not work: it produces "%{+yyyy.MM.dd.HH}test_file.txt" in my bucket.
Though it's not supported in the s3 input plugin directly, it can be achieved. Use the following steps:
Go to the Logstash home path.
Open the file vendor/bundle/jruby/2.3.0/gems/logstash-input-s3-3.4.1/lib/logstash/inputs/s3.rb. The exact path will depend on your Logstash version.
Look for the method backup_to_bucket.
There is a line backup_key = "#{@backup_add_prefix}#{object.key}"
Add the following lines before that line:
t = Time.new
date_s3 = t.strftime("%Y.%m.%d")
Then change the backup_key assignment to "#{@backup_add_prefix}#{date_s3}#{object.key}" (see the sketch after these steps).
Now you are done. Restart your Logstash pipeline. It should achieve the desired result.
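
For reference, a minimal sketch of what the edited backup_to_bucket method in s3.rb could look like after these steps (the remainder of the method body is elided and may differ between plugin versions):

def backup_to_bucket(object)
  unless @backup_to_bucket.nil?
    # added: build a date string such as "2019.07.31"
    t = Time.new
    date_s3 = t.strftime("%Y.%m.%d")
    # changed: insert the date between the configured prefix and the object key
    backup_key = "#{@backup_add_prefix}#{date_s3}#{object.key}"
    # ... rest of the original method (copy to the backup bucket, optional delete) unchanged
  end
end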

logstash to s3 how to prefix a unique number as folder

I am trying to send logs to S3 and simulate a folder structure like
dev/logstash/1234/logfilename.txt
Somehow this configuration is not working. How do I pass the num value? This is my S3 output config:
output {
  s3 {
    region => "us-east-1"
    bucket => "xx-yy-zz"
    prefix => "dev/logstash/appname/%{num}/"
    time_file => 1
  }
}
The %{num} doesn't evaluate. How do I pass that value?
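
One thing worth checking: %{num} is Logstash's field-reference (sprintf) syntax, so it can only expand if the event actually carries a num field and the installed logstash-output-s3 version supports field references in prefix. A minimal, hedged sketch (the field name and the literal value are assumptions for illustration) of adding such a field with a mutate filter:

filter {
  mutate {
    # hypothetical: set the "num" field that the s3 prefix refers to
    add_field => { "num" => "1234" }
  }
}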

Logstash to Elasticsearch Bulk Request, SSL peer shut down incorrectly - Manticore::ClientProtocolException logstash

ES version - 2.3.5, Logstash - 2.4
"Attempted to send bulk request to Elasticsearch, configured at ["xxxx.com:9200"]. An error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? Error: "SSL peer shut down incorrectly", Manticore::ClientProtocolException logstash"
My Logstash output section:
output {
  stdout { codec => rubydebug }
  stdout { codec => json }
  elasticsearch {
    user => "xxxx"
    password => "xxx"
    index => "wrike_jan"
    document_type => "data"
    hosts => ["xxxx.com:9200"]
    ssl => true
    ssl_certificate_verification => false
    truststore => "elasticsearch-2.3.5/config/truststore.jks"
    truststore_password => "83dfcdddxxxxx"
  }
}
The Logstash file is executed, but it fails to send the data to ES.
Could you please suggest a fix? Thank you.
Be particular about http vs. https in the URL: in the above case I was sending data over https, but my ES was listening on plain http.
Later, upgrading the Logstash version also resolved sending data to ES.
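
As a minimal sketch of the first point (assuming the Elasticsearch endpoint really does speak plain http; the host and credentials are the placeholders from the question), the output section would stop forcing TLS so the scheme matches the server:

output {
  elasticsearch {
    user => "xxxx"
    password => "xxx"
    index => "wrike_jan"
    document_type => "data"
    hosts => ["xxxx.com:9200"]
    ssl => false    # ES here serves plain http, so do not negotiate TLS
  }
}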

Amazon S3 putObject is not working

I am trying to upload a file to Amazon S3. I want to upload it into a subfolder of a subfolder of my bucket, but it is not allowing me to do this. My bucket keys and information are valid. If I upload it without the subfolder name, it is uploaded. Here is the code below:
try {
    $client->putObject(array(
        'Bucket'       => 'image.sitename.com',
        'Key'          => 'music/3956/3956_122.jpg',
        'SourceFile'   => '3956_122.jpg',
        'StorageClass' => 'REDUCED_REDUNDANCY'
    ));
} catch (S3Exception $e) {
    echo $e->getMessage();
}

500 (Internal Server Error) when uploading an Image to AWS S3 via Elastic Beanstalk Application

My task is to make an application using AWS Elastic Beanstalk and S3 where an image can be uploaded/deleted by the user.
I am using the BlueImp library: "https://github.com/blueimp/jQuery-File-Upload"
as well as a modified version of UploadHandler.php: "https://gist.github.com/tim-peterson/8172999"
I have installed aws-php-sdk using Composer inside the application and created a config.php file as follows:
<?php
return [
    's3' => [
        'key'    => '***',
        'secret' => '***',
        'region' => 'eu-west-1',
        'bucket' => 'my-bucket'
    ]
];
?>
But when I try to upload an image, it shows an error: "500 (Internal Server Error)".
Any ideas why?
Thanks in advance!