I am trying to send logs to S3 and simulate a folder structure like
dev/logstash/1234/logfilename.txt
Somehow this configuration is not working. How do I pass the num value? This is my S3 output configuration:
output {
  s3 {
    region => "us-east-1"
    bucket => "xx-yy-zz"
    prefix => "dev/logstash/appname/%{num}/"
    time_file => 1
  }
}
The %{num} doesn't evaluate. How do I pass that value?
I want to add the current date to every filename that arrives in my S3 bucket.
My current config looks like this:
input {
  s3 {
    access_key_id => "some_key"
    secret_access_key => "some_access_key"
    region => "some_region"
    bucket => "mybucket"
    interval => "10"
    sincedb_path => "/tmp/sincedb_something"
    backup_add_prefix => '%{+yyyy.MM.dd.HH}'
    backup_to_bucket => "mybucket"
    additional_settings => {
      force_path_style => true
      follow_redirects => false
    }
  }
}
Is there a way to use the current date in backup_add_prefix => '%{+yyyy.MM.dd.HH}'? The current syntax does not work; it produces "%{+yyyy.MM.dd.HH}test_file.txt" in my bucket.
Though it's not supported in the s3 input plugin directly, it can be achieved. Use the following steps:
Go to the Logstash home path.
Open the file vendor/bundle/jruby/2.3.0/gems/logstash-input-s3-3.4.1/lib/logstash/inputs/s3.rb. The exact path will depend on your Logstash version.
Look for the method backup_to_bucket.
There is a line backup_key = "#{@backup_add_prefix}#{object.key}"
Add the following lines before the above line:
t = Time.new
date_s3 = t.strftime("%Y.%m.%d")
Now change backup_key to "#{@backup_add_prefix}#{date_s3}#{object.key}"
That's it. Restart your Logstash pipeline; it should produce the desired result.
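Taken together, the patch amounts to these lines (a sketch against logstash-input-s3 3.4.1; the surrounding code of backup_to_bucket is left untouched and may differ in other plugin versions):

# inside backup_to_bucket in s3.rb, just before the existing backup_key assignment
t = Time.new
date_s3 = t.strftime("%Y.%m.%d")   # e.g. "2014.07.02"

# the existing line, changed to insert the date between the prefix and the object key
backup_key = "#{@backup_add_prefix}#{date_s3}#{object.key}"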
While parsing AWS ALB logs with the Logstash s3 input plugin, it throws "We cannot uncompress the gzip file" whenever a zero-size .gz file comes up.
My present Logstash config:
input {
  s3 {
    bucket => "bucket-production-logs"
    access_key_id => "${ACCESS_KEY}"
    secret_access_key => "${SECRET_KEY}"
    region => "${REGION}"
    prefix => "web-alb-logs/"
  }
}
I can now upload JPGs from my Backpack for Laravel admin to AWS S3. Yay!
But how or where do I change this code:
$this->crud->addField([ // image
    'label' => "Produkt foto",
    'name' => "productfoto",
    'type' => 'image',
    'tab' => 'Produktfoto',
    'upload' => true,
    'crop' => true,
    'aspect_ratio' => 1,
    'disks' => 's3images' // This is not working ??
]);
so that it shows the URL from AWS S3 for my uploaded JPG (instead of the local public disk)?
I can't find any documentation or code examples for this :-(
Please help...
I don't know whether this is of help to you. Pictures sent to S3 usually need to be base64 encoded, so you will need to decode them before you can store them in your S3 bucket. This is how I handled it some time ago:
use Illuminate\Support\Facades\Storage;

$getId = $request->get('myId');
$encoded_data = $request->get('myphotodata');      // base64-encoded image data from the request
$binary_data = base64_decode($encoded_data);       // raw image bytes
$filename_path = md5(time().uniqid()).".jpg";      // unique file name
$directory = 'uploads';
Storage::disk('s3')->put($directory.'/'.$filename_path, $binary_data);
In summary, I am assuming you have the right permissions on your bucket and are ready to put the image there from your storage disk.
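Since the original question is about pointing the upload at S3, note that the snippet above assumes an 's3' disk exists in config/filesystems.php. A minimal entry, assuming the standard Laravel environment variable names, looks like this:

// config/filesystems.php
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
],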
I am new to Logstash. I have some logs stored in AWS S3 and I am able to import them into Logstash. My question is: is it possible to use the grok filter to add tags based on the filenames? I tried:
grok {
  match => { "path" => "%{GREEDYDATA}/%{GREEDYDATA:bitcoin}.err.log" }
  add_tag => ["bitcoin_err"]
}
This is not working. I guess the reason is that "path" only works with file inputs.
Here is the structure of my S3 buckets:
my_buckets
    ----A
        ----2014-07-02
            ----a.log
            ----b.log
    ----B
        ----2014-07-02
            ----a.log
            ----b.log
I am using this input config:
s3 {
  bucket => "my_buckets"
  region => "us-west-1"
  credentials => ["XXXXXX","XXXXXXX"]
}
What I want is that, for any log messages in:
"A/2014-07-02/a.log": they get the tags ["A","a"].
"A/2014-07-02/b.log": they get the tags ["A","b"].
"B/2014-07-02/a.log": they get the tags ["B","a"].
"B/2014-07-02/b.log": they get the tags ["B","b"].
Sorry about my English.
There is no "path" field in S3 inputs. I mounted the S3 storage on my server and used the file input instead. With the file input, I can now use a filter to match on the path.
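For reference, a minimal sketch of that workaround; the mount point /mnt/s3 is an assumption (for example an s3fs mount), so adjust it to wherever the bucket is mounted:

input {
  file {
    # each matched file gets a "path" field that later filters can grok on
    path => "/mnt/s3/my_buckets/*/*/*.log"
  }
}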
With Logstash 6.0.1, I was able to get the key of each file from S3. In your case, you can use this key (or path) in a filter to add tags.
Example:
input {
  s3 {
    bucket => "<bucket-name>"
    prefix => "<prefix>"
  }
}
filter {
  mutate {
    add_field => {
      "file" => "%{[@metadata][s3][key]}"
    }
  }
  ...
}
Use this file field in further filters to add tags.
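For the bucket layout in the question, one way to turn that key into tags is a grok on the file field. This is an untested sketch, assuming keys look like A/2014-07-02/a.log:

filter {
  grok {
    # extract the top-level folder ("A"/"B") and the log base name ("a"/"b") from the key
    match => { "file" => "^%{DATA:folder}/%{DATA}/%{DATA:name}\.log$" }
    add_tag => ["%{folder}", "%{name}"]
  }
}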
Reference: look for eye8's answer in this issue.
If you want to add tags based on the filename, I think this will work (I have not tested it):
filter {
  grok {
    match => [ "path", "%{GREEDYDATA:content}" ]
  }
  mutate {
    add_tag => ["%{content}"]
  }
}
"content" tag will be the filename, now it's up to you to modify the pattern to create differents tags with the specific part of the filename.
How would I get a file from Amazon S3 to the local system using PHP?
I am trying to do this but it's not working:
$s3 = new AmazonS3("key 1", " acces pass");
$s3->getObject("Bucket/filename");
//write to local
$fp = fopen('/tmp/filename.mp4', 'w');
fpassthru($fp);
EDIT
I am trying to save the file from S3 to my local server.
As of version 3.35.x of the AWS SDK for PHP, the following snippet works with SaveAs.
Note the bucket name, the key, and SaveAs with the full path including the file name.
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key'    => $key,
    'SaveAs' => $path . $model->file_name,
));
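For context, a self-contained sketch with SDK v3; the client options and paths below are placeholders, not values from the question:

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',            // placeholder region
]);

$result = $s3->getObject([
    'Bucket' => 'my-bucket',             // placeholder bucket
    'Key'    => 'videos/filename.mp4',   // placeholder key
    'SaveAs' => '/tmp/filename.mp4',     // local destination path
]);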
Check out the docs for getObject:
You need to pass the remote file name as the 2nd param, and then in the options array set 'fileDownload' to a file name or an open file resource.
Example:
$s3->getObject('myBucket','myRemoteFile', array('fileDownload' => 'localFileName'));