I want to add the current date to every filename that comes into my S3 bucket.
My current config looks like this:
input {
  s3 {
    access_key_id => "some_key"
    secret_access_key => "some_access_key"
    region => "some_region"
    bucket => "mybucket"
    interval => "10"
    sincedb_path => "/tmp/sincedb_something"
    backup_add_prefix => '%{+yyyy.MM.dd.HH}'
    backup_to_bucket => "mybucket"
    additional_settings => {
      force_path_style => true
      follow_redirects => false
    }
  }
}
Is there a way to use the current date in backup_add_prefix => '%{+yyyy.MM.dd.HH}'? The current syntax does not work; it literally produces "%{+yyyy.MM.dd.HH}test_file.txt" in my bucket.
Though it's not supported by the s3 input plugin directly, it can be achieved. Use the following steps:
Go to the Logstash home path.
Open the file vendor/bundle/jruby/2.3.0/gems/logstash-input-s3-3.4.1/lib/logstash/inputs/s3.rb. The exact path will depend on your Logstash version.
Look for the method backup_to_bucket.
It contains the line backup_key = "#{@backup_add_prefix}#{object.key}"
Add the following lines before that line:
t = Time.new
date_s3 = t.strftime("%Y.%m.%d")
Now change backup_key to "#{@backup_add_prefix}#{date_s3}#{object.key}"
You are done. Restart your Logstash pipeline and it should produce the desired result.
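For reference, after the edit the relevant part of the method might look roughly like this (a sketch against logstash-input-s3 3.4.1; the rest of the method body is omitted and may differ in your version):
def backup_to_bucket(object)
  unless @backup_to_bucket.nil?
    # build a date string such as "2019.07.30" and splice it into the backup key
    t = Time.new
    date_s3 = t.strftime("%Y.%m.%d")
    backup_key = "#{@backup_add_prefix}#{date_s3}#{object.key}"
    # ... the rest of the original method (the actual copy to the backup bucket) stays unchanged
  end
end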
I have the following file resource in my puppet manifest:
file { 'example.dat':
  path   => "/path/to/example.dat",
  owner  => devops,
  mode   => "0644",
  source => "/path/to/example.txt",
}
I want to run the above snippet only when the .txt file is present in a particular directory. Otherwise, I do not want the snippet to run.
How do I go about doing that in Puppet?
In general, you would create a custom fact that returns true when the directory in question satisfies your condition. For example:
Facter.add('contains_txt') do
  setcode do
    ! Dir.glob("/path/to/dir/*.txt").empty?
  end
end
Then you would write:
if $facts['contains_txt'] {
  file { 'example.dat':
    path   => "/path/to/example.dat",
    owner  => devops,
    mode   => "0644",
    source => "/path/to/example.txt",
  }
}
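For the fact to be usable in the manifest, it typically needs to be shipped in a module under lib/facter/ (for example some_module/lib/facter/contains_txt.rb, an illustrative path) so that pluginsync distributes it to agents before catalog compilation.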
I am trying to send logs to S3 and simulate a folder structure like
dev/logstash/1234/logfilename.txt
Somehow this configuration is not working. How do I pass the num value? This is my S3 output conf:
output {
  s3 {
    region => "us-east-1"
    bucket => "xx-yy-zz"
    prefix => "dev/logstash/appname/%{num}/"
    time_file => 1
  }
}
The %{num} doesn't evaluate. How do I pass that value?
I can now upload JPGs from my BackpackForLaravel to AWS S3, yay!!!
But how or where can I change this code:
$this->crud->addField([ // image
    'label' => "Produkt foto",
    'name' => "productfoto",
    'type' => 'image',
    'tab' => 'Produktfoto',
    'upload' => true,
    'crop' => true,
    'aspect_ratio' => 1,
    'disks' => 's3images' // This is not working ??
]);
so that it shows the URL of my uploaded JPG from AWS S3 (instead of the local public disk)?
I can't find any documentation or code examples for it :-(
Please help...
I don't really know how much this will help you. Images are usually sent base64-encoded, so you will need to decode them before you can store them in your S3 bucket. Here is how I handled it some time ago:
$getId = $request->get('myId');
$encoded_data = $request->get('myphotodata'); // base64-encoded image data from the request
$binary_data = base64_decode($encoded_data);
$filename_path = md5(time().uniqid()).".jpg"; // generate a unique filename
$directory = 'uploads';
Storage::disk('s3')->put($directory.'/'.$filename_path, $binary_data);
In summary, I am assuming you have the right permissions on your bucket and are ready to put the image there from your storage disk.
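For Storage::disk('s3') to work, an s3 disk also has to be configured in config/filesystems.php. A minimal sketch (the env variable names below are the usual Laravel defaults, adjust them to your setup):
// config/filesystems.php, inside the 'disks' array (illustrative sketch)
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
],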
I am new to Logstash. I have some logs stored in AWS S3 and I am able to import them into Logstash. My question is: is it possible to use the grok filter to add tags based on the filenames? I tried to use:
grok {
  match => {"path" => "%{GREEDYDATA}/%{GREEDYDATA:bitcoin}.err.log"}
  add_tag => ["bitcoin_err"]
}
This is not working. I guess the reason is that "path" only works with file inputs.
Here is the structure of my S3 buckets:
my_buckets
----A
    ----2014-07-02
        ----a.log
        ----b.log
----B
    ----2014-07-02
        ----a.log
        ----b.log
I am using this inputs conf:
s3 {
  bucket => "my_buckets"
  region => "us-west-1"
  credentials => ["XXXXXX","XXXXXXX"]
}
What I want is that, for any log messages in:
"A/2014-07-02/a.log": they will have the tags ["A","a"].
"A/2014-07-02/b.log": they will have the tags ["A","b"].
"B/2014-07-02/a.log": they will have the tags ["B","a"].
"B/2014-07-02/b.log": they will have the tags ["B","b"].
Sorry about my english....
There is no "path" field in the S3 input. I mounted the S3 storage on my server and used the file input instead. With the file input, I can now use the filter to match the path.
With Logstash 6.0.1, I was able to get the key of each file from S3. In your case, you can use this key (or path) in a filter to add tags.
Example:
input {
  s3 {
    bucket => "<bucket-name>"
    prefix => "<prefix>"
  }
}
filter {
  mutate {
    add_field => {
      "file" => "%{[@metadata][s3][key]}"
    }
  }
  ...
}
Use this file field in a filter to add tags, for example:
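Something along these lines might work (an untested sketch; it assumes keys shaped like A/2014-07-02/a.log as in the question, and the field names folder and logname are just illustrative):
filter {
  grok {
    # pull the top-level folder and the log file's base name out of the S3 key
    match => { "file" => "^%{DATA:folder}/%{GREEDYDATA}/%{DATA:logname}\.log$" }
    add_tag => ["%{folder}", "%{logname}"]
  }
}
For the key "A/2014-07-02/a.log" this would add the tags ["A", "a"].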
Reference:
Look for eye8's answer in this issue.
If you want to add tags based on the filename, I think this will work (I have not tested it):
filter {
  grok {
    match => [ "path", "%{GREEDYDATA:content}"]
  }
  mutate {
    add_tag => ["%{content}"]
  }
}
"content" tag will be the filename, now it's up to you to modify the pattern to create differents tags with the specific part of the filename.
We're receiving logs using Logstash with the following configuration:
input {
  udp {
    type => "logs"
    port => 12203
  }
}
filter {
  grok {
    type => "tracker"
    pattern => '%{GREEDYDATA:message}'
  }
  date {
    type => "tracker"
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}
output {
  tcp {
    type => "logs"
    host => "host"
    port => 12203
  }
}
We're then picking the logs up on the machine "host" with the following settings:
input {
  tcp {
    type => "logs"
    port => 12203
  }
}
output {
  pipe {
    command => "python /usr/lib/piperedis.py"
  }
}
From here, we parse the lines and put them into a Redis database. However, we've discovered an interesting problem.
Logstash 'wraps' the log message in a JSON-style package, i.e.:
{\"#source\":\"source/\",\"#tags\":[],\"#fields\":{\"timestamp\":[\"2013-09-16 15:50:47,440\"],\"thread\":[\"ajp-8009-7\"],\"level\":[\"INFO\"],\"classname\":[\"classname\"],\"message\":[\"message"\]}}
Then, on receiving it and passing it on to the next machine, we take that as the message and put it in another wrapper! We're only interested in the actual log message and none of the other stuff (source path, source, tags, fields, timestamp, etc.).
Is there a way we can use filters or something to do this? We've looked through the documentation but can't find any way to just pass the raw log lines between instances of Logstash.
Thanks,
Matt
The Logstash documentation is wrong: it indicates that the default "codec" is plain, but in fact the pipe output doesn't use a codec - it uses an output format.
To get a simpler output, change your output to something like
output {
  pipe {
    command => "python /usr/lib/piperedis.py"
    message_format => "%{message}"
  }
}
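Note that message_format was removed from the pipe output in later Logstash releases; on newer versions the equivalent should be a line codec with a format option, e.g.:
output {
  pipe {
    command => "python /usr/lib/piperedis.py"
    codec => line { format => "%{message}" }
  }
}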
Why not just extract those messages from the JSON lines your script already reads on stdin?
import json
import sys

line = sys.stdin.readline()
line_json = json.loads(line)
line_json['message']  # will be your @message