Print a string to stdout using Logstash 1.4?

So I was testing this config for using metrics, taken from the Logstash website here.
input {
  generator {
    type => "generated"
  }
}
filter {
  if [type] == "generated" {
    metrics {
      meter => "events"
      add_tag => "metric"
    }
  }
}
output {
  # only emit events with the 'metric' tag
  if "metric" in [tags] {
    stdout {
      message => "rate: %{events.rate_1m}"
    }
  }
}
But it looks like the "message" setting for the stdout output was deprecated. What is the correct way to do this in Logstash 1.4?

So I figured it out after looking at the JIRA page for Logstash.
NOTE: The metrics filter only prints, or "flushes", every 5 seconds, so if you generate logs for less than 5 seconds you won't see a metrics print statement.
Looks like it should be:
output {
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "Rate: %{events.rate_1m}"
      }
    }
  }
}
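As an aside, that 5-second cadence is the metrics filter's default flush interval; if I remember the plugin docs correctly, it is configurable through a flush_interval setting that must stay a multiple of 5 seconds. A sketch, for example to emit one rate event per minute:
filter {
  if [type] == "generated" {
    metrics {
      meter => "events"
      add_tag => "metric"
      # assumption: flush_interval is in seconds, defaults to 5,
      # and must be a multiple of 5
      flush_interval => 60
    }
  }
}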

Related

How to pass a date variable to a GraphQL call in Svelte?

I have a GraphQL file with a date in ISO format. I would like to pass a variable instead of hardcoding the date. I would like to use Date.toISOString() or some other get-current-date method.
GRAPHQL FILE
let today = Date.toISOString() //or DateNow()
query guide {
  tv {
    guide(date: "2022-08-10T00:00:00Z <-replace--${today}") {
      entries {
        channel {
          show {
            ......
          }
        }
      }
    }
  }
}
Is this possible?
Use a GraphQL variable and pass it to your query. Here are the adjustments you have to make to the query. I am guessing the name of the date scalar here (DateTime); it might just as well be String. Check the documentation of the API to get the correct name.
query guide($date: DateTime!) {
  tv {
    guide(date: $date) {
      entries {
        channel {
          show {
            ......
          }
        }
      }
    }
  }
}
If you use Svelte Apollo for example, you can pass the variables like this:
const guide = query(GUIDE_QUERY, {
  variables: { date: new Date().toISOString() },
});
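For completeness, a sketch of how the whole thing might be wired up with svelte-apollo and graphql-tag. GUIDE_QUERY, the placeholder field selection, and the DateTime scalar name are my assumptions here, so check them against the API's schema:
import { gql } from '@apollo/client/core';
import { query } from 'svelte-apollo';

// hypothetical query document; DateTime may really be String in your schema
const GUIDE_QUERY = gql`
  query guide($date: DateTime!) {
    tv {
      guide(date: $date) {
        entries {
          channel {
            show {
              title # placeholder field; substitute your schema's real fields
            }
          }
        }
      }
    }
  }
`;

// pass today's date as an ISO-8601 string
const guide = query(GUIDE_QUERY, {
  variables: { date: new Date().toISOString() },
});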

Logstash mutate gsub for s3 special characters

I'm trying to upload a file from Logstash to S3. Therefore, I want to replace all special characters in the field that will be the S3 key.
The filter that I'm using in my conf:
filter {
  mutate {
    gsub => [ "log.file.path", "[=#&<>{}:,~#`%\;\$\+\?\\\^\[\]\|\s+]", "_" ]
  }
}
I also added a file output to test the gsub:
output {
  file {
    codec => rubydebug
    path => "/tmp/test_gsub"
  }
  s3 {
    ....
  }
}
An example of the output in /tmp/test_gsub, showing that the gsub didn't work:
"#timestamp" => 2020 - 06 - 04T08: 40: 17.564Z,
"log" => {
"offset" => 1784971,
"file" => {
"path" => "/var/log/AVI1:VM_B30/app.log"
}
},
"message" => "just random message",
The log.file.path still has the : in the path. I would expect the path to change to /var/log/AVI1_VM_B30/app.log
Update
I also tried the following regex, but still got the same result:
filter {
  mutate {
    gsub => [ "log.file.path", "[:]", "_" ]
  }
}
What worked for me in the end was Logstash's bracket field-reference syntax; "log.file.path" is read as a literal top-level field name, while [log][file][path] actually addresses the nested field:
filter {
  mutate {
    gsub => [ "[log][file][path]", "[=#&<>{}:,~#`%\;\$\+\?\\\^\[\]\|\s+]", "_" ]
  }
}

Why is a Date query with aggregate not working in parse-server?

I want to query users where updatedAt is less than or equal to today, using aggregate, because I'm doing other stuff like sorting by pointers.
I'm using cloud code to define the query from the server.
I first tried using MongoDB Compass to check my query using ISODate, and it works, but in NodeJS it does not seem to work correctly.
I also noticed this problem, which they say was already fixed, and I saw their tests.
Here's a link to that PR.
I'm passing the date like this:
const pipeline = [
  {
    project: {
      _id: true,
      process: {
        $substr: ['$_p_testdata', 12, -1]
      }
    }
  },
  {
    lookup: {
      from: 'Test',
      localField: 'process',
      foreignField: '_id',
      as: 'process'
    }
  },
  {
    unwind: {
      path: '$process'
    }
  },
  {
    match: {
      'process._updated_at': {
        $lte: new Date()
      }
    }
  }
];
const query = new Parse.Query('data');
return query.aggregate(pipeline);
I expect the value to be an array of length 4, but it only gives me an empty array.
I was able to fetch data without the date match.
Please try this:
const pipeline = [
  {
    match: {
      'editedBy.updatedAt': {
        $lte: new Date()
      }
    }
  }
];
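And then run it the same way as in the question; a minimal sketch, with 'data' being the class name taken from the question:
// run the pipeline above from Cloud Code; aggregate() returns a promise
new Parse.Query('data')
  .aggregate(pipeline)
  .then(results => console.log(results.length))
  .catch(error => console.error(error));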

Not able to specify type for fields in Elasticsearch index

I am using Logstash, Elasticsearch, and Kibana.
The input file is in .csv format.
I first created the following mapping through Dev Tools > Console in Kibana:
PUT /defects
{
  "mappings": {
    "type_name": {
      "properties": {
        "Detected on Date": {
          "type": "date"
        },
        "DEFECTID": {
          "type": "integer"
        }
      }
    }
  }
}
It was successful. Then I created a Logstash conf file and ran it.
Here is my logstash.conf file:
input {
  file {
    path => ["E:/d drive/ELK/data/sample.csv"]
    start_position => "beginning"
    sincedb_path => "/dev/nul"
  }
}
filter {
  csv {
    columns => ["DEFECTID","Detected on Date","SEVERITY","STATUS"]
    separator => ","
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => "localhost:9200"
    manage_template => false
    template_overwrite => true
    index => "defects"
  }
}
I created the index pattern defects* in Kibana. When I look at the type of the fields, all are shown as string. Please let me know what I am missing.

Puppet: change any config/text file in the most efficient way

I am learning how to use Puppet. Now I am trying to change the config file for nscd. I need to change these lines:
server-user nscd
paranoia yes
And let's suppose that the full config looks like this:
$ cat /etc/nscd/nscd.conf
logfile /var/log/nscd.log
threads 4
max-threads 32
server-user nobody
stat-user somebody
debug-level 0
reload-count 5
paranoia no
restart-interval 3600
Previously I wrote this module for replacing the needed lines, and it looks as follows:
include nscd

class nscd {
  define line_replace ($match) {
    file_line { $name:
      path   => '/etc/nscd/nscd.conf',
      line   => $name,
      match  => $match,
      notify => Service['nscd'],
    }
  }
  anchor { 'nscd::begin': }
  ->
  package { 'nscd':
    ensure => installed,
  }
  ->
  line_replace {
    "1": name => "server-user nscd", match => "^\s*server-user.*$";
    "2": name => "paranoia yes", match => "^\s*paranoia.*$";
  }
  ->
  service { 'nscd':
    ensure => running,
    enable => "true",
  }
  ->
  anchor { 'nscd::end': }
}
Is it possible to do the same in a more efficient way, with arrays or something like that?
I recommend using the inifile Puppet module to easily manage INI-style files like this, but you can also take advantage of the create_resources function:
include nscd

class nscd {
  $server_user_line = 'server-user nscd'
  $paranoia_line    = 'paranoia yes'

  $defaults = {
    'path'    => '/etc/nscd/nscd.conf',
    'require' => Package['nscd'],
    'notify'  => Service['nscd'],
  }

  $lines = {
    $server_user_line => {
      line  => $server_user_line,
      match => '^\s*server-user.*$',
    },
    $paranoia_line => {
      line  => $paranoia_line,
      match => '^\s*paranoia.*$',
    },
  }

  anchor { 'nscd::begin': }
  ->
  package { 'nscd':
    ensure => installed,
  }
  ->
  service { 'nscd':
    ensure => running,
    enable => true,
  }
  ->
  anchor { 'nscd::end': }

  # create_resources is a function call, not a resource, so it cannot sit
  # inside a chaining-arrow sequence; ordering comes from the defaults above
  create_resources(file_line, $lines, $defaults)
}
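As for the inifile suggestion: nscd.conf is not a classic INI file (no sections, space-separated keys), but the module's ini_setting type can usually cope with that via an empty section and a custom separator. A sketch, assuming the puppetlabs-inifile module is installed:
ini_setting { 'nscd server-user':
  ensure            => present,
  path              => '/etc/nscd/nscd.conf',
  section           => '',   # nscd.conf has no [section] headers
  setting           => 'server-user',
  value             => 'nscd',
  key_val_separator => ' ',  # keys and values are space-separated
  require           => Package['nscd'],
  notify            => Service['nscd'],
}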
So I wrote this code:
class nscd($parameters) {
  define change_parameters() {
    file_line { $name:
      path => '/etc/nscd.conf',
      line => $name,
      # @name.split[0..-2] removes the last element,
      # no matter how many elements are in the line
      match => inline_template('<%="^\\s*"+(@name.split[0..-2]).join("\\s*")+".*$" %>'),
    }
  }
  anchor { 'nscd::begin': }
  ->
  package { 'nscd':
    ensure => installed,
  }
  ->
  change_parameters { $parameters: }
  ->
  service { 'nscd':
    ensure     => 'running',
    enable     => true,
    hasrestart => true,
  }
  ->
  anchor { 'nscd::end': }
}
And the class can be launched by passing a list/array to it:
class { 'nscd':
  parameters => [
    ' server-user nscd',
    ' paranoia yes',
    ' enable-cache hosts yes smth',
    ' shared hosts yes',
  ],
}
Each element of the array then goes to the change_parameters defined type as the $name argument, after which inline_template generates the regexp with a one-line piece of Ruby. The same happens for every element of the list/array.
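To see what that inline_template one-liner actually builds, here is the same expression as plain Ruby, runnable outside Puppet, using one of the parameters from the example above:
# plain-Ruby sketch of the regexp that the inline_template generates
name = ' enable-cache hosts yes smth'
# drop the last word, then allow arbitrary whitespace between the rest
regex = "^\\s*" + name.split[0..-2].join("\\s*") + ".*$"
puts regex
# => ^\s*enable-cache\s*hosts\s*yes.*$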
But anyway, I think it is better to use an ERB template for this kind of change.