filebeat add_fields processor with condition - filebeat

I'd like to add a field "app" with the value "apache-access" to every line that is exported to Graylog by the Filebeat "apache" module.
The following configuration should add the field, since I see an "event_dataset"="apache.access" field in Graylog, but it does not do anything.
If I remove the condition, though, the "add_fields" processor does add the field.
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.logstash:
  hosts: ["localhost:5044"]

processors:
  - add_fields:
      when:
        equals:
          event_dataset: "apache.access"
      target: ""
      fields:
        app: "apache-access"

logging.level: info

For whatever reason the field is called "event.dataset" in Filebeat but is displayed as "event_dataset" in Graylog.
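Assuming the underscore name is just how Graylog displays the field (which the note above suggests), the condition has to match the name Filebeat itself uses, i.e. event.dataset. A minimal sketch with only that key changed:

processors:
  - add_fields:
      when:
        equals:
          event.dataset: "apache.access"   # Filebeat's own field name, not the Graylog display name
      target: ""
      fields:
        app: "apache-access"

If that still does not match, inspecting one raw event from Filebeat (for example via output.console or debug logging) shows the exact field names the condition is evaluated against.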

Related

Multiple filebeat.inputs to one output. How to optimize the yml

I have multiple input paths (with log files) that share the same configuration, and one output, in a filebeat.yml file.
I couldn't find a way to write the filebeat.yml file concisely. For example, if I have 2 inputs I would write:
filebeat.inputs:
- type: filestream
  enabled: true
  ignore_older: 20h
  parsers:
    - multiline:
        type: pattern
        pattern: ''
        negate: false
        match: after
  prospector.scanner.check_interval: 10s
  harvester_buffer_size: 16384
  close_timeout: 5m
  close_inactive: 5m
  close_removed: true
  clean_renamed: true
  paths:
    - \\path\folder\*.log
  fields:
    document_type: "A"
- type: filestream
  enabled: true
  ignore_older: 20h
  parsers:
    - multiline:
        type: pattern
        pattern: ''
        negate: false
        match: after
  prospector.scanner.check_interval: 10s
  harvester_buffer_size: 16384
  close_timeout: 5m
  close_inactive: 5m
  close_removed: true
  clean_renamed: true
  paths:
    - \\other_path\other_folder\*.log
  fields:
    document_type: "B"

output.kafka:
  enabled: true
  hosts: ["abc:9092"]
  topic: test
  key: "%{[fields.document_type]}"
The problem is that, if I have 20 inputs, I have to write the same thing 20 times. Is there any way to write the shared filebeat.inputs settings just once and only add the paths and the fields.document_type each time?
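Filebeat has no built-in templating for inputs, but plain YAML anchors and merge keys can factor the shared settings out, assuming the YAML parser in your Filebeat version honors merge keys (worth verifying with filebeat test config). A minimal sketch using the two paths and document_type values from above:

filebeat.inputs:
- &input_defaults           # anchor the first input's mapping as the shared defaults
  type: filestream
  enabled: true
  ignore_older: 20h
  parsers:
    - multiline:
        type: pattern
        pattern: ''
        negate: false
        match: after
  prospector.scanner.check_interval: 10s
  harvester_buffer_size: 16384
  close_timeout: 5m
  close_inactive: 5m
  close_removed: true
  clean_renamed: true
  paths:
    - \\path\folder\*.log
  fields:
    document_type: "A"
- <<: *input_defaults       # reuse everything above, override only what differs
  paths:
    - \\other_path\other_folder\*.log
  fields:
    document_type: "B"

Each additional input is then only the merge line plus its own paths and fields. If the parser rejects the merge key, the fallback is generating filebeat.yml from a template (for example with Ansible or a small script) instead of writing it by hand.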

I need to copy the content of text file into yaml file at particular line using shell or ansible scripting language

My text file looks similar to the one below:
ICAgICAidHlwZSI6ICJhY2NlcHQiCiAgICB9CiAgXSwKICAidHJhbnNwb3J0cyI6IHsKICAgICJkb2NrZXIiOiB7CiAgICAgICJpbWFnZS1yZWdpc3RyeS5vcGVuc2hpZnQtaW1hZ2UtcmVnaXN0cnkuc3ZjOjUwMDAvaW1hZ2Utc2lnbmluZyI6IFsKICAgICAgICB7CiAgICAgICAgICAidHlwZSI6ICJzaWduZWRCeSIsCiAgICAgICAgICAia2V5VHlwZSI6ICJHUEdLZXlzIiwKICAgICAgICAgICJrZXlQYXRoIjogIi9ldGMvcGtpL3NpZ24ta2V5L2tleSIKICAgICAgICB9CiAgICAgIF0KIC
and my YAML file looks as below:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: image-policy
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 3.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,<<copy_text_here>>
Now I need to copy the content of the text file into the YAML file at the source parameter, in place of <<copy_text_here>>.
Any suggestions on this?
Thanks in advance.
Something like this should do the trick: you load the content of the file with lookup, then you find the string and replace it with the replace module:
---
- hosts: localhost
  tasks:
    - name: replace
      replace:
        path: subst.yml
        regexp: '<<copy_text_here>>'
        replace: "{{ lookup('file', 'text.yml') }}"

Serverless: TypeError: Cannot read property 'stage' of undefined

frameworkVersion: '2'

plugins:
  - serverless-step-functions
  - serverless-python-requirements
  - serverless-parameters
  - serverless-pseudo-parameters

provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'}
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221

package:
  exclude:
    - node_modules/**
    - venv/**

# Lambda functions
functions:
  generateAlert:
    handler: handler.generateAlert
  generateData:
    handler: handler.generateDataHandler
    timeout: 600
  approveDenied:
    handler: handler.approveDenied
    timeout: 600

stepFunctions:
  stateMachines:
    "claims-etl-and-insight-generation-${self:provider.stage}":
      loggingConfig:
        level: ALL
        includeExecutionData: true
        destinations:
          - Fn::GetAtt: ["ETLStepFunctionLogGroup", Arn]
      name: "claims-etl-and-insight-generation-${self:provider.stage}"
      definition:
        Comment: "${self:provider.stage} ETL Workflow"
        StartAt: RawQualityJob
        States:
          # Raw Data Quality Check Job Start
          RawQualityJob:
            Type: Task
            Resource: arn:aws:states:::glue:startJobRun.sync
            Parameters:
              JobName: "data_quality_v2_${self:provider.stage}"
              Arguments:
                "--workflow-name": "${self:provider.stage}-Workflow"
                "--dataset_id.$": "$.datasetId"
                "--client_id.$": "$.clientId"
            Next: DataQualityChoice
            Retry:
              - ErrorEquals: [States.ALL]
                MaxAttempts: 2
                IntervalSeconds: 10
                BackoffRate: 5
            Catch:
              - ErrorEquals: [States.ALL]
                Next: GenerateErrorAlertDataQuality
          # End Raw Data Quality Check Job
          DataQualityChoice:
            Type: Task
            Resource:
              Fn::GetAtt: [approveDenied, Arn]
            Next: Is Approved ?
          Is Approved ?:
            Type: Choice
            Choices:
              - Variable: "$.quality_status"
                StringEquals: "Denied"
                Next: FailState
            Default: HeaderLineJob
          FailState:
            Type: Fail
            Cause: "Denied status"
          # Header Line Job Start
          HeaderLineJob:
            Type: Parallel
            Branches:
              - StartAt: HeaderLineIngestion
                States:
                  HeaderLineIngestion:
                    Type: Task
                    Resource: arn:aws:states:::glue:startJobRun.sync
                    Parameters:
                      JobName: headers_lines_etl_rs_v2
                      Arguments:
                        "--workflow-name.$": "$.Arguments.--workflow-name"
                        "--dataset_id.$": "$.Arguments.--dataset_id"
                        "--client_id.$": "$.Arguments.--client_id"
                    End: True
                    Retry:
                      - ErrorEquals: [States.ALL]
                        MaxAttempts: 2
                        IntervalSeconds: 10
                        BackoffRate: 5
                    Catch:
                      - ErrorEquals: [States.ALL]
                        Next: GenerateErrorAlertHeaderLine
            End: True
          # Header Line Job End
          GenerateErrorAlertDataQuality:
            Type: Task
            Resource:
              Fn::GetAtt: [generateAlert, Arn]
            End: true

resources:
  Resources:
    # Cloudwatch Log
    "ETLStepFunctionLogGroup":
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: "ETLStepFunctionLogGroup_${self:provider.stage}"
This is what my serverless.yml file looks like.
When I run the command:
sls deploy --stage staging
It shows:
Type Error ----------------------------------------------
TypeError: Cannot read property 'stage' of undefined
at Variables.getValueFromOptions (/snapshot/serverless/lib/classes/Variables.js:648:37)
at Variables.getValueFromSource (/snapshot/serverless/lib/classes/Variables.js:579:17)
at /snapshot/serverless/lib/classes/Variables.js:539:12
Your Environment Information ---------------------------
Operating System: linux
Node Version: 14.4.0
Framework Version: 2.30.3 (standalone)
Plugin Version: 4.5.1
SDK Version: 4.2.0
Components Version: 3.7.4
How can I fix this? I tried different versions of Serverless.
There is an error in the yamlParser file, which is provided by serverless-step-functions.
Above is my serverless config file.
It looks like a $ sign is missing from your provider -> stage?
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'} # $ sign is missing?
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221
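The $ is actually present in the config shown above, so if that is not the culprit, a pattern sometimes used to avoid variable-resolution surprises with plugins such as serverless-step-functions is to resolve the stage once into a custom variable and reference that everywhere else. This is only a sketch of that pattern, not verified against Framework 2.30.3:

custom:
  stage: ${opt:stage, 'dev'}   # resolved once from the CLI flag, with a default

provider:
  name: aws
  stage: ${self:custom.stage}

stepFunctions:
  stateMachines:
    "claims-etl-and-insight-generation-${self:custom.stage}":
      name: "claims-etl-and-insight-generation-${self:custom.stage}"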

filebeat configuration using elasticsearch

I am facing an issue with Filebeat. I took the Filebeat image with docker pull docker.elastic.co/beats/filebeat:6.3.1.
My filebeat.yml file is:
filebeat.config:
  prospectors:
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

processors:
  - add_cloud_metadata:

output.elasticsearch:
  hosts: ['192.0.0.0:9200']
  username: elastic
  password: changeme

setup.kibana:
  host: '192.0.0.0:5601'

filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
When I run Filebeat, I am only getting yum.log (a harvester is started for yum.log). Please help me.
Thanks in advance.
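One thing worth checking: the input's glob only matches files directly under /var/log that end in .log, and on a CentOS/RHEL host many of the standard logs (messages, secure, and so on) do not have that suffix. If the goal is to ship more of /var/log, each pattern has to be listed explicitly; a sketch, assuming that is the intent:

filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log       # files directly under /var/log ending in .log
    - /var/log/*/*.log     # one directory level deeper
    - /var/log/messages    # specific files without a .log suffix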

when condition not being evaluated with include statement

I have a tikitaka3.yml (main yml file) and a tikitaka3a.yml (playbook to be included).
I prompt the user for a variable, and then in the tasks section I call it, like so:
---
- hosts: all
  vars:
    khan:
    # contents: "{{ lookup('file', '/home/imran/Desktop/tobefetched/file1.txt') }}"
  vars_prompt:
    - name: targetenv
      prompt: 1.)EPC 2.)CLIENTS 3)TESTERS
      private: False
      default: "1"
  gather_facts: no
  tasks:
    - name: Include playbook tikitaka3a
      include: /home/khan/Desktop/playbooks/tikitaka3a.yml target=umar
      when: targetenv.stdout|int < 2    # this statement has no effect
      #when: targetenv == 1             # neither does this statement
      #when: targetenc == "1"           # nor does this one
    #- name: stuff n stuff              # this task gives an error if not commented out
    #  debug: var=targetenv.stdout
The include statement always comes into effect, without the when condition ever being evaluated.
Why is this happening?
When you include an Ansible task file, the when: condition is attached to all included tasks. This means that you will see the tasks displayed even when the when: condition is false, though all of them will be skipped.
One problem with your code above is targetenv.stdout: a variable set by vars_prompt is a plain string, so it has no .stdout attribute (that only exists on registered command results). Here is a working version with proper formatting:
- hosts: all
  gather_facts: no
  vars_prompt:
    - name: targetenv
      prompt: 1.)EPC 2.)CLIENTS 3)TESTERS
      private: False
      default: "1"
  tasks:
    - name: Include playbook tikitaka3a
      include: roles/test/tasks/tikitaka3a.yml target=umar
      when: targetenv|int < 2
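For completeness, a minimal sketch of what the included file could look like (the real tikitaka3a.yml is not shown in the question, so this content is hypothetical); it just illustrates that the passed target variable is available inside and that the when: from the include is attached to every task in the file:

# tikitaka3a.yml (hypothetical contents, for illustration only)
- name: Show the target passed in from the parent playbook
  debug:
    msg: "Included for target {{ target }}"

- name: Second task, inherits the same when condition from the include
  debug:
    msg: "This also runs only when targetenv|int < 2"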