Multiple filebeat.inputs to one output. How to optimize the yml - filebeat

I have multiple input paths (with log files) that share the same configuration, and one output, in a filebeat.yml file.
I couldn't find how to write the filebeat.yml file in a clean way. For example, if I have 2 inputs I would write:
filebeat.inputs:
- type: filestream
  enabled: true
  ignore_older: 20h
  parsers:
    - multiline:
        type: pattern
        pattern: ''
        negate: false
        match: after
  prospector.scanner.check_interval: 10s
  harvester_buffer_size: 16384
  close_timeout: 5m
  close_inactive: 5m
  close_removed: true
  clean_renamed: true
  paths:
    - \\path\folder\*.log
  fields:
    document_type: "A"
- type: filestream
  enabled: true
  ignore_older: 20h
  parsers:
    - multiline:
        type: pattern
        pattern: ''
        negate: false
        match: after
  prospector.scanner.check_interval: 10s
  harvester_buffer_size: 16384
  close_timeout: 5m
  close_inactive: 5m
  close_removed: true
  clean_renamed: true
  paths:
    - \\other_path\other_folder\*.log
  fields:
    document_type: "B"
output.kafka:
  enabled: true
  hosts: ["abc:9092"]
  topic: test
  key: "%{[fields.document_type]}"
The problem is that, if I have 20 inputs, I have to write the same thing 20 times. Is there any possibility to write the shared filebeat.inputs settings just once and only add the path and the fields.document_type each time?
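One option worth testing is plain YAML anchors and merge keys, which the YAML parser expands before Filebeat reads the config. This is only a minimal sketch under that assumption; the Filebeat docs do not guarantee it, and the extra top-level key that holds the anchor may be ignored or trigger a warning:
input_defaults: &input_defaults   # anchor holding the shared settings
  type: filestream
  enabled: true
  ignore_older: 20h
  parsers:
    - multiline:
        type: pattern
        pattern: ''
        negate: false
        match: after
  prospector.scanner.check_interval: 10s
  harvester_buffer_size: 16384
  close_timeout: 5m
  close_inactive: 5m
  close_removed: true
  clean_renamed: true
filebeat.inputs:
- <<: *input_defaults             # merge the shared settings into this input
  paths:
    - \\path\folder\*.log
  fields:
    document_type: "A"
- <<: *input_defaults
  paths:
    - \\other_path\other_folder\*.log
  fields:
    document_type: "B"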

Related

Serverless: TypeError: Cannot read property 'stage' of undefined

frameworkVersion: '2'
plugins:
  - serverless-step-functions
  - serverless-python-requirements
  - serverless-parameters
  - serverless-pseudo-parameters
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'}
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221
package:
  exclude:
    - node_modules/**
    - venv/**
# Lambda functions
functions:
  generateAlert:
    handler: handler.generateAlert
  generateData:
    handler: handler.generateDataHandler
    timeout: 600
  approveDenied:
    handler: handler.approveDenied
    timeout: 600
stepFunctions:
  stateMachines:
    "claims-etl-and-insight-generation-${self:provider.stage}":
      loggingConfig:
        level: ALL
        includeExecutionData: true
        destinations:
          - Fn::GetAtt: ["ETLStepFunctionLogGroup", Arn]
      name: "claims-etl-and-insight-generation-${self:provider.stage}"
      definition:
        Comment: "${self:provider.stage} ETL Workflow"
        StartAt: RawQualityJob
        States:
          # Raw Data Quality Check Job Start
          RawQualityJob:
            Type: Task
            Resource: arn:aws:states:::glue:startJobRun.sync
            Parameters:
              JobName: "data_quality_v2_${self:provider.stage}"
              Arguments:
                "--workflow-name": "${self:provider.stage}-Workflow"
                "--dataset_id.$": "$.datasetId"
                "--client_id.$": "$.clientId"
            Next: DataQualityChoice
            Retry:
              - ErrorEquals: [States.ALL]
                MaxAttempts: 2
                IntervalSeconds: 10
                BackoffRate: 5
            Catch:
              - ErrorEquals: [States.ALL]
                Next: GenerateErrorAlertDataQuality
          # End Raw Data Quality Check Job
          DataQualityChoice:
            Type: Task
            Resource:
              Fn::GetAtt: [approveDenied, Arn]
            Next: Is Approved ?
          Is Approved ?:
            Type: Choice
            Choices:
              - Variable: "$.quality_status"
                StringEquals: "Denied"
                Next: FailState
            Default: HeaderLineJob
          FailState:
            Type: Fail
            Cause: "Denied status"
          # Header Line Job Start
          HeaderLineJob:
            Type: Parallel
            Branches:
              - StartAt: HeaderLineIngestion
                States:
                  HeaderLineIngestion:
                    Type: Task
                    Resource: arn:aws:states:::glue:startJobRun.sync
                    Parameters:
                      JobName: headers_lines_etl_rs_v2
                      Arguments:
                        "--workflow-name.$": "$.Arguments.--workflow-name"
                        "--dataset_id.$": "$.Arguments.--dataset_id"
                        "--client_id.$": "$.Arguments.--client_id"
                    End: True
                    Retry:
                      - ErrorEquals: [States.ALL]
                        MaxAttempts: 2
                        IntervalSeconds: 10
                        BackoffRate: 5
                    Catch:
                      - ErrorEquals: [States.ALL]
                        Next: GenerateErrorAlertHeaderLine
            End: True
          # Header Line Job End
          GenerateErrorAlertDataQuality:
            Type: Task
            Resource:
              Fn::GetAtt: [generateAlert, Arn]
            End: true
resources:
  Resources:
    # Cloudwatch Log
    "ETLStepFunctionLogGroup":
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: "ETLStepFunctionLogGroup_${self:provider.stage}"
This is what my serverless.yml file looks like.
When I run the command:
sls deploy --stage staging
it shows:
Type Error ----------------------------------------------
TypeError: Cannot read property 'stage' of undefined
at Variables.getValueFromOptions (/snapshot/serverless/lib/classes/Variables.js:648:37)
at Variables.getValueFromSource (/snapshot/serverless/lib/classes/Variables.js:579:17)
at /snapshot/serverless/lib/classes/Variables.js:539:12
Your Environment Information ---------------------------
Operating System: linux
Node Version: 14.4.0
Framework Version: 2.30.3 (standalone)
Plugin Version: 4.5.1
SDK Version: 4.2.0
Components Version: 3.7.4
How can I fix this? I tried with different versions of serverless.
There is an error in the yamlParser file, which is provided by serverless-step-functions.
Above is my serverless config file.
It looks like a $ sign is missing from your provider -> stage?
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'} # $ sign is missing?
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221

filebeat add_fields processor with condition

I'd like to add a field "app" with the value "apache-access" to every line that is exported to Graylog by the Filebeat "apache" module.
The following configuration should add the field, as I see an "event_dataset"="apache.access" field in Graylog, but it does not do anything.
If I remove the condition, the "add_fields" processor does add the field.
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.logstash:
  hosts: [ "localhost:5044" ]
processors:
  - add_fields:
      when:
        equals:
          event_dataset: "apache.access"
      target: ""
      fields:
        app: "apache-access"
logging.level: info
For whatever reason the field is called "event.dataset" in filebeat but displayed as "event_dataset" in Graylog.
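If the condition has to match the field name as Filebeat sees it (event.dataset) rather than the flattened name Graylog displays (event_dataset), the processor block would look like the sketch below. This is only an assumption based on the observation above, not a verified fix:
processors:
  - add_fields:
      when:
        equals:
          # dotted field name as Filebeat uses it internally
          event.dataset: "apache.access"
      target: ""
      fields:
        app: "apache-access"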

filebeat configuration using elasticsearch

I am facing an issue with Filebeat. I pulled the Filebeat image with docker pull docker.elastic.co/beats/filebeat:6.3.1.
My filebeat.yml file is:
filebeat.config:
  prospectors:
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false
processors:
  - add_cloud_metadata:
output.elasticsearch:
  hosts: ['192.0.0.0:9200']
  username: elastic
  password: changeme
setup.kibana:
  host: '192.0.0.0:5601'
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
When I run Filebeat, I am getting yum.logs and a harvester started for yum.log. Please help me.
Thanks in advance.
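If the problem is that Filebeat is harvesting yum.log (it matches the /var/log/*.log glob), one setting to try is exclude_files on the log input. A sketch under that assumption:
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
  # exclude_files takes a list of regular expressions matched against the file path
  exclude_files: ['yum\.log$']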

Codeception Content-disposition assert file

I want to assert an xls file. I get this back:
"attachment; filename="testify.xlsx""
How can I do assertions on the file?
I have the tmp path from $_SERVER['TMPDIR'] and a clean filename, but there is no file in the directory.
There are no built-in methods for this purpose; you will have to write your own helper.
If you are using PhpBrowser or one of the framework modules, they have two useful hidden methods:
_getResponseContent returns the page (or file) content,
_savePageSource saves it to a file.
So your helper method would look like this:
function seeXlsFileIsValid()
{
    // fetch the raw response body (the downloaded file) from the PhpBrowser module
    $fileContent = $this->getModule('PhpBrowser')->_getResponseContent();
    $this->assertTrue(..., 'returned xls file is not valid');
}
If you want to assert response headers, copy the seeHttpHeader method from the REST module.
codeception.yml
actor: Tester
paths:
  tests: tests
  log: tests/_output
  data: tests/_data
  support: tests/_support
  envs: tests/_envs
settings:
  bootstrap: _bootstrap.php
  colors: true
  memory_limit: 1024M
  shuffle: true
extensions:
  enabled:
    - Codeception\Extension\RunFailed
coverage:
  enabled: true
  c3_url: 'http://www-dev.testify.com:8080/c3.php'
  remote: false
  whitelist:
    include:
      - _php/*
api.suite.yml
class_name: ApiTester
modules:
  enabled:
    - PhpBrowser:
        url: http://www-dev.testify.com:8080
        clear_cookies: false
        restart: false
        curl:
          CURLOPT_RETURNTRANSFER: true
    - REST:
        url: http://www-dev.testify.com:8080
        depends: PhpBrowser
        part: Json
    - Asserts
    - Helper\Api
    - Db:
        dsn: 'sqlite:tests/_output/database.sqlite'
        user: ''
        password: ''
env:
  fast:

when condition not being evaluated with include statement

I have a tikitaka3.yml (main yml file) and a tikitaka3a.yml (playbook to be included).
I prompt the user for a variable, and then in the tasks section I call it, like so:
---
- hosts: all
  vars:
    khan:
    # contents: "{{ lookup('file', '/home/imran/Desktop/tobefetched/file1.txt') }}"
  vars_prompt:
    - name: targetenv
      prompt: 1.)EPC 2.)CLIENTS 3)TESTERS
      private: False
      default: "1"
  gather_facts: no
  tasks:
    - name: Include playbook tikitaka3a
      include: /home/khan/Desktop/playbooks/tikitaka3a.yml target=umar
      when: targetenv.stdout|int < 2  # this statement has no effect
      # when: targetenv == 1          # neither does this statement
      # when: targetenc == "1"        # and neither does this statement have effect
    # - name: stuff n stuff           # this task will give an error if not commented out
    #   debug: var=targetenv.stdout
The include statement always takes effect, without the when condition ever being evaluated.
Why is this happening?
When you include an Ansible task file, the when: condition is attached to every included task. This means you will see the tasks displayed even when the when: condition is false, though all of them will be skipped.
One problem with your code above is targetenv.stdout: a vars_prompt variable is a plain string, not a registered command result, so it has no .stdout attribute. Here is a working version with proper formatting:
- hosts: all
  gather_facts: no
  vars_prompt:
    - name: targetenv
      prompt: 1.)EPC 2.)CLIENTS 3)TESTERS
      private: False
      default: "1"
  tasks:
    - name: Include playbook tikitaka3a
      include: roles/test/tasks/tikitaka3a.yml target=umar
      when: targetenv|int < 2
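For completeness, the included file is just a flat list of tasks, each of which inherits the when: condition from the include. The contents below are hypothetical, for illustration only (the real tikitaka3a.yml is not shown in the question):
# tikitaka3a.yml (hypothetical contents, for illustration)
- name: Show which target was passed in
  debug:
    msg: "Running against {{ target }}"

- name: Do something on the target hosts
  ping: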