Filebeat tomcat module and collecting webapp log files

I just installed Filebeat on my remote server to collect logs from an app. Everything seems OK: the ELK stack retrieves the data and I can view it in Kibana.
Today, I want to collect the logs generated by 2 webapps hosted on the same Tomcat server, and I want to add a field I can filter on in Kibana.
I am using the tomcat.yml module, which I want to rename to webapp1.yml and webapp2.yml.
In each of these files, I will add a field that corresponds to the name of my webapp:
webapp1.yml
- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - c:\app\webapp1.log
    var.rsa_fields: true
    **var.rsa.misc.context: webapp1**
webapp2.yml
- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - c:\app\webapp2.log
    var.rsa_fields: true
    **var.rsa.misc.context: webapp2**
But the Logstash index does not recognize this new context field.
How can I solve this?
Thanks for your help.

So, I found the solution...
- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - c:\app\webapp1.log
    # Toggle output of non-ECS fields (default true).
    #var.rsa_fields: true
    input:
      processors:
        - add_fields:
            target: fields
            fields:
              application-name: webapp1
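The same pattern should presumably work for the second webapp: webapp2.yml mirrors the file above, only the log path and the field value change, and the events can then be filtered in Kibana on fields.application-name.

- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - c:\app\webapp2.log
    input:
      processors:
        - add_fields:
            target: fields
            fields:
              application-name: webapp2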

Related

Can we send data to wazuh-indexer using Filebeat without a Wazuh agent?

I am trying to send data from Filebeat to wazuh-indexer directly, but I get connection errors between Filebeat and Elasticsearch. The following is my Filebeat configuration:
filebeat.inputs:
  - input_type: log
    paths:
      - /home/siem/first4.log
    enable: true

output.elasticsearch:
  hosts: ["192.168.0.123:9200"]
  protocol: https
  index: "test"
  username: admin
  password: admin
  ssl.certificate_authorities:
    - /etc/filebeat/certs/root-ca.pem
  ssl.certificate: "/etc/filebeat/certs/filebeat-1.pem"
  ssl.key: "/etc/filebeat/certs/filebeat-1-key.pem"

setup.template.json.enabled: false
setup.ilm.overwrite: true
setup.ilm.enabled: false
setup.template.name: false
setup.template.pattern: false
#setup.template.json.path: '/etc/filebeat/wazuh-template.json'
#setup.template.json.name: 'wazuh'
#filebeat.modules:
#  - module: wazuh
#    alerts:
#      enabled: true
#    archives:
#      enabled: false
Following is the error:
2023-01-30T09:29:18.634Z ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(https://192.168.0.123:9200)): Get "https://192.168.0.123:9200": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2023-01-30T09:29:18.635Z INFO [publisher_pipeline_output] pipeline/output.go:145 Attempting to reconnect to backoff(elasticsearch(https://192.168.0.123:9200)) with 1 reconnect attempt(s)
2023-01-30T09:29:18.635Z INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2023-01-30T09:29:18.635Z INFO [publisher] pipeline/retry.go:223 done
2023-01-30T09:29:46.177Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s
Can anyone tell me what mistake I am making?
Yes, you could send logs directly using Filebeat without a Wazuh agent but that way you won't benefit from the Wazuh analysis engine.
With your current configuration, the logs will be ingested under filebeat-<version>-<date>. Make sure to create an index pattern for these events.
As your logs indicate, there's a connectivity issue between Filebeat and the Wazuh indexer. To diagnose the problem:
Try running the following call to make sure you can reach the Wazuh indexer:
curl -k -u admin:admin https://192.168.0.123:9200
Run a Filebeat test output:
filebeat test output
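If both checks succeed but Filebeat still reports the timeout, one thing worth trying purely for troubleshooting (an assumption, not a recommended production setting) is relaxing certificate verification and raising the output timeout in filebeat.yml, to rule out TLS or slow-network issues:

output.elasticsearch:
  hosts: ["192.168.0.123:9200"]
  protocol: https
  # Troubleshooting only: skip certificate verification to rule out TLS problems.
  ssl.verification_mode: none
  # Allow more time before "context deadline exceeded" (the default is 90s).
  timeout: 120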

Forward Flex Gateway Logs to Splunk

I have an instance of MuleSoft's Flex Gateway (v 1.2.0) installed on a Linux machine in a Podman container. I am trying to forward container logs as well as API logs to Splunk. Below is my log.yaml file in the /home/username/app folder. I'm not sure what I am doing wrong, but the logs are not getting forwarded to Splunk.
apiVersion: gateway.mulesoft.com/v1alpha1
kind: Configuration
metadata:
  name: logging-config
spec:
  logging:
    outputs:
      - name: default
        type: splunk
        parameters:
          host: <instance-name>.splunkcloud.com
          port: "443"
          splunk_token: xxxxx-xxxxx-xxxx-xxxx
          tls: "on"
          tls.verify: "off"
          splunk_send_raw: "on"
    runtimeLogs:
      logLevel: info
      outputs:
        - default
    accessLogs:
      outputs:
        - default
Please advise.
The endpoint for Splunk's HTTP Event Collector (HEC) is https://http-input.<instance-name>.splunkcloud.com:443/services/collector/raw. If you're using a free trial of Splunk Cloud then change the port number to 8088. See https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector_on_Splunk_Cloud_Platform for details.
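Based on that endpoint, the output parameters would presumably look something like the sketch below; the token and the other values are unchanged from the question, and the port becomes "8088" on a free trial:

outputs:
  - name: default
    type: splunk
    parameters:
      host: http-input.<instance-name>.splunkcloud.com
      port: "443"
      splunk_token: xxxxx-xxxxx-xxxx-xxxx
      tls: "on"
      tls.verify: "off"
      splunk_send_raw: "on"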
I managed to get this to work. The issue was that I had to give full permissions to the app folder using the "chmod" command. After that was done, the fluent-bit.conf file had an entry for Splunk and logs started flowing.

Filebeat Config help for type: aws-cloudwatch

This is my filebeat config for aws-cloudwatch.
- type: aws-cloudwatch
  log_group_arn: arn:aws:logs:us-x-xxxx1:x:loxxxxxg-group:/aws/aes/domains/xxxxx-dev/:
  scan_frequency: 1m
  start_position: end
  role_arn: arn:aws:iam::xxxxxxxxxxxx:role/ec2-role-xxxxxx-ansible-us-xxxx-1
  proxy_uri: http://x.app.xxxxxxx.com:80
  enabled: true
I would like to know the minimum config I would need to test the setup.
https://www.elastic.co/guide/en/beats/filebeat/7.13/filebeat-input-aws-cloudwatch.html#filebeat-input-aws-cloudwatch
I am using Filebeat version 7.13.
Can role_arn be used instead of credential_profile_name?
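As a rough sketch (based on the 7.13 documentation linked above, not a tested configuration, and with placeholder ARNs), a minimal aws-cloudwatch input would presumably only need the input type, a log group ARN and credentials; role_arn is listed among the supported AWS credential options, so it should work as an alternative to credential_profile_name:

filebeat.inputs:
  - type: aws-cloudwatch
    # Placeholder ARN: point this at the log group to read from.
    log_group_arn: arn:aws:logs:us-east-1:123456789012:log-group:test-group:*
    scan_frequency: 1m
    start_position: beginning
    # Placeholder role, assumed via the default credential chain.
    role_arn: arn:aws:iam::123456789012:role/filebeat-cloudwatch-role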

Enabling dashboards for Filebeat

I am trying to develop more visibility around AWS. I'd really like to use the prebuilt dashboards that come with Filebeat, but I constantly run into issues with the visualizations for elb and vpcflow logs. My configuration looks like this:
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:
  host: "localhost:9243"
  protocol: "https"
  username: "kibana_user"
  password: "kibana_password"

setup.dashboards.enabled: true
setup.dashboards.directory: ${path.config}/kibana
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["localhost:9200"]
  protocol: "https"
  username: "elastic_user"
  password: "password"
  indices:
    - index: "cloudtrail-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.cloudtrail"
    - index: "elb-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.elb"
    - index: "vpc-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.vpc"

processors:
  - add_fields:
      target: my_env
      fields:
        environment: development
In my dashboards directory I changed the filebeat-* index to vpc-* in Filebeat-aws-vpcflow-overview.json, cloudtrail-* in filebeat-aws-cloudtrail.json, and elb-* in Filebeat-aws-elb-overview.json. The cloudtrail dashboard works just fine; I only run into issues with the elb and vpcflow visualizations. None of the elb request visualizations work, and the top IP addresses visualization for vpcflow logs does not work either.
Any help with this would be greatly appreciated.
For this particular situation, if you don't use the default filebeat-* index there are issues getting the prebuilt dashboards to spin up. I dropped the custom indexing from my configuration and was able to get the dashboards to load properly.
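In other words, the output section ends up without the indices block, so events go to the default filebeat-* index that the prebuilt dashboards expect; a sketch based on the configuration from the question:

setup.dashboards.enabled: true
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["localhost:9200"]
  protocol: "https"
  username: "elastic_user"
  password: "password"
  # No custom indices section here.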

serverless-api-gateway-caching plugin is not setting the cache size

I am trying to set the AWS API Gateway cache using the serverless-api-gateway-caching plugin.
Everything is working fine except the cacheSize.
This is my configuration for the caching:
caching:
  enabled: true
  clusterSize: '13.5'
  ttlInSeconds: 3600
  cacheKeyParameters:
    - name: request.path.param1
    - name: request.querystring.param2
The cache is configured correctly, but the cache size is always the default '0.5'.
Any idea what is wrong?
sls -v
1.42.3
node --version
v9.11.2
serverless-api-gateway-caching: 1.4.0
Regards
Because the "Cache Capacity" setting is global per stage, it is not possible to set it per endpoint.
So the plugin only checks this parameter in the global serverless configuration and ignores it at the endpoint level.
This means the right configuration is:
custom:
  apiGatewayCaching:
    enabled: true
    clusterSize: '13.5'
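The per-endpoint settings from the question (enabled, ttlInSeconds, cacheKeyParameters) would then presumably stay on the individual http event, with only clusterSize kept in the global block; the function name and path below are made up for illustration:

custom:
  apiGatewayCaching:
    enabled: true
    clusterSize: '13.5'

functions:
  getItem:
    handler: handler.getItem
    events:
      - http:
          path: /item/{param1}
          method: get
          caching:
            enabled: true
            ttlInSeconds: 3600
            cacheKeyParameters:
              - name: request.path.param1
              - name: request.querystring.param2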