enabling dashboards for filebeat - filebeat

I am trying to get more visibility into AWS. I'd really like to use the prebuilt dashboards that come with Filebeat, but I keep running into issues with the visualizations for ELB and VPC flow logs. My configuration looks like this:
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "localhost:9243"
  protocol: "https"
  username: "kibana_user"
  password: "kibana_password"
setup.dashboards.enabled: true
setup.dashboards.directory: ${path.config}/kibana
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ["localhost:9200"]
  protocol: "https"
  username: "elastic_user"
  password: "password"
  indices:
    - index: "cloudtrail-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.cloudtrail"
    - index: "elb-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.elb"
    - index: "vpc-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.vpc"
processors:
  - add_fields:
      target: my_env
      fields:
        environment: development
In my dashboards directory I changed the filebeat-* index to vpc-* in Filebeat-aws-vpcflow-overview.json, cloudtrail-* in filebeat-aws-cloudtrail.json, and elb-* in Filebeat-aws-elb-overview.json. The CloudTrail dashboard works just fine; I only run into issues with the ELB and VPC flow visualizations. None of the ELB request visualizations work, and the top IP addresses visualization for VPC flow logs does not work either. Here are some screenshots.
Any help with this would be greatly appreciated.

For this particular situation, if you don't use the default filebeat-* index there are issues getting the prebuilt dashboards to load. I dropped the custom indices from my configuration and the dashboards loaded properly.
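For reference, here is a minimal sketch of the simplified output section (same hosts and credentials as above, with the custom indices block removed so events land in the default filebeat-* index that the prebuilt dashboards expect):

output.elasticsearch:
  hosts: ["localhost:9200"]
  protocol: "https"
  username: "elastic_user"
  password: "password"
  # no indices: block here, so the default filebeat-* index naming is used,
  # which is what the prebuilt dashboard visualizations point at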

Related

Forward Flex Gateway Logs to Splunk

I have an instance of MuleSoft's Flex Gateway (v1.2.0) installed on a Linux machine in a Podman container. I am trying to forward container logs as well as API logs to Splunk. Below is my log.yaml file in the /home/username/app folder. I am not sure what I am doing wrong, but the logs are not getting forwarded to Splunk.
apiVersion: gateway.mulesoft.com/v1alpha1
kind: Configuration
metadata:
  name: logging-config
spec:
  logging:
    outputs:
      - name: default
        type: splunk
        parameters:
          host: <instance-name>.splunkcloud.com
          port: "443"
          splunk_token: xxxxx-xxxxx-xxxx-xxxx
          tls: "on"
          tls.verify: "off"
          splunk_send_raw: "on"
    runtimeLogs:
      logLevel: info
      outputs:
        - default
    accessLogs:
      outputs:
        - default
Please advise.
The endpoint for Splunk's HTTP Event Collector (HEC) is https://http-input.<instance-name>.splunkcloud.com:443/services/collector/raw. If you're using a free trial of Splunk Cloud then change the port number to 8088. See https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector_on_Splunk_Cloud_Platform for details.
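As a rough sketch (keeping the rest of the log.yaml above unchanged), the output parameters would then point at the HEC host rather than the bare instance name; the hostname and token stay as placeholders:

parameters:
  host: http-input.<instance-name>.splunkcloud.com
  port: "443"   # "8088" on a Splunk Cloud free trial
  splunk_token: xxxxx-xxxxx-xxxx-xxxx
  tls: "on"
  tls.verify: "off"
  splunk_send_raw: "on"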
I managed to get this to work. The issue was that I had to give full permissions to the app folder using the chmod command. After that was done, the fluent-bit.conf file had an entry for Splunk and logs started flowing.
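For completeness, a sketch of the kind of command meant here (the recursive flag and the wide-open 777 mode are assumptions based on "full permissions"; a more restrictive mode that still lets the gateway user read the folder may be enough):

chmod -R 777 /home/username/app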

filebeat tomcat module and collecting webapp log files

I just installed Filebeat on my remote server to collect the logs of an app. Everything seems OK: the ELK stack retrieves the info and I can view it in Kibana.
Today, I want to collect the logs generated by 2 webapps hosted on the same Tomcat server, and I want to add a field that lets me filter on it in Kibana.
I am using the tomcat.yml module, which I want to duplicate as webapp1.yml and webapp2.yml.
In each of these files, I will add a field that corresponds to the name of my webapp:
webapp1.yml
- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - c:\app\webapp1.log
    var.rsa_fields: true
    var.rsa.misc.context: webapp1
webapp2.yml
- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - c:\app\webapp2.log
    var.rsa_fields: true
    var.rsa.misc.context: webapp2
But the Logstash index does not recognize this new context field.
How can I solve this?
Thanks for your help.
So, I found the solution...
- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - c:\app\webapp1.log
    # Toggle output of non-ECS fields (default true).
    #var.rsa_fields: true
    input:
      processors:
        - add_fields:
            target: fields
            fields:
              application-name: webapp1

*.aclpolicy file does not work - Auth using Active Directory

Summarizing my environment:
Running Rundeck (3.3.11) in a Kubernetes cluster
Dedicated MariaDB database connected via the JDBC connector
Configured Active Directory via JAAS using the RUNDECK_JAAS_LDAP_* variables; auth is working and I can log in with my AD user
Configured the ACL policy template using a K8s Secret, as in this Zoo sample:
volumeMounts:
  - name: aclpolicy
    mountPath: /home/rundeck/etc/rundeck-adm.aclpolicy
    subPath: rundeck-adm.aclpolicy
volumes:
  - name: aclpolicy
    secret:
      secretName: rundeck-adm-policy
      items:
        - key: rundeck-admin-role.yaml
          path: rundeck-adm.aclpolicy
Variables exported to Rundeck Pod:
RUNDECK_JAAS_MODULES_0=JettyCombinedLdapLoginModule
RUNDECK_JAAS_LDAP_USERBASEDN=OU=Users,OU=MYBRAND,DC=corp,DC=MYDOMAIN
RUNDECK_JAAS_LDAP_ROLEBASEDN=OU=RundeckRoles,OU=Users,OU=MYBRAND,DC=corp,DC=MYDOMAIN
RUNDECK_JAAS_LDAP_FLAG=sufficient
RUNDECK_JAAS_LDAP_BINDDN=myrundeckuser#mybrand.mydomain
RUNDECK_JAAS_LDAP_BINDPASSWORD=foo
In my MS Active Directory the structure is:
- mybrand.mydomain
  - MYBRAND
    - Users
      - RundeckRoles
        - rundeck-adm (group with my user associated)
After I log in, it returns this screen:
EDIT1:
My rundeck-admin-role.yaml:
description: Admin project level access control. Applies to resources within a specific project.
context:
  project: '.*' # all projects
for:
  resource:
    - equals:
        kind: job
      allow: [create] # allow create jobs
    - equals:
        kind: node
      allow: [read,create,update,refresh] # allow refresh node sources
    - equals:
        kind: event
      allow: [read,create] # allow read/create events
  adhoc:
    - allow: [read,run,runAs,kill,killAs] # allow running/killing adhoc jobs
  job:
    - allow: [create,read,update,delete,run,runAs,kill,killAs] # allow create/read/write/delete/run/kill of all jobs
  node:
    - allow: [read,run] # allow read/run for nodes
by:
  group: rundeck-adm
---
description: Admin Application level access control, applies to creating/deleting projects, admin of user profiles, viewing projects and reading system information.
context:
  application: 'rundeck'
for:
  resource:
    - equals:
        kind: project
      allow: [create] # allow create of projects
    - equals:
        kind: system
      allow: [read,enable_executions,disable_executions,admin] # allow read of system info, enable/disable all executions
    - equals:
        kind: system_acl
      allow: [read,create,update,delete,admin] # allow modifying system ACL files
    - equals:
        kind: user
      allow: [admin] # allow modify user profiles
  project:
    - match:
        name: '.*'
      allow: [read,import,export,configure,delete,admin] # allow full access of all projects or use 'admin'
  project_acl:
    - match:
        name: '.*'
      allow: [read,create,update,delete,admin] # allow modifying project-specific ACL files
  storage:
    - allow: [read,create,update,delete] # allow access for /ssh-key/* storage content
by:
  group: rundeck-adm
Can someone help me find my mistake?
Guys, I found the problem!
I was missing the variables RUNDECK_JAAS_LDAP_ROLEMEMBERATTRIBUTE and RUNDECK_JAAS_LDAP_ROLEOBJECTCLASS; if you don't declare them, Rundeck assumes different default values.
After I applied these vars and re-deployed my Rundeck pod, access with my AD account worked again.
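For Active Directory, those two typically look like the following (a sketch: member and group are the usual AD role attribute and object class; adjust them to your directory):

RUNDECK_JAAS_LDAP_ROLEMEMBERATTRIBUTE=member
RUNDECK_JAAS_LDAP_ROLEOBJECTCLASS=group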
To help the community, here is the list of vars that I used in my deployment:
"JVM_MAX_RAM_PERCENTAGE"
"RUNDECK_DATABASE_URL"
"RUNDECK_DATABASE_DRIVER"
"RUNDECK_DATABASE_USERNAME"
"RUNDECK_DATABASE_PASSWORD"
"RUNDECK_LOGGING_AUDIT_ENABLED"
"RUNDECK_JAAS_MODULES_0"
"RUNDECK_JAAS_LDAP_FLAG"
"RUNDECK_JAAS_LDAP_PROVIDERURL"
"RUNDECK_JAAS_LDAP_BINDDN"
"RUNDECK_JAAS_LDAP_BINDPASSWORD"
"RUNDECK_JAAS_LDAP_USERBASEDN"
"RUNDECK_JAAS_LDAP_ROLEBASEDN"
"RUNDECK_GRAILS_URL"
"RUNDECK_SERVER_FORWARDED"
"RUNDECK_JAAS_LDAP_USERRDNATTRIBUTE"
"RUNDECK_JAAS_LDAP_USERIDATTRIBUTE"
"RUNDECK_JAAS_LDAP_ROLEMEMBERATTRIBUTE"
The JAAS module that I used was JettyCombinedLdapLoginModule.

Cloudfront distribution fails to load subdirectories in one stack but works in another

I'm using CloudFormation (via the Serverless Framework) to deploy static sites to S3 and set up a CloudFront distribution that is aliased from a Route 53 domain.
I have this working for two domains; each is a new domain that was created in Route 53. I am trying the same setup with an older domain that I am transferring to Route 53 from an existing registrar.
The CloudFront distribution for this new domain fails to load subdirectories, i.e. https://[mydistid].cloudfront.net/sub/dir/ does not load the resource at https://[mydistid].cloudfront.net/sub/dir/index.html
There is a common gotcha covered in other SO questions: you must specify the S3 bucket as a custom origin in order for CloudFront to apply the Default Root Object to subdirectories.
I have done this, as can be seen from my serverless.yml CloudFrontDistribution resource:
XxxxCloudFrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Aliases:
        - ${self:provider.environment.CUSTOM_DOMAIN}
      Origins:
        - DomainName: ${self:provider.environment.BUCKET_NAME}.s3.amazonaws.com
          Id: Xxxx
          CustomOriginConfig:
            HTTPPort: 80
            HTTPSPort: 443
            OriginProtocolPolicy: https-only
      Enabled: 'true'
      DefaultRootObject: index.html
      CustomErrorResponses:
        - ErrorCode: 404
          ResponseCode: 200
          ResponsePagePath: /error.html
      DefaultCacheBehavior:
        AllowedMethods:
          - DELETE
          - GET
          - HEAD
          - OPTIONS
          - PATCH
          - POST
          - PUT
        TargetOriginId: Xxxx
        Compress: 'true'
        ForwardedValues:
          QueryString: 'false'
          Cookies:
            Forward: none
        ViewerProtocolPolicy: redirect-to-https
      ViewerCertificate:
        AcmCertificateArn: ${self:provider.environment.ACM_CERT_ARN}
        SslSupportMethod: sni-only
This results in a CloudFront distribution with the S3 bucket as a 'Custom Origin' in AWS.
However, when accessed, subdirectories route to the error page rather than to the default root object in that directory.
What is extremely odd is that this uses the same config as another stack that works fine. The only diff I can see (so far) is that the working stack has a Route 53-created domain, whereas this one uses a domain that originated from another registrar, so I'll see what happens once the name server migration completes. I'm skeptical this will resolve the issue, though, as the CloudFront distribution shouldn't be affected by the Route 53 domain status.
I have both stacks working now. The problem was the use of the S3 REST API URL:
${self:provider.environment.BUCKET_NAME}.s3.amazonaws.com
Changing both to the S3 website URL works:
${self:provider.environment.BUCKET_NAME}.s3-website-us-east-1.amazonaws.com
I have no explanation as to why the former URL worked with one stack but not the other.
The other change I needed to make was to set the OriginProtocolPolicy of CustomOriginConfig to http-only, because S3 website endpoints don't support HTTPS.
Here is my updated CloudFormation config:
XxxxCloudFrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Aliases:
        - ${self:provider.environment.CUSTOM_DOMAIN}
      Origins:
        - DomainName: ${self:provider.environment.BUCKET_NAME}.s3-website-us-east-1.amazonaws.com
          Id: Xxxx
          CustomOriginConfig:
            HTTPPort: 80
            OriginProtocolPolicy: http-only
      Enabled: 'true'
      DefaultRootObject: index.html
      CustomErrorResponses:
        - ErrorCode: 404
          ResponseCode: 200
          ResponsePagePath: /error.html
      DefaultCacheBehavior:
        AllowedMethods:
          - DELETE
          - GET
          - HEAD
          - OPTIONS
          - PATCH
          - POST
          - PUT
        TargetOriginId: Xxxx
        Compress: 'true'
        ForwardedValues:
          QueryString: 'false'
          Cookies:
            Forward: none
        ViewerProtocolPolicy: redirect-to-https
      ViewerCertificate:
        AcmCertificateArn: ${self:provider.environment.ACM_CERT_ARN}
        SslSupportMethod: sni-only

Spinnaker on Titus cloud provider

Are there any steps for configuring Spinnaker/Halyard to work with a Titus-based cluster? - https://netflix.github.io/titus/
There aren't any steps described in the documentation: https://www.spinnaker.io/setup/install/providers/
Also, check this Github issue: https://github.com/spinnaker/spinnaker.github.io/issues/869
There is a sample config in the github repo:
titus:
  enabled: true
  awsVpc: vpc0 # this is the default vpc used by titus
  accounts:
    - name: titusdevint
      environment: test
      discovery: "http://discovery.compary.com/v2"
      discoveryEnabled: true
      registry: testregistry # reference to the docker registry being used
      awsAccount: test # aws account underpinning
      autoscalingEnabled: true
      loadBalancingEnabled: false # load balancing will be released at a later date
      regions:
        - name: us-east-1
          url: https://myTitus.us-east-1.company.com/
          port: 7104
          autoscalingEnabled: true
          loadBalancingEnabled: false
        - name: eu-west-1
          url: https://myTitus.eu-west-1.company.com/
          port: 7104
          autoscalingEnabled: true
          loadBalancingEnabled: false
https://github.com/spinnaker/clouddriver/tree/master/clouddriver-titus
Right now you'll have to edit clouddriver.yml manually and then update via Halyard.
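As a rough sketch of what that can look like on a standard Halyard install (the path assumes the "default" Halyard deployment; the titus block is the sample config shown above):

# ~/.hal/default/profiles/clouddriver-local.yml
titus:
  enabled: true
  # ... accounts and regions as in the sample config above ...

# then redeploy so Clouddriver picks up the change:
#   hal deploy apply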