filebeat configuration using elasticsearch - filebeat

I am facing an issue with Filebeat. I pulled the Filebeat image with docker pull docker.elastic.co/beats/filebeat:6.3.1.
My filebeat.yml file is:
filebeat.config:
  prospectors:
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false
processors:
- add_cloud_metadata:
output.elasticsearch:
  hosts: ['192.0.0.0:9200']
  username: elastic
  password: changeme
setup.kibana:
  host: '192.0.0.0:5601'
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
When I run Filebeat, all I see is yum.log being picked up ("harvester started for yum.log"). Please help me.
Thanks in advance.
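A minimal sketch of how this image is typically run with a custom filebeat.yml mounted over the default one and the host's logs mounted into the container (the mount paths here are assumptions, not taken from the original post):
docker run -d \
  --name=filebeat \
  --user=root \
  -v "$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  -v /var/log:/var/log:ro \
  docker.elastic.co/beats/filebeat:6.3.1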

Gitpod cannot resolve workspace image: hostname required on workspace start

I have Gitpod self-hosted in EKS. When I try to start a new workspace I get this error:
Request createWorkspace failed with message: 13 INTERNAL: cannot
resolve workspace image: hostname required
Unknown Error: { "code": -32603 }
I haven't found any solution.
Any ideas?
Thank you.
Here is my gitpod-config.yaml:
apiVersion: v1
authProviders: []
blockNewUsers:
  enabled: false
  passlist: []
certificate:
  kind: secret
  name: https-certificates
containerRegistry:
  inCluster: true
  s3storage:
    bucket: custom-bucket
    certificate:
      kind: secret
      name: object-storage-gitpod-token
database:
  inCluster: false
  external:
    certificate:
      kind: secret
      name: mysql-gitpod-token
domain: my-domain.com
imagePullSecrets: null
jaegerOperator:
  inCluster: true
kind: Full
metadata:
  region: eu-west-1
objectStorage:
  inCluster: true
observability:
  logLevel: info
repository: eu.gcr.io/gitpod-core-dev/build
workspace:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
  runtime:
    containerdRuntimeDir: /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/containerd/containerd.sock
    fsShiftMethod: shiftfs
I'm a Gitpodder and wrote most of the Installer. This is usually down to a misconfiguration.
Can you post your config.yaml (redacting the domain) please and hopefully I'll be able to see the issue?
Check that your .env file has an entry for CERTIFICATE_ARN= and that the certificate covers three entries for the base domain (a sketch of the .env follows below).
e.g. if DOMAIN=mygitpod.domain.com, the cert needs these three:
mygitpod.domain.com
*.mygitpod.domain.com
*.ws.mygitpod.domain.com
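Illustrative .env entries (placeholder values, not taken from this thread):
DOMAIN=mygitpod.domain.com
CERTIFICATE_ARN=arn:aws:acm:eu-west-1:123456789012:certificate/00000000-0000-0000-0000-000000000000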
I had this error and I resolved it by adding DNS records (example below) for:
$DOMAIN
*.$DOMAIN
*.ws.$DOMAIN
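For example, as plain DNS records (illustrative; point them at whatever your cluster's load balancer resolves to):
$DOMAIN        CNAME  <load-balancer-hostname>
*.$DOMAIN      CNAME  <load-balancer-hostname>
*.ws.$DOMAIN   CNAME  <load-balancer-hostname>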
I run v2022.03.1 and now have all three DNS records configured.
It works.
Thanks to everybody who responded.

Loki config with s3

I can't get Loki to connect to AWS S3 using docker-compose. Logs are visible in Grafana but the S3 bucket remains empty.
The s3 bucket is public and I have an IAM role attached to allow s3:FullAccess.
I updated loki to v2.0.0 and changed the period to 24h but it made no difference. There are no errors in the loki logs.
Here are the selected lines from docker logs (loki):
msg="Starting Loki" version="(version=master-4e661cd, branch=master, revision=4e661cde)"
caller=server.go:225 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
caller=worker.go:65 msg="no address specified, not starting worker"
msg="cleaning up mapped rules directory" path=/loki/tmprules
msg=initialising module=memberlist-kv
msg=initialising module=store
msg=initialising module=server
msg=initialising module=ring
msg="value is nil" key=collectors/ring index=1
msg=initialising module=ingester
msg="not loading tokens from file, tokens file path is empty"
msg="instance not found in ring, adding with no tokens" ring=ingester
msg="auto-joining cluster after timeout" ring=ingester
msg=initialising module=table-manager
msg=initialising module=distributor
msg=initialising module=ingester-querier
msg=initialising module=ruler
msg="ruler up and running"
msg="Loki started"
msg="synching tables" expected_tables=132
Here is my loki.config:
auth_enabled: false
server:
  http_listen_port: 3100
distributor:
  ring:
    kvstore:
      store: memberlist
ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
schema_config:
  configs:
    - from: 2020-10-27
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
    resync_interval: 5s
    shared_store: s3
  aws:
    s3: s3://AKIARE3@us-east-1/mydomain.com.docker.loki.logs
    s3forcepathstyle: true
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
Here is my docker-compose.yaml:
version: "3.8"
networks:
traefik:
external: true
volumes:
data:
services:
fluentd:
image: grafana/fluent-plugin-loki:master
command:
- "fluentd"
- "-v"
- "-p"
- "/fluentd/plugins"
environment:
LOKI_URL: http://loki:3100
LOKI_USERNAME:
LOKI_PASSWORD:
container_name: "fluentd"
restart: always
ports:
- '24224:24224'
networks:
- traefik
volumes:
- type: bind
source: ./config/fluent.conf
target: /fluentd/etc/fluent.conf
logging:
options:
tag: docker.monitoring
loki:
image: grafana/loki:master
container_name: "loki"
restart: always
networks:
- traefik
volumes:
- type: volume
source: data
target: /loki
ports:
- 3100
volumes:
- type: bind
source: ./config/s3.loki.conf
target: /loki/etc/loki.conf
depends_on:
- fluentd
I finally worked this out. The boltdb-shipper setup requires a compactor, but Loki gives no warning about it. Best practice is to create an AWS S3 bucket without any public access, then create an IAM user with programmatic access only. Create an access policy that gives full access only to the bucket you created and attach it to the user's permissions; you do not need to attach a policy to the bucket itself. If there is a "/" anywhere in your URL (e.g. in the secret key), make sure it is escaped as %2F, otherwise you will get an auth error. Note that this config is for Loki v2.0.0, which was released yesterday.
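For reference, a bucket-scoped policy along these lines is enough (a sketch, not the exact policy I used; the bucket name matches the config below):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mydomain.com.docker.loki.logs",
        "arn:aws:s3:::mydomain.com.docker.loki.logs/*"
      ]
    }
  ]
}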
Here are my complete working docker-compose and loki config files. I put them on an external network to enable prometheus monitoring.
Here is my docker-compose.yaml:
version: "3.8"
networks:
appnet:
external: true
volumes:
loki_data:
services:
fluentd:
container_name: "fluentd"
image: grafana/fluent-plugin-loki:master
command:
- "fluentd"
- "-v"
- "-p"
- "/fluentd/plugins"
environment:
LOKI_URL: http://loki:3100
LOKI_USERNAME:
LOKI_PASSWORD:
restart: always
ports:
- '24224:24224'
networks:
- appnet
volumes:
- type: bind
source: ./config/fluent.conf
target: /fluentd/etc/fluent.conf
loki:
container_name: "loki"
image: grafana/loki:2.0.0
restart: always
networks:
- appnet
ports:
- 3100
volumes:
- type: volume
source: loki_data
target: /data
- type: bind
source: ./config/s3-loki-bolt-conf.yml
target: /etc/loki/local-config.yaml
command: -config.file=/etc/loki/local-config.yaml
depends_on:
- fluentd
Here is my Loki config in prometheus/config/s3-loki-bolt-conf.yml. You can name this file anything you want, but keep the target file name as above, since that is Loki's default config path.
auth_enabled: false
ingester:
  chunk_idle_period: 3m
  chunk_block_size: 262144
  chunk_retain_period: 1m
  max_transfer_retries: 0
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
compactor:
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: aws
schema_config:
  configs:
    - from: 2020-07-01
      store: boltdb-shipper
      object_store: aws
      schema: v11
      index:
        prefix: loki_index_
        period: 24h
server:
  http_listen_port: 3100
storage_config:
  aws:
    s3: s3://ACCESS_KEY:SECRET_ACCESS_KEY@us-west-1/mydomain.com.docker.loki.logs
  boltdb_shipper:
    active_index_directory: /loki/index
    shared_store: s3
    cache_location: /loki/boltdb-cache
chunk_store_config:
  max_look_back_period: 0s
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
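Once chunks start flushing, you can check that objects are actually landing in the bucket, for example with the AWS CLI:
aws s3 ls s3://mydomain.com.docker.loki.logs --recursive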
For those who want to use boltdb-shipper and store data in an S3-compatible object store (in my case Scaleway), using Helm and Loki 2.0.0, here is my values.yml:
loki:
  enabled: true
  config:
    auth_enabled: false
    ingester:
      chunk_idle_period: 3m
      chunk_block_size: 262144
      chunk_retain_period: 1m
      max_transfer_retries: 0
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
    compactor:
      working_directory: /data/loki/boltdb-shipper-compactor
      shared_store: aws
    schema_config:
      configs:
        - from: 2020-11-13
          store: boltdb-shipper
          object_store: aws
          schema: v11
          index:
            prefix: loki_index_
            period: 24h
    server:
      http_listen_port: 3100
    storage_config:
      aws:
        s3: s3://<key>:<secret>@s3.fr-par.scw.cloud/<bucket-name>
        region: fr-par
        s3forcepathstyle: true
      boltdb_shipper:
        active_index_directory: /data/loki/index
        shared_store: s3
        cache_location: /data/loki/boltdb-cache
    chunk_store_config:
      max_look_back_period: 0s
    table_manager:
      retention_deletes_enabled: true
      retention_period: 720h
promtail:
  enabled: true
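These values follow the layout of the loki-stack Helm chart (top-level loki: and promtail: keys); assuming that chart, deploying looks roughly like:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install loki grafana/loki-stack -f values.yml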

How to redirect to the dashboard from a URL?

I currently access the V2 dashboard through http://traefik.my.server:8080/dashboard/ (Traefik runs in a docker container and 8080 is exposed to the host).
I would like to change that so that the dashboard is available at http://traefik.my.server/dashboard
I tried to add the following labels to configure this behavior but I get a 404 when accessing http://traefik.my.server/dashboard
- traefik.http.routers.dashboard.rule=Host(`traefik.my.server:`) && Path(`/dashboard`)
- traefik.http.services.dashboard.loadbalancer.server.port=8080
- traefik.http.routers.dashboard.entryPoints=http
(the http entrypoint is port 80)
What is the correct way to set up such a redirection?
Recommended reading:
https://docs.traefik.io/v2.1/operations/dashboard/#secure-mode
https://blog.containo.us/traefik-2-0-docker-101-fc2893944b9d
https://github.com/containous/blog-posts/tree/master/2019_09_10-101_docker
FYI, it's not a redirection but routing.
https://community.containo.us/t/how-to-redirect-to-the-dashboard-from-a-url/4082/2
Following up on @ldez's help at https://community.containo.us/t/how-to-redirect-to-the-dashboard-from-a-url/4082, a working configuration is below.
The docker-compose file:
services:
  traefik:
    container_name: traefik
    image: traefik
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
    volumes:
      - /etc/docker/container-data/traefik:/etc/traefik
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro
    labels:
      - traefik.http.routers.api.rule=Host(`traefik.mydomain.org`)
      - traefik.http.routers.api.service=api@internal
      - traefik.http.routers.api.middlewares=lan
      - traefik.http.middlewares.lan.ipwhitelist.sourcerange=192.168.10.0/24, 192.168.20.0/24
      - traefik.enable=true
version: "3"
The configuration file:
global:
  sendAnonymousUsage: true
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
api:
  dashboard: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
    defaultRule: "Host(`{{ index .Labels \"com.docker.compose.service\" }}.mydomain.org`)"
log:
  level: INFO
  #level: DEBUG
certificatesResolvers:
  le:
    acme:
      email: le@mydomain.org
      storage: /etc/traefik/acme.json
      tlsChallenge: {}
      #caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
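If you specifically want the dashboard under a path on one host, as in the original question, a label sketch (untested; note the dashboard also needs its /api prefix routed, and api@internal is Traefik's built-in service):
- traefik.enable=true
- traefik.http.routers.dashboard.rule=Host(`traefik.my.server`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
- traefik.http.routers.dashboard.service=api@internal
- traefik.http.routers.dashboard.entryPoints=http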

filebeat add_fields processor with condition

I'd like to add a field "app" with the value "apache-access" to every line that is exported to Graylog by the Filebeat "apache" module.
The following configuration should add the field, since I see an "event_dataset"="apache.access" field in Graylog, but it does not do anything.
If I remove the condition, the "add_fields" processor does add a field though.
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.logstash:
  hosts: [ "localhost:5044" ]
processors:
  - add_fields:
      when:
        equals:
          event_dataset: "apache.access"
      target: ""
      fields:
        app: "apache-access"
logging.level: info
For whatever reason the field is called "event.dataset" in filebeat but displayed as "event_dataset" in Graylog.
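So the condition needs to match on the Filebeat-side name; a corrected processors block (an untested sketch based on the config above) would be:
processors:
  - add_fields:
      when:
        equals:
          event.dataset: "apache.access"
      target: ""
      fields:
        app: "apache-access"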

Unable to deploy application on EC2 instance using AWS CloudFormation template through cfn-init and UserData script

I am trying to deploy a sample.war application on an EC2 instance at launch time; that is, when an instance is launched, the application should be deployed on it automatically using cfn-init and Metadata. I added a user with a policy and authentication, with no luck. If I wget the S3 path directly, the file downloads fine. Below is my script. What am I missing, or is there another way to do this?
---
AWSTemplateFormatVersion: 2010-09-09
Description: Test QA Template
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref AMIIdParam
      InstanceType: !Ref InstanceType
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              java-1.8.0-openjdk.x86_64: []
              tomcat: []
              httpd.x86_64: []
          services:
            sysvinit:
              httpd:
                enabled: true
                ensureRunning: true
          files:
            /usr/share/tomcat/webapps/sample.zip:
              source: https://s3.amazonaws.com/mybucket/sample.zip
              mode: '000500'
              owner: tomcat
              group: tomcat
              authentication: S3AccessCreds
      AWS::CloudFormation::Authentication:
        S3AccessCreds:
          type: 'S3'
          accessKeyId: !Ref HostKeys
          secretKey:
            Fn::GetAtt:
              - HostKeys
              - SecretAccessKey
          buckets: !Ref BucketName
  CfnUser:
    Type: AWS::IAM::User
    Properties:
      Path: '/'
      Policies:
        - PolicyName: 'S3Access'
          PolicyDocument:
            Statement:
              - Effect: 'Allow'
                Action: s3:*
                Resource: '*'
  HostKeys:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName: !Ref CfnUser
I was unable to reproduce this using the following template:
---
AWSTemplateFormatVersion: 2010-09-09
Description: Test QA Template
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-08589eca6dcc9b39c
      InstanceType: t2.micro
      KeyName: default
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          /opt/aws/bin/cfn-init -s ${AWS::StackId} --resource MyInstance --region ${AWS::Region}
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              java-1.8.0-openjdk.x86_64: []
              tomcat: []
              httpd.x86_64: []
          services:
            sysvinit:
              httpd:
                enabled: true
                ensureRunning: true
          files:
            /usr/share/tomcat/webapps/sample.zip:
              source: https://s3.amazonaws.com/mybucket/sample.zip
              mode: '000500'
              owner: tomcat
              group: tomcat
(In other words, use of the above template allowed me to install a sample.zip file using cfn-init.)
Thus there is something permissions-related in the way you're accessing the S3 bucket.
Suffice it to say, it is bad practice to use access keys. Have a look at the documentation on best practices: assign an IAM Role to the EC2 instance and then add a Bucket Policy that grants appropriate access to that Role.
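As a rough sketch of that approach (illustrative resource and policy names; adapt the bucket ARN), the template would swap the IAM user and access key for a role plus instance profile, reference the profile from the instance's IamInstanceProfile property, and use roleName in AWS::CloudFormation::Authentication:
  InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: S3ReadSample
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action: s3:GetObject
                Resource: arn:aws:s3:::mybucket/*
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref InstanceRole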