Configuration of Filebeat with ELK - filebeat

I am trying to configure Filebeat modules to collect all the logs, but during configuration I am unable to get all of them. Could someone point me to useful documentation for configuring Filebeat with the ELK stack?
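For reference, my current attempt looks roughly like this (hostnames and the enabled module are placeholders for my environment):

```yaml
# filebeat.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.elasticsearch:
  hosts: ["localhost:9200"]

setup.kibana:
  host: "localhost:5601"
```

I then enable a module and load the bundled dashboards and ingest pipelines with:

    filebeat modules enable system
    filebeat setup -e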

Related

EKS pods logging to Elastic Cloud

I am trying to set up shipping of pod logs from EKS to Elasticsearch Cloud.
According to the announcement "Fluent Bit for Amazon EKS on AWS Fargate is here", Elasticsearch should be supported:
You can choose between CloudWatch, Elasticsearch, Kinesis Firehose and Kinesis Streams as outputs.
According to the Fluent Bit configuration parameters for Elasticsearch, setting the Cloud_ID and Cloud_Auth parameters should be enough to ship logs to Elasticsearch Cloud.
An example here shows how to configure the ES output for Fluent Bit, so my config looks like:
[OUTPUT]
    Name            es
    Match           *
    Logstash_Format On
    Logstash_Prefix ${logstash_prefix}
    tls             On
    tls.verify      Off
    Pipeline        date_to_timestamp
    Cloud_ID        ${es_cloud_id}
    Cloud_Auth      ${es_cloud_auth}
    Trace_Output    On
I am running a simple nginx container to generate some logs (as in one of the linked examples), but they don't seem to appear in my Elasticsearch / Kibana.
Am I missing anything? How do I ship logs to ElasticSearch Cloud?
Also, Trace_Output On is supposed to log Fluent Bit's attempts to ship logs, but where can I see these trace logs on EKS?
I also ran into this. From what I can tell, only the Amazon Elasticsearch Service is supported when using the AWS-managed Fluent Bit.
https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-eks-adds-built-in-logging-support-for-aws-fargate/
You can work around this by using a sidecar Fluent Bit container (which can send to Elasticsearch), if that's an option for you. You will need to modify the application to write its logs to the filesystem.
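If you run your own (upstream) Fluent Bit image as the sidecar, it does support the Cloud_ID/Cloud_Auth parameters, so the sidecar's output section could look roughly like this (values are placeholders; Cloud_Auth takes the form user:password):

```
[OUTPUT]
    Name       es
    Match      *
    Cloud_ID   <your Elastic Cloud ID>
    Cloud_Auth elastic:<password>
    tls        On
```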
Or you can use the managed Fluent Bit with the CloudWatch output, subscribe a Lambda function to the log group, and have it send the logs to ES.
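For the CloudWatch-plus-Lambda route, the subscription delivers each batch of log events as a gzipped, base64-encoded JSON payload. A rough sketch of the decoding half of such a Lambda (function names are mine, and the actual Elasticsearch indexing call is left out, since the client and endpoint depend on your setup):

```python
import base64
import gzip
import json

def decode_cloudwatch_event(event):
    """Decode the gzipped, base64-encoded payload that CloudWatch Logs
    delivers to a subscribed Lambda function."""
    raw = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(raw))

def handler(event, context):
    payload = decode_cloudwatch_event(event)
    for log_event in payload["logEvents"]:
        # Here you would index log_event["message"] into Elasticsearch
        # (client, endpoint, and index name depend on your setup).
        print(log_event["message"])
    return {"records": len(payload["logEvents"])}
```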

How to check if the config command name is changed in AWS ElastiCache (Redis)

I am trying to access AWS ElastiCache (Redis). I followed these instructions:
https://redsmin.uservoice.com/knowledgebase/articles/734646-amazon-elasticache-and-redsmin
Redis is connected now, but when I click on Configuration I get this error:
"Redsmin can't load the configuration. Check with your provider that you have access to the configuration command."
edit 1:
The Redis config command is sadly not available on AWS ElastiCache; see their documentation:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/RestrictedCommands.html
To deliver a managed service experience, ElastiCache restricts access to certain cache engine-specific commands that require advanced privileges. For cache clusters running Redis, the following commands are unavailable:
[...]
config
That's why the Redsmin configuration module (the only module affected) cannot display your current Redis AWS ElastiCache configuration.
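You can see the restriction directly with redis-cli (the hostname is a placeholder, and the exact error text can vary by engine version):

```
$ redis-cli -h my-cluster.xxxxxx.cache.amazonaws.com CONFIG GET maxmemory
(error) ERR unknown command 'CONFIG'
```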

ECS AWS Cloudwatch logs

I have a task in ECS that runs Tomcat. That Tomcat has two or three apps deployed to it. I know it's not an ideal situation, but this is what we've got. Log4j is used, and each app's logs go to a different log file under Tomcat's logs folder. Is there a way I can ship those different log files from my Docker container to CloudWatch under different streams? I know that if I write logs to stdout using a Log4j appender I can get them into CloudWatch easily, but then they won't be separate; logs from all the apps would go to one place.
Many Thanks
Instead of using Log4j to send logs to stdout, you can set your task's log configuration to use the awslogs Docker log driver, which sends container logs directly to CloudWatch.
Reference: https://aws.amazon.com/blogs/devops/send-ecs-container-logs-to-cloudwatch-logs-for-centralized-monitoring/
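For example, the container definition in the ECS task definition would carry a logConfiguration block along these lines (log group, region, and prefix are placeholders):

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/tomcat-app",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "tomcat"
    }
  }
}
```

Note that the awslogs driver only captures the container's stdout/stderr, and each stream is named prefix/container-name/task-id, so getting truly separate per-app streams still requires separating the apps' output (for example with a sidecar that tails the individual log files).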

configure SSL in Apache Flume JMS 1.6.0

I am trying to configure SSL in Apache Flume 1.6.0 JMS. It appears the Flume JMS source does not support SSL out of the box.
Has anyone enabled SSL for the JMS source in Apache Flume 1.6.0?
Is there any option for writing custom code to enable SSL for the JMS source?
Currently, the Flume JMS source doesn't have this feature. The Flume Avro source and HTTP source do.
https://github.com/apache/flume/blob/trunk/flume-ng-core/src/main/java/org/apache/flume/source/AvroSource.java
You can refer to the class above and write a custom JMS source.
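For comparison, the SSL support you would be mirroring already exists on the Avro source; in a Flume agent config it looks like this (agent/source names and paths are placeholders):

```
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141
a1.sources.r1.ssl = true
a1.sources.r1.keystore = /path/to/keystore.jks
a1.sources.r1.keystore-password = changeit
a1.sources.r1.keystore-type = JKS
```

A custom JMS source would instead need to enable SSL on the JMS provider's connection factory, typically via provider-specific connection URL options or the standard javax.net.ssl system properties.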

Hadoop Cluster deployment using Apache Ambari

I have listed a few queries related to Ambari below:
Can I configure more than one Hadoop cluster via the Ambari UI?
(Note: I am using Ambari 1.6.1 for my Hadoop cluster deployment. I am aware that this can be done via the Ambari API, but I am not able to find it in the Ambari portal.)
If we had configured the Hadoop cluster without Ambari, we could check the status of the services on each node with the "jps" command.
Is there any similar way to check from the backend that the Hadoop cluster setup was successful?
( Note : I can see that services are showing UP on the ambari portal )
Help appreciated!
Please let me know if any additional information is required.
Thanks
The Ambari UI is served by the Ambari server for a specific cluster. To configure another cluster, you need to point your browser at the URL of that other cluster's Ambari server. So you can't see the configuration of multiple clusters on the same web page, but you can set browser bookmarks to jump from configuration to configuration.
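Regarding the second question: since the Ambari portal already shows the services as up, the closest backend equivalent is the Ambari REST API you mentioned. A request like the following (host, credentials, and cluster name are placeholders) reports each service's state:

```
curl -u admin:admin \
  'http://ambari-host:8080/api/v1/clusters/MyCluster/services?fields=ServiceInfo/state'
```

And plain jps still works on each node of an Ambari-managed cluster; the Hadoop daemons (NameNode, DataNode, and so on) are ordinary JVM processes either way.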