I am using the AWS Batch service for my job. I want to send the logs generated by AWS Batch directly to Splunk instead of sending them to CloudWatch. How can I configure the log driver in AWS Batch to achieve this?
-ND
Splunk provides three methods to forward logs from a host server to a Splunk deployment:
Splunk Forwarder (agent)
HTTP Event Collector (HEC)
Splunk logging driver for Docker
In your scenario, the Splunk HTTP Event Collector (HEC) is the easiest and most efficient way to send data to Splunk Enterprise or Splunk Cloud. You can send logs over HTTP using HEC, and this can be defined in your AWS Batch job definition (Splunk's documentation includes a tutorial on setting up HEC).
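As a rough sketch, the job definition's logConfiguration can point the container at HEC via the Splunk log driver; the image name, HEC URL, and token below are placeholders, and in practice the token is better supplied via secretOptions than inline:

{
  "containerProperties": {
    "image": "my-batch-image",
    "logConfiguration": {
      "logDriver": "splunk",
      "options": {
        "splunk-url": "https://your-splunk-host:8088",
        "splunk-token": "<your-HEC-token>",
        "splunk-source": "aws-batch",
        "splunk-insecureskipverify": "true"
      }
    }
  }
}

On EC2-backed compute environments, the splunk driver also needs to be listed in the ECS agent's ECS_AVAILABLE_LOGGING_DRIVERS variable on the container instances for this to take effect.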
Alternatively, you can use the Splunk Docker logging driver, since an AWS Batch job is spawned in an ECS container. For this method, you should build a custom AMI (for the compute environment) that configures the Docker daemon to send all container logs to a particular Splunk server.
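A minimal sketch of what that daemon configuration might look like in /etc/docker/daemon.json on the custom AMI (the URL and token are placeholders for your own HEC endpoint):

{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-url": "https://your-splunk-host:8088",
    "splunk-token": "<your-HEC-token>",
    "splunk-insecureskipverify": "true"
  }
}

Restart the Docker daemon after changing this so the new default log driver takes effect for all containers.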
AWS Batch logs can also be sent to CloudWatch first and then onboarded into Splunk using the Splunk Add-on for AWS or one of the AWS Lambda blueprints for the HTTP Event Collector.
I am looking for a solution to send logs from AWS X-Ray to AWS CloudWatch, to help me with aggregation and metrics.
I was checking whether this can be done directly with the AWS X-Ray daemon, and it seems there is no way to do it from the daemon.
The only solution I can see is to fetch the trace summaries from X-Ray using the AWS X-Ray SDK/API and forward them to other destinations such as CloudWatch.
Is there a way to configure the AWS X-Ray daemon to send the logs directly to a CloudWatch log group?
Unfortunately, the X-Ray daemon only talks to the X-Ray endpoint via the PutTraceSegments API; it cannot emit metrics or logs to CloudWatch.
Alternatively, you can use the ADOT (AWS Distro for OpenTelemetry) collector, which is an all-in-one agent.
https://aws-otel.github.io/docs/getting-started/collector
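As a rough sketch, the ADOT collector can act as a drop-in replacement for the X-Ray daemon and be extended with additional exporters in the same agent; the region and pipeline below are illustrative placeholders rather than a verified configuration:

receivers:
  awsxray:
exporters:
  awsxray:
    region: us-east-1
service:
  pipelines:
    traces:
      receivers: [awsxray]
      exporters: [awsxray]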
I am trying to set up shipping of pod logs from EKS to Elasticsearch Cloud.
According to the AWS blog post "Fluent Bit for Amazon EKS on AWS Fargate is here", Elasticsearch should be supported:
You can choose between CloudWatch, Elasticsearch, Kinesis Firehose and Kinesis Streams as outputs.
According to the Fluent Bit configuration parameters for Elasticsearch, having the Cloud_ID and Cloud_Auth parameters set should be enough to ship logs to Elasticsearch Cloud.
A linked example shows how to configure the es output for Fluent Bit, so my config looks like this:
[OUTPUT]
Name es
Match *
Logstash_Format On
Logstash_Prefix ${logstash_prefix}
tls On
tls.verify Off
Pipeline date_to_timestamp
Cloud_ID ${es_cloud_id}
Cloud_Auth ${es_cloud_auth}
Trace_Output On
I am running a simple nginx container to generate some logs (as in one of the linked examples), but they don't seem to appear in my Elasticsearch / Kibana.
Am I missing anything? How do I ship logs to ElasticSearch Cloud?
Also, Trace_Output On is supposed to log Fluent Bit's attempts to ship logs, but where can I see these logs on EKS?
I also ran into this. From what I can tell, only Amazon Elasticsearch Service is supported when using the AWS-managed Fluent Bit.
https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-eks-adds-built-in-logging-support-for-aws-fargate/
You can work around this by using a sidecar Fluent Bit container (which can send to Elasticsearch) if that's an option for you. You will need to modify the application to have its logs written to the filesystem; a rough sketch follows.
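Assuming the application container and the sidecar share an emptyDir volume mounted at /var/log/app (a placeholder path), the sidecar's Fluent Bit config could look roughly like your config above, with a tail input pointed at that path:

[INPUT]
Name tail
Path /var/log/app/*.log
Tag app.*

[OUTPUT]
Name es
Match *
Logstash_Format On
tls On
Cloud_ID ${es_cloud_id}
Cloud_Auth ${es_cloud_auth}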
Or you can use the managed Fluent Bit with the CloudWatch output, subscribe to the log group with a Lambda function, and forward the events to ES from there.
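For the first half of that route, a minimal sketch of the cloudwatch_logs output you would put in the Fargate logging ConfigMap (the aws-logging ConfigMap in the aws-observability namespace); the region and log group names are placeholders:

[OUTPUT]
Name cloudwatch_logs
Match *
region us-east-1
log_group_name /eks/fargate/pod-logs
log_stream_prefix from-fluent-bit-
auto_create_group true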
I have a task in ECS that runs Tomcat. That Tomcat has 2 or 3 apps deployed to it. I know it's not an ideal situation, but this is what we've got. Log4j is used, and the logs for the apps go to different log files under Tomcat's logs folder. Is there a way I can get those different log files from my Docker container into CloudWatch under different streams? I know that if I write logs to stdout using a log4j appender I can get them into CloudWatch easily, but then they will not be separate; logs from all the apps will end up in one place.
Many Thanks
Instead of using log4j to send logs to STDOUT, you may set the log configuration for your container to the awslogs Docker log driver, which sends the container logs directly to CloudWatch.
Reference: https://aws.amazon.com/blogs/devops/send-ecs-container-logs-to-cloudwatch-logs-for-centralized-monitoring/
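As a sketch, the container definition in the task would carry a logConfiguration like the following; the group, region, and prefix values are placeholders:

"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/tomcat-app",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "tomcat"
  }
}

Note that awslogs only captures the container's stdout/stderr, so logs written to separate files under the Tomcat logs folder would still need to reach stdout (or be shipped by another agent) to show up this way.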
We would like to process AWS ELB access logs and write them into InfluxDB, to be used for application metrics and monitoring (e.g. Grafana).
We configured ELB to store access logs into S3 bucket.
What would be the best way to process those logs and write them to InfluxDB?
What we tried so far was to mount the S3 bucket to the filesystem using s3fs and then use the Telegraf agent for processing. But this approach has some issues: s3fs mounting looks like a hack, and all the files in the bucket are compressed and need to be unzipped before Telegraf can process them, which makes this task overcomplicated.
Is there any better way?
Thanks,
Oleksandr
Can you just install the Telegraf agent on the AWS instance that is generating the logs, and have them sent directly to InfluxDB in real time?
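A rough sketch of what that Telegraf configuration might look like, tailing a local access log and writing to InfluxDB; the file path, InfluxDB URL, database name, and grok pattern are placeholders and would need to be adjusted to the actual ELB log format:

[[inputs.tail]]
  files = ["/var/log/myapp/access.log"]
  from_beginning = false
  data_format = "grok"
  grok_patterns = ["%{COMBINED_LOG_FORMAT}"]

[[outputs.influxdb]]
  urls = ["http://influxdb.example.com:8086"]
  database = "elb_logs"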
Explanation:
I have one executable JAR deployed on one EC2 instance (call it A), which can be run manually to listen on port 80 for proxy traffic.
I have one Spring application on another EC2 instance (call it B), which hits a website on a third-party server.
Connection between these two machines:
The Spring application on B tells the third-party server to open a website and use A as a proxy; this generates logs of the network calls on A.
What I want to do: for every request I send from B to the third-party server, I want the network logs generated on A to be transferred to B.
What I tried:
One way is to rotate the logs on A, write them to S3, and then have the application pick them up from S3 and process them.
Another is to SSH into A and grep the log file, but this stops the JAR from listening to the new traffic and it gets stuck.
What I am looking for:
A real-time solution: as soon as logs show up on A, I want them shipped to B without interrupting A's listening job.
I am not sure what OS you are running, but if you are running a *nix variant, you can install syslog-ng or rsyslog in place of plain syslog; both are capable of logging local and remote events. In this case I would set up a central logging server that listens for logs from server A and server B.
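As a minimal sketch with rsyslog (the host and port are placeholders; syslog-ng uses a different syntax):

# On the central logging server (e.g. B), in /etc/rsyslog.conf -- accept logs over TCP
$ModLoad imtcp
$InputTCPServerRun 514

# On server A, in /etc/rsyslog.d/forward.conf -- forward everything to the central server
*.* @@central-log-server:514

If the JAR on A writes to a plain log file rather than to syslog, rsyslog's imfile module can be used on A to pick that file up before forwarding.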
Another alternative, if syslog-ng is not what you're looking for: you could install Splunk on a server and have it pick up the logs from Splunk forwarders on each server you want to centrally log.
Hope this helps.
As Kevin mentioned, you could set up a Splunk indexer on an EC2 instance and use it to aggregate the logs from A and B and any other sources, then use the Splunk search language to search over this log data in near real time, correlate events across your various systems, create custom dashboards, set up proactive alerting, etc.
http://www.splunk.com/
As for the mechanisms for getting this data from your systems to the Splunk indexer:
1) Use a Splunk Universal Forwarder to monitor the log output and forward it to your Splunk indexer (a minimal configuration sketch follows this list): http://www.splunk.com/download/universalforwarder
2) As your systems are Java based, SplunkJavaLogging has log4j/logback/JDK appenders that you can wire into your logging config to forward log events to your Splunk indexer: https://github.com/damiendallimore/SplunkJavaLogging
3) Use the Splunk Java SDK, http://dev.splunk.com/view/java-sdk/SP-CAAAECN, to send log events to your Splunk indexer via HTTP REST or raw TCP.
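For option 1, a minimal sketch of the Universal Forwarder configuration on server A; the indexer host, monitored path, sourcetype, and index are placeholders:

# outputs.conf -- where to send events
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-indexer-host:9997

# inputs.conf -- which files to monitor
[monitor:///var/log/proxy/network.log]
sourcetype = proxy_network
index = main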