Send AWS X-Ray traces and segments to AWS CloudWatch

I'm looking for a solution to send logs from AWS X-Ray to AWS CloudWatch so I can do aggregation and build metrics on them.
I checked whether this can be done directly with the AWS X-Ray daemon, but it seems there is no way to do it from the daemon.
As far as I can see, the only option is to get the trace summaries from X-Ray using the X-Ray API and push them to other destinations like CloudWatch myself.
Is there a way to do this with a configuration option in the AWS X-Ray daemon so it sends the data directly to a CloudWatch log group?

Unfortunately, the X-Ray daemon only supports the X-Ray endpoint via the PutTraceSegments API; it cannot emit metrics or logs to CloudWatch.
Alternatively, you can use the ADOT (AWS Distro for OpenTelemetry) collector, which is an all-in-one agent.
https://aws-otel.github.io/docs/getting-started/collector
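If you do end up going the API route mentioned in the question, a minimal sketch of that idea with boto3 could look like the following. The metric namespace and metric name are arbitrary choices for illustration, not anything X-Ray or CloudWatch mandates:

```python
import datetime
import boto3

xray = boto3.client("xray")
cloudwatch = boto3.client("cloudwatch")

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(minutes=5)

# Pull trace summaries for the last 5 minutes (the API is paginated).
paginator = xray.get_paginator("get_trace_summaries")
durations = []
for page in paginator.paginate(StartTime=start, EndTime=end):
    for summary in page["TraceSummaries"]:
        durations.append(summary.get("Duration", 0))

# Publish an aggregate metric to a custom namespace ("XRay/Custom" is arbitrary).
if durations:
    cloudwatch.put_metric_data(
        Namespace="XRay/Custom",
        MetricData=[{
            "MetricName": "TraceDuration",
            "Unit": "Seconds",
            "Value": sum(durations) / len(durations),
        }],
    )
```

You would run something like this on a schedule (for example from a Lambda function triggered by an EventBridge rule) to keep the metrics flowing.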

Related

In Amazon EKS - how to view logs from before EKS Fargate node creation and while pods are starting up

I'm using Amazon EKS on Fargate. I can see container logs using a Fluent Bit sidecar etc. with no problem at all, but those logs ONLY show what is happening inside the container AFTER it has started up.
I have also fully enabled EKS cluster logging.
Now I would like to see logs in CloudWatch equivalent to the output of the
kubectl describe pod
command
I have searched the ENTIRE CloudWatch cluster-name log group and am not able to find logs like
"pulling image into container"
"efs not mounted"
etc
I want to see logs in CloudWatch from before the actual container creation stage.
Is it possible at all using EKS Fargate?
Thanks a bunch
You can use Container Insights, which collects metrics as performance log events using the embedded metric format. The logs are stored in CloudWatch Logs, and CloudWatch automatically generates several metrics from them which you can view in the CloudWatch console.
In Amazon EKS and Kubernetes, Container Insights uses a containerized version of the CloudWatch agent to discover all of the running containers in a cluster, and then collects performance data at every layer of the performance stack.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-view-metrics.html
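As a rough illustration, assuming Container Insights is enabled and writes to the default /aws/containerinsights/<cluster>/performance log group, you could query those performance log events with CloudWatch Logs Insights via boto3. The cluster name and the fields in the query are illustrative, so adjust them to what your log events actually contain:

```python
import time
import boto3

logs = boto3.client("logs")
cluster = "my-cluster"  # hypothetical cluster name

# Start a Logs Insights query over the Container Insights performance log group.
query = logs.start_query(
    logGroupName=f"/aws/containerinsights/{cluster}/performance",
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString='fields @timestamp, PodName, pod_status | filter Type = "Pod" | limit 20',
)

# Poll until the query finishes, then print the matching rows.
while True:
    result = logs.get_query_results(queryId=query["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})
```

Note this surfaces pod-level performance events, not the scheduler/kubelet events that kubectl describe pod shows, so it only partially answers the "before the container starts" part of the question.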

EKS pods logging to Elastic Cloud

I am trying to set up shipping of pod logs from EKS to Elasticsearch on Elastic Cloud.
According to the blog post Fluent Bit for Amazon EKS on AWS Fargate is here, Elasticsearch should be supported:
You can choose between CloudWatch, Elasticsearch, Kinesis Firehose and Kinesis Streams as outputs.
According to the Fluent Bit configuration parameters for Elasticsearch, having the Cloud_ID and Cloud_Auth parameters should be enough to ship logs to Elastic Cloud.
An example here shows how to configure the ES output for Fluent Bit, so my config looks like this:
[OUTPUT]
    # Send every record to Elasticsearch with Logstash-style index names
    Name            es
    Match           *
    Logstash_Format On
    Logstash_Prefix ${logstash_prefix}
    # Elastic Cloud requires TLS; certificate verification is disabled here
    tls             On
    tls.verify      Off
    # Run records through this ingest pipeline before indexing
    Pipeline        date_to_timestamp
    # Elastic Cloud connection details (used instead of Host/Port)
    Cloud_ID        ${es_cloud_id}
    Cloud_Auth      ${es_cloud_auth}
    # Print the requests sent to Elasticsearch, useful for debugging delivery
    Trace_Output    On
I am running a simple nginx container to generate some logs (as in one of the linked examples), but they don't seem to appear in my Elasticsearch / Kibana.
Am I missing anything? How do I ship logs to ElasticSearch Cloud?
Also, Trace_Output On is supposed to log Fluent Bit's attempts to ship logs, but where can I see those logs on EKS?
I also ran into this. As far as I can tell, only AWS Elasticsearch is supported when using the AWS-managed Fluent Bit.
https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-eks-adds-built-in-logging-support-for-aws-fargate/
You can work around this by using a sidecar Fluent Bit container (which can send to Elasticsearch) if that's an option for you. You will need to modify the application to write its logs to the filesystem.
Or you can use the managed Fluent Bit with the CloudWatch output, subscribe to the log group with a Lambda function, and have the function send the logs to ES.
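A minimal sketch of that second option, assuming a CloudWatch Logs subscription filter targets the function and that the Elasticsearch URL and API key below are placeholders for your own cluster:

```python
import base64
import gzip
import json
import urllib.request

# Hypothetical Elastic Cloud endpoint and API key -- replace with your own.
ES_URL = "https://example.es.io:9243/eks-logs/_doc"
ES_API_KEY = "changeme"

def handler(event, context):
    # CloudWatch Logs delivers subscription data as base64-encoded, gzipped JSON.
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))

    for log_event in payload["logEvents"]:
        doc = {
            "timestamp": log_event["timestamp"],
            "message": log_event["message"],
            "logGroup": payload["logGroup"],
            "logStream": payload["logStream"],
        }
        request = urllib.request.Request(
            ES_URL,
            data=json.dumps(doc).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"ApiKey {ES_API_KEY}",
            },
            method="POST",
        )
        urllib.request.urlopen(request)
```

In practice you would batch the documents with the _bulk API and add error handling, but the decode-then-forward shape stays the same.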

What is XRay daemon?

When we are running a serverless app, say a Beanstalk app, a DynamoDB table and an SNS topic, there will be three X-Ray daemons, one in each JVM, right?
If so, how could X-Ray trace a request from the beginning to the end?
Does the trace ID travel with the HTTP request header or something like that?
According to the docs, "The AWS X-Ray daemon is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API. The daemon works in conjunction with the AWS X-Ray SDKs and must be running so that data sent by the SDKs can reach the X-Ray service."
It's important to note that the X-Ray SDK produces what's called a remote subsegment to mimic the result of the downstream call in a client-sided manner. For a service like DynamoDB, this is what you would see. For something like SNS, the trace header information is propagated downstream through the HTTP headers. The X-Ray daemon is used to forward segments generated when a service receives an upstream request; DynamoDB doesn't do this yet, and SNS forwards it through the trace header mentioned.
The daemon isn't part of the JVM; it's an external process that runs on an instance and forwards the trace data to the X-Ray service. It can run on a single dedicated instance, on the same instance as your application, or on all instances.
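So to the question about the trace ID travelling with the request: yes, it is propagated in the X-Amzn-Trace-Id HTTP header. A small illustrative parse of that header, using the field layout documented for X-Ray:

```python
# Example X-Amzn-Trace-Id header value as propagated between services.
header = "Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1"

# Split the header into its Root / Parent / Sampled fields.
fields = dict(part.split("=", 1) for part in header.split(";"))

print(fields["Root"])     # the trace ID shared across every service in the request
print(fields["Parent"])   # the ID of the upstream (parent) segment
print(fields["Sampled"])  # "1" if the request is sampled for tracing
```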

ECS AWS Cloudwatch logs

I have a task in ECS that runs Tomcat. That Tomcat has 2 or 3 apps deployed to it. I know it's not an ideal situation, but this is what we've got. Log4j is used, and each app's logs go to a different log file under Tomcat's logs folder. Is there a way to get those different log files from my Docker container into CloudWatch under different streams? I know that if I write logs to stdout using a Log4j appender I can easily get them into CloudWatch, but then they won't be separate; the logs from all the apps will end up in one place.
Many Thanks
Instead of using Log4j to send logs to STDOUT, you can set the log configuration of your task definition to use the awslogs Docker log driver, which sends the container's output directly to CloudWatch Logs.
Reference: https://aws.amazon.com/blogs/devops/send-ecs-container-logs-to-cloudwatch-logs-for-centralized-monitoring/
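A sketch of what that looks like in a task definition, registered here via boto3; the family name, log group, and region are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Register a task definition whose container uses the awslogs log driver,
# so stdout/stderr from the container lands in the given CloudWatch log group.
ecs.register_task_definition(
    family="tomcat-apps",  # hypothetical family name
    containerDefinitions=[{
        "name": "tomcat",
        "image": "tomcat:9",
        "memory": 1024,
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "/ecs/tomcat-apps",
                "awslogs-region": "us-east-1",
                "awslogs-stream-prefix": "tomcat",
            },
        },
    }],
)
```

Note that the awslogs driver only captures the container's stdout/stderr, so the separate per-app log files would still need to be tailed (for example by a sidecar or the CloudWatch agent) to end up in separate streams.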

AWS Batch Logs to splunk

I am using the AWS Batch service for my jobs. I want to send the logs generated by AWS Batch directly to Splunk instead of sending them to CloudWatch. How can I configure the log driver in AWS Batch to achieve this?
-ND
Splunk provides three methods to forward logs from a host to a Splunk server:
Splunk Forwarder (agent)
Http Event Collector (HEC)
Splunk logging driver for Docker
But the Splunk HTTP Event Collector (HEC) is the easiest and most efficient way to send data to Splunk Enterprise and Splunk Cloud in your scenario. You can send logs through an HTTP request using HEC, and this can be defined in your AWS Batch job definition. Tutorial.
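For reference, sending an event to HEC is just an authenticated HTTP POST; the endpoint and token below are placeholders:

```python
import json
import urllib.request

# Hypothetical Splunk HEC endpoint and token -- substitute your own values.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

# Post a single event to the HEC endpoint with the Splunk token header.
request = urllib.request.Request(
    HEC_URL,
    data=json.dumps({"event": "hello from AWS Batch", "sourcetype": "aws:batch"}).encode("utf-8"),
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    method="POST",
)
urllib.request.urlopen(request)
```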
Other than that, you can use the Splunk Docker logging driver, since an AWS Batch job is spawned in an ECS container. For this method, you should define a custom AMI (for the compute environment) that configures the Docker daemon to send all container logs to a particular Splunk server.
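As a sketch of how the log driver can be declared in the job definition itself (the compute environment's ECS agent still has to allow the splunk driver; option names follow Docker's splunk driver and all values below are placeholders):

```python
import boto3

batch = boto3.client("batch")

# Register a job definition whose container logs go to Splunk HEC
# via Docker's splunk log driver.
batch.register_job_definition(
    jobDefinitionName="my-job-splunk-logging",  # hypothetical name
    type="container",
    containerProperties={
        "image": "busybox",
        "vcpus": 1,
        "memory": 256,
        "command": ["echo", "hello"],
        "logConfiguration": {
            "logDriver": "splunk",
            "options": {
                "splunk-url": "https://splunk.example.com:8088",
                "splunk-source": "aws-batch",
            },
            # Keep the HEC token out of the job definition; reference a
            # Secrets Manager secret instead (ARN is a placeholder).
            "secretOptions": [{
                "name": "splunk-token",
                "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:splunk-hec-token",
            }],
        },
    },
)
```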
AWS Batch logs can also be sent to CloudWatch first and then onboarded into Splunk using the Splunk Add-on for AWS, or using one of the AWS Lambda functions for the HTTP Event Collector.