I have a web application which emits lines of json-formatted logs to stdout.
I want to...
1. save the logs on AWS CloudWatch
2. visualize the timeseries of the number of logs which match a custom condition
Is there any solution that summarizes (i.e. counts) the custom logs on an AWS CloudWatch dashboard?
1) To save the logs on CloudWatch you can install the CloudWatch Logs agent on your EC2 instance (if that is what you are serving your app from). You will need to configure which logs are sent in /etc/awslogs/awslogs.conf (and /etc/awslogs/awscli.conf for the region setting).
2) For visualising the logs you can do a fair amount with CloudWatch Logs Insights, which uses its own query language to analyse your logs from any given log group.
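As a rough sketch (the log group name and the level = "error" condition below are placeholders for your own JSON fields, which Logs Insights discovers automatically for JSON-formatted lines), a query that counts matching log lines in 5-minute buckets could be kicked off from the CLI like this:

aws logs start-query \
  --log-group-name /my/app/logs \
  --start-time $(($(date +%s) - 3600)) \
  --end-time $(date +%s) \
  --query-string 'filter level = "error" | stats count(*) as hits by bin(5m)'

aws logs get-query-results --query-id <returned query id> fetches the result. Running the same query in the Logs Insights console lets you add the resulting time series as a widget on a CloudWatch dashboard.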
Is there any way I can see FluentBit logs for EKS Fargate? I'd like to see the errors that are raised by the plugins.
The EKS Fargate logging manual provides a way to check whether the ConfigMap is valid. The ConfigMap entry I'm using is valid, but there seem to be some issues in the plugin, because the logs aren't created in CloudWatch and I don't know why.
It turns out AWS provides a way: we need to put the flag flb_log_cw: "true" under data in the ConfigMap (ref), and that will output the FluentBit logs to CloudWatch Logs.
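For reference, a minimal way to set that flag from the command line might look like the following (this assumes the standard aws-logging ConfigMap in the aws-observability namespace used by EKS Fargate logging; adjust the names to your setup):

kubectl patch configmap aws-logging -n aws-observability \
  --type merge -p '{"data":{"flb_log_cw":"true"}}'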
I have one Lambda function on AWS, which is storing its logs in AWS CloudWatch. I want to store ALL of these logs in S3 using the CLI. My Linux server is already configured with the CLI and has all the necessary permissions to access AWS resources. I want the logs that are displayed on my AWS CloudWatch console to end up in an S3 bucket.
Once these logs are stored in some location on S3, I can easily export them to a SQL table in Redshift.
Any idea how to bring these logs to S3? Thanks for reading.
You can use boto3 in Lambda and export the logs into S3. You need to write a Lambda function that subscribes to the CloudWatch Logs log group and is triggered on CloudWatch log events.
AWS Doc:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#LambdaFunctionExample
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3Export.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html
Example: https://medium.com/dnx-labs/exporting-cloudwatch-logs-automatically-to-s3-with-a-lambda-function-80e1f7ea0187
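As a rough sketch of the wiring (the log group name, filter name, and function ARN below are placeholders), the subscription itself can be created with:

aws logs put-subscription-filter \
  --log-group-name /aws/lambda/my-function \
  --filter-name send-to-s3-exporter \
  --filter-pattern "" \
  --destination-arn arn:aws:lambda:us-east-1:123456789012:function:log-exporter

An empty filter pattern forwards every log event to the destination Lambda.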
Your question does not specify whether you want to export the logs one time or on a regular basis, so there are two options for exporting the CloudWatch logs to an S3 location:
Create an export task (one-time)
You can create a task with the command below:
aws logs create-export-task \
--profile {PROFILE_NAME} \
--task-name {TASK_NAME} \
--log-group-name {CW_LOG_GROUP_NAME} \
--from {START_TIME_IN_MILLS} \
--to {END_TIME_IN_MILLS} \
--destination {BUCKET_NAME} \
--destination-prefix {BUCKET_DESTINATION_PREFIX}
You can refer to this for more detail.
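Note that --from and --to are epoch timestamps in milliseconds. As a hedged example (the task name, log group, and bucket are placeholders, and the bucket needs a policy that allows CloudWatch Logs to write to it), exporting the last 24 hours would look roughly like:

aws logs create-export-task \
  --task-name lambda-logs-to-s3 \
  --log-group-name /aws/lambda/my-function \
  --from $((($(date +%s) - 86400) * 1000)) \
  --to $(($(date +%s) * 1000)) \
  --destination my-log-bucket \
  --destination-prefix lambda-logs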
A Lambda to write the logs to S3 (event-based, via a CloudWatch Logs subscription)
const zlib = require('zlib');
const { S3 } = require('aws-sdk'); // aws-sdk v2, bundled with older Node.js Lambda runtimes
const s3 = new S3();

exports.lambdaHandler = async (event) => {
  // subscription events arrive gzipped and base64-encoded in event.awslogs.data
  const logs = JSON.parse(zlib.gunzipSync(Buffer.from(event.awslogs.data, 'base64')).toString('utf8'));
  // write the decoded log events to an S3 location (bucket name and key prefix are placeholders)
  await s3.putObject({ Bucket: 'my-log-bucket', Key: `cw-logs/${Date.now()}.json`, Body: JSON.stringify(logs.logEvents) }).promise();
};
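CloudWatch Logs also needs permission to invoke the function for the subscription to deliver events. A hedged example (function name and statement id are placeholders; you can optionally restrict it with --source-arn to the specific log group):

aws lambda add-permission \
  --function-name log-exporter \
  --statement-id allow-cloudwatch-logs \
  --principal logs.amazonaws.com \
  --action lambda:InvokeFunction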
My recommendation would be to push the logs to an ELK stack or an equivalent logging system (Splunk, Loggly, etc.) for better analysis and visualization of the data.
Our AWS statement came in and we noticed we're being doubly charged for the number of requests.
First charge is for Asia Pacific (Tokyo) (ap-northeast-1) and this is straightforward because it's where our bucket is located. But there's another charge against US East (N. Virginia) (us-east-1) with a similar number of requests.
Long story short, it appears this is happening because we're using the aws s3 command and we haven't specified a region either via the --region option or any of the fallback methods.
Typing aws configure list shows region: Value=<not set> Type=None Location=None.
And yet our aws s3 commands succeed, albeit with this seemingly hidden charge. The presumption is, our requests first go to us-east-1, but since there isn't a bucket there by the name we specified, it turns around and comes back to ap-northeast-1, where it ultimately succeeds while getting accounted twice.
The ec2 instance where the aws command is run is itself in ap-northeast-1 if that counts for anything.
So the question is, is the presumption above a reasonable account of what's happening? (i.e. Is it expected behaviour.) And, it seems a bit insidious to me but is there a proper rationale for this?
What you are seeing is correct. The aws s3 command needs to know the region in order to access the S3 bucket.
Since this has not been provided, it will make a request to us-east-1, which is effectively the default (the AWS S3 region chart shows that us-east-1 does not require a location constraint).
If S3 receives a request for a bucket that is not in that region, it returns a PermanentRedirect response with the correct region for the bucket. The AWS CLI handles this transparently and repeats the request with the correct endpoint, which includes the region.
The easiest way to see this in action is to run commands in debug mode:
aws s3 ls ap-northeast-1-bucket --debug
The output will include:
DEBUG - Response body:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access
must be addressed using the specified endpoint. Please send all future requests to
this endpoint.</Message>
<Endpoint>ap-northeast-1-bucket.s3.ap-northeast-1.amazonaws.com</Endpoint>
<Bucket>ap-northeast-1</Bucket>
<RequestId>3C4FED2EFFF915E9</RequestId><HostId>...</HostId></Error>
The AWS CLI does not assume the Region is the same as that of the calling EC2 instance; this is a long-running source of confusion and a recurring feature request.
Additional Note: Not all AWS services will auto-discover the region in this way and will fail if the Region is not set. S3 works because it uses a Global Namespace which inherently requires some form of discovery service.
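If you want to skip that initial us-east-1 round trip, setting the region explicitly should do it. For example, any one of the following (the bucket name matches the example above and is a placeholder):

aws configure set region ap-northeast-1                       # persisted in ~/.aws/config
export AWS_DEFAULT_REGION=ap-northeast-1                      # per shell session
aws s3 ls s3://ap-northeast-1-bucket --region ap-northeast-1  # per command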
I want to host my website on AWS S3,
but when I create a code deployment (I followed this URL -> https://aws.amazon.com/getting-started/tutorials/deploy-code-vm/)
it shows this error -> Deployment Failed
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
error Screen shoot -> http://i.prntscr.com/oqr4AxEiThuy823jmMck7A.png
So please help me.
If you want to host your website on S3, you should upload your code into an S3 bucket and enable Static Website Hosting for that bucket. If you use CodeDeploy, it will take application code either from an S3 bucket or from GitHub and host it on EC2 instances.
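A rough sketch of the S3 route (the bucket name and local folder are placeholders, and depending on the bucket's Block Public Access settings you may need a bucket policy rather than object ACLs):

aws s3 mb s3://my-website-bucket
aws s3 website s3://my-website-bucket --index-document index.html --error-document error.html
aws s3 sync ./site s3://my-website-bucket --acl public-read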
I will assume that you want to use CodeDeploy to host your website on a group of EC2 instances. The error that you have mentioned can occur if your EC2 instances do not have the correct permissions through an IAM role.
From Documentation
The permissions you add to the service role specify the operations AWS CodeDeploy can perform when it accesses your Amazon EC2 instances and Auto Scaling groups. To add these permissions, attach an AWS-supplied policy, AWSCodeDeployRole, to the service role.
If you are following along the sample deployment from the CodeDeploy wizard make sure you have picked Create A Service Role at the stage that you are required to Select a service role.
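If the service role is missing that policy, attaching it looks roughly like this (the role name is a placeholder for your own CodeDeploy service role):

aws iam attach-role-policy \
  --role-name CodeDeployServiceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole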
I'd like to know if there's a way for me to get bucket-level stats in Amazon S3.
Basically I want to charge customers for storage and GET requests on my system (which is hosted on S3).
So I created a specific bucket for each client, but I can't seem to get the stats for just a specific bucket.
I see the API lets me
GET Bucket
or
GET Bucket requestPayment
But I just can't find how to get the number of requests issued to said bucket and the total size of the bucket.
Thanks for your help!
Regards
I don't think that what you are trying to achieve is possible using the Amazon S3 API. The GET Bucket request does not contain usage statistics (requests, etc.) other than the timestamp of the latest modification (LastModified).
My suggestion would be that you enable logging in your buckets and perform the analysis that you want from there.
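For example (the bucket names are placeholders, and the target bucket must grant the S3 log delivery service permission to write to it), enabling access logging and checking a bucket's total size from the CLI could look like:

# turn on server access logging for a customer's bucket
aws s3api put-bucket-logging \
  --bucket customer-bucket \
  --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"my-access-logs-bucket","TargetPrefix":"customer-bucket/"}}'

# the --summarize flag prints the total object count and total size at the end of the listing
aws s3 ls s3://customer-bucket --recursive --summarize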
The S3 starting page gives you an overview of it:
Amazon S3 also supports logging of requests made against your Amazon S3 resources. You can configure your Amazon S3 bucket to create access log records for the requests made against it. These server access logs capture all requests made against a bucket or the objects in it and can be used for auditing purposes.
And I am sure there is plenty of documentation on that matter.
HTH.