How to retrieve the matched event from CloudWatch in CodeBuild?

In Lambda, CloudWatch events are passed to handler(event, context). If I set CodeBuild as the target, which variable do I use to read the matched event in CodeBuild?

For the AWS CodeBuild target in CloudWatch Events (for example, to trigger a build at regular intervals), you can match by project ARN. Details: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-tutorial-codebuild.html
If you are using this integration more as a means of consuming the event stream from CodeBuild for build state and phase changes, you can follow this sample: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-build-notifications.html
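If you go the event-stream route, a minimal sketch with boto3 (the project name and rule name are placeholders) of a rule that matches CodeBuild build state changes could look like this:

```python
import json
import boto3

events = boto3.client("events")

# Event pattern for CodeBuild build state changes; the project name is a placeholder.
pattern = {
    "source": ["aws.codebuild"],
    "detail-type": ["CodeBuild Build State Change"],
    "detail": {
        "build-status": ["SUCCEEDED", "FAILED", "STOPPED"],
        "project-name": ["my-build-project"],
    },
}

# Create (or update) the rule; targets such as SNS or Lambda are added separately
# with put_targets.
events.put_rule(
    Name="codebuild-build-state-change",  # placeholder rule name
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)
```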

Related

How to create alarm on number of EventBridge Rules breaching a certain threshold?

I have a system that automatically creates and deletes EventBridge rules, but there is a hard limit of 300 rules per event bus. I want to create an alarm that is triggered whenever a certain number, say 100, is reached. How can I do that using CDK code?
Create a Lambda function that calls the ListRules API. The Lambda counts the rules. Have EventBridge trigger the Lambda periodically with a scheduled Rule. If you want an Alarm, your Lambda could write the rule count to CloudWatch as a custom metric and configure an Alarm to monitor it. A simpler option is to have the Lambda put an SNS notification to a Topic if the rule count exceeds 100.
To recap, the CDK constructs would be: a Lambda Function, an EventBridge Rule and a CloudWatch Alarm (or SNS Topic and Subscription).
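A minimal sketch of that Lambda's handler, assuming boto3 and a placeholder custom-metric namespace, could look like this:

```python
import boto3

events = boto3.client("events")
cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    # Count every rule on the default event bus, following pagination.
    count = 0
    token = None
    while True:
        kwargs = {"NextToken": token} if token else {}
        response = events.list_rules(**kwargs)
        count += len(response["Rules"])
        token = response.get("NextToken")
        if not token:
            break

    # Publish the count as a custom metric; a CloudWatch Alarm can then watch it.
    cloudwatch.put_metric_data(
        Namespace="Custom/EventBridge",  # placeholder namespace
        MetricData=[{"MetricName": "RuleCount", "Value": count, "Unit": "Count"}],
    )
    return count
```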

Monitor S3 - Send an alert if more than 5 minutes have passed since a last file was written

I have a program that uploads files to S3 every 5 minutes.
I need to monitor it, so I want to check every 10 minutes what the time of the last uploaded file is, and if it is more than X minutes old, send an alert (email) about it.
I understand that I need to use CloudWatch and Lambda. But I don't know how to do it.
Any help, please.
The following AWS products should help you build this:
AWS EventBridge (formerly known as CloudWatch Events)
AWS Lambda
AWS SES
Solution outline:
Create your Lambda function.
Create a scheduled event rule in EventBridge.
When creating the rule, use a rate of 10 minutes.
Set your Lambda from step 1 as the target of your rule.
When your Lambda is triggered, run your business logic to check when the last file was uploaded.
If you need to send an email, you can use AWS SES to send it to your recipients (a sketch of such a Lambda follows below).
Important:
You need to allow AWS EventBridge to call your Lambda. If you do all of this in the AWS console, the required permissions should be set automatically. If you use CloudFormation, Terraform or SAM, you probably need to add those permissions to your Lambda.
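A minimal sketch of such a Lambda, assuming boto3 and placeholder bucket name, addresses, and threshold (the sender address must be a verified SES identity):

```python
import datetime
import boto3

s3 = boto3.client("s3")
ses = boto3.client("ses")

BUCKET = "my-upload-bucket"      # placeholder
THRESHOLD_MINUTES = 15           # placeholder ("X minutes")

def handler(event, context):
    # Find the most recent LastModified timestamp in the bucket.
    newest = None
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            if newest is None or obj["LastModified"] > newest:
                newest = obj["LastModified"]

    now = datetime.datetime.now(datetime.timezone.utc)
    if newest is None or now - newest > datetime.timedelta(minutes=THRESHOLD_MINUTES):
        # No recent upload: send an alert email via SES.
        ses.send_email(
            Source="alerts@example.com",                       # placeholder, must be verified in SES
            Destination={"ToAddresses": ["ops@example.com"]},  # placeholder
            Message={
                "Subject": {"Data": "S3 uploads are late"},
                "Body": {"Text": {"Data": f"Last upload to {BUCKET}: {newest}"}},
            },
        )
```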

Should we handle a lambda container crash?

Reading a lot about error handling for AWS Lambdas, nothing covers the topic of a running Lambda container simply crashing.
Is this a possibility? It seems like one. I'm building an event-driven system using Lambdas, triggered by a file upload to S3, and am uncertain if I should bother building in logic to pick up processing if a Lambda has died.
e.g. A file object is created on S3 -> S3 notifies Lambda of the event -> the Lambda instance happens to crash before it can start processing -> the event is now gone forever* (assumption here; I'm unsure if that's true, but can't find anything to say the contrary).
I'm debating building in logic to reconcile what is on S3 and what was processed each day so I can detect the (albeit rare) scenario where a Lambda died (died and couldn't write a failure to a DLQ) and we need to process these files. Is this worth it? Would S3 somehow know that the lambda died and it needs to put the event on a DLQ of its own?
According to https://docs.aws.amazon.com/fr_fr/lambda/latest/dg/with-s3.html, S3 invokes Lambda asynchronously.
Next, from https://docs.aws.amazon.com/lambda/latest/dg/invocation-retries.html, asynchronous Lambda invocations are retried twice.
I guess if more retries are needed, it's better to set up SNS/SQS queuing.
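As a rough sketch of that idea (the function name and queue ARN are placeholders), you could attach an on-failure destination to the function so events that still fail after the built-in retries are not lost:

```python
import boto3

lambda_client = boto3.client("lambda")

# Send events that still fail after the automatic retries to an SQS queue,
# so they can be inspected or replayed later.
lambda_client.put_function_event_invoke_config(
    FunctionName="process-s3-upload",  # placeholder
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnFailure": {
            "Destination": "arn:aws:sqs:us-east-1:123456789012:failed-s3-events"  # placeholder
        }
    },
)
```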

Sending exception message from Step-functions to aws cloudwatch event logs

When an AWS Step Functions execution fails, the output message is null and the same is passed to the CloudWatch event logs, but the error message in the exception is not passed to the CloudWatch event logs. How can I send that exception message to the CloudWatch event logs so that I can process it downstream?
CloudWatch Events and CloudWatch Logs are two different services. If you enable logging at ERROR, FATAL, or ALL log level, you'll see the errors in CloudWatch Logs. See https://docs.aws.amazon.com/step-functions/latest/dg/cw-logs.html
I assume you're referring to CloudWatch Events/EventBridge here. The events Step Functions emits to CloudWatch Events contain the result of the DescribeExecution API. It doesn't include errors, just the input and output of the execution. The error and cause that caused the execution to fail is in the executionFailedEventDetails field of last event in the execution history. You can retrieve the last event by calling GetExecutionHistory with "reverseOrder": true and "maxResults": 1.
I ended up using boto3.client('stepfunctions') and its get_execution_history() method to retrieve the cause and error, as shown below.
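A minimal sketch of that approach (the execution ARN is a placeholder):

```python
import boto3

sfn = boto3.client("stepfunctions")

def get_failure_details(execution_arn):
    # The last event of a failed execution carries the error and cause.
    response = sfn.get_execution_history(
        executionArn=execution_arn,
        reverseOrder=True,
        maxResults=1,
    )
    details = response["events"][0].get("executionFailedEventDetails", {})
    return details.get("error"), details.get("cause")

# Example usage with a placeholder ARN:
# error, cause = get_failure_details("arn:aws:states:us-east-1:123456789012:execution:my-machine:my-run")
```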

How can I get notification about new S3 objects?

I have a scenario where we have many clients uploading to s3.
What is the best approach to knowing that there is a new file?
Is it realistic/a good idea for me to poll the bucket every few seconds?
UPDATE:
Since November 2014, S3 supports the following event notifications:
s3:ObjectCreated:Put – An object was created by an HTTP PUT operation.
s3:ObjectCreated:Post – An object was created by an HTTP POST operation.
s3:ObjectCreated:Copy – An object was created by an S3 copy operation.
s3:ObjectCreated:CompleteMultipartUpload – An object was created by the completion of an S3 multipart upload.
s3:ObjectCreated:* – An object was created by one of the event types listed above or by a similar object creation event added in the future.
s3:ReducedRedundancyObjectLost – An S3 object stored with Reduced Redundancy has been lost.
These notifications can be issued to Amazon SNS, SQS or Lambda. Check out the blog post that's linked in Alan's answer for more information on these new notifications.
Original Answer:
Although Amazon S3 has a bucket notification system in place, it does not support notifications for anything but the s3:ReducedRedundancyLostObject event (see the GET Bucket notification section in their API).
Currently, the only way to check for new objects is to poll the bucket at a preset time interval or to build your own notification logic into the upload clients (possibly based on Amazon SNS).
Push notifications are now built into S3:
http://aws.amazon.com/blogs/aws/s3-event-notification/
You can send notifications to SQS or SNS when an object is created via PUT or POST or a multi-part upload is finished.
Your best option nowadays is using the AWS Lambda service. You can write a Lambda using Node.js (JavaScript), Java, or Python (more options will probably be added over time).
The Lambda service allows you to write functions that respond to events from S3, such as a file upload. Cost effective, scalable, and easy to use.
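As a minimal sketch, a handler for a direct S3 trigger only needs to walk the records in the event (the processing logic is a placeholder):

```python
def handler(event, context):
    # S3 invokes the function with the notification event; no polling is needed.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")
        # ... your processing logic here
```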
You can implement a pub-sub mechanism relatively simply by using SNS, SQS, and AWS Lambda. Please see the steps below. Whenever a new file is added to the bucket, a notification is raised and acted upon automatically.
The basic pub-sub flow is: S3 event notification → SNS topic → SQS queue → Lambda.
Step 1
Simply configure the S3 bucket event notification to notify an SNS topic. You can do this from the S3 console (Properties tab)
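If you prefer to script it, a hedged sketch with boto3 (the bucket name and topic ARN are placeholders; the topic's policy must allow S3 to publish to it) might look like this:

```python
import boto3

s3 = boto3.client("s3")

# Send all object-creation events from the bucket to an SNS topic.
s3.put_bucket_notification_configuration(
    Bucket="my-upload-bucket",  # placeholder
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:new-object-topic",  # placeholder
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```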
Step 2
Make an SQS Queue subscribed to this topic. So whenever an object is uploaded to the S3 bucket a message will be added to the queue.
Step 3
Create an AWS Lambda function to read messages from the SQS queue. AWS Lambda supports SQS events as a trigger, so whenever a message appears in the SQS queue, Lambda will be triggered and read the message. Once a message is successfully processed, it is automatically deleted from the queue. Messages that can't be processed by Lambda (erroneous messages) will not be deleted, so they will pile up in the queue. To prevent this behavior, using a Dead Letter Queue (DLQ) is a good idea.
In your Lambda function, add your logic to handle what to do when users upload files to the bucket (a sketch of such a handler follows the note below).
Note: DLQ is nothing more than a normal queue.
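A minimal sketch of such a handler, assuming the SQS subscription does not use raw message delivery (so each SQS body is an SNS envelope whose Message field holds the S3 event):

```python
import json

def handler(event, context):
    for record in event["Records"]:
        # The SQS body is an SNS envelope; its "Message" field is the S3 event as JSON.
        sns_envelope = json.loads(record["body"])
        s3_event = json.loads(sns_envelope["Message"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            print(f"New object: s3://{bucket}/{key}")
            # ... your processing logic here
```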
Step 4
Debugging and analyzing the process
Make use of Amazon CloudWatch to log details. Each Lambda function writes its logs to a log group. This is a good place to check if something went wrong.