Capture events in CloudWatch with target SNS - amazon-cloudwatch

How do I create SNS target message attributes from CloudWatch Event rules?
I have created an event rule that listens to job status changes, with SNS as the target.
I then fan out to SQS and filter based on states.
So how can I pass message attributes to SNS through a CloudWatch Event?
I want to pass the event state as an SNS message attribute so that I can filter on the state attribute.
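For reference, the state-based filter on the SQS side is an SNS subscription filter policy. A minimal boto3 sketch, assuming the published messages already carry a state message attribute; the subscription ARN and the attribute values are placeholders:

```python
import json
import boto3

sns = boto3.client("sns")

# Placeholder ARN of the SQS subscription on the fan-out topic.
subscription_arn = "arn:aws:sns:us-east-1:123456789012:job-status-topic:subscription-id"

# Only deliver messages whose "state" message attribute matches one of these values.
filter_policy = {"state": ["FAILED", "SUCCEEDED"]}

sns.set_subscription_attributes(
    SubscriptionArn=subscription_arn,
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps(filter_policy),
)
```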

Related

How to create an alarm on the number of EventBridge Rules breaching a certain threshold?

I have a system that auto-creates and deletes CloudWatch Rules, but there is a hard limit of 300 rules per Event Bus. I want to create an alarm that gets triggered whenever a certain number, say 100, is reached. How can I do that using CDK code?
Create a Lambda function that calls the ListRules API. The Lambda counts the rules. Have EventBridge trigger the Lambda periodically with a scheduled Rule. If you want an Alarm, your Lambda can write the rule count to CloudWatch as a custom metric, and you can configure an Alarm to monitor it. A simpler option is to have the Lambda publish an SNS notification to a Topic if the rule count exceeds 100.
To recap, the CDK constructs would be: a Lambda Function, an EventBridge Rule, and a CloudWatch Alarm (or an SNS Topic and Subscription).
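A minimal boto3 sketch of that Lambda, assuming the default event bus and a custom namespace Custom/EventBridge with metric name RuleCount (both arbitrary choices, not from the question):

```python
import boto3

events = boto3.client("events")
cloudwatch = boto3.client("cloudwatch")


def handler(event, context):
    # Count all rules on the default event bus, following pagination.
    count = 0
    params = {"EventBusName": "default"}  # assumed bus name
    while True:
        resp = events.list_rules(**params)
        count += len(resp.get("Rules", []))
        if "NextToken" not in resp:
            break
        params["NextToken"] = resp["NextToken"]

    # Publish the count as a custom metric; a CloudWatch Alarm can then watch it.
    cloudwatch.put_metric_data(
        Namespace="Custom/EventBridge",           # assumed namespace
        MetricData=[{"MetricName": "RuleCount",   # assumed metric name
                     "Value": count,
                     "Unit": "Count"}],
    )
    return count
```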

How to retrieve the matched event from CloudWatch in CodeBuild?

In Lambda, CloudWatch events are passed to handler(event, context). If I set CodeBuild as the target, which variable do I use to read the matched event in CodeBuild?
For the AWS CodeBuild target in CloudWatch Events (for example, to trigger a build at regular intervals), you can match by projectARN. Details: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-tutorial-codebuild.html
If you are using this integration more as a means of consuming the event stream from CodeBuild for build state and phase changes, you can follow the sample at https://docs.aws.amazon.com/codebuild/latest/userguide/sample-build-notifications.html
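For the second case, a hedged boto3 sketch of creating such a rule; the rule name, project name, and chosen build statuses are placeholders:

```python
import json
import boto3

events = boto3.client("events")

# Match CodeBuild build state changes for a specific project (names are placeholders).
event_pattern = {
    "source": ["aws.codebuild"],
    "detail-type": ["CodeBuild Build State Change"],
    "detail": {
        "build-status": ["FAILED", "SUCCEEDED"],
        "project-name": ["my-project"],
    },
}

events.put_rule(
    Name="codebuild-state-change",  # placeholder rule name
    EventPattern=json.dumps(event_pattern),
)
```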

Publish a list of "objects" to SNS in one message, then subscribe with a master lambda, feed to slaves

Wondering if this is recommended.
With a master Lambda, subscribe to an SNS topic where I publish a message containing a list of source/destination pairs for S3, say 100 per message.
The master Lambda will then loop through those pairs and call a slave worker for each item in the list, which will copy the S3 object from source to destination.
I was originally trying to use SQS, but SQS is not an event source for Lambda. CloudWatch Events are too murky as to how you pass actual data as the payload.
So I am wondering if my approach above is valid and will hold up, or if there is a better alternative.
Thanks
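If the SNS approach is used, the master Lambda could look roughly like the following sketch; the worker function name and the message shape (a JSON list of pairs) are assumptions, not from the question:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

WORKER_FUNCTION = "s3-copy-worker"  # hypothetical slave/worker function name


def handler(event, context):
    # SNS delivers the published message inside Records[*].Sns.Message.
    for record in event["Records"]:
        pairs = json.loads(record["Sns"]["Message"])  # assumed: a JSON list of pairs
        for pair in pairs:
            # Fire-and-forget: invoke one worker per source/destination pair.
            lambda_client.invoke(
                FunctionName=WORKER_FUNCTION,
                InvocationType="Event",  # asynchronous invocation
                Payload=json.dumps(pair),
            )
```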

Amazon SNS Notifications on S3 objects

We would like to publish a message to SNS Topic subscribers, and the notification can take different forms: one notification if a new object is placed, another if an object is updated, and likewise if an object is deleted.
Example:
SNS Topic: MyTopic
S3 Path: s3://mybucket/myfolder1/myfolder2/myfolder3/
Task:
1. If I add file1 to this S3 path, a notification should be sent out to all subscribers of MyTopic.
2. If file1 is updated, a notification should be sent out.
3. If file1 is deleted, a notification should be sent out.
Is there an IAM policy we can define that matches all these criteria?
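This is configured through the bucket's notification configuration rather than an IAM policy. A boto3 sketch using the bucket, topic, and prefix from the example (the account and region in the topic ARN are placeholders); note that S3 has no separate "update" event, so overwriting an object surfaces as another ObjectCreated event:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="mybucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:MyTopic",  # placeholder ARN
                # Creation covers both new objects and overwrites ("updates");
                # removal covers deletes.
                "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "myfolder1/myfolder2/myfolder3/"}
                        ]
                    }
                },
            }
        ]
    },
)
```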

How can I get notification about new S3 objects?

I have a scenario where we have many clients uploading to S3.
What is the best approach to knowing that there is a new file?
Is it realistic/a good idea for me to poll the bucket every few seconds?
UPDATE:
Since November 2014, S3 supports the following event notifications:
s3:ObjectCreated:Put – An object was created by an HTTP PUT operation.
s3:ObjectCreated:Post – An object was created by an HTTP POST operation.
s3:ObjectCreated:Copy – An object was created by an S3 copy operation.
s3:ObjectCreated:CompleteMultipartUpload – An object was created by the completion of an S3 multi-part upload.
s3:ObjectCreated:* – An object was created by one of the event types listed above or by a similar object creation event added in the future.
s3:ReducedRedundancyObjectLost – An S3 object stored with Reduced Redundancy has been lost.
These notifications can be issued to Amazon SNS, SQS or Lambda. Check out the blog post that's linked in Alan's answer for more information on these new notifications.
Original Answer:
Although Amazon S3 has a bucket notification system in place, it does not support notifications for anything but the s3:ReducedRedundancyLostObject event (see the GET Bucket notification section in their API).
Currently the only way to check for new objects is to poll the bucket at a preset time interval or build your own notification logic in the upload clients (possibly based on Amazon SNS).
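A polling loop along those lines might look like the sketch below; the bucket name and interval are placeholders, and it simply remembers which keys it has already seen:

```python
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "mybucket"  # placeholder bucket name
seen_keys = set()

while True:
    # List all objects (paginated) and report any key we haven't seen before.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            if obj["Key"] not in seen_keys:
                seen_keys.add(obj["Key"])
                print("New object:", obj["Key"])  # handle the new file here
    time.sleep(30)  # poll at a preset interval
```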
Push notifications are now built into S3:
http://aws.amazon.com/blogs/aws/s3-event-notification/
You can send notifications to SQS or SNS when an object is created via PUT or POST, or when a multi-part upload is finished.
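One related detail when wiring S3 to SNS: the topic's access policy has to allow S3 to publish to it. A hedged boto3 sketch with placeholder ARNs:

```python
import json
import boto3

sns = boto3.client("sns")

topic_arn = "arn:aws:sns:us-east-1:123456789012:MyTopic"  # placeholder
bucket_arn = "arn:aws:s3:::mybucket"                      # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "SNS:Publish",
        "Resource": topic_arn,
        # Restrict publishing to notifications coming from this bucket.
        "Condition": {"ArnLike": {"aws:SourceArn": bucket_arn}},
    }],
}

sns.set_topic_attributes(
    TopicArn=topic_arn,
    AttributeName="Policy",
    AttributeValue=json.dumps(policy),
)
```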
Your best option nowadays is the AWS Lambda service. You can write a Lambda function in Node.js, Java, or Python (more options will probably be added in time).
The Lambda service allows you to write functions that respond to events from S3, such as a file upload. It is cost-effective, scalable, and easy to use.
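A minimal S3-triggered handler in Python, as a sketch; the print call stands in for your own logic:

```python
import urllib.parse


def handler(event, context):
    # S3 delivers one or more records per invocation.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Keys arrive URL-encoded (for example, spaces become '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New object uploaded: s3://{bucket}/{key}")
```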
You can implement a pub-sub mechanism relatively simply by using SNS, SQS, and AWS Lambda. Please see the steps below. Whenever a new file is added to the bucket, a notification can be raised and acted upon (everything is automated).
Please see the attached diagram explaining the basic pub-sub mechanism.
Step 1
Simply configure the S3 bucket event notification to notify an SNS topic. You can do this from the S3 console (Properties tab).
Step 2
Create an SQS queue subscribed to this topic, so that whenever an object is uploaded to the S3 bucket, a message will be added to the queue.
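A boto3 sketch of this step, with placeholder ARNs; the queue also needs an access policy that lets the topic send messages to it:

```python
import boto3

sns = boto3.client("sns")

# Subscribe the queue (by ARN) to the topic (ARNs are placeholders).
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:MyTopic",
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:my-upload-queue",
)
```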
Step 3
Create an AWS Lambda function to read messages from the SQS queue. AWS Lambda supports SQS events as a trigger, so whenever a message appears in the SQS queue, Lambda will trigger and read the message. Once the message is successfully processed, it is automatically deleted from the queue. Messages that cannot be processed by Lambda (erroneous messages) are not deleted and will pile up in the queue; to prevent this behavior, using a Dead Letter Queue (DLQ) is a good idea.
In your Lambda function, add your logic to handle what to do when users upload files to the bucket.
Note: a DLQ is nothing more than a normal queue.
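A sketch of the Step 3 handler, assuming raw message delivery is not enabled on the subscription, so each SQS body is an SNS envelope whose Message field contains the S3 event:

```python
import json


def handler(event, context):
    for sqs_record in event["Records"]:
        # SQS body -> SNS envelope -> S3 event notification.
        sns_envelope = json.loads(sqs_record["body"])
        s3_event = json.loads(sns_envelope["Message"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            print(f"Uploaded: s3://{bucket}/{key}")  # add your handling logic here
    # Returning normally lets Lambda delete the messages from the queue;
    # raising an exception leaves them for retry or the DLQ.
```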
Step 4
Debugging and analyzing the process
Make use of Amazon CloudWatch to log details. Each Lambda function writes logs under its own log group. This is a good place to check if something went wrong.