When I try to delete an event notification from S3, I get the following message:
In text form:
Unable to validate the following destination configurations. Not authorized to invoke function [arn:aws:lambda:eu-west-1:FOOBAR:function:FOOBAR]. (arn:aws:lambda:eu-west-1:FOOBAR:function:FOOBAR, null)
Nobody in my organization seems to be able to delete that - not even admins.
When I try to set the same S3 event notification as a trigger in AWS Lambda via the web interface, I get:
Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type. (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: FOOBAR; S3 Extended Request ID: FOOBAR/FOOBAR/FOOBAR)
How can I delete that existing event notification? How can I further investigate the problem?
I was having the same problem tonight and did the following:
1) Issue the command:
aws s3api put-bucket-notification-configuration --bucket=mybucket --notification-configuration="{}"
2) In the console, delete the troublesome event.
Assuming you have better permissions from the CLI:
aws s3api put-bucket-notification-configuration --bucket=mybucket --notification-configuration='{"LambdaFunctionConfigurations": []}'
Retrieve all the notification configurations of a specific bucket:
aws s3api get-bucket-notification-configuration --bucket=mybucket > notification.sh
The notification.sh file will look like the following:
{
    "LambdaFunctionConfigurations": [
        {
            "Id": ...,
            "LambdaFunctionArn": ...,
            "Events": [...],
            "Filter": {...}
        },
        { ... }
    ]
}
Remove the unwanted notification object from notification.sh, then modify the file like the following:
#! /bin/zsh
aws s3api put-bucket-notification-configuration --bucket=mybucket --notification-configuration='{
    "LambdaFunctionConfigurations": [
        {
            "Id": ...,
            "LambdaFunctionArn": ...,
            "Events": [...],
            "Filter": {...}
        },
        { ... }
    ]
}'
Run the shell script:
source notification.sh
There is no 's3api delete notification-configuration' in the AWS CLI. Only 's3api put-bucket-notification-configuration' is available, and it overrides any previously existing events on the S3 bucket. So, if you wish to delete only a specific event, you need to handle that programmatically.
Something like this:
Step 1. Do a 's3api get-bucket-notification-configuration' and get the s3-notification.json file.
Step 2. Now edit this file using your code to produce the required s3-notification.json file.
Step 3. Finally, do 's3api put-bucket-notification-configuration' (aws s3api put-bucket-notification-configuration --bucket my-bucket --notification-configuration file://s3-notification.json)
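For the programmatic edit, here is a minimal Python/boto3 sketch of that get/edit/put cycle, assuming you know the Id of the configuration you want to drop (the bucket name and Id below are placeholders):

import boto3

s3 = boto3.client("s3")
bucket = "mybucket"            # placeholder bucket name
unwanted_id = "MyOldTrigger"   # placeholder: Id of the notification to remove

# Step 1: fetch the current notification configuration.
config = s3.get_bucket_notification_configuration(Bucket=bucket)

# Step 2: drop the response metadata and filter out the unwanted Lambda configuration.
config.pop("ResponseMetadata", None)
config["LambdaFunctionConfigurations"] = [
    c for c in config.get("LambdaFunctionConfigurations", [])
    if c.get("Id") != unwanted_id
]

# Step 3: write the reduced configuration back (this replaces the whole configuration).
s3.put_bucket_notification_configuration(Bucket=bucket, NotificationConfiguration=config)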
I worked out the logic in the AWS CLI; it requires a jq command to merge the JSON output.
I tried this but it didn't work for me. I uploaded a Lambda with the same function name but without events, then went to the function in the dashboard and added a trigger with the same prefix and suffix. When I applied the changes the dashboard showed an error, but going back to the Lambda function I could see the trigger was now linked to it, so afterwards I could remove the Lambda or the events.
Related
When I enable the distributed map task and export the state transition history to S3, it throws an exception with the following error message:
An error occurred while executing the state 'Map' (entered at the event id #12). Failed to write a test manifest into the specified output bucket. | Message from S3: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint
This is my ResultWriter key definition:
/// other keys omitted...
"ResultWriter": {
    "Resource": "arn:aws:states:::s3:putObject",
    "Parameters": {
        "Bucket": "sds-qa-nv",
        "Prefix": "distributed_excecutions/"
    }
}
I tried enabling "Export Map state results to Amazon S3" to save the state transitions to S3, and I expect the results to be saved to S3 without failing.
I am trying to configure an AWS event rule using an event pattern. By default the pattern is:
{
    "source": [
        "aws.ec2"
    ],
    "detail-type": [
        "EC2 Instance State-change Notification"
    ]
}
I want to filter the EC2 instances based on a tag; let's say all of my EC2 instances have a unique AppID attached, e.g. 20567. The reason I want to filter is that other teams have EC2 instances under the same AWS account, and I want to configure SNS only for the instances that belong to me, based on the 'App ID' tag.
As the target I have selected an SNS topic, using an input transformer with the value:
{"instance":"$.detail.instance-id","state":"$.detail.state","time":"$.time","region":"$.region","account":"$.account"}
Any suggestions on where I can pass the tag key/value to filter my EC2 instances?
I can only speak for CloudWatch Events (now called EventBridge). We do not get tag information from EC2 prior to rule matching. A sample EC2 event is shown at https://docs.aws.amazon.com/eventbridge/latest/userguide/event-types.html#ec2-event-type:
{
    "id": "7bf73129-1428-4cd3-a780-95db273d1602",
    "detail-type": "EC2 Instance State-change Notification",
    "source": "aws.ec2",
    "account": "123456789012",
    "time": "2015-11-11T21:29:54Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"
    ],
    "detail": {
        "instance-id": "i-abcd1111",
        "state": "pending"
    }
}
So your best course of action would be to fetch the tags for the resource and filter out the events after reading them.
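A rough sketch of that approach with a Lambda as the rule target (Python/boto3; the tag key, tag value, and SNS topic ARN below are assumptions for illustration):

import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:my-topic"  # placeholder topic ARN

def handler(event, context):
    instance_id = event["detail"]["instance-id"]

    # The event carries no tags, so look them up for the instance.
    response = ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [instance_id]}]
    )
    tags = {t["Key"]: t["Value"] for t in response["Tags"]}

    # Forward only instances that belong to us, based on the 'App ID' tag.
    if tags.get("App ID") == "20567":
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=f"Instance {instance_id} is now {event['detail']['state']}",
        )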
I have an AWS Lambda that does some calculation and writes its output to an S3 location. The Lambda is triggered by a CloudWatch cron expression. Since the Lambda can be triggered multiple times, I want to modify the Lambda code so that it handles multiple triggers.
The only major side effects of my Lambda are writing to S3 and sending a mail. In this case, how do I ensure idempotent behavior even when the Lambda executes multiple times?
You need a unique id and to check on it before processing.
See if the event has one. For example, the sample event for a "cron expression" CloudWatch rule suggests that you will get something like this:
{
    "version": "0",
    "id": "89d1a02d-5ec7-412e-82f5-13505f849b41",
    "detail-type": "Scheduled Event",
    "source": "aws.events",
    "account": "123456789012",
    "time": "2016-12-30T18:44:49Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:events:us-east-1:123456789012:rule/SampleRule"
    ],
    "detail": {}
}
In this case your code would read event.id and write that to S3 (say yourbucket/running/89d1a02d-5ec7-412e-82f5-13505f849b41). When the Lambda starts, it can list the keys under "yourbucket/running" and see if event.id matches any of them.
If none matches, create it.
It's not a bullet-proof solution; you might conceivably run into a race condition, say if another Lambda fires up while AWS is slow in creating the key but fast at launching the other Lambda, but that is what you have to live with if you would like to use S3.
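A minimal sketch of that check at the top of the handler (Python/boto3; the bucket name and prefix are placeholders):

import boto3

s3 = boto3.client("s3")
BUCKET = "yourbucket"   # placeholder bucket
PREFIX = "running/"     # placeholder prefix for the marker keys

def handler(event, context):
    marker_key = PREFIX + event["id"]

    # Has a previous invocation already recorded this event id?
    existing = s3.list_objects_v2(Bucket=BUCKET, Prefix=marker_key)
    if existing.get("KeyCount", 0) > 0:
        return  # duplicate trigger, skip the side effects

    # Record the marker first, then do the real work (write output, send mail).
    s3.put_object(Bucket=BUCKET, Key=marker_key, Body=b"")
    # ... calculation, S3 write, and mail sending go here ...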
My team wants to create an S3 bucket in a cloudformation template without assigning it a bucket name (to let cloudformation name it itself).
When putting a file into the S3 bucket from my lambda function, is there a way for me to get the S3 bucket's name without having to manually look at the AWS console and check what name was created?
Using the Serverless Application Model (SAM) you can include environment variables in the function properties:
AWSTemplateFormatVersion: '2010-09-09'
Description: "Demo"
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  MyLambdaFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Runtime: nodejs10.x
      Handler: index.handler
      CodeUri: ./src
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Action:
                - s3:PutObject
              Effect: 'Allow'
              Resource: !Sub ${MyS3Bucket.Arn}/*
      Environment:
        Variables:
          BUCKET_NAME: !Ref MyS3Bucket
  MyS3Bucket:
    Type: 'AWS::S3::Bucket'
Then your function can access the environment variable using process.env.BUCKET_NAME in Node.js. In Python I think you'd use os.environ['BUCKET_NAME'].
const aws = require('aws-sdk');
const s3 = new aws.S3();

exports.handler = async (event) => {
    const params = {
        Body: 'The Body',
        Bucket: process.env.BUCKET_NAME,
        Key: 'abc123',
    };
    return s3.putObject(params).promise();
};
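For reference, a rough Python equivalent of the same handler (a sketch assuming the boto3 client that ships with the Lambda runtime):

import os
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # The bucket name arrives through the environment variable set by the template.
    return s3.put_object(
        Body=b"The Body",
        Bucket=os.environ["BUCKET_NAME"],
        Key="abc123",
    )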
I would assume this works for CloudFormation templates that aren't using the SAM transform, too.
You can use Fn::GetAtt to get the values from your newly created S3 bucket.
You can check the S3 Ref & Get Attribute documentation.
The problem is how to pass the value to the Lambda function.
Here are steps that might work:
Use the Get Attribute function above to get the S3 bucket name that CloudFormation created.
Insert the value into a file; you can use UserData, or use Metadata if you are using cloud-init already.
Store the file in an existing S3 bucket (or any other storage that the Lambda can access); you can use the CloudFormation template bucket that is always created when you launch a CloudFormation template (usually named cf-template...).
Add code to your Lambda to access S3 and get the file. Now you have the name of the S3 bucket that your CloudFormation stack created and can use it in the Lambda, as sketched below.
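A minimal sketch of that last step (Python/boto3; the config bucket and key names are made up for illustration):

import boto3

s3 = boto3.client("s3")

CONFIG_BUCKET = "my-config-bucket"        # placeholder: bucket holding the file
CONFIG_KEY = "created-bucket-name.txt"    # placeholder: file written during stack creation

def handler(event, context):
    # Read the bucket name that was written out when the stack was created.
    obj = s3.get_object(Bucket=CONFIG_BUCKET, Key=CONFIG_KEY)
    target_bucket = obj["Body"].read().decode("utf-8").strip()

    # Use the resolved bucket name for the actual write.
    s3.put_object(Bucket=target_bucket, Key="abc123", Body=b"The Body")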
Hope this helps.
So I am trying to create a simple pipeline in AWS. I want to execute a Step Function using data generated by a stream that triggers the first Lambda of the state machine.
What I want to do is the following:
Input data is streamed by AWS Kinesis.
This Kinesis stream is used as a trigger for lambda1, which executes and writes to an S3 bucket.
This would trigger (using the Step Function) lambda2, which would read the content from the given bucket and write it to another bucket.
Now I want to implement a state machine using AWS Step Functions. I have created the state machine, which is quite straightforward:
{
    "Comment": "Linear step function test",
    "StartAt": "lambda1",
    "States": {
        "lambda1": {
            "Type": "Task",
            "Resource": "arn:....",
            "Next": "lambda2"
        },
        "lambda2": {
            "Type": "Task",
            "Resource": "arn:...",
            "End": true
        }
    }
}
What I want is for Kinesis to trigger the first Lambda and, once it has executed, the Step Function would execute lambda2. This does not seem to happen: the Step Function does nothing, even though my lambda1 is triggered by the stream and writes to the S3 bucket. I have the option to manually start a new execution and pass JSON as input, but that is not the workflow I am looking for.
You are kicking off the state machine the wrong way.
You need to add another starter Lambda function that uses the SDK to invoke the state machine. The process is like this:
kinesis -> starter(lambda) -> StateMachine (start Lambda 1 and Lambda 2)
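A rough sketch of such a starter Lambda (Python/boto3; the state machine ARN is a placeholder, and the decoded Kinesis records are passed through as the execution input):

import base64
import json
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = "arn:aws:states:eu-west-1:123456789012:stateMachine:MyStateMachine"  # placeholder

def handler(event, context):
    # Decode the Kinesis records and hand them to the state machine as input.
    records = [
        json.loads(base64.b64decode(r["kinesis"]["data"]))
        for r in event["Records"]
    ]
    sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({"records": records}),
    )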
The problem with using Step Functions is the lack of triggers. There are only three triggers: CloudWatch Events, the SDK, or API Gateway.