Serverless Framework - S3 upload lambda trigger - amazon-s3

I'd like to trigger different Lambdas on the same bucket according to the folder where the file is uploaded. Basically, when the user uploads a file to "user/some_id/bills" I want to trigger lambda 1; when the user uploads a file to "user/some_id/docs" I want to trigger lambda 2.
I tried the configuration below, but it did not work:
insertUploadBill:
  handler: resources/insertUploadBill.main
  events:
    - s3:
        bucket: ${self:custom.settings.BUCKET}
        event: s3:ObjectCreated:*
        rules:
          - prefix: user/*/bills/

insertUploadDocs:
  handler: resources/insertUploadDoc.main
  events:
    - s3:
        bucket: ${self:custom.settings.BUCKET}
        event: s3:ObjectCreated:*
        rules:
          - prefix: user/*/docs/

If you look at the docs:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#notification-how-to-filtering
wildcard characters cannot be used in the filter rules (prefix/suffix).
So you can either change the S3 object key to match something like this:
user/images/[user-id]
Or you can make a separate Lambda that is invoked on all s3:ObjectCreated:* events, and have that Lambda match the key and invoke your current Lambdas, resources/insertUploadBill.main and resources/insertUploadDoc.main.
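A minimal sketch of that dispatcher approach in Python (the routing regexes and target function names mirror the handlers above but are assumptions about your key layout and deployed names):

```python
import json
import re

# Map a key-matching pattern to the function that should handle it.
ROUTES = [
    (re.compile(r"^user/[^/]+/bills/"), "insertUploadBill"),
    (re.compile(r"^user/[^/]+/docs/"), "insertUploadDoc"),
]

def target_for(key):
    """Return the function name that should handle this S3 key, or None."""
    for pattern, function_name in ROUTES:
        if pattern.match(key):
            return function_name
    return None

def handler(event, context):
    # Invoked on every s3:ObjectCreated:* event for the bucket.
    import boto3  # imported lazily; only needed when actually dispatching
    lam = boto3.client("lambda")
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        target = target_for(key)
        if target:
            # Async fire-and-forget invoke, forwarding the single record.
            lam.invoke(FunctionName=target,
                       InvocationType="Event",
                       Payload=json.dumps({"Records": [record]}))
```

This keeps the prefix logic in one place, at the cost of one extra Lambda hop per upload.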

Related

How to delete an ElastiCache subnet group with Cloud Custodian?

I want to use Cloud Custodian to clean up some AWS resources (ElastiCache in this case).
However, I got an error when trying to delete ElastiCache subnet groups. According to the Custodian cache.subnet-group documentation, delete is not a valid action for cache.subnet-group.
How do I perform the delete in this case?
My policy file:
policies:
  - name: cleanup-elasticache-subnet-group
    resource: aws.cache-subnet-group
    filters:
      - type: value
        key: "CacheSubnetGroupName"
        op: contains
        value: foo
    actions:
      - type: delete   # >>> !!! delete is not a valid action type !!!
Since Cloud Custodian doesn't natively provide the delete action today, your main options are:
Open a feature request for an aws.cache-subnet-group.delete action.
Use Cloud Custodian to report/notify on subnet groups you need to delete, but handle the actual deletion with external tools.
For what it's worth, you can review open feature requests here. I didn't see any for this action, but it's always good to check!
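For the external-tool route, a sketch with boto3 (the "foo" substring filter mirrors the policy above; review the matches before letting it delete anything):

```python
def matching_group_names(groups, substring):
    """Replicate the policy's 'contains' filter on CacheSubnetGroupName."""
    return [g["CacheSubnetGroupName"] for g in groups
            if substring in g["CacheSubnetGroupName"]]

def cleanup(substring="foo"):
    import boto3  # imported lazily so the filter above is testable without the SDK
    client = boto3.client("elasticache")
    groups = client.describe_cache_subnet_groups()["CacheSubnetGroups"]
    for name in matching_group_names(groups, substring):
        print(f"deleting {name}")
        client.delete_cache_subnet_group(CacheSubnetGroupName=name)
```

Note that deletion will still fail for subnet groups that are in use by a cluster, so run it after the clusters themselves are gone.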

Istio AuthorizationPolicy with Wildcard

Does AuthorizationPolicy not support any wildcard pattern on paths?
I have the following endpoints:
/my-service/docs/active (GET)
/my-service/docs/<id>/activate/<bool> (PUT)
The first one will get all active docs, and the second will activate/deactivate the specific doc.
I've tried to set this in the AuthorizationPolicy, and it seems to ignore the policy due to the wildcard.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-service-private
  namespace: default
spec:
  action: DENY
  selector:
    matchLabels:
      app: my-service
  rules:
    - from:
        - source:
            notNamespaces: ["default"]
      to:
        - operation:
            methods: ["GET"]
            paths: ["/my-service/docs/active"]
        - operation:
            methods: ["PUT"]
            paths: ["/my-service/docs/*/activate/*"]
Any different solution here, other than updating all my endpoints?
Thanks!
As I mentioned in comments
According to istio documentation:
Rule
Rule matches requests from a list of sources that perform a list of
operations subject to a list of conditions. A match occurs when at
least one source, operation and condition matches the request. An
empty rule is always matched.
Any string field in the rule supports Exact, Prefix, Suffix and
Presence match:
Exact match: “abc” will match on value “abc”.
Prefix match: “abc*” will match on value “abc” and “abcd”.
Suffix match: “*abc” will match on value “abc” and “xabc”.
Presence match: “*” will match when value is not empty.
So AuthorizationPolicy does support wildcards, but I think the issue is with the /*/activate/* path: paths can use a wildcard only at the start, at the end, or as the whole string; a wildcard in the middle just doesn't work.
There are related open github issues about that:
https://github.com/istio/istio/issues/16585
https://github.com/istio/istio/issues/25021
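If it is acceptable to deny everything under /my-service/docs/ for sources outside the namespace, one workaround (a sketch based on the policy in the question, assuming that broader scope is fine) is to collapse both operations into a single prefix match, which only needs a trailing wildcard:

```yaml
rules:
  - from:
      - source:
          notNamespaces: ["default"]
    to:
      - operation:
          methods: ["GET", "PUT"]
          paths: ["/my-service/docs/*"]
```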

How to set the Lambda Invoke Role for a custom authorizer?

I've defined my authorizer in the custom section of my serverless.yml as follows:
custom:
  authoriser:
    name: api-authorizer
    arn: arn:aws:lambda:eu-west-1:nnnnnnnnnn:function:api-authorizer
    resultTtlInSeconds: 3600
    identitySource: method.request.header.Authorization
    identityValidationExpression: '^Bearer [-0-9a-zA-Z\.]*$'
The bit I can't work out is how to set the Lambda Invoke Role. There doesn't appear to be any mapping between the values I can set and what is needed by Cloud Formation.
Is this possible to do in this way? Or do I need to register my authorizer as a resource?

How can I delete an existing S3 event notification?

When I try to delete an event notification from S3, I get the following message:
Unable to validate the following destination configurations. Not authorized to invoke function [arn:aws:lambda:eu-west-1:FOOBAR:function:FOOBAR]. (arn:aws:lambda:eu-west-1:FOOBAR:function:FOOBAR, null)
Nobody in my organization seems to be able to delete that - not even admins.
When I try to set the same S3 event notification in AWS Lambda as a trigger via the web interface, I get
Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type. (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: FOOBAR; S3 Extended Request ID: FOOBAR/FOOBAR/FOOBAR)
How can I delete that existing event notification? How can I further investigate the problem?
I was having the same problem tonight and did the following:
1) Issue the command:
aws s3api put-bucket-notification-configuration --bucket=mybucket --notification-configuration="{}"
2) In the console, delete the troublesome event.
Assuming you have better permissions from the CLI:
aws s3api put-bucket-notification-configuration --bucket=mybucket --notification-configuration='{"LambdaFunctionConfigurations": []}'
Retrieve all the notification configurations of a specific bucket:
aws s3api get-bucket-notification-configuration --bucket=mybucket > notification.sh
The notification.sh file would look like the following:
{
  "LambdaFunctionConfigurations": [
    {
      "Id": ...,
      "LambdaFunctionArn": ...,
      "Events": [...],
      "Filter": {...},
    },
    { ... },
  ]
}
Remove the unwanted notification object from notification.sh, then modify the file like the following:
#!/bin/zsh
aws s3api put-bucket-notification-configuration --bucket=mybucket --notification-configuration='{
  "LambdaFunctionConfigurations": [
    {
      "Id": ...,
      "LambdaFunctionArn": ...,
      "Events": [...],
      "Filter": {...},
    },
    { ... },
  ]
}'
Run the shell script:
source notification.sh
There is no 's3api delete-notification-configuration' in the AWS CLI. Only 's3api put-bucket-notification-configuration' is present, which overrides any previously existing events on the S3 bucket. So if you wish to delete only a specific event, you need to handle that programmatically.
Something like this:
Step 1. Do an 's3api get-bucket-notification-configuration' and save the result as s3-notification.json.
Step 2. Now edit this file down to the required s3-notification.json using your code.
Step 3. Finally, do an 's3api put-bucket-notification-configuration' (aws s3api put-bucket-notification-configuration --bucket my-bucket --notification-configuration file://s3-notification.json)
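Step 2 above can be sketched in Python (the Id value "BillUpload" is a hypothetical example; use the Id of the notification you want to drop):

```python
import json

def remove_notification(config, config_id):
    """Drop the notification entry whose Id matches; keep everything else."""
    configs = config.get("LambdaFunctionConfigurations", [])
    config["LambdaFunctionConfigurations"] = [
        c for c in configs if c.get("Id") != config_id
    ]
    return config

def rewrite_file(path, config_id):
    """Apply the filter in place to the file fetched in step 1."""
    with open(path) as f:
        config = json.load(f)
    with open(path, "w") as f:
        json.dump(remove_notification(config, config_id), f, indent=2)

# Usage (hypothetical Id):
#   rewrite_file("s3-notification.json", "BillUpload")
```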
I had worked out the same logic in the AWS CLI; it requires a jq command to merge the JSON output.
I tried the above, but it didn't work for me. Instead, I deployed a Lambda with the same function name but without the events, then went to the function in the dashboard and added a trigger with the same prefix and suffix. When applying the change the dashboard shows an error, but if you go back to the Lambda function you can see the trigger is now linked, and you can then remove the Lambda or the events.

Way to access an S3 bucket's name from CloudFormation/boto3?

My team wants to create an S3 bucket in a cloudformation template without assigning it a bucket name (to let cloudformation name it itself).
When putting a file into the S3 bucket from my lambda function, is there a way for me to get the S3 bucket's name without having to manually look at the AWS console and check what name was created?
Using the Serverless Application Model (SAM), you can include environment variables with the function properties:
AWSTemplateFormatVersion: '2010-09-09'
Description: "Demo"
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  MyLambdaFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Runtime: nodejs10.x
      Handler: index.handler
      CodeUri: ./src
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Action:
                - s3:PutObject
              Effect: 'Allow'
              Resource: !Sub ${MyS3Bucket.Arn}/*
      Environment:
        Variables:
          BUCKET_NAME: !Ref MyS3Bucket
  MyS3Bucket:
    Type: 'AWS::S3::Bucket'
Then your function can access the environment variable as process.env.BUCKET_NAME in Node.js (in Python, I think you'd use os.environ['BUCKET_NAME']):
const aws = require('aws-sdk');
const s3 = new aws.S3();

exports.handler = async (event) => {
  const params = {
    Body: 'The Body',
    Bucket: process.env.BUCKET_NAME,
    Key: 'abc123',
  };
  return s3.putObject(params).promise();
};
I would assume this works for any CloudFormation templates which aren't using the SAM transform too.
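A Python equivalent of the handler above (a sketch; the helper is split out purely for illustration, and boto3 is assumed to be available, as it is by default in the AWS Lambda Python runtime):

```python
import os

def put_params(key, body):
    # BUCKET_NAME is injected by the template's Environment block above.
    return {"Bucket": os.environ["BUCKET_NAME"], "Key": key, "Body": body}

def handler(event, context):
    import boto3  # imported lazily so the helper above stays testable without the SDK
    s3 = boto3.client("s3")
    return s3.put_object(**put_params("abc123", b"The Body"))
```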
You can use Fn::GetAtt to get values from your newly created S3 bucket.
You can check it here:
S3 Ref & GetAtt documentation
The problem is how to pass the value to the Lambda function. Here are steps that might work:
1. Use the Get Attribute function above to get the name of the S3 bucket that CloudFormation created.
2. Insert the value into a file; you can use UserData, or Metadata if you are using cloud-init already.
3. Store the file in an existing S3 bucket (or any other storage that Lambda can access); you can use the CloudFormation template bucket that is always created when you launch a CloudFormation template (usually named cf-template...).
4. Add code to your Lambda to access S3 and get the file. Now you have the name of the S3 bucket that your CloudFormation stack created and can use it in the Lambda.
Hope this helps.