I've written a serverless.yml to deploy some Lambdas, and one specific API uses a GSI (global secondary index).
If I run it locally using serverless-offline it works, but I'm facing an error when I deploy the Lambda:
AccessDeniedException: User: arn:aws:sts::408462944160:assumed-role/telecom-integration-dev-us-east-1-lambdaRole/integration-dev-dialerStatistics
is not authorized to perform: dynamodb:Query on resource: arn:aws:dynamodb:us-east-1:408462944160:table/integration-dialer-dev/index/other_dial_status-index
Here is how I've set up my serverless.yml:
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Query
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem
      - dynamodb:UpdateItem
      - dynamodb:DeleteItem
    Resource:
      - { "Fn::GetAtt": ["DialerDynamoDbTable", "Arn"] }
dialerStatistics:
  handler: integration/dialer.statistics
  description: Import data on dialer.
  memorySize: 256
  timeout: 30
  events:
    - http:
        path: dialer-statistics
        method: get
        cors: false
        private: false
DialerDynamoDbTable:
  Type: 'AWS::DynamoDB::Table'
  DeletionPolicy: ${self:provider.environment.DELETION_POLICY}
  # DeletionPolicy: Delete # Useful for recreating environment in dev
  Properties:
    AttributeDefinitions:
      - AttributeName: "id"
        AttributeType: "S"
      - AttributeName: "dial_status"
        AttributeType: "S"
    KeySchema:
      - AttributeName: "id"
        KeyType: "HASH"
    ProvisionedThroughput:
      ReadCapacityUnits: 1
      WriteCapacityUnits: 1
    TableName: ${self:provider.environment.DIALER_TABLE}
    GlobalSecondaryIndexes:
      - IndexName: "other_dial_status-index"
        KeySchema:
          - AttributeName: "dial_status"
            KeyType: HASH
        Projection:
          ProjectionType: "ALL"
        ProvisionedThroughput:
          ReadCapacityUnits: 20
          WriteCapacityUnits: 20
I'm probably missing some permission in iamRoleStatements, but I'm not sure what else I should do.
Your IAM role does not cover the indexes. Try adding them to the role's resources:
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Query
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem
      - dynamodb:UpdateItem
      - dynamodb:DeleteItem
    Resource:
      - { "Fn::GetAtt": ["DialerDynamoDbTable", "Arn"] }
      - Fn::Join:
          - "/"
          - - { "Fn::GetAtt": ["DialerDynamoDbTable", "Arn"] }
            - "index/*"
For reference, the Fn::Join will append /index/* to DialerDynamoDbTable's ARN.
It worked locally because serverless-offline runs with the "admin" IAM credentials you configured the Serverless CLI with, rather than with the function's role.
Alternatively, you can spell the ARNs out directly:
Resource:
  - arn:aws:dynamodb:*:*:table/${self:custom.myTable}
  - arn:aws:dynamodb:*:*:table/${self:custom.myTable}/index/*
For those in search of the plain CloudFormation version:
PolicyDocument:
  Version: 2012-10-17
  Statement:
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:DeleteItem
        - dynamodb:UpdateItem
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:BatchGetItem
        - dynamodb:BatchWriteItem
      Resource:
        - !GetAtt DialerDynamoDbTable.Arn
        - !Join ['/', [!GetAtt DialerDynamoDbTable.Arn, 'index/*']]
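The same index ARN can also be written with !Sub, which some find easier to read; a minimal sketch, assuming the same logical resource name as above:
Resource:
  - !GetAtt DialerDynamoDbTable.Arn
  # Fn::Sub resolves ${DialerDynamoDbTable.Arn} and appends /index/*
  - !Sub "${DialerDynamoDbTable.Arn}/index/*"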
Related
I would like to grab the name of the SQS queue my serverless stack defines. Here is my code.
What I am trying to achieve is: instead of the full queue URL https://sqs.us-east-1.amazonaws.com/xxxx/channels.fifo, I want the env var SQS_URL to be set to channels.fifo. I looked at the Fn::Split function of CloudFormation, but was unable to use it properly to set the value on the env var.
functions:
  S3ToSqs:
    handler: lambda_function.lambda_handler
    role: S3ToSqsLambdaRole
    memorySize: 128
    timeout: 5
    events:
      - schedule:
          name: 'S3ToSqsCronEvent'
          rate: rate(1 minute)
          enabled: true
    environment:
      SQS_URL:
        Ref: sqsQueue
      REGION: 'us-east-1'
resources:
  Resources:
    S3ToSqsLambdaRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
                  - events.amazonaws.com
              Action:
                - sts:AssumeRole
        Policies:
          - PolicyName: S3ToSqsRole
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                    - sqs:DeleteMessage
                    - sqs:GetQueueUrl
                    - sqs:ChangeMessageVisibility
                    - sqs:SendMessageBatch
                    - sqs:ReceiveMessage
                    - sqs:SendMessage
                    - sqs:GetQueueAttributes
                    - sqs:ListQueueTags
                    - sqs:ListDeadLetterSourceQueues
                    - sqs:DeleteMessageBatch
                    - sqs:PurgeQueue
                    - sqs:DeleteQueue
                    - sqs:CreateQueue
                    - sqs:ChangeMessageVisibilityBatch
                    - sqs:SetQueueAttributes
                    - s3:GetObjectVersion
                    - s3:GetObject
                    - s3:ListBucket
                  Resource: "arn:aws:logs:*:*:*"
    sqsQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        ContentBasedDeduplication: true
        FifoQueue: true
        QueueName: "channels.fifo"
# Package used from https://github.com/arabold/serverless-export-env
plugins:
  - serverless-export-env
If you take a look at the SQS::Queue CloudFormation resource, you can see that the queue name is exposed as an attribute.
As a result, you can use !GetAtt sqsQueue.QueueName or Fn::GetAtt: [sqsQueue, QueueName], both of which may be a little easier to read than the solution you came up with (which still works).
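For example, using the question's own resource and variable names, the environment section could then read:
environment:
  SQS_URL:
    Fn::GetAtt: [sqsQueue, QueueName]
  REGION: 'us-east-1'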
I was able to achieve it by replacing the
environment:
  SQS_URL:
    Ref: sqsQueue
  REGION: 'us-east-1'
section with
environment:
  SQS_URL: !Select [4, !Split ["/", !Ref sqsQueue]]
which now gives me the output channels.fifo instead of https://sqs.us-east-1.amazonaws.com/xxxx/channels.fifo.
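For reference, Ref on an AWS::SQS::Queue returns the full queue URL, so the pair of functions works like this (illustrative values):
# !Ref sqsQueue     -> "https://sqs.us-east-1.amazonaws.com/xxxx/channels.fifo"
# !Split ["/", ...] -> ["https:", "", "sqs.us-east-1.amazonaws.com", "xxxx", "channels.fifo"]
# !Select [4, ...]  -> "channels.fifo"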
I'm trying to give my Aurora PostgreSQL cluster permission to access an S3 bucket. I'm using the Serverless Framework and have the following code.
RDSCluster:
  Type: 'AWS::RDS::DBCluster'
  Properties:
    MasterUsername: AuserName
    MasterUserPassword: Apassword
    DBSubnetGroupName:
      Ref: RDSClusterGroup
    AvailabilityZones:
      - eu-central-1a
      - eu-central-1b
    Engine: aurora-postgresql
    EngineVersion: 11.9
    EngineMode: provisioned
    EnableHttpEndpoint: true
    DatabaseName: initialbase
    DBClusterParameterGroupName:
      Ref: RDSDBParameterGroup
    AssociatedRoles:
      - RoleArn:
          { Fn::GetAtt: [ AuroraPolicy, Arn ] }
    VpcSecurityGroupIds:
      - Ref: RdsSecurityGroup
AuroraPolicy:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - rds.amazonaws.com
          Action:
            - sts:AssumeRole
    Path: "/"
    Policies:
      - PolicyName: AuroraRolePolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - s3:AbortMultipartUpload
                - s3:GetBucketLocation
                - s3:GetObject
                - s3:ListBucket
                - s3:ListBucketMultipartUploads
                - s3:PutObject
              Resource:
                - { Fn::GetAtt: [ S3BucketEgresbucket, Arn ] }
                - Fn::Join:
                    - ""
                    - - { Fn::GetAtt: [ S3BucketEgresbucket, Arn ] }
                      - "/*"
This should grant the DB permission to execute queries using SELECT aws_commons.create_s3_uri.
However, when I try to deploy I get the error message:
The feature-name parameter must be provided with the current operation for the Aurora (PostgreSQL) engine.
The issue comes from the AssociatedRoles object. The CloudFormation docs state that the FeatureName field is not required; however, if you want your cluster to access other AWS services, it is. In this case, since I wanted my cluster to access an S3 bucket, I had to change my AssociatedRoles object so it looked like this:
AssociatedRoles:
  - RoleArn: { Fn::GetAtt: [ roleServiceIntegration, Arn ] }
    FeatureName: s3Import
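Applied to the question's own template, which defines the role as AuroraPolicy, the cluster property becomes something like the sketch below; s3Export is the analogous feature name if the cluster should also export query results to S3:
AssociatedRoles:
  - RoleArn:
      Fn::GetAtt: [AuroraPolicy, Arn]
    FeatureName: s3Import  # required when the role is used for S3 import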
My requirement is to trigger Lambda_Function_1 if an input.txt file is created in an S3 bucket, and to trigger Lambda_Function_2 if an output.txt file is created in the same bucket.
The CloudFormation below is not working, but it works fine if I put only one event instead of two in the same LambdaConfigurations.
Can someone please help me here?
Parameters:
  S3BucketBaseName:
    Type: String
    Description: The base name of the Amazon S3 bucket.
    Default: dw-trip
Resources:
  LambdaStart:
    DependsOn:
      - LambdaStartStopEC2
    Type: "AWS::Lambda::Function"
    Properties:
      FunctionName: "dw-trip-start-ec2"
      Handler: "index.lambda_handler"  # must match the function defined in the inline code
      Role: !GetAtt LambdaStartStopEC2.Arn
      Runtime: python3.7
      MemorySize: 3008
      Timeout: 900
      Code:
        ZipFile: |
          import boto3
          region = 'us-east-1'
          instances = ['i-05d5fbec4c82956b6']
          ec2 = boto3.client('ec2', region_name=region)
          def lambda_handler(event, context):
              ec2.start_instances(InstanceIds=instances)
              print('started your instances: ' + str(instances))
  ProcessingLambdaPermissionStart:
    Type: AWS::Lambda::Permission
    DependsOn:
      - LambdaStart
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref LambdaStart
      Principal: s3.amazonaws.com
      SourceArn:
        Fn::Join:
          - ''
          - - 'arn:aws:s3:::'
            - !Join ["-", [!Ref "S3BucketBaseName", !Ref "AWS::AccountId"]]
      SourceAccount: !Ref AWS::AccountId
  LambdaStop:
    DependsOn:
      - ProcessingLambdaPermissionStart
    Type: "AWS::Lambda::Function"
    Properties:
      FunctionName: "dw-trip-stop-ec2"
      Handler: "index.lambda_handler"  # must match the function defined in the inline code
      Role: !GetAtt LambdaStartStopEC2.Arn
      Runtime: python3.7
      MemorySize: 3008
      Timeout: 900
      Code:
        ZipFile: |
          import boto3
          region = 'us-east-1'
          instances = ['i-05d5fbec4c82956b6']
          ec2 = boto3.client('ec2', region_name=region)
          def lambda_handler(event, context):
              ec2.stop_instances(InstanceIds=instances)
              print('stopping your instances: ' + str(instances))
  ProcessingLambdaPermissionStop:
    Type: AWS::Lambda::Permission
    DependsOn:
      - LambdaStop
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref LambdaStop
      Principal: s3.amazonaws.com
      SourceArn:
        Fn::Join:
          - ''
          - - 'arn:aws:s3:::'
            - !Join ["-", [!Ref "S3BucketBaseName", !Ref "AWS::AccountId"]]
      SourceAccount: !Ref AWS::AccountId
  S3KmsKey:
    Type: AWS::KMS::Key
    DependsOn:
      - ProcessingLambdaPermissionStop
    Properties:
      Description: KMS key for trip S3 bucket.
      Enabled: true
      EnableKeyRotation: true
      KeyPolicy:
        Statement:
          - Sid: Administration
            Effect: Allow
            Principal:
              AWS:
                - Fn::Join:
                    - ''
                    - - 'arn:aws:iam::'
                      - Ref: AWS::AccountId
                      - ':role/DW01-codepipeline-action-us-east-1'
                - Fn::Join:
                    - ''
                    - - 'arn:aws:iam::'
                      - Ref: AWS::AccountId
                      - ':root'
            Action: 'kms:*'
            Resource: '*'
  S3bucketCreate:
    DependsOn:
      - S3KmsKey
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Join ["-", [!Ref "S3BucketBaseName", !Ref "AWS::AccountId"]]
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              KMSMasterKeyID: !Ref S3KmsKey
              SSEAlgorithm: "aws:kms"
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: s3:ObjectCreated:*
            Function: !GetAtt LambdaStart.Arn
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: input.txt
          - Event: s3:ObjectCreated:*
            Function: !GetAtt LambdaStop.Arn
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: output.txt
  S3bucketPolicy:
    DependsOn:
      - S3bucketCreate
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket:
        Ref: 'S3bucketCreate'
      PolicyDocument:
        Statement:
          - Sid: AllowEc2AccesstoBucket
            Action:
              - 's3:GetObject'
              - 's3:PutObject'
            Effect: Allow
            Principal:
              AWS:
                - Fn::Join:
                    - ''
                    - - 'arn:aws:iam::'
                      - Ref: AWS::AccountId
                      - ':role/DevDW01-EC2-us-east-1'
            Resource:
              Fn::Join:
                - ''
                - - 'arn:aws:s3:::'
                  - Ref: 'S3bucketCreate'
                  - '/*'
  LambdaStartStopEC2:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action: sts:AssumeRole
      RoleName: Lambda-StartStop-EC2
      MaxSessionDuration: 43200
      Policies:
        - PolicyName: StartStop-EC2
          PolicyDocument:
            Statement:
              - Action:
                  - s3:*
                Effect: Allow
                Resource: '*'
              - Action:
                  - ec2:*
                Effect: Allow
                Resource: '*'
        - PolicyName: logs
          PolicyDocument:
            Statement:
              - Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:DescribeLogGroups
                  - logs:DescribeLogStreams
                  - logs:PutLogEvents
                  - logs:GetLogEvents
                  - logs:FilterLogEvents
                Effect: Allow
                Resource: '*'
Outputs:
  S3bucketCreateName:
    Value: !Ref S3bucketCreate
    Export:
      Name: S3bucketCreateName
  S3bucketCreateArn:
    Value: !GetAtt S3bucketCreate.Arn
    Export:
      Name: S3bucketCreateArn
  S3KmsKeyArn:
    Value: !GetAtt S3KmsKey.Arn
    Export:
      Name: S3KmsKeyArn
Multiple filter rules with prefix and suffix as the name are allowed as long as they do not overlap; note that each configuration may contain at most one prefix rule and one suffix rule. Refer here for various examples explaining how overlapping may occur and how to avoid it.
In this case, the error Template format error: YAML not well-formed is most likely due to improper YAML formatting. Use cfn-lint to validate the template.
Here is a snippet that explicitly specifies the expected prefix and suffix of the S3 object key:
NotificationConfiguration:
  LambdaConfigurations:
    - Event: s3:ObjectCreated:*
      Function: !GetAtt LambdaStart.Arn
      Filter:
        S3Key:
          Rules:
            - Name: prefix
              Value: input
            - Name: suffix
              Value: txt
    - Event: s3:ObjectCreated:*
      Function: !GetAtt LambdaStop.Arn
      Filter:
        S3Key:
          Rules:
            - Name: prefix
              Value: output
            - Name: suffix
              Value: txt
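As a quick sanity check of what these rules match (illustrative keys; a prefix applies to the full object key, not just the file name):
# matches:        input.txt, input-2.txt, input/batch-01.txt
# does not match: data/input.txt (key does not start with "input")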
I actually had to do it like this, creating multiple LambdaConfigurations:
"NotificationConfiguration": {
"LambdaConfigurations": [{
"Event": "s3:ObjectCreated:*",
"Function": {
"Fn::GetAtt": ["lambdaVodFunction", "Arn"]
},
"Filter": {
"S3Key": {
"Rules": [{
"Name": "suffix",
"Value": ".mp4"
}]
}
}
},
{
"Event": "s3:ObjectCreated:*",
"Function": {
"Fn::GetAtt": ["lambdaVodFunction", "Arn"]
},
"Filter": {
"S3Key": {
"Rules": [{
"Name": "suffix",
"Value": ".mov"
}]
}
}
}
]
}
I simply tried to add a new S3 bucket to the Resources section, and the stack no longer builds:
resources:
  Resources:
    myBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: prefix-${self:custom.env.myvar}-myBucket
and the error I'm getting is not helping much:
Template format error: Unresolved resource dependencies [] in the Resources block of the template
(there is nothing between the [] that could indicate what to look for)
Any idea what's going on?
I'm running Serverless v1.5.0.
serverless.yml
service: myService
frameworkVersion: "=1.5.0"
custom:
  env: ${file(./.variables.yml)}
provider:
  name: aws
  runtime: nodejs4.3
  stage: ${opt:stage, self:custom.env.stage}
  region: ${self:custom.env.region}
  profile: myProfile-${opt:stage, self:custom.env.stage}
  memorySize: 128
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "lambda:InvokeFunction"
      Resource: "*"
    - Effect: "Allow"
      Action:
        - "s3:ListBucket"
      Resource: { "Fn::Join": ["", ["arn:aws:s3:::", { "Ref": "ServerlessDeploymentBucket" }]] }
    - Effect: "Allow"
      Action:
        - "s3:PutObject"
      Resource:
        Fn::Join:
          - ""
          - - "arn:aws:s3:::"
            - "Ref": "ServerlessDeploymentBucket"
            - "Ref": ""
functions:
  myFunction:
    handler: functions/myFunction.handler
    name: ${opt:stage, self:custom.env.stage}-myFunction
resources:
  Resources:
    myBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: myService-${opt:stage, self:custom.env.myVar}-myBucket
The reference to an empty string in your iamRoleStatements section, - "Ref" : "", is likely what's causing the Unresolved resource dependencies [] error. Remove that line from your template, since it seems to be unnecessary.
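After removing it, the statement would look like the sketch below; the trailing "/*" is an assumption, added because s3:PutObject applies to object ARNs rather than to the bucket ARN itself:
- Effect: "Allow"
  Action:
    - "s3:PutObject"
  Resource:
    Fn::Join:
      - ""
      - - "arn:aws:s3:::"
        - "Ref": "ServerlessDeploymentBucket"
        - "/*"  # object key path; s3:PutObject targets objects, not the bucket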
I'm trying to make a reusable CloudFormation template and would like some kind of conditional: if the Environment parameter is "test" (or any other environment besides "prod"), send SES emails only to gmail accounts (i.e., corporate accounts), but for "prod", send SES emails anywhere. Would I have to create two different roles and put conditions on each one, or is there a way to do this inside just the one role below? Thanks for any help!
Parameters:
  Environment:
    Description: Environment, which can be "test", "stage", "prod", etc.
    Type: String
Resources:
  Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: myRole
      Path: /
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service:
                - "ecs.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Policies:
        - PolicyName: "ses-policy"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - "ses:SendEmail"
                  - "ses:SendRawEmail"
                Resource: "*"
                Condition:
                  "ForAllValues:StringLike":
                    "ses:Recipients":
                      - "*@gmail.com"
CloudFormation Conditions are perfectly suited for adding this sort of conditional logic to resource properties. In your example, you can use the Fn::If intrinsic function to include the existing policy Condition (not to be confused with a CloudFormation Condition!) when the environment is not prod, and AWS::NoValue otherwise (removing the policy Condition entirely when the environment is prod):
Parameters:
  Environment:
    Description: Environment, which can be "test", "stage", "prod", etc.
    Type: String
    AllowedValues: [test, stage, prod]
Conditions:
  IsProdEnvironment: !Equals [!Ref Environment, prod]
Resources:
  Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: myRole
      Path: /
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service:
                - "ecs.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Policies:
        - PolicyName: "ses-policy"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - "ses:SendEmail"
                  - "ses:SendRawEmail"
                Resource: "*"
                Condition: !If
                  - IsProdEnvironment
                  - !Ref AWS::NoValue
                  - "ForAllValues:StringLike":
                      "ses:Recipients":
                        - "*@gmail.com"