HostedRotationLambda & Transform - serverless-framework

I'm trying to implement AWS Secrets Manager rotation with the Serverless Framework (https://alexharv074.github.io/2020/11/23/adding-hosted-rotation-lambda-to-a-database-stack.html):
AppGraphqDbAppsSecretRotationSchedule:
  Type: AWS::SecretsManager::RotationSchedule
  Properties:
    SecretId: !Ref AppGraphqlDbAppsSecretsManagerSecret
    HostedRotationLambda:
      RotationType: PostgreSQLSingleUser
    RotationRules:
      AutomaticallyAfterDays: 30
I get the error:
To use the HostedRotationLambda property, you must use the AWS::SecretsManager transform.
How can I add the "transform"? Below is a working AWS CloudFormation template:
AWSTemplateFormatVersion: 2010-09-09
Description: Rotation Lambda example stack
Transform: AWS::SecretsManager-2020-07-23
Parameters: {}
Resources:
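The Serverless Framework merges the resources block into the compiled CloudFormation template, and recent framework versions pass a top-level Transform key through as well. A minimal sketch of how the transform could be added in serverless.yml, assuming a framework version with that behaviour:

resources:
  Transform: AWS::SecretsManager-2020-07-23
  Resources:
    AppGraphqDbAppsSecretRotationSchedule:
      Type: AWS::SecretsManager::RotationSchedule
      Properties:
        SecretId: !Ref AppGraphqlDbAppsSecretsManagerSecret
        HostedRotationLambda:
          RotationType: PostgreSQLSingleUser
        RotationRules:
          AutomaticallyAfterDays: 30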


serverless-s3-local writing to real S3 bucket

I am using the Serverless Framework with the serverless-s3-local plugin to test my code during development. However, despite being in offline mode, the real S3 bucket is being written to. How can I alter my configuration to use a local fake S3 bucket when in offline mode?
Relevant serverless.yml sections:
plugins:
  - serverless-stack-output
  - serverless-plugin-include-dependencies
  - serverless-layers
  - serverless-deployment-bucket
  - serverless-s3-local
  - serverless-offline
custom:
  # ...
  s3:
    bucketName: test-s3-buck
    host: localhost
  serverless-offline:
    ignoreJWTSignature: true
    httpPort: 4000
    noAuth: true
  directory: /tmp
resources:
  Resources:
    # ...
    Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.s3.bucketName}
Endpoint Calling S3:
import boto3

def post(event, context):
    s3_path = "/test.txt"
    body = "test"
    encoded_string = body.encode("utf-8")
    s3 = boto3.resource("s3")
    bucket_name = "test-s3-buck"
    s3.Bucket(bucket_name).put_object(Key=s3_path, Body=encoded_string)
    response = {
        "statusCode": 200,
        "body": "Created."
    }
    return response
Launching Serverless Offline:
serverless offline start
In the serverless-s3-local README we have:
const S3 = new AWS.S3({
  s3ForcePathStyle: true,
  accessKeyId: 'S3RVER', // This specific key is required when working offline
  secretAccessKey: 'S3RVER',
  endpoint: new AWS.Endpoint('http://localhost:4569'),
});
You can achieve the same with boto3:
import boto3

client = boto3.client(
    's3',
    aws_access_key_id='S3RVER',
    aws_secret_access_key='S3RVER',
    endpoint_url='http://localhost:4569'  # point at the local S3 server, as in the JS example
)
This means that when you run serverless offline start, you need to set the AWS access key ID to S3RVER and the AWS secret access key to S3RVER; otherwise, the real bucket will be used.
The README also has instructions for setting up an s3local AWS profile: https://github.com/ar90n/serverless-s3-local#triggering-aws-events-offline
Another way to achieve this is to run your command with environment variables:
AWS_ACCESS_KEY_ID=S3RVER AWS_SECRET_ACCESS_KEY=S3RVER serverless offline start
That way, the AWS SDK inside your code will read the correct values for offline mode.
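Putting it together, a minimal sketch of the handler choosing the local endpoint when offline; it assumes serverless-offline exposes the IS_OFFLINE environment variable and that serverless-s3-local listens on its default port 4569:

import os
import boto3

def post(event, context):
    if os.environ.get("IS_OFFLINE"):
        # Local fake S3 served by serverless-s3-local (default port assumed)
        s3 = boto3.resource(
            "s3",
            endpoint_url="http://localhost:4569",
            aws_access_key_id="S3RVER",
            aws_secret_access_key="S3RVER",
        )
    else:
        # Real S3, using the deployed function's credentials
        s3 = boto3.resource("s3")
    s3.Bucket("test-s3-buck").put_object(Key="/test.txt", Body=b"test")
    return {"statusCode": 200, "body": "Created."}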

Authenticate AppSync queries console with Cognito User Pools

I am trying to authenticate the Queries playground in the AWS AppSync console. I have created a User Pool and linked it to the AppSync API, and I have also created an App Client in the Cognito User Pool (deployed using CloudFormation). It appears under Select the authorization provider to use for running queries on this page: in the console.
When I run a test query I get:
{
  "errors": [
    {
      "errorType": "UnauthorizedException",
      "message": "Unable to parse JWT token."
    }
  ]
}
This is what I would expect. There is an option to Login with User Pools. The issue is that I can't select any Client ID, and when I choose to insert the Client ID manually, whatever I enter gives Invalid UserPoolId format. I am trying to copy the Pool ID from the User Pool General settings (format eu-west-2_xxxxxxxxx) but no joy. By the way, I am not using Amplify and I have not configured any Identity Pools.
EDIT:
Here is the CloudFormation GraphQLApi definition:
MyApi:
  Type: AWS::AppSync::GraphQLApi
  Properties:
    Name: !Sub "${AWS::StackName}-api"
    AuthenticationType: AMAZON_COGNITO_USER_POOLS
    UserPoolConfig:
      UserPoolId: !Ref UserPoolClient
      AwsRegion: !Sub ${AWS::Region}
      DefaultAction: ALLOW
To set up the stack using CloudFormation I have followed these two examples:
https://adrianhall.github.io/cloud/2018/04/17/deploy-an-aws-appsync-graphql-api-with-cloudformation/
https://gist.github.com/adrianhall/f330a10451f05a529680f32978dddb64
It turns out they both (same author) have an issue in the section where the GraphQLApi is defined. This:
MyApi:
  Type: AWS::AppSync::GraphQLApi
  Properties:
    Name: !Sub "${AWS::StackName}-api"
    AuthenticationType: AMAZON_COGNITO_USER_POOLS
    UserPoolConfig:
      UserPoolId: !Ref UserPoolClient
      AwsRegion: !Sub ${AWS::Region}
      DefaultAction: ALLOW
Should be:
MyApi:
  Type: AWS::AppSync::GraphQLApi
  Properties:
    Name: !Sub "${AWS::StackName}-api"
    AuthenticationType: AMAZON_COGNITO_USER_POOLS
    UserPoolConfig:
      UserPoolId: !Ref UserPool
      AwsRegion: !Sub ${AWS::Region}
      DefaultAction: ALLOW
Thank you @Myz for pointing me back to review the whole CloudFormation YAML file.
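For context, here is a minimal sketch of the two Cognito resources involved (names assumed to match the linked examples). The root cause is what !Ref returns for each: on an AWS::Cognito::UserPool it returns the pool ID (the eu-west-2_xxxxxxxxx format the console expects), while on an AWS::Cognito::UserPoolClient it returns a client ID, so the API had been configured with a client ID where a pool ID was required:

UserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    UserPoolName: !Sub "${AWS::StackName}-user-pool"
UserPoolClient:
  Type: AWS::Cognito::UserPoolClient
  Properties:
    UserPoolId: !Ref UserPool  # pool ID here; !Ref UserPoolClient would give a client ID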

How to get an SNS topic ARN inside a lambda handler and set permissions to write to it?

I have two lambda functions defined in serverless.yml: graphql and convertTextToSpeach. The former (in one of the GraphQL endpoints) should write to an SNS topic to trigger the latter. Here is my serverless.yml file:
service: hello-world
provider:
  name: aws
  runtime: nodejs6.10
plugins:
  - serverless-offline
functions:
  graphql:
    handler: dist/app.handler
    events:
      - http:
          path: graphql
          method: post
          cors: true
  convertTextToSpeach:
    handler: dist/tasks/convertTextToSpeach.handler
    events:
      - sns:
          topicName: convertTextToSpeach
          displayName: Convert text to speach
And GraphQL endpoint writing to SNS:
// ...
const sns = new AWS.SNS()
const params = {
  Message: 'Test',
  Subject: 'Test SNS from lambda',
  TopicArn: 'arn:aws:sns:us-east-1:101972216059:convertTextToSpeach'
}
await sns.publish(params).promise()
// ...
There are two issues here:
1. The topic ARN (which is required to write to a topic) is hardcoded. How can I get it in my code "dynamically"? Is it provided somehow by the Serverless Framework?
2. Even when the topic ARN is hardcoded, the lambda function does not have permission to write to that topic. How can I define such permissions in the serverless.yml file?
1) You can resolve the topic dynamically.
This can be done through CloudFormation Intrinsic Functions, which are available within the serverless template (see the added environment section).
functions:
  graphql:
    handler: handler.hello
    environment:
      topicARN:
        Ref: SNSTopicConvertTextToSpeach
    events:
      - http:
          path: graphql
          method: post
          cors: true
  convertTextToSpeach:
    handler: handler.hello
    events:
      - sns:
          topicName: convertTextToSpeach
          displayName: Convert text to speach
In this case, the actual topic reference name (generated by the serverless framework) is SNSTopicConvertTextToSpeach. The generation of those names is explained in the serverless docs.
2) Now that the ARN of the topic is mapped into an environment variable, you can access it within the GraphQL lambda through the process variable (process.env.topicARN).
// ...
const sns = new AWS.SNS()
const params = {
  Message: 'Test',
  Subject: 'Test SNS from lambda',
  TopicArn: process.env.topicARN
}
await sns.publish(params).promise()
// ...
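The second issue (publish permissions) is not covered above. A minimal sketch, assuming the default shared provider role and the same generated SNSTopicConvertTextToSpeach logical ID, would grant sns:Publish through iamRoleStatements:

provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - sns:Publish
      Resource:
        Ref: SNSTopicConvertTextToSpeach  # Ref on an SNS topic returns its ARN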

How to add a Kinesis stream to trigger a Lambda function

When configuring a Lambda function in the Serverless Framework, I am trying to add a Kinesis stream as the event source. Here is the snippet from serverless.yml:
functions:
  Foo:
    handler: handler.foo
    events:
      - stream:
        arn: arn:aws:kinesis:us-east-1:783995676505:stream/search-helper
        batchSize: 100
        startingPosition: LATEST
        enabled: false
The deployment via "serverless deploy" is successful; however, the trigger does not get added to the function configuration.
I checked the YAML file using a YAML validator and there are no errors. What am I missing here?
The YAML file needs to be indented one level deeper just after stream:
functions:
  Foo:
    handler: handler.foo
    events:
      - stream:
          arn: arn:aws:kinesis:us-east-1:783995676505:stream/search-helper
          batchSize: 100
          startingPosition: LATEST
          enabled: false
See the Serverless Framework examples at https://serverless.com/framework/docs/providers/aws/events/streams/
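To see why the indentation matters, here is a small illustration (not from the original answer) using PyYAML to show how each variant parses; with the shallow indentation, stream ends up null and its intended properties become siblings:

import yaml

broken = """
events:
  - stream:
    arn: some-arn
"""
fixed = """
events:
  - stream:
      arn: some-arn
"""

print(yaml.safe_load(broken))  # {'events': [{'stream': None, 'arn': 'some-arn'}]}
print(yaml.safe_load(fixed))   # {'events': [{'stream': {'arn': 'some-arn'}}]}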

Error creating BucketPolicy in CloudFormation yaml

I'm trying to use the following YAML to create an S3 Bucket Policy in CloudFormation:
cloudTrailBucketPolicy:
  Type: "AWS::S3::BucketPolicy"
  Properties:
    Bucket:
      Ref: cloudtrailBucket
    PolicyDocument:
      - Action:
          - "s3:GetBucketAcl"
        Effect: Allow
        Resource:
          Fn::Join:
            - ""
            - - "arn:aws:s3:::"
              - Ref: cloudtrailBucket
              - "/*"
        Principal: "*"
      - Action:
          - "s3:PutObject"
        Effect: Allow
        Resource:
          Fn::Join:
            - ""
            - - "arn:aws:s3:::"
              - Ref: cloudtrailBucket
              - "/*"
        Principal:
          Service: cloudtrail.amazonaws.com
When I try to do this, I get a message that "Value of property PolicyDocument must be an object"
Anyone have any ideas?
Looks like you solved the issue, but for readability you can compress your formatting by using !Sub and knowing that Action accepts a single value as well as a list. One of the main reasons I like YAML is that you use less vertical space.
PolicyDocument:
  Statement:  # statements belong under a Statement key so PolicyDocument is an object
    - Action: "s3:GetBucketAcl"
      Effect: Allow
      Resource: !Sub arn:aws:s3:::${cloudtrailBucket}
      Principal: "*"
    - Action: "s3:PutObject"
      Effect: Allow
      Resource: !Sub arn:aws:s3:::${cloudtrailBucket}/*
      Principal:
        Service: cloudtrail.amazonaws.com
The PolicyDocument property of the AWS::S3::BucketPolicy Resource has a required type of JSON Object. The YAML template in your question incorrectly provides a JSON Array containing two JSON Objects as the value of the PolicyDocument property, hence the error message you received.
To fix this error, the objects should be properly nested within a Statement element, which is missing from the current template.
Refer to the IAM Policy Elements Reference for more detail on IAM Policy Document syntax.
Ahhh. s3:GetBucketAcl is an action on a bucket. I removed the /* in the first statement and it worked. Gee. Super helpful error message.