How to get SNS topic ARN inside lambda handler and set permissions to write to it? - serverless-framework

I have two lambda functions defined in serverless.yml: graphql and convertTextToSpeach. The former (in one of the GraphQL endpoints) should write to an SNS topic to trigger the latter. Here is my serverless.yml file:
service: hello-world
provider:
  name: aws
  runtime: nodejs6.10
plugins:
  - serverless-offline
functions:
  graphql:
    handler: dist/app.handler
    events:
      - http:
          path: graphql
          method: post
          cors: true
  convertTextToSpeach:
    handler: dist/tasks/convertTextToSpeach.handler
    events:
      - sns:
          topicName: convertTextToSpeach
          displayName: Convert text to speach
And the GraphQL endpoint writing to SNS:
// ...
const sns = new AWS.SNS()
const params = {
  Message: 'Test',
  Subject: 'Test SNS from lambda',
  TopicArn: 'arn:aws:sns:us-east-1:101972216059:convertTextToSpeach'
}
await sns.publish(params).promise()
// ...
There are two issues here:
1) The topic ARN (which is required to publish to a topic) is hardcoded. How can I get it "dynamically" in my code? Is it provided somehow by the Serverless Framework?
2) Even when the topic ARN is hardcoded, the lambda function does not have permission to write to that topic. How can I define such permissions in the serverless.yml file?

1) You can resolve the topic dynamically.
This can be done through CloudFormation Intrinsic Functions, which are available within the serverless template (see the added environment section).
functions:
  graphql:
    handler: handler.hello
    environment:
      topicARN:
        Ref: SNSTopicConvertTextToSpeach
    events:
      - http:
          path: graphql
          method: post
          cors: true
  convertTextToSpeach:
    handler: handler.hello
    events:
      - sns:
          topicName: convertTextToSpeach
          displayName: Convert text to speach
In this case, the actual topic reference name (generated by the Serverless Framework) is SNSTopicConvertTextToSpeach; a Ref to an SNS topic resolves to the topic's ARN. The generation of those names is explained in the serverless docs.
2) Now that the ARN of the topic is mapped into an environment variable, you can access it within the GraphQL lambda through the process variable (process.env.topicARN).
// ...
const sns = new AWS.SNS()
const params = {
  Message: 'Test',
  Subject: 'Test SNS from lambda',
  TopicArn: process.env.topicARN
}
await sns.publish(params).promise()
// ...
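3) The second question (write permissions) is typically handled by granting the function's role sns:Publish through iamRoleStatements in the provider section. A minimal sketch, reusing the same generated topic reference (Ref on an SNS topic resolves to the topic ARN):
provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - sns:Publish
      Resource:
        Ref: SNSTopicConvertTextToSpeach # resolves to the topic ARN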

Related

Cognito Custom Auth trigger not getting Session from Cognito

I tried to call the InitiateAuth API from the AWS CLI. I have set up the Define Auth, Create Auth and Verify Auth lambda triggers correctly. The problem is that when I run the command below, it shows an error:
aws cognito-idp initiate-auth --client-id <my_client_id> --auth-flow CUSTOM_AUTH --auth-parameters USERNAME=uname,ChallengeName="SRP_A",SRP_A="<srp_value>"
Error: An error occurred (UserLambdaValidationException) when calling the InitiateAuth operation: DefineAuthChallenge failed with error Cannot read property 'challengeName' of undefined.
I checked the Define Auth lambda code and the CloudWatch logs of the Lambda execution. The error occurred because the input from Cognito contains an empty session key in the event JSON (which is sent from Cognito to Lambda), and the property challengeName resides inside the session entries (as shown in the official documentation).
Here is the JSON event sent to Lambda from Cognito when I ran that command (I got this JSON from the CloudWatch Lambda logs, where I printed the event sent by Cognito):
{
  version: '1',
  region: 'us-east-1',
  userPoolId: 'us-east-1_******',
  userName: 'uname',
  callerContext: {
    awsSdkVersion: 'aws-sdk-unknown-unknown',
    clientId: '<my_client_id>'
  },
  triggerSource: 'DefineAuthChallenge_Authentication',
  request: {
    userAttributes: {
      sub: 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx',
      'cognito:email_alias': '<email>',
      'cognito:user_status': 'CONFIRMED',
      email_verified: 'true',
      name: 'Custom Test',
      email: '<email>'
    },
    session: [], // -----> !! Empty
    userNotFound: false
  },
  response: { challengeName: null, issueTokens: null, failAuthentication: null }
}
What is the reason? Is it because I am sending the request from the CLI, so Cognito is not able to create a session or something? I'm not sure. Any help will be appreciated.
Session holds the previous auth challenge results (either from built-in challenges or your custom challenges). It will be empty for the first invocation of the Define Auth Challenge lambda. As the name suggests, you have to define the auth challenge in the handler response, as in the sketch below.
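For illustration, here is a minimal sketch of a Define Auth Challenge handler (Node.js) that tolerates the empty session on the first invocation; the single CUSTOM_CHALLENGE flow is an assumption, not necessarily your actual challenge logic:
exports.handler = async (event) => {
  const session = event.request.session;
  if (session.length === 0) {
    // First invocation: no previous challenge results yet, so start a custom challenge
    event.response.challengeName = 'CUSTOM_CHALLENGE';
    event.response.issueTokens = false;
    event.response.failAuthentication = false;
  } else if (session[session.length - 1].challengeResult === true) {
    // Last challenge was answered correctly: issue tokens
    event.response.issueTokens = true;
    event.response.failAuthentication = false;
  } else {
    // A challenge was answered incorrectly: fail authentication
    event.response.issueTokens = false;
    event.response.failAuthentication = true;
  }
  return event;
};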

serverless-s3-local writing to real S3 bucket

I am using the Serverless framework with the serverless-s3-local plugin to test my code during development. However, despite being in offline mode, the real S3 bucket is being written to. How can I alter my configuration to use a local fake S3 bucket when in offline mode?
Relevant serverless.yml sections:
plugins:
  - serverless-stack-output
  - serverless-plugin-include-dependencies
  - serverless-layers
  - serverless-deployment-bucket
  - serverless-s3-local
  - serverless-offline
custom:
  # ...
  s3:
    bucketName: test-s3-buck
    host: localhost
  serverless-offline:
    ignoreJWTSignature: true
    httpPort: 4000
    noAuth: true
    directory: /tmp
resources:
  Resources:
    # ...
    Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.s3.bucketName}
Endpoint Calling S3:
import boto3

def post(event, context):
    s3_path = "/test.txt"
    body = "test"
    encoded_string = body.encode("utf-8")
    s3 = boto3.resource("s3")
    bucket_name = "test-s3-buck"
    s3.Bucket(bucket_name).put_object(Key=s3_path, Body=encoded_string)
    response = {
        "statusCode": 200,
        "body": "Created."
    }
    return response
Launching Serverless Offline:
serverless offline start
In the README file of serverless-s3-local we have:
const S3 = new AWS.S3({
  s3ForcePathStyle: true,
  accessKeyId: 'S3RVER', // This specific key is required when working offline
  secretAccessKey: 'S3RVER',
  endpoint: new AWS.Endpoint('http://localhost:4569'),
});
You can achieve the same with boto3:
import boto3

client = boto3.client(
    's3',
    aws_access_key_id='S3RVER',
    aws_secret_access_key='S3RVER',
    # as in the JS snippet above, point the client at the local S3 server
    endpoint_url='http://localhost:4569'
)
This means that when you run serverless offline start you need to set the AWS access key id to S3RVER and the AWS secret access key to S3RVER; otherwise, the real bucket will be used.
The README also has instructions for setting up an s3local AWS profile: https://github.com/ar90n/serverless-s3-local#triggering-aws-events-offline
Another way to achieve this is to run your command with the credentials as environment variables:
AWS_ACCESS_KEY_ID=S3RVER AWS_SECRET_ACCESS_KEY=S3RVER serverless offline start
That way, the aws-sdk inside your code will read the correct values for offline mode.
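To keep a single code path for both modes, you can switch on the IS_OFFLINE environment variable that serverless-offline sets. A sketch for the Node.js SDK, mirroring the README snippet above (the local endpoint and credentials are the serverless-s3-local defaults):
const AWS = require('aws-sdk');

// Use the local S3 server when running under serverless-offline,
// and the real S3 otherwise
const s3 = process.env.IS_OFFLINE
  ? new AWS.S3({
      s3ForcePathStyle: true,
      accessKeyId: 'S3RVER',
      secretAccessKey: 'S3RVER',
      endpoint: new AWS.Endpoint('http://localhost:4569'),
    })
  : new AWS.S3();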

"Execution failed" when setting up API Gateway and Fargate with AWS CDK

I am trying to set up AWS API Gateway to access a Fargate container in a private VPC, as described here. For this I am using AWS CDK, as described below. But when I curl the endpoint after a successful cdk deploy, I get "Internal Server Error" as a response. I can't find any additional information. For some reason API GW can't reach the container.
So when I curl the endpoint like this:
curl -i https://xxx.execute-api.eu-central-1.amazonaws.com/prod/MyResource
... I get the following log output in CloudWatch:
Extended Request Id: NpuEPFWHliAFm_w=
Verifying Usage Plan for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21. API Key: API Stage: ...
API Key authorized because method 'ANY /MyResource/{proxy+}' does not require API Key. Request will not contribute to throttle or quota limits
Usage Plan check succeeded for API Key and API Stage ...
Starting execution for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21
HTTP Method: GET, Resource Path: /MyResource/test
Execution failed due to configuration error: There was an internal error while executing your request
CDK Code
First I create a network load balanced fargate service:
private setupService(): NetworkLoadBalancedFargateService {
  const vpc = new Vpc(this, 'MyVpc');
  const cluster = new Cluster(this, 'MyCluster', {
    vpc: vpc,
  });
  cluster.connections.allowFromAnyIpv4(Port.tcp(5050));

  const taskDefinition = new FargateTaskDefinition(this, 'MyTaskDefinition');
  const container = taskDefinition.addContainer('MyContainer', {
    image: ContainerImage.fromRegistry('vad1mo/hello-world-rest'),
  });
  container.addPortMappings({
    containerPort: 5050,
    hostPort: 5050,
  });

  const service = new NetworkLoadBalancedFargateService(this, 'MyFargateServie', {
    cluster,
    taskDefinition,
    assignPublicIp: true,
  });
  service.service.connections.allowFromAnyIpv4(Port.tcp(5050));
  return service;
}
Next I create the VpcLink and the API Gateway:
private setupApiGw(service: NetworkLoadBalancedFargateService) {
  const api = new RestApi(this, `MyApi`, {
    restApiName: `MyApi`,
    deployOptions: {
      loggingLevel: MethodLoggingLevel.INFO,
    },
  });

  // setup api resource which forwards to container
  const resource = api.root.addResource('MyResource');
  resource.addProxy({
    anyMethod: true,
    defaultIntegration: new HttpIntegration('http://localhost.com:5050', {
      httpMethod: 'ANY',
      options: {
        connectionType: ConnectionType.VPC_LINK,
        vpcLink: new VpcLink(this, 'MyVpcLink', {
          targets: [service.loadBalancer],
          vpcLinkName: 'MyVpcLink',
        }),
      },
      proxy: true,
    }),
    defaultMethodOptions: {
      authorizationType: AuthorizationType.NONE,
    },
  });
  resource.addMethod('ANY');
  this.addCorsOptions(resource);
}
Does anyone have a clue what is wrong with this config?
After hours of trying, I finally figured out that the security groups do not seem to be updated correctly when setting up the VpcLink with CDK. Broadening the allowed connections with
service.service.connections.allowFromAnyIpv4(Port.allTraffic())
solved it. I still need to figure out which minimal rule set should be used instead of allTraffic().
Additionally, I replaced localhost in the HttpIntegration with the DNS name of the load balancer, like this:
resource.addMethod("ANY", new HttpIntegration(
  'http://' + service.loadBalancer.loadBalancerDnsName,
  {
    httpMethod: 'ANY',
    options: {
      connectionType: ConnectionType.VPC_LINK,
      vpcLink: new VpcLink(this, 'MyVpcLink', {
        targets: [service.loadBalancer],
        vpcLinkName: 'MyVpcLink',
      })
    },
  }
))
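As for the minimal rule set: one direction to explore (a sketch, not verified) is to keep all traffic types but narrow the source to addresses inside the VPC, since the NLB nodes and their health checks originate there. Peer comes from the aws-ec2 module, and vpc is the object created in setupService:
// Hypothetical narrower rule instead of allowFromAnyIpv4(Port.allTraffic()):
// allow all traffic, but only from sources within the VPC CIDR
service.service.connections.allowFrom(
  Peer.ipv4(vpc.vpcCidrBlock),
  Port.allTraffic(),
);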

Serverless function with authorizer arn provided returns 401

I am using Serverless.
When I set up one of my functions as follows, with an authorizer included, the client receives a 401.
However, when I remove the authorizer, there are no problems.
provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1
  environment:
    USER_POOL_ARN: "arn:aws:cognito-idp:eu-west-1:974280.....:userpool/eu-west-1........"
functions:
  create:
    handler: handlers/create.main
    events:
      - http:
          path: create
          method: post
          cors: true
          authorizer:
            type: COGNITO_USER_POOLS
            name: serviceBAuthFunc
            arn: ${self:provider.environment.USER_POOL_ARN}
On the client, I expect that a logged-in user of the same user pool gets the expected response. However, it returns 401.
Any help is appreciated. Thanks.
After desperate hours spent, I have come up with the solution.
For anyone who comes across the same issue, here is the solution that worked for me.
Add integration: lambda after cors: true (though the order doesn't matter). The snippet below demonstrates that.
functions:
  create:
    handler: handlers/create.main
    events:
      - http:
          path: create
          method: post
          cors: true
          integration: lambda # this solves the problem
          authorizer:
            type: COGNITO_USER_POOLS
            name: serviceBAuthFunc
            arn: ${self:provider.environment.USER_POOL_ARN}
Send the Authorization header with the value of Auth.currentSession().idToken.jwtToken when making the request.
Below is an example of sending the headers using API from @aws-amplify/api and Auth from @aws-amplify/auth.
const currentSession = await Auth.currentSession()
await API.post(
  'your-endpoint-name',
  "/your-endpoint-path/..",
  {
    headers: {
      'Authorization': currentSession.idToken.jwtToken
    }
  }
)

How to add a Kinesis stream to trigger a Lambda function

When configuring a lambda function in the Serverless framework, I am trying to add a Kinesis stream as the event source.
Here is the snippet from serverless.yml:
functions:
  Foo:
    handler: handler.foo
    events:
      - stream:
        arn: arn:aws:kinesis:us-east-1:783995676505:stream/search-helper
        batchSize: 100
        startingPosition: LATEST
        enabled: false
The deployment via "serverless deploy" is successful, however the trigger does not get added to the function configuration.
I checked the yml file using a YAML validator and there are no errors. What am I missing here?
The stream properties in the yml file need an extra level of indentation just after stream:, so that they are nested under the stream key:
functions:
  Foo:
    handler: handler.foo
    events:
      - stream:
          arn: arn:aws:kinesis:us-east-1:783995676505:stream/search-helper
          batchSize: 100
          startingPosition: LATEST
          enabled: false
See the Serverless Framework examples at https://serverless.com/framework/docs/providers/aws/events/streams/
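Once the trigger is in place, the function receives batches of records whose data payloads are base64-encoded. A minimal sketch of what handler.foo could look like (the payload handling is illustrative only):
// handler.js
module.exports.foo = async (event) => {
  for (const record of event.Records) {
    // Kinesis record data arrives base64-encoded
    const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf-8');
    console.log('Decoded payload:', payload);
  }
};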