CloudFront with Lambda@Edge not working with new cache behavior - amazon-s3

I had a CloudFront distribution that used the legacy cache behavior and AWS Lambda@Edge to change the origin path, serving multiple websites from the same bucket.
This is the Lambda@Edge function that was working with the legacy cache behavior:
'use strict';
const env = '${Environment}';
const origin_hostname = 'yourwebsite-${Environment}.s3.amazonaws.com';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;
    const host_header = headers.host[0].value;
    var remove_suffix = '.yourwebsite.com';
    if (env == "dev") {
        remove_suffix = '-dev.yourwebsite.com';
    }
    // map the subdomain to a folder, e.g. foo.yourwebsite.com/index.html -> /foo/index.html
    if (host_header.endsWith(remove_suffix)) {
        request.uri = '/' + host_header.substring(0, host_header.length - remove_suffix.length) + request.uri;
    }
    // fix the host header so that S3 understands the request
    headers.host[0].value = origin_hostname;
    // return control to CloudFront with the modified request
    return callback(null, request);
};
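To make the rewrite concrete, here is a minimal local harness (my own sketch, not from the original setup; the event is trimmed to just the fields the handler reads, and the CloudFormation ${Environment} placeholder just needs to resolve to something other than dev):

// Hypothetical local test for the handler above (assumed saved as index.js).
const { handler } = require('./index');

const event = {
    Records: [{
        cf: {
            request: {
                uri: '/index.html',
                headers: { host: [{ key: 'Host', value: 'foo.yourwebsite.com' }] }
            }
        }
    }]
};

handler(event, {}, (err, request) => {
    console.log(request.uri);                   // "/foo/index.html"
    console.log(request.headers.host[0].value); // the S3 origin hostname
});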
This was my CloudFormation Lambda function association and cache policies:
LambdaFunctionAssociations:
  - EventType: origin-request
    LambdaFunctionARN: !Ref HotSitesEdgeFunctionVersion
CachePolicyId: "658327ea-f89d-4fab-a63d-7e88639e58f6"
ResponseHeadersPolicyId: "67f7725c-6f97-4210-82d7-5512b31e9d03"

After some hours of digging, I realized that the host value reaching the function was the S3 origin hostname (..s3.amazonaws.com) and not my subdomain. :(
The solution was to create a new OriginRequestPolicy and attach its ID to OriginRequestPolicyId in the distribution:
HotSiteCustomOriginRequestPolicy:
  Type: AWS::CloudFront::OriginRequestPolicy
  Properties:
    OriginRequestPolicyConfig:
      Comment: Custom policy to redirect Host header
      CookiesConfig:
        CookieBehavior: none
      HeadersConfig:
        HeaderBehavior: whitelist
        Headers:
          - Host
          - Origin
      Name: HotSiteCustomOriginRequestPolicy
      QueryStringsConfig:
        QueryStringBehavior: none
And in your distribution's cache behavior:
OriginRequestPolicyId: !Ref HotSiteCustomOriginRequestPolicy
Documentation for all the managed policies, if you need it:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-origin-request-policies.html
Basically, you have to forward the Host and Origin headers from CloudFront to your Lambda@Edge function.
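If you want to verify the policy took effect, a quick debugging sketch (my addition, not part of the original setup) is to log the forwarded headers at the top of the origin-request handler and check the CloudWatch logs:

// Debugging sketch: log what CloudFront actually forwards at origin-request.
exports.handler = (event, context, callback) => {
    const headers = event.Records[0].cf.request.headers;
    // With the origin request policy attached this prints the viewer's
    // subdomain; without it you'll see the S3 origin hostname instead.
    console.log('Host:', headers.host[0].value);
    console.log('Origin:', headers.origin ? headers.origin[0].value : '(not forwarded)');
    return callback(null, event.Records[0].cf.request);
};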
I hope this can help you guys.

Related

AWS static website - how to connect subdomains with subfolders

I want to set up an S3 static website and connect it with my domain (for example: example.com).
In this S3 bucket I want to create one particular folder (named content) with many different subfolders within it, and then connect these subfolders with appropriate subdomains, so that for example
the folder content/foo is available from the subdomain foo.example.com,
the folder content/bar is available from the subdomain bar.example.com.
Any content subfolder should automatically be available from the subdomain with the same prefix as the folder name.
I will be grateful for any possible solutions to this problem. Should I use the redirection option, or is there a better solution? Thanks in advance for the help.
My solution is based on this video:
https://www.youtube.com/watch?v=mls8tiiI3uc
Because the video above doesn't explain the subdomain problem, here are a few additional things to do:
in the AWS Route 53 hosted zone, add an A record with “*.domainname” as the record name and the CloudFront edge address as the value
to the certificate's domains, also add “*.domainname”, so you have a certificate for the wildcard domain
when setting up the CloudFront distribution, add both “www.domainname” and “*.domainname” to the “Alternate domain name (CNAME)” section
redirection/forwarding from subdomain to subfolder is done via a Lambda@Edge function (the function should still be improved a bit):
'use strict';
const path = require("path");

exports.handler = (event, context, callback) => {
    const remove_suffix = ".domain.com";
    const host_with_www = "www.domain.com";
    const origin_hostname = "www.domain.com.s3-website.eu-west-1.amazonaws.com";
    const request = event.Records[0].cf.request;
    const headers = request.headers;
    const host_header = headers.host[0].value;

    // www.domain.com is served from the bucket root; pass it through untouched
    if (host_header == host_with_www) {
        return callback(null, request);
    }

    // strip a leading "www" so both variants of a subdomain map the same way
    var new_host_header = host_header;
    if (host_header.startsWith('www')) {
        new_host_header = host_header.substring(3, host_header.length);
    }

    if (new_host_header.endsWith(remove_suffix)) {
        // to support SPA | redirect all (non-file) requests to index.html
        const parsedPath = path.parse(request.uri);
        if (parsedPath.ext === "") {
            request.uri = "/index.html";
        }
        // prefix the subdomain folder, e.g. foo.domain.com/about -> /foo/index.html
        request.uri =
            "/" +
            new_host_header.substring(0, new_host_header.length - remove_suffix.length) +
            request.uri;
    }

    headers.host[0].value = origin_hostname;
    return callback(null, request);
};
Lambda@Edge is just a Lambda function connected to a particular CloudFront distribution.
Finally, you need to add one additional setting to the CloudFront distribution for the Lambda execution: forward (whitelist) the Host header to the origin. This setting is needed if we want different redirections for different subdomains; without it, all redirections will point to the main directory, or more likely to the first directory that gets cached by the first request to our CloudFront domain.

Adding HTTP Security Headers Using Lambda@Edge and Amazon CloudFront for all routes (error routes)

I am using this doc: https://aws.amazon.com/blogs/networking-and-content-delivery/adding-http-security-headers-using-lambdaedge-and-amazon-cloudfront/
I am serving a React app from an S3 bucket behind a CloudFront CDN. I have added a Lambda@Edge function to add security headers:
headers['x-frame-options'] = [{key: 'X-Frame-Options', value: 'DENY'}];
headers['x-xss-protection'] = [{key: 'X-XSS-Protection', value: '1; mode=block'}];
It works fine for the homepage (mySite.com), but it doesn't work for a different route, for example mySite.com/login.
When I check the error behavior in CloudFront, there is no option to add a header.
Why does the /login page hit an error? Because React Router routes don't exist as objects in the S3 bucket, so S3 returns an error response for them.
The way I solved this was to implement an origin-request Lambda@Edge function that validates the event.Records[0].cf.request.uri value against a whitelist of known files and paths in the S3 origin. If the path doesn't match the whitelist, the function updates the uri value to /index.html, which lets the React app handle the request.
With this change, there are no longer any 404 responses, which means every response passes through the origin-response Lambda@Edge function that adds the security headers.
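The answer above describes the function without showing it; a minimal sketch of such an origin-request handler might look like this (the whitelist contents and the static-asset prefix are assumptions you would adapt to your own bucket):

'use strict';

// Hypothetical whitelist: real files/paths that exist in the S3 origin.
const KNOWN_PATHS = new Set(['/index.html', '/favicon.ico', '/robots.txt']);
const ASSET_PREFIX = '/static/'; // assumed folder for hashed build assets

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    // Anything not on the whitelist is treated as a client-side route and
    // rewritten to index.html so React Router can handle it; S3 never 404s.
    if (!KNOWN_PATHS.has(request.uri) && !request.uri.startsWith(ASSET_PREFIX)) {
        request.uri = '/index.html';
    }
    return callback(null, request);
};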
We solved this by using the CloudFront feature called response headers policies. It adds the headers in all cases, even on error responses.
Doc: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-response-headers-policies.html
We implemented it with Terraform:
resource "aws_cloudfront_response_headers_policy" "my_name" {
name = "policy-response-headers-my-name"
comment = "Add security headers to responses"
security_headers_config {
xss_protection {
mode_block = true
override = true
protection = true
}
frame_options {
frame_option = "DENY"
override = true
}
content_type_options {
override = true
}
content_security_policy {
content_security_policy = "frame-ancestors 'none'"
override = true
}
referrer_policy {
referrer_policy = "same-origin"
override = true
}
strict_transport_security {
access_control_max_age_sec = 63072000
override = true
}
origin_override = true
}
}

"Access key does not exist" when generating pre-signed S3 URL from Lambda function

I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object.
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
<Code>InvalidAccessKeyId</Code>
<Message>The AWS Access Key Id you provided does not exist in our records.</Message>
<AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
<RequestId>DKQ55DK3XJBYGKQ6</RequestId>
<HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: ExpressJSApp
    Description: Main website request handler
    CodeUri: ../lambda.zip
    Handler: lambda.handler
    [SNIP]
    Policies:
      - S3CrudPolicy:
          BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');

router.get('/', (req, res) => {
    const s3 = new AWS.S3({
        region: 'eu-west-1',
        signatureVersion: 'v4'
    });
    const params = {
        Bucket: 'my-bucket-name',
        Key: 'my-file-name'
    };
    s3.getSignedUrl('getObject', params, (error, url) => {
        res.send(`<p>${url}</p>`);
    });
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execution role supply those? Am I barking up the wrong tree?
tl;dr: Make sure the signature v4 headers/form data in your request are in the correct order.
I had the exact same issue.
I am not sure if this is the solution for everyone encountering the problem, but I learned the following:
The error message, and other misleading error messages, can occur if you don't use the correct order of security headers. In my case I was using the endpoint to create a presigned URL for POSTing a file upload. In that case, you need to make sure the security-relevant fields in your form data are in the correct order. For signatureVersion 's3v4' it is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL to upload a file, it's important that your file comes AFTER the security fields.
After that, the request works as expected.
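For illustration, here is a sketch of what that ordering can look like in Node.js with the form-data package. The presigned variable is assumed to be the { url, fields } object returned by something like s3.createPresignedPost(), so the exact field names may differ in your setup:

// Sketch: build the multipart body with the security fields first, file last.
const FormData = require('form-data'); // npm package
const fs = require('fs');

function buildForm(presigned, filePath) {
    const form = new FormData();
    const order = ['key', 'x-amz-algorithm', 'x-amz-credential', 'x-amz-date',
                   'policy', 'x-amz-security-token', 'x-amz-signature'];
    for (const name of order) {
        // match field names case-insensitively; skip fields that are absent
        const actual = Object.keys(presigned.fields)
            .find((k) => k.toLowerCase() === name);
        if (actual) form.append(actual, presigned.fields[actual]);
    }
    // the file content must come AFTER all the security fields
    form.append('file', fs.createReadStream(filePath));
    return form;
}

// usage: S3 replies with HTTP 204 on a successful upload
// buildForm(presigned, './photo.jpg').submit(presigned.url, (err, res) =>
//     console.log(err || res.statusCode));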
I can't say for certain, but I'm guessing this may have something to do with you using the old SDK. Here it is with v3 of the SDK; you may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
// ...
const client = new S3Client({ region: 'eu-west-1' });
const params = {
    Bucket: 'my-bucket-name',
    Key: 'my-file-name'
};
const command = new GetObjectCommand(params);
// in v3, getSignedUrl returns a promise rather than taking a callback
getSignedUrl(client, command).then((url) => {
    res.send(`<p>${url}</p>`);
});
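And since the route handler can be async, you would typically await it instead (a sketch assuming the same client and params as above):

router.get('/', async (req, res) => {
    // v3's getSignedUrl resolves to the signed URL string
    const url = await getSignedUrl(client, new GetObjectCommand(params), {
        expiresIn: 3600 // seconds; an assumed value
    });
    res.send(`<p>${url}</p>`);
});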

ses.sendEmail() gives CORS error. 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*'..credentials mode is 'include'

I have been stuck on this for 4 days now. I really need some insights.
I have a serverless Express app deployed on AWS. I am serving my frontend from S3 and the backend from Lambda. API Gateway has a proxy, as shown in the serverless.yml below.
I have also used CloudFront to map my domain (https://my.domain.com.au) to the S3 bucket origin URL.
The normal GET, POST, PUT, and DELETE requests work fine. But when I try to access any other AWS service from Lambda, I get the following CORS error:
Access to XMLHttpRequest at 'https://0cn0ej4t5w.execute-api.ap-southeast-2.amazonaws.com/prod/api/auth/reset-password' from origin 'https://my.domain.com.au' has been blocked by CORS policy: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
My use case is to send a mail from my app, for which I tried using:
ses.sendEmail(params).promise();
This gave me the same error. So I tried invoking it through Lambda: same error. Now I am trying to push the mail contents to S3 and send the mail from a Lambda trigger, but this gives me the same error.
The issue doesn't seem to be in the code, as it works perfectly from my local environment. However, I don't want to leave any stones unturned.
Since my Lambda is in a VPC, I have used an internet gateway and tried setting up a PrivateLink as well.
Serverless.yml
service: my-api

# plugins
plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-dotenv-plugin

# custom for secret inclusions
custom:
  stage: ${opt:stage, self:provider.stage}
  serverless-offline:
    httpPort: 5000
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: # enable auto-packing of external modules
      forceInclude:
        - mysql
        - mysql2
        - passport-jwt
        - jsonwebtoken
        - moment
        - moment-timezone
        - lodash

# provider
provider:
  name: aws
  runtime: nodejs12.x
  # you can overwrite defaults here
  stage: prod
  region: ${env:AWS_REGION_APP}
  timeout: 10
  iamManagedPolicies:
    - 'arn:aws:iam::777777777777777:policy/LambdaSESAccessPolicy'
  vpc:
    securityGroupIds:
      - ${env:AWS_SUBNET_GROUP_ID}
    subnetIds:
      - ${env:AWS_SUBNET_ID1}
      - ${env:AWS_SUBNET_ID2}
      - ${env:AWS_SUBNET_ID3}
  environment:
    # env variables (hidden)
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - s3:*
        - ses:*
        - lambda:*
      Resource: '*'

# functions
functions:
  app:
    handler: server.handler
    events:
      - http:
          path: /
          method: ANY
      - http:
          path: /{proxy+}
          method: ANY
          cors:
            origin: ${env:CORS_ORIGIN_URL}
            allowCredentials: true
            headers: 'Access-Control-Allow-Origin, Access-Control-Allow-Headers, Origin, X-Requested-With, Content-Type, Accept, Access-Control-Request-Method, Access-Control-Request-Headers, Authorization'

# you can add CloudFormation resource templates here
resources:
  # API Gateway Errors
  - ${file(resources/api-gateway-errors.yml)}
  # VPC Access for RDS
  - ${file(resources/lambda-vpc-access.yml)}
I have configured response headers as well:
app.use(function (req, res, next) {
    res.header("Access-Control-Allow-Origin", process.env.CORS_ORIGIN_URL);
    res.header("Access-Control-Allow-Headers", "Access-Control-Allow-Origin, Access-Control-Allow-Headers, Origin, X-Requested-With, Content-Type, Accept, Access-Control-Request-Method, Access-Control-Request-Headers, Authorization");
    res.header("Access-Control-Allow-Credentials", "true");
    res.header("Access-Control-Allow-Methods", "GET,HEAD,OPTIONS,POST,PUT,DELETE");
    next();
});
I actually had the same exact error as you, but I figured it out.
I'll just paste my code, since you didn't show what your Lambda function looks like.
I also know it's been two weeks... so hopefully this helps someone in the future.
CORS errors are server-side, as I'm sure you are aware. The problem with AWS SES is that you have to handle the Lambda correctly, or it'll give you a CORS error even though you have the right headers.
First things first... I don't think you have an OPTIONS method in your API Gateway, although I'm not sure whether ANY can work as a replacement.
Second, here is my code:
I check which HTTP method I'm getting and then respond based on that. I receive a POST event with the details in the body. You might want to change the finally block to something else. The OPTIONS case is important for CORS: it lets the browser know that it's okay to send the POST request (or at least that's how I see it).
const AWS = require('aws-sdk');
var ses = new AWS.SES();
var RECEIVER = 'receiver@gmail.com';
var SENDER = 'sender@gmail.com';

exports.handler = async (event) => {
    let body;
    let statusCode = '200';
    const headers = {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'GET,DELETE,POST,PATCH,OPTIONS',
        'Access-Control-Allow-Credentials': true,
        'Access-Control-Allow-Headers': 'access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,Access-Control-Allow-Origin,authorization,content-type',
        'Content-Type': 'application/json'
    };
    console.log(event);
    try {
        switch (event.httpMethod) {
            case 'POST':
                event = JSON.parse(event.body);
                var params = {
                    Destination: {
                        ToAddresses: [
                            RECEIVER
                        ]
                    },
                    Message: {
                        Body: {
                            Html: {
                                // html() is a user-defined template helper (not shown),
                                // e.g. 'Name: ' + event.name + '\nPhone: ' + event.phone + ...
                                Data: html(event.name, event.phone, event.email, event.message),
                                Charset: 'UTF-8'
                            }
                        },
                        Subject: {
                            Data: 'You Have a Message From ' + event.name,
                            Charset: 'UTF-8'
                        }
                    },
                    Source: SENDER
                };
                await ses.sendEmail(params).promise();
                break;
            case 'OPTIONS':
                statusCode = '200';
                body = "OK";
                break;
            default:
                throw new Error(`Unsupported method "${event.httpMethod}"`);
        }
    }
    catch (err) {
        statusCode = '400';
        body = err.message;
    }
    finally {
        // note: this overwrites the body set above, including error messages
        body = "{\"result\": \"Success\"}";
    }
    console.log({
        statusCode,
        body,
        headers,
    });
    return {
        statusCode,
        body,
        headers,
    };
};

Express-Gateway, how to pick a service endpoint based on URL pattern?

I am trying to get a bunch of individual servers on the same domain behind the gateway. Currently, each of these servers can be reached from the outside world via multiple names. Our sales team wanted to provide customers with a unique URL, so if a server serves 10 customers, we have 10 CNAME records pointing to it.
As you can see, with 5 or 6 servers the number of apiEndpoints is pretty large. On top of that, new CNAMEs can be created at any given time, making hardcoded apiEndpoints a pain to manage.
Is it possible to have a dynamic serviceEndpoint URL? What I'm thinking of is something like this:
apiEndpoints:
  legacy:
    host: '*.mydomain.com'
    paths: '/v1/*'
serviceEndpoints:
  legacyEndPoint:
    url: '${someVarWithValueofStar}.internal.com'
pipelines:
  default:
    apiEndpoints:
      - legacy
    policies:
      - proxy:
          - action:
              serviceEndpoint: legacyEndPoint
Basically, what I want to achieve is to redirect all x.mydomain.com requests to x.internal.com, where x can be anything.
Can I use variables in the URL strings? Is there a way to get the string that matched the wildcard in the host? Are there other options to deal with this problem?
I ended up hacking a proxy plugin together for my needs. It's very basic and needs more work and testing, but this is what I started with.
The proxy plugin (my-proxy):
const httpProxy = require("http-proxy");

/**
 * This is a very rudimentary proxy plugin for the Express Gateway framework.
 * Basically it will redirect requests from xxx.external.com to xxx.internal.com,
 * where xxx can be any name and the destination comes from providing a
 * service endpoint with a http://*.destination.com url.
 * @param {*} params
 * @param {*} config
 */
module.exports = function (params, config) {
    const serviceEndpointKey = params.serviceEndpoint;
    const changeOrigin = params.changeOrigin;
    const endpoint = config.gatewayConfig.serviceEndpoints[serviceEndpointKey];
    const url = endpoint.url;
    // extract the domain that follows the "//*." in the service endpoint url
    const reg = /(\/\/\*\.)(\S+)/;
    const match = reg.exec(url);
    const domain = match[2];
    const proxy = httpProxy.createProxyServer({ changeOrigin: changeOrigin });
    proxy.on("error", (err, req, res) => {
        console.error(err);
        if (!res.headersSent) {
            res.status(502).send('Bad gateway.');
        } else {
            res.end();
        }
    });
    return (req, res, next) => {
        const hostname = req.hostname;
        // grab the leftmost label of the incoming hostname (the wildcard part)
        const regex = /^(.*?)\./;
        const tokens = regex.exec(hostname);
        const serverName = tokens[1];
        const destination = req.protocol + "://" + serverName + "." + domain;
        proxy.web(req, res, { target: destination });
    };
};
gateway.config.yml
http:
  port: 8080
apiEndpoints:
  legacy:
    host: '*.external.com'
    paths: '/v1/*'
serviceEndpoints:
  legacy_end_point:
    url: 'https://*.internal.com'
policies:
  - my-proxy
pipelines:
  default:
    apiEndpoints:
      - legacy
    policies:
      - my-proxy:
          - action:
              serviceEndpoint: legacy_end_point
              changeOrigin: true
It all boils down to regex-parsing the wildcards in the apiEndpoints host and serviceEndpoints urls; nothing fancy so far. I looked at the source code of the built-in proxy plugin and I don't think my naive approach will fit in very well, but it works for what I need.
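To make the mapping concrete, here is a small standalone sketch of the two regex steps the plugin performs (the hostnames are illustrative):

// Step 1: pull the target domain out of the service endpoint url.
const url = 'https://*.internal.com';
const domain = /(\/\/\*\.)(\S+)/.exec(url)[2]; // "internal.com"

// Step 2: pull the wildcard label off the incoming hostname.
const hostname = 'customer42.external.com';
const serverName = /^(.*?)\./.exec(hostname)[1]; // "customer42"

console.log('https://' + serverName + '.' + domain);
// -> "https://customer42.internal.com"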
Thanks for the question; I think this is going to be asked a lot over the following months.
Express Gateway has support for environment variables; unfortunately, right now an apiEndpoint can only be a single, well-defined endpoint without any replacement capabilities.
This is something we'll probably change in the near future, with a Proxy Table API that will let you insert some more complex templates.
In case this is pressing for you, I'd invite you to open an issue so that everybody on the team is aware of the feature request and we can prioritize it effectively.
In the meantime, unfortunately, you'll have to deal with a large number of apiEndpoints.
V.