I'd like to add an X-Frame-Options HTTP response header for static content hosted on Amazon S3 behind a CloudFront cache. How can I add this header?
You can add the x-frame-options header to the response from CloudFront / S3 using a Lambda@Edge function. The Lambda code runs in the local edge locations, but needs to be created and maintained in the us-east-1 region.
The example code below uses Node.js 6.10 to add the response header:
'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    // Add the X-Frame-Options header to the origin response.
    headers['x-frame-options'] = [{ key: 'X-Frame-Options', value: 'SAMEORIGIN' }];

    // Log the response headers to CloudWatch Logs for debugging.
    console.log(headers);

    callback(null, response);
};
Create a published (numbered) version of the Lambda function, then set that Lambda version's trigger configuration to the CloudFront origin-response event type for your path-pattern behavior.
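Publishing the version can also be scripted. Below is a minimal sketch of that step only, assuming the AWS SDK for JavaScript v3 and a hypothetical function name; the qualified ARN it returns (with the version suffix) is what you associate with the behavior:

const { LambdaClient, PublishVersionCommand } = require('@aws-sdk/client-lambda')

// Lambda@Edge functions must live in us-east-1.
const lambda = new LambdaClient({ region: 'us-east-1' })

// Publish an immutable, numbered version of the function.
lambda.send(new PublishVersionCommand({ FunctionName: 'add-x-frame-options' })) // hypothetical name
  .then(({ FunctionArn, Version }) => console.log(FunctionArn, Version))
  .catch(console.error)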
The Lambda@Edge example code above logs events to the CloudWatch Logs service for debugging purposes. If you don't already have one, you will need to set up a Lambda execution IAM role with a policy that allows the CloudWatch Logs actions and a trust relationship that allows the role to be assumed by edgelambda.amazonaws.com and lambda.amazonaws.com.
Basic Lambda Execution Policy allowing logs to be written to CloudWatch:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*",
            "Effect": "Allow"
        }
    ]
}
Trust relationship allowing Lambda and Lambda@Edge to assume the role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "edgelambda.amazonaws.com",
                    "lambda.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
It would be better if AWS simply allowed the X-Frame-Options header to be set in the console, but until then this solution works and will keep your security auditors happy.
It is now possible to add security headers via CloudFront:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-http-security-headers/
These include:
x-xss-protection: 1; mode=block
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
strict-transport-security: max-age=31536000
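If you are on the Lambda@Edge route described above, a single origin-response handler can add all of these headers at once. A minimal sketch along those lines (the header values are simply the ones listed above):

'use strict';

// Origin-response handler that adds the common security headers.
exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    headers['x-xss-protection'] = [{ key: 'X-XSS-Protection', value: '1; mode=block' }];
    headers['x-frame-options'] = [{ key: 'X-Frame-Options', value: 'SAMEORIGIN' }];
    headers['x-content-type-options'] = [{ key: 'X-Content-Type-Options', value: 'nosniff' }];
    headers['strict-transport-security'] = [{ key: 'Strict-Transport-Security', value: 'max-age=31536000' }];

    callback(null, response);
};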
Yes, you can set request headers in Angular's $http service like so:
var headers = {'X-Frame-Options': ...};
$http({method: '<TYPE>', headers: headers, url: <URL>, data: {}}).success(...);
I have 3 S3 buckets:
my-routing-test-ap-southeast-2
my-routing-test-eu-west-2
my-routing-test-us-east-1
They are all configured as static websites, with Block all public access turned off, and (for example) this policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Demo",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": "arn:aws:s3:::my-routing-test-us-east-1/*"
        }
    ]
}
I have configured a CloudFront distribution with one origin:
my-routing-test-us-east-1.s3.us-east-1.amazonaws.com
And a behaviour configured for the origin above, with the Legacy cache settings header option set to include the CloudFront-Viewer-Country header.
I should point out here that the documentation for caching based on request headers states:
Specify whether you want CloudFront to cache objects based on the values of specified headers:
Whitelist – CloudFront caches your objects based only on the values of the specified headers. Use Whitelist Headers to choose the headers that you want CloudFront to base caching on.
However, the Edit behaviour section of the CloudFront console shows the "Cache key and origin requests" options as:
Legacy cache settings > Headers > Include the following headers > CloudFront-Viewer-Country
Which, of course, does not appear to include the "Whitelist" option.
The distribution also has the Origin request trigger set to the Lambda@Edge function (where the code is pulled from this documentation page):
'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const countryToRegion = {
        'US': 'us-east-1',
        'AU': 'ap-southeast-2',
        'GB': 'eu-west-2'
    };

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        const region = countryToRegion[countryCode];
        console.log('countryCode: ' + countryCode + ' region: ' + region);
        if (region) {
            console.log('region: ' + region);
            request.origin.s3.region = region;
            const domainName = `my-routing-test-${region}.s3.${region}.amazonaws.com`;
            request.origin.s3.domainName = domainName;
            console.log('request.origin.s3.domainName: ' + domainName);
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }

    callback(null, request);
};
When I call the CloudFront URL to retrieve my test file from my region (eu-west-2), I see this in my region's log group:
countryCode: GB region: eu-west-2
region: eu-west-2
request.origin.s3.domainName: origin-routing-eu-west-2.s3.eu-west-2.amazonaws.com
But the file is always the same image served from the us-east-1 region. This should not be the case as each bucket contains a different image for each region.
What is missing or incorrect in this configuration?
It turns out that the object caching TTL under "Cache key and origin requests" was set to the default (one year), so when I tested the initial file retrieval, the file from the primary (us-east-1) origin was stored in the cache and was never replaced by the regional version. Check your cache settings, folks!
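After fixing the TTL, the copy that was already cached from us-east-1 still has to be flushed. A minimal sketch of an invalidation, assuming @aws-sdk/client-cloudfront and a placeholder distribution ID:

const { CloudFrontClient, CreateInvalidationCommand } = require('@aws-sdk/client-cloudfront')

const cloudfront = new CloudFrontClient({ region: 'us-east-1' })

// Invalidate the cached objects so the next request re-fetches from the
// (now correctly selected) regional origin.
cloudfront.send(new CreateInvalidationCommand({
  DistributionId: 'EDFDVBD6EXAMPLE', // placeholder
  InvalidationBatch: {
    CallerReference: `invalidate-${Date.now()}`, // must be unique per request
    Paths: { Quantity: 1, Items: ['/*'] },
  },
}))
  .then(() => console.log('Invalidation submitted'))
  .catch(console.error)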
I am trying to upload a file to AWS S3 using aws-sdk v3 from a Nuxt app's Vue Component.
Here's how I upload it.
<script>
export default {
  ...
  methods: {
    onSubmit(event) {
      event.preventDefault()
      this.addPhoto()
    },
    addPhoto() {
      // Load the required clients and packages
      const { CognitoIdentityClient } = require('@aws-sdk/client-cognito-identity')
      const { fromCognitoIdentityPool } = require('@aws-sdk/credential-provider-cognito-identity')
      const {
        S3Client,
        PutObjectCommand,
        ListObjectsCommand,
        DeleteObjectCommand,
      } = require('@aws-sdk/client-s3')

      const REGION = 'us-east-1' // REGION
      const albumBucketName = 'samyojya-1'
      const IdentityPoolId = 'XXXXXXX'
      const s3 = new S3Client({
        region: REGION,
        credentials: {
          accessKeyId: this.$config.CLIENT_ID,
          secretAccessKey: this.$config.CLIENT_SECRET,
          sessionToken: localStorage.getItem('accessToken'),
        },
      })

      var file = this.formFields[0].fieldName
      var fileName = this.formFields[0].fieldName.name
      var photoKey = 'user-dp/' + fileName
      var s3Response = s3.send(
        new PutObjectCommand({
          Bucket: albumBucketName,
          Key: photoKey,
          Body: file,
        }),
      )
      s3Response
        .then((response) => {
          console.log('Successfully uploaded photo.' + JSON.stringify(response))
        })
        .catch((error) => {
          console.log(
            'There was an error uploading your photo: Error stacktrace' + JSON.stringify(error.message),
          )
          const { requestId, cfId, extendedRequestId } = error.$metadata
          console.log({ requestId, cfId, extendedRequestId })
        })
    },
    ...
  },
}
</script>
The issue now is that the browser complains about CORS.
This is my CORS configuration on AWS S3
I'm suspecting one of the following:
something in how the upload request is created with the SDK (I'm open to using an API that is better than what I'm using),
a Nuxt setting that allows CORS,
or something else in the S3 CORS config under the bucket's Permissions tab.
The Network tab in Chrome DevTools shows an Internal Server Error (500) for the preflight request (I don't know why we see two entries here).
Appreciate any pointers on how to debug this.
I was having the same issue today. The S3 logs were saying it returned a 200 code response, but Chrome was seeing a 500 response. In Safari, the error showed up as:
received 'us-west-1'; expected 'eu-west-1'
Adding region: 'eu-west-1' (i.e. the region where the bucket was created) to the parameters when creating the S3 service solved the issue for me.
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-region.html#setting-region-constructor
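Since the question uses SDK v3, the equivalent fix there is passing region to the S3Client constructor. A minimal sketch, assuming the bucket lives in eu-west-1:

const { S3Client } = require('@aws-sdk/client-s3')

// The region must match the region the bucket was created in; otherwise
// S3 rejects the request with a region-mismatch error like the one above.
const s3 = new S3Client({ region: 'eu-west-1' })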
In the bucket policy, use this:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "https://example/*"
                }
            }
        }
    ]
}
and use the region of your bucket
// v2 SDK client; the region must match the bucket's region
const aws = require('aws-sdk')

const s3 = new aws.S3({
  apiVersion: 'latest',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID_CUSTOM,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY_CUSTOM,
  region: 'us-west-1',
})
I am having the same problem, but according to the docs you should be using Cognito Identity to access the bucket. In v3, for clients to be able to access buckets from the browser, you must use Cognito Identity to authenticate users in order to have access to bucket/object commands. I am currently trying to implement this myself, so I am not 100% sure how to do it, just what the process is. Feel free to take a look; I hope this helps.
Cognito SDK link: https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html
Example: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-browser-credentials-cognito.html
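A minimal sketch of that pattern with SDK v3, based on the example linked above (the region and identity pool ID are placeholders):

const { S3Client } = require('@aws-sdk/client-s3')
const { CognitoIdentityClient } = require('@aws-sdk/client-cognito-identity')
const { fromCognitoIdentityPool } = require('@aws-sdk/credential-provider-cognito-identity')

const REGION = 'us-east-1' // placeholder

// Credentials come from the Cognito identity pool instead of access keys
// embedded in the browser bundle.
const s3 = new S3Client({
  region: REGION,
  credentials: fromCognitoIdentityPool({
    client: new CognitoIdentityClient({ region: REGION }),
    identityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000', // placeholder
  }),
})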
The error needs to be fixed on the backend, since it's CORS. It clearly states that the Access-Control-Allow-Origin header is missing.
So, checking it in the official AWS docs gives you the answer: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html
I was doing multiple things wrong here. Every answer on this post helped me make a little progress while debugging. Can't thank you enough!
My bucket policy was not using role-based ALLOW/DENY that corresponds to the authenticated role on my Cognito identity pool.
I needed to correctly configure the authentication provider as a Cognito user pool.
Make sure the region is right; the Cognito region can be different from the S3 region.
Make sure the CORS policy includes the relevant information, like "Access-Control-Allow-Origin" (see the sketch after this list).
Double-check that the token includes the right credentials; the cognito decode-verify tool comes in very handy here.
I was testing stand-alone from the browser, but this is not a good approach; use an API server to take the file and push it to S3 from there.
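For the CORS point above, a minimal sketch of setting the bucket's CORS rules programmatically, assuming @aws-sdk/client-s3 and a placeholder origin (the same rule structure can also be pasted as JSON under the bucket's Permissions tab):

const { S3Client, PutBucketCorsCommand } = require('@aws-sdk/client-s3')

const s3 = new S3Client({ region: 'us-east-1' }) // placeholder region

// Allow browser uploads (preflighted PUT/POST) from the app's origin.
s3.send(new PutBucketCorsCommand({
  Bucket: 'samyojya-1', // bucket name from the question
  CORSConfiguration: {
    CORSRules: [
      {
        AllowedOrigins: ['https://my-nuxt-app.example.com'], // placeholder origin
        AllowedMethods: ['GET', 'PUT', 'POST', 'HEAD'],
        AllowedHeaders: ['*'],
        ExposeHeaders: ['ETag'],
        MaxAgeSeconds: 3000,
      },
    ],
  },
}))
  .then(() => console.log('CORS configuration applied'))
  .catch(console.error)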
I'm trying to make an upload have an ACL of public-read. The docs are super thin for Amazonica, and after hours of tinkering, I'm no closer to figuring out how to accomplish this goal. In short, I can't figure out how to get it to sign the header.
Server side, my code looks like this.
(s3/generate-presigned-url
  creds
  {:bucket-name "mybucket"
   :method "PUT"
   :expires 10000
   :key "my-key"
   :cache-control "max-age=31557600;"
   :request-parameters {:x-amz-acl "public-read"}})
Client side, I grab the URL that this creates and make an XHR PUT request:
var xhr = new XMLHttpRequest();
xhr.open("PUT", signedUrl);
xhr.setRequestHeader('Cache-Control', 'max-age=31557600')
xhr.onload = ...
xhr.onerror = ...
xhr.send(file);
And this works perfectly, with the exception that it has the wrong ACL: "private" rather than "public"
Adding it client side is easy
var xhr = new XMLHttpRequest();
xhr.open("PUT", signedUrl);
xhr.setRequestHeader('Cache-Control', 'max-age=31557600')
xhr.setRequestHeader('x-amz-acl', 'public-read')
xhr.onload = ...
xhr.onerror = ...
xhr.send(file);
But the request of course fails due to HeadersNotSigned. I can't figure out at all how to add it server side so that it gets signed. The SignedHeaders section never includes any additional parameters.
I've blindly tried all sorts of combos
(s3/generate-presigned-url
  creds
  {:headers {:x-amz-acl "public-read"}
   :x-amz-acl "public-read"
   :metadata {:x-amz-acl "public-read"}
   :signed-headers {:x-amz-acl "public-read"}
   :amz-acl "public-read"
   :x-amz-signed-headers {:x-amz-acl "public-read"}
   :X-Amz-SignedHeaders ["x-amz-acl"]
   :request-parameters {:x-amz-acl "public-read"}})
How do you add an ACL policy to a signed url?
I don't have a direct answer to that, but a workaround for your consideration: making all objects in your s3 bucket default to public-read.
You can do this by adding this bucket policy to your bucket (replace bucketnm of course):
{
    "Id": "Policy1397632521960",
    "Statement": [
        {
            "Sid": "Stmt1397633323327",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::bucketnm/*",
            "Principal": {
                "AWS": [
                    "*"
                ]
            }
        }
    ]
}
I'm using com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient to call uploadDocuments(), passing the AWS secret key, access ID, and endpoint.
Access policy: "Access all for all services".
It returns:
Service: AmazonCloudSearchDomain; Status Code: 403; Error Code:
SignatureDoesNotMatch;
But with the same package I have tried search() with the same credentials, and I get search results correctly, as expected.
Can someone please help with the above exception?
This may be caused by your access policy allowing public access to search requests, but not upload. So there may be an issue with the credentials being passed, but you don't see that error when performing search requests, because credentials aren't necessary for that type of request.
For example, this access policy below would allow anyone to search without presenting credentials. But any other operation (like uploading documents) would require a valid set of credentials that have access to the CloudSearch domain.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "cloudsearch:search"
        }
    ]
}
I have uploaded an image to Amazon S3 storage. But how can I access this image by URL? I have made the folder and file public but I still get an AccessDenied error if I try to access it by the URL https://s3.amazonaws.com/bucket/path/image.png
This is an older question, but for anybody who comes across this question, once I made the file public I was able to access my image as https://mybucket.s3.amazonaws.com/myfolder/afile.jpg
In my case I had uploaded the image privately, so I was unable to access it. I used the following code:
const AWS = require('aws-sdk')

const myBucket = 'BUCKET_NAME'
const myKey = 'FILE_NAME.JPG'
const signedUrlExpireSeconds = 60 * 1

const s3 = new AWS.S3({
  accessKeyId: 'ACCESS_KEY_ID',
  signatureVersion: 'v4',
  region: 'S3_REGION',
  secretAccessKey: 'ACCESS_SECRET',
})

// Generate a time-limited pre-signed URL for the private object.
const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds,
})

console.log(url)
You can access your image by using:
https://s3.amazonaws.com/bucketname/foldername/imagename.jpg
or if there are no folders, you can do:
https://s3.amazonaws.com/bucketname/imagename.jpg
Upvote if this helps. It conforms to the AWS console as of 30 May 2017.
Seems like you can now simply right-click on any folder inside a bucket and select 'Make Public' to make everything in that folder public. It may not work at the bucket level itself.
One of the easiest ways is to add a bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "MakeItPublic",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::yourbucketname.com/*"
        }
    ]
}
Make sure you access the image using the same case as it was uploaded and stored on S3.
For example, if you uploaded image_name.JPG, you should use that same name, not image_name.jpg.
For future reference, if you want to access a file in Amazon S3 the URL needs to be something like:
bucketname.s3.region.amazonaws.com/foldername/image.png
Example: my-awesome-bucket.s3.eu-central-1.amazonaws.com/media/img/dog.png
Don't forget to set the object to public.
Inside S3, if you click on the object, you will see a field called Object URL. That's the object's web address.
In the console, right-click on the image you want to access and click "Make Public"; when that's done, right-click on the image again, click "Properties", and copy the link from the extended view.
I came across this question whilst looking for a solution to a similar problem with being unable to access images.
It turns out that images with a % in their filename, when being accessed, must have the % symbol URL encoded to %25.
i.e. photo%20of%20a%20banana%20-%2019%20june%202016.jpg needs to be accessed via photo%2520of%2520a%2520banana%2520-%252019%2520june%25202016.jpg.
However, URL encoding the full path didn't work for us, since the slashes, etc would be encoded, and the path would not work. In our specific case, simply replacing % with %25 in all access paths made the difference.
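For example, a tiny helper that applies just that substitution (escaping only the percent signs and leaving the slashes intact):

// Escape only the % characters in an S3 object path.
function escapePercentSigns(path) {
  return path.replace(/%/g, '%25');
}

console.log(escapePercentSigns('photo%20of%20a%20banana%20-%2019%20june%202016.jpg'));
// -> photo%2520of%2520a%2520banana%2520-%252019%2520june%25202016.jpg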
I was having the same problem; the issue was spaces in the image URL. I did this to make it work:
String imgUrl = prizes.get(position).getImagePreview().replaceAll("\\s", "%20");
Now pass this URL to Picasso:
Picasso.with(mContext)
        .load(imgUrl)
        .into(mImageView);
Just add permission as shown in the image below.
To access private images via URL you must provide Query-string authentication. Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.
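A pre-signed URL generates those query-string parameters for you. A minimal sketch, assuming the JavaScript SDK v3 and placeholder bucket, key, and region values:

const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3')
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner')

const s3 = new S3Client({ region: 'us-east-1' }) // placeholder region

// Produces a SigV4 query-string-authenticated URL valid for 60 seconds.
getSignedUrl(s3, new GetObjectCommand({ Bucket: 'BUCKET_NAME', Key: 'FILE_NAME.JPG' }), { expiresIn: 60 })
  .then((url) => console.log(url))
  .catch(console.error)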
Just an add-on to @akotian's answer: you can get the object URL by clicking the object, as follows.
To access it publicly, you can set the ACL programmatically while uploading the object to the bucket,
i.e. a sample Java request:
PutObjectRequest putObjectRequest = PutObjectRequest.builder()
        .contentType(contentType)
        .bucket(LOGO_BUCKET_NAME)
        .key(LOGO_FOLDER_PREFIX + fileName)
        .acl(ObjectCannedACL.PUBLIC_READ) // this makes the object public-read
        .metadata(metadata)
        .build();
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
            ]
        }
    ]
}
Use this policy for that bucket; it makes the bucket public.
Adding a bucket policy worked for me:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::yourbucketname/*"
        }
    ]
}
Turn off Block public access (bucket settings) from the Permissions tab inside your bucket. You also need to edit the permissions of the object: provide Read access for the Everyone (public access) grantee, then check "I understand the effects of these changes on this object" and save the changes.
Try changing your bucket; it may work.