Authorization Issues on S3 Hosted Static Site - vue.js

I have configured Cognito-based authentication and everything works on my local machine; however, when I push the compiled Nuxt application to S3, I get the following error after login:
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>YJ5VBAF69BFHNTRA</RequestId>
  <HostId>+ROpBRvEbtrVxTgwqSfhDvK5jwhCfbD9eoE3X6RslkFghQXDL+NwkupIqXoYW2Em9ZoBEhP31Oo=</HostId>
</Error>
This seems to be an S3 error, and I am not sure what is causing it, as the site otherwise behaves normally.
It can be reproduced by registering and trying to log in to the site (copyswapper.com).

This is a problem with serving the default document (index.html); instructions for fixing it are below.
LOCAL DEVELOPMENT
On your local machine, a development web server (e.g. the webpack dev server) serves index.html regardless of the path the user browses to within the Vue.js app:
http://localhost:3000/login
AWS
I see you are deploying static files to S3 and serving them via CloudFront. However, default document handling works differently there, meaning this path does not serve an index.html file and results in an error instead:
https://copyswapper.com/login
AWS PERMISSIONS
I have a demo Single Page App hosted the same way, which you can run from this page to compare against. The standard setup is to allow only CloudFront to access the files, via permissions like this. It results in the above error if a file is missing, though:
{
  "Version": "2008-10-17",
  "Id": "PolicyForPublicWebsiteContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity H1D9C6K7CY211F"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::web.authsamples.com/*"
    }
  ]
}
CONCERN 1: DEFAULT DOCUMENT HANDLING
You need to provide a Lambda@Edge function to serve the default document, regardless of the path the user is on within your Single Page App:
Code to set the default document
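This is not the linked code, just a minimal sketch of the idea: a viewer-request handler that rewrites any extensionless path to /index.html so the SPA's router can take over.

// Minimal sketch of a Lambda@Edge viewer-request handler (illustration only, not the linked code).
'use strict';

exports.handler = async (event) => {
    const request = event.Records[0].cf.request;

    // If the last path segment has no file extension, assume it is an SPA route
    // and serve the default document instead.
    const lastSegment = request.uri.split('/').pop();
    if (!lastSegment.includes('.')) {
        request.uri = '/index.html';
    }

    // Return the (possibly rewritten) request so CloudFront fetches it from the S3 origin.
    return request;
};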
Here are a couple of paths for my demo SPA. Note that the first of these could be a path within the app, so the default document handling deals with it. The second of these points to a non-existent JavaScript file, and I did not try to fix that, so it results in the same error you are getting:
https://web.authsamples.com/nonexistentpath
https://web.authsamples.com/nonexistentpath.js
CONCERN 2: SECURITY HEADERS
While you are there, you should also write a Lambda@Edge function to apply recommended security headers, similar to this:
Code to set security headers
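Again purely for illustration (not the linked code), a response handler along these lines can add the usual headers; the exact header values below are example settings, not a recommendation for any specific app.

// Illustrative Lambda@Edge viewer-response handler that adds common security headers.
// The header values are example assumptions and should be tuned to your app.
'use strict';

exports.handler = async (event) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    headers['strict-transport-security'] = [{ key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubDomains; preload' }];
    headers['x-content-type-options'] = [{ key: 'X-Content-Type-Options', value: 'nosniff' }];
    headers['x-frame-options'] = [{ key: 'X-Frame-Options', value: 'DENY' }];
    headers['referrer-policy'] = [{ key: 'Referrer-Policy', value: 'same-origin' }];

    return response;
};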
If you then browse to Mozilla Observatory and enter your site name, you will get a better security rating, as for my demo app:
Demo App Security Rating
LAMBDA EDGE TESTING AND DEPLOYMENT
The Lambda@Edge functions can be managed in a small subproject, as in my example. I use the Serverless Framework, meaning the logic is expressed in a serverless.yml file, and I then run these commands during development to test the logic:
npm install
npm run defaultDocument
npm run securityHeaders
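The npm scripts above are specific to my repo, but testing locally is essentially just invoking the handler with a fake CloudFront event. A rough sketch (the file names and handler path are assumptions):

// test/defaultDocument.test.js - rough sketch; file layout is an assumption.
const { handler } = require('../src/defaultDocument');

// Build a minimal fake CloudFront viewer-request event for an SPA route.
const event = {
    Records: [{ cf: { request: { uri: '/login', headers: {} } } }],
};

handler(event).then((request) => {
    // Expect the extensionless path to have been rewritten to /index.html.
    console.log(request.uri);
});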
I then deploy the code with these commands:
npm run package
npm run deploy
SUMMARY
Single Page Apps are not 100% static, and require some code to handle the two concerns mentioned above.

Unable to start the environment. To retry, refresh the browser or restart by selecting Actions, Restart AWS CloudShell

I am unable to use AWS CloudShell. I operate in a supported region (Ireland) and my user has the right permissions (AWSCloudShellFullAccess):
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "cloudshell:*" ], "Effect": "Allow", "Resource": "*" } ] }
Why is it disabled?
I tried to follow this guide, but the advice there doesn't work:
AWS CloudShell troubleshooting
I was able to resolve this issue. Here are a few things to try in order to get a CloudShell environment created:
Time synchronization: make sure your machine's clock is accurate, i.e. correct against world time. Did you try from another machine to see whether it works there? It may be a time-sync related issue.
Check in different regions.
Check the AWSCloudShellFullAccess policy to ensure it contains the JSON below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["cloudshell:*"],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Check in a different browser to see whether it works there or not. https://docs.aws.amazon.com/cloudshell/latest/userguide/troubleshooting.html
Did you delete the CloudShell home directory or something? Try again after resetting the home directory, but note that this can DELETE all the data that exists in your home directory. https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#deleting-home-directory
Check whether any DENY policy has been created for CloudShell actions, and remove it (see the sketch after this list).
It is possible your account is not fully verified. Try this: create a CloudFront distribution. If you get the error below, it confirms your account is unverified (as does being able to create two distributions but not a third).
Your account must be verified before you can add new CloudFront
resources. To verify your account, please contact AWS Support
(https://console.aws.amazon.com/support/home#/ ) and include this
error message.
Click the support link
Navigate to:
Support / New case / Service limit increase
Limit type:
CloudFront Distributions
In Requests select:
Limit: Web Distributions per Account
New limit value: <TYPE_YOUR_NEW_VALUE_HERE>
My case: I had 2 distributions and wanted to create a 3rd but couldn't, so in place of <TYPE_YOUR_NEW_VALUE_HERE> I put the number 10.
Note: if nothing else works, use this last option as a last resort to confirm whether your account is verified.
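Regarding the explicit Deny check earlier in this list, one way to confirm whether something is denying CloudShell actions is the IAM policy simulator. A rough sketch using the AWS SDK for JavaScript (the user ARN is a placeholder, and the action names are assumptions based on the cloudshell:* namespace):

// Rough sketch: use the IAM policy simulator to see whether cloudshell actions are denied.
// Requires the AWS SDK for JavaScript v2 and credentials allowed to call iam:SimulatePrincipalPolicy.
const AWS = require('aws-sdk');
const iam = new AWS.IAM();

iam.simulatePrincipalPolicy({
    PolicySourceArn: 'arn:aws:iam::123456789012:user/YOUR_USER', // placeholder ARN
    ActionNames: ['cloudshell:CreateEnvironment', 'cloudshell:CreateSession'],
}).promise().then((result) => {
    for (const evaluation of result.EvaluationResults) {
        // An 'explicitDeny' decision here points at a Deny policy that needs removing.
        console.log(evaluation.EvalActionName, evaluation.EvalDecision);
    }
});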

gatsby compression doesn't work in live server, only on local

I am using the Brotli plugin to compress my bundles in Gatsby: https://github.com/ovhemert/gatsby-plugin-brotli
The plugin is configured as follows:
{
  resolve: "gatsby-plugin-brotli",
  options: {
    extensions: ["css", "html", "js", "svg", "ttf"],
  },
},
For some reason, it seems to work only when I run "gatsby serve" on my local machine (localhost:9000); after uploading to an S3 bucket, there is no compression whatsoever:
local deployment:
s3 deployment:
Nothing works, no matter what I've tried.
For uploading to S3, I've been using the gatsby-plugin-s3 package:
https://github.com/jariz/gatsby-plugin-s3
Any idea?
Thanks!
EDIT:
I checked the bucket to make sure it contains the compressed files, and it does, but it also contains the uncompressed files.
So I guess I need to refine my question, but I'm not sure how; I'm not sure what caused this issue.
After spending quite some time on this issue, I finally found the answer in the AWS docs.
I'm using CloudFront to serve the site, but since this is a demo site for testing purposes, I didn't purchase an HTTPS cert for it.
According to the AWS docs, CloudFront doesn't serve compressed content over HTTP, only over HTTPS:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
Purchasing and installing a certificate SOLVED this problem for me.
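As a quick sanity check, you can request one of your bundles with an Accept-Encoding header and look at the Content-Encoding that comes back; a small sketch in Node.js (the hostname and path are placeholders):

// Small sketch: check whether the CDN returns a compressed response for a bundle.
// The hostname and path are placeholders - point them at one of your own JS bundles.
const https = require('https');

const options = {
    host: 'example.cloudfront.net',
    path: '/app.js',
    headers: { 'Accept-Encoding': 'br, gzip' },
};

https.get(options, (res) => {
    // If compression is being served, you should see 'br' or 'gzip' here.
    console.log('content-encoding:', res.headers['content-encoding']);
    res.resume();
});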

Access files stored on Amazon S3 through web browser

Current Situation
I have a project on GitHub that builds after every commit on Travis-CI. After each successful build Travis uploads the artifacts to an S3 bucket. Is there some way for me to easily let anyone access the files in the bucket? I know I could generate a read-only access key, but it'd be easier for the user to access the files through their web browser.
I have website hosting enabled with the index document set to ".".
However, I still get a 403 Forbidden when trying to go to the bucket's endpoint.
The Question
How can I let users easily browse and download artifacts stored on Amazon S3 from their web browser? Preferably without a third-party client.
I found this related question: Directory Listing in S3 Static Website
As it turns out, if you enable public read for the whole bucket, S3 can serve directory listings. The problem is that they are in XML instead of HTML, so they are not very user-friendly.
There are three ways you could go about generating listings:
Generate index.html files for each directory on your own computer, upload them to S3, and update them whenever you add new files to a directory (see the sketch after this list). Very low-tech. Since you say you're uploading build files straight from Travis, this may not be that practical, as it would require doing extra work there.
Use a client-side S3 browser tool.
s3-bucket-listing by Rufus Pollock
s3-file-list-page by Adam Pritchard
Use a server-side browser tool.
s3browser (PHP)
s3index (Scala). Going by the existence of a Procfile, it may be readily deployable to Heroku; I'm not sure, since I don't have any experience with Scala.
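For the first option, a small script run before upload can generate the listings; a rough Node.js sketch (the root directory, styling, and file layout are assumptions):

// Rough sketch for option 1: generate a simple index.html listing per directory
// before uploading to S3. The root directory and markup are assumptions.
const fs = require('fs');
const path = require('path');

function writeListing(dir) {
    const entries = fs.readdirSync(dir, { withFileTypes: true });

    const links = entries
        .filter((e) => e.name !== 'index.html')
        .map((e) => {
            const href = e.isDirectory() ? `${e.name}/` : e.name;
            return `<li><a href="${href}">${href}</a></li>`;
        })
        .join('\n');

    fs.writeFileSync(path.join(dir, 'index.html'), `<ul>\n${links}\n</ul>\n`);

    // Recurse into subdirectories so every level gets its own listing.
    entries.filter((e) => e.isDirectory()).forEach((e) => writeListing(path.join(dir, e.name)));
}

writeListing('./artifacts'); // placeholder root directory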
Filestash is the perfect tool for that:
Log in to your bucket from https://www.filestash.app/s3-browser.html, create a shared link, and share it with the world.
Filestash is also open source. (Disclaimer: I am the author.)
I had the same problem and I fixed it by using the context menu's "Make Public" option.
Go to https://console.aws.amazon.com/s3/home, select the bucket, and then for each folder or file (or multiple selections), right-click and choose "Make Public".
You can use a bucket policy to give anonymous users full read access to your objects. Depending on whether you need them to LIST or just perform a GET, you'll want to tweak this (i.e. permissions for listing the contents of a bucket have the action set to "s3:ListBucket"; see the sketch after the policy below).
http://docs.aws.amazon.com/AmazonS3/latest/dev/AccessPolicyLanguage_UseCases_s3_a.html
Your policy will look something like the following. You can use the S3 console at http://aws.amazon.com/console to upload it.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::bucket/*"]
    }
  ]
}
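If you also want anonymous users to be able to LIST the bucket (for the XML directory listing mentioned above), an additional statement on the bucket itself would be needed; a sketch with a placeholder bucket name (note that s3:ListBucket applies to the bucket ARN, without the /* suffix):

{
  "Sid": "AddListPerm",
  "Effect": "Allow",
  "Principal": { "AWS": "*" },
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::bucket"
}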
If you're truly opening up your objects to the world, you'll want to look into setting up CloudWatch billing alarms so you can shut off permissions to your objects if they become too popular.
https://github.com/jupierce/aws-s3-web-browser-file-listing is a solution I developed for this use case. It leverages AWS CloudFront and Lambda#Edge functions to dynamically render and deliver file listings to a client's browser.
To use it, a simple CloudFormation template will create an S3 bucket and have your file server interface up and running in just a few minutes.
There are many viable alternatives, as already suggested by other posters, but I believe this approach has a unique range of benefits:
Completely serverless and built for web-scale.
Open source and free to use (though, of course, you must pay AWS for resource utilization, such as S3 storage costs).
Simple / static client browser content:
No Ajax or third party libraries to worry about.
No browser compatibility worries.
All backing systems are native AWS components.
You never share account credentials or rely on 3rd party services.
The S3 bucket remains private - allowing you to only expose parts of the bucket.
A custom hostname / SSL certificate can be established for your file server interface.
Some or all of the hosted files can be protected behind Basic Auth username/password.
An AWS WebACL can be configured to prevent abusive access to the service.

Is there something wrong with my Amazon S3 bucket policy?

I am trying to block hotlinking of my Cloudfront files from specific domains. Through a combination of online examples and Amazon's own policy generator, I have come up with this:
{
  "Version": "2008-10-17",
  "Id": "http referer policy",
  "Statement": [
    {
      "Sid": "Block image requests",
      "Action": "s3:GetObject",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::mybucket/subdir/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": ["http://example.com/*"]
        }
      },
      "Principal": { "AWS": "*" }
    }
  ]
}
I sent an invalidation request for a file in the subdirectory of mybucket, then a few minutes later tried reloading the image with the Referer header still being sent (verified using Chrome's dev tools). I did a hard reload with Ctrl+F5, and the response headers contained "X-Cache: Miss from cloudfront", so it's definitely getting the latest version of the image.
But the image still displays fine and is not blocked. The policy generator did not have an option for the "aws:Referer" key, but it's in the Amazon docs here. Have I done something wrong?
Update 2
Revisiting your policy, I wonder how you have actually allowed CloudFront access to your objects in the first place. Have you by chance followed the common advice in e.g. Start Using CloudFront with Amazon S3, that you must ensure that your object permissions are set to Make Everything Public for each object in your Amazon S3 bucket?
In that case you might have stumbled over a related pitfall due to the interaction between the (by now three) different S3 access control mechanisms available, which can be rather confusing indeed. This is addressed e.g. in Using ACLs and Bucket Policies Together:
When you have ACLs and bucket policies assigned to buckets, Amazon S3 evaluates the existing Amazon S3 ACLs as well as the bucket policy when determining an account's access permissions to an Amazon S3 resource. If an account has access to resources that an ACL or policy specifies, they are able to access the requested resource.
Consequently, you would need to migrate your ACL grants into the bucket policy (i.e. explicitly allow CloudFront access before denying via aws:Referer) and delete the overly generous ACL thereafter.
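To illustrate the idea, a sketch of such a combined bucket policy might look roughly like this; the bucket name, Origin Access Identity, and referrer are placeholders, not your real values:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "BlockImageRequests",
      "Effect": "Deny",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/subdir/*",
      "Condition": {
        "StringLike": { "aws:Referer": ["http://example.com/*"] }
      }
    }
  ]
}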
Good luck!
Update 1
Okay, now with client caching out of the way, I'm afraid this is going to be non-trivial (as is apparent when you search for aws:Referer in the AWS forums), and thus might require a couple of iterations (especially given that you have already researched the topic yourself):
The most common issue encountered is the leading whitespace error in the AWS documentation (which is particularly annoying, because a simple documentation fix would remedy lots of wasted time on behalf of users and AWS support staff alike).
Your policy doesn't exhibit this issue; however, given that you sanitized the real domain for posting, the error might still be present in your production code?
Also, it is important to realize that the HTTP Referer header is not necessarily going to be available; see e.g. Referer Hiding (thus your policy won't prevent malicious access anyway, though that's apparently not the issue here).
You have already stated that you verified it is being sent via the Chrome developer tools, so this doesn't apply either (I'm mentioning it to stress the reduced security level).
The policy looks fine at first sight. Before digging further in this direction, though, I'd recommend making sure that you are actually bypassing Chrome's cache successfully, which is notoriously less straightforward than people are used to from other browsers; in particular, Ctrl+F5 simply reloads the page but does not bypass the cache (not reliably, at least)!
As documented there as well, you could use one of the other key combinations to reload a page and bypass the cache (including the confusing second Ctrl+F5 after the first one has reloaded); however, I recommend one of the following two alternatives instead:
Chrome's developer tools offer dedicated support for browsing without a cache: in the bottom right corner of the toolbox panel is a cog icon for settings; clicking it opens an overlay with an options panel, among which you'll find the option Disable cache under the Network section.
Chrome's Incognito mode (Ctrl+Shift+N) keeps Google Chrome from storing information about the websites you've visited, which as of today (this might change at any time, of course) seems to include cached content, cookies, DNS and the like, as expected; thus it is an even quicker, though less explicit, option right now.

file:/// URL permission for Chromium

My experimental Chromium extension would like to run some content scripts on local HTML pages. I have this in my manifest.json file:
"permission": [
...
"file:///*/*"
]
and I've checked "Allow access to file URLs" on the extension management page. However, I'm not seeing the effect. I expected it to add an item to the context menu, but it doesn't on local pages, while it works on web pages. What could be wrong?
Make sure you have set up the "matches" filter correctly in the "content_scripts" section of your manifest.
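For example, a content_scripts entry that also matches local files might look roughly like this (Manifest V2 style; the script name is a placeholder):

"content_scripts": [
  {
    "matches": ["file:///*", "http://*/*", "https://*/*"],
    "js": ["content.js"]
  }
]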