How to allow a NuxtJS app page, stored on S3 and served via CloudFront, to open in an iframe?

I am working on a website that has a widget-like feature, and there are a few pages that need to be allowed to open in an iframe. I have created a separate index.html file containing an iframe whose src is the URL of one page of the NuxtJS app. When I open this index.html file in a browser, I get the error:
Refused to display 'https://example.com/widgets/new/' in a frame because it set 'X-Frame-Options' to 'deny'.
When I run the project locally and use the local URL https://localhost:3000/widgets/new/ as the iframe's src, it works, but not in production.
I looked around the internet but couldn't find a relevant solution to try.
Is there any option or config in NuxtJS that I can use to set the X-Frame-Options header to allow framing on certain routes or pages?
I am using an Amazon S3 bucket to store the project and Amazon CloudFront to serve the website.
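Not from the original question, but a rough sketch of one possible direction: because the site is a static build stored in S3 and served by CloudFront, the X-Frame-Options: deny header is most likely being attached somewhere in the CloudFront pipeline (S3 does not add it by default). Assuming the header is already present on the response the function sees, a hypothetical CloudFront Function on the viewer-response event could strip it for just the widget routes; the /widgets/ prefix is taken from the error message above.

// Hypothetical CloudFront Function (viewer-response) sketch.
// Removes X-Frame-Options only for widget pages so they can be framed,
// while leaving the header in place for the rest of the site.
function handler(event) {
    var request = event.request;
    var response = event.response;

    // Assumed widget path prefix, e.g. /widgets/new/
    if (request.uri.indexOf('/widgets/') === 0) {
        delete response.headers['x-frame-options'];
    }

    return response;
}

If the header turns out to be injected by a response headers policy attached to the distribution, adjusting or detaching that policy for the widget behaviour may be the more direct fix.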

Related

Cannot access subpages of my NextJS static site on S3 via cloudfront when public access is blocked

I have configured CloudFront to serve my NextJS static site from an S3 bucket. I have intentionally blocked all public access to my S3 bucket, so the only way to access this site would be via the CloudFront URL (I've set up Origin Access Control "OAC" on the CloudFront distribution, so that's how CloudFront is able to access my S3 bucket. For the CloudFront origin domain, I have added the S3 bucket URL and not the static website hosting endpoint, because the S3 static hosting endpoint requires S3 objects to be publicly accessible, which is what I am trying to block).
I am able to go to my website and click around using the CloudFront URL. It navigates to subpages and content shows up as expected. However, refreshing on a subpage results in the 'AccessDenied' page.
For example, this is the name of the website: https://example.com
Going to https://example.com works fine; it shows the index.html like I have configured. Then, clicking a button on the website takes me to https://example.com/another-page, which also shows up just fine. However, if I refresh on https://example.com/another-page, that's when the 'AccessDenied' error shows up.
Are there ways to get around it, so I can go straight to the subpages? It feels like it is possible, given that I was able to navigate to https://example.com/another-page within the app itself.
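Not part of the original question, but a common workaround worth sketching: keep the REST origin with OAC and attach a CloudFront Function on the viewer-request event that maps extensionless paths to the index.html objects of the static export. This assumes the export writes pages as /another-page/index.html; adjust if your build produces /another-page.html instead.

// Hypothetical CloudFront Function (viewer-request) sketch.
// Rewrites pretty URLs to the index.html objects of a static export
// so that refreshing https://example.com/another-page no longer returns AccessDenied.
function handler(event) {
    var request = event.request;
    var uri = request.uri;

    if (uri.slice(-1) === '/') {
        // /another-page/  ->  /another-page/index.html
        request.uri = uri + 'index.html';
    } else if (uri.lastIndexOf('.') <= uri.lastIndexOf('/')) {
        // No file extension: /another-page  ->  /another-page/index.html
        request.uri = uri + '/index.html';
    }

    return request;
}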

How to allow users to type an address when using a Vue SPA on Google Cloud Storage

We are hosting a Vue SPA on Google Cloud Storage buckets and it works fine for the main index page and for the links on the page, but you are unable to type a URL in and go to the page, because Google Cloud tries to find the file instead of using Vue Router.
If I switch off history mode it works, because the route is a # fragment instead of a path, but we need history mode on for this to work.
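For context (not from the original question), history mode in Vue Router is what produces the real paths the bucket cannot resolve; a minimal Vue Router 3 sketch, with placeholder components:

// Hypothetical Vue Router (v3) sketch: history mode produces real paths
// like /about instead of /#/about, which is why the bucket needs a
// fallback to index.html for unknown object keys.
import Vue from 'vue';
import VueRouter from 'vue-router';
import Home from './views/Home.vue';   // placeholder components
import About from './views/About.vue';

Vue.use(VueRouter);

export default new VueRouter({
  mode: 'history',
  routes: [
    { path: '/', component: Home },
    { path: '/about', component: About },
  ],
});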
Here is the XML returned for not-found pages:
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
</Error>
I want users to be able to type in a URL and go to that page. I would like to keep it on Google Cloud Storage buckets if possible; if not, a VM is fine.
Fixed it by setting the 404 page to index.html
You can also fix it through gcloud SDK using the command below:
gsutil web set -m index.html -e index.html gs://your.bucket.com
I had the same problem here and this is how I fixed it.
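For anyone scripting this instead of using gsutil, a rough equivalent with the Node.js client library (the bucket name is the placeholder from the command above):

// Hypothetical sketch using @google-cloud/storage: set both the main page
// and the not-found page to index.html so the SPA router handles all paths.
const { Storage } = require('@google-cloud/storage');

async function configureSpaBucket() {
  const storage = new Storage();
  await storage.bucket('your.bucket.com').setMetadata({
    website: {
      mainPageSuffix: 'index.html',
      notFoundPage: 'index.html',
    },
  });
}

configureSpaBucket().catch(console.error);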

Receive AccessDenied when trying to reload, refresh, or open a page in a new tab in Angular 5

For a while, I was simply storing the contents of my website in an S3 bucket and could access all pages via the full URL just fine. I wanted to make my website more secure by adding SSL, so I created a CloudFront distribution pointing to my S3 bucket.
The site will load just fine, but if the user tries to refresh the page or if they try to access a page using the full url (i.e., www.example.com/home), they will receive an AccessDenied page.
S3 doesn't understand the route when you reload or open it in a new tab. You need to tell S3 to serve index.html for these routes. Whenever such a route is opened directly, it gives a 403 (AccessDenied) error. To fix this, configure CloudFront to redirect the 403 error page to index.html.
Go to AWS CloudFront, open your distribution's configuration, and go to the Error Pages tab to set this up.
Here is a detailed blog post: https://www.internetkatta.com/host-angular-2-or-4-or-5-version-in-aws-s3-using-cloudfront
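For reference (not part of the original answer), the same error-page mapping can also be expressed in infrastructure code; a minimal AWS CDK sketch in JavaScript, with construct names invented for illustration:

// Hypothetical AWS CDK (JavaScript) sketch of the 403/404 -> index.html mapping.
const cdk = require('aws-cdk-lib');
const s3 = require('aws-cdk-lib/aws-s3');
const cloudfront = require('aws-cdk-lib/aws-cloudfront');
const origins = require('aws-cdk-lib/aws-cloudfront-origins');

class SpaStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    const siteBucket = new s3.Bucket(this, 'SiteBucket'); // name assumed

    new cloudfront.Distribution(this, 'SiteDistribution', {
      defaultBehavior: { origin: new origins.S3Origin(siteBucket) },
      defaultRootObject: 'index.html',
      errorResponses: [
        // Deep links hit S3 directly and come back as 403/404;
        // serve the SPA shell instead so the client-side router can take over.
        { httpStatus: 403, responseHttpStatus: 200, responsePagePath: '/index.html' },
        { httpStatus: 404, responseHttpStatus: 200, responsePagePath: '/index.html' },
      ],
    });
  }
}

const app = new cdk.App();
new SpaStack(app, 'SpaStack');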

React Router + AWS Backend, how to SEO

I am using React and React Router in my single page web application. Since I'm doing client side rendering, I'd like to serve all of my static files (HTML, CSS, JS) with a CDN. I'm using Amazon S3 to host the files and Amazon CloudFront as the CDN.
When the user requests /css/styles.css, the file exists so S3 serves it.
When the user requests /foo/bar, this is a dynamic URL so S3 adds a hashbang: /#!/foo/bar. This will serve index.html. On my client side I remove the hashbang so my URLs are pretty.
This all works great for 100% of my users.
All static files are served through a CDN
A dynamic URL will be routed to /#!/{...} which serves index.html (my single page application)
My client side removes the hashbang so the URLs are pretty again
The problem
The problem is that Google won't crawl my website. Here's why:
Google requests /
They see a bunch of links, e.g. to /foo/bar
Google requests /foo/bar
They get redirected to /#!/foo/bar (302 Found)
They remove the hashbang and request /
Why is the hashbang being removed? My app works great for 100% of my users so why do I need to redesign it in such a way just to get Google to crawl it properly? It's 2016, just follow the hashbang...
</rant>
Am I doing something wrong? Is there a better way to get S3 to serve index.html when it doesn't recognize the path?
Setting up a node server to handle these paths isn't the correct solution because that defeats the entire purpose of having a CDN.
In this thread Michael Jackson, top contributor to React Router, says "Thankfully hashbang is no longer in widespread use." How would you change my set up to not use the hashbang?
You can also check out this trick. You need to set up a CloudFront distribution and then alter the 404 behaviour in the "Error Pages" section of your distribution. That way you can use domain.com/foo/bar links again :)
I know this is a few months old, but for anyone that comes across the same problem, you can simply specify "index.html" as the error document in S3. The error document property can be found under bucket Properties => Static website hosting => Enable website hosting.
Please keep in mind that taking this approach means you will be responsible for handling HTTP errors like 404 in your own application, along with other HTTP errors.
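Since the error-document trick means every unknown path now comes back as index.html, the application itself has to present the "not found" view. The original thread predates it, but a catch-all route in React Router v6 is the usual shape; the component names here are invented:

// Hypothetical React Router v6 sketch: the catch-all route renders a
// NotFound view because S3/CloudFront now answers every path with index.html.
import React from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

function Home() { return <h1>Home</h1>; }
function FooBar() { return <h1>Foo Bar</h1>; }
function NotFound() { return <h1>404 - page not found</h1>; }

export default function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/foo/bar" element={<FooBar />} />
        <Route path="*" element={<NotFound />} />
      </Routes>
    </BrowserRouter>
  );
}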
The hashbang is not recommended when you want to make an SEO-friendly website; even if it is indexed in Google, the page will show only thin content.
The best way to build your website is by using the latest trends and techniques, namely "progressive web enhancement"; search for it on Google and you will find many articles about it.
Mainly, you should create a separate link for each page, and when the user clicks on any link they are taken to that page using any effect you want, even in a single-page website.
In this case, Google will have a unique link for each page and the user will have the fancy effect and a great UX.
For example, a "Contact Us" link that points to its own route (see the sketch below).
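A minimal sketch of that idea with React Router's Link component (the route path and component name are assumptions):

// Hypothetical sketch: each page gets its own real, crawlable URL via <Link>,
// instead of a hashbang fragment.
import React from 'react';
import { Link } from 'react-router-dom';

export function Nav() {
  return (
    <nav>
      <Link to="/contact-us">Contact Us</Link>
    </nav>
  );
}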

Access JS and CSS files from API Gateway and Lambda

I have an API Gateway in AWS that calls a Lambda function that returns some HTML. That HTML is then properly rendered on the screen, but without any styles or JS files included. How do I get those to the client as well? Is there a better method than creating /js and /css GET endpoints on the API Gateway to fetch those files? I was hoping I could just store them in S3 and they'd get loaded automatically from there.
Store them on S3, and enable S3 static website hosting. Then include the correct URL to those assets in the HTML.
I put in the exact address of each JS/CSS file I wanted to include in my HTML. You need to use the HTTPS address, not the HTTP address of the bucket. Each file has its own HTTPS address, which can be found by following Mark B's instructions above. Going through the AWS admin console, navigate to the file in the S3 bucket, click the "Properties" button in the upper right, copy the "Link" field, and paste that into the HTML file (which was also hosted in S3 in my case). The HTML looks like this:
<link href="https://s3-us-west-2.amazonaws.com/my-bucket-name/css/bootstrap.min.css" rel="stylesheet">
I don't have static website hosting enabled on the bucket. I don't have any CORS permissions allowing reading from a certain host.
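For context (a sketch, not taken from the thread), the Lambda behind the API Gateway could return HTML that references the S3-hosted assets by their full HTTPS URLs; the bucket name and the js/app.js path are placeholders:

// Hypothetical Node.js Lambda handler sketch: returns HTML whose link/script
// tags point at the full HTTPS URLs of assets stored in S3.
exports.handler = async () => {
  const html = `<!DOCTYPE html>
<html>
  <head>
    <link href="https://s3-us-west-2.amazonaws.com/my-bucket-name/css/bootstrap.min.css" rel="stylesheet">
  </head>
  <body>
    <h1>Hello from Lambda</h1>
    <script src="https://s3-us-west-2.amazonaws.com/my-bucket-name/js/app.js"></script>
  </body>
</html>`;

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'text/html' },
    body: html,
  };
};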