I'm using Dojo hosted on Google's CDN, which means Google's CDN is Dojo's base URL path. But I have files on my own server that I want to load with dojo.require. How can I do this? Currently I'm getting an error that the file can't be accessed, but that's because it doesn't exist on Google's CDN.
You can configure this with data-dojo-config or djConfig by setting baseUrl and modulePaths.
This tutorial should help: http://dojotoolkit.org/documentation/tutorials/1.6/cdn/
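A rough sketch of that djConfig approach (the "my" namespace and the /js/my path are placeholders, not anything from the question; djConfig has to be defined before the script tag that loads dojo.xd.js from the CDN):

var djConfig = {
  parseOnLoad: true,
  modulePaths: {
    // an absolute path, so it resolves against your own server
    // instead of the CDN's base URL
    "my": "/js/my"
  }
};

// ... load //ajax.googleapis.com/ajax/libs/dojo/1.6.1/dojo/dojo.xd.js ...

dojo.require("my.widget");     // now fetched from /js/my/widget.js on your server
dojo.ready(function () {
  // my.widget is available here
});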
I am working on a website that has a widget feature, and there are a few pages that need to be allowed to open in an iframe. I have created a separate index.html file containing one iframe whose src is the URL of one page of the NuxtJS app. When I open this index.html file in a browser, I get this error:
Refused to display 'https://example.com/widgets/new/' in a frame because it set 'X-Frame-Options' to 'deny'.
When I run the project locally (localhost) and use the local URL https://localhost:3000/widgets/new/ as the iframe's src, it works, but not in production.
I looked around the internet but couldn't find any related solution to try.
Is there any option or config in NuxtJS that I can use to set the X-Frame-Options header to allow framing on certain routes or pages?
I am using the Amazon S3 Bucket to store the project and using the Amazon CloudFront to serve the website.
In my webpack config I have the publicPath set like so:
publicPath: '/js'
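For context, here is roughly where that option sits in a full config (a sketch; the output path and filename are assumptions rather than my exact values):

const path = require('path');

module.exports = {
  // ...
  output: {
    path: path.resolve(__dirname, 'public/js'), // where bundle.js is written on disk
    publicPath: '/js',                          // URL prefix the browser uses to request it
    filename: 'bundle.js'
  }
};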
This way it points to public/js. Also, in my index.pug file, which is loaded by the server and is not in the public folder, I have this:
extends layout

block content
  main#app
  script(src="/js/bundle.js")
Unfortunately, this enables people accessing my site to visit example.com/js/bundle.js. Is there a way to prevent this?
If /js/bundle.js is a script file you are using in your web page, then there is NO way to prevent the browser from going directly to http://example.com/js/bundle.js. That's the exact URL the browser uses to load the script from your web page, so that URL has to work.
ALL JavaScript that runs in your web page is openly available to the public. You cannot change that. That's the architecture of the web and browsers.
Unfortunately, this enables people accessing my site to visit example.com/js/bundle.js. Is there a way to prevent this?
No. You cannot prevent it.
I am using React and React Router in my single page web application. Since I'm doing client side rendering, I'd like to serve all of my static files (HTML, CSS, JS) with a CDN. I'm using Amazon S3 to host the files and Amazon CloudFront as the CDN.
When the user requests /css/styles.css, the file exists so S3 serves it.
When the user requests /foo/bar, this is a dynamic URL, so S3 redirects it to a hashbang version: /#!/foo/bar. This serves index.html. On the client side I remove the hashbang so my URLs are pretty.
This all works great for 100% of my users.
All static files are served through a CDN
A dynamic URL will be routed to /#!/{...} which serves index.html (my single page application); the routing rule behind this redirect is sketched after this list
My client side removes the hashbang so the URLs are pretty again
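Expressed in the shape the AWS SDK for JavaScript uses for a bucket website configuration, the redirect behaviour above corresponds to a routing rule roughly like this (a sketch; the hostname is a placeholder, not my actual setup):

const routingRules = [{
  Condition: { HttpErrorCodeReturnedEquals: '404' }, // the path matches no object in the bucket
  Redirect: {
    HostName: 'example.com',                         // placeholder domain
    ReplaceKeyPrefixWith: '#!/'                      // /foo/bar -> /#!/foo/bar
  }
}];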
The problem
The problem is that Google won't crawl my website. Here's why:
Google requests /
They see a bunch of links, e.g. to /foo/bar
Google requests /foo/bar
They get redirected to /#!/foo/bar (302 Found)
They remove the hashbang and request /
Why is the hashbang being removed? My app works great for 100% of my users so why do I need to redesign it in such a way just to get Google to crawl it properly? It's 2016, just follow the hashbang...
</rant>
Am I doing something wrong? Is there a better way to get S3 to serve index.html when it doesn't recognize the path?
Setting up a node server to handle these paths isn't the correct solution because that defeats the entire purpose of having a CDN.
In this thread, Michael Jackson, a top contributor to React Router, says "Thankfully hashbang is no longer in widespread use." How would you change my setup to not use the hashbang?
You can also check out this trick: set up a CloudFront distribution and then alter the 404 behaviour in the "Error Pages" section of your distribution. That way you can use domain.com/foo/bar links again :)
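In API terms, that "Error Pages" behaviour corresponds to the CustomErrorResponses part of the distribution config; a rough sketch of just that fragment (not a complete distribution config):

const customErrorResponses = {
  Quantity: 1,
  Items: [{
    ErrorCode: 404,                  // S3 returns 404 for /foo/bar
    ResponseCode: '200',             // the viewer still gets a 200 ...
    ResponsePagePath: '/index.html', // ... with the SPA entry point
    ErrorCachingMinTTL: 0
  }]
};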
I know this is a few months old, but for anyone who comes across the same problem: you can simply specify "index.html" as the error document in S3. The error document property can be found under bucket Properties => Static Website Hosting => Enable website hosting.
Keep in mind that taking this approach means you will be responsible for handling HTTP errors like 404 in your own application, along with other HTTP errors.
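For reference, the same error-document setting can also be applied programmatically; a rough sketch with the AWS SDK for JavaScript (the bucket name is a placeholder):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.putBucketWebsite({
  Bucket: 'my-spa-bucket',                   // placeholder
  WebsiteConfiguration: {
    IndexDocument: { Suffix: 'index.html' },
    ErrorDocument: { Key: 'index.html' }     // unknown paths fall back to the SPA shell
  }
}, function (err) {
  if (err) console.error(err);
});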
The hashbang is not recommended when you want to make an SEO-friendly website; even if it is indexed by Google, the page will show only thin content.
The best way to build your website is to use the latest techniques, namely "Progressive web enhancement"; search for it on Google and you will find many articles about it.
Mainly, you should have a separate link for each page, and when the user clicks on any page they are taken to it with whatever effect you want, even if it is a single-page website.
In this case, Google will have a unique link for each page, and the user gets the fancy effect and a great UX.
Example: a "Contact Us" link that points to its own URL.
I have an API Gateway in AWS that calls a Lambda function that returns some HTML. That HTML is then properly rendered on the screen, but without any styles or JS files included. How do I get those to the client as well? Is there a better method than creating /js and /css GET endpoints on the API Gateway to fetch those files? I was hoping I could just store them in S3 and they'd get loaded automatically from there.
Store them on S3, and enable S3 static website hosting. Then include the correct URL to those assets in the HTML.
I put in the exact address of each JS/CSS file I wanted to include in my HTML. You need to use the https address, not the http address of the bucket. Each file has its own https address, which can be found by following Mark B's instructions above: in the AWS admin console, navigate to the file in the S3 bucket, click the "Properties" button in the upper right, copy the "Link" field, and paste that into the HTML file (which was also hosted in S3 in my case). The HTML looks like this:
<link href="https://s3-us-west-2.amazonaws.com/my-bucket-name/css/bootstrap.min.css" rel="stylesheet">
I don't have static website hosting enabled on the bucket. I don't have any CORS permissions allowing reading from a certain host.
I am starting to play with ExpressJS for an app. I use the app.use(express.static(__dirname + '/public')); line to configure access to the public folder.
But as I use a CDN, I would like it to point to the public folder, which will contain the JS, CSS & img files. For example:
http://cdn.com/public/css/style.css
Can anyone help me fix this issue? Thanks.
"Use a CDN" means "load files from the CDN's servers instead of your own app server". Thus when you use a CDN, your app server does not handle those files. You just need to change your URLs in your HTML to point to the CDN.
Actually, CDN providers like MaxCDN expect a readable folder. So I'm looking for the Express parameter that replaces __dirname.
So I did a simple thing: I nested the directory, so that I get URLs like http://domain.tld/public/img/someImage.jpg.
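For anyone who wants the same URL shape without physically nesting the folder, mounting the static middleware at /public is roughly equivalent (a sketch; the paths and port are my own choices, not from the question):

const path = require('path');
const express = require('express');
const app = express();

// serve ./public under the /public URL prefix, e.g.
// http://domain.tld/public/img/someImage.jpg
app.use('/public', express.static(path.join(__dirname, 'public')));

app.listen(3000);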
You could use the express-simple-cdn Node module and then use the CDN() function in your Jade template:
link(rel="stylesheet", href=CDN('/css/style.css'))
It would output:
<link rel="stylesheet" href="http://cdn.com/public/css/style.css">