Always serve index.html when page not found - asp.net-core

I am about to deploy a React app that uses react-router on our intranet. I am using Kestrel without a reverse proxy.
When I start browsing the site by typing https://myserver/, the page is served and I can click links that take me to https://myserver/subpage. However, subpage does not exist inside wwwroot; react-router only uses that path to decide what to display. If the user then presses the browser's reload button, a 404 is returned.
Should I configure Kestrel to serve index.html whenever the requested resource is not found? If so, how? Or is there a more elegant solution?

Related

app.MapFallbackToFile causes the entire SPA site to reload if the URL is typed manually

I use the currently recommended SPA + .NET Core Web API pattern, where the FE references the BE: during development the FE dev server proxies API calls to the BE, and in production app.UseDefaultFiles() serves the index.html in which the SPA resides. With this pattern no proxy middleware is required on the BE, unlike the old approach working in the opposite direction, where the BE served the FE through a proxy.
app.UseDefaultFiles();                // <-- here the site is loaded the first time
app.UseStaticFiles();
app.MapControllers();
app.MapFallbackToFile("/index.html"); // <-- here the site is reloaded if the URL is typed (changed) manually
Client-side routing is the point here. Specifically, I use Vue Router and IIS hosting. When the site is already open and the user types a URL into the browser, the request falls through to app.MapFallbackToFile("/index.html") and then Vue Router handles the route.
The problem is that in this scenario the site always reloads completely when the URL is merely changed (say from mysite.com/a to mysite.com/b), just as if I had pressed F5. That is not necessarily bad, but I would like to control it.
The question is: how can I get rid of app.MapFallbackToFile("/index.html") and somehow pass the captured URL to the SPA, as if it were a bare SPA without the backend that now sits in front of the frontend?
I have tried a Vue SPA with the ASP.NET Core 6 minimal setup, and it seems to me that there is no way to achieve what you want.
When the user enters or changes the URL, the browser navigates away from the page and makes a GET request to the BE (backend).
This is where the catch-all fallback route is required; otherwise the user gets a 404 error from the web server.
I presume you use HTML5 history mode. Here is a part of the Vue Router docs about this problem.
Since our app is a single page client side app, without a proper server configuration, the users will get a 404 error if they access https://example.com/user/id directly in their browser. Now that's ugly.
Not to worry: To fix the issue, all you need to do is add a simple catch-all fallback route to your server. If the URL doesn't match any static assets, it should serve the same index.html page that your app lives in. Beautiful, again!
If somebody does know a solution, please post a new answer.
It would be great to know how to do it!

Vue: 404 page not found after refreshing the page

I have a Vue website that works correctly on localhost. After I built it and uploaded it to the server, the routes work fine but I have two problems:
1. When I click on a route and the page opens, refreshing that page gives me a 404 page not found error.
2. The connection to the API via axios doesn't work.
How can I solve them?
The problem is your web server. Make sure that your web server (Apache, Nginx, Express, etc.) always points to index.html.
Your web server is not aware that the SPA should do the routing.
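For example, with Express that could be a catch-all fallback like the sketch below. This is a rough illustration only; it assumes the built Vue app lives in a dist folder next to the server script, and Apache or Nginx would need the equivalent rewrite/try_files rule instead.
const express = require("express");
const path = require("path");

const app = express();
const dist = path.join(__dirname, "dist"); // assumed location of the built Vue app

// Serve the files that really exist on disk (JS, CSS, images, ...).
app.use(express.static(dist));

// Everything else falls back to index.html so the Vue router
// can resolve the route on the client.
app.use((req, res) => {
  res.sendFile(path.join(dist, "index.html"));
});

app.listen(8080);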

Single Page Application Routing

Modern single-page applications use routing mechanisms that don't have to rely on fragments or additional URL parameters but simply leverage the URL path. How does the browser know when to ask the server for a resource and when to ask the single-page application for a page controlled by a router? Is there a browser API that makes it possible to take over control of URL handling, which is then used by e.g. vue-router or another SPA routing library?
In Vue Router (and I assume other libraries/frameworks are similar) this is achieved through the HTML5 History API (pushState(), replaceState(), and the popstate event), which allows you to manipulate the browser's history without causing the browser to reload the page or request a resource, keeping the UI in sync with the URL.
For example, observe what happens to the address bar when you enter this command in your browser's console
history.pushState({urlPath:'/some/page/on/stackoverflow'},"",'/some/page/on/stackoverflow')
The new URL is even added to your browser's history so if you navigate away from the page and come back to it you'll be directed to the new URL.
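The other direction, the browser's back/forward buttons, is reported to the page through the popstate event mentioned above. A tiny illustration you could paste into the same console session:
// Fired when the user navigates back/forward between entries created with pushState().
window.addEventListener("popstate", (event) => {
  // event.state is whatever object was passed to pushState(), e.g. { urlPath: ... }
  console.log("URL is now", location.pathname, "state:", event.state);
});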
Of course all these URLs are non-existent on the server. So to avoid the problem of 404 errors when a user tries to directly access a non-existent resource you'd have to add a fallback route that redirects to your index.html page where your app lives.
Vue Router's HTML5 History Mode
React Router's <BrowserRouter>
How does the browser know when to ask the server for a resource and when to ask the single page application for a page controlled by a router?
SPA Frameworks use routing libraries.
Suppose your JavaScript app is already loaded in the browser. When you navigate to a route that is defined in your routes array, the library prevents an HTTP call to the server and handles it internally in your JavaScript code. Otherwise the call is forwarded to the server as a GET request.
Here is an answer that describes this behaviour with a clear scenario.
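A very rough sketch of that interception, with a made-up routes table and render() helper purely for illustration (real routers such as vue-router are far more involved):
// Made-up route table and render() helper, for illustration only.
const routes = {
  "/a": () => render("Page A"),
  "/b": () => render("Page B"),
};

function render(content) {
  document.getElementById("app").textContent = content;
}

// Intercept clicks on in-app links: change the URL without a request to the server.
document.addEventListener("click", (event) => {
  const link = event.target instanceof Element ? event.target.closest("a") : null;
  if (link && link.origin === location.origin && routes[link.pathname]) {
    event.preventDefault();                   // stop the browser's normal GET request
    history.pushState({}, "", link.pathname); // update the address bar, no reload
    routes[link.pathname]();                  // let the "router" render the view
  }
  // Any other link falls through and is forwarded to the server as a GET request.
});

// The back/forward buttons fire popstate; re-render the matching view.
window.addEventListener("popstate", () => {
  if (routes[location.pathname]) routes[location.pathname]();
});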

tarruda datetimepicker link does not work on https

I am using the tarruda datetimepicker for my project. It all works fine until I move to HTTPS; the tarruda datetimepicker link is HTTP, and I get this warning:
Mixed Content: The page at 'https://mywebsite.com' was loaded over HTTPS, but requested an insecure stylesheet 'http://tarruda.github.io/bootstrap-datetimepicker/assets/css/bootstrap-datetimepicker.min.css'. This request has been blocked; the content must be served over HTTPS.
What can I do to fix this?
Host the file locally or change the link to use HTTPS: https://tarruda.github.io/bootstrap-datetimepicker/assets/css/bootstrap-datetimepicker.min.css. I'd prefer hosting the file locally over fixing the link, as it is not a CDN and the owner could choose to discontinue the GitHub page, essentially killing your link.

React Router + AWS Backend, how to SEO

I am using React and React Router in my single page web application. Since I'm doing client side rendering, I'd like to serve all of my static files (HTML, CSS, JS) with a CDN. I'm using Amazon S3 to host the files and Amazon CloudFront as the CDN.
When the user requests /css/styles.css, the file exists so S3 serves it.
When the user requests /foo/bar, this is a dynamic URL so S3 adds a hashbang: /#!/foo/bar. This will serve index.html. On my client side I remove the hashbang so my URLs are pretty.
This all works great for 100% of my users.
All static files are served through a CDN
A dynamic URL will be routed to /#!/{...} which serves index.html (my single page application)
My client side removes the hashbang so the URLs are pretty again
The problem
The problem is that Google won't crawl my website. Here's why:
Google requests /
They see a bunch of links, e.g. to /foo/bar
Google requests /foo/bar
They get redirected to /#!/foo/bar (302 Found)
They remove the hashbang and request /
Why is the hashbang being removed? My app works great for 100% of my users so why do I need to redesign it in such a way just to get Google to crawl it properly? It's 2016, just follow the hashbang...
</rant>
Am I doing something wrong? Is there a better way to get S3 to serve index.html when it doesn't recognize the path?
Setting up a node server to handle these paths isn't the correct solution because that defeats the entire purpose of having a CDN.
In this thread Michael Jackson, a top contributor to React Router, says "Thankfully hashbang is no longer in widespread use." How would you change my setup to not use the hashbang?
You can also check out this trick: set up a CloudFront distribution and then alter the 404 behaviour in the "Error Pages" section of your distribution. That way you can use domain.com/foo/bar links again :)
I know this is a few months old, but for anyone who comes across the same problem: you can simply specify "index.html" as the error document in S3. The error document property can be found under bucket Properties => Static Website Hosting => Enable website hosting.
Please keep in mind that taking this approach means you will be responsible for handling HTTP errors like 404 in your own application, along with other HTTP errors.
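As a rough illustration of the client-side half of that (assuming a recent React Router version with <BrowserRouter>; the Home and NotFound components here are made up):
import { BrowserRouter, Routes, Route } from "react-router-dom";

// Hypothetical page components, for illustration only.
const Home = () => <h1>Home</h1>;
const NotFound = () => <h1>404 - page not found</h1>;

export default function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        {/* Any path the router does not recognise renders the 404 view,
            since S3/CloudFront will have served index.html for it anyway. */}
        <Route path="*" element={<NotFound />} />
      </Routes>
    </BrowserRouter>
  );
}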
The hashbang is not recommended when you want an SEO-friendly website; even if it gets indexed by Google, the page will show only a little, thin content.
The best way to build your website is to use the latest techniques, namely "progressive web enhancement"; search for it on Google and you will find many articles about it.
Mainly, you should have a separate link for each page, and when the user clicks on any page they are taken to that page, with whatever effect you want, even if it is a single-page website.
That way, Google will have a unique link for each page and the user still gets the fancy effect and the great UX.
Example: a "Contact Us" link that points to its own URL.