nuxt/vue route url to hashed-url

I didn't manage to find much on this. I am using anchors on my website, so I end up with URLs like
localhost:3000/#about.
I am wondering if it's possible to route a URL such as localhost:3000/about to the hashed URL localhost:3000/#about without having a physical about page.
Essentially, when I directly access the plain link, I want to be routed to the hashed/anchored page.
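One possible approach (a sketch only, assuming Nuxt 2 with vue-router 3, and that the #about anchor lives on the index page) would be to register an extra route in nuxt.config.js that redirects the plain path to the hash:

```js
// nuxt.config.js — hypothetical sketch: /about has no page of its own,
// it just redirects to the index page with the #about fragment.
export default {
  router: {
    extendRoutes(routes, resolve) {
      routes.push({
        path: '/about',
        redirect: { path: '/', hash: '#about' },
      })
    },
  },
}
```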

Related

Single Page Application Routing

Modern single-page applications use routing mechanisms that don't have to rely on fragments or additional URL parameters but simply leverage the URL path. How does the browser know when to ask the server for a resource and when to ask the single-page application for a page controlled by a router? Is there a browser API that makes it possible to take over control of URL handling, which is then handled by e.g. vue-router or another SPA routing library?
In Vue Router (and I assume other libraries/frameworks are the same) this is achieved through the HTML5 History API (pushState(), replaceState(), and the popstate event), which allows you to manipulate the browser's history without causing the browser to reload the page or request a resource, keeping the UI in sync with the URL.
For example, observe what happens to the address bar when you enter this command in your browser's console:
history.pushState({urlPath:'/some/page/on/stackoverflow'},"",'/some/page/on/stackoverflow')
The new URL is even added to your browser's history so if you navigate away from the page and come back to it you'll be directed to the new URL.
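A router built on this API also listens for the popstate event so the UI stays in sync when the user presses the back/forward buttons. A minimal sketch, where renderRoute is a hypothetical helper:

```js
// Fired when the user navigates through history (back/forward buttons).
window.addEventListener('popstate', (event) => {
  renderRoute(window.location.pathname, event.state)
})

// Hypothetical helper: look up and render the view for a path, client-side.
function renderRoute(path, state) {
  console.log('rendering', path, 'with state', state)
}
```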
Of course, all these URLs are non-existent on the server. So, to avoid 404 errors when a user tries to directly access a non-existent resource, you'd have to add a fallback route that serves your index.html page, where your app lives.
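As an illustration, such a fallback in an Express server (assuming the built app lives in dist/) could look like this sketch:

```js
const express = require('express')
const path = require('path')

const app = express()

// Serve real static assets (JS, CSS, images) directly.
app.use(express.static(path.join(__dirname, 'dist')))

// Every other GET falls back to index.html; the client-side router takes over.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'))
})

app.listen(3000)
```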
Vue Router's HTML5 History Mode
React Router's <BrowserRouter>
How does the browser know when to ask the server for a resource and when to ask the single page application for a spa-page controlled by a router?
SPA frameworks use routing libraries.
Suppose your JavaScript app is already loaded in the browser. When you navigate to a route that is defined in your routes array, the library prevents an HTTP call to the server and handles the navigation internally in your JavaScript code. Otherwise the call is forwarded to the server as an HTTP GET request.
Here is an answer that describes this behaviour with a clear scenario.
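To make the scenario concrete, here is a rough sketch of how such a library might intercept in-app link clicks (routes and renderRoute are hypothetical names, not a real library API):

```js
// Hypothetical set of paths the client-side router knows about.
const routes = new Set(['/home', '/about'])

document.addEventListener('click', (event) => {
  const link = event.target.closest('a')
  if (link && link.origin === location.origin && routes.has(link.pathname)) {
    event.preventDefault()                   // no HTTP GET to the server
    history.pushState({}, '', link.pathname) // update the address bar
    renderRoute(link.pathname)               // handle it in JavaScript instead
  }
  // Any other link falls through to a normal server request.
})
```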

How to remove hashtag(#) from vue-router URL?

I want to remove the hashtag (#) from URLs, but I also need to keep the no-reload behavior. Can I do that?
I have: page.com/#/home
I want: page.com/home
I tried mode: 'history', but the page reloads with it.
UPD: Is it possible to create an SPA without page reloads and with normal URLs?
When activating history mode, you first need to configure your server according to the documentation. The reason is that history mode just changes the URL of the current page; when the user actually reloads the page, they'll get a 404 error because the requested URL doesn't actually exist on the server. Configuring the server to always serve your SPA's main index.html resolves this issue.
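For reference, a minimal sketch of activating history mode (vue-router 3 syntax; Home is a placeholder component):

```js
import Vue from 'vue'
import VueRouter from 'vue-router'
import Home from './Home.vue' // placeholder component

Vue.use(VueRouter)

const router = new VueRouter({
  mode: 'history', // clean URLs like page.com/home, no '#'
  routes: [{ path: '/home', component: Home }],
})
```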
When using a # in the URL (no history mode), the browser tries to navigate to the element whose ID follows the # (within the same document). This was the original behavior of the fragment identifier. Therefore, if you add a link with such a fragment identifier to your HTML, the browser won't reload the page but will instead look for the ID inside the document. vue-router watches this change and routes you to the correct route; this is why it works with hashes. If you add a regular URL to the HTML instead, the browser's native behavior is to actually navigate to that page (a hard link), which produces the reload effect you experienced.
The way to handle this is to never use regular links to route within a Vue single-page application. Use the <router-link> tag for routing between pages (but only within the SPA). This is the way to go regardless of whether the browser allows # navigation without reloading. Here is the documentation for the recommended routing tag: link
You can also route from one route to another programmatically with $router.push(). Here is the documentation for that: link
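A small sketch showing both approaches side by side (the component is illustrative):

```js
// Illustrative component: declarative and programmatic in-app navigation.
Vue.component('app-nav', {
  template: `
    <nav>
      <!-- renders an <a>, but navigates client-side without a reload -->
      <router-link to="/home">Home</router-link>
      <button @click="goHome">Go home</button>
    </nav>
  `,
  methods: {
    goHome() {
      this.$router.push('/home') // programmatic equivalent of the link above
    },
  },
})
```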

React Router + AWS Backend, how to SEO

I am using React and React Router in my single page web application. Since I'm doing client side rendering, I'd like to serve all of my static files (HTML, CSS, JS) with a CDN. I'm using Amazon S3 to host the files and Amazon CloudFront as the CDN.
When the user requests /css/styles.css, the file exists so S3 serves it.
When the user requests /foo/bar, this is a dynamic URL, so S3 redirects to the hashbang version /#!/foo/bar, which serves index.html. On the client side I remove the hashbang so my URLs are pretty.
This all works great for 100% of my users.
All static files are served through a CDN
A dynamic URL will be routed to /#!/{...}, which serves index.html (my single-page application)
My client side removes the hashbang so the URLs are pretty again (sketched below)
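That client-side cleanup step could be as simple as this sketch (a guess at the general shape, not the asker's actual code):

```js
// On load, if S3 redirected us to /#!/foo/bar, restore the pretty URL.
if (window.location.hash.startsWith('#!')) {
  const path = window.location.hash.slice(2) // '#!/foo/bar' -> '/foo/bar'
  window.history.replaceState(null, '', path)
}
```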
The problem
The problem is that Google won't crawl my website. Here's why:
Google requests /
They see a bunch of links, e.g. to /foo/bar
Google requests /foo/bar
They get redirected to /#!/foo/bar (302 Found)
They remove the hashbang and request /
Why is the hashbang being removed? My app works great for 100% of my users, so why do I need to redesign it just to get Google to crawl it properly? It's 2016, just follow the hashbang...
</rant>
Am I doing something wrong? Is there a better way to get S3 to serve index.html when it doesn't recognize the path?
Setting up a node server to handle these paths isn't the correct solution because that defeats the entire purpose of having a CDN.
In this thread, Michael Jackson, a top contributor to React Router, says "Thankfully hashbang is no longer in widespread use." How would you change my setup to not use the hashbang?
You can also check out this trick. You need to set up a CloudFront distribution and then alter the 404 behaviour in the "Error Pages" section of your distribution. That way you can use domain.com/foo/bar links again :)
I know this is a few months old, but for anyone who comes across the same problem: you can simply specify "index.html" as the error document in S3. The error document property can be found under bucket Properties => Static Website Hosting => Enable website hosting.
Please keep in mind that taking this approach means you will be responsible for handling HTTP errors like 404 in your own application, along with other HTTP errors.
The hashbang is not recommended when you want to make an SEO-friendly website; even if it's indexed in Google, the page will display only thin content.
The best way to build your website is with the current technique known as progressive enhancement; search for it on Google and you will find many articles about it.
Mainly, you should have a separate link for each page, and when the user clicks on any page they are taken to it using whatever effect you want, even if it's a single-page website.
In this case, Google will have a unique link for each page and the user will have the fancy effect and a great UX.
EX: a "Contact Us" page reachable at its own URL (e.g. /contact-us) rather than only via a hash fragment.

How to Avoid a Mixed-Content Error When Displaying a Search Result?

Question:
How can I include both https: and http: results from a single domain in a Google custom search engine but display any such result in an iframe with a secure parent window?
How It's Structured:
My Google custom search engine currently searches "mydomainname.com/directory/" with the option to "Include all pages whose address contains this URL". It operates on a specific page of the website to search pages within the specified directory. The Link Target set in Websearch Settings is an iframe on the same page as the search bar.
The browser window and the iframe src are both on the same secure domain. And since the search results are all from a directory within the site structure, are all on this same domain as well.
Currently some results appear as "https://..." and some appear as "http://www...". Obviously, this creates a mixed-content error when the browser window is https:// and an attempt is made to display an http:// search result in the iframe.
The results that are http:// will, of course, also work as https:// URLs. I do not know what makes a page or file appear in the search results as "www." or "https://" when they all originate from a single secure domain.
The "http://" results appear even if I specify the site to be searched as https://www.mydomainname.com/directory/. I don't want to exclude these results, but I want them to be displayable when browsing the site securely.
The Objective:
So the bottom-line rule that I need to work around is that insecure pages or files cannot be loaded into an iframe on a secure web page. I obviously want users to be able to utilize the https:// site but then I need the search to function in such a way that allows for all possible search results for these users.
The reason I need the results' target to be this iframe is that this frame displays all the content of the web page. The search results work in harmony with the organization of other information, such that choosing a link from a category in the page's navigation and choosing a search result both display the chosen content in the same location: the iframe.
What I've Tried:
I've tried designating https:// specifically in the Google Custom Search Engine settings and removing : 'http' from the script line gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') + '//cse.google.com/cse.js?cx=' + cx;.
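For comparison, forcing the loader to https unconditionally would look like this (a sketch based on the standard CSE async snippet):

```js
var cx = '012685392925564329750:ghl2znnfada';
var gcse = document.createElement('script');
gcse.type = 'text/javascript';
gcse.async = true;
// Always https, instead of mirroring document.location.protocol:
gcse.src = 'https://cse.google.com/cse.js?cx=' + cx;
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(gcse, s);
```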
I looked in the script file that it's linking to: http://cse.google.com/cse.js?cx=012685392925564329750:ghl2znnfada but I can't decipher what might need to be changed in it.
In the console's error log I don't see much of relevance except the expected inability to load insecure pages while browsing securely. But there is this, which looks like it may be relevant, though I could be completely wrong because I can't really decipher it either:
Mixed Content: The page at 'https://mydomainname.com/directory/index.php' was loaded over HTTPS, but requested an insecure script 'http://www.google.com/jsapi?key=ABQIAAAAdCtw6Xq1Q31YAr7VSQOSvxS5g7WKqCWUBuUdhz3-rUOumR2saRSPGvey2WjYALW7f5_JzakSL3lAEg'. This request has been blocked; the content must be served over HTTPS.
Insecure Script from Error Message:
http://www.google.com/jsapi?key=ABQIAAAAdCtw6Xq1Q31YAr7VSQOSvxS5g7WKqCWUBuUdhz3-rUOumR2saRSPGvey2WjYALW7f5_JzakSL3lAEg
Proposed Paths to a Solution:
I am open to any solution methods that may be possible. I have considered several routes but am not sure how to properly execute them or have failed in my attempts to execute them.
Some solutions I thought may work are:
Show all results as https:// links (without excluding any) so that they can be accessed whether on a secure connection to the site or not.
Redirect any links clicked without https:// to be loaded into the iframe as https://
Change something about the pages and files on the server so that they only appear in the search results as https://
Change something about Google's search engine script so it parses all found results as https://
Somehow show links as http:// if browsing non-secure, and https:// if browsing secure *
*I don't know how viable or efficient this would be
The most robust solution is to migrate your whole website to https:
use a 301 (permanent) redirect from http to https
and activate HSTS (if possible with includeSubDomains)
Google will take a little time to update its index, but HSTS will automatically replace http with https, so you should avoid any mixed-content issues.
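As an illustration of both steps (a sketch assuming a Node/Express app; other stacks have equivalent settings):

```js
const express = require('express')
const app = express()

// 301-redirect plain http to https, and send HSTS on secure responses.
app.use((req, res, next) => {
  if (!req.secure) {
    return res.redirect(301, 'https://' + req.headers.host + req.originalUrl)
  }
  // One year, covering subdomains, as suggested above.
  res.setHeader('Strict-Transport-Security',
    'max-age=31536000; includeSubDomains')
  next()
})
```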

How to Inform Google For Page URL Modifications in Same Domain?

I am renewing my web page and changing the site structure. It was in ASP and now it will be in ASP.NET.
So page URLs will be modified. Some pages will be removed and some will be added, but mostly the content and page names are the same; only the URLs will change.
The site has had SEO work done on it and we want to lose as little of that as possible. The site is registered in Analytics and Webmaster Tools.
Google searches will end up on blank pages and I don't want to lose my rank.
So I'm looking for a way to inform Google about the new page URLs. The domain is the same; only the URLs change. For example: the home page was /default.asp and is now /home.aspx.
Is there a way to tell Google that a particular URL address or page name has changed?
If all that is changing is the page URLs, Google Analytics cannot "know" that a page is the same, just with a different URL.
But you could send a customized pageview using the _trackPageview() method, giving it the original URL as a parameter.
If you choose to do this, you will have to remove the line that calls the method in the original GA snippet and call it elsewhere, or pass the original URL to it directly as a parameter. All this is done on each page.
You can also read more about the method here.
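In the classic ga.js async syntax (which _trackPageview belongs to), the override might look like this sketch (the property ID is a placeholder):

```js
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXXX-1']); // placeholder property ID
// Report the old URL so Analytics keeps counting this as the same page:
_gaq.push(['_trackPageview', '/default.asp']);
```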
For IIS (Asp.Net) you want to look into the following to find out how to do 301 redirects:
Response.RedirectPermanent(...) for redirecting from a page
or
"IIS 7 Routing Module and web.config" to set up bulk redirecting
I'd also suggest you consider supporting search-engine-friendly (SEF) URLs while you're making the move. The Routing Module can help you there as well.
You need to implement some form of 301 redirect (the 301 status code is key). This way, when Google or any other search engine hits the old page, the index is refreshed with the new page. ASP.NET allows you to do these redirects even at the IIS level, which is where I'd suggest they live. You'll also want to submit an up-to-date sitemap in Webmaster Tools.
Edit: Here's a good link on the redirects: http://www.iis.net/ConfigReference/system.webServer/httpRedirect