We are about to officially launch Cleanfox: www.cleanfox.io
The issue is that Google indexes the website only in English: when I look at the search results on Google.fr, the indexed content is in English.
I have gone through all the required steps in Google Webmaster Console, adding both FR and EN. I added hreflang attributes both in the meta tags and in the links (the two links that lead to the other language)... But nothing happens; all the content is still indexed in English.
The problem is that you are using the same URL for both languages (see answer with more details).
Furthermore, with rel-alternate+hreflang you should point to translations of the current document, but you always seem to point to /en and /fr (which then redirect to /). So, for example, the following declaration on https://www.cleanfox.io/forest is wrong:
<link rel="alternate" hreflang="fr" href="https://www.cleanfox.io/fr">
<link rel="alternate" hreflang="en" href="https://www.cleanfox.io/en">
Neither /fr nor /en is a translation of /forest.
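To illustrate (assuming, hypothetically, that the French version of that page lived at /fr/forest; I don't know how your French URLs are actually organised), the declarations on https://www.cleanfox.io/forest would need to point at the translation of that specific page:
<link rel="alternate" hreflang="en" href="https://www.cleanfox.io/forest">
<link rel="alternate" hreflang="fr" href="https://www.cleanfox.io/fr/forest">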
For this you have to create a separate page for the FR language, so Google will consider them to be different pages and will crawl each one.
Related
I'm trying to relocate a few select posts from my Blogger URL to my new blog on a Wix website.
I'm trying to use the meta refresh tag to get my SEO transferred for each of my Blogger posts.
Blogger does not provide 301 redirects outside of the Blogger domain, hence I'm using meta refresh tags.
I notice that Wix's blog pages have Ajax-based URL links. Should I be providing the URL (of the Wix post) in the meta refresh tag (in the Blogger post) with the "#!", or should the URL in the meta refresh be the one with "?_escaped_fragment_"?
Which of these URLs will transfer the SEO from the Blogger post to the Wix post?
If you intend to preserve the link profile and search engine optimisation value of the posts, then a Meta refresh cannot quite replace a 301 redirect.
To answer your question, though, Google can deal with hashbang (#!) as well as escaped fragments, depending on how the Wix site is coded. You should definitely refer to Google's guide to making AJAX crawlable:
https://developers.google.com/webmasters/ajax-crawling/docs/learn-more
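For reference, under that scheme a hashbang URL such as (made-up example):
http://example.wix.com/blog#!BlogPost
is fetched by the crawler as:
http://example.wix.com/blog?_escaped_fragment_=BlogPost
so the two forms identify the same content.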
Use the following code in the head tag:
<noscript>
<meta http-equiv="Refresh" content="3;url=yourpage.html">
</noscript>
Google can understand the #! sign; that would not be a problem.
If you query site:www.[something-made-with-wix].com on Google, you'll see all the links in the #! form in the results.
You can try this one as an example.
After much trial and error, I have found the answer to my own question.
Here's what happened when I did this on the old/url
<meta http-equiv="Refresh" content="2; URL=new/url/#!BlogPost" />
This did the redirection after 2 seconds, but after weeks of waiting, the old/url continued to show on Google and the new/url never showed up.
Then I tried this on the old/url:
<meta http-equiv="Refresh" content="2; URL=new/url/?_escaped_fragment_=BlogPost" />
This did not work either.
Then I figured out that if content=n (where n is a number other than 0), the refresh is treated as a 302 redirect, which is a temporary redirect.
So I tried the following:
<meta http-equiv="Refresh" content="0; URL=new/url/?_escaped_fragment_=BlogPost" />
Google's reaction to this was weird: the old/url got removed from the search results and the new/url was nowhere to be found either. This is bad; never do this.
The final option was:
<meta http-equiv="Refresh" content="0; URL=new/url/#!=BlogPost" />
This finally did the trick: the link juice passed from the old/url to the new/url after a few days. It is important, however, to go to Google Webmaster Tools and request a re-crawl of the old/url; only then will the link juice be passed on.
Please look into this; it may be useful for you:
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>
Welcome Back
</title>
<meta http-equiv="Refresh" content="2; URL=/wwstore/Profile.aspx" />
</head>
You can add this into an ASP.NET page with code like this:
// *** Create META tag and add to header controls
HtmlMeta RedirectMetaTag = new HtmlMeta();
RedirectMetaTag.HttpEquiv = "Refresh";
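// The resulting Content is e.g. "2; URL=/newpage.aspx": the first placeholder is the
// delay in seconds (read from Context.Items here), the second is the redirect target URL.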
RedirectMetaTag.Content = string.Format("{0}; URL={1}", this.Context.Items["ErrorMessage_Timeout"], NewUrl);
this.Header.Controls.Add(RedirectMetaTag);
But I never put 2 and 2 together to realize that the meta tag is actually mapping an HTTP header. A much easier way to do this is to simply add a header:
Response.AppendHeader("Refresh", "4");
Or refresh and go off to another page:
Response.AppendHeader("Refresh", "4; url=profile.aspx");
For more details please look here: http://weblog.west-wind.com/posts/2006/Aug/04/No-more-Meta-Refresh-Tags
Based on the Google info about hreflang, I came up with the markup below, but I have the en and x-default entries pointing to the same URL instead of having a separate en/ folder. Will that be fine? I don't want to create another folder, as it requires additional maintenance.
Basically, the default and main site is in English. If users need to see the other language, they just go to the extra zh-* folder.
<link rel="alternate" href="http://example.com" hreflang="en">
<link rel="alternate" href="http://example.com/zh-hant" hreflang="zh-Hant">
<link rel="alternate" href="http://example.com/zh-hans" hreflang="zh-Hans">
<link rel="alternate" href="http://example.com" hreflang="x-default">
Also, is it okay to shorten them in the URL, from zh-hant to cht and zh-hans to chs?
If English is the default, you don't need to state an English alternative. The x-default entry alone is enough.
For the URLs, it's up to you. You can do whatever you like.
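As a sketch of that advice, keeping the folders from your question (and assuming the Simplified Chinese folder is really /zh-hans), the whole set would then just be:
<link rel="alternate" href="http://example.com" hreflang="x-default">
<link rel="alternate" href="http://example.com/zh-hant" hreflang="zh-Hant">
<link rel="alternate" href="http://example.com/zh-hans" hreflang="zh-Hans">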
Google Plus is pretty good at pulling images specified by Open Graph meta tags when standard URLs are shared like:
http://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048
(Screenshot of the resulting Google+ snippet omitted.)
But things start to get screwy when you start appending query strings, such as is done in this URL:
http://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048?utm_source=google-plus&utm_medium=social&utm_campaign=stackoverflow-general-promotion
And for certain URLs + query strings the default image seems to make no sense at all:
http://skeptics.stackexchange.com/questions/4508/can-every-grain-of-sand-be-addressed-in-ipv6?xyz_12312313
The image Google+ pulls for the URL above is the user pic of the guy who last left an answer to the shared question.
Is there any way to force Google Plus to fall back on images defined by og:image tags even when query strings are appended?
No, there is no way to force that fallback with Google+.
This behaviour is possible with Facebook's scraper because it supports checking og:url, which Google+ does not (why???). These are the items Google+ supports:
<meta property="og:title" content="..." />
<meta property="og:image" content="..." />
<meta property="og:description" content="..." />
Normally, when query parameters are added, a scraper that supports og:url would simply use the canonical URL defined there and pull the right image.
Their recommended format is schema.org microdata, as described at https://developers.google.com/+/web/snippet/
The order in which Google+ checks is:
1. Schema.org microdata
2. Open Graph
3. Title and meta description tags
4. Guess???
Seeing that multiple schema.org scopes are defined on the pages you linked, according to the https://developers.google.com/+/web/snippet/ documentation it should take the information from the itemscope defined nearest to the top:
<body class="question-page new-topbar" itemscope itemtype="http://schema.org/QAPage">
which is a little funny/weird, since their tool doesn't pick this up: http://www.google.com/webmasters/tools/richsnippets?q=stackoverflow.com%2Fquestions%2F22342854%2Fwhat-is-the-optimal-algorithm-for-the-game-2048%3Futm_source%3Dgoogle-plus%26utm_medium%3Dsocial%26utm_campaign%3Dstackoverflow-general-promotion
So, that brings us back to your second example.
The title is different as well, so og:title isn't being detected either; <title> is being scraped instead.
What does this all mean?
Google Plus sucks with markup for sharing.
You will need to adjust your topmost schema.org microdata and hope Google+ makes sense of it when params are added to the canonical URL.
<body itemscope itemtype="http://schema.org/QAPage">
<h1 itemprop="name">Shiny Trinket</h1>
<img itemprop="image" src="{image-url}" />
<p itemprop="description">Shiny trinkets are shiny.</p>
</body>
Read this in the FAQ section for Open Graph in Google+:
Why isn't my +Snippet image appearing?
Images that are too small or not square enough are not included in the +Snippet, even if the images are explicitly referenced by schema.org microdata or Open Graph markup. Specifically, the height must be at least 120px, and if the width is less than 100px, then the aspect ratio must be no greater than 3.0.
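To make that concrete (with a made-up URL and sizes, and assuming the ratio is measured height to width): a 90×300px portrait image would be dropped, since its width is under 100px and 300/90 ≈ 3.3 exceeds 3.0, while a square 400×400px image qualifies comfortably:
<meta property="og:image" content="https://example.com/images/share-400x400.png" />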
I'm using the "/myqcs/rest/places/feed" URL to get all places, and I need to extract the friendly URL name. I think the only way to do that is to look at the end of the "alternate" link.
For some places, the "alternate" link looks like this:
<link href="https://host/lotus/myquickr/driver-competitions-community" rel="alternate" type="application/atom+xml">
but for some, it looks like this:
<link href="https://host/lotus/myquickr/!ut/p/c4/04_SB8K8xLLM9MSSzPy8xBz9CP0os3hDC19DY0NfE0P3UBNHA09DY39nJz8Pz9AwU_2CbEdFALQNZ3I!/" rel="alternate" type="application/atom+xml">
So I can't get the friendly url from the second link, there's just a UID.
Why is that, and what can I do?
For a page to be accessible via a friendly URL, both the page and all of its parent pages must have friendly names assigned to them. The friendly URL is made up of all the friendly names in the page hierarchy, e.g. for the URL my/nested/page there are 3 pages assigned the friendly names 'my', 'nested' and 'page'. If the 'nested' page did not have a friendly name assigned to it, then a friendly URL could not be generated for the 'page' page.
For the pages that just generate a UID, verify that they have friendly names for their full hierarchy.
If the pages have a full path of friendly names assigned then I think you will need to delve into the Portal Navigation Model SPI and generate your own output, see:
http://publib.boulder.ibm.com/infocenter/wpdoc/v6r0/index.jsp?topic=/com.ibm.wp.ent.doc/wps/dgn_ptlnavig.html
http://publib.boulder.ibm.com/infocenter/wpdoc/v6r0/topic/com.ibm.wp.ent.doc/wps/nav_state_spi.html
I think I found the solution: when you are an admin, you can make yourself a site manager, and then the friendly URL is used by Quickr, so it probably boils down to site membership.
I have listing pages that take a page argument in the URL, like the following:
http://www.domain.com/foo/bar/?page=7
Should I just include the URL without params or should I list all pages in my sitemap.xml?
EDIT
The paginated pages are listings, like an index, so their content is also found (in more detail) on the detail pages. But these paginated pages are the only way to reach the detail pages.
I really wanted to find you a reliable source for this one, but I couldn't. Which means you'll have to make do with my intuition:
If the articles exist only in their paginated form, and you want them to be indexed as separate pages, list them all. They'll all have distinct content on them, so you won't be penalised for duplication.
I did find details of one exception: including page 1 twice. Basically, you need to choose whether the first page will be /foo/bar/?page=1 or just /foo/bar/, then do a 301 redirect from the version you don't want to use.
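A rough sketch of what that would look like in the sitemap (the paths and page count are just placeholders based on the question's URL pattern):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.domain.com/foo/bar/</loc></url>
  <url><loc>http://www.domain.com/foo/bar/?page=2</loc></url>
  <url><loc>http://www.domain.com/foo/bar/?page=3</loc></url>
</urlset>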
Hope this helps (even just a little).
Tom
No! You should add meta tags to your paginated pages. This helps Google understand your pagination system.
Example:
On page 1 you would add into <head>:
<link rel="next" href="http://www.example.com/article?story=abc&page=2" />
On page 2 you would add:
<link rel="prev" href="http://www.example.com/article?story=abc&page=1" />
<link rel="next" href="http://www.example.com/article?story=abc&page=3" />
On page 3 you would add:
<link rel="prev" href="http://www.example.com/article?story=abc&page=2" />
<link rel="next" href="http://www.example.com/article?story=abc&page=4" />
And on page 4 you would add:
<link rel="prev" href="http://www.example.com/article?story=abc&page=3" />
See this document: Pagination with rel=“next” and rel=“prev”
In this case the ?page=7 probably relates to the content management system's paging. You can add these URLs to your sitemap file. If you want each of these pages to be available to whatever consumes the sitemap, then yes, you should add them.