I have listing pages that take a page argument on the url like the following:
http://www.domain.com/foo/bar/?page=7
Should I just include the URL without params or should I list all pages in my sitemap.xml?
EDIT
The paginated pages are listings, like an index, so their content is also found (in more detail) on the detail pages. But these paginated listings are the only way to reach the detail pages.
I really wanted to find you a reliable source for this one, but I couldn't. Which means you'll have to make do with my intuition:
If the articles exist only in their paginated form, and you want them to be indexed as separate pages, list them all. They'll all have distinct content on them, so you won't be penalised for duplication.
I did find details of one exception: including page 1 twice. Basically you need to choose whether the first page will be /foo/bar/?page=1 or just /foo/bar/, then do a 301 redirect from the version you don't want to use.
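As an illustration only (this is my sketch, not official guidance), a sitemap.xml that lists every page, with /foo/bar/ as the canonical first page per the redirect advice above, might look like this:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Page 1 under its canonical URL, without the ?page=1 parameter -->
  <url><loc>http://www.domain.com/foo/bar/</loc></url>
  <url><loc>http://www.domain.com/foo/bar/?page=2</loc></url>
  <url><loc>http://www.domain.com/foo/bar/?page=3</loc></url>
  <!-- ...continue through the last page -->
</urlset>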
Hope this helps (even just a little).
Tom
No! You should add meta tags to your paginated pages. This helps Google understand your pagination system.
Example:
On page 1 you would add into <head>:
<link rel="next" href="http://www.example.com/article?story=abc&page=2" />
On page 2 you would add:
<link rel="prev" href="http://www.example.com/article?story=abc&page=1" />
<link rel="next" href="http://www.example.com/article?story=abc&page=3" />
On page 3 you would add:
<link rel="prev" href="http://www.example.com/article?story=abc&page=2" />
<link rel="next" href="http://www.example.com/article?story=abc&page=4" />
And on page 4 you would add:
<link rel="prev" href="http://www.example.com/article?story=abc&page=3" />
See this document: Pagination with rel="next" and rel="prev"
In this case the ?page=7 parameter probably relates to the content management system's page numbering. You can add these URLs to your sitemap file: if you want each of these pages to be discoverable by whatever consumes that file, then yes, you should add them.
We are about to officially launch Cleanfox: www.cleanfox.io
The issue is that Google indexes the website only in English: when I look at the search results on Google.fr, the indexed content is in English.
I have gone through all the required steps in Google Webmaster Console, adding both FR and EN. I added hreflang attributes in both meta and link elements (the two links that lead to the other language)... But nothing happens; all the content is just indexed in English.
The problem is that you are using the same URL for both languages (see the linked answer for more details).
Furthermore, with rel-alternate+hreflang you should point to translations of the current document, but you always seem to point to /en or /fr (which then redirect to /). So for example, the following declaration on https://www.cleanfox.io/forest is wrong:
<link rel="alternate" hreflang="fr" href="https://www.cleanfox.io/fr">
<link rel="alternate" hreflang="en" href="https://www.cleanfox.io/en">
Neither /fr nor /en is a translation of /forest.
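A corrected set of annotations on /forest would point to actual translations of that page. For example, assuming the French version lived at the hypothetical URL /fr/forest, the markup would be:
<link rel="alternate" hreflang="en" href="https://www.cleanfox.io/forest">
<link rel="alternate" hreflang="fr" href="https://www.cleanfox.io/fr/forest">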
For this you have to create a separate page for the FR language, so that Google will consider them two different pages and will crawl each one.
I'm trying to relocate a few select posts from my Blogger URL to my new blog located on a Wix website.
I'm trying to use the meta refresh tag to get my SEO transferred for each of my Blogger posts.
Blogger does not provide 301 redirects outside of the Blogger domain, hence I'm using meta refresh tags.
I notice that Wix's blog pages have Ajax-based URLs. Should I be providing the URL (of the Wix post) in the meta refresh tag (in the Blogger post) with the "#!", or should the URL in the meta refresh be the one with "?_escaped_fragment_"?
Which of these URLs will transfer the SEO from the blogger post to the Wix post?
If you intend to preserve the link profile and search engine optimisation value of the posts, then a Meta refresh cannot quite replace a 301 redirect.
To answer your question, though, Google can deal with hashbang (#!) as well as escaped fragments, depending on how the Wix site is coded. You should definitely refer to Google's guide to making AJAX crawlable:
https://developers.google.com/webmasters/ajax-crawling/docs/learn-more
Use the following code in the head tag:
<noscript>
<meta http-equiv="Refresh" content="3;url=yourpage.html">
</noscript>
Google can understand the #! sign. That would not be a problem.
If you query site:www.[something-made-with-wix].com on Google, you'll see all the links in the results in the #! form.
You can try this one as an example.
After much trial and error I have found the answer to my own question.
Here's what happened when I did this on the old/url
<meta http-equiv="Refresh" content="2; URL=new/url/#!BlogPost" />
This did the redirect after 2 seconds, but after weeks of waiting, the old/url continued to show on Google and the new/url never showed up.
Then I tried this on the old/url:
<meta http-equiv="Refresh" content="2; URL=new/url/?_escaped_fragment_=BlogPost" />
This did nothing as well.
Then I figured out that if content=n (where n is a number other than 0), the refresh is treated as a 302 redirect, which is a temporary redirect.
So I tried the following:
<meta http-equiv="Refresh" content="0; URL=new/url/?_escaped_fragment_=BlogPost" />
Google reacted weirdly to this one: the old/url got removed from the search results, and the new/url was nowhere to be found either. This is bad; never do this.
The final option was:
<meta http-equiv="Refresh" content="0; URL=new/url/#!=BlogPost" />
This finally did the trick. The link juice passed from the old/url to the new/url after a few days. It is important, however, to go to Google Webmaster Tools and get the old/url re-crawled; only then will the link juice be passed on.
Please look into this; it may be useful for you:
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>
Welcome Back
</title>
<meta http-equiv="Refresh" content="2; URL=/wwstore/Profile.aspx" />
</head>
You can add this into an ASP.NET page with code like this:
// *** Create META tag and add to header controls
HtmlMeta RedirectMetaTag = new HtmlMeta();
RedirectMetaTag.HttpEquiv = "Refresh";
RedirectMetaTag.Content = string.Format("{0}; URL={1}", this.Context.Items["ErrorMessage_Timeout"], NewUrl);
this.Header.Controls.Add(RedirectMetaTag);
But I never put 2 and 2 together to realize that the meta tag is actually mapping an HTTP header. A much easier way to do this is to simply add a header:
Response.AppendHeader("Refresh", "4");
Or refresh and go off to another page:
Response.AppendHeader("Refresh", "4; url=profile.aspx");
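Either way, the browser receives the same raw HTTP response header, which is exactly what the meta tag mirrors (a non-standard but widely supported header):
Refresh: 4; url=profile.aspx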
For more details please look here: http://weblog.west-wind.com/posts/2006/Aug/04/No-more-Meta-Refresh-Tags
Google Plus is pretty good at pulling images specified by Open Graph meta tags when standard URLs are shared like:
http://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048
See: [screenshot of the +Snippet generated for the URL above]
But things start to get screwy when you start appending query strings, such as is done in this URL:
http://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048?utm_source=google-plus&utm_medium=social&utm_campaign=stackoverflow-general-promotion
And for certain URLs + query strings the default image seems to make no sense at all:
http://skeptics.stackexchange.com/questions/4508/can-every-grain-of-sand-be-addressed-in-ipv6?xyz_12312313
The image featured in the above screengrab is the user pic of the guy who last left an answer to the shared question.
Is there any way to force Google Plus to fall back on images defined by og:image tags even when query strings are appended?
No, there is no way to get that fallback with Google+.
This behaviour is possible with Facebook's scraper because it supports checking for og:url, which Google+ does not support (why???). These are the items Google+ supports:
<meta property="og:title" content="..." />
<meta property="og:image" content="..." />
<meta property="og:description" content="..." />
Normally, when query parameters are added, a scraper would still fall back to the canonical page if og:url is defined.
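For example, the query-string variant of the Stack Overflow question could declare its canonical URL like this, which is what lets scrapers that honour og:url (such as Facebook's) resolve the shared link back to the original page:
<meta property="og:url" content="http://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048" />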
Their recommended format is schema.org markup, as described at https://developers.google.com/+/web/snippet/
The order in which Google+ checks:
1. Schema
2. Open Graph
3. Title and meta description tags
4. Guess???
Seeing that multiple schemas are defined on the pages you linked, it should, according to the documentation at https://developers.google.com/+/web/snippet/, take the information from the itemscope defined nearest to the top:
<body class="question-page new-topbar" itemscope itemtype="http://schema.org/QAPage">
which is a little funny/weird, since their own tool doesn't pick this up: http://www.google.com/webmasters/tools/richsnippets?q=stackoverflow.com%2Fquestions%2F22342854%2Fwhat-is-the-optimal-algorithm-for-the-game-2048%3Futm_source%3Dgoogle-plus%26utm_medium%3Dsocial%26utm_campaign%3Dstackoverflow-general-promotion
So this brings us back to looking at your second image.
The title is different as well, so og:title isn't being detected either; <title> is being scraped instead.
What does this all mean?
Google Plus sucks at markup for sharing.
You will need to adjust your topmost Schema.org microdata and hope Google+ makes sense of it when params are added to the canonical URL.
<body itemscope itemtype="http://schema.org/QAPage">
<h1 itemprop="name">Shiny Trinket</h1>
<img itemprop="image" src="{image-url}" />
<p itemprop="description">Shiny trinkets are shiny.</p>
</body>
Read this in the FAQ section for Open Graph in Google+:
Why isn't my +Snippet image appearing?
Images that are too small or not square enough are not included in the +Snippet, even if the images are explicitly referenced by schema.org microdata or Open Graph markup. Specifically, the height must be at least 120px, and if the width is less than 100px, then the aspect ratio must be no greater than 3.0.
I have a dynamic page whose contents and title change based on the parameters in the URL. I want the same to be done for the meta description tag. As I don't have a sound knowledge of SEO, I don't know whether this is valid or not.
Say, for example, the URL contains the word "test".
I will do:
if("test" is present)
{
<title>test</test>
<meta decription="test"/>
}
else
{
<title>test1</test>
<meta decription="test1"/>
}
Can I do this? Does giving two meta descriptions for the same page work?
It is best practice for each web page to have its own title element and meta description, with values based on that page's content. The HTML5 specification does not forbid multiple <meta name="description" content="YOUR DESCRIPTION"> elements, but I would guess that search engines process only the first appearance of the element. So my recommendation would be to use one <meta name="description" content="YOUR DESCRIPTION"> element for each page.
As long as you generate it server-side (e.g. in PHP) when the page is built, rather than client-side (JavaScript) after the page has loaded, it will be fine. That's how most CMS systems work already.
Done server-side, only one of the description tags will actually appear in the code Google sees.
Done client-side, it is likely that they will see no description at all, as I don't think many search engines render JavaScript.
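As a rough sketch of the server-side approach in ASP.NET (mirroring the HtmlMeta example earlier on this page; the query-string check for "test" is hypothetical):
// Choose the title and description server-side, before the page renders,
// so only one meta description ever appears in the HTML that Google sees.
string q = Request.QueryString["q"] ?? "";
bool hasTest = q.Contains("test");
Page.Title = hasTest ? "test" : "test1";
HtmlMeta description = new HtmlMeta();
description.Name = "description";
description.Content = hasTest ? "test" : "test1";
this.Header.Controls.Add(description);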
Please advise: will writing <meta name="robots" content="nofollow"> on the sub-master page affect the links of the master page or not? Thanks.
example.com/master-page/sub-master-page
AND
example.com/master-page
These are two different URLs, therefore nofollowing links on one page will not affect the links on the other page.
You will have to include the nofollow meta tag on both pages separately to make external links nofollow on both of them:
<meta name="robots" content="nofollow"/>
Every page identified by a unique URL is unique, and crawlers index each URL separately. Given this, your meta tag on the sub-page will not affect the parent page.