How do I define/change BigCommerce's rel canonical link functionality? - bigcommerce

Currently BigCommerce does a pretty good job of defining the canonical link for pages, but I am looking to update the behaviour for product list pages and remove the page number from the link.
Currently it behaves as /category/?page=1, /category/?page=2, and so on. I wish to eliminate the page number completely and simply use /category.
I would prefer that search engines view all these pages as a single page, as it is just the same data, indexed from other places.
Currently the page defines the canonical link in the header as:
%%Page.CanonicalLink%%
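For illustration, on page 2 of a category that placeholder presumably renders something like

<link rel="canonical" href="http://example.com/category/?page=2">

whereas what I want on every page of the set is

<link rel="canonical" href="http://example.com/category/">

(example.com stands in for the store's domain; the exact markup BigCommerce emits may differ slightly.)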
I am looking to see if anyone has encountered this problem and found a solution.
Thanks

Related

How to remove duplicate title and meta description tags if google indexed them

So, I have been building an ecommerce site for a small company.
The URL structure is: www.example.com/product_category/product_name and the site has around 1000 products.
I've checked Google Webmaster Tools, and in the HTML Improvements section it shows that I have duplicate title and meta description tags for all the product pages. They all appear twice; both:
-www.example.com/product_category/product_name
and
-www.example.com/product_category/product_name/ (with a slash at the end)
got indexed as separate pages.
I've added a 301 redirect from every www.example.com/product_category/product_name/ to www.example.com/product_category/product_name, but that was almost two weeks ago. I have resubmitted my sitemap and asked Google to fetch the pages a few times. Nothing has changed; GWT still shows the pages as having duplicate tags.
I did not get any manual action message.
So I have two questions:
-how can I accelerate the reindexation process, if that's possible?
-and do these tags hurt my organic search results? I've googled it, and some say they do and some say they don't.
One option is to set a canonical link on both URLs (with and without the trailing slash) pointing to the URL without the slash. Little by little, Google will stop complaining. Keep in mind that Google Webmaster Tools is slow to react, especially when you don't have much traffic or many backlinks.
And yes, duplicate tags can influence your rankings negatively because users won't have proper and specific information for each page.
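As a minimal illustration using the question's example URL, both the slash and non-slash versions of the page would carry the same tag in their <head> (the http:// scheme is an assumption; use whatever the site actually serves):

<link rel="canonical" href="http://www.example.com/product_category/product_name">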
Setting a canonical link on both URLs is a solution, but from my experience it takes time.
The fastest way is to block the old URLs in the robots.txt file:
User-agent: *
Disallow: /old_url
The canonical tag is an option, but why are you not adding a different title and description for each page? You can set up dynamic meta tags once and they will be generated automatically for all pages, so you don't have to worry about duplication.
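A sketch of what unique, dynamically generated tags might look like on a product page (the title pattern is my assumption, not something from the question):

<title>Product Name - Product Category | Example.com</title>
<meta name="description" content="A short, product-specific summary of Product Name.">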

Two URLs, same content: is this considered duplicate content by search engines?

I've developed a service that allows users to search for stores on www.mysite.com.
I also have partners that use my service. To the user, it looks like they are on my partner's web site, when in fact they are on my site. I have only replaced my own header and footer with my partner's header and footer.
For the user, it looks like they are on mysite.partner.com when in reality they are on partner.mysite.com.
If you understood what I tried to explain, my question is:
Will Google and other search engines consider this duplicate content?
Update - canonical page
If I understand canonical pages correctly, www.mysite.com is my canonical page.
So when my partner uses mysite.partner.com?store=wallmart&id=123, which "redirects" (CNAME) to partner.mysite.com?store=wallmart&id=123, my server recognizes the sub-domain.
So what I need to do is dynamically add the following to my <HEAD> section:
<link rel="canonical" href="http://www.mysite.com/?store=wallmart&id=123">
Is this correct?
It's duplicate content, but there is no penalty as such.
The problem is that for a given search Google will pick one version of a page and filter the others out of the results. If your partner is targeting the same region, then you are in direct competition.
The canonical tag is a way to tell Google which is the official version. If you use it, then only the canonical page will show up in search results. So if you canonicalise back to your domain, your partners will be excluded from search results; only your domain's pages will ever show up. Not good for your partners.
There is no way to win here. The only way your partners will do well is if they have their own content or target a different region, and you don't use the canonical tag.
So that your partners have a chance, I would not add the canonical tag. Then it's down to the Google gods to decide which of your duplicate pages gets shown.
Definitely. You'll want to use canonical tagging to stop this happening.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139394
Yes, it will be considered duplicate content by Google, because you have replaced only the footer and header. Under recent Google algorithms, content should be unique to a website or blog. If the content is not unique, your website will be penalized by Google.

How to force Google to show my first page from a page set with pagination?

I have a website, and on it I have, for example, a list of Audi models. Using Google Webmaster Tools, I saw that my website appears in Google search results for the word audi, but the target page was the 22nd page from my result set, not the first. I need my first page to appear, not my last (or a middle one), but I cannot tell Google that this is a parameter, because my URLs are rewritten using mod_rewrite. Any ideas?
BTW, I have read in an SEO forum that it's a bad idea to use a canonical tag. So is it really a bad idea in my case?
You can't force Google to do anything, however, they have made it easier to deal with pagination issues with a recent post on rel="next" and rel="prev".
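For reference, a sketch of that markup as it might appear on page 3 of the list (the URL pattern is an assumption, not taken from the question):

<link rel="prev" href="http://example.com/audi/page/2">
<link rel="next" href="http://example.com/audi/page/4">

These go in the <head> of each paginated page; page 1 omits rel="prev" and the last page omits rel="next".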
But the primary problem you face is signalling to Google that your first (main) page is the starting point - this is achieved using internal link and back-link "juice" focussed on that page. You need to ensure that the first page of results is linked to properly from higher-value pages (like the home-page).
Google recently announced that you can use a View All page, which allows them to find and index entire articles that are normally broken up with pagination and display them as one result.
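If you go the View All route, each component page can point a canonical link at the view-all version; a sketch, with a hypothetical /audi/view-all URL:

<link rel="canonical" href="http://example.com/audi/view-all">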

Is there a way to prevent Googlebot from indexing certain parts of a page?

Is it possible to fine-tune directives to Google to such an extent that it will ignore part of a page, yet still index the rest?
There are a couple of different issues we've come across which would be helped by this, such as:
RSS feed/news ticker-type text on a page displaying content from an external source
users entering contact details (phone numbers etc.) that they want visible on the site but would rather were not google-able
I'm aware that both of the above can be addressed via other techniques (such as writing the content with JavaScript), but am wondering if anyone knows if there's a cleaner option already available from Google?
I've been doing some digging on this and came across mentions of googleon and googleoff tags, but these seem to be exclusive to Google Search Appliances.
Does anyone know if there's a similar set of tags to which Googlebot will adhere?
Edit: Just to clarify, I don't want to go down the dangerous route of cloaking/serving up different content to Google, which is why I'm looking to see if there's a "legit" way of achieving what I'd like to do here.
What you're asking for can't really be done: Google either takes the entire page or none of it.
You could do some sneaky tricks, though, such as inserting the part of the page you don't want indexed in an iframe and using robots.txt to ask Google not to crawl that iframe.
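A minimal sketch of that trick (the /noindex/ path and file name are assumptions for illustration):

<!-- main page: the excluded content lives in a separate document -->
<iframe src="/noindex/ticker.html"></iframe>

And in robots.txt:

User-agent: *
Disallow: /noindex/

This keeps Googlebot from crawling the iframe's source document; the main page itself is still crawled and indexed normally.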
In short, NO - unless you use cloaking, which is discouraged by Google.
Please check out the official documentation here:
http://code.google.com/apis/searchappliance/documentation/46/admin_crawl/Preparing.html
Go to the section "Excluding Unwanted Text from the Index":
<!--googleoff: index-->
here will be skipped
<!--googleon: index-->
I found a useful resource on keeping certain duplicate content from being indexed by the search engine:
<p>This is normal (X)HTML content that will be indexed by Google.</p>
<!--googleoff: index-->
<p>This (X)HTML content will NOT be indexed by Google.</p>
<!--googleon: index-->
On your server, detect the search bot by IP using PHP or ASP, then serve the IP addresses on that list a version of the page you wish to have indexed. In that search-engine-friendly version of your page, use the canonical link tag to point at the page version that you do not want indexed.
This way, the page whose content you don't want indexed is known to the search engine by its address only, while only the content you do wish to be indexed actually gets indexed. This method will not get you blocked by the search engines and is completely safe.
Yes, you can definitely stop Google from indexing some parts of your website by creating a custom robots.txt file and listing the portions you don't want indexed, such as wp-admin or a particular post or page. Before creating it, check your site's existing robots.txt, for example at www.yoursite.com/robots.txt.
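For example (the paths are assumptions for illustration):

User-agent: *
Disallow: /wp-admin/
Disallow: /some-private-page/

Bear in mind that robots.txt works at the URL level - it can keep crawlers away from whole pages or directories, not from parts of a single page.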
All search engines either index or ignore the entire page. The only possible way to implement what you want is to:
(a) have two different versions of the same page
(b) detect the user agent, and
(c) if it's a search engine, serve the second version of your page.
This link might prove helpful.
There are meta-tags for bots, and there's also the robots.txt, with which you can restrict access to certain directories.
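For reference, the meta tag takes this form in a page's <head> (note that, like robots.txt, it applies to the whole page, not to parts of it):

<meta name="robots" content="noindex, nofollow">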

Canonical links and paging

Google has been pushing its new canonical link feature, and I agree it is really useful. Now, instead of having a ton of entry points into an area, you can have one.
I was wondering, does this feature play nice with paging?
For example: I have this page, which has 8 pages of content. If I specify the canonical http://community.mediabrowser.tv/permalinks/154/iso-always-detected-as-a-movie-when-checking-metadata for the page, will there be any undesired side effects? Will this be better overall? Will this mean that a hit on page 5 will take users to page 1?
When you specify a canonical URL, the page should have substantially the same content as the canonical target. Pages 2-8 have different content. And yes, if Google were to honor your canonical link on page 5, it would send users to page 1.
You should use the canonical link on page 1 so that Google knows that http://community.mediabrowser.tv/topics/154 and http://community.mediabrowser.tv/topics/154?page=1&response_type=3 are the same as http://community.mediabrowser.tv/permalinks/154/iso-always-detected-as-a-movie-when-checking-metadata
You may also want to put canonical links on the other pages so Google knows that http://community.mediabrowser.tv/topics/154?page=5 is the same as http://community.mediabrowser.tv/topics/154?page=5&response_type=3
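In markup, using the URLs from this answer, that would look like this on page 1 (and its parameter variants):

<link rel="canonical" href="http://community.mediabrowser.tv/permalinks/154/iso-always-detected-as-a-movie-when-checking-metadata">

and like this on page 5 (and its parameter variants):

<link rel="canonical" href="http://community.mediabrowser.tv/topics/154?page=5">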
You should only add canonical links on pages with identical content. For example, a set of links presented in a different order: sorted by date or alphabetically.
In your case all pages have different content (albeit representing several pages of the same article or conversation thread), which means you don't need to canonicalize them.
Still, if you do, all that happens is that Google gives more priority to the first page than to the other pages when displaying them in search results.
Canonical links do not affect your visitors. They only suggest priority and possible duplicate content to bots.
More info from Google here