Scrapy strategy for links with named anchors

I am trying to find the best strategy for handling links with named anchors (e.g. href=/mypage#sectionA).
If I don't do anything special, this kind of link can get skipped if I've already visited that page. If I check whether my URL has a hash (#), I can parse the result before yielding a new request, but that only works if the link points to a name on the same page.
How should I manage this kind of link? Disable the duplicate check and potentially parse the same page many times?
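One minimal sketch of a way around this, assuming the fragment never changes the HTML that actually gets downloaded: strip the fragment before yielding (so the duplicate filter keeps working) and pull every named section out of the single response you already have. The spider name, start URL, and selectors below are hypothetical.

```python
from urllib.parse import urldefrag

import scrapy


class SectionSpider(scrapy.Spider):
    name = "sections"                      # hypothetical spider
    start_urls = ["http://example.com/"]   # placeholder start page

    def parse(self, response):
        # Every #section of this page arrives in this one response, so handle
        # the named anchors here instead of re-requesting the page per fragment.
        for anchor in response.css("[id]::attr(id), a[name]::attr(name)").getall():
            self.logger.debug("section anchor found: %s", anchor)

        for href in response.css("a::attr(href)").getall():
            # Drop the fragment before yielding: /mypage and /mypage#sectionA
            # then collapse into one request, so nothing is skipped and nothing
            # is crawled twice just because of the anchor.
            url, _fragment = urldefrag(response.urljoin(href))
            yield scrapy.Request(url, callback=self.parse)
```

As far as I know, Scrapy's default duplicate filter already ignores fragments when fingerprinting requests, so the only thing the fragment really tells you is which section a link pointed at, and that section lives on a page you have already downloaded.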

Related

Google SEO - duplicate content in web pages for submitting sitemaps

I hope my question is not too off-topic for Stack Overflow.
This is my website: http://www.rader.my
It's a car information website. The content is dynamic, so Google's crawler could not find all the car specification pages on my site.
I created a sitemap with all my car URLs in it (for instance, http://www.rader.my/Details.php?ID=13 is one car). I know I haven't made any mistakes in my .xml file format and structure, but after submission Google only indexed one URL, which is my index.php.
I have also read about rel="canonical", but I don't think it applies in my case, since all my pages ARE different, with different content; only the structure is the same.
Is there anything I missed? Why doesn't Google accept my URLs even though the contents are different? What can I do to fix this?
Thanks and regards,
Amin
I have a similar type of site. Google is good about figuring out dynamic sites. They'll crawl the pages and figure out the unique content as time goes on. Give it time.
You should do all the standard things:
Make sure each page has a unique H1 tag.
Make sure each page has substantial, unique content.
Unique keywords and description tags aren't as useful as they used to be, but they can't hurt.
Cross-link internally. Create category pages that link to all of one manufacturer's cars, and have each of that manufacturer's pages link back to 'similar' pages.
Get links to your pages. Nothing helps getting indexed like external authority.

Will limiting dynamic URLs with robots.txt improve my SEO ranking?

My website has about 200 useful articles. Because the website has an internal search function with lots of parameters, the search engines end up spidering URLs with all possible permutations of additional parameters such as tags, search phrases, versions, dates, etc. Most of these pages are simply lists of search results with some snippets of the original articles.
According to Google's Webmaster Tools, Google has spidered only about 150 of the 200 entries in the XML sitemap. It looks as if Google has not seen all of the content even years after it went online.
I plan to add a few "Disallow:" lines to robots.txt so that search engines no longer spider those dynamic URLs. In addition, I plan to disable some URL parameters in the Webmaster Tools "website configuration" --> "URL parameter" section.
Will that improve or hurt my current SEO ranking? It will look as if my website is losing thousands of content pages.
This is exactly what canonical URLs are for. If one page (e.g. an article) can be reached by more than one URL, then you need to specify the primary URL using a canonical URL. This prevents duplicate content issues and tells Google which URL to display in its search results.
So do not block any of your articles and you don't need to enter any parameters, either. Just use canonical URLs and you'll be fine.
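Concretely, every URL variant of an article carries the same tag in its <head>, pointing at the primary URL. A sketch with made-up URLs, not the poster's site:

```html
<!-- Served on /article?id=42&tag=foo, /article?id=42&sort=date, and so on -->
<link rel="canonical" href="http://www.example.com/article?id=42" />
```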
As nn4l pointed out, canonical is not a good solution for search pages.
The first thing you should do is have search results pages include a robots meta tag saying noindex. This will help get them removed from Google's index and let Google focus on your real content. Google should slowly remove them as they get re-crawled.
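For reference, the tag in question sits in the <head> of each search-results page and looks roughly like this; noindex keeps the page out of the index, while follow still lets the crawler reach the articles it links to:

```html
<meta name="robots" content="noindex, follow" />
```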
Other measures:
In GWMT, tell Google to ignore all those search parameters. This is just a band-aid, but it may help speed up the recovery.
Don't block the search pages in robots.txt, as that will prevent robots from re-crawling and cleanly removing the pages that are already indexed. Wait until your index is clear before doing a full block like that.
Your search system must currently be based on links (a tags) or GET-based forms rather than POST-based forms; that is why those pages got indexed. Switching to POST-based forms should stop robots from trying to index them in the first place. JavaScript or AJAX is another way to do it.

Two identical URLs but a different parameter order: duplicate content?

My own CMS automatically adds new parameters to links in a page to specify a given language.
It works quite well, but it doesn't always put the variable in the same position, giving me two links to the same page/language:
www.xxx.yy/index.php?mod=blog&page=3&lang=en
or
www.xxx.yy/index.php?mod=blog&lang=en&page=3
Will search engines be smart enough to detect both URLs as the same? Or will they treat them as two different URLs and therefore flag them as duplicate content?
I will fix this issue anyway, but I've been curious about it for a long time.
Google definitely supports this, as they explicitly mention that example in their webmaster blog:
Like www.example.com/skates.asp?color=black&brand=riedell and www.example.com/skates.asp?brand=riedell&color=black. Having this type of duplicate content on your site can potentially affect your site's performance, but it doesn't cause penalties. From our article on duplicate content:
Duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results. If your site suffers from duplicate content issues, and you don't follow the advice listed above, we do a good job of choosing a version of the content to show in our search results.
For all other duplicate content worries, consider specifying a canonical URL.
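If you want to fix it on your side regardless, here is one sketch of normalizing the parameter order yourself, using only the standard library (the URLs are the two from the question, with a scheme added):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit


def normalize_query_order(url: str) -> str:
    """Return the URL with its query parameters sorted alphabetically."""
    parts = urlsplit(url)
    query = urlencode(sorted(parse_qsl(parts.query)))
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))


a = normalize_query_order("http://www.xxx.yy/index.php?mod=blog&page=3&lang=en")
b = normalize_query_order("http://www.xxx.yy/index.php?mod=blog&lang=en&page=3")
assert a == b  # both normalize to ...index.php?lang=en&mod=blog&page=3
```

Having the CMS emit links in a fixed parameter order achieves the same thing without any post-processing.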

Canonical links and paging

Google has been pushing its new canonical link feature, and I agree it is really useful. Now, instead of having a ton of entry points into an area, you can have one.
I was wondering, does this feature play nice with paging?
For example, I have this topic, which has 8 pages of content. If I specify the canonical of http://community.mediabrowser.tv/permalinks/154/iso-always-detected-as-a-movie-when-checking-metadata for the page, will there be any undesired side effects? Will this be better overall? Will this mean that a hit on page 5 will take users to page 1?
When specifying a canonical URL, the pages should have substantially the same content, and pages 2-8 have different content. And yes, if Google were to honor your canonical link on page 5, it would send users to page 1.
You should use the canonical link on page 1 so that Google knows that http://community.mediabrowser.tv/topics/154 and http://community.mediabrowser.tv/topics/154?page=1&response_type=3 are the same as http://community.mediabrowser.tv/permalinks/154/iso-always-detected-as-a-movie-when-checking-metadata
You may also want to put canonical links on the other pages so Google knows that http://community.mediabrowser.tv/topics/154?page=5 is the same as http://community.mediabrowser.tv/topics/154?page=5&response_type=3
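In other words, roughly this, using the URLs from the question (a sketch of the suggestion above, not an official recipe):

```html
<!-- On /topics/154 and /topics/154?page=1&response_type=3 -->
<link rel="canonical"
      href="http://community.mediabrowser.tv/permalinks/154/iso-always-detected-as-a-movie-when-checking-metadata" />

<!-- On /topics/154?page=5&response_type=3 -->
<link rel="canonical" href="http://community.mediabrowser.tv/topics/154?page=5" />
```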
You should only add canonical links on pages with identical content, for example the same set of links presented in a different order: sorted by date or alphabetically.
In your case all the pages have different content (albeit representing several pages of the same article or conversation thread), which means you don't need to canonicalize them.
Still, if you do, all that happens is that Google gives more priority to the first page than to the other pages when displaying them in search results.
Canonical links do not affect your visitors. They only suggest priority and possible duplicate content to bots.
More info is available in Google's webmaster documentation.

Efficient way to add Canonical tags

If the value of the href for canonical tags is populated via a JavaScript function, would that affect search engine indexing (since search engines ignore JavaScript)?
I'm not sure I fully understand the question as you worded it. But here's my take:
Canonical tags are used to make sure that Google (et al.) knows that the same page served at different URLs is, in fact, a single page.
This saves Google a lot of processing time, because it will treat those pages as a single page instead of trying to index every one of them. Also, your domain's search engine ranking will probably go up because Google doesn't think you're duplicating content.
For any page that could be duplicated because of parameters, you should include a canonical link pointing to the page you want known as the original. So yes, it would help in your case. Note, though, that you cannot put a canonical link on someone else's domain pointing to your domain, so putting it on a partner's page would not have the intended consequences.
If you want more information, read up here: Google Webmaster Central: Specify Your Canonical