Is serving a bot-friendly page to google-bot likely to adversely affect SEO? [closed]

I have a site whose homepage contains a great deal of JavaScript. I am conscious that this isn't great for mobile clients, JavaScript-less browsers, and crawlers/bots. The page uses proper <noscript /> alternatives, alt attributes, etc.
The user-agent can easily be sniffed to serve up the page content without JavaScript (a non-JavaScript version of the content already exists on the site), but I don't want to be seen as cheating by crawlers (google-bot).
Humans using mobile clients and JavaScript-less browsers would surely appreciate a tailored version (with an option to switch back to the full version if they want), but bots might think they're being cheated.
Finally, the site has been indexed very well so far, so I am tempted not to tailor it for google-bot at all and only tailor it for humans using mobile clients and JavaScript-less browsers. That seems like the safer option.

If you serve different content to search engines than you do to your users, you are cloaking, and that is definitely a violation of Google's terms of service.
The proper way to handle content generated with JavaScript is to use progressive enhancement. This means that all of your content is available without JavaScript being required to fetch or display it. You then enhance that content using JavaScript. This way everyone has access to the same content, but users with JavaScript get a better experience. This is good usability and good for SEO.
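As a minimal sketch of what that looks like in practice (the element IDs, URLs, and fetch-based enhancement here are invented for illustration): the content is ordinary HTML that every client and crawler receives, and a script enhances it only where it can run.
<ul id="products">
  <li><a href="/products/widget-a">Widget A</a></li>
  <li><a href="/products/widget-b">Widget B</a></li>
</ul>
<script>
// Runs only in JavaScript-capable browsers; crawlers and script-less
// clients still get the complete, crawlable list above.
document.querySelectorAll('#products a').forEach(function (link) {
  link.addEventListener('click', function (event) {
    event.preventDefault();
    // Hypothetical enhancement: load the product inline instead of
    // navigating, falling back to the plain link if the fetch fails.
    fetch(link.href)
      .then(function (response) { return response.text(); })
      .then(function (html) { document.getElementById('products').innerHTML = html; })
      .catch(function () { window.location = link.href; });
  });
});
</script>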


How to exclude certain portions of my website from bots/web crawlers? [closed]

I have made custom segments/blocks around my website which I use for advertising/marketing. Google's search bots are treating those as part of my website and getting confused about what my site really is versus what is advertisement.
This negatively impacts my SEO. Is there a way I can use certain directives or elements to tell Google and other bots not to crawl that portion, or, even if it is crawled, not to consider it part of the page?
I am aware of the robots.txt file, but that works on entire pages. I would like to block certain blocks within each page, such as a sidebar or a floating bar.
There's no way to ensure all bots skip parts of a page; indexing is pretty much an all-or-nothing thing.
You could use a robots.txt file with
Disallow: /iframes/
and then load the content you don't want indexed into iframes.
There's also the data-nosnippet attribute.
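A sketch of the two techniques together (the paths and markup are invented for illustration); note that data-nosnippet only keeps text out of Google's result snippets, it does not stop crawling or indexing:
# robots.txt at the site root keeps well-behaved crawlers out of /iframes/
User-agent: *
Disallow: /iframes/
<!-- In the page itself: -->
<article>
  <h1>The actual page content, indexed normally</h1>
  <!-- The ad block lives at a disallowed path, so compliant bots never fetch it. -->
  <iframe src="/iframes/ad-sidebar.html" title="Sponsored content"></iframe>
  <!-- data-nosnippet (supported on span, div, and section elements)
       keeps this text out of Google's snippets. -->
  <div data-nosnippet>Limited-time offer on widgets!</div>
</article>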

How to ensure search engines index specific version of site [closed]

I am just starting work on my first responsive site and am realizing that some SEO elements that fit on the front page of our site may not fit 'naturally' on the front page of the mobile version.
Is there any way to ensure search engines see the full-size site?
One complicating matter is that I am designing the site 'mobile first', so the site does not default to full size; it defaults to mobile sizes.
Assuming you deliver the same content to the end user regardless of device, and just show/hide or reformat based upon a media query, it really doesn’t matter. Google will still get the full content of the page so will index all of your content. What is visible in the viewport isn’t really significant to Google.
Google will, however, understand the use of media queries and give you some additional SEO benefits as a result. Google favours responsive design over separate sites/mobile specific pages. Responsive design also helps improve the indexing efficiency of the Googlebot.
One thing they do advise is not to block access to any of your 'external' resources (CSS, JS, images, etc.).
Plenty of good information here
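As a concrete sketch of that show/hide approach (the class name is invented for illustration): the same HTML goes to every client, including Googlebot, and CSS alone decides what is visible at each width.
/* Mobile-first: the secondary content is hidden by default but is
   still present in the HTML that Googlebot fetches and indexes. */
.desktop-extras {
  display: none;
}
/* On wider viewports the very same markup becomes visible. */
@media (min-width: 768px) {
  .desktop-extras {
    display: block;
  }
}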

How SEF do the URLs have to be? [closed]

I'm really attracted to the webservice.js module. I'd like to use it as a real webserver, using only an HTML page with JS calling the webservice to retrieve the data.
The problem I foresee is with search engines, as I do want my website to be search-engine optimized.
So I thought I could fall back to plain HTML when JS is not enabled (when going directly to the URL, webservice.js sends back the data as plain HTML). For this, the links will be displayed in the HTML markup on the front page.
The question is: how SEF do the URLs have to be?
I mean, the webservice will allow me to have URLs of this kind: http://domain.com/content?get=title-uri-encoded.
Is that search-engine friendly? I know http://domain.com/content/title-uri-encoded would be better, but is the kind I'm thinking of still friendly enough?
PS: I'm not sure whether this post belongs to SO or Programmers.se...
You probably want to look into progressive enhancement techniques or Google's proposed AJAX solution.
You may end up with a URL structure like this:
AJAX-enabled public version:
http://domain.com/content#!get=title-uri-encoded
Search-engine version (plain HTML):
http://domain.com/content?_escaped_fragment_=get=title-uri-encoded
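A minimal sketch of the server side of that scheme, here using Node with Express (the route and file names are invented for illustration, and note that Google has since deprecated the AJAX crawling scheme in favour of rendering JavaScript directly):
// Serve plain HTML when a crawler requests the _escaped_fragment_ form.
var express = require('express');
var app = express();
app.get('/content', function (req, res) {
  var fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    // The crawler rewrites #!get=title-uri-encoded into
    // ?_escaped_fragment_=get=title-uri-encoded; return static HTML.
    // (A real handler would look the content up and escape it.)
    res.send('<html><body><h1>' + fragment + '</h1></body></html>');
  } else {
    // Normal visitors get the JS-driven page that reads the #! hash.
    res.sendFile(__dirname + '/index.html');
  }
});
app.listen(3000);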

SEO: things to consider/implement for your website's content [closed]

Let's say I have a website that I am developing. The site may have wallpapers, questions & answers, info (e.g. IMDb, Wikipedia, etc.).
What do I need to do so that when a search engine analyzes a particular page of my website for a particular term, let's say 'XYZ', it finds the 'XYZ' content if it is present on that page?
Please pardon my non-techy jargon; I am new to this.
The most important tips in SEO revolve around what not to do:
Keep Java and Flash to a minimum; web crawlers can't parse them. JavaScript can accomplish the vast majority of Flash-like animations, but it's generally best to avoid them altogether.
Avoid using images to replace text or headings. Remember that any text in images won't be parsed. If necessary, there are SEO-friendly ways of replacing text with images, but any time you have text that is not visible to the user, you risk the crawler thinking you're trying to cheat the system.
Don't try to be too clever. The best way to optimize your search results is to have quality content which engages your audience. Be wary of anyone who claims they can improve your results artificially; Google is usually smarter than they are.
Search engines (like Google) usually use the content of <h1> tags to work out what your page is about, and they judge how relevant your page is to that content partly by the number of sites that link to it.
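A small sketch of the markup side of that advice (the content is invented for illustration): real text in the heading, and an alt attribute wherever an image stands in for text.
<!-- Real text in the heading, not an image of text, so crawlers can parse it. -->
<h1>XYZ Wallpapers: free high-resolution downloads</h1>
<!-- When an image must carry meaning, the alt attribute supplies the
     text that crawlers (and screen readers) will see. -->
<img src="/img/xyz-preview.jpg" alt="Preview of the XYZ wallpaper collection">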

SEO blacklisting for cloaking [closed]

I am using postbacks to perform paging on a large amount of data. Since I do not have a sitemap for Google to read, there will be products that Google never learns about, because Google does not push any buttons.
I am using cloaking to spit out all the products with no paging when the user-agent is that of a search engine. There may be some workarounds for situations like this, including hidden buttons that link to paged URLs.
What about information you want indexed by Google but want to charge for? Imagine I have articles that I want users to be able to find in Google, but when a user visits the page, only half the content is displayed and the user has to pay for the rest.
I have heard that Google may blacklist you for cloaking. I am not being evil, just helpful. Does Google recognize the intention?
Here is a FAQ by Google on that topic. I suggest using CSS to hide some content. For example, just provide links to your products as an alternative to your buttons and use display:none; on them. The layout stays intact and the search engines will find your pages. Most search engines will not find out about cloaking and similar techniques, but competitors may report you. Either way: don't risk it. Use sitemaps, RSS feeds, XML documents, or even PDF files with links to expose your whole range of products. Good luck!
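A sketch of the link-based alternative that answer describes (the URLs and the loadPage handler are invented for illustration); bear in mind that links hidden with display:none can themselves be read as deceptive, which is exactly the risk warned about above.
<!-- Postback button for users, plus plain anchors a crawler can follow. -->
<button onclick="loadPage(2)">Next page</button>
<nav style="display:none">
  <a href="/products?page=2">Page 2</a>
  <a href="/products?page=3">Page 3</a>
</nav>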
This is why Google supports the sitemap protocol. The sitemap file needs to render as XML, but it can certainly be a code-generated file, so you can produce it on demand from the database. Then point to it from your robots.txt file, and also tell Google about it explicitly from your Google Webmaster Console area.
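A minimal sitemap of that shape, plus the robots.txt line that advertises it (the URLs are invented for illustration):
<?xml version="1.0" encoding="UTF-8"?>
<!-- sitemap.xml: list every product URL; this file can be generated
     on demand from the database rather than maintained by hand. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://domain.com/products?page=1</loc>
  </url>
  <url>
    <loc>http://domain.com/products?page=2</loc>
  </url>
</urlset>
# One line in robots.txt points crawlers at it:
Sitemap: http://domain.com/sitemap.xml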
Highly doubtful. If you are serving different content based on IP address or User-Agent from the same URL, it's cloaking, regardless of the intentions. How would a spider parse two sets of content and figure out the "intent"?
There is intense disagreement over whether "good" cloakers are even helping the user anyway.
Why not just add a sitemap?
I don't think G will recognize your intent, unfortunately. Have you considered creating a sitemap dynamically? http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=40318