Static Pages vs. Dynamic Pages, Which is Better for SEO? [closed]

Static pages vs. dynamic pages: which is better for SEO?
I'm not an SEO specialist; I just want to know which is better.
Regards

It doesn't matter. In both cases you send HTML as a response to the browser or search engine bot.

You mean static websites (HTML only) versus dynamic websites (PHP, ASP, JSP, ...)?
There is only one relevant difference between static and dynamic pages for SEO, and that is the URLs. Static pages work "naturally": the organization of the URLs into folders follows the organization of your website, there is only one URL for each page, and so on.
If you use a dynamic website, it depends on how you structure it. If you have a separate server page for each page, then it's the same. If you use a front controller pattern, then you should use URL rewriting, so that your URLs follow the logical structure of your site.
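For instance, with a front controller the rewriting can be as simple as sending every request to one PHP script and mapping the clean URL to an internal template. This is only a sketch with invented file names and routes, not a recommendation for any particular framework:

    <?php
    // index.php -- hypothetical front controller; assumes the web server
    // rewrites every request (e.g. /products/green-tea) to this script.
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

    // One stable, descriptive URL per page, mapped to an internal template.
    $routes = [
        '/'                   => 'pages/home.php',
        '/products/green-tea' => 'pages/green-tea.php',
    ];

    if (isset($routes[$path])) {
        include $routes[$path];   // outputs plain HTML, just like a static file
    } else {
        http_response_code(404);
        include 'pages/not-found.php';
    }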
For the rest, there is no difference, as both static and dynamic pages just produce HTML, which is the content consumed by users and search engines, regardless of the technology employed.

Basically I agree with the argument that it does not matter for SEO whether a website is dynamic or static.
However, there are some caveats that you have to consider.
URL: you have to make sure all of the URLs are user-friendly.
Loading speed: it does not necessarily mean that all dynamic websites are slower than static ones, but you have to make sure that the loading speed of your website is as quick as possible (a small example follows below). FYI, Google recently stated openly that it will take loading speed into consideration.
If you get those two things right, there is no big difference any more.
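For the loading-speed point, one of many possible server-side measures in PHP is to compress the output and let browsers cache it for a while; a minimal sketch, assuming the page can safely be cached for an hour:

    <?php
    // Illustrative only: real sites usually configure compression and
    // caching at the web server or CDN level instead.
    ob_start('ob_gzhandler');                       // gzip the response if the client supports it
    header('Cache-Control: public, max-age=3600');  // allow caching for one hour
    ?>
    <!DOCTYPE html>
    <html>
      <head><title>Fast-loading page</title></head>
      <body>...page content...</body>
    </html>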

Static pages are the ancestors of web pages; of course they are the best for SEO, because although Google's bots are smart, their algorithm is better adapted to this kind of website: the bots can check the code very quickly. That's why static web pages are better for SEO.

Related

How to help search engines to find all the pages on my website [closed]

I am currently programming a website that gives information about food products.
The way the website works is that there's an internal search engine: users search for the product they want to know something about, the website shows all the products they may want to see, and every product has its own page with all the information about it.
So my question is: how will search engines, like Google, be able to find all the product pages?
Search engines use many different ways to find new pages. Most commonly their web crawlers follow (external as well as internal) hyperlinks.
While a typical informational website links to all available pages in its site-wide navigation (so web crawlers can reach all pages by following internal links), other websites don’t necessarily link to all their pages (maybe because you can only reach them via forms, or because it doesn’t make sense for them to provide all links, etc.).
To allow discovery/crawling of new pages of these sites, too, they can provide a site map. This is essentially just a page linking to all existing pages, but often with structured metadata that can help search engines.
So just make sure that all your pages are linked somehow. Either via "natural" internal links on your site, or by providing a sitemap (ideally following the sitemaps.org protocol), or both.
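If the product pages live in a database, the sitemap itself can be generated dynamically. A rough PHP sketch following the sitemaps.org protocol (the database, table, and column names are invented for illustration):

    <?php
    // sitemap.php -- hypothetical script listing every product URL.
    header('Content-Type: application/xml; charset=utf-8');

    $pdo   = new PDO('mysql:host=localhost;dbname=food', 'user', 'password');
    $slugs = $pdo->query('SELECT slug FROM products')->fetchAll(PDO::FETCH_COLUMN);

    echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
    echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
    foreach ($slugs as $slug) {
        echo '  <url><loc>https://www.example.com/products/'
           . htmlspecialchars($slug) . "</loc></url>\n";
    }
    echo '</urlset>';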
For questions about SEO advice (which is off-topic here on SO), see our sister site https://webmasters.stackexchange.com/.
Please add a sitemap to your site so that Google can crawl all pages easily and index them properly.
Also add an XML sitemap.
Your website needs an SEO process.

Is serving a bot-friendly page to google-bot likely to adversely affect SEO? [closed]

I have a site whose homepage contains a great deal of JavaScript. I am conscious that this isn't great for mobile clients, JavaScript-less browsers and crawlers/bots. The page uses proper <noscript /> alternatives, alt attributes, etc.
The user agent can easily be sniffed to serve up the page content without JavaScript (there is a non-JavaScript version of the content already on the site), but I don't want to be seen by crawlers (google-bot) to be cheating.
Humans that use mobile clients and JavaScript-less browsers would surely appreciate a tailored version (given an option to switch back to the full version if they want). Bots might think they're being cheated.
Finally, the site has been indexed very well so far, so I am tempted not to tailor it for google-bot, just for humans that use mobile clients and JavaScript-less browsers. That seems like the safer option.
If you serve different content to search engines than you do to your users, you are cloaking and definitely in violation of Google's terms of service.
The proper way to handle content generated with JavaScript is to use progressive enhancement. This means that all of your content is available without JavaScript being required to fetch or display it. Then you enhance that content using JavaScript. This way everyone has access to the same content, but users with JavaScript get a better experience. This is good usability and good for SEO.
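As a toy illustration (the markup, ids, and script are made up): the full text is rendered into the HTML on the server, and JavaScript only adds behaviour on top of it, so bots and script-less browsers get exactly the same content.

    <?php /* product.php -- the content is always present in the HTML itself */ ?>
    <article id="details">
      <h1>Green tea</h1>
      <p>Full product description, readable with or without JavaScript.</p>
    </article>
    <script>
      // Enhancement only: for example, collapse the long description behind
      // a "read more" toggle for users whose browsers run JavaScript.
      document.getElementById('details').classList.add('enhanced');
    </script>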

Improve dictionary's internal linking structure [closed]

What I want to achieve:
I have an online dictionary which works quite well, but the crawling by search engines (especially Google) could be better.
So I would like to improve the internal linking structure on my website so that Google can easily find (almost) all pages of the dictionary.
What I know yet:
The number of internal links per page should not exceed 100. Search engines don't like pages containing masses of links; it looks spammy. And a website should be designed for the users, not for search engines, so usability should not suffer from this optimization; in the best case, usability would even increase.
My ideas for improving the internal linking structure so far:
on each dictionary entry page: link to 25 similar words that could be mixed up with it
create an index: a list of all dictionary entries (75 per page)
...
Can you help me to optimize the linking structure?
Thank you very much in advance!
You could link to synonyms and antonyms, which would be both user-friendly and crawler-friendly. But I think the biggest thing you could do to improve crawling, particularly by Google, would be to add a sitemap:
Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site.
Google has lots of information on Sitemaps and how to generate them on their webmaster help pages.
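The index idea from the question combines well with that: small, paginated index pages keep each page under the ~100-link guideline while still linking to every entry. A rough PHP sketch (database, table, and column names are invented):

    <?php
    // index.php?page=3 -- hypothetical paginated index of dictionary entries,
    // showing 75 internal links per page.
    $perPage = 75;
    $page    = max(1, (int)($_GET['page'] ?? 1));
    $offset  = ($page - 1) * $perPage;

    $pdo  = new PDO('mysql:host=localhost;dbname=dictionary', 'user', 'password');
    $rows = $pdo->query("SELECT slug, word FROM entries ORDER BY word
                         LIMIT $perPage OFFSET $offset");

    echo '<ul>';
    foreach ($rows as $row) {
        echo '<li><a href="/word/' . htmlspecialchars($row['slug']) . '">'
           . htmlspecialchars($row['word']) . '</a></li>';
    }
    echo "</ul>\n";
    echo '<a href="/index?page=' . ($page + 1) . '">Next page</a>';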

SEO: things to consider/implement for your website's content [closed]

Let's say I have a website that I am developing...
The site may have wallpapers, questions & answers, info (e.g. IMDb, Wikipedia, etcetera).
What do I need to do so that when a search engine analyzes a particular page of my website for a particular term, let's say 'XYZ', it finds the 'XYZ' content if it is present on that page?
Please pardon my non-techy jargon; I am new to this.
The most important tips in SEO revolve around what not to do:
Keep Java applets and Flash as minimal as possible; web crawlers can't parse them. JavaScript can accomplish the vast majority of Flash-like animations, but it's generally best to avoid those altogether.
Avoid using images to replace text or headings. Remember that any text in images won't be parsed. If necessary, there are SEO-friendly ways of replacing text with images, but any time you have text not visible to the user, you risk the crawler thinking you're trying to cheat the system.
Don't try to be too clever. The best way to optimize your search results is to have quality content which engages your audience. Be wary of anyone who claims they can improve your results artificially; Google is usually smarter than they are.
Search engines (like Google) usually use the content of <h1> tags to work out what your page is about, and judge how relevant your page is to that topic partly by the number of sites that link to your page.
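In practice that mostly means ordinary, semantic markup; a small illustrative fragment (the file name and text are made up):

    <?php /* xyz.php -- plain, crawlable markup for the topic 'XYZ' */ ?>
    <h1>XYZ wallpapers</h1>
    <p>Descriptive text about XYZ that a crawler can read directly.</p>
    <!-- the alt attribute lets crawlers "read" what the image shows -->
    <img src="xyz-wallpaper.jpg" alt="XYZ wallpaper, 1920x1080">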

SEO blacklisting for cloaking [closed]

I am using postbacks to perform paging on a large amount of data. Since I do not have a sitemap for Google to read, there will be products that Google will never know about, because Google does not push any buttons.
I am doing cloaking to spit out all the products with no paging if the user agent is that of a search engine. There may be some workarounds for situations like this, which include hidden buttons to paged URLs.
What about information you want indexed by Google but whose content you want to charge for? Imagine that I have articles that I want users to be able to find in Google, but when the user visits the page, only half the content is displayed and users have to pay for the rest.
I have heard that Google may blacklist you for cloaking. I am not being evil, just helpful. Does Google recognize the intention?
Here is a FAQ by Google on that topic. I suggest using CSS to hide some content. For example, just give links to your products as an alternative to your buttons and use display:none; on them. The layout stays intact and the search engines will find your pages. However, most search engines will not find out about cloaking and other techniques, but maybe competitors will report you. In any case: don't risk it. Use sitemaps, use RSS feeds, use XML documents or even PDF files with links to offer your whole range of products. Good luck!
This is why Google supports a sitemap protocol. The sitemap file needs to render as XML, but it can certainly be a code-generated file, so you can produce it on demand from the database. Then point to it from your robots.txt file, as well as telling Google about it explicitly from your Google Webmaster Console area.
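The robots.txt reference is a single line, for example (example hostname):

    # robots.txt
    Sitemap: https://www.example.com/sitemap.xml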
Highly doubtful. If you are serving different content based on IP address or User-Agent from the same URL, it's cloaking, regardless of the intentions. How would a spider parse two sets of content and figure out the "intent"?
There is intense disagreement over whether "good" cloakers are even helping the user anyway.
Why not just add a sitemap?
I don't think G will recognize your intent, unfortunately. Have you considered creating a sitemap dynamically? http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=40318