Improve dictionary's internal linking structure [closed]

What I want to achieve:
I have an online dictionary which works quite well, but crawling by search engines (especially Google) could be better.
So I would like to improve the internal linking structure on my website so that Google can easily find (almost) all pages of the dictionary.
What I know so far:
The number of internal links per page should not exceed roughly 100; pages containing masses of links look spammy to search engines. Also, a website should be designed for users, not for search engines, so usability must not suffer from this optimization. In the best case, usability would even improve.
My ideas for improving the internal linking structure so far:
on each dictionary entry page: link to 25 similar words that could be mixed up with the entry
create an index: list of all dictionary entries (75 per page)
...
Can you help me optimize the linking structure?
Thank you very much in advance!

You could link to synonyms and antonyms, which would be both user-friendly and crawler-friendly. But I think the biggest thing you could do to improve crawling, particularly by Google, would be to add a sitemap:
Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site.
Google has lots of information on Sitemaps and how to generate them on their webmaster help pages.
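For illustration, here is a minimal sitemap following the sitemaps.org protocol (the URLs and metadata values are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- One <url> entry per dictionary page; <lastmod>, <changefreq>,
           and <priority> are the optional metadata mentioned above. -->
      <url>
        <loc>http://www.example.com/dictionary/some-word</loc>
        <lastmod>2012-01-15</lastmod>
        <changefreq>monthly</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>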

Related

How to help search engines to find all the pages on my website [closed]

I am currently programming a website that gives information about food products.
The way the website works is that there is a search engine -> users search for the product they want to know about -> the website shows all the products they may want to see, and every product has its own page with all the information about it.
So my question is: how will search engines, like Google, be able to find all the product pages?
Search engines use many different ways to find new pages. Most commonly their web crawlers follow (external as well as internal) hyperlinks.
While a typical informational website links to all available pages in its site-wide navigation (so web crawlers can reach all pages by following internal links), other websites don’t necessarily link to all their pages (maybe because you can only reach them via forms, or because it doesn’t make sense for them to provide all links, etc.).
To allow discovery/crawling of new pages of these sites, too, they can provide a site map. This is essentially just a page linking to all existing pages, but often with structured metadata that can help search engines.
So just make sure that all your pages are linked somehow. Either via "natural" internal links on your site, or by providing a sitemap (ideally following the sitemaps.org protocol), or both.
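As a sketch of the "natural internal links" option, a paginated index page can make every product page reachable by crawlers through plain links (the URLs below are hypothetical):

    <!-- index page 1: plain anchor links that crawlers can follow -->
    <ul>
      <li><a href="/products/apple-juice">Apple juice</a></li>
      <li><a href="/products/banana-bread">Banana bread</a></li>
      <!-- ... more products ... -->
    </ul>
    <a href="/products?page=2">Next page</a>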
For questions about SEO advice (which is off-topic here on SO), see our sister site https://webmasters.stackexchange.com/.
Please add a sitemap to your site so Google can crawl all pages easily and index them properly.
An XML sitemap is the standard format for this.
Your website would also benefit from a general SEO review.

SEO with similar pages [closed]

Our company has created a "comparison" tool that uses unique URLs to choose what you want to compare, for example:
http://www.sportingcharts.com/nhl/2010-edmonton-oilers/vs/2008-calgary-flames/
http://www.sportingcharts.com/nhl/1993-carolina-hurricanes/vs/2008-dallas-stars/
Does anyone know if this is a recommended SEO strategy, or is it better to use query string parameters instead of completely different URLs? One advantage I was thinking of is that this could capture long-tail search traffic such as "2010 Edmonton Oilers vs 1995 Calgary Flames", but having this many URLs might also hurt the general SEO of these pages.
Does anyone have any experience in creating pages like this? What is the recommended strategy?
The style of URL is not going to matter much to search engines.
From a search engine perspective they are going to care more that:
You have 30 teams and 24 seasons. You are creating 30*24*30*24 = over 500,000 pages.
Each page has very little content. It's just two team names and some numerical stats.
The content that you do have is heavily duplicated across pages.
The search volume for your targeted keywords is going to be very low. Very few people search for two team names with two different years.
If I ran a search engine, I would not want to have my crawlers waste time crawling that site. I wouldn't want the pages in the index.
I expect that your site will suffer from "thin content", "duplicate content", and "excessive pages" issues because of this section.
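If you want to keep the tool for users but discourage crawlers from wasting time on the combinatorial pages, one option (not mentioned above, and only a sketch based on the URL pattern shown in the question) is a robots.txt rule; note that the * wildcard is supported by major crawlers such as Googlebot and Bingbot, though it is not part of the original robots.txt standard:

    User-agent: *
    # Block the team-vs-team comparison pages from being crawled
    Disallow: /nhl/*/vs/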

Robots.txt in my project root [closed]

I've seen tutorials/articles discussing using Robots.txt. Is this still a necessary practice? Do we still need to use this technique?
A robots.txt file is not necessary, but it is recommended for those who want to block a few pages or folders on their website from being crawled by search engine crawlers.
I agree with the above answer. The robots.txt file is used for blocking pages and folders from being crawled by search engines. For example, you can block search engines from crawling and indexing generated session IDs, which in rare cases could even become a security threat. Other than this, I don't see much importance.
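For example, a robots.txt rule along these lines could keep crawlers away from session-ID URLs (the parameter name here is hypothetical; adjust it to whatever your site actually generates, and note the * wildcard requires a crawler like Googlebot that supports it):

    User-agent: *
    # Keep URLs carrying a session ID out of crawlers' queues
    Disallow: /*?sessionid=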
The way that robots crawl through your site and rank your pages has changed recently as well.
I believe that for a short period of time the use of robots.txt may have helped quite a bit, but nowadays most other steps you take toward SEO will have more of a positive impact than this little .txt file ever will.
The same goes for backlinks; they used to be far more important for ranking than they are now.
robots.txt is not for indexing; it's used to block the things that you don't want search engines to crawl and index.
robots.txt can help with indexing on large sites if you use it to reveal an XML sitemap file.
Like this:
Sitemap: http://www.domain.com/sitemap.xml
Within the XML file, you can list up to 50,000 URLs for search engines to index. There are plugins for many content management systems that can generate and update these files automatically.
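If a site has more than 50,000 URLs, the sitemaps.org protocol lets you split them across several sitemap files and reference them all from a sitemap index, roughly like this (file names are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- Each <sitemap> entry points to one sitemap file of up to 50,000 URLs -->
      <sitemap>
        <loc>http://www.domain.com/sitemap-1.xml</loc>
      </sitemap>
      <sitemap>
        <loc>http://www.domain.com/sitemap-2.xml</loc>
      </sitemap>
    </sitemapindex>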

best way to allow search engine to crawl site [closed]

Thanks for reading my question. I am building a site that will be listing products from each manufacturer. I'm planning to structure the URL as following variations:
www.mysite.com/manufacturer_name/product_name/product_id
www.mysite.com/product_name/product_id
www.mysite.com/manufacturer_name
There are millions of products and I want all the major search engine to crawl them. What is the best way to go about doing that?
Would simply submitting the site to all the search engines be enough? I would assume that if I submit the manufacturer page, which lists all the manufacturer names as links, the search engines will follow each link and then follow the links to all the products displayed within each manufacturer page (I will have paging for products), so they can keep crawling the site for more products within each manufacturer until they run out of pages.
Would that be sufficient to get every product listed in every search engine? Or is there a newer and better way to do this? Maybe there are new SEO tricks that I'm not aware of. I'm hoping you can point me in the right direction.
I've previously used robots.txt to tell search engines which pages to crawl, and that seemed to work fine.
Thanks,
bad_at_coding
Submit an XML sitemap. The easiest way to do this is to link to it in your robots.txt file.
Sample robots.txt file:
Sitemap: http://example.com/sitemap_location.xml
See Google's Submitting Sitemaps documentation for more on this topic.

SEO: things to consider/implement for your website's content [closed]

Let's say I have a website that I am developing.
The site may have wallpapers, questions & answers, info (e.g. IMDb, Wikipedia, etc.).
What do I need to do so that when a search engine analyzes a particular page of my website for a particular term, let's say 'XYZ', it finds the 'XYZ' content if it is present on that page?
Please pardon my non-techy jargon; I am new to this.
The most important tips in SEO revolve around what not to do:
Keep Java applets and Flash to a minimum, since web crawlers can't parse them. JavaScript can accomplish the vast majority of Flash-like animations, but it's generally best to avoid such effects altogether.
Avoid using images to replace text or headings. Remember that any text in images won't be parsed. If necessary, there are SEO-friendly ways of replacing text with images (see the sketch after this list), but any time you have text that is not visible to the user, you risk the crawler thinking you're trying to cheat the system.
Don't try to be too clever. The best way to optimize your search results is to have quality content which engages your audience. Be wary of anyone who claims they can improve your results artificially; Google is usually smarter than they are.
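As a sketch of the image point above: if an image must stand in for text, give crawlers an equivalent via the alt attribute (the file name and text here are made up):

    <!-- crawlers can't read the pixels, but they can read the alt text -->
    <img src="site-logo.png" alt="Acme Wallpapers - free desktop wallpapers">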
Search engines (like Google) typically use the content of <h1> tags to determine what your page is about, and they gauge how relevant your page is to that topic partly by the number of sites that link to your page.
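A minimal sketch of what that means in markup (the content is made up):

    <!-- one descriptive <h1> that matches what the page is actually about -->
    <h1>XYZ - definition and examples</h1>
    <p>XYZ is ...</p>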