Sitemap on dynamically generated content - indexing

I need to index my website on Google and other search engines, but my website is a database of IP addresses and each page is dynamically generated, like
example.com/show.php?&ip=ipvalue
Would it be possible to get Google and other search engines to index every IP address I have in the database by linking directly to URLs in the format shown above?
I know how to set up a proper sitemap file for static content, but I can't see how I could tell a search engine to index a URL that doesn't physically exist unless a user passes a value that is in the database.

Neither human visitors nor search engine bots can tell whether a document is created dynamically or whether it exists as a static file.
A search engine would have no reason to handle a link to http://example.com/show.php?ip=127.0.0.1 differently from a link to http://example.com/ip/127.0.0.1. By using URL rewriting (e.g., mod_rewrite for Apache), you could rewrite your URLs in such a way.
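For example, a minimal Apache rewrite rule for that (just a sketch, assuming an .htaccess file in the site root with mod_rewrite enabled) could look like:

    # Internally rewrite /ip/127.0.0.1 to /show.php?ip=127.0.0.1 (no redirect; the visible URL stays "pretty")
    RewriteEngine On
    RewriteRule ^ip/([0-9a-fA-F:.]+)$ show.php?ip=$1 [L,QSA]

That said, rewriting is purely cosmetic here; the plain show.php?ip=... URLs are just as indexable.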
So: Just link to these pages from a place that search engine crawlers can access.
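And since you already know how to write a sitemap for static content: the sitemap itself can be generated from the database as well. A rough sketch in PHP (the PDO connection details and the ips table/column are placeholders for your own schema):

    <?php
    // sitemap.php - emits one <url> entry per IP address stored in the database.
    header('Content-Type: application/xml; charset=utf-8');

    $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

    echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
    echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";

    foreach ($pdo->query('SELECT ip FROM ips') as $row) {
        $loc = 'http://example.com/show.php?ip=' . urlencode($row['ip']);
        echo '  <url><loc>' . htmlspecialchars($loc) . '</loc></url>' . "\n";
    }

    echo '</urlset>' . "\n";

You can then submit http://example.com/sitemap.php in Google Webmaster Tools exactly as you would a static sitemap.xml.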

How do permalinks get stored?

I would like to implement permalinks on my website (I use JSP + servlets, if that makes any difference) and was wondering how they work. Are they stored as physical pages on the server, or do the values go into the database with the URLs generated dynamically?
For example, http://jsfiddle.net/8MBHZ/
Is 8MBHZ a physical html page?
It is the static-looking URL of the page, not a physical file. When such a request comes to the server, the value 8MBHZ is extracted from the URL, used to look up the page content in the database, and the retrieved content is then rendered.
(Unlike dynamic URLs with query parameters, static-looking URLs tend not to be indexed multiple times under different variations, which has a positive effect on search engine optimization (SEO).)
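A minimal sketch of that lookup step, shown in PHP for brevity (a servlet's doGet would follow exactly the same steps; the table and column names are purely illustrative):

    <?php
    // Assume the web server rewrites /p/8MBHZ to page.php?token=8MBHZ.
    $token = preg_replace('/[^A-Za-z0-9]/', '', $_GET['token'] ?? '');

    $pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
    $stmt = $pdo->prepare('SELECT title, body FROM pages WHERE token = ?');
    $stmt->execute([$token]);
    $page = $stmt->fetch(PDO::FETCH_ASSOC);

    if (!$page) {
        http_response_code(404);
        exit('Not found');
    }

    // Render the content that was stored under this permalink token.
    echo '<h1>' . htmlspecialchars($page['title']) . '</h1>' . $page['body'];

So nothing is stored as a physical page; only the token and the content live in the database.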

Redirect Google users from an indexed HTML snapshot to my site's main page

I have a business listing site (www.brate.com) where people can search for local businesses and rate them.
The entire site is built using GWT (i.e., Ajax) and all content is generated dynamically. Now I am at the stage where I want the site to be SEO friendly. Below is my approach; please advise me whether it is the best way to implement it.
1- Create a static HTML snapshot of each business and its related data (site, address, phone number, user reviews, etc.) and put all the generated HTML files under a single directory.
2- Create a sitemap XML file that contains all of the above HTML links.
3- Configure Google Webmaster Tools to crawl and index all the generated HTML snapshots.
Now my logic is that when a Google search lists one of the generated HTML files in its results, I want to redirect the user to the site's main page (www.brate.com), not to the HTML snapshot.
Can I use a redirect like "" in the generated snapshots?
If not, what is the best way to achieve the above-mentioned logic?
Thanks
Sameeh, one suggested approach for GWT:
Ensure that you have correctly handled history tokens for all your pages in GWT. Let the tokens start with an exclamation mark (!).
Associate GWT history tokens with the generated pages using the #! notation.
Make the tokens keyword-rich, as you would for any URL optimization in SEO.
Read through https://developers.google.com/webmasters/ajax-crawling/ to understand the #! notation.
Details on support by Bing: http://searchengineland.com/bing-now-supports-googles-crawlable-ajax-standard-84149
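To make the #! scheme concrete: under the AJAX crawling scheme, a URL such as http://www.brate.com/#!business/123 is fetched by the crawler as http://www.brate.com/?_escaped_fragment_=business/123, and your server is expected to answer that request with the corresponding HTML snapshot. A rough sketch of that check (shown in PHP here; the same test works in whatever actually serves your GWT host page, and the snapshots/ directory is just an example):

    <?php
    // Crawler request? Serve the pre-rendered snapshot instead of the GWT bootstrap page.
    if (isset($_GET['_escaped_fragment_'])) {
        $token    = preg_replace('/[^A-Za-z0-9_\/-]/', '', $_GET['_escaped_fragment_']);
        $token    = str_replace('/', '_', $token);   // flatten the token into a file name
        $snapshot = __DIR__ . '/snapshots/' . $token . '.html';
        if (is_file($snapshot)) {
            readfile($snapshot);
        } else {
            http_response_code(404);
        }
        exit;
    }
    // ...otherwise fall through to the normal GWT host page for human visitors.

Served this way, human visitors never land on the snapshot files at all (search results point at the #! URLs, and the snapshots are only fetched by the crawler), which should remove the need for any redirect from the snapshots back to www.brate.com.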

Page contains multiple canonical issues

I am using the IIS SEO Toolkit to analyze our website. From that I got 450 canonical issues, all in the same format as follows:
The page with URL "http://dynamicsexchange.com/images/Logoscroll/Images/511201091716pm_a.jpg" can also be accessed by using URL "http://www.dynamicsexchange.com/images/Logoscroll/Images/511201091716pm_a.jpg". Search engines identify unique pages by using URLs. When a single page can be accessed by using any one of multiple URLs, a search engine assumes that there are multiple unique pages. Use a single URL to reference a page to prevent dilution of page relevance. You can prevent dilution by following a standard URL format.
Please help me solve these problems, with examples. I am using the IIS web server and the master page concept; please show me how to set up the 301 redirect, with an example.
Choose whether you want your visitors to use http://dynamicsexchange.com or http://www.dynamicsexchange.com. Then set up a 301 (permanent) redirect from the one you don't want to the one you do want.
That way, any visitor who follows a link to the wrong one will be redirected to the right one, and search engines will attribute all links to the right one, so your page rank won't be diluted.
How to set up the 301 redirect will depend on what your web server is.
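Since you are on IIS, a sketch of that redirect using the IIS URL Rewrite module (this assumes the module is installed; the rule lives in your site's web.config and is independent of the master page concept):

    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <!-- 301 redirect requests for dynamicsexchange.com to www.dynamicsexchange.com -->
            <rule name="Canonical host name" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^dynamicsexchange\.com$" />
              </conditions>
              <action type="Redirect" url="http://www.dynamicsexchange.com/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>

The Permanent (301) redirect type is what tells search engines to consolidate the two hostnames into one.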

Can search engines index pages generated by server side code?

I'm guessing a site like Stack Overflow doesn't keep an HTML file around for every question ever asked. Instead, server-side code creates the page every time a question is clicked on (I think). Is it possible for search engines to index every question on Stack Overflow, or would a page per question need to be kept in the directory so the search engine can crawl it?
Yes. Search engines can index dynamically generated pages no problem. In fact, from the search engine bot's perspective, it can't really even distinguish between a dynamically generated page and a static one.
You might be interested by the Dynamic URLs vs. static URLs post on the Official Google Webmaster Central Blog.
Yes it's perfectly possible - when a link is followed the server returns HTML just like any other web page. The only difference is that the server generated it, rather than a person.
As far as the client (be it a browser or search engine) is concerned, there is no difference between a server-generated page and a static file. They're virtually indistinguishable (depending on how the page is generated, it might be missing Last-Modified headers, etc). As such, yes, search engines can index generated pages without a problem.
That said, there is something to be said for giving them a hint. Using sitemaps, for example, gives a search engine a nice listing of all your pages, so it's less likely to miss them. More importantly, it can summarize last modified times, to focus the search engine's attention on what has changed recently. This isn't mandatory, but it does help - regardless of whether the pages are static HTML or generated.
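For example, a sitemap entry is only a few lines of XML, and the optional <lastmod> element is what carries that "changed recently" hint (the URL and date below are purely illustrative):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://example.com/questions/12345/how-does-indexing-work</loc>
        <lastmod>2012-06-20</lastmod>
      </url>
    </urlset>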
Any link that uses a GET can be followed by most crawlers. Anything that requires a POST will generally be ignored.
The mechanism for generating the page is irrelevant.
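As a purely hypothetical illustration of that distinction, the first snippet below is crawlable and the second generally is not, regardless of how the pages behind them are generated:

    <!-- Crawlable: a plain link, fetched with GET -->
    <a href="questions.php?id=42">How does indexing work?</a>

    <!-- Generally ignored: the result page is only reachable by submitting a POST -->
    <form method="post" action="search.php">
      <input type="text" name="q">
      <button type="submit">Search</button>
    </form>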
Yes, provided it isn't restricted by robots.txt or by robots meta tags. A search engine requests the web page just like a normal user does; nobody has access to the server-side code (assuming your site isn't hacked).
Search engines can see pretty much anything on a given Web page that isn't hidden behind client-side code (i.e., JavaScript).
So, if there's a URL that you can enter into your browser's address bar to get this page, and this page is linked to from somewhere, a search engine will find it and "see" the same content that you do. The fact that the page was generated dynamically by a server is irrelevant to a search engine, since what is sent to a browser upon requesting a URL is still just an HTML file.
In other words, that HTML file doesn't exist in the same form on the server (it's actually server-side code that generates the HTML, not a static file), but that isn't what a search engine crawls and indexes anyway. It follows links to document URLs, exactly what you see in your browser's address bar, and indexes the HTML that comes back.

What should I add to my site to make Google index the subpages as well?

I am a beginner web developer and I have a site, JammuLinks.com, built with PHP. It is a local city listing search engine. Basically, I've written search pages which take a parameter, fetch the matching records from the database, and display them, so the content is generated dynamically. However, if you look at the bottom of the site, I have added many static links where I have hard-coded the parameter in the link, like searchresult.php?tablename='schools'. So my questions are:
Since Google crawls the page and also the links listed on it, will it crawl the results page data as well? How can I tell whether it has? So far I have tried site:www.jammulinks.com, but it returns only the homepage and the blog.
What more can I add so that the pages behind those static links get indexed as well?
The best way to do this is to create a sitemap document (you can even get the template from the webmaster portion of Google's site, www.google.com/webmasters/, I believe).
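A minimal sketch of such a sitemap for your hard-coded search pages (the URLs are illustrative; note that if a URL contains several parameters joined by &, it has to be written as &amp; inside the XML):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- one <url> entry per hard-coded search page -->
      <url>
        <loc>http://www.jammulinks.com/searchresult.php?tablename=schools</loc>
      </url>
    </urlset>

Once you submit it in Google Webmaster Tools, the sitemap report also shows how many of the submitted URLs have been indexed, which answers the "how can I tell if it has been crawled" part as well.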