Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 4 years ago.
I set up a site with a template, and the title was a default the template supplied. When I searched for my site, it showed up in results, but with their default title. I changed the title a couple of days ago, but my site still shows up with the default title instead of what I changed it to.
Is there any way I can force Google to update their information so the title I have now shows up in results instead of the default title?
This will refresh your website immediately:
From the Webmaster Tools menu: Crawl -> Fetch as Google
Leave the URL blank to fetch the homepage, then click Fetch.
A Submit to Index button will appear beside the fetched result; click it, then choose "URL and all linked pages" > OK.
Just wait; Google should normally revisit your site and update its information. But if you are in a hurry, you can try the following steps:
Increase the crawl rate of your site in Google Webmaster Tools: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=48620
Ping your website on a service like http://pingomatic.com/
Submit a sitemap of your website if you have not yet, or resubmit an updated one.
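Pingomatic and similar services accept the conventional weblogUpdates.ping XML-RPC call. A minimal sketch in Python (the endpoint and site details are illustrative assumptions, not taken from the question; verify the endpoint before relying on it):

```python
import xmlrpc.client

# weblogUpdates.ping is the conventional method name used by ping services.
# rpc.pingomatic.com is Pingomatic's commonly documented XML-RPC endpoint.
def ping_site(site_name, site_url, endpoint="http://rpc.pingomatic.com/"):
    server = xmlrpc.client.ServerProxy(endpoint)
    # Typically returns a struct such as {'flerror': False, 'message': '...'}.
    return server.weblogUpdates.ping(site_name, site_url)

# The request body can be inspected offline without touching the network:
payload = xmlrpc.client.dumps(("My Site", "http://example.com/"),
                              methodname="weblogUpdates.ping")
print("weblogUpdates.ping" in payload)  # True
```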
Fetching as Google works, as already suggested. However, stage 2 should be to submit your site to several large social bookmarking sites like Digg, Reddit, StumbleUpon, etc. There are huge lists of these sites out there.
Google notices everything on these sites, and it will speed up the re-crawling process. You can keep track of when Google last cached your site (there is a big difference between crawling and caching) by searching for cache:sitename.com.
Due to a deployment error, I put into production a robots.txt file that was intended for a test server. As a result, production ended up with this robots.txt:
User-Agent: *
Disallow: /
That was 10 days ago, and I now have more than 7,000 URLs flagged with the error "Submitted URL blocked by robots.txt" or the warning "Indexed, though blocked by robots.txt".
Yesterday, of course, I corrected the robots.txt file.
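The corrected file can be sanity-checked offline before asking Google to re-crawl, e.g. with Python's standard urllib.robotparser (the URLs here are illustrative):

```python
from urllib.robotparser import RobotFileParser

# The test-server file that accidentally went to production:
broken = ["User-Agent: *", "Disallow: /"]
# A corrected file that disallows nothing:
fixed = ["User-Agent: *", "Disallow:"]

def can_crawl(rules, url="https://example.com/some/page"):
    rp = RobotFileParser()
    rp.parse(rules)
    return rp.can_fetch("Googlebot", url)

print(can_crawl(broken))  # False: everything was blocked
print(can_crawl(fixed))   # True: crawlers are allowed again
```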
What can I do to speed up the correction by Google or any other search engine?
You could use the robots.txt testing tool: https://www.google.com/webmasters/tools/robots-testing-tool
Once the robots.txt test has passed, click the "Submit" button and a popup window should appear; then click the option #3 "Submit" button again --
Ask Google to update
Submit a request to let Google know your robots.txt file has been updated.
Other than that, I think you'll have to wait for Googlebot to crawl the site again.
Best of luck :).
I have a stack system that passes page tokens in the URL. My pages are also dynamically generated content, so I have one PHP page that serves the content based on parameters.
index.php?grade=7&page=astronomy&pageno=2&token=foo1
I understand the search-indexing goal to be: have only one URL per unique set of data on your website.
Bing has a way to specify specific parameters to ignore.
Google, it seems, uses rel="canonical", but is it possible to use this to tell Google to ignore the token parameter? My URLs (without tokens) can be anything like:
index.php?grade=5&page=astronomy&pageno=2
index.php?grade=6&page=math&pageno=1
index.php?grade=7&page=chemistry&page2=combustion&pageno=4
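One common pattern (a sketch, not an official Google recipe) is to build the canonical URL server-side by dropping the session-only token parameter before emitting the `<link rel="canonical">` tag. The question's pages are PHP; the same idea in Python:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonical_url(url, drop=("token",)):
    """Return the URL with session-only parameters removed,
    suitable for use in a <link rel="canonical" href="..."> tag."""
    parts = urlsplit(url)
    # Keep every query parameter except the ones being dropped.
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in drop]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(canonical_url("http://example.com/index.php?grade=7&page=astronomy&pageno=2&token=foo1"))
# → http://example.com/index.php?grade=7&page=astronomy&pageno=2
```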
If there is not a solution for Google... Other possible solutions:
If I provide a sitemap for each base page, I can supply base URLs, but any crawling of that page's links will create tokens on the resulting pages. Plus, I would have to constantly recreate the sitemap to cover new pages (e.g. with 25 posts per page, post 26 ends up on page 2).
One idea I've had is to identify bots on page load (I do this already) and disable all tokens for bots. Since (I'm presuming) bots don't use session data between pages anyway, the back buttons and editing features are useless to them. Is it feasible (or is it crazy) to write custom code for bots?
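Detecting crawlers by user agent is feasible; here is a naive sketch (the substrings are a hypothetical, non-exhaustive list and a spoofable heuristic; robust verification uses reverse DNS). Note that the content served to bots should stay identical to what users see, with only the token omitted, so it does not look like cloaking:

```python
import re

# Hypothetical, non-exhaustive crawler user-agent substrings.
BOT_PATTERN = re.compile(r"googlebot|bingbot|slurp|duckduckbot", re.IGNORECASE)

def is_bot(user_agent):
    """Return True if the User-Agent header looks like a known crawler."""
    return bool(BOT_PATTERN.search(user_agent or ""))

print(is_bot("Mozilla/5.0 (compatible; Googlebot/2.1)"))      # True
print(is_bot("Mozilla/5.0 (Windows NT 10.0) Firefox/119.0"))  # False
```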
Thanks for your thoughts.
You can use the Google Webmaster Tools to tell Google to ignore certain URL parameters.
This is covered on the Google Webmaster Help page.
I have a product page on my website which I added 3 years ago.
Production of the product has now stopped, and the product page was removed from the website.
What I did is start displaying a message on the product page saying that production of the product has stopped.
When someone searches Google for that product, the product page that was removed from the site shows up first in the search results.
The page rank for the product page is also high.
I don't want the removed product page to be shown at the top of search result.
What is the proper method to remove a page from a website so that the removal is reflected in whatever Google has indexed?
Thanks for the reply
Delete It
The proper way to remove a page from a site is to delete the actual file that is being returned to the user/bot when the page is requested. If the file is no longer on the webserver, any well-configured webserver will return a 404, and the bot/spider will remove the page from the index on its next refresh.
Redirect It
If you want to keep the good "Google juice" or SERP ranking the page has, probably due to inbound links from external sites, you'd be best to set your webserver to do a 301 (permanent) redirect to a similar, updated product; a permanent redirect is what passes the ranking signals along.
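With Apache, for example, the redirect can be a one-liner (the paths are illustrative; 301 is the permanent variant, which is what passes ranking signals):

```apache
# .htaccess: permanently redirect the retired product page to its successor
Redirect 301 /products/old-widget.html /products/new-widget.html
```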
Keep and convert
However, if the page is doing so well that it ranks #1 for searches to the entire site, you need to use this to your advantage. Leave the bulk of the copy on the page the same, but highlight to the viewer that the product no longer exists and provide some helpful options to the user instead: tell them about a newer, better product, tell them why it's no longer available, tell them where they can go to get support if they already have the discontinued product.
I completely agree with the above suggestion and want to add just one point.
If you want to remove that page from Google search results, just log in to Google Webmaster Tools (you must have verified the website there) and submit that particular page as an index-removal request.
Google will de-index that page and it will be removed from Google search rankings.
I'm getting confused about what nofollow attributes really do.
I believe they tell search engine spiders not to follow the target link.
But my question is: do nofollow links alter PageRank?
Thanks
OK, here is a simplified pipeline Google uses to serve your pages:
discovery
crawling
indexing
ranking
Discovery is basically the step of finding new URLs; common sources are:
links it finds on other pages
sitemap.xml
Adding a nofollow to a link on your page means that when Google discovers that link, the URL is still pushed into the discovery queue, but with the flag "nofollow = do not crawl that site (based on this discovery), do not index this URL (based on this discovery), do not rank this URL (based on this discovery)".
So basically you have de-valued that specific link. The link does not count as a vote for the other page.
That said:
It does not help you to save "PageRank"; the whole idea of hoarding PageRank is misguided. The "link juice" does not stay on your page, it just gets flushed into nirvana. Congrats. It's like casting a vote with a note not to count that vote.
There are only two use cases where a nofollow link makes sense:
if you can't (user-generated content without editorial quality assurance)
or if you won't (a link to a site you want to point out is sh*t)
vote for another page.
P.S.: the site for non-programming SEO questions is https://webmasters.stackexchange.com/
nofollow is poorly named. I'll try and give another explanation:
All the links on a web page acquire link juice that they can pass on to the pages they link to.
The amount of juice available to a page is based on the link juice it receives from other pages that link it. This all relates to the PageRank algorithm.
How the juice is distributed among the links is beyond the scope of the question, and a Google secret. But each link gets a share.
nofollow in a link says don't pass on my share of link juice.
What is believed is that this link juice simply leaks away, so nofollow cannot be used to retain ranking; it only denies the recipient any boost in their ranking.
A good use for nofollow is when external users can add their own links in your website. This can protect you from people spamming you to pass on juice to their own websites.
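In markup, the attribute sits on each individual link, e.g. for a user-submitted URL (the href is illustrative):

```html
<!-- rel="nofollow" tells crawlers not to pass ranking credit to this target -->
<a href="http://example.com/" rel="nofollow">user-submitted link</a>
```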
nofollow is indeed badly named. What it does is prevent the passing of PageRank and anchor text benefit to the receiving link. However, nofollow links can still be beneficial. Trust and authority can still be passed on, so a link from Wikipedia is still very valuable.
The nofollow attribute means the PageRank is not passed with the link. Nofollow links do not alter PageRank, but they can help drive traffic to your site. :)
If you have links on your pages that link to external web sites, you can add nofollow so that your site does not "spill" page rank to the external pages that you link to (which they would if you don't add nofollow).
I'm working on optimizing my site for Google's search engine, and lately I've noticed that when doing a "site:www.joemajewski.com" query, I get results for pages that shouldn't be indexed at all.
Let's take a look at this page, for example: http://www.joemajewski.com/wow/profile.php?id=3
I created my own CMS, and this is simply a breakdown of user id #3's statistics, which I noticed is indexed by Google, although it shouldn't be. I understand that it takes some time before Google's results accurately reflect my site's content, but this has been improperly indexed for nearly six months now.
Here are the precautions that I have taken:
My robots.txt file has a line like this:
Disallow: /wow/profile.php*
When running the url through Google Webmaster Tools, it indicates that I did, indeed, correctly create the disallow command. It did state, however, that a page that doesn't get crawled may still get displayed in the search results if it's being linked to. Thus, I took one more precaution.
In the source code I included the following meta data:
<meta name="robots" content="noindex,follow" />
I am assuming that follow means to use the page when calculating PageRank, etc, and the noindex tells Google to not display the page in the search results.
This page, profile.php, is used to take the $_GET['id'] and find the corresponding registered user. It displays a bit of information about that user, but is in no way relevant enough to warrant a display in the search results, so that is why I am trying to stop Google from indexing it.
This is not the only page Google is indexing that I would like removed. I also have a WordPress blog, and there are many category pages, tag pages, and archive pages that I would like removed, and am doing the same procedures to attempt to remove them.
Can someone explain how to get pages removed from Google's search results, and possibly offer some criteria for determining what types of pages I don't want indexed? In terms of my WordPress blog, the only pages that I truly want indexed are my articles. Everything else I have tried to block, with little luck from Google.
Can someone also explain why it's bad to have indexed pages that don't provide any new or relevant content, such as pages for WordPress tags or categories, which are clearly never going to receive traffic from Google?
Thanks!
It would be a better idea to revise your meta robots directives to:
<meta name="robots" content="noindex,noarchive,nosnippet,follow" />
My robots.txt file was blocking access to the page where the meta tag was included. Thus, even though the meta tag told Google not to index my pages, Google never got that far.
Case closed. :P
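In short: for a noindex meta tag to work, the page must remain crawlable, so a robots.txt Disallow and the meta tag must not be combined for the same URL. The working setup is roughly (paths illustrative):

```
User-agent: *
# Do NOT disallow /wow/profile.php here: Googlebot must be able to
# fetch the page in order to see its noindex meta tag.
Disallow:
```

Then keep `<meta name="robots" content="noindex,follow" />` in profile.php. An `X-Robots-Tag: noindex` HTTP response header achieves the same without touching the markup.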
If you have blocked and tested the URL in robots.txt, that should stop crawling; you don't need to add an additional meta tag to the particular page.
Give Google some time to crawl your website again; it should work!
For removing URLs, you can use Google Webmaster Tools (I am sure you know that).