Proper way to remove a page from a website [closed] - seo

I have a product page on my website that I added three years ago.
Production of that product has now stopped and the product page was removed from the website. What I did was start displaying a message on the product page saying that production of the product has stopped.
When someone searches Google for that product, the removed product page still shows up first in the search results. The PageRank of the product page is also high.
I don't want the removed product page to be shown at the top of the search results.
What is the proper way to remove a page from a website so that the removal is reflected in whatever Google has indexed?
Thanks for the replies.

Delete It
The proper way to remove a page from a site is to delete the actual file that is being returned to the user/bot when the page is requested. If the file is no longer on the webserver, any well-configured webserver will return a 404, and the bot/spider will drop the URL from its index on the next refresh.
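If the product pages are generated by a script rather than served as static files, a minimal sketch of returning that status explicitly (PHP is an assumption here; a static setup just needs the file deleted):
<?php
// Sketch: the product no longer exists, so answer with a 404 so that
// crawlers drop this URL from their index on a future refresh.
http_response_code(404);
echo '<h1>Product not found</h1>';
exit;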
Redirect It
If you want to keep the good "Google juice" or SERP ranking the page has, probably due to inbound links from external sites, you'd be best to set your webserver to do a 301 (permanent) redirect to a similar, updated product page.
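A minimal sketch of that redirect (the target URL is just a placeholder for the newer product page, and doing it in PHP is an assumption; the same rule can live in the webserver config instead):
<?php
// Sketch: permanently redirect the discontinued product page to a
// similar, updated product (placeholder URL below).
header('Location: https://www.example.com/products/updated-model', true, 301);
exit;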
Keep and convert
However, if the page is doing so well that it ranks #1 for searches across the entire site, you should use this to your advantage. Leave the bulk of the copy on the page the same, but make it clear to the viewer that the product no longer exists and provide some helpful options instead: tell them about a newer, better product, tell them why it's no longer available, and tell them where to get support if they already own the discontinued product.

I completely agree with the suggestion above and want to add just one point.
If you want to remove that page from Google's search results, just log in to Google Webmaster Tools (you must have verified the website in Google Webmaster Tools) and submit that particular page as an index removal request.
Google will de-index the page and it will be removed from Google's search results.

Related

Duplicate without user-selected canonical [closed]

We have a live website with a URL, e.g. abc.com, but when the site fully loads it gets redirected to abc.com/home.
I submitted all the pages to Google Search Console; under Coverage it says
"Duplicate without user-selected canonical" and the page is not among the valid URLs. We have not added the URL "abc.com/home" to the sitemap that we submitted to Search Console.
How do I deal with "Duplicate without user-selected canonical" so that I get good SEO rankings?
Google maintains a support article listing the different ways you can specify which of your URLs to treat as the "canonical" or "main" version when Google detects that your site has multiple pages that are duplicates (that is, if they are in fact actually duplicates; if the pages aren't meant to be duplicates, find out why and fix it).
Reasons you may see duplicate URLs:
There are valid reasons why your site might have different URLs that point to the same page, or have duplicate or very similar pages at different URLs. Here are the most common:
To support multiple device types:
https://example.com/news/koala-rampage
https://m.example.com/news/koala-rampage
https://amp.example.com/news/koala-rampage
To enable dynamic URLs for things like search parameters or session IDs:
https://www.example.com/products?category=dresses&color=green
https://example.com/dresses/cocktail?gclid=ABCD
https://www.example.com/dresses/green/greendress.html
If your blog system automatically saves multiple URLs as you position the same post under multiple sections:
https://blog.example.com/dresses/green-dresses-are-awesome/
https://blog.example.com/green-things/green-dresses-are-awesome/
If your server is configured to serve the same content for www/non-www http/https variants:
http://example.com/green-dresses
https://example.com/green-dresses
http://www.example.com/green-dresses
If content you provide on a blog for syndication to other sites is replicated in part or in full on those domains:
https://news.example.com/green-dresses-for-every-day-155672.html (syndicated post)
https://blog.example.com/dresses/green-dresses-are-awesome/3245/ (original post)
I should also add an example for analytics campaigns: in my case, Google is detecting URLs with third-party (not Google) campaign URL parameters as separate (and therefore duplicate) pages.
Telling Google about your Canonical pages:
The support article also includes a table and details on the various methods for telling Google about canonical pages, roughly in order of importance:
Add a <link> tag with the rel="canonical" attribute to the HTML of all duplicate pages, pointing to the canonical URL (Google's example: <link rel="canonical" href="https://example.com/dresses/green-dresses" />); see the sketch after this list.
Send a rel="canonical" HTTP header (e.g. Link: <http://www.example.com/downloads/white-paper.pdf>; rel="canonical"), which also works for non-HTML documents such as PDFs.
Submit your canonical URLs in a sitemap.
Use 301 (permanent) redirects for URLs that have permanently moved, so that the old and new locations aren't marked as duplicates of each other.
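A rough sketch of the first two methods from a single PHP page (the green-dresses URL is Google's example from the article; the PHP setup and everything else is an assumption):
<?php
// Sketch: declare the canonical URL both as an HTTP header and as a
// <link> element in the page head (either one alone is usually enough).
$canonical = 'https://example.com/dresses/green-dresses';
header('Link: <' . $canonical . '>; rel="canonical"');
?>
<!DOCTYPE html>
<html>
<head>
  <link rel="canonical" href="https://example.com/dresses/green-dresses" />
</head>
<body>
  <!-- duplicate page content -->
</body>
</html>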

Canonical Link Element for Dynamic Pages (rel="canonical") [closed]

I have a stack system that passes page tokens in the URL. My pages also serve dynamically created content, so I have one PHP page that accesses the content with parameters.
index.php?grade=7&page=astronomy&pageno=2&token=foo1
I understand the search-indexing goal to be having only one link per unique set of data on your website.
Bing has a way to specify specific parameters to ignore.
Google, it seems, uses rel="canonical", but is it possible to use this to tell Google to ignore the token parameter? My URLs (without tokens) can be anything like:
index.php?grade=5&page=astronomy&pageno=2
index.php?grade=6&page=math&pageno=1
index.php?grade=7&page=chemistry&page2=combustion&pageno=4
If there is not a solution for Google... Other possible solutions:
If I provide a site map for each base page, I can supply base URLs, but any crawling of that page's links will create tokens on the resulting pages. Plus I would have to constantly recreate the site map to cover new pages (e.g. 25 posts per page, post 26 is on page 2).
One idea I've had is to identify bots on page load (I do this already) and disable all tokens for bots. Since (I'm presuming) bots don't use session data between pages anyway, the back buttons and editing features are useless to them. Is it feasible (or is it crazy) to write custom code for bots?
Thanks for your thoughts.
You can use the Google Webmaster Tools to tell Google to ignore certain URL parameters.
This is covered on the Google Webmaster Help page.
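As a complement to the parameter settings, the rel="canonical" approach from the question could look roughly like this sketch (the grade/page/pageno/token parameter names come from the question; the domain and the rest are assumptions):
<?php
// Sketch for index.php: rebuild the current URL without the "token"
// parameter and declare that as the canonical version of the page.
$params = $_GET;
unset($params['token']);                       // drop the stack/session token
ksort($params);                                // keep a stable parameter order
$canonical = 'https://www.example.com/index.php';
if (!empty($params)) {
    $canonical .= '?' . http_build_query($params);
}
echo '<link rel="canonical" href="' . htmlspecialchars($canonical) . '" />';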

SEO - What to do when content is taken offline [closed]

I'm going to have a site where content remains on the site for a period of 15 days and then gets removed.
I don't know too much about SEO, but my concern is about the SEO implications of having "content" indexed by the search engines and then one day it suddenly goes away and leaves a 404.
What is the best thing I can do to cope with content that comes and goes in the most SEO friendly way possible?
The best way will be to respond with HTTP status code 410 (Gone); from the W3C:
The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 (Not Found) SHOULD be used instead. This response is cacheable unless indicated otherwise.
The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server's site. It is not necessary to mark all permanently unavailable resources as "gone" or to keep the mark for any length of time -- that is left to the discretion of the server owner.
More about status codes here.
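A minimal sketch of sending that 410 once a piece of content has passed its 15-day window (the lookup function and the way expiry is determined are assumptions, not part of the question):
<?php
// Sketch: content older than 15 days is gone for good, so answer with
// 410 (Gone) instead of 404 to signal that the removal is intentional.
$publishedAt = getPublishedTimestamp($contentId);   // hypothetical lookup
if (time() - $publishedAt > 15 * 24 * 60 * 60) {
    http_response_code(410);
    echo '<h1>This content is no longer available.</h1>';
    exit;
}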
To keep the traffic, it may be an option not to delete the old content but to archive it, so it remains accessible at its old URL but is only linked from somewhere deeper in an archive section of your site.
If you really want to delete it, then it is totally OK to return a 404 or 410. Spiders understand that the resource is not available anymore.
Most search engines use something called a robots.txt file. You can specify which URLs and paths you want the search engine to ignore. So if all of your content is at www.domain.com/content/* then you can have Google ignore that whole branch of your site.
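For example, a robots.txt at the site root along these lines (using the /content/ path from that answer; crawlers treat Disallow rules as path prefixes, so no trailing wildcard is needed):
User-agent: *
Disallow: /content/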

Refresh Google Search Results for My Site [closed]

I set up a site with a template, and the title was something they supplied as a default. When I searched for my site's title, it showed up in the results, but with their default title. I changed the title a couple of days ago, but my site still shows up with the default title instead of what I changed it to.
Is there any way I can force Google to update their information so the title I have now shows up in results instead of the default title?
This will refresh your website immediately:
In Webmaster Tools, go to Crawl -> Fetch as Google.
Leave the URL blank to fetch the homepage, then click Fetch.
A Submit to Index button will appear beside the fetched result; click it, then choose URL and all linked pages > OK.
Just wait; Google should normally revisit your site and update its information. But if you are in a hurry, you can try the following steps:
Increase the crawl speed of your site in Google Webmaster Tools: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=48620
Ping your website on a service like http://pingomatic.com/
Submit a sitemap if you have not done so yet, or resubmit an updated sitemap for your website (see the sketch below).
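For the sitemap step, at the time of these answers Google also accepted a simple HTTP "ping" for a resubmitted sitemap; a minimal sketch (the sitemap URL is a placeholder, and note that this ping endpoint has since been deprecated in favour of submitting sitemaps in Search Console):
<?php
// Sketch: notify Google that the sitemap has been updated so it gets
// re-fetched sooner (the ping endpoint is now deprecated).
$sitemap = 'https://www.example.com/sitemap.xml';   // placeholder
file_get_contents('https://www.google.com/ping?sitemap=' . urlencode($sitemap));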
Fetching as Google works, as already suggested. However, stage 2 should be to submit your site to several large social bookmarking sites like Digg, Reddit, StumbleUpon, etc. There are huge lists of these sites out there.
Google notices everything on these sites and it will speed up the re-crawling process. You can keep track of when Google last cached your site (there is a big difference between crawling and caching) by searching for cache:sitename.com.

SEO Help with Pages Indexed by Google [closed]

I'm working on optimizing my site for Google's search engine, and lately I've noticed that when doing a "site:www.joemajewski.com" query, I get results for pages that shouldn't be indexed at all.
Let's take a look at this page, for example: http://www.joemajewski.com/wow/profile.php?id=3
I created my own CMS, and this is simply a breakdown of user id #3's statistics, which I noticed is indexed by Google although it shouldn't be. I understand that it takes some time before Google's results accurately reflect my site's content, but this page has been improperly indexed for nearly six months now.
Here are the precautions that I have taken:
My robots.txt file has a line like this:
Disallow: /wow/profile.php*
When running the URL through Google Webmaster Tools, it indicates that I did indeed create the disallow rule correctly. It did state, however, that a page that doesn't get crawled may still be displayed in the search results if it's being linked to. Thus, I took one more precaution.
In the source code I included the following meta data:
<meta name="robots" content="noindex,follow" />
I am assuming that follow means to use the page when calculating PageRank, etc., and that noindex tells Google not to display the page in the search results.
This page, profile.php, is used to take $_GET['id'] and find the corresponding registered user. It displays a bit of information about that user, but is in no way relevant enough to warrant being displayed in the search results, so that is why I am trying to stop Google from indexing it.
This is not the only page Google is indexing that I would like removed. I also have a WordPress blog, and there are many category pages, tag pages, and archive pages that I would like removed; I am following the same procedure to try to remove them.
Can someone explain how to get pages removed from Google's search results, and possibly some criteria that would help determine what types of pages I don't want indexed? In terms of my WordPress blog, the only pages that I truly want indexed are my articles; everything else I have tried to block, with little luck from Google.
Can someone also explain why it's bad to have pages indexed that don't provide any new or relevant content, such as WordPress tag or category pages, which are clearly never going to receive traffic from Google?
Thanks!
It would be a better idea to revise your meta robots directives to:
<meta name="robots" content="noindex,noarchive,nosnippet,follow" />
My robots.txt file was blocking access to the page where the meta tag was included. Thus, even though the meta tag told Google not to index my pages, Google never got that far.
Case closed. :P
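For reference, a rough sketch of that fix: the Disallow rule for /wow/profile.php comes out of robots.txt so Google can actually crawl the page, and profile.php sends the noindex directive itself, here as an X-Robots-Tag HTTP header (an alternative to the meta tag; beyond the paths and parameter from the question, the details are assumptions):
<?php
// profile.php -- sketch: send the noindex directive as an HTTP header so
// crawlers see it even without parsing the HTML. This only works if the
// page is NOT disallowed in robots.txt, otherwise Google never fetches it.
header('X-Robots-Tag: noindex, follow');

$userId = isset($_GET['id']) ? (int) $_GET['id'] : 0;  // user id from the query string
// ... look up and display that user's statistics as before ...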
If you have blocked the URL in robots.txt and tested it, it should work; in that case you don't need to add an additional meta tag to the particular page.
I am sure it will work if you give Google some time to crawl your website again.
For removing URLs you can also use Google Webmaster Tools (I am sure you already know that).