How to re-index new keywords - SEO

My question is: I had changed my website's keywords and submitted the site to Google, i.e. verified it with the Google meta tag, and Google was indexing those keywords. I have since changed my keywords, but Google is indexing the older keywords, not the recent ones. Will I need to re-verify with the Google meta tag, or can you suggest any other solution that will get my new keywords indexed?

If you are talking about Webmaster Tools (the Google verification meta tag): no, you don't need to resubmit; Google will find it on its own.
If you are talking about the robots meta tag: no, you don't need to resubmit; Google will find it on its own.
If you are talking about the "keywords" tag: don't use it (http://www.mattcutts.com/blog/keywords-meta-tag-in-web-search/).

It will take some time for your keywords to be indexed. You can change on-page factors, such as adding fresh content, but keep an eye on keyword density.

Search a database via Google Custom Search? Attaching Google CSE to a (SQL/NoSQL) database for a website?

TOPIC - Google Search Engine / Custom Search - with Database
References
Search for "Google Search Engine" and "Google Custom Search"
(New to StackOverflow; just joined the other day. I'm limited to 2 links I can post right now.)
NOTE:
I have not YET decided on or committed to any specific coding language, framework, etc., and won't until I figure out how to accomplish my question (below).
BACKGROUND INFO
What I'm trying to do (for now) is add a search box / search engine to a simple website I'm building out. Before I get too far into it (planning ahead), I would like to use Google CSE if at all possible (it can do A LOT of things and works well). However, I will have a database (not sure of the type YET; it will depend on my options and what I can do with CSE) of "items" that I want to be able to search quickly (in the search box), i.e. like Amazon.com.
QUESTION:
Is there any way at all to use Google Custom Search and/or the Custom Search API to search/attach a database (SQL, NoSQL, or other)? I would HIGHLY prefer being able to do all of this on Google Cloud Platform, using one of their storage/database products.
If I understand what you're trying to do, Google CSE is enough.
From the Google doc you linked:
Defining a Custom Search Engine in the Control Panel:
"In the Sites to search section, add the pages you want to include in your search engine. You can include any sites you want, not just the sites you own. You can include whole site URLs or individual page URLs. You can also use URL patterns."
Enabling Autocomplete:
"[...] you can enable or disable the autocomplete feature using the enableAutoComplete attribute."
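For reference, embedding the search element on a page is short. A minimal sketch, assuming the current v2 Custom Search Element snippet; YOUR_CX is a placeholder for your engine's ID:

<script async src="https://cse.google.com/cse.js?cx=YOUR_CX"></script>
<!-- data-enableAutoComplete corresponds to the enableAutoComplete attribute from the quoted docs -->
<div class="gcse-search" data-enableAutoComplete="true"></div>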
As for "Is there any way at all [...] to search a database": I'd say not directly, but it's not a big problem.
Google CSE works on indexable web pages, so it won't work against a raw DB, a restricted intranet, or a custom network not served under http(s)://.
But in your case, if you build a DB, I suppose you'll have to build web pages to display the data you store in it to your users (like product pages on Amazon)?
If yes, then you can run Google CSE against these pages by adding your http://[server ip] or http://[domain name] to the whitelist.
As far as I know, Custom Search won't guarantee that all your content will be indexed.
You probably want to try exporting a full sitemap.xml and an RSS feed, and if the custom search results from either of those don't satisfy you, you will probably want to look at the Google Search Appliance product.
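If you haven't built one before, a minimal sitemap.xml looks like the sketch below; the URLs and date are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page you want crawled -->
  <url>
    <loc>http://www.example.com/products/item-1</loc>
    <lastmod>2014-01-15</lastmod>
  </url>
</urlset>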
There's also http://sphinxsearch.com/ by the way.

Google duplicate content issue for social network applications

I am making a social network application where users will come and share posts, like Facebook. But now I have some doubts: let's say a user shares content by copying it from another site, and the same goes for images. Does the Google crawler consider that duplicate content or not?
If yes, how can I tell the Google crawler, "Don't consider it spam; it's a social networking site and the content is shared by the users, not by me"? Is there any way or any kind of technique that can help me?
Google might consider it to be duplicate content, in which case the search algorithm will choose one version, the one it believes to be the original or more important, and drop the other.
This isn't a bad thing per se - unless you see that most of your site's content is becoming duplicated.
You can use canonical URL declarations to do what you are describing, but I wouldn't advise it.
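For reference, a canonical declaration is a single line in the duplicate page's head; the URL below is a placeholder:

<!-- href points at the version you want treated as authoritative -->
<link rel="canonical" href="http://www.example.com/original-post" />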
If your website belongs to one of these types, forum or e-commerce, it will not be punished for the duplicate content issue. I think a "social platform" is a type of forum.
If your pages are too similar, the two or more similar pages will split the click-through rate, traffic, etc., so their rank in the SERPs may not look good.
I suggest not using "canonical", because this instruction tells the crawlers not to crawl/count the page. If you use it, you will see the number of indexed pages in Webmaster Tools decrease a lot.
Do not worry too much about the duplicate content issue. You can read this article: Google's Matt Cutts: Duplicate Content Won't Hurt You, Unless It Is Spammy.

How to Index new keywords

My question is: I had changed my website's keywords and submitted the site to Google, i.e. verified it with the Google meta tag, and Google was indexing those keywords. I have since changed my keywords, but Google is indexing the older keywords, not the recent ones.
Will I need to re-verify with the Google meta tag, or can you suggest any other solution that will get my new keywords indexed?
The Google crawler doesn't run every day. Sometimes it can take up to a week for new keywords to be indexed. It's just a matter of waiting; they'll show up.
If you have pages where the content changes frequently, you can stipulate this when you provide Google with your sitemap. A simple tool that will auto-generate one for you is http://www.xml-sitemaps.com/
Just alter the change frequency (changefreq) values to match how often each page is updated.
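As a sketch, that stipulation lives inside each sitemap entry; the URL here is a placeholder:

<url>
  <loc>http://www.example.com/news</loc>
  <!-- hint to crawlers that this page changes daily -->
  <changefreq>daily</changefreq>
</url>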
The keywords meta tag is not important. Search engines don't pay any attention to it anymore. I wouldn't waste my time worrying about it.
Here's a quote from SEOmoz's Beginner Guide to SEO (http://guides.seomoz.org/chapter-4-basics-of-search-engine-friendly-design-and-development): "The meta keywords tag had value at one time, but is no longer valuable or important to search engine optimization."
Instead, I would worry about making sure the content on your site is well-written, relevant to your topic, and keyword-focused (but not keyword-crammed... sound natural). Make sure your title tags are appropriate to what's on each page. That will get you farther than meta keywords.
Additionally, it takes time for search engines to index changes. Be patient on any changes you make to your site.
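A sketch of the kind of head markup that advice points at; the title and description text are invented for illustration:

<head>
  <!-- descriptive, page-specific title; the keywords tag is deliberately absent -->
  <title>Hand-Painted Ceramic Mugs | Example Store</title>
  <meta name="description" content="Browse our hand-painted ceramic mugs, glazed and fired in-house.">
</head>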

How do I get Google to index changes made to my website's keywords?

I have made changes to my website's keywords, description, and title, but Google is not indexing the new keywords. Instead, I have found that Google is indexing the older ones.
How can I get Google to index my site using the new keywords that I have added?
Periods between crawls vary a lot across pages. A post to SO will be crawled and indexed by Google in seconds. A personal page whose content hasn't changed in 20 years might not be crawled even once a year.
Submitting a sitemap through Webmaster Tools will likely cause your website to be re-crawled to validate the sitemap. You could use this to speed up re-crawling.
However, as @Charles noted, the keywords meta tag is mostly ignored by Google. So it sounds like you're wasting your time.

Is there a way to prevent Googlebot from indexing certain parts of a page?

Is it possible to fine-tune directives to Google to such an extent that it will ignore part of a page, yet still index the rest?
There are a couple of different issues we've come across which would be helped by this, such as:
RSS feed / news ticker-type text on a page displaying content from an external source
users entering contact details (phone numbers, etc.) that they want visible on the site but would rather not be Google-able
I'm aware that both of the above can be addressed via other techniques (such as writing the content with JavaScript), but am wondering if anyone knows if there's a cleaner option already available from Google?
I've been doing some digging on this and came across mentions of googleon and googleoff tags, but these seem to be exclusive to Google Search Appliances.
Does anyone know if there's a similar set of tags to which Googlebot will adhere?
Edit: Just to clarify, I don't want to go down the dangerous route of cloaking/serving up different content to Google, which is why I'm looking to see if there's a "legit" way of achieving what I'd like to do here.
What you're asking for can't really be done: Google either takes the entire page or none of it.
You could do some sneaky tricks, though, like putting the part of the page you don't want indexed in an iframe and using robots.txt to ask Google not to crawl the iframe's source.
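A rough sketch of that trick, with hypothetical paths. On the main page:

<!-- the fragment you want hidden lives in its own document -->
<iframe src="/private/contact-details.html"></iframe>

And in robots.txt:

User-agent: *
Disallow: /private/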
In short, NO, unless you use cloaking, which is discouraged by Google.
Please check out the official documentation here:
http://code.google.com/apis/searchappliance/documentation/46/admin_crawl/Preparing.html
Go to the section "Excluding Unwanted Text from the Index":
<!--googleoff: index-->
here will be skipped
<!--googleon: index-->
I found a useful resource for marking certain duplicate content so that search engines will not index it:
<p>This is normal (X)HTML content that will be indexed by Google.</p>
<!--googleoff: index-->
<p>This (X)HTML content will NOT be indexed by Google.</p>
<!--googleon: index-->
On your server, detect the search bot by IP using PHP or ASP. Then feed the IP addresses on that list the version of the page you wish to have indexed. In that search-engine-friendly version of your page, use the canonical link tag to tell the search engine which version of the page should be indexed.
This way, only the content you wish to be indexed gets indexed, while regular visitors still see the full page. This method will not get you blocked by the search engines and is completely safe.
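A rough sketch of what this answer describes, assuming PHP; the IP list and include paths are hypothetical (real crawler detection generally needs the engines' published IP ranges or reverse-DNS checks rather than exact matches):

<?php
// hypothetical crawler IPs; replace with the engines' published ranges
$botIps = array('66.249.66.1', '66.249.64.2');

if (in_array($_SERVER['REMOTE_ADDR'], $botIps)) {
    // search-engine-friendly version; its <head> carries the canonical
    // link tag naming the version that should be indexed
    include 'page-for-bots.php';
} else {
    // regular visitors get the full page
    include 'page-full.php';
}
?>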
Yes, you can definitely stop Google from indexing some parts of your website by creating a custom robots.txt and listing the portions you don't want indexed, such as wp-admin, or a particular post or page. Before creating it, check your site's existing robots.txt, for example at www.yoursite.com/robots.txt.
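A minimal robots.txt along those lines; the disallowed paths are examples:

User-agent: *
Disallow: /wp-admin/
Disallow: /private-page/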
All search engines either index or ignore the entire page. The only possible way to implement what you want is to:
(a) have two different versions of the same page,
(b) detect the requesting user agent, and
(c) if it's a search engine, serve the second version of your page.
This link might prove helpful.
There are meta tags for bots, and there's also robots.txt, with which you can restrict access to certain directories.
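As a sketch, the bots meta tag goes in the page's head; note that it applies to the whole page, never part of one:

<!-- keep this page out of the index and don't follow its links -->
<meta name="robots" content="noindex, nofollow">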