I'm trying to improve our search ranking for 'mock web services' or 'web service mocks', but I've hit a problem: Google Webmaster Tools doesn't list 'web services' / 'webservices' / 'web-services' as a Content Keyword, even though all variants appear on our site.
Do Google and/or Webmaster Tools ignore certain words like 'web' because they are too common?
(I'm sure you don't need to include every variant of a word to be relevant for that term; not only is Google cleverer than that, but it might look inconsistent to your users.)
Yes, Google can ignore certain words or phrases, though the Content Keywords report is just a frequency count. It doesn't mean you will or won't rank for the terms listed. From the Webmaster Tools FAQ:
Q: Why do my Webmaster Tools stats show common phrases such as "buy now" that are not directly related to my site?
A: While some common words and phrases are filtered by Webmaster Tools, there may be some that you use which are not. Having these words or phrases listed in your Webmaster Tools account does not mean that our algorithms will view your site as being only relevant for those keywords. While Webmaster Tools mostly counts the occurrences of words on your site, our web-search algorithms use well over 200 other factors for crawling, indexing and ranking. In other words: don't worry if you see keywords like this listed in your Webmaster Tools account.
https://sites.google.com/site/webmasterhelpforum/en/faq--webmaster-tools#strange-words2
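As the FAQ says, the report is mostly a frequency count over the words on your pages. A minimal Python sketch of that idea (the sample text, variant list and tokenization are my own illustrative assumptions; Google's actual tokenizer and filtering are far more sophisticated):

```python
import re
from collections import Counter

def keyword_counts(page_text, variants):
    """Count how often each keyword variant occurs in a page's text.

    Rough approximation of a Content-Keywords-style frequency count:
    lowercase the text, keep word characters and hyphens, then count
    substring occurrences of each variant.
    """
    words = re.findall(r"[a-z0-9-]+", page_text.lower())
    text = " ".join(words)
    return Counter({v: text.count(v.lower()) for v in variants})

page = "We offer mock web services. Our webservices and web-services mocks ship fast."
print(keyword_counts(page, ["web services", "webservices", "web-services"]))
```

Substring counting like this conflates overlapping variants on real pages; it is only meant to show why all three spellings are counted as different "keywords".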
I want to sell some products that are also present on another webshop. They provide a data feed with all the information about each product, and they have nothing against me posting the info on my webshop.
The question is: should I worry about duplicate content? The number of products is too high and it's not worth rewriting their descriptions. Will Google think that I stole the content?
Depends.
Personally, I would prevent Google from indexing duplicate-content pages by adding this to the <head>...</head>:
<meta name="robots" content="noindex,follow"/>
The URLs in question won't rank anyway, so it's (usually) OK to keep them completely out of Google's sight and stop worrying about all the algorithm updates.
Or, if I had a lot of pages and needed more crawl budget, I would use the robots.txt file:
User-agent: *
Disallow: /path/to/affiliate/products/
In this case link juice can no longer flow freely within the site, but all the important pages get indexed, and it's incredibly easy to implement. (Just don't do this if you have a lot of deep links to your products from your homepage etc.)
Matt Cutts in 2009:
"Can product descriptions be considered duplicate content?"
http://www.youtube.com/watch?v=z07IfCtYbLw
He doesn't say "it's bad", but he makes it clear that Google doesn't like it.
Matt Cutts in 2012:
"Is it useful to have a section of my site that re-posts articles from other sites?"
http://www.youtube.com/watch?v=o7sfUDr3w8I Here he states that it's probably a good idea to remove duplicate-content pages (like content from RSS feeds, press releases or product-description feeds).
So, to make a long story short: I'm not saying "panic" or anything like that. I'm just saying "remove everything from your site that could send negative signals to Google, so you don't have to worry about it anymore" — then you can go on and build up your brand to sell as many products as possible ;o)
Don't worry about the content: the site comes under the category of affiliate sites, so the product descriptions would be the same. It won't affect your site.
If you want to do it properly, I would get all the content re-written. There is an amazing service out there too, called wordai.com.
On their Turing plan, their site will re-write the content for you as if a human had.
You can then check the content with copyscape.com to see how unique it is!
Best of luck.
I want to know the rank of my page for certain keywords. For example, when I search for "best movies 2012" my page does appear, but on the 30th to 50th page of results. I want to query the result set Google returns for my keywords so that I can see the rank of my page and my competitors' pages for typical keywords.
I think you may be confusing PageRank with positions. PageRank is an algorithm that Google uses to determine the authority of your site. This doesn't always affect the positions of certain keywords.
There are plenty of good programs and web services around that you can use such as
http://raventools.com/
Most of the good free web services have been shut down, because Google now limits the number of searches that can be performed and charges for this data.
You could check out:
http://www.semrush.com
It's free but you have to register to get data.
There are several web services providing this functionality: http://raventools.com/ or http://seomoz.org/
Or you can perform the task manually. Here is an example of how to query Google search using Java: How can you search Google Programmatically Java API
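Whatever language you use, the position-finding step reduces to scanning an ordered list of result URLs for your domain. A minimal Python sketch, assuming you have already obtained the result URLs from some API or page fetch (the sample URLs below are made up for illustration):

```python
from urllib.parse import urlparse

def serp_position(result_urls, domain):
    """Return the 1-based rank of the first result hosted on `domain`
    (or a subdomain of it), or None if the domain is absent."""
    for rank, url in enumerate(result_urls, start=1):
        host = urlparse(url).netloc.lower()
        if host == domain or host.endswith("." + domain):
            return rank
    return None

results = [
    "https://example.org/best-movies",
    "https://www.mysite.com/movies-2012",
    "https://another.example/top-list",
]
print(serp_position(results, "mysite.com"))  # → 2
```

The same function works for Bing or Yahoo results; only the way you obtain the ordered URL list differs.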
You need to compare your webpage's PageRank and website PR against those of the competition. The best indication we have of website PR is the homepage PageRank.
Ensure that you do this on the appropriate Google domain: Google.com for the USA, Google.co.uk for the UK, etc.
The technique is described in more detail on http://www.keywordseopro.com
You can repeat the technique for each keyword.
We're about to embark on a restructuring of our Website, and we will be separating some of our customers into different groups.
Currently all of our customers visit our homepage: www.example.com
What we are going to be doing is sending customers to specific landing pages depending on marketing segmentation.
For instance, people who we know are more likely to book a hotel might go to www.example.com/hotels, whilst people who like cars will go to www.example.com/cars.
The content might be ever so slightly different (a banner or parameter might change) but the vast majority of text (copy, layout) will stay the same.
Firstly, are Canonical Tags appropriate to use in this case to direct any Google juice back to www.example.com?
Secondly, since we will be marketing to specific groups, we will not want these pages to be indexed by Google, nor for them to appear in search rankings. With this in mind, are Canonical Tags still the correct tag to be using? That is, do Canonical Tags pass on the Google Juice to the canonical page, meaning the referrer page is not indexed?
If the core content of all those pages is the same, then using the canonical tag should work. If Google accepts the canonical relationship between the pages, it will always send searchers to the page you specify.
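For example, each segmented landing page (/hotels, /cars) would declare the homepage as its canonical URL in its <head> (the URL here is illustrative):

```html
<!-- On www.example.com/hotels and www.example.com/cars -->
<link rel="canonical" href="http://www.example.com/" />
```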
What do you mean by "sending customers to specific landing pages depending on marketing segmentation"? How is that implemented?
If all that changes is adverts then why not use the one page and dynamically insert the adverts that suit the visitor?
It seems that if you don't want the specific landing pages indexed by Google, or appearing in the search rankings, then those pages won't have any 'Google juice' to consolidate. In that case canonical tags won't hurt, but I don't think they'll have any effect.
To keep the landing pages from being indexed, you could use robots.txt, as well as the robots meta tag.
I'm trying to find out if there is a programmatic way to determine how far down in a search engine's results my site shows up for given keywords. For example, my query would provide my domain name and keywords, and the result would be, say, 94, indicating that my site was the 94th result. I'm specifically interested in how to do this with Google, but also interested in Bing and Yahoo.
No.
There is no official programmatic access to such data. People generally roll their own trackers: fetch the Google search results page and use regexes to find your position. Note, though, that different results are now shown in different geographies, and results are personalized.
The gl=us parameter will help you get results from the US; change the country code accordingly to get results for other geographies.
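As a minimal sketch, here is how such a geo-pinned results URL could be assembled in Python (q, gl and start are the parameters mentioned or commonly used for Google's results pages; actually scraping those pages may violate Google's terms of service, so treat this only as an illustration of the URL shape):

```python
from urllib.parse import urlencode

def search_url(query, gl="us", start=0):
    """Build a Google results URL pinned to a country via `gl`.

    `start` pages through results 10 at a time (start=90 is the
    10th page, where a 94th-place result would appear).
    """
    params = {"q": query, "gl": gl, "start": start}
    return "https://www.google.com/search?" + urlencode(params)

print(search_url("best movies 2012", gl="us", start=90))
# → https://www.google.com/search?q=best+movies+2012&gl=us&start=90
```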
Before creating this from scratch, you may want to save yourself some time (and money) by using a service that does exactly that [and more]: Ginzametrics.
They have a free plan (so you can test if it fits your requirements and check if it's really worth creating your own tool), an API and can even import data from Google Analytics.
Does anyone know how I could track which search terms people use to arrive at my site? For instance, someone searches Google for 'giant inflatable house' and clicks through to my site. I want to be able to capture those keywords and which search engine they came from.
You must parse the referer. For example, a Google search query will contain: http://www.google.be/search?q=oostende+taxi&ie=UTF-8&oe=UTF-8&hl=en&client=safari
It's a real-life query — yes, I'm in Oostende right now :)
Look at the query string: you can determine pretty easily what I was searching for.
Not all search engines expose the query in the referer like this, but most major players do.
How do you get the referer? That depends on the scripting language you use.
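For example, in Python the search terms can be pulled out of such a referer with the standard library alone (the referer string is the one quoted above):

```python
from urllib.parse import urlparse, parse_qs

def search_terms_from_referer(referer):
    """Extract the search query (`q` parameter) and the engine's host
    from a search-engine referer URL; query is None if absent."""
    parsed = urlparse(referer)
    query = parse_qs(parsed.query).get("q", [None])[0]
    return query, parsed.netloc

ref = ("http://www.google.be/search?q=oostende+taxi"
       "&ie=UTF-8&oe=UTF-8&hl=en&client=safari")
print(search_terms_from_referer(ref))  # → ('oostende taxi', 'www.google.be')
```

In a web framework the referer typically arrives as the Referer request header (HTTP_REFERER in CGI-style environments); the parsing step is the same once you have the string.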
You should use a tool like Google Analytics.
Besides Google Analytics, Google Webmaster Tools is also very useful: it can report a detailed analysis of your search queries' impressions, clicks, CTR, position, etc.