I'm having trouble getting the Webmaster Tools rich snippet testing tool to properly recognize markup for schema.org's WebPageElement types.
http://schema.org/WebPageElement
Does anyone have a site that hosts this markup?
I'm looking for solutions for a website that has undesirable snippets returned on Google search. The website is an interactive library of slide presentations, with an advanced search function.
Many different search pages on this site are being dropped from the Google index every week. The snippet returned on these pages includes the navigation menu. There is no h1 tag and the first line of the navigation menu is in bold, so Google is identifying the menu as the main content of the page and returning this info in the search results.
I need Google to put the actual page content in the search results, to increase click through rate and resolve a probable duplicate content issue.
I thought it would be good to put an h1 tag on the site, and add schema for WebPageElement, SiteNavigationElement, WPHeader, WPFooter, and WebPage.
Does anyone have examples of this markup on their site?
In the past I've used the rich snippet tool and had it return errors, and in every instance I found that my code did indeed contain an error, so I don't think it's the tool.
I have implemented several of the schema.org WebPageElement types on http://gamesforkidsfree.net/en/, including SiteNavigationElement.
You can check how the markup is being recognized by Google in the Rich Snippets Testing Tool.
Also, in Google Webmaster Tools there is a section to check this kind of markup under "Optimization / Structured Data". For this site it shows:
Type                   Schema      Items    # Pages
---------------------------------------------------
ItemPage               schema.org  109,657    6,866
WPAdBlock              schema.org   20,727    6,973
SiteNavigationElement  schema.org    7,350    7,322
WPHeader               schema.org    7,319    7,319
WPFooter               schema.org    7,319    7,319
WebPage                schema.org      649      649
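For reference, a minimal sketch of what this kind of markup can look like (simplified and illustrative, not the site's actual code):

<body itemscope itemtype="http://schema.org/WebPage">
  <div itemscope itemtype="http://schema.org/WPHeader">
    <h1>Site title</h1>
  </div>
  <div itemscope itemtype="http://schema.org/SiteNavigationElement">
    <a href="/en/">Home</a>
    <a href="/en/games/">Games</a>
  </div>
  <div>Main page content goes here.</div>
  <div itemscope itemtype="http://schema.org/WPFooter">
    <a href="/en/about/">About</a>
  </div>
</body>

Note that in microdata an itemscope without an itemprop starts a new top-level item, which is why WPHeader, WPFooter, and SiteNavigationElement are counted as separate types in the table above.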
Regarding duplicate content, you can have a look at one of the many Google support pages about canonicalization (isn't that duplicate content? :)), e.g. the canonicalization hints page.
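The usual fix is a rel="canonical" link in the head of each duplicate page, pointing at the preferred URL (the URL below is a placeholder):

<link rel="canonical" href="http://www.example.com/search/slides">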
It would be easier to answer if you could show the actual website or a SERP screenshot. By the way, I don't think your problem can be solved with that kind of markup, since there is no evidence that Google supports it, even though schema.org is a Google initiative.
From what I understand, you have two different kinds of issues:
Bad search snippets. Google shows in the search snippet a fragment of the on-page text that is relevant to the user's query, so what you see in the snippet largely depends on the query you typed into the search box. If you see a piece of the navigation menu in the snippets, it could be that there is no relevant text in the indexed page, so Google has nothing better to show than the text of the navigation menu.
Search pages being dropped from the Google index. This is a different, and more serious, problem. Are those "search pages" a good and relevant result compared to the other pages ranking for the queries you are typing? Is the main topic of each page clear and explicit (remember that sometimes you need to spoon-feed the search engines)? I'm giving you more questions than answers but, as I stated before, it is not easy to diagnose an SEO problem without seeing the website.
All the above being said, Google does show breadcrumbs in its SERPs when you define Breadcrumb markup, and schema.org is a joint effort of the major search engines, so implementing it gives the bots some level of better understanding of your pages. Search engines do not tell you everything they do, but if you follow the main standards they produce together, you give your content a good chance of being presented well in the SERPs.
You shouldn't count on a big impact from that, though.
I suggest you focus mainly on pretty URLs, canonical usage, title, description, and a proper implementation of schema.org itemprops for your main content type on the inner pages, as well as an H1 for your title.
Also try to render your main content as high as possible within the HTML, and avoid splitting up your title, summary, and image. In the best case they sit close to each other as H1, IMG, and P elements, not divided by divs, tables, and so on; see the sketch below.
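For example, something along these lines (the file name and text are made up):

<h1>Presentation title</h1>
<img src="/slides/cover.jpg" alt="Presentation cover">
<p>A one-paragraph summary of the presentation.</p>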
You can have a look at this site http://svejo.net/1792774-protsesat-na-tsifrovizatsiya-v-balgariya-zapochva
Its article pages have pretty good on-page SEO and show up quite nicely and often in SERPs because of it.
I hope this helps you.
I want to achieve this when a site is searched for on Google (sub-links below the description).
Are these Google Sitelinks?
From what I've seen when researching this, Sitelinks are larger and sit side by side, as shown in the image in this question.
If these aren't Sitelinks, can they be defined and how would this be done?
Yes, these are sitelinks. The large two-column sitelinks mainly appear for the homepage or for other pages with high PageRank.
The little links that appear beside each other are also sitelinks; they are shown for pages with less PageRank, less content, or a poorer HTML structure.
You can't control which links appear on Google; many factors affect them, such as HTML structure, PageRank, content, CTR, and the search query.
You can only remove them via Google Webmaster Tools, by demoting a certain link from a certain page.
These are one-line sitelinks, introduced in April 2009.
They are similar to the "full two-column" sitelinks, but one-line sitelinks can appear for any result, not only the first one.
I am very new to schemas (this is my first time) and I am a little confused by this information. I was reading up on schemas for breadcrumbs and came across two different methods:
Google's way: From what I read here, Google shows an example of adding microdata using http://data-vocabulary.org/Breadcrumb.
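Google's example looks something like this (reconstructed from their docs, with shortened placeholder URLs):

<div itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
  <a href="http://www.example.com/books" itemprop="url">
    <span itemprop="title">Books</span>
  </a> >
</div>
<div itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
  <a href="http://www.example.com/books/classics" itemprop="url">
    <span itemprop="title">Classics</span>
  </a>
</div>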
Schema.org's example: The example on Schema.org shows a very different approach, something like this:
<div itemprop="breadcrumb">
  <a href="category/books.html">Books</a> >
  <a href="category/books-literature.html">Literature & Fiction</a> >
  <a href="category/books-classics">Classics</a>
</div>
My questions are:
(1) Is it better for me to use the Schema.org method instead of the Data-Vocabulary.org one in 2014? In the discussions I read on this topic, some have said that Data-Vocabulary.org is outdated and Schema.org is the newer vocabulary. Is that still a valid statement today? I still see a lot of websites using Data-Vocabulary.org markup similar to Google's example.
(2) The Schema.org method is very simple, unlike Google's Data-Vocabulary.org example, which individually adds itemprop="url" for URLs, itemprop="title" for titles, and so on. The Schema.org method just wraps the whole breadcrumb trail and doesn't declare individual URLs and titles. Would Google's search engine understand the URLs and titles if I used the Schema.org method, or is Google's Data-Vocabulary.org method better for Google's search results?
(3) Lastly, regarding the breadcrumb separator: does Google only show the separator used in the HTML markup? My breadcrumb separator is added via CSS, so it's not in the HTML markup. In this case, if the breadcrumbs are shown in search results, will Google automatically add the > separator, or will it show the trail exactly as it appears in my HTML?
Schema.org and Data-Vocabulary.org are vocabularies. If you want, you could use both of them for the same content (the Microdata syntax makes this hard/impossible, but it’s easy with the RDFa syntax).
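For illustration, here is a breadcrumb link marked up with both vocabularies at once in RDFa (the prefixes, URL, and label are my own placeholders, not from either spec's examples):

<div prefix="dv: http://data-vocabulary.org/ s: http://schema.org/">
  <span typeof="dv:Breadcrumb">
    <a property="dv:url s:url" href="http://www.example.com/books">
      <span property="dv:title s:name">Books</span>
    </a>
  </span>
</div>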
If you are interested in a specific consumer for your markup, it makes sense to check their documentation to see what exactly they support (of course you can’t be sure if their documentation is correct and complete).
In the case of Google Search and their Rich Snippets, the documentation would be: Rich snippets - Breadcrumbs (currently "experimental"). On that page, they only give examples using Data-Vocabulary.org.
(Note: Stack Overflow is the wrong place for discussing actual support and behaviour of third-party services like Google Search. On our sister site Webmasters such questions might be on-topic.)
According to Google's Structured Data Testing Tool, there are no errors in my review schema code, but the stars are still not displaying in the preview. Does anyone have any idea why? I thought maybe it was a nesting issue, but I tried organizing the data in all kinds of arrangements, to no avail. Any thoughts would be very appreciated!
Thanks in advance!
Here's the page I'm referring to:
http://www.junkluggers.com/locations/westchester-ny/white-plains-ny-junk-removal-and-furniture-pickup/
(The review I'm working on is the one at the bottom of the page, not the testimonial on the right sidebar.)
According to Google:
" If you've added structured data for rich snippets, but they are not appearing in search results, the problem can be caused by two types of issues:
Technical issues with the structured data markup or with the Google’s ability to crawl, index, and utilize the structured data.
Quality issues, that is, structured data that is technically correct, but does not adhere to Google’s quality guidelines."
Full answer here: https://support.google.com/webmasters/answer/1093493?hl=en
Along with RustyFluff's comment, I notice a few technical errors in your markup, Catherine. In a nutshell: you haven't defined who or what is being reviewed, and you should be using the reviewBody property instead of description. You should also remove the city from within the author's name markup. Beyond that, remove the authorship markup from the page, as it isn't an appropriate use of the authorship tag according to Google's guidelines; and the publisher tag only needs to go on your homepage, where it should link to your Google+ business page, not to a personal profile. A sketch of a corrected review block follows.
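Here is a minimal sketch of how the corrected markup could look (the business name, author, rating, and review text below are placeholders, to be replaced with the real values from your page):

<div itemscope itemtype="http://schema.org/Review">
  <div itemprop="itemReviewed" itemscope itemtype="http://schema.org/LocalBusiness">
    <span itemprop="name">The Junkluggers of Westchester</span>
  </div>
  <span itemprop="author" itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">Jane D.</span>
  </span>
  <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
    <span itemprop="ratingValue">5</span> out of <span itemprop="bestRating">5</span> stars
  </div>
  <div itemprop="reviewBody">They arrived on time and hauled everything away quickly.</div>
</div>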
Keep in mind, though, that even if your markup is technically perfect, there are no guarantees that Google will display your rich snippets. They determine that based on, among other things, various quality signals.
I'm making a site which will have reviews of the privacy policies of hundreds of thousands of other sites on the internet. Its initial content is based on my running through the CommonCrawl 5 billion page web dump and analyzing all the privacy policies with a script, to identify certain characteristics (e.g. "Sells your personal info").
According to the SEOmoz Beginner's Guide to SEO:
"Search engines tend to only crawl about 100 links on any given page. This loose restriction is necessary to keep down on spam and conserve rankings."
I was wondering what would be a smart way to create a web of navigation that leaves no page orphaned, but would still avoid this SEO penalty they speak of. I have a few ideas:
Create alphabetical pages (or Google Sitemap XMLs), like "Sites beginning with Ado*", which would then link to "Adobe.com", for example. This, or any other meaningless split of the pages, seems kind of contrived, and I wonder whether Google might not like it.
Using meta keywords or descriptions to categorize
Find some way to apply more interesting categories, such as geographical or content-based. My concern here is I'm not sure how I would be able to apply such categories across the board to so many sites. I suppose if need be I could write another classifier to try and analyze the content of the pages from the crawl. Sounds like a big job in and of itself though.
Use the DMOZ project to help categorize the pages.
Wikipedia and StackOverflow have obviously solved this problem very well by allowing users to categorize or tag all of the pages. In my case I don't have that luxury, but I want to find the best option available.
At the core of this question is how Google responds to different navigation structures. Does it penalize those who create a web of pages in a programmatic/meaningless way? Or does it not care so long as everything is connected via links?
Google PageRank does not penalize you for having >100 links on a page. But each link above a certain threshold decreases in value/importance in the PageRank algorithm.
Quoting SEOmoz and Matt Cutts:
"Could You Be Penalized? Before we dig in too deep, I want to make it clear that the 100-link limit has never been a penalty situation. In an August 2007 interview, Rand quotes Matt Cutts as saying:
'The "keep the number of links to under 100" is in the technical guideline section, not the quality guidelines section. That means we're not going to remove a page if you have 101 or 102 links on the page. Think of this more as a rule of thumb.'
At the time, it's likely that Google started ignoring links after a certain point, but at worst this kept those post-100 links from passing PageRank. The page itself wasn't going to be de-indexed or penalized."
So the question really is how to get Google to take all your links seriously. You accomplish this by generating an XML sitemap for Google to crawl (you can either have a static sitemap.xml file, or generate its content dynamically). You will want to read up on the About Sitemaps section of the Google Webmaster Tools help documents.
Just as having too many links on a page is an issue, having too many links in an XML sitemap file is also an issue (a single sitemap is capped at 50,000 URLs), so you need to paginate your XML sitemap. Jeff Atwood talks about how Stack Overflow implements this: The Importance of Sitemaps. Jeff also discusses the same issue on Stack Overflow podcast #24.
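In practice, pagination means a sitemap index file that points at the individual sitemap files (the file names below are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.example.com/sitemap-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/sitemap-2.xml</loc>
  </sitemap>
</sitemapindex>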
This concept applies to Bing as well.
If you type "Boyce Avenue" into Google, it shows a rich snippet of their upcoming events, which I'm assuming uses event markup. However, if you type "boyceavenue.com/tour.html" into Google's rich snippet tool, nothing shows up. Why does this happen?
I have answered this here in your other question.
Google and other search engines use various data sources for generating Rich Snippets and similar features. Rich meta-data crawled from the pages, be it RDFa, microdata, or microformats, is only ONE (yet important) source.
Google seems to have bilateral agreements with -- typically large -- sites for consuming their structured data directly. This is why you may see rich snippets on pages that have no data markup in the HTML.
Google Fusion Tables (http://www.google.com/fusiontables/Home/) may also become a technique for exposing structured data to Google.
However, in general, using schema.org or http://purl.org/goodrelations/ (for e-commerce) markup in RDFa or microdata syntax is IMO the best option for new sites,
because the data will be accessible to search engines and browser extensions alike, and
because the data will also send relevance signals to search engines (see http://wiki.goodrelations-vocabulary.org/GoodRelations_for_Semantic_SEO for more details).
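For the tour-page case in the question, a minimal schema.org Event in microdata might look like this (the names, date, and URLs are placeholders):

<div itemscope itemtype="http://schema.org/Event">
  <a itemprop="url" href="http://www.example.com/tour/boston">
    <span itemprop="name">Live in Boston</span>
  </a>
  <time itemprop="startDate" datetime="2014-05-17T19:00">May 17, 7:00 pm</time>
  <span itemprop="location" itemscope itemtype="http://schema.org/Place">
    <span itemprop="name">House of Blues</span>
  </span>
</div>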
Best wishes
Martin Hepp