Star rating not showing up in Google results, but works fine in testing tool - schema

So I have this problem: I've made all the changes Google needs and suggests, and I still don't get those stars under my search result, even though my articles have a rating system and everything is marked up.
The markup shows up fine in Google's testing tool, but in the actual Google results it's just a plain result block with none of the extra data I've marked up, the star rating being one of them.

It would be a good idea to post the URL of a page that you've marked up so that we could take a look. It's still possible that there are technical errors in your markup, and it's also possible that your markup does not meet Google's guidelines even if the testing tool shows no errors. But even if your markup is technically perfect and meets Google's guidelines, there is no guarantee that Google will display your rich snippets in the SERPs. Google uses a number of quality signals to determine whether, when, and which rich snippets to display for a page.
But again, if you could share a URL with us, we could at least take a closer look at things. Thanks.

Related

What is wrong with my rich snippets? Where are my stars?

According to Google's Structured Data Testing Tool, there are no errors in my review schema code, but the stars still are not displaying in the preview. Does anyone have any idea why? I thought maybe it was a nesting issue, but I tried organizing the data in all kinds of arrangements, to no avail. Any thoughts would be much appreciated!
Thanks in advance!
Here's the page I'm referring to:
http://www.junkluggers.com/locations/westchester-ny/white-plains-ny-junk-removal-and-furniture-pickup/
(The review I'm working on is the one at the bottom of the page, not the testimonial on the right sidebar.)
According to Google:
" If you've added structured data for rich snippets, but they are not appearing in search results, the problem can be caused by two types of issues:
Technical issues with the structured data markup or with Google's ability to crawl, index, and utilize the structured data.
Quality issues, that is, structured data that is technically correct, but does not adhere to Google’s quality guidelines."
Full answer here: https://support.google.com/webmasters/answer/1093493?hl=en
Along with RustyFluff's comment, I do notice a few technical errors in your markup, Catherine. In a nutshell, you haven't defined who or what is being reviewed, and you should be using the reviewBody property instead of description. You should also remove the city from within the author's name markup. Something else worth pointing out: you should remove the authorship markup from the page, as this isn't an appropriate use of the authorship tag according to Google's guidelines. Also, the publisher tag only needs to go on your homepage, and it should link to your Google+ business page, not to a personal profile.
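For illustration, a corrected review block might look roughly like this (a sketch in schema.org microdata; the business name, author, rating values, and review text are placeholders, not copied from the actual page):

<div itemscope itemtype="http://schema.org/Review">
  <!-- Define who or what is being reviewed -->
  <div itemprop="itemReviewed" itemscope itemtype="http://schema.org/LocalBusiness">
    <span itemprop="name">Junkluggers of Westchester</span>
  </div>
  <!-- The author is just the person's name, with no city -->
  <span itemprop="author" itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">Jane D.</span>
  </span>
  <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
    <meta itemprop="worstRating" content="1">
    <span itemprop="ratingValue">5</span> out of <span itemprop="bestRating">5</span> stars
  </div>
  <!-- Use reviewBody rather than description for the review text -->
  <p itemprop="reviewBody">Fast, friendly, and they took everything we asked them to.</p>
</div>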
Keep in mind, though, that even if your markup is technically perfect, there are no guarantees that Google will display your rich snippets. They determine that based on, among other things, various quality signals.

Hiding part of a page from Google: does it hurt SEO?

We all know that showing Google's bots content that doesn't exist for visitors is not allowed and will hurt your search positioning, but what about the other way around: showing visitors content that is not displayed to Google's bots?
I need to do this because I have photo pages, each with a short title and the photo, along with a textarea containing the HTML embed code. Googlebot is taking the embed code and using it as the page description in its search results, which is very ugly.
Please advise.
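For illustration, the kind of photo page being described might look roughly like this (a hypothetical sketch; the title, path, and embed URL are placeholders). The embed code sits in the initial HTML as plain text, so Googlebot reads it like any other page copy:

<h1>Sunset over the bay</h1>
<img src="/photos/sunset.jpg" alt="Sunset over the bay">
<!-- The textarea's contents are plain text to a crawler, so this markup
     can end up in the search-result description -->
<textarea readonly cols="60" rows="3"><iframe src="http://example.com/embed/sunset"></iframe></textarea>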
When you start playing with tricks like that, you need to consider several things.
... showing stuff to visitors that are not displayed for Google bots.
That approach is a bit tricky.
You can certainly check User-Agents to see whether a visitor is Googlebot, but Google can add any number of new spiders with different User-Agents that will end up indexing your images anyway. You would have to monitor for that constantly.
Every code release of your website will have to be tested against the "images and Googlebot" scenario, which extends the testing phase and its cost.
It can also affect future development - all changes will have to be made with the "images and Googlebot" scenario in mind, which can introduce additional constraints into your system.
Personally, I would choose a slightly different approach:
First of all, review whether you can use any of the methods recommended by Google. Google provides a few helpful pages on this problem, e.g. Blocking Google or Block or remove pages using a robots.txt file.
If that is not enough, restructuring your HTML might help. Consider using JavaScript to build some of the customer-facing interface, as in the sketch below.
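For example, here is a minimal sketch of that second idea, assuming the embed-code textarea is the part you want out of the crawled HTML (the element ID and embed URL are placeholders). Bear in mind that Googlebot can execute some JavaScript, so treat this as a mitigation rather than a guarantee:

<div id="embed-box"></div>
<script>
  // Build the embed-code textarea on the client after the page loads,
  // so the raw embed markup is not in the initial HTML that crawlers
  // typically use for the snippet.
  var box = document.getElementById('embed-box');
  var field = document.createElement('textarea');
  field.readOnly = true;
  field.value = '<iframe src="http://example.com/embed/sunset"></iframe>';
  box.appendChild(field);
</script>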
And whatever you do, try to keep it as simple as possible, otherwise very complex solutions can turn around and bite you.
It is very difficult to give good advice without knowing your system, constraints, and strategy, but I hope my answer helps you choose a good architecture and solution for your system.
You're overthinking this.
Google doesn't judge you a cheater on suspicion alone; it reviews the case. As long as your purpose is the user experience rather than the common cheating tactics, Google won't consider it cheating.
Just block these pages with robots.txt and you'll be fine. It is not cheating; that's why they came up with a solution like that in the first place.
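For example, if the photo pages all live under one directory (a hypothetical path), the robots.txt entry is just:

User-agent: *
Disallow: /photos/

Note that this keeps crawlers off those pages entirely, so only use it if you don't need the photo pages themselves to rank in search.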

schema.org markup for search results pages

I was wondering if there is any markup in schema.org for a search results page that Google currently honors. I was trying
ItemList (http://schema.org/ItemList)
and
AggregateOffer (http://schema.org/AggregateOffer),
but neither of them seems to be showing up on Google yet (as in, Google still doesn't support it or display that markup on the search page). Is there any other markup I can try?
Thank you :)
Search for a restaurant, place, or product and you'll see microformats that Google recognizes and uses to format its search results. Yelp reviews also all show a price range. They are used widely. I am pretty sure Google uses the Places markup widely as well, and I believe I have seen cases of books having the author's name and so on displayed.
But...
How they are used - in which cases, for which sites, and for which queries Google decides to use this information - is entirely up to the search engine.
Within weeks of the announcements about microformats for product ratings, sites entirely unrelated to the topic were adding microformats with product rating information. So think of them as a hint that Google (and other search engines) might use in some cases, when they are confident that it's accurate and helpful.
It might just take time for Google to trust your site.
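For reference, the ItemList markup the question asks about would look roughly like this (a sketch in schema.org microdata; the names and URLs are placeholders). It validates in the testing tool, but as noted above, whether Google displays anything for it is entirely up to Google:

<div itemscope itemtype="http://schema.org/ItemList">
  <h2 itemprop="name">Results for "blue widgets"</h2>
  <ul>
    <!-- In the original schema.org vocabulary, itemListElement can be plain text or a link -->
    <li itemprop="itemListElement"><a href="http://example.com/widgets/1">Blue widget, model 1</a></li>
    <li itemprop="itemListElement"><a href="http://example.com/widgets/2">Blue widget, model 2</a></li>
  </ul>
</div>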

How to get Author data into Google search results without Microformats?

First, I apologize if this is not considered programming-related enough for some people's taste; however, I feel it is appropriate, as my question relates to what you put in a website's markup.
OK, so I searched Google for the term dribbble invite, and on page 2 of my results (at this URL: Google result) the 5th result - it will probably be different for you based on your location and other factors - looks like the image below.
Notice the author photo and name. How can I do this with a website? From my past research it looked like this is done with microformats; however, a search through the source code of the page HERE doesn't appear to turn up any microformats.
Any idea how this is happening for that website?
Typically, this is done through Google+.
There's a pretty good how-to article here:
http://www.labnol.org/internet/author-profile-in-google/19775/
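In short, the approach described there is Google Authorship: you point the page at a Google+ profile, and the profile's "Contributor to" section back at the site. The on-page half looked roughly like this (the profile ID is a placeholder):

<!-- In the page head: -->
<link rel="author" href="https://plus.google.com/113456789012345678901">

<!-- Or as a visible byline link: -->
<a href="https://plus.google.com/113456789012345678901?rel=author">Author Name</a>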

How does Google Know you are Cloaking?

I can't seem to find any information on how Google determines whether you are cloaking your content. From a technical standpoint, how do you think they are determining this? Are they sending in things other than Googlebot and comparing the results to Googlebot's? Do they have a team of human beings comparing? Or can they somehow tell that you checked the user agent and executed a different code path because you saw "googlebot" in the name?
This is in relation to this question on legitimate URL cloaking for SEO. If the textual content is exactly the same but the rendering is different (1995-style HTML vs. AJAX vs. Flash), is there really a problem with cloaking?
Thanks for your input on this one.
As far as I know, how Google prepares search engine results is secret and constantly changing. Spoofing different user-agents is easy, so they might do that. In the case of JavaScript, they might also actually render partial or entire pages. "Do they have a team of human beings comparing?" That is doubtful. A lot has been written on Google's crawling strategies, including this, but if humans are involved, they're only called in for specific cases. I even doubt that: any person-power is probably spent tweaking the crawling engine.
Google looks at your site while presenting User-Agents other than Googlebot's.
See page 11 of the Google Chrome comic book, which describes (even better than in layman's terms) how a Google tool can take a schematic of a web page. They could be using this or similar technology for Google search indexing and cloak detection - at least that would be another good use for it.
Google does hire contractors (indirectly, through an outside agency, for very low pay) to manually review documents returned as search results and judge their relevance to the search terms, quality of translations, etc. I highly doubt that this is their only tool for detecting cloaking, but it is one of them.
In reality, many of Google's algorithms are trivially reverse-engineered and are far from rocket science. In the case of so-called "cloaking detection," all of the previous guesses are on the money. If you don't believe me, set up some test sites (inputs) and some cloaking test cases (further inputs), submit your sites to Google (processing), and then check Google's results to see whether you've been banned yet (outputs). Loop until enlightenment == True.
A very simple test would be to compare the file size of a web page as Googlebot saw it against the file size of the same page fetched under a Google alias that looks like a normal user.
That would flag the most suspect candidates for closer examination.
They fetch your page using tools like curl and construct a hash of the page retrieved without the Googlebot user-agent, then construct another hash of the page retrieved with the Googlebot user-agent. The two hashes must be similar; they have algorithms to compare the hashes and determine whether it's cloaking or not.
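As a rough illustration of the kind of comparison being described (this is a guess at the technique, not Google's actual code), here is a sketch in Python; the URL is a placeholder:

import hashlib
import requests

URL = "http://example.com/page"  # placeholder

# Fetch the same URL with two different User-Agents and fingerprint the
# responses. A real detector would use a similarity/fuzzy hash, since
# dynamic pages (timestamps, session tokens) rarely match byte-for-byte.
def fingerprint(user_agent):
    response = requests.get(URL, headers={"User-Agent": user_agent})
    return hashlib.sha256(response.content).hexdigest()

browser_hash = fingerprint("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
googlebot_hash = fingerprint(
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)

if browser_hash != googlebot_hash:
    print("Responses differ by User-Agent - candidate for closer examination")
else:
    print("Responses are identical")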