I have an issue with my website http://www.kuhm.fr/ and the "+1" button on all pages (like http://www.kuhm.fr/erreurs-communes/): when people click "+1", the number increases, but if you refresh the page, the number is the same as before.
Worse: the button then turns red (so Google shows that the visitor has +1'd), but the "+1" is not actually recorded.
It worked once (some months ago), but now it doesn't.
Shares (and +1s on shares) are OK, but not "+1"s on the website itself.
Any ideas to help me?
Your page does have some problems that may or may not be the cause:
On your home page, you are including the Google+ JavaScript over and over again with each listed article. Only include the plusone.js file a single time on any page. At the very least, you'll improve performance, both in load time and in the DOM scanning that the script does.
On your article pages, you are also including the plusone.js file multiple times, and in different manners: sometimes you load it asynchronously (desired) and other times it is loaded synchronously (not efficient).
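For reference, Google's documented asynchronous loader, included once per page (typically near the closing body tag), looks like this; the button placeholder markup is the standard tag, placed wherever you want a button rendered:

```html
<!-- One placeholder per spot where a +1 button should render -->
<g:plusone></g:plusone>

<!-- Include plusone.js ONCE per page, asynchronously -->
<script type="text/javascript">
  (function() {
    var po = document.createElement('script');
    po.type = 'text/javascript';
    po.async = true;
    po.src = 'https://apis.google.com/js/plusone.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(po, s);
  })();
</script>
```

Multiple buttons on one page are fine; it is only the script include that should not be repeated.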
There appears to be a problem fetching the snippet from your page: for example, the thumbnail is broken and the script returns an HTTP 500 internal server error. Check your pages with the Structured Data Testing Tool. This could be a Google bug, but you should verify that your own code is correct too.
In my testing, the +1s were getting recorded, but sometimes a count of 1 was displayed and sometimes a count of 0. It looks like a bug on Google's side, so you should file a bug report.
Related
I've searched all over the place and I can't figure out what I'm doing wrong. No matter what, I still get "Page does not contain authorship markup" in the Structured Data Testing Tool.
I have two sites with almost identical pages. The rel=author tags are inserted the same way.
Here is an example of one page that works: http://bit.ly/18odGef
Here is an example of one page that doesn't: http://bit.ly/12vXdAm
I tried adding ?rel=author to the end of the Google+ profile URL, which doesn't seem to work on either site. I am not blocking anything via nofollow or robots.txt. The tool is not being blocked by a firewall or anything. Can anyone see what I'm doing wrong here and why it works for one site, but not the other?
FYI, the site that does not work used to work without a problem. I hadn't changed anything with how the author markup was organized until I realized it wasn't working anymore.
When I test both of those pages in Google's structured data test tool, it shows that authorship is working correctly for both pages.
Here are the results for the page you said was working: https://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fnikonites.com%2Fd5100%2F2507-d5100-vs-d90.html%23axzz2rFFm1eVv
Here are the results for the page you said wasn't working: https://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fcellphoneforums.net%2Fsamsung-galaxy%2Ft359099-enable-auto-correct-galaxy-note-ii.html%23axzz2rFFlwz3W
How does one write a script to download one's Google web history?
I know about
https://www.google.com/history/
https://www.google.com/history/lookup?hl=en&authuser=0&max=1326122791634447
feed:https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss
but they fail when called programmatically rather than through a browser.
I wrote up a blog post on how to download your entire Google Web History using a script I put together.
It all works directly within your web browser on the client side (i.e. no data is transmitted to a third-party), and you can download it to a CSV file. You can view the source code here:
http://geeklad.com/tools/google-history/google-history.js
My blog post has a bookmarklet you can use to easily launch the script. It works by accessing the same feed, but it iterates over the entire history 1,000 records at a time, converts it into a CSV string, and makes the data downloadable at the touch of a button.
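The feed iteration is the heavy lifting; the CSV step itself is simple. A minimal sketch of that step (the record shape — timestamp, query, url — is my assumption, not necessarily what the feed returns):

```javascript
// Sketch: convert an array of history records into a CSV string.
// The field names are illustrative; adapt them to the actual feed.
function toCsv(records) {
  // Quote every field and escape embedded quotes per RFC 4180
  function esc(value) {
    return '"' + String(value).replace(/"/g, '""') + '"';
  }
  var lines = [['timestamp', 'query', 'url'].map(esc).join(',')];
  records.forEach(function (r) {
    lines.push([r.timestamp, r.query, r.url].map(esc).join(','));
  });
  return lines.join('\n');
}
```

Each 1,000-record page pulled from the feed would be parsed into such records and appended before requesting the next page.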
I ran it against my own history, and successfully downloaded over 130K records, which came out to around 30MB when exported to CSV.
EDIT: It seems that a number of folks who have used my script have run into problems, likely due to some oddities in their history data. Unfortunately, since the script does everything within the browser, I cannot debug it when it encounters histories that break it. If you're a JavaScript developer, you use my script, and it appears your history has caused it to break, please feel free to help me fix it and send me any updates to the code.
I tried GeekLad's system; unfortunately, two breaking changes have occurred. #1: the URL has changed (I modified and hosted my own copy), which led me to #2: the type=rss argument no longer works.
I only needed the timestamps... so began the best/worst hack I've written in a while.
Step 1 - https://stackoverflow.com/a/3177718/9908 - Using Chrome, disable ALL security protections.
Step 2 - https://gist.github.com/devdave/22b578d562a0dc1a8303
Using contentscript.js and manifest.json, make a Chrome extension, and host ransack.js locally with whatever service you want (PHP, Ruby, Python, etc.). Go to https://history.google.com/history/ after installing your content-script extension in developer mode (unpacked). It will automatically inject ransack.js + jQuery into the DOM, harvest the data, and then move on to the next "Later" link.
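For context, a minimal manifest.json for such a content-script extension might look like this (manifest v2 era; the file names follow the gist, the rest is my sketch):

```json
{
  "manifest_version": 2,
  "name": "History Ransacker",
  "version": "0.1",
  "content_scripts": [
    {
      "matches": ["https://history.google.com/history/*"],
      "js": ["jquery.js", "contentscript.js"],
      "run_at": "document_end"
    }
  ],
  "permissions": ["https://history.google.com/*"]
}
```

The content script then runs on every matching page load, which is what lets the harvest continue automatically as each "Later" link is followed.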
Every 60 seconds or so, Google will randomly force you to re-login, so this is not a start-and-walk-away process, BUT it does work, and if they up the obfuscation ante, you can always resort to chaining Ajax calls and sending the page back to the backend for post-processing. At full tilt, my abomination of a script collected one page of data per second.
On moral grounds I will not help anyone modify this script to get search terms and results, as this process is not sanctioned by Google (though apparently not blocked), and I recommend it only to individuals sufficiently motivated to make it work for them. By my estimate, it took me 3-4 hours to get all 9 years of data (90K records) at 1 page every 900 ms or faster.
While this thing is going, DO NOT browse the rest of the web, because Chrome is running with no safeguards in place; most of them exist for a reason.
You can download your search logs directly from Google (in case downloading them using a script is not the primary purpose).
Steps:
1) Login and Go to https://history.google.com/history/
2) Just below your profile picture, towards the right side, you will find a settings icon. The second option is called "Download"; click on that.
3) Then click on "Create Archive", and Google will mail you the log within minutes.
Maybe, before issuing a request to get the feed, the script should add a User-Agent HTTP header of a well-known browser, so that Google decides the request came from that browser.
I have a website, and on it I have, for example, a list of Audi models. Using Google Webmaster Tools, I saw that my website appears in Google search for the word "audi", but the target page was the 22nd page of my result set, not the first. I need my first page to appear, not my last (or one in the middle), but I cannot tell Google that this is a parameter, because my URLs are rewritten using mod_rewrite. Any ideas?
BTW, I have read in an SEO forum that it's a bad idea to use a canonical tag. So is it really a bad idea in my case?
You can't force Google to do anything; however, they have made it easier to deal with pagination issues with a recent post on rel="next" and rel="prev".
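For example, on page 2 of a paginated model listing, the head of the page would carry links to its neighbours (URLs illustrative):

```html
<link rel="prev" href="http://example.com/audi-models/page/1/">
<link rel="next" href="http://example.com/audi-models/page/3/">
```

Page 1 carries only a rel="next" link and the last page only a rel="prev", which tells Google where the sequence starts and ends.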
But the primary problem you face is signalling to Google that your first (main) page is the starting point - this is achieved using internal link and back-link "juice" focussed on that page. You need to ensure that the first page of results is linked to properly from higher-value pages (like the home-page).
Google recently announced that you can use View All which will allow them to find and index entire articles that are normally broken up using pagination and display them all as one result.
I am doing some page speed optimisation on a rather large website. I would like to be able to record the overall loading time for each page on my site. So, from request to completion of all elements loading.
Can anyone recommend software/tools/etc that will provide me with a log of pages along with their total loading time so I can target the pages that take the longest?
I should probably add that I am already aware of Firebug, YSlow, the Page Speed plugin, and 'Site Performance' in Google Webmaster Tools. Webmaster Tools provides the closest answer to my problem, but only for the top 10 pages.
Try Log Parser: http://support.microsoft.com/kb/910447
Apart from that, and not totally related to your query, this would be a good tool for analysis as well:
http://www.microsoft.com/downloads/en/details.aspx?FamilyID=60585590-57DF-4FC1-8F0C-05A286059406
Some e-Marketing tools claim to choose which web page to display based on where you were before. That is, if you've been browsing truck sites and then go to Ford.com, your first page would be of the Ford Explorer.
I know you can get the immediately preceding page with HTTP_REFERER, but how do you know where they were 6 sites ago?
JavaScript: this should get you started: http://www.dicabrio.com/javascript/steal-history.php
There are more nefarious means too: http://ha.ckers.org/blog/20070228/steal-browser-history-without-javascript/
Edit: I wanted to add that although this works, it is a sleazy marketing technique and an invasion of privacy.
Unrelated but relevant: if you only want to look one page back and you can't get to the headers of a page, then document.referrer gives you the place a visitor came from.
You can't access the values for the entries in browser history (neither client side nor server side). All you can do is to send the browser back or forward a number of steps. The entries of the history are otherwise hidden from programmatic access.
Also note that HTTP_REFERER won't be there if the user typed the address in the URL bar instead of following a link to your page.
The browser history can't be directly accessed, but you can compare a list of sites with the user's history. This can be done because the browser attributes a different CSS style to a link that hasn't been visited and one that has.
Using this style difference you can change the content of your pages using pure CSS, but in general JavaScript is used. There is a good article here about using this trick to improve the user experience by displaying only the RSS aggregator or social bookmarking links that the user actually uses: http://www.niallkennedy.com/blog/2008/02/browser-history-sniff.html
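A minimal sketch of the trick (URLs illustrative; note that modern browsers deliberately lie about :visited computed styles precisely to block this, so it only worked in older browsers):

```html
<style>
  /* Give visited links a color the script can read back */
  #sniff a:link    { color: rgb(0, 0, 255); }
  #sniff a:visited { color: rgb(255, 0, 0); }
</style>
<div id="sniff" style="display: none">
  <a href="http://www.example.com/">candidate site 1</a>
  <a href="http://www.example.org/">candidate site 2</a>
</div>
<script>
  // A red computed color means the URL is in the browser's history
  var links = document.querySelectorAll('#sniff a');
  for (var i = 0; i < links.length; i++) {
    if (getComputedStyle(links[i]).color === 'rgb(255, 0, 0)') {
      console.log('visited: ' + links[i].href);
    }
  }
</script>
```

The page can only test URLs it already knows about; it compares a candidate list against the history rather than reading the history itself, which is exactly the limitation described above.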