Google custom search engine result counts changing as results are paged - google-custom-search

I have implemented a Google Custom Search Engine on a website. When I search for a word, the response reports something like totalResults = 168, and I am retrieving the results 10 per page.
Up to the first 60 results it works fine, but on the 7th page the totalResults value in the Google API response changes to 67.
I am using the free tier of the Google Custom Search API. I don't know whether it is working correctly or not. Please suggest a solution if something is wrong, or correct me if I am mistaken.

This is because of Google Search behaviour, which is well explained in the following links.
https://productforums.google.com/forum/m/#!topic/customsearch/D4--2TfYk9A
https://productforums.google.com/forum/m/#!topic/customsearch/SjlrUMa-X-k
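
If you want to observe the behaviour yourself, here is a minimal Python sketch (using the requests library; the API key, engine ID and query below are placeholders, not values from the question) that pages through a query ten results at a time and prints the totalResults estimate reported on each page:

import requests

API_KEY = "YOUR_API_KEY"      # placeholder
CX = "YOUR_ENGINE_ID"         # placeholder

def page_through(query, pages=10):
    for page in range(pages):
        start = page * 10 + 1   # the start parameter is 1-based
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"key": API_KEY, "cx": CX, "q": query, "start": start, "num": 10},
        )
        data = resp.json()
        total = data.get("searchInformation", {}).get("totalResults")
        items = data.get("items", [])
        print(f"start={start}: totalResults={total}, items on this page={len(items)}")
        if not items:
            break   # no more results, regardless of the earlier estimate

page_through("some query")

The count is only an estimate, and it typically shrinks as you page towards the real end of the result set, which is exactly what the question describes.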

Related

How do I search this? Is it possible to access more than 100 JSON API search results if I pay for it?

How do I do this kind of search? I want to be able to:
1. create a search engine
2. programmatically search it through an API (Python, or another language)
3. paginate through the results (all of them, if I choose)
4. store the URLs or results that I want.
Is this even possible with Google Custom Search Engine?
I enabled billing, my credit card is up to date with Google, and I can do steps 1 to 3 above.
On a search I will get back, for example, 4,000 results, but I can only access 10 at a time with the API, no more, and once I reach 100 results I am cut off.
I want to be able to process 1,000 results if I wish.
Before you reply: do you personally have working code that goes beyond the 100-result limit?
If so, I would be very interested in talking to you and learning how you did it.
I am using Python at the moment, but it could be any language.
--
I tried using &start=100, 200, and so on to paginate through, but this does not work.
I tried getting 100 results in a Python script, ending the program, then calling it again with start=100 (after the first set had returned), and nothing happened.
I want to be able to use the Google Custom Search API and pay Google a monthly subscription, but I have not found that this is possible.
For any given search I want to decide how many results to process; it could be 1K, it could be 20K. I simply need access to the full result set, but I have not found a way to do this.
The API allows only a max result depth of 100. See https://developers.google.com/custom-search/v1/cse/list
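
As an illustration of that ceiling, here is a rough Python sketch (assuming the google-api-python-client package and placeholder credentials) that collects result links page by page and stops once the 100-result window is exhausted:

from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"   # placeholder
CX = "YOUR_ENGINE_ID"      # placeholder

def collect_links(query, wanted=1000):
    service = build("customsearch", "v1", developerKey=API_KEY)
    links = []
    start = 1
    # start + num - 1 must stay within the 100-result window, so the last valid start is 91
    while len(links) < wanted and start <= 91:
        res = service.cse().list(q=query, cx=CX, start=start, num=10).execute()
        items = res.get("items", [])
        if not items:
            break
        links.extend(item["link"] for item in items)
        start += 10
    return links   # never more than roughly 100 links, however large `wanted` is

Requests whose start value reaches past the 100-result window are rejected by the API, so no amount of paging gets beyond roughly 100 items.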

Custom Search API not returning all results

I am a long-time customer of the Custom Search API.
The problem, as described in the CSE documentation, is that the API is intended to search your own site and not the web in general. It misses results, for example from books.google.com, and results in other languages, etc.
Is there another (paid) API that returns all results?
Sample search string: "الاستخدامات التالية من التطبيق"
(The above search gets 1 result in Google Search but 0 results in the Custom Search I am paying for.)
Thanks.
I didn't want to switch to Bing, but in the end I was getting better results there.
For anyone else having this issue:
https://learn.microsoft.com/en-us/rest/api/cognitiveservices/bing-web-api-v7-reference
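
For what it's worth, a minimal Python sketch of that switch, assuming a Bing Web Search v7 subscription key (the key is a placeholder):

import requests

SUBSCRIPTION_KEY = "YOUR_BING_KEY"   # placeholder

def bing_search(query, count=50, offset=0):
    resp = requests.get(
        "https://api.bing.microsoft.com/v7.0/search",
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        params={"q": query, "count": count, "offset": offset},
    )
    resp.raise_for_status()
    # return the web page URLs from the response
    return [page["url"] for page in resp.json().get("webPages", {}).get("value", [])]

print(bing_search("الاستخدامات التالية من التطبيق"))

Paging works through the offset parameter; count goes up to 50 results per request.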

Google Custom Search API: Using it as a scraper?

Is this API simply for searching your own website, or can any standard Google search (even one using advanced search features) be submitted to it? I understand there is a limit of 100 queries per day. I am just curious whether it can be invoked from, say, your own machine, since the code samples and introduction indicate its intended use is for displaying results on your website. I want to search outside of a given domain and scrape standard Google results for any given search. This will not be an AJAX call.
My current understanding:
You're limited to 100 queries/day only if you don't pay.
You do have to specify domains, but some TLDs are fine (e.g. .uk).
There's a limit of 100 search results for any given search query (ten pages of up to ten results each).
It can be invoked from your own machine.

How to retrieve all tweets from a user, and not just the first 3,200, as Twitter limits its timeline and API to that number

With https://dev.twitter.com/docs/api/1/get/statuses/user_timeline I can get the 3,200 most recent tweets. However, certain sites like http://www.mytweet16.com/ seem to bypass the limit, and my browsing through the API documentation could not find anything.
How do they do it, or is there another API that doesn't have the limit?
You can use the Twitter search page to bypass the 3,200 limit. However, you have to scroll down many times on the search results page. For example, I searched for tweets from #beyinsiz_adam. This is the link to the search results:
https://twitter.com/search?q=from%3Abeyinsiz_adam&src=typd&f=realtime
Now, in order to scroll down many times, you can use the following JavaScript code.
// Scroll to the bottom of the page once a second so Twitter keeps loading older tweets
var myVar = setInterval(function() { myTimer(); }, 1000);
function myTimer() {
    window.scrollTo(0, document.body.scrollHeight);
}
Just run it in the Firebug console and wait a while for all the tweets to load.
The only way to see more is to start saving them before the user's tweet count hits 3,200. Services which show more than 3,200 tweets have saved them in their own databases. There's currently no way to get more than that through any Twitter API.
http://www.quora.com/Is-there-a-way-to-get-more-than-3200-tweets-from-a-twitter-user-using-Twitters-API-or-scraping
https://dev.twitter.com/discussions/276
Note from that second link: "…the 3,200 limit is for browsing the timeline only. Tweets can always be requested by their ID using the GET statuses/show/:id method."
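To illustrate that note, here is a small Python sketch, assuming you have a v1.1 app-only bearer token (a placeholder below); the v1.1 endpoint itself may no longer be available on current API tiers:

import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"   # placeholder app-only token

def get_tweet_by_id(tweet_id):
    resp = requests.get(
        "https://api.twitter.com/1.1/statuses/show.json",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"id": tweet_id},
    )
    resp.raise_for_status()
    return resp.json()   # the full tweet object, even if it is older than the user's last 3,200 tweets

So if you already know the IDs (for example from an archive), you can still fetch tweets past the timeline cutoff one by one.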
I've been in this (Twitter) industry for a long time and have witnessed lots of changes in the Twitter API and documentation. I would like to clarify one thing: there is no way to surpass the 3,200-tweet limit. Twitter doesn't provide this data even in its new premium API.
The only way someone can surpass this limit is by saving an individual Twitter user's tweets as they are posted.
There are tools available which claim to have a wide database and provide more than 3,200 tweets. A few that I know of are followersanalysis.com and keyhole.co.
You can use a tool I wrote that bypasses the limit.
It saves the Tweets in a JSON format.
https://github.com/pauldotknopf/twitter-dump
You can use the Python library snscrape to do it. Or you can use the ExportData tool to get all tweets for the user, which returns already-preprocessed CSV and spreadsheet files. The first option is free, but provides less information and requires more manual work.
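As a rough illustration of the snscrape option (its Python module is unofficial and undocumented, so attribute names can vary between versions):

import snscrape.modules.twitter as sntwitter

def all_tweets(username, limit=None):
    tweets = []
    # get_items() yields tweets newest-first and keeps going past the 3,200 API cutoff
    for i, tweet in enumerate(sntwitter.TwitterUserScraper(username).get_items()):
        if limit is not None and i >= limit:
            break
        text = getattr(tweet, "rawContent", None) or getattr(tweet, "content", None)
        tweets.append({"date": tweet.date, "url": tweet.url, "text": text})
    return tweets

print(len(all_tweets("some_username", limit=5000)))

Because it scrapes the web interface rather than the API, it is not subject to the 3,200 limit, but it can break whenever Twitter changes its front end.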

Get a User's Overall Retweet and Mention Counts Using the Twitter API

We are working on some analytics using the number of times a user is retweeted or mentioned... I can't seem to find a way to get these numbers using the APIs. Does anyone have any ideas?
https://api.twitter.com/1/statuses/user_timeline.json?include_entities=true&include_rts=true&screen_name={screen_name}&count={count}
It's important to include include_entities=true in the request. This will give you an expanded response, including retweet and mention counts.
Get Status / User Timeline
Twitter API Console
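As a rough sketch of how you might tally those numbers in Python, using the tweepy library as a convenience wrapper around the user_timeline endpoint above (credentials are placeholders, the v1 URL in the answer has since been superseded, and the mention figure here counts the user_mentions entities appearing in the fetched tweets):

import tweepy

# placeholder credentials
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

def timeline_counts(screen_name, count=200):
    statuses = api.user_timeline(screen_name=screen_name, count=count, include_rts=True)
    retweets = sum(s.retweet_count for s in statuses)                               # times these tweets were retweeted
    mentions = sum(len(s.entities.get("user_mentions", [])) for s in statuses)      # mentions inside these tweets
    return retweets, mentions

print(timeline_counts("some_screen_name"))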
Update:
To get tweets from the last 90 days, there is a Node.js library you can use called Snapbird:
https://github.com/remy/snapbird
... and here is another resource covering the same topic.
http://blog.tweetsmarter.com/twitter-search/10-ways-and-20-features-for-searching-old-tweets/