Google Search API for a desktop application. Which API to use to make the maximum requests per day?

I'm building an application that needs to query Google search very often, but I'm having trouble choosing which API to use. There are so many of them: AJAX, REST, Web, SOAP, Custom, and maybe others. Some of them are deprecated by now; from what I understand, only the AJAX and Custom Search APIs are not. The Custom Search API has a 100-requests-per-day limit, which is a very small amount. I couldn't find any published limits for the AJAX API, but it looks like I can make only around 20 requests per hour, which is also not great.
So which API should I use in a desktop application to get as many requests as possible? And a second question: what else can I do to increase the limit? Set particular HTTP headers, use an API key, or something else?
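For reference, a minimal sketch of querying the Custom Search JSON API from Python, assuming you have created an API key and a search engine ID (cx) in the Google API Console; both values below are placeholders. The key identifies your project for quota accounting, but it does not by itself raise the 100-requests-per-day free limit.

```python
import requests

# Placeholder credentials from the Google API Console -- replace with your own.
API_KEY = "YOUR_API_KEY"
ENGINE_ID = "YOUR_SEARCH_ENGINE_ID"  # the "cx" parameter of a Custom Search engine

def google_search(query, start=1):
    """Run one query against the Custom Search JSON API and return the parsed JSON."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": ENGINE_ID, "q": query, "start": start},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

for item in google_search("stackoverflow").get("items", []):
    print(item["title"], item["link"])
```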

Related

How to add pagination in API

I want to know how to add pagination to API requests for more efficient data retrieval, specifically with the YouTube API.
I haven't tried anything so far, as the concept is new to me.
What I personally do is usually one of two things (see the sketch below):
(my preferred way) I create more than one API token, and every X requests I dynamically switch which token executes the request; this avoids throttling.
When sending a large number of requests, pause dynamically every X requests.
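A rough sketch of both ideas combined, in Python; the endpoint and tokens are made up for illustration, and whether rotating tokens is permitted depends on the provider's terms.

```python
import itertools
import time
import requests

# Hypothetical endpoint and tokens -- substitute your own.
ENDPOINT = "https://api.example.com/v1/items"
TOKENS = ["token-a", "token-b", "token-c"]

token_cycle = itertools.cycle(TOKENS)

def fetch_page(page):
    """Fetch one page, switching to the next token on every request."""
    resp = requests.get(
        ENDPOINT,
        params={"page": page},
        headers={"Authorization": f"Bearer {next(token_cycle)}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

for page in range(1, 101):
    fetch_page(page)
    if page % 10 == 0:  # pause every X requests to stay under the rate limit
        time.sleep(1.0)
```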

Restrict API access to my Frontend app using daily random API key

I'm creating an API that will be accessed solely by my Vue.js application for displaying data like movie times, video-on-demand links, etc. There is no private or sensitive info, but I would like to avoid other people and bots using my resources to get free data. I know restricting an API to my own single-page frontend application is almost impossible, as someone can always either:
Get the API key from the page source
Spoof the referrer header to the one that my API is restricted to
So I was thinking of "attenuating the damage", i.e. reducing the number of bots using my API, by having the backend server generate a new API key every day, at noon for example. When the PHP loads the Vue.js application, it inserts that key into the Vue.js code, which uses it to query my Python API. If Vue gets an "incorrect API key" error (the case where the page was loaded at 11:59 and a request was sent at 12:01), the Vue.js app would refresh the page to pick up the new key.
This way, even if someone took the API key from the source, it would expire in less than 24 hours anyway. Of course someone could scrape the page to get the API key every day and still use the API, but I feel this would stop a lot of bots and spammers.
Has anyone ever tried anything like this? Does it sound like a viable solution or there is something better to be done that I couldn't find on StackOverflow?
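For what it's worth, a minimal sketch of the daily-key idea in Python (rotating at midnight rather than noon for brevity; SECRET is a placeholder that stays on the server). Deriving the key from the date means the page-rendering code and the API can agree on it without storing or syncing anything:

```python
import datetime
import hashlib
import hmac

SECRET = b"change-me"  # placeholder; kept server-side only

def daily_key(day=None):
    """Derive the API key for a given day from the server secret."""
    day = day or datetime.date.today()
    return hmac.new(SECRET, day.isoformat().encode(), hashlib.sha256).hexdigest()

def is_valid(key):
    """Check a submitted key; accepting yesterday's key as well covers
    the 11:59-page-load / 12:01-request case without a page refresh."""
    today = datetime.date.today()
    return hmac.compare_digest(key, daily_key(today)) or hmac.compare_digest(
        key, daily_key(today - datetime.timedelta(days=1))
    )
```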
How about server-side rendering from the same location (same server, private DNS) as the API server?
You can call fetch(api) within the server function itself, so the key never leaves the backend.
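Since the question mentions a Python API, here is a minimal sketch of that suggestion using Flask; the internal endpoint and template are made up. The data is fetched server-to-server while rendering the page, so nothing secret is shipped to the client.

```python
from flask import Flask, render_template_string
import requests

app = Flask(__name__)

@app.route("/movies")
def movies():
    # Server-to-server call on a private address -- no API key reaches the browser.
    data = requests.get("http://127.0.0.1:5001/api/showtimes", timeout=5).json()
    return render_template_string(
        "<ul>{% for m in movies %}<li>{{ m.title }}</li>{% endfor %}</ul>",
        movies=data,
    )
```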

Reverse Geocoding with Worklight

I'm currently working on a Worklight project that deals with location-based services. I want to be able to get the ZIP code of a user's current location, specifically on the iOS platform. I researched online and there are many ways to approach this. I currently have it implemented with a custom Cordova plugin that uses the native location manager and retrieves the ZIP code through reverse geocoding. This approach feels like the long way around. I noticed that Google provides an API for reverse geocoding by just supplying the latitude and longitude; however, there is a limit on how many calls you can make:
Users of the free API:
2,500 requests per 24 hour period.
10 requests per second.
Maps for Business customers:
100,000 requests per 24 hour period.
10 requests per second.
This app needs to be free of restrictions on how many times it can look up the ZIP code for a location.
Does Worklight have a simpler or better way of getting the ZIP code for the user's location? (I've checked the Worklight API reference but didn't see anything about retrieving a user's ZIP code.)
Worklight provides a way to implement this with adapters, but not in the API itself. You could, however, use an adapter as something like a local cache of the ZIP codes you already know.
To save money with APIs that are usually billed per call, you would want some cache, backed by a database (most likely CouchDB or MongoDB), holding what you have already looked up.
That gives you a mobile (app-side) solution plus a server-side solution, and Worklight can help you put the two together; a rough sketch follows.
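As an illustration of the caching idea (in Python rather than a Worklight adapter, with an in-memory dict standing in for CouchDB/MongoDB), the real Google Geocoding endpoint is only hit on a cache miss:

```python
import requests

_zip_cache = {}  # (lat, lng) bucket -> ZIP; a dict standing in for CouchDB/MongoDB

def zip_for(lat, lng, api_key):
    """Reverse-geocode a coordinate, consulting the cache first."""
    bucket = (round(lat, 3), round(lng, 3))  # nearby points usually share a ZIP
    if bucket in _zip_cache:
        return _zip_cache[bucket]
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"latlng": f"{lat},{lng}", "key": api_key},
        timeout=10,
    ).json()
    for result in resp.get("results", []):
        for comp in result["address_components"]:
            if "postal_code" in comp["types"]:
                _zip_cache[bucket] = comp["long_name"]
                return comp["long_name"]
    return None
```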

Limits of the Wikipedia API

I read that Wikipedia's API is called MediaWiki. My question is regarding this API: does it have a maximum number of calls per day/hour/minute? I can't seem to find one.
See the Wikimedia REST API "Terms and Conditions" for the latest rate limits (200 requests per second as of 2022). What do you plan to do with the Wikipedia API?
They publish some guidance in API:Etiquette and API:FAQ:
There is no hard and fast limit on read requests, but we ask that you be considerate and try not to take a site down. Most sysadmins reserve the right to unceremoniously block you if you do endanger the stability of their site.
If you make your requests in series rather than in parallel (i.e. wait for one request to finish before sending a new request, such that you're never making more than one request at the same time), then you should definitely be fine. Also try to combine things into one request where you can (e.g. use multiple titles in a titles parameter instead of making a new request for each title).
The API:FAQ states that you can retrieve 50 pages per API request.
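A minimal sketch of such a batched call in Python: titles are combined with "|" in a single request, a descriptive User-Agent is sent per the etiquette guidelines (the contact address is a placeholder), and the error key is checked so the script can stop gracefully.

```python
import requests

def fetch_pages(titles):
    """Fetch info for up to 50 pages in one MediaWiki API request."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "titles": "|".join(titles),  # one request for many titles
            "prop": "info",
            "format": "json",
        },
        # Etiquette: identify your script so sysadmins can reach you.
        headers={"User-Agent": "my-script/0.1 (contact@example.com)"},
        timeout=10,
    ).json()
    if "error" in resp:  # stop gracefully if the API reports a problem
        raise RuntimeError(resp["error"])
    return resp["query"]["pages"]

pages = fetch_pages(["Python (programming language)", "MediaWiki"])
```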
You can use the Data Dumps as well if you need content offline (it will likely be a little outdated).
For a graceful termination of your script in case you hit any of the limits, you can handle the errors and warnings that API calls return via their status messages.
If there is no need for a "live sample", it would be better to use a data dump.

How often to run the cron to mine the Twitter public timeline?

How often do the web apps that depend on Twitter's public timeline collect their data? There must be hundreds of thousands of messages every minute, correct? How do they manage to collect all the tweets without missing any of them?
Some services (FriendFeed is a good example) are granted access to the Twitter Streaming API, a.k.a. the 'firehose'. It requires approval and a written agreement.
The public_timeline is not a great place to mine data anymore. Twitter now uses its Streaming APIs to output tweets like crazy. The closest comparison to the public_timeline would be the spritzer method, but that only includes a small sample. If you need to gather more tweets than the spritzer method provides, you'll need to sign a written agreement to get access to the other Streaming API (HTTP push) feeds, such as the firehose feed, which returns all public tweets.
The Twitter API is rate-limited, as has been said. The public timeline (twitter.com/public_timeline) is not rate-limited in the same sense, but it is only updated every 5 seconds, so most tweets never appear there.
There are, I think, three or four companies that have access to the firehose, as Twitter's full feed is called. FriendFeed is one of them; another is Gnip, which resells the feed to other companies. This is probably the only feasible way to get the full Twitter feed.
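The old REST endpoints are long gone, but the general cron-polling pattern for not missing items between runs is to remember the last ID you saw and ask only for newer ones. A generic sketch, with a placeholder endpoint:

```python
import time
import requests

ENDPOINT = "https://api.example.com/timeline"  # placeholder; the old API no longer exists

def process(item):
    print(item["id"], item.get("text", ""))

since_id = None
while True:
    params = {"since_id": since_id} if since_id else {}
    items = requests.get(ENDPOINT, params=params, timeout=10).json()
    for item in reversed(items):  # handle oldest first
        process(item)
    if items:
        since_id = max(item["id"] for item in items)
    time.sleep(5)  # the public timeline only updated every ~5 seconds anyway
```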
If 100 requests per hour isn't enough, go here and get your account whitelisted (allows 20,000 requests per hour):
http://twitter.com/help/request_whitelisting
@ceejayoz: it's not 100 GET requests, it's 100 requests in general, excluding a few calls like verify_credentials and rate_limit_status.