Streaming API number of terms limit - twitter-streaming-api

I am using the Twitter Streaming API to track terms. I know that there is a limit that each term should be no more than 60 characters in length. What I want to know is: is there any limit on the number of terms/keywords for a single stream?
Note: I am implementing this in a Ruby on Rails application using the twitterstream gem.

It seems so; from the documentation:
The default access level allows up to 400 track keywords, 5,000 follow userids and 25 0.1-360 degree location boxes. If you need elevated access to the Streaming API, you should explore our partner providers of Twitter data here.
Source: https://dev.twitter.com/docs/streaming-api/methods
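
To stay inside that default access level, one option is to validate the keyword list before handing it to the streaming client. A minimal Ruby sketch (the 400-keyword and 60-character figures come from the quote above; how the list is then passed to the twitterstream gem is left out):

    # Minimal sketch: enforce the default Streaming API limits on a keyword
    # list before opening the stream (gem-specific call not shown).
    MAX_TRACK_KEYWORDS = 400   # default access level, per the docs quoted above
    MAX_KEYWORD_LENGTH = 60    # characters per track keyword

    def validate_track_keywords(keywords)
      too_long = keywords.select { |k| k.length > MAX_KEYWORD_LENGTH }
      raise ArgumentError, "Keywords over #{MAX_KEYWORD_LENGTH} chars: #{too_long.inspect}" unless too_long.empty?

      if keywords.size > MAX_TRACK_KEYWORDS
        warn "Only the first #{MAX_TRACK_KEYWORDS} of #{keywords.size} keywords will be tracked"
        keywords = keywords.first(MAX_TRACK_KEYWORDS)
      end
      keywords
    end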

Related

Twitter Streaming API limits?

I understand the Twitter REST API has strict request limits (a few hundred requests per 15 minutes), and that the Streaming API is sometimes better for retrieving live data.
My question is, what exactly are the Streaming API limits? Twitter references a percentage in their docs, but not a specific amount. Any insight is greatly appreciated.
What I'm trying to do:
A simple page for me to view the latest tweet (and the date/time it was posted) from ~1000 Twitter users. It seems I would rapidly hit the limit using the REST API, so would the Streaming API be required for this application?
You should be fine using the Streaming API, unless those ~1000 users combined are tweeting more than (very) roughly 60 tweets per second at any moment.
Using the Streaming API endpoint statuses/filter with the follow parameter, you are allowed up to 5,000 users. There is no rate limit except when the stream returns more than about 1% of all tweets being tweeted at that moment. (60 tweets per second is roughly 1% of the average rate of tweets, which is always fluctuating, so don't rely on that exact number.)
If your stream does go above the 1% threshold, you can detect this. (See the LIMIT notice.) Then you would use the REST API to find missed tweets.
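
As a rough illustration of handling those notices, here is a minimal Ruby sketch that classifies newline-delimited JSON messages coming off a statuses/filter connection; the connection and authentication plumbing are omitted and assumed to be handled by your streaming client:

    require "json"

    # Minimal sketch: distinguish tweets from LIMIT notices on a filter stream.
    # `line` is one newline-delimited JSON message from the streaming connection.
    def handle_stream_message(line)
      message = JSON.parse(line)

      if message["limit"]
        # LIMIT notice: the stream matched more than ~1% of all tweets, so some
        # were dropped; "track" is a running count of undelivered tweets.
        warn "Stream limited: #{message['limit']['track']} tweets not delivered"
      elsif message["text"] && message["user"]
        puts "@#{message['user']['screen_name']}: #{message['text']}"
      end
    end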
Twitter simply will not allow multiple streams from one registered app/account; opening a second one will result in the older connection being closed.
Too many connection attempts are not allowed either, and will result in the client being blocked.
Reference docs: Public Streaming API (outdated)

How many apps can I build with one Twitter account?

I'm using the Twitter API to crawl some data.
I'm just wondering how many apps I can build with one account to let me bypass the rate limit?
Thanks in advance!
Deliberately creating multiple applications for the purpose of bypassing the rate limit is against the Twitter API Terms of Service and will get your applications banned.
If you need more data than the REST API rate limits allow, then you may need to use the Streaming API.
The rate limits exist to ensure the best quality of service for as many people as possible. Don't abuse them.

How to get unblocked after exceeding the Google Geocode API usage limit?

I have searched for answers on this for two days now with very little luck.
I'm developing a Drupal 7 site which has a Geofield field being autopopulated from an address field using the Google Geocoder API, but as of a couple of days ago this stopped working:
Exception: Google API returned bad status.\nStatus: OVER_QUERY_LIMIT in geocoder_google() (line 52 of /home/.../modules/geocoder/plugins/geocoder_handler/google.inc).
I can remove the proximity search filter that is sending too many requests to the Google API, but I can't make progress because I run into the above error every time I try to add a new record to the database (which does just one lookup to get a geocode from an address field, but fails). Is there any way to unblock my site from Google's API or reset my usage? I've added an API key, but to no avail. This was all working fine until very recently, which I guess is when I unknowingly exceeded the usage limit.
I have limited API experience and am a Drupal/PHP beginner, so please be gentle! Happy to provide more info, code, error messages etc. if needed. The relevant Drupal 7 modules being used are OpenLayers, OpenLayers Proximity, Geofield, GeoPHP and Geocoder. Thanks for any help anyone can offer.
From Google Geocode Documentation:
Use of the Google Geocoding API is subject to a query limit of 2,500 geolocation requests per day. (Users of the Google Maps API for Business may perform up to 100,000 requests per day.) This limit is enforced to prevent abuse and/or repurposing of the Geocoding API, and this limit may be changed in the future without notice. Additionally, we enforce a request rate limit to prevent abuse of the service. If you exceed the 24-hour limit or otherwise abuse the service, the Geocoding API may stop working for you temporarily. If you continue to exceed this limit, your access to the Geocoding API may be blocked.
So, I guess you have to wait 24 hours, or upgrade to the business version.
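
If the block is coming from the per-second rate limit rather than the daily quota, backing off and retrying is usually enough. A minimal sketch of that pattern (shown in Ruby rather than Drupal/PHP, against the JSON Geocoding endpoint; newer usage also requires an API key, which is omitted here):

    require "net/http"
    require "json"
    require "uri"

    # Minimal sketch: retry a geocode request when Google returns
    # OVER_QUERY_LIMIT, waiting longer between attempts. This only helps with
    # the per-second rate limit; once the daily quota is exhausted, you wait.
    def geocode(address, max_attempts: 5)
      uri = URI("https://maps.googleapis.com/maps/api/geocode/json")
      uri.query = URI.encode_www_form(address: address)

      max_attempts.times do |attempt|
        body = JSON.parse(Net::HTTP.get(uri))
        case body["status"]
        when "OK"
          return body["results"].first["geometry"]["location"]  # { "lat" => ..., "lng" => ... }
        when "OVER_QUERY_LIMIT"
          sleep(2**attempt)  # exponential back-off before the next attempt
        else
          raise "Geocoding failed: #{body['status']}"
        end
      end
      nil
    end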

Google API Request Limit

Does anyone know where I can find Google API request limits for their different services?
When simulating 500+ concurrent users, it seems to fail silently fairly often (maybe 1 in 10 loads).
Any ideas?
The information is in their support resources. I am not aware of a central place, but it's all there. Searching the docs for "request limit" should usually do the trick.
The Geocoding API's limits, for example, can be found here.
Google Maps API Web Services and Google Static Maps API limits were cut effective a few days ago. Starting October 1st, 2011, commercial web sites and apps using the Google Maps API for free receive:
a maximum of 2,500 calls/day, if modified using the Styled Maps feature
a maximum of 25,000 calls/day in total
Fusion tables are preferable to the Google Maps API alone, particularly with respect to rate limits:
Applications using the Google Fusion Tables API can send a maximum of 5 requests per second to the Google Fusion Tables server.
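
One way to respect a per-second cap like that on the client side is to space requests out yourself. A minimal Ruby sketch of that throttling idea (the 5 requests/second figure is taken from the quote above; the actual request call is hypothetical):

    # Minimal sketch: client-side throttle that keeps calls under a
    # requests-per-second cap by sleeping between them.
    class Throttle
      def initialize(per_second)
        @min_interval = 1.0 / per_second
        @last = nil
      end

      def wait
        now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
        sleep(@min_interval - (now - @last)) if @last && (now - @last) < @min_interval
        @last = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      end
    end

    throttle = Throttle.new(5)
    10.times do
      throttle.wait
      # send_fusion_tables_request  (hypothetical request call goes here)
    end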
I think they removed the limit recently: I can't even find a mention of it in documentation pages where I know for sure it used to be mentioned, and I read about the limit removal somewhere this summer.
Even their new EULA states that their service is not limited, but they remain free to limit it however they want at any moment.
500 concurrent users doesn't seem to be that much, though, even if limitations were in place; are you sure it's Google that's failing?

How often to run the cron to mine the Twitter public timeline?

For webapps that depend on Twitter's public timeline, how often do they collect the data? There must be hundreds of thousands of messages every minute, correct? How do they manage to collect all the tweets without missing any of them?
Some services (Friendfeed is a good example) are granted access to the Twitter Streaming API, aka the 'firehose'. It requires approval and a written agreement.
The publictimeline is not a great place to mine data anymore. Twitter now uses its Streaming APIs to output tweets like crazy. The closest comparison to the publictimeline would be the spritzer method, but that only includes a small sample. If you need to gather more tweets than the spritzer method provides (or all of them), you'll need to sign a written agreement to get access to other Streaming API (HTTP push) feeds, such as the firehose feed, which returns all public tweets.
The Twitter API is rate limited, as has been said. The public timeline (twitter.com/public_timeline) is not rate limited in the same sense, but it is only updated every 5 seconds, so most tweets never appear there.
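
To make the sampling problem concrete, here is a minimal Ruby sketch of the cron-style polling approach the question asks about, against the long-retired public_timeline endpoint; it de-duplicates by tweet id (kept in memory here, though a real cron job would have to persist the ids between runs):

    require "net/http"
    require "json"
    require "set"
    require "uri"

    # Minimal sketch: one polling pass over the retired public_timeline
    # endpoint, skipping tweets already seen. Even polling every ~5 seconds
    # only captures a tiny sample of all public tweets.
    seen_ids = Set.new

    uri = URI("https://api.twitter.com/1/statuses/public_timeline.json")
    tweets = JSON.parse(Net::HTTP.get(uri))
    tweets.each do |tweet|
      next if seen_ids.include?(tweet["id"])
      seen_ids << tweet["id"]
      puts "#{tweet['created_at']} @#{tweet['user']['screen_name']}: #{tweet['text']}"
    end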
There are I think three or four companies that have access to the firehose, as Twitter's full feed is called. FriendFeed is one of these. Another is Gnip. Gnip resells the feed to other companies. This is probably the only feasible way to get a full twitter feed.
Go here:
http://twitter.com/help/request_whitelisting
and get your account whitelisted (allows 20,000 requests per hour) if 100 requests per hour isn't enough.
@ceejayoz: it's not 100 GET requests, it's 100 requests in general, excluding a few calls like verify_credentials and rate_limit_status.
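
For reference, the rate_limit_status call mentioned above could be used to see how many requests are left in the current window. A minimal Ruby sketch against the old (since retired) v1 JSON endpoint; the field names are as the old API reported them, and authentication (which would report the per-account limit) is omitted:

    require "net/http"
    require "json"
    require "uri"

    # Minimal sketch: report remaining requests in the current window using
    # the retired account/rate_limit_status endpoint.
    uri = URI("https://api.twitter.com/1/account/rate_limit_status.json")
    status = JSON.parse(Net::HTTP.get(uri))

    puts "#{status['remaining_hits']} of #{status['hourly_limit']} requests remaining"
    puts "Window resets at #{status['reset_time']}"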