I would like to know: do HERE servers automatically reject IPs associated with Tor exit nodes?
I ask because I've tried many times to make a request to the API, and it never answered.
The HERE FAQs don't say anything about blocking certain IPs (e.g. Tor exit nodes), but they do tell you a bit about the limits they enforce, such as 250,000 transactions per month.
That's probably a good starting point for checking the limits of the HERE API.
I have been looking into various APIs that can provide me the weather data I need in JSON format. Many of these APIs impose limits: to make more requests per minute, you have to pay more per month.
However, many of them also offer free accounts that give you limited access.
So what I was thinking is, wouldn't it be possible for a developer to just make lots of different developer accounts with an API provider and then just make lots of different API keys?
That way, they wouldn't have to pay anything, as they could stick with the free accounts. Whenever one of the API keys reached its daily request limit, the developer could just put a switch statement in their code that makes the software use a different API key.
I see no reason why this wouldn't work from a technical point of view... but, is such a thing allowed?
Thanks, Dan.
This would technically be possible, and it happens.
It is also probably against the service's terms, a good reason for the service to ban all your sock puppet accounts, and perhaps even illegal.
If the service that offers the API has spent time and money implementing a per-developer limit for their API, they have almost certainly enforced that in their terms of service, and you would be wise to respect those.
(relevant xkcd)
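For what it's worth, the rotation mechanism the question describes is trivial to sketch (hypothetical keys; and as the answer above says, actually running this would very likely violate the provider's terms of service):

```python
import itertools

# Hypothetical pool of free-tier keys -- the practice the question asks
# about, which most providers' terms of service forbid.
API_KEYS = ["key-a", "key-b", "key-c"]

class KeyRotator:
    """Cycle to the next key whenever the current one is exhausted."""

    def __init__(self, keys):
        self._cycle = itertools.cycle(keys)
        self.current = next(self._cycle)

    def rotate(self):
        """Advance to the next key (e.g. after an HTTP 429 response)."""
        self.current = next(self._cycle)
        return self.current

rotator = KeyRotator(API_KEYS)
# On a "limit exceeded" response, the caller would invoke rotator.rotate()
# and retry the request with rotator.current.
```

The ease of writing this is exactly why providers enforce the rule contractually rather than technically.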
I know this is a difficult question but here it is, in context:
Our company has a request to build a WordPress website for a certain client. The caveat is that, on one day per year, for a period of about 20 minutes, 5,000 - 10,000 people will attempt to access the home page of this website. Their purpose: Only to acquire an outbound link to another site.
My concern is that, no matter what kind of hosting we provide, the server may start rejecting connections once a certain number of simultaneous connections is reached.
Any ideas on this?
This does not depend on WordPress. WordPress is basically software to render web pages: it helps you quickly modify the content of a page. Other software, such as Apache, accepts the connections and hands the requests off to, for instance, WordPress.
Apache can be configured to accept more connections; I think the default is about 200. Whether that is a problem really depends on the workload. If the purpose is only to hand out another URL, connections will be terminated quickly, so that's not really an issue. If, on the other hand, you generate an entire page using PHP and MySQL, it can take some time before a client is served, and in that case 200 connections may not be sufficient.
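Raising the connection ceiling is a matter of MPM configuration; for example, with Apache's event MPM (illustrative values, not a tuned recommendation):

```apache
# /etc/apache2/mods-available/mpm_event.conf  (Debian/Ubuntu layout)
<IfModule mpm_event_module>
    ServerLimit          16
    ThreadsPerChild      25
    # MaxRequestWorkers caps the number of simultaneous connections being
    # served; 16 processes x 25 threads = 400 workers here.
    MaxRequestWorkers    400
</IfModule>
```

For a page that only hands out a link, each worker is freed almost immediately, so even modest values go a long way.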
As B-Lat points out, you can use cloud computing platforms like Google App Engine or Microsoft Azure that provide a lot of server capacity but bill clients only for the resources they actually consume. In other words, you can accept thousands of connections at once, yet you don't pay extra on the other days, when clients visit the website less often.
How can I get maps for addresses without request limits? Google provides only 2,500 requests per day. First of all, I want to use free services. Thank you.
You left a ton of info out... What the heck is "maps for addresses"? Do you mean map tiles? Or are you talking about geocoding, like getting addresses for maps?
Is it a website making the calls, or mobile? Where are you executing the code from?
If you are talking about reverse geocoding (getting an address from a GPS coordinate), then there are tricks you can use to get around those limits. If the limit is based on a key, then it's a 2,500 limit for that key. However, there are APIs that are limited by calling IP instead (Google's is one). If you make the client perform the call, then unless a single client makes 2,500 calls, you're good to go.
You will notice in the documentation that the geocoding call doesn't require an API key, so the usage limit is going to be based on the calling IP:
https://developers.google.com/maps/documentation/geocoding/#GeocodingRequests
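A minimal sketch of building such a keyless geocoding request on the client side, using only the standard library (the endpoint is the JSON one from the documentation above):

```python
from urllib.parse import urlencode

GEOCODE_ENDPOINT = "https://maps.googleapis.com/maps/api/geocode/json"

def build_geocode_url(address):
    """Build a geocoding request URL.

    Note there is no `key` parameter: when the request carries no key,
    the quota is tracked per calling IP, which is the trick described
    above -- each client spends its own allowance.
    """
    return GEOCODE_ENDPOINT + "?" + urlencode({"address": address})

url = build_geocode_url("1600 Amphitheatre Parkway, Mountain View, CA")
# Fetch `url` from the client (e.g. browser-side) so the request
# originates from the client's IP, not your server's.
```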
Here's a similar question: "Usage limit on Bing geocoding vs Google geocoding?".
Google will start denying your requests at around 2,500 per day. Bing has a much higher daily limit (it used to be 30k; I think it's up to 50k now).
There are a number of free geo-coding services. I recommend staggering your requests to use multiple services if you need a large number of addresses coded daily. Here's a list of 54 providers: http://www.programmableweb.com/apitag/geocoding.
My goal is to synchronize a web-application with an internal database. The web-application has a public API, but in order to fully synchronize the two sources I would need to make around 2000 separate API calls every time. My instinct tells me that this is excessive and possibly irresponsible, but I lack the experience to know for sure.
In this particular case the web application is Asana, but I've encountered similar situations before with other services. Is there any way to know whether you're abusing a service through excessive API calls? I know I'm not going to DoS a company like Asana, but I can't shake the feeling that there must be a better way than making ~150k requests per day.
The only other option I can think of is to update the web-service only when I know there's been a change in the database, but I'll lose a lot of capability that way.
I apologize for the subjectivity of this question, but I'm really hoping that someone can explain if there's any kind of etiquette that's expected when using public APIs.
(I work at Asana)
This is an excellent question, or rather set of questions.
You are designing a system that will repeatedly make requests for every object. What will happen as the number of objects grows? Even if your initial request rate were reasonable, this would suffer problems with scalability. A more scalable solution is one that scales with the number of changes in the system. This will also grow over time, but much more slowly - the number of changes a single user can make per day is relatively constant, but the total number of objects they've created over time grows and grows. So my first piece of advice would be to avoid doing things this way, and instead find a way to detect changes and just act on those. It would be interesting to know why you feel you'll lose capability by taking this approach.
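The change-driven approach can be sketched like this (hypothetical `modified_at` field on a locally cached snapshot; the real API's field names may differ):

```python
from datetime import datetime, timezone

def changed_since(objects, last_sync):
    """Return only the objects modified since the previous sync.

    Work now scales with the number of *changes*, which stays roughly
    constant per user per day, instead of with the *total* number of
    objects, which only ever grows.
    """
    return [o for o in objects if o["modified_at"] > last_sync]

# Hypothetical snapshot of remote objects:
objects = [
    {"id": 1, "modified_at": datetime(2014, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified_at": datetime(2014, 6, 1, tzinfo=timezone.utc)},
]
last_sync = datetime(2014, 3, 1, tzinfo=timezone.utc)
to_update = changed_since(objects, last_sync)  # only object 2 needs work
```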
Now, I happen to know that the Asana API does not currently provide you with any friendly mechanism to just detect changes in the system. This is a commonly requested feature and we are looking into it, though I unfortunately cannot promise a delivery date. So you might be left with no choice but to poll our system for now.
As for being polite to the API, many service providers set limits on their API usage to prevent accidental or malicious use of the API from impacting the service to their other customers -- Asana is no exception. Sometimes these limits are published, other times not, and there is no standard limit: it all depends on the service. But it is very thoughtful of you to be curious about service limitations.
That said, 150k requests per day is, for the Asana API, kind of a lot. If all of our API users gave us that much traffic, we might be serving more requests per day than Google Web Search, and we're not quite that scalable yet. :) Technically, though, we can sometimes handle requests at that volume from a single user.
If you must poll, try to poll at intervals of, say, 15 minutes. But please do not poll your entire workspace that often; it's likely to be too much traffic/data. We're working on providing you with a better solution.
If you do happen to make too many requests of the Asana API, you will get back HTTP status code 429 instead of your desired response; you can read more about that here (https://asana.com/developers/documentation/getting-started/errors).
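A polite way to react to that 429 is to back off and retry rather than hammer the service; a sketch, where `do_request` is a hypothetical thin wrapper around your HTTP client that returns a status code and body:

```python
import time

def request_with_backoff(do_request, max_retries=5, base_delay=1.0):
    """Retry a request while the service answers HTTP 429 (rate limited),
    doubling the wait between attempts (exponential backoff)."""
    delay = base_delay
    status, body = do_request()
    for _ in range(max_retries):
        if status != 429:
            return status, body
        time.sleep(delay)
        delay *= 2  # back off harder each time we're told to slow down
        status, body = do_request()
    return status, body
```

If the service sends a `Retry-After` header with the 429, honoring it directly is even better than a guessed backoff schedule.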
I want to be able to pull how many followers Twitter accounts have, in Rails. However, I want to do this every day for many accounts: 10,000+. I am only allowed around 150 requests per IP address.
I am a newb to Rails, but I have heard of solutions like IP masking, bouncing, and proxy servers to get around this problem.
I have also heard that Heroku IPs change all the time for your app, so this may not be a problem.
My main question is: can anyone explain what strategy makes it possible to make more calls to a rate-limited API from a Rails app on Heroku?
Trying to circumvent the restrictions of the API is a very bad idea. You can require users to authorize with Twitter in order to get certain requests to count against their individual API limits instead of yours.
Also, not all calls are rate limited. Some have individual limits, others are limited as part of a group. Look into more creative ways to use the API in ways that reduce the number of requests you need to make.
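One way to act on the first suggestion is to spread the lookups across your users' own authorized tokens, so each request counts against that user's allowance rather than yours. A sketch (hypothetical helper; the 150-per-token figure is the limit from the question):

```python
def partition_lookups(account_ids, user_tokens, per_token_limit=150):
    """Assign at most `per_token_limit` account lookups to each
    user-authorized token; returns the assignments plus any accounts
    left over (meaning you need more authorized users)."""
    assignments = {}
    remaining = list(account_ids)
    for token in user_tokens:
        batch, remaining = remaining[:per_token_limit], remaining[per_token_limit:]
        if not batch:
            break
        assignments[token] = batch
    return assignments, remaining
```

As for the second suggestion: batch endpoints are the big win here. Twitter's `users/lookup` call, for example, returns data (including follower counts) for up to 100 accounts per request, which cuts the number of requests you need by two orders of magnitude.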