Is it possible that I already exceeded the query limit for Google Maps Static in just an hour of testing?

I just started playing with the Google Maps API for Static Images, and within an hour this image appeared:
http://www.coon.it/drop/limit.png
Is that normal?
My page needs 6 static images, so that would mean I called it something like 170 times?
I don't think that's possible, since the pictures are always the same and the documentation says that requesting the same image again doesn't count.
What can I do?
Thank you

Although there is an absolute limit on the number of images, that limit may be calculated over a shorter period than 24 hours. 2400/day could be interpreted as 100/hour, so you can't use all 2400 in one go [I can't remember what the limit is for Static Maps, but you get the idea].
With most Google services there is also a rate limit, and it's possible that fetching six images almost simultaneously breaks the rate limit. Rate limits vary depending on server load, but spacing requests out by 200 ms should be OK, as sketched below.
Or it may look like an automated image fetcher (especially if you have run it 30 times in the last hour); Google doesn't like automated crawlers either.
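For what it's worth, a minimal sketch of spacing the six requests out, assuming hypothetical map centers and the classic center/zoom/size/sensor parameters (your setup may also need an API key):

    # Fetch six static maps ~200 ms apart instead of all at once.
    # The coordinates below are made-up example points.
    import time
    import urllib.request

    BASE = "https://maps.googleapis.com/maps/api/staticmap"
    centers = ["45.07,7.68", "45.08,7.69", "45.06,7.67",
               "45.05,7.70", "45.09,7.66", "45.04,7.71"]

    for i, center in enumerate(centers):
        url = f"{BASE}?center={center}&zoom=13&size=400x300&sensor=false"
        with urllib.request.urlopen(url) as resp, open(f"map{i}.png", "wb") as f:
            f.write(resp.read())
        time.sleep(0.2)  # ~200 ms between requests to stay under the rate limit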

What should it display? For me it just displays a landscape with a row of trees (which I guess is what you want to see?).

User simulation with JMeter for stress and scalability testing

Hi there. I am trying to conduct stress and scalability testing for a web application. One of the problems is that it is not clear how many users the website can handle, so first I am conducting a stress test with different user counts. Consider the picture below:
In the picture I start with 100 users and gradually increase the load. I have requirements, for example:
Response time should not exceed 7 seconds
Throughput should not fall under 35 requests per second
The error percentage should not be more than 10 percent of total requests
So if 100 users satisfy the requirements, I will continue the test with 150 users and so on until at least 2 of the 3 requirements are broken. With that user count I will then perform the scalability test. Is this approach right? If not, please advise me on how I should simulate users.
The approach is more or less right. However, you could save time and effort by finding the breaking point of the system with a single test: just start with 1 user and gradually increase the load up to the maximum.
JMeter's theoretical limit for a single Thread Group is 2,147,483,647 threads; your actual limit will be lower, as your hardware resources are most probably limited. Check out the What's the Max Number of Users You Can Test on JMeter? article for more details.
In any case, if you put a reasonably high number of threads in the Thread Group and configure the load to increase gradually, you can run your test and pay attention to the following charts:
Active Threads Over Time - shows the number of active users
Response Times Over Time - shows the system response time
Transactions Per Second - shows the throughput (the number of requests per second)
At the first stage of the test the response time will stay the same and the throughput will grow as you increase the load. At a certain point the response time will start increasing and the throughput will start going down - that is the limit of your system (the so-called "bottleneck").
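If it helps to see the shape of such a test outside JMeter, here is a rough Python sketch of a gradual ramp-up against a hypothetical endpoint; JMeter's Thread Group and listeners do all of this (and much more) for you:

    # Gradual ramp-up sketch: add users in steps, report per-step stats.
    # TARGET is a hypothetical endpoint; errors are silently dropped here
    # but should be counted against the 10% requirement in a real test.
    import threading
    import time
    import urllib.request

    TARGET = "http://example.com/"
    RAMP_STEP = 10          # users added per step
    STEP_SECONDS = 30       # how long each step is held
    MAX_USERS = 200

    results = []            # (timestamp, elapsed_seconds) per request
    lock = threading.Lock()
    stop = threading.Event()

    def user_loop():
        # One simulated user: request the page in a loop, record timings.
        while not stop.is_set():
            start = time.time()
            try:
                urllib.request.urlopen(TARGET, timeout=10).read()
                with lock:
                    results.append((start, time.time() - start))
            except OSError:
                pass

    threads = []
    for _ in range(MAX_USERS // RAMP_STEP):
        for _ in range(RAMP_STEP):
            t = threading.Thread(target=user_loop, daemon=True)
            t.start()
            threads.append(t)
        time.sleep(STEP_SECONDS)
        with lock:
            window = [e for ts, e in results if ts > time.time() - STEP_SECONDS]
        if window:
            print(f"{len(threads)} users: avg response "
                  f"{sum(window) / len(window):.2f}s, "
                  f"throughput {len(window) / STEP_SECONDS:.1f} req/s")
    stop.set()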

How do I search this? Is it possible to access more than 100 JSON API search results if I pay for it?

How do I search this?
I want to be able to:
1. create a search engine
2. programmatically search it through an API (Python, or another language)
3. paginate through the results (all of them, if I choose)
4. store URLs or results that I want.
Is this even possible with Google Custom Search Engine?
I enabled billing, my credit card is up to date with Google, and I can do steps 1-3 above.
On a search I will get back 4,000 results, for example, but I can only access 10 at a time with the API, no more, and when I reach 100 results I am shut off.
I want to be able to process 1,000 results if I wish.
Before you reply: do you personally have working code that goes beyond the 100 limit?
If so, I would be very much interested in speaking with you and learning how you did it.
I am using Python at the moment, but it could be any language.
--
I tried using &start=100, 200, and so on to paginate through, but this does not work.
I tried getting 100 results in a Python script, ending the program, then calling it again with start=100 (after the first set returned), and nothing happened.
I want to be able to use the Google Custom Search API and pay Google for a monthly subscription, but I have not found that this is possible.
For any given search I want to decide how many results to process; it could be 1K, it could be 20K. I simply need/want access to the full result set, but I have not found a way to do this.
The API allows only a max result depth of 100. See https://developers.google.com/custom-search/v1/cse/list
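For reference, paging with start up to that cap looks like the sketch below (placeholder key and cx values, using the documented customsearch/v1 endpoint). Any page where start + num - 1 would exceed 100 is rejected by the API, which is why start=100 and beyond gets you nothing:

    # Pages through the first (at most) 100 results of the Custom Search
    # JSON API. API_KEY, CX and QUERY are placeholders.
    import json
    import urllib.parse
    import urllib.request

    API_KEY = "YOUR_API_KEY"
    CX = "YOUR_SEARCH_ENGINE_ID"
    QUERY = "example query"

    items = []
    for start in range(1, 100, 10):      # start = 1, 11, 21, ..., 91
        params = urllib.parse.urlencode({
            "key": API_KEY, "cx": CX, "q": QUERY,
            "start": start, "num": 10,
        })
        url = "https://www.googleapis.com/customsearch/v1?" + params
        with urllib.request.urlopen(url) as resp:
            page = json.load(resp)
        items.extend(page.get("items", []))
        if "nextPage" not in page.get("queries", {}):
            break                        # fewer results actually available

    print(f"Fetched {len(items)} results; the API stops at 100.")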

What is the difference between parsing betting website for live scores vs official website API?

I want to monitor live scores for some soccer matches. I have 2 ways to do this:
official API from the website (free)
parse the website's source code myself and get the data from it (need to do it every second)
What is the difference? Is calling the API faster?
This can depend on quite a lot external to this specific scenario, but given the context, yes, the API would be much faster. The difference is in what data is being sent/received/parsed.
In either scenario you'd need some timer to tick and parse the results (website or API), so there's no performance difference in the "wait code", but the big difference is in the data itself that is parsed. When you call the API, chances are you will send a specific parameter or call a specific function that indicates what you're looking for; pseudo-code example:
    SoccerSiteApi.GetValue(SCORE, team1, team2);

or

    SoccerSiteApi.GetCurrentScores(team1, team2);
By calling the API, you are only sending and receiving a few hundred bytes (or more, depending on the data) and getting back exactly what you want; that is, you don't need to parse the scores out of the values sent back, since they are the scores, so no processing time is spent doing anything additional with the data itself.
If, however, you were to parse the entire web site, you would need to make an HTTP GET request (and all that entails) to fetch the entire page (which could be a couple of hundred KB or MB depending on content) and then spend processing time extracting the exact data you were looking for, and then do all of this every second.
So the biggest difference is the amount of data and the time spent processing it.
Hope that helps.
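For illustration, a rough sketch of the contrast, with hypothetical URLs standing in for whatever the score provider actually exposes:

    # Option 1: a JSON API returns just the data you asked for
    # (a few hundred bytes). The endpoint is a made-up example.
    import json
    import re
    import urllib.request

    api_url = "https://api.example.com/scores?team1=ARS&team2=CHE"
    with urllib.request.urlopen(api_url) as resp:
        score = json.load(resp)          # e.g. {"team1": 2, "team2": 1}

    # Option 2: scraping downloads the whole page (hundreds of KB),
    # then you must dig the score out of the HTML yourself, every second.
    page_url = "https://www.example.com/live/arsenal-vs-chelsea"
    with urllib.request.urlopen(page_url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = re.search(r'class="score">(\d+)\s*-\s*(\d+)<', html)  # fragile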

Cache strategy for Google Places API nearby results

I'm trying to figure out how I would go about caching the results I get back from a nearby search, since I can get up to 60 places based on my current long/lat position.
So I'm thinking: if user1 asks Google for nearby places within a radius of 1 km from position long: xxx lat: xxx, and I get a result back with places, and then user2 asks for nearby places 500 m away from user1, I would like to be able to use the cached data that user1 already fetched from Google.
Any suggestions on how I would go about implementing this kind of functionality?
Or should I just cache each place by its long/lat and implement my own geosearch?
I'm not aware of any built-in Google Maps library that allows you to do a radius search without calling the server. Your best bet would be to implement your own geosearch on the fetched points or use an open-source library to help you; a sketch follows below.
If you do use cached data, remember that there's no guarantee how many of the 60 places fetched in a 1 km radius are also within the 500 m radius. If you did another query for 60 places within a 500 m radius you'd get more accurate information.
Disclaimer: Google has certain rules in place about caching data; see section 10.5(d) of https://developers.google.com/maps/terms#section_10
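A minimal sketch of a do-it-yourself geosearch over cached places using the haversine formula; the place dictionary shape here is an assumption, not the actual Places response schema:

    # Filter cached places by great-circle distance from a query point.
    import math

    EARTH_RADIUS_M = 6371000.0

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance between two lat/long points, in meters.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def nearby(cache, lat, lon, radius_m):
        # Return cached places within radius_m of (lat, lon).
        return [p for p in cache
                if haversine_m(lat, lon, p["lat"], p["lng"]) <= radius_m]

    # Usage: user2's 500 m query served from user1's cached 1 km result set.
    cache = [{"name": "Cafe", "lat": 59.3293, "lng": 18.0686}]  # made-up data
    print(nearby(cache, 59.3300, 18.0700, 500))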

Rate limits and max data points per upload

Noob here, using an Arduino with WiFi (Adafruit CC3000) to send data. Or trying to. I have read and understood that there are rate limits and a limit on the number of data items uploaded per connection, but I searched and cannot find what the numbers for those limits are. One per minute? I have deduced by testing that the number of data items is two, and tried to send my 6 data points in three consecutive calls of two. That crashes and burns with assorted errors, so I suspect I'm hitting the rate limit.
P.S. Two data points per minute work, so the API key, etc., is not an issue.
Can anyone tell me what these limits are?
Thanks very much for your time.
If you have a developer account (free), you have a max of 25 requests per minute. This includes any app on the front end pulling/sampling the data, and it counts all GET and PUT requests and socket connections as well.
Src: http://forums.electricimp.com/discussion/2108/max-frequency-of-updates-to-xively-wo-getting-booted/p1
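Given that budget, the safest approach is to batch all six data points into one request rather than three calls of two. Below is a rough Python sketch of a single PUT carrying all six values; the feed ID, API key, and datastream names are placeholders, and the payload follows the Xively v2 feed format as I recall it, so double-check against the current docs (the same batching idea applies on the Arduino side):

    # One PUT per upload cycle carries all six values, keeping you far
    # under the 25 requests/minute budget. All identifiers are placeholders.
    import json
    import urllib.request

    API_KEY = "YOUR_XIVELY_API_KEY"
    FEED_ID = "123456789"

    payload = {
        "version": "1.0.0",
        "datastreams": [
            {"id": f"sensor{i}", "current_value": str(value)}
            for i, value in enumerate([21.5, 48.0, 1013.2, 0.4, 3.3, 7.0])
        ],
    }
    req = urllib.request.Request(
        f"https://api.xively.com/v2/feeds/{FEED_ID}.json",
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-ApiKey": API_KEY, "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)  # one request carries all six values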