When processing a large volume of data uploads through the OneDrive API, I see the following error returned:
"The request wasn't made because the same type of request was repeated too many times. Wait 7 seconds and try again."
This occurs every time I do volume-based uploads.
Has anyone come across this, or found a workaround?
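One workaround, if it helps: honor the server's suggested wait and retry. Here is a minimal Python sketch, assuming the throttling surfaces as an HTTP 429 with a Retry-After header (the URL, headers, and PUT-based upload are placeholders; adapt them to your actual upload call):

import time
import requests

def upload_with_retry(url, data, headers, max_attempts=5):
    # Hypothetical helper: retry a throttled upload, honoring the
    # server's suggested wait before trying again.
    for attempt in range(max_attempts):
        resp = requests.put(url, data=data, headers=headers)
        if resp.status_code != 429:
            return resp
        # Fall back to exponential backoff if no Retry-After header is set.
        wait = int(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    return resp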
I have a spreadsheet with around 900 GET requests that look like this: "http://www.omdbapi.com/?apikey=XXXXXXX&t=Alien"
I can pull one at a time and get the right data, but is there a way I can just dump a batch of them into Postman?
I tried pasting multiple URLs into the field, but I get an error that the request URL is too big.
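One way around Postman's URL-size limit is to script the batch instead. Here is a minimal Python sketch using the requests library (the title list is a placeholder; in practice you would read the ~900 titles from the spreadsheet):

import requests

API_KEY = "XXXXXXX"  # placeholder, as in the question
titles = ["Alien", "Aliens", "Blade Runner"]  # in practice, read from the spreadsheet

results = {}
for title in titles:
    resp = requests.get("http://www.omdbapi.com/", params={"apikey": API_KEY, "t": title})
    results[title] = resp.json()

print(results["Alien"].get("Title"))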
We used the Google Cloud Function provided by Cloudflare to import data from Google Cloud Storage into Google BigQuery (refer to: https://developers.cloudflare.com/logs/analytics-integrations/google-cloud/). The Cloud Function was running into an error saying:
"Quota exceeded: Your table exceeded quota for imports or query appends per table"
I queried the INFORMATION_SCHEMA.JOBS_BY_PROJECT view and found that error_result.location is 'load_job_per_table.long'. The job ID is '26bb1792-1ca4-42c6-b61f-54abca74a2ee'.
I looked at the Quotas page for the BigQuery API service, but none of the quota statuses showed as exceeded. Some are blank, though.
Could anyone help me identify which Google Cloud quota or limit was exceeded, and how to increase it? The Cloudflare function is used by another Google account, where it works well without any errors.
Thanks,
Jinglei
Try looking for the specific quota error among the log entries in Cloud Logging.
I had a similar issue with the BigQuery Data Transfer quota being reached. Here is an example Cloud Logging filter:
resource.type="cloud_function"
severity=ERROR
timestamp>="2021-01-14T00:00:00-08:00"
textPayload:"429 Quota"
Change the timestamp accordingly, and maybe remove the textPayload filter.
You can also just click through the interface to filter for severe errors and search in the UI.
Here is another example:
severity=ERROR
timestamp>="2021-01-16T00:00:00-08:00"
NOT protoPayload.status.message:"Already Exists: Dataset ga360-bigquery-azuredatalake"
NOT protoPayload.status.message:"Syntax error"
NOT protoPayload.status.message:"Not found"
NOT protoPayload.status.message:"Table name"
NOT textPayload:"project_transfer_config_path"
NOT protoPayload.methodName : "InsertDataset"
textPayload:"429 Quota"
Identifying the quota comes down to what actions your function performs: it could be anything (insert, update, volume, external IP, and so on). Then analyze the frequency or metric values and evaluate them against Google's defined quotas; that will give you an indicator of which quota is being exceeded.
You can refer to the following video on the same topic.
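For the load-job quota specifically, one way to check is to count recent load jobs per destination table in INFORMATION_SCHEMA.JOBS_BY_PROJECT and compare against BigQuery's documented per-table daily load-job limit. A minimal sketch (the region qualifier region-us is an assumption; use the region of your dataset):

from google.cloud import bigquery

client = bigquery.Client()
# Count load jobs per destination table over the last 24 hours.
query = """
SELECT destination_table.table_id AS table_id, COUNT(*) AS load_jobs
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = 'LOAD'
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY table_id
ORDER BY load_jobs DESC
"""
for row in client.query(query):
    print(row.table_id, row.load_jobs)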
CloudFlare says they have a fix coming for the quota issue: https://github.com/cloudflare/cloudflare-gcp/issues/72
I am getting an error notifying me that I have exceeded the API quota. However, I have a quota of 150,000 requests and have only used 12,000 of them. What is causing this error?
Example code:
from googleplaces import GooglePlaces

api_key = ''
google_places = GooglePlaces(api_key)

# Search for places matching "Subway" near Vancouver, Canada
query_result = google_places.nearby_search(location="Vancouver, Canada", keyword="Subway")
for place in query_result.places:
    print(place.name)
Error Message:
googleplaces.GooglePlacesError: Request to URL https://maps.googleapis.com/maps/api/geocode/json?sensor=false&address=Vancouver%2C+Canada failed with response code: OVER_QUERY_LIMIT
The request in the error message is not a Places API request; it is a Geocoding API request:
https://maps.googleapis.com/maps/api/geocode/json?sensor=false&address=Vancouver%2C+Canada
It doesn't include an API key. That means you can have only 2,500 daily geocoding requests without an API key. Geocoding requests also have a queries-per-second (QPS) limit of 50 queries per second, so you might be exceeding the QPS limit as well.
https://developers.google.com/maps/documentation/geocoding/usage-limits
I'm not sure why a library that is supposed to call the Places API web service actually calls the Geocoding API web service. Maybe it is some kind of fallback for when the Places API doesn't return any results.
For anyone looking at this post more recently: Google now requires an API key to use its Maps APIs, so please ensure your program uses one. Also keep in mind the various throttle limits.
See the following for more info:
https://developers.google.com/maps/documentation/geocoding/usage-and-billing#:~:text=While%20there%20is%20no%20maximum,side%20and%20server%2Dside%20queries.
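To illustrate both points, here is a minimal direct Geocoding API call that includes a key (YOUR_API_KEY is a placeholder):

import requests

# Direct Geocoding API request with an API key (YOUR_API_KEY is a placeholder).
resp = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={"address": "Vancouver, Canada", "key": "YOUR_API_KEY"},
)
# "status" will be e.g. OK, OVER_QUERY_LIMIT, or REQUEST_DENIED.
print(resp.json()["status"])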
For the past couple of years our app has been using SoundCloud's API without any issues. Recently we've started running into 504 errors when trying to request a user's track list. The request for the user's metadata is perfectly fine, but the track-list request now returns a 504 about 80% of the time.
Has anyone else experienced this? Any SoundCloud engineers able to give some support?
A sample URL is:
https://api.soundcloud.com/users/1887081/tracks.json?client_id=[OUR_APP_ID]
The docs for this call can be found here:
https://developers.soundcloud.com/docs/api/reference#tracks
That user ID, 1887081, has 78 tracks. The query and fetch are clearly taking longer than their middleware/API is willing to wait. I have two recommendations:
Write to their support and ask them to optimize their backend or query/index. In lieu of that, they could also increase the timeout.
Use pagination: limit=10 and offset=0 to fetch the first 10 tracks, then offset=10 for the next page, and so on (see the sketch below).
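A minimal pagination sketch along those lines, using the requests library (OUR_APP_ID is a placeholder, as in the question):

import requests

CLIENT_ID = "OUR_APP_ID"  # placeholder, as in the question
URL = "https://api.soundcloud.com/users/1887081/tracks.json"

tracks, offset, limit = [], 0, 10
while True:
    resp = requests.get(URL, params={"client_id": CLIENT_ID, "limit": limit, "offset": offset})
    resp.raise_for_status()
    page = resp.json()
    if not page:  # an empty page means everything has been fetched
        break
    tracks.extend(page)
    offset += limit

print(len(tracks))  # should reach all 78 tracks, 10 per request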
Also, if this is a production-level app on your end, I would recommend using an API monitoring tool like Runscope. You can do automated, scheduled monitoring with simple assertions (no programming), such as checking for a 200 status, or even for specific content you know should be in the JSON. That way, when things go south or performance degrades in any way, you'll know ahead of time, rather than having to figure it out after your app breaks because of 504s.
Before this question gets closed as too specific, I quote from the official Google OAuth group:
As of March 4, 2013, discussion on this group has moved to the google-oauth
tag on Stack Overflow
We support Google OAuth2 on Stack Overflow. Google engineers
monitor and answer questions against the tag google-oauth. You should
use this tag when asking questions.
Starting this morning (2014-07-18), some of my users have been getting a 500 Internal Server Error with payload { "error" : "internal_failure" } when trying to obtain an access token, using a previously obtained authorization code, from Google's token endpoint: https://accounts.google.com/o/oauth2/token.
Hopefully a Google engineer monitoring the google-oauth tag here will be able to provide more insight.
A 500 Internal Server Error is a hiccup on Google's side, or flood protection. It's normally resolved by sending the same request again while implementing exponential backoff.
I find it strange that the error came from the OAuth servers, which makes me think the servers may have been down at the time. Resending the request should solve the problem.
Handling 500 or 503 responses
A 500 or 503 error might result during heavy load or for larger, more
complex requests. For larger requests, consider requesting data for a
shorter time period. Also consider implementing exponential backoff.
The frequency of these errors can depend on the view (profile) and the
amount of reporting data associated with that view; a query that causes
a 500 or 503 error for one view (profile) will not necessarily cause an
error for the same query with a different view (profile).
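For reference, a minimal exponential-backoff sketch around the token request (a generic pattern, not Google's official client logic; the form fields are elided):

import random
import time
import requests

def post_with_backoff(url, data, max_retries=5):
    # Retry on 500/503 with exponential backoff plus jitter.
    for attempt in range(max_retries):
        resp = requests.post(url, data=data)
        if resp.status_code not in (500, 503):
            return resp
        # Wait 2^attempt seconds plus up to 1s of random jitter.
        time.sleep(2 ** attempt + random.random())
    return resp

# Usage against the token endpoint from the question (form fields elided):
# resp = post_with_backoff("https://accounts.google.com/o/oauth2/token", data={...})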