I have a PHP application using the BigCommerce PHP API v3.0. The application sends multiple API requests to get data on the store's orders (it's a private application in BC). My problem is that lately more and more requests are failing, and my application returns a 500 server error because of it. The behaviour is very odd: at times it works, and at others it returns that error... Can anyone help me with this? Has this happened to other people?
Thanks!
To answer my own question, here's what I got as a response from BigCommerce support:
Thank you for contacting Bigcommerce and for your report.
The 500 errors you have been seeing in greater number this week and last
week are expected, as there have been several server performance
issues: a couple due to DDoS attacks, some relating to our object
storage system, and issues with our webhooks queue that specifically
affected the API proxy. We are sorry about these service interruptions,
and know that our Technical Operations team has been working hard to
first correct and then prevent the root issues. Assuming no new
issues crop up, we should see a reduction in 500 errors back to more
expected, much less frequent levels.
We do appreciate your feedback on this, and thank you for bearing with
us while we work to get things back to normal operating levels. While
500 errors are something that can occur, they should not be as frequent
as they have been this week.
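Since these 500s turned out to be transient server-side failures, the usual client-side mitigation is to retry idempotent requests a few times before surfacing an error. A minimal sketch, not BigCommerce-specific; the function name, attempt count, and delay are illustrative:

```python
import time
import urllib.request
import urllib.error

def get_with_retry(url, attempts=3, delay=2.0):
    """Fetch a URL, retrying on HTTP 5xx responses (often transient
    server-side failures). 4xx errors and the final failure are
    re-raised immediately."""
    for i in range(attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code < 500 or i == attempts - 1:
                raise  # client errors and exhausted retries surface
            time.sleep(delay * (i + 1))  # simple linear backoff
```

The same idea carries over to the BigCommerce PHP client: wrap the order-fetching calls and only fail after a few spaced-out attempts.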
We are using the Google Calendar API to keep our app in sync with events in our users' Google Calendars.
We have started regularly getting rate-limiting errors (403).
However, our usage according to the APIs and Services page of the Google Cloud console is well below the stated limits (10,000 queries per minute and 600 per user per minute). We are also using the batch API to send our requests, so we cannot implement exponential backoff.
Anyone got any advice on avoiding these rate limiting errors?
Rate-limiting errors with Google are basically flood protection: you are going too fast. Don't put too much stock in what the Google developer console shows; the numbers in those graphs are guesstimates at best, and they are not real-time.
The main cause of rate limiting is that when you send a request, there is no way of knowing which server it will run on, and no way of knowing what other requests are running on that same server. Your request may therefore run faster or slower than you would expect, which makes it hard to pin down what 10,000 queries per minute and 600 per user per minute actually mean.
10,000 requests run on an overloaded server may take 2 minutes, while on a server that is not overloaded they could run in 30 seconds, meaning the next requests you send will blow out the quota.
As there is really no way of avoiding it, you should just ensure that your application is capable of responding to it by sending the request again. I wrote an article a number of years ago, flood buster, about how I tracked my requests locally in my application and kept things at the right speed.
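The local-tracking idea can be sketched as a small sliding-window throttle; the class and parameter names here are illustrative, not taken from the flood buster article:

```python
import time
from collections import deque

class LocalThrottle:
    """Track request timestamps locally and block until we are under
    max_requests per `period` seconds, so we stay below the quota
    regardless of how fast the server happens to process us."""
    def __init__(self, max_requests, period=60.0):
        self.max_requests = max_requests
        self.period = period
        self.sent = deque()  # timestamps of recent requests

    def wait(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] > self.period:
            self.sent.popleft()
        if len(self.sent) >= self.max_requests:
            # Sleep until the oldest request leaves the window.
            time.sleep(self.period - (now - self.sent[0]))
        self.sent.append(time.monotonic())
```

Calling `throttle.wait()` before each API request keeps the client's send rate under the local budget even when responses come back at uneven speeds.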
Really, as long as your application responds by sending the request again, you should be OK.
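The retry-on-failure advice can itself be sketched as a wrapper with exponential backoff plus jitter; `RateLimitError` and the callable are stand-ins for whatever your client library actually raises and calls:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for whatever your client raises on a 403/429 response."""

def call_with_retry(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn, retrying on RateLimitError with exponential
    backoff plus jitter: base_delay * 2**attempt, plus a random
    fraction of base_delay so concurrent clients don't retry in sync."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
```

Even when requests go out through the batch API, the batch itself can be re-sent this way when it comes back with a 403.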
We're doing a high volume of "getToken" requests (supplying a code) using the OAuth2 client from the "googleapis" npm package (version 14.2.0).
Out of 84,940 requests within a 24-hour period, 290 fail with the notorious "invalid_grant" error. We've checked everything found in this question, but are still experiencing these errors. Obviously this is a very small error rate (< 0.5%), probably within an acceptable range if these were general errors from Google (500s).
Can "invalid_grant" also represent general errors? If so, we'd be inclined to just ignore these errors as part of the reality of doing so many logins.
If not, are there ways to dig deeper into the issue?
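For what it's worth, in OAuth 2.0 (RFC 6749) `invalid_grant` comes back with HTTP 400 and means the authorization code is expired, already redeemed, or bound to different client credentials, so it is a different class of failure from a transient 500. A hedged sketch of triaging the two before deciding whether to retry; the function name and return labels are illustrative:

```python
def classify_token_error(status_code, body):
    """Rough triage for token-exchange failures. Per RFC 6749,
    invalid_grant arrives as HTTP 400 (expired, reused, or mismatched
    authorization code); 5xx responses are the transient, retryable
    kind."""
    error = (body or {}).get("error")
    if status_code >= 500:
        return "retry"    # transient server-side failure: re-send
    if error == "invalid_grant":
        return "inspect"  # bad/expired/reused code: retrying won't help
    return "fail"         # other 4xx: a request-level problem
```

Logging the "inspect" bucket separately (with the code's age and whether it was ever redeemed before) is one way to dig deeper than a blanket error rate.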
I found a lot of posts about this issue, but none matches mine.
I have a website that subscribes to IG real-time updates. For some time now, trying to subscribe to new notifications has returned the above error. It seems like IG can't reach the callback_url.
Interesting findings while troubleshooting:
I tried to manually (directly) access the callback_url, and it works flawlessly.
When I run the site on my local machine and make sure the callback_url points to my server and the verify_token is valid, then it all works as it should! IG reaches the callback_url.
I used the Apigee console to make the subscription request, and there it sometimes works. Meaning: I craft the request to the IG API and click Send. If it fails, I click again (without changing any parameters) and then it works.
Web server logs are in line with the results I get (when it complains it can't reach the URL, there is no log of a request).
Does anyone have any tips or suggestions on how to fix this?
Thanks!
While chasing my own tail for a solution, I also contacted IG support. I described the issue using the same description as above.
After two weeks, everything suddenly started working again. Magic! No errors at all. Needless to say, nothing changed on my end (hardware / network / code).
I didn't receive any response from their support, but considering that a previous email to them a few months ago took about two weeks to answer, I am quite sure some support engineer simply fixed the issue on their side and didn't even bother to let me know what was going on.
This could be related to the fact that they killed most of their real-time notifications API.
Anyway, I'm leaving this here so that anyone who encounters a similar issue knows they may need to email IG support.
I'm using the official Ruby gem to request the Instagram API (GET users/xx/media/recent). After a bug in my code led to every visitor to my site requesting the API, I suspect I'm a victim of IP banning and/or rate-limit throttling. Now all of my requests with curl return a "500 server error" response. The Ruby gem returns "Something is technically wrong".
I am surprised by this behaviour because:
I didn't exceed the rate limit by much, since my site has few users. I estimate I made around 1-2 requests per second for a few weeks, until the bug in my code was detected. (The rate limit is 5,000 requests/hour according to Instagram.)
I fixed the bug about two weeks ago and have since cached the results from Instagram for long periods. I now make 120 requests per hour at most. I would expect an IP ban to be lifted after a period of time.
Has anyone else been hit by this issue and resolved it in some way? I have posted a bug report through the developer pages at instagram.com, but expect no answer from them.
I should add that changing the client_id doesn't make any difference, and that using my production client_id locally works fine.
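The caching approach mentioned in the question can be sketched as a small TTL cache in front of the API call; the class name and TTL are illustrative:

```python
import time

class TTLCache:
    """Cache API responses for `ttl` seconds so repeated page views
    don't each trigger an upstream request."""
    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self.store = {}  # key -> (timestamp, value)

    def get_or_fetch(self, key, fetch_fn):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry and now - entry[0] < self.ttl:
            return entry[1]        # fresh: serve from cache
        value = fetch_fn()         # stale or missing: refetch
        self.store[key] = (now, value)
        return value
```

With a 5-minute TTL, even a busy page settles at a dozen upstream requests per endpoint per hour, well under any reasonable per-IP limit.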
Wait and see: http://developers.instagram.com/post/82701625883/api-returning-500-errors-on-specific-ip
They know about the problem.
Best regards,
It seems they have implemented some kind of DDoS prevention software, and I can no longer access the ticker. Does anyone know how to deal with this?
The service is working this morning, so I guess it has nothing to do with the DDoS software; it was probably a transient error. What gets me is that I could pull up the ticker from my browser but not from code, which is why I wrote. But it's working today.