AWS API Gateway: Error 429 Too Many Requests

I'm trying to create a backend system with AWS API Gateway and Lambda.
Over the past few days I created a PUT method for a new API resource, with an API key as a simple first security step. The PUT method invokes a Lambda function on AWS.
Then I deployed this API to a "prod" stage for some tests.
For the first few days everything worked as expected: I called the API with Postman and received all the data I was expecting.
But a couple of days ago I started receiving a 429 "Too Many Requests" response on every call. I also created a new stage, but nothing changed: the new stage, with the same or a newer version, always returns the same error.
The API is not hitting any limit, because it is called 4 or 5 times per day, not per second (verified in CloudWatch). There is no loop; it is a single invocation.
I don't think the error is on the Lambda side, because testing the API from the AWS API Gateway console returns no error (and the Lambda was working well before, with no changes since that version). The error only appears when I call the API from an external client (in my case Postman).
Can anyone help me solve this problem?
UPDATE: I've just created a POST method on the same resource, with the same parameters and the same Lambda. It is working. I wonder whether the problem is related to PUT methods in general, or whether within 2 days my POST method will be affected by the same problem as well.

I had the same problem. I deleted and recreated the deployment, and that worked in my case.

Here is a link to the errors related to Amazon's API Gateway. The last paragraph has additional information on the 429 error discussed above.

I had the same issue. I opened a case with AWS, and they suggested implementing this DependsOn fix in the template file. Refer: Link
And it worked for me.
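For reference, the DependsOn fix referred to above typically looks something like this in a CloudFormation/SAM template. The resource names here are hypothetical; the point is that the usage plan (which the API key is attached to) must not be created before the stage it references exists, otherwise requests can fail with 429:

```yaml
MyUsagePlan:
  Type: AWS::ApiGateway::UsagePlan
  # Without this, CloudFormation may create the usage plan before the
  # stage exists, leaving the key unattached and requests throttled.
  DependsOn: MyApiProdStage
  Properties:
    ApiStages:
      - ApiId: !Ref MyApi
        Stage: prod
```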

OpenTelemetry, AspNetCore and Jaeger - Getting started questions

I've been doing some reading and watching some videos on ASP.NET Core and OTel.
It's been a bit challenging because the API surface appears to have evolved quite a bit since 2020.
I've got my ASP.NET Core solution wired up with OTel via OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.Http and OpenTelemetry.Instrumentation.AspNetCore, using a Jaeger exporter.
I have a couple of questions:
In the samples I've seen, I can set the top-level service name via service registration on the exporters, but this property appears to have been dropped in the most recent packages. How do I set the top-level service name (it's showing up as an unknown service in Jaeger)?
I'm trying to propagate my tenant identifier to all span-related calls. I'm using Activity.Current.AddBaggage("TenantId", MyTenantId), but the value isn't getting exported to Jaeger (the baggage items aren't present in the raw JSON received by Jaeger).
I'd like to include the activity ID in the response headers for all outgoing responses. Do I need to write this myself, or is it baked into the ASP.NET Core OTel code somewhere?
Thanks!
How do I set the top-level service name (it's showing up as an unknown service in Jaeger)?
You achieve this using a Resource. There are a couple of ways to do it: you can set the environment variable OTEL_RESOURCE_ATTRIBUTES=service.name=my-service-name, or just OTEL_SERVICE_NAME=my-service-name.
If you want to do this programmatically, you register a Resource object with the TracerProvider.
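For the environment-variable route, a minimal sketch (the service name and version values are placeholders):

```shell
# Either variable works; per the OTel spec, OTEL_SERVICE_NAME takes
# precedence over service.name in OTEL_RESOURCE_ATTRIBUTES if both are set.
export OTEL_SERVICE_NAME=my-service-name
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-service-name,service.version=1.2.3"
```

Set these in the environment the application process is launched from.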
I'm trying to propagate my tenant identifier to all span-related calls. I'm using Activity.Current.AddBaggage("TenantId", MyTenantId), but the value isn't getting exported to Jaeger (the baggage items aren't present in the raw JSON received by Jaeger).
I don't think baggage items are automatically added to the trace; baggage only propagates context, it doesn't annotate spans. For this use case you should probably look at TraceState, or copy the values onto the spans yourself.
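To illustrate what copying the values yourself would involve (the SDKs do not copy baggage onto spans automatically), here is a language-neutral sketch in JavaScript; the function and the plain-object baggage/attribute shapes are hypothetical stand-ins, not a real OTel API:

```javascript
// Sketch: what a custom span processor would do for each started span,
// copying every baggage entry onto the span's exported attributes.
function copyBaggageToAttributes(baggageEntries, spanAttributes) {
  for (const [key, value] of Object.entries(baggageEntries)) {
    // prefix makes the origin of the attribute obvious in the Jaeger UI
    spanAttributes['baggage.' + key] = value;
  }
  return spanAttributes;
}
```

In a real setup the equivalent logic would live in a span processor registered with the tracer provider, so it runs for every span.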
I'd like to include the activity ID in the response headers for all outgoing responses. Do I need to write this myself, or is it baked into the ASP.NET Core OTel code somewhere?
I am not super familiar with the ASP.NET instrumentation components, but there should be some sort of hook for this. The JS, Python and other client libraries have this feature.

RESTDataSource - How to know if a response comes from a GET request or the cache

I need to get some data from a REST API in my GraphQL API. For that I'm extending RESTDataSource from apollo-datasource-rest.
From what I understand, RESTDataSource automatically caches requests, but I'd like to verify that this is actually happening. Is there a way to know whether my request is getting its data from the cache or hitting the REST API?
I noticed that the first request takes some time, but the following ones are way faster, and also that the didReceiveResponse method is not called every time I make a query. Is that because the data is loaded from the cache?
I'm using apollo-server-express.
Thanks for your help!
You can time the requests like the following:
console.time('restdatasource get req')
await this.get(url)
console.timeEnd('restdatasource get req')
Now, if the later calls are dramatically faster than the first one (typically a few milliseconds instead of a full network round trip), they are being served from the cache.
You can also monitor the Network tab in your browser's developer console to see which endpoints the application is calling. If it uses cached data, no new request to your endpoint will be logged.
If you are trying to verify this locally, one good option is to set up a local proxy so that you can see all the network calls being made (no network call means the response was read from the cache). Then you can simply configure your app, following the Apollo documentation, to forward all outgoing calls through a proxy like mitmproxy.
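The per-instance memoization that makes the later calls faster can be sketched as follows. MemoizingSource is an illustrative stand-in for what apollo-datasource-rest does internally, not the real RESTDataSource class; repeated GETs for the same URL resolve from a Map instead of hitting the API, which is also why didReceiveResponse does not fire every time:

```javascript
// Illustrative sketch of per-instance GET memoization.
class MemoizingSource {
  constructor(fetchFn) {
    this.fetchFn = fetchFn;            // stands in for the real HTTP fetch
    this.memo = new Map();             // url -> pending/resolved promise
    this.stats = { hits: 0, misses: 0 };
  }
  async get(url) {
    if (this.memo.has(url)) {
      this.stats.hits += 1;            // served from cache: no network call
      return this.memo.get(url);
    }
    this.stats.misses += 1;            // first time: actually fetch
    const result = this.fetchFn(url);
    this.memo.set(url, result);        // store the promise itself
    return result;
  }
}
```

Instrumenting a counter like `stats` in your own subclass is another quick way to confirm cache behaviour during local testing.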

UrlFetchApp.fetch stopped working on Monday the 7th after months with no issues, with two APIs

Has anyone else seen the following problem?
I use the Zendesk API and the PipelineDeals API.
The code has been in use for 2 months (no issues, all working).
As of this week (no changes to the code), both APIs fail on POST for create calls (GETs work fine, and authentication is also working fine for both APIs).
The execution log shows the correct data being encoded; example below (actual values removed):
UrlFetchApp.fetch([https://supernahelp.zendesk.com/api/v2/organizations.json, {headers={Authorization=Basic someencodedauthdata, Content-Type=application/json}, method=post, payload={"organization":{"name":"somecustomer","domain_names":"xyc.edu","organization_fields":{"supernauniqueid":"Sup-2308233814","crmdashboard":"someurladdedhere"}}}, muteHttpExceptions=true}])
The payload was passed through JSON.stringify before being added to the API call and had been working fine forever.
The error returned in the execution log: "call to make to ZD {"error":"RecordInvalid","description":"Record validation errors","details":{"name":[{"description":"Name: cannot be blank","error":"BlankValue"}"
This basically means the API could not parse the body correctly for the name value that was sent.
I opened a case with Zendesk, and they pulled their logs and showed me what they received (not the same record).
Only a snippet:
{"{\"organization\":{\"name\":\"customer name here \"
I noticed backslashes added to the payload, not by my code: they were added by GAS.
AND
The Pipeline API has the same issue: POST commands are rejected with a bad payload.
Both failed on the same day and no longer work at all.
This tells me others must be having issues with POST commands?
Looking for help, as the code worked fine and then stopped, and it looks like GAS started adding escape characters out of the blue.
Andrew
GAS was broken: encoding the content type into the headers stopped working, and the syntax for passing the content type changed (this broke many other scripts as well).
https://code.google.com/p/google-apps-script-issues/issues/detail?id=5585&can=6&colspec=Stars%20Opened%20ID%20Type%20Status%20Summary%20Component%20Owner
Andrew
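If the breakage was indeed the header-based content type, a request built along these lines sidesteps it by using Apps Script's top-level contentType option instead of a Content-Type header. The values below are the placeholders from the question, and the UrlFetchApp call itself is shown as a comment since it only runs inside Apps Script:

```javascript
// Build the JSON body once; do not stringify it a second time later.
const payload = JSON.stringify({
  organization: {
    name: 'somecustomer',
    domain_names: 'xyc.edu',
  },
});

const options = {
  method: 'post',
  contentType: 'application/json',   // top-level option, NOT inside headers
  headers: { Authorization: 'Basic someencodedauthdata' },
  payload: payload,                  // already a JSON string
  muteHttpExceptions: true,
};

// In Apps Script this would be sent with:
// UrlFetchApp.fetch('https://example.zendesk.com/api/v2/organizations.json', options);
```

Keeping the content type out of the headers object avoids the double-encoding the question describes, where the whole payload arrived wrapped in escaped quotes.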

SoapUI with Groovy Script calling multiple APIs

I am using SoapUI with a Groovy script and running into an issue when calling multiple APIs. In the system I am testing, one WSDL/API handles the account registration and returns an authenticator. I then use that returned authenticator to call a different WSDL/API and verify some information. I am able to call each of these WSDLs/APIs separately, but when I put them together in a Groovy script it doesn't work.
testRunner.runTestStepByName("RegisterUser")
testRunner.runTestStepByName("Property Transfer")
// "props" is assumed to reference the properties step holding the transferred values
if (props.getPropertyValue("userCreated") == "success") {
    testRunner.runTestStepByName("AuthenticateStoreUser")
}
To explain: the first line runs the test step "RegisterUser". I then run a "Property Transfer" step, which takes a few response values from "RegisterUser": the first is "Status", to see whether it succeeded or failed; the second is the "Authenticator". I then use an if statement to check that "RegisterUser" succeeded before calling "AuthenticateStoreUser". Up to this point everything looks fine. However, when it calls "AuthenticateStoreUser", it shows the progress spinner, then fails as if it timed out, and if I check the "Raw" tab for the request it says
<missing xml data>.
Note that if I run "AuthenticateStoreUser" by itself, the call works fine. It is only after calling "RegisterUser" in the Groovy script that it behaves strangely. I have tried this with a few different calls and believe it is an issue with calling two different APIs.
Has anyone dealt with this scenario, or can provide further direction to what may be happening?
(I would have preferred to simply comment on the question, but I don't have enough rep yet)
Have you checked the Error log tab at the bottom when this occurs? If so, what does it say, and is there a stack trace you could share?

Use AWS S3 success_action_redirect policy with XHR

I'm using a signed POST to upload files directly to Amazon S3. I had some trouble with the signature of the policy in PHP but finally fixed it, and here is the sample of code.
The XHR request is sent from JavaScript, and I wait for a response from Amazon. At first I was using success_action_status, setting it to 201, to get the XML response.
What I'd like to do is use success_action_redirect to call a script on my server that creates a record in the database.
That way I could create the database record and, if anything goes wrong at that stage, return an error message directly. It also saves me another AJAX request to my server.
So I've tried to set this up, specifying success_action_redirect as http://localhost/callback.php, where I have a script waiting for some parameters.
But it looks like this script is never called, and the response of xhr.send() is empty.
I think it's a cross-origin issue, and I'm wondering if it would be possible to use JSONP somehow to get around this?
Any ideas?
UPDATE
Apparently XHR follows redirects natively, so it should work, but when I specify success_action_redirect the request fails with the error "Server responded with 0 code".
At first I thought it was because the redirect URL pointed to my local server, so I changed it to a publicly accessible server, but no luck.
Does anyone know why it's returning this error?
I also ran into this problem, and it seems like nobody has a solution.
The only workaround I have found involves a second XHR request to execute the callback manually; therefore success_action_status should be used. With this you get a 201 response if the upload was successful, and you can then start a second request for the actual callback. To me this looks like the only possible solution at the moment.
Any other solutions?
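The two-step workaround described above might be sketched like this; the function names and the /callback.php endpoint are illustrative, and fetchImpl is injected so the flow can be tested without a network:

```javascript
// Extract the stored object key from S3's 201 POST response XML.
function parseS3Key(xml) {
  const match = /<Key>([^<]*)<\/Key>/.exec(xml);
  return match ? match[1] : null;
}

// Step 1: upload with success_action_status=201.
// Step 2: on success, call your own server so it can create the DB record.
async function uploadThenCallback(fetchImpl, s3Url, formData, callbackUrl) {
  const res = await fetchImpl(s3Url, { method: 'POST', body: formData });
  if (res.status !== 201) {
    // surface the failure immediately instead of creating a stale record
    throw new Error('upload failed with status ' + res.status);
  }
  const key = parseS3Key(await res.text());
  return fetchImpl(callbackUrl + '?key=' + encodeURIComponent(key), { method: 'POST' });
}
```

Because both requests originate from your own page, this avoids the cross-origin redirect that made success_action_redirect return status 0.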