I need to get data from an API that requires first getting a sessionId using a username and password. The sessionId provided by the getSession endpoint needs to be passed in the header with all requests.
I'm able to use HttpClient to do all this for making requests, but how can this be automated so that requests can be made on a schedule, for example every 5 minutes? Would I need to cache the provided session so that I can reuse it as long as it is alive? I'm guessing you wouldn't want to request a new sessionId with every request. How would I know that it is alive without specifically making some kind of request to the API?
Any pointers to examples or resources on this type of automated API request process would be welcome.
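As one possible shape for this, here is a minimal sketch, not a definitive implementation: it caches the sessionId, refreshes it only when the API rejects it with a 401, and runs the call every 5 minutes with a PeriodicTimer. The getSession path, the X-Session-Id header, the data endpoint, and the JSON shape are all placeholders for whatever your API actually uses.

```csharp
// Minimal sketch, assuming a hypothetical getSession endpoint that returns a
// session id, and a data endpoint that expects it in an "X-Session-Id" header.
// Endpoint paths, header name, and JSON shape are placeholders for your API.
using System.Net.Http.Json;

public class ApiClient
{
    private readonly HttpClient _http = new() { BaseAddress = new Uri("https://api.example.com/") };
    private string? _sessionId;

    private async Task<string> GetSessionIdAsync()
    {
        // Reuse the cached session id until the API rejects it.
        if (_sessionId is not null) return _sessionId;

        var response = await _http.PostAsJsonAsync("getSession", new { username = "user", password = "pass" });
        response.EnsureSuccessStatusCode();
        var session = await response.Content.ReadFromJsonAsync<SessionResponse>();
        _sessionId = session!.SessionId;
        return _sessionId;
    }

    public async Task<string> FetchDataAsync()
    {
        var request = new HttpRequestMessage(HttpMethod.Get, "data");
        request.Headers.Add("X-Session-Id", await GetSessionIdAsync());

        var response = await _http.SendAsync(request);
        if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
        {
            // Session expired: drop the cached id, get a fresh one, and retry once.
            _sessionId = null;
            request = new HttpRequestMessage(HttpMethod.Get, "data");
            request.Headers.Add("X-Session-Id", await GetSessionIdAsync());
            response = await _http.SendAsync(request);
        }
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }

    private record SessionResponse(string SessionId);
}

// Run on a 5-minute schedule, e.g. from a console app or a BackgroundService.
public static class Scheduler
{
    public static async Task RunAsync(ApiClient client, CancellationToken ct)
    {
        using var timer = new PeriodicTimer(TimeSpan.FromMinutes(5));
        do
        {
            var data = await client.FetchDataAsync();
            Console.WriteLine(data);
        }
        while (await timer.WaitForNextTickAsync(ct));
    }
}
```

The key idea is simply to treat the sessionId as a cached value that is invalidated lazily: you don't ping the API to ask whether it is still alive, you just retry with a fresh session when a request comes back unauthorized.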
I'm using the MS-provided bot sample with the Teams messaging extensions feature. I only put in my Azure AD creds, no other changes. Running it locally...
When a user clicks the messaging extension button in Teams, the request arrives at the Microsoft.BotBuilderSamples.Controllers.BotController.PostAsync() method. If this method takes longer than 25 seconds, Teams shows the user an error message. The docs say it should be only 15 seconds, but it seems Teams has become more tolerant these days, okay.
But in this case a second request arrives at this method after the first one (it happens even if the method takes 16 seconds, not 26)! It has the same body and headers except for the Authorization header (it contains a new token).
So... what does this mean? What is this behavior for? How can I prevent it?
And who sends this second request, after all? I looked in Fiddler and saw only one request to the MS server from my desktop Teams client. When I make a similar request from Postman, it arrives only once.
Copying answer from comments for better understanding:
From a scenario perspective it is best if the bot responds within 5 seconds. We do wait longer, but that is not something you should rely on. Also, as #subba reddi said, if there is no response from the bot controller within 15 seconds, the Teams service retries once. That is why you see double calls in your controller. So, make sure your bot responds within 15 seconds.
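To illustrate the "respond fast, finish the work afterwards" shape, here is a minimal sketch of a plain ASP.NET Core controller, not the actual Bot Framework adapter pipeline; ProcessActivityAsync and the payload type are placeholders. In a real bot you would deliver the eventual result as a proactive message, and a durable background queue would be safer than Task.Run, but the point is only that the HTTP response goes back within the 15-second window so no retry fires.

```csharp
// Minimal sketch of "acknowledge fast, work later", assuming a plain
// ASP.NET Core controller; ProcessActivityAsync and its payload type are
// placeholders, not the real Bot Framework pipeline.
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/messages")]
public class BotController : ControllerBase
{
    [HttpPost]
    public IActionResult PostAsync([FromBody] object activity)
    {
        // Kick off the slow work without awaiting it, so the HTTP response
        // goes back to Teams well inside the 15-second window and no retry fires.
        _ = Task.Run(() => ProcessActivityAsync(activity));
        return Ok();
    }

    private static async Task ProcessActivityAsync(object activity)
    {
        // Long-running processing goes here; deliver the result to the
        // conversation later (e.g. via a proactive message) instead of
        // holding the original request open.
        await Task.Delay(TimeSpan.FromSeconds(20));
    }
}
```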
I made a web API with ASP.NET Core.
When a client sends two requests in a row, A and B, is the order of the requests guaranteed, as with the TCP protocol?
Can I be sure that request A is always processed before request B in my web API?
No, you can't be sure that the first request is processed before the second one, as the requests could be handled by different threads, so there's no guarantee about the order in which you'll receive your responses.
If you want to be sure to display data related only to your last request, you could use a counter on the client side, increment it with every request, and send it to your API. On the server side, the response will then contain your counter, and your client will only show the response that has the matching counter in its content.
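A minimal sketch of that counter idea on the client side; the X-Request-Counter header name is made up, and the server is assumed to echo the counter back so the client can discard stale responses.

```csharp
// Minimal sketch of the client-side counter; the "X-Request-Counter" header
// name and the endpoint are placeholders. The server is assumed to echo the
// counter back in its response headers.
public class OrderedClient
{
    private readonly HttpClient _http = new();
    private int _counter;

    public async Task<string?> GetLatestAsync(string url)
    {
        int sent = Interlocked.Increment(ref _counter);

        var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Add("X-Request-Counter", sent.ToString());

        var response = await _http.SendAsync(request);
        var echoed = int.Parse(response.Headers.GetValues("X-Request-Counter").First());

        // Only surface the response if it belongs to the most recent request;
        // older responses arriving late are simply dropped.
        return echoed == Volatile.Read(ref _counter)
            ? await response.Content.ReadAsStringAsync()
            : null;
    }
}
```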
I am building an application which uses the Amazon MWS API.
The API has limits for how frequently you can hit it.
I am looking for a tool that can act as a reverse-proxy, save the MWS API responses, and eventually masquerade as the MWS API without ever hitting it, returning only responses from the cache.
Some tools do this, but what I need is a bit more complicated.
Say I request a report from Amazon MWS:
1. I'll call RequestReport.
2. I'll get a ReportRequestId back.
3. I'll start polling GetReportRequestList to find out the current status of the report request. The report request will likely go through the statuses SUBMITTED then DONE, but it could also be set to ERROR or CANCELLED.
4. When the report request status returned by GetReportRequestList is DONE, I can finally call GetReport and get the data.
The behavior from step 3 is what I'm trying to replicate.
This external API cache should be able to produce different results for the same request: the first response should yield SUBMITTED and then the second response should yield DONE.
I should be able to easily configure these flows as I wish, setting the responses I want for the 1st, 2nd, nth request.
I would like this tool to require minimal configuration; I do not want to configure routes or anything. I want it to automatically cache everything and then return everything from the cache, never flushing it.
Also, I need this level of control over what's returned in a response, depending on the count of requests done up to that point.
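Just to make the desired behavior concrete (not a recommendation of any particular tool), here is a minimal sketch of such a sequenced stub using an ASP.NET Core minimal API: the same path returns a different canned body depending on how many times it has been hit. The paths and statuses mirror the MWS flow above; the response shapes are invented.

```csharp
// Minimal sketch of the sequenced-response idea: the same request path returns
// different canned bodies depending on how many times it has been called.
// Paths and statuses mirror the MWS flow above; response shapes are made up.
using System.Collections.Concurrent;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

var hits = new ConcurrentDictionary<string, int>();

app.MapGet("/GetReportRequestList", () =>
{
    int count = hits.AddOrUpdate("GetReportRequestList", 1, (_, n) => n + 1);

    // 1st call -> SUBMITTED, 2nd and later -> DONE; extend this to add
    // ERROR or CANCELLED flows for other scenarios.
    string status = count == 1 ? "SUBMITTED" : "DONE";
    return Results.Json(new { ReportProcessingStatus = status });
});

app.MapGet("/GetReport", () => Results.Text("col1\tcol2\nvalue1\tvalue2"));

app.Run();
```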
I'm building an idempotent REST-based POST API call. I want to implement idempotency behavior to avoid clients creating duplicate resources during network failures and timeouts. The client passes a ClientToken in the request header of every API call. My POST request has a standard payload and I have validation logic around it. What is the ideal idempotency behavior expected from an API during a retry? Should it depend just on the ClientToken and ignore the request payload, or should I run the validation logic on the request payload before invoking the idempotency checks using the ClientToken?
It depends, but for the idempotent APIs that I've implemented, I always check the token first.
Because I only store the idempotency token in the same transaction as the changes made to the database (inserting the new resource, for example), I know that if it is there, whatever is being requested has already been committed and worked previously.
If the token exists, I'll return a 201 Created to the client with the link (for a POST) immediately, before validating the payload.
The reason for this is that the rule of the game for clients is that the idempotency token allows you to retry EXACTLY the same request. If someone writes a client that is silly enough to change the payload and use the same idempotency token, that's on their head.
The bonus of checking the idempotency token first is that it can potentially save a bit of validation work, if the validation of a payload is heavy going.
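A minimal sketch of that token-first ordering, assuming a made-up IIdempotencyStore that is written in the same transaction as the insert; the route, payload type, and Validate helper are placeholders. The important part is only the order: the store lookup happens before any payload validation, so a committed retry short-circuits straight to the 201.

```csharp
// Minimal sketch of the token-first flow described above. IIdempotencyStore and
// Validate() are made-up stand-ins; the real store would read/write the token in
// the same database transaction as the resource insert.
using Microsoft.AspNetCore.Mvc;

[Route("orders")]
public class OrdersController : ControllerBase
{
    private readonly IIdempotencyStore _store;
    public OrdersController(IIdempotencyStore store) => _store = store;

    [HttpPost]
    public async Task<IActionResult> Create([FromHeader(Name = "ClientToken")] string clientToken,
                                            [FromBody] OrderRequest payload)
    {
        // 1. Token check comes first: a known token means the earlier request
        //    already committed, so skip validation and return the same 201.
        var existingId = await _store.FindResourceIdAsync(clientToken);
        if (existingId is not null)
            return Created($"/orders/{existingId}", null);

        // 2. Only requests with a new token pay the (possibly heavy) validation cost.
        if (!Validate(payload, out var error))
            return UnprocessableEntity(error);

        var id = Guid.NewGuid().ToString();
        await _store.SaveAsync(clientToken, id); // persisted together with the new order
        return Created($"/orders/{id}", new { id });
    }

    private static bool Validate(OrderRequest payload, out string error)
    {
        error = payload.Quantity > 0 ? "" : "Quantity must be positive";
        return error.Length == 0;
    }
}

public interface IIdempotencyStore
{
    Task<string?> FindResourceIdAsync(string clientToken);
    Task SaveAsync(string clientToken, string resourceId); // same transaction as the insert
}

public record OrderRequest(string Sku, int Quantity);
```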
First of all, POST, as an HTTP method, is not idempotent. Idempotent means that multiple identical requests should have the same effect as a single request.
If you change the payload, you no longer have identical requests. If you use them with the same token, then what happens on the server? Does a second, different request cause side effects? If it does, then it is no longer idempotent. If, on the other hand, the result is the same, then the method is idempotent, so the payload does not matter, but you no longer have the identical requests needed to respect the idempotency of HTTP.
I would personally check the request to have the same payload and reject subsequent requests that have a different payload.
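A minimal sketch of that stricter variant: record a hash of the payload alongside the token, and treat a same-token request with a different body as an error (409 or 422, say). The in-memory dictionary is just a stand-in for a real persistent table.

```csharp
// Minimal sketch of the stricter check: store a hash of the payload with the
// token, and reject a "retry" whose body differs. The dictionary below is a
// made-up in-memory stand-in for a real persistent table.
using System.Collections.Concurrent;
using System.Security.Cryptography;
using System.Text;

public static class IdempotencyGuard
{
    private static readonly ConcurrentDictionary<string, string> Seen = new();

    public static string Hash(string payload) =>
        Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(payload)));

    // Returns true if the request may proceed (first time, or identical retry);
    // false if the same token arrives with a different payload and should be
    // rejected, e.g. with 409 Conflict or 422.
    public static bool CheckOrRecord(string clientToken, string payload)
    {
        var hash = Hash(payload);
        var stored = Seen.GetOrAdd(clientToken, hash);
        return stored == hash;
    }
}
```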
My app has an API that users can request data from. Sometimes that data takes time to process, and that is breaking my code.
I need a solution for this, and I was thinking of using delayed_job, but I'm not sure how this works. If the user makes a request, I need to give them an answer. Even if I process the data in the background, the call still needs to wait until the job returns.
What is the solution for this? I am not sure how to do it.
Thanks
Heroku has a 30-second timeout, which is why your requests are failing (probably H12 or H13 in your Heroku logs).
There are three methods to work around this.
Keep the connection open by sending blank data.
You'll need to respond within the first 30 seconds and every 55 seconds after that. Use the time in between to process the data. Sending spaces should not affect the ability of the browser to read the response.
Callback
Have the user provide a callback URL in the initial request. When you finish processing the data, hit the callback url with your response.
Polling
As suggested by Codeglot, you can provide the user with a key. To check on their request, they can ping your server with that key.
Tell the user that their data is being processed and will be available shortly. Youtube, Vimeo, Facebook, Twitter, they all do this.
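For the polling option, a minimal sketch of the shape; the original question is Rails/delayed_job, so this only illustrates the pattern, here with an ASP.NET Core minimal API and an in-memory dictionary standing in for a real job store and background worker: hand back a key immediately, do the slow work off the request thread, and let the client poll a status endpoint with that key.

```csharp
// Minimal sketch of the polling option: accept the request, hand back a key
// right away, do the work in the background, and let the client poll with the
// key. An in-memory dictionary stands in for a real job store; delayed_job,
// Sidekiq, or a hosted service would play the background-worker role.
using System.Collections.Concurrent;

var app = WebApplication.CreateBuilder(args).Build();
var jobs = new ConcurrentDictionary<string, string>(); // key -> "processing" | result

app.MapPost("/reports", () =>
{
    var key = Guid.NewGuid().ToString("N");
    jobs[key] = "processing";

    // Background worker stand-in for the slow part.
    _ = Task.Run(async () =>
    {
        await Task.Delay(TimeSpan.FromSeconds(45));
        jobs[key] = "your finished report data";
    });

    // Respond immediately, well under the 30-second limit.
    return Results.Accepted($"/reports/{key}", new { key, status = "processing" });
});

app.MapGet("/reports/{key}", (string key) =>
    jobs.TryGetValue(key, out var value)
        ? value == "processing"
            ? Results.Json(new { status = "processing" })
            : Results.Json(new { status = "done", data = value })
        : Results.NotFound());

app.Run();
```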