Logic Apps - HTTP connector POST call to API returns 202 and Location header but the polling returns 404

We have implemented a Logic App that makes a POST call to a third-party API, which returns a 202 with a Location header. The Logic App in the backend automatically polls using the Location header, resulting in GET requests to the third-party provider, hoping to receive a 200 response once the processing is complete. However, the GET requests are resulting in 404 errors.
We have tried disabling the check location header setting, but for some reason Logic Apps still continues to send the GET requests, and at a faster rate.
Is there any way to stop the GET requests from Logic Apps, or should this be the third-party provider's responsibility to handle the polling and not send 404s?

Yes, you can stop the GET requests from your Logic App. It really depends on your workflow: if you are designing a stateful workflow, I would suggest not stopping the GET requests.
In a stateful workflow, all HTTP-based actions follow the standard asynchronous operation pattern by default: after an HTTP action sends a request to an endpoint or API, the receiver immediately returns a "202 ACCEPTED" response. That response can include a Location header, which the caller uses to poll the status of the asynchronous request until the receiver finishes processing and returns a "200 OK" success response or some other non-202 response.
If you are designing a stateless workflow, the caller doesn't have to wait for the request to finish processing and can continue with the next action. In that case the workflow accepts the "202 ACCEPTED" response as-is and proceeds to the next step in the execution; a stateless workflow won't poll the returned URI to check the status.
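To make the polling behaviour concrete, here is a minimal sketch of what the Logic Apps runtime effectively does for you in a stateful workflow, written in Java with java.net.http purely for illustration (the endpoint URL and payload are made up):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AsyncPatternDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Initial POST; the hypothetical endpoint replies 202 with a Location header.
        HttpRequest post = HttpRequest.newBuilder(URI.create("https://example.com/api/jobs"))
                .POST(HttpRequest.BodyPublishers.ofString("{\"payload\":\"...\"}"))
                .header("Content-Type", "application/json")
                .build();
        HttpResponse<String> response = client.send(post, HttpResponse.BodyHandlers.ofString());

        // As long as the receiver answers 202, keep polling the Location URI.
        while (response.statusCode() == 202) {
            String location = response.headers()
                    .firstValue("Location")
                    .orElseThrow(() -> new IllegalStateException("202 without Location header"));
            Thread.sleep(10_000); // fixed delay for the sketch only
            HttpRequest poll = HttpRequest.newBuilder(URI.create(location)).GET().build();
            response = client.send(poll, HttpResponse.BodyHandlers.ofString());
        }

        // Any non-202 response (200 OK, or an error such as 404) ends the polling loop.
        System.out.println("Final status: " + response.statusCode());
    }
}
```

The loop here is simplified: it treats any non-202 answer as final and uses a fixed delay rather than a delay derived from the response.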
You can stop the GET requests from your Logic App by following either of the two approaches below.
Turn off the Asynchronous Pattern setting.
You can do this in the Logic App Designer: on the HTTP action's title bar, select the ellipses (...) button and set Asynchronous Pattern to Off if it is enabled.
Disable asynchronous pattern in HTTP action's JSON definition.
In the HTTP action's underlying JSON definition, add the "DisableAsyncPattern" operation option to the action's definition so that the action follows the synchronous operation pattern. Check this document for more information.
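For example, the action's definition could look roughly like this; the action name, URI and body are placeholders, while "operationOptions": "DisableAsyncPattern" is the documented setting:

```json
"HTTP_Call_Third_Party_API": {
    "type": "Http",
    "inputs": {
        "method": "POST",
        "uri": "https://example.com/api/export",
        "body": { "payload": "..." }
    },
    "runAfter": {},
    "operationOptions": "DisableAsyncPattern"
}
```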
Also check the Asynchronous request-response behavior documentation from Microsoft for more background.

Related

Retry client request after external condition is met

I want to retry a Ktor client request after an external condition is met (such as showing a UI to re-request authorization from the user) when it receives a certain HTTP status code.
HttpRequestRetry is time-based and not a great fit for suspending a request while the user carries out the external action.
The Auth plugin does not have a concept of needing to redo an authorization because the expiration time has been reached.
Looking into how Auth and HttpRequestRetry work, they both use an internal function, takeFromWithExecutionContext, to make the sub-request and tie it back to the original coroutine scope.
Is there another way to make a custom retry system that does not rely on internal methods?

Test async data processing flows with Karate Labs

I'm looking for best practices or the recommended approach to test async code execution with Karate.
Our use cases are all pretty similar but a basic one is:
Client makes HTTP request to API
API accepts the request and creates a message which is added to a queue
API replies with ACCEPTED / 202
Worker picks up the message from the queue, processes it and updates the database
Eventually, after the work is finished, another endpoint delivers the updated data
How can I check with Karate that after processing has finished other endpoints return the correct result?
Concrete real life example:
Client requests a processing intensive data export to API e.g. via HTTP POST /api/export
API creates a message with the information for creating the export and puts it on an AWS SQS queue
API replies with 202
Worker receives the message and creates the export, uploads the result as a ZIP to S3 and finally creates an entry in the database representing this export
Client can now query the list exports endpoint, e.g. via HTTP GET /api/exports
API returns 200 with the list of exports incl. the newly created entry
Generally I have two ideas on how to approach this:
Use Karate's retry until on the endpoint that returns the list of exports
In the API response (step #3), return the message ID and use the SQS HTTP API to poll until the message has been processed, and then query the list endpoint to check the result
Is either of those approaches recommended, or should I choose an entirely different solution?
The moment queuing comes into the picture, I would not recommend retry until. It would work if you are in a hurry, but if you are OK with writing a little bit of Java code, please read on. Note that this Java "glue code" needs to be written only once, and then the team responsible for writing the functional flows will be up and running.
I personally would prefer option (2) just because when a test fails, you will have a lot more diagnostic information and traces to look at.
Pretty sure you won't have a problem using AWS Java libs to do things such as polling SQS.
I think this example will answer all your questions: https://twitter.com/getkarate/status/1417023536082812935
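To give a rough idea of what that glue code can look like, here is a sketch using the AWS SDK for Java v2 that long-polls a queue until a message containing a given export ID shows up. The class name, queue URL handling and "body contains the ID" matching are assumptions for illustration; in practice you may want to watch a separate notification queue rather than the work queue the worker consumes from.

```java
package helpers;

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

// Minimal glue code a Karate feature can call via Java.type('helpers.SqsHelper').
public class SqsHelper {

    // Long-polls the queue until a message containing expectedId arrives, or the timeout expires.
    public static String waitForMessage(String queueUrl, String expectedId, int timeoutSeconds) {
        long deadline = System.currentTimeMillis() + timeoutSeconds * 1000L;
        try (SqsClient sqs = SqsClient.create()) {
            while (System.currentTimeMillis() < deadline) {
                ReceiveMessageRequest request = ReceiveMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .waitTimeSeconds(10)      // SQS long polling
                        .maxNumberOfMessages(10)
                        .build();
                for (Message m : sqs.receiveMessage(request).messages()) {
                    if (m.body().contains(expectedId)) {
                        return m.body();
                    }
                }
            }
        }
        return null; // not seen within the timeout
    }
}
```

From a Karate feature you could then obtain the helper with Java.type('helpers.SqsHelper'), call waitForMessage, and assert on the returned body before hitting GET /api/exports.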

JSON:API HTTP status code for duplicate content creation avoidance

Suppose I have an endpoint that supports creating new messages. I am avoiding creating the same message twice in the backend, in case the user pushes the button twice (or in case the frontend app behaves strangely).
Currently, for the duplicate action, my server responds with a 303 See Other pointing to the previously created resource URL. But I see I could also use a 302 Found. Which one seems more appropriate?
Note that the duplicate avoidance strategy can be more complex (e.g. for an appointment we would check whether the POSTed appointment is within one hour of an existing one).
I recommend using HTTP Status Code 409: Conflict.
The 3XX family of status codes are generally used when the client needs to take additional action, such as redirection, to complete the request. More generally, status codes communicate back to the client what actions they need to take or provide them with necessary information about the request.
Generally, for these kinds of "bad" requests (such as repeated requests failing due to duplication) you would respond with a 400-range status code to indicate to the client that there was an issue with their request and it was not processed. You can use the response body to communicate the issue more precisely.
Also worth considering: if the request is just "fire and forget" from the client, then as long as you've handled the duplication case and no further behavior is needed from the client, it might be acceptable to send a 200 response. This tells the client "the request was received and handled appropriately, nothing more you need to do." However, this is a bit deceptive, as it does not surface the error to the client or allow for any modified behavior.
The JSON:API specification defines:
A server MUST return 409 Conflict when processing a POST request to create a resource with a client-generated ID that already exists.
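As an illustration (the resource ID and URL are made up), a 409 response in JSON:API style could use the spec's error object and point at the already-existing resource:

```
HTTP/1.1 409 Conflict
Content-Type: application/vnd.api+json

{
  "errors": [
    {
      "status": "409",
      "title": "Duplicate message",
      "detail": "A message with this content already exists.",
      "links": { "about": "https://api.example.com/messages/42" }
    }
  ]
}
```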

Should idempotent POST API call check API request payload before using client token to check idempotency?

I'm building an idempotent REST-based POST API call. I want to implement idempotency behavior to avoid clients creating duplicate resources during network failures & timeouts. The client passes a ClientToken in the request header of every API call. My POST request has a standard payload and I have validation logic around it. What is the ideal idempotency behavior expected from an API during a retry? Should it depend just on the ClientToken and ignore the request payload, or should I run the validation logic on the request payload before invoking the idempotency checks using the ClientToken?
It depends, but for the idempotent APIs that I've implemented, I always check the token first.
Because I only store the idempotency token in the same transaction as the changes made to the database (inserting the new resource, for example), I know that if it is there, whatever is being requested has already been committed and processed previously.
If the token exists, I'll return a 201 Created to the client with the link (for a POST) immediately, before validating the payload.
The reason for this is that the rule of the game for clients is that the idempotency token allows you to retry EXACTLY the same request. If someone writes a client that is silly enough to change the payload and reuse the same idempotency token, that's on their head.
The bonus of checking the idempotency token first is that it can potentially save a bit of validation work, if the validation of a payload is heavy going.
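A minimal sketch of that token-first flow, assuming a hypothetical IdempotencyStore and illustrative status strings (none of these names come from the question):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative in-memory store; in practice the token row is written in the
// same database transaction as the new resource.
class IdempotencyStore {
    private final Map<String, String> tokenToResourceUrl = new ConcurrentHashMap<>();

    Optional<String> findResourceUrl(String clientToken) {
        return Optional.ofNullable(tokenToResourceUrl.get(clientToken));
    }

    void remember(String clientToken, String resourceUrl) {
        tokenToResourceUrl.put(clientToken, resourceUrl);
    }
}

class MessageEndpoint {
    private final IdempotencyStore store = new IdempotencyStore();

    // Returns a simple "status + Location" string for illustration.
    String handlePost(String clientToken, String payload) {
        // 1. Check the token BEFORE validating the payload.
        Optional<String> existing = store.findResourceUrl(clientToken);
        if (existing.isPresent()) {
            return "201 Created, Location: " + existing.get();
        }
        // 2. Only now run the (possibly expensive) payload validation.
        if (payload == null || payload.isBlank()) {
            return "400 Bad Request";
        }
        // 3. Create the resource and store the token together with it.
        String url = "/messages/" + Integer.toHexString(payload.hashCode());
        store.remember(clientToken, url);
        return "201 Created, Location: " + url;
    }
}
```

In a real service the token-to-resource mapping would be persisted in the same transaction as the new resource, as described above.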
First of all, POST, as an HTTP method, is not idempotent. Idempotent means that multiple identical requests should have the same effect as a single request.
If you change the payload, you no longer have identical requests. If you use them with the same token, then what happens on the server? Does a second, different request cause side effects? If it does, then it is no longer idempotent. If, on the other hand, the result is the same, then the method is idempotent and the payload does not matter, but you no longer have the identical requests needed to respect HTTP idempotency.
I would personally check the request to have the same payload and reject subsequent requests that have a different payload.

WCF Service - Sending back object to calling app

My WCF service (hosted as a Windows Service) has some 'SendEmail' methods, which send out emails after doing some processing.
Now I have another requirement: the client wants to preview emails before they are sent out, so my WCF service needs to return the whole email object to the calling web app.
If the client is happy with the email object, they can simply click 'Send out', which will call the WCF service again to send the emails.
Because at times the email object processing can take a while, I do not want the calling application to wait until the email object is ready.
Can anyone please guide me on what changes I need to make to my WCF service (which currently has all one-way operations)?
Also, please guide me on whether I need to go for an async operation, message queuing, or maybe a duplex contract?
Thank you!
Based on your description I think you will have to:
Change the current operation from sending the email to storing it (probably in a database).
Add an additional operation for retrieving prepared emails for the current user.
Add an additional operation to confirm sending one or more emails and remove them from storage (a sketch of this contract follows the process below).
The process will be:
The user will trigger some HTTP request which will result in calling your WCF service for processing (first operation)
The WCF service will initiate some processing (asynchronously, or the first operation will be one-way so that the client doesn't have to wait).
The processing will save the email somehow
Depending on the duration of the processing, you can either use AJAX to poll the web app, which will in turn poll the WCF service for prepared emails, or you can create a separate page which the user has to access to see the prepared emails. Both methods use the second operation.
The user will check the prepared email(s) and trigger an HTTP request, which will result in calling the third operation to send those emails.
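Sketching that three-operation contract in code (shown as a Java-style interface purely for illustration; in WCF this would be a C# ServiceContract, and every name and type here is made up):

```java
import java.util.List;
import java.util.UUID;

// Illustrative contract only; a real WCF service would declare these as
// [OperationContract] methods in C#.
interface EmailPreparationService {

    // Operation 1: fire-and-forget request that starts building the email
    // and stores it (for example in a database) instead of sending it.
    UUID prepareEmail(String userId, EmailRequest request);

    // Operation 2: polled by the web app (e.g. from an AJAX call) until the
    // prepared email is available for preview.
    List<PreparedEmail> getPreparedEmails(String userId);

    // Operation 3: called when the user confirms the preview; sends the
    // emails and removes them from storage.
    void confirmAndSend(String userId, List<UUID> emailIds);
}

// Placeholder data holders for the sketch.
record EmailRequest(String to, String subject, String templateId) {}
record PreparedEmail(UUID id, String to, String subject, String body) {}
```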
You have multiple options:
Use Ladislav's approach. I would only add that the service should return a token, and the client then uses that token to poll until a timeout or a successful response. Also, the server keeps these temporary emails for a while and purges them after a timeout.
Use duplex communication so that the server also gets a way to call back the client, and does so when it has finished processing. But don't do this - here is my view on why not.
Use an asynchronous approach. You can find nice info here.