Concerning Podio webhooks:
I want to deactivate hooks that I identify as no longer relevant to my app's needs. My problem is that my app may no longer have authorization to call the delete hook operation. The API docs say:
The hook must respond with a 2xx status code. If the status code is different from 2xx more than 50 consecutive times the hook will return to being unverified and will have to be verified again to be active. Additionally, your hook may return to unverified if you do not send responses in a timely manner. You should handle any heavy processing asynchronously.
How long does the delay need to be to qualify as not "in a timely manner"? Or is there a better/faster way to deactivate the hooks without authorization? I don't always want to wait for 50 consecutive failures.
If the webhook is not needed, you should delete it via the Delete hook method.
If the app is no longer authorized to delete the webhook, then I'd respond with a 4xx error; something like 410 Gone seems to be the most appropriate answer in that case.
"In a timely manner" means 15 seconds. It's mentioned here: https://developers.podio.com/examples/webhooks
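The advice above can be sketched as a small decision function for the webhook endpoint. This is a hand-rolled illustration, not Podio's API: the `PodioEvent` shape, the `retiredHooks` set, and `handleHookEvent` are all assumptions for the sketch.

```typescript
// Sketch of a webhook endpoint's decision logic. The payload shape and the
// set of "retired" hook IDs are hypothetical, for illustration only.
type PodioEvent = { type: string; hook_id: number };

const retiredHooks = new Set<number>([123]); // hooks we no longer want

// Returns the HTTP status code the endpoint should send back.
function handleHookEvent(event: PodioEvent): number {
  if (retiredHooks.has(event.hook_id)) {
    // Answering 410 Gone signals the endpoint is permanently gone; after
    // enough consecutive non-2xx replies the hook drops back to
    // "unverified" and stops firing.
    return 410;
  }
  // Otherwise respond 2xx quickly (within ~15 seconds) and do any heavy
  // processing asynchronously.
  return 200;
}
```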
Related
I want to pause all dispatched thunks when I am offline and resume when I'm back online. Is createListenerMiddleware a good option for this?
I could create a separate redux slice to save the dispatched actions and just redispatch them later, but this will lead to existing thunks returning a rejected Promise and throwing errors at the call site. I want the consumers' promises simply not to resolve until the internet is back.
Would you use createListenerMiddleware for this, or how else can I pause redux thunk actions until the online event fires? I guess I cannot intercept thunks in normal middleware, right?
I also tried different libraries for this, but I could not get any of them working because they are unmaintained, have broken types, etc.
No, the listener middleware is intentionally for "listening". It doesn't have the ability to stop or pause actions, and it runs after an action has already reached the reducers.
You may want to look at https://redux-offline.github.io/redux-offline/ instead.
There are also a number of Redux middleware for various forms of pausing, throttling, and similar behavior listed at https://github.com/markerikson/redux-ecosystem-links/blob/master/middleware-async.md#timeouts-and-delays . However, most of those were collected prior to 2018, and most likely few of them are written in TS. Still, some of those may give you ideas or code you can paste into your project for writing your own custom middleware.
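As a starting point for such a custom middleware, here is a minimal hand-rolled sketch (no library imports) that queues matching actions while offline and replays them when an "online" action arrives. The action types (`connection/online`, `connection/offline`, the `api/` prefix) are assumptions for the sketch; a real app would wire them to the browser's online/offline events, and for thunks the middleware would need to sit before the thunk middleware so the function is queued rather than executed.

```typescript
// Redux-style middleware sketch: hold selected actions while offline,
// replay them in order when connectivity returns. All action type names
// here are invented for illustration.
type Action = { type: string; [key: string]: unknown };
type Dispatch = (action: Action) => unknown;
type MiddlewareAPI = { dispatch: Dispatch; getState: () => unknown };

function createOfflineQueueMiddleware() {
  let online = true;
  const queue: Action[] = [];
  return (api: MiddlewareAPI) => (next: Dispatch) => (action: Action) => {
    if (action.type === "connection/online") {
      online = true;
      const pending = queue.splice(0);
      pending.forEach((a) => api.dispatch(a)); // replay in original order
      return next(action);
    }
    if (action.type === "connection/offline") {
      online = false;
      return next(action);
    }
    if (!online && action.type.startsWith("api/")) {
      queue.push(action); // hold the action; nothing reaches the reducers
      return;
    }
    return next(action);
  };
}
```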
I often see the following pattern for polling:
Send a request and get back a unique ID.
Poll a "Status" endpoint, which tells the client when the request has been completed.
Send a request to fetch the response.
Why can't steps (2) and (3) be combined?
If the response isn't ready yet, the endpoint would return no body, along with some status indicating that.
If it is ready, it would return the response.
Why are (2) and (3) often separate steps?
"Is it ready" is a boolean true/false, while a response can be anything. In general, it's easier to call "is it ready" and write logic to handle true and false than to request the response and then have to determine whether what came back is a "not ready" indicator or the data type you need.
In this way, the logic is all client-side, but if you combined them you'd need logic on both client and server (both to say it's not ready and to handle the actual response). You could do it, but keeping them separate just keeps things neater.
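The client-side loop for the three-step pattern can be sketched like this. `fetchStatus` and `fetchResult` stand in for HTTP calls to the status and result endpoints; their names and the `JobStatus` values are assumptions for illustration.

```typescript
// Sketch of the polling client: repeatedly check the cheap status endpoint
// (step 2), and only once the job is done fetch the result (step 3).
type JobStatus = "pending" | "done" | "failed";

async function pollUntilDone(
  fetchStatus: (id: string) => Promise<JobStatus>,
  fetchResult: (id: string) => Promise<unknown>,
  id: string,
  intervalMs = 1000,
): Promise<unknown> {
  for (;;) {
    const status = await fetchStatus(id); // step 2: simple ready/not-ready
    if (status === "failed") throw new Error(`job ${id} failed`);
    if (status === "done") return fetchResult(id); // step 3: fetch once
    await new Promise<void>((r) => setTimeout(r, intervalMs));
  }
}
```

Note how all the "is it ready" branching lives in the client, exactly as the answer describes; the result endpoint only ever has to return the finished payload.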
This pattern is generally defined by the HTTP 202 status code, which is the HTTP protocol's mechanism of initiating asynchronous requests.
We can think of a 202 response as indicating that a job has been created. If and when that job executes, it may (or may not) produce some business entity. Presumably the client receiving a 202 is ultimately interested in that business entity, which may (or may not) exist in the future, but certainly does not exist now, hence the 202 response.
So one simple reason for returning a pointer to a job status is because the job status exists now and we prefer to identify things that exist now rather than things that may (or may not) exist in the future. The endpoint receiving the request may not even be capable of generating an ID for the (future) business entity.
Another reason is status codes. A status endpoint returns a custom job status capable of describing unlimited potential states in which a job can exist. These job states are beyond the scope of HTTP status codes. The standard codes already have precise definitions, and there simply is no standard HTTP status code that means "keep polling".
The reason is that they are different resources from a REST perspective.
Let's examine this a bit through an example:
If you want to place an order, then first you have to submit an order request
Then there is a lengthy, asynchronous process in the background which checks the payment validity, the items' availability in the inventories, etc.
If everything goes fine, then there will be an order aggregate with some sub-elements (like order items, shipping address, etc.)
From REST perspective:
There is a POST /orders endpoint to place an order
There is a GET /order_requests/{id} endpoint to retrieve order request
There is a GET /orders/{id} endpoint to retrieve order details
Whenever the order and all related sub-resources are created, the second endpoint (GET /order_requests/{id}) usually responds with a 303 See Other status code to ask the consumer to redirect to GET /orders/{id}.
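The status-endpoint behaviour described above can be sketched as follows, assuming an in-memory map of order requests. The names (`orderRequests`, `respondToStatusCheck`, `OrderRequest`) are invented for the sketch.

```typescript
// What GET /order_requests/{id} would answer at each stage of the flow.
type OrderRequest = { id: string; orderId?: string };
const orderRequests = new Map<string, OrderRequest>();

function respondToStatusCheck(id: string): { status: number; location?: string } {
  const req = orderRequests.get(id);
  if (!req) return { status: 404 }; // unknown order request
  if (req.orderId) {
    // The order aggregate now exists: redirect to the final resource.
    return { status: 303, location: `/orders/${req.orderId}` };
  }
  // Still processing; the body would carry the custom job state.
  return { status: 200 };
}
```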
I am currently developing a Microservice that is interacting with other microservices.
The problem is that those interactions are really time-consuming. I have already implemented concurrent calls via Uni and use caching where useful. Some calls still need a few seconds to respond, so I thought of another way to improve performance:
Is it possible to send a response before the data has been successfully persisted? I send requests to the other microservices, where they have to persist the results of my methods. Can I send the user the result in a first response and a second response once the persistence has succeeded?
With that, the front-end could already begin working even though my API is not 100% finished.
I saw that there is a status code 207, but it seems to be intended for other use cases. Is there another possibility? Thanks in advance.
"Is it possible to send a response before the successful persistence of data? Can I send the user the result in a first response and a second response once the persistence has succeeded? With that, the front-end could already begin working even though my API is not 100% finished."
You can and should, but it is a philosophy change in your API, and you may have to consider some edge cases and techniques to deal with them.
In the case of a long-running API call, you can issue an "ack" response, a traditional 200 one, whose body just means the operation is asynchronous and will complete in the future, something like { id:49584958, apicall:"create", status:"queued", result:true }
Then you can
poll your API with the returned ID to see whether the operation is still ongoing, has succeeded, or has failed.
have an SSE channel (real-time server-sent events) where your server can push status messages as pending operations finish.
maybe using persistent connections and keep-alives, or flushing the response mid-way, you can achieve what you describe, i.e. a segmented response. I am not familiar with that approach, as I normally go for the suggestions above.
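The "ack" response from the first option can be sketched like this: accept the work, reply immediately with a queued status, and let a background worker do the persistence. The names (`acceptCreate`, `pendingJobs`) and the response shape are assumptions mirroring the example above.

```typescript
// Sketch of the ack-style endpoint: reply before persistence completes.
type Ack = { id: number; apicall: string; status: "queued"; result: boolean };

let nextId = 49584958; // arbitrary seed matching the example payload above
const pendingJobs = new Map<number, string>();

function acceptCreate(payload: string): Ack {
  const id = nextId++;
  pendingJobs.set(id, payload); // a worker would persist this asynchronously
  // The caller gets an immediate answer it can poll on via the returned ID.
  return { id, apicall: "create", status: "queued", result: true };
}
```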
But in any case, the same edge cases apply: for example, what happens if a user then issues API calls that depend on the success of an ongoing (or not even started) previous command? For example, requesting information about something that is still being persisted?
You will have to deal with these situations with mechanisms like:
Reject related operations until the pending call is resolved server-side: the API could return e.g. a BUSY error informing the client that operations are still ongoing when it wants to, for example, delete something that is still being created.
Queue all operations so the server executes them sequentially.
Allow some simultaneous operations if you find they will not collide (e.g. creating two unrelated items).
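The first of those mechanisms, rejecting dependent operations while a prior one is pending, can be sketched as below. The `pendingCreates` set, the `tryDelete` name, and the 409-with-BUSY shape are assumptions for illustration.

```typescript
// Sketch of the BUSY rejection: refuse to delete an item whose creation
// has not finished persisting yet.
const pendingCreates = new Set<number>();

function tryDelete(id: number): { status: number; error?: string } {
  if (pendingCreates.has(id)) {
    // The item is still being created: tell the client to retry later.
    return { status: 409, error: "BUSY" };
  }
  // ...perform the actual delete here...
  return { status: 204 };
}
```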
I'm using the MS-provided bot sample with the Teams messaging extensions feature. I only put in my Azure AD creds, no other changes, and I'm running it locally...
When a user clicks the messaging extension button in Teams, the request arrives at the Microsoft.BotBuilderSamples.Controllers.BotController.PostAsync() method. If this method takes longer than 25 seconds, Teams shows the user an error message. The docs say it should be only 15 seconds, but it seems Teams has become more tolerant these days, okay.
But in this case a second request arrives at this method after the first one (this happens even if the method takes 16 seconds, not 26)! It has the same body and headers except the Authorization header (which contains a new token).
So... what does this mean? What is this behavior for? How do I prevent it?
And who makes this second request, after all? I looked in Fiddler and see only one request to the MS server from my desktop Teams client. When I make a similar request from Postman, it arrives only once.
Copying answer from comments for better understanding:
Ideally, the bot should respond within 5 seconds. Teams waits longer than that, but it is not something you should rely on. Also, as #subba reddi said, if there is no response from the bot controller within 15 seconds, the Teams service retries once. That is why you see double calls in your controller. So make sure your bot responds within 15 seconds.
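Since the retried request has the same body, one common way to tolerate the retry is to de-duplicate on a stable identifier from the incoming activity. This is a generic sketch, not Bot Framework API: the assumption is that the retried delivery carries the same activity `id`, and `seen` would need an eviction policy in production.

```typescript
// Sketch of de-duplicating the Teams retry by activity ID.
const seen = new Set<string>();

// Returns true if this delivery should be processed, false if it is a
// retry of an activity we have already started handling.
function shouldProcess(activityId: string): boolean {
  if (seen.has(activityId)) return false;
  seen.add(activityId);
  return true;
}
```

Even with de-duplication in place, the real fix is the one in the answer: acknowledge within 15 seconds and push heavy work to the background.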
Suppose I have an endpoint that supports creating new messages. I am avoiding creating the same message twice on the backend, in case the user presses the button twice (or the frontend app misbehaves).
Currently, for the duplicate action, my server responds with a 303 See Other pointing to the previously created resource URL. But I see I could also use a 302 Found. Which one seems more appropriate?
Note that the duplicate-avoidance strategy can be more complex (e.g. for an appointment we would check whether the POSTed appointment is within one hour of an existing one).
I recommend using HTTP Status Code 409: Conflict.
The 3XX family of status codes are generally used when the client needs to take additional action, such as redirection, to complete the request. More generally, status codes communicate back to the client what actions they need to take or provide them with necessary information about the request.
Generally, for these kinds of "bad" requests (such as repeated requests failing due to duplication), you would respond with a 4xx status code to indicate to the client that there was an issue with their request and it was not processed. You can use the response body to communicate the issue more precisely.
Also consider: if the request is just "fire and forget" from the client, then as long as you've handled the duplication case and no further behavior is needed from the client, it might be acceptable to send a 200 response. This tells the client "the request was received and handled appropriately, nothing more you need to do." However, this is a bit deceptive, as it does not indicate the error to the client or allow for any modified behavior.
The JSON:API specification defines:
A server MUST return 409 Conflict when processing a POST request to create a resource with a client-generated ID that already exists.
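The recommended 409 approach can be sketched as below, assuming an in-memory store keyed by a client-generated message ID. The function and store names are invented for illustration; reporting the existing resource's URL alongside the 409 is a common convention, not a requirement.

```typescript
// Sketch of duplicate detection on create: 201 for a new message,
// 409 Conflict when the client-generated ID already exists.
const messages = new Map<string, string>();

function createMessage(
  clientId: string,
  text: string,
): { status: number; location: string } {
  const location = `/messages/${clientId}`;
  if (messages.has(clientId)) {
    // Duplicate: report the conflict instead of redirecting with a 3xx.
    return { status: 409, location };
  }
  messages.set(clientId, text);
  return { status: 201, location };
}
```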