Is request order guaranteed in Web API? - asp.net-core

I made a Web API with ASP.NET Core.
When a client sends two requests in a row, A and B, is the order of the requests guaranteed, as with the TCP protocol?
Can I be sure that request A is always processed before request B in my Web API?

No, you can't be sure that the first request is processed before the second one. The requests could be handled by different threads, so there is no guarantee about the order in which they are processed or the order in which you receive the responses.
If you only want to display data related to your most recent request, you could keep a counter on the client side, increment it on every request, and send it to your API. The server then includes the counter in its response, and the client only shows the response whose counter matches the latest one it sent.
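The counter idea above can be sketched like this (an illustrative, language-agnostic sketch in Python, not ASP.NET-specific code; class and method names are hypothetical):

```python
# The client tags each outgoing request with an incrementing counter and
# discards any response whose counter is stale (i.e. not the latest request).
class LatestResponseFilter:
    def __init__(self):
        self.counter = 0  # incremented for every outgoing request

    def next_request_id(self):
        self.counter += 1
        return self.counter

    def should_display(self, response_counter):
        # Only show a response if it belongs to the most recent request.
        return response_counter == self.counter

f = LatestResponseFilter()
a = f.next_request_id()     # request A gets id 1
b = f.next_request_id()     # request B gets id 2
print(f.should_display(a))  # False: A's response is stale
print(f.should_display(b))  # True: B is the latest request
```

The server simply echoes the counter back; all ordering logic stays on the client.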

Related

C# how to automate api requests that require a sessionId that expires

I need to get data from an API that requires first getting a sessionId using a username and password. The sessionId provided by the getSession endpoint needs to be passed in the header of all subsequent requests.
I'm able to do all of this with HttpClient, but how can it be automated so that requests are made on a schedule, for example every 5 minutes? Would I need to cache the session so that I can reuse it as long as it is alive? I'm guessing you wouldn't want to request a new sessionId with every request. And how would I know that it is still alive without making some kind of request to the API?
Any pointers to examples or resources on this type of automated api request process would be welcome.
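One common pattern is to cache the token and refresh it lazily when it expires. A minimal sketch, assuming the API grants a fixed session lifetime (the 30-minute TTL and the `fetch_token` callable standing in for the getSession call are both assumptions; many APIs instead return an expiry alongside the token, or signal expiry with a 401 that you handle by refreshing and retrying):

```python
import time

SESSION_TTL = 30 * 60  # assumed session lifetime in seconds

class SessionCache:
    def __init__(self, fetch_token, ttl=SESSION_TTL, clock=time.monotonic):
        self._fetch = fetch_token    # callable that hits the getSession endpoint
        self._ttl = ttl
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def token(self):
        # Re-fetch only when the cached token is missing or expired.
        now = self._clock()
        if self._token is None or now >= self._expires_at:
            self._token = self._fetch()
            self._expires_at = now + self._ttl
        return self._token

calls = []
cache = SessionCache(lambda: calls.append(1) or f"sid-{len(calls)}", ttl=60)
print(cache.token())  # first call fetches: sid-1
print(cache.token())  # second call is served from cache: sid-1
```

The scheduled job then just calls `cache.token()` before each request; the cache decides whether a fresh getSession call is needed.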

Using etag with pagination to serve data conditionally in chunks

I have several APIs that serve a large number of records to an application. The API responses are generally user-based (same API might serve different responses to different users).
To make it easier for the application side to get and load the data, the application receives the response in chunks. For example, it makes n consecutive requests, like this:
/api/myapi/1
/api/myapi/2
...
/api/myapi/n
The application caches the API responses in its local memory as objects. In each API response, an etag header is sent back to the application, containing a hash of the response. In each request made by the application, this etag value is sent along with the request parameters. Based on the etag value, the server determines whether the application has a stale response in its cache.
In the case of the APIs that serve data in chunks, the application still only keeps one etag per API. That makes it impossible for the server to check whether the application's cached data is fresh or not.
An illustration:
In the case of the API above, each time the request is made, the server calculates the etag and sends it along with the response. Each time the application receives a response, it updates the value of the etag. In the end (when the nth call is made), the application only has the nth etag stored.
When the application needs the data again, it first makes a request to the server (/api/myapi/1), sending the nth etag. Most probably, the response with the first set of data differs from the nth response, so the server tells the application to re-fetch the data. The application re-fetches the data and updates the etag. This repeats until the nth request.
As you can see, even if the total response (all of the sets of data) has not changed, the server always compares the etag from the previous set's response with the etag of the current data. This means the server's answer is always 're-fetch the data', which is wrong.
The alternatives I came up with are:
1. The application stores all etags and sends etag[i] with the request /api/myapi/i.
2. The server stores all etags for every user (which I don't find efficient). That could cause a problem when the server has successfully sent a response but the application was not able to update that set of data: the server would not know that the application is still using an old response.
3. The server calculates the etag of the whole response but sends the response in n sets, returning the same etag (that of the whole) every time. This means the server has to do the same job twice, which I still do not like as a solution.
PS: The front-end developer says it is not possible for him to store more than one etag per request. That makes alternative 1 somewhat complicated (although not impossible).
Is there any other way to treat this scenario? If not, what could be a more efficient solution?
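For reference, alternative 1 (one etag per chunk) can be sketched roughly like this; all names here are hypothetical and the dicts stand in for the real client cache and server data store:

```python
import hashlib

def compute_etag(payload: bytes) -> str:
    # A hash of the chunk's bytes serves as its etag.
    return hashlib.sha256(payload).hexdigest()

class ChunkedEtagClient:
    def __init__(self):
        self.etags = {}  # chunk index -> etag of the cached chunk
        self.cache = {}  # chunk index -> cached payload

    def request_headers(self, i):
        # Send the etag for *this* chunk, not the last one received.
        etag = self.etags.get(i)
        return {"If-None-Match": etag} if etag else {}

    def store(self, i, payload, etag):
        self.cache[i] = payload
        self.etags[i] = etag

def serve_chunk(payload: bytes, client_etag):
    # Server side: 304 Not Modified when the per-chunk etag matches.
    etag = compute_etag(payload)
    if client_etag == etag:
        return 304, None, etag
    return 200, payload, etag

client = ChunkedEtagClient()
status, body, etag = serve_chunk(b"chunk-1 data",
                                 client.request_headers(1).get("If-None-Match"))
client.store(1, body, etag)
status2, _, _ = serve_chunk(b"chunk-1 data",
                            client.request_headers(1).get("If-None-Match"))
print(status, status2)  # 200 on first fetch, 304 on the unchanged retry
```

Because each chunk is validated against its own etag, an unchanged chunk yields 304 even when other chunks differ, which is exactly what the single shared etag cannot express.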

How REST API handle continuous data update

I have a REST back-end API, and the front end calls the API to get data.
I was wondering how a REST API handles continuous data updates. For example,
in Jenkins, if we execute a build job we can see the continuous log output on the page until the job finishes. How does REST accomplish that?
Jenkins will just continue to send data. That's it. It simply carries on sending (at least that's what I'd presume it does). Normally the response contains a header field indicating how much data the response contains (Content-Length), but this field is not mandatory; the server can omit it. In that case the response body ends when the server closes the connection. See RFC 7230:
Otherwise, this is a response message without a declared message body length, so the message body length is determined by the number of octets received prior to the server closing the connection.
Another possibility would be to use the chunked transfer encoding. The server then sends the body as a series of chunks, each prefixed with its own size, and terminates the body by sending a zero-length last chunk.
WebSockets would be a third possibility.
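The chunked framing from RFC 7230 §4.1 can be sketched as follows (a minimal encoder for the wire format, not a full HTTP implementation):

```python
# Each chunk is its size in hex, CRLF, the data, CRLF;
# a zero-size chunk followed by an empty trailer ends the body.
def encode_chunked(chunks):
    out = b""
    for data in chunks:
        if not data:
            continue  # zero-length chunks are reserved for the terminator
        out += b"%x\r\n" % len(data) + data + b"\r\n"
    out += b"0\r\n\r\n"  # last chunk + empty trailer section
    return out

body = encode_chunked([b"build started\n", b"step 1 ok\n"])
print(body)
# b'e\r\nbuild started\n\r\na\r\nstep 1 ok\n\r\n0\r\n\r\n'
```

Because each chunk carries its own length, the server can keep appending log lines as they are produced and only send the terminating `0\r\n\r\n` when the build finishes.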
I was searching for an answer myself, and then the obvious solution struck me: to see what type of communication a service is using, you can simply inspect it from the browser side using the Developer Tools.
In Google Chrome it will be F12 -> Network.
In the case of Jenkins, the front end sends AJAX requests to the back end for data:
every 5 seconds on the Dashboard page
every second during a Pipeline run (the Console Output page you mentioned).
I have also checked the approach in AWS. When checking the status of instances (for example: Initializing..., Booting...), it queries the back end every second. That seems to be a standard interval for its services.
Additional note:
When running an AWS Remote Console, though, it first sends requests for the remote console instance status (the back end answers with { status: "BOOTING" }, etc.). After the back end returns the status "RUNNING", it starts a WebSocket session between your browser and the AWS back end (you can see it by applying the WS filter in the developer tools).
At that point it is no longer a REST API but WebSockets, which is a different, stateful protocol.
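The polling pattern described above boils down to a loop like this (an illustrative sketch; `fetch_status` stands in for the real AJAX call, and the interval mirrors the 1-second Jenkins console example):

```python
import time

def poll_until_done(fetch_status, interval=1.0, sleep=time.sleep):
    # Query the status endpoint on a fixed interval until the job reports
    # a terminal state, collecting every status seen along the way.
    history = []
    while True:
        status = fetch_status()
        history.append(status)
        if status == "FINISHED":
            return history
        sleep(interval)

responses = iter(["RUNNING", "RUNNING", "FINISHED"])
result = poll_until_done(lambda: next(responses), sleep=lambda _: None)
print(result)  # ['RUNNING', 'RUNNING', 'FINISHED']
```

Injecting `sleep` makes the loop testable; a real client would also want a timeout or maximum attempt count so a stuck job cannot poll forever.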

Should idempotent POST API call check API request payload before using client token to check idempotency?

I'm building an idempotent REST-based POST API call. I want to implement idempotency behavior to avoid clients creating duplicate resources during network failures and timeouts. The client passes a ClientToken in the request header of every API call. My POST request has a standard payload, and I have validation logic around it. What is the ideal idempotency behavior expected from an API during a retry? Should it depend only on the ClientToken and ignore the request payload, or should I run the validation logic on the request payload before invoking the idempotency checks using the ClientToken?
It depends, but for the idempotent APIs that I've implemented, I always check the token first.
Because I only store the idempotency token in the same transaction as the changes made to the database (inserting the new resource, for example), I know that if it is there, whatever is being requested has already been committed and processed previously.
If the token exists, I'll return a 201 Created to the client with the link (for a POST) immediately, before validating the payload.
The reason for this is that the rule of the game for clients is that the idempotency token allows you to retry EXACTLY the same request. If someone writes a client that is silly enough to change the payload and reuse the same idempotency token, that's on their head.
The bonus of checking the idempotency token first is that it can potentially save a bit of validation work, if validating a payload is heavy going.
First of all, POST, as an HTTP method, is not idempotent. Idempotent means that multiple identical requests should have the same effect as a single request.
If you change the payload, you no longer have identical requests. If you use them with the same token, then what happens on the server? Does a second, different request cause side effects? If it does, then it is no longer idempotent. If, on the other hand, the result is the same, then the method is idempotent and the payload does not matter, but you no longer have the identical requests needed to respect the idempotency of HTTP.
I would personally check the request to have the same payload and reject subsequent requests that have a different payload.
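Combining both answers, a server-side sketch might look like this (the in-memory dict stands in for idempotency records committed in the same transaction as the created resource; names and status codes for the mismatch case are assumptions, not a prescribed standard):

```python
import hashlib

class IdempotencyStore:
    def __init__(self):
        self._seen = {}  # ClientToken -> (payload hash, original result)

    def handle(self, token, payload, create):
        digest = hashlib.sha256(payload).hexdigest()
        if token in self._seen:
            stored_digest, result = self._seen[token]
            if stored_digest != digest:
                # Same token, different payload: reject rather than replay.
                return 422, "payload differs from original request"
            return 201, result  # genuine retry: return the original outcome
        result = create(payload)  # validate + create, then record atomically
        self._seen[token] = (digest, result)
        return 201, result

store = IdempotencyStore()
print(store.handle("tok-1", b'{"name":"a"}', lambda p: "/things/1"))  # creates
print(store.handle("tok-1", b'{"name":"a"}', lambda p: "/things/2"))  # replays /things/1
print(store.handle("tok-1", b'{"name":"b"}', lambda p: "/things/3"))  # rejected
```

Storing the payload hash alongside the token is what lets the server tell a legitimate retry apart from a client misusing the same token with a changed payload.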

Multiple calls to service not handling all requests

I have a Silverlight app that uses WCF to communicate with my SQL Server via Entity Framework. When I send multiple requests to the service, it fails to handle them all. If I use the callback event and send each request as the previous one completes, all is well. How can I get this to work without that workaround?
Edited:
For i As Integer = 0 To someNumberOfTimesInLoop
serv.CloseElement_IncAsync(Params...)
Next
So I wonder if it's a concurrency problem, as they all hit at the same time and the Id fields for these tables are not incremented in time?
I have added this code to my App.xaml.vb, based on this blog:
WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp)
Any ideas?