Is there a way to add a header to an Apache response showing how long it took to retrieve a resource?

Is there a module or built-in feature in Apache that I can use or activate to report how long it took to retrieve/process a resource?
For example, when the resource http://dom.net/resource is accessed, the response headers would include the total time the server spent waiting for the resource to be ready before sending it back to the client.

Apache doesn't really 'wait' until the resource is ready before sending the response back to you - it streams data back to the client as and when it receives it.
Depending on what you're interested in measuring, you could record the time taken for the client to receive the first byte/last byte back from Apache, or measure the time taken for Apache to receive the first byte from the (remote?) resource. The time taken for Apache to receive the entire response from the remote resource is not something you can send in the headers, as the headers will have been sent to the client before the remote response is fully received. This information can trivially be written to the Apache logs, however.
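As a minimal sketch of both options (assuming mod_headers and mod_log_config are enabled; the header name and log nickname below are made up): mod_headers can interpolate %D, the time in microseconds from receiving the request until the headers are sent, into a response header, while the access log can record the full time taken to serve the request, since log lines are written after the response completes.

# mod_headers: %D here covers only the time until the headers
# are sent on the wire, for the reason given above
Header set X-Request-Duration "%D"

# mod_log_config: %D here is the total time taken to serve the
# request, logged once the response has finished
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog "logs/access_log" timed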

Related

ASP.NET Core and 102 status code implementation

I have a long-running operation which is called via Web API. The description of status code 102 says:
An interim response used to inform the client that the server has
accepted the complete request, but has not yet completed it.
This status code SHOULD only be sent when the server has a reasonable
expectation that the request will take significant time to complete.
As guidance, if a method is taking longer than 20 seconds (a
reasonable, but arbitrary value) to process the server SHOULD return a
102 (Processing) response. The server MUST send a final response after
the request has been completed.
So I want to return status code 102 to the client, and then have the client wait for a response with the result of the operation. How can I implement this in .NET?
I read this thread: How To Return Http 102 Processing in Asp.Net Web Api?
That thread has a good explanation of what is necessary, but no answer. I don't understand how to implement it in .NET in practice, not just in theory...
Using HTTP 102 requires that the server send two responses for one request. ASP.NET (Core or not) does not support sending a response to the client without completely ending the request. Any attempt to send two responses ends up throwing an exception and simply not working (I tried a couple of different ways).
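To illustrate what the protocol would require (a sketch only; the URL and body are made up): 102 is an interim response, so a single request would have to produce two status lines on one connection, which is exactly what the ASP.NET response pipeline cannot emit:

GET /long-operation HTTP/1.1
Host: example.com

HTTP/1.1 102 Processing

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 17

{"result":"done"}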
There's a good discussion here about how it's not actually in the HTTP spec, so implementing it isn't really required.
There are a couple alternatives I can think of:
Use web sockets (a persistent connection that allows data to be sent back and forth), like with SignalR, for example.
If your request takes a long time because it's getting data from elsewhere, you can try pulling that data in as a stream and sending it to the client as a stream. That way the data is sent as it comes in, rather than being loaded entirely into memory before sending. Here's an example of streaming data from a database to the response: https://stackoverflow.com/a/45682190/1202807 (see also the sketch below).
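The linked answer shows the ASP.NET version. As a language-neutral sketch of the same idea (servlet-style Java here, purely for illustration; the class and method names are assumed), the key point is copying from the slow source to the response in small pieces and flushing as you go, instead of buffering the whole payload first:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.http.HttpServletResponse;

public class StreamingHelper {
    // Copies upstream data to the client as it arrives, so the client
    // starts receiving bytes while the slow source is still producing them.
    public static void streamToClient(InputStream source, HttpServletResponse response)
            throws IOException {
        response.setContentType("application/octet-stream");
        OutputStream out = response.getOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = source.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            out.flush(); // push each piece to the client immediately
        }
    }
}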

Using etag with pagination to serve data conditionally in chunks

I have several APIs that serve a large number of records to an application. The API responses are generally user-based (same API might serve different responses to different users).
To make it easier for the application side to get and load the data, the application receives the response in chunks. For example, it makes n consecutive requests, like this:
/api/myapi/1
/api/myapi/2
...
/api/myapi/n
The application caches the API responses in its local memory as objects. In each API response, an ETag header is sent back to the application, containing the hashed value of the response. In each request made by the application, this ETag value is sent along with the request parameters. Based on the ETag value, the server determines whether the application has an old response in its cache or not.
In the case of the APIs that serve data in chunks, the application still keeps only one ETag per API. That makes it impossible for the server to check whether the application's cached data is fresh or not.
An illustration:
In the case of the API above, each time a request is made, the server calculates the ETag and sends it along with the response. Each time the application receives a response, it updates the stored ETag value. In the end (when the nth call is made), the application only has the nth ETag stored.
When the application needs this data again, it first makes a request to the server (/api/myapi/1), sending the nth ETag. Most probably the response with the first set of data will differ from the nth response, so the server tells the application to re-fetch the data. The application re-fetches the data and updates the ETag. This repeats up to the nth request.
As you can see, even if the total response (all the sets of data) has not changed, the server will always be comparing the ETag from the response of the previous set with the ETag of the current data. This means the server's answer will always be 're-fetch the data', which is wrong.
The alternatives I came up with are:
The application stores all ETags and sends etag[i] in the request /api/myapi/i (see the wire sketch at the end of this question).
Another way would be for the server to store all ETags for every user (which I don't find efficient). That could cause a problem when the server has successfully sent a response but the application was unable to update that set of data: the server would not know that the application is still using an old response.
The third alternative would be for the server to calculate the ETag of the whole response but send the response in n sets, sending the same ETag (that of the whole response) every time. This means the server has to do the same job twice, which I still do not like as a solution.
PS: The front-end developer says that it is not possible for him to store more than one ETag per request. That makes alternative 1 somewhat complicated (although not impossible).
Is there any other way to treat this scenario? If not, what could be a more efficient solution?
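For illustration, alternative 1 expressed with standard HTTP conditional requests would look roughly like this on the wire (the path and ETag values are made up, and the standard If-None-Match header stands in for the custom request parameter):

GET /api/myapi/2 HTTP/1.1
If-None-Match: "etag-chunk-2"

HTTP/1.1 304 Not Modified
ETag: "etag-chunk-2"

The server still computes the ETag for that chunk, but a 304 skips re-sending the body, and because each chunk is validated against its own ETag, an unchanged data set no longer triggers a full re-fetch.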

How does a REST API handle continuous data updates?

I have a REST backend API, and the front end calls the API to get data.
I was wondering how a REST API handles continuous data updates. For example,
in Jenkins, when we execute a build job we see continuous log output on the page until the job finishes. How does REST accomplish that?
Jenkins will just continue to send data. That's it. It simply carries on sending (at least that's what I'd presume it does). Normally the response contains a header field indicating how much data the response contains (Content-Length). But this field is not necessary. The server can omit it. In such a case the response body ends when the server closes the connection. See RFC 7230:
Otherwise, this is a response message without a declared message body length, so the message body length is determined by the number of octets received prior to the server closing the connection.
Another possibility would be to use the chunked transfer encoding. The server then sends the body as a series of chunks, each prefixed with its own size in hexadecimal (there is no per-chunk Content-Length header), and terminates the body by sending a zero-length last chunk.
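On the wire, a chunked response looks roughly like this (an illustrative sketch; each chunk is prefixed by its byte count in hex, and the 0 chunk ends the body):

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

7
Started
6
 build
0

The server can keep appending chunks for as long as the build produces output, and only sends the final zero-length chunk when the job finishes, which matches the continuous log output described in the question.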
WebSockets would be a third possibility.
I was searching for an answer myself and then the obvious solution struck me. In order to see what type of communication a service is using, you can simply view it from the browser side using Developer Tools.
In Google Chrome it will be F12 -> Network.
In the case of Jenkins, the front end sends AJAX requests to the backend for data:
every 5 seconds on the Dashboards page
every second during a Pipeline run (the Console Output page that you mentioned).
I have also checked this approach in AWS. When checking the status of instances (e.g. Initializing..., Booting...), it queries the backend every second. That seems to be a standard interval for its services.
Additional note:
When running an AWS Remote Console, though, it first sends requests for the remote console instance status (the backend answers with { status: "BOOTING" }, etc.). After the backend returns the status "RUNNING", it starts a WebSocket session between your browser and the AWS backend (you can see it by applying the WS filter in developer tools).
At that point it is no longer a REST API but WebSockets, which is a different (stateful) protocol.

Understanding fiddler statistics

We are sending an HTTP WCF request to a 3rd-party system hosted on our servers and are experiencing a significant delay between sending the request and getting the response. The 3rd party claims that they complete their work in a few seconds, but in Fiddler I can see a significant gap between ServerBeginResponse and GotResponseHeaders.
I'm not sure what could account for this delay. Could someone explain what the ServerBeginResponse and GotResponseHeaders timers in Fiddler actually mean?
The timers mean pretty much exactly what they say: the ServerGotRequest timer is set when Fiddler is done transmitting the HTTP request to the server, and the GotResponseHeaders timer is set when Fiddler has read the complete set of response headers from the server.
In your screenshot, there's a huge delay between ServerBeginResponse (which is set when the first byte of the server's response is returned) and GotResponseHeaders, which suggests that the server took a significant amount of time to finish returning the HTTP response headers.
If you send me (via Help > Send Feedback) a SAZ capture of this traffic, I can take a closer look at it.

Netty SSL mode strange behavior

I am trying to understand why Netty's SSL mode works in such a strange way.
The problem is the following: when any SSL client (an HTTPS browser, a Java client using SSL, or any other SSL client application) connects to the Netty server, at the beginning I get the full message and can correctly recognize the protocol used, but as long as the channel stays connected, all following messages have a strange structure, which does not happen in non-SSL mode.
As an example, in the messageReceived method when an HTTPS browser connects to my server:
I have used PortUnificationServerHandler to switch protocols (without using Netty's HTTP handler; this is just an example, because I use SSL mode for my own protocol too).
the first message is OK; I get the full header beginning with GET or POST
then I send a response...
the second message is only one byte long and contains only "G" or "P"
the third message is the rest, beginning with either "ET" or "OST", followed by the rest of the HTTP header and body...
here again follows my response...
the fourth message is again only one byte long
the fifth message is again the rest... and this is how the game goes on.
It does not matter which sub-protocol is used, HTTP or anything else: after the first message, I always first get one byte, and only with the next message the rest of the request.
I wanted to build a kind of proxy: receive the SSL data and forward it unencrypted to another listener. But when I forward it directly, without waiting for the full request, the target listener (an HTTP server, for example) cannot handle such data; if the target first receives only one byte (even if the next message contains the rest), the channel gets closed immediately and the request is abandoned.
OK, a first thought would be the following: cache the first byte temporarily, wait for the next message, join the two messages, and only then respond. That works fine, but sometimes it is not the correct approach, because the single byte is sometimes really the last byte of a message; if I cache it and wrongly wait for a next message, I can wait forever, because the HTTPS browser expects a response at that point and does not send any more data.
Now the question: is it possible to fix this problem with SSL? Maybe there are special settings that influence this behavior?
I want the fully joined message at once, as-is, and not first one byte and then the rest.
Can you please confirm that newer Netty versions behave the same way when using PortUnificationServerHandler (but without Netty's HTTP handler; try a handler of your own)?
Is this behavior really OK? I do not believe it was designed to work this way.
What you're experiencing is most likely due to countermeasures against the BEAST attack: some TLS implementations (the JDK 7 SSLEngine among them) send the first byte of application data in a TLS record of its own, the so-called 1/n-1 record split, which is why you see a one-byte message followed by a message containing the rest.
This isn't a problem in itself. What seems to be the actual problem is that you're assuming you're meant to read data in terms of messages/packets. This is not the case: TCP (and TLS/SSL) is meant to be used as a stream of continuous data. You should keep reading data while data is available; where to split the incoming data meaningfully is dictated by the application protocol. For HTTP, the indications are the blank line after the headers and the Content-Length header or the chunked transfer encoding for the entity.
If you define your own protocol, you'll need a similar mechanism, whether you use plain HTTP or SSL/TLS. Assuming you don't need it only works by chance.
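As a minimal sketch of such a mechanism (class and names assumed; written against Netty 4's ByteToMessageDecoder, the successor of Netty 3's FrameDecoder): the decoder accumulates bytes across reads and only emits a message once the end of the HTTP header block has actually arrived, no matter how the bytes were split across TLS records.

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.util.List;

// Buffers incoming bytes until the header terminator (CRLF CRLF)
// arrives, then emits the whole header block as a single message.
public class HeaderBlockDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        int end = findHeaderEnd(in);
        if (end < 0) {
            return; // incomplete: ByteToMessageDecoder keeps the bytes buffered
        }
        out.add(in.readBytes(end + 4)); // header block including CRLF CRLF
    }

    // Returns the offset (relative to readerIndex) of the CRLF CRLF
    // terminator, or -1 if it has not arrived yet.
    private static int findHeaderEnd(ByteBuf in) {
        for (int i = in.readerIndex(); i <= in.writerIndex() - 4; i++) {
            if (in.getByte(i) == '\r' && in.getByte(i + 1) == '\n'
                    && in.getByte(i + 2) == '\r' && in.getByte(i + 3) == '\n') {
                return i - in.readerIndex();
            }
        }
        return -1;
    }
}

Placed in the pipeline after the SslHandler, this delivers the request head as one message whether it arrived as 1 + N bytes or all at once.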
I experienced this issue and found it was caused by using JDK 1.7. Moving back to JDK 1.6 solved it. I did not have time to investigate further, but have assumed for now that the SSLEngine implementation changed in the JDK. I will investigate further when time permits.