Determine the memory usage in actual server through its header response size - apache

I have a log trace file from a server. Each request has a header response size field, e.g. 4585. Is there any relationship between the response size and the actual memory the server used to generate that response? I need to know the real size of a request while the server processes it, and the response size it produces once processing finishes. Any idea or calculation method will be highly appreciated, thanks

There is no way to determine memory usage on the server from the response size.
The (hypothetical) way to determine memory usage for a request would be to measure it directly in the server's request-handling code. But even that is difficult, because it is hard to isolate the memory used by each request from everything else going on in the process. It would only be feasible if your server processed (strictly) one request at a time.
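To make that concrete, here is a minimal Python sketch of the "measure it in the server itself" approach, assuming a strictly single-threaded handler. `handle_request` is a hypothetical stand-in for real request-handling code, not anything from Apache:

```python
# Hypothetical sketch: measuring per-request memory inside the server
# itself (the only reliable place), assuming requests are handled one
# at a time so allocations are not mixed between requests.
import tracemalloc

def handle_request(payload: bytes) -> bytes:
    # Stand-in for real request-handling logic.
    return b"<html>" + payload.upper() + b"</html>"

def handle_with_memory_trace(payload: bytes):
    tracemalloc.start()
    response = handle_request(payload)
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    # 'peak' is the high-water mark of Python allocations while handling
    # this request -- typically much larger than len(response), which is
    # exactly why response size alone tells you nothing about memory use.
    return response, peak

resp, peak_bytes = handle_with_memory_trace(b"hello")
print(len(resp), peak_bytes)
```

The point of the sketch is the gap between `len(resp)` and `peak_bytes`: the response size in your log is only the final serialized output, not the working memory behind it.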

Related

Logic App HTTP action Bad request due to max buffer size

Real quickly: I am trying to complete an HTTP action in an Azure Logic App that sends out a GET request and returns a CSV file as the response body. The issue is that when I run it I get "BadRequest. Http request failed as there is an error: 'Cannot write more bytes to the buffer than the configured maximum buffer size: 104857600.'". I am not sure how to mitigate this buffer limit or whether I can increase it. I could use some help; I really need this CSV file returned so I can get it into Blob Storage.
Please try this way:
1. In the HTTP action's upper-right corner, choose the ellipsis button (...), and then choose Settings.
2. Under Content Transfer, set Allow chunking to On.
You can refer to Handle large messages with chunking in Azure Logic Apps
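Outside of Logic Apps, the same principle applies anywhere: stream a large response in pieces instead of buffering the whole body in memory. A general Python sketch (the URL and destination path are placeholders, and this is not Logic-App-specific code):

```python
# General sketch of chunked downloading: read the response a piece at a
# time and write each piece out, so memory use stays at one chunk
# regardless of how large the CSV (or any file) is.
import urllib.request

def download_in_chunks(url: str, dest_path: str, chunk_size: int = 1 << 20) -> int:
    """Stream 'url' to 'dest_path' in chunk_size pieces; return total bytes."""
    total = 0
    with urllib.request.urlopen(url) as resp, open(dest_path, "wb") as out:
        while True:
            chunk = resp.read(chunk_size)  # at most 1 MiB held in memory
            if not chunk:
                break
            out.write(chunk)
            total += len(chunk)
    return total
```

This is the idea the "Allow chunking" setting enables for you: the 100 MB (104857600-byte) buffer limit only bites when the whole body must fit in one buffer.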

Rest response getting truncated in WAS and IBM HTTP Server 8.5

We have built REST web service and deployed on Websphere application server and IBM HTTP Web server 8.5.
What is happening is that for some of the POST requests where we have a quite large response (more than 64 KB), we are not getting the complete response data.
The application generates good JSON, but the JSON is getting truncated somewhere in between. The same request is fired multiple times, and the response gets truncated for a few of the requests, randomly.
Our analysis shows that whenever we get a truncated response, its size is a multiple of 32 KB, i.e. the actual response may be, say, 105 KB but we only get 64 KB or 96 KB of it.
Any idea what can be the reason? Any configuration which can help us resolve the issue?
Thanks
Narinder
You may want to increase the size of the write buffer on the Web Container to stop it chunking the writes across multiple threads. The default size of the write buffer is 32 KB, which corresponds to the multiple you are seeing.
To change this setting :
Application servers > -serverName- > Ports > Transport Chain > HttpQueueInboundDefault
Click on the Web Container and set the Write Buffer size to an appropriate value. In most cases you want the buffer to be large enough to write all (or most) responses in one single write rather than multiple writes.
See also WebSphere Application Server 8.5 tuning
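A toy model (not WebSphere code) of why the truncation always lands on a multiple of the 32 KB buffer: full buffers get flushed to the socket as they fill, so if the final partial flush is lost, everything already flushed is an exact multiple of the buffer size. The 1 KB inner write size is arbitrary for the illustration:

```python
# Toy simulation of a buffered response writer: full 32 KB buffers are
# flushed as they fill; the tail only goes out on a final flush at close.
BUFFER_SIZE = 32 * 1024

def buffered_write(response: bytes, lose_final_flush: bool) -> bytes:
    sent = b""
    buf = b""
    for i in range(0, len(response), 1024):   # app writes in 1 KB pieces
        buf += response[i:i + 1024]
        if len(buf) >= BUFFER_SIZE:           # full buffer -> flushed
            sent += buf[:BUFFER_SIZE]
            buf = buf[BUFFER_SIZE:]
    if not lose_final_flush:                  # final partial flush on close
        sent += buf
    return sent

body = b"x" * (105 * 1024)                    # a 105 KB response
print(len(buffered_write(body, lose_final_flush=True)))   # 96 KB received
print(len(buffered_write(body, lose_final_flush=False)))  # full 105 KB
```

That matches the symptom in the question: a 105 KB response arriving as exactly 96 KB (three full 32 KB flushes, tail lost).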

What Is Meant By Server Response Time

I'm doing website optimisations using Google's Pagespeed Insights to test improvements. Among the high-priority fix suggestions, is this:
Reduce server response time
In our test, your server responded in 2.1 seconds.
I read the 'helpful' doc linked in this section, and now I'm really confused.
Is the server response time the DNS response, the time to first-byte, or a combination? Is it purely a server-side thing, or could this be affected by, for example, a slow JavaScript resource or ready events in the DOM?
My first guess would have been that it's the time taken from the moment the request was issued, to the 1st byte received from the server, however Google's definition is not quite that:
(from this page https://developers.google.com/speed/docs/insights/Server)
Server response time measures how long it takes to load the necessary
HTML to begin rendering the page from your server, subtracting out the
network latency between Google and your server. There may be variance
from one run to the next, but the differences should not be too large.
In fact, highly variable server response time may indicate an
underlying performance issue.
Taking 2.1 seconds suggests that your application/web server is buffering its output, so all your server-side processing happens before any content is sent. If you don't buffer, the HTML can start being sent to the browser sooner, which may help; however, you lose the ability to do things like change response headers late in your logic.
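Per the quoted doc, Google's number is essentially time-to-first-byte with the network latency between Google and your server subtracted out, so a raw TTFB you measure yourself will read a bit higher. A rough sketch of measuring TTFB from the Python standard library (host/port are placeholders; plain HTTP for simplicity):

```python
# Rough time-to-first-byte measurement: time from sending the request
# until the first response byte arrives. Includes TCP connect and
# network latency, which PageSpeed subtracts out, so expect a higher
# number than Google reports.
import http.client
import time

def time_to_first_byte(host: str, port: int = 80, path: str = "/") -> float:
    conn = http.client.HTTPConnection(host, port, timeout=10)
    start = time.monotonic()
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read(1)                 # block until the first body byte arrives
    ttfb = time.monotonic() - start
    conn.close()
    return ttfb
```

Comparing this against the full page-load time in the browser also answers the question above: slow JavaScript or DOM-ready work happens after the first byte, so it does not count toward this metric.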

What is the purpose of the MaxReceivedMessageSize on the client-side?

I did a test against a WCF server where the response from the server exceeds the MaxReceivedMessageSize property defined in the client-side binding object, resulting in a CommunicationException. I examined the request and response using Fiddler. Despite exceeding MaxReceivedMessageSize, the entire response is sent to the client.
I believe I am missing the point of this behavior. As I see it, no bandwidth is saved, as the data has already been received. The client application could have processed the data, but the client binding discarded it before it was given to the application.
If saving bandwidth is not the purpose of the MaxReceivedMessageSize on the client-side, what is it for?
The answer is simple: security.
It would indeed be better for the bandwidth if your client could say to the server: "oh, by the way, don't bother sending me replies bigger than X bytes", but that is something they didn't implement :-)
And even if it was, what if the server has a bug, or is intentionally misbehaving...
What if the server returned a 2 TB string? Your client would then try to allocate a 2 TB buffer to receive the response and would probably get an OutOfMemoryException. That would bring your client down.
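A sketch of the kind of guard MaxReceivedMessageSize provides (WCF's actual enforcement lives inside the binding, not in your code): fail as soon as the incoming data exceeds a configured cap, instead of buffering an arbitrarily large reply into memory. The names and the 64 KB cap here are made up for the example:

```python
# Client-side size cap, the security idea behind MaxReceivedMessageSize:
# stop reading and raise as soon as the response exceeds the limit,
# rather than letting a buggy or malicious server exhaust client memory.
MAX_RECEIVED = 64 * 1024  # arbitrary 64 KB cap for the example

class MessageTooLargeError(Exception):
    pass

def read_bounded(stream, max_bytes: int = MAX_RECEIVED) -> bytes:
    data = b""
    while True:
        chunk = stream.read(8192)
        if not chunk:
            return data
        data += chunk
        if len(data) > max_bytes:
            # Refuse before accumulating an unbounded buffer.
            raise MessageTooLargeError(f"response exceeded {max_bytes} bytes")
```

As the Fiddler observation shows, the bytes may still cross the wire; the cap protects the client's memory and processing, not the bandwidth.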

Maximum binary contents length over WCF/Http

We have a WCF service that has a threshold of 30MB to send files over an http message, anything above that value gets transferred by file copy and the path sent back to the caller. Now we were requested to eliminate that file copy because customers complained it was too slow. So the decision was to remove any size limitation when sending binary content over WCF/HTTP.
My question is - how reliable is that? What type of issues will we encounter by pushing, say, a 2GB file over the wire in a single WCF message, if that is even possible?
Thanks!
If you set the MaxReceivedMessageSize in WCF to a high enough value on your WCF service, you can push a fairly large file through that service. The maximum is Int64.MaxValue = 9,223,372,036,854,775,807, so you should be able to set a value that covers a 2 GB message.
You might want to control the MaxBufferSize to ensure you're not trying to hold too much in memory, and maybe consider switching to the more binary-efficient MTOM message encoding if you can. Note that MaxReceivedMessageSize governs the size of the message after the binary file has been encoded, which means the original binary file that can be sent over the service will be somewhat smaller than 2 GB.
MSDN has a very nice article covering sending large amounts of data over WCF and what to look out for: Large Data and Streaming.
Edit: Turns out the max value allowed is actually Int64.MaxValue.
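The streaming advice generalizes beyond WCF. A small Python sketch of the underlying idea of streamed transfer mode: move a large payload in fixed-size chunks so memory stays at one chunk, with a digest so the receiver can verify the transfer. The names here are mine, not WCF APIs:

```python
# Chunked transfer with an integrity check: memory use is bounded by
# chunk_size no matter how large the payload, which is why streaming
# beats a single buffered multi-GB message.
import hashlib

def stream_copy(src, dst, chunk_size: int = 64 * 1024) -> str:
    """Copy src -> dst one chunk at a time; return SHA-256 of the payload."""
    digest = hashlib.sha256()
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            return digest.hexdigest()
        dst.write(chunk)
        digest.update(chunk)
```

`src` and `dst` can be files, sockets wrapped as file objects, or in-memory buffers; nothing in the loop depends on the total size.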