Logic App HTTP action Bad request due to max buffer size - httprequest

I am trying to complete an HTTP action in an Azure Logic App that sends out a GET request and returns a CSV file as the response body. The issue is that when I run it I get "BadRequest. Http request failed as there is an error: 'Cannot write more bytes to the buffer than the configured maximum buffer size: 104857600.'". I am not sure how to work around this buffer limit or whether I can increase it. I could use some help; I really need this CSV file returned so I can get it into Blob Storage.

Please try this way:
1. In the HTTP action's upper-right corner, choose the ellipsis button (...), and then choose Settings.
2. Under Content Transfer, set Allow chunking to On.
You can refer to Handle large messages with chunking in Azure Logic Apps
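If you work in the Logic App's code view instead of the designer, the same setting is expressed on the action's runtimeConfiguration. A minimal sketch of the HTTP action definition, assuming a hypothetical action name and CSV endpoint URL:
"HTTP_Get_CSV": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://example.com/reports/data.csv"
  },
  "runtimeConfiguration": {
    "contentTransfer": {
      "transferMode": "Chunked"
    }
  }
}
Note that chunked downloads also depend on the endpoint supporting partial content (Range) requests, as described in the linked documentation.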

Related

How to upload image with Socket Ktor?

I want to send an image over a socket.
How can I do that with Ktor?
AFAIK, you cannot do something like an HTTP multipart upload directly over WebSockets.
Several things you can try.
Try converting the image into Base64 or a byte array on the client and sending it to the WebSocket server.
One thing you may need to handle: if you send a raw byte array, you may need to deal with the file headers contained in that byte array.
If you read the file from disk you can get the bytes like this and pass them to the WebSocket (see the sketch below).
var arr = File(path).inputStream().use { it.readBytes() }
The downside of doing this is that if the image is large it may not work as expected. And if your WebSocket broadcasts it to multiple listening clients, it adds quite a lot of overhead and causes delays, because the file's byte array is sent along with any other data.
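A minimal sketch of this first option with the Ktor 2.x client and its WebSockets plugin; the host, port, and path are hypothetical, and the whole image is held in memory:
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.plugins.websocket.*
import io.ktor.websocket.*
import java.io.File

suspend fun sendImage(path: String) {
    val client = HttpClient(CIO) { install(WebSockets) }
    // hypothetical endpoint; point this at your own WebSocket server
    client.webSocket(host = "example.com", port = 8080, path = "/upload") {
        val bytes = File(path).readBytes()      // whole image read into memory
        send(Frame.Binary(true, bytes))         // sent as a single binary frame
    }
    client.close()
}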
A better approach is to upload the image to your server using an HTTP multipart upload (without the socket) and then send the image URL over the socket. The URL can be delivered to the clients listening on that particular socket, and the image data is loaded on a client only when required (see the second sketch below).
If you send the byte array of a big image over the WebSocket, that WebSocket message will be far larger than a message that only carries the image URL.
The recommended approach is method 2 in most cases, apart from some specific use cases.
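For method 2, a minimal sketch of the multipart upload side with the Ktor 2.x client; the URL, form field name, and file name are hypothetical, and the server is assumed to respond with the stored image's URL:
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.forms.*
import io.ktor.client.statement.*
import io.ktor.http.*
import java.io.File

suspend fun uploadImage(path: String): String {
    val client = HttpClient(CIO)
    // plain HTTP multipart upload, no WebSocket involved
    val response = client.submitFormWithBinaryData(
        url = "https://example.com/images",      // hypothetical upload endpoint
        formData = formData {
            append("image", File(path).readBytes(), Headers.build {
                append(HttpHeaders.ContentType, "image/png")
                append(HttpHeaders.ContentDisposition, "filename=\"photo.png\"")
            })
        }
    )
    client.close()
    return response.bodyAsText()                 // e.g. the stored image URL
}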

Requests - How to upload large chunks of file in requests?

I have to upload large files (~5 GB). I am dividing the file into small chunks (10 MB); I can't send all the data (5+ GB) at once, as the API I am calling fails for anything larger than 5 GB sent in one request. The API I am uploading to specifies that a minimum of 10 MB of data must be sent per request. I did use read(10485760) and sent that via requests, which works fine.
However, I do not want to read the whole 10 MB into memory, and if I leverage multithreading in my script, each thread holding 10 MB would cost me too much memory.
Is there a way I can send a total of 10 MB to the API with requests, but read only 4096/8192 bytes at a time and keep transferring until I reach 10 MB, so that I do not overuse memory?
Please note I cannot pass the file object to requests directly: that would use less memory, but then I would not be able to break the upload at 10 MB and the entire 5 GB would go into one request, which I do not want.
Is there any way to do this via requests? I see that http.client has it: https://github.com/python/cpython/blob/3.9/Lib/http/client.py - I could call send(fh.read(4096)) there in a loop until I complete 10 MB, and so finish one 10 MB request without heavy memory usage.
This is what the requests documentation says:
In the event you are posting a very large file as a multipart/form-data request, you may want to stream the request. By default, requests does not support this, but there is a separate package which does - requests-toolbelt. You should read the toolbelt’s documentation for more details about how to use it.
So try streaming the upload; if that doesn't meet your needs, then go for requests-toolbelt.
To stream an upload with requests, pass a generator (or a file-like object) as the data argument of post or put; requests then sends the body using chunked transfer encoding instead of loading it into memory. (stream=True only affects how the response is downloaded, not how the request body is sent.)
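A minimal sketch of that idea, assuming a hypothetical upload URL, a hypothetical part query parameter, and an API that accepts chunked transfer encoding: a generator yields at most 10 MB from the open file in 8 KB pieces, and each 10 MB slice becomes one request.
import requests

CHUNK_TOTAL = 10 * 1024 * 1024   # 10 MB per request (the API's minimum)
BLOCK = 8192                     # read 8 KB at a time to keep memory low

def iter_block(fh, total=CHUNK_TOTAL, block=BLOCK):
    """Yield up to `total` bytes from fh in `block`-sized pieces."""
    remaining = total
    while remaining > 0:
        data = fh.read(min(block, remaining))
        if not data:             # end of file
            break
        remaining -= len(data)
        yield data

url = "https://api.example.com/upload"       # hypothetical endpoint
with open("bigfile.bin", "rb") as fh:
    part = 0
    while fh.peek(1):                         # more data left in the file
        # Passing a generator as `data` makes requests use chunked
        # transfer encoding, so only BLOCK bytes are in memory at once.
        resp = requests.put(url, params={"part": part}, data=iter_block(fh))
        resp.raise_for_status()
        part += 1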

Determine the memory usage in actual server through its header response size

I have a log trace file from a server. One of the fields is a response-size header for each request, with values like 4585 and so on. Is there any relationship between the response size and the actual memory used on the server to generate that response? I need to know the real memory cost of a request while the server processes it, given that once processing finishes it reports the response size. Any idea or calculation method will be highly appreciated, thanks.
There is no way to determine memory usage on the server from the response size.
The (hypothetical) way to determine memory usage for a request would be to measure it directly in the server's request-handling code. But even that is hard, because it would be difficult to separate the memory used by each request from everything else the server is doing. It would only be feasible if your server processed (strictly) one request at a time.

Working of S3 file download

To download a file from S3 using the Java SDK, we need to do the following:
Note: multipart download is off.
S3Object s3Object = s3.getObject(getObjectRequest);
S3ObjectInputStream s3ObjectInputStream = s3Object.getObjectContent();
//Write to a file from this stream
When we make a getObject call, the SDK does a GET call on that object.
This call returns just the headers of the response.
When we actually start reading from the s3ObjectInputStream, we get the response body.
But this all is one REST call.
So I was confused about why the call returned only the headers first.
And how does S3 know when to start sending the response body?
We are making only one call, so how do we notify S3 that we have now started reading from the s3ObjectInputStream?
Where is the actual file stored until we read it from the stream?
S3 starts sending the response body immediately.
You just haven't started reading it from the network.
getObject
Be extremely careful when using this method; the returned Amazon S3 object contains a direct stream of data from the HTTP connection. The underlying HTTP connection cannot be reused until the user finishes reading the data and closes the stream.
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#getObject-com.amazonaws.services.s3.model.GetObjectRequest-
A small amount has been buffered, but the object isn't being stored anywhere. The network connection is stalled.
If you were to start a request and wait long enough before reading it, S3 would eventually detect the connection as stalled, give up, and close the connection.
In practice, it's easy to separate the HTTP headers from the body in a stream, because the boundary between them is always exactly \r\n\r\n. This 4-byte sequence is invalid within the headers and mandatory after them, so the SDK simply stops extracting headers at that point in the response from S3, then builds and returns the response object, from which you can read the body as it streams in from the network.
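To make the streaming behaviour concrete, here is a minimal sketch using the AWS SDK for Java v1; the bucket name, key, and local file name are hypothetical:
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public class S3Download {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // hypothetical bucket and key
        S3Object s3Object = s3.getObject(new GetObjectRequest("my-bucket", "report.csv"));
        try (S3ObjectInputStream in = s3Object.getObjectContent();
             OutputStream out = Files.newOutputStream(Paths.get("report.csv"))) {
            byte[] buf = new byte[8 * 1024];
            int n;
            // Each read() pulls the next bytes of the body off the still-open
            // HTTP connection; nothing beyond the headers (plus a small buffer)
            // has been downloaded until we ask for it here.
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}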

Rest response getting truncated in WAS and IBM HTTP Server 8.5

We have built a REST web service and deployed it on WebSphere Application Server and IBM HTTP Server 8.5.
What is happening is that for some POST requests where we have a quite large response (more than 64 KB), we are not getting the complete response data.
The application generates valid JSON, but the JSON is getting truncated somewhere in between. The same request is fired multiple times, yet the response is truncated for only a few requests, randomly.
Our analysis shows that whenever we get a truncated response, its length is a multiple of 32 KB, i.e. the actual response size may be, say, 105 KB but we receive only 64 KB or 96 KB of it.
Any idea what can be the reason? Any configuration which can help us resolve the issue?
Thanks
Narinder
You may want to increase the size of the write buffer on the Web Container to stop it splitting the response across multiple writes. The default size of the write buffer is 32 KB, which matches the multiple you are seeing.
To change this setting :
Application servers > -serverName- > Ports > Transport Chain > HttpQueueInboundDefault
Click on the Web Container and set the Write buffer size to an appropriate value. In most cases you want the buffer to be large enough to write all (or most) responses in a single write rather than multiple writes.
See also WebSphere Application Server 8.5 tuning