Is there a way to stop lighttpd from buffering POSTs to a CGI executable?
It seems to me that all requests are fully buffered on disk before they are forwarded to the CGI executable, which makes it impossible for me to process the input in a stream-based way.
To clarify, I'm only talking about the request that is forwarded to the CGI executable on the standard input; I've already verified that the response is not buffered like that, and streaming output is indeed possible.
server.stream-request-body = 0 (default): buffer the entire request body before connecting to the backend
server.stream-request-body = 1: stream the request body to the backend; buffer to temporary files
server.stream-request-body = 2: stream the request body to the backend; minimal buffering, which might block the upload
When using HTTPS, it is recommended to additionally set ssl.read-ahead = "disable".
https://redmine.lighttpd.net/projects/lighttpd/wiki/Server_stream-request-body
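For example, a minimal lighttpd.conf sketch that enables streaming to the CGI backend could look like the following (only the two directives above; which value you pick depends on whether you can tolerate the upload blocking, and the rest of your configuration is assumed to stay as-is):

server.stream-request-body = 2      # stream to the backend with minimal buffering
ssl.read-ahead = "disable"          # recommended when requests arrive over HTTPS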
I am using the LWIP httpd CGI interface to receive HTML page POST and GET requests. I am trying to update the firmware using a .bin file uploaded through the HTML page, but after a few packets the connection closes, and I do not get enough time to process each received packet. If I try to write the received data to flash, the next packet is not received and the connection closes. I am using
err_t httpd_post_receive_data(void *connection, struct pbuf *p)
to receive the file; after the header is removed, each received packet is 536 bytes.
It seems the time or buffer space available to hold the packets is not enough. I have tried changing various related LWIP macros with no success, and I have also checked the pbuf payload size.
I'm attempting to post some json to an express.js endpoint. If the size of the json is less than 64k, then it succeeds just fine. If it exceeds 64k, the request is never completely received by the server. The problem only occurs when running express directly locally. When running on heroku, the request proceeds without issue.
The problem is seen across macOS, Linux (Ubuntu 19), and Windows. It is present when using Chrome, Firefox, or Safari.
When I make requests using Postman, the request fails.
If I make the request using curl, the request succeeds.
If I make the request after artificially throttling Chrome to "slow 3G" levels in network settings, the request succeeds.
I've traced through Express and discovered that the problem appears when attempting to parse the body. The request gets passed to body-parser.json(), which in turn calls getRawBody to get the Buffer from the request.
getRawBody is processing the incoming request stream and converting it into a buffer. It receives the first chunk of the request just fine, but never receives the second chunk. Eventually the request continues parsing with an empty buffer.
The size limit on body-parser is set to 100mb, so that is not the problem. getRawBody never returns, so body-parser never gets a crack at it.
When I log the events from getRawBody, I can see the first chunk come in, but no other events are fired.
Watching Wireshark captures, all the data is being sent over the wire, but for some reason Express is not receiving all the chunks. I think it has to be due to how Express is processing the packets, but I have no idea how to proceed.
On the off chance anyone in the future runs into the same thing: the root problem in this case was that we were overwriting req.socket with our socket.io client. req.socket is used by Node internally to transfer data. We were overwriting it in such a way that the first packets would get through, but not subsequent packets. So if the request was processed sufficiently quickly, all was well.
tl;dr: Don't overwrite req.socket.
I'm using the Traefik Docker image v1.5, and when I try to upload a big file (around 1 GB), Traefik closes the connection and logs:
exceeding the max size 4194304
So is there any way to modify/remove this default restriction?
Traefik uses a Buffering middleware that gives you control over how requests are read before they are sent to services.
With Buffering, Traefik reads the entire request into memory (possibly buffering large requests into disk), and rejects requests that are over a specified limit.
This can help services deal with large data (multipart/form-data for example), and can minimize time spent sending data to a service.
Example configuration:
[backends]
[backends.backend1]
[backends.backend1.buffering]
maxRequestBodyBytes = 10485760
memRequestBodyBytes = 2097152
maxResponseBodyBytes = 10485760
memResponseBodyBytes = 2097152
retryExpression = "IsNetworkError() && Attempts() <= 2"
With the maxRequestBodyBytes option, you can configure the maximum allowed body size for the request (in Bytes).
If the request exceeds the allowed size, it is not forwarded to the service and the client gets a 413 (Request Entity Too Large) response.
Traefik Buffering Documentation
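For the roughly 1 GB upload in the question, you would raise maxRequestBodyBytes accordingly. A sketch based on the example above (the values are illustrative, not a recommendation):

[backends.backend1.buffering]
maxRequestBodyBytes = 1073741824   # ~1 GiB upper bound for request bodies
memRequestBodyBytes = 2097152      # keep only 2 MiB in memory; larger bodies buffer to disk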
For me, the problem was not with Traefik v2 but with Cloudflare: Cloudflare limits uploads to 100 MB on free accounts.
I had to upload via an internal URL to get around this issue.
My application serves .wav files, which are downloadable at certain URLs. I have to change the logic so that they are streamed instead of downloaded, so I will remove the Content-Disposition header that was being explicitly set.
Piece of code:
// removed so that the browser plays the file instead of downloading it
// response.setHeader("Content-Disposition", "attachment;filename=" + fileName);
bis = new BufferedInputStream(inputStream);
bos = new BufferedOutputStream(sOutputStream);
// copy the audio data to the response in 10 KB chunks
byte[] buff = new byte[10000];
int bytesRead = 0;
while (-1 != (bytesRead = bis.read(buff))) {
    bos.write(buff, 0, bytesRead);
}
bos.flush();
The 2nd or 3rd call to bos.write() causes:
ClientAbortException: java.net.SocketException: socket write error: Connection aborted by peer
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:402)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:449)
at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:349)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:425)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:414)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:89)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
When I debug the code, at the moment the write method fails, the browser opens a player, and another, identical request is generated and succeeds.
When Content-Disposition is set, everything works fine. Any ideas?
It's because the client aborts the request after noticing that it's actually a media file, and switches, via the client's media player, to streaming mode using HTTP Range requests in order to improve buffering speed. The client will then fire multiple HTTP requests for different parts of the file (obviously, this only works efficiently if your servlet really supports it; a lot of homegrown file servlets don't and may end up performing much worse).
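As a rough illustration of what "supporting it" means, here is a minimal sketch only: it handles just the simple bytes=start-end form, assumes a file variable for the underlying .wav file, and skips the validation a real implementation needs (the first link under "See also" below covers the full logic):

String range = request.getHeader("Range"); // e.g. "bytes=1024-2047"
long length = file.length();
long start = 0, end = length - 1;
if (range != null && range.matches("bytes=\\d+-\\d*")) {
    String[] parts = range.substring("bytes=".length()).split("-", -1);
    start = Long.parseLong(parts[0]);
    if (!parts[1].isEmpty()) end = Long.parseLong(parts[1]);
    // 206 Partial Content plus Content-Range tells the player which slice it is getting
    response.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT);
    response.setHeader("Content-Range", "bytes " + start + "-" + end + "/" + length);
}
response.setHeader("Accept-Ranges", "bytes");
// Servlet 3.1+; use setHeader("Content-Length", ...) on older containers
response.setContentLengthLong(end - start + 1);
// then copy only bytes [start, end] of the file to the response output stream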
As to those client abort exceptions in the server log, your best bet is to filter them out and suppress them, or at least log them with a DEBUG/INFO one-liner instead of a whole stack trace.
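A minimal sketch of that suppression, wrapped around the copy loop from the question (ClientAbortException is Tomcat's org.apache.catalina.connector.ClientAbortException, and the logger here is assumed to be an SLF4J logger):

try {
    byte[] buff = new byte[10000];
    int bytesRead;
    while (-1 != (bytesRead = bis.read(buff))) {
        bos.write(buff, 0, bytesRead);
    }
    bos.flush();
} catch (ClientAbortException e) {
    // the client dropped the connection (e.g. to switch to Range-based streaming); not worth a stack trace
    logger.debug("Client aborted download of {}: {}", fileName, e.getMessage());
}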
See also:
How to stream audio/video files such as MP3, MP4, AVI, etc using a Servlet
ClientAbortException at application deployed at jboss with IE8 browser
I'm using SharpBITS to download a file from Amazon S3.
// Create new download job.
BitsJob job = this._bitsManager.CreateJob(jobName, JobType.Download);
// Add file to job.
job.AddFile(downloadFile.RemoteUrl, downloadFile.LocalDestination);
// Resume the job.
job.Resume();
It works for files which do not need authentication. However, as soon as I add the authentication query string to the Amazon S3 file request, the response from the server is HTTP 403 (Forbidden). The URL works fine in a browser.
Here is the HTTP request from the BITS service:
HEAD /mybucket/6a66aeba-0acf-11df-aff6-7d44dc82f95a-000001/5809b987-0f65-11df-9942-f2c504c2c389/v10/summary.doc?AWSAccessKeyId=AAAAZ5SQ76RPQQAAAAA&Expires=1265489615&Signature=VboaRsOCMWWO7VparK3Z0SWE%2FiQ%3D HTTP/1.1
Accept: */*
Accept-Encoding: identity
User-Agent: Microsoft BITS/7.5
Connection: Keep-Alive
Host: s3.amazonaws.com
The only difference from the request a web browser makes is the request method: Firefox makes a GET request, while BITS makes a HEAD request. Are there any known issues with Amazon S3 HEAD requests and query string authentication?
Regards, Blaz
You are probably right that a proxy is the only way around this. BITS uses the HEAD request to get a content length and decide whether or not it wants to chunk the file download. It then does the GET request to actually retrieve the file - sometimes as a whole if the file is small enough, otherwise with range headers.
If you can use a proxy or some other trick to give it any kind of response to the HEAD request, it should get unstuck. Even if the HEAD request is faked with a fictitious content length, BITS will move on to a GET. You may see duplicate GET requests in a case like this, because if the first GET request returns a content length longer than the original HEAD request, BITS may decide "oh crap, I better chunk this after all."
Given that, I'm kind of surprised it's not smart enough to recover from a 403 error on the HEAD request and still move on to the GET. What is the actual behaviour of the job? Have you tried watching it with bitsadmin /monitor? If the job is sitting in a transient error state, it may do that for around 20 mins and then ultimately recover.
Before beginning a download, BITS sends an HTTP HEAD request to the server in order to figure out the remote file's size, timestamp, etc. This is especially important for BranchCache-based BITS transfers and is the reason why server-side HTTP HEAD support is listed as an HTTP requirement for BITS downloads.
That being said, BITS bypasses the HTTP HEAD request phase, issuing an HTTP GET request right away, if either of the following conditions is true:
The BITS job is configured with the BITS_JOB_PROPERTY_DYNAMIC_CONTENT flag.
BranchCache is disabled AND the BITS job contains a single file.
Workaround (1), configuring the job with the BITS_JOB_PROPERTY_DYNAMIC_CONTENT flag, is the most appropriate, since it doesn't affect other BITS transfers in the system.
For workaround (2), BranchCache can be disabled through BITS' DisableBranchCache group policy. You'll need to run "gpupdate" from an elevated command prompt after making any Group Policy changes, or it will take around 90 minutes for the changes to take effect.