Removing Content-Disposition causes ClientAbortException: java.net.SocketException: socket write error: Connection aborted by peer - apache

My application serves .wav files, which are downloadable at certain URLs. I have to change the logic so that they are streamed instead of downloaded, so I removed the Content-Disposition header that was being explicitly set.
The relevant piece of code:
// removed
//response.setHeader("Content-Disposition", "attachment;filename=" + fileName);
bis = new BufferedInputStream(inputStream);
bos = new BufferedOutputStream(sOutputStream);
byte[] buff = new byte[10000];
int bytesRead = 0;
while (-1 != (bytesRead = bis.read(buff))) {
    bos.write(buff, 0, bytesRead);
}
bos.flush();
The 2nd or 3rd call to bos.write() causes:
ClientAbortException: java.net.SocketException: socket write error: Connection aborted by peer
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:402)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:449)
at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:349)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:425)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:414)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:89)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
When I debug the code, at the time the write method fails, the browser opens a player and another, identical request is generated and succeeds.
When Content-Disposition is set, everything works fine. Any ideas?

It's because the client aborts the request after noticing that it's actually a media file, and switches via the client's media player to streaming mode using HTTP Range requests in order to improve buffering speed. The client will then fire multiple HTTP requests for different parts of the file (obviously, this only works efficiently if your servlet really supports it; a lot of homegrown file servlets don't, and may end up performing much worse).
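For illustration, a minimal sketch of what single-range support could look like in such a servlet. This is not a drop-in implementation: resolveFile is a hypothetical helper, multi-range and If-Range handling are omitted, and it assumes Servlet 3.1+ for setContentLengthLong.
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class WavStreamingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        File file = resolveFile(request); // hypothetical lookup, see below
        long start = 0;
        long end = file.length() - 1;

        // A media player sends e.g. "Range: bytes=128000-" once it switches to streaming.
        String range = request.getHeader("Range");
        if (range != null && range.startsWith("bytes=")) {
            String[] parts = range.substring("bytes=".length()).split("-", 2);
            if (!parts[0].isEmpty()) start = Long.parseLong(parts[0]);
            if (parts.length > 1 && !parts[1].isEmpty()) end = Long.parseLong(parts[1]);
            response.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT); // 206
            response.setHeader("Content-Range", "bytes " + start + "-" + end + "/" + file.length());
        }
        response.setHeader("Accept-Ranges", "bytes");
        response.setContentType("audio/wav");
        response.setContentLengthLong(end - start + 1);

        // Copy only the requested byte range to the response.
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            raf.seek(start);
            byte[] buff = new byte[10000];
            long remaining = end - start + 1;
            int bytesRead;
            while (remaining > 0
                    && (bytesRead = raf.read(buff, 0, (int) Math.min(buff.length, remaining))) != -1) {
                response.getOutputStream().write(buff, 0, bytesRead);
                remaining -= bytesRead;
            }
        }
    }

    // Hypothetical helper; adapt to however your application locates the .wav files.
    private File resolveFile(HttpServletRequest request) {
        return new File("/data/audio", request.getPathInfo());
    }
}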
As to those client abort exceptions in the server log, your best bet is to filter them out and suppress them, or at least log a one-liner at DEBUG/INFO level instead of a whole stack trace.
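One way to do that, assuming Tomcat's org.apache.catalina.connector.ClientAbortException (visible in your stack trace) and an SLF4J logger, is to catch the abort around the copy loop from the question:
import org.apache.catalina.connector.ClientAbortException;

try {
    byte[] buff = new byte[10000];
    int bytesRead;
    while (-1 != (bytesRead = bis.read(buff))) {
        bos.write(buff, 0, bytesRead);
    }
    bos.flush();
} catch (ClientAbortException e) {
    // Expected when the media player aborts the first response and
    // re-requests the file via Range requests; one line is enough.
    logger.debug("Client aborted the download: {}", e.getMessage());
}
Note that ClientAbortException is Tomcat-specific; on other containers you would catch the equivalent IOException subclass.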
See also:
How to stream audio/video files such as MP3, MP4, AVI, etc using a Servlet
ClientAbortException at application deployed at jboss with IE8 browser

Related

Ktor Server: How to find out when a client has finished downloading a file?

I want to run code when an HTTP client has finished downloading a file from Ktor Server. The simplest approach does not work:
routing {
    get("/download") {
        call.response.header(
            ContentDisposition,
            Attachment.withParameter(FileName, file.fileName).toString()
        )
        call.respondFile(file)
        // the client has not finished downloading when we reach this point
    }
}
I have already tried intercepting pipelines and writing the file out in byte arrays myself, but all of that detects a completed download too soon. If, for example, I stop the web server and related services too early, the client isn't able to actually complete the download.
So is there a way to reliably detect when the client has ACKed the last bytes of the file?

Camel AWS-S3 - Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection

I am using camel-aws to poll a remote S3 bucket to check whether a file has arrived. I am not interested in the content of the file.
from("direct:my-route").
.from("aws-s3://my.bucket?useIAMCredentials=true&useAwsKMS=true&awsKMSKeyId=my-key-id&deleteAfterRead=false&operation=listObjects&includeBody=false&prefix=test1/etmp_xi_inbound.xml")
.log(" File detected: ${header.CamelAwsS3Key}")
.end();
I have set includeBody to false so that the content of the file is not read, however I am getting the warning below:
WARN c.a.s.s.i.S3AbortableInputStream - Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
Do you have autoCloseBody set to true? It seems that newer versions of Camel auto-close the S3 connection, so having autoCloseBody=true means you are trying to close an already closed connection, hence the warning.
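If so, a sketch of the same route with the body auto-close switched off might look like this (untested; the exact option name and casing vary between Camel versions, so check the aws-s3 component docs for yours):
from("direct:my-route")
    .from("aws-s3://my.bucket?useIAMCredentials=true&useAwsKMS=true&awsKMSKeyId=my-key-id"
        + "&deleteAfterRead=false&operation=listObjects&includeBody=false"
        + "&prefix=test1/etmp_xi_inbound.xml&autoCloseBody=false")
    .log(" File detected: ${header.CamelAwsS3Key}")
    .end();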

POST with bodies larger than 64k to Express.js failing to process

I'm attempting to post some JSON to an Express.js endpoint. If the size of the JSON is less than 64k, it succeeds just fine. If it exceeds 64k, the request is never completely received by the server. The problem only occurs when running Express directly locally; when running on Heroku, the request proceeds without issue.
The problem is seen across macOS, Linux (Ubuntu 19), and Windows, and is present when using Chrome, Firefox, or Safari.
When I make requests using Postman, the request fails.
If I make the request using curl, the request succeeds.
If I make the request after artificially throttling Chrome to "Slow 3G" levels in the network settings, the request succeeds.
I've traced through Express and discovered that the problem appears when attempting to parse the body. The request gets passed to body-parser.json(), which in turn calls getRawBody to get the Buffer from the request.
getRawBody processes the incoming request stream and converts it into a buffer. It receives the first chunk of the request just fine, but never receives the second chunk; eventually the request continues parsing with an empty buffer.
The size limit on body-parser is set to 100mb, so that is not the problem. getRawBody never returns, so body-parser never gets a crack at it.
When I log the events from getRawBody, I can see the first chunk come in, but no other events are fired.
Watching Wireshark logs, all the data is getting sent over the wire, but it looks like Express is not receiving all the chunks for some reason. I think it has to be due to how Express is processing the packets, but I have no idea how to proceed.
On the off chance anyone in the future runs into the same thing: the root problem in this case was that we were overwriting req.socket with our socket.io client. req.socket is used internally by Node to transfer data. We were overwriting it in such a way that the first packets would get through, but not subsequent ones, so if the request was processed sufficiently quickly, all was well.
tl;dr: Don't overwrite req.socket.

lighttpd: disable CGI buffering

Is there a way to stop lighttpd from buffering POSTs to a CGI executable?
It seems to me that all requests are fully buffered on disk before they are forwarded to the CGI executable, which makes it impossible for me to process the input in a stream-based way.
To clarify, I'm only talking about the request that is forwarded to the CGI executable on the standard input; I've already verified that the response is not buffered like that, and streaming output is indeed possible.
The relevant option is server.stream-request-body:
server.stream-request-body = 0  # (default) buffer entire request body before connecting to backend
server.stream-request-body = 1  # stream request body to backend; buffer to temp files
server.stream-request-body = 2  # stream request body to backend; minimal buffering, might block upload
When using HTTPS, it is recommended to additionally set ssl.read-ahead = "disable".
See https://redmine.lighttpd.net/projects/lighttpd/wiki/Server_stream-request-body for details.

invalid stream header: 47455420 - Java Input Stream

Hello World!
Currently I'm writing a simple client/server application which uses sockets to do the communication. My client and my server application work fine with each other, but if I try to query my server application with a real web browser (like Mozilla Firefox), an exception occurs.
I think that my streams are not compatible with Mozilla Firefox. This little code line always leads to an IOException with the error message "invalid stream header: 47455420".
From Firefox I try to connect via: http://localhost:7777/some-webpage.html
This is my code:
server = new ServerSocket(7777);
Socket socket = server.accept();
try
{
    ObjectInputStream inputStream = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
}
catch (IOException ex)
{
    System.out.println("This exception happens :-(");
    System.out.println(ex.getLocalizedMessage());
}
Does anybody know why this happens?
Any help is appreciated.
Greetings
Benny
The ObjectInputStream expects the binary format produced by an ObjectOutputStream; you can't use a web browser to produce the binary format that it reads. The browser talks the HTTP protocol, and your server is not expecting that at all. The error message is actually a giveaway: the "invalid stream header" 47455420 is the ASCII for "GET ", the first four bytes of the browser's HTTP request line.
You probably need to learn about web services. You might find the JAX-RS support in CXF convenient for what you seem to want to do.
To just drop in to HTTP, the minimal thing to do is implement a servlet; Google would be your friend in learning about them.
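A bare-bones sketch of such a servlet, deployed in any Servlet 3.0+ container such as Tomcat (the container then speaks HTTP to the browser for you, so the stream-header problem disappears):
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Answers the URL the browser was requesting, instead of a raw socket.
@WebServlet("/some-webpage.html")
public class SomeWebpageServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/html;charset=UTF-8");
        try (PrintWriter out = response.getWriter()) {
            out.println("<html><body><h1>Hello World!</h1></body></html>");
        }
    }
}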