LWIP httpd: how to receive a .bin file uploaded through an HTML page?

I am using the lwIP httpd CGI interface to handle HTML page POST and GET requests. I am trying to update the firmware from a .bin file uploaded through the HTML page, but after a few packets the connection closes, and there is no time to process each packet received: if I try to write the received data to flash, the next packet is never received and the connection closes. I am using
err_t httpd_post_receive_data(void *connection, struct pbuf *p)
to receive the file; each packet is 536 bytes after the header is removed.
It seems there is not enough time, or not enough buffer space, to hold the packets. I tried changing various related lwIP macros, but to no avail, and I also checked the pbuf payload size.
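One thing worth trying is lwIP httpd's manual receive-window control for POST data: with LWIP_HTTPD_SUPPORT_POST and LWIP_HTTPD_POST_MANUAL_WND enabled in lwipopts.h, you can set *post_auto_wnd = 0 in httpd_post_begin() and call httpd_post_data_recved() only after each chunk has actually been processed, so the sender is throttled while flash is being written instead of the connection being dropped. A minimal sketch, assuming those options exist in your lwIP version; /fwupdate, /done.html and write_block_to_flash() are placeholders for your own URI, response page and flash driver:

#include "lwip/apps/httpd.h"  /* plain "httpd.h" in older lwIP trees */
#include "lwip/def.h"
#include "lwip/pbuf.h"
#include <string.h>

extern void write_block_to_flash(const u8_t *data, u16_t len); /* your flash driver */

static u8_t  fw_buf[512];   /* staging buffer for one flash write */
static u16_t fw_buf_len;

err_t httpd_post_begin(void *connection, const char *uri,
                       const char *http_request, u16_t http_request_len,
                       int content_len, char *response_uri,
                       u16_t response_uri_len, u8_t *post_auto_wnd)
{
    LWIP_UNUSED_ARG(http_request);
    LWIP_UNUSED_ARG(http_request_len);
    LWIP_UNUSED_ARG(content_len);
    if (strcmp(uri, "/fwupdate") != 0) {  /* placeholder upload URI */
        return ERR_VAL;
    }
    fw_buf_len = 0;
    *post_auto_wnd = 0;  /* we re-open the TCP window ourselves, below */
    strncpy(response_uri, "/done.html", response_uri_len);
    return ERR_OK;
}

err_t httpd_post_receive_data(void *connection, struct pbuf *p)
{
    u16_t offset = 0;
    /* a pbuf may be a chain; copy it out piecewise into the staging buffer */
    while (offset < p->tot_len) {
        u16_t chunk = LWIP_MIN((u16_t)(sizeof(fw_buf) - fw_buf_len),
                               (u16_t)(p->tot_len - offset));
        pbuf_copy_partial(p, fw_buf + fw_buf_len, chunk, offset);
        fw_buf_len += chunk;
        offset += chunk;
        if (fw_buf_len == sizeof(fw_buf)) {
            write_block_to_flash(fw_buf, fw_buf_len);  /* may be slow; window stays closed */
            fw_buf_len = 0;
        }
    }
    pbuf_free(p);
    /* acknowledge only now, so httpd re-opens the window after processing */
    httpd_post_data_recved(connection, offset);
    return ERR_OK;
}

void httpd_post_finished(void *connection, char *response_uri,
                         u16_t response_uri_len)
{
    LWIP_UNUSED_ARG(connection);
    if (fw_buf_len > 0) {
        write_block_to_flash(fw_buf, fw_buf_len);  /* flush the final partial block */
    }
    strncpy(response_uri, "/done.html", response_uri_len);
}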

Related

Camel AWS-S3 - Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection

I am using camel-aws to poll a remote S3 bucket to check whether a file has arrived or not.
I am not interested in the content of the file.
from("direct:my-route").
.from("aws-s3://my.bucket?useIAMCredentials=true&useAwsKMS=true&awsKMSKeyId=my-key-id&deleteAfterRead=false&operation=listObjects&includeBody=false&prefix=test1/etmp_xi_inbound.xml")
.log(" File detected: ${header.CamelAwsS3Key}")
.end();
I have set includeBody to false so that the content of the file is not read; however, I am getting the warning below:
WARN c.a.s.s.i.S3AbortableInputStream - Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
Do you have autoCloseBody set to true? It seems that newer versions of Camel auto-close the S3 connection, so having autoCloseBody=true means you are trying to close an already-closed connection, which causes the warning.
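If so, a sketch of the consumer endpoint with the option disabled (the option is spelled autocloseBody in recent camel-aws-s3 releases; verify the exact name against your Camel version's docs):

from("aws-s3://my.bucket?useIAMCredentials=true&includeBody=false&autocloseBody=false&prefix=test1/etmp_xi_inbound.xml")
    .log(" File detected: ${header.CamelAwsS3Key}");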

POST with bodies larger than 64k to Express.js failing to process

I'm attempting to post some JSON to an Express.js endpoint. If the size of the JSON is less than 64k, it succeeds just fine. If it exceeds 64k, the request is never completely received by the server. The problem only occurs when running Express directly locally; when running on Heroku, the request proceeds without issue.
The problem is seen across macOS, Linux (Ubuntu 19), and Windows. It is present when using Chrome, Firefox, or Safari.
When I make requests using Postman, the request fails.
If I make the request using curl, the request succeeds.
If I make the request after artificially throttling Chrome to "slow 3G" levels in the network settings, the request succeeds.
I've traced through Express and discovered that the problem appears when attempting to parse the body. The request gets passed to body-parser.json(), which in turn calls getRawBody to get the Buffer from the request.
getRawBody is processing the incoming request stream and converting it into a buffer. It receives the first chunk of the request just fine, but never receives the second chunk. Eventually the request continues parsing with an empty buffer.
The size limit on body-parser is set to 100 MB, so that is not the problem; getRawBody never returns, so body-parser never gets a crack at it.
When logging the events from getRawBody, I can see the first chunk come in, but no further events are fired.
Watching Wireshark logs, all the data is sent over the wire, but for some reason Express is not receiving all the chunks. I think it must be due to how Express is processing the packets, but I have no idea how to proceed.
On the off chance anyone in the future runs into the same thing: the root problem in this case was that we were overwriting req.socket with our socket.io client. req.socket is used internally by Node to transfer data. We were overwriting it in such a way that the first packets would get through, but not subsequent ones, so if the request was processed quickly enough, all was well.
tl;dr: Don't overwrite req.socket.
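A two-line sketch of the fix (ioClient is a hypothetical property name; the only point is that req.socket itself must never be reassigned):

// Bad: clobbers the socket Node uses internally to receive request data
// req.socket = mySocketIoClient;
// Better: hang the socket.io client on a property of your own
req.ioClient = mySocketIoClient;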

How to save a Single Packet Authorisation packet using the fwknop-client?

I was trying to save an SPA packet created via the fwknop client in a client-server architecture.
The command I used is as follows:
"fwknop -A tcp/22 -D server-ip --key-gen --use-hmac --save-packet --save-packet-file filename.pkt --save-rc-stanza -vv"
The command executes successfully, but the packet is not saved; it cannot be found anywhere on the system.
I also tried appending the packet using the --save-packet-append flag, but still could not get any output.
The purpose of the above is to obtain the SPA packet and append the client certificate (asymmetric encryption) to it before it is sent.
How can I save this packet to fulfil my purpose?
Thank you.

Removing Content-Disposition causes ClientAbortException: java.net.SocketException: socket write error: Connection aborted by peer

My application serves .wav files, which are downloadable at certain URLs. I have to change the logic so that they are streamed instead of downloaded, so I will remove the Content-Disposition header that was being set explicitly.
Piece of code:
// removed
//response.setHeader("Content-Disposition", "attachment;filename=" + fileName);
bis = new BufferedInputStream(inputStream);
bos = new BufferedOutputStream(sOutputStream);
byte[] buff = new byte[10000];
int bytesRead;
// copy the file to the response output stream in 10 kB chunks
while (-1 != (bytesRead = bis.read(buff))) {
    bos.write(buff, 0, bytesRead);
}
bos.flush();
The 2nd or 3rd call to bos.write() causes:
ClientAbortException: java.net.SocketException: socket write error: Connection aborted by peer
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:402)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:449)
at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:349)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:425)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:414)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:89)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
When I debug the code, at the moment the write method fails the browser opens a player, and another, identical request is generated, which succeeds.
When Content-Disposition is set, everything works fine. Any ideas?
It's because the client aborts the request after noticing that it is actually a media file, and the client's media player switches to streaming mode via HTTP Range requests in order to improve buffering speed. The client then fires multiple HTTP requests for different parts of the file. (Obviously, this only works efficiently if your servlet really supports Range requests; a lot of homegrown file servlets don't, and may actually perform much worse.)
As to those client abort exceptions in the server log, your best bet is to filter them out and suppress them, or at least log a DEBUG/INFO one-liner instead of a whole stack trace.
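A minimal sketch of that suppression, assuming Tomcat (ClientAbortException is org.apache.catalina.connector.ClientAbortException) and whatever logger the application already uses:

try {
    bos.write(buff, 0, bytesRead);
} catch (org.apache.catalina.connector.ClientAbortException e) {
    // expected when the client switches to Range-based streaming; no stack trace needed
    logger.debug("Client aborted: " + e.getMessage());
    return;
}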
See also:
How to stream audio/video files such as MP3, MP4, AVI, etc using a Servlet
ClientAbortException at application deployed at jboss with IE8 browser

lighttpd: disable CGI buffering

Is there a way to stop lighttpd from buffering POSTs to a CGI executable?
It seems to me that all requests are fully buffered on disk before they are forwarded to the CGI executable, which makes it impossible for me to process the input in a stream-based way.
To clarify, I'm only talking about the request that is forwarded to the CGI executable on the standard input; I've already verified that the response is not buffered like that, and streaming output is indeed possible.
server.stream-request-body = 0    (default) buffer the entire request body before connecting to the backend
server.stream-request-body = 1    stream the request body to the backend; buffer to temp files
server.stream-request-body = 2    stream the request body to the backend; minimal buffering, which might block the upload
When using HTTPS, it is recommended to additionally set ssl.read-ahead = "disable".
Details: https://redmine.lighttpd.net/projects/lighttpd/wiki/Server_stream-request-body
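Putting that together, a minimal lighttpd.conf sketch (option names taken verbatim from the docs quoted above):

# stream the request body to the CGI backend with minimal buffering
server.stream-request-body = 2
# only relevant when HTTPS is in use
ssl.read-ahead = "disable"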