Is it possible to upload large files to a Ktor & Netty server? - kotlin

I was making a simple file upload & download service and found that, as far as I understand, Netty doesn't release direct buffers until request processing is over. As a result, I can't upload larger files.
I was trying to make sure that the problem is not inside my code, so I created the most simple tiny Ktor application:
routing {
    post("upload") {
        call.receiveMultipart().forEachPart {}
        call.respond(HttpStatusCode.OK)
    }
}
The default direct memory size is about 3 GB; to make the test simpler, I limit it with:
System.setProperty("io.netty.maxDirectMemory", (10 * 1024 * 1024).toString())
before starting the NettyApplicationEngine.
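Putting it together, the whole reproduction looks roughly like this (a sketch assuming the Ktor 1.2.x embeddedServer API; the port matches the httpie call below):

import io.ktor.application.call
import io.ktor.http.HttpStatusCode
import io.ktor.request.receiveMultipart
import io.ktor.response.respond
import io.ktor.routing.post
import io.ktor.routing.routing
import io.ktor.server.engine.embeddedServer
import io.ktor.server.netty.Netty

fun main() {
    // Shrink Netty's direct-memory budget so the failure reproduces
    // with a ~10 MB upload instead of a ~3 GB one.
    System.setProperty("io.netty.maxDirectMemory", (10 * 1024 * 1024).toString())

    embeddedServer(Netty, port = 42195) {
        routing {
            post("upload") {
                // Iterate over the parts without storing them anywhere.
                call.receiveMultipart().forEachPart {}
                call.respond(HttpStatusCode.OK)
            }
        }
    }.start(wait = true)
}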
Now if I upload a large file, for example with httpie, I get "Connection reset":
http -v --form POST http://localhost:42195/upload file@/tmp/FileStorageLoadTest-test-data1.tmp
http: error: ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) while doing POST request to URL: http://localhost:42195/upload
On the server side there is no information about the problem except for a "java.io.IOException: Broken delimiter occurred" exception. But if I put a breakpoint in NettyResponsePipeline#processCallFailed, the real exception is:
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 65536 byte(s) of direct memory (used: 10420231, max: 10485760)
It is a pity that this exception is not logged.
Also, I found that the same code works without problems if I use the Jetty engine instead.
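For comparison, switching engines is a one-line change (a sketch, assuming the ktor-server-jetty artifact is on the classpath):

import io.ktor.server.engine.embeddedServer
import io.ktor.server.jetty.Jetty

// Same module as above; only the engine factory changes.
embeddedServer(Jetty, port = 42195) {
    // same routing block as in the Netty example
}.start(wait = true)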
Environment:
Ubuntu Linux
Java 8
Ktor=1.2.5
netty-transport-native-epoll=4.1.43.Final
(but if Netty started without native-epoll support, the problem is the same)

Related

Ktor Server: How to find out when a client has finished downloading a file?

I want to run code when an HTTP client has finished downloading a file from the Ktor server. The simplest approach does not work:
routing {
    get("/download") {
        call.response.header(
            HttpHeaders.ContentDisposition,
            ContentDisposition.Attachment
                .withParameter(ContentDisposition.Parameters.FileName, file.fileName)
                .toString()
        )
        call.respondFile(file)
        // the client has not finished downloading when we reach this point
    }
}
I have already tried intercepting pipelines and writing the file out in byte arrays myself, but everything I tried detects a completed download too soon. If I stop the web server and related services at that point, the client is not able to actually complete the download.
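For reference, the pipeline interception looks roughly like this (a sketch against the Ktor 1.x API; the function name and log message are illustrative). It resumes as soon as the engine has accepted the response body, which is earlier than the client ACKing the final bytes:

import io.ktor.application.Application
import io.ktor.application.log
import io.ktor.response.ApplicationSendPipeline

fun Application.detectDownloadEnd() {
    sendPipeline.intercept(ApplicationSendPipeline.Engine) {
        proceed() // hand the response body to the engine
        // Reached once the engine has taken the body, not once the
        // client has ACKed the last TCP segment, hence "too soon".
        log.info("response for ${context.request.local.uri} handed to engine")
    }
}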
So is there a way to reliably detect when the client has ACKed the last bytes of the file?

Tomcat server causing broken pipe for big payloads

I made a simple Spring Boot application that returns a static JSON response for all requests.
When the app gets a request with a large payload (~5 MB JSON, 1 TP), the client receives the following error:
java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
I have tried increasing every limit I could; here are my Tomcat settings:
spring.http.multipart.max-file-size=524288000
spring.http.multipart.max-request-size=524288000
spring.http.multipart.enabled=true
server.max-http-post-size=10000000
server.connection-timeout=30000
server.tomcat.max-connections=15000
server.tomcat.max-http-post-size=524288000
server.tomcat.accept-count=10000
server.tomcat.max-threads=200
server.tomcat.min-spare-threads=200
What can I do to make this simple Spring Boot app, with just one controller, handle such payloads successfully?
The Spring Boot application and the client sending the large payload both run on an 8-core machine with 16 GB of RAM, so resources shouldn't be the problem.
This happened because the controller was returning a response without consuming the request body. The server therefore closed the connection as soon as it had received the request, while the client was still sending the rest of the body.
Solution:
1. Read the full request body in your code (see the sketch below)
2. Set Tomcat's maxSwallowSize to a higher value (default: 2 MB):
server.tomcat.max-swallow-size=10MB
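For step 1, here is a minimal sketch of a controller that consumes the body before answering (Kotlin with Spring Web; the mapping path and response are illustrative):

import org.springframework.web.bind.annotation.PostMapping
import org.springframework.web.bind.annotation.RequestBody
import org.springframework.web.bind.annotation.RestController

@RestController
class StaticJsonController {

    // Binding with @RequestBody makes Spring read the entire request body
    // before the handler runs, so the connection is not closed while the
    // client is still uploading the payload.
    @PostMapping("/static")
    fun handle(@RequestBody body: String): String = """{"status":"ok"}"""
}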

How to connect ioredis to google cloud function?

I am currently running some Google Cloud functions (in typescript) that require a connection to a Redis instance in order to LPUSH into the queue (on other instances, I am using Redis as a queue worker).
Everything is fine, except that I am getting a huge number of ECONNRESET and connection-timeout errors despite everything working properly.
The following code executes successfully on the cloud function, but I still see constant errors related to the Redis connection.
I think it is somehow related to how I am importing my client (ioredis). I have utils/index.ts and utils/redis.js, and inside redis.js I have:
const Redis = require('ioredis');
module.exports = new Redis(6380, 'MYCACHE.redis.cache.windows.net', { tls: true, password: 'PASS' });
Then I am importing this in my utils/index.ts like so: (code missing)
And exporting some async function like: (code missing)
When executing in the GCF environment, I get the expected number of results in results.length, and I can see (by monitoring Redis internally) that the list was pushed to the queue as expected.
Nevertheless, these errors continue to appear incessantly.
[ioredis] Unhandled error event: Error: read ECONNRESET
at _errnoException (util.js:1022:11)
at TLSWrap.onread (net.js:628:25)

wkhtmltopdf + ActionCable (error during websocket handshake)

SCENARIO
My Rails application has a page that takes a considerable amount of time to load. To improve the user experience, we decided to first show only a loader, with an indication of how much of the processing has been completed, instead of letting users wait blindly for the server's response. That indication is driven by Rails 5's ActionCable, and once processing is complete the content is shown.
To make this possible, a subscription to a channel is made as soon as the page loads, so that the server can report the processing status and the final result.
GOAL
Generate a PDF from that page so that we can email users with that file attached.
PROBLEM
wkhtmltopdf is being used to generate the PDF, but when it accesses the page it is unable to complete the handshake with ActionCable's WebSocket.
The following message is raised:
Warning: http://localhost:3000:0 Error during WebSocket handshake: protocol mismatch: actioncable-v1-json,actioncable-unsupported !=
The above message intrigued me because it looks as if the only protocol it would accept for the handshake is... blank?! (Notice the right-hand side of the != operator: there is nothing there!)
Under the hood, wkhtmltopdf uses the Qt WebKit browser engine. I suppose the solution would involve some configuration within WebKit, but I don't know how, or where, to set it from wkhtmltopdf.
System Stack
Linux (Ubuntu) 16.04
Rails 5 + ActionCable
wkhtmltopdf 0.12.3 (with patched qt)

Amazon S3 File Read Timeout. Trying to download a file using JAVA

New to Amazon S3 usage. I get the following error when trying to access a file on Amazon S3 using a simple Java method.
2016-08-23 09:46:48 INFO request:450 - Received successful response:200, AWS Request ID: F5EA01DB74D0D0F5
Caught an AmazonClientException, which means the client encountered an
internal error while trying to communicate with S3, such as not being
able to access the network.
Error Message: Unable to store object contents to disk: Read timed out
The exact same lines of code worked yesterday. I was able to download 100% of a 5 GB file in 12 minutes. Today I'm in a better-connected environment, but only 2% or 3% of the file is downloaded and then the program fails.
Code that I'm using to download:
s3Client.getObject(new GetObjectRequest("mybucket", file.getKey()), localFile);
You need to set the connection timeout and the socket timeout in your client configuration.
There is a reference article on configuring the AWS client; here is an excerpt from it:
Several HTTP transport options can be configured through the com.amazonaws.ClientConfiguration object. Default values will suffice for the majority of users, but users who want more control can configure:
Socket timeout
Connection timeout
Maximum retry attempts for retry-able errors
Maximum open HTTP connections
Here is an example of how to do it:
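What follows is a sketch rather than the article's verbatim code (Kotlin against the AWS SDK for Java v1; the timeout values, bucket, and key are illustrative):

import com.amazonaws.ClientConfiguration
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.amazonaws.services.s3.model.GetObjectRequest
import java.io.File

fun main() {
    // Raise both timeouts so a slow link does not abort a multi-GB download.
    val config = ClientConfiguration()
        .withConnectionTimeout(60_000)  // 1 minute to establish the connection
        .withSocketTimeout(5 * 60_000)  // 5 minutes of socket inactivity allowed
        .withMaxErrorRetry(5)           // retry retry-able failures a few more times

    val s3Client = AmazonS3ClientBuilder.standard()
        .withClientConfiguration(config)
        .build()

    s3Client.getObject(GetObjectRequest("mybucket", "path/to/key"), File("/tmp/localFile"))
}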
Related: Downloading files >3Gb from S3 fails with "SocketTimeoutException: Read timed out"