JMeter SocketException when uploading 1GB file

JMeter 5.4.1
OpenJDK 15.0.1
My test server is configured by default to allow a maximum upload of 1073741824 bytes; this limit is configurable. My goal is to validate that the configured limit is respected.
When I configure it for 1048576 bytes and exceed that limit with my upload, the server sends the response:
"{"type":"https://tools.ietf.org/html/rfc7231#section-6.5.1","title":"One or more validation errors occurred.","status":400,"traceId":"|bd37f36b-4cd68e1b8b433669.","errors":{"":["Failed to read the request form. Multipart body length limit 1048576 exceeded."]}}"
When I configure it for 1073741824 bytes and exceed that limit with my upload, JMeter reports the following error:
java.net.SocketException: Connection reset by peer
at java.base/sun.nio.ch.NioSocketImpl.implWrite(NioSocketImpl.java:420)
at java.base/sun.nio.ch.NioSocketImpl.write(NioSocketImpl.java:440)
at java.base/sun.nio.ch.NioSocketImpl$2.write(NioSocketImpl.java:826)
at java.base/java.net.Socket$SocketOutputStream.write(Socket.java:1051)
at java.base/sun.security.ssl.SSLSocketOutputRecord.deliver(SSLSocketOutputRecord.java:342)
at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1277)
at org.apache.http.impl.io.SessionOutputBufferImpl.streamWrite(SessionOutputBufferImpl.java:124)
at org.apache.http.impl.io.SessionOutputBufferImpl.flushBuffer(SessionOutputBufferImpl.java:136)
at org.apache.http.impl.io.SessionOutputBufferImpl.write(SessionOutputBufferImpl.java:167)
at org.apache.http.impl.io.ContentLengthOutputStream.write(ContentLengthOutputStream.java:113)
at org.apache.http.entity.mime.content.FileBody.writeTo(FileBody.java:121)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl$ViewableFileBody.writeTo(HTTPHC4Impl.java:1513)
at org.apache.http.entity.mime.AbstractMultipartForm.doWriteTo(AbstractMultipartForm.java:134)
at org.apache.http.entity.mime.AbstractMultipartForm.writeTo(AbstractMultipartForm.java:157)
at org.apache.http.entity.mime.MultipartFormEntity.writeTo(MultipartFormEntity.java:113)
at org.apache.http.impl.DefaultBHttpClientConnection.sendRequestEntity(DefaultBHttpClientConnection.java:156)
at org.apache.http.impl.conn.CPoolProxy.sendRequestEntity(CPoolProxy.java:152)
at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:238)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl$2.doSendRequest(HTTPHC4Impl.java:458)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.executeRequest(HTTPHC4Impl.java:935)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:646)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:66)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1296)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1285)
at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:638)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:558)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
at java.base/java.lang.Thread.run(Thread.java:832)
My JMeter.bat file has the heap set as follows:
set HEAP=-Xms1g -Xmx4g -XX:MaxMetaspaceSize=256m
This looks similar to a post from last October, but no solution was suggested or reported there:
SocketException after sending huge request via JMeter

"Connection reset by peer" means that your server has reset the connection, so you should look for the clue in your server logs.
The only thing I would suggest on the JMeter side is to consider switching to the HTTP Raw Request sampler: it has the nice feature of streaming the file directly to the server without loading it into memory first, which you will find extremely helpful when it comes to load testing with more than one virtual user. See the HTTP Raw Request for SOAP + MTOM post on the JMeter Plugins support forum for more details.
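If it helps to see what "streaming the file without loading it into memory" looks like in practice, here is a minimal Java sketch (plain HttpURLConnection with a raw body rather than multipart, and a hypothetical file path and endpoint; this is not JMeter's or the plugin's actual code):
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamingUploadSketch {
    public static void main(String[] args) throws Exception {
        Path file = Path.of("/tmp/large-upload.bin");      // hypothetical 1GB test file
        URL url = new URL("https://example.com/upload");   // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        // Fixed-length streaming mode writes the body in chunks instead of
        // buffering the whole file in the JVM heap before sending it.
        conn.setFixedLengthStreamingMode(Files.size(file));
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        try (InputStream in = Files.newInputStream(file);
             OutputStream out = conn.getOutputStream()) {
            in.transferTo(out);                            // copy using small buffers
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
Either way, the reset itself comes from the server side (or possibly from something in between, such as a reverse proxy or load balancer with its own body-size limit), so the server logs are the right place to look.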

Related

How to configure Varnish in an API-platform project? [Response size limit issue]

Sometimes in my preproduction and production environments, the Varnish container sends me this error:
Error (null) Backend fetch failed
Backend fetch failed
Guru Meditation:
XID: (null)
This is due to the size of the response body.
So I implemented this test in my Postman test collection:
pm.test("Size is under 3Ko", function () {
pm.expect(pm.response.responseSize).to.be.below(3000);
});
This is to be sure that the error does not appear again.
But I am wondering: how can I configure Varnish properly to accept a reasonable response size?
This is my configuration:
Api Platform 2.5.1
VCL 4.0
Varnish documentation states that the default maximum size of an HTTP response is 32 KB.
You can tune this by setting the http_resp_size runtime parameter.
Here's an example of an increased http_resp_size value:
varnishd -p http_resp_size=1M
If that doesn't help, please share the varnishlog output for that specific page, as well as the associated VCL code.
If you're unsure whether or not your http_resp_size was set to the correct value, you can run the following command on your Varnish server:
$ varnishadm param.show http_resp_size
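Note that a value changed with varnishadm only lasts until the next restart, so if the new value works you will also want to add it to the varnishd startup options (the unit file path below is the typical one on systemd-based installs, but may differ on your system):
$ varnishadm param.set http_resp_size 1M
# then persist it, e.g. in /etc/systemd/system/varnish.service (or a drop-in):
# ExecStart=/usr/sbin/varnishd ... -p http_resp_size=1M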
Hope this helps.

OpenShift Online v3 - Timeout when reading response headers from daemon process

I created a Python API on OpenShift Online using the Python image. If you request all the data, it takes more than 30 seconds to respond and the server returns a 504 Gateway Timeout HTTP response. How do you configure how long a response may take? I created an annotation on the route, which seems to set the proxy timeout:
haproxy.router.openshift.io/timeout: 600s
The problem remains, but I now have logging. It looks like the message comes from mod_wsgi.
I want to try changing the httpd (mod_wsgi-express process) configuration from request-timeout 60 to request-timeout 600. Where do you configure this? I am using the base image https://github.com/sclorg/s2i-python-container/tree/master/2.7
Logging:
Timeout when reading response headers from daemon process 'localhost:8080':/tmp/mod_wsgi-localhost:8080:1000430000/htdocs
Does anyone know how to fix this error on OpenShift Online?
In addition to altering the HAProxy timeout on my app's route:
haproxy.router.openshift.io/timeout: 600s
I altered the request-timeout and socket-timeout in the app.sh of my Python application, so the mod_wsgi-express server is configured with a higher timeout:
ARGS="$ARGS --request-timeout 600"
ARGS="$ARGS --socket-timeout 600"
My application now waits 10 minutes before cancelling a request.
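For reference, the route annotation can also be applied from the CLI (the route name is a placeholder for your own route):
oc annotate route <route-name> --overwrite haproxy.router.openshift.io/timeout=600s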

G-WAN report.c statistics

I am testing G-WAN server performance and it's amazing! Here is the output from report.c:
Requests
All: 5,725 (6.06% of Cache misses)
HTTP: 66 (1.15% of all requests)
Errors: 70 (1.22% of all requests)
CSP: 5,650 (98.69% of all requests) Exceptions: 1
Connections
Accepted: 4,717 (1.21 requests per connection)
Closed: 4,372
Timeouts: 682 (14.46%) Accept:682 Read:0 Slow:0 Build:0 Send:0 Close:0
Busy: 345 (Waiting: 334 Reading: 9 Replying: 2 Sending: 0 Pushing: 0 Relaying: 0 Closing: 0)
I found that the Errors rate seems to be quite high, and an exception occurred on CSP too. Could anyone tell me what "Errors" means and how to avoid them? Thanks!
the "Errors" rate seem to be quite high
That's HTTP errors (wrong requests coming from a client, not found resources, etc. - look at the error.log file for a trace).
The only way to avoid HTTP errors is to prevent clients from connecting to the server.
If you can't live with this "high rate of HTTP errors" (1.22% of all requests), then use a G-WAN connection handler (with the HTTP_ERROR notification) to make G-WAN ignore HTTP errors and close the connection without sending an HTTP error message (just return 0; in the handler), but that's probably not what most users want.
"an exception occurred on CSP too"
An exception means a 'graceful crash report' was issued for a servlet bug. As you have only 1 crash in 5,650 dynamic requests, it probably happened during servlet development. Look at your error.log and trace files to check what happened.
Note that the "cache misses" statistics are for static content only (1.15% of all your HTTP requests).
Apparently, not all of your clients are responding in a timely fashion: you have timeouts and pending requests.

Apache, mod_ssl "request failed: error reading the headers" for a specific user

Currently we have an Apache 2.2.3 server with mod_ssl 2.2.3 running Django, with users authenticating using an x509 certificate.
So far the system has been running perfectly, except for a single user who, when trying to upload a file, receives a 400 Bad Request error. The contents of the ssl_error_log for this operation are:
[<date>] [error] [client <client ip>] request failed: error reading the headers, referer: <referrer url>
The contents of the ssl_access_log are:
<client ip> - - [<date>] "POST <target page> HTTP/1.1" 400 321
Also, the user's browser is Firefox as far as I know.
I am completely unable to reproduce this bug and so far none of the other users have experienced it. Could you point out some reasons for this to happen?
I've experienced connectivity that stops the upstream after some amount of bytes (X) is sent. X was a pretty low value: enough to request some simple pages, but not to handle AJAX requests, much less file uploads. As far as I recall, this connectivity problem occurred only when tethering (from a specific Android phone; I didn't test other phones).
So if the upstream gets interrupted and the upload stalls, it makes sense that Apache would return this error, according to this post: "Apache waits a time equal to the Timeout directive (defaults to 5 minutes if not defined) for a response from the client. It is likely Apache is waiting for the CRLF that indicates the end of the headers, yet it is never received."
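For completeness, the behaviour quoted above corresponds to the Timeout directive in httpd.conf; the value below is just Apache's documented default, not a recommendation:
# httpd.conf
Timeout 300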

Enable GlassFish Compression

How do I enable GlassFish compression? I enabled compression in the http-listener properties,
but the response did not change.
Log in to the admin console: localhost:4848
Go to Network Config > Network Listeners
Select the listener for which you want to enable gzip > HTTP tab
Check whether you have a minimum compression size set. I don't remember off the top of my head whether GlassFish has a default minimum compression size; essentially, if the resource does not exceed this value in size, it won't be compressed.
Check whether the correct compressableMimeType values are set: application/xml is not the same as text/xml, even though both are XML.
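The same settings can also be applied with asadmin instead of the console; the dotted attribute names below are from memory and may vary between GlassFish versions, so verify them with asadmin get first:
asadmin set server-config.network-config.protocols.protocol.http-listener-1.http.compression=on
asadmin set server-config.network-config.protocols.protocol.http-listener-1.http.compression-min-size-bytes=2048
asadmin set server-config.network-config.protocols.protocol.http-listener-1.http.compressable-mime-type=text/html,text/xml,application/json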
Responses to HTTP/1.0 requests are not compressed. You must send your requests over HTTP/1.1 to get gzipped responses from your GlassFish server.
Moreover, you must add the header "Accept-Encoding: gzip" to your HTTP requests.
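A quick way to verify both points at once is a command-line request (the URL is a placeholder for one of your deployed resources):
curl -v -H "Accept-Encoding: gzip" http://localhost:8080/myapp/resource -o /dev/null
# curl uses HTTP/1.1 by default; look for "Content-Encoding: gzip" in the response headers.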