GlassFish 5 error: GRIZZLY0205 Post too large

GlassFish 5 build 1, Java EE 7 + PrimeFaces 6.1: when uploading a photo of roughly 2 MB in a p:textEditor component, I always get this error:
Severe: java.lang.IllegalStateException: GRIZZLY0205: Post too large
Setting "Max Post Size" to -1 or any >1mljn value in Configurations - server config - Network Config - Network Listeners - http-listener-1 doesn't help. The same on GF 4.1

This is application/x-www-form-urlencoded content, so the parameter to set is max-form-post-size. It isn't exposed via the admin console UI, but you can configure it from the command line with asadmin:
asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.max-form-post-size-bytes=-1
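You can read the value back to confirm it was applied; a minimal check using the same dotted name as above:
asadmin get configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.max-form-post-size-bytes
A restart of the domain may be needed before the new limit takes effect.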

Related

X-Ray daemon doesn't receive any data from Envoy

I have a service running a task definition with three containers:
the service itself
Envoy
the X-Ray daemon
I want to trace and monitor how my services interact with each other using X-Ray, but I don't see any data in X-Ray.
I can see the request logs and everything else in the Envoy logs, and there are no error messages about a missing connection to the X-Ray daemon.
The Envoy container has three environment variables:
APPMESH_VIRTUAL_NODE_NAME = mesh/mesh-name/virtualNode/service-virtual-node
ENABLE_ENVOY_XRAY_TRACING = 1
ENVOY_LOG_LEVEL = trace
The X-Ray daemon container is pretty plain and has just a name and an image (amazon/aws-xray-daemon:1).
But when looking at the logs of the X-Ray daemon, there is only the following:
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] Initializing AWS X-Ray daemon 3.0.0
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] Using buffer memory limit of 76 MB
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] 1216 segment buffers allocated
2022-05-31T14:48:05.051+02:00 2022-05-31T12:48:05Z [Info] Using region: eu-central-1
2022-05-31T14:48:05.788+02:00 2022-05-31T12:48:05Z [Error] Get instance id metadata failed: RequestError: send request failed
2022-05-31T14:48:05.788+02:00 caused by: Get http://169.254.169.254/latest/meta-data/instance-id: dial tcp xxx.xxx.xxx.254:80: connect: invalid argument
2022-05-31T14:48:05.789+02:00 2022-05-31T12:48:05Z [Info] Starting proxy http server on 127.0.0.1:2000
From what I've read, the error you can see in these logs doesn't affect functionality (https://repost.aws/questions/QUr6JJxyeLRUK5M4tadg944w).
I'm pretty sure I'm missing a configuration or access right.
It's already running on staging, but I set that up several weeks ago and I can't find any differences between the configurations.
Thanks in advance!
In my case, I had made a copy-paste mistake: a trailing line break was copied into the name of the environment variable ENABLE_ENVOY_XRAY_TRACING. It wasn't visible in the overview, only inside the text field.
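One way to spot such an invisible character is to dump the registered task definition and inspect the environment variable names as raw JSON, where a stray newline shows up as \n. A minimal sketch, with my-task-definition as a placeholder for your task definition family:
aws ecs describe-task-definition --task-definition my-task-definition \
  --query 'taskDefinition.containerDefinitions[].environment' --output json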

Minio uploads through the web interface and API receive "Unauthorized request."

I can successfully upload files to my Minio server using mc command line client (logged in as root):
./mc cp roobina.jpg minio/mag
roobina.jpg: 63.50 KiB / 63.50 KiB
But when I try to upload a file to a bucket using Minio's own web interface, I receive this error:
Unauthorized request.
When using the API (in a PHP application using the Amazon S3 libraries), I receive this error:
Error:Error executing "PutObject" on "https://s3.***.net/clbu/public/4d/4b/d1ad580690058a636ad58e5af931541336ec.jpg"; AWS HTTP error: Client error: `PUT https://s3.***.net/clbu/public/4d/4b/d1ad580690058a636ad58e5af931541336ec.jpg` resulted in a `403 Forbidden` response:
Forbidden (truncated...) Unable to parse error information from response - Error parsing XML: String could not be parsed as XML
Could someone please help?
After looking at different possible causes, I found that Apache's mod_security (Apache is used as a reverse proxy in front of minio:9000) was interfering with uploads and causing the problem.
I disabled mod_security on the reverse proxy account and the problem is now solved.
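How exactly to disable it depends on your setup; as a rough sketch, assuming Apache 2.4 with mod_security2 in front of Minio (the hostname and backend address are placeholders), the relevant directive is SecRuleEngine Off scoped to the proxy virtual host:
<VirtualHost *:443>
    ServerName s3.example.net
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:9000/
    ProxyPassReverse / http://127.0.0.1:9000/
    # Turn off ModSecurity rule processing for this vhost only
    <IfModule security2_module>
        SecRuleEngine Off
    </IfModule>
</VirtualHost>
A less drastic option is SecRuleEngine DetectionOnly, or removing only the specific rules that block the uploads.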

How to configure Varnish in an API-platform project? [Response size limit issue]

Sometimes, in my preproduction and production environments, the Varnish container sends me this error:
Error (null) Backend fetch failed
Backend fetch failed
Guru Meditation:
XID: (null)
This is due to the size of the response body.
So I implemented this test in my Postman test collection:
pm.test("Size is under 3Ko", function () {
pm.expect(pm.response.responseSize).to.be.below(3000);
});
To be sure that this error does not not appear again.
But I am wondering: how can I configure Varnish properly so that it accepts a reasonable response size?
This is my configuration:
Api Platform 2.5.1
VCL 4.0
Varnish documentation states that the default maximum size of an HTTP response is 32 KB.
You can tune this by setting the http_resp_size runtime parameter.
Here's an example of an increased http_resp_size value:
varnishd -p http_resp_size=1M
If that doesn't help, please share the varnishlog output for that specific page, as well as the associated VCL code.
If you're unsure whether or not your http_resp_size was set to the correct value, you can run the following command on your Varnish server:
$ varnishadm param.show http_resp_size
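If you want to change the value on a running instance without restarting varnishd, the same CLI can also set it (the change is not persisted across restarts, so keep the -p startup flag for a permanent setting):
$ varnishadm param.set http_resp_size 1M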
Hope this helps.

CSV Data Set : Parameterize URL variables in JMeter - wrong CSV file

I am testing a backend application written in NodeJS and Java.
The communication protocols are:
WebSocket in the NodeJS part
HTTP in the Java part
In JMeter I need to parameterize the URL so I can switch between the development, preproduction and production environments. I did this using CSV files.
I created a folder containing the CSVs inside the folder where I have JMeter 5.0, and I have three CSV files in JMeter's bin folder:
development.csv
production.csv
prepod.csv
My CSV files are the following:
development.csv:
protocol, host
http, 10.219.227.66
ws, 10.219.227.66
prepod.csv:
protocol, host
https, prepod.myprepod.io
ws, prepod.myprepod.io
production.csv:
protocol, host
https, production.myproduction.io
ws, production.myproduction.io
and I have set this in JMeter:
WebSocket Open Connection
Server URL – ws
Server name or IP – ${host}
CSV Data Set Config
Filename: ${__P(environment,development)}.csv
but the project doesn't work; in the log I have:
Caused by: java.lang.IllegalArgumentException: File development.csv must exist and be readable
    at org.apache.jmeter.services.FileServer.createBufferedReader(FileServer.java:424) ~[ApacheJMeter_core.jar:5.0 r1840935]
    at org.apache.jmeter.services.FileServer.readLine(FileServer.java:340) ~[ApacheJMeter_core.jar:5.0 r1840935]
    at org.apache.jmeter.services.FileServer.readLine(FileServer.java:324) ~[ApacheJMeter_core.jar:5.0 r1840935]
    at org.apache.jmeter.services.FileServer.reserveFile(FileServer.java:272) ~[ApacheJMeter_core.jar:5.0 r1840935]
    ... 8 more
2018-10-19 14:29:30,727 INFO o.a.j.t.JMeterThread: Thread finished: Authorize success 1-1
2018-10-19 14:29:30,728 INFO o.a.j.e.StandardJMeterEngine: Notifying test listeners of end of test
2018-10-19 14:29:30,728 INFO o.a.j.g.u.JMeterMenuBar: setRunning(false, local)
What is wrong?
As per the message:
java.lang.IllegalArgumentException: File development.csv must exist and be readable at ...
it seems the test is using the default value "development", so JMeter looks for development.csv.
So I guess you're facing this in another environment; in that case you should run JMeter with this additional parameter:
-Jenvironment=production
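For example, a non-GUI run might look like this (the test plan file name is a placeholder):
jmeter -n -t my-test-plan.jmx -Jenvironment=production
The same -J flag works when starting the GUI, and ${__P(environment,development)} will then resolve to "production", so the CSV Data Set Config reads production.csv.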

Upload failed while using JMeter ZK Plugin

I'm currently facing a problem when trying to upload a file while running JMeter with the zk-plugin. Uploading works fine without JMeter.
ZK displays a pop-up message:
Upload Aborted : (contentId is required)
Inside the Jmeter:
Thread Name: Thread Group 1-1
Sample Start: 2015-04-16 17:35:15 SGT
Load time: 2
Connect Time: 0
Latency: 0
Size in bytes: 2549
Headers size in bytes: 0
Body size in bytes: 2549
Sample Count: 1
Error Count: 1
Response code: Non HTTP response code: java.io.FileNotFoundException
Response message: Non HTTP response message: 13 4 2015.txt (The system cannot find the file specified)
Response headers: HTTPSampleResult fields: ContentType: DataEncoding: null
How to fix this problem?
Basically, ZK can return messages that are not very meaningful, so this issue can have several different root causes.
Below are possible points in the deployment components' configuration; check them one by one:
First of all, check that the directory pointed to by java.io.tmpdir exists.
If you use Tomcat, java.io.tmpdir is set to $CATALINA_BASE/temp by default.
Look into catalina.sh and check that the directory pointed to by $CATALINA_TMPDIR exists and has the corresponding permissions applied:
if [ -z "$CATALINA_TMPDIR" ] ; then
  # Define the java.io.tmpdir to use for Catalina
  CATALINA_TMPDIR="$CATALINA_BASE"/temp
fi
. . .
. . .
  -Dcatalina.base=\"$CATALINA_BASE\" \
  -Dcatalina.home=\"$CATALINA_HOME\" \
  -Djava.io.tmpdir=\"$CATALINA_TMPDIR\" \
  org.apache.catalina.startup.Bootstrap "$@" start
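A quick way to verify this on the host (assuming $CATALINA_BASE is set in your shell):
ls -ld "$CATALINA_BASE/temp"
The directory must exist and be writable by the user running Tomcat.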
WEB-INF/zk.xml: the max-upload-size value in the ZK configuration descriptor (5120 KB by default, which should be enough).
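As a hedged illustration, the limit is typically raised like this in WEB-INF/zk.xml (the value is in KB, 10240 is just an example, and a negative value means unlimited):
<zk>
    <system-config>
        <max-upload-size>10240</max-upload-size>
    </system-config>
</zk>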
WEB-INF/web.xml: max-file-size and max-request-size values in deployment descriptor:
<multipart-config>
    <!-- 52MB max -->
    <max-file-size>52428800</max-file-size>
    <max-request-size>52428800</max-request-size>
    <file-size-threshold>0</file-size-threshold>
</multipart-config>
conf/server.xml: maxPostSize value in Connector section (the maximum size in bytes of the POST which will be handled by the container FORM URL parameter parsing):
<Connector port="80" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443"
maxPostSize="67589953" />
It seems we can only upload files that are inside jmeter/bin. I uploaded some files from inside jmeter/bin and the message is gone.
During recording you need to put the file you want to upload in the jmeter/bin folder. This is due to a limitation of browsers, which do not transmit the full path.
Reference: File upload fails during recording using JMeter, the first answer by pmpm