413 Request Entity Too Large: Apache load balancer in front of Tomcat

When users access the BOBJ Tomcat URL directly, AD SSO works without any issues, but when they access it through the Apache load balancer, we get the Request Entity Too Large error.
This happens for a few users, while others can log in without any issues.
Setup: an Apache load balancer configured to connect to two Tomcat servers via workers.properties.
BOBJ AD SSO is configured on the Tomcat servers.
Error: Request Entity Too Large
The requested resource
/BOE/portal/1712062105/BIPCoreWeb/VintelaServlet
does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
Configuration in Apache httpd:
LimitRequestLine 65536
LimitRequestBody 0
LimitRequestFieldSize 65536
LimitRequestFields 10000
ProxyIOBufferSize 65536
workers.properties: worker.ajp13.max_packet_size=65536
Tomcat:
Could someone help troubleshoot this error?

Possible solution!
Apache tomcat:
1. Modify /opt/tomcat/conf/server.xml:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443"
maxPostSize="209715200"
disableUploadTimeout="true"
maxHttpHeaderSize="1006384" />
2. Modify /tomcat/webapps/manager/WEB-INF/web.xml:
<multipart-config>
<!-- 100 MB max file size, 200 MB max request size -->
<max-file-size>104857600</max-file-size>
<max-request-size>209715200</max-request-size>
<file-size-threshold>0</file-size-threshold>
</multipart-config>
Nginx:
1. modify /etc/nginx/nginx.conf
2. Add client_max_body_size 200M; inside the http block:
http{
client_max_body_size 200M;
}
* Restart tomcat server
sudo systemctl restart tomcat
* Restart nginx server
sudo systemctl restart nginx

Similar issue for me, but the fix was slightly different:
worker.ajp13.max_packet_size=65536
This was actually in path/apache2/conf/extra/workers.properties (probably just a typo in the earlier answer).

The issue here is with the Apache parameter in the workers.properties file.
We had initially set this to:
worker.ajp13.max_packet_size=65536
However, the parameter must use the name of the worker as it is defined in workers.properties:
worker.<worker_name>.max_packet_size=65536
Your "site" is effectively the Tomcat instance, which we referred to as worker1 and worker2.
Once we changed the value to:
worker.worker1.max_packet_size=65536
the issue was fixed.
Hope this helps users who have configured Apache as a load balancer in front of two or more Tomcat web application clusters.
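For readers setting this up from scratch, here is a minimal workers.properties sketch for an Apache load balancer in front of two Tomcat workers. The hostnames and worker names are assumptions; adjust them to your environment.

```properties
# Hypothetical two-node Tomcat cluster behind an lb worker
worker.list=loadbalancer

worker.worker1.type=ajp13
worker.worker1.host=tomcat1.example.com
worker.worker1.port=8009
worker.worker1.max_packet_size=65536

worker.worker2.type=ajp13
worker.worker2.host=tomcat2.example.com
worker.worker2.port=8009
worker.worker2.max_packet_size=65536

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=worker1,worker2
```

Note that an AJP max_packet_size larger than the default 8192 must be matched by the packetSize attribute on each Tomcat AJP connector, or the two sides will disagree on packet length.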

I'm not 100% certain this will resolve your issue, but it seems to be related. In Tomcat's server.xml, add the following to Connector: maxHttpHeaderSize="65536". The whole line should look something like:
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000"
redirectPort="8443" compression="on" URIEncoding="UTF-8" compressionMinSize="2048"
noCompressionUserAgents="gozilla, traviata"
compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript,text/json,application/json"
maxHttpHeaderSize="65536" />

Related

Mod_jk and Tomcat stuck at Sending Reply

Currently, the server at work is underperforming, and the way it's set up is not ideal either. For this reason I'm trying to find a new way to do things that will hopefully help with both performance and deployment.
The approach I decided on is to have Tomcat instances for our webapps (currently there are two, so it'd be an instance per webapp) and use Apache as a "front". I'm not experienced in this, so it's normal that I'm having issues here and there, but so far I've managed to get this going.
What I expect is to redirect from the mysite.com index page to either mysite.com/service1 or mysite.com/service2. Service1 was set up on our test server at port 8080 and service2 at 8081. I installed Apache2 and mod_jk yesterday and set up Apache with the contents of mysite.com. Today I started on the configuration, which ended up as follows:
workers.properties
worker.list=s1
worker.s1.type=ajp13
worker.s1.port=8009
#host is localhost by default according to the documentation
jk.load
LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
JkWorkersFile /etc/apache2/workers.properties
JkLogFile /var/log/apache2/mod_jk.log
JkLogLevel debug
JkMount /service1/* s1
Service1's server.xml connector (The rest is all default)
<Connector protocol="AJP/1.3" port="8009" redirectPort="8443" />
I had more configured, but because of the errors I took a step back and am trying with only one Tomcat for now. I will add the second Tomcat and a load balancer later.
Ok, so what's going on?
I can access the server and the index page of our system with no problem. The problem is when I try to redirect to service1: it just loads without a response. If I access service1 directly on port 8080, it works properly (I tried commenting out that connector; no luck).
Looking at server-status, I see the request stuck at "W" (sending reply), and in mod_jk.log I see that the worker properly matches the request. So while my configuration seems to be right, something in between is going wrong. I don't really know if it's Apache, Tomcat, or mod_jk. I also tried to follow several guides on how to do this, but all of them got me to 404s. Looking around here and on ServerFault didn't shed much light, unfortunately, so now I'm the one asking.
Am I missing something? Should I just use another approach? I'm very new at this and at a loss right now. The configuration and the logs show that nothing is really wrong (at first glance, at least...), so I'm not entirely sure my scenario is even possible with mod_jk... Honestly, scrapping it and trying mod_proxy instead is very tempting at this point, but if I'm doing something wrong, I'd rather know where.
Additional info: running on Ubuntu Server 18.04, the latest apache2 and mod_jk available from apt (as of Apr 14), Java 1.8, and Tomcat 8.5.64.
There was a change in Tomcat last year (in versions 8.5.51 and 9.0.31) which introduced a secretRequired attribute on the AJP connector with a default of true (cf. the documentation). Hence you can either:
add a shared secret between the AJP connector and mod_jk
or add secretRequired="false" to the AJP connector:
<Connector protocol="AJP/1.3" port="8009" secretRequired="false" redirectPort="8443" />
Remark: AJP is a very old and rarely used protocol. Since your installation is pretty new, you might consider using HTTP directly (cf. this talk).
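If you go the shared-secret route instead, here is a minimal sketch; "changeme" is a placeholder value, and s1 is the worker name from the question's workers.properties:

```xml
<!-- server.xml: AJP connector with a shared secret (placeholder value) -->
<Connector protocol="AJP/1.3" port="8009" redirectPort="8443"
           secret="changeme" />
```

with the matching line on the mod_jk side:

```properties
worker.s1.secret=changeme
```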

Apache to Tomcat Cookie issue when using SSL termination through nginx

I'm testing our app on a kubernetes cluster. So we have Nginx Controller which handles SSL termination and passes HTTP traffic to Apache server. Apache server handles static content and forwards all JSP related to tomcat.
For some reason the webapp doesn't work on the first try when doing SSL termination (the website itself loads fine), but if I reload the page and try the app again, everything works on the second attempt. It doesn't load some of the automatic functions on the first attempt; this can be reproduced by clearing the cache and logging in.
I spoke to dev they mentioned it could be cookie issues.
Current setup which is not working:
Nginx controller (SSL termination) -> Apache (HTTP port 80 ) -> Tomcat (HTTP port 8080).
Setup which works fine:
Nginx controller (SSL passthrough) -> Apache (HTTPS port 443) -> Tomcat (HTTPS port 8080).
I can't get rid of Apache in between and it is really needed for the app temporarily.
What settings are required to make this work? I've tried the following:
Disable port 443 on apache
Disable 8443 ports and all redirects to port 8443 and listen only 8080
Modified web.xml to set http-only to true and secure bit to true on tomcat server.
<session-config>
<session-timeout>60</session-timeout>
<cookie-config>
<http-only>true</http-only>
<secure>true</secure>
</cookie-config>
</session-config>
Anything else that needs to be done? I've spent a day trying to troubleshoot this and couldn't figure it out yet.
Server.xml contains only these enabled lines, rest of them are commented out or defaults:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000" />
<Connector port="8009" maxThreads="2000"
enableLookups="false" redirectPort="80" protocol="AJP/1.3"/>
<Engine name="Catalina" defaultHost="localhost" jvmRoute="server001">
# Removed cluster config since they're all default
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true"
xmlValidation="false" xmlNamespaceAware="false">
<Context path="" docBase="/usr/local/tomcat/webapps/ROOT.war" debug="0" distributable="true">
</Context>
http.conf - has nothing but default since SSL is disabled. So no virtual host. Only thing that's added to http.conf is
JkMount /* ajp13
workers.properties is the following:
# - An ajp13 worker that connects to localhost:8009
worker.list=ajp13
#------ DEFAULT ajp13 WORKER DEFINITION ------------------------------
# Defining a worker named ajp13 and of type ajp13
worker.ajp13.port=8009
worker.ajp13.host=$(TOMCAT_SERVER)
worker.ajp13.type=ajp13
worker.ajp13.reply_timeout=15000
worker.ajp13.lbfactor=1
#worker.ajp13.cachesize
worker.loadbalancer.balanced_workers=ajp13
The webapp needs to communicate with Java TCP server through websocket so we have a webSocket server written in nodejs. It just forwards traffic from websocket to Java server TCP connection.
But it has its own SSL certs setup. Since by default Nginx controller on GCP doesn't deal with ssl termination for TCP services, I have configured NodeJS backend service to accept SSL traffic directly on port 1234 for example. This service runs on the tomcat server. Don't know if this is creating a conflict since they all connect to same domain name.
Your problem probably arises because the application does not understand that the request came over a secure channel.
Servlet API applications understand that a request was sent through a secure channel based on the result of ServletRequest#isSecure(). For requests that came through HTTP, this value depends on whether SSL was enabled or not.
When you use the AJP connector, this information and much more is transmitted by the Apache server. This works perfectly well in the "SSL passthrough" configuration. However, when the SSL connection terminates at NGINX, you are in the situation described by the Reverse Proxy HOW-TO:
In some situations this is not enough though. Assume there is another less clever reverse proxy in front of your web server, for instance an HTTP load balancer or similar device which also serves as an SSL accelerator.
Then you are sure that all your clients use HTTPS, but your web server doesn't know about that. All it can see is requests coming from the accelerator using plain HTTP.
If you wanted to keep this configuration for a long time, I would suggest following the aforementioned HOW-TO. For short-term usage there is a simpler solution: you need to hardcode in Tomcat's configuration that all AJP requests are secure:
<Connector port="8009"
maxThreads="2000"
enableLookups="false"
protocol="AJP/1.3"
secure="true"
scheme="https"/>
The scheme attribute tells Tomcat which scheme was used by the original client, the connector will still use AJP.
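For the longer-term fix the HOW-TO points toward, one commonly used option is Tomcat's RemoteIpValve, which sets the request's scheme and isSecure() from a forwarded header. This assumes NGINX (or Apache) sets X-Forwarded-Proto on the way in; if your proxy uses a different header name, adjust accordingly:

```xml
<!-- server.xml, inside <Engine> or <Host>: trust the proxy's forwarded scheme -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto"
       protocolHeaderHttpsValue="https" />
```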

Liferay Web Form recaptcha issue on SSL reverse proxied site

Our installation of Liferay Tomcat 6.2 EE bundle is behind an Apache HTTPD reverse proxy server with the SSL terminating at the load balancer. We do not have any SSL configuration on Tomcat 7 and are not using AJP.
We ran into an issue with using the web form portlet with the reCaptcha on the default site using SSL. The reCaptcha image was not rendered on the web form after configuring reCaptcha in the Control panel and then configuring the web form to use reCaptcha.
ReCaptcha worked on another HTTP Liferay 6.2 EE installation and site without an issue.
There were errors in the console in Firefox and Chrome:
Blocked loading mixed active content "http://www.google.com/recaptcha/api/challenge?k=asabsds50"[Learn More]
The reCaptcha call seemed to be made using http not https.
Thanks!
Liferay needs Tomcat configured in server.xml with redirectPort set to the same port Tomcat is listening on (e.g. 8080) and the secure flag set to true. Restart Tomcat and test.
The Apache reverse proxy in our case points to this port. This configuration worked: reCaptcha now renders and the web form submits successfully.
<Connector port="listeningport" protocol="HTTP/1.1"
connectionTimeout="20000" secure="true"
redirectPort="listeningport" URIEncoding="UTF-8" />
The old server.xml config was
<Connector port="listeningport" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" URIEncoding="UTF-8" />
Hope this helps

Starting automatically another tomcat instance when one goes down

I am working on a Spring-MVC application and using Tomcat to deploy it. I looked up how to create a custom maintenance site for when Tomcat is down. It involves putting Apache2 in front and relaying requests to and from Tomcat, with the maintenance site served by Apache2. That seemed like a lot of hassle just for a webpage shown when Tomcat is down.
For this reason, I created a small project and deployed it in another instance of Tomcat as ROOT.war.
I would just like to know if there is any way I can bring the maintenance Tomcat instance online when production is down.
Here is my production server.xml for reference:
<Connector port="80" protocol="HTTP/1.1" compression="force" compressionMinSize="1024"
connectionTimeout="20000"
redirectPort="443" URIEncoding="utf-8"
compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/x-javascript,application/javascript"/>
<Connector port="443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="200" compression="force"
compressionMinSize="1024" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"
keystoreFile="my-keystore.jks" keystorePass="password" URIEncoding="utf-8"
compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/x-javascript,application/javascript"
/>
<Connector port="8010" protocol="AJP/1.3" redirectPort="443" URIEncoding="utf-8"
compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/x-javascript,application/javascript"
/>
Any help would be nice. Thanks a lot.
This is a pattern commonly used for high availability, though I'm not sure you need full HA here:
Always run two Tomcats on two separate ports: one production instance and one maintenance instance.
Install HAProxy so that all requests go through it to the production server's port.
When HAProxy detects that the port is down or the server is not responding, it routes requests to the maintenance Tomcat instance instead.
This way, maintenance activity can be carried out without any issues, and if the production server goes down for some reason, traffic automatically falls over to the maintenance Tomcat instance.
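A minimal HAProxy sketch of this idea; the ports and addresses are assumptions, and the backup keyword is what makes HAProxy send traffic to the maintenance instance only when the production health check fails:

```
# /etc/haproxy/haproxy.cfg fragment (hypothetical ports)
frontend www
    bind *:80
    default_backend tomcats

backend tomcats
    option httpchk GET /
    server production  127.0.0.1:8080 check
    server maintenance 127.0.0.1:8081 check backup
```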

How do I point my web application from port number 8080 to 80?

I know this seems like a very basic question.
I have a Java EE web application running on port 8080, so to access it I have to type domainname.com:8080/DomainName. I want to access it as domainname.com, for which I'm supposed to change the port number from 8080 to 80. I made this change in server.xml in the conf folder after going through a few answers on SO:
<Connector port="80" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
But I still get a 404 error. Please help. Is there something I'm not doing/doing wrong?
I'm using Tomcat7 on a Windows server.
If there's a similar question (which I may have not come across) please post it in the comments, thanks!
You can install Apache and configure it to work with your Tomcat via the AJP port, so that Apache listens on port 80 and forwards requests to your Tomcat.
Here is a reference:
http://www.ntu.edu.sg/home/ehchua/programming/howto/ApachePlusTomcat_HowTo.html
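A minimal sketch of that setup using mod_proxy_ajp rather than mod_jk; the module paths and the /DomainName context are assumptions based on the question:

```apache
# httpd.conf fragment: Apache on port 80 forwarding to Tomcat's AJP connector
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

Listen 80
ProxyPass        "/" "ajp://localhost:8009/DomainName/"
ProxyPassReverse "/" "ajp://localhost:8009/DomainName/"
```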