Tomcat 8.5.x freezes - apache

I use the Tomcat 8.5.x zip distribution (no GUI), and when I push a large volume of requests through my load-balancing Apache server, everything works fine. The problem occurs when I stop sending requests for 5-15+ minutes: the moment I try to log in or do anything, my site won't load and Tomcat won't accept the request. When I right-click in the Tomcat console window, it unfreezes and the pending requests are processed. What should I do?

Running Tomcat as a service and increasing the heap size did the trick.
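For reference, here's a sketch of both changes, assuming a standard Tomcat 8.5 zip layout on Windows; the service name, paths, and heap sizes are illustrative, not taken from the question:

```
REM CATALINA_HOME\bin\setenv.bat -- heap settings picked up at startup
set "CATALINA_OPTS=-Xms512m -Xmx2048m %CATALINA_OPTS%"

REM install Tomcat as a Windows service (run from CATALINA_HOME\bin)
service.bat install

REM or adjust the heap of an already-installed service via procrun
tomcat8.exe //US//Tomcat8 --JvmMs 512 --JvmMx 2048
```

Running as a service also avoids the console window entirely, which sidesteps the Windows console "QuickEdit" behavior where selecting text pauses the process until a click or keypress; that would be consistent with the freeze clearing on a right-click.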

Related

HTTPS connection stops working after a few minutes

I have the following setup:
Service Fabric cluster running 5 machines, with several services running in Docker containers
A public IP which has port 443 open, forwarding to the service running Traefik
Traefik terminates the SSL, and proxies the request over to the service being requested over HTTP
Here's the behavior I get:
The first request to https:// is very, very slow. Chrome will usually load it eventually, after a few timeouts or "no content" errors. Invoke-WebRequest in PowerShell usually just times out with a "The underlying connection was closed" message.
However, once it loads, I can refresh things or run the command again and it responds very, very quickly. It'll work as long as there's regular traffic to the URL.
If I leave for a bit (not sure on the time, definitely a few minutes) it dies and goes back to the beginning.
My Question:
What would cause SSL handshakes to just break or take forever? What component in this stack is to blame? Is something in Service Fabric timing out? Is it a Traefik thing? I could switch over to Nginx if it's more stable. We use these same certs on IIS, and we don't have this problem.
I could use something like New Relic to constantly send a ping every minute to keep things alive, but I'd rather figure out why the connection is dying after a few minutes.
What are the best ways to go about debugging this? I don't see anything in the Traefik log files (In DEBUG mode), in fact when it doesn't connect, there's no record of the request in the access logs at all. Any tools that could help debug this? Thanks!
Is the Traefik service healthy on all 5 nodes? Can you inspect the logs of all 5 instances? If not, this might cause the Azure Load Balancer to balance across nodes where Traefik is not listening, which would cause intermittent and slow responses. Once a healthy Traefik responds, you'll get a sticky session cookie, which will then make subsequent responses faster. You can enable Application Insights monitoring for the Traefik logs to save you crawling across all the machines: https://github.com/jjcollinge/traefik-on-service-fabric#debugging. I'd also recommend testing this without SSL to ensure Traefik can route correctly over HTTP first, and then adding HTTPS. That way you'll know whether it's something to do with the SSL configuration (i.e. mounting the certificates correctly, the Traefik toml config, trusted certificates, etc.).
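One way to narrow down which phase is stalling (DNS, TCP connect, TLS handshake, or the backend response) is curl's per-phase timers; the hostname below is a placeholder for your cluster's endpoint:

```shell
# break the request into phases: if tls dominates, the handshake itself
# is stalling; if ttfb dominates, Traefik's backend is slow to respond
curl -o /dev/null -s \
  -w 'dns=%{time_namelookup} tcp=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
  https://your-cluster.example.com/
```

Run it twice in a row: comparing a cold request against a warm one tells you which phase the idle period breaks.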

Reduce TTFB on PHP empty file

I have trouble reducing TTFB on PHP.
Even when the file is empty it takes about 100ms. It doesn't matter whether it's .php or .html; both take the same time.
I have a decent server at Hetzner; the server is not loaded and shouldn't have any problems.
Could it be a problem with cPanel?
Is it possible to get a lower TTFB with PHP 7.0 (empty file)?
I managed to reduce the connect time by changing the DNS server from HostGator (LaunchPad) to Cloudflare (it was about 100ms and is now ~1.5ms).
Any suggestions?
TTFB depends on network latency.
I've just performed a test on the web application I'm developing. Chrome's TTFB analysis showed 225ms against my test server over my LTE connection, while the same application running on my computer's local Apache server showed a 14ms TTFB in Chrome.
A ping test showed 0.015ms latency to my local machine and 64ms to the server. This is probably why your TTFB cannot be reduced below 100ms: it's your network latency, not the application. Try performing the test over a very fast connection, or use Apache ab to run it on the server console.
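To take the network out of the measurement entirely, run the test on the server itself against localhost; the URL is a placeholder, and ab is assumed to be installed (it ships with the Apache httpd package):

```shell
# 100 sequential requests straight against the local web server
ab -n 100 -c 1 http://localhost/empty.php

# or a single request with curl's phase timers
curl -o /dev/null -s -w 'ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  http://localhost/empty.php
```

If TTFB is a few milliseconds locally but ~100ms remotely, the time is network latency, not PHP.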

Why would Apache be slow when application server is quick?

We are using Apache as the web server, and it proxies requests to Jboss (think Tomcat) Java application server using AJP.
We have logging on for Apache and for our web application in Jboss.
We are seeing, not always but sometimes, cases where the processing time for a request in JBoss is less than half a second, yet the Apache log shows the same request taking over 8 seconds to complete.
I can't even think of where to start looking, and I haven't come up with a good Google search to work out why Apache is sitting on the request for so long. Any help appreciated.
Disclaimer: Educated guess taken from my experience with running such setups.
Preface
Apache can be configured to allow only a limited number of simultaneous connections. In fact, this is a prudent way to configure Apache, since every connection uses a certain amount of resources, and having no upper limit puts you at risk of a situation where your main memory is exhausted and your server becomes unresponsive.
Resource exhaustion
That being said, Apache is usually configured as shown below; your numbers and modules may differ, but the principle still applies.
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>
This indicates that Apache can process at most 150 concurrent connections.
If a client initiates the 151st connection, the operating system kernel tries to hand it to the Apache process, but Apache won't accept any more connections. The kernel then queues the connection until Apache closes another one.
The time it takes until the kernel can successfully complete the connection looks, to the user, as if the request itself takes longer to complete.
The application server, on the other hand, doesn't know about the delay and receives the request only after the connection has been established. To the application server, therefore, everything looks normal.
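You can check whether this is what's happening by counting established connections while a request is stalling and comparing against MaxClients. Port 80 and the process name apache2 are assumptions; yours may be httpd on port 443:

```shell
# number of clients currently connected to Apache on port 80
ss -tan state established '( sport = :80 )' | wc -l

# with mpm_prefork each connection is one process, so this is a rough proxy too
ps -C apache2 --no-headers | wc -l
```

If the first number sits at MaxClients during the slow periods, you have found the bottleneck.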
If you don't have enough resources to increase the concurrent connections in Apache, consider switching to a more resource-efficient web-proxy, like nginx or Varnish.
I don't think Apache is actually slow in your case. I guess you are using keep-alive connections between Apache and JBoss. Under some circumstances, for example when the connector uses a blocking I/O strategy and the number of Apache httpd processes is higher than the number of executor threads configured in the JBoss connector, a JBoss container thread can stay blocked after it has served a request. You should post your Apache and JBoss configurations in order to get more specific answers.
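If that is the situation, the usual fix is to size the AJP connector's thread pool on the JBoss/Tomcat side at least as large as Apache's MaxClients, or to cap idle keep-alive connections. A sketch of the relevant server.xml connector; the numbers are illustrative only:

```
<!-- server.xml: AJP connector sized to match Apache's MaxClients=150 -->
<Connector port="8009" protocol="AJP/1.3"
           maxThreads="150"
           keepAliveTimeout="60000" />
```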

Should I run Tomcat by itself or Apache + Tomcat?

I was wondering if it would be okay to run Tomcat as both the web server and the servlet container? On the other hand, it seems that the right way to scale your webapp is to have Apache HTTP Server listening on port 80 and connect it to Tomcat listening on another port?
Are both ways acceptable? What is being used nowadays? What's the main difference? How do most major websites handle this?
Thanks.
Placing an Apache (or any other webserver) in front of your application server(s) (Tomcat) is a good thing for a number of reasons.
First consideration is about static resources and caching.
Tomcat will probably also serve a lot of static content, and even for dynamic content it will send caching directives to browsers. However, each browser that hits your Tomcat for the first time causes Tomcat to serve the static files itself. Since processing a request is a bit more expensive in Tomcat than in Apache (Apache is super-optimized and exploits very low-level facilities not always available to Tomcat, and Tomcat extracts much more information from the request than Apache needs, etc.), it may be better for static files to be served by Apache.
Since configuring Apache to serve part of the content and Tomcat the rest of the URL space is a daunting task, it is usually easier to have Tomcat serve everything with the right cache headers, and to put Apache in front of it capturing the content, serving it to the requesting browser, and caching it, so that other browsers hitting the same file are served directly by Apache without even disturbing Tomcat.
Beyond static files, a lot of dynamic content may not need to be regenerated every millisecond. For example, a JSON document loaded by the homepage that tells the user how much stuff is in your database is an expensive query performed thousands of times, but it can safely be refreshed every hour or so without making your users angry. So Tomcat can serve the JSON with a proper one-hour caching directive, and Apache will cache the JSON fragment and serve it to any browser requesting it during that hour. There are obviously plenty of other ways to implement this (a caching filter, a JPA cache that caches the query, etc.), but sending proper cache headers and using Apache as a caching reverse proxy is quite easy, REST-compliant, and scales well.
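A minimal sketch of that caching reverse proxy in Apache, assuming mod_cache/mod_cache_disk and a Tomcat on localhost:8080 (paths and port are assumptions):

```
# honour the Cache-Control headers Tomcat sends; keep cache hits on disk
LoadModule cache_module modules/mod_cache.so
LoadModule cache_disk_module modules/mod_cache_disk.so

CacheEnable disk /
CacheRoot   /var/cache/apache2/mod_cache_disk

ProxyPass        / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
```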
Another consideration is load balancing. Apache comes with a nice load-balancing module that can help you scale your application across a number of Tomcat instances, provided your application can scale horizontally or run on a cluster.
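With mod_proxy_balancer that looks roughly like this; the worker hostnames are placeholders:

```
<Proxy "balancer://mycluster">
    BalancerMember "ajp://tomcat1.internal:8009"
    BalancerMember "ajp://tomcat2.internal:8009"
</Proxy>
ProxyPass        "/app" "balancer://mycluster/app"
ProxyPassReverse "/app" "balancer://mycluster/app"
```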
A third consideration is URLs, headers, etc. From time to time you may need to change some URLs, or remove or override some headers. For example, before a major update you may want to disable browser caching for a few hours to stop browsers from using stale data (just as you would lower a DNS TTL before switching servers), move the old application to another URL space, or rewrite old URLs to new ones where possible. While reconfiguring the servlets in your web.xml is possible, and filters can do wonders, if you are using a framework that interprets the URLs you may need to do a lot of work on your sitemap files or similar.
Having Apache or another web server in front of Tomcat lets you do much of this by changing only the Apache configuration, with modules like mod_rewrite.
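Both kinds of change are a couple of lines in the Apache config; the rules below are made-up examples, not taken from the question:

```
# move the old application to a new URL space
RewriteEngine On
RewriteRule ^/oldapp/(.*)$ /newapp/$1 [R=301,L]

# temporarily disable browser caching around a major release
Header set Cache-Control "no-store"
```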
So, I always recommend putting Apache httpd in front of Tomcat. The small overhead in connection handling is usually recovered through resource caching, and the additional configuration work is repaid the first time you need to move URLs or manipulate headers.
It depends on your network and how you wish to have security set up.
If you have a two-firewall DMZ, with applications deployed inside the second firewall, it makes sense to have an Apache or IIS instance between the two firewalls to handle security and proxy calls to the app server. If it's acceptable to put the Tomcat instance in the DMZ, you're free to do so. The only downside I see is that you'll have to open a port in the second firewall to reach a database inside; that might put the database at risk.
Another consideration is traffic. You don't say anything about traffic, server sizing, or possible load balancing and clustering. A load balancer in front of a cluster of app servers is more likely to be kept inside the second firewall. A Tomcat instance is capable of handling traffic on its own, but there are always volume limitations depending on the hardware it's deployed on and what the application does with each request. It's almost impossible to give a yes-or-no answer without more detailed, application-specific information.

How to Detect cause of 503 Service Temporarily Unavailable error and handle it?

I am getting the error 503 Service Temporarily Unavailable many times in my application, and I want to find out why it occurs. How? Is there a log file or something like that? I am not familiar with Apache.
The second thing: is it possible to handle this error so that Apache is restarted when it occurs?
There are of course Apache log files. Search your Apache configuration files for the 'Log' keyword; you'll certainly find plenty of them. Locations vary depending on your OS and installation (on a typical Linux server it would be /var/log/apache2/[access|error].log).
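Once you've found the access log, you can pull out just the 503 responses; in the default common/combined log format the status code is the 9th whitespace-separated field (the log path below is an assumption):

```shell
# timestamp and URL of every request Apache answered with a 503
awk '$9 == 503 {print $4, $7}' /var/log/apache2/access.log
```

Correlating those timestamps with the error log usually points straight at the failing backend.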
A 503 error in Apache usually means the proxied page/service is not available. I assume you're using Tomcat, which means Tomcat is either not responding to Apache (timeout?) or not available at all (down? crashed?). So chances are it's a configuration error in the way Apache and Tomcat are connected, or an application inside Tomcat that isn't even sending a response back to Apache.
Sometimes, on production servers, it can also be that you get more traffic than the Tomcat server can handle: Apache accepts more requests than the proxied service (Tomcat) can process, so the backend becomes unavailable.
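If the 503s come from Apache having marked the backend worker as failed, the mod_proxy retry and timeout parameters are worth checking; the backend address below is a placeholder:

```
# retry=0: probe the backend again immediately instead of serving
# 503s for the default 60 seconds after a single failure
ProxyPass / http://localhost:8080/ retry=0 timeout=30
```

Also note that restarting Apache on a 503 usually won't help: the failure is almost always on the Tomcat side.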