Permanently limit catalina.out file size for Tomcat / Apache

I'm using Tomcat 7 on a virtual machine running Ubuntu. Right now I have a catalina.out file at /var/lib/tomcat7/logs/catalina.out that is over 20 GB in size. I first tried rotating the log file following this tutorial, then found out that the rotation only runs at the end of the night, and even running it manually didn't really do much. So I removed the file, but it reappeared after I restarted Tomcat.
I then tried what the accepted answer here suggested, which was to edit conf/logging.properties and change the following line from:
.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
to
.handlers = 1catalina.org.apache.juli.FileHandler
This seemed to work for a while, at least until I restarted my virtual machine. Once that happened, I ended up with another 20 GB catalina.out file.
Is there a proven way to either keep the file from growing above 5 MB or at least limit its size at all?

MuleSoft explains this in Rotating Catalina.out:
There are two answers.
The first, which is more direct, is that you can rotate Catalina.out by adding a simple pipe to the log rotation tool of your choice in Catalina's startup shell script. This will look something like:
"$CATALINA_BASE"/logs/catalina.out WeaponOfChoice 2>&1 &
Simply replace "WeaponOfChoice" with your favorite log rotation tool.
The second answer is less direct, but ultimately better. The best way to handle the rotation of Catalina.out is to make sure it never needs to rotate. Simply set the "swallowOutput" property to true for all Contexts in "server.xml".
This will route System.err and System.out to whatever Logging implementation you have configured, or JULI, if you haven't configured it. All commons-logger implementations rotate logs by default, so rotating Catalina.out is no longer your problem.
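As a minimal sketch of that second suggestion, assuming an application deployed as myapp (the path and docBase below are placeholders), it comes down to one attribute on each Context in conf/server.xml:
<!-- conf/server.xml, inside the <Host> element; path and docBase are placeholders -->
<Context path="/myapp" docBase="myapp" swallowOutput="true"/>
With swallowOutput enabled, whatever the application prints to System.out and System.err is routed through JULI, which writes dated log files that are much easier to clean up than a single ever-growing catalina.out.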

Related

Easy Hosting Control Panel creates multiple backups

I have a server running Ubuntu 16.04, and apparently Easy Hosting Control Panel keeps creating backups, around 50 times a day, which fills the 50 GB of disk space and causes the server to crash.
The backup process creates multiple directories named Apache2.backupbyehcp inside the /etc directory.
I've tried deleting the backups one by one, but after a day they're back again.
I want to disable or limit the backups created.
Any help is greatly appreciated.
This is caused by:
EHCP trying to recover the webserver config each time it detects that the webserver config is broken or the webserver is not responding.
This can result in such unexpected/unwanted behaviour.
What to do:
First, check for problems in the webserver configs, for example with tail -f /var/log/ehcp.log,
so that you can understand what is going wrong.
This is sometimes caused by incorrect custom webserver configurations made by an admin or reseller. You can disable custom webserver configs via the EHCP GUI -> Options.
(I strongly suggest finding the cause of this.)
If everything regarding the webserver is okay, but you just need to disable this backup,
open install_lib.php in the EHCP directory, search for backupbyehcp and comment out that line.
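For example (a sketch; the EHCP install path varies between setups, so treat /var/www/new/ehcp as a placeholder):
# watch EHCP's own log to see why it keeps deciding the webserver is broken
tail -f /var/log/ehcp.log
# locate the backup call before disabling it (run from the EHCP install directory)
cd /var/www/new/ehcp
grep -n backupbyehcp install_lib.php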
Hope this helps.

Apache2: upload restarts over and over

We are using various upload scripts based on the Perl CGI module for our CMS and have not encountered a problem like this for years.
Our customer's employees are not able to complete a successful upload.
No matter what kind or size of file, no matter which browser they use, no matter whether they do it at work or log in from home.
If they try to use one of our system's upload pages, the following happens:
The upload seems to work until approximately 94%. Suddenly the upload restarts, and the same procedure happens over and over again.
A look in the error log shows this:
Apache2::RequestIO::read: (70007) The timeout specified has expired at (eval 207) line 5
The weird thing is that if I log in to our customer's system using our VPN tunnel, I can never reproduce the error (I can from home, though).
I have googled without much success.
I checked the Apache Timeout setting; it was at 300 seconds, which is more than generous.
I even checked the Content-Length field for a value of 0, because I found a forum entry referring to a CGI bug related to a Content-Length of 0.
Now I am really stuck and running out of ideas.
Can you give me some new ones, please?
The Apache server is version 2.2.16, and the Perl CGI module is version 3.43.
We are using mod_perl.
As far as we knew, our customer didn't use any kind of load balancing.
Without letting anyone else know, our customer's infrastructure department had activated a load balancer. As a result, requests went to different servers and timed out.
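Purely as an illustration of the class of fix (the balancer in question was not ours, and the hostnames below are placeholders): if it were Apache's own mod_proxy_balancer, pinning each client to a single backend with sticky sessions would avoid the mid-upload switch between servers.
# sticky sessions with mod_proxy_balancer: each client keeps hitting the same backend
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://cmscluster>
    BalancerMember http://backend1.example.com route=web1
    BalancerMember http://backend2.example.com route=web2
    ProxySet stickysession=ROUTEID
</Proxy>
ProxyPass /upload balancer://cmscluster/upload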

POST or PUT operations from s3cmd fail on a certain internet connection

I'm currently automating a build script to push resources to Amazon S3 and am using s3cmd (http://s3tools.org/s3cmd and https://github.com/s3tools/s3cmd), which I understood to be the usual command-line tool for this. Nothing too complicated, and I had done most of the testing out of the office, but as soon as I got in there, the whole thing started to fall apart, and I'm thoroughly confused as to why.
A simple command like the following (with 'mybucket' existing on S3 and 'file.ext' existing in the directory I'm running the command from),
s3cmd put file.ext s3://mybucket/
was failing with either
[Errno 104] Connection reset by peer
or
[Errno 32] Broken pipe
I know there's an issue with S3 and files over 5 GB in size, but these files are nowhere near that; they're less than 1 MB, never mind 1 GB. The really weird thing was that another program, http://www.bucketexplorer.com/, worked perfectly, doing the exact same operations on the same network.
Weirder still, to test everything out I tethered my laptop to my phone's 3G connection and straight away everything worked perfectly; and when I got home and tested the commands again there, they worked perfectly too.
Any idea as to what might be causing this error on our work network, with s3cmd, but not Bucket Explorer?
There can be many reasons for this error, such as TCP window scaling or DNS propagation.
I was able to work around this by using a small multipart chunk size of 5 MB:
s3cmd put --multipart-chunk-size-mb=5 file.ext s3://mybucket/
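If it helps, the same chunk size can also be set persistently in s3cmd's configuration file instead of on every invocation; the multipart_chunk_size_mb key below is s3cmd's own setting, assuming a version recent enough to support multipart uploads:
# ~/.s3cfg
[default]
multipart_chunk_size_mb = 5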

What's the performance impact of a large Apache access.log?

If a log file like access.log or error.log gets very large, will its size impact Apache's performance or users' access? From my understanding, Apache doesn't read the entire log into memory, but just uses a file handle to append to it. Is that right? If so, I shouldn't have to remove the logs manually every time they get large, apart from filesystem concerns. Please correct me if I'm wrong. Or is there any Apache log I/O issue I should take care of when running it?
Thanks very much.
Well, I totally agree with you. To my understanding, Apache accesses the log files through file handles and just appends new messages at the end of the file. That's why a huge log file makes no difference where writing to the file is concerned. But if you want to open the file or read it with some log monitoring tool, the huge size will slow down the process of reading it.
So I would suggest you use log rotation for an overall better result.
This suggestion is directly from the Apache website.
Log Rotation
On even a moderately busy server, the quantity of information stored in the log files is very large. The access log file typically grows 1 MB or more per 10,000 requests. It will consequently be necessary to periodically rotate the log files by moving or deleting the existing logs. This cannot be done while the server is running, because Apache will continue writing to the old log file as long as it holds the file open. Instead, the server must be restarted after the log files are moved or deleted so that it will open new log files.
From the Apache Software Foundation site
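As a concrete way to rotate without the restart mentioned above, the same documentation describes piped logs; here is a sketch using Apache's bundled rotatelogs tool (the paths are the usual Debian/Ubuntu ones and may differ on your system):
# start a new access log every 24 hours (local time)
CustomLog "|/usr/sbin/rotatelogs -l /var/log/apache2/access.%Y-%m-%d.log 86400" combined
# or rotate the error log whenever it reaches 5 MB
ErrorLog "|/usr/sbin/rotatelogs /var/log/apache2/error.log 5M"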

Impact of Apache Log Files on Server Performance

I'm running an Apache webserver under Windows Server 2003.
I noticed that my logfile is bigger than 100 MB. If I imagine that Apache always has to open this file, jump to its end, add a new line, and close it again, it seems a good idea to keep this log small.
But is this potential performance hit really that big?
Apache won't be opening and closing the file all the time; it will keep the file open with the file pointer at the end, ready to write the next line.
So the size of the logfile isn't an issue from a performance point of view.
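That said, if you want to keep any single file small without restarting Apache, the rotatelogs tool that ships with Apache (bin/rotatelogs.exe in the Windows builds) can be attached as a piped logger; the log path below is a placeholder:
# httpd.conf: hand the access log to rotatelogs and start a new file at 10 MB
CustomLog "|bin/rotatelogs.exe C:/Apache2/logs/access.log 10M" common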