What's the performance impact of a large Apache access.log? - apache

If a log file like access.log or error.log gets very large, will its size affect Apache's performance or users' access? From my understanding, Apache doesn't read the entire log into memory, it just keeps a file handle open for writing. Is that right? If so, I shouldn't have to remove the logs manually every time they grow large, apart from filesystem-space concerns. Please help and correct me if I'm wrong. Or is there any Apache log I/O issue I'm supposed to take care of when running it?
Thanks very much.

Well, I totally agree with you. As far as I understand, Apache accesses the log files using file handles and just appends each new message to the end of the file. That's why a huge log file makes no difference when it comes to writing to the file. But if you want to open the file or process it with some kind of log-monitoring tool, then the huge size will slow down reading it.
So I would suggest you use log rotation to get an overall better end result.
This suggestion comes directly from the Apache website.
Log Rotation
On even a moderately busy server, the quantity of information stored in the log files is very large. The access log file typically grows 1 MB or more per 10,000 requests. It will consequently be necessary to periodically rotate the log files by moving or deleting the existing logs. This cannot be done while the server is running, because Apache will continue writing to the old log file as long as it holds the file open. Instead, the server must be restarted after the log files are moved or deleted so that it will open new log files.
From the Apache Software Foundation site
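As a minimal sketch of the rotation the quoted passage describes (the log path and the name of the control script are assumptions; on some systems it is httpd/apachectl, on others apache2/apache2ctl), you move the log aside and then tell Apache to reopen its logs with a graceful restart, or alternatively pipe the log through rotatelogs so no restart is ever needed:
mv /var/log/apache2/access.log /var/log/apache2/access.log.old
apachectl graceful
CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/access.log.%Y-%m-%d 86400" combined
The first two commands are run from a shell; the CustomLog line would go in the Apache configuration and rotates the log daily without any restart.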

Related

Which file is consuming most of the bandwidth?

My website is consuming much more bandwidth than it is supposed to. From Webalizer or AWStats in WHM/cPanel I can monitor the bandwidth usage and see which types of files (jpg, png, php, css, etc.) are consuming the bandwidth, but I can't get any specific file names. My assumption is that the bandwidth usage comes from referral spamming. But on the "Visitors" page of cPanel I can only see the last 1000 hits. Is there any way to see which image or CSS file is consuming the bandwidth?
If you suspect that a particular file is consuming the most bandwidth, you can use the apachetop tool.
yum install apachetop
then run
apachetop -f /var/log/apache2/domlogs/website_name-ssl.log
Replace website_name with the domain you wish to monitor.
It basically reads the entries from the domlogs (which record the requests served for each website; you can read more about domlogs here).
This will show which files are being requested the most in real time and might give you an idea whether a particular image, PHP file, etc. is receiving the most requests.
Domlogs are a way to find out which file is being requested, and by which bot, etc. Your initial investigation may start from this point.
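If apachetop is not available, a rough alternative is to sum the bytes served per URL straight from the access log. This is only a sketch: it assumes the standard common/combined log format, where the request path is the 7th whitespace-separated field and the response size in bytes is the 10th, and the log path is just an example:
awk '{bytes[$7] += $10} END {for (u in bytes) print bytes[u], u}' /var/log/apache2/access.log | sort -rn | head -20
This prints the top 20 URLs by total bytes transferred, which usually makes the heaviest image or CSS file obvious.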

Easy Hosting Control Panel creates multiple backups

I have a server running Ubuntu 16.04, and apparently Easy Hosting Control Panel keeps creating multiple backups, something like 50 times a day, which fills the 50 GB of disk space and causes the server to crash.
The backup process creates multiple directories named Apache2.backupbyehcp inside the /etc directory.
I've tried deleting the backups one by one, but after a day they are there again.
I want to disable or limit the backups created.
Any help is greatly appreciated.
Here's a screen shot of the backup directories that are being created:
This is caused by:
EHCP trying to recover the webserver config each time it detects that the webserver config is broken or the webserver is not responding.
This can result in such unexpected/unwanted behaviour.
What to do:
First, check for problems in the webserver configs, for example with tail -f /var/log/ehcp.log,
so that you can understand what is going wrong.
This is sometimes caused by incorrect custom webserver configurations made by an admin or reseller. You may disable custom webserver configs via the ehcp GUI -> Options.
(I strongly suggest finding the cause of this.)
If everything regarding the webserver is okay, but you just need to disable this backup behaviour,
open install_lib.php in the ehcp directory, search for backupbyehcp, and disable that line.
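For example (the ehcp install path below is only a guess; use wherever ehcp is installed on your server), you could keep a copy of the file and locate the relevant line before commenting it out in an editor:
cp /var/www/new/ehcp/install_lib.php /var/www/new/ehcp/install_lib.php.orig
grep -n backupbyehcp /var/www/new/ehcp/install_lib.php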
Hope this helps.

Slow Response When IIS Doesn't Target the Default wwwroot but Uses the Project Folder as the Root

Is it possible that setting the IIS root to the same directory as the project root will cause slow performance?
I have an ASP.NET web application that runs SQL commands to GET/POST records in the local SQL database. Recently I came up with the idea that I would no longer have to start a debugging session each time I wanted to test the code if I changed the root of IIS from the default (C:\inetpub\wwwroot) to the root of the web application's project folder.
However, after doing that, I encountered a problem where some operations in the web GUI, especially those that include POST requests, get extremely slow. For example, adding a new document or rewriting an existing one in the database now takes about a minute, whereas it used to take less than 20 seconds. Also, repeated POST commands seem to make themselves slower (restarting the computer resets the situation). So I guess some read/write process may be leaving garbage behind that conflicts with other processes.
Could anyone suggest a root cause for this phenomenon? Also, please let me know if my explanation isn't clear enough to show the problem.
I have encountered a problem where some manipulation on the web GUI, especially which include POST requests, get extremely slow
Changing the root directory is very unlikely to cause this issue. Your application was already performing very slowly (20 seconds is also slow).
So there is no special phenomenon, in my opinion; you have to debug your application to find out where the delay is. To find the root cause, you can use a profiler like PerfView or a tool like DebugDiag.
In the case of DebugDiag, choose the second option in the link above to capture a memory dump. Once you have a memory dump, simply double-click the dump file and DebugDiag will do an automated analysis and tell you where the problem is in your application code, e.g. it can tell you that your DB call is taking time. If you are not able to find it, please update the question with the analysis result.
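As a command-line alternative for capturing the dump (this is not part of the DebugDiag workflow above; it assumes the Sysinternals procdump tool is available and that the application runs in an IIS worker process named w3wp.exe), you could take a full memory dump while a slow POST is in progress and then open it in DebugDiag for analysis:
procdump -ma w3wp.exe slow_post.dmp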

Server timeout when re-assembling the uploaded file

I am running a simple server app to receive uploads from a fine-uploader web client. It is based on the fine-uploader Java example and is running in Tomcat6 with Apache sitting in front of it and using ProxyPass to route the requests. I am running into an occasional problem where the upload gets to 100% but ultimately fails. In the server logs, as well as on the client, I can see that Apache is timing out on the proxy with a 502 error.
After trying this and seeing it myself, I realized the problem occurs with really large files. The Java server app was taking longer than 30 seconds to reassemble the chunks into a single file, so Apache would kill the connection and stop waiting. I have increased the Apache Timeout to 300 seconds, which should largely correct the problem, but the potential remains.
Any ideas on other ways to handle this so that the connection between Apache and Tomcat is not killed while the app is assembling the chunks on the server? I am currently using 2 MB chunks and was thinking maybe I should use a larger chunk size. Perhaps with fewer chunks to assemble, the server code could do it faster. I could test that, but unless the speedup is dramatic it seems like the potential for problems remains, and I will just be waiting for a large enough upload to come along to trigger them.
It seems like you have two options:
1. Remove the timeout in Apache.
2. Delegate the chunk-combination effort to a separate thread, and return a response to the request as soon as possible.
With the latter approach, you will not be able to let Fine Uploader know if the chunk combination operation failed, but perhaps you can perform a few quick sanity checks before responding, such as determining if all chunks are accessible.
There's nothing Fine Uploader can do here, the issue is server side. After Fine Uploader sends the request, its job is done until your server responds.
As you mentioned, it may be reasonable to increase the chunk size or make other changes to speed up the chunk combination operation to lessen the chance of a timeout (if #1 or #2 above are not desirable).
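For option 1, a minimal sketch of the Apache side (the path, backend URL, and the 600-second value are assumptions for illustration; the directives go in the vhost that proxies to Tomcat) would be to raise the timeouts rather than remove them entirely:
Timeout 600
ProxyPass /upload http://localhost:8080/upload timeout=600
ProxyPassReverse /upload http://localhost:8080/upload
The timeout= parameter on ProxyPass controls how long Apache waits for the backend on that path, so a slow chunk-combination response no longer triggers the 502.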

Impact of Apache Log Files on Server Performance

I'm running an Apache webserver under Windows Server 2003.
I noticed that my logfile is bigger than 100 MB. If I imagine that Apache always has to open this file, jump to its end, add a new line and close it again, it seems like a good idea to keep this log small.
But is this issue / possible performance penalty really that big?
Apache won't be opening and closing the file all the time - it will keep the file open with the file pointer at the end, ready to write the next line.
So the size of the logfile isn't an issue from a performance point of view.
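If you want to verify this yourself, one way (the tool names and log paths here are assumptions for illustration, not something from the answer above) is to list which process holds the log file open; on Windows the Sysinternals handle tool can do it, on Linux lsof:
handle.exe access.log
lsof /var/log/apache2/access.log
Either way you should see the Apache process holding a single long-lived handle to the file rather than reopening it for every request.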