Httpd process high memory usage - Apache

Having some problems with httpd (Apache/2.2) memory usage.
Over time, memory usage in the httpd processes creeps up until it eventually reaches 100%, and then Apache restarts automatically.
The problem seems to be related to one specific machine: a different machine with a similar configuration (Apache 2.2, same code, same OS version) does not exhibit this behavior.

You can set MaxRequestsPerChild to recycle the processes periodically.
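A minimal sketch of what that could look like for the prefork MPM in httpd.conf (the value 1000 is only illustrative; tune it so children are recycled before leaked memory becomes a problem):

    <IfModule prefork.c>
        # each child process exits and is replaced after serving this many requests,
        # which releases any memory that has crept up inside it
        MaxRequestsPerChild 1000
    </IfModule>

Setting it too low adds fork overhead; setting it to 0 disables recycling entirely.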

Why several PHP processes in Ubuntu?

In my Ubuntu System Monitor I am seeing 17 processes for PHP.
They are consuming a large share of my CPU and memory.
Some are labelled "php" and others "php7.1".
Is this normal? If not, what is the suggested solution?
System: Ubuntu 14.04, Apache 2.4.7 with PHP 7.1 as a module.
It depends on how you integrate PHP into your web server environment.
When 20 requests that perform PHP work are handled, PHP is executed 20 times.
The number of PHP processes should not be a problem in itself. The more interesting question is which PHP application needs that much CPU and memory.
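A hedged way to check how PHP is wired in and which processes are actually heavy (the process names php and php7.1 are taken from the question; apache2ctl is the Ubuntu name for the Apache control script):

    # is PHP loaded as an Apache module?
    apache2ctl -M 2>/dev/null | grep -i php

    # which PHP processes use the most memory (RSS) and CPU?
    ps -C php,php7.1 -o pid,rss,%cpu,cmd --sort=-rss | head

If PHP runs purely as an Apache module, the PHP work happens inside the apache2 worker processes; separate php/php7.1 processes usually come from CLI scripts, cron jobs, or PHP-FPM pools.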

Apache Tomcat Crashes In Google Compute Engine f1-micro

I am running Apache Guacamole on a Google Cloud Compute Engine f1-micro with CentOS 7 because it is free.
Guacamole runs fine for a while (an hour or so) and then unexpectedly crashes. I get an ERR_CONNECTION_REFUSED error in Chrome, and when running htop I can see that all of the Tomcat processes have stopped. To get it running again I just have to restart Tomcat.
In the Compute Engine console I have a message saying: Instance "guac" is overutilized. Consider switching to the machine type: g1-small (1 vCPU, 1.7 GB memory).
I have tried limiting the memory allocation to tomcat, but that didn't seem to work.
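For context, limiting Tomcat's heap is typically done via CATALINA_OPTS; a minimal sketch, assuming a tarball-style install where catalina.sh reads bin/setenv.sh (the 256 MB figure is purely illustrative, not taken from the question):

    # $CATALINA_BASE/bin/setenv.sh
    export CATALINA_OPTS="-Xms128m -Xmx256m"

Packaged installs on CentOS 7 often read /etc/sysconfig/tomcat instead.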
Any suggestions?
I think the ERR_CONNECTION_REFUSED is likely due to the VM instance running short on resources: in order to keep the OS up, the process manager shuts down some processes. SSH is one of those processes, and once you reboot the VM, everything resumes operation in full.
As per the over-utilization notification recommending g1-small (1 vCPU, 1.7 GB memory), please note that f1-micro is a shared-core micro machine type with 0.2 vCPU and 0.60 GB of memory, backed by a shared physical core, and is only suitable for running smaller, non-resource-intensive applications.
Depending on your Tomcat configuration, also note that:
Connecting to a database is an intensive process.
When you create a Tomcat deployment through the Google Marketplace, the default VM setting is 1 vCPU + 3.75 GB memory (n1-standard-1), so upgrading to the recommended machine type g1-small (1 vCPU, 1.7 GB memory) should be a reasonable minimum in your case.
Why was the g1-small machine type recommended? Compute Engine uses the same CPU utilization numbers reported on the Compute Engine dashboard to determine what recommendations to make. These numbers are based on the average utilization of your instances over 60-second intervals, so they do not capture short CPU usage spikes.
Applications with short usage spikes might therefore need to run on a larger machine type than the one recommended by Google, in order to accommodate those spikes.
In summary, my suggestion would be to upgrade as recommended. Also note that rightsizing gives warnings when a VM is underutilized or overutilized; in this case it is recommending that you increase your VM size due to overutilization. Keep in mind that this is only a recommendation based on the available data.
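If you decide to follow the recommendation, a hedged sketch of the resize using the gcloud CLI (the instance name "guac" comes from the question; the zone is a placeholder to replace with your own):

    # the instance must be stopped before its machine type can be changed
    gcloud compute instances stop guac --zone=us-central1-a
    gcloud compute instances set-machine-type guac --machine-type=g1-small --zone=us-central1-a
    gcloud compute instances start guac --zone=us-central1-a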

Commit transfer performance for large files to HTTP+SVN server

I have an SVN repository behind an Apache HTTPS server that stores small and large (1 GB+) files. When I commit a large file, the transfer speed is about 10 MB/s (over a 1 Gbit network link). When I look at CPU utilization on the server, it is saturated, with about 85% consumed by apache2 and some 15% by the disk driver.
I have already tried disabling Apache logging and SSL, but that didn't improve the transfer speed, which makes me think that mod_dav_svn is using most of the CPU. I have also tried increasing the number of available cores on the server (default = 1 core), but this mysteriously slows down the commits while httpd keeps using only 1 core. Setting SVNCompressionLevel 0 also didn't result in any noticeable speed improvement.
Is there any way to significantly increase the transfer speed through parallelization or some other optimization?
Server:
Debian 9.3
Apache 2.4.25
libapache2-mod-svn 1.9.5
svn repository: default FSFS config (i.e. everything commented out in fsfs.conf). The HDD can write up to 30 MB/s (hardware limited) without saturating the CPU (tested with a plain copy). The FS is NTFS, mounted via ntfs-3g with big_writes enabled, which uses some 10-15% CPU while writing ~10 MB/s.
Client:
svn 1.8.13
CPU: first-generation Intel Core @ 3.20 GHz
Obviously, I would be very pleased if I could transfer at 25-30 MB/s.
Is there any way to significantly increase the transfer speed through
parallelization or some other optimization?
Yes, there is. However, the question lacks necessary details about the SVN client and server version, the server's and FSFS repository configuration and the hardware it runs on. It is hard to tell what kind of optimizations will help in your case. You may want to upgrade your server and client to the latest versions and disable the compression in the server's config.
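As an illustration of the last point, a hedged sketch of what disabling compression on the server side might look like in the httpd configuration (the /svn location and /var/svn parent path are placeholders; SVNCompressionLevel is available in mod_dav_svn from Subversion 1.7 onwards):

    <Location /svn>
        DAV svn
        SVNParentPath /var/svn
        # do not spend server CPU compressing/decompressing svndiff data
        SVNCompressionLevel 0
    </Location>

Note that the client also has an http-compression option in its ~/.subversion/servers file; for commits, where the client is the one sending the data, disabling compression only on the server may not change much.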
FYI: VisualSVN Server in my tests can deliver 1Gbps speed.

Difference between virtual machine process and host OS process?

Suppose on my PC I have Ubuntu as the host OS. I then install a hypervisor, say VirtualBox, and deploy CentOS and Red Hat inside it as guest OSes.
Suppose CentOS and Red Hat each have 2 processes running and Ubuntu is running 3 processes. My questions are:
How many processes does Ubuntu have in total?
Is there any difference between guest OS and host OS processes?
If each guest OS runs as a single process, will it get less CPU time compared to the other processes running on the host OS?
Please clear my doubts here.
Thank you.
Well, let me clear your doubts.
First of all, there is no fixed number of processes for an OS; what you allocate to a virtual machine are cores or threads. Technically, you define how many cores or threads the virtual machine may use, and that depends on the configuration of the system you are running.
Secondly, the guest OS is what you create inside the virtual machine, while the host OS is what your laptop or PC actually runs. The host OS uses the real hardware, whereas the guest OS uses virtual hardware: the number of cores and the type and size of hard drive you defined when adding the virtual machine.
Third, as mentioned above, both the guest and the host OS work with the configuration you chose; if you assign more cores/threads when setting up your virtual machine, the guest OS will run faster.
Ideally, virtual machines are used to test and build operating-system functionality without affecting the host OS. You can think of it as your parents' house: you can live and grow there, but in the end you cannot escape the fact that their contribution is greater, and you cannot go beyond what they provide without leaving and making your own home.
Linux is a multi-threaded operating system. The host OS treats VirtualBox as just another process (with its own threads). Through VirtualBox you define the number of cores and the virtual hard disk size for the guest OS.
Since VirtualBox runs in its own threads and the other operations of the host OS run in separate threads, the effect on processing speed is small. However, I have observed big variances in processing speed on systems with low memory: every thread needs a certain amount of memory to run smoothly, so systems with more than 2 GB of RAM handle VirtualBox very well.
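To illustrate the point above, on the host you can see an entire guest show up as a single VirtualBox process; a hedged example (the process names VBoxHeadless and VirtualBoxVM depend on the VirtualBox version and on how the VM was started):

    # list VirtualBox VM processes on the Ubuntu host, heaviest memory users first
    ps -C VBoxHeadless,VirtualBoxVM -o pid,ppid,rss,%cpu,cmd --sort=-rss

The guest's own processes are internal to the VM and never appear in the host's process table.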

Apache hangs/times out when backing up website with gzip or zip?

I'm running some websites on a dedicated Ubuntu web server. If I'm remembering correctly, it has 8 cores, 16 GB of memory, and runs 64-bit Ubuntu. Content and files are delivered quickly to web browsers. Everything seems like a dream... until I run gzip or zip to back up an 8.6 GB website.
When running gzip or zip, Apache stops delivering content. Internal server error messages are delivered until the compression process is complete. During the process, I can login via ssh without delays and run the top command. I can see that the zip process is taking about 50% CPU (I'm guessing that's 50% of a single CPU, not all 8?).
At first I thought this could be a log issue, with Apache logs growing out of control and not wanting to be messed with. Log files are under 5MB though and being rotated when they hit 5MB. Another current thought is that Apache only wants to run on one CPU and lets any other process take the lead. Not sure where to look to address that yet.
Any thoughts on how to troubleshoot this issue? Taking down all my sites while backups occur is not an option, and I can't seem to reproduce this issue on my local machines (granted, they have different hardware and configuration). My hope is that this question is not too vague. I'm happy to provide additional details as needed.
Thanks for your brains in advance!
I'd suggest running your backup script under the "ionice" command. It will help prevent httpd from being starved of I/O.
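A hedged example of what that could look like for the backup described above (the paths are placeholders; -c2 -n7 means best-effort I/O scheduling at the lowest priority, and nice -n 19 additionally lowers CPU priority):

    ionice -c2 -n7 nice -n 19 tar -czf /backup/site-$(date +%F).tar.gz /var/www/mysite

This keeps the compression running but lets httpd's disk and CPU requests go first.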