Website sometimes gets jammed (timeout) - Apache

I'm hosting my website www.xgclan.com with the latest Apache 2.4.1, and sometimes my server gets jammed: it doesn't send any data, but you don't get a timeout like you would if the Apache process weren't running.
Restarting the process resolves the issue.
It seems to happen when you open the website in multiple browsers on the same system.
I've tested it on 2 different systems to make sure it's not a bandwidth or CPU problem.

Putting the line AcceptFilter http none (without the quotes) in httpd.conf fixed the issue for me.
I found the solution here: http://www.apachelounge.com/viewtopic.php?t=4543&postdays=0&postorder=asc&start=20
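
For reference, the directive goes at the top level of httpd.conf. A minimal sketch (the https line is a companion setting commonly recommended in the same thread, not something from the original post):

    # httpd.conf - disable the AcceptEx()-based accept filter, reportedly a
    # source of hangs on some Windows builds of Apache 2.4.x
    AcceptFilter http none
    # often recommended alongside it for SSL listeners:
    AcceptFilter https none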


Apache "Failed to load resource: The network connection was lost."

First of all, my thanks to this community for helping me solve many, many issues over the years. In fact, I have never needed to post a new question - I was always able to find an answer (eventually).
Not so this time. I am a moderately experienced hobby developer, self-hosting a small set of sites on my Mac Mini (Apache 2.4, PHP 8.0, MySQL 5.6). I built a reasonably complex site (www.fundas.us/manhattanzen) and everything was working perfectly.
I then decided to add SSL encryption to my server (certificate purchased from ssl.com) and installed it with no issues. Checking the SSL configuration via "SSL Checker" and Whynopadlock.com confirms that the certificate is properly installed. The only "warning" I get is that only TLSv1 is enabled on the server, despite the fact that my httpd-ssl.conf file says "SSLProtocol all -SSLv3". I mention this in case it is the cause of my troubles.
The issue I am experiencing is that the SSL encrypted site works perfectly using Firefox and Chrome on the Mac Mini (Mojave), but fails using Safari on the same Mac and fails using any of the browsers on my iPad or iPhone. Safari's web console shows "Failed to load resource: The network connection was lost." and the server log shows "child pid XXXXX exit signal Segmentation fault (11)".
The resources that fail to load are some (but not all) of the css and js resources that reside on the local (Mac Mini) server. All other resources (residing on external servers) load fine.
I have tried a number of suggestions found on Stack Overflow, including:
changing file permissions to 777 on the offending resources (js, css files)
setting KeepAlive to Off in httpd-default.conf
minifying offending resource files
increasing SSLSessionCache in httpd-ssl.conf
None of it has made any difference. I should also point out that I have configured .htaccess in the root folder of my site to force all incoming connections to https:// (essentially the standard mod_rewrite redirect, sketched below).
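For illustration, such a rule usually follows the standard mod_rewrite pattern, roughly:

    # .htaccess - redirect any plain-http request to https
    # (a sketch, not necessarily the poster's exact rule)
    RewriteEngine On
    RewriteCond %{HTTPS} !=on
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]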
This seems like the last hurdle to make this website fully encrypted and fully functional and I am thoroughly stuck. I will appreciate any pointers you have for me. Many thanks.
I was able to figure this out and wanted to answer my own question, in case it helps anybody else.
First, the strange test results from SSL validation sites claiming my server was not TLSv1.2-ready: I fixed this by changing the SSLProtocol line in httpd-ssl.conf to explicitly permit only TLSv1.2 ("SSLProtocol all -SSLv3" --> "SSLProtocol TLSv1.2").
Second, the odd behavior of Safari (on both desktop and mobile) occasionally hanging, unable to load a page, while other browsers had no issues: I found the solution to this at https://serverfault.com/questions/937253/https-doesnt-work-with-safari. Making the recommended change to httpd-ssl.conf and adding the line "Header unset Upgrade" solved the Safari issue.
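Put together, the two changes in httpd-ssl.conf amount to something like this sketch (the Header directive requires mod_headers to be loaded):

    # offer only TLSv1.2, so the validation sites stop reporting TLSv1
    SSLProtocol TLSv1.2
    # strip the Upgrade header that trips up Safari (needs mod_headers)
    Header unset Upgrade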

wkhtmltopdf hangs at 10% and does not generate the PDF

I've been using wkhtmltopdf to generate documents here at work, in internal applications, for over a year and a half without any problem. Some applications are coded in C++, some in AutoIt3, and today, after restarting all the computers for external reasons (a power generator was going to be tested), wkhtmltopdf stopped working on all machines at my company.
I can't even run it from the command line. Whether I try to convert a web page or a local HTML file, it always hangs at 10%. All our machines run 32-bit Windows 8 with their own local install (the applications aren't running from a network share).
I tried downloading wkhtmltopdf again from the website, reinstalling it, etc., but nothing worked. I also tried adding the --disable-javascript option, which didn't help either, and cleaning the %TEMP% folder made no difference.
I've never faced anything like this. All the machines were restarted normally, via the Start menu. It doesn't look like a network issue, since I'm accessing the internet to write this; we are a small company using a standard Wi-Fi router, just like at home. Nothing was changed: no files deleted, no Windows update, no network settings... just a restart. I've seen people report the same problem when running wkhtmltopdf from PHP, but in my case the problem occurs even when running it directly from the command prompt.
wkhtmltoimage is working fine. Just wkhtmltopdf stopped working.
In my case, wkhtmltopdf was hanging on locally stored files after the progress counter had made an initial jump to the percentage corresponding to one page. It turned out that I had an http_proxy environment variable set to an inaccessible proxy server. Clearing this environment variable solved the issue.
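
On Windows (the asker's machines), the check and the fix look roughly like this; the reg delete line is only needed if the variable was set persistently:

    rem show the variable (prints the literal %http_proxy% if it is not set)
    echo %http_proxy%
    rem clear it for the current console session
    set http_proxy=
    rem remove a persistent user-level value, if any (open a new console after)
    reg delete "HKCU\Environment" /v http_proxy /f
    rem retry the conversion (URL and output name are placeholders)
    wkhtmltopdf http://example.com out.pdf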

sites down but server up, what to do?

I could ping my websites, but I got a 404 when browsing to them.
My server was running normally, and there was no new mail for root.
Is there a service to alert you when your websites are down?
What do you usually do to figure out why a site is down?
I took a look at the apache2/error.log and saw that Apache couldn't access one of the websites I had deleted a few hours earlier. I ran a2dissite for that site, restarted Apache, and it was fixed; I could access my websites again.
I either got lucky or merely postponed a problem. Any idea what I should do next to make sure everything is alright? (I'm on Debian, by the way.)
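For the record, the fix and a bare-bones alerting setup on Debian look roughly like this (site name, URL and address are placeholders; the mail line assumes a working local mailer):

    # disable the stale vhost config and reload Apache
    a2dissite deleted-site
    /etc/init.d/apache2 reload

    # minimal availability check, run from cron on another machine;
    # curl -f exits non-zero on HTTP errors such as the 404 described above
    curl -fsS --max-time 10 http://www.example.com/ >/dev/null \
      || echo "www.example.com is down" | mail -s "site down" you@example.com

Hosted monitors (Pingdom, UptimeRobot and the like) perform the same kind of check as a service.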

Apache Crash Dialog

I'm running XAMPP on my Windows machine and experiencing a problem with Apache crashing a couple of times a day. When it does, a dialog pops up and I have to manually tell Windows to end the program. After I do that, XAMPP automatically starts it back up within a couple of seconds with no issues. When it crashes while I'm not home, though, the server is down until I get back. So I have two questions:
Are periodic crashes something that should be expected, or is this indicative of another issue I should be trying to pinpoint?
If this is something I should just learn to deal with, is there a way to automatically restart httpd.exe when these issues occur, so I don't experience downtime when I'm away from home?
You'd want to look into log files, especially the Apache access and error logs, to see what happened while you were not at home. I've run into a similar situation: a problematic PHP script hosted on my server led to an Apache crash whenever someone visited the page.
I'd suggest investigating as follows (a sketch of the commands follows this list):
Search for the timestamp of the most recent Apache restart.
Check the Apache access log to see which scripts were accessed around that time.
Manually access those scripts in your browser (to see if Apache crashes again).
You should check the PHP error log as well.
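On a default XAMPP install the logs live under C:\xampp (paths are the XAMPP defaults; adjust to your install), so the investigation looks roughly like:

    rem Apache errors around the crash time
    more C:\xampp\apache\logs\error.log
    rem requests served just before the crash; replace 12:34: with the
    rem timestamp found in the error log
    findstr "12:34:" C:\xampp\apache\logs\access.log
    rem PHP errors, if error_log is enabled in php.ini
    more C:\xampp\php\logs\php_error_log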
If there is really nothing suspicious, you can alternatively try the WAMP bundle, which is also a very popular PHP development environment and is stable.
Although there aren't many cases in which one should "expect" periodic crashes, in this case you are better off reconsidering your setup. From the front page of the XAMPP site:
XAMPP is the most popular PHP development environment
Sure, you can use it as a "production" server, but XAMPP isn't built for hosting websites; it is intended as a development server, so you don't have to manually set up Apache, PHP and MySQL on your dev machine. If you actually want to run your website for the public, set up Apache/IIS, MySQL and PHP manually; those products on their own are made for running in production. Or you can consider getting some cheap shared hosting somewhere, so you don't need to set up anything.

Apache zombie processes on Debian, what is the cause?

In top I keep seeing zombie processes (never more than one at a time); they disappear quickly (within 10 seconds), but a new zombie pops up a few seconds later. My server runs 3 sites, 2 written in PHP and one in Perl, all served by Apache. For the PHP sites I use mod_rewrite to create nice-looking URLs. I have been trying to figure out which page or script causes these zombies, but I can't find it. Is there a way to connect the PID of a process to the request it was executing?
To find out what causes the zombies I stopped the Perl site and one of the PHP sites; nothing changed, the zombies kept coming. So my best guess is that I have narrowed it down to one site, but then again, maybe it has nothing to do with a particular site (I can't take the remaining site offline to check, since people are working with it).
I am running Debian on that server; this is the config:
Apache/2.2.9 (Debian) DAV/2 SVN/1.5.1 PHP/5.2.6-1+lenny8 with Suhosin-Patch
mod_ssl/2.2.9 OpenSSL/0.9.8g mod_perl/2.0.4 Perl/v5.10.0
Any help or a pointer in the right direction is greatly appreciated; I have been googling and trying things for days now (I learned a lot from it, though ;-) ).
During the quiet Christmas holidays I had the opportunity to take the third site offline for a couple of minutes. To my surprise I kept seeing zombies pop up, so it seems it has nothing to do with one site in particular but rather with some setting in Apache. Any ideas, anyone?
I just answered a very similar question:
Apache spawning zombie processes when php is called
The short answer is: it's normal.
By enabling mod_status you'll get some more details at the status URL of your server (commonly /server-status), and even the details of the last page served if you set the ExtendedStatus directive to On. But you should not leave that setting enabled for too long on a production server.
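For the Apache 2.2.x shown in the question, a minimal mod_status setup looks like this sketch (on Debian you would typically enable the module with a2enmod status):

    ExtendedStatus On
    <Location /server-status>
        SetHandler server-status
        # 2.2-style access control: restrict to localhost, widen as needed
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>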
I would also like to know: how do you know it's a zombie process? Are you sure it's not a "normal" Apache subprocess serving client requests? How many subprocesses does your Apache have?
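
To double-check that these really are zombies, something like the following lists defunct processes along with the parent that has not reaped them yet:

    # STAT containing 'Z' marks a zombie; PPID points at the parent
    # (an Apache worker, if the suspicion above is right)
    ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'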