CentOS server hung due to postfix/sendmail spam emails - apache

My CentOS server is running web applications in a LAMP stack. A couple of days back, the server stopped responding for about 10 minutes and I got an HTTP response failure alert from my monitoring tool. When I checked the httpd error log I found a huge number of log entries (~12000 lines) related to sendmail.
14585 sendmail: fatal: open /etc/postfix/main.cf: Permission denied
The server ran out of memory and stopped responding.
14534 [Fri Aug 19 22:14:52.597047 2016] [mpm_prefork:error] [pid 26641] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
14586 /usr/sbin/sendmail: error while loading shared libraries: /lib64/librt.so.1: cannot allocate version reference table: Cannot allocate memory
We are not using sendmail in any of our applications. How can I stop this attack in the future?
Thank you in advance!

Sorry, I have no comment privileges; it looks like one of your website pages is vulnerable to code injection, and finding out where and on which page may be a big job. Focus on input (form) variables, and always sanitize input variables before using them! P.S. PHP uses "sendmail": even if you use Postfix, PHP will call a sendmail binary to send mail, and that binary hands the message over to Postfix. If your forms work well and the 12k error log lines came out of the blue, then I would think someone is trying to inject code through your website (which happens all the time, by the way).
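A rough way to narrow down which page is being abused is to correlate POST requests in the Apache access log with the time of the sendmail burst. The log path below is the CentOS default and the time window is taken from the error log timestamps quoted above; both are assumptions you may need to adjust:
# List the URLs that received POST requests around the time of the incident and count them,
# so the most-hit form (the likely injection target) bubbles to the top.
grep '19/Aug/2016:22:1' /var/log/httpd/access_log | awk '$6 ~ /POST/ {print $7}' | sort | uniq -c | sort -rn | head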

Related

ZShmStorage(): Fatal error: failed to allocate semaphore. Permission denied

I've read numerous posts about Semaphore issues, but none appear to be related to this particular issue. I am running Apache 2.4 and PHP 7.1 as part of Zend Server CE, and when starting the Apache service, the following fatal error shows up in the Apache error log:
ZShmStorage(): Fatal error: failed to allocate semaphore. Permission denied
I'm guessing that it's related to shared memory storage, but I'm unclear on:
- what it is actually trying to allocate the semaphore for
- where to look for configuration relating to said semaphore
I thought maybe it was related to the Zend Cache, so I made sure that the permissions looked correct for the [ZendServerPath]/tmp/datacache.
I also thought maybe it was related to the PHP Sessions, so I made sure that the directory specified for the session.save_path in the [ZendServerPath]/etc/php.ini file had proper permissions as well.
I'm unsure where else to look, as there doesn't appear to be any discussion (that I've found) relating to this; instead it's all about available space and whatnot. On the off chance that it was related to something that borked during allocation, I went ahead and executed ipcs -s and noted that there were ~20 semaphore arrays present while the Zend Server was running. I stopped the service and ran ipcs -s again -- no semaphore arrays were left lingering. I restarted the Zend Server service, the error was encountered again, and the ~20 semaphore arrays were reallocated.
Any assistance or info to point me in the right direction would be appreciated.
I know this is too late to help #amorio, but maybe it'll help someone else.
The error is specifically from ZShmStorage(), which means it's not Apache's or PHP's fault. It really doesn't help that the error message is a bit misleading.
A bit more context on the error, at least in my case. I'm betting it's the same using php(-cgi).
Starting php-fpm (/usr/local/zend/sbin/php-fpm --nodaemonize -c /usr/local/zend/etc/php-7.1-fpm.ini --pid /usr/local/zend/var/run/php-fpm.pid --fpm-config /usr/local/zend/etc/php-7.1-fpm.conf) [10.11.2020 18:18:05 ERROR] ZShmStorage(): Fatal error: failed to allocate semaphore. Permission denied
<br />
<b>Warning</b>: Could not initialize Zend Data Cache: Fatal error: failed to allocate semaphore. Permission denied in <b>Unknown</b> on line <b>0</b><br />
Zend Server installed to /usr/local/zend.
I ran into this same issue on ZendServer 9.1.10 on Ubuntu 16.04. In my case, I had to fix permissions on /usr/local/zend/tmp/datacache so www-data (apache user) had write access.
/usr/local/zend/etc/conf.d/datacache.ini is where the save path is defined.
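As a rough sketch of that fix, using the paths and the www-data user from this particular setup (adjust for yours):
# Give the Apache user write access to the Zend Data Cache save path
# (the path itself is defined in /usr/local/zend/etc/conf.d/datacache.ini).
sudo chown -R www-data:www-data /usr/local/zend/tmp/datacache
sudo chmod -R u+rwX /usr/local/zend/tmp/datacache
# Restart Zend Server with whichever control script or service your install uses, then re-check:
ipcs -s    # the ~20 semaphore arrays should now be allocated without the Permission denied error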

What happens AFTER Apache says "Script timed out before returning headers" to the running script?

I have a Perl web app served by Apache httpd using plain mod_cgi or, optionally, mod_perl with PerlHandler ModPerl::Registry. Recently the app encountered the error Script timed out before returning headers on some invocations and behaved differently afterwards: some requests seemed to be processed successfully in the background after httpd sent status 504 to the client, while others didn't.
So how exactly does httpd behave AFTER it has reached its configured timeout and sent the error to the client? The request/response cycle is finished at that point, so I guess things like KeepAlive come into play to decide whether the TCP connection stays alive or not. But what happens to the running script, and in which environment, e.g. mod_cgi vs. mod_perl?
Especially with mod_cgi, where a new process is started for each request, I would have guessed that httpd simply keeps the processes running. Because all our Perl files have a shebang, I'm not even sure whether httpd is able to track those processes and actually does so. That could be completely different with mod_perl, because in that case httpd is aware of the interpreters and what they are doing. In fact, the same operation which timed out using plain mod_cgi succeeded using mod_perl without any timeout, but even with a timeout in mod_cgi at least one request succeeded afterwards as well.
I find this question interesting for runtimes other than Perl as well, because they share the concept of plain mod_cgi vs. some persistent runtime embedded into the httpd processes or running as external daemons.
So, my question is NOT about how to make the error message go away. Instead I want to understand how httpd behaves AFTER the error has occurred, because I can't seem to find much information on that topic. Everything is just about increasing configuration values and trying to avoid the problem in the first place, which is fine, but not what I need to know right now.
Thanks!
Both mod_cgi and mod_cgid set a cleanup function on the request scope to kill the child process, but they do it in slightly different ways. This happens shortly after the timeout is reported (allowing a little time for mod_cgi to return control, the error response to be written, the request to be logged, etc.).
mod_cgi uses a core facility in httpd that sends SIGTERM, sleeps for 3 seconds, then sends SIGKILL.
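A minimal sketch for observing that sequence yourself under mod_cgi, assuming you can drop a test script into your CGI directory and have a Timeout low enough to trigger easily; the script path and log file are made up for the illustration:
#!/bin/bash
# Hypothetical test CGI (e.g. /usr/lib/cgi-bin/timeout-test.cgi): it deliberately sleeps past
# Apache's Timeout without printing headers, then logs which signals arrive afterwards.
LOG=/tmp/timeout-test.log
trap 'echo "$(date +%T) got SIGTERM, ignoring it" >> "$LOG"' TERM
echo "$(date +%T) started (pid $$)" >> "$LOG"
sleep 600 &    # sleep in the background and wait, so the trap can run while we block
wait
echo "$(date +%T) still alive after SIGTERM" >> "$LOG"
sleep 600 &    # the SIGKILL ~3 seconds later cannot be trapped, so the evidence is simply
wait           # that the log stops here and the pid disappears from ps
Requesting the script and tailing /tmp/timeout-test.log should show the SIGTERM line right after the 504 is returned, and the process vanishing from ps roughly 3 seconds later.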

Weird ~ Apache cannot find the actually existing "bash" to execute my CGI file ~

Maybe it's too easy for you to answer.
My problem is about CGI and the Apache web server.
To keep it simple: I have an HTML file "form.html" containing a form. To access it, I type "127.0.0.1/form.html" in the browser.
After clicking "submit" in this HTML file, it is supposed to go to "127.0.0.1/cgi-bin/cginame.cgi". The content of "cginame.cgi" is as below:
#!/bin/bash
# Print the CGI header first so the response is valid
echo "Content-Type: text/html"
echo ""
if [ "$REQUEST_METHOD" = "GET" ]
then
    data=$QUERY_STRING
else
    data=$(cat)    # a POST body arrives on stdin
fi
java mortcal "$data"
"mortcal" is a java program calculating and return a HTML page containing results to user.
I'm using apache 2.2 and ubuntu 10.04.
The problem is when I click the "submit" button in "form.html", I got these in error log:
[Sat Sep 24 15:00:20 2011] [error] (2)No such file or directory: exec of '/usr/lib/cgi-bin/mortcgi.cgi' failed
[Sat Sep 24 15:00:20 2011] [error] [client 127.0.0.1] Premature end of script headers: mortcgi.cgi
I know it's because Apache cannot find "/bin/bash" to execute the CGI file. But I do have "/bin/bash".
It's so weird. Please help me out. Thank you in advance.
To execute CGI scripts, you need to configure Apache to allow it, your script has to follow the HTTP protocol by sending back data in the right format, it needs the right permissions, and so on and so on.
Here's a great tutorial with an example: http://httpd.apache.org/docs/2.2/howto/cgi.html
... however, I need to say: running a Java program from within a shell script via Apache is a bad idea, in general. Each request loads the Java Runtime Environment (JRE), runs the program, then unloads it. There are issues with environment, file ownership and so on -- all of this is why there are application servers like Tomcat for Java. So if you're just trying something out, that's fine. If you're thinking this is a good way to get something done in a professional production environment, I would reconsider.
As noted, this seems like a poor way to do things, but:
Do the script file permissions allow execution for the web server user?
Are you using any security framework such as SELinux which would apply additional restrictions? (A couple of quick checks for both points are sketched below.)
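As a rough sketch of those checks, using the script path from the error log above (adjust to your layout):
# Does the web server user have execute permission on the script, and who owns it?
ls -l /usr/lib/cgi-bin/mortcgi.cgi
chmod 755 /usr/lib/cgi-bin/mortcgi.cgi
# Is a security framework in the way? Ubuntu normally uses AppArmor rather than SELinux,
# but these commands are harmless if the tools are not installed.
getenforce 2>/dev/null                 # prints "Enforcing" if SELinux is active
sudo aa-status 2>/dev/null | head -5   # summarizes loaded AppArmor profiles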
I checked my configuration files; they are OK. So I kept searching on the web and finally I saw this:
"If you've copied over the script from a Windows machine, you may be being tripped up by ^M at end of line. You can use cat -v /usr/lib/cgi-bin/printenv.pl | head -1 to verify that there isn't a ^M at the end of the line. "
I did copy my CGI file from Windows! I forgot to mention it because I did not think it was a big deal.
Now I have removed the ^M by typing :%s/^V^M//g in vi (where ^V^M is entered as Ctrl-V followed by Ctrl-M). The problem is resolved. Thanks very much for your answers, Mr. Harrison and Dark Falcon. Thank you all.
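For anyone who prefers not to do this in vi, a small sketch of the same check and fix on the command line (the path is the one from the error log above):
# Show the first line with non-printing characters made visible; a trailing ^M means CRLF endings.
cat -v /usr/lib/cgi-bin/mortcgi.cgi | head -1
# Strip the carriage returns in place (dos2unix does the same job if it is installed).
sed -i 's/\r$//' /usr/lib/cgi-bin/mortcgi.cgi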

Tomcat showing this error "This is very likely to create a memory leak". How to resolve this issue?

I have created a web application in Apache Cocoon. The website runs properly, but after every 3-4 days it stops responding and won't run again until we restart the Tomcat service. The catalina.2011-05-09.log file shows the following error:
"May 9, 2011 3:17:34 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/webresources] is still processing a request that has yet to finish. This is very likely to create a memory leak. You can control the time allowed for requests to finish by using the unloadDelay attribute of the standard Context implementation."
I have not been able to understand the cause of this problem. Can someone suggest how to resolve this issue?
You are using a library that is starting one or more threads and is not properly shutting them down or releasing other resources captured by the thread. This often happens with things like Apache HttpComponents (I get this error with HttpComponents) and anything that uses separate threads internally. What libraries are you using in your Cocoon application?
It is telling you the issue:
[...] is still processing a request that has yet to finish
You need to find out what that request is/is going to. One easy way is to have something like PsiProbe installed.
Also, it's not a bad idea to restart Tomcat every night. It can help alleviate these kinds of issues until you find the root cause.
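If you do set up a nightly restart as a stop-gap, a minimal sketch of an /etc/crontab entry on a Linux host (the tomcat6 service name is an assumption; use whatever init script or service name your installation provides):
# Restart Tomcat at 04:30 every night until the leaking request is found and fixed.
30 4 * * * root /sbin/service tomcat6 restart >> /var/log/tomcat-nightly-restart.log 2>&1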

Apache upload failed when file size is over 100k

Below is some information about my problem.
Our Apache 2.2 runs on Windows Server 2008.
Basically, the problem is that users fail to upload files bigger than 100k to our server.
The error in the Apache log is: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : Error reading request entity data, referer: ......
There were a few times (not always) when I could upload larger files (100k-800k, but it failed for 20m) in Chrome. In FF4 it always fails when uploading a file over 100k. IE8 behaves similarly to FF4.
It seems that the server fails to get the request from the client, so I reset Timeout in the Apache settings to the default value (300), which did not help at all.
I do not have the LimitRequestBody option set and I am not using PHP. Has anyone seen a similar error before? I am not sure what to try next. Any advice would be appreciated!
Edit:
I just tried to use remote desktop to upload files on the server itself, and it worked fine. My first thought was the firewall, but it is off all the time; an HTTP proxy is applied, though.
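A sketch of one way to take the browsers out of the picture and bisect the failing size, run from a Linux/macOS client with curl; the upload URL, form field name and proxy address below are placeholders, not taken from the question:
# Create test files of increasing size and upload each one, both directly and through the proxy,
# to see at which size and on which path the uploads start failing.
for size in 50k 100k 200k 800k; do
  dd if=/dev/zero of=test_$size bs=$size count=1 2>/dev/null
  echo "== $size direct =="
  curl -s -o /dev/null -w '%{http_code}\n' -F "file=@test_$size" http://yourserver/upload
  echo "== $size via proxy =="
  curl -s -o /dev/null -w '%{http_code}\n' -x http://yourproxy:8080 -F "file=@test_$size" http://yourserver/upload
done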