Large commit stalls halfway through - apache

I have a problem with our Subversion server. Small commits work fine, but as soon as someone tries to commit a large collection of sizeable files, the commit stalls halfway through and the client eventually times out. My test set consists of roughly 2000 files, and the total size of the commit is about 1 GB. When I commit the files, the upload starts, but about halfway through the transfer rate drops to 0 kB/s and the commit stalls and never recovers. If I split the commit into smaller pieces (<150 MB) everything works just fine, but that breaks the atomicity of the commit and is something I really want to avoid.
There are no error messages in the logs generated by Apache at the default log level. But after bumping the LogLevel from debug to trace6 on the Apache server, some errors appear at the moment the upload stalls:
...
OpenSSL: I/O error, 2229 bytes expected to read on BIO
OpenSSL: read 1460/2229 bytes from BIO
...
Versions used:
We serve the Subversion repository via Apache with mod_dav, mod_dav_svn, mod_authz_svn and mod_auth_digest. The client connects via https.
Server:
openSUSE 42.3
svnserve: 1.9.7
Apache: 2.4.23
Client:
Windows 10 enterprise
svn client: 1.10.0-dev.
What I tried so far:
I have tried increasing the TimeOut value in the Apache configuration. The only difference is that the client stays stalled longer before reporting the timeout.
I have tried increasing the MaxKeepAliveRequests from 100 to 1000. No change.
I have tried adding SVNAllowBulkUpdates Prefer to the svn settings. No change. (The directives involved are sketched below.)
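For reference, a minimal sketch of what those directives look like in the Apache configuration (the repository path and values are illustrative examples, not recommendations):

# Core timeout / keep-alive tuning tried above
Timeout 300
MaxKeepAliveRequests 1000

<Location /svn>
    DAV svn
    SVNParentPath /srv/svn/repos
    SVNAllowBulkUpdates Prefer
</Location>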
Has anyone got any hints on how to debug these types of errors?

Related

Predis "Error while reading line from server": timeout fix

I am using Redis both with daemon processes and for regular caching:
Daemon processes with supervisor (Laravel Redis queues)
Regular caching as key-value pairs
timeout=300 is currently set in my redis.conf file.
It has been suggested in several GitHub threads (https://github.com/predis/predis/issues/33) to change it to timeout=0.
My concern is that, if I set the timeout to 0, the Redis server will never drop idle connections.
Over a period of time, I see a chance of hitting the "max number of clients reached" error.
Seeking advice on changing timeout to 0 in redis.conf.
Currently, I get the following error logs frequently (every 2-3 minutes) [timeout=300]:
{"message":"Error while reading line from the server. [tcp://10.10.101.237:6379]","context":
{"exception":{"class":"Predis\\Connection\\ConnectionException","message":"Error while reading
line from the server.
[tcp://10.10.101.237:6379]","code":0,"file":"/var/www/api/vendor/predis/predis/src/Connection/Ab
stractConnection.php:155"}},"level":400,"level_name":"ERROR","channel":"production","datetime":
{"date":"2020-09-23 07:14:01.207506","timezone_type":3,"timezone":"Asia/Kolkata"},"extra":[]}
Update: I changed to timeout=0 and everything has been working fine with it!
PS: Posting this after observing the change for 2 months.
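For anyone applying the same change, a minimal sketch of the relevant redis.conf settings (the tcp-keepalive and maxclients values shown are the stock defaults in recent Redis versions; they act as safeguards once idle connections are never timed out):

# Never close idle client connections
timeout 0
# Let the OS detect dead peers so their connections can still be reaped
tcp-keepalive 300
# Hard cap on simultaneous client connections
maxclients 10000

With timeout 0, Redis itself never drops idle clients, so the TCP keepalive is what eventually clears connections whose peer has vanished, and maxclients bounds the damage if something leaks connections anyway.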

CentOS server hung due to postfix/sendmail spam emails

My CentOS server runs web applications on a LAMP stack. A couple of days back, the server was unresponsive for about 10 minutes and I got an HTTP response failure alert from my monitoring tool. When I checked the httpd error log I found a huge log entry (~12000 lines) related to sendmail:
sendmail: fatal: open /etc/postfix/main.cf: Permission denied
The server had run out of memory and was not responding:
[Fri Aug 19 22:14:52.597047 2016] [mpm_prefork:error] [pid 26641] (12)Cannot allocate memory: AH00159: fork: Unable to fork new process
/usr/sbin/sendmail: error while loading shared libraries: /lib64/librt.so.1: cannot allocate version reference table: Cannot allocate memory
We are not using sendmail in any of our applications. How can I prevent this attack in the future?
Thank you in advance!
Sorry, I have no comment facilities; it looks like one of your website pages is vulnerable to code injection. Finding out where and on which page may be a huge job; focus on input (form) variables, and always sanitize input variables before using them (a minimal example is sketched below). P.S.: PHP uses "sendmail"; even if you use Postfix, PHP will call a sendmail binary to send mail, and that binary hands the message to Postfix. If your forms work well and the 12k error log lines came out of the blue, then I would think someone is trying to inject code through your website (which happens all the time, by the way).
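To make that advice concrete, here is a minimal sketch of the kind of validation meant, for a hypothetical PHP contact-form handler (the field names and recipient address are made up). The classic trick spam scripts use is smuggling extra headers into mail() via CR/LF characters in form fields, so rejecting those goes a long way:

<?php
// Hypothetical contact-form handler: validate everything before it reaches mail().
$email   = $_POST['email']   ?? '';
$subject = $_POST['subject'] ?? '';
$message = $_POST['message'] ?? '';

// Reject anything that is not a syntactically valid email address.
if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
    http_response_code(400);
    exit('Invalid email address');
}

// Header injection relies on CR/LF sneaking into header values.
if (preg_match('/[\r\n]/', $email . $subject)) {
    http_response_code(400);
    exit('Invalid input');
}

mail('webmaster@example.com', $subject, $message, "From: $email\r\n");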

Apache Request/Response Too Slow

I have an Apache server with 16 GB of RAM. The script cap.php returns a very small chunk of data (500 B). It opens a MySQL connection and makes a simple query.
However, the response from the server is, in my opinion, too slow.
I attach a screenshot of the Developer Tools panel in Chrome.
Besides SSL and the TTFB, there is a strange delay of 300 ms (Stalled).
If I try a curl from the web server:
curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -k -H 'miyazaki' https://127.0.0.1/ui/cap.php
Lookup time: 0.000
Connect time: 0.000
PreXfer time: 0.182
StartXfer time: 0.266
Total time: 0.266
Does anyone know what that is?
Eventually, I found that when you use SSL it really does matter to switch on the KeepAlive directive in Apache. See the picture below.
According to the Chrome documentation:
Stalled/Blocking
Time the request spent waiting before it could be sent. This time is inclusive of any time spent in proxy negotiation.
Additionally, this time will include when the browser is waiting for
an already established connection to become available for re-use,
obeying Chrome's maximum six TCP connection per origin rule.
So this appears to be a client issue with Chrome talking to the network rather than a server config issue. As you are only making one request, I think we can rule out the TCP limit per origin (unless you have lots of other tabs using up these connections), so I would guess either limitations on your PC (network card, RAM, CPU) or infrastructure issues (e.g. you connect via a proxy and it takes time to set up that connection).
Your curl request doesn't seem to show this delay, as it has just a 0.182 wait time to send the request (which is easily explained by HTTPS negotiation) and then a 0.266 total time to download (including the 0.182). This compares with 0.700 seconds when using Chrome, so I don't understand why you say "total time is similar" when to me it's clearly not.
Finally, I don't understand your follow-up answer. It looks to me like you have made a request, presumably shortly after another recent request, as this one has skipped the whole network connection stage (including any grey stalling, blue DNS lookup, orange initial connection and purple HTTPS connection). So of course it is quicker. But it's not comparing like for like with the first screenshot in your question and is not addressing your question.
But yes, you absolutely should be using keep-alives (they are on by default in most web servers, so it usually takes extra effort to turn them off) and HTTPS session resumption (not on by default unless you explicitly add it to your HTTPS config) to benefit any additional requests sent shortly after the first. But these will not benefit the first connection of the session.
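For reference, a minimal sketch of those two settings in an Apache config (the cache path and sizes are illustrative, not tuned values):

# Keep-alive: reuse one TCP/TLS connection for several requests
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5

# TLS session cache: lets returning clients resume without a full handshake
SSLSessionCache "shmcb:/var/run/apache2/ssl_scache(512000)"
SSLSessionCacheTimeout 300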

What happens AFTER Apache says "Script timed out before returning headers" to the running script?

I have a Perl web app served by Apache httpd using plain mod_cgi or optionally mod_perl with PerlHandler ModPerl::Registry. Recently the app encountered the error Script timed out before returning headers on some invocations and behaved differently afterwards: while some requests seemed to be processed successfully in the background after httpd sent status 504 to the client, others didn't.
So how exactly does httpd behave AFTER it reaches its configured timeout and sends the error to the client? The request/response cycle is finished at that point, so I guess things like KeepAlive come into play to decide whether the TCP connection stays alive, etc. But what happens to the running script, and in which environment, e.g. mod_cgi vs. mod_perl?
Especially with mod_cgi, where new processes are started for each request, I would have guessed that httpd simply keeps the processes running. Because all our Perl files have a shebang, I'm not even sure whether httpd is able to track the processes, and whether it does so or not. That could be completely different with mod_perl, because in that case httpd is aware of the interpreters and what they are doing. In fact, the same operation which timed out using plain mod_cgi succeeded using mod_perl without any timeout, but even with a timeout in mod_cgi at least one request succeeded afterwards as well.
I find this question interesting for runtimes other than Perl as well, because they share the concept of plain mod_cgi vs. some persistent runtime embedded into the httpd processes or running as external daemons.
So, my question is NOT about how to make the error message go away. Instead I want to understand how httpd behaves AFTER the error occurs, because I can't seem to find much information on that topic. Everything out there is just about increasing configuration values to try to avoid the problem in the first place, which is fine, but not what I need to know currently.
Thanks!
Both mod_cgi and mod_cgid register a cleanup function on the request pool to kill the child process, but they do it in slightly different ways. This happens shortly after the timeout is reported (allowing a little time for mod_cgi to return control, the error response to be written, the request to be logged, etc.).
mod_cgi uses a core facility in httpd that sends SIGTERM, sleeps for 3 seconds, then sends SIGKILL.
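A minimal shell sketch of that kill pattern (an illustration of the behaviour, not the actual httpd code; $CGI_PID is a hypothetical placeholder for the CGI child's process ID):

kill -TERM "$CGI_PID"                      # ask the CGI child to exit cleanly
sleep 3                                    # the 3-second grace period
if kill -0 "$CGI_PID" 2>/dev/null; then    # child still running?
    kill -KILL "$CGI_PID"                  # force-kill; cannot be caught or ignored
fi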

Apache upload fails when file size is over 100k

Below is some information about my problem.
Our Apache 2.2 runs on Windows Server 2008.
Basically, the problem is that users fail to upload files bigger than 100 KB to our server.
The error in the Apache log is: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : Error reading request entity data, referer: ......
There were a few times (not always) when I could upload larger files (100 KB-800 KB; 20 MB still failed) in Chrome. In FF4 it always fails for uploads over 100 KB, and IE8 behaves similarly to FF4.
It seems the server fails to read the request from the client, so I reset the TimeOut value in the Apache settings to the default (300), which did not help at all.
I do not have the LimitRequestBody option set and I am not using PHP. Has anyone seen a similar error before? I am not sure what to try next; any advice would be appreciated!
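For reference, a minimal sketch of the two directives discussed above as they would appear in httpd.conf (the values shown are the Apache defaults):

# Seconds Apache waits on network I/O before giving up on a request
Timeout 300
# Maximum request body size in bytes; 0 means unlimited
LimitRequestBody 0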
Edit:
I just tried using remote desktop to upload files on the server itself and it worked fine. My first thought was the firewall, which however is off all the time; an HTTP proxy is applied though.