How to get "my" VHOST in code executed by PerlPostConfigRequire? - apache

I'm hosting multiple instances of the same web app in different versions using one instance of HTTPd, currently with one VHOST per instance/version of the web app. I would like to use mod_perl's PerlPostConfigRequire to implement a handler that pre-loads all those instances during server startup, instead of relying on mod_perl's registry facilities per request, as is done currently. The latter breaks under too many concurrent requests, and according to some tests, that problem no longer occurs when things are pre-loaded.
Because of the different versions of my web app, I need to use PerlOptions +Parent to provide each VHOST with its own Perl interpreter pool. I can also easily configure PerlPostConfigRequire per VHOST, but pre-loading needs to take into account which VHOST the handler is currently executing in: within that handler I need to load only the files associated with that concrete VHOST.
The problem is that I'm having trouble finding out which VHOST is currently executing that handler. In the past I have implemented something similar per request by looking at the currently requested URL and comparing it to the VHOST configs, but I don't have a request now. Apache2::ServerUtil->server() does not provide my VHOST, though the VHOSTs are at least accessible, e.g. via Apache2::Directive::conftree()->lookup('VirtualHost'). I just don't know which of them is the one whose PerlPostConfigRequire is currently being executed. I'm pretty sure that PerlPostConfigRequire really does get executed per VHOST, because my debugging output scales with the number of VHOSTs configured.
Any ideas? Thanks!

The only thing that came to mind is using environment variables per VHOST, as in the following example. PerlSetVar doesn't work because it's only available per request, not during startup of the web server.
PerlSetEnv SOME_ID "0c8c1236-86de-4483-9d4b-999e2cfd17c1"
PerlPostConfigRequire "Some.pm"
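For illustration, a per-VHOST configuration along these lines could look as follows (the server name, port, and paths are placeholders, not taken from the actual setup):

```apache
<VirtualHost *:80>
    ServerName app-v1.example.com

    # Give this VHOST its own Perl interpreter pool.
    PerlOptions +Parent

    # Unique ID the pre-loading code can identify this VHOST by.
    PerlSetEnv SOME_ID "0c8c1236-86de-4483-9d4b-999e2cfd17c1"
    PerlPostConfigRequire "Some.pm"
</VirtualHost>
```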
Each VHOST gets its own ID, and that environment variable appears to be uniquely available while the configured package executes:
[Mon Aug 05 20:50:32.181559 2019] [:warn] [pid 17736:tid 704] [...]: 0c8c1236-86de-4483-9d4b-999e2cfd17c1: 1
[Mon Aug 05 20:50:32.836195 2019] [:warn] [pid 17736:tid 704] [...]: 9dc691d6-794a-4095-852e-e596cabf43d5: 1
[Mon Aug 05 20:50:34.180453 2019] [:warn] [pid 17736:tid 704] [...]: 0c8c1236-86de-4483-9d4b-999e2cfd17c1: 2
[Mon Aug 05 20:50:34.830098 2019] [:warn] [pid 17736:tid 704] [...]: 9dc691d6-794a-4095-852e-e596cabf43d5: 2
The logging code:
my $server = Apache2::ServerUtil->server();
$server->warn(__PACKAGE__ . ": $ENV{'SOME_ID'}: " . Apache2::ServerUtil::restart_count());
With that in mind, one can fetch all VHOSTs and search for the ID each of them defines. Note that lookup is not much help here; it doesn't seem to support searching for PerlSetEnv directly, so one needs to implement that manually on the hash refs returned for the VHOSTs:
my $confTree = Apache2::Directive::conftree();
my @vhosts = $confTree->lookup('VirtualHost');
Each result is a hash ref containing keys and values like the following:
$VAR1 = {
  'PerlSetEnv' => 'AMS_MOD_PERL_PRE_LOADING_ID "9dc691d6-794a-4095-852e-e596cabf43d5"',
  [...]
  'PerlSetEnv' => [
    '"AMS_MOD_PERL_PRE_LOADING_ID" "9dc691d6-794a-4095-852e-e596cabf43d5"',
    '"murks" "test"'
  ],
The structure of the hash ref depends on the actual config, so I guess it might contain additional child hash refs as well. Any better ideas are welcome...
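Putting the pieces above together, a minimal, untested sketch of the lookup inside the pre-loaded package could look like this. It only handles the two value shapes shown above (plain string vs. array ref), not arbitrarily nested config trees, and it has to run inside httpd since Apache2::Directive is only available there:

```perl
use strict;
use warnings;
use Apache2::Directive ();

# SOME_ID is the per-VHOST environment variable from the example above.
my $myId   = $ENV{'SOME_ID'};
my @vhosts = Apache2::Directive::conftree()->lookup('VirtualHost');

# Find the VHOST hash ref whose PerlSetEnv entry carries our ID.
my ($myVhost) = grep {
    my $env = $_->{'PerlSetEnv'};
    # A single PerlSetEnv comes back as a string, multiple as an array ref.
    my @envs = ref($env) eq 'ARRAY' ? @{$env}
             : defined($env)        ? ($env)
             :                        ();
    grep { /\Q$myId\E/ } @envs;
} @vhosts;
```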

Related

Apache HTTPD 2.4 AH02429 error with phantom response header

I have an Apache HTTPD 2.4.37 which, since this morning, has been responding with 500 and [Mon Jan 24 12:27:03.132322 2022] [http:error] [pid 3650579:tid 140496433313536] [client 10.42.0.47:53214] AH02429: Response header name '[Mon Jan 24 12' contains invalid characters, aborting request while trying to render a Perl application.
If I try to call the website with curl -v I cannot see such "header" in the response headers.
Moreover, if I copy the conf.modules.d folder from an Apache HTTPD 2.4.6 installation, it works as expected.
After some backtracking, it seems that a request header I'm setting is breaking the request when it is empty.
I was following https://httpd.apache.org/docs/2.4/env.html#fixheader to propagate an "invalid" (for Apache HTTPD) header and the regex used there matches even if the value of the header is empty (i.e. the header is not part of the request at all).
In such a case, for some reason the request gets broken.
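A possible guard, adapting the Accept-Encoding example from the linked documentation (the actual header name in the setup above is not given), is to require a non-empty value, so the env var and hence the header are only set when the original header is actually present:

```apache
# ^(.+)$ instead of the doc's ^(.*)$: an absent or empty header no longer
# matches, so fix_accept_encoding stays unset and RequestHeader does nothing.
SetEnvIfNoCase ^Accept.Encoding$ "^(.+)$" fix_accept_encoding=$1
RequestHeader set Accept-Encoding "%{fix_accept_encoding}e" env=fix_accept_encoding
```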

AWS ElasticBeanstalk periodically goes down

I noticed lately that my Laravel project on an AWS Elastic Beanstalk setup has been acting strangely. The server goes down within minutes. On a t3.small, it goes down every 50 minutes; the health tab says the memory is exhausted or something. It goes "Severe" for about 5-10 minutes, then recovers without me doing anything; basically a whole zigzag in the monitoring. On a t3.nano it goes down approximately every 5 minutes.
Here are some things I've done that I suspect to be the cause:
- I've re-enabled Pusher for broadcasting. The project had a Pusher setup before and it was working fine. However, I had disabled it (removed all parts that use it) as I didn't need it yet. I re-enabled it and the problem occurred.
- I've played with AWS WAF and CloudFront. I was studying those two services and played with some settings, but I can't remember using any of them with my EBS application. I did remove everything I'd added on WAF and CloudFront.
Here are some facts:
- Whenever I remove the container commands creating the schedule:run and queue:work, it becomes completely fine. Totally "OK" status, even if I simulate sending hundreds of requests per second.
- I tried scaling to 3 instances and the result is still the same; it just takes longer for the downtime to occur.
- It gives a 503 error code whenever it's down.
- The setup for EBS is PHP 7.2 running on 64bit Amazon Linux/2.8.4.
- I'm testing the job and queue by sending 1 Pusher message every minute. It doesn't do anything except send the current time. This is also the only cronjob running.
- The cronjob works and I can also receive the Pusher messages, except during downtime.
Here's an observation that I had with the logs
- There's an "internal dummy connection" related to Apache. The time when it is logged is identical to the time that the downtime occurs.
I've tried every hint in the logs, juggling different settings on the cronjob and other possible causes. I've also asked my peers, but no one has encountered such an error before. In fact, they tested my cronjob and it works properly for them.
I also have this error in the /var/log/httpd/error_log
[Fri Nov 23 19:07:35.208657 2018] [suexec:notice] [pid 3142] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Nov 23 19:07:35.228633 2018] [http2:warn] [pid 3142] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Fri Nov 23 19:07:35.228644 2018] [http2:warn] [pid 3142] AH02951: mod_ssl does not seem to be enabled
[Fri Nov 23 19:07:35.229188 2018] [lbmethod_heartbeat:notice] [pid 3142] AH02282: No slotmem from mod_heartmonitor
[Fri Nov 23 19:07:35.267841 2018] [mpm_prefork:notice] [pid 3142] AH00163: Apache/2.4.34 (Amazon) configured -- resuming normal operations
[Fri Nov 23 19:07:35.267860 2018] [core:notice] [pid 3142] AH00094: Command line: '/usr/sbin/httpd -D FOR
This is a case of running into surprises with the CPU credit and throttling restrictions of t2/t3.* EC2 instances. One CPU credit allows one vCPU to run at 100% utilization for one minute. CPU credits are replenished at a constant hourly rate for running instances (the rate depends on the instance size), so prolonged periods of load above the instance's baseline gradually deplete the credits, leading to the states you have described.
It's advised to use higher-tier instances (m3.medium and above) to sustain production workloads consistently. Placing a load balancer in front of multiple instances is also a great way to maintain availability.
More information on the same can be found here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.html

Apache 2 error log only shows the error message "0"

For a few days now, my Apache 2 error log has been showing a lot of messages like the following (IP addresses and URIs redacted):
[Thu Dec 12 13:46:42 2013] [error] [client 111.222.333.444] 0
[Thu Dec 12 13:52:27 2013] [error] [client 222.333.444.555] 0, referer: http://www.mydomain.com/
[Thu Dec 12 13:52:27 2013] [error] [client 222.333.444.555] 0, referer: http://www.mydomain.com/
[Thu Dec 12 13:53:54 2013] [error] [client 333.444.555.666] 0, referer: http://www.mydomain.com/subdirectory/
[Thu Dec 12 13:46:42 2013] [error] [client 444.555.666.777] 0
[Thu Dec 12 13:54:07 2013] [error] [client aaaa:1111:2222:ffff::] 0, referer: http://www.otherdomain.com/subdirectory/
What is this 0? There are no other messages shown (besides sometimes some other, normal messages, but very rarely).
The IP addresses are both IPv4 and IPv6. I checked the access log for the same dates/times and IP addresses. Most of the time there was an access at the exact same moment from that IP for various URIs on my webpage, but sometimes there was no corresponding entry in the access log at all.
It's a shared hosting environment, so I can't access the Apache settings (though I have SSH access to my home directory, if that helps). I've already googled and searched the Apache documentation, but didn't find anything (it's hard to search for "0"...).
/edit: I also asked the webhoster; they said they don't know what's causing it. I cross-checked against the Apache access log: these are requests to PHP scripts (mostly Joomla), but also requests to images as well as JS and CSS files. So I assume it's not a PHP script that's causing this.
If PHP's error_log directive is unset, errors are written to the Apache error log of the current VirtualHost.
So double-check your PHP configuration (php.ini), or write a simple page calling phpinfo().
If that is the case, you should look inside your code (maybe even index.php).
Pay attention to this: there are usually two separate php.ini files, one for Apache (/etc/php5/apache2/php.ini) and one for the CLI (/etc/php5/cli/php.ini).
Please also consider that, if you want to change your PHP configuration at runtime, you can use the ini_set function:
ini_set('error_log', '/var/log/php/error_new.log');
Remember: the destination directory must exist, and your web server (or PHP engine) must have permission to write into it.
The error_log format itself is not customizable; I suspect the log level can be raised (debug or trace) to produce additional information.
Also take into account that the error log contains debug output from CGI/PHP/Perl scripts, so that 'zero' may be produced by some script executed through Apache.

mod mono server 4 constantly crashes with soap requests

I have a C# SOAP service running on my Linux SuSE 12.1 VPS. It worked fine without problems until I made a small change to the SOAP service and copied it onto my VPS. I thought it must have been an issue with my change, so I rolled my changes back, but it still doesn't work. Even some methods that haven't been touched are failing.
However, I have tested on my dev machine, which is Windows, and it works fine; I have also copied the SOAP interface onto a Linux dev machine that is set up in the exact same way as my VPS, i.e. OpenSuse 12.1 with all the same stuff as my VPS web server. Both work absolutely fine, no problems whatsoever.
On the VPS host, however, mod-mono-server is constantly crashing and even though it starts up, the asmx file cannot be read, just displays server error, and I need to run rcapache2 restart to get the test page to load up.
In the apache error log file I have the following:
[Thu Aug 30 20:10:19 2012] [error] (70014)End of file found: read_data failed
[Thu Aug 30 20:10:19 2012] [error] Command stream corrupted, last command was 1
[Thu Aug 30 20:08:47 2012] [error] Command stream corrupted, last command was 7
I have no idea what the problem might be, I've tried rebooting the VPS but no difference.
I am using the ASP.net 4 version of mod-mono-server.
Thanks for any help you can provide.
UPDATE 1
I have just noticed something else in the apache error. The log file contains the following
[Thu Aug 30 20:46:31 2012] [notice] caught SIGTERM, shutting down
[Thu Aug 30 20:46:32 2012] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
[Thu Aug 30 20:46:32 2012] [error] Not running mod-mono-server.exe because no MonoApplications, MonoApplicationsConfigFile or MonoApplicationConfigDir specified.
UPDATE 2
Have just made a discovery, not entirely sure if it helps. The SOAP service works fine on the server as long as it doesn't access a MySQL database. If it performs a query, the test page shows an internal server error 500, and if called from PHP it crashes Mono. The database is a local MySQL database. There's 34% RAM free, so I don't believe this is a memory issue. I've also emptied the database table to see whether it had something to do with the amount of data, but that hasn't fixed it either.
Thanks to @knocte's suggestion I managed to figure out the problem.
When the SOAP service accesses the database, it reads a config.xml to determine which username and password to use. I'm guessing this config file got corrupted during the transfer; I could read it in vi, but maybe something was wrong with the file that stopped the SOAP service from reading it.
All I did was delete the file, then recreate it and paste the content back in manually.
For some reason, when mod-mono couldn't access the database it crashed Mono, even though all the MySQL code within the SOAP service has MySQL exception handling. @knocte's suggestion of testing the SOAP service proved useful: when it was used to access the database, xsp4 stopped but displayed an error saying it didn't have permission to access the database, even though the username and password in the config file were correct.
Once I had re-created the config file, the SOAP service worked correctly again.
Thanks for your help.

apache mod_fcgid problems

I have a problem on multiple servers that use the Apache module mod_fcgid to serve a CGI script which processes the request (ticket validation and similar processing) and then serves files on the server based on the result of the processing.
I keep getting the following errors repeatedly in the logs:
[Mon Jan 30 23:11:41 2012] [warn] [client 95.35.160.193] mod_fcgid: error reading data, FastCGI server closed connection
[Mon Jan 30 23:11:41 2012] [warn] [client 95.35.160.193] (32)Broken pipe: mod_fcgid: ap_pass_brigade failed in handle_request_ipc function
[Mon Jan 30 23:13:34 2012] [warn] [client 37.8.52.128] mod_fcgid: can't apply process slot for /var/www/cgi-bin/assetx.fcgi
These problems make the server slow and at other times result in a "service temporarily unavailable" error.
The servers carry heavy traffic. I currently have the following fcgid directives configured:
FcgidMaxRequestsPerProcess 0
FcgidMaxProcesses 300
FcgidMinProcessesPerClass 0
FcgidIdleTimeout 240
FcgidIOTimeout 240
FcgidBusyTimeout 300
The average load on the servers is normal; there are about 250 processes on average.
I have researched this issue for days. Some say it is a permission problem; I followed their suggestions, which didn't help. I tried tuning the parameters above (these are the final values I tried), but they didn't work either. I'm also trying out nginx as a replacement for Apache, but I can't find a suitable way to run the CGI script under this high load with nginx.
What can I do to fix this problem?
Your app is dying before Apache can contact it successfully. The answer is to find out why the app is dying.
A FastCGI process should never die or quit, even in an error condition; Apache expects the FastCGI script to just keep being there.
You mention you have a CGI script. How did you modify it to support FastCGI?
Usually you need to switch to something like CGI::Fast, remove all calls to die and exit, and refactor your script to run inside the CGI::Fast accept loop.
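A minimal sketch of that loop, assuming a plain-text response (it needs the CGI::Fast module and a FastCGI environment such as mod_fcgid to run):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use CGI::Fast;

# The FastCGI accept loop: the process stays resident and handles
# requests one after another instead of exiting after each request.
while (my $q = CGI::Fast->new) {
    print $q->header('text/plain');
    print "Hello from FastCGI\n";
    # No die()/exit() in here: a fatal error kills the whole process,
    # which is exactly what produces mod_fcgid's
    # "FastCGI server closed connection" warnings.
}
```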