AWS Elastic Beanstalk periodically goes down - Apache

I noticed lately that my Laravel project in an AWS Elastic Beanstalk setup has been acting strangely. The server keeps going down: on a t3.small it goes down roughly every 50 minutes, and on a t3.nano approximately every 5 minutes. The health tab says something about the memory being exhausted. The status goes "Severe" for about 5-10 minutes, then recovers without me doing anything, so the monitoring is basically one long zigzag.
Here are some things I've done that I suspect may be the cause:
I've re-enabled Pusher for broadcasting. The project had a Pusher setup before and it was working fine, but I had disabled it (removed all the parts that use it) since I didn't need it yet. I re-enabled it and the problem occurred.
I've played with AWS WAF and CloudFront. I was studying those two services and experimenting with some settings, but I can't remember applying any of them to my Elastic Beanstalk application. I did remove everything I had added in WAF and CloudFront.
Here are some facts:
Whenever I remove the container commands that create the schedule:run and queue:work, it becomes completely fine, with a totally "OK" status even if I simulate sending hundreds of requests per second.
I tried scaling it to 3 instances and the result is still the same; the downtime just takes longer to show up
It gives a 503 error code whenever it's down
The Elastic Beanstalk setup is PHP 7.2 running on 64bit Amazon Linux/2.8.4
I'm testing the job and queue by sending 1 Pusher message every minute. It doesn't do anything except send the current time, and it is also the only cronjob running.
The cronjob works and I can also receive the Pusher messages, except during downtime
Here's an observation I made in the logs:
- There's an "internal dummy connection" entry related to Apache, and the times it is logged are identical to the times the downtime occurs.
I've chased every hint in the logs, juggling different cronjob settings and other possible causes. I've also asked my peers, but no one has encountered such an error before; in fact, they tested my cronjob and it works properly for them.
I also have these entries in /var/log/httpd/error_log:
[Fri Nov 23 19:07:35.208657 2018] [suexec:notice] [pid 3142] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Nov 23 19:07:35.228633 2018] [http2:warn] [pid 3142] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Fri Nov 23 19:07:35.228644 2018] [http2:warn] [pid 3142] AH02951: mod_ssl does not seem to be enabled
[Fri Nov 23 19:07:35.229188 2018] [lbmethod_heartbeat:notice] [pid 3142] AH02282: No slotmem from mod_heartmonitor
[Fri Nov 23 19:07:35.267841 2018] [mpm_prefork:notice] [pid 3142] AH00163: Apache/2.4.34 (Amazon) configured -- resuming normal operations
[Fri Nov 23 19:07:35.267860 2018] [core:notice] [pid 3142] AH00094: Command line: '/usr/sbin/httpd -D FOR

This is a case of running into the CPU credit and throttling restrictions of t2/t3.* EC2 instances. One CPU credit allows a t2/t3 instance to run at 100% CPU for one minute, and credits are replenished at a constant hourly rate that depends on the instance size. Prolonged load above the baseline therefore gradually depletes the credit balance, which matches the pattern you describe: the t3.small earns credits faster than the t3.nano, so it takes longer to drain them and hit throttling.
It's advisable to use a fixed-performance instance type (m3.medium or above) to sustain production workloads consistently. Placing a load balancer in front of multiple instances is also a great way to maintain availability.
More information on the same can be found here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.html
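If you want to confirm that credit depletion is the cause, the CPUCreditBalance CloudWatch metric shows the balance over time; a balance that drains toward zero right before each "Severe" episode points at throttling rather than Apache. Below is a rough sketch using the Paws Perl SDK (Paws, the region, and the instance ID are assumptions on my part; the AWS console or CLI shows the same metric):

#!/usr/bin/perl
# Rough sketch: print the instance's CPU credit balance for the last 6 hours.
# Assumes the Paws SDK is installed and AWS credentials are configured; the
# instance ID below is a placeholder.
use strict;
use warnings;
use POSIX qw(strftime);
use Paws;

my $cw  = Paws->service('CloudWatch', region => 'us-east-1');
my $res = $cw->GetMetricStatistics(
    Namespace  => 'AWS/EC2',
    MetricName => 'CPUCreditBalance',
    Dimensions => [ { Name => 'InstanceId', Value => 'i-0123456789abcdef0' } ],
    StartTime  => strftime('%Y-%m-%dT%H:%M:%SZ', gmtime(time - 6 * 3600)),
    EndTime    => strftime('%Y-%m-%dT%H:%M:%SZ', gmtime(time)),
    Period     => 300,
    Statistics => ['Average'],
);

for my $dp (sort { $a->Timestamp cmp $b->Timestamp } @{ $res->Datapoints }) {
    printf "%s  %.1f credits\n", $dp->Timestamp, $dp->Average;
}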

Related

MAMP: Apache Server is shut down automatically in Windows 10

OS: Windows 10
MAMP: 4.1.1
After installation, every time I run the program, the Apache light goes green for a second and then turns off automatically. MySQL runs fine, though.
The log file located at C:\MAMP\logs\apache_error.log contains these lines.
[Fri Jan 17 18:03:42 2020] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
[Fri Jan 17 18:03:43 2020] [warn] pid file C:/MAMP/bin/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
[Fri Jan 17 18:03:43 2020] [notice] Digest: generating secret for digest authentication ...
[Fri Jan 17 18:03:43 2020] [notice] Digest: done
I've found that a lot of people online have encountered the same situation as mine, but I cannot find a solution, so I'm asking here.
P.S. I don't have the common port 80 conflict problem, so I'm sure my issue is not related to that.
Navigate to C:\MAMP\conf\apache\extra.
Edit httpd-ssl.conf and comment out the following line:
...
SSLSessionCache shmcb:/some/example/path/ssl_scache(512000)
...
To:
...
# SSLSessionCache shmcb:/some/example/path/ssl_scache(512000)
...
Also, check out https://cwiki.apache.org/confluence/display/HTTPD/SSLSessionCache for more info. Hope this helps.

How to get "my" VHOST in code executed by PerlPostConfigRequire?

I'm hosting multiple instances of the same web app in different versions, using a single httpd instance and one VHOST per instance/version of the app. I would like to use mod_perl's PerlPostConfigRequire to implement a handler that pre-loads all those instances during server startup, instead of relying on mod_perl's registry facilities per request, as is done currently. The per-request approach breaks under too many concurrent requests, and according to some tests that problem no longer occurs when things get pre-loaded.
Because of the different versions of my web app, I need to use PerlOptions +Parent to give each VHOST its own Perl interpreter pool. I can easily configure PerlPostConfigRequire per VHOST, but the pre-loading needs to take into account which VHOST the handler is currently being executed for, so that within the handler I only load the files associated with that concrete VHOST.
The problem is that I'm having trouble figuring out which VHOST is currently executing that handler. In the past I implemented something similar per request by comparing the requested URL to the VHOST configs, but there is no request at this point. Apache2::ServerUtil->server() does not give me my VHOST, although the VHOST configs are at least available, e.g. via Apache2::Directive::conftree()->lookup('VirtualHost'); I just don't know which one contains the PerlPostConfigRequire currently being executed. I'm pretty sure PerlPostConfigRequire really does get executed per VHOST, because I get debugging output scaling with the number of configured VHOSTs as well.
Any ideas? Thanks!
The only thing that came to mind is using environment variables per VHOST, as in the following example. PerlSetVar doesn't work because it is per request only and not available during startup of the web server.
PerlSetEnv SOME_ID "0c8c1236-86de-4483-9d4b-999e2cfd17c1"
PerlPostConfigRequire "Some.pm"
Each VHOST gets its own ID, and that environment variable appears to be uniquely available during execution of the configured package:
[Mon Aug 05 20:50:32.181559 2019] [:warn] [pid 17736:tid 704] [...]: 0c8c1236-86de-4483-9d4b-999e2cfd17c1: 1
[Mon Aug 05 20:50:32.836195 2019] [:warn] [pid 17736:tid 704] [...]: 9dc691d6-794a-4095-852e-e596cabf43d5: 1
[Mon Aug 05 20:50:34.180453 2019] [:warn] [pid 17736:tid 704] [...]: 0c8c1236-86de-4483-9d4b-999e2cfd17c1: 2
[Mon Aug 05 20:50:34.830098 2019] [:warn] [pid 17736:tid 704] [...]: 9dc691d6-794a-4095-852e-e596cabf43d5: 2
The logging code:
my $server = Apache2::ServerUtil->server();
$server->warn(__PACKAGE__ . ": $ENV{'SOME_ID'}: " . Apache2::ServerUtil::restart_count());
With that in mind, one can get all VHOSTs and search them for the ID they define. Note that lookup is not much help here; it doesn't seem to support searching for PerlSetEnv directly, so one needs to do that manually on the hash refs returned for the VHOSTs:
my $confTree = Apache2::Directive::conftree();
my @vhosts = $confTree->lookup('VirtualHost');
Each result is a hash ref containing keys and values like the following:
$VAR1 = {
          'PerlSetEnv' => 'AMS_MOD_PERL_PRE_LOADING_ID "9dc691d6-794a-4095-852e-e596cabf43d5"',
          [...]

          'PerlSetEnv' => [
                            '"AMS_MOD_PERL_PRE_LOADING_ID" "9dc691d6-794a-4095-852e-e596cabf43d5"',
                            '"murks" "test"'
                          ],
The structure of the hash ref depends on the actual config, so I guess it might contain additional nested hash refs. Any better ideas welcome...
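For reference, here is a rough sketch of how the pieces above might fit together inside the pre-loaded package (the find_my_vhost helper is my own hypothetical name, not from the original setup; it just scans the hash refs shown above for the ID that PerlSetEnv put into %ENV):

# Hypothetical helper: find the VHOST hash ref whose PerlSetEnv carries the ID
# that is currently visible in %ENV for this PerlPostConfigRequire run.
use Apache2::Directive ();

sub find_my_vhost {
    my ($id)   = @_;
    my @vhosts = Apache2::Directive::conftree()->lookup('VirtualHost');
    for my $vhost (@vhosts) {
        my $setenv = $vhost->{'PerlSetEnv'} or next;
        # Depending on the config, the value is a plain string or an array ref.
        my @entries = ref($setenv) eq 'ARRAY' ? @{$setenv} : ($setenv);
        return $vhost if grep { index($_, $id) >= 0 } @entries;
    }
    return undef;
}

my $vhost = find_my_vhost($ENV{'SOME_ID'});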

Error starting Apache web server

I am trying to start my Apache and it's throwing the below error:
"[Sat May 27 13:55:03 2017] [notice] Disabled use of AcceptEx() WinSock2 API"
Does someone have an idea of what is going on with my Apache?

Apache2 Request on SSL waits until time-out expired to return data

I am working with a server that I recently inherited from a departed developer. The server returns XML documents via a REST-ful interface over an SSL port. For small documents, the data is returned quickly. For larger ones (say, larger than 1 MB), the server waits until the server time-out value is exhausted and then returns the data.
I know this because if I set the time-out value to five minutes the data will be returned to a browser in a little over 300 seconds. If I drop the time-out value to two minutes, it will be returned in about 120 seconds. If I drop it to 10 seconds, then the data is returned in about 10 seconds.
Now, if I set my VirtualHost to port 80, the data is returned almost instantly, which is what I expect.
There are a number of diagnostics in the Apache log files, such as:
[Thu Apr 28 16:46:44.234689 2016] [ssl:info] [pid 22606] (70014)End of file found: [client 172.26.61.243:62030] AH01991: SSL input filter read failed.
[Thu Apr 28 16:46:44.237818 2016] [ssl:debug] [pid 22509] ssl_engine_io.c(1212): (70014)End of file found: [client 172.26.61.243:62030] AH02007: SSL handshake interrupted by system [Hint: Stop button pressed in browser?!]
[Thu Apr 28 16:46:44.569913 2016] [ssl:debug] [pid 22426] ssl_engine_io.c(1212): (70007)The timeout specified has expired: [client 172.26.61.243:62031] AH02007: SSL handshake interrupted by system [Hint: Stop button pressed in browser?!]
I do not know whether these are relevant, nor where to look for a solution. I have searched the internet and the Apache and SSL documentation, and found nothing relevant or useful.

Apache mod_fcgid problems

I have a problem on multiple servers that use the Apache module mod_fcgid to serve a CGI script, which processes the request (ticket validation and similar processing) and then serves files on the server based on the result.
I keep getting the following errors repeatedly in the logs:
[Mon Jan 30 23:11:41 2012] [warn] [client 95.35.160.193] mod_fcgid: error reading data, FastCGI server closed connection
[Mon Jan 30 23:11:41 2012] [warn] [client 95.35.160.193] (32)Broken pipe: mod_fcgid: ap_pass_brigade failed in handle_request_ipc function
[Mon Jan 30 23:13:34 2012] [warn] [client 37.8.52.128] mod_fcgid: can't apply process slot for /var/www/cgi-bin/assetx.fcgi
These problems cause the server to be slow and at other times result in a "service temporarily unavailable" error.
The servers handle heavy traffic. I currently have the following mod_fcgid directives configured:
FcgidMaxRequestsPerProcess 0
FcgidMaxProcesses 300
FcgidMinProcessesPerClass 0
FcgidIdleTimeout 240
FcgidIOTimeout 240
FcgidBusyTimeout 300
The average load on the servers is normal; the number of processes is around 250 on average.
I have researched this issue for days. Some say it is a permissions problem; I followed their suggestions, but it didn't help. I tried tuning the parameters above (the values shown are the last ones I tried), but that didn't work either. I am also trying out nginx as a replacement for Apache, but I cannot find a suitable way to run the CGI script under this high load with nginx.
What can I do to fix this problem?
Your app is dying before Apache can contact it successfully. The answer is to find out why the app is dying.
A FastCGI process should never die or quit, even in an error condition. Apache expects the FastCGI script to just keep running.
You mention you have a CGI script. How did you modify it to support FastCGI?
Usually you need to switch to something like CGI::Fast, remove all calls to die and exit, and refactor your script to run using the CGI::Fast while loop.
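To illustrate, here is a minimal sketch of that loop, assuming the script is Perl and CGI::Fast is installed (the ticket handling shown is made up; the real assetx.fcgi would do its own validation inside the loop):

#!/usr/bin/perl
# Minimal FastCGI loop sketch. The process stays resident and serves one
# request per loop iteration instead of exiting after each request.
use strict;
use warnings;
use CGI::Fast;

while (my $q = CGI::Fast->new) {
    # Never call die() or exit() in here; trap errors and respond gracefully,
    # otherwise mod_fcgid logs "FastCGI server closed connection".
    my $ticket = $q->param('ticket') // '';
    if ($ticket eq '') {
        print $q->header(-status => '400 Bad Request', -type => 'text/plain');
        print "missing ticket\n";
        next;
    }
    print $q->header('text/plain');
    print "ticket $ticket accepted at ", scalar localtime, "\n";
}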