NewRelic Apache HTTPd plugin on CentOS

I am trying to configure the Apache HTTPd plugin for newrelic-plugin-agent, but I am getting a 403 error. Any suggestions on how to fix this issue?
INFO 2014-12-16 19:52:14,885 14962 MainProcess MainThread newrelic_plugin_agent.agent __init__ L55 : Agent v1.3.0 initialized, Linux-2.6.32-431.29.2.el6.x86_64-x86_64-with-glibc2.2.5 (CentOS 6.5 Final) CPython v2.6.6
INFO 2014-12-16 19:52:14,886 14962 MainProcess MainThread helper.controller run L251 : newrelic-plugin-agent v2.4.1 started
INFO 2014-12-16 19:52:14,886 14962 MainProcess MainThread newrelic_plugin_agent.agent start_plugin_polling L263 : Enabling plugin: apache_httpd
ERROR 2014-12-16 19:52:19,450 14962 MainProcess MainThread newrelic_plugin_agent.plugins.base http_get L354 : Error response from localhost:80/server-status?auto (403):
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /server-status
on this server.</p>
<hr>
<address>Apache/2.2.15 (CentOS) Server at localhost Port 80</address>
</body></html>
ERROR 2014-12-16 19:52:19,451 14962 MainProcess MainThread newrelic_plugin_agent.plugins.apache_httpd error_message L61 : Could not match any of the stats, please make ensure Apache HTTPd is configured correctly. If you report this as a bug, please include the full output of the status page from localhost:80/server-status?auto in your ticket
WARNING 2014-12-16 19:52:19,451 14962 MainProcess MainThread newrelic_plugin_agent.agent send_components L217 : No metrics to send to NewRelic this interval
INFO 2014-12-16 19:52:19,451 14962 MainProcess MainThread newrelic_plugin_agent.agent process L133 : Stats processed in 4.57 seconds, next wake in 55 seconds

Add the following lines to the end of /etc/httpd/conf/httpd.conf, or, if you prefer them in a separate file as I do, create a file such as /etc/httpd/conf.d/status.conf and put them in there.
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
This exposes the server status on the /server-status URI. It also only allows connections from 127.0.0.1, which means the page is served only for requests made directly from the server itself.
Once you've made the change, make sure you run:
service httpd restart
You can then validate that it's working with any of the following, depending on your system configuration:
lynx --dump http://127.0.0.1/server-status
curl -vs http://127.0.0.1/server-status
wget -qO- http://127.0.0.1/server-status
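Note that Order/Deny/Allow is Apache 2.2 syntax, which matches the CentOS 6 httpd in the question. On Apache 2.4 or newer, a sketch of the same localhost-only policy would use Require instead:
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    # 2.4-style access control: allow only requests from the local host
    Require local
</Location>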

Related

Mercure keeps binding to port 80

I'm using the Mercure hub 0.13. Everything works fine on my development machine, but on my test server the hub keeps trying to bind to port 80, resulting in an error, as nginx is already running on port 80.
run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
I'm starting the hub with the following command:
MERCURE_PUBLISHER_JWT_KEY=$(cat publisher.key.pub) \
MERCURE_PUBLISHER_JWT_ALG=RS256 \
MERCURE_SUBSCRIBER_JWT_KEY=$(cat publisher.key.pub) \
MERCURE_SUBSCRIBER_JWT_ALG=RS256 \
./mercure run -config Caddyfile.dev
Caddyfile.dev is as follows:
# Learn how to configure the Mercure.rocks Hub on https://mercure.rocks/docs/hub/config
{
    {$GLOBAL_OPTIONS}
}

{$SERVER_NAME:localhost:3000}

log

route {
    redir / /.well-known/mercure/ui/
    encode zstd gzip

    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Permissive configuration for the development environment
        cors_origins *
        publish_origins *
        demo
        anonymous
        subscriptions
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }

    respond /healthz 200
    respond "Not Found" 404
}
When I provide SERVER_NAME as an environment variable without a domain (SERVER_NAME=:3000), the hub actually starts on port 3000, but it runs in HTTP mode, which only allows anonymous subscriptions and is not what I need.
Server:
Operating System: CentOS Stream 8
Kernel: Linux 4.18.0-383.el8.x86_64
Architecture: x86-64
Full output when trying to start the Mercure hub:
2022/05/10 04:50:29.605 INFO using provided configuration {"config_file": "Caddyfile.dev", "config_adapter": ""}
2022/05/10 04:50:29.606 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile.dev", "line": 3}
2022/05/10 04:50:29.609 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2022/05/10 04:50:29.610 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2022/05/10 04:50:29.610 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0003d6150"}
2022/05/10 04:50:29.627 INFO tls cleaning storage unit {"description": "FileStorage:/root/.local/share/caddy"}
2022/05/10 04:50:29.628 INFO tls finished cleaning storage units
2022/05/10 04:50:29.642 INFO pki.ca.local root certificate is already trusted by system {"path": "storage:pki/authorities/local/root.crt"}
2022/05/10 04:50:29.643 INFO tls.cache.maintenance stopped background certificate maintenance {"cache": "0xc0003d6150"}
run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
I'm a bit late, but I hope this will help someone.
As mentioned here, you can specify the http_port manually in your Caddy configuration file.
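A minimal sketch, assuming all you need is to move Caddy's HTTP listener off port 80 (the port number below is an arbitrary free port, not anything mandated by Mercure):
{
    {$GLOBAL_OPTIONS}
    # Assumed free port; pick anything that does not collide with nginx
    http_port 8080
}
With http_port set, Caddy no longer binds :80 for its automatic HTTP->HTTPS redirects, so the hub can coexist with nginx.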

ECS logging (awslogs driver) only logging Apache server startup logs to CloudWatch, no error.log & no access.log

I have the problem that my ECS logs (awslogs driver) are not working as expected. In CloudWatch I'm only seeing the server startup logs, not the useful logs from Apache (/var/log/apache2/error.log & /var/log/apache2/access.log).
I have a Docker multi-container setup with one container running the Apache server and the other container running PHP-FPM. My container logs on CloudWatch look like this:
Apache-Container:
23:35:39 *** Running /etc/my_init.d/02_init.sh...
23:35:39 Starting Apache
23:35:39 * Starting Apache httpd web server apache2
23:35:39 /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted)
23:35:39 Setting ulimit failed. See README.Debian for more information.
23:35:40 *** Running /etc/rc.local...
23:35:40 *** Booting runit daemon...
23:35:40 *** Runit started as PID 225
23:35:40 Oct 25 22:35:40 apache-container syslog-ng[231]: syslog-ng starting up; version='3.5.6'
2019-10-26 00:17:01
Oct 25 23:17:01 apache-container CRON[947]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
...
07:35:16 tail: '/var/log/syslog' has been replaced; following new file
...
FPM-Container:
...
10:25:23 172.x.x.x - 27/Okt/2019:09:25:23 +0000 "GET /app.php" 200
10:25:25 172.x.x.x - 27/Okt/2019:09:25:24 +0000 "GET /app.php" 200
...
I've checked various forums & online resources. If I understood it right, I just need to symlink my logs to STDOUT/STDERR, or even better to /proc/self/fd/1 & /proc/self/fd/2, like this:
ln -sf /dev/stdout /var/log/apache2/access.log
ln -sf /dev/stderr /var/log/apache2/error.log
I've tried to link the logs in my Dockerfile via the RUN command and also at runtime, but with no success. I can see that my logs show up correctly in the log files before linking them. I've also tried things like echo "test stderr logs" >> /dev/stderr or echo "test stdout logs" >> /dev/stdout inside and outside the containers, but nothing shows up in the CloudWatch logs. When I try docker logs MY_DOCKER_CONTAINER_ID, I get: Error response from daemon: configured logging driver does not support reading.
Maybe I'm missing some basic knowledge here. I see that syslog is in my environment/base image (maybe I need to merge the syslog & Apache logs?) and that the PHP-FPM container is logging 200s, but only for app.php, even though I would like to know the exact path of the accessed URL.
You need to tell the docker-compose file used by ECS to use the awslogs logging driver, like so:
version: '2'
services:
  myapp:
    build:
      context: .
    logging:
      driver: awslogs
      options:
        awslogs-group: "/my/log/group"
        awslogs-region: "us-west-2"
        awslogs-stream-prefix: some-prefix
This should cause whatever is written to /dev/stdout and /dev/stderr to appear in CloudWatch. You can find more information on the logging driver options on the Docker page.
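If you register the ECS task definition directly instead of going through docker-compose, the equivalent container-level setting should look roughly like this (group, region, and prefix are placeholders):
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/my/log/group",
        "awslogs-region": "us-west-2",
        "awslogs-stream-prefix": "some-prefix"
    }
}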
Hey, thanks for the responses. If I remember it right, the problem was that all of my output was redirected to syslog and there was a misconfiguration in my syslog config.

Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED)

I'm trying to make edits to certain pages on our website on localhost:3000 but am experiencing the following error on specific pages.
Redis::CannotConnectError in FeaturesController#inventory
Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED)
ENV dump:
GATEWAY_INTERFACE: "CGI/1.2"
HTTP_ACCEPT: "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
HTTP_ACCEPT_ENCODING: "gzip, deflate, sdch"
HTTP_ACCEPT_LANGUAGE: "en-US,en;q=0.8"
HTTP_CACHE_CONTROL: "max-age=0"
REMOTE_ADDR: "127.0.0.1"
SERVER_NAME: "localhost"
SERVER_PROTOCOL: "HTTP/1.1"
In the development.log file, it says:
Started GET "/en/ipad-pos-for-retail/inventory-management" for 127.0.0.1 at 2016-10-05 17:02:22 -0400
Processing by FeaturesController#inventory as HTML
Parameters: {"locale"=>"en"}
Geokit is using the domain: localhost
Completed 500 Internal Server Error in 709ms
Redis::CannotConnectError (Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED)):
app/controllers/features_controller.rb:46:in `inventory'
Rendered /Users/benedictwong/.rvm/gems/ruby-2.2.5/gems/actionpack-3.2.13/lib/action_dispatch/middleware/templates/rescues/_trace.erb (10.4ms)
Rendered /Users/benedictwong/.rvm/gems/ruby-2.2.5/gems/actionpack-3.2.13/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (2.4ms)
Rendered /Users/benedictwong/.rvm/gems/ruby-2.2.5/gems/actionpack-3.2.13/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (45.1ms)
Would really appreciate some help! Complete newbie here!
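For what it's worth, Errno::ECONNREFUSED on localhost:6379 usually just means nothing is listening on that port. A quick check, assuming Redis is installed locally:
redis-cli ping      # prints PONG if a server is answering
redis-server        # otherwise, start one in the foreground
If redis-cli cannot connect, start the server, or point the app's Redis configuration at wherever your Redis actually runs.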

How to configure an Apache server to allow wget through a proxy?

I'm totally new to the Apache httpd stuff.
I set up my host ServerHost1 as a file server with httpd:
# httpd -v
Server version: Apache/2.4.6 (Red Hat Enterprise Linux)
Server built: Dec 2 2014 08:09:42
I have put the file TestFile.txt under /var/www/html/TestDir/TestFile.txt
I modified part of the httpd.conf as follows:
<Directory />
    Order deny,allow
    Allow from all
</Directory>
On a test host TestHost1 with full Internet access, I can download my file with wget:
TestHost1]# wget http://ServerHost1/TestDir/TestFile.txt
--2016-03-17 13:39:12-- http://ServerHost1/TestDir/TestFile.txt
Resolving ServerHost1 (ServerHost1)... <IP address>
Connecting to ServerHost1 (ServerHost1)|<IP address>|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2859976598 (2.7G) [application/octet-stream]
Saving to: ‘TestFile.txt’
2% [> ] 60,645,376 24.0MB/s
On TestHost2, a host sitting on a semi-isolated network, I have to use a proxy for wget to work. It works fine with Google:
TestHost2]# wget google.ca
--2016-03-17 13:53:26-- http://google.ca/
Resolving proxy.com (proxy.com)... <ProxyIP>
Connecting to proxy.com (proxy.com)|<ProxyIP>|:3128... connected.
Proxy request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.ca/ [following]
--2016-03-17 13:53:26-- http://www.google.ca/
Reusing existing connection to proxy.com:3128.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
[ <=> ] 19,928 --.-K/s in 0.1s
2016-03-17 13:53:27 (159 KB/s) - ‘index.html’ saved [19928]
However, when I try to get my file from ServerHost1, I get ERROR 503: Service Unavailable:
TestHost2]# wget http://ServerHost1/TestDir/TestFile.txt
--2016-03-17 13:57:13-- http://ServerHost1/TestDir/TestFile.txt
Resolving proxy.com (proxy.com)...<ProxyIP>
Connecting to proxy.com (proxy.com)|<ProxyIP>|:3128... connected.
Proxy request sent, awaiting response... 503 Service Unavailable
2016-03-17 13:57:13 ERROR 503: Service Unavailable.
So the questions are:
(1) Why am I seeing 503 Service Unavailable when the file is apparently available (since I can download it from TestHost1)?
(2) How do I configure my httpd.conf file so that TestHost2 can wget the file from ServerHost1?
Maybe try ProxyRequests, as described in the Apache docs: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
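A minimal forward-proxy sketch along those lines, assuming mod_proxy and mod_proxy_http are available and that you restrict who may use the proxy (the client network below is a placeholder):
# Load in httpd.conf if not already loaded elsewhere
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

ProxyRequests On
ProxyVia On

# Placeholder network; limit proxying to your own clients
<Proxy "*">
    Require ip 192.168.0.0/16
</Proxy>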

How to connect Apache logs to a Graylog2 server

I'm using a Graylog2 server as my application log server, but I couldn't connect the Apache logs to Graylog2. Is there any guide for sending Apache logs to a Graylog2 server, or can someone help me solve this?
I put this at the bottom of my /etc/rsyslog.conf on Ubuntu 14.04:
# Apache access file:
$ModLoad imfile
$InputFileName /var/log/apache2/access.log
$InputFileTag apache-access:
$InputFileStateFile stat-apache-access
$InputFileSeverity info
$InputRunFileMonitor
#Apache Error file:
$InputFileName /var/log/apache2/error.log
$InputFileTag apache-errors:
$InputFileStateFile stat-apache-error
$InputFileSeverity error
$InputRunFileMonitor
$InputFilePollInterval 10
if $programname == 'apache-access' then @10.11.11.33:514
if $programname == 'apache-errors' then @10.11.11.33:514
where 10.x.x.x is my Graylog2 server.
There will be a GELF module for Apache soon. Until that is released, I can recommend using Logstash to parse and forward the Apache log files. You could even send the log lines to "Raw/Plaintext" inputs in Graylog2 using tail and netcat.
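For the tail-and-netcat route, something like this against a "Raw/Plaintext TCP" input would do (the host and port are assumptions; use whatever your Graylog2 input actually listens on):
tail -F /var/log/apache2/access.log | nc 10.11.11.33 5555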