How to make the netdata apache plugin work in a Plesk environment - apache

I'm wondering how to make netdata's apache plugin work on a Plesk server...
The graphs are empty and no data is displayed.
I've checked, and apache mod-status is enabled and working...

You probably have Apache behind an Nginx proxy, so Apache is not listening on the default port (80).
Run these commands:
cd /etc/netdata/
./edit-config go.d/apache.conf
Go to the bottom of the config file, where you will see:
jobs:
- name: local
url: http://localhost/server-status?auto
- name: local
url: http://127.0.0.1/server-status?auto
and change it to:
jobs:
- name: local
url: http://localhost:7080/server-status?auto
- name: local
url: http://127.0.0.1:7080/server-status?auto
(You can check which port your Apache is running on with the netstat -pltn command.)
Restart netdata and you will see the data.
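For example, a quick way to confirm the port and verify the status endpoint before restarting (a rough sketch assuming a systemd-based install; 7080 is the port Plesk usually gives the backend Apache):
# confirm which port Apache is listening on
netstat -pltn | grep -E 'apache2|httpd'
# verify the status endpoint answers on that port
curl -s http://127.0.0.1:7080/server-status?auto
# restart netdata so the collector picks up the new URL
sudo systemctl restart netdata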
# Custom logs
Plesk saves the logs in a special folder, so you probably want to change the default log paths.
Edit (or create) the file /etc/netdata/python.d/web_log.conf
Set this content:
nginx_log:
  name : 'nginx_log'
  path : '/var/www/vhosts/system/{yourdomain}/logs/proxy_access_ssl_log'

apache_log:
  name : 'apache_log'
  path : '/var/www/vhosts/system/{yourdomain}/logs/access_ssl_log'
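To double-check the actual log locations (a hedged example; the exact file names can differ per domain and per Plesk version), you can list them and then restart netdata so the web_log collector re-reads its configuration:
ls -l /var/www/vhosts/system/*/logs/
sudo systemctl restart netdata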

Related

Running an apache container on a port > 1024

I've built a docker image based on httpd:2.4. In my k8s deployment I've defined the following securityContext:
securityContext:
  privileged: false
  runAsNonRoot: true
  runAsUser: 431
  allowPrivilegeEscalation: false
In order to get this container to run properly as non-root, apache needs to be configured to bind to a port > 1024 instead of the default 80. As far as I can tell this means changing Listen 80 in httpd.conf to Listen {Some port > 1024}.
When I want to run the docker image I've built normally (i.e. on the default port 80), I have the following port settings:
deployment
spec.template.spec.containers[0].ports[0].containerPort: 80
service
spec.ports[0].targetPort: 80
spec.ports[0].port: 8080
ingress
spec.rules[0].http.paths[0].backend.servicePort: 8080
Given these settings, the service becomes accessible at the host URL provided in the ingress manifest. Again, this is without the changes to httpd.conf. When I make those changes (using Listen 8000) and add the securityContext section to the deployment, I change the various manifests accordingly:
deployment
spec.template.spec.containers[0].ports[0].containerPort: 8000
service
spec.ports[0].targetPort: 8000
spec.ports[0].port: 8080
ingress
spec.rules[0].http.paths[0].backend.servicePort: 8080
Yet for some reason, when I try to access a URL that should be working I get a 502 Bad Gateway error. Have I set the ports correctly? Is there something else I need to do?
Check if the pod is Running
kubectl get pods
kubectl logs <pod_name>
Check if the URL is accessible within the pod
kubectl exec -it <pod_name> -- bash
$ curl http://localhost:8000
If the above didn't work, check your httpd.conf (see the snippet after these steps).
Check with the service name
kubectl exec -it <ingress pod_name> -- bash
$ curl http://svc:8080
You can check ingress logs too.
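For the record, the point of that httpd.conf check is that the Listen directive inside the image must match the containerPort/targetPort you declared; a minimal sketch, using the 8000 port from the question:
# httpd.conf inside the image (default location for the httpd:2.4 base image)
Listen 8000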
In order to get this container to run properly as non-root apache
needs to be configured to bind to a port > 1024, as opposed to the
default 80
You got it, that's the hard requirement for running the apache container as non-root, and this change needs to be done at the container level, not in Kubernetes abstractions like the Deployment's Pod spec or the Service/Ingress resource definitions. So the only thing left in your case is to build a custom httpd image that listens on a port > 1024. The same approach applies to NGINX Docker containers.
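A minimal sketch of such an image, assuming the stock httpd:2.4 base and port 8000 (the sed one-liner is just one way to change the directive; copying in a pre-edited httpd.conf works just as well):
FROM httpd:2.4
# switch the default Listen 80 to an unprivileged port so a non-root user can bind it
RUN sed -ri 's/^Listen 80$/Listen 8000/' /usr/local/apache2/conf/httpd.conf
EXPOSE 8000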
One key piece of information about the 'containerPort' field in the Pod spec, which you are adjusting manually and which is not so apparent: it is there primarily for informational purposes and does not open any port at the container level. According to the Kubernetes API reference:
Not specifying a port here DOES NOT prevent that port from being
exposed. Any port which is listening on the default "0.0.0.0" address
inside a container will be accessible from the network. Cannot be updated.
I hope this helps you to move on.

browse postgres in a docker container

I am using docker-compose to work across multiple docker containers; these containers are mostly individual Django REST framework applications. I have downloaded all the containers and am able to build the whole application from them.
Each container has a postgres db running, and I now want to browse the db using a UI tool. I know pgadmin can do the job here, but how can I configure pgadmin to show the postgres databases from these containers?
It should be possible to expose your database port to your local host as well.
Normally you connect your application containers to the database container internally. In that case you don't need to declare a ports section for the database in your compose file, but if you add that entry you bind the database to your local host in addition.
Once you have exposed the postgres port to a host port, it should be no problem to connect with the GUI tool of your choice.
version: '3.2'
services:
  httpd:
    image: "oth/d_apache2.4:0.2"
    ports:
      # container port 80 of the webserver to localhost 80
      - "80:80"
  keycloak:
    # keycloak uses keycloak_db
    image: "jboss/keycloak-postgres:3.2.1.Final"
    environment:
      # internal network reference to db container
      - POSTGRES_PORT_5432_TCP_ADDR=keycloak_db
      - POSTGRES_PORT_5432_TCP_PORT=5432
  keycloak_db:
    image: "postgres:alpine"
    ports:
      # container port 5432 to localhost 5432
      # the port stays available inside the stack as well
      - "5432:5432"
Make sure that the port of the postgres container is mapped to the host system. The default postgres port is 5432. You can do that with the ports directive in your docker-compose.yml. You can only map each host port once, so your config file would look like:
services:
  postgres_1:
    ports:
      - "49000:5432"
    [...]
  postgres_2:
    ports:
      - "49001:5432"
    [...]
After that you should be able to access the desired database using the IP of your docker host and the port specified above.
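For example, with the hypothetical mapping above you could point psql (or pgadmin's connection dialog) at the mapped host port; user, password and database name are whatever your containers were configured with:
# connect to the first postgres container through the mapped host port
psql -h <docker_host_ip> -p 49000 -U <db_user> -d <db_name>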
If you still encounter problems connecting with a client like pgadmin, check the following configuration files inside your container.
Is there anything blocking your connection attempt? Is your docker host behind a firewall?
postgresql.conf, under the section "Connections and Authentication":
listen_addresses
port
Check your pg_hba.conf, which controls client authentication.
For debug purposes you can set it to the following:
Don't do the following in production:
host all all all trust
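As a rough sketch, the matching postgresql.conf lines would be (again, only for debugging; listening on all interfaces together with the trust rule above is wide open):
# postgresql.conf -- accept connections on all interfaces, default port
listen_addresses = '*'
port = 5432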

Issue with docker push on local registry over HTTPS: access to resource denied

I have a problem with my docker registry. My "server" VM runs Kali Linux. I created the docker registry over HTTP and use a CentOS VM as a client. I declared the registry as insecure on the client VM and it worked perfectly.
Now I am trying to switch it to HTTPS. In order to do that, I use nginx as a proxy. I followed this tutorial: Step 5 — Setting Up SSL, except for Part 8 to make it a service (I don't know why, but I can't do it).
Because I don't have a domain name, I used a fake one. In order for it to be recognized, I added my IP (192.168.X.X) and the domain name I used (myregistryexemple) to the /etc/hosts file on both VMs.
As instructed by the tutorial, I generated the certificate on my "server" VM (the Kali one) and sent it by scp to my client VM. I made the CentOS VM trust the certificate with these commands:
yum install ca-certificates
update-ca-trust force-enable
cp cert.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
I restarted the docker service on the client VM, and launched the docker registry and the nginx proxy with "docker-compose up" on my Kali VM.
I tag and try to push an ubuntu image to the registry:
docker tag ubuntu myregistryexemple/ubuntu
docker push myregistryexemple/ubuntu
But I get this error :
The push refers to a repository [docker.io/myregistryexemple/ubuntu]
56827159aa8b: Preparing
440e02c3dcde: Preparing
29660d0e5bb2: Preparing
85782553e37a: Preparing
745f5be9952c: Preparing
denied: requested access to the resource is denied
Then I try to push to localhost directly:
docker tag ubuntu localhost:5000/ubuntu && docker push localhost:5000/ubuntu
Then I ran docker login against the domain from the client VM, and it worked, but when I tried to pull from my domain registry on the client VM, docker could not find the images I had tried to push on the registry.
Does someone have any idea why, and know how to help me?
OK, so I found a way to make it work.
It is quite simple: just follow the complete tutorial I quoted in the question ( https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04#step-5-%E2%80%94-setting-up-ssl ).
After you have created the repository, and before you push/pull a docker image:
You need to edit /etc/hosts on both the client and server VMs.
Add the line: serverVmIp domainChosen (IP first, then the hostname).
Save and quit it.
Now we need the client to trust the generated certificate. In order to do that, you can use this tutorial: http://kb.kerio.com/product/kerio-connect/server-configuration/ssl-certificates/adding-trusted-root-certificates-to-the-server-1605.html
Then restart your registry and your docker daemon. You should then be able to use your domain name to push/pull to your registry over HTTPS.
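A rough sketch of those final steps, assuming the compose stack from the tutorial and using <nginx_https_port> as a placeholder for whichever port your nginx proxy terminates TLS on:
# on the client VM: restart the daemon so it picks up the trusted certificate
sudo systemctl restart docker
# on the Kali "server" VM: restart the registry + nginx stack
docker-compose restart
# tag with the registry host (and its HTTPS port) so docker does not default to docker.io
docker tag ubuntu myregistryexemple:<nginx_https_port>/ubuntu
docker push myregistryexemple:<nginx_https_port>/ubuntu
docker pull myregistryexemple:<nginx_https_port>/ubuntu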

How to configure Let's encrypt certificates for nginx inside a docker image?

I know how to configure Let's Encrypt for nginx. I'm having a hard time configuring Let's Encrypt with nginx inside a docker image. The Let's Encrypt certificates are symlinked into the /etc/letsencrypt/live folder, and I don't have permission to view the real certificate files inside /etc/letsencrypt/archive.
Can someone suggest a way out?
I'll add my mistake here; maybe someone will find it useful.
I mounted the /live directory of letsencrypt and not the whole letsencrypt directory tree.
The problem with this:
The /live folder just holds symlinks to the /archive folder, which is not mounted into the docker container with my approach.
(In fact, I even mounted a /certs folder that symlinked to the live folder, because I had that certs folder in the development environment; same problem.. the real (symlinked) files were not mounted.)
All problems went away when I mounted /etc/letsencrypt instead of /live
A part of my docker-compose.yml
services:
  ngx:
    image: nginx
    container_name: ngx
    ports:
      - 80:80
      - 443:443
    links:
      - php-fpm
    volumes:
      - ../../com/website:/var/www/html/website
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./nginx_conf.d/:/etc/nginx/conf.d/
      - ./nginx_logs/:/var/log/nginx/
      - ../whereever/you/haveit/etc/letsencrypt/:/etc/letsencrypt
The last line in that config is the important one. I changed it from
- ./certs/:/etc/nginx/certs/
And /certs was a symlink to /etc/letsencrypt/live in my case. This cannot work, as I described above.
If anyone is having this problem, I've solved it by mounting the folders into the docker container.
I've mounted both the /etc/letsencrypt and /etc/ssl folders into docker.
Docker has the -v flag to mount volumes. Don't forget to open port 443 for the container.
Depending on how you mount it, it's possible to enable HTTPS in the docker container without changing the nginx paths.
docker run -d -p 80:80 -p 443:443 -v /etc/letsencrypt/:/etc/letsencrypt/ -v /etc/ssl/:/etc/ssl/ <image name>
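With the certificates mounted at their usual paths, the nginx server block inside the container can reference them directly; a minimal sketch, with example.com standing in for your domain:
server {
    listen 443 ssl;
    server_name example.com;
    # paths as they appear inside the container via the /etc/letsencrypt mount
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}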
If you are using nginx, Docker and Let's Encrypt you might like the following GitHub project: https-portal.
It automates a lot of manual actions, and makes it easy to manage your configurations using docker-compose. From the README:
Features
Test Locally
Redirections
Automatic Container Discovery
Hybrid Setup with Non-Dockerized Apps
Multiple Domains
Serving Static Sites
Share Certificates with Other Apps
HTTP Basic Auth
How it works
obtains an SSL certificate for each of your subdomains from Let's Encrypt.
configures Nginx to use HTTPS (and force HTTPS by redirecting HTTP to HTTPS)
sets up a cron job that checks your certificates every week and renews them if they expire within 30 days.
For some background: the project was also discussed on Hacker News: HTTPS-Portal: Automated HTTPS server powered by Nginx, Let’s Encrypt and Docker
(Disclaimer: I have no affiliation with the project, I'm just a user.)

artifactory pro registry docker image

I am trying out the 30-day trial version of the artifactory-registry docker image to evaluate the docker repository for our internal use. I am following the documentation: https://www.jfrog.com/confluence/display/RTF/Running+with+Docker
After I run the docker image I am able to access the UI on port 8081; however, when I try to push an image I get the following error:
“The plain HTTP request was sent to HTTPS port”
Here's how I deploy the image:
sudo docker pull mysql
sudo docker tag mysql localhost:5002/mysql
sudo docker push localhost:5002/mysql
Also, the documentation says that Artifactory can be accessed at the following URLs:
http://localhost/artifactory
http://localhost:8081/artifactory
https://localhost:5000/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-remote/v2)
https://localhost:5001/v1 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-prod-local/v1)
https://localhost:5002/v1 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-dev-local/v1)
https://localhost:5001/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-prod-local/v2)
https://localhost:5002/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-dev-local/v2)
But I get a 404 when trying to access any of the HTTPS URLs.
What am I missing?
This appears to be an NGINX configuration issue (as described here) with HTTPS requests not being forwarded to Artifactory.
Changing the configuration to forward your requests should fix your issue.
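As a rough sketch of what that forwarding can look like for one of the ports, based on the mapping quoted in the question (5002 -> docker-dev-local) and with placeholder certificate paths, the nginx server block would be along these lines:
server {
    listen 5002 ssl;
    server_name localhost;

    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location /v2/ {
        # forward HTTPS requests on this port to the docker-dev-local repository in Artifactory
        proxy_pass http://localhost:8081/artifactory/api/docker/docker-dev-local/v2/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}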