Docker Apache image, store logs on the host? - apache

I use Docker to build an Apache image, and then use docker-compose to run it. I set up Apache's access.log and error.log and want to store them outside of the container. Currently I use volumes, but that stores the data both in the container and on the host.
docker-compose.yml
version: '2'
services:
  web:
    image: apache
    build: .
    container_name: my-image
    volumes:
      - "/var/log/my-app:/var/log/apache2"
    restart: always
    ports:
      - "8000:80"
My question is: how can I store the Apache log data only on the host? It would be even better if there were a way to stream the Apache log data to stdout, so that I don't need to store it on the host at all.
Thanks in advance!

Volumes but it stores the data both in container and host.
Not really; it should only store the data on the host (and make it visible in the container through a bind mount).
if there is a way to stream apache log data to stdout
Possible, yes, through configuration, but then the logs would not be persistent.

Check this post for the Apache configuration:
https://serverfault.com/questions/711168/writing-apache2-logs-to-stdout-stderr
Then you can inspect your logs with
docker logs <container>
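For reference, a minimal sketch of what that configuration can look like (the directives are standard Apache; where they go depends on your image, e.g. apache2.conf or a vhost file on Debian-based images):

# send the error log to the container's stderr
ErrorLog /dev/stderr
# send the access log to the container's stdout
CustomLog /dev/stdout combined

Some official images take a similar route by pointing the logs at the main process's stdout/stderr file descriptors, so either way the output ends up in docker logs.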

Related

Traefik load balance between Docker provider and server at url

I currently have a Traefik setup with one nodejs service running locally in a docker container with a docker-compose.yml file like so:
container_name: nestjs-server
build:
  context: ./
  dockerfile: Dockerfile
networks:
  - traefik-global-proxy
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.nestjs-server.rule=Host(`mydomain.com`) || Host(`www.mydomain.com`)"
  - "traefik.http.routers.nestjs-server.entrypoints=websecure"
  - "traefik.http.routers.nestjs-server.tls.certresolver=letsencrypt"
  - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
I am running on a 1vCPU / 2GB cloud instance. Now I would like to add a second node app instance on another VM. I have seen it is possible to add server instances to the load balancer like so:
services:
  my-service:
    loadBalancer:
      servers:
        - url: "http://<private-ip-server-1>:<private-port-server-1>/"
        - url: "http://<private-ip-server-2>:<private-port-server-2>/"
But I am not sure how to load balance between an instance on another server and the local Docker instances. I have read that it is not possible to mix the label-based config with the file config, so I assume I'd need to do something like:
my-service:
  loadBalancer:
    servers:
      - url: "http://<private-ip-server-1>:<private-port-server-1>/"
      - port: "<local-port-server-2>"
Is this possible? What is the correct way to accomplish this, other than having everything run on the same machine within docker?
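This wasn't answered in the thread, but one direction worth sketching (the names are placeholders, not a verified setup): Traefik v2 allows a router declared through Docker labels to reference a service declared in the file provider via the @file suffix, so the mixed server pool could live entirely in a dynamic file config:

# dynamic file provider config, e.g. /etc/traefik/dynamic.yml (hypothetical path)
http:
  services:
    nestjs-pool:
      loadBalancer:
        servers:
          - url: "http://<private-ip-server-1>:<private-port-server-1>/"  # remote VM
          - url: "http://<private-ip-server-2>:<private-port-server-2>/"  # local instance via its published port

# and in the docker-compose labels, point the router at that service instead
# of the auto-generated one:
#   - "traefik.http.routers.nestjs-server.service=nestjs-pool@file"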

Docker Swarm CE, Reverse-Proxy without shared config file on master nodes

I've been wrestling with this for several days now. I have a swarm with 9 nodes, 3 of them managers. I'm planning on deploying multiple testing environments to this swarm using Docker-Compose for each environment. We have many REST services in each environment, and I would like to manage access to them through a reverse proxy so that access to the services comes through a single port per environment. Ideally I would like it to behave something like this: http://dockerNode:9001/ServiceA and http://dockerNode:9001/ServiceB.
I have been trying Traefik, docker proxy, and HAProxy (I haven't tried NGINX yet). All of these have run into issues where I can't even get their examples to work, or they require me to drop a file on each master node, or set up cloud storage of some sort.
I would like to have something that just works by dropping it into a docker-compose file, but I am also comfortable configuring all the mappings in the compose file (these are not dynamically changing environments where services come and go).
Is there a working example of this type of setup, or what should I be looking into?
If you want to access your service using the server IP and the service port, then you need to set up dnsrr endpoint mode to bypass Docker Swarm's routing mesh. Here is a YAML example so you know how to do it.
version: "3.3"
services:
alpine:
image: alpine
ports:
- target: 9100
published: 9100
protocol: tcp
mode: host
deploy:
endpoint_mode: dnsrr
placement:
constraints:
- node.labels.host == node1
Note the endpoint_mode: dnsrr setting and the way the port has been defined. Also note the placement constraint, which means the service can only be scheduled on the node with the label node1. Thus, you can now access your service using node1's IP address and port 9100. As for the /ServiceA part of the URI, just append it.
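With host-mode ports and dnsrr the service is then reached on the node directly, along the lines of:

# assuming node1 resolves to (or is replaced by) that node's IP address
curl http://node1:9100/ServiceA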

How to add a simple routing rule to traefik

I'm trying to get started with traefik in the hopes I can replace my current reverse proxy (pound) with traefik.
How do I add a simple routing rule so that mysubdomain.mydomain.com routes to http://192.168.x.x:8080?
I'm following the quickstart here. I created the following docker compose yml file and started it with docker-compose up -d reverse-proxy
version: '3'
services:
  reverse-proxy:
    container_name: reverse-proxy
    image: traefik # The official Traefik docker image
    command: --api --docker # Enables the web UI and tells Træfik to listen to docker
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
Great, the container is running, but now what? How would I go about adding the simple routing rule?
If my backend web service isn't running one of these supported backends, will it not work? Surely traefik can simply route HTTP requests to any HTTP backend, right?
For example, my backend web service is the web interface of my Synology NAS at home. Traefik should be able to route to this, right? If so, how?
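The thread leaves this open, but with the Traefik 1.x image used in the quickstart above, a static route to a non-Docker backend is normally declared in the file backend; a rough sketch (the backend/frontend names are made up, the NAS address is the one from the question):

# traefik.toml (Traefik 1.x file backend)
[file]

[backends]
  [backends.synology]
    [backends.synology.servers.server1]
      url = "http://192.168.x.x:8080"

[frontends]
  [frontends.synology]
    backend = "synology"
    [frontends.synology.routes.route1]
      rule = "Host:mysubdomain.mydomain.com"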

How to make REST calls between Frontend and Backend using Docker containers

I have 3 docker containers:
Backend (Spring boot rest api)
Frontend (Js and html in the apache image)
Mongodb
I'm orchestrating them through docker-compose and it works nicely.
However, I don't know how to let my frontend JavaScript client know the backend container's host/IP in order to reach it.
This is my docker-compose.yml:
version: '3.1'
services:
  project-server:
    build: .
    restart: always
    container_name: project-server
    ports:
      - 8200:8200
    working_dir: /opt/app
    depends_on:
      - mongo
  httpd:
    image: project-ui
    container_name: project-ui
    ports:
      - 8201:80
  mongo:
    image: project-mongo
    container_name: project-mongo
    ports:
      - 27018:27017
    volumes:
      - $HOME/data/mongo-data:/data/db
      - $HOME/data/mongo-bkp:/data/bkp
    restart: always
So I've tried this in my JS client app:
export default {
  REMOTE_HOST: 'http://project-server:8200'
}
But it doesn't work. (Failed to load resource: net::ERR_NAME_NOT_RESOLVED)
And I'm pretty sure it's because the JS runs locally in the browser, so it has no way to resolve that name.
What's the right way to do this? Is there any way for the frontend service (apache) to pass/render the real host to the JavaScript so it can pick it up somehow?
Thanks a lot
project-server can be resolved only within the network created by docker-compose. As you mentioned, to connect from the outside world you need to expose the IP of your host instead of project-server. The problem is that the container doesn't know the IP of its host. Here is a detailed discussion about that: How to get the IP address of the docker host from inside a docker container
What you probably need in your situation is to run the container passing the IP of the host as an environment variable:
docker run --env <IP>=<value>
Then in Node you can just read that variable.
Hope it helps
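Applied to the docker-compose setup above, that idea could look roughly like this (API_BASE_URL and the address are made-up placeholders, not part of the original answer):

# hand the UI container the externally reachable backend address
httpd:
  image: project-ui
  container_name: project-ui
  environment:
    - API_BASE_URL=http://<docker-host-ip>:8200  # the backend's host-published port
  ports:
    - 8201:80

Since the JS runs in the browser, the container still has to surface that value to the client somehow, e.g. by writing it into a small config.js at startup that the page loads before the rest of the app.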

browse postgres in a docker container

I am using docker-compose to work across multiple docker containers; these containers are mostly individual Django REST framework applications. I have downloaded all the containers and am able to build the whole application using them.
Each container has a postgres db running, and I now want to browse the db using a UI tool. I know pgadmin can do the work here, but how can I configure my pgadmin to show any postgres database from these containers?
It should be possible to expose your database port to your local network as well.
Normally you connect your application containers internally to the database container. In that case there is no need to declare a ports section for the database in your compose file, but if you add that entry you bind your database to your local host in addition.
Once you have exposed the postgres port to your host, it should be no problem to connect with the GUI tool of your choice.
version: '3.2'
services:
  httpd:
    image: "oth/d_apache2.4:0.2"
    ports:
      # container port 80 of the webserver to localhost 80
      - "80:80"
  keycloak:
    # keycloak uses keycloak_db
    image: "jboss/keycloak-postgres:3.2.1.Final"
    environment:
      # internal network reference to db container
      - POSTGRES_PORT_5432_TCP_ADDR=keycloak_db
      - POSTGRES_PORT_5432_TCP_PORT=5432
  keycloak_db:
    image: "postgres:alpine"
    ports:
      # container port 5432 to localhost 5432
      # inside the stack the port is still available as 5432
      - "5432:5432"
Make sure that the port of the postgres container is mapped to the host system. The default postgres port is 5432. You can do that with the ports directive in your docker-compose.yml. Each host port can only be mapped once, so your config file would look like:
services:
  postgres_1:
    ports:
      - "49000:5432"
    [...]
  postgres_2:
    ports:
      - "49001:5432"
    [...]
After that you should be able to access the desired database with the IP of your docker host and the above specified port.
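For example, for postgres_1 above a connection attempt from the host could look like this (user and database names are placeholders):

psql -h <docker-host-ip> -p 49000 -U postgres -d postgres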
If you still encounter problems connecting with a client like pgadmin check the following configuration files inside your container.
Is there anything blocking your connection attempt? Is your docker host behind a firewall?
postgresql.conf under the section connections and authentication:
listen_addresses
port
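A debug-only sketch of those two settings (the official postgres image already listens on all addresses by default, so this mainly matters for custom setups):

# postgresql.conf -- accept connections from outside the container (debug only)
listen_addresses = '*'
port = 5432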
Check your pg_hba.conf, which controls client authentication.
For debug purposes you can set it to the following:
Don't do the following in production:
host all all all trust