Docker Swarm CE: reverse proxy without a shared config file on the manager nodes

I've been wrestling with this for several days now. I have a swarm with 9 nodes, 3 of them managers. I'm planning on deploying multiple testing environments to this swarm, using docker-compose for each environment. Each environment contains many REST services, and I would like to manage access to them through a reverse proxy so that access to the services goes through a single port per environment. Ideally I would like it to behave something like this: http://dockerNode:9001/ServiceA and http://dockerNode:9001/ServiceB.
I have been trying Traefik, docker proxy, and HAProxy (I haven't tried NGINX yet). With all of these I have run into issues: either I can't even get their examples to work, or they require me to drop a file on each manager node or set up cloud storage of some sort.
I would like something that just works by dropping it into a docker-compose file, but I am also comfortable configuring all the mappings in the compose file (these are not dynamically changing environments where services come and go).
Is there a working example of this type of setup, or what should I be looking into?

If you want to access your service using the server IP and the service port, then you need to set up dnsrr endpoint mode to bypass Docker Swarm's routing mesh. Here is a YAML example showing how to do it:
version: "3.3"
services:
alpine:
image: alpine
ports:
- target: 9100
published: 9100
protocol: tcp
mode: host
deploy:
endpoint_mode: dnsrr
placement:
constraints:
- node.labels.host == node1
Note the endpoint_mode: dnsrr setting and the way the port has been defined. Also note the placement constraint, which ensures the service can only be scheduled on the node carrying the label host == node1. Thus you can now access your service using node1's IP address and port 9100. As for the /ServiceA part of the URI, just append it to that address. A sketch extending this to the two-service case follows.
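As a hedged sketch of how this extends to the asker's two-service case (the service names, images and target ports are assumptions, and the node label must exist first, e.g. created with docker node update --label-add host=node1 <node-name>):
version: "3.3"
services:
  service_a:
    image: my-org/service-a      # hypothetical REST service image
    ports:
      - target: 8080             # assumed container port
        published: 9001          # reachable as http://node1:9001/ServiceA
        protocol: tcp
        mode: host               # host mode bypasses the routing mesh
    deploy:
      endpoint_mode: dnsrr
      placement:
        constraints:
          - node.labels.host == node1
  service_b:
    image: my-org/service-b      # hypothetical REST service image
    ports:
      - target: 8080
        published: 9002          # reachable as http://node1:9002/ServiceB
        protocol: tcp
        mode: host
    deploy:
      endpoint_mode: dnsrr
      placement:
        constraints:
          - node.labels.host == node1
Note that with this pattern each service ends up on its own published port; a single shared port with path-based routing still requires a reverse proxy in front.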

Related

Multiple docker hosts

Is it possible to link traefik to two docker hosts directly?
I can add one docker host to the traefik.toml via TCP or a Unix socket, but there doesn't appear to be a way to add two.
The way to do this is not to run the two Docker containers separately. Run your containers in Docker Swarm mode as a cluster, with a replication factor of two (since you mentioned two explicitly; the right value depends on your Swarm cluster nodes). Then provide the Swarm cluster manager node's IP in your traefik.toml config. Docker Swarm will take care of load balancing among the replicated containers.
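A minimal sketch of that setup as a swarm stack, assuming Traefik 1.x (the traefik.toml era); the backend image, labels and ports here are illustrative assumptions, and the provider is pointed at the local manager's Docker socket rather than a remote TCP endpoint, which avoids exposing the Docker API:
version: "3.3"
services:
  traefik:
    image: traefik:1.7
    # flag equivalents of the traefik.toml [docker] section; the provider
    # watches the swarm through the manager's socket instead of two hosts
    command:
      - --docker
      - --docker.swarmMode
      - --docker.watch
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager   # must run where the swarm API lives
  whoami:
    image: containous/whoami       # hypothetical demo backend
    deploy:
      replicas: 2                  # the "replication factor of two"
      labels:
        - traefik.port=80
        - traefik.frontend.rule=PathPrefix:/whoami
Deployed with docker stack deploy, Traefik then load-balances http://<manager-ip>/whoami across both replicas.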

browse postgres in a docker container

I am using docker-compose to work across multiple Docker containers; these containers are mostly individual Django REST Framework applications. I have downloaded all the containers and am able to build the whole application from them.
Each container has a postgres db running, and I now want to browse the databases using a UI tool. I know pgAdmin can do the job here, but how can I configure pgAdmin to show the postgres database from each of these containers?
It should be possible to expose your database port to your local network as well.
Normally you connect your application containers to the database container over the internal network only. In that case it isn't necessary to declare a ports section for the database in your compose file; but if you add that entry, you bind your database to your local host in addition.
Once you have exposed the postgres port on a host port, it should be no problem to connect with the GUI tool of your choice.
version: '3.2'
services:
  httpd:
    image: "oth/d_apache2.4:0.2"
    ports:
      # container port 80 of the webserver to localhost 80
      - "80:80"
  keycloak:
    # keycloak uses keycloak_db
    image: "jboss/keycloak-postgres:3.2.1.Final"
    environment:
      # internal network reference to the db container
      - POSTGRES_PORT_5432_TCP_ADDR=keycloak_db
      - POSTGRES_PORT_5432_TCP_PORT=5432
  keycloak_db:
    image: "postgres:alpine"
    ports:
      # container port 5432 to localhost 5432;
      # inside the stack the port is still available as well
      - "5432:5432"
Make sure that the port of each postgres container is mapped to the host system. The default postgres port is 5432. You can do that with the ports directive in your docker-compose.yml. You can only bind a given host port once, so give each database container its own host port. Your config file would then look like:
services:
  postgres_1:
    ports:
      - "49000:5432"
    [...]
  postgres_2:
    ports:
      - "49001:5432"
    [...]
After that you should be able to access the desired database with the IP of your Docker host and the port specified above (e.g. point pgAdmin at <docker-host-ip>:49000).
If you still encounter problems connecting with a client like pgAdmin, check the following configuration files inside your container.
Is anything blocking your connection attempt? Is your Docker host behind a firewall?
postgresql.conf, under the section "Connections and Authentication":
listen_addresses
port
Check your pg_hba.conf, which controls client authentication. For debugging purposes you can set it to the following (don't do this in production):
host all all all trust
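For example, a debug-only pair of postgresql.conf settings to go with that (the values here are assumptions; they open the server to every interface, so don't use them in production either):
listen_addresses = '*'   # accept connections on all interfaces, not just localhost
port = 5432              # the default; must match the container-side port you map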

How to expose minikube service urls to outside system

I've an Apache Camel application deployed on Kubernetes. My application is exposed inside the Kubernetes cluster and is accessible at http://192.168.99.100:31750. How do I make it accessible from outside?
I suggest you do two things:
run an NGINX Ingress Controller in your minikube and expose it with a NodePort service, meaning it will be available on a high port, much like your service is right now (see the sketch after this list)
run HAProxy on the host that runs minikube and forward ports 80/443 to the corresponding high ports on minikube (i.e. 80 -> 32080, 443 -> 32443)
That way you can expose your ingress controller on the standard ports and expose your services on those ports with regular Kubernetes Ingress definitions.
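A minimal sketch of the first step, assuming an ingress controller already runs in kube-system with the label used below (the name, namespace and ports are assumptions; the NodePorts just have to fall in the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: nginx-ingress-controller   # must match your controller pods' labels
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 32080               # HAProxy on the host forwards 80 here
    - name: https
      port: 443
      targetPort: 443
      nodePort: 32443               # HAProxy on the host forwards 443 here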

kubernetes intra cluster service communication

I have a composite service S.c which consumes two atomic services, S.a and S.b, where all three services run in a Kubernetes cluster. Which would be the better pattern?
1) Create S.a and S.b as headless services and let S.c integrate with them via an external load balancer like NGINX Plus (which uses a DNS resolver to keep its set of backend pods up to date)
2) Create S.a and S.b with a clusterIP and let S.c access/resolve them via the cluster DNS (the SkyDNS add-on), which internally leverages iptables-based load balancing to the pods
Note: my k8s cluster runs on a custom solution (on-premise VMs).
We have many composite services which consume one to many atomic services (like the example above).
Edit: in a few scenarios I would also need to expose services to the external network,
e.g. S.b would need to be accessible both from S.c and from outside.
In such cases it would make more sense to create S.b as a headless service, since otherwise the DNS resolver would always return only the clusterIP address, and all external requests would get routed to the clusterIP address as well.
My challenge is that the two scenarios (intra vs. inter) conflict with each other.
Example: nginx-service (which has a clusterIP) and nginx-headless-service (headless):
/ # nslookup nginx-service
Server: 172.16.48.11
Address 1: 172.16.48.11 kube-dns.kube-system.svc.cluster.local
Name: nginx-service
Address 1: 172.16.50.29 nginx-service.default.svc.cluster.local
/ # nslookup nginx-headless-service
Server: 172.16.48.11
Address 1: 172.16.48.11 kube-dns.kube-system.svc.cluster.local
Name: nginx-headless-service
Address 1: 11.11.1.13 wrkfai1asby2.my-company.com
Address 2: 11.11.1.15 imptpi1asby.my-company.com
Address 3: 11.11.1.4 osei623a-sby.my-company.com
Address 4: 11.11.1.6 osei511a-sby.my-company.com
Address 5: 11.11.1.7 erpdbi02a-sbyold.my-company.com
Using DNS + cluster IPs is the simpler approach and doesn't require exposing your services to the public internet. Unless you need specific load-balancing features from NGINX, I'd recommend going with option 2.
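A minimal sketch of option 2, with hypothetical names: a plain ClusterIP service for S.a, which S.c then resolves through the cluster DNS:
apiVersion: v1
kind: Service
metadata:
  name: service-a        # S.c reaches it at http://service-a.default.svc.cluster.local
spec:
  # type defaults to ClusterIP; kube-proxy's iptables rules spread
  # connections across the pods behind the selector
  selector:
    app: service-a
  ports:
    - port: 80
      targetPort: 8080   # assumed container port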

Cannot link an HTTP Load Balancer to a backend (502 Bad Gateway)

On the backend I have a Kubernetes Service of type NodePort exposing a node on port 32656. If I create a firewall rule for <node_ip>:32656 to allow traffic, I can open the backend in the browser at http://<node_ip>:32656.
What I am trying to achieve now is to create an HTTP load balancer and link it to the above backend. I use the following script to create the required infrastructure:
#!/bin/bash
GROUP_NAME="gke-service-cluster-61155cae-group"
HEALTH_CHECK_NAME="test-health-check"
BACKEND_SERVICE_NAME="test-backend-service"
URL_MAP_NAME="test-url-map"
TARGET_PROXY_NAME="test-target-proxy"
GLOBAL_FORWARDING_RULE_NAME="test-global-rule"
NODE_PORT="32656"
PORT_NAME="http"
# instance group named ports
gcloud compute instance-groups set-named-ports "$GROUP_NAME" --named-ports "$PORT_NAME:$NODE_PORT"
# health check
gcloud compute http-health-checks create --format none "$HEALTH_CHECK_NAME" --check-interval "5m" --healthy-threshold "1" --timeout "5m" --unhealthy-threshold "10"
# backend service
gcloud compute backend-services create "$BACKEND_SERVICE_NAME" --http-health-check "$HEALTH_CHECK_NAME" --port-name "$PORT_NAME" --timeout "30"
gcloud compute backend-services add-backend "$BACKEND_SERVICE_NAME" --instance-group "$GROUP_NAME" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "1"
# URL map
gcloud compute url-maps create "$URL_MAP_NAME" --default-service "$BACKEND_SERVICE_NAME"
# target proxy
gcloud compute target-http-proxies create "$TARGET_PROXY_NAME" --url-map "$URL_MAP_NAME"
# global forwarding rule
gcloud compute forwarding-rules create "$GLOBAL_FORWARDING_RULE_NAME" --global --ip-protocol "TCP" --ports "80" --target-http-proxy "$TARGET_PROXY_NAME"
But I get the following response from the load balancer when I access it through the public IP from the frontend configuration:
Error: Server Error
The server encountered a temporary error and could not complete your
request. Please try again in 30 seconds.
The health check is left at the default values (/ and 80), and the backend service responds quickly with status 200.
I have also created a firewall rule that accepts any source and all TCP ports, with no target specified (i.e. all targets).
Considering that I get the same result (Server Error) regardless of the port I choose in the instance group, the problem must lie somewhere in the configuration of the HTTP load balancer (something with the health checks, maybe?).
What am I missing to complete the link between the frontend and the backend?
I assume you actually have instances in the instance group and that the firewall rule is not restricted to a specific source range. Can you check your logs for a Google health check? (The User-Agent will have "google" in it.)
What version of Kubernetes are you running? FYI, there's a resource in 1.2 that hooks this up for you automatically: http://kubernetes.io/docs/user-guide/ingress/; just make sure you observe these: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md.
More specifically: in 1.2 you need to create a firewall rule, a Service of type=NodePort (both of which you already seem to have), and a health check on that service at "/" (which you don't have; this requirement is lifted in 1.3, but 1.3 is not out yet). A minimal Ingress sketch follows below.
Also note that you can't put the same instance into two load-balanced instance groups, so to use the Ingress mentioned above you will have to clean up your existing load balancer (or at least remove the instances from the instance group and free up enough quota so the Ingress controller can do its thing).
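A minimal sketch of that 1.2-era Ingress, assuming your existing NodePort service is named my-backend and serves port 80 (both names are assumptions, not taken from the question):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: my-backend   # the existing Service of type=NodePort
    servicePort: 80           # the GCE controller health-checks this at /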
A few of the things already mentioned can go wrong:
firewall rules need to be open to all hosts, or they need to target the same network tag that the machines in the instance group have
by default the node should return 200 at /; configuring readiness and liveness probes to change this did not work for me
It seems you are doing by hand things that are all automated for you, so I can really recommend:
https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
It shows the steps that set up the firewall and port forwarding for you, which may also reveal what you are missing.
I noticed myself, when using an app on 8080 exposed on 80 (like one of the deployments in the example), that the load balancer stayed unhealthy until / returned 200 (and /healthz, which I added too). So basically the container now exposes a web server on port 8080 returning that, and the rest of the config wires it up to port 80.
When it comes to firewall rules, make sure they apply to all machines or match the network tag, or they won't work.
The 502 error usually comes from the load balancer, which will not pass your request on if the health check does not pass.
Could you make your service type LoadBalancer (http://kubernetes.io/docs/user-guide/services/#type-loadbalancer)? That would set all of this up automatically. It assumes you have the cloud provider flag set for Google Cloud.
After you deploy, describe the service and it should give you the endpoint that was assigned.
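A minimal sketch of that, with hypothetical names and ports:
apiVersion: v1
kind: Service
metadata:
  name: my-backend        # hypothetical service name
spec:
  type: LoadBalancer      # on GKE/GCE this provisions the forwarding rule for you
  selector:
    app: my-backend
  ports:
    - port: 80            # external port on the provisioned load balancer
      targetPort: 8080    # assumed container port
kubectl describe service my-backend then shows the external IP under "LoadBalancer Ingress" once it has been assigned.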