Portainer - OAuth conf. with multiple cluster nodes - google-oauth

I've installed 3 nodes with Docker Swarm and Portainer:
node1.int.org
node2.int.org
node3.int.org
Portainer uses Google credentials to authenticate users.
The problem is that in the Redirect URL I can specify only one node (node1.int.org in my case). If node1.int.org dies and I use node2.int.org or node3.int.org to log in, the redirect doesn't work!
What is the best practice to solve this problem?
Thank you

You can create DNS round-robin (DNSRR) records:
swarm.int.org A IP1
swarm.int.org A IP2
*.swarm.int.org CNAME swarm.int.org
and then use "swarm.int.org" in place of "node1.int.org" when addressing swarm-hosted services.
Bonus Point 1
Use Traefik to handle SSL offloading, so "https://swarm.int.org" can be used to connect to Portainer on the swarm.
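A minimal compose sketch of that idea, assuming Traefik v2 on a manager node and Portainer's default internal port 9000 (image versions are placeholders; certificate/ACME setup is omitted):

version: "3.7"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker.swarmMode=true
      - --entrypoints.websecure.address=:443
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      placement:
        constraints:
          - node.role == manager
  portainer:
    image: portainer/portainer-ce
    deploy:
      labels:
        # Route https://swarm.int.org to Portainer's internal port 9000
        - traefik.enable=true
        - traefik.http.routers.portainer.rule=Host(`swarm.int.org`)
        - traefik.http.routers.portainer.entrypoints=websecure
        - traefik.http.routers.portainer.tls=true
        - traefik.http.services.portainer.loadbalancer.server.port=9000

With this, the Google OAuth Redirect URL only ever needs to point at https://swarm.int.org, no matter which node actually serves the request.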
Bonus Point 2
Use keepalived or similar to allocate a pool of VIPs and map the DNSRR entries to those. This means that even if nodes go down, the IPs (and thus the DNS entries) keep routing to healthy nodes.
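A minimal keepalived.conf sketch for one such VIP (the interface, router ID, and address are placeholders; a second node would run the same block with state BACKUP and a lower priority):

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.100    # VIP that a DNSRR A record points at
    }
}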

Related

How to set up a Redis cluster behind a load balancer?

We want to set up Redis 6.2 clustering behind an LB. There are only master nodes and no Redis Sentinel is being used. Each cluster-enabled Redis instance runs on a different host with the same configuration (e.g. all of them are configured with port 6379). Is this possible with some port configuration on the LB, such that a unique port on the LB maps to a unique_ip:6379?
Our idea is to use a cluster-aware Redis client like Lettuce RedisClusterClient, which would issue CLUSTER NODES/SLOTS commands and react to MOVED/ASK redirection. It would also take care of splitting a pipeline across separate connections based on the slot for each command.
It seems this is not possible to achieve if the same port is used on all Redis hosts. Using https://docs.redis.com/latest/rs/networking/cluster-lba-setup/ as a guide, the best we could manage was to configure each Redis with a unique port, set cluster-announce-ip to the virtual IP (which points to the LB), and then manually make sure that the same port is used on the LB as on the Redis host. With this, the CLUSTER SLOTS and MOVED responses from the Redis hosts could be correctly acted upon by the client. But this complicates our setup whenever a Redis host has to be added or removed.
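For reference, the per-node workaround described above looks roughly like this in redis.conf (IPs and ports are placeholders):

# On host A; every host gets a unique port, mirrored on the LB
port 7001
cluster-enabled yes
cluster-announce-ip 10.0.0.100     # virtual IP that points to the LB
cluster-announce-port 7001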
You can use Route 53 to achieve this if you're on AWS.
Create a setup like this:
Add all hosts (IP addresses) in Route 53 and set the TTL to a small value like 30 seconds. Route 53 will return one of these Redis IP addresses; using this endpoint, Redis clients like Lettuce or Jedis will discover all the Redis nodes.
You can use any other DNS system as well; the record type should be A.
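A sketch of creating such a record with the AWS CLI (the zone ID, domain, and IPs are placeholders):

aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "redis.example.internal",
        "Type": "A",
        "TTL": 30,
        "ResourceRecords": [
          {"Value": "10.0.1.10"},
          {"Value": "10.0.1.11"},
          {"Value": "10.0.1.12"}
        ]
      }
    }]
  }'

Clients then bootstrap from redis.example.internal:6379 and learn the full topology from CLUSTER SLOTS, so every node can keep the default port.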

AWS EKS: how can the container also inherit the host's /etc/resolv.conf?

Is there a way to automatically make any new container query the nameservers from the instance host's /etc/resolv.conf, while still being able to resolve cluster-local names?
What I tried is a DHCP options set, which works for instances and plain Docker containers but does not work for EKS clusters.
The goal is really to give the containers within the EKS cluster additional nameservers without manual configuration, because administration of the EKS cluster is handled by a vendor.
Currently all containers have this in /etc/resolv.conf
nameserver 10.100.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ca-central-1.compute.internal
options ndots:5
What are other options to add another nameserver entry?
I know editing the CoreDNS ConfigMap is one method, but we don't have admin access. Any other solutions?
Thank you
I solved this by creating private DNS zones.
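Another option that needs no cluster-admin access, if you control your own pod specs, is the per-pod dnsConfig field, which merges extra nameservers and search domains into what the DNS policy generates. A minimal sketch (the resolver IP and search domain are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  dnsPolicy: ClusterFirst    # keep cluster DNS working
  dnsConfig:
    nameservers:
      - 10.0.0.2             # extra resolver, e.g. the VPC's
    searches:
      - int.example.org      # extra search domain
  containers:
    - name: app
      image: alpine
      command: ["sleep", "infinity"]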

Docker Swarm CE, Reverse-Proxy without shared config file on master nodes

I've been wrestling with this for several days now. I have a swarm with 9 nodes, 3 of them managers. I'm planning on deploying multiple testing environments to this swarm, using a Docker Compose file for each environment. We have many REST services in each environment, and I would like to manage access to them through a reverse proxy so that access to the services comes through a single port per environment. Ideally I would like it to behave something like this: http://dockerNode:9001/ServiceA and http://dockerNode:9001/ServiceB.
I have been trying Traefik, docker-proxy, and HAProxy (I haven't tried NGINX yet). All of these have run into issues: I can't even get their examples to work, or they require me to drop a file on each manager node, or to set up cloud storage of some sort.
I would like to have something that just works by dropping it into a docker-compose file, but I am also comfortable configuring all the mappings in the compose file (these are not dynamically changing environments where services come and go).
Is there a working example of this type of setup, or what should I be looking into?
If you want to access your service using the server IP and the service port, then you need to set up dnsrr endpoint mode to bypass Docker Swarm's routing mesh. Here is a YAML example showing how to do it.
version: "3.3"
services:
  alpine:
    image: alpine
    ports:
      - target: 9100
        published: 9100
        protocol: tcp
        mode: host
    deploy:
      endpoint_mode: dnsrr
      placement:
        constraints:
          - node.labels.host == node1
Note the endpoint_mode: dnsrr configuration and the way the port has been defined. Also note the placement constraint, which only allows the service to be scheduled on the node with the label node1. Now you can access your service using node1's IP address and port 9100. As for the ServiceA part of the URI, just add it to your requests.
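If you would rather keep the single entry port with path-based routing (/ServiceA, /ServiceB) and no shared config file, here is a Traefik-on-Swarm sketch configured entirely through compose labels (image versions, service names, and the internal port 8080 are placeholders):

version: "3.3"
services:
  proxy:
    image: traefik:v2.10
    command:
      - --providers.docker.swarmMode=true
      - --entrypoints.web.address=:9001
    ports:
      - "9001:9001"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      placement:
        constraints:
          - node.role == manager
  servicea:
    image: example/service-a    # placeholder image
    deploy:
      labels:
        # Labels live under deploy: so Traefik's swarm provider sees them
        - traefik.enable=true
        - traefik.http.routers.servicea.rule=PathPrefix(`/ServiceA`)
        - traefik.http.routers.servicea.entrypoints=web
        - traefik.http.services.servicea.loadbalancer.server.port=8080

If the backends don't expect the /ServiceA prefix, Traefik's stripprefix middleware can remove it before forwarding.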

Traefik cert and route features

We are working on an on-prem K8s cluster (no native load balancer as in the cloud)
and exploring Traefik for SSL termination and routing. We have a few questions:
1) Does it support more than one cert (no wildcard for us)? Can we configure one cert per route?
2) Can we listen on low ports, e.g. 443, for all the ingress traffic to the cluster? We plan to front the nodes with an on-prem global load balancer.
3) Does all ingress configuration need to be in ONE YAML file, or can we split configurations (route and SSL info) per application?
4) We are using Istio for east-west traffic; any issues with integration?
Got a response from @nicomengin:
I hope these responses will help you:
1) Does it support more than one cert (no wildcard for us)?
Yes, you can define many certificates per entrypoint, statically (in the Traefik configuration) or dynamically (thanks to TLS secrets in K8s).
Can we configure one cert per route?
The certificates are linked to entrypoints. As you can define many certificates per entrypoint, you can define a certificate for each (sub)domain. So I'd say yes, you can do that.
2) Can we listen on low ports, e.g. 443, for all the ingress traffic to the cluster? We plan to front the nodes with an on-prem global load balancer.
Yes, you can define entrypoints on all the ports you need, and then define whether or not they are TLS.
3) Does all ingress configuration need to be in ONE YAML file, or can we split configurations (route and SSL info) per application?
Yes, you can split your ingress rules across many YAML files.
4) We are using Istio for east-west traffic; any issues with integration?
No issues; you can use Traefik for north-south and Istio for east-west traffic together.
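To illustrate points 1) and 3): each application can ship its own Ingress manifest referencing its own TLS secret, and Traefik picks them all up. A minimal sketch (hostname, secret, and service names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  tls:
    - hosts:
        - app-a.example.org
      secretName: app-a-tls      # per-application certificate
  rules:
    - host: app-a.example.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a
                port:
                  number: 80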

Cannot link an HTTP Load Balancer to a backend (502 Bad Gateway)

I have on the backend a Kubernetes node running on port 32656 (a Kubernetes Service of type NodePort). If I create a firewall rule for <node_ip>:32656 to allow traffic, I can open the backend in the browser at this address: http://<node_ip>:32656.
What I am trying to achieve now is creating an HTTP Load Balancer and linking it to the above backend. I use the following script to create the required infrastructure:
#!/bin/bash
GROUP_NAME="gke-service-cluster-61155cae-group"
HEALTH_CHECK_NAME="test-health-check"
BACKEND_SERVICE_NAME="test-backend-service"
URL_MAP_NAME="test-url-map"
TARGET_PROXY_NAME="test-target-proxy"
GLOBAL_FORWARDING_RULE_NAME="test-global-rule"
NODE_PORT="32656"
PORT_NAME="http"
# instance group named ports
gcloud compute instance-groups set-named-ports "$GROUP_NAME" --named-ports "$PORT_NAME:$NODE_PORT"
# health check
gcloud compute http-health-checks create --format none "$HEALTH_CHECK_NAME" --check-interval "5m" --healthy-threshold "1" --timeout "5m" --unhealthy-threshold "10"
# backend service
gcloud compute backend-services create "$BACKEND_SERVICE_NAME" --http-health-check "$HEALTH_CHECK_NAME" --port-name "$PORT_NAME" --timeout "30"
gcloud compute backend-services add-backend "$BACKEND_SERVICE_NAME" --instance-group "$GROUP_NAME" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "1"
# URL map
gcloud compute url-maps create "$URL_MAP_NAME" --default-service "$BACKEND_SERVICE_NAME"
# target proxy
gcloud compute target-http-proxies create "$TARGET_PROXY_NAME" --url-map "$URL_MAP_NAME"
# global forwarding rule
gcloud compute forwarding-rules create "$GLOBAL_FORWARDING_RULE_NAME" --global --ip-protocol "TCP" --ports "80" --target-http-proxy "$TARGET_PROXY_NAME"
But I get the following response from the Load Balancer, accessed through the public IP in the frontend configuration:
Error: Server Error
The server encountered a temporary error and could not complete your
request. Please try again in 30 seconds.
The health check is left with default values (/ and 80), and the backend service responds quickly with a status 200.
I have also created a firewall rule that accepts any source and all TCP ports, with no target specified (i.e. all targets).
Considering that I get the same result (Server Error) regardless of the port I choose in the instance group, the problem should be somewhere in the configuration of the HTTP Load Balancer (something with the health checks, maybe?).
What am I missing from completing the linking between the frontend and the backend?
I assume you actually have instances in the instance group, and that the firewall rule is not restricted to a specific source range. Can you check your logs for a Google health check? (The User-Agent will have "google" in it.)
What version of Kubernetes are you running? FYI, there's a resource in 1.2 that hooks this up for you automatically: http://kubernetes.io/docs/user-guide/ingress/; just make sure you follow these: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md.
More specifically: in 1.2 you need to create a firewall rule, a Service of type=NodePort (both of which you already seem to have), and a health check on that Service at "/" (which you don't have; this requirement is alleviated in 1.3, but 1.3 is not out yet).
Also note that you can't put the same instance into two load-balanced IGs, so to use the Ingress mentioned above you will have to clean up your existing load balancer (or at least remove the instances from the IG and free up enough quota so the Ingress controller can do its thing).
There can be a few things wrong among those mentioned:
firewall rules need to be open to all hosts, or they need to have the same network tag as the machines in the instance group;
by default, the node should return 200 at /; configuring readiness and liveness probes to do otherwise did not work for me.
It seems you are trying to do by hand things that are all automated, so I can really recommend:
https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
This shows the steps that do the firewall and port forwarding for you, which may also show you what you are missing.
I noticed myself, when using an app on 8080 exposed on 80 (like one of the deployments in the example), that the load balancer stayed unhealthy until I had / returning 200 (and /healthz, which I added too). So basically that container now exposes a webserver on port 8080 returning 200 at those paths, and the rest of the config wires that up to port 80.
When it comes to firewall rules, make sure they apply to all machines or that the network tags match, or they won't work.
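A sketch of that pattern in a Deployment, using today's apps/v1 API for readability (the image and port are placeholders; the app must really answer 200 at the probed path):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:latest   # placeholder; serves 200 at / and /healthz
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /                 # keep this aligned with the LB health check
              port: 8080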
The 502 error usually comes from the load balancer, which will not pass your request on if the health check does not pass.
Could you make your Service type LoadBalancer (http://kubernetes.io/docs/user-guide/services/#type-loadbalancer), which would set this all up automatically? This assumes you have the cloud provider flag set for Google Cloud.
After you deploy, describe the Service and it should give you the endpoint that has been assigned.
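A minimal sketch of such a Service (the name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app           # must match the pod labels
  ports:
    - port: 80            # external port on the provisioned load balancer
      targetPort: 8080    # container port

Then kubectl describe service my-app should show a LoadBalancer Ingress entry with the assigned external IP.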