AWS EKS: how can containers also inherit the host's /etc/resolv.conf?

Is there a way to automatically configure any new containers to query the nameservers from the instance host's /etc/resolv.conf, while still being able to resolve cluster-local names?
What I tried is a DHCP options set, which does work for EC2 instances and plain Docker containers, but does not work for EKS clusters.
The goal is to have the containers within the EKS cluster pick up additional nameservers without manual configuration, because admin access to the EKS cluster is managed by a vendor.
Currently all containers have this in /etc/resolv.conf:
nameserver 10.100.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ca-central-1.compute.internal
options ndots:5
What are other options for adding another nameserver entry?
I know editing the CoreDNS ConfigMap is one method, but we don't have admin access. Any other solutions?
Thank you

I solved this by creating private DNS hosts.
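If you can edit the pod specs (even without cluster-admin access to the CoreDNS ConfigMap), Kubernetes also supports per-pod DNS settings via `dnsConfig`; with `dnsPolicy: ClusterFirst`, the extra entries are merged into the generated resolv.conf. A minimal sketch — the nameserver 10.0.0.2 and search domain are placeholders, not values from this thread:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
    - name: app
      image: nginx
  dnsPolicy: ClusterFirst        # keep cluster DNS as the primary resolver
  dnsConfig:
    nameservers:
      - 10.0.0.2                 # placeholder: additional upstream nameserver
    searches:
      - internal.example.com     # placeholder: additional search domain
```

Note that resolv.conf has a practical limit of three nameserver entries, so the cluster DNS plus the extras must fit within that.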

Related

Portainer - OAuth conf. with multiple cluster nodes

I've installed 3 nodes with Docker Swarm and Portainer:
node1.int.org
node2.int.org
node3.int.org
Portainer uses Google credentials to authenticate users.
The problem is that in the Redirect URL I can specify only one node (in the image below, node1.int.org). If node1.int.org dies and I use node2.int.org or node3.int.org to log in, the redirect doesn't work!
What is the best practice to solve this problem?
Thank you
You create DNSRR records:
swarm.int.org A IP1
swarm.int.org A IP2
*.swarm.int.org CNAME swarm.int.org
and then use "swarm.int.org" in place of "node1.int.org" when addressing swarm hosted services.
Bonus Point 1
Use Traefik to handle SSL offloading, so "https://swarm.int.org" can be used to connect to Portainer on the swarm.
Bonus Point 2
Use keepalived or similar to allocate a pool of VIPs and map the DNSRR entries to those. This means that even if nodes go down, the IPs, and thus the DNS entries, keep routing to healthy nodes.
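A minimal keepalived sketch of the VIP idea, assuming interface eth0 and the placeholder VIP 192.0.2.10 (neither appears in the original answer); a mirrored instance with a lower priority on a second node lets the VIP fail over:

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0              # interface that will hold the VIP
    virtual_router_id 51
    priority 100                # highest priority among nodes wins the VIP
    advert_int 1
    virtual_ipaddress {
        192.0.2.10              # placeholder VIP; point a DNSRR A record at it
    }
}
```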

How to retrieve client IP within a Docker container running Apache on AWS Elastic Container Service?

I have a Docker container running Apache 2.4.25 (Debian) with PHP 7.3.5.
This container is hosted on Amazon Elastic Container Service.
The underlying AWS EC2 instances sit behind an AWS Application Load Balancer.
I want to be able to obtain, in PHP, the user's/client's IP address.
My presumption, based on my limited knowledge, is that this IP address will need to be handed from the ALB to the EC2 instance, then to the Docker container, and finally picked up by Apache.
I have tried to shorten the stack by attempting to obtain the IP within a Docker container running on my local machine, but I still wasn't able to find a way for Docker to fetch and pass my IP through to Apache.
I know you'd typically have the X-Forwarded-For header from the ALB, but I have not been able to work out how Docker can take this and pass it through to Apache.
I expected to have the client IP in $_SERVER['REMOTE_ADDR'] or $_SERVER['X_FORWARDED'].
Within the AWS-hosted Docker containers:
$_SERVER['REMOTE_ADDR'] contains an IP within the VPC subnet
$_SERVER['X_FORWARDED'] does not exist
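One common approach, offered here as an assumption rather than the thread's own answer: Apache's mod_remoteip can replace the connection's peer address with the client IP taken from the ALB's X-Forwarded-For header, after which PHP sees the real client IP in $_SERVER['REMOTE_ADDR'] with no application changes. A sketch, with 10.0.0.0/8 as a placeholder for your VPC/ALB subnet CIDR:

```apache
# Load the module (the path may differ per distribution)
LoadModule remoteip_module modules/mod_remoteip.so

# Take the client IP from the header the ALB sets
RemoteIPHeader X-Forwarded-For

# Only trust that header when the request arrives from these proxies
RemoteIPInternalProxy 10.0.0.0/8
```

Since the header is only trusted from the listed proxy range, clients cannot spoof their IP by sending X-Forwarded-For themselves.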

How to connect to redis-ha cluster in Kubernetes cluster?

So I recently installed the stable/redis-ha chart (https://github.com/helm/charts/tree/master/stable/redis-ha) on my Google Cloud-based Kubernetes cluster. It was installed as a "headless service" without a ClusterIP. There are 3 pods that make up this cluster, one of which is elected master.
The chart installed with no issues and the cluster can be accessed via redis-cli from my local PC (after port-forwarding with kubectl).
The output from the install provided me with a DNS name for the cluster. Because the service is headless, I am using the following DNS name:
port_name.port_protocol.svc.namespace.svc.cluster.local (as specified by the documentation)
When attempting to connect I get the following error:
"redis.exceptions.ConnectionError: Error -2 connecting to
port_name.port_protocol.svc.namespace.svc.cluster.local :6379. Name does not
resolve."
This is not working and I'm not sure what to do here. Any help would be greatly appreciated.
The DNS name appears to be incorrect. It should be in the following format:
<redis-service-name>.<namespace>.svc.cluster.local:6379
Say the Redis service name is redis and the namespace is default; then it should be:
redis.default.svc.cluster.local:6379
You can also use the pod DNS name, like below:
<redis-pod-name>.<redis-service-name>.<namespace>.svc.cluster.local:6379
Say the Redis pod name is redis-0, the service name is redis and the namespace is default; then it should be:
redis-0.redis.default.svc.cluster.local:6379
This assumes the service port is the same as the container port, i.e. 6379.
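The naming scheme above can be sketched as a tiny helper (illustrative only; the function is mine, not part of any Kubernetes or Redis client library):

```python
def redis_dns(service, namespace, pod=None, port=6379):
    """Build the in-cluster DNS name for a (headless) Kubernetes service."""
    host = f"{service}.{namespace}.svc.cluster.local"
    if pod:
        # Headless services also publish per-pod records:
        # <pod>.<service>.<namespace>.svc.cluster.local
        host = f"{pod}.{host}"
    return f"{host}:{port}"

print(redis_dns("redis", "default"))
# redis.default.svc.cluster.local:6379
print(redis_dns("redis", "default", pod="redis-0"))
# redis-0.redis.default.svc.cluster.local:6379
```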
Not sure if this is still relevant. You could enhance the chart to support NodePort, as other charts (e.g. rabbitmq-ha) do, so that you can use any node IP plus the configured node port if you want to access Redis from outside the cluster.

Multiple docker hosts

Is it possible to link Traefik to two Docker hosts directly?
I can add one Docker host to traefik.toml via TCP or a Unix socket, but there doesn't appear to be a way to add two.
The way to do this is not to run the two Docker hosts separately. You can run your hosts in Docker Swarm mode as a cluster and deploy your containers with a replication factor of two (since you mentioned two explicitly; base the value on your Swarm cluster's nodes). Then provide the Swarm manager node's IP in your traefik.toml config. Docker Swarm will take care of load balancing among the replicated containers.
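A minimal traefik.toml sketch for that Swarm setup (Traefik v1 syntax; the endpoint hostname is a placeholder and assumes the Docker API on the manager is reachable from Traefik):

```toml
[docker]
endpoint = "tcp://swarm-manager.int.org:2375"  # placeholder: manager's Docker API
domain = "swarm.int.org"
watch = true
swarmMode = true   # discover Swarm services instead of individual containers
```

With swarmMode enabled, Traefik reads labels from Swarm services, so routing rules go on the service definitions rather than on containers.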

Problems setting up artifactory as a docker registry

I'm currently trying to set up a private Docker registry in Artifactory (v4.7.4).
I've set up local, remote and virtual Docker repositories, added Apache as a reverse proxy, and added a DNS entry for the virtual "docker" repo.
The reverse proxy is working, but if I try something like:
docker pull docker.my.company.com/ubuntu:16.04
I'm getting:
https://docker.my.company.com/v1/_ping: x509: certificate is valid for
*.company.com, company.com, not docker.my.company.com
My Artifactory URL is "my.company.com/artifactory" and I want the repositories to be accessible on repo.my.company.com/artifactory.
I also have a wildcard certificate for company.com, so I don't understand what the problem is here.
Or is there a way to access Artifactory over plain HTTP without SSL?
Any ideas?
According to RFC 2818, a wildcard certificate matches only domains one level down, not deeper:
E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.
In this case what you should do is use ports for mapping repositories instead of subdomains, so the Docker repository will be accessible under, for example, my.company.com:5001/ instead of docker.my.company.com.
You can find the explanation of this change, and how to do it using the Artifactory proxy settings generator, in the User Guide.
If you are prepared to live with the certificate-name mismatch for-now, and understand the security implications of ignoring the name-mismatch and accessing the repo insecurely, you can apply the following workaround:
Edit /etc/default/docker and add the option DOCKER_OPTS="--insecure-registry docker.my.company.com".
Restart docker: [sudo] service docker restart.
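On systems where the daemon is configured via /etc/docker/daemon.json (most modern installs) rather than /etc/default/docker, the equivalent setting is the following sketch; merge it with any keys already present in that file:

```json
{
  "insecure-registries": ["docker.my.company.com"]
}
```

The same security caveat applies: the daemon will then accept this registry over plain HTTP or with an untrusted certificate.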