Is there a way to open a proxy from the agent nodes to the master in Azure AKS?

With kubectl proxy I can open a proxy from my machine to the master node of the Kubernetes cluster in my current context.
Is there any way I can do the same from the nodes of a managed cluster in Azure AKS?
For context, what I want to do is use Linkerd backed by Kubernetes service discovery, but that doesn't support TLS at the moment, and the recommendation in their docs is to run kubectl proxy on each node.

You can run a kubectl proxy container in the pod alongside Linkerd. For example:
- name: kubectl
  image: buoyantio/kubectl:v1.8.5
  args:
  - "proxy"
  - "-p"
  - "8001"
Complete example:
https://github.com/linkerd/linkerd-examples/blob/99e33284860a35228dccc23a8810374b02f24c26/k8s-daemonset/k8s/linkerd.yml#L103
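For orientation, here is a minimal sketch of how that sidecar might sit next to Linkerd in a DaemonSet. Only the kubectl container is from the answer above; the DaemonSet skeleton and the Linkerd image tag are illustrative, and the linked example has the full, authoritative spec:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: l5d
spec:
  selector:
    matchLabels:
      app: l5d
  template:
    metadata:
      labels:
        app: l5d
    spec:
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.3.5   # illustrative tag
        # ... Linkerd config, ports, and volume mounts omitted ...
      - name: kubectl
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
Linkerd on each node can then reach the Kubernetes API over plain HTTP at localhost:8001, with the sidecar handling TLS to the master.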

Related

Custom DNS record and SSL certificate in docker container

I am facing an issue with a self-signed certificate and a DNS record in the hosts file inside a Docker container. We have multiple Linux servers running Docker Swarm. There is a Docker service where I need to copy the self-signed certificate and create a DNS record manually with docker exec every time the service restarts. The service has a mapped volume. How can I map the container's DNS file (/etc/hosts) and /usr/local/share/ca-certificates to a mapped place so that there will be no issues if the container restarts?
Use docker configs.
Something like:
docker config create my_public-certificate-v1 public.crt
docker service create --config src=my_public-certificate-v1,target=/usr/local/share/ca-certificates/example.com.crt ...
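If you deploy with a stack file rather than docker service create, the same config can be declared there. A minimal sketch, assuming public.crt sits next to the stack file (the service name and image are placeholders):
version: "3.3"
services:
  my_service:
    image: myorg/my_image:latest   # placeholder image
    configs:
    - source: my_public-certificate-v1
      target: /usr/local/share/ca-certificates/example.com.crt
configs:
  my_public-certificate-v1:
    file: ./public.crt
Note that Swarm configs are immutable, which is why the name carries a version suffix: to rotate the certificate you create my_public-certificate-v2 and update the service to reference it.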

React works from docker image on localmachine but unreachable from kubernetes. service looks to be configured correctly

I want to put my Docker image running React into Kubernetes and be able to hit the main page. I am able to get the main page just by running docker run --rm -p 3000:3000 reactdemo locally. When I try to deploy to my Kubernetes (running locally via Docker Desktop) I get no response until eventually a timeout.
I tried this same process below with a Spring Boot Docker image and I am able to get a simple JSON response in my browser.
Below are my Dockerfile, deployment YAML (with the Service inside it), and the commands I'm running to try and get my results. Morale is low, any help would be appreciated!
Dockerfile:
# pull official base image
FROM node
# set working directory
RUN mkdir /app
WORKDIR /app
# install app dependencies
COPY package.json /app
RUN npm install
# add app
COPY . /app
# Command to build the ReactJS application for deploy; might not need this...
RUN npm run build
# start app
CMD ["npm", "start"]
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: reactdemo
        image: reactdemo:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
    nodePort: 31000
I then open a port on my local machine to the nodeport for the service:
PS C:\WINDOWS\system32> kubectl port-forward pod/demo-854f4d78f6-qv4mt 31000:3000
Forwarding from 127.0.0.1:31000 -> 3000
My assumption is that everything is in place at this point and I should be able to open a browser to hit localhost:31000. I expected to see that spinning react symbol for their landing page just like I do when I only run a local docker container.
Here is it all running:
$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/demo-854f4d78f6-7dn7c   1/1     Running   0          4s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/demo         NodePort    10.111.203.209   <none>        3000:31000/TCP   4s
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          9d

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo   1/1     1            1           4s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-854f4d78f6   1         1         1       4s
Some extra things to note:
Although I don't have it set up currently, I did have my Spring Boot service in the deployment file. I logged into its pod and ensured the React container was reachable. It was.
I haven't done anything with my firewall settings (but sort of assume I don't have to, since the run with the Spring Boot service worked?).
I see this in Chrome developer tools and so far don't think it's related to my problem: crbug/1173575, non-JS module files deprecated. I see this response in the main browser page after some time:
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
If you are running Kubernetes via minikube on your local system, it will not work with localhost:3000, because the cluster runs inside minikube, which has its own private IP address. Instead of trying localhost:3000, run minikube service <servicename> in your terminal and it will show the URL of your service.
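For example, assuming the service name demo from the question (the address printed will be whatever IP and NodePort minikube actually assigned, not necessarily this one):
$ minikube service demo --url
http://192.168.99.100:31000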
Thanks for all the feedback, peeps! In trying out the solutions presented I found my error, and it was pretty silly. I tried removing the service and trying the different port configs mentioned above. What solved it was using 127.0.0.1:31000 instead of localhost. Not sure why that fixed it, but it did!
That being said, a few comments I found while looking at the answers above:
I found that I couldn't hit the cluster without doing the port forwarding, regardless of whether I had a service defined or not.
containerPort, to my understanding, is for Kubernetes pod-to-pod communication and doesn't impact application function from a user perspective (could be wrong!).
Good to know on minikube. I'm thinking about trying it out, and if I do I'll know why port 3000 stops working.
Thanks
You don't need a Service to port-forward to a pod; you can port-forward straight to it:
kubectl port-forward <pod-name> <local-port>:<pod-port>
This creates a "tunnel" between your local machine on a chosen port (<local-port>) and your pod (<pod-name>), on a chosen pod port (<pod-port>).
Then you can curl your pod with
curl localhost:<local-port>
If you really want to use a Service, then port-forward to the Service (service/demo in your case) plus the service port, but it will be translated to the pod's IP eventually.
Change .spec.ports[0].targetPort in your Service to match .spec.template.spec.containers[0].ports[0].containerPort in your Deployment, so in your case:
...
  - port: 3000
    targetPort: 80
...
Then port-forward to the Service, using the Service port:
kubectl port-forward service/demo 8080:3000
Then curl your Service with
curl localhost:8080
This has a side effect if there are multiple pods behind the same Service: port-forward will almost always connect to the same pod.
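Putting that together, a sketch of the Service from the question with the targetPort change this answer suggests (everything else kept from the original manifest):
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 3000        # the Service's own port
    targetPort: 80    # matches containerPort in the Deployment
    protocol: TCP
    nodePort: 31000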

Docker Swarm CE, Reverse-Proxy without shared config file on master nodes

I've been wrestling with this for several days now. I have a swarm with 9 nodes, 3 of them managers. I'm planning on deploying multiple testing environments to this swarm using Docker Compose, one per environment. We have many REST services in each environment, and I would like to manage access to them through a reverse proxy so that access to the services comes through a single port per environment. Ideally I would like it to behave something like this: http://dockerNode:9001/ServiceA and http://dockerNode:9001/ServiceB.
I have been trying Traefik, docker proxy, and HAProxy (I haven't tried NGINX yet). All of these have run into issues where I can't even get their examples to work, or they require me to drop a file on each manager node, or set up cloud storage of some sort.
I would like to have something that just works by dropping it into a docker-compose file, but I am also comfortable configuring all the mappings in the compose file (these are not dynamically changing environments where services come and go).
Is there a working example of this type of setup, or what should I be looking into?
If you want to access your service using the server IP and the service port, then you need to set up dnsrr endpoint mode to bypass Docker Swarm's routing mesh. Here is a YAML example showing how to do it.
version: "3.3"
services:
alpine:
image: alpine
ports:
- target: 9100
published: 9100
protocol: tcp
mode: host
deploy:
endpoint_mode: dnsrr
placement:
constraints:
- node.labels.host == node1
Note the endpoint_mode: dnsrr setting and the way the port has been defined. Also note the placement constraint, which makes the service schedulable only on the node carrying the label host == node1. Thus, you can now access your service using node1's IP address and port 9100. As for the ServiceA part of the URI, just add it to the path.
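Hypothetically, with node1's real IP substituted in, a request against the published host port would then look like:
curl http://<node1-ip>:9100/ServiceA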

redis cluster K8S connection

I'm running a redis-cluster in K8S:
kubectl get services -o wide
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)              AGE   SELECTOR
redis-cluster   ClusterIP   10.97.31.167   <none>        6379/TCP,16379/TCP   22h   app=redis-cluster
When connecting to the cluster IP from the node itself, the connection works fine:
redis-cli -h 10.97.31.167 -c
10.97.31.167:6379> set some_val 1
-> Redirected to slot [11662] located at 10.244.1.9:6379
OK
Is there some way I can access the redis server from my local development VM without exposing every single pod as a service?
When deploying my application to run inside the cluster itself (later, in production), should I use the cluster IP too, or should I use the internal IPs of the pods as the master IPs of the redis-master servers?
Simple forwarding to the remote machine won't work:
devvm:~$ ssh -L 6380:10.97.31.167:6379 -i user.pem admin@k8snode.com
On dev VM:
root#devvm:~# redis-cli -h 127.0.0.1 -p 6380 -c
127.0.0.1:6380> set jaheller 1
-> Redirected to slot [11662] located at 10.244.1.9:6379
The Redis connection times out at this point.
I believe in all these scenarios you just need to expose the service using a Kubernetes Service object of the appropriate type:
ClusterIP (in case you are consuming it inside the cluster)
NodePort (for external access)
LoadBalancer (for public access, if you are on a cloud provider)
NodePort with an external load balancer (for public external access if you are on local infrastructure)
You don't need to worry about individual pods; the Service will take care of them.
Docs:
https://kubernetes.io/docs/concepts/services-networking/service/
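As a sketch of the external-access case, a NodePort Service for the cluster above might look like this (the Service name and the nodePort value 31679 are arbitrary choices; the selector matches the app=redis-cluster label shown in the question):
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-external
spec:
  type: NodePort
  selector:
    app: redis-cluster     # label from the question's kubectl output
  ports:
  - name: client
    port: 6379
    targetPort: 6379
    nodePort: 31679        # arbitrary value in the 30000-32767 range
One caveat: redis-cli -c follows MOVED redirects to pod IPs, which is why the plain SSH tunnel in the question times out; an external client still needs to be able to reach the pod IPs those redirects point at.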
I don't think you need any port redirection. You do have to deploy an ingress controller in your cluster though, e.g. the NGINX ingress controller.
Then you just set up a single Ingress with exposed access, which will serve the cluster traffic.
Here is an example of Ingress Controller to Access cluster service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-cluster-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: redis-cluster
          servicePort: 6379
You may also want to follow a step-by-step guide for setting up the controller.

Run Kubernetes on EC2

I am trying to run Kubernetes on EC2 and I used a CoreOS alpha channel AMI. I configured an SSH tunnel for the communication between the kubectl client and the Kubernetes API.
But when I try the kubectl api-versions command, I get the following error:
Couldn't get available api versions from server: Get http://MyIP:8080/api: dial tcp MyIP:8080: connection refused
(MyIP has been set accordingly.)
What could be the reason for this?
The reason for this issue was that I hadn't set the KUBERNETES_MASTER environment variable properly. As there is an SSH tunnel between the kubectl client and the API, the KUBERNETES_MASTER environment variable should be set to localhost.
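A minimal sketch of that setup, assuming the tunnel forwards local port 8080 to the API server's port 8080 (the key file, user, and host are placeholders based on the question's CoreOS-on-EC2 setup):
# SSH tunnel from the kubectl client machine to the master
ssh -f -nNT -L 8080:127.0.0.1:8080 -i user.pem core@<master-ip>
# Point kubectl at the local end of the tunnel
export KUBERNETES_MASTER=http://localhost:8080
kubectl api-versions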