With Podman, how to turn 4 Pods into a Deployment using 4 replicas on different ports? - centos8

I currently have a website deployed using multiple pods: 1 for the client (nginx), and 4 pods for the server (node.js). But I've had to copy/paste the yaml for the server pods, name them differently and change their ports (3001, 3002, 3003, 3004).
I'm guessing this could be simplified by using kind: Deployment and replicas: 4 for the server yaml, but I don't know how to change the port numbers.
I currently use the following commands to get everything up and running:
podman play kube server1-pod.yaml
podman play kube server2-pod.yaml
podman play kube server3-pod.yaml
podman play kube server4-pod.yaml
podman play kube client-pod.yaml
Here's my existing setup on a CentOS 8 machine with Podman 3.0.2-dev:
client-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-07-29T00:00:00Z"
  labels:
    app: client-pod
  name: client-pod
spec:
  hostName: client
  containers:
  - name: client
    image: registry.example.com/client:1.2.3
    ports:
    - containerPort: 8080
      hostPort: 8080
    resources: {}
status: {}
server1-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-07-29T00:00:00Z"
  labels:
    app: server1-pod
  name: server1-pod
spec:
  hostName: server1
  containers:
  - name: server1
    image: registry.example.com/server:1.2.3
    ports:
    - containerPort: 3000
      hostPort: 3001 # server2 uses 3002 etc.
    env:
    - name: NODE_ENV
      value: production
    resources: {}
status: {}
nginx.conf
# node cluster
upstream server_nodes {
  server api.example.com:3001 fail_timeout=0;
  server api.example.com:3002 fail_timeout=0;
  server api.example.com:3003 fail_timeout=0;
  server api.example.com:3004 fail_timeout=0;
}

server {
  listen 8080;
  listen [::]:8080;
  server_name api.example.com;

  location / {
    root /usr/share/nginx/html;
    index index.html;
  }

  # REST API requests go to node.js
  location /api {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'Upgrade';
    proxy_read_timeout 300;
    proxy_request_buffering off;
    proxy_redirect off;
    proxy_buffering off;
    proxy_http_version 1.1;
    proxy_pass http://server_nodes;
    client_max_body_size 10m;
  }
}
I tried using kompose convert to turn the Pod into a Deployment and then set replicas to 4, but since the ports are all the same, the first container starts on 3001 and the rest fail to start because 3001 is already taken.
server-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.7.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: server
  name: server
spec:
  replicas: 4
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: server
    spec:
      containers:
      - env:
        - name: NODE_ENV
          value: production
        image: registry.example.com/server:1.2.3
        name: server
        ports:
        - containerPort: 3000
          hostPort: 3001
        resources: {}
      restartPolicy: Always
status: {}
How can I specify that each subsequent replica needs to use the next port up?

docker-compose sounds like it suited you well; you may be interested in podman-compose, which is meant to be a drop-in replacement:
https://github.com/containers/podman-compose
This should allow you to keep the original workflow that you enjoyed. Alternatively, Podman 3 includes docker-compose support natively.
In terms of incrementing the port automatically, a few rough suggestions for solving the underlying problem are:
Switch back to compose with the above and use YAML anchors to define multiple services with the same config but override the port (a sketch follows these suggestions).
Investigate using Kind or Minikube to utilise a local Kubernetes cluster, offering a larger surface of the K8s API than Podman's current implementation. (Podman's work with K8s is pretty neat, but limited. A small K8s stack would allow you to utilise Services for more flexibility with routing.)
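A minimal sketch of that YAML-anchor approach, assuming the same server image as in the question (the compose file and service names are hypothetical):
# docker-compose.yml (or podman-compose) sketch; service names are illustrative
version: "3.7"

x-server-base: &server-base        # shared config, reused via a YAML anchor
  image: registry.example.com/server:1.2.3
  environment:
    NODE_ENV: production

services:
  server1:
    <<: *server-base
    ports:
      - "3001:3000"                # host:container, incremented per service
  server2:
    <<: *server-base
    ports:
      - "3002:3000"
  server3:
    <<: *server-base
    ports:
      - "3003:3000"
  server4:
    <<: *server-base
    ports:
      - "3004:3000"
The &server-base anchor and the <<: merge key keep the shared config in one place, so only the published host port differs between the four services.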
None of these tools really support auto-incrementing port allocations: if you're manually specifying ports, you either have a smallish, simple stack (within a docker-compose file, say), or you have one special workload among a sea of (likely) auto-routed and managed services on a Kubernetes cluster.
Fortunately, both of these options can set ports dynamically for you, as you may know from using docker-compose to scale a service (docker-compose scale service-name=4), with the caveat that you have not pinned a host port in the service spec.
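A sketch of that dynamic-port variant, leaving the host port unpinned so each scaled container gets its own ephemeral host port (again a hypothetical compose file):
version: "3.7"
services:
  server:
    image: registry.example.com/server:1.2.3
    environment:
      NODE_ENV: production
    ports:
      - "3000"   # container port only; the host port is assigned dynamically per replica
Scaled with the docker-compose scale server=4 command mentioned above, each of the four containers publishes port 3000 on a different, dynamically chosen host port, which nginx would then need to discover.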
Hope that gives you options to think about that may help you resolve this challenge in your workflow.

You can explore using Kubernetes Services in front of a replica set.
The service is in charge of load balancing requests between all pods matched by a valid selector field. All your backend pods can then use the same port, as they already do with replicas, and you do not need to configure a different port in each pod.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend-pod # modify your replicaset with a suitable label
  ports:
    - port: 3000       # use whatever port you like here; this is the port the service listens on, the one you configure in nginx.conf later. It can be different from the targetPort.
      targetPort: 3000 # requests will be redirected to this port on the pod.
As you access the pods via the service, you also need to modify nginx.conf to access the service directly. You no longer need to list every pod on its own line, which also gains you flexibility: if you scale the deployment up to 10 replicas, for example, you do not need to add the extra servers here. The service does this dirty work for you.
# node cluster
upstream server_nodes {
  server backend-service:3000 fail_timeout=0;
}
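For completeness, a minimal sketch of the Deployment this assumes, labelled so the Service selector above matches it; with no hostPort set, all replicas can share container port 3000 (names here are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 4
  selector:
    matchLabels:
      app: backend-pod        # matches the Service selector above
  template:
    metadata:
      labels:
        app: backend-pod
    spec:
      containers:
      - name: server
        image: registry.example.com/server:1.2.3
        env:
        - name: NODE_ENV
          value: production
        ports:
        - containerPort: 3000 # no hostPort, so replicas do not collide
Note that this Service-plus-Deployment pattern assumes a Kubernetes-style environment (e.g. Kind or Minikube as suggested in the other answer), since Podman only implements a subset of the Kubernetes API.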

Related

How to configure Traefik UDP Ingress?

My UDP setup doesn't work.
In the traefik pod,
--entryPoints.udp.address=:4001/udp
is added. The port is listening, and the traefik UI shows a udp entrypoint on port 4001. So the UDP entrypoint on 4001 is working.
I have applied this CRD:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: udp
spec:
  entryPoints:
    - udp
  routes:
    - services:
        - name: udp
          port: 4001
Kubernetes Service:
apiVersion: v1
kind: Service
metadata:
  name: udp
spec:
  selector:
    app: udp-server
  ports:
    - protocol: UDP
      port: 4001
      targetPort: 4001
I got this error on the traefik UI:
NAME: default-udp-0#kubernetescrd
ENTRYPOINTS: udp
SERVICE:
ERRORS: the udp service "default-udp-0#kubernetescrd" does not exist
What did I do wrong? Or is it a bug?
traefik version 2.3.1
I ran into this trouble using k3s/rancher and traefik 2.x. The problem was that configuring the command line switch only made the environment look right in the traefik dashboard; it just did not work.
In k3s the solution is to provide a traefik-config.yaml beside the traefik.yaml. traefik.yaml is always recreated on a restart of k3s.
Putting traefik-config.yaml at /var/lib/rancher/k3s/server/manifests/traefik-config.yaml keeps the changes persistent.
What is missing is the entrypoint declaration. You might assume this is covered by the command line switch as well, but it is not.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--entryPoints.udp.address=:55000/udp"
    entryPoints:
      udp:
        address: ':55000/udp'
Before going further, check the helm install jobs in the kube-system namespace. If one of the two helm install jobs errors out, traefik won't work.
In case everything worked as above and you still have trouble, one option is just to configure the UDP traffic as a normal Kubernetes LoadBalancer service, like this example, which I tested successfully:
apiVersion: v1
kind: Service
metadata:
  name: nginx-udp-ingress-demo-svc-udp
spec:
  selector:
    app: nginx-udp-ingress-demo
  ports:
    - protocol: UDP
      port: 55000
      targetPort: 55000
  type: LoadBalancer
The entry type: LoadBalancer will start a pod on a Kubernetes node that forwards incoming UDP/55000 to the load balancer service.
This worked for me on a k3s cluster, but it is not the native traefik solution asked for in the question; it is more a workaround that makes things work in the first place.
I found a source that seems to cover the Traefik solution at https://github.com/traefik/traefik/blob/master/docs/content/routing/providers/kubernetes-crd.md.
That seems to have a working solution, but it has a very slim explanation and shows just the manifests. I need to test this out and come back.
This worked on my system.
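For reference, a rough sketch of what the CRD-based setup from that doc looks like when combined with the 55000 entrypoint above (untested here; the resource and service names are illustrative):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: udp-ingress
spec:
  entryPoints:
    - udp                   # must match an entrypoint declared in traefik's static config
  routes:
    - services:
        - name: udp-service # a plain ClusterIP Service in front of the UDP pods
          port: 55000
The point of the answer above still applies: the udp entrypoint referenced here has to be declared in traefik's static configuration (the HelmChartConfig above), not just show up as a command line switch in the dashboard.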

Traefik & k3d: Dashboard is not reachable

This is my k3d cluster creation command:
$ k3d cluster create arxius \
--agents 3 \
--k3s-server-arg --disable=traefik \
-p "8888:80#loadbalancer" -p "9000:9000#loadbalancer" \
--volume ${HOME}/.k3d/registries.yaml:/etc/rancher/k3s/registries.yaml
Here my nodes:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c83f2f746621 rancher/k3d-proxy:v3.0.1 "/bin/sh -c nginx-pr…" 2 weeks ago Up 21 minutes 0.0.0.0:9000->9000/tcp, 0.0.0.0:8888->80/tcp, 0.0.0.0:45195->6443/tcp k3d-arxius-serverlb
0ed525443da2 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-2
561a0a51e6d7 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-1
fc131df35105 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-0
4cfceabad5af rancher/k3s:v1.18.6-k3s1 "/bin/k3s server --d…" 2 weeks ago Up 21 minutes k3d-arxius-server-0
873a4f157251 registry:2 "/entrypoint.sh /etc…" 3 months ago Up About an hour 0.0.0.0:5000->5000/tcp registry.localhost
I've installed traefik using the default helm installation command:
$ helm install traefik traefik/traefik
After that, an ingressroute is also installed in order to reach the dashboard:
Name:         traefik-dashboard
Namespace:    traefik
Labels:       app.kubernetes.io/instance=traefik
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=traefik
              helm.sh/chart=traefik-9.1.1
Annotations:  helm.sh/hook: post-install,post-upgrade
API Version:  traefik.containo.us/v1alpha1
Kind:         IngressRoute
Metadata:
  Creation Timestamp:  2020-12-09T19:07:41Z
  Generation:          1
  Managed Fields:
    API Version:  traefik.containo.us/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:helm.sh/hook:
        f:labels:
          .:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
          f:app.kubernetes.io/name:
          f:helm.sh/chart:
      f:spec:
        .:
        f:entryPoints:
        f:routes:
    Manager:         Go-http-client
    Operation:       Update
    Time:            2020-12-09T19:07:41Z
  Resource Version:  141805
  Self Link:         /apis/traefik.containo.us/v1alpha1/namespaces/traefik/ingressroutes/traefik-dashboard
  UID:               1cbcd5ec-d967-440c-ad21-e41a59ca1ba8
Spec:
  Entry Points:
    traefik
  Routes:
    Kind:   Rule
    Match:  PathPrefix(`/dashboard`) || PathPrefix(`/api`)
    Services:
      Kind:  TraefikService
      Name:  api#internal
Events:  <none>
As you can see:
Match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
I'm trying to reach the dashboard. Nevertheless, details are not shown.
I've also tried to launch a curl command:
curl 'http://localhost:9000/api/overview'
curl: (52) Empty reply from server
Any ideas?
First, the default configuration of the traefik helm chart (in version 9.1.1) sets up the entryPoint traefik on port 9000 but does not expose it automatically. So, if you check the service created for you, you will see that it only maps the web and websecure endpoints.
Check this snippet from kubectl get svc traefik -o yaml
spec:
  clusterIP: xx.xx.xx.xx
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    nodePort: 30388
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    nodePort: 31115
    port: 443
    protocol: TCP
    targetPort: websecure
  selector:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
  sessionAffinity: None
  type: LoadBalancer
As explained in the docs, there are two ways to reach your dashboard: either you start a port-forward to your local machine for port 9000, or you expose the dashboard via an ingressroute on another entrypoint.
Please be aware that you still need to port-forward even though your k3d proxy already binds to 9000. That binding is only a reservation in case some load-balanced service wants to be exposed on that external port; at the moment it is not used and is not necessary for either solution. You still need to port-forward to the traefik pod. After establishing the port-forward, you can access the dashboard on http://localhost:9000/dashboard/ (be aware of the trailing slash that is needed for the PathPrefix rule).
The other solution of exposing on another entrypoint requires no port-forward, but you need a proper domain name (DNS entry + host rule) and should take care not to expose it to the whole world, e.g. by adding an auth middleware (a sketch follows the manifest below).
See the changes highlighted below:
# dashboard.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: dashboard
spec:
entryPoints:
- web # <-- using the web entrypoint, not the traefik (9000) one
routes: # v-- adding a host rule
- match: Host(`traefik.localhost`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
kind: Rule
services:
- name: api#internal
kind: TraefikService
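A minimal sketch of the auth middleware mentioned above, assuming a hypothetical Secret named dashboard-auth-users that holds an htpasswd-style users entry:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: dashboard-auth
spec:
  basicAuth:
    secret: dashboard-auth-users   # hypothetical Secret with a "users" key
It would then be attached to the route above by listing it under the route's middlewares field (middlewares: [{name: dashboard-auth}]).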

Adding f5 router to existing openshift cluster

I'm running okd 3.6 (upgrade is a work in progress) with an f5 BIG-IP appliance running 11.8. We currently have 2 virtual servers for http(s) doing nat/pat and talking to the cluster's haproxy. The cluster is configured to use redhat/openshift-ovs-subnet.
I now have users asking to do TLS passthrough. Can I add new virtual servers and an f5 router pod to the cluster and run this in conjunction with my existing virtual servers and haproxy?
Thank you.
Personally I think... yes, you can. If TLS passthrough is a matter of route configuration, then you just define the route as follows, and the HAProxy will pass the traffic through to your new virtual server.
apiVersion: v1
kind: Route
metadata:
  labels:
    name: myService
  name: myService-route-passthrough
  namespace: default
spec:
  host: mysite.example.com
  path: "/myApp"
  port:
    targetPort: 443
  tls:
    termination: passthrough
  to:
    kind: Service
    name: myService
Frankly, I am not sure I have understood your needs correctly, so I might not be answering your question exactly; you may want to read the following for more appropriate solutions.
Passthrough Termination
Simple SSL Passthrough (Non-Prod only)

AWS-EKS deployed pod is exposed with type service Node Port is not accessible over nodePort IP and exposed port

I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html.
I have one worker node. Note: everything is in private subnets.
I'm just running a node.js hello-world container.
Code for pod definition
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - name: nodejs-port
      containerPort: 3000
Code for service definition
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running
nodehelloworld.example.com 1/1 Running 0 17h
kubectl get svc shows that my service is also created
helloworld-service NodePort 172.20.146.235 <none> 31001:31001/TCP 16h
kubectl describe svc helloworld-service it has correct end-point and correct selector
So here is the problem:
When I hit NodeIP:exposed port (which is 31001) I get "This site can't be reached".
Then I used kubectl port-forward podname 3000:3000
and curl -v localhost:3000 is reachable.
I checked my security group; the inbound rule is 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster, then in the security group for the worker nodes you will have to add a custom TCP entry enabling inbound traffic on port 31001.
If that does not work then make sure you are able to connect to the Node through that IP. I usually connect using a VPN client.
Fixed.
On AWS EKS, NodePorts do not work the same way as on pure Kubernetes.
When you expose
  - port: 31001
    targetPort: nodejs-port
    protocol: TCP
31001 is the ClusterIP port that gets exposed.
To find the actual NodePort, you must describe your service and look for the NodePort field in the description; that is the port that was exposed.
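For reference, the same Service with the three port fields spelled out (values taken from the question), which makes clear which port is which:
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
    - protocol: TCP
      port: 31001             # ClusterIP port, reachable from inside the cluster
      targetPort: nodejs-port # named container port (3000) on the pod
      nodePort: 31001         # port opened on each worker node (default range 30000-32767)
With the NodePort confirmed via kubectl describe svc helloworld-service, the remaining blockers are usually the worker node security group and general network reachability, as the first answer notes.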

AWS EKS - cannot access apache httpd behind a LoadBalancer

I've deployed an apache httpd server in a container and am attempting to expose it externally via a LoadBalancer. Although I can log on to the local host and get the expected response (curl -X GET localhost), when I try to access the external URL exposed by the load balancer I get an empty reply from server:
curl -X GET ad8d14ea0ba9611e8b2360afc35626a3-553331517.us-east-1.elb.amazonaws.com:5000
curl: (52) Empty reply from server
Any idea what I am missing - is there some kind of additional redirection going on that I'm unaware of?
The yaml is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: apache
  template:
    metadata:
      name: apachehost
      labels:
        pod: apache
    spec:
      containers:
      - name: apache
        image: myrepo/apache2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  labels:
    app: apache
spec:
  type: LoadBalancer
  selector:
    pod: apache
  ports:
  - name: port1
    port: 5000
    targetPort: 80
1. Check that your pod is running.
2. Check AWS IAM and the security group; port 5000 may not be open to the public. Use a curl command from the Kubernetes master and check the port.
3. Share the pod logs.
Check the security group of your AWS load balancer for an inbound rule that opens port 5000.
Check the inbound rules of the load balancer.
If your pods are running on Fargate the load balancer service will not work: https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html