I deployed RabbitMQ (from the helm chart https://github.com/helm/charts/tree/master/stable/rabbitmq) in a Kubernetes cluster, in a namespace named rabbitmq. I added 3 IPs (the IPs of my Kubernetes nodes) as externalIPs in the rabbitmq service.
Here is the rabbitmq service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: rabbitmq
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-6.15.0
  name: rabbitmq
  namespace: rabbitmq
spec:
  externalIPs:
  - x.x.x.x1
  - x.x.x.x2
  - x.x.x.x3
  ports:
  - name: epmd
    port: 4369
    protocol: TCP
    targetPort: epmd
  - name: amqp
    port: 5672
    protocol: TCP
    targetPort: amqp
  - name: amqp-ssl
    port: 5671
    protocol: TCP
    targetPort: amqp-ssl
  - name: dist
    port: 25672
    protocol: TCP
    targetPort: dist
  - name: stats
    port: 15672
    protocol: TCP
    targetPort: stats
  - name: metrics
    port: 9419
    protocol: TCP
    targetPort: metrics
  selector:
    app.kubernetes.io/component: rabbitmq
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-6.15.0
  type: ClusterIP
I have an external load balancer which targets these 3 IPs, and a DNS entry "rabbitmq.mycompagny.com" which points to my load balancer. I always use this mechanism to reach all my ingresses without any problem.
Finally, I have a test pod in a namespace named test, with amqp-tools installed.
Here are some commands run from this test pod, with their results:
amqp-get --url=amqps://user:password@rabbitmq.rabbitmq.svc.cluster.local:5671 --cacert=/opt/bitnami/rabbitmq/certs/ca_certificate.pem --key=/opt/bitnami/rabbitmq/certs/server_key.pem --cert=/opt/bitnami/rabbitmq/certs/server_certificate.pem --queue=test --ssl
# opening socket to rabbitmq.rabbitmq.svc.cluster.local:5671
# => KO
# logs from rabbitmq :
#2020-01-29 07:34:25.308 [debug] <0.3077.0> accepting AMQP connection <0.3077.0> (10.244.96.29:42620 -> 10.244.32.52:5671)
#2020-01-29 07:34:25.308 [debug] <0.3077.0> closing AMQP connection <0.3077.0> (10.244.96.29:42620 -> 10.244.32.52:5671):
#connection_closed_with_no_data_received
amqp-get --url=amqps://user:password@rabbitmq.mycompagny.com:5671 --cacert=/opt/bitnami/rabbitmq/certs/ca_certificate.pem --key=/opt/bitnami/rabbitmq/certs/server_key.pem --cert=/opt/bitnami/rabbitmq/certs/server_certificate.pem --queue=test --ssl
# => OK
# logs from rabbitmq :
#2020-01-29 07:37:47.430 [info] <0.23936.4> accepting AMQP connection <0.23936.4> (10.244.32.0:55694 -> 10.244.32.28:5671)
#2020-01-29 07:37:47.472 [debug] <0.23936.4> User 'user' authenticated successfully by backend rabbit_auth_backend_internal
#2020-01-29 07:37:47.486 [info] <0.23936.4> closing AMQP connection <0.23936.4> (10.244.32.0:55694 -> 10.244.32.28:5671, vhost: '/', user: 'user')
amqp-get --url=amqp://user:password@rabbitmq.rabbitmq.svc.cluster.local:5672 --queue=test
# => OK
amqp-get --url=amqp://user:password@rabbitmq.mycompagny.com:5672 --queue=test
#opening socket to rabbitmq.mycompagny.fr:5672
# => OK
The only difference between these commands is the target DNS name. Without TLS, both names work; with TLS, the external DNS name works but the internal one does not. Can you explain why?
I double-checked the certificate, which is a signed wildcard certificate (verified with openssl s_server/s_client).
I have to use externalIPs targeted by an external load balancer to access rabbitmq, since Kubernetes ingresses only support the HTTP/HTTPS protocols and RabbitMQ speaks AMQP/AMQPS.
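(For comparison only: the same two AMQP ports could also be published through a Service of type LoadBalancer instead of externalIPs. This is just a sketch of that alternative, not my setup; the port names and numbers mirror the manifest above, while the name and the simplified selector are assumed.)
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-external   # hypothetical name, not part of the setup above
  namespace: rabbitmq
spec:
  type: LoadBalancer        # the load balancer in front of the cluster provides the external address
  selector:
    app.kubernetes.io/name: rabbitmq
  ports:
  - name: amqp
    port: 5672
    targetPort: amqp
  - name: amqp-ssl
    port: 5671
    targetPort: amqp-ssl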
Related
I am trying to configure traefik and its load balancer to accept traffic on host port 9200.
Everything works fine for port 8443 (websecure). I am using k3d, with the bundled traefik initially disabled.
I can curl my "2048" service from my macos host. The ingress is configured for "websecure" endpoint and a match is found.
curl --cacert ca.crt -I https://2048.127.0.0.1.nip.io:8443
HTTP/2 200
I have installed the exact same service and named it "2049". I want this service to be available on port 9200 (I have disabled TLS to simplify things).
+ curl -vvv -k -I http://2049.127.0.0.1.nip.io:9200
* Trying 127.0.0.1:9200...
* Connected to 2049.127.0.0.1.nip.io (127.0.0.1) port 9200 (#0)
> HEAD / HTTP/1.1
> Host: 2049.127.0.0.1.nip.io:9200
> User-Agent: curl/7.79.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
Both services can be accessed from within the cluster.
I have installed traefik through helm and made sure ports are available.
k get -n traefik-system svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik LoadBalancer 10.43.86.220 172.27.0.3,172.27.0.4,172.27.0.5 80:30039/TCP,443:30253/TCP,9092:30179/TCP,9200:31428/TCP 61m
# just to show that the lb is configured for port 9200 (iptables, /pause container)
k logs -n traefik-system pod/svclb-traefik-h5zs4
error: a container name must be specified for pod svclb-traefik-h5zs4, choose one of: [lb-tcp-80 lb-tcp-443 lb-tcp-9092 lb-tcp-9200]
# my ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: game-2049
spec:
  entryPoints:                 # We listen to requests coming from port 9200
    - elasticsearch
  routes:
  - match: Host(`2049.127.0.0.1.nip.io`)
    kind: Rule
    services:
    - name: game-2049          # Requests will be forwarded to this service
      port: 80
# traefik is configured with these entrypoint addresses:
- "--entrypoints.web.address=:8000/tcp"
- "--entrypoints.websecure.address=:8443/tcp"
- "--entrypoints.kafka.address=:9092/tcp"
- "--entrypoints.elasticsearch.address=:9200/tcp"
My goal is to access elasticsearch on 9200 and kafka on 9092 from my macOS host using k3d, but first I need to get this configuration for "2049" right.
What am I missing?
I have this working on K3s using Bitnami Kafka.
You need two things:
First, define the entrypoint in the traefik config, which from your note you already have. Here is mine:
kubectl describe pods traefik-5bcf476bb9-qrqg7 --namespace traefik
Name: traefik-5bcf476bb9-qrqg7
Namespace: traefik
Priority: 0
Service Account: traefik
...
Status: Running
...
Image: traefik:2.9.1
Image ID: docker.io/library/traefik@sha256:4ebf68cdb33c162e8786ac83ece782ec0dbe583471c04dfd0af43f245b96c88f
Ports: 9094/TCP, 9100/TCP, 9000/TCP, 8000/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
--global.checknewversion
--global.sendanonymoususage
--entrypoints.kafka.address=:9094/tcp
--entrypoints.metrics.address=:9100/tcp
--entrypoints.traefik.address=:9000/tcp
--entrypoints.web.address=:8000/tcp
--entrypoints.websecure.address=:8443/tcp
--api.dashboard=true
--ping=true
--metrics.prometheus=true
--metrics.prometheus.entrypoint=metrics
--providers.kubernetescrd
--providers.kubernetescrd.allowCrossNamespace=true
--providers.kubernetescrd.allowExternalNameServices=true
--providers.kubernetesingress
--providers.kubernetesingress.allowExternalNameServices=true
--providers.kubernetesingress.allowEmptyServices=true
--entrypoints.websecure.http.tls=true
State: Running
Started: Thu, 27 Oct 2022 16:27:22 -0400
Ready: True
I'm using TCP port 9094 for kafka traffic.
Second is the Ingress: I'm using the IngressRouteTCP CRD.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: kafka-ingress
  namespace: bitnami-kafka
spec:
  entryPoints:
    - kafka
  routes:
  - match: HostSNI(`*`)
    services:
    - name: my-bkafka-0-external
      namespace: bitnami-kafka
      port: 9094
Note: traefik is routing to a k8s LoadBalancer service.
kubectl get services --namespace bitnami-kafka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-bkafka ClusterIP 10.43.153.8 <none> 9092/TCP 20h
my-bkafka-0-external LoadBalancer 10.43.45.233 10.55.10.243 9094:30737/TCP 20h
my-bkafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 20h
my-bkafka-zookeeper ClusterIP 10.43.170.229 <none> 2181/TCP,2888/TCP,3888/TCP 20h
my-bkafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 20h
which is option A from Bitnami's write-up on Kafka external access.
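(For reference, the my-bkafka-0-external LoadBalancer above is what the chart creates when external access is enabled, roughly with the following values fragment. This assumes the bitnami/kafka chart's externalAccess keys; exact key names vary between chart versions.)
# values.yaml fragment for the bitnami/kafka chart (assumed key layout)
externalAccess:
  enabled: true
  service:
    type: LoadBalancer   # option A: one LoadBalancer Service per broker, external port defaults to 9094
  autoDiscovery:
    enabled: true        # let the chart's init container pick up the LB address automatically
rbac:
  create: true           # required for autoDiscovery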
If I have a backend implementation for TLS, does Ingress NGINX expose it correctly?
I'm exposing an MQTT service through Ingress NGINX with the following configuration:
ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
# Add the service we want to expose
data:
  1883: "default/mosquitto-broker:1883"
DaemonSet:
---
apiVersion: apps/v1
kind: DaemonSet
...
spec:
  selector:
    matchLabels:
      name: nginx-ingress-microk8s
  template:
    metadata:
      ...
    spec:
      ...
      ports:
      - containerPort: 80
      - containerPort: 443
      # Add the service we want to expose
      - name: prx-tcp-1883
        containerPort: 1883
        hostPort: 1883
        protocol: TCP
      args:
      - /nginx-ingress-controller
      - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
      - --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
      - --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
      $DEFAULT_CERT
      $EXTRA_ARGS
I have configured the MQTT broker to use TLS in the backend. When I run the broker on my machine, outside the Kubernetes cluster, Wireshark identifies the traffic as TLS and shows nothing about MQTT:
However, if I run the broker inside the cluster, Wireshark shows that I'm using MQTT and nothing about TLS, but the messages aren't decoded correctly:
And finally, if I run the MQTT broker inside the cluster without TLS, Wireshark correctly decodes the MQTT packets:
My question is: is the connection encrypted when I use TLS inside the cluster? It's true that Wireshark doesn't show the content of the packets, but it knows I'm using MQTT. Maybe the headers aren't encrypted but the payload is? Does anyone know exactly?
The problem was that I was running MQTT over TLS on port 8883 as recommended by the documentation (not on port 1883, the standard MQTT port), but Wireshark didn't recognise this port as an MQTT port, so the dissection shown by Wireshark was somewhat broken.
I have an ActiveMQ consumer in AKS that I am trying to connect to an external service.
I have set up an AKS load balancer with a dedicated IP and the following rules, but it will not connect.
apiVersion: v1
kind: Service
metadata:
  name: mx-load-balancer
spec:
  loadBalancerIP: 1.1.1.1
  type: LoadBalancer
  ports:
  - name: activemq-port-61616
    port: 61616
    targetPort: 61616
    protocol: TCP
  selector:
    k8s-app: handlers-mx
Any ideas?
First of all, your loadBalancerIP is not a real one; you need to use the real IP of your LB. Second, you need to add an annotation for a Service of type LoadBalancer to work:
annotations:
  service.beta.kubernetes.io/azure-load-balancer-resource-group: LB_RESOURCE_GROUP
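Putting that together, the Service would look roughly like this (a sketch only; the resource-group value and IP are placeholders, and the annotation is only needed when the static public IP lives in a resource group other than the cluster's node resource group):
apiVersion: v1
kind: Service
metadata:
  name: mx-load-balancer
  annotations:
    # tells AKS which resource group holds the pre-provisioned public IP
    service.beta.kubernetes.io/azure-load-balancer-resource-group: LB_RESOURCE_GROUP
spec:
  type: LoadBalancer
  loadBalancerIP: <REAL_STATIC_PUBLIC_IP>   # must be an existing static public IP in Azure
  ports:
  - name: activemq-port-61616
    port: 61616
    targetPort: 61616
    protocol: TCP
  selector:
    k8s-app: handlers-mx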
I'm trying to connect to my ElastiCache Redis Cluster 5.0 from within a container in EKS that has Istio as a sidecar proxy, but I constantly get a MOVED error loop.
I have 1 shard with 2 replicas, and I have added a ServiceEntry and a VirtualService for each of the shards plus the configuration endpoint.
Example config used for Istio routing:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  ports:
  - number: 6379
    name: tcp
    protocol: TCP
  resolution: NONE
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  http:
  - timeout: 30s
    route:
    - destination:
        host: redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
Note that the Redis protocol is not HTTP, so you cannot use an http route in a VirtualService.
To control egress access for a TCP protocol like Redis, check the Egress Control for TLS section of the Consuming External MongoDB Services blog post.
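For illustration only, a TCP route for the same host would look roughly like this (a sketch along the lines of that blog post, not a verified ElastiCache configuration):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  tcp:                  # tcp routes instead of http
  - match:
    - port: 6379
    route:
    - destination:
        host: redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
        port:
          number: 6379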
I have created a Kubernetes cluster where I'm currently running only a single Docker service that serves a static web page. It works when exposing the standard port 80.
Now I want to attach an SSL certificate to the domain, and I have managed to do so running locally. But when I publish my service to the Kubernetes cluster, https://my.domain.com times out. It looks like the service never receives the request, as if it is blocked by Kubernetes or GCE.
Do I need to open a firewall rule, or set up my cluster deployment to open port 443? What might be the issue?
I have heard that Ingress and Kubernetes Secrets are the way to go, but everything I find uses ingress-nginx, and since I only have a single Docker service I don't use Nginx. To me it seems like simply letting the 443 traffic reach the service would be the easiest solution, or am I wrong?
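(For context, from what I've read the Ingress-plus-Secret approach would look roughly like the sketch below; the resource names are hypothetical, the API versions match the era of my manifests below, and I haven't tried this.)
apiVersion: v1
kind: Secret
metadata:
  name: my-domain-tls            # hypothetical name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: client-ingress           # hypothetical name
spec:
  tls:
  - hosts:
    - my.domain.com
    secretName: my-domain-tls
  backend:
    serviceName: client          # the Service defined below
    servicePort: 80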
Below is my setup:
apiVersion: v1
kind: Service
metadata:
  name: client
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: client-pods
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  revisionHistoryLimit: 0
  template:
    metadata:
      labels:
        name: client-pods
    spec:
      containers:
      - image: <CONTAINER>
        name: client-container
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: http
        - containerPort: 443
          name: https
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 1
I have also enabled HTTPS traffic on the GKE VM running the cluster, and the Dockerfile exposes both 80 and 443. I'm at a loss. Anyone know what I'm doing wrong?