If I have a backend implementation for TLS, does Ingress NGINX expose it correctly?
I'm exposing an MQTT service through Ingress NGINX with the following configuration:
ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
# Add the service we want to expose
data:
  1883: "default/mosquitto-broker:1883"
DaemonSet:
---
apiVersion: apps/v1
kind: DaemonSet
...
spec:
  selector:
    matchLabels:
      name: nginx-ingress-microk8s
  template:
    metadata:
      ...
    spec:
      ...
      ports:
        - containerPort: 80
        - containerPort: 443
        # Add the service we want to expose
        - name: prx-tcp-1883
          containerPort: 1883
          hostPort: 1883
          protocol: TCP
      args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
        - --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
        - --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
        $DEFAULT_CERT
        $EXTRA_ARGS
I have configured the MQTT broker to use TLS on the backend. When I run the broker on my machine, outside the Kubernetes cluster, Wireshark identifies the traffic as TLS and shows nothing about MQTT.
However, if I run the broker inside the cluster, Wireshark shows that I'm using MQTT and nothing about TLS, although the messages aren't read correctly.
And finally, if I run the MQTT broker inside the cluster without TLS, Wireshark correctly detects the MQTT packets.
My question is: is the connection encrypted when I use TLS inside the cluster? It's true that Wireshark doesn't show the content of the packets, but it knows I'm using MQTT. Maybe the headers aren't encrypted but the payload is? Does anyone know exactly?
The problem was that I was running MQTT over TLS on port 8883, as recommended by the documentation (not on port 1883, which is used for standard MQTT), but Wireshark didn't recognise that port as an MQTT port, so the way Wireshark dissected the traffic was somewhat misleading.
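For completeness, a minimal, untested sketch of exposing that TLS listener through the same tcp-services mechanism, assuming the mosquitto Service also publishes port 8883 (a matching containerPort/hostPort 8883 entry would also be needed in the DaemonSet's ports list):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
data:
  1883: "default/mosquitto-broker:1883"   # plain MQTT
  8883: "default/mosquitto-broker:8883"   # MQTT over TLS, forwarded as opaque TCP
The controller only forwards the TCP stream on these ports, so the TLS session is still terminated by the broker itself.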
Related
My UDP setup doesn't work.
In the traefik pod,
--entryPoints.udp.address=:4001/udp
is added. The port is listening, and the traefik UI shows a udp entrypoint on port 4001. So the UDP entrypoint 4001 is working.
I have applied this CRD:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: udp
spec:
  entryPoints:
    - udp
  routes:
    - services:
        - name: udp
          port: 4001
Kubernetes Service manifest:
apiVersion: v1
kind: Service
metadata:
  name: udp
spec:
  selector:
    app: udp-server
  ports:
    - protocol: UDP
      port: 4001
      targetPort: 4001
I got this error on the traefik UI:
NAME: default-udp-0#kubernetescrd
ENTRYPOINTS: udp
SERVICE:
ERRORS: the udp service "default-udp-0#kubernetescrd" does not exist
What did I do wrong? Or is it a bug?
traefik version 2.3.1
I ran into the same trouble using k3s/Rancher and Traefik 2.x. The problem was that configuring only the command line switch showed a working environment in the Traefik dashboard, but it just did not work.
In k3s the solution is to provide a traefik-config.yaml beside the traefik.yaml; traefik.yaml is always recreated on a restart of k3s. Putting traefik-config.yaml at /var/lib/rancher/k3s/server/manifests/traefik-config.yaml keeps the changes persistent.
What is missing is the entrypoint declaration. You might assume the command line switch takes care of that as well, but it does not.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--entryPoints.udp.address=:55000/udp"
    entryPoints:
      udp:
        address: ':55000/udp'
Before going further, check the helm-install jobs in the kube-system namespace. If one of the two helm-install jobs errors out, Traefik won't work.
In case everything above worked and you still have trouble, one option is simply to expose the UDP traffic as a normal Kubernetes LoadBalancer Service, like this example, which I tested successfully:
apiVersion: v1
kind: Service
metadata:
  name: nginx-udp-ingress-demo-svc-udp
spec:
  selector:
    app: nginx-udp-ingress-demo
  ports:
    - protocol: UDP
      port: 55000
      targetPort: 55000
  type: LoadBalancer
The entry type: LoadBalancer will start a pod on the Kubernetes nodes that forwards incoming UDP/55000 to the load balancer Service.
This worked for me on a k3s cluster, but it is not the native Traefik solution asked for in the question; it is more of a workaround that makes things work in the first place.
I found a source that seems to cover the Traefik solution at https://github.com/traefik/traefik/blob/master/docs/content/routing/providers/kubernetes-crd.md.
It seems to have a working solution, but the explanation is very slim and it shows just the manifests. I need to test this out and come back.
This worked on my system.
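For reference, a rough sketch of the native pairing that document describes: an entrypoint declared in Traefik's static configuration, and an IngressRouteUDP that references that entrypoint by name and points at a Kubernetes Service. Names and ports here are illustrative, not taken from the document:
# static configuration (e.g. via the HelmChartConfig above)
entryPoints:
  fooudp:
    address: ':4001/udp'
---
# dynamic configuration, applied as a Kubernetes resource
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: udp-route
  namespace: default
spec:
  entryPoints:
    - fooudp            # must match the static entrypoint name
  routes:
    - services:
        - name: udp     # Kubernetes Service name
          port: 4001    # Service port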
I have a couple of microservices, and our backend is OpenDJ/LDAP. It has been configured to use SSL. Now we are trying to use Istio as our k8s service mesh. Every other service works fine, but the LDAP server, OpenDJ, does not. My guess is that it's because of the SSL configuration; it's meant to use a self-signed cert.
I have a script that creates a self-signed cert in the istio namespace, and I have tried to use it like this in the gateway.yaml:
- port:
    number: 4444
    name: tcp-admin
    protocol: TCP
  hosts:
    - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
I have also tried to use:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: opendj-istio-mtls
spec:
  host: opendj.{{ .Release.Namespace }}.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
      credentialName: tls-certificate
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: opendj-receive-tls
spec:
  targets:
    - name: opendj
  peers:
    - mtls: {}
for the LDAP server, but it's not connecting. While trying to use the tls spec in gateway.yaml I am getting this error:
Error: admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: server cannot have TLS settings for non HTTPS/TLS ports
And these are the logs from the OpenDJ server:
INFO - entrypoint - 2020-06-17 12:49:44,768 - Configuring OpenDJ.
WARNING - entrypoint - 2020-06-17 12:49:48,987 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
WARNING - entrypoint - 2020-06-17 12:49:53,293 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
Can someone please help me out with how I should approach this?
To enable non-HTTPS traffic over TLS connections you have to use protocol TLS. TLS implies the connection will be routed based on the SNI header to the destination without terminating the TLS connection. You can check this.
- port:
    number: 4444
    name: tls
    protocol: TLS
  hosts:
    - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
Please also check this Istio documentation.
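If the goal is exactly what the first sentence above describes, routing on the SNI header without terminating TLS at the gateway, the corresponding mode would be PASSTHROUGH rather than SIMPLE. A rough sketch, where the host value is an assumption and must match the SNI your LDAP client actually sends:
- port:
    number: 4444
    name: tls-admin
    protocol: TLS
  hosts:
    - "opendj.default.svc.cluster.local"   # assumed SNI host; adjust to your client
  tls:
    mode: PASSTHROUGH   # no credentialName needed, OpenDJ keeps serving its own cert
With PASSTHROUGH the gateway never decrypts the traffic, so the self-signed certificate only has to be trusted by the LDAP client, not by Istio.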
I have an ActiveMQ consumer in AKS that I am trying to connect to an external service.
I have set up an AKS load balancer with a dedicated IP with the following rules, but it will not connect.
apiVersion: v1
kind: Service
metadata:
  name: mx-load-balancer
spec:
  loadBalancerIP: 1.1.1.1
  type: LoadBalancer
  ports:
    - name: activemq-port-61616
      port: 61616
      targetPort: 61616
      protocol: TCP
  selector:
    k8s-app: handlers-mx
Any ideas?
First of all, your loadBalancerIP is not a real one; you need to use the real IP of your LB. Second, you need to add an annotation for a Service of type LoadBalancer to work:
annotations:
  service.beta.kubernetes.io/azure-load-balancer-resource-group: LB_RESOURCE_GROUP
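Putting both points together, a sketch of the Service with the annotation in place (LB_RESOURCE_GROUP and the IP are placeholders; the IP must be a static public IP that already exists in that resource group):
apiVersion: v1
kind: Service
metadata:
  name: mx-load-balancer
  annotations:
    # resource group that holds the pre-created static public IP
    service.beta.kubernetes.io/azure-load-balancer-resource-group: LB_RESOURCE_GROUP
spec:
  type: LoadBalancer
  loadBalancerIP: 1.1.1.1   # replace with the real static IP
  ports:
    - name: activemq-port-61616
      port: 61616
      targetPort: 61616
      protocol: TCP
  selector:
    k8s-app: handlers-mx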
I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html.
I have one worker node. Note: everything is in private subnets.
I'm just running a Node.js hello-world container.
Code for the Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
    - name: k8s-demo
      image: wardviaene/k8s-demo
      ports:
        - name: nodejs-port
          containerPort: 3000
Code for the Service definition:
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
    - port: 31001
      nodePort: 31001
      targetPort: nodejs-port
      protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running
nodehelloworld.example.com 1/1 Running 0 17h
kubectl get svc shows that my service is also created
helloworld-service NodePort 172.20.146.235 <none> 31001:31001/TCP 16h
kubectl describe svc helloworld-service shows the correct endpoint and the correct selector.
So here is the problem:
When I hit NodeIP:exposed-port (which is 31001), I get "This site can't be reached".
Then I used kubectl port-forward podname 3000:3000, and with curl -v localhost:3000 the pod is reachable.
I checked that my security group's inbound rule allows 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster, then in the security group for the worker nodes you will have to add a custom TCP entry enabling inbound traffic on port 31001.
If that does not work, make sure you are able to reach the node at that IP. I usually connect using a VPN client.
Fixed.
On AWS EKS, NodePorts do not work the same way as on plain Kubernetes.
When you expose
- port: 31001
  targetPort: nodejs-port
  protocol: TCP
31001 is the ClusterIP port that gets exposed.
In order to get the nodePort, you must describe your Service and look for the NodePort that was exposed in the description.
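If you would rather not read the node port out of the description, you can also pin it explicitly in the Service spec. A sketch based on the manifest above; the value has to fall inside the cluster's node-port range (30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
    - protocol: TCP
      port: 31001           # ClusterIP port inside the cluster
      targetPort: nodejs-port
      nodePort: 31001       # port opened on every worker node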
My company has a running microservice project deployed with OpenShift.
The existing services successfully connect to the RabbitMQ service with these properties in their code:
spring.cloud.stream.rabbitmq.host: rabbitmq
spring.cloud.stream.rabbitmq.port: 5672
Now I'm developing a new service on my laptop, not deployed to OpenShift yet, and I'm trying to connect to the same broker as the others, so I created a new Route for the RabbitMQ service with 5672 as its port.
This is my Route YAML:
apiVersion: v1
kind: Route
metadata:
  name: rabbitmq-amqp
  namespace: message-broker
  selfLink: /oapi/v1/namespaces/message-broker/routes/rabbitmq-amqp
  uid: 5af3e903-a8ad-11e7-8370-005056aca8b0
  resourceVersion: '21744899'
  creationTimestamp: '2017-10-04T02:40:16Z'
  labels:
    app: rabbitmq
  annotations:
    openshift.io/host.generated: 'true'
spec:
  host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
  to:
    kind: Service
    name: rabbitmq
    weight: 100
  port:
    targetPort: 5672-tcp
  tls:
    termination: passthrough
  wildcardPolicy: None
status:
  ingress:
    - host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
      routerName: router
      conditions:
        - type: Admitted
          status: 'True'
          lastTransitionTime: '2017-10-04T02:40:16Z'
      wildcardPolicy: None
When I try to connect my new service with these properties:
spring.cloud.stream.rabbitmq.host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
spring.cloud.stream.rabbitmq.port: 80
my service fails to establish the connection.
How can I solve this problem?
Where should I fix it: my service's Route or my service's properties?
Thanks for your attention.
If you are using a passthrough secure connection to expose a non-HTTP server outside of the cluster, your service must terminate the TLS connection and your client must also support SNI over TLS. Do you have both of those? Right now it seems you are trying to connect on port 80 anyway, which means it must be HTTP. Isn't RabbitMQ non-HTTP? If you are only trying to connect to it from a front-end app in the same OpenShift cluster, you don't need a Route for rabbitmq; use 'rabbitmq' as the host name and port 5672.
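For a client that runs inside the same OpenShift cluster, that last suggestion would look roughly like this in application.yml, mirroring the property keys the existing services already use (an untested sketch, not a verified configuration):
spring:
  cloud:
    stream:
      rabbitmq:
        host: rabbitmq   # the Service name, resolved via cluster DNS
        port: 5672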