My company has a microservices project running on OpenShift.
The existing services connect to the RabbitMQ service successfully with these properties in their code:
spring.cloud.stream.rabbitmq.host: rabbitmq
spring.cloud.stream.rabbitmq.port: 5672
Now I'm developing a new service on my laptop, not yet deployed to OpenShift, and I'm trying to connect to the same broker as the others, so I created a new route for the RabbitMQ service with 5672 as its port.
This is my route YAML:
apiVersion: v1
kind: Route
metadata:
  name: rabbitmq-amqp
  namespace: message-broker
  selfLink: /oapi/v1/namespaces/message-broker/routes/rabbitmq-amqp
  uid: 5af3e903-a8ad-11e7-8370-005056aca8b0
  resourceVersion: '21744899'
  creationTimestamp: '2017-10-04T02:40:16Z'
  labels:
    app: rabbitmq
  annotations:
    openshift.io/host.generated: 'true'
spec:
  host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
  to:
    kind: Service
    name: rabbitmq
    weight: 100
  port:
    targetPort: 5672-tcp
  tls:
    termination: passthrough
  wildcardPolicy: None
status:
  ingress:
    - host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
      routerName: router
      conditions:
        - type: Admitted
          status: 'True'
          lastTransitionTime: '2017-10-04T02:40:16Z'
      wildcardPolicy: None
When I try to connect my new service with these properties:
spring.cloud.stream.rabbitmq.host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
spring.cloud.stream.rabbitmq.port: 80
my service fails to establish the connection.
How do I solve this problem?
Where should I fix it: the route or my service properties?
Thanks for your attention.
If you are using a passthrough secure connection to expose a non-HTTP server outside of the cluster, your service must terminate the TLS connection and your client must also support SNI over TLS. Do you have both of those? Right now you are trying to connect on port 80 anyway, which means it must be HTTP. Isn't RabbitMQ non-HTTP? If you are only trying to connect to it from a front-end app in the same OpenShift cluster, you don't need a Route on rabbitmq; use 'rabbitmq' as the host name and port 5672.
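For completeness: if you do want to reach the broker from your laptop through the passthrough route, a rough sketch of the client-side properties could look like the one below. This assumes RabbitMQ actually has a TLS (AMQPS) listener behind that route, that your AMQP client library sends SNI, and that you connect to the router's TLS port 443 rather than 80; it also uses the standard Spring Boot spring.rabbitmq.* keys rather than the spring.cloud.stream.rabbitmq.* keys shown above.

spring.rabbitmq.host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
spring.rabbitmq.port: 443
spring.rabbitmq.ssl.enabled: true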
Related
We are using ActiveMQ 5.15.6. We have deployed it as a container on a Kubernetes cluster. We have exposed the ActiveMQ web console using a load balancer.
Now we are setting up a service mesh using Istio, for which we have to enable strict TLS in all the namespaces (applications). After enabling TLS we were unable to use the load balancer, so to expose the web console we need to create a virtual service and a gateway to do the routing.
We created the virtual service and tried to expose it on the path /activemq, but it is not working as expected. We also changed the routing path from /admin to /activemq in jetty.xml, since /admin conflicted with another application's path.
Kindly help us understand how we can set up the proper routing using a virtual service.
Note: We also tried it with nginx-ingress and it didn’t work.
Virtual Service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: activemq-vs
spec:
  hosts:
  - "*"
  gateways:
  - default/backend-gateway
  http:
  - name: activemq-service
    match:
    - uri:
        prefix: /activemq
    rewrite:
      uri: /activemq
    route:
    - destination:
        host: active-mq.activemq.svc.cluster.local
        port:
          number: 8161
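For reference, a minimal sketch of what the referenced default/backend-gateway might look like; the selector and port here are assumptions based on a default Istio ingress gateway install, not taken from the question:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: backend-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway   # assumes the stock istio-ingressgateway labels
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"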
If I have a backend implementation for TLS, does Ingress NGINX expose it correctly?
I'm exposing an MQTT service through Ingress NGINX with the following configuration:
ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
# Add the service we want to expose
data:
  1883: "default/mosquitto-broker:1883"
DaemonSet:
---
apiVersion: apps/v1
kind: DaemonSet
...
spec:
  selector:
    matchLabels:
      name: nginx-ingress-microk8s
  template:
    metadata:
      ...
    spec:
      ...
      ports:
      - containerPort: 80
      - containerPort: 443
      # Add the service we want to expose
      - name: prx-tcp-1883
        containerPort: 1883
        hostPort: 1883
        protocol: TCP
      args:
      - /nginx-ingress-controller
      - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
      - --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
      - --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
        $DEFAULT_CERT
        $EXTRA_ARGS
I have configured the MQTT broker to use TLS in the backend. When I run the broker on my machine, outside the Kubernetes cluster, Wireshark detects the messages as TLS and shows nothing about MQTT.
However, if I run the broker inside the cluster, Wireshark shows that I'm using MQTT and nothing about TLS, but the messages aren't read correctly.
And finally, if I run the MQTT broker inside the cluster without TLS, Wireshark correctly detects the MQTT packets.
My question is: is the connection encrypted when I use TLS inside the cluster? It's true that Wireshark doesn't show the content of the packets, but it knows I'm using MQTT. Maybe the headers aren't encrypted but the payload is? Does anyone know exactly?
The problem was that I was running TLS MQTT on port 8883 as recommended by the documentation (not on port 1883 for standard MQTT), but Wireshark didn't recognise this port as an MQTT port, so the dissection Wireshark showed was somewhat broken.
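As a side note, if the TLS listener on 8883 should also be reachable from outside the cluster, a possible sketch of the tcp-services ConfigMap is shown below; it assumes the mosquitto-broker Service also exposes port 8883, and the DaemonSet would need a matching containerPort/hostPort entry for 8883 as well:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
data:
  1883: "default/mosquitto-broker:1883"
  8883: "default/mosquitto-broker:8883"   # TLS (MQTTS) listener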
We are connecting Ambassador (API Gateway, https://www.getambassador.io/) to a Google VM instance group via a load balancer on which HTTP/2 is enabled. This requires SSL to be enabled. There is no clear information on how to connect Ambassador to an SSL-enabled end system.
We tried connecting to the Google VM instance from an Ambassador pod running in Kubernetes via a normal HTTP service, as per the suggestion in https://github.com/datawire/ambassador/issues/585, but could not find a way to connect to an SSL-enabled endpoint by providing an SSL certificate.
kind: Service
apiVersion: v1
metadata:
  name: a-b-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: a-b-mapping
      grpc: True
      headers:
        lang: t
      prefix: /a.Listener/
      rewrite: /a.Listener/
      service: http://<ip>:<port>/
      timeout_ms: 60000
We want to connect to an SSL-enabled Google VM instance group via load balancing. Also, how do we provide the SSL certificate for this?
kind: Service
apiVersion: v1
metadata:
  name: a-b-service
  annotations:
    .....
      service: https://<ip>:443/ <---- https with ssl
      timeout_ms: 60000
Can someone suggest how to achieve this?
Ambassador has a fair amount of documentation around TLS, so probably your best bet is to check out https://www.getambassador.io/reference/core/tls/ and start from there.
The short version is specifying a service that starts with https:// is enough for Ambassador to originate TLS, but if you want to control the originating certificate, you need to reference a TLSContext as well.
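For illustration only, a rough sketch of what that could look like in the annotation, assuming a Kubernetes Secret named upstream-cert holds the client certificate Ambassador should present (the TLSContext name and secret are placeholders, not from the question):

---
apiVersion: ambassador/v1
kind: TLSContext
name: upstream-tls
secret: upstream-cert
---
apiVersion: ambassador/v1
kind: Mapping
name: a-b-mapping
grpc: True
prefix: /a.Listener/
rewrite: /a.Listener/
service: https://<ip>:443/
tls: upstream-tls
timeout_ms: 60000

Here the Mapping's tls attribute names the TLSContext that Ambassador uses when originating the TLS connection to the upstream.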
I'm running OKD 3.6 (an upgrade is a work in progress) with an F5 BIG-IP appliance running 11.8. We currently have two virtual servers for HTTP(S) doing NAT/PAT and talking to the cluster's HAProxy. The cluster is configured to use redhat/openshift-ovs-subnet.
I now have users asking for TLS passthrough. Can I add new virtual servers and an F5 router pod to the cluster and run this in conjunction with my existing virtual servers and HAProxy?
Thank you.
Personally I think... yes, you can. If TLS passthrough is a matter of route configuration, then you just define the route as follows, and HAProxy will hand the traffic through to your new virtual server.
apiVersion: v1
kind: Route
metadata:
  labels:
    name: myService
  name: myService-route-passthrough
  namespace: default
spec:
  host: mysite.example.com
  # note: path-based routing (spec.path) is not available with passthrough termination
  port:
    targetPort: 443
  tls:
    termination: passthrough
  to:
    kind: Service
    name: myService
Frankly, I'm not sure I've understood your needs correctly, so I may not have answered your question appropriately; you may want to read the following for more suitable solutions.
Passthrough Termination
Simple SSL Passthrough (Non-Prod only)
I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html.
I have one worker node. Note: everything is in private subnets.
I'm just running a Node.js hello-world container.
Code for the pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - name: nodejs-port
      containerPort: 3000
Code for the service definition:
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running:
nodehelloworld.example.com 1/1 Running 0 17h
kubectl get svc shows that my service is also created:
helloworld-service NodePort 172.20.146.235 <none> 31001:31001/TCP 16h
kubectl describe svc helloworld-service shows the correct endpoint and the correct selector.
So here is the problem:
when I hit NodeIP:exposed-port (which is 31001), I get "This site can't be reached".
Then I used kubectl port-forward podname 3000:3000,
and curl -v localhost:3000 is reachable.
I checked my security group; the inbound rule is 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster, then in the security group for the worker nodes you will have to add a custom TCP entry enabling inbound traffic on port 31001.
If that does not work, make sure you are able to reach the node on that IP. I usually connect using a VPN client.
Fixed.
On AWS EKS, NodePorts do not work quite the same way as on plain Kubernetes.
When you expose
- port: 31001
  targetPort: nodejs-port
  protocol: TCP
31001 is the ClusterIP port that gets exposed. To find the NodePort, describe your service and look for the NodePort field in the description; that is the port that was actually exposed.
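For reference, a sketch of the same Service with the NodePort pinned explicitly; nodePort must fall inside the cluster's NodePort range (30000-32767 by default), and if it is omitted Kubernetes assigns a random port from that range, which is what you then find with kubectl describe svc:

apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
  - port: 31001           # ClusterIP port
    targetPort: nodejs-port
    nodePort: 31001       # port opened on every worker node
    protocol: TCP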