ActiveMQ integration with Istio

We are using ActiveMQ 5.15.6. We have deployed it as a container on a Kubernetes cluster. We have exposed the ActiveMQ web console using a load balancer.
Now we are setting up a service mesh using Istio, for which we have to enable strict TLS in all the namespaces (applications). After enabling TLS we are unable to use the load balancer, so to expose the web console we need to create a VirtualService and a Gateway to do the routing.
We have created the VirtualService and tried to expose the console on the path /activemq, but it is not working as expected. We also changed the routing path from /admin to /activemq in jetty.xml, as /admin conflicted with another application's path.
Kindly help us understand how we can set up proper routing using a VirtualService.
Note: We also tried it with nginx-ingress and it didn’t work.
Virtual Service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: activemq-vs
spec:
  hosts:
  - "*"
  gateways:
  - default/backend-gateway
  http:
  - name: activemq-service
    match:
    - uri:
        prefix: /activemq
    rewrite:
      uri: /activemq
    route:
    - destination:
        host: active-mq.activemq.svc.cluster.local
        port:
          number: 8161
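For the routing to work, the VirtualService must bind to a Gateway that actually exists; the question references default/backend-gateway but does not show it. A minimal sketch of such a Gateway, with the port and hosts values assumed rather than taken from the question, might look like:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: backend-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80            # assumed plain-HTTP port; use 443/HTTPS with a tls block in production
      name: http
      protocol: HTTP
    hosts:
    - "*"
```

Note also that with rewrite.uri: /activemq the path is forwarded unchanged, so this setup relies on the jetty.xml change actually serving the console under /activemq inside the container.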

Related

HTTPS Ingress with Istio and SDS not working (returns 404) when I configure multiple Gateways

When I configure multiple (gateway - virtual service) pairs in a namespace, each pointing to basic HTTP services, only one service becomes accessible. Calls to the other (typically, the second configured) return 404. If the first gateway is deleted, the second service then becomes accessible.
I raised a GitHub issue a few weeks ago (https://github.com/istio/istio/issues/20661) that contains all my configuration, but there has been no response to date. Does anyone know what I'm doing wrong (if anything)?
Based on that GitHub issue:
The gateway port names have to be unique if they are sharing the same port. That's the only way we differentiate different RDS blocks. We went through this motion earlier as well. I wouldn't rock this boat unless absolutely necessary.
More about the issue here
I checked the Istio documentation, and in fact when you configure multiple gateways, the port name of the first one is https, while the second is https-bookinfo.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "httpbin.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https-bookinfo
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-bookinfo-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-bookinfo-certs/tls.key
    hosts:
    - "bookinfo.com"
EDIT
That's weird, but I have another idea.
There is a GitHub pull request which has the following line in Pilot:
routeName := gatewayRDSRouteName(s, config.Namespace)
This change adds namespace scoping to Gateway port names by appending
namespace suffix to the HTTPS RDS routes. Port names still have to be
unique within the namespace boundaries, but this change makes adding
more specific scoping rather trivial.
Could you try making 2 namespaces like in the example below?
EXAMPLE
apiVersion: v1
kind: Namespace
metadata:
  name: httpbin
  labels:
    name: httpbin
    istio-injection: enabled
---
apiVersion: v1
kind: Namespace
metadata:
  name: nodejs
  labels:
    name: nodejs
    istio-injection: enabled
Then deploy everything (deployment, service, virtual service, gateway) in the proper namespace and let me know if that works.
Could you try changing the hosts from "*" to some actual names? That's the only other thing that comes to mind besides trying serverCertificate and privateKey, but from the comments I assume you have already tried that.
Let me know if that helps.
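The hosts suggestion above could be sketched as follows; the hostname and backend service are placeholders, not taken from the poster's configuration:

```yaml
# Hypothetical example: replace "*" with an explicit hostname
# in both the Gateway and the VirtualService that binds to it.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: httpbin
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "httpbin.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-vs
  namespace: httpbin
spec:
  hosts:
  - "httpbin.example.com"   # must match (or be covered by) the Gateway's hosts
  gateways:
  - httpbin-gateway
  http:
  - route:
    - destination:
        host: httpbin.httpbin.svc.cluster.local   # assumed backend service
        port:
          number: 8000
```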

How to connect ambassador to ssl enabled google instance groups

We are connecting Ambassador (API Gateway - https://www.getambassador.io/) to a Google VM instance group via a load balancer where HTTP/2 is enabled. This requires SSL to be enabled. There is no proper information on how to connect Ambassador to an SSL-enabled end system.
We tried connecting to the Google VM instance from an Ambassador pod running in Kubernetes via a normal HTTP service, as per the suggestion in https://github.com/datawire/ambassador/issues/585. But we could not find a way to connect to an SSL-enabled endpoint by providing an SSL certificate.
kind: Service
apiVersion: v1
metadata:
  name: a-b-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: a-b-mapping
      grpc: True
      headers:
        lang: t
      prefix: /a.Listener/
      rewrite: /a.Listener/
      service: http://<ip>:<port>/
      timeout_ms: 60000
We want to connect to an SSL-enabled Google VM instance group via load balancing. Also, how do we provide an SSL certificate for this?
kind: Service
apiVersion: v1
metadata:
  name: a-b-service
  annotations:
    .....
      service: https://<ip>:443/ <---- https with ssl
      timeout_ms: 60000
Can someone suggest how to achieve this?
Ambassador has a fair amount of documentation around TLS, so probably your best bet is to check out https://www.getambassador.io/reference/core/tls/ and start from there.
The short version is specifying a service that starts with https:// is enough for Ambassador to originate TLS, but if you want to control the originating certificate, you need to reference a TLSContext as well.
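A hedged sketch of that approach, assuming the client certificate is stored in a Kubernetes TLS secret (the secret and context names here are placeholders):

```yaml
apiVersion: ambassador/v1
kind: TLSContext
name: upstream-tls-context
hosts: []
secret: upstream-cert        # assumed Kubernetes TLS secret with the cert/key pair
---
apiVersion: ambassador/v1
kind: Mapping
name: a-b-mapping
grpc: True
prefix: /a.Listener/
rewrite: /a.Listener/
service: https://<ip>:443/   # the https:// scheme makes Ambassador originate TLS
tls: upstream-tls-context    # use this context's certificate when originating
```

Without the tls attribute, Ambassador still originates TLS for an https:// service, but with a default (non-client-authenticated) connection.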

Adding f5 router to existing openshift cluster

I'm running okd 3.6 (upgrade is a work in progress) with a f5 bigip appliance running 11.8. We currently have 2 virtual servers for http(s) doing nat/pat and talking to the clusters haproxy. The cluster is configured to use redhat/openshift-ovs-subnet.
I now have users asking to do tls passthrough. Can I add new virtual servers and a f5 router pod to the cluster and run this in conjunction with my existing virtual servers and haproxy?
Thank you.
Personally I think... yes, you can. If TLS passthrough is a matter of route configuration, then you just define the route as follows, and HAProxy will pass the connection through to your new virtual server.
apiVersion: v1
kind: Route
metadata:
  labels:
    name: myService
  name: myService-route-passthrough
  namespace: default
spec:
  host: mysite.example.com
  path: "/myApp"
  port:
    targetPort: 443
  tls:
    termination: passthrough
  to:
    kind: Service
    name: myService
Frankly, I'm not sure I have understood your needs correctly, so this may not answer your question exactly; the following readings may point you to more appropriate solutions.
Passthrough Termination
Simple SSL Passthrough (Non-Prod only)

Unable to connect to running rabbitmq services route on openshift

My company has a running microservice project deployed with OpenShift, and the existing services successfully connect to the RabbitMQ service with these properties in their code:
spring.cloud.stream.rabbitmq.host: rabbitmq
spring.cloud.stream.rabbitmq.port: 5672
Now I'm developing a new service on my laptop, not yet deployed to OpenShift, and I'm trying to connect to the same broker as the others, so I created a new route for the RabbitMQ service with 5672 as its port.
This is my route YAML:
apiVersion: v1
kind: Route
metadata:
  name: rabbitmq-amqp
  namespace: message-broker
  selfLink: /oapi/v1/namespaces/message-broker/routes/rabbitmq-amqp
  uid: 5af3e903-a8ad-11e7-8370-005056aca8b0
  resourceVersion: '21744899'
  creationTimestamp: '2017-10-04T02:40:16Z'
  labels:
    app: rabbitmq
  annotations:
    openshift.io/host.generated: 'true'
spec:
  host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
  to:
    kind: Service
    name: rabbitmq
    weight: 100
  port:
    targetPort: 5672-tcp
  tls:
    termination: passthrough
  wildcardPolicy: None
status:
  ingress:
  - host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
    routerName: router
    conditions:
    - type: Admitted
      status: 'True'
      lastTransitionTime: '2017-10-04T02:40:16Z'
    wildcardPolicy: None
When I try to connect my new service with these properties:
spring.cloud.stream.rabbitmq.host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
spring.cloud.stream.rabbitmq.port: 80
my service fails to establish the connection.
How do I solve this problem? Where should I fix it: my service route or my service properties?
Thanks for your attention.
If you are using a passthrough secure connection to expose a non-HTTP server outside of the cluster, your service must terminate a TLS connection and your client must also support SNI over TLS. Do you have both of those? Right now it seems you are trying to connect on port 80 anyway, which means it must be HTTP. Isn't RabbitMQ non-HTTP? If you are only trying to connect to it from a front-end app in the same OpenShift cluster, you don't need a Route on RabbitMQ; use 'rabbitmq' as the host name and use port 5672.
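For the in-cluster case the answer describes, the client configuration would simply point at the Service name and the AMQP port. A sketch, keeping the property names used in the question and assuming the client runs in the same message-broker namespace:

```yaml
# application.yml of a service running inside the same OpenShift namespace
spring:
  cloud:
    stream:
      rabbitmq:
        host: rabbitmq   # the Service name resolves via cluster DNS
        port: 5672       # AMQP port on the Service; no Route needed
```

From a different namespace in the same cluster, the fully qualified Service name (rabbitmq.message-broker.svc.cluster.local) would be used instead.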

Enable HTTPS on GCE/GKE

I am running a web site with Kubernetes on Google Cloud. At the moment, everything is working well over HTTP, but I need HTTPS. I have several services, and one of them is exposed to the outside world; let's call it web. As far as I know, this is the only service that needs to be modified. I tried creating a static IP and a TCP/SSL load balancer ssl-LB in the Networking section of GCP and using that LB in web.yaml, which I create. Creating the service gets stuck with:
Error creating load balancer (will retry): Failed to create load
balancer for service default/web: requested ip <IP> is
neither static nor assigned to LB
aff3a4e1f487f11e787cc42010a84016(default/web): <nil>
According to GCP, however, my IP is static. The hashed LB I cannot find anywhere, and it should be assigned to ssl-LB anyway. How do I assign this properly?
More details:
Here are the contents of web.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    ...
spec:
  type: LoadBalancer
  loadBalancerIP: <RESERVED STATIC IP>
  ports:
  - port: 443
    targetPort: 7770
  selector:
    ...
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        ...
    spec:
      containers:
      - name: web
        image: gcr.io/<PROJECT>/<IMAGE NAME>
        ports:
        - containerPort: 7770
Since you have not mentioned this already, I'm just assuming you're using Google Container Engine (GKE) for your Kubernetes setup.
In the service resource manifest, if you set the type to LoadBalancer, Kubernetes on GKE automatically sets up network load balancing (an L4 load balancer) using GCE. You will have to terminate connections in your pod using your own custom server or something like nginx/apache.
If your goal is to set up a (HTTP/HTTPS) L7 load balancer (which looks to be the case), it will be simpler and easier to use the Ingress resource in Kubernetes (starting with v1.1). GKE automatically sets up a GCE HTTP/HTTPS L7 load balancing with this setup.
You will be able to add your TLS certificates which will get provisioned on the GCE load balancer automatically by GKE.
This setup has the following advantages:
Specify services per URL path and port (it uses URL Maps from GCE to configure this).
Set up and terminate SSL/TLS on the GCE load balancer (it uses Target proxies from GCE to configure this).
GKE will automatically also configure the GCE health checks for your services.
Your responsibility will be to handle the backend service logic to handle requests in your pods.
More info available on the GKE page about setting up HTTP load balancing.
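The Ingress approach could be sketched as follows; the resource names and the TLS secret are assumptions, and the API group matches the extensions/v1beta1 era used elsewhere in the question:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  tls:
  - secretName: web-tls   # e.g. kubectl create secret tls web-tls --cert=... --key=...
  backend:
    serviceName: web      # the Service from web.yaml
    servicePort: 7770
```

Note that the GCE Ingress controller expects the backend Service to be reachable from the load balancer (typically type NodePort rather than LoadBalancer), and GKE then provisions the L7 load balancer and attaches the certificate from the secret.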
Remember that when using GKE, it automatically uses the available GCE load balancer support for both the use cases described above and you will not need to manually set up GCE load balancing.