IP/hostname whitelisting for API calls from OpenShift

This is more of a how-to question, as I am still exploring OpenShift.
We have an orchestrator running on OpenShift which calls a REST API written in Flask, hosted on Apache/RHEL.
While our endpoint is token-authenticated, we wanted to add a second level of restriction by only allowing access from a whitelist of source hosts.
But by design, OpenShift can schedule a container on any (number of) servers across its cluster.
What is the best way to go about whitelisting calls that originate from a cluster of machines?
I tried taking a look at an External Load Balancer for my orchestrator service:
spec:
  clusterIP: 172.30.65.163
  externalIPs:
  - 10.198.40.123
  - 172.29.29.133
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.198.40.123
  ports:
  - nodePort: 30768
    port: 5023
    protocol: TCP
    targetPort: 5023
  selector:
    app: dbrun-x2
    deploymentconfig: dbrun-x2
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 172.29.29.133
So what I am unsure of is: what source IP should I expect to see on the other side [in my API's Apache access logs] with this setup?
or
Does this LoadBalancer act as a gateway only for incoming calls into OpenShift?
Sorry about the long post - would appreciate some input.
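One approach sometimes used for this (not part of the original question, and assuming the cluster's OpenShift SDN version supports egress IPs) is to pin all outbound traffic from the orchestrator's project to a single egress IP, which Apache can then whitelist. A minimal sketch with placeholder values, applied via oc edit/oc patch:

# Added to the project's NetNamespace, so pods in that project egress via one address:
egressIPs:
- 10.198.40.200        # placeholder IP to whitelist on the Apache side
---
# Added to the HostSubnet of the node that should host that egress IP:
egressIPs:
- 10.198.40.200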

Related

ActiveMQ integration with Istio

We are using ActiveMQ 5.15.6. We have deployed it as a container on a Kubernetes cluster. We have exposed the ActiveMQ web console using a load balancer.
Now we are setting up a service mesh using Istio, for which we have to enable strict TLS on all the namespaces (applications). After enabling TLS we are unable to use the load balancer, so to expose the web console we need to create a virtual service and a gateway which will do the routing.
We have created the virtual service and tried to expose it on the path /activemq, but it is not working as expected. We also changed the routing path from /admin to /activemq in jetty.xml, as /admin was conflicting with the path of another of our applications.
Kindly help us understand how we can set up the proper routing using a virtual service.
Note: We also tried it with nginx-ingress and it didn’t work.
Virtual Service:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: activemq-vs
spec:
  hosts:
  - "*"
  gateways:
  - default/backend-gateway
  http:
  - name: activemq-service
    match:
    - uri:
        prefix: /activemq
    rewrite:
      uri: /activemq
    route:
    - destination:
        host: active-mq.activemq.svc.cluster.local
        port:
          number: 8161
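The manifest above references default/backend-gateway, which is not shown in the question. A minimal sketch of what such a Gateway could look like (the selector and port are assumptions matching Istio's stock ingress gateway):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: backend-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway      # assumes the default istio-ingressgateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"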

How to restrict direct access from internet to Azure Public LoadBalancer backend pool VM with NSG

As the title says, I'm setting up the following architecture on Azure Cloud and having trouble restricting direct access from the Internet to the VMs.
Here are the architecture requirements:
Both VMs must have public IPs (for SysAdmins to access via SSH)
Direct traffic from the Internet to the WebService on the VMs (via port 80) must be denied
Web traffic from the Internet must go through the Public LB to the VMs
Suppose that both VMs are in WebASG (Application Security Group). In the NSG applied to the VMs' subnet, I've added some rules (which have higher priority than the 3 default Azure NSG rules):
Scenario A (adding 1 custom rule):
Port: 80 - Protocol: Tcp - Source: Internet - Destination: WebASG - Action: Allow
With this NSG setting, I can access the WebService via the LoadBalancer IP (satisfying requirement #3), but the WebService on port 80 of both VMs is also exposed to the Internet (violating requirement #2).
Scenario B (adding 2 custom rules):
Port: 80 - Protocol: Tcp - Source: AzureLoadBalancer - Destination: WebASG - Action: Allow
Port: 80 - Protocol: Tcp - Source: Internet - Destination: WebASG - Action: Deny
With this NSG setting, requirement #2 is satisfied, but I can no longer access the WebService via the LoadBalancer IP (violating requirement #3).
Please note that when using AGW (Azure Application Gateway), I could satisfy all the requirements with this NSG configuration:
RuleName: AllowSSH Port: 22 - Protocol: Tcp - Source: sys-admin-ip-address - Destination: WebASG - Action: Allow
RuleName: DenyInternet2Web Port: Any - Protocol: Any - Source: Internet - Destination: WebASG - Action: Deny
RuleName: AllowProbe2Web Port: 80 - Protocol: Tcp - Source: VirtualNetwork - Destination: WebASG - Action: Allow
I don't want to use AGW because it would cost more money than the Azure LoadBalancer (the Basic LoadBalancer is actually free). So, how could I change the NSG to satisfy all the requirements when using the LoadBalancer?
Thanks in advance for any help!
I don't think there are NSG rules that will satisfy all the requirements, because requirements #1 and #2 are contradictory.
If the VMs must have public IP addresses, they are inherently exposed to the Internet: any client can reach the VMs via their public IPs, just as it can through the load balancer frontend IP. See https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-overview#load-balancer-concepts
Load Balancer doesn't terminate or originate flows, interact with the payload of the flow, or provide any application layer gateway function. Protocol handshakes always occur directly between the client and the back-end pool instance. A response to an inbound flow is always a response from a virtual machine. When the flow arrives on the virtual machine, the original source IP address is also preserved.
In this case, you could remove the instance-level public IP addresses and just use the load balancer frontend for both the web traffic and the SSH connections. You could then configure port forwarding in the Azure Load Balancer for SSH to the individual instances, and a load-balancing rule for the web traffic, following this quickstart (which works with a Standard LB). Then you only need to allow ports 80 and 22 from your clients' IP addresses. The NSG rule would look like this:
Port: 80,22 - Protocol: Tcp - Source: client's IP list - Destination: WebASG - Action: Allow

Adding f5 router to existing openshift cluster

I'm running OKD 3.6 (an upgrade is a work in progress) with an F5 BIG-IP appliance running 11.8. We currently have two virtual servers for HTTP(S) doing NAT/PAT and talking to the cluster's HAProxy. The cluster is configured to use redhat/openshift-ovs-subnet.
I now have users asking for TLS passthrough. Can I add new virtual servers and an F5 router pod to the cluster and run this in conjunction with my existing virtual servers and HAProxy?
Thank you.
Personally I think... yes, you can. If the TLS passthrough is just a matter of route configuration, then you define the route as follows, and HAProxy will pass the traffic through to your new virtual server.
apiVersion: v1
kind: Route
metadata:
  labels:
    name: myService
  name: myService-route-passthrough
  namespace: default
spec:
  host: mysite.example.com
  # note: path-based routing (e.g. path: "/myApp") cannot be combined with
  # passthrough termination, since the router never decrypts the request
  port:
    targetPort: 443
  tls:
    termination: passthrough
  to:
    kind: Service
    name: myService
Frankly, I'm not sure I've understood your needs correctly, so I may not have answered your question appropriately; you may want to read the following for more suitable solutions:
Passthrough Termination
Simple SSL Passthrough (Non-Prod only)

Unable to connect to running rabbitmq services route on openshift

My company has a microservice project running on OpenShift,
and the existing services connect to the RabbitMQ service successfully with these properties in their code:
spring.cloud.stream.rabbitmq.host: rabbitmq
spring.cloud.stream.rabbitmq.port: 5672
Now I'm developing a new service on my laptop, not yet deployed to OpenShift, and I'm trying to connect to the same broker as the others, so I created a new route for the RabbitMQ service with 5672 as its port.
This is my route YAML:
apiVersion: v1
kind: Route
metadata:
  name: rabbitmq-amqp
  namespace: message-broker
  selfLink: /oapi/v1/namespaces/message-broker/routes/rabbitmq-amqp
  uid: 5af3e903-a8ad-11e7-8370-005056aca8b0
  resourceVersion: '21744899'
  creationTimestamp: '2017-10-04T02:40:16Z'
  labels:
    app: rabbitmq
  annotations:
    openshift.io/host.generated: 'true'
spec:
  host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
  to:
    kind: Service
    name: rabbitmq
    weight: 100
  port:
    targetPort: 5672-tcp
  tls:
    termination: passthrough
  wildcardPolicy: None
status:
  ingress:
  - host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
    routerName: router
    conditions:
    - type: Admitted
      status: 'True'
      lastTransitionTime: '2017-10-04T02:40:16Z'
    wildcardPolicy: None
When I try to connect my new service with these properties:
spring.cloud.stream.rabbitmq.host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
spring.cloud.stream.rabbitmq.port: 80
my service fails to establish the connection.
How do I solve this problem?
Where should I fix it: my service's route or my service's properties?
Thanks for your attention.
If you are using a passthrough secure connection to expose a non-HTTP server outside of the cluster, your service must terminate the TLS connection and your client must also support SNI over TLS. Do you have both of those? Right now it seems you are trying to connect on port 80 anyway, which means it must be HTTP. Isn't RabbitMQ non-HTTP? If you are only trying to connect to it from a front-end app in the same OpenShift cluster, you don't need a Route on rabbitmq at all; use 'rabbitmq' as the host name and port 5672.
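In other words (a sketch, assuming the new service will eventually be deployed into the same cluster and can resolve the rabbitmq Service by name), the connection properties would stay exactly as the existing services have them, shown here in application.yml form with the same property keys used in the question:

spring:
  cloud:
    stream:
      rabbitmq:
        host: rabbitmq   # the in-cluster Service name
        port: 5672       # AMQP port; no Route is needed for in-cluster traffic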

Enable HTTPS on GCE/GKE

I am running a web site with Kubernetes on Google Cloud. At the moment, everything is working well over HTTP, but I need HTTPS. I have several services and one of them is exposed to the outside world; let's call it web. As far as I know, this is the only service that needs to be modified. I tried creating a static IP and a TCP/SSL load balancer ssl-LB in the Networking section of GCP and using that LB in web.yaml, which I created. Creating the service gets stuck with:
Error creating load balancer (will retry): Failed to create load
balancer for service default/web: requested ip <IP> is
neither static nor assigned to LB
aff3a4e1f487f11e787cc42010a84016(default/web): <nil>
According to GCP, however, my IP is static. The hashed LB I cannot find anywhere, and it should be assigned to ssl-LB anyway. How do I assign this properly?
More details:
Here are the contents of web.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    ...
spec:
  type: LoadBalancer
  loadBalancerIP: <RESERVED STATIC IP>
  ports:
  - port: 443
    targetPort: 7770
  selector:
    ...
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        ...
    spec:
      containers:
      - name: web
        image: gcr.io/<PROJECT>/<IMAGE NAME>
        ports:
        - containerPort: 7770
Since you have not mentioned this already, I'm just assuming you're using Google Container Engine (GKE) for your Kubernetes setup.
In the Service resource manifest, if you set type to LoadBalancer, Kubernetes on GKE automatically sets up network load balancing (an L4 load balancer) using GCE. You will have to terminate connections in your pod using your own custom server or something like nginx/Apache.
If your goal is to set up an (HTTP/HTTPS) L7 load balancer (which looks to be the case), it is simpler and easier to use the Ingress resource in Kubernetes (available starting with v1.1). GKE automatically sets up GCE HTTP/HTTPS L7 load balancing with this setup.
You will be able to add your TLS certificates which will get provisioned on the GCE load balancer automatically by GKE.
This setup has the following advantages:
Specify services per URL path and port (it uses URL Maps from GCE to configure this).
Set up and terminate SSL/TLS on the GCE load balancer (it uses Target proxies from GCE to configure this).
GKE will automatically also configure the GCE health checks for your services.
Your responsibility will be to handle the backend service logic to handle requests in your pods.
More info available on the GKE page about setting up HTTP load balancing.
Remember that when using GKE, it automatically uses the available GCE load balancer support for both the use cases described above and you will not need to manually set up GCE load balancing.
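For illustration, a minimal Ingress of the kind described above might look like the following (the resource names and the TLS secret are placeholders, not from the original post; the certificate and key would first be stored in a kubernetes.io/tls Secret):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  tls:
  - secretName: web-tls        # assumed Secret holding tls.crt / tls.key
  backend:
    serviceName: web           # the existing 'web' Service from web.yaml
    servicePort: 443

GKE then provisions the GCE HTTP(S) load balancer and attaches the certificate from the Secret, while the web Service keeps routing traffic to the pods on port 7770.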