OpenShift - Call external Services - Which IP is used?

I am currently evaluating OpenShift, and there is a question regarding OpenShift's networking that I wasn't able to find an answer to.
Which IP is used when a Pod accesses an external service?
To be more specific: which IP would be listed in the access log of a webserver that is accessed by a service inside a Pod?
Is it the IP of the Pod, or that of the Node the Pod is running on?

You can check the IP by using dig or telnet on your master or on any of your nodes. You just use the service name, which should look like:
<service-name>.<project-name>.svc
It should resolve to a 10.x.x.x IP address.
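For example, with a hypothetical service named my-service in a project named my-project, you could run the following on a master or node:
dig +short my-service.my-project.svc.cluster.local
and expect a cluster-internal 10.x.x.x address back.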

Related

Is it possible to get an SSL certificate for a portforwarding VPN service?

In my example I am using the https://portmap.io VPN service, which is not exactly a pure VPN service but still uses VPN technology to get around my ISP's restrictions, allowing port forwarding to my own home server running on my Android device.
So if I open 193.161.193.99:1200, my website is served. Port 1200 is mapped to my local Python server running on port 1000. Port 1200 is assigned by the VPN provider.
However, if I try 193.161.193.99 without port 1200, the official portmap website is served instead, because that is the website's IP. So basically each user of this VPN service has their own port to work with.
Question: I don't have any public IP fully under my own control for getting an SSL certificate, which requires a verification step by the CA (e.g. a file upload after submitting a CSR). So, is it at all possible to get an SSL certificate using 193.161.193.99:1200?
Note: services like zerossl.com agree to issue certificates for public IPv4 addresses, so it is not always essential to use an FQDN to get a cert.
Yes, this is possible. You will need a domain pointing to the VPN/portmap IP; you can then obtain an SSL certificate from Let's Encrypt for that domain. This can be your own domain or one provided by a dynamic DNS service such as Duck DNS.
I'll describe how I have done it with Docker and Duck DNS in detail:
Sign in to Duck DNS, create a subdomain and point it to the VPN/portmap IP, and note the token at the top of the page.
Deploy a Docker container from LinuxServer.io's SWAG image.
Make sure to provide the required environment variables in your docker-compose.yml (or with the docker run command):
- VALIDATION=duckdns
- DUCKDNSTOKEN={your token}
- URL={yourdomain}.duckdns.org
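For reference, a minimal docker-compose.yml for the SWAG container could look roughly like this (a sketch based on the LinuxServer.io docs; adjust the placeholders to your setup):
services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000                 # user/group the container runs as
      - PGID=1000
      - TZ=Etc/UTC
      - URL={yourdomain}.duckdns.org
      - SUBDOMAINS=wildcard       # also cover subdomains of the Duck DNS domain
      - VALIDATION=duckdns        # use the Duck DNS challenge
      - DUCKDNSTOKEN={your token}
    volumes:
      - ./config:/config          # certificates end up under this path
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped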
Note: If you want everything behind your VPN, there is a great Docker container called gluetun which allows you to run the SWAG container behind your VPN.
You will find your SSL certificates in the /config/etc/letsencrypt/live/{yourdomain}.duckdns.org folder of the SWAG container. Use those for the website/service that is running behind your forwarded port.
The certificates are renewed automatically 30 days before they expire. There is also a PKCS#12 file, privkey.pfx, which is needed by services like Emby. For more information on SWAG see the LinuxServer.io docs. You may or may not need another container that updates the Duck DNS IP periodically; I'm not sure whether the SWAG container already does that.
All of this can of course be done without Docker and with your own domain. In that case you will need to map your domain or subdomain to the VPN IP in the DNS record section of your domain provider, and then use certbot to create certificates for that domain. Docker just automates the renewal part.
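With your own domain, the manual equivalent would be something like this (a sketch; the domain is a placeholder, and certbot's standalone HTTP-01 challenge requires port 80 of that domain to reach this machine; otherwise use one of certbot's DNS plugins):
certbot certonly --standalone -d yourdomain.example.com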

Traefik, MetalLB port forwarding

I'm having problems port-forwarding Traefik. I have a deployment in Rancher, where I'm using MetalLB with Traefik to get SSL certificates applied to my services. All of this is working locally, and I'm not seeing any error messages in the Traefik logs. The funny thing is that at times I am able to reach my service from outside my network, but at other times not.
I have port-forwarded 80, 433 and 8080 to 192.168.87.135.
What am I doing wrong? Are there some ports I'm missing?
[Picture of the Traefik logs]
[Picture of the exposed Traefik load balancer]
IPv4 defines private IP address ranges that are not reachable from the internet because:
"The Internet has grown beyond anyone's expectations. Sustained exponential growth continues to introduce new challenges. One challenge is a concern within the community that globally unique address space will be exhausted."
(source: RFC 1918, Address Allocation for Private Internets)
IP addresses from these private ranges are not accessible from the internet. Your IP address 192.168.87.135 is part of the private (formerly class C) address range 192.168.0.0/16, hence it is by nature not reachable from the internet.
Furthermore you state yourself that it is working correctly within your local network.
A follow up question to this is: How can I access my network if it's a private network?
To access your local network you need a gateway that has both an internal and a public IP, so that you can reach your network through the public IP. One solution could be a DNS name that's mapped to the public IP and routed internally to the internal load balancer IP 192.168.87.135 with a reverse proxy.
Unfortunately I can't tell you why it works occasionally, because that would require far more knowledge about your local network. My guess is that, for example, you are connected to your local network via VPN, or that you already have a reverse proxy that is just not online all the time.
Edit after watching your video:
Your cluster is still reachable from the internet at the end of the video. The message "Service unavailable" is in fact returned by Traefik whenever you try to access an unhealthy application. Your problem is that the demo application does not start up after you restart the VM. So what you need to do next is check why the demo app is not starting, which includes checking the logs and the events of the failing pod.
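Assuming the demo app lives in a namespace called demo (a placeholder), that boils down to commands like:
kubectl -n demo get pods
kubectl -n demo describe pod <pod-name>
kubectl -n demo logs <pod-name>
kubectl -n demo get events --sort-by=.metadata.creationTimestamp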
Another topic I'd like to touch on is Traefik and what it actually does. Calling Traefik just a reverse proxy, while not false, is not the entire truth. In a Kubernetes environment Traefik is an ingress controller: a reverse proxy configured through Kubernetes resources, namely the "Ingress" object or the "IngressRoute" object. The latter is a custom resource introduced by Traefik itself (read here for further information) because it adds advanced options for configuring Traefik.
The reason I mention this is that you actually have two ingress controllers installed in your cluster, Traefik and nginx-ingress-controller, and you only need one.

Bastionhost configuration with NaviServer on GCP?

How do I add a TLS/SSL certificate (Let's Encrypt or GCP-provided) to a VM instance in GCP that has an internal IP address and a static external address?
When I create one via a Let's Encrypt certificate install script, the resulting connections break because the VM doesn't have an externally facing IP address, only an internal one.
The traffic passes through a firewall (or load balancer) of sorts.
I'm used to bastion-host VM servers in the wild.
Details: NaviServer web server is running on a GCP Compute Engine with a FreeBSD 11.3 image.
(Shielded Linux OSes aren't letting me compile NaviServer and use it on any port.)
Everything works on ports 80 and 8000 with the internal IP address, and with a static IP address that is pointed at it externally but not attached to the VM.
I can't find any proxy/firewall settings to navigate via the GCP menus.
How do I resolve this?
Is there some special term I should use to search for docs?
Any link with instructions to follow?
Is there a way to expose a VM instance directly on an external IP address?
Any other creative way I may get SSL/TLS to work with NaviServer?
Thank you.
Links to some things I've tried:
Enable SSL on Tomcat on Google Compute Engine
How to setup Letsencrypt for Google Cloud Compute Engine load balancer? <-- this is for Kubernetes clusters
I'm currently trying to add a load balancer:
https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs
This appears to be the solution: Use a GCP HTTP/S load balancer: https://cloud.google.com/load-balancing/docs/https
and specifically:
https://cloud.google.com/load-balancing/docs/https/ext-https-lb-simple
Argh. Actually, no.
The GCP team kindly suggested this URL: https://cloud.google.com/compute/docs/instances/custom-hostname-vm#create-custom-hostname
Set the hostname to the domain name. Treat this as if there's no proxy, just a firewall.
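For reference, the custom hostname is set when the instance is created; a sketch with placeholder names:
# VM name, domain and zone are placeholders; the hostname must be a fully qualified name
gcloud compute instances create my-vm --hostname=www.example.com --zone=us-central1-a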

How can you publish a Kubernetes Service without using the type LoadBalancer (on GCP)

I would like to avoid using type: "LoadBalancer" for a certain Kubernetes Service, but still to be able to publish it on the Internet. I am using Google Cloud Platform (GCP) to run a Kubernetes cluster currently running on a single node.
I tried to use the externalIPs Service configuration, giving it in turn the IPs of:
the instance hosting the Kubernetes cluster (its external IP, which also coincides with the IP address of the Kubernetes node as reported by kubectl describe node)
the Kubernetes cluster endpoint (as reported by the Google Cloud Console in the details of the cluster)
the public/external IP of another Kubernetes Service of type LoadBalancer running on the same node.
None of the above helped me reach my application using the Kubernetes Service with an externalIPs configuration.
So, how can I publish a service on the Internet without using a LoadBalancer-type Kubernetes Service?
If you don't want to use a LoadBalancer service, other options for exposing your service publicly are:
Type NodePort
Create your service with type set to NodePort, and Kubernetes will allocate a port on all of your node VMs on which your service will be exposed (docs). E.g. if you have two nodes with public IPs 12.34.56.78 and 23.45.67.89, and Kubernetes assigns your service port 31234, then the service will be available publicly on both 12.34.56.78:31234 and 23.45.67.89:31234.
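A minimal sketch of such a service (the name, selector and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app            # must match your pods' labels
  ports:
    - port: 80             # cluster-internal service port
      targetPort: 8080     # port the container listens on
      nodePort: 31234      # optional; must be in the node port range (default 30000-32767)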
Specify externalIPs
If you have the ability to route public IPs to your nodes, you can specify externalIPs in your service to tell Kubernetes "If you see something come in destined for that IP with my service port, route it to me." (docs)
The cluster endpoint won't work for this because that is only the IP of your Kubernetes master. The public IP of another LoadBalancer service won't work because the LoadBalancer is only configured to route the port of that original service. I'd expect the node IP to work, but it may conflict if your service port is a privileged port.
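A sketch of the externalIPs variant, assuming 12.34.56.78 is really routed to one of your nodes (names and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - 12.34.56.78          # public IP that is routed to a node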
Use the /proxy/ endpoint
The Kubernetes API includes a /proxy/ endpoint that allows you to access services on the cluster endpoint IP. E.g. if your cluster endpoint is 1.2.3.4, you could reach my-service in namespace my-ns by accessing https://1.2.3.4/api/v1/proxy/namespaces/my-ns/services/my-service with your cluster credentials. This should really only be used for testing/debugging, as it takes all traffic through your Kubernetes master on the way to the service (extra hops, SPOF, etc.).
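On newer clusters the path form is /api/v1/namespaces/<ns>/services/<svc>/proxy/, and kubectl proxy handles the credentials for you; a sketch with the placeholder names from above:
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/my-ns/services/my-service/proxy/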
There's another option: set the hostNetwork flag on your pod.
For example, you can use Helm 3 to install nginx-ingress this way:
helm install --set controller.hostNetwork=true nginx-ingress nginx-stable/nginx-ingress
nginx is then available on ports 80 and 443 of the IP address of the node that runs the pod. You can use node selectors, affinity, or other tools to influence this choice.
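Outside of Helm, the flag itself is just part of the pod spec; a minimal sketch:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostnet
spec:
  hostNetwork: true          # share the node's network namespace
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80  # reachable directly on the node's IP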
There are a few idiomatic ways to expose a service externally in Kubernetes:
Service.Type=LoadBalancer, as OP pointed out.
Service.Type=NodePort, which exposes the service on each node's IP at a static port.
Service.Type=ExternalName, which maps the Service to the contents of the externalName field by returning a CNAME record. (You need CoreDNS version 1.7 or higher to use the ExternalName type.)
Ingress. This is a newer concept that exposes external HTTP and/or HTTPS routes to services within the Kubernetes cluster; you can even map one route to multiple services. Note, however, that Ingress only covers HTTP and/or HTTPS routes.
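A minimal Ingress sketch that routes one host to a backend service (host and service names are placeholders, and an ingress controller must be running in the cluster for this to do anything):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # existing Service to route to
                port:
                  number: 80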

Pod-to-Pod connection using multiple ports

I have a Google Cloud Container Engine cluster with two Pods, master and slave. Each of them runs a RabbitMQ instance, and the two are supposed to be joined into one cluster.
Ports exposed from the containers aren't reachable from other machines and can only be accessed through a Service. That's not a problem in itself: I could establish a service for each instance (one-to-one, service-to-pod) and point each Pod at the opposite service IP.
The problem is that RabbitMQ uses more than one port for communication. That means the service IP would have to expose all of these ports from the underlying Pod. But I cannot specify a list of shared ports for a Service, and if I create a new service for each port, each of them will have its own IP.
Is there any way to expose a list of ports from the same container/Pod on the same internal IP address using a Container Engine cluster? Maybe some special routing configuration?
Your question is similar to this one, and unfortunately has the same response: Kubernetes / Google Container Engine does not currently have a way to expose a range of ports for a service. There is an open issue on GitHub to address this use case.
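A range still can't be expressed, but if the set of ports is known and fixed, a single Service can enumerate them individually under one IP. A sketch using RabbitMQ's usual ports (adjust to your configuration):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
    - name: amqp             # client connections
      port: 5672
    - name: epmd             # peer discovery
      port: 4369
    - name: clustering       # inter-node communication
      port: 25672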