Traefik, MetalLB port forwarding - traefik

I'm having problems port-forwarding Traefik. I have a deployment in Rancher where I'm using MetalLB with Traefik to have SSL certs applied to my services. All of this is working locally, and I'm not seeing any error messages in the Traefik logs. The odd thing is that, at times, I am able to reach my service from outside of my network, but other times not.
I have port-forwarded 80, 433, and 8080 to 192.168.87.135.
What am I doing wrong? Are there some ports I'm missing?
Picture of Traefik logs
Picture of the exposed Traefik load balancer

IPv4 specifies private IP address ranges that are not reachable from the internet because:
The Internet has grown beyond anyone's expectations. Sustained
exponential growth continues to introduce new challenges. One
challenge is a concern within the community that globally unique
address space will be exhausted.
(source: RFC-1918 Address Allocation for Private Internets)
IP addresses from these private IP ranges are not accessible from the internet. Your IP address 192.168.87.235 is part of the class C private IP address range 192.168.0.0/16, hence it is by nature not reachable from the internet.
Furthermore, you state yourself that it is working correctly within your local network.
A follow-up question to this is: how can I access my network if it's a private network?
To access your local network you need a gateway that has both an internal and a public IP, so that you can reach your network through the public IP. One solution could be to have a DNS name that's mapped to the public IP and is internally routed to the internal load balancer IP 192.168.87.235 with a reverse proxy.
Unfortunately I can't tell you why it works only occasionally, because that would require far more knowledge about your local network. My guess is that, for example, you are connected to your local network with a VPN, or that you already have a reverse proxy that is just not online all the time.
Edit after watching your video:
Your cluster is still reachable from the internet at the end of the video. You get the message "Service unavailable", which is in fact returned by Traefik every time you try to access a non-healthy application. Your problem is that the demo application is not starting up after you restart the VM. So what you need to do next is check why the demo app is not starting. This includes checking the logs and the events of the failing pod.
Another topic I'd like to touch on is Traefik and what it actually does. First, calling Traefik only a reverse proxy, while not false, is not the entire truth. Traefik in a Kubernetes environment is an ingress controller. That means it is a reverse proxy configured by Kubernetes resources, namely by the "Ingress" object or the "IngressRoute" object. The latter is a custom resource introduced by Traefik itself (read here for further information) because it introduces advanced options to configure Traefik.
The reason I tell you this is that you actually have two ingress controllers installed in your cluster, "Traefik" and "nginx-ingress-controller", and you only need a single one.
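For illustration, a minimal IngressRoute could look roughly like the sketch below. The host name, service name/port and certificate resolver are placeholders, and the apiVersion may differ depending on your Traefik version; it is only meant to show how Traefik gets configured through Kubernetes resources rather than through its own static config:

# Hypothetical sketch - adjust host, service name/port and certResolver to your setup.
kubectl apply -f - <<'EOF'
apiVersion: traefik.containo.us/v1alpha1   # may be traefik.io/v1alpha1 on newer Traefik releases
kind: IngressRoute
metadata:
  name: demo-app
  namespace: default
spec:
  entryPoints:
    - websecure                 # the HTTPS entry point (port 443)
  routes:
    - match: Host(`demo.example.com`)   # placeholder host name
      kind: Rule
      services:
        - name: demo-app        # placeholder service behind Traefik
          port: 80
  tls:
    certResolver: letsencrypt   # placeholder ACME resolver name
EOF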

Related

Is it possible to get an SSL certificate for a port-forwarding VPN service?

In my example I am using the https://portmap.io VPN service, which is not exactly a pure VPN service but still uses VPN technology to get around my ISP's restrictions, allowing port forwarding to my own home server running on my Android device.
So if I browse to 193.161.193.99:1200, my website comes up. Port 1200 is mapped to my local Python server running on port 1000. Port 1200 is assigned by the VPN provider.
However, if I try 193.161.193.99 without the port 1200, the official portmap VPN website is served, because that's the website's IP. So basically each user of this VPN service has their own port to work with.
Question: I don't have any public IP fully under my own control to get an SSL certificate, which requires a file upload verification by the CA (CSR). So, is it at all possible to get an SSL certificate using 193.161.193.99:1200?
Note: Services like zerossl.com will issue certificates for public IPv4 addresses, so it is not always essential to use an FQDN to get a cert.
Yes, this is possible: you will need a domain pointing to the VPN/portmap IP and then obtain an SSL certificate from Let's Encrypt for that domain. This can be your own domain or one provided by a Dynamic DNS service such as Duck DNS.
I'll describe how I have done it with Docker and Duck DNS in detail:
Sign in to Duck DNS, create a subdomain and point it to the VPN/portmap IP, and note the token at the top of the page.
Deploy a Docker container from LinuxServer.io's SWAG image.
Make sure to provide the required environment variables in your docker-compose.yml (or with the docker run command; a rough sketch follows this list):
- VALIDATION=duckdns
- DUCKDNSTOKEN={your token}
- URL={yourdomain}.duckdns.org
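For example, a rough docker run equivalent could look like this (the domain, token, timezone and host path are placeholders; check the SWAG documentation for the full list of options):

# Hypothetical sketch - replace the placeholder values with your own.
docker run -d \
  --name=swag \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e URL=yourdomain.duckdns.org \
  -e VALIDATION=duckdns \
  -e DUCKDNSTOKEN=your-duckdns-token \
  -p 443:443 \
  -p 80:80 \
  -v /path/to/swag/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag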
Note: If you want everything behind your VPN, there is a great Docker container called gluetun which allows you to run the SWAG container behind your VPN.
You will find your SSL certificates in the /config/etc/letsencrypt/live/{yourdomain}.duckdns.org folder of the SWAG container. Use those for the website/service that is running behind your forwarded port.
The certificates will get renewed automatically 30 days before they expire. There is also a PKCS#12 file, privkey.pfx, which is needed for services like Emby. For more information on SWAG see the LinuxServer.io docs. You may or may not need another container running that updates the Duck DNS IP periodically; I'm not sure if the SWAG container already does that.
All of this can of course be done without Docker and with your own domain. In that case you will need to map your domain or subdomain to the VPN IP in the DNS record section of your domain provider, and then use certbot to create certificates for that domain. Docker just automates the renewal part.
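As a rough sketch of that non-Docker route (the domain is a placeholder; with a manual DNS-01 challenge certbot will ask you to create a TXT record to prove ownership, which works even when port 80 isn't reachable):

# Hypothetical sketch - replace the domain with your own.
sudo certbot certonly --manual --preferred-challenges dns \
  -d cloud.example.com
# Certificates then end up under /etc/letsencrypt/live/cloud.example.com/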

Load balancer PublicIPReferencedByMultipleIPConfigs error on restart

Following along from the Use a static IP address with the Azure Container Service (AKS) load balancer documentation, I have created a static IP and assigned it to the load balancer. This worked fine on the initial run, but now I am getting the following error and the external IP for my load balancer is stuck at <pending> (personal info omitted):
Failed to ensure load balancer for service default/[...]: network.LoadBalancersClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="PublicIPReferencedByMultipleIPConfigs" Message="Public ip address /subscriptions/[...]/providers/Microsoft.Network/publicIPAddresses/[PublicIPName] is referenced by multiple ipconfigs in resource
As far as I can tell, this isn't referenced by multiple configs - just the load balancer service that I'm trying to run. Removing the loadBalancerIP option from my YAML file allows this to work, but then I don't think the server address is static - which is not ideal for the applications trying to communicate with this container.
Is this supposed to be happening? Is there a way to configure this so that the same IP can be reused after the container restarts?
Seeing as this issue appears to still be present: for anyone else stumbling upon it, it seems that the Azure load balancer resource itself may be taking the first configured static IP address.
GitHub issue response:
the first public IP address created is used for egress traffic
Microsoft Docs:
Once a Kubernetes service of type LoadBalancer is created, agent nodes are added to an Azure Load Balancer pool. For outbound flow, Azure translates it to the first public IP address configured on the load balancer.
As far as I can tell, once you provision an IP address and configure an AKS load balancer to use it, that IP gets picked up by the provisioned load balancer resource in Azure. My best guess is that when Kubernetes attempts to provision a new load balancer with the same IP address, if the previous Azure load balancer still exists the IP config will fail as it's still in use.
The workaround was to provision an extra static IP (one specifically for the Azure load balancer resource, and one for the actual AKS load balancer service) to avoid conflicts. It's obviously not ideal, but it solves the issue...
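A rough sketch of that workaround (the resource group, IP names, service details and address are placeholders; the IPs typically need to live in the cluster's node resource group, e.g. the MC_... group):

# Hypothetical sketch - names, resource group and address are placeholders.
# One extra static IP, which the Azure load balancer may claim for egress traffic...
az network public-ip create -g MC_myRG_myAKS_westeurope -n aks-egress-ip --allocation-method Static
# ...and a second one that the Kubernetes Service will claim via loadBalancerIP.
az network public-ip create -g MC_myRG_myAKS_westeurope -n my-service-ip --allocation-method Static

# Reference the second IP in the Service manifest:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.25   # placeholder - use the address of my-service-ip
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF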

HTTPS on Amazon EC2 for OwnCloud

I have a question which I hope somebody can answer for me.
My situation: I have an Ubuntu Server running Apache2 on an Amazon EC2 instance, which is serving an OwnCloud instance.
My goal: I want to deploy HTTPS on this instance. I already configured the security group to allow HTTPS traffic from anywhere (as the server should be accessible from anywhere on the internet). We already have a domain name bar.com registered at another domain hosting company. But we want to point foo.bar.com to this owncloud installation.
My questions:
1) Which IP address do I use to configure the DNS at this domain hosting company? The public IP address and public DNS name of the EC2 instance are renewed every time the instance restarts.
2) How do I generate the SSL certificate for the HTTPS configuration of Apache2? More specifically, which common name (CN) do I need to put in the certificate, given that the public DNS name of the EC2 instance changes on every restart? I think that if I put the foo.bar.com CN in the certificate, the browser will throw a certificate error once the user gets redirected from foo.bar.com -> .compute.amazonaws.com, am I right?
In short: how do I deploy HTTPS on an EC2 instance at Amazon AWS with DNS at a third-party domain name service?
To deal with the changing public IP address you've got two options. The first and (for simple situations) best: go to the Elastic IP page, get an EIP and associate it with your instance; this association, and hence the public IP, will hang around even after a stop/start. You can even move the EIP over to a different machine if you need to. This option is very cheap (you only get charged for an EIP if it's not attached to a running server). You're then safe to point your DNS at the EIP. The alternative option is much more powerful, and that is to use ELB (load balancing), but it also involves a fair amount more work to set up.
I assume that if you're asking about CNs you don't really want a "how to" on creating an SSL cert (please correct me if I'm wrong). For the CN you just use the domain name; it doesn't matter what IP address the name resolves to, the cert is for the domain. If you have your own domain to point at your EIP you don't need to care about the machine's public hostname. A user will never see it.
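As a rough sketch of that first option (the instance ID and allocation ID are placeholders, and this assumes the AWS CLI and certbot's Apache plugin are available on the instance):

# Hypothetical sketch - IDs and domain are placeholders.
# Allocate an Elastic IP and attach it to the instance; the association survives stop/start.
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0

# Point foo.bar.com at the Elastic IP in your registrar's DNS, then request a cert for that name.
sudo certbot --apache -d foo.bar.com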

Pod to Pod connection using multiple ports

I have a Google Cloud Container Engine cluster with 2 Pods, master and slave. Each of them runs a RabbitMQ instance, and they are supposed to be joined into one cluster.
Ports exposed from the containers aren't available from other machines; they can only be accessed through a Service. That's not a problem: I could establish a service for each instance (one-to-one, service-to-pod) and point each Pod at the opposite service's IP.
The problem is that RabbitMQ uses more than one port for communication. That means the service IP should expose all of these ports from the underlying Pod. But I cannot specify a list of shared ports for a Service, and if I create a new service for each port, each of them will have its own IP.
Is there any way to expose a list of ports from the same container/Pod on the same internal IP address using a Container Engine cluster? Maybe some special routing configuration?
Your question is similar to this question, and unfortunately has the same response: Kubernetes / Google Container Engine does not currently have a way to expose a range of ports for a service. There is an open issue on GitHub to address this use case.
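That said, if the ports you need are a fixed, known list rather than a dynamic range, a single Service can declare several named ports on one cluster IP. A minimal sketch, assuming RabbitMQ's usual AMQP and clustering ports and a hypothetical pod label:

# Hypothetical sketch - the label and port numbers are assumptions about your setup.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-master
spec:
  selector:
    app: rabbitmq-master   # placeholder pod label
  ports:
    - name: amqp
      port: 5672
    - name: epmd           # Erlang port mapper, used for clustering
      port: 4369
    - name: clustering     # inter-node communication
      port: 25672
EOF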

How can I make Apache on an Amazon EC2 Linux box use the elastic IP instead of the private IP?

I've migrated a website to Amazon EC2 that hooks into a service we are using which is installed on another server (not on Amazon). Access to the API for that service is IP-restricted and is done by sending XML data using *http_build_query* & *stream_context_create* in PHP.
If I want to connect to the service from a new server, I need to ask the vendor to add the new IP first. I did that by sending the Elastic IP to them, but it doesn't work.
While trying to debug, I noticed that the output for $_SERVER['SERVER_ADDR'] is the private IP of the ec2 instance.
I assume that the server on the other side is receiving the same data, so it tries to authenticate the private IP.
I've asked the vendor to allow access from the private IP as well. It's not implemented yet, so I'm not sure if that solves the problem, but as far as I understand the way their API works, it will then try to pass data back to the IP it was contacted from, which shouldn't be possible because that server is outside the Amazon cloud.
I might be missing something really obvious here. I added a command to rc.local (running CentOS on my EC2 instance) that associates the elastic IP with the server upon startup by using ec2-associate-address, and this seemed to help get a MySQL connection to another outside server working, but no luck with the above-mentioned API.
To rule out one thing: the API is accessed through HTTPS, with ports 80 and 443 (and a MySQL port) enabled in the security groups and tested. The domain and SSL are running fine.
Any hint highly appreciated - I searched a lot already, but couldn't find anything useful so far.
It sounds like both IPs (private and elastic) are active in your VM. Check by running ifconfig -a. If that's what's happening then the IP that gets used for external traffic will depend on the remote address and your VM's routing table. It could even vary from one connection to the next.
If that's what's going on then the quickest fix would be to ifconfig down the interface that has the private address. That should leave only the elastic address for all external connections. If that resolves the problem then you can script something that downs the private IP automatically after the elastic IP has been made active, or if the elastic IP will be permanently assigned to this VM and you really don't need the private IP then you can permanently disassociate the private IP from this VM.
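A rough sketch of that check (the remote address and interface name are placeholders for whatever your setup actually uses):

# Hypothetical sketch - replace the remote address and interface name with your own.
ifconfig -a                 # list all addresses configured on the VM
ip route get 203.0.113.10   # shows which source IP the kernel picks for that remote host

# If the private address sits on its own interface and you really don't need it:
sudo ifconfig eth1 down     # eth1 is a placeholder for the interface holding the private IP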