Azure Load Balancer - cannot reach port unless it is open in Network Security Group

The Load Balancer is configured to forward TCP requests on frontend port 80 to backend port 8080.
This worked fine until I removed the "Allow-Port-8080" rule from the Network Security Group attached to the pool VMs.
In my understanding, the Load Balancer is always allowed by default due to the AllowAzureLoadBalancerInBound security rule, which I did not touch. Isn't that so?
Moreover, port 8080 on the pool VMs is reachable from hosts in the same virtual network, so there is no issue with a local firewall (which is not running on CentOS Azure hosts by default, by the way).
So, to sum up, the question is: why should I have to add an inbound security rule to let the Load Balancer forward requests to a particular port?

After considering the issue a bit more, I've realized that the AllowAzureLoadBalancerInBound security rule only applies to traffic originated by the Load Balancer itself - health probes, etc.
The general security rules apply to all traffic the Load Balancer forwards, so we have to set up the security rules accordingly.
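For reference, a minimal sketch of re-creating the removed rule with the Azure CLI; the resource group and NSG names are hypothetical placeholders:

# Re-create the inbound rule so that LB-forwarded traffic can reach port 8080.
# "my-rg" and "pool-vm-nsg" are hypothetical names - substitute your own.
az network nsg rule create --resource-group my-rg --nsg-name pool-vm-nsg \
  --name Allow-Port-8080 --priority 1000 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes Internet --destination-port-ranges 8080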

Related

GCP load balancing ("internal" traffic over HTTPS)

I have a GCP instance group with 2 instances. Both are up and running. I want to configure a load balancer (HTTPS) to manage the traffic.
I've set up a forwarding rule with the HTTPS protocol and a Google-managed certificate. This all works, but only when the traffic between the load balancer and the backend (the instances) is plain HTTP.
Steps I have taken so far
I create a template; it is just a normal N1-series machine. I check the boxes to create firewall rules allowing HTTP and HTTPS traffic.
I create a firewall rule named "allow-ports". This firewall rule targets all instances in the network, has a 0.0.0.0/0 source IP range, and allows TCP ports 80 and 443. As I see it, this firewall rule should open both the HTTP (80) and HTTPS (443) ports.
I create an instance group with the port mappings "http-port" = 80 and "https-port" = 443, using the template I just created.
When the instance group is created, I check that it is running. Over SSH, I get access to the instances and install Apache (sudo apt-get install -y apache2) on both. When navigating to their external IPs in the browser, I see them both.
I create an HTTP(S) load balancer with the option "From internet to my VMs". For the backend configuration, I add a backend service with my instance group, protocol HTTP, and named port "http-port". For the frontend configuration, I set up the HTTPS protocol, create an IPv4 address, create a Google-managed SSL certificate, and I'm done. I also added health checks, by the way.
Now... these steps work (after a few minutes). With Cloud DNS, I have set up a domain name which points to the IP address of the load balancer. When going to that domain, I see the Apache page.
What doesn't work?
When I change the backend configuration to HTTPS (and the named port to "https-port"), I get a 502 server error. So it seems to me that there is some connection, but something fails. Could this be an Apache error?
I have spent a whole day creating and recreating instance groups, firewall rules, load balancers, ... but nothing seems to work. I'm surely missing something, probably something dumb, but I have no clue what it could be.
What do I want to achieve?
I don't just want a secure (HTTPS) connection between the client and my load balancer; I also want a secure connection between the load balancer and the backend service (the instance group). Because GCP offers the option to use the HTTPS protocol when creating a backend service, I feel this should be possible.
To be honest: I've read some articles saying that the internal traffic is already secured, so an HTTPS connection is not necessary. But that doesn't matter to me, I really want to know how this works!
EDIT
I'm using the correct VPC (default). I also changed the firewall rule's source range from 0.0.0.0/0 to 130.211.0.0/22 and 35.191.0.0/16 (see: https://cloud.google.com/compute/docs/tutorials/globally-autoscaling-a-web-service-on-compute-engine?hl=nl#configure_the_load_balancer).
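For reference, a sketch of that firewall rule as a gcloud command, assuming the rule name from above and the default network:

# Allow 80/443 only from Google's load balancer / health check ranges.
gcloud compute firewall-rules create allow-ports \
  --network default --allow tcp:80,tcp:443 \
  --source-ranges 130.211.0.0/22,35.191.0.0/16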
In addition to my previous comment: I followed your steps in my test project to find the cause of your issue. I set up the same configuration and checked it with HTTP at the backend. As expected, I found no errors. After that, I installed SSL certificates on the backend and on the load balancer. Then I switched my backend, load balancer, and health checks to HTTPS and disabled HTTP on the backend. At this point, I found no errors either.
So, I decided to provoke the 502 error in my test configuration somehow. I switched the health check on the load balancer to HTTP. A few minutes later I tried to reach my test service again and got a 502 error. When I switched the health check back to HTTPS, the 502 error went away.
During this test I didn't change any firewall rules, but I allowed HTTP and HTTPS traffic in my instance template, and I used the default network.
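To illustrate the fix, a hedged gcloud sketch of keeping the health check protocol in line with the backend protocol; the health check and backend service names here are hypothetical:

# Create an HTTPS health check and attach it to the HTTPS backend service,
# so the health check protocol matches the backend protocol.
gcloud compute health-checks create https my-https-check --port 443 --request-path /
gcloud compute backend-services update my-backend-service --global \
  --protocol HTTPS --port-name https-port --health-checks my-https-check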

IIS8 https refused to connect

I have a Windows 2012 server and have applied an SSL certificate following GoDaddy's guide:
https://uk.godaddy.com/help/iis-7-install-a-certificate-4801
I have applied the binding to the site in IIS; however, when I try to view the HTTPS site I get "refused to connect".
I have updated the firewall settings to allow port 443.
Any ideas?
It was due to my site using a load balancer.
An additional load balancer for port 443 was required.
Anyone using Rackspace will find this useful:
To allow secure traffic you would need an additional load balancer
allowing traffic on port 443, with a shared VIP with the current one.
https://support.rackspace.com/how-to/configure-a-load-balancer/

Google Load Balancer not passing traffic to back end

I am trying to use a load balancer to direct traffic to a container backend. The service in the containers serves web traffic on port 80. My health checks are all passing. If I SSH into the Kubernetes host for the containers, I can curl each container and get correct responses over port 80. When I try to access them through the load balancer's external IP, however, I receive a 502 response. I have firewall rules allowing traffic from 130.211.0.0/22 on tcp:1-5000 and on the NodePort port. I've also tried adding firewall rules from 0.0.0.0/0 for ports 80 and 443 to those nodes.
On the Kubernetes host, capturing with tcpdump, I see the health check requests to my containers, but no traffic comes through when I make an external request.
I have an identical configuration that points to a single Compute Engine VM and works perfectly. This leads me to believe that the issue might be in the container setup rather than the load balancer.
Does anyone have any advice on resolving this issue?
I was able to resolve the problem by changing the named port that the Load Balancer was connecting to. By default, the Load Balancer connected to the named port "http", which pointed to port 80. It was my assumption (always a bad thing) that this matched, since my application serves on port 80. Not so. Since I'd exposed the containers through a NodePort, they were assigned another port, and this is the port I had my health check pointing to. By going into "Compute Engine -> Instance groups", selecting the group, and then "Edit Group", I was able to change the named port "http" to match my NodePort number. Once I did that, traffic started flowing.
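The same change can be scripted; a minimal sketch, assuming a hypothetical group name, zone, and NodePort value:

# Point the named port "http" at the NodePort (31000 is a placeholder -
# use the port Kubernetes actually assigned to your service).
gcloud compute instance-groups set-named-ports my-instance-group \
  --zone us-central1-a --named-ports http:31000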

AWS Beanstalk SSL Load balanced environment

I am trying to make sense of a note in the AWS documentation about configuring HTTPS for an Elastic Beanstalk application.
The note reads:
If at any point you decide to redeploy your application using a load-balanced environment, you risk opening port 443 to all incoming traffic from the Internet. In that case, delete the configuration file from your .ebextensions directory. Then create a load-balanced environment and set up SSL using the Load Balancer section of the Configuration page of the Elastic Beanstalk management console.
Here is the link to the original documentation page: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html#configuring-https-elb
Can you help me make sense of the warning?
This documentation is poorly written.
There are actually two separate possibilities:
Load-balanced environment: all traffic goes through a load balancer first, then to the instances.
Non-load-balanced environment: all traffic goes directly to your instance.
If all the traffic goes directly to the instance, you need to open your instance's HTTPS port 443 to everyone:
Resources:
  sslSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: {Ref : AWSEBSecurityGroup}
      IpProtocol: tcp
      ToPort: 443
      FromPort: 443
      CidrIp: 0.0.0.0/0
If you didn't, no one could access your site.
If all your traffic goes through a load balancer, you can restrict your instance's security group so it only talks to the load balancer and ignores the rest of the internet. This is more secure, because the load balancer is the only thing open to everyone. This matters because, imagine there were a bug in Linux that let someone take over your machine through port 443 by sending something malformed: the load balancer sees the traffic first and normalizes it, so it is harder to attack your instance directly.
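As a hedged sketch of that lockdown with the AWS CLI (both security group IDs are hypothetical placeholders):

# Allow port 443 on the instance security group only from the load
# balancer's security group, instead of from 0.0.0.0/0.
aws ec2 authorize-security-group-ingress --group-id sg-0instance0example \
  --protocol tcp --port 443 --source-group sg-0loadbalancer0example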

Google Compute Engine load balancer health checks on port 443 or 80 with 301

I have an app on 2 Google Compute Engine VMs, and I'm trying to configure a Load Balancer health check. Each server has nginx as a front-end proxy that also handles SSL duties for my app. I have nginx configured to redirect port 80 to port 443 via a 301 redirect. On port 443, nginx proxies the traffic to a high port that my app is listening on.
I'm trying to configure load balancer health checks but can never get the health check to show healthy. My app has a default root document, so requesting http://myapp.com should 301 to https://myapp.com, which should return HTTP 200. I configured a health check to test both 80 and 443, and neither ever shows healthy, although the load balancer IS sending traffic to the instances, based on my tests with curl. Given the above config, the best health check scenario is to request https://myapp.com, since that tests the server itself, nginx, and my app serving data.
How can I configure the load balancer to properly test my instances health?
Is it possible that you were affected by this outage (now solved)?
https://groups.google.com/forum/#!topic/gce-operations/VjTcsm0019E
If not, please disregard this.
The health check failing indicates that the redirect connection isn't being closed properly. You can confirm this if a tcpdump shows [R] (reset) flags, which the health checker interprets as failures. You'll need to configure nginx to close the connection cleanly when it issues the redirect.
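Alternatively, since the question notes that requesting https://myapp.com is the best test, a health check that speaks HTTPS to port 443 avoids the 301 entirely; a sketch with a hypothetical health check name:

# Check HTTPS on 443 directly so the probe never hits the port-80 redirect.
gcloud compute health-checks create https myapp-https-check \
  --port 443 --request-path /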