How to make my Google Cloud Load Balancer work? - ssl

I followed the documentation for creating content-based load balancing: https://cloud.google.com/load-balancing/docs/https/content-based-example
I want to reach the external address over HTTPS and have the load balancer connect to the VMs over plain HTTP.
Both VMs work as expected and return the proper answer when reached by IP address. The LB's settings seem fine: both health checks are passing and the Google SSL certificate is ACTIVE.
However, when I try to reach the load balancer's IP address or domain, I get a 502.
LB IP is 35.244.161.226 (domain: wciel.pl)
Load Balancer's logs show statusDetails: "failed_to_connect_to_backend"
I attached screenshots of my Google Cloud Console.
Please advise.
me@machine:$ gcloud beta compute ssl-certificates list
NAME                   TYPE     CREATION_TIMESTAMP             EXPIRE_TIME                    MANAGED_STATUS
wciel-pl-certificate2  MANAGED  2019-08-11T03:20:15.971-07:00  2019-11-09T01:27:44.000-08:00  ACTIVE
    www.wciel.pl: ACTIVE

I think there is a mismatch in the backend service configuration. From the details of web-map-backend-service it seems like your service is listening on port 80; however, you have configured the backend service with port 443.
If you don't require secure communication between the LB and the VMs, I would recommend the following:
Change the backend protocol from HTTPS to HTTP.
Edit the backend port number from 443 to 80.
Save and update the configuration.
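A minimal gcloud sketch of that change, assuming the backend service is the web-map-backend-service mentioned above and is global, and that your VMs serve the app on port 80 (the instance group name and zone below are placeholders):
# map the "http" named port to 80 on the instance group (name and zone are placeholders)
gcloud compute instance-groups set-named-ports my-instance-group --named-ports http:80 --zone us-central1-a
# point the backend service at plain HTTP on that named port
gcloud compute backend-services update web-map-backend-service --global --protocol HTTP --port-name http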

Related

Can not link a HTTP Load Balancer to a backend (502 Bad Gateway)

I have on the backend a Kubernetes node running on port 32656 (Kubernetes Service of type NodePort). If I create a firewall rule for the <node_ip>:32656 to allow traffic, I can open the backend in the browser on this address: http://<node_ip>:32656.
What I try to achieve now is creating an HTTP Load Balancer and link it to the above backend. I use the following script to create the infrastructure required:
#!/bin/bash
GROUP_NAME="gke-service-cluster-61155cae-group"
HEALTH_CHECK_NAME="test-health-check"
BACKEND_SERVICE_NAME="test-backend-service"
URL_MAP_NAME="test-url-map"
TARGET_PROXY_NAME="test-target-proxy"
GLOBAL_FORWARDING_RULE_NAME="test-global-rule"
NODE_PORT="32656"
PORT_NAME="http"
# instance group named ports
gcloud compute instance-groups set-named-ports "$GROUP_NAME" --named-ports "$PORT_NAME:$NODE_PORT"
# health check
gcloud compute http-health-checks create --format none "$HEALTH_CHECK_NAME" --check-interval "5m" --healthy-threshold "1" --timeout "5m" --unhealthy-threshold "10"
# backend service
gcloud compute backend-services create "$BACKEND_SERVICE_NAME" --http-health-check "$HEALTH_CHECK_NAME" --port-name "$PORT_NAME" --timeout "30"
gcloud compute backend-services add-backend "$BACKEND_SERVICE_NAME" --instance-group "$GROUP_NAME" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "1"
# URL map
gcloud compute url-maps create "$URL_MAP_NAME" --default-service "$BACKEND_SERVICE_NAME"
# target proxy
gcloud compute target-http-proxies create "$TARGET_PROXY_NAME" --url-map "$URL_MAP_NAME"
# global forwarding rule
gcloud compute forwarding-rules create "$GLOBAL_FORWARDING_RULE_NAME" --global --ip-protocol "TCP" --ports "80" --target-http-proxy "$TARGET_PROXY_NAME"
But I get the following response from the Load Balancer accessed through the public IP in the Frontend configuration:
Error: Server Error
The server encountered a temporary error and could not complete your
request. Please try again in 30 seconds.
The health check is left with default values: (/ and 80) and the backend service responds quickly with a status 200.
I have also created the firewall rule to accept any source and all ports (tcp) and no target specified (i.e. all targets).
Considering that I get the same result (Server Error) regardless of the port I choose in the instance group, the problem should be somewhere in the configuration of the HTTP Load Balancer (something with the health checks, maybe?).
What am I missing from completing the linking between the frontend and the backend?
I assume you actually have instances in the instance group and that the firewall rule is not restricted to a specific source range. Can you check your logs for a Google health check? (The User-Agent will have "google" in it.)
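For example (the pod name below is a placeholder), the health checker's requests should show up in the backend's access logs with a Google User-Agent:
# look for health check probes hitting the application pod
kubectl logs my-app-pod | grep -i google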
What version of Kubernetes are you running? FYI, there's a resource in 1.2 that hooks this up for you automatically: http://kubernetes.io/docs/user-guide/ingress/ - just make sure you follow these: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md.
More specifically: in 1.2 you need to create a firewall rule, a Service of type=NodePort (both of which you already seem to have), and a health check on that service at "/" (which you don't have; this requirement is relaxed in 1.3, but 1.3 is not out yet).
Also note that you can't put the same instance into two load-balanced instance groups, so to use the Ingress mentioned above you will have to clean up your existing load balancer (or at least remove the instances from the instance group and free up enough quota so the Ingress controller can do its thing).
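A minimal sketch of such an Ingress, assuming a NodePort Service named my-nodeport-service exposing port 80 (the names are placeholders, not taken from the question):
kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: my-nodeport-service
    servicePort: 80
EOF
# the GCE Ingress controller then provisions the forwarding rule, target proxy,
# URL map, backend service and health check for you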
There can be a few things wrong among those mentioned:
firewall rules need to be open to all hosts, or they need to have the same network tag as the machines in the instance group;
by default, the node should return 200 at / - configuring readiness and liveness probes to say otherwise did not work for me.
It seems you are trying to do by hand things that are all automated, so I can really recommend:
https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
This shows the steps that set up the firewall and port forwarding for you, and it may also show you what you are missing.
I noticed myself, when using an app on 8080 exposed on 80 (like one of the deployments in the example), that the load balancer stayed unhealthy until I had / returning 200 (and I added /healthz too). So basically that container now exposes a web server on port 8080 that returns 200, and the other config wires that up to port 80.
When it comes to firewall rules, make sure they apply to all machines or that the network tags match, or they won't work.
The 502 error usually comes from the load balancer, which will not pass your request on if the health check does not pass.
Could you make your Service type LoadBalancer (http://kubernetes.io/docs/user-guide/services/#type-loadbalancer), which would set this all up automatically? This assumes you have the cloud provider flag set for Google Cloud.
After you deploy, describe the service and it should give you the endpoint that was assigned.
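A sketch of that Service, assuming your pods carry the label app: my-app and listen on 8080 (both values are placeholders):
kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF
# once the external IP has been provisioned it shows up under "LoadBalancer Ingress"
kubectl describe service my-app-lb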

Configuration Errors: Elastic Load Balancer + EC2 + Route 53

I am trying to configure my website to have a secure connection (https://) via Amazon's EC2, ELB, and Route 53.
I am running a t2.micro instance (no Elastic IP or anything). My Elastic Load Balancer has the SSL certificate attached. My SecurityGroup allows for https connections through port 443. I'm not sure what I'm doing wrong here.
All of my configurations are below. Any help is appreciated because, as it stands, I can't access my website at all.
Thank you in advance!
(Screenshots of the EC2, Load Balancer, and Route 53 configurations were attached.)
Step 1: Hit the EC2 instance directly and verify that the health check URL responds with an HTTP 200 status code. If not, then get that working first.
You aren't clear about your security group configuration. You should have a security group on your load balancer that allows HTTP and HTTPS connections. Then you should have a security group on your EC2 instance that allows HTTP (port 80) connections from the load balancer's group.
The issue is obviously the failing health check on the load balancer at this point, so no need to look at Route 53 settings right now. You need to concentrate on getting the communication working between the EC2 instance and the load balancer to get that health check to start working. Until then the load balancer won't accept any traffic because it doesn't have instances it considers healthy that it can forward traffic to.
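As a rough AWS CLI sketch of that security group layout (the group IDs below are placeholders):
# load balancer security group: allow HTTP and HTTPS from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-11111111 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-11111111 --protocol tcp --port 443 --cidr 0.0.0.0/0
# instance security group: allow HTTP only from the load balancer's security group
aws ec2 authorize-security-group-ingress --group-id sg-22222222 --protocol tcp --port 80 --source-group sg-11111111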

Anyone managed to use Google Cloud load balancing from https to http?

I have set up an instance reachable on http.
I have set up an instance group containing that instance.
I have set up load balancing using a self-signed SSL cert.
The external IP of the LB and the instance can be reached.
The forwarding of the request from the LB runs into a timeout.
The config for Loadbalancing says "you have 0 instances without errors, you have 1 instance with errors."
I don't see any log entries in the Apache logs coming from the LB frontend.
There is no http connection from Google addresses showing up.
Any ideas where to look, or hints to a good guide (other than the rather good Google docs)?
Yes, you can use a Compute Engine HTTPS load balancer with HTTP backend services. Select HTTP as the backend service protocol. For the health check, use an HTTP health check. Add GCE firewall rules to open tcp:80 for the 130.211.0.0/22 source range and tcp:443 for 0.0.0.0/0.
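For example, the firewall side of that could be created roughly like this (the rule names and the default network are assumptions):
# let the HTTPS LB / health checkers reach the backends over plain HTTP
gcloud compute firewall-rules create allow-lb-health-checks --network default --allow tcp:80 --source-ranges 130.211.0.0/22
# allow HTTPS traffic on 443 from anywhere, as suggested above
gcloud compute firewall-rules create allow-https --network default --allow tcp:443 --source-ranges 0.0.0.0/0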

Can't get https working on Elastic Load Balancer (AWS)

I have a load balancer in front of an EC2-Classic instance. I have checked that the load balancer is working properly by directly linking to the DNS Name value that is listed in the Description tab for my load balancer. This gives me the main page of the website hosted on the EC2 instance, so my load balancer is working. My load balancer and my EC2 instance are in the same availability zone.
My load balancer has an SSL certificate set up, and I have two listeners configured to forward HTTP (port 80) and HTTPS (port 443) to instance port 80 as HTTP. My EC2 instance has a security group set to accept HTTP and HTTPS with protocol TCP on ports 80 and 443 respectively, although my understanding is that only port 80 would be useful, right? The certificate data are in PEM format. I have added to my instance security group a custom TCP rule on port range 0-65535 for amazon-elb/amazon-elb-sg. This did nothing.
I can access my site using http just fine. If I try to access using https then I get Error code: ERR_CONNECTION_REFUSED on Chrome and Unable to Connect on Firefox.
I have checked similar posts for this question and nothing seems to help.
Any help or ideas would be greatly appreciated. Thanks
Have you made sure that the ELB is in a security group that allows https on port 443?
I had a similar problem with both the classic and the advanced load balancer. The thing that was missing for me is that the HTTPS-to-HTTP translation only works AFTER you make an A record in DNS for the domain your SSL certificate is on, ALIASED to the load balancer you just created. Once I did that, all was well through that new A record. Your instance doesn't need to accept port 443, and your LB definitely should not be forwarding over 443.
Hopefully it is something straightforward like this for you as well.
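For reference, a classic ELB listener that terminates HTTPS at the load balancer and forwards plain HTTP to the instance on port 80 can be added roughly like this (the load balancer name and certificate ARN are placeholders):
aws elb create-load-balancer-listeners --load-balancer-name my-load-balancer --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert"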
Wait, what SSL certificate in PEM format? I used an Amazon SSL certificate I just got from the dropdown. Are you sure you used an SSL certificate?
In your description I see that maybe you are not following Step 6 from Amazon's "Elastic Load Balancing in Amazon EC2-Classic -> Create HTTPS/SSL Load Balancer Using the AWS Management Console -> Configure Listeners" guide.
There, it says that you should configure "HTTPS (...) in the Load Balancer Protocol [and] HTTPS (Secure HTTP) (...) in the Instance Protocol box", whereas in your configuration you are forwarding the ELB's 443 to port 80 on the instance.
For further reference, this is the guide that I'm talking about (the link is now dead): http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/configure-https-listener.html
Also, check if your SSL certificate is well built according to the rules specified here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ssl-server-cert.html

503 Service Unavailable for Elastic Beanstalk HTTPS Configuration

I have a PHP app deployed on Elastic Beanstalk, currently with a single instance behind a load balancer and am attempting to enable SSL. The current configuration is as follows:
- I've uploaded my certs to IAM successfully
- On the EB Console Load Balancer config, "Listener Port" is off, "Secure Listener Port" is "443", and "Protocol" is set to "HTTPS"
- In my load balancer, accessed through the EC2 console, Load Balancer Port/Protocol is 443/HTTPS and Instance Port/Protocol is 80/HTTP (the default HTTP/80 -> HTTP/80 listener is still there, but I've tried removing it to no joy)
- My security groups for both the load balancer and the instance are configured the same: inbound allows all connections from either security group, plus inbound HTTP on 80 and HTTPS on 443 (source = 0.0.0.0/0)
When attempting to access the URL https://myurl.com, I get 503 Service Unavailable (server at capacity). I suspect there is an issue with my security group configuration, but I can't figure out what it is (I have tried referring to this thread).
Any ideas?
I just experienced this on my Elastic Beanstalk deployment, and the reason was that my Elastic Load Balancer had 0 healthy instances in service. There are different health check settings, one that checks over HTTP:80 and one that checks over TCP:80. I haven't investigated thoroughly, but for some reason the HTTP:80 setting resulted in my servers being marked as unhealthy, while TCP:80 tested correctly. If this comes up again, I would suggest looking in there.
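If you want to force the TCP-based check explicitly, the classic ELB health check can be switched with something like this (the load balancer name and the thresholds are placeholders):
aws elb configure-health-check --load-balancer-name awseb-my-env-elb --health-check Target=TCP:80,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=10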