How to use one GCP load balancer for two subdomains? - ssl

I want to create a LAMP site that also has a separate bucket on a subdomain. Basically, mysite.com and downloads.mysite.com. And, I want to put them both on the same load balancer and SSL certificate.
I know how to create the http(s) load balancer for the main site, using an instance group for the backend service and adding an SSL cert, but I can't seem to figure out how to add the downloads subdomain to that load balancer and cert.
I thought to create an additional bucket backend service for downloads. I'm not sure how to set the Host and Path Rules. I've tried:
Host                     Path                     Backend
-----------------------  -----------------------  --------------
All unmatched (default)  All unmatched (default)  main-backend
downloads.mysite.com     /*                       bucket-backend
And for the certificate, I tried using mysite.com & downloads.mysite.com, as well as www.mysite.com & downloads.mysite.com, but I always get the error FAILED_NOT_VISIBLE.
And then there's the DNS settings. In the case of just the main LAMP site, I would add an A record with the load balancer's IP address. Not sure if I need to add another A record for the downloads subdomain or not.
Thanks for your help.

According to your comment, you already have an A record pointing to your load balancer IP address for the downloads domain. That is enough in most cases, but you can read here about other reasons why you are getting a FAILED_NOT_VISIBLE error. Check that your downloads domain is visible on the internet: can you ping it successfully? It should respond with your LB IP address. Fix this first before you try again to create the additional certificate. Consider that there is a project quota for certificates; verify you are not reaching it.
You can create a global certificate for several domains using a gcloud command like this example:
gcloud compute ssl-certificates create my-cert \
    --domains=one.com,two.com,www.three.com \
    --global
You can use URL maps and combine them if needed with path matchers in order to direct traffic to your different backends. You can read here about these concepts, and here you will find how to use them.

Related

GCP load balancing ("internal" traffic over HTTPS)

I have a GCP instance group with 2 instances. Both are up and running. I want to configure a load balancer (HTTPS) to manage the traffic.
I've set up a forwarding rule with the HTTP-protocol and a certificate managed by google. This all works, but only when the traffic between the load balancer and the backend (the instances) is plain HTTP.
Steps I did so far
I create a template and this template is just a normal N1 series machine. I checked the boxes to create firewall rules for allowing http and https traffic.
I create a firewall rule named "allow-ports". This firewall rule targets all instances in the network, has a 0.0.0.0/0 IP range and allows tcp ports 80 and 443. As I see it, this firewall rule should open both the HTTP (80) and HTTPS (443) ports.
I create an instance group with port mapping. "http-port" = 80, "https-port" = 443. I use the template I just created.
When the instance group is created, I check that it is running. With SSH, I get access to the instances and install Apache (sudo apt-get install -y apache2) on both. When navigating to their external IPs in the browser, I see them both.
I create a HTTP(S) load balancer, with the option "From internet to my VMs". For backend configuration, I add a backend service with my instance group, protocol HTTP, named port "http-port". For frontend configuration, I set up the HTTPS protocol, create an IPv4 IP address, create a google-managed ssl certificate, and I'm done. I also added health checks btw.
Now... these steps work (after a few minutes). With Cloud DNS, I have set up a domain name which points to the IP address of the load balancer. When going to , I see the Apache page.
What doesn't work?
When I change the backend configuration to HTTPS (and named port "https-port"), I get a 502 server error. So it seems to me that there is some connection, but there is an error. Could this be an apache error?
I have spent a whole day, creating and recreating instance groups, firewall rules, load balancers, ... but nothing seems to work. I'm surely missing something, probably something dumb, but I have no clue what it could be.
What do I want to achieve?
I do not only want a secure (HTTPS) connection between the client and my load balancer, I also want a secure connection between the load balancer and the backend service (the instance group). Because GCP offers the option to use the HTTPS protocol when creating a backend service, I feel that this could be done.
To be honest: I'm reading some articles about the fact that the internal traffic is secured, so a HTTPS connection is not necessary. But that doesn't matter to me, I really want to know how this works!
EDIT
I'm using the correct VPC (default). I also edited the firewall rule from 0.0.0.0/0 to 130.211.0.0/22 and 35.191.0.0/16 (see: https://cloud.google.com/compute/docs/tutorials/globally-autoscaling-a-web-service-on-compute-engine?hl=nl#configure_the_load_balancer).
In addition to my previous comment: I followed your steps in my test project to find out the cause of your issue. I installed the same configuration and checked it with HTTP at the back-end. As expected, I found no errors. After that, I installed SSL certificates on the back-end and on the load balancer. Then I switched my back-end, load balancer and health checks to HTTPS and disabled HTTP at the back-end. At this point, I found no errors either.
So, I decided to provoke a 502 error in my test configuration somehow. I switched my health check at the load balancer to HTTP. A few minutes later I tried to reach my test service again and got a 502 error. When I switched my health check back to HTTPS, the 502 error went away.
During this test, I didn't change firewall rules, but I allowed HTTP and HTTPS traffic in my instance template and I used the default network.
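The answer above pins the 502 on a health check left on HTTP while the backend service was switched to HTTPS. The following is an added example, not code from the question or the answer: a consistency lint you can run over the three objects (backend service, instance group named ports, health check, plus the firewall rules) before flipping a backend to HTTPS. The config dicts are constructed sample data shaped like the thread's setup, not exported from a real project; the health-check source ranges are the ones quoted in the question's EDIT.

```python
"""Added example: lint a GCP HTTP(S) load balancer backend for the mismatches
that surface as 502s. Sample configuration below is constructed, not real."""
import ipaddress

# Source ranges Google's health checkers and proxies connect from (cited in the question).
HEALTH_CHECK_RANGES = ["130.211.0.0/22", "35.191.0.0/16"]


def _covers(source_ranges, cidr):
    """True if any of the rule's source ranges contains the whole cidr (0.0.0.0/0 does)."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(ipaddress.ip_network(src)) for src in source_ranges)


def lint_backend(backend, instance_group, health_check, firewall_rules):
    """Return a list of problems; an empty list means the objects agree with each other."""
    problems = []
    port = instance_group["named_ports"].get(backend["named_port"])
    if port is None:
        problems.append(
            f"named port {backend['named_port']!r} is not defined on the instance group"
        )
    if backend["protocol"] != health_check["protocol"]:
        problems.append(
            f"backend service speaks {backend['protocol']} but the health check is "
            f"{health_check['protocol']}: backends never become healthy, so the LB returns 502"
        )
    hc_port = health_check.get("port", port)
    if port is not None and hc_port != port:
        problems.append(f"health check probes port {hc_port} but the named port maps to {port}")
    # The health checkers must be allowed through the firewall to the backend port.
    if port is not None and not any(
        port in rule["tcp_ports"]
        and all(_covers(rule["source_ranges"], r) for r in HEALTH_CHECK_RANGES)
        for rule in firewall_rules
    ):
        problems.append(f"no firewall rule lets {HEALTH_CHECK_RANGES} reach tcp:{port}")
    return problems


# Constructed sample configuration mirroring the question.
INSTANCE_GROUP = {"named_ports": {"http-port": 80, "https-port": 443}}
FIREWALL = [{"name": "allow-ports", "source_ranges": HEALTH_CHECK_RANGES, "tcp_ports": [80, 443]}]
BROKEN = {
    "backend": {"protocol": "HTTPS", "named_port": "https-port"},
    "health_check": {"protocol": "HTTP", "port": 80},  # left on HTTP: the 502 case
}
FIXED = {
    "backend": {"protocol": "HTTPS", "named_port": "https-port"},
    "health_check": {"protocol": "HTTPS", "port": 443},
}


def report(config):
    return lint_backend(config["backend"], INSTANCE_GROUP, config["health_check"], FIREWALL)


if __name__ == "__main__":
    for label, cfg in (("broken", BROKEN), ("fixed", FIXED)):
        print(label, report(cfg) or ["ok"])
```

One caveat worth knowing when you do get the HTTPS backend working: the external HTTP(S) load balancer encrypts the hop to the backend but does not validate the backend's certificate, so a self-signed certificate on Apache is accepted; the protection is confidentiality on the wire, not backend authentication.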

How to set DNS records for my domain to reference the IP address of my load balancer also getting FAILED_NOT_VISIBLE in LB Google Cloud Console

I am working with load balancing to have HTTPS for my static website, and I have my domain in GoDaddy.
I created a load balancer with:
Backend configuration: my Cloud Storage bucket, with CDN enabled.
Frontend configuration: HTTPS with a static IP. I have enabled a Google-managed SSL certificate with my domain example.com, which is in GoDaddy.
Do I need to do any configuration in GoDaddy, like pointing the domain? After 10-20 min I get FAILED_NOT_VISIBLE in the domain status.
I am new and don't know how to link them.
In the Google docs I can see "DNS records for your domain must reference the IP address of your load balancer's target proxy". Can someone help me to understand?
https://cloud.google.com/load-balancing/docs/ssl-certificates?hl=en_US&_ga=2.190405227.-1195839345.1570257391#certificate-resource-status
Finally I fixed it. We need to point DNS at the static IP; in my case I have it in GoDaddy. It took some time for DNS to propagate, and then it took time for my Google-managed SSL certificate to turn green.
Once it was done I had an issue with ERR_SSL_VERSION_OR_CIPHER_MISMATCH; for this we need to add a policy to tell the LB to use TLS 1.2, but in my case it resolved automatically in 10 min.
We can point DNS in two ways: one by directly adding the static IP to an A record in GoDaddy, the other by creating a Cloud DNS zone in GCP and pointing the nameservers in GoDaddy to it.
We must establish the link from our DNS to the static IP of the LB so that the SSL certificate turns green after the domain status is confirmed.
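Both the first thread on this page (mysite.com and downloads.mysite.com) and the GoDaddy question above stalled on the same precondition: Google issues a managed certificate only once every domain listed on it resolves, from the public internet, to the load balancer's IP. The check below is an added example, not code from either thread: it resolves each certificate domain and reports which ones will keep the certificate in FAILED_NOT_VISIBLE. resolve() defaults to the machine's own resolver, so run it from outside your VPC to see what Google sees; the SAMPLE table is a constructed lookup used for the offline demonstration, not live DNS data.

```python
"""Added example: pre-flight the DNS condition behind FAILED_NOT_VISIBLE for
every domain on a Google-managed certificate. SAMPLE is constructed data."""
import socket


def system_resolve(hostname):
    """IPv4 addresses the local resolver returns for hostname (empty list if none)."""
    try:
        infos = socket.getaddrinfo(hostname, 443, socket.AF_INET, socket.SOCK_STREAM)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})


def check_cert_domains(lb_ip, domains, resolve=system_resolve):
    """Map each certificate domain to 'ok' or the reason it will stay FAILED_NOT_VISIBLE."""
    report = {}
    for name in domains:
        addrs = resolve(name)
        if not addrs:
            report[name] = "no A record yet (or not propagated)"
        elif addrs == [lb_ip]:
            report[name] = "ok"
        elif lb_ip in addrs:
            # Round-robin between the LB and another host makes validation intermittent.
            report[name] = "also resolves to " + ", ".join(a for a in addrs if a != lb_ip)
        else:
            report[name] = "points at " + ", ".join(addrs) + ", not the load balancer"
    return report


def ready_for_managed_cert(lb_ip, domains, resolve=system_resolve):
    """True only when every domain resolves solely to the load balancer IP."""
    return all(v == "ok" for v in check_cert_domains(lb_ip, domains, resolve).values())


# Constructed sample: apex done, www still at the old host, downloads has no record yet.
LB_IP = "203.0.113.10"
SAMPLE = {
    "mysite.com": ["203.0.113.10"],
    "www.mysite.com": ["198.51.100.7"],
    "downloads.mysite.com": [],
}


def sample_resolve(hostname):
    return SAMPLE.get(hostname, [])


if __name__ == "__main__":
    import sys
    # usage: python check.py LB_IP domain [domain ...]; with no arguments it runs on SAMPLE
    if len(sys.argv) > 2:
        result = check_cert_domains(sys.argv[1], sys.argv[2:])
    else:
        result = check_cert_domains(LB_IP, SAMPLE, sample_resolve)
    for name, status in result.items():
        print(f"{name:30} {status}")
```

DNS is the usual culprit but not the only one: the certificate must already be attached to a target HTTPS proxy that a forwarding rule exposes on port 443 of that IP, and if the domain publishes CAA records they must permit pki.goog, otherwise the status stays failed even though every name resolves correctly.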

How do I route SSL traffic through an AWS "Application Load Balancer" to an EC2 instance

I am very new to load balancers. I have just set one up that listens on SSL. I also created an EC2 instance and added it to the target group of the "Application Load Balancer". The target group is also connected by SSL.
I have installed apache on the EC2 instance and placed an index.html file in the /var/www/html directory.
I would have thought typing the load balancer associated domain address (www.example.com) would route me to the index.html file of the EC2 instance (which is the only target). However I am getting a Bad Gateway 502 error.
Initially I only had SSH inbound rule on the EC2. I opened up 443 for HTTPS but that didn't make a difference.
Do I need to install a certificate for the SSL on the EC2 as well as the load balancer? And do I need to open any additional ports?
Very new to this all and not sure how the load balancer communicates with the EC2 instance. Hoping that it would be internal so that the EC2 instance was not at all exposed in isolation.
So many things can go wrong here, but (assuming that you have correctly configured the load balancer) I think what you have should work if you add an HTTP listener to your load balancer, change your target group's protocol to HTTP (because the load balancer talks to the EC2 over HTTP), and then add something like this to your .htaccess:
# Requires mod_rewrite and AllowOverride All for this directory.
RewriteEngine On
# The ALB sets X-Forwarded-Proto to the scheme the client used; send
# anything that arrived over plain HTTP to the same URL on HTTPS.
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule . https://%{HTTP:Host}%{REQUEST_URI} [L,R=permanent]
You can read more here.
Install the SSL certificate on the load balancer instead of the EC2. The EC2 does not need its own SSL certificate.
Here are the steps to add HTTPS to an application load balancer:
When you try to set up HTTPS inbound to the load balancer, it will give you a section called "ACM"; click into that to get an SSL certificate.
The ACM page will give you a section to create a new SSL certificate. You will need to input the domain name and some details; afterwards it will give you a CNAME record. You need to go to your domain's DNS settings to add that new CNAME record.
Once you create an SSL certificate with ACM you'll be able to use that on the load balancer: go back to the HTTPS listener and use that new SSL certificate.
Then make sure your load balancer security group allows inbound on port 443.
After that, HTTPS should work on the load balancer.
Note:
I would only set up HTTPS after I get HTTP working first on the load balancer and it is directing to the right EC2.
Since the target group for HTTP and HTTPS is the same, you want to make sure the target group is working before messing around with HTTPS.
That way you won't have two problems to deal with at the same time (HTTPS + incorrectly configured target groups/HTTP). It'll allow you to tackle each item step by step.
Hope that helps!
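The first answer's .htaccess rule does the HTTP-to-HTTPS redirect on the instance once the ALB forwards over plain HTTP. Below is an added example, not code from either answer, that expresses the same redirect as a WSGI application so the behaviour can be exercised without Apache, and that makes explicit two things the rewrite rule leaves implicit: X-Forwarded-Proto is trusted only when the request really came from the load balancer (modelled here by the assumed ALB_SUBNETS; in practice the instance security group should admit only the ALB's security group on port 80, so nothing else can reach it), and the Host used in the redirect is checked against an assumed ALLOWED_HOSTS list so a forged Host header cannot turn the redirect into an open redirect. All the names and addresses are assumptions for the example.

```python
"""Added example: HTTP->HTTPS redirect behind an ALB as a WSGI app, trusting
X-Forwarded-Proto only from the ALB and refusing to redirect to foreign hosts.
ALB_SUBNETS, ALLOWED_HOSTS and the request() environs are constructed."""
import ipaddress

ALB_SUBNETS = [ipaddress.ip_network("10.0.0.0/16")]   # assumed VPC CIDR the ALB nodes live in
ALLOWED_HOSTS = {"www.example.com", "example.com"}     # names the site actually serves
CANONICAL_HOST = "www.example.com"


def from_alb(remote_addr):
    """Is the TCP peer one of the load balancer nodes?"""
    try:
        addr = ipaddress.ip_address(remote_addr)
    except ValueError:
        return False
    return any(addr in net for net in ALB_SUBNETS)


def app(environ, start_response):
    host = environ.get("HTTP_HOST", "").split(":")[0].lower()
    if host not in ALLOWED_HOSTS:
        host = CANONICAL_HOST                          # never echo an attacker-chosen Host
    path = environ.get("PATH_INFO") or "/"
    if environ.get("QUERY_STRING"):
        path += "?" + environ["QUERY_STRING"]
    # Only believe X-Forwarded-Proto when the peer is the load balancer.
    proto = ""
    if from_alb(environ.get("REMOTE_ADDR", "")):
        proto = environ.get("HTTP_X_FORWARDED_PROTO", "").lower()
    if proto != "https":
        start_response("301 Moved Permanently",
                       [("Location", f"https://{host}{path}"), ("Content-Length", "0")])
        return [b""]
    body = b"<h1>index.html served behind the ALB</h1>"
    start_response("200 OK", [("Content-Type", "text/html"),
                              ("Strict-Transport-Security", "max-age=31536000"),
                              ("Content-Length", str(len(body)))])
    return [body]


def call(environ):
    """Drive app() the way a WSGI server would; returns (status, headers, body)."""
    captured = {}

    def start_response(status, headers):
        captured["status"], captured["headers"] = status, dict(headers)

    body = b"".join(app(environ, start_response))
    return captured["status"], captured["headers"], body


def request(path="/", proto="http", remote="10.0.12.34", host="www.example.com", query=""):
    """Constructed WSGI environ for a request the ALB (or someone else) delivered."""
    return {"REQUEST_METHOD": "GET", "PATH_INFO": path, "QUERY_STRING": query,
            "HTTP_HOST": host, "REMOTE_ADDR": remote, "HTTP_X_FORWARDED_PROTO": proto}


if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    # The target group would point at this port over HTTP.
    make_server("0.0.0.0", 8080, app).serve_forever()
```

If ALB_SUBNETS does not actually contain the load balancer's node addresses, every request is treated as untrusted and the site loops on the redirect, which is the same symptom you get from a rewrite rule that never sees X-Forwarded-Proto; check the subnets the ALB was created in before blaming the rule.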

SSL how-to with dynamic DNS

I've got a home server that sits behind a dynamic IP address. I use a free dynamic DNS service (http://freedns.afraid.org/) so that I can access my server via the following (fake) hostname: foo.example.com
I use Bluehost to host a separate domain, mycompany.com, and used their DNS settings to set up a CNAME to route traffic for mycompany.com to foo.example.com
What I want to do now is throw an SSL cert into the mix. The problem is I don't know how to go about getting the cert. Some companies (GoDaddy) want me to associate a domain with the cert. I don't know if that's mycompany.com or foo.example.com. Even if I pick one, it seems like the browser would complain about a mismatch.
Any insight would be great.
If the website gets accessed as https://foo.example.com you need a certificate for foo.example.com. If it gets accessed as https://mycompany.com you need a certificate for mycompany.com. If it needs to be accessible under both names (for example, if one redirects to the other) you need a certificate containing both names.
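The answer's rule can be checked mechanically. The following is an added example, not the answerer's code: the hostname check a browser performs against a certificate's subjectAltName entries, following RFC 6125 as browsers apply it (case-insensitive comparison, the CN is ignored when evaluating names, a wildcard must be the whole leftmost label and matches exactly one label). It also shows why the CNAME in the question does not help: a visitor who types https://mycompany.com verifies the name mycompany.com, whatever that name aliases to. The certificate name lists are constructed for the thread's two hostnames.

```python
"""Added example: RFC 6125-style certificate name matching for the thread's
two hostnames. CERT_ONE_NAME / CERT_BOTH are constructed SAN lists."""


def name_matches(pattern, hostname):
    """Does one subjectAltName DNS entry (possibly a wildcard) cover hostname?"""
    pattern = pattern.lower().rstrip(".")
    hostname = hostname.lower().rstrip(".")
    if "*" not in pattern:
        return pattern == hostname
    p_labels, h_labels = pattern.split("."), hostname.split(".")
    # Only "*.example.com" style: whole leftmost label, at least two labels after it.
    if p_labels[0] != "*" or any("*" in lbl for lbl in p_labels[1:]) or len(p_labels) < 3:
        return False
    return len(h_labels) == len(p_labels) and h_labels[1:] == p_labels[1:]


def cert_covers(san_names, hostname):
    """True if any SAN entry of the certificate matches the hostname the client asked for."""
    return any(name_matches(n, hostname) for n in san_names)


def uncovered(hostnames, san_names):
    """Hostnames that would produce a browser name-mismatch error with this certificate."""
    return [h for h in hostnames if not cert_covers(san_names, h)]


# Constructed candidates: a single-name certificate vs. one listing both names.
HOSTNAMES = ["mycompany.com", "foo.example.com"]
CERT_ONE_NAME = ["mycompany.com"]
CERT_BOTH = ["mycompany.com", "www.mycompany.com", "foo.example.com"]

if __name__ == "__main__":
    for label, names in (("one-name cert", CERT_ONE_NAME), ("both-names cert", CERT_BOTH)):
        print(label, "fails for:", uncovered(HOSTNAMES, names) or "nothing")
```

To see what a live server actually presents, run openssl s_client -connect host:443 -servername host and pipe the certificate through openssl x509 -noout -text; the "X509v3 Subject Alternative Name" block is the list this function should be fed.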

How to point a domain to serve static site from Amazon S3? (not sub-domain)

I see several people describing how to do this for a custom domain with sub-domain but no one talking about how to do it without one.
Example: Setting foobar.com and www.foobar.com to point to my Amazon S3–hosted site
I personally do not want the www prefix. Is there no way to make this happen? It seems crazy that Amazon would set it up to allow static sites and custom domains, then lock it down to prefixed domains.
Thanks in advance,
For historical reasons any URL needs to resolve to a subdomain, which you already know how to handle: Create a CNAME record with your DNS provider, pointing www to your S3-hosted subdomain. There are details to get right, described nicely elsewhere.
You nevertheless want to support users who, charmed that their browsers will autocomplete http:// and .com and such, want to type a naked domain domain.com, and have it automatically complete to your default subdomain such as www.domain.com.
The easiest way to accomplish this is to use www as your default subdomain, and point your DNS provider's A record at wwwizer.com (174.129.25.170). They automatically redirect any naked domain to the same domain with www in front.
You get fastest turnaround on development, and your visitors get fastest DNS resolution, if you use Amazon Route 53 to provide your DNS services. Route 53 can point its A records to wwwizer.com. However, you may want to create a micro Amazon EC2 instance, and start programming it. In the '50s everyone rebuilt their own cars. In the '80s everyone pushed a shopping cart down the aisle at Fry's, and built their own computer. Now, you want to be able to build your own computer in the cloud, for many reasons you will discover with time, and Amazon EC2 is best choice. For now, your cloud computer will simply handle naked domains for you. Later, email, generating the static site, ...
Install the Apache web server (the A in LAMP; a LAMP server will do the trick), and configure a virtual host for each of your domains. Then point an elastic IP address at your EC2 instance, and update Route 53 to have your A record point to this elastic IP address. Amazon doesn't support having multiple elastic IPs pointing to the same EC2 instance, but you can provide the same elastic IP to multiple domain A records, and have Apache resolve this within your EC2 instance.
This takes some fiddling and experimenting, as there's lots of conflicting advice on the details. I used the ami-ad36fbc4 instance image (US East, 64 bit EBS-backed Ubuntu 10.04 LTS), as I'm familiar with Ubuntu, there's plenty of online help with Ubuntu, and this image will be supported for years. I edited /etc/apache2/httpd.conf to have the contents
# Apache 2.2 syntax (Ubuntu 10.04). On Apache 2.4 the NameVirtualHost directive is
# obsolete (it only emits a warning) and <VirtualHost *:80> is the usual form.
NameVirtualHost *
# One vhost per naked domain, each sending visitors to the www host that the S3 bucket serves.
<VirtualHost *>
    ServerName first.net
    Redirect permanent / http://www.first.net/
</VirtualHost>
<VirtualHost *>
    ServerName second.net
    Redirect permanent / http://www.second.net/
</VirtualHost>
then checked for errors using
sudo /usr/sbin/apache2ctl configtest
then restarted the Apache server using
sudo /etc/init.d/apache2 restart
Apache is standard across Linux flavors, but the details such as file locations may vary, e.g. /etc/apache2/httpd.conf could be /etc/httpd.conf. For example, it might be necessary to put a Listen 80 in httpd.conf, but Apache throws an error if that directive was already somewhere else. So read web instructions with a grain of salt, and be prepared to Google any error messages.
As I'd already been using Amazon Route 53 for days to point to wwwizer.com, this worked immediately once I updated Route 53 to point to my elastic IP. Before switching to Route 53, each change took days for me to verify, as the information propagated across the web. Once everyone knows to look to Amazon, Amazon can propagate its internal changes much more quickly.
Unfortunately you cannot point foobar.com to an Amazon S3 bucket, and the reason for this has to do with how DNS works.
DNS does not allow the root of a domain (called the zone apex) to point to another DNS name (you cannot have foobar.com set up as a CNAME; only subdomain.foobar.com can be a CNAME).
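The rule the answer above relies on comes from RFC 1034: a CNAME cannot share an owner name with any other record, and the apex of a zone always carries its SOA and NS records, so foobar.com can never be a CNAME while www.foobar.com can. The lint below is an added example, not the answerer's code: it applies that rule to a zone's records and flags the apex CNAME the question is asking for. The zone is constructed sample data and the S3 endpoint names are illustrative. Route 53's alias record, which is what the following answer's apex A record amounts to, sidesteps the rule by answering with the target's A records instead of a CNAME.

```python
"""Added example: check a DNS zone for CNAME owner-name conflicts, including
the zone-apex case. ZONE, COMMON, BROKEN and FIXED are constructed data."""
from collections import defaultdict


def zone_problems(zone_name, records):
    """records: (owner, type, value) tuples. Returns the RFC 1034 violations as strings."""
    apex = zone_name.rstrip(".").lower()
    types_by_name = defaultdict(list)
    for owner, rtype, _value in records:
        types_by_name[owner.rstrip(".").lower()].append(rtype.upper())
    problems = []
    for name, types in sorted(types_by_name.items()):
        cnames = types.count("CNAME")
        others = sorted(set(t for t in types if t != "CNAME"))
        if cnames and name == apex:
            problems.append(f"{name} is the zone apex and cannot be a CNAME; use an A record "
                            f"(or your provider's ALIAS/alias type) instead")
        elif cnames and others:
            problems.append(f"CNAME at {name} conflicts with its {', '.join(others)} record(s)")
        if cnames > 1:
            problems.append(f"{name} has {cnames} CNAME records; only one is allowed")
    apex_types = types_by_name.get(apex, [])
    if "SOA" not in apex_types or "NS" not in apex_types:
        problems.append(f"apex {apex} is missing SOA or NS records")
    return problems


ZONE = "foobar.com"
COMMON = [
    ("foobar.com.", "SOA", "ns1.example-dns.net. hostmaster.foobar.com. 1 7200 900 1209600 86400"),
    ("foobar.com.", "NS", "ns1.example-dns.net."),
    ("www.foobar.com.", "CNAME", "www.foobar.com.s3-website-us-east-1.amazonaws.com."),
]
# What the question asks for, and what DNS refuses: a CNAME at the apex.
BROKEN = COMMON + [("foobar.com.", "CNAME", "foobar.com.s3-website-us-east-1.amazonaws.com.")]
# What works: an A record at the apex (a redirecting host, or a provider alias that yields A records).
FIXED = COMMON + [("foobar.com.", "A", "203.0.113.20")]

if __name__ == "__main__":
    for label, zone in (("broken", BROKEN), ("fixed", FIXED)):
        print(label, zone_problems(ZONE, zone) or ["ok"])
```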
Since this question was asked things have changed. It is now possible to host your site on S3 with a root domain.
Instead of just having one bucket named "www.yourserver.com", you have to create another bucket with the naked (root) domain name, e.g. "yourserver.com".
After that you will have to use Amazon's DNS service, Route 53. Create an A record for the naked domain and a CNAME for the "www" hostname.
Note that you will need to move the management of your domain to Amazon Route 53 completely.
See the detailed walk-through here: http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html