I have a handful of domains registered for brand-protection reasons. They are currently managed at a registrar that involves a lot of manual work, but it does handle redirection to the main domain.
I am moving DNS from this provider to Route53 using Terraform. I'm setting up a redirector for the brand-protection domains using S3, but I'm running into a catch-22:
If I want a single S3 bucket to handle the redirects I need to put CloudFront in front of it, which means I need an SSL cert valid for the various domains, which means I need DNS to already be in Route53 for the validation.
If I want to avoid breaking the redirects during this migration, I can't move the DNS until the redirector is in place.
This means I think I have the following options:
Migrate the domains into Route53 first, then create the CloudFront distribution. Redirects will break until this is complete.
Create a separate S3 bucket for each domain, which won't cover wildcard domains (e.g. *.aliasdomain.com) but can at least handle the apex and www, for instance (and HTTP only).
Manually create the necessary certificate and import it into Terraform.
Have I missed an obvious alternative? Ideally I would create a single redirector that would handle all HTTP traffic to begin with, then sort out HTTPS later.
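To illustrate, the behaviour I want from that interim redirector is roughly the following sketch (main-domain.example stands in for the real main domain):

package main

import "net/http"

func main() {
    // Whatever brand-protection Host the request arrives on, send the
    // visitor to the same path on the main domain.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        http.Redirect(w, r, "https://www.main-domain.example"+r.URL.RequestURI(), http.StatusMovedPermanently)
    })
    http.ListenAndServe(":80", nil)
}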
I have very little knowledge of Cloudflare.
Currently all my application traffic goes through Akamai. What I am looking for is a way to create a new DNS entry at Cloudflare and route specific requests through Cloudflare.
For example, in AWS CloudFront you can configure an alternate domain name, use it instead of the origin URLs, and route specific traffic with specific rules.
But with Cloudflare the only option appears to be moving all incoming traffic to Cloudflare, since it asks me to replace the name servers with Cloudflare name servers.
I am looking for a way to create a new or alternate domain name (similar to CloudFront) at Cloudflare and use it to route specific requests to my Akamai URLs based on page rules.
Is this possible to achieve?
Thank you in advance.
When you onboard a zone on Cloudflare, you can onboard it in Full mode (Cloudflare becomes the authoritative DNS, by pointing the nameservers as you mention) or in Partial/CNAME mode (you retain an external authoritative DNS and point specific subdomains to Cloudflare). This could help you separate which traffic goes through Cloudflare and which does not.
At the time of writing, Partial/CNAME onboarding is available on Business and Enterprise plans; see the Cloudflare documentation for details.
Another possibility could be to direct all the traffic to Cloudflare and then use the Load Balancing capability and custom rules to route the traffic as required.
I am developing a SaaS web application (https://mywebsite.example) which will be hosted in AWS and will have subdomains for individual customers, like https://customer1.mywebsite.example and https://customer2.mywebsite.example.
As a second step I would like to introduce custom domain names and map them to the subdomains of mywebsite.example through CNAME records:
https://customer1.example --> https://customer1.mywebsite.example
Here is what I have analysed so far.
Using certificates on the AWS load balancer, with the custom domains as SANs in the certificate. However, the AWS load balancer certificate limits are lower than the number of customers I expect to add.
Cloudflare DNS setup for mywebsite.example and its subdomains, with SSL certificates configured in Cloudflare. However, Cloudflare allows third-party (custom domain) CNAME setups only on the Enterprise plan.
Is there any other alternative service, or an alternate way of achieving this use case?
It seems that this solution, available in the AWS Marketplace, should solve your problem.
You can try it; there is a trial available. It is called Kilo SSL:
https://aws.amazon.com/marketplace/pp/prodview-nedlvgpke4hdk?sr=0-1&ref_=beagle&applicationId=AWSMPContessa
It is also possible to map your customers' domains to your SaaS. The algorithm is:
Create an EC2 instance, then allocate and associate a public IP to it.
Create a domain name which points to this instance. You will use this domain name as the CNAME target when pointing your own subdomains at it in your DNS provider (but there is a limit of 50 certificates per week per domain, so you can create only 50 domains like customer1.yourdomain.com ... customer50.yourdomain.com per week).
For customers who want to use their own domains (like app.customer1.com), you also provide them the CNAME target and ask the customer to set the DNS record. After they do, you will be able to create a certificate for their domain using this service.
This service also allows you to point different domains to different URLs. We started using it in our SaaS application for URL shortening (we have several hundred customers who use their own domains, so we are able to create certificates for them automatically, and everything is automated via the API). We also use the same machine to provide SSL for all our company's domains.
Available API methods: https://docs.kilossl.com/
I have built a SaaS product with Angular 4 integrated with a Golang REST API and uploaded the build to an AWS EC2 instance. My project is a multi-tenant app which loads each customer's dashboard on a merchant-name.mystore.com subdomain, but some customers are asking for a custom domain feature, i.e. they should be able to load the app on mydomain.com.
I have done the subdomain part with the following configuration in the apache2.conf file, so all subdomains load from the apps folder where the Angular app files are located:
<VirtualHost *:80>
    # Catch-all vhost: every *.mystore.com subdomain is served from the same Angular build
    ServerAlias *.mystore.com
    DocumentRoot /var/www/html/apps
    <Directory "/var/www/html/apps">
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
For the custom domain feature I have a section in the admin panel to save a custom domain, but I am not sure how I should implement it.
Possible methods I have thought about are:
Create a virtual host file and update it on each merchant signup with their custom domain
Do it somehow with an .htaccess file and mod_rewrite
Shopify does this, but I am not sure how they load the merchant-specific store. Another point that has kept me thinking is which values I should ask customers to update:
The IP address at the domain registrar
Name servers (not sure what these would be for my setup on AWS)
Ask them to create a CNAME or A record, as some articles suggest
I have a similar setup on a number of SaaS platforms I develop and manage. This type of setup is certainly desirable, as your clients suggest. You should plan to serve each customer site on its own domain, probably also with *SSL, from the beginning. In my opinion, this is best practice for a well-architected SaaS service today.
Reading your question, I think you are over-engineering it a little.
For a custom-domain SaaS app on the same server, you simply open port 80 to all traffic, regardless of domain name. Point all customer domains at app.mystore.com, which is a CNAME to your app endpoint.
The app then reads the HTTP request header, and in that way determines the host name that was requested.
Finally, the app looks up the host name in its client database and locates the client record for the given customer domain.
For example, in Nginx all you need is:
server {
    listen 80 default_server;          # catch-all: accept requests for any host name
    server_name _;                     # placeholder name, never matches a real domain
    root /var/www/myservice/htdocs;
}
This server configuration provides a catch-all for any domain that points to this endpoint.
That is all the web server needs in order to answer for any customer domain. The app must do the rest.
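Since the app behind this setup is a Golang REST API, a minimal sketch of what "the rest" looks like might be the following (tenantByHost and its mapping are hypothetical stand-ins for the real client database):

package main

import (
    "fmt"
    "net/http"
    "strings"
)

// tenantByHost stands in for the real database lookup that maps a requested
// host name (a *.mystore.com subdomain or a custom domain) to a tenant.
func tenantByHost(host string) (string, bool) {
    tenants := map[string]string{
        "merchant-name.mystore.com": "merchant-name",
        "www.mydomain.com":          "merchant-name",
    }
    tenant, ok := tenants[host]
    return tenant, ok
}

func handler(w http.ResponseWriter, r *http.Request) {
    // r.Host carries the Host header of the request, possibly with a port.
    host := strings.ToLower(strings.Split(r.Host, ":")[0])
    tenant, ok := tenantByHost(host)
    if !ok {
        http.Error(w, "unknown domain", http.StatusNotFound)
        return
    }
    fmt.Fprintf(w, "serving dashboard for tenant %q\n", tenant)
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":80", nil)
}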
* When you serve an app on a custom domain, you should plan to serve the SSL endpoint for that domain, e.g. https://www.mycustomdomain.com. Consider this in your architecture design. Also consider the DNS issues if your app fails over to a new IP.
The accepted answer is satisfactory but it only skims over the most important part, and that is enabling HTTPS by issuing certificates for third-party domains.
If your customers just CNAME to your domain or create the A record to your IP and you don't handle TLS termination for these custom domains, your app will not support HTTPS, and without it, your app won't work in modern browsers on these custom domains.
You need to set up a TLS termination reverse proxy in front of your webserver. This proxy can be run on a separate machine but you can run it on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the requested host name in the SNI field of the incoming TLS handshake, e.g. app.customer1.com or customer2.com, and use it to decide which TLS certificate to serve.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddy, greenlock.js, or OpenResty (Nginx).
tl;dr on what happens here:
Caddy server listens on 443 and 80, receives requests, issues, and renews certificates automatically, and proxies traffic to your backend.
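If you would rather keep the proxy in Go than run Caddy, a rough sketch of the same idea using the golang.org/x/crypto/acme/autocert package could look like this (the backend address and the isKnownCustomerDomain check are assumptions standing in for your setup):

package main

import (
    "context"
    "fmt"
    "net/http"
    "net/http/httputil"
    "net/url"

    "golang.org/x/crypto/acme/autocert"
)

// isKnownCustomerDomain stands in for the real lookup against your tenant records.
func isKnownCustomerDomain(host string) bool {
    return host == "app.customer1.com" || host == "customer2.com"
}

func main() {
    // Proxy terminated traffic to the backend app running on localhost.
    backend, _ := url.Parse("http://127.0.0.1:8080")
    proxy := httputil.NewSingleHostReverseProxy(backend)

    m := &autocert.Manager{
        Prompt: autocert.AcceptTOS,
        Cache:  autocert.DirCache("/var/lib/certs"), // persist issued certificates
        // Only issue certificates for domains that exist in the customer database.
        HostPolicy: func(ctx context.Context, host string) error {
            if isKnownCustomerDomain(host) {
                return nil
            }
            return fmt.Errorf("unknown domain %q", host)
        },
    }

    // Port 80 answers ACME HTTP-01 challenges and redirects everything else to HTTPS.
    go http.ListenAndServe(":80", m.HTTPHandler(nil))

    srv := &http.Server{
        Addr:      ":443",
        TLSConfig: m.TLSConfig(), // certificates are fetched and renewed on demand
        Handler:   proxy,
    }
    srv.ListenAndServeTLS("", "") // empty paths: certificates come from TLSConfig
}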
How to handle it on the backend
Your proxy is terminating TLS and proxying requests to your backend. However, your backend doesn't know which customer the original request was for. This is why you need to tell your proxy to include an additional header in proxied requests to identify the customer. Just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com, or whatever the Host header of the original request is.
Now when you receive the proxied request on the backend, you can read this custom header and you know which customer the request is for. You can implement your logic based on that, show data belonging to this customer, and so on.
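In Go, for example, reading that header on the backend is a small sketch like the following (it assumes the X-Serve-For name proposed above):

package main

import (
    "fmt"
    "net/http"
)

func dashboard(w http.ResponseWriter, r *http.Request) {
    // The proxy sets X-Serve-For to the Host header of the original request;
    // fall back to r.Host if the app is reached directly.
    customerDomain := r.Header.Get("X-Serve-For")
    if customerDomain == "" {
        customerDomain = r.Host
    }
    // Look up the tenant that owns customerDomain here and render its data.
    fmt.Fprintf(w, "data for %s\n", customerDomain)
}

func main() {
    http.HandleFunc("/", dashboard)
    http.ListenAndServe(":8080", nil)
}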
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise you may find yourself waking up in the middle of the night restarting machines, or your proxy, manually.
Alternatively, there have been a few services like this recently that allow you to add custom domains to your app without running the infrastructure yourself.
If you need more detail you can DM me on Twitter #dragocrnjac
My company is looking at creating a subdomain for content that's currently stored in a subfolder on the site. As an SEO this decision makes my skin crawl. Since the decision has been made to implement the subdomain (server architecture decision to move parts of the site to a cloud provider), I would like to have IT implement a reverse proxy so we don't have to 301 the whole content base to a fresh subdomain.
One of the main objections IT has is that if we implement the reverse proxy, and there are issues with content or webpage functionality, the cloud provider will point to the reverse proxy as the issue.
My question is, unless we're specifically blocking access from outside the reverse proxy server, aren't the pages still accessible directly using the subdomain, or specific server ip address?
Example:
www.Example.com/blog hosted in say our Florida datacenter
becomes
www.Example.com/blog actually pointing to blog.Example.com hosted in say an Amazon EC2 cloud
Wouldn't a user still be able to access blog.Example.com directly unless we specify that we will only allow traffic from whatever the proxy server's IP address is?
I realize leaving access open to the world would introduce additional SEO considerations, but I can manage around that.
Yes. Adding a reverse proxy to the mix just adds another route to the destination URL; it's much like how adding a 301 redirect to a page doesn't mean traffic can't get in by other means.
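If you did later decide to lock the origin down so only the proxy can reach it, the check itself is simple; a rough sketch in Go (the proxy IP and document root are placeholders, and in practice a firewall rule or security group does the same job):

package main

import (
    "net"
    "net/http"
)

const proxyIP = "203.0.113.10" // placeholder: the reverse proxy's public IP

// onlyFromProxy rejects requests that do not originate from the proxy.
func onlyFromProxy(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        host, _, err := net.SplitHostPort(r.RemoteAddr)
        if err != nil || host != proxyIP {
            http.Error(w, "direct access not allowed", http.StatusForbidden)
            return
        }
        next.ServeHTTP(w, r)
    })
}

func main() {
    blog := http.FileServer(http.Dir("/var/www/blog"))
    http.ListenAndServe(":80", onlyFromProxy(blog))
}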
I am using an Amazon S3 bucket for uploading and downloading data with my .NET application. Now my question is: I want to access my S3 bucket using SSL. Is it possible to implement SSL for an Amazon S3 bucket?
You can access your files via SSL like this:
https://s3.amazonaws.com/bucket_name/images/logo.gif
If you use a custom domain for your bucket, you can use S3 and CloudFront together with your own SSL certificate (or generate a free one via AWS Certificate Manager): http://aws.amazon.com/cloudfront/custom-ssl-domains/
Custom domain SSL certs were just added today for $600/cert/month. Sign up for your invite below:
http://aws.amazon.com/cloudfront/custom-ssl-domains/
Update: SNI customer-provided certificates are now available for no additional charge. Much cheaper than $600/month, and with Windows XP nearly killed off, SNI should work well for most use cases.
@skalee AWS has a mechanism for achieving what the poster asks for ("implement SSL for an Amazon S3 bucket"): it's called CloudFront. I'm reading "implement" as "use my SSL certs", not "just put an S on the HTTP URL", which I'm sure the OP could have surmised.
Since CloudFront costs exactly the same as S3 ($0.12/GB), but has a ton of additional features around SSL AND allows you to add your own SNI cert at no additional cost, it's the obvious fix for "implementing SSL" on your domain.
I found you can do this easily via the Cloudflare service.
Set up a bucket, enable static website hosting on it, and point the desired CNAME at that endpoint via Cloudflare... and pay for the service of course, but $5-$20 vs $600 is much easier to stomach.
Full detail here:
https://www.engaging.io/easy-way-to-configure-ssl-for-amazon-s3-bucket-via-cloudflare/
It is not possible directly with S3, but you can create a CloudFront distribution in front of your bucket. Then go to Certificate Manager and request a certificate; Amazon issues them for free. Once you have successfully validated the certificate, assign it to your CloudFront distribution. Also remember to set the rule to redirect HTTP to HTTPS.
I'm hosting a couple of static websites on Amazon S3, such as my personal website, to which I have assigned the SSL certificate since they have a CloudFront distribution.
If you really need it, consider redirections.
For example, on a request to assets.my-domain.example.com/path/to/file you could perform a 301 or 302 redirection to my-bucket-name.s3.amazonaws.com/path/to/file or s3.amazonaws.com/my-bucket-name/path/to/file (remember that in the first case my-bucket-name cannot contain any dots, otherwise it won't match the *.s3.amazonaws.com wildcard in the S3 certificate).
Not tested, but I believe it would work. I see a few gotchas, however.
The first one is pretty obvious: an additional request to follow the redirect. And I doubt you could use the redirection service provided by your domain name registrar (you'd have to upload the proper certificate there somehow), so you would have to run your own server for this.
The second one is that you can have URLs with your domain name in the page source code, but when, for example, a user opens an image in a separate tab, the address bar will display the target URL.
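A minimal sketch of such a redirection server, reusing the host names from the example above (and leaving aside the certificate it would itself need in order to serve HTTPS):

package main

import "net/http"

func main() {
    // Redirect every request for assets.my-domain.example.com to the
    // equivalent object URL on the bucket's s3.amazonaws.com address.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        target := "https://my-bucket-name.s3.amazonaws.com" + r.URL.Path
        http.Redirect(w, r, target, http.StatusMovedPermanently)
    })
    http.ListenAndServe(":80", nil)
}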
As mentioned before, you cannot create free certificates for S3 buckets. However, you can create a CloudFront distribution and then assign the certificate to CloudFront instead. You request the certificate for your domain and then just assign it to the CloudFront distribution in the CloudFront settings. I've used this method to serve static websites via SSL as well as to serve static files.
For static website hosting, Amazon is the go-to place. It is really affordable to get a static website with SSL.