In a reverse proxy implementation, are the addresses still accessible outside the reverse proxy?

My company is looking at creating a subdomain for content that's currently stored in a subfolder on the site. As an SEO this decision makes my skin crawl. Since the decision has been made to implement the subdomain (server architecture decision to move parts of the site to a cloud provider), I would like to have IT implement a reverse proxy so we don't have to 301 the whole content base to a fresh subdomain.
One of IT's main objections is that if we implement the reverse proxy and there are issues with content or webpage functionality, the cloud provider will point to the reverse proxy as the cause.
My question is: unless we're specifically blocking access from outside the reverse proxy server, aren't the pages still accessible directly via the subdomain or the server's IP address?
Example:
www.example.com/blog, hosted in, say, our Florida datacenter
becomes
www.example.com/blog actually pointing to blog.example.com, hosted on, say, an Amazon EC2 instance
Wouldn't a user still be able to access blog.example.com directly unless we specify that we will only allow traffic from whatever the proxy server's IP address is?
I realize leaving access open to the world would introduce additional SEO considerations, but I can manage around that.

Yes. Adding a reverse proxy to the mix just adds another route to the destination URL; it's much like a 301 redirect, which doesn't stop traffic from reaching the page by other means. Unless you explicitly restrict the origin to accept traffic only from the proxy's IP address, the subdomain and the server's IP remain directly reachable.
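If you do decide to lock the origin down, the usual approach is a firewall rule or, on EC2, a security group that only admits the proxy's address. The same check can also live in the application itself; here is a minimal Go sketch, assuming the origin runs (or fronts) a Go HTTP server, with the proxy IP and document root as placeholders:

package main

import (
	"net"
	"net/http"
)

// proxyOnly rejects any request whose source IP is not the reverse
// proxy, so the content cannot be fetched directly via the subdomain
// or the server's IP address.
func proxyOnly(allowedIP string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		host, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil || host != allowedIP {
			http.Error(w, "Forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	blog := http.FileServer(http.Dir("/var/www/blog"))
	// 203.0.113.10 stands in for the reverse proxy's real address.
	http.ListenAndServe(":80", proxyOnly("203.0.113.10", blog))
}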

Related

Migrate redirected domains to CloudFront/S3

I have a handful of domains registered for brand-protection reasons that are currently managed at a rather manual registrar, which at least handles redirection to the main domain.
I am moving DNS from this provider over to Route53 using Terraform. I'm setting up a redirector for the brand protection domains using S3, but running into a catch-22:
If I want a single S3 bucket to handle the redirects I need to put CloudFront in front of it, which means I need an SSL cert valid for the various domains, which means I need DNS to already be in Route53 for the validation.
If I want to avoid breaking the redirects during this migration, I can't move the DNS until the redirector is in place.
This means I think I have the following options:
Migrate the domains into Route53 first, then create the CloudFront distribution. Redirects will break until this is complete.
Create a separate S3 bucket for each domain, which won't cover wildcard domains (e.g. *.aliasdomain.com) but can at least handle the apex and www (and HTTP only).
Manually create the necessary certificate and import it into Terraform.
Have I missed an obvious alternative? Ideally I would create a single redirector that would handle all http traffic to begin with, then sort out https later.

Is it possible to add routing without moving all my application traffic to Cloudflare?

I have very little knowledge of Cloudflare.
Currently all my application traffic goes through Akamai. What I am looking for is a way to create a new DNS entry at Cloudflare and route specific requests through Cloudflare.
For example, if you configure the same thing in AWS CloudFront, you can give the distribution an alternate domain name, use it instead of the origin URLs, and route specific traffic with specific rules.
But with Cloudflare the only way seems to be to move all incoming traffic to Cloudflare, as it asks me to replace the name servers with Cloudflare name servers.
I am looking for a way to create a new or alternate domain name at Cloudflare (similar to CloudFront) and use it to route specific requests to my Akamai URLs based on page rules.
Is it possible to achieve this?
Thank you in advance.
When you onboard a zone on Cloudflare, you can onboard it in Full mode (Cloudflare becomes the authoritative DNS, by pointing the nameservers as you mention) or in Partial/CNAME mode (you retain an external authoritative DNS and point specific subdomains to Cloudflare). This could help you separate which traffic goes through Cloudflare and which does not.
At the time of writing, Partial/CNAME onboarding is available on Business and Enterprise plans; see Cloudflare's documentation for details.
Another possibility could be to direct all the traffic to Cloudflare and then use the Load Balancing capability and custom rules to route the traffic as required.

Custom domain feature for SaaS product customers

I have built a SaaS product with Angular 4 integrated with a Golang REST API, and uploaded the build to an AWS EC2 instance. My project is a multi-tenant app which loads each customer's dashboard on a merchant-name.mystore.com subdomain, but some customers are asking for a custom domain feature, i.e. they should be able to load the app on mydomain.com.
I have done the subdomain part with the following configuration in apache2.conf, so all subdomains load from the apps folder where the Angular app files are located:
<VirtualHost *:80>
    # Catch-all vhost: any *.mystore.com subdomain serves the Angular build
    ServerAlias *.mystore.com
    DocumentRoot /var/www/html/apps
    <Directory "/var/www/html/apps">
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
For the custom domain feature I have a section in the admin area to save a custom domain, but I'm not sure how I should implement it.
Possible methods I have thought about are:
Create a virtual host file and update it on each merchant signup with their custom domain
Do it somehow with an htaccess file and mod_rewrite
Shopify does this, but I'm not sure how they load a merchant-specific store. Another point that has kept me thinking is which values I should ask customers to update:
IP address at the domain registrar
Name servers (not sure what these would be for my setup on AWS)
A CNAME or A record, as some articles suggest
I have a similar setup on a number of SaaS platforms I develop and manage. This type of setup is certainly desirable, as your clients suggest. You should plan to serve each customer site on its own domain, probably also with SSL*, from the beginning. In my opinion, this is best practice for a well-architected SaaS service today.
In reading your question, I think you are over-engineering it a little.
For a custom-domain SaaS app on the same server, you simply open port 80 to all traffic, regardless of domain name, and have customers point their domains at app.mystore.com, which is a CNAME to your app endpoint.
The app then reads the Host header of the HTTP request, and in that way determines the host name that was requested.
Finally, the app looks up that host name in its client database and locates the client record for the given customer domain.
For example, in Nginx all you need is:
server {
    listen 80 default_server;
    server_name _;
    root /var/www/myservice/htdocs;
}
This server configuration provides a catch-all for any domain that points to this endpoint.
That is all the web server should need to allow it to answer to any customer domain. The app must do the rest.
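As an illustration of that app-side lookup, here is a minimal Go sketch; the map stands in for the client database, and every host name in it is a placeholder:

package main

import (
	"fmt"
	"net"
	"net/http"
	"strings"
)

// tenantByHost stands in for the client-database lookup described above.
var tenantByHost = map[string]string{
	"merchant-name.mystore.com": "merchant-name",
	"www.mycustomdomain.com":    "merchant-name",
}

func handler(w http.ResponseWriter, r *http.Request) {
	// The Host header tells us which customer domain was requested.
	host := strings.ToLower(r.Host)
	if h, _, err := net.SplitHostPort(host); err == nil {
		host = h // drop a :port suffix if present
	}
	tenant, ok := tenantByHost[host]
	if !ok {
		http.NotFound(w, r)
		return
	}
	fmt.Fprintf(w, "serving dashboard for tenant %q\n", tenant)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":80", nil)
}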
* When you serve a custom domain on an app like this, you should plan to serve the SSL endpoint for the domain, e.g. https://www.mycustomdomain.com. Consider this in your architecture design, and consider also the DNS issues if your app fails over to a new IP.
The accepted answer is satisfactory but it only skims over the most important part, and that is enabling HTTPS by issuing certificates for third-party domains.
If your customers just CNAME to your domain or create the A record to your IP and you don't handle TLS termination for these custom domains, your app will not support HTTPS, and without it, your app won't work in modern browsers on these custom domains.
You need to set up a TLS-terminating reverse proxy in front of your webserver. This proxy can run on a separate machine or on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com, they can create a CNAME record for app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. During the TLS handshake your proxy sees the requested host name in the SNI field, e.g. app.customer1.com or customer2.com, and uses it to decide which TLS certificate to serve.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddy, greenlock.js, or OpenResty (Nginx).
tl;dr on what happens here:
Caddy listens on 443 and 80, receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
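If you'd rather build the terminating proxy yourself instead of using one of those tools, the same on-demand flow can be sketched in Go with the golang.org/x/crypto/acme/autocert package; the isKnownCustomerDomain lookup below is a hypothetical stand-in for your customer database:

package main

import (
	"context"
	"fmt"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

// isKnownCustomerDomain is a placeholder for a database lookup of the
// domains your customers have registered with you.
func isKnownCustomerDomain(host string) bool { return true }

func main() {
	m := &autocert.Manager{
		Prompt: autocert.AcceptTOS,
		Cache:  autocert.DirCache("/var/lib/autocert"), // persist issued certs
		// Consulted before a certificate is issued for a new SNI host
		// name; approve only domains that belong to a customer.
		HostPolicy: func(ctx context.Context, host string) error {
			if isKnownCustomerDomain(host) {
				return nil
			}
			return fmt.Errorf("unknown customer domain %q", host)
		},
	}

	app := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello from %s\n", r.Host)
	})

	// Port 80 serves the ACME HTTP-01 challenge and redirects to HTTPS;
	// port 443 terminates TLS, issuing certificates on first use.
	go http.ListenAndServe(":80", m.HTTPHandler(nil))
	srv := &http.Server{Addr: ":443", TLSConfig: m.TLSConfig(), Handler: app}
	srv.ListenAndServeTLS("", "")
}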
How to handle it on the backend
Your proxy terminates TLS and proxies requests to your backend, but the backend doesn't know which customer is behind the original request. That is why you need to tell your proxy to include an additional header in proxied requests to identify the customer: just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com, i.e. whatever the Host header of the original request is.
Now when you receive the proxied request on the backend, you can read this custom header and know which customer is behind the request. You can implement your logic based on that: show data belonging to this customer, and so on.
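For example, on a Go backend the lookup can be as small as this; remember that X-Serve-For is just this answer's naming convention, not a standard header:

package main

import (
	"errors"
	"net/http"
)

// customerDomain recovers the original customer domain that the
// TLS-terminating proxy recorded in the X-Serve-For header.
func customerDomain(r *http.Request) (string, error) {
	domain := r.Header.Get("X-Serve-For")
	if domain == "" {
		return "", errors.New("request did not come through the proxy")
	}
	return domain, nil
}

func main() {
	http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		domain, err := customerDomain(r)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		w.Write([]byte("customer: " + domain))
	}))
}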
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise you may be waking up in the middle of the night to restart machines, or your proxy, manually.
Alternatively, there have been a few services like this recently that allow you to add custom domains to your app without running the infrastructure yourself.
If you need more detail you can DM me on Twitter: @dragocrnjac

CloudFront in front of an IIS 7.5 web server with SSL

I have a CloudFront distribution in front of my ASP.NET application, serving dynamic content. All cache periods are set and everything looks OK.
I am using CloudFront mainly to accelerate the site for international visitors.
I have a registration page on the site that uses SSL. I understand that I can't use my own SSL certificate with CloudFront, but is there a way I can tell CloudFront to point the user to the origin when they navigate to one of the HTTPS pages?
The URLs on your secure page must use the https prefix, or the browser will complain about mixed-mode content. This means the requests have to start their lives as HTTPS ones, which makes redirection in the manner you suggest impossible.
Your best bet is to have logic in your pages that determines the host portion of your URLs and the protocol, so that on a secure connection all your content URLs are prefixed with the secure host/protocol in the form https://[cloudfront-secure-hostname]/[your content].
If the connection is not secure, you return your standard CDN hostname using http.
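The site in the question is ASP.NET, so treat this purely as an illustration of the logic; a minimal Go sketch of the URL-building rule, with both host names as placeholders:

package main

import "fmt"

// assetURL prefixes a content path with the secure origin host on
// HTTPS pages and with the CDN host on plain-HTTP pages.
func assetURL(secure bool, path string) string {
	if secure {
		return "https://secure.example.com" + path
	}
	return "http://d1234abcd.cloudfront.net" + path
}

func main() {
	fmt.Println(assetURL(false, "/img/logo.png")) // served via the CDN
	fmt.Println(assetURL(true, "/img/logo.png"))  // served via the secure origin
}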
The only downside is that users will see requests going off to a domain other than yours. This shouldn't be too much of a problem, though.

Is it wrong to configure a webserver to map both HTTP and HTTPS traffic to the same document root?

Is there anything wrong with configuring a webserver to map SSL traffic (port 443) to the same document root as normal traffic (port 80)?
Using the same document root for both http and https means you need to implement the following:
On each page that needs to be secure, there needs to be application code that redirects the user to the https version if they somehow got to the http version (or redirects the user to the login page if they have no session).
The login page always needs to redirect to the https version.
For pages that are accessible via both http and https, you need to set a canonical URL to ensure it doesn't appear like you have duplicate content.
Is there a better way to configure encryption of user account pages? Is there a best practice for separating a website into HTTP and HTTPS sections?
It's not necessarily wrong to do this, but as your points 1-3 show, it introduces complications. It seems to me that setting up a separate document root might be a lot simpler than working around them.
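To give a sense of what point 1 above costs in practice, here is a minimal Go sketch of the kind of redirect middleware the shared-docroot approach forces on you; the protected paths are invented examples:

package main

import (
	"net/http"
	"strings"
)

// requireHTTPS redirects plain-HTTP requests for protected paths to
// their HTTPS equivalents; everything else passes through untouched.
func requireHTTPS(next http.Handler) http.Handler {
	protected := []string{"/login", "/account"}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.TLS == nil {
			for _, p := range protected {
				if strings.HasPrefix(r.URL.Path, p) {
					u := *r.URL
					u.Scheme = "https"
					u.Host = r.Host
					http.Redirect(w, r, u.String(), http.StatusMovedPermanently)
					return
				}
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/", http.FileServer(http.Dir("/var/www/html")))
	http.ListenAndServe(":80", requireHTTPS(mux))
}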
In Internet Information Server 7.x you can define a "secure path" which requires HTTPS for access, and you can redirect the user to a user-friendly error page.
Maybe this can be a good solution for mixing the document root while keeping parts of the application secured.
Automatically redirecting http to https leaves room for man-in-the-middle attacks and is therefore not recommended: a man-in-the-middle could manipulate your HTTP traffic to send users to a malicious HTTPS site that resembles your HTTPS content.