Reverse Proxy and Akamai

I'm trying to understand how a reverse proxy may impact our website and its performance. We are looking to create a reverse proxy with servers in two locations:
The main server that hosts our website is in Atlanta.
A subsection of the main website is hosted in Washington state on a subdomain.
We'd like all requests to the subdomain's content to be served from a subfolder on the main website via a reverse proxy.
Can this be done without performance issues considering the two servers are so far apart?
If anyone has experience with Akamai: can it be used to accomplish a reverse proxy setup like this?
If so, roughly how hard is it to set up for a well-trained Akamai engineer (easy, medium, hard)?

If you're using Akamai, configure your Akamai property to deliver content for your website from your origin in Atlanta (the default origin). Then set up a rule for Akamai to source content from your origin in Washington based on a specific path (i.e., the website subsection).
https://techdocs.akamai.com/property-mgr/docs/path-match
This way Akamai has direct access to each origin and can optimize delivery accordingly. It also keeps a self-managed reverse proxy from becoming a single point of failure.
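For comparison, if you ran your own reverse proxy instead, the path-to-origin split would look roughly like this nginx sketch (hostnames and paths are placeholders, not your actual servers):

server {
    listen 80;
    server_name www.example-main.com;

    # Requests under /subsection/ are fetched from the Washington server
    location /subsection/ {
        proxy_pass http://wa-origin.example.com/;
        proxy_set_header Host wa-origin.example.com;
    }

    # Everything else is served by the Atlanta origin itself
    location / {
        root /var/www/main-site;
    }
}

Akamai's path-match rule does the same split at the edge, close to the user, without funneling all traffic through a single proxy box.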
Cheers.

Related

Restrict Lightsail machine to be accessed only from CloudFront

I have a website (https://www.cakexpo.com) hosted on Lightsail. A few days ago we faced a DDoS attack on the IP, which forced me to onboard my website to CloudFront.
I moved my website to CloudFront, yet my IP address is still publicly available, making it vulnerable to more attacks.
I am trying to understand how I can hide my IP from public access.
I found that you can get the list of CloudFront IP ranges and whitelist them in the security group, which I tried.
It worked for some time, but later on I realised that CloudFront uses lots of IPs which are not in that list and thus not whitelisted in my security group.
This made my site intermittently unavailable.
nslookup shows a different IP, which is not in the list above, and this link says there are 190+ IP ranges associated with CloudFront, which a security group cannot handle, IMO: https://ip-ranges.amazonaws.com/ip-ranges.json
Finally I ended up reverting the config and making my IP public again.
Is there any other way to hide the Lightsail machine from public access?
You can do this in two ways.
Easy way: create an nginx reverse proxy instance in Lightsail and allow access to your main Lightsail instance only from that reverse proxy instance. Then create a distribution (which is CloudFront for Lightsail) and point its origin at the reverse proxy instance. A minimal sketch of the proxy config follows.
Hard way: set up VPC peering to AWS, create a CloudFront distribution there, and allow access only from it.
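For the easy way, the proxy instance's nginx config might look roughly like this (the private IP is a placeholder for your main instance's address):

server {
    listen 80;
    server_name www.cakexpo.com;

    location / {
        # Forward everything to the main Lightsail instance's private IP;
        # that instance's firewall accepts traffic only from this proxy
        proxy_pass http://172.26.0.10;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}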

Cloudflare wildcard DNS entry - still protected if the target IS a Cloudflare Worker?

I see that in CloudFlare’s DNS FAQs they say this about wildcard DNS entries:
Non-enterprise customers can create but not proxy wildcard records.
If you create wildcard records, these wildcard subdomains are served directly without any Cloudflare performance, security, or apps. As a result, Wildcard domains get no cloud (orange or grey) in the Cloudflare DNS app. If you are adding a * CNAME or A Record, make sure the record is grey clouded in order for the record to be created.
What I'm wondering is whether one would still get the benefits of Cloudflare's infrastructure if the target of the wildcard CNAME record IS a Cloudflare Worker, like my-app.my-zone.workers.dev? I imagine that since this is a Cloudflare-controlled resource, it would still be protected from DDoS, for example. Or does so much of Cloudflare's security and performance happen at this initial DNS stage that it will be lost even if the target is a Cloudflare Worker?
Also posted to CloudFlare support: https://community.cloudflare.com/t/wildcard-dns-entry-protection-if-target-is-cloudflare-worker/359763
I believe you are correct that there will be some basic level of Cloudflare services in front of Workers, but I don't think you'll be able to configure them at all if accessing the worker directly (e.g. via a grey-cloud CNAME record pointed at it). Documentation is a little fuzzy on the Cloudflare side of things, however.
They did add functionality a little while back to show the order of operations of their services, and Workers seem to be towards the end (meaning everything sits in front). However, I would think this only applies if you bind it to a route that is covered under a Cloudflare-enabled DNS entry.
https://blog.cloudflare.com/traffic-sequence-which-product-runs-first/
The good news is you should be able to test this fairly easily. For example, you can:
Set up a worker with a test route
Point a DNS-only (grey cloud) record at it
Confirm you can make a request to the worker
Add a firewall rule to block the test route
See if you can still make the request to the worker
This will at least give you an answer on whether your zone settings apply when accessing a worker (even through a grey cloud / wildcard DNS entry). Although it will not answer what kind of built-in / non-configurable services there are in front of workers.
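As a rough sketch, the test could be driven from the command line like this (domain and path are placeholders for your zone and worker route):

# 1. Confirm the record resolves and is DNS-only (grey cloud)
dig +short test.example.com CNAME

# 2. Request the worker through the grey-cloud record
curl -i https://test.example.com/

# 3. Add a zone firewall rule blocking the test route, then retry;
#    a Cloudflare-issued 403 would mean zone settings still apply
curl -i https://test.example.com/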

Is nginx needed if Express is used

I have a Node.js web application with Express running on a DigitalOcean droplet. The Node.js application provides back-end APIs. I have two React front-ends, on different domains, that use the APIs. The front-ends can be hosted on the same server, but my developer tells me I should use another server to host the front-ends, such as Cloudflare.
I have read that nginx can enable hosting multiple sites on the same server (i.e. host my front-ends on the same server), but I'm unsure if this is good practice, as I then may not be able to use Cloudflare.
In terms of security, could someone tell me if I need nginx, and my options please?
Thanks
This is a way too open-ended question, but I will try to answer it:
In terms of security could someone tell me If I need nginx, and my options please?
You will need Nginx (or Apache) in any scenario, with one server or multiple, using Express or not. Express is only an application framework for building routes; you still want a front-line service that responds to network requests, which is what Nginx and Apache do. You could avoid using Nginx, but then your users would have to make requests directly to the port where you started Express, for example http://my-site.com:3000/welcome. In terms of security you are better off hiding the port number and using an Nginx reverse proxy, so that your users only need to go to http://my-site.com/welcome. A minimal sketch follows.
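Something like this, assuming the Express app listens on port 3000 as in the example above (my-site.com is the answer's placeholder domain):

server {
    listen 80;
    server_name my-site.com;

    location / {
        # Proxy to the Express app listening on localhost:3000
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}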
my developer tells me I should use another server to host the front-ends, such as cloudflare
Cloudflare does not offer hosting services as far as I know. It does offer a CDN to host a few files, but not a full site. You would need another DigitalOcean instance to do so. In a Cloudflare forum post I found: "Cloudflare is not a host. Cloudflare's basic service is a DNS provider, where you simply point to your existing host."
I have read that nginX can enable hosting multiple sites on the same server
Yes, Nginx (and Apache too) can host multiple sites on the same server, with different names or the same, as separate domains (www.my-backend.com, www.my-frontend.com) or subdomains (www.backend.my-site.com, www.my-site.com).
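For illustration, a rough pair of server blocks using the answer's example names (paths and the back-end port are placeholders):

server {
    listen 80;
    server_name www.my-frontend.com;
    root /var/www/frontend;   # static front-end build
}

server {
    listen 80;
    server_name www.backend.my-site.com;
    location / {
        proxy_pass http://127.0.0.1:3000;   # back-end API app
    }
}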
... but unsure if this is good practice
Whether or not it is good practice, I think it is very common. A few valid reasons to keep them on separate servers would be:
Because you want that if the front-end fails the back-end API continues to work.
Because you want to balance network traffic.
Because you want to keep them separated.
It is definitely not a bad practice if both applications are highly related.

How can I check who is running my DNS?

I set up Cloudflare with SSL and a 301 redirect to SSL this morning. Everything seemed to work, but now I'm back on HTTP and the redirect is not working. I'm trying to figure out why, and the DNS system is sometimes a bit hard to decipher. I'm using a Swedish registrar, Loopia. Loopia in turn passes the DNS records to Cloudflare.
Is there some way to figure out if I even go through Cloudflare any more?
To determine which name servers you have set:
dig NS DOMAIN
This should only return Cloudflare name servers (unless you enabled Cloudflare via your hosting provider's integration). If you see other name servers in addition to the Cloudflare ones, that indicates you left your old name servers in place when you set up Cloudflare. To use Cloudflare you'd need to remove all name servers other than the ones they provide. Other name servers being in place would return non-Cloudflare IPs, which would explain the behavior you're seeing.
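For example (hypothetical domain and output; the exact Cloudflare-assigned name server names vary per account):

dig NS example.com +short
ada.ns.cloudflare.com.
carl.ns.cloudflare.com.

Anything other than two *.ns.cloudflare.com entries means the delegation is split or not pointing at Cloudflare at all.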

AWS Route 53 Redirect to Status Page

First question, so if I get this wrong somehow be kind.
We are using Route 53 with Amazon and have our primary front end servers behind an ELB. Our app also routes all requests through HTTPS. We are utilizing an offsite status page via statuspage.io.
What I am trying to accomplish is if the primary site goes down I'd like to have R53 redirect both the SSL and non-SSL traffic to our status page.
I originally tried setting up a static page in S3, but still had issues with HTTPS requests made to our site.
Has anyone done this successfully? I imagine it has to be possible, but it's definitely outside my realm of expertise.
Thank you very much for your time and help.
You are right, an S3 website doesn't support HTTPS. However, CloudFront does [1]. What you can do is fail over to CloudFront and have your origin be your S3 website or your statuspage.io page (a rough CLI sketch follows the steps).
Steps:
Create a distribution and set the CNAMEs to match your DNS entries.
Upload and associate your SSL cert with your distribution
Update failover target to be your CloudFront distribution and set it as an alias.
[1] http://aws.amazon.com/about-aws/whats-new/2014/03/05/amazon-cloudfront-announces-sni-custom-ssl/
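As a rough sketch, the secondary (failover) record could be created with the AWS CLI like this; the zone ID, domain, and distribution domain are placeholders, and the primary record would carry "Failover": "PRIMARY" plus an associated health check:

aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "secondary",
        "Failover": "SECONDARY",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'

Note that Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront alias targets.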
Route 53 manages DNS, which is not the layer you want for this (even if you changed the DNS, the change would take up to the TTL to propagate). What you should do is use a combination of auto-scaling policies and health checks. These health checks are performed by the ELB every 30 seconds, and if two consecutive checks fail, it marks the instance as out-of-service and stops directing traffic to it (the ELB directs traffic to your instances in a round-robin manner).
Having more than one instance and using auto-scaling rules is the key: it enables AWS to terminate the unhealthy instance and spin up a new instance instead (in the same ASG, with the same AMI, etc.). A rough example of matching health-check settings is below.
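For a classic ELB, the health check described above might be configured roughly like this (load balancer name and ping path are placeholders):

aws elb configure-health-check --load-balancer-name my-elb \
  --health-check Target=HTTP:80/health,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

Two consecutive failed checks (UnhealthyThreshold=2 at a 30-second interval) take the instance out of service, matching the behavior described above.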