Is it possible to add routing without moving all my application traffic to Cloudflare? - cloudflare

I have very little knowledge of Cloudflare.
Currently all my application traffic goes through Akamai. What I am looking for is a way to create a new DNS entry at Cloudflare and route specific requests through Cloudflare.
For example, if you configure the same thing in AWS CloudFront, you can give the distribution an alternate domain name, use it instead of the origin URLs, and route specific traffic with specific rules.
But with Cloudflare the only way seems to be to move all incoming traffic to Cloudflare, since it asks me to replace my name servers with Cloudflare name servers.
I am looking for a way to create a new or alternate domain name at Cloudflare (similar to CloudFront) and use it to route specific requests to my Akamai URLs based on page rules.
Is this possible to achieve?
Thank you in advance.

When you onboard a zone on Cloudflare, you can onboard it in Full mode (Cloudflare becomes the authoritative DNS, by pointing the nameservers at Cloudflare as you mention) or in Partial/CNAME mode (you retain an external authoritative DNS and point only specific subdomains at Cloudflare). This lets you separate which traffic goes through Cloudflare and which does not.
At the time of writing, Partial/CNAME onboarding is available on Business and Enterprise plans. Documentation is provided at this link.
Another possibility could be to direct all the traffic to Cloudflare and then use the Load Balancing capability and custom rules to route the traffic as required.
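To make the Partial/CNAME idea concrete, here is a minimal Python sketch against the public Cloudflare v4 API. It is an illustration under assumptions, not a complete recipe: the zone is assumed to already exist in partial mode, and the token, zone ID, and hostnames are placeholders.

```python
# Sketch only: create one proxied DNS record in an existing Partial/CNAME zone
# so that just this hostname is routed through Cloudflare. API_TOKEN, ZONE_ID
# and the hostnames below are placeholders, not real values.
import requests

API_TOKEN = "YOUR_API_TOKEN"
ZONE_ID = "YOUR_ZONE_ID"

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "type": "CNAME",
        "name": "shop.example.com",                 # the one hostname to send via Cloudflare
        "content": "shop.example.com.edgekey.net",  # hypothetical Akamai edge hostname
        "proxied": True,                            # enable Cloudflare's proxy for this record
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["result"]["id"])
```

After that, your existing authoritative DNS (outside Cloudflare) points only that hostname at Cloudflare, typically with a CNAME of the form shop.example.com.cdn.cloudflare.net, while every other hostname keeps resolving as it does today.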

Related

Migrate redirected domains to CloudFront/S3

I have a handful of domains registered for brand-protection reasons. They are currently managed at a rather manual registrar, which also handles redirection to the main domain.
I am moving DNS from this provider over to Route53 using Terraform. I'm setting up a redirector for the brand protection domains using S3, but running into a catch-22:
If I want a single S3 bucket to handle the redirects, I need to put CloudFront in front of it, which means I need an SSL cert valid for the various domains, which means I need DNS to already be in Route53 for the validation.
If I want to avoid breaking the redirects during this migration, I can't move the DNS until the redirector is in place.
This means I think I have the following options:
Migrate the domains into Route53 first, then create the CloudFront distribution. Redirects will break until this is complete.
Create a separate S3 bucket for each domain, which won't cover wildcard domains (e.g. *.aliasdomain.com) but can at least handle the apex and www, for instance (and HTTP only); see the sketch below.
Manually create the necessary certificate and import it into Terraform.
Have I missed an obvious alternative? Ideally I would create a single redirector that handles all HTTP traffic to begin with, then sort out HTTPS later.
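For the second option (a separate redirect bucket per domain), here is a rough sketch of the moving parts using boto3. The question's setup is Terraform, so treat this purely as an illustration, and all domain names below are placeholders.

```python
# Sketch of the per-domain redirect buckets: each bucket is configured as an
# S3 static-website redirect to the main domain. Domain names are placeholders
# and the real configuration in the question would be expressed in Terraform.
import boto3

MAIN_DOMAIN = "example.com"                                 # hypothetical main site
ALIAS_DOMAINS = ["aliasdomain.com", "www.aliasdomain.com"]  # hypothetical aliases

s3 = boto3.client("s3", region_name="us-east-1")

for domain in ALIAS_DOMAINS:
    # The bucket name must match the hostname for Route53 to alias to the
    # S3 website endpoint, and that endpoint only serves plain HTTP.
    s3.create_bucket(Bucket=domain)
    s3.put_bucket_website(
        Bucket=domain,
        WebsiteConfiguration={
            "RedirectAllRequestsTo": {"HostName": MAIN_DOMAIN, "Protocol": "https"}
        },
    )
```

A Route53 alias record per domain, pointing at the bucket's website endpoint, is still needed on top of this.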

Why do websites based in any country have their IP in the USA when using Cloudflare?

I was looking at the websites of some Korean exchanges' APIs. When I did a lookup of their IPs on MaxMind, it said they're using Cloudflare, with coordinates in the USA.
However, I know these sites should be in Korea, because pinging them from Korea gives 1-2 ms response times. It would also make sense for a Korean exchange to have its servers based in Korea. So how does Cloudflare work? Is my data really being routed to Cloudflare in the USA before being routed to the exchange, then back to the US, and then back to me? If so, how am I getting such fast response times?
The website I'm looking at is api.bithumb.com.
Cloudflare uses "anycast" routing, which means that all of Cloudflare's 180+ locations around the world use the same IP address. When you send packets to that IP, the packets are routed to the closest Cloudflare location to you. Cloudflare has a location is Seoul, so when you access a Cloudflare IP address from Korea, that's the location you'll almost certainly go to.
Cloudflare (usually) acts as a proxy in front of the web site's real server. Your HTTP requests go to Cloudflare first, and then are forwarded to the "origin server" from there. Sometimes, responses are served directly from Cloudflare (e.g. from cache, or from a Cloudflare Worker) without talking to the origin at all. There is no way to determine the location of the origin server without talking to the owner—part of the reason people use Cloudflare is to shield their origin server from direct access.
Note that when you ping a Cloudflare IP, your ping packets only go to Cloudflare and back; they do not go to the site's origin server. So, the ping time doesn't tell you anything about where the origin server lives.
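If you want to check this yourself, a small illustrative script (not anything Cloudflare provides) can resolve the hostname and compare the addresses against Cloudflare's published IPv4 ranges; any ping or geo-IP result for those addresses describes the nearest Cloudflare edge, not the exchange's origin server.

```python
# Resolve a hostname and report whether its IPv4 addresses fall inside
# Cloudflare's published anycast ranges. Purely illustrative.
import ipaddress
import socket
import urllib.request

HOSTNAME = "api.bithumb.com"  # the site mentioned in the question

# Cloudflare publishes its IPv4 ranges, one CIDR per line, at this URL.
ranges_text = urllib.request.urlopen("https://www.cloudflare.com/ips-v4").read().decode()
cf_networks = [ipaddress.ip_network(line) for line in ranges_text.split() if line]

addresses = {info[4][0] for info in socket.getaddrinfo(HOSTNAME, 443, socket.AF_INET)}
for addr in sorted(addresses):
    ip = ipaddress.ip_address(addr)
    is_cloudflare = any(ip in net for net in cf_networks)
    print(f"{addr}: {'Cloudflare anycast range' if is_cloudflare else 'not a Cloudflare range'}")
```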

In a reverse proxy implementation, are the addresses still accessible outside the reverse proxy?

My company is looking at creating a subdomain for content that's currently stored in a subfolder on the site. As an SEO this decision makes my skin crawl. Since the decision has been made to implement the subdomain (server architecture decision to move parts of the site to a cloud provider), I would like to have IT implement a reverse proxy so we don't have to 301 the whole content base to a fresh subdomain.
One of the main objections IT has is that if we implement the reverse proxy, and there are issues with content or webpage functionality, the cloud provider will point to the reverse proxy as the issue.
My question is: unless we're specifically blocking access from outside the reverse proxy server, aren't the pages still accessible directly using the subdomain or the specific server IP address?
Example:
www.Example.com/blog, hosted in, say, our Florida datacenter,
becomes
www.Example.com/blog actually pointing to blog.Example.com, hosted in, say, an Amazon EC2 cloud.
Wouldn't a user still be able to access blog.Example.com directly unless we specify that we will only allow traffic from whatever the proxy server's IP address is?
I realize leaving access open to the world would introduce additional SEO considerations, but I can manage around that.
Yes. Adding a reverse proxy to the mix just adds another route to the destination URL; much as adding a 301 redirect to a page doesn't mean traffic can't get in by other means, the pages stay directly accessible unless you explicitly restrict access to the proxy.
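To make the "unless you restrict it" part concrete: locking the origin down to the proxy is normally done at the firewall or web-server level, but the idea can be sketched in application code too. This is a hypothetical Python/Flask example with a placeholder proxy address, not your actual stack.

```python
# Hypothetical sketch: refuse requests that do not arrive from the reverse
# proxy's address. In practice this belongs in a firewall rule or web-server
# config; the proxy IP below is a placeholder.
from flask import Flask, abort, request

app = Flask(__name__)
ALLOWED_PROXY_IPS = {"203.0.113.10"}  # placeholder: the reverse proxy's IP

@app.before_request
def reject_direct_access():
    if request.remote_addr not in ALLOWED_PROXY_IPS:
        abort(403)  # direct hits to blog.Example.com are refused

@app.route("/")
def blog_home():
    return "blog content"
```

Without something like this (or the firewall equivalent), blog.Example.com stays reachable both through the proxy and directly.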

CloudFront in front of an IIS 7.5 web server with SSL

I have a CloudFront distribution in front of my ASP.NET application, serving dynamic content. All cache periods are set and everything looks OK.
I am using CloudFront mainly to accelerate the site for international visitors.
I have a registration page on the site that uses SSL. I understand that I can't use my own SSL certificate with CloudFront, but is there a way I can tell CloudFront to point the user to the origin when they navigate to one of the HTTPS pages?
The URLs on your secure page must use the https prefix or the browser will complain about mixed content. This means that the requests have to start their lives as HTTPS requests, which makes redirection in the manner you suggest impossible.
Your best bet is to have logic in your pages that determines the host portion of your URLs and the protocol, so that if it's a secure connection, all your content URLs are prefixed with the secure host/protocol in the form https://[cloudfront-secure-hostname]/[your content].
If the connection is not secure, you return your standard CDN hostname using http.
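The site in question is ASP.NET, so purely as an illustration of that branching logic (both hostnames below are placeholders):

```python
# Illustrative only: choose the asset base URL from the request's scheme.
# Both hostnames are placeholders; the real site would do this in ASP.NET.
CLOUDFRONT_SECURE_HOST = "d1234abcd.cloudfront.net"  # distribution's own hostname (HTTPS-capable)
STANDARD_CDN_HOST = "cdn.example.com"                # friendly CNAME used for plain HTTP

def asset_url(path: str, request_is_secure: bool) -> str:
    # Secure pages reference the CloudFront hostname over https so nothing is
    # served as mixed content; everything else keeps the standard CDN hostname.
    if request_is_secure:
        return f"https://{CLOUDFRONT_SECURE_HOST}/{path.lstrip('/')}"
    return f"http://{STANDARD_CDN_HOST}/{path.lstrip('/')}"

print(asset_url("css/site.css", request_is_secure=True))
print(asset_url("css/site.css", request_is_secure=False))
```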
The only down side is that a user will see requests going off to a domain other than yours. This shouldn't be too much of a problem though.

Is using a CDN possible when you're running an HTTPS website?

I have a website where only the home page is available over plain HTTP.
All other pages are accessible only through HTTP over SSL (https://).
I'm using a CDN for the home page and am very happy with it.
But it looks to me like using the CDN for HTTPS pages is impossible because of security warnings, especially in IE. My files hosted at the CDN are accessible only through plain HTTP.
What should I do? How can this problem be solved?
You need to get a CDN that supports serving files over HTTPS, then use that CDN for the SSL requests.
You can do this if their servers have HTTPS support. What you can't do is use a subdomain of your own domain as a CNAME to the CDN network, because SSL doesn't work that way.
So https://cdn.tld/mydomain/path/to/file as a mechanism does work (because browsers will verify the cdn.tld SSL certificate correctly),
but https://cdn.mydomain.tld/path/to/file will not.
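You can reproduce the check a browser performs with a short script: connect to the CDN but claim your own hostname, and certificate verification fails. Both hostnames below are placeholders, so substitute real ones before running this.

```python
# Sketch of the hostname check that makes the CNAME approach fail: the CDN's
# certificate covers cdn.tld, not cdn.mydomain.tld. Hostnames are placeholders.
import socket
import ssl

def check_hostname(connect_host: str, claimed_host: str) -> None:
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((connect_host, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=claimed_host):
                print(f"{claimed_host}: certificate accepted")
    except ssl.SSLCertVerificationError as exc:
        print(f"{claimed_host}: rejected ({exc.verify_message})")

check_hostname("cdn.tld", "cdn.tld")            # matches the CDN's certificate
check_hostname("cdn.tld", "cdn.mydomain.tld")   # fails hostname verification
```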
Two options, but in general I'd redirect all pages that don't need SSL to their non-SSL equivalents and only use SSL when necessary.
Get an SSL certificate for your CDN host. It's just 30 bucks a year, but take into account that this requires more configuration and, depending on traffic, can also be more expensive, because the server needs more resources for SSL connections.
For the relevant pages, store the CSS/image/JS files "locally" on your own SSL host and use them when you need SSL. Of course you lose the speed benefit of the CDN, but that's a trade-off. We opted for this because only our signup is SSL; 99.9999% of the time users spend on our website is on non-SSL pages.