Cloudflare wildcard DNS entry - still protected if target IS a Cloudflare Worker?

I see that in CloudFlare’s DNS FAQs they say this about wildcard DNS entries:
Non-enterprise customers can create but not proxy wildcard records.
If you create wildcard records, these wildcard subdomains are served directly without any Cloudflare performance, security, or apps. As a result, Wildcard domains get no cloud (orange or grey) in the Cloudflare DNS app. If you are adding a * CNAME or A Record, make sure the record is grey clouded in order for the record to be created.
What I'm wondering is whether one would still get the benefits of Cloudflare's infrastructure if the target of the wildcard CNAME record IS a Cloudflare Worker, like my-app.my-zone.workers.dev. I imagine that since this is a Cloudflare-controlled resource, it would still be protected against DDoS, for example. Or is so much of the Cloudflare security and performance handling happening at this initial DNS stage that it will be lost even if the target is a Cloudflare Worker?
Also posted to the Cloudflare community: https://community.cloudflare.com/t/wildcard-dns-entry-protection-if-target-is-cloudflare-worker/359763

I believe you are correct that there will be some basic level of Cloudflare services in front of Workers, but I don't think you'll be able to configure them at all if accessing the Worker directly (e.g. via a grey-cloud CNAME record pointed at it). The documentation is a little fuzzy on the Cloudflare side of things here, however.
They did add functionality a little while back to show the order of operations of their services, and Workers seem to be towards the end (meaning everything else sits in front). However, I would think this only applies if you bind the Worker to a route that is covered by a Cloudflare-enabled DNS entry.
https://blog.cloudflare.com/traffic-sequence-which-product-runs-first/
The good news is you should be able to test this fairly easily. For example, you can:
Set up a worker with a test route
Point a DNS-only (grey cloud) record at it
Confirm you can make a request to the worker
Add a firewall rule to block the test route
See if you can still make the request to the worker
This will at least give you an answer on whether your zone settings apply when accessing a worker (even through a grey-cloud / wildcard DNS entry), although it will not answer what kind of built-in, non-configurable services sit in front of Workers.
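As a rough sketch of those steps, assuming a hypothetical zone example.com with the worker bound to a test route on test.example.com and the grey-cloud CNAME pointing at my-app.my-zone.workers.dev:
# Confirm the grey-cloud record resolves straight to the workers.dev target
dig +short CNAME test.example.com
# Make a baseline request to the worker through that hostname; note the status and any TLS details
curl -sv -o /dev/null https://test.example.com/
# After adding a zone firewall rule that blocks the test route, repeat the request;
# if the status code does not change, the zone settings are being bypassed
curl -s -o /dev/null -w "%{http_code}\n" https://test.example.com/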

Related

Cloud Run with Fastly or Cloudflare

When I want to map a custom domain to my Cloud Run services, is this a one-time-only thing I need to do via a CNAME record, or is this validated on a continuous basis?
I would like to have a CNAME record from Fastly which shields my Cloud Run service.
The same functionality applies on Cloudflare with a DNS record (without proxy) pointing to the Cloud Run service and then enabling the proxy functionality. Everything seems to work fine (with Cloudflare), but I don't know if this will break in the future. I would also like to be able to do the same for Fastly.
You need to update your DNS records for two things when doing a domain mapping of your Cloud Run service: domain ownership verification and domain mapping creation.
The domain verification is a one-time operation; once verified, you can clean up the DNS records used for it.
The actual mapping of the verified domain to the Cloud Run service requires setting either CNAME or A records, and these must stay in your DNS records in order for the mapping to keep working.
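As a rough illustration (app.example.com is a made-up hostname; the exact record targets are whatever the Cloud Run domain-mapping page gives you), the records that must stay in place can be checked like this:
# The mapping CNAME for a subdomain must remain in your zone
dig +short CNAME app.example.com
# Expected: the target Cloud Run gave you, typically ghs.googlehosted.com.
# For an apex/root mapping, A and AAAA records are used instead
dig +short A example.com
dig +short AAAA example.com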

How can I check who is running my DNS?

I set up Cloudflare with SSL and a 301 redirect to HTTPS this morning. Everything seemed to work, but now I'm back on HTTP and the redirect is not working. I'm trying to figure out why, and the DNS system is sometimes a bit hard to decipher. I'm using a Swedish registrar, Loopia. Loopia in turn passes the DNS records to Cloudflare.
Is there some way to figure out if I even go through Cloudflare any more?
To determine which name servers you have set:
dig NS DOMAIN
This should only return Cloudflare name servers (unless you enabled Cloudflare via your hosting provider's integration). If you see other name servers in addition to the Cloudflare ones, that indicates you left your previous name servers in place when you set up Cloudflare. To use Cloudflare you'd need to remove all name servers other than the ones they provide. Other name servers being in place would return non-Cloudflare IPs, which would explain the behavior you're seeing.
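For example, with example.com standing in for your domain, you could run the lookup above and also check whether requests are actually passing through Cloudflare's proxy (when it is in the path, Cloudflare adds response headers such as cf-ray and server: cloudflare):
# Which name servers are authoritative for the domain?
dig NS example.com +short
# Is traffic actually going through Cloudflare's proxy?
curl -sI https://example.com/ | grep -iE 'cf-ray|^server'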

Medium sized website: Transition to HTTPS, Apache and reverse proxy

I have a medium-sized website called algebra.com. As of today, it is ranked the 900th website in the US in Quantcast ratings.
At the peak of its usage, during weekday evenings, it serves 120-150 requests for objects per second. Almost all objects, INCLUDING IMAGES, are dynamically generated.
It has 7.5 million page views per month.
It is served by Apache2 on Ubuntu and is supplemented by a Perlbal reverse proxy, which helps reduce the number of Apache slots/child processes in use.
I spent an inordinate amount of time working on performance for HTTP and the result is a fairly well functioning website.
Now that the times call for transition to HTTPS (fully justified here, as I have logons and registered users), I want to make sure that I do not end up with a disaster.
I am afraid, however, that I may end up with a performance nightmare, as HTTPS sessions last longer and I am not sure whether a reverse proxy can help as much as it did with HTTP.
Secondly, I want to make sure that I will have enough CPU capacity to handle HTTPS traffic.
Again, this is not a small website with a few hits per second; we are talking 100+ hits per second.
Additionally, I run multiple sites on one server.
For example, can I have a reverse proxy that supports several virtual domains on one IP (SNI) and translates HTTPS traffic into HTTP, so that I do not have to encrypt twice (once by Apache for the proxy, and once by the proxy for the client browser)?
What is the "best practices approach" to have multiple websites, some large, served by a mix of HTTP and HTTPS?
Maybe I can continue running Perlbal on port 80 and run nginx on port 443? Can nginx be configured as a reverse proxy for multiple HTTPS sites?
You really need to load test this, and no one can give a definitive answer other than that.
I would offer the following pieces of advice though:
First up, Stack Overflow is really for programming questions. This question probably belongs on the sister site www.serverfault.com.
HTTPS processing is, IMHO, not an issue for modern hardware unless you are encrypting large volumes of traffic (e.g. video streaming), especially with proper caching and other performance tuning that I presume you've already done based on what you say in your question. However, I have not dealt with a site with your level of traffic, so it could become an issue there.
There will be a small hit to clients as they negotiate the HTTPS session on the initial connection. This is in the order of a few hundred milliseconds, will only happen on the initial connection for each session, and is unlikely to be noticed by most people, but it is there.
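If you want to measure that hit for your own site, curl can break the connection timings down; a small sketch, with example.com as a placeholder:
# time_connect    = TCP handshake finished
# time_appconnect = TLS handshake finished
# The difference is roughly the TLS negotiation cost on a cold connection.
curl -s -o /dev/null -w 'TCP: %{time_connect}s  TLS: %{time_appconnect}s\n' https://example.com/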
There are several things you can do to optimise HTTPS, including choosing fast ciphers and implementing session resumption (there are two methods for this, and it can get complicated on load-balanced sites). SSL Labs runs an excellent HTTPS tester to check your setup, Mozilla has some great documentation and advice, or you could check out my own blog post on this.
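As a quick, hedged way to check session resumption from the command line (example.com is a placeholder), openssl's s_client can reconnect with the same session and report whether it was reused; note the output differs slightly between TLS 1.2 and 1.3:
# -reconnect does the full handshake, then reconnects 5 more times with the same session.
# Look for "Reused" lines in the output; "New" on every connection suggests resumption isn't working.
openssl s_client -connect example.com:443 -servername example.com -reconnect < /dev/null 2>/dev/null | grep -E '^(New|Reused)'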
As to whether you terminate HTTPS at your end point (proxy/load balancer), that's very much up to you. Yes, there will be a performance hit if you re-encrypt to HTTPS again to connect to your actual server. Most proxy servers also allow you to just pass the HTTPS traffic through to your main server so you only decrypt once, but then you lose the original IP address from your web server logs, which can be useful. It also depends on whether you access your web server directly at all. For example, at my company we don't go through the load balancer for internal traffic, so we enable HTTPS on the web server as well and make the load balancer re-encrypt to connect to it, so we can view the site over HTTPS.
Other things to be aware of:
You could see an SEO hit during migration. Make sure you redirect all traffic, tell Google Search Console your preferred site (http or https), update your sitemap and all links (or make them relative).
You need to be aware of insecure content issues. All resources (e.g. CSS, JavaScript and images) need to be served over HTTPS, or you will get browser warnings and browsers will refuse to use those resources. HSTS can help with links on your own domain for browsers that support HSTS, and CSP can also help (either to report on them or to automatically upgrade them, for browsers that support upgrade-insecure-requests).
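You can check whether those headers are actually being sent with a quick header dump (example.com is a placeholder; the values shown are only illustrative):
# Look for HSTS and a CSP that reports on or upgrades insecure requests
curl -sI https://example.com/ | grep -iE 'strict-transport-security|content-security-policy'
# e.g. strict-transport-security: max-age=31536000; includeSubDomains
#      content-security-policy: upgrade-insecure-requests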
Moving to HTTPS-only does take a bit of effort, but it's a one-off, and after that it makes your site much easier to manage than trying to maintain two versions of the same site. The web is moving to HTTPS more and more, and if you have (or are planning to have) logged-in areas then you have no choice, as you should 100% not use HTTP for those. Google gives a slight ranking boost to HTTPS sites (though it's apparently quite small, so it shouldn't be your main reason to move), and has even talked about actively showing HTTP sites as insecure. Better to be ahead of the curve IMHO and make the move now.
Hope that's useful.

Is it possible to host only subdomain on cloudflare

I want to host only a subdomain on Cloudflare. I do not want to change the name servers of my main domain to theirs. Is it really possible?
Yes, this is possible; however, it needs to be set up via a Cloudflare partner, or you need to be on the Business or Enterprise plan. They can set up domains via a CNAME record instead of moving name servers.
There is a complete list of partners at: https://www.cloudflare.com/hosting-partners
We use this at Creare: it allows us to set up a client's site on Cloudflare, yielding the performance and security benefits without altering their name servers (where that is impractical or the client doesn't want us to). We provide this option without them needing a Business or Enterprise plan, which keeps the price lower for the client.
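From the outside, a CNAME setup of this kind typically looks something like the following (sub.example.com is a stand-in; the exact CNAME target is provided during the partner/Cloudflare setup, and the .cdn.cloudflare.net form shown is just the usual pattern):
# Only the subdomain points at Cloudflare, e.g. sub.example.com -> sub.example.com.cdn.cloudflare.net.
dig +short CNAME sub.example.com
# The main domain's name servers stay wherever they are today
dig NS example.com +short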

DNS propagation - why is a subdomain only accessible minutes after its creation (own nameservers)

We have a question about the behavior of DNS propagation for subdomains.
Here's the scenario we are trying to achieve:
User1 registers at our site "company.com". A subdomain "user1.company.com" is automatically created, not as an add-on domain of "company.com", but as a standalone account in WHM.
So a separate zone is created and an A record is set (same IP as company.com).
NS records are also set to "ns1.domain2.com" and "ns2.domain2.com", our own nameservers
(no clustering, 2 different IPs, BIND method; they are provided for the moment by the same WHM installation as company.com & its subdomains).
Domain2.com is handled by our registrar GoDaddy, nameservers ns1 + ns2 are also defined there.
Our problem is that, right after creation, the domain "user1.company.com" is not immediately accessible (unlike an add-on domain).
When we nslookup the new domain "user1.company.com" with our ISP's nameservers right after its creation, we get "Non-existent domain". Then, after 1-15 minutes, and depending on the DNS server we try, nslookup answers with the correct IP address. Google's DNS (8.8.8.8), for example, answers immediately with the correct IP address!
What exactly happens when the user tries to access his subdomain "user1.company.com"? Are our nameservers contacted to resolve the subdomain, or are subdomains somehow propagated across DNS servers worldwide? But then why does Google's DNS answer immediately? Propagation can't be that fast!
Any ideas where may be the problem and how to make a subdomain account accessible immediately after its creation, regardless of the DNS being used by the user?
Many thanks
Marc
My understanding of DNS is that, to resolve a URL, the process is as follows:
Usually when we contact a DNS server it is not the first time it has received a request for a specific website; servers keep the records from previous requests until they expire. The amount of time until expiry (the TTL, a value in seconds) varies depending on your settings. If you change your records, the chances are there are plenty of DNS servers out there with the old cached records. Once these records expire, or the first time a server receives a request for a name it does not know, the DNS server does a 'recursive lookup' in order to get fresh data.
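You can watch that caching happen: a recursive resolver returns the record with a TTL that counts down until it expires and is fetched fresh again. A small sketch against Google's public resolver, with example.com as a placeholder:
# Ask the same resolver twice; the second answer's TTL is lower because it
# comes from cache and is counting down toward expiry.
dig @8.8.8.8 example.com A +noall +answer
sleep 10
dig @8.8.8.8 example.com A +noall +answer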
The following is a recursive lookup of a.contoso.com. (Notice the dot at the end which is normally hidden)
The process starts off working backwards, starting with the hidden dot at the end of a URL:
1 - Contact the root name servers (the "dot" servers). Their IP addresses are pre-loaded onto DNS servers; these IPs are the same for every server and don't change. They return the addresses of the .COM DNS servers (or whichever TLD you use, such as .net).
2 - You then query the .COM DNS servers for 'contoso' in contoso.com (this may be where your problem lies if you've changed nameservers).
3 - You then query contoso.com's DNS (your nameservers) for the 'a' in a.contoso.com
ad infinitum (b.a.contoso.com, 2.ww.c.b.a.contoso.com, ...)
The process of these recursive lookups replacing stale records is known as propagation.
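dig can replay this whole chain for you with +trace, which starts at the root servers and follows each referral down, mirroring the steps above (a.contoso.com is just the example name used there):
# Follow the delegation: root servers -> .com servers -> contoso.com's servers -> answer
dig +trace a.contoso.com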
I would presume you're not getting a request sent to your name server because of propagation during a switch-over; rather, your server is not replicating the A record to the nameserver correctly.
Domain propagation is only an issue when transferring a domain, not when a new one is created, as you're not having to deal with out-of-date records; those records never existed. The request will go straight to the source.
This is most likely due to negative caching. That is to say, a DNS server remembers that the subdomain doesn't exist and replies with NXDOMAIN without checking whether that is still true. You can find the TTL for negative caching in the SOA record of the apex domain; in your case: dig SOA company.com.
Another, unlikely, cause could be that not all the authoritative DNS servers are in sync yet. Since they operate independently, it can take some time before all authoritative DNS servers have the same records. This is called zone transfer, and happens through the AXFR and IXFR pseudo record types.
To debug this issue, visit a DNS lookup tool, and check the following:
Do the authoritative servers reply with the correct records?
What is the negative cache TTL in the SOA record?
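Both checks can also be done with dig directly, using the names from the question (company.com, user1.company.com, and ns1.domain2.com as one of the authoritative servers):
# 1. Ask an authoritative server directly: does it already know the new name?
dig @ns1.domain2.com user1.company.com A +noall +answer +authority
# 2. The negative-caching TTL is the last field of the SOA record
dig SOA company.com +noall +answer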
As for why some recursive DNS servers immediately reply with the correct response: they might have made different trade-offs in how aggressively they cache records, and TTLs are not always followed by DNS resolvers. Or the caches of some resolvers might simply not contain the negative answer yet, causing them to ask the authoritative DNS servers immediately.