DNS propagation - why is a subdomain only accessible minutes after its creation (own nameservers) - apache

We have a question about the behavior of DNS propagation for subdomains.
Here's the scenario we are trying to achieve:
User1 registers at our site "company.com". A subdomain "user1.company.com" is automatically created, not as an add-on domain of "company.com", but as a standalone account in WHM.
So a separate zone is created and an A record is set (same IP as company.com).
NS records are also set to "ns1.domain2.com" and "ns2.domain2.com", our own nameservers
(no clustering, two different IPs, BIND method; for the moment they are provided by the same WHM installation as company.com and its subdomains).
domain2.com is handled by our registrar GoDaddy, where the nameservers ns1 and ns2 are also defined.
Our problem is that right after creation, the domain "user1.company.com" is not immediately accessible (unlike an add-on domain).
When we nslookup the new domain "user1.company.com" against our ISP's nameservers right after its creation, we get "Non-existent domain". Then, after 1-15 minutes, depending on the DNS server we try, the nslookup answers with the correct IP address. Google's DNS (8.8.8.8), for example, immediately answers with the correct IP address.
What exactly happens when the user tries to access his subdomain "user1.company.com"? Are our nameservers contacted to resolve the subdomain, or are subdomains somehow propagated across DNS servers worldwide? But why does Google's DNS answer immediately? Propagation can't be that fast!
Any ideas where the problem may be, and how to make a subdomain account accessible immediately after its creation, regardless of the DNS server being used by the user?
Many thanks
Marc

My understanding of DNS is that to resolve a URL's hostname, the process is as follows:
Usually when we contact a DNS server, it is not the first time it has received a request for a specific website; servers keep the records from previous requests until they expire. The time until expiry (the TTL, a value in seconds) varies depending on your settings. If you change your records, chances are there are plenty of DNS servers out there still holding the old cached records. Once those records expire, or the first time a server receives a request for a name it does not know, the DNS server does a 'recursive lookup' to get fresh data.
The following is a recursive lookup of a.contoso.com. (notice the dot at the end, which is normally hidden).
The process works backwards, starting with that hidden dot at the end of the name:
1 - Contact the root name servers (the 'dot' servers). Their IP addresses are pre-loaded onto DNS servers; these IPs are the same for every server and rarely change. They hand back the addresses of the .COM DNS servers (or whichever TLD you use, such as .net).
2 - You then query the .COM DNS servers for the 'contoso' in contoso.com. (This may be where your problem lies if you've changed nameservers.)
3 - You then query contoso.com's DNS (your nameservers) for the 'a' in a.contoso.com,
and so on ad infinitum (b.a.contoso.com, 2.ww.c.b.a.contoso.com, ...).
The process of these recursive lookups replacing stale records is what is loosely called propagation.
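You can watch this delegation chain yourself with dig, using the example name from above (the exact output will vary):

dig +trace a.contoso.com
# Walks the lookup from the root down, bypassing caches: the root servers refer
# you to the .com servers, the .com servers refer you to contoso.com's
# nameservers, and those answer for a.contoso.com.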
I would presume the request isn't being sent to your nameserver because of propagation during a switch-over, or because your server is not replicating the A record to the nameserver correctly.
Domain propagation is only an issue when transferring a domain, not when a new one is created, as you're not dealing with out-of-date records - those records never existed. The request will go straight to the source.

This is most likely due to negative caching. That is to say, a DNS resolver remembers that the subdomain doesn't exist, and replies with NXDOMAIN without checking whether that's still true. You can find the TTL for negative caching in the SOA record of the apex domain. In your case: dig SOA company.com.
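A sketch of how to read that value (the output shown is illustrative, not your real zone):

dig SOA company.com +noall +answer
# company.com. 86400 IN SOA ns1.domain2.com. hostmaster.company.com. 2023010101 3600 900 1209600 3600
# The last field (here 3600) is the negative-caching TTL: how long resolvers may
# cache an NXDOMAIN for names in this zone.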
Another, unlikely, cause could be that not all the authoritative DNS servers are in sync yet. Since they operate independently, it can take some time before all authoritative DNS servers serve the same records. They are kept in sync through zone transfers, which happen via the AXFR and IXFR pseudo record types.
To debug this issue, use a DNS lookup tool and check the following (dig equivalents for both checks are sketched after the list):
Do the authoritative servers reply with the correct records?
What is the negative cache TTL in the SOA record?
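On the command line, those two checks could look like this (the nameserver and record names are taken from the question):

dig @ns1.domain2.com user1.company.com A +norecurse
dig @ns2.domain2.com user1.company.com A +norecurse
# Each authoritative server is asked directly, bypassing all caches.
dig company.com +nssearch
# Shows the SOA serial every listed nameserver reports, so a lagging zone
# transfer stands out.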
As for why some recursive DNS servers immediately reply with the correct response: they may have made different trade-offs in how aggressively they cache records (TTLs are not always honored by resolvers), or their caches may simply not have held this record yet, causing them to ask the authoritative DNS servers immediately.

Related

Cloudflare wildcard DNS entry - still protected if the target IS a Cloudflare Worker?

I see that in Cloudflare's DNS FAQs they say this about wildcard DNS entries:
Non-enterprise customers can create but not proxy wildcard records.
If you create wildcard records, these wildcard subdomains are served directly without any Cloudflare performance, security, or apps. As a result, Wildcard domains get no cloud (orange or grey) in the Cloudflare DNS app. If you are adding a * CNAME or A Record, make sure the record is grey clouded in order for the record to be created.
What I'm wondering is whether one would still get the benefits of Cloudflare's infrastructure if the target of the wildcard CNAME record IS a Cloudflare Worker, like my-app.my-zone.workers.dev? I imagine that since this is a Cloudflare-controlled resource, it would still be protected from DDoS, for example. Or is so much of the Cloudflare security and performance handling happening at this initial DNS stage that it will be lost even if the target is a Cloudflare Worker?
Also posted to Cloudflare support: https://community.cloudflare.com/t/wildcard-dns-entry-protection-if-target-is-cloudflare-worker/359763
I believe you are correct that there will be some basic level of Cloudflare services in front of Workers, but I don't think you'll be able to configure them at all if accessing the worker directly (e.g. via a grey-cloud CNAME record pointed at it). Documentation is a little fuzzy on the Cloudflare side of things, however.
They did add functionality a little while back to show the order of operations of their services, and Workers seem to be towards the end (meaning everything else sits in front). However, I would think this only applies if you bind the worker to a route that is covered by a Cloudflare-enabled DNS entry.
https://blog.cloudflare.com/traffic-sequence-which-product-runs-first/
The good news is you should be able to test this fairly easily. For example, you can (a curl sketch follows the list):
Setup a worker with a test route
Point a DNS-only (grey cloud) record at it
Confirm you can make a request to the worker
Add a firewall rule to block the test route
See if you can still make the request to the worker
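A minimal sketch of that test with curl, assuming test.example.com is the grey-clouded name pointing at the worker and /ping is the test route (both are placeholders):

curl -i https://test.example.com/ping
# Now add a zone firewall rule that should block this route, then repeat:
curl -i https://test.example.com/ping
# If the second request still succeeds, the zone's firewall rules are not
# being applied on this path.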
This will at least give you an answer on whether your zone settings apply when accessing a worker (even through a grey-cloud / wildcard DNS entry), although it will not answer what kind of built-in, non-configurable services sit in front of Workers.

Action Required: S3 shutting down legacy application server capacity

I got a mail from Amazon Web Services (S3) stating the details below:
"We are writing to you today to let you know about changes which impact your use of the Amazon Simple Storage Service (S3). In efforts to best serve our customers, we have improved the systems powering the Amazon S3 API and are in the process of shutting down legacy application server capacity. We have detected access on the legacy capacity for Amazon S3 buckets that you own. The legacy capacity is no longer in service, as the DNS entry for the S3 endpoint no longer includes the IP addresses associated with it. We will be shutting down the legacy capacity and retiring the set of IP addresses fronting this capacity after April 1, 2020."
I want to find out which legacy system I am using, and how to prevent this from affecting my services.
Imagine you had a web site, www.example.com.
In DNS, that name was pointed to your web server at 203.0.113.100.
You decide to buy a new web server, and you give it a new IP address, let's say 203.0.113.222.
You update the DNS for example.com to point to 203.0.113.222. Within seconds, traffic starts arriving at the new server. Over the coming minutes, more and more traffic arrives at the new server, and less and less arrives at the old server.
Yet, for some strange reason, a few of your site's prior visitors are still hitting that old server. You check the DNS and it's correct. Days go by, then weeks, and somehow a few visitors who used your old server before the cutover are still hitting it.
How is that possible?
That's the gist of the communication here from AWS. They see your traffic arriving on unexpected S3 server IP addresses, for no reason that they can explain.
You're trying to connect to the right endpoint -- that's not the issue -- the problem is that for some reason you have somehow "cached" (using the term in a very imprecise sense) an old DNS lookup and are accessing a bucket by hitting a wrong, old S3 IP address.
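One way to sanity-check this from an affected host (the bucket name is a placeholder):

dig +short my-bucket.s3.amazonaws.com
# ...shows what the endpoint resolves to right now. Then compare against the
# addresses the host actually has HTTPS connections open to:
ss -tn state established '( dport = :443 )'
# Connected S3 addresses that never appear in the dig output suggest something
# on this host is holding a stale resolution.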
If you have a Java backend service accessing S3, those are notorious for holding on to DNS lookups forever. You might need to restart that service, and look into how to enable correct behavior - which, as I understand it, is not how Java behaves by default. (Not claiming to be a Java expert, but I've encountered this sort of DNS behavior many times.)
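A sketch of the usual JVM-side fix, assuming a stock JVM; my-backend.jar is a placeholder. The supported setting is networkaddress.cache.ttl in $JAVA_HOME/conf/security/java.security; the system property below is its legacy equivalent and caps the JVM's internal DNS cache at 60 seconds:

java -Dsun.net.inetaddr.ttl=60 -jar my-backend.jar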
If you have an HAProxy or Nginx server that's front-ending for an S3 bucket and has been up for a while, it might need a restart, and you should look into how to configure it so that it doesn't resolve DNS only at startup. I ran into exactly this issue once, years ago, except my HAProxy was forwarding requests to Amazon CloudFront on only 1 of the several IP addresses it could have been using. They took that CloudFront edge server offline, or it failed, or whatever, and the DNS was updated... but my proxy never re-queried DNS, so it just kept trying and failing until I restarted it. Then I fixed it so that it periodically repeated the DNS lookup and always had a current address.
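With Nginx, for example, one common workaround is to proxy to a variable, which makes Nginx re-resolve the name via the configured resolver instead of only once at startup; a sketch, where the resolver address and bucket name are placeholders:

resolver 10.0.0.2 valid=30s;              # re-check DNS every 30 seconds
location / {
    set $s3_host my-bucket.s3.amazonaws.com;
    proxy_pass https://$s3_host;          # variable target => resolved at runtime
}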
If you have your own DNS resolver servers, you might want to verify that they aren't somehow misbehaving, and ensure that you don't have any static host entries in /etc/hosts (or equivalent) for anything related to S3.
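Two quick checks for that (the bucket name is illustrative):

grep -i s3 /etc/hosts                        # any hard-coded S3 entries?
getent hosts my-bucket.s3.amazonaws.com      # what this host's resolver returns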
There could be any number of causes but I'm confident at least in my interpretation of what they say is happening.

Domain name and Dynamic IP Address

I have recently acquired a domain name from GoDaddy. At home I am trying to set up a Nextcloud server. Since my ISP gives me a dynamic IP address, I had to create another domain name on the No-IP website. Furthermore, I want to forward HTTP requests to HTTPS. The following questions arise:
Do I create the SSL certificate (with Let's Encrypt) for the GoDaddy domain or the No-IP domain?
What is the correct forwarding sequence here? Assume the GoDaddy domain is foo.com and the No-IP one is bar.dyndns.me, and the user types foo.com; my server's Apache settings would forward foo.com:80 to :443, but I guess this should be corrected to my dyndns name. I am confused.
I would appreciate any help - thank you.
You are making it too complicated. Instead of using a redirect, you should request a static IP from your ISP. This costs money, varying by provider, but then you only need one domain. You then apply the SSL certificate to that domain and enforce SSL-only with your hosting server (i.e. Apache, IIS).
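Enforcing that in Apache could look roughly like this (a sketch: foo.com is from the question; the certificate paths assume a default Let's Encrypt/certbot layout):

<VirtualHost *:80>
    ServerName foo.com
    Redirect permanent / https://foo.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName foo.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/foo.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/foo.com/privkey.pem
    # ... rest of the site configuration ...
</VirtualHost>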
You can write a simple app/script to manage the dynamic DNS from your server using the GoDaddy API; that's what I have been doing for ~3 years now, as my ISP wants a stupid amount for a static IP. I have mine pinging out every 10 minutes to check if my IP changed (my ISP sucked for a while and mine would change several times a day).
Here are some links to various implementations of the GoDaddy API (a minimal shell sketch follows the list):
BASH
Python
Powershell
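Such an updater could look roughly like this; a sketch assuming the GoDaddy v1 records API, with placeholder domain and credentials:

#!/usr/bin/env bash
# Hypothetical dynamic-DNS updater; fill in your own values.
DOMAIN="foo.com"
API_KEY="your-key"
API_SECRET="your-secret"

IP=$(curl -s https://api.ipify.org)   # current public IP, via a third-party service

# Overwrite the apex A record with the current IP:
curl -s -X PUT "https://api.godaddy.com/v1/domains/${DOMAIN}/records/A/@" \
  -H "Authorization: sso-key ${API_KEY}:${API_SECRET}" \
  -H "Content-Type: application/json" \
  -d "[{\"data\": \"${IP}\", \"ttl\": 600}]"

Run it from cron every 10 minutes or so, as described above.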
So I think I have a fix for this. Before I give you my answer, I will outline the problems with the other solutions.
Static IP from your ISP. The problem with this is that it may cost too much. (However, if it's cheap, I'd probably go with this solution.)
Script that updates the GoDaddy DNS. This is okay, but only if you can allow for some outage time between changes. (The DNS change will take time to propagate - up to 24 hours.)
Upgrade your No-IP account to Plus Managed DNS; it costs $29.95 a year. However, it will allow you to bring your own domain name from another provider like GoDaddy. Depending on how often your No-IP client runs, there could be a very small outage between changes.
https://www.noip.com/support/knowledgebase/can-i-use-my-own-domain-name-with-no-ip/

How can I check who is running my DNS?

I set up Cloudflare with SSL and a 301 redirect to SSL this morning. Everything seemed to work, but now I'm back on HTTP and the redirect is not working. I'm trying to figure out why, and the DNS system is sometimes a bit hard to decipher. I'm using a Swedish registrar, Loopia. Loopia in turn delegates the DNS records to Cloudflare.
Is there some way to figure out if I even go through Cloudflare any more?
To determine which name servers you have set:
dig NS DOMAIN
This should only return Cloudflare name servers (unless you enabled Cloudflare via your hosting provider's integration). If you see other name servers in addition to the Cloudflare ones, that indicates you left your old name servers in place when you set up Cloudflare. To use Cloudflare, you'd need to remove all name servers other than the ones they provide. Other name servers being in place would return non-Cloudflare IPs, which would explain the behavior you're seeing.
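For example (the output is illustrative; Cloudflare assigns each account its own pair of nameserver hostnames):

dig NS example.com +short
# kate.ns.cloudflare.com.
# rob.ns.cloudflare.com.
# Anything else in this list means the zone is not fully delegated to Cloudflare.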

Web site not accessible when "www" is not used

I have a web domain registered and a hosting space.
When I access my website with www (for ex. www.example.com) it shows the expected content. However, when I try to access it without www (for ex. example.com), it shows a site-under-construction page. This site-under-construction page is provided by the web hosting provider and is an HTML file.
What changes are required for accessing site both ways?
Set up an A record for the domain name without the 'www' prefix pointing to the IP address of the web server, and set up a CNAME record for the domain name with the 'www' prefix pointing to the base domain name (a CNAME must point at a name, not at an IP).
Use a CNAME record for "www" to point it to the base name, and an A record for the base name.
But I find it easier (and ever so slightly faster for users) to simply use an A record for both the base name and www.
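In BIND zone-file terms, the two variants would look roughly like this (example.com and 203.0.113.10 are placeholders):

; Variant 1: A record for the bare name, CNAME for www
example.com.      IN A     203.0.113.10
www.example.com.  IN CNAME example.com.

; Variant 2: an A record for both names
example.com.      IN A     203.0.113.10
www.example.com.  IN A     203.0.113.10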
Creating the A record and CNAME record usually is the solution - but it must be done on your authoritative DNS.
You will want to put the A and/or CNAME records into the master (authoritative) DNS, not a secondary/local DNS. There are two kinds of DNS at play here:
authoritative DNS (master DNS)
local DNS (usually resides on your host machine/router) (secondary DNS)
Indeed, it is not as simple as it may seem. To run your own working authoritative DNS, you need two host machines physically connected to two different, separate IP addresses (a physical eth0 connection, not virtually bridged). Since this is such a complicated and time-consuming implementation, it is typical to outsource the master DNS to a DNS provider, which is common practice among many of us.
I have 4 servers on my one IP, and my local DNS is managed between my router and the 4 host machines; it works great on the local network ONLY. Since I wanted the local network to be reachable via my domain, I outsourced my master DNS to http://dnsimple.com (there are other competing DNS providers), so it manages my domain directly. It therefore functions as the authoritative DNS, known as the master DNS.
The fix you are looking for should be aimed at the master DNS, not the secondary DNS (local network), as the latter won't work. If you got your domain via a registrar or a web-hosting company, you should be able to find the DNS settings/management in your account with that company (for example, in cPanel) - not in the DNS on your local network.
EDITED: This is a tool that I always use, and it is a great help in tracking down DNS / domain issues. I don't know what I'd have done without it. http://www.dnsstuff.com/tools