AWS Route 53 "Failure: DNS resolution failed: Rcode NXDomain(3)" - amazon-s3

AWS Route 53 / S3 static website
I have a domain / Route 53 hosted zone with several A records. One A record in particular has started producing the error "Failure: DNS resolution failed: Rcode NXDomain(3)" when it attempts to resolve.
user.samtec-atg.com
This is a static website hosted on S3. The S3 link works, but configuring a record set for this link using either an Alias or a CNAME produces the error "Failure: DNS resolution failed: Rcode NXDomain(3)"
Again, I have several S3 websites with the same root domain, but only this link is producing the error.
How can I get this resolved?

As this is the very first item in Google's search results for the subject and it has no clear answer, I decided to dig into it.
https://docs.aws.amazon.com/cli/latest/reference/route53/get-health-check.html says:
If you want to check the health of weighted, latency, or failover resource record sets and you choose to specify the endpoint only by FullyQualifiedDomainName, we recommend that you create a separate health check for each endpoint.
So, if you're routing traffic to resources that you can't create alias records for, such as EC2 instances, you create a record and a health check for each resource. Then you associate each health check with the applicable record. Health checks regularly check the health of the corresponding resources, and Route 53 routes traffic only to the resources that health checks report as healthy.
I ran into this when I tried to perform a health check on a domain name in my private zone in Route 53 (instead of a separate health check for each record/EC2 instance).
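For reference, creating one health check per endpoint can be done from the AWS CLI. This is a minimal sketch, assuming a hypothetical endpoint IP and domain name (replace them with your own values):

    # One health check per endpoint (EC2 instance), to be associated with its record
    aws route53 create-health-check \
        --caller-reference app1-check-1 \
        --health-check-config '{
            "IPAddress": "203.0.113.10",
            "FullyQualifiedDomainName": "app1.example.com",
            "Port": 80,
            "Type": "HTTP",
            "ResourcePath": "/",
            "RequestInterval": 30,
            "FailureThreshold": 3
        }'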

It was recognized that your DNS was hosted in two parent zones, so it was rejected.

Related

AWS: I can't link my route 53 configuration to my S3 bucket

I have 2 Route 53 hosted zones; let's call them myfirsturl.com and mysecondurl.com.
For both of them, I have created a bucket in S3 named after my domain names. I have verified it multiple times letter by letter.
Both of my buckets have static content available from the S3 endpoint, with the right policy, etc.: the 2 endpoints work perfectly.
The 1st domain was bought in Route 53, and when I connect to it, it opens my static website; all is good.
My second domain name was transferred to Amazon last month, and in Route 53 I can find the S3 bucket in the list of targets when I create the record set, but it doesn't reach the static website.
Another point: I created a WordPress site a few days ago, behind a load balancer, etc., and I linked wp.myfirsturl.com to it: it worked perfectly.
I tried the same with wp.mysecondurl.com, pointing to the same load balancer; it never worked.
I can't find any lead, as I can't see any difference between my 2 domain names, except where I bought them.
Another difference:
The 1st is something like sometexte.info
The 2nd is something like sometext-othertext.fr
Maybe the hyphen is a problem? (it's not, according to the docs)
Does anyone have a lead, please?
The bucket must have the same name as your domain or subdomain in Route53. For example, if you want to use the subdomain acme.example.com, the name of the bucket must be acme.example.com. Have a look at this documentation for more information.
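As a rough sketch of how the names have to line up, assuming a hypothetical hosted zone ID and the us-east-1 S3 website endpoint (look up the endpoint name and its hosted zone ID for your bucket's region in the AWS docs):

    # Bucket name must exactly match the record name you want to serve
    aws s3api create-bucket --bucket acme.example.com --region us-east-1

    # Alias record pointing acme.example.com at the S3 website endpoint
    # (Z3AQBSTGFYJSTF / s3-website-us-east-1.amazonaws.com are the us-east-1 values;
    #  other regions use different values)
    aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "acme.example.com",
              "Type": "A",
              "AliasTarget": {
                "HostedZoneId": "Z3AQBSTGFYJSTF",
                "DNSName": "s3-website-us-east-1.amazonaws.com",
                "EvaluateTargetHealth": false
              }
            }
          }]
        }'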

"Unauthorized" error in adding SSL Certificate to an AWS EC2 instance using Let's Encrypt

I have a server running on Amazon Web Services as an EC2 instance and want to reach it in a secured manner (HTTPS). I decided to use Let's Encrypt, following this tutorial to install the SSL certificates on the server (using the --webroot plugin type). I used PuTTY to reach the EC2 instance. In the final step, I was prompted to provide the domain name, wherein I keyed in the URL generated for the instance by AWS (not my own/masked domain name).
I get an Unauthorized error with a note saying
FailedChallenges Failed authorization procedure abcd.efgh.us-west-2.elasticbeanstalk.com (http-01) :urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://abcd.efgh.us-west-2.elasticbeanstalk.com/.well-known/acme-challenge/...
NOTE : abcd.efgh.us-west-2.elasticbeanstalk.com is just an example of an AWS domain name I have provided for the question.
I also get a note following the error:
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain contain(s)
the right IP address.
I'm not sure if this occurs because I'm directly using the AWS domain name and not a domain name that I own.
So will I get rid of this error if I use a domain name that I own, or is this issue because of something else that I need to add/change? Please advise.
The issue is that you are trying to run Let's Encrypt against the URL of the Amazon EC2 instance itself. You need to register a domain, point it at that EC2 instance, and then run Let's Encrypt with that domain name.
Helpful: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-14-04
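Once a domain you own resolves to the instance, the webroot flow looks roughly like this. A sketch, assuming certbot is installed, /var/www/html is your webroot, and example.com stands in for your real domain:

    # Run on the EC2 instance; the A record for example.com must already
    # point at this instance's public IP
    sudo certbot certonly --webroot -w /var/www/html \
        -d example.com -d www.example.com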

Not able to add custom domain in Salesforce

I have set up a community for one of our clients and am trying to add a domain for it in Administer | Domain Management | Domain. Every time I try to add the domain name as 'testcommunity', it gives me the error below:
Error: Salesforce.com can't validate the domain. The CNAME record may still be processing (which can take up to 24 hours), or the domain may not belong to you. Make sure the domain name 'testcommunity' uses testcommunity.****orgid****.live.siteforce.com as its CNAME target and try again later.
We added a CNAME in DNS management three days ago, and it has propagated successfully. Checking on 'www.whatsmydns.net' shows the CNAME is correctly pointing to testcommunity.****orgid****.live.siteforce.com.
It seems like a bug to me, as we followed the Help & Training tutorial for this and completed all the steps. Can anyone please help us out with this?
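For anyone hitting the same error, one way to cross-check the CNAME besides whatsmydns.net is to query a public resolver directly. A minimal sketch, with placeholder names in place of the real domain and org ID:

    # Ask Google's public resolver directly, bypassing any local DNS cache
    dig @8.8.8.8 +short CNAME testcommunity.example.com
    # Expected output: testcommunity.<orgid>.live.siteforce.com.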

MX record and Amazon S3

I'm having an issue with setting up my Google Apps account.
I believe that my S3 bucket is causing the problem.
I configured the MX records as Google asked me to, and today my DNS provider acknowledged that the records were propagated.
Now, when I try to continue the setup of my Google Apps account, it's stuck and doesn't provide any info. I have hosted a static website on an Amazon S3 bucket.
To check whether my MX records were available, I used the MX Toolbox tool, but they weren't there. Anybody with the same problem, or some professional advice?
BTW: the domain name is xntriek.be
What I suspect you will have to do is as follows:
1.) Change the settings at your DNS registrar to use a different name server. For my registrar, Namecheap, I go to Manage -> Transfer Name Server to 3rd Party (or some variant) -> (leave this screen up - there should be a set of 5+ blank records)
2.) Set up Amazon Route 53.
3.) "Create Hosted Zone" for your domain name in the Route 53 console
4.) This hosted zone should be associated with a "Delegation Set" (right side of R53 console) - 4 records which you will paste into the screen you found in (1) above.
5.) Save that, and configure Route 53 as you would have configured records with your DNS provider (CNAME aliasing and MX forwarding; a rough CLI sketch of the MX part follows at the end of this answer).
The reason this must be done in Route 53 and not at the registrar is that pointing the CNAME record alias at, say, www.yourdomain.com.aws.us-east.amazon (and so on) tells MX traffic to go to Amazon for instructions about what to do. Of course, there are no further instructions for that traffic if you have not set up Route 53.
I hope this helps!
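For step 5.), a rough sketch of the MX change via the AWS CLI, assuming a placeholder hosted zone ID and the MX hosts Google lists in its Google Apps setup instructions:

    aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "xntriek.be",
              "Type": "MX",
              "TTL": 3600,
              "ResourceRecords": [
                {"Value": "1 ASPMX.L.GOOGLE.COM"},
                {"Value": "5 ALT1.ASPMX.L.GOOGLE.COM"},
                {"Value": "5 ALT2.ASPMX.L.GOOGLE.COM"},
                {"Value": "10 ALT3.ASPMX.L.GOOGLE.COM"},
                {"Value": "10 ALT4.ASPMX.L.GOOGLE.COM"}
              ]
            }
          }]
        }'

    # Verify once the change has propagated
    dig @8.8.8.8 +short MX xntriek.be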

Cloudfront dist on top level domain

Is it possible to point a top-level domain like http://example.com to an Amazon CloudFront distribution?
I know it's possible with CNAMEs, but as far as I know, I need to set an A record for the top-level domain in the DNS settings.
As explained by @dgeske, this can be done.
In my case, I had not purchased the domain from Route 53, and hence had to do extra configuration.
Scenario: You have the following
A CloudFront distribution
A second-level domain (example.com) not purchased from Amazon Route 53. It was Google Domains in my case, but the idea will work for other providers also.
You want to point the second-level domain (example.com) to the CloudFront distribution (as opposed to a subdomain like www.example.com).
Your nomenclature is slightly inaccurate: example.com is not a TLD (top-level domain); it is what is called a second-level domain.
Steps to do this.
Create a hosted zone in Route 53.
Route 53 will now give you a list of name servers that you have to set in the domain settings panel of the provider from which you purchased the domain (Google domains in my case).
Go back to the Route 53 dashboard and create an A record with Alias for this hosted zone (use the 'Create Record Set' option). Remember to select the 'Yes' radio button for Alias. Make sure you leave the subdomain part empty (since we are only interested in creating a record for the second-level domain); a CLI sketch of this record appears at the end of this answer.
Now you should be able to access your cloudfront distribution at http://example.com.
Depending on your DNS server, it may take a while to get records updated.
You may configure your system to use a public DNS server such as 8.8.8.8 to verify whether you are able to access the CloudFront distribution using the URL. I used Firefox's DNS over HTTPS feature for this, which makes Firefox use Cloudflare's (not CloudFront's) DNS servers. You can also use the dig command-line utility: dig @8.8.8.8 example.com (my domain was fightcoronapune.com, hence dig @8.8.8.8 fightcoronapune.com), which tells dig to use the 8.8.8.8 DNS server to resolve names.
You may additionally get an Access Denied error, in which case you will have to configure the default root object for your CloudFront distribution, so that when you visit http://example.com the file http://example.com/index.html is served to you (assuming you specified index.html as the default root object). This error has nothing to do with the steps above, and you will still get it even if you use your CloudFront distribution's URL given by Amazon directly (e.g. when you go to http://abcd.cloudfront.net instead of http://example.com).
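The console steps above map to roughly this AWS CLI call. A sketch, assuming a placeholder hosted zone ID and the d123.cloudfront.net distribution domain used in the FAQ below; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS documents for CloudFront alias targets:

    aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "example.com",
              "Type": "A",
              "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",
                "DNSName": "d123.cloudfront.net",
                "EvaluateTargetHealth": false
              }
            }
          }]
        }'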
Q. Can I point my zone apex (example.com versus www.example.com) at my Amazon CloudFront distribution?
Yes. Amazon Route 53 offers a special type of record called an ‘Alias’ record that lets you map your zone apex (example.com) DNS name to your Amazon CloudFront distribution (for example, d123.cloudfront.net). IP addresses associated with Amazon CloudFront endpoints vary based on your end user’s location (in order to direct the end user to the nearest CloudFront edge location) and can change at any time due to scaling up, scaling down, or software updates. Route 53 responds to each request for an Alias record with the IP address(es) for the distribution. Route 53 doesn't charge for queries to Alias records that are mapped to a CloudFront distribution. These queries are listed as “Intra-AWS-DNS-Queries” on the Amazon Route 53 usage report.
Source: Amazon Route 53 FAQs
My understanding is that you cannot create an A record for Cloudfront.
Amazon provides you with a domain name like YourName.cloudfront.net. They need to manage the DNS resolution for that domain name behind the scenes in order to route each request to the nearest edge server.
You can, if you add an alias (alternate domain name) to the CloudFront distribution,
then select A or AAAA (IPv6, if enabled on CloudFront) as the record type of the Alias record.