I have an SSL cert generated for "host1.domain.com". This host has 4 NICs configured, with IPs 192.168.1.10, 192.168.1.11, 192.168.1.12, and 192.168.1.13. If I browse by IP, I get a certificate mismatch; browsing by hostname works fine, as expected. Why doesn't the browser do a reverse lookup to validate the certificate against the IP?
I am running a web server (WordPress) locally using XAMPP (with Apache) and forwarding it to my Cloudflare-hosted domain using a Cloudflared tunnel. I am having an issue with the certificate when connecting over my domain.
I have a certificate from Cloudflare, valid for my domain, installed in XAMPP's certificate location, and I know it is being sent with the HTTPS response. Also, my "SSL/TLS encryption mode" on Cloudflare is "Full (Strict)".
When connecting from the browser, I get a 502 Bad Gateway error, and Cloudflared prints this error: error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: x509: certificate is valid for *.example.com, example.com, not localhost" (where example.com is my domain).
If I go to
- http://example.com or https://example.com, I get the above error.
- http://localhost, the website loads but none of its resources do, since WordPress loads resources by querying the domain, e.g. https://example.com/path/to/resource.
- https://localhost, the same as above happens, but Chrome also gives me a warning that the certificate is not valid.
Here are the ingress rules in Cloudflared's config.yml.
ingress:
  - hostname: ssh.example.com # I haven't gotten this one to work yet.
    service: ssh://localhost:22
  - hostname: example.com # This is the one having a problem.
    service: https://localhost
  - service: https://localhost
What I believe is happening is that Cloudflared receives the certificate which is valid for my domain (*.example.com, example.com) and then tries to execute the ingress rule by going to https://localhost, but the certificate is not valid for localhost. I don't think I should just get a certificate which is valid for localhost AND example.com. Do I need one certificate (valid for localhost) to be returned whenever http(s)://localhost is called and another (valid for example.com) that Cloudflared checks when it tries to execute an ingress rule involving example.com? If so, how do I do this?
I solved it by using the noTLSVerify option in Cloudflared's config.yml. When a client connects to my domain, it goes like this:
Client > Cloudflare > Cloudflared instance running on my machine > Origin (which also happens to be my machine: https://localhost)
The certificate sent back by the Origin was not valid for the address Cloudflared was accessing it from, localhost, but by adding these lines to config.yml,
originRequest:
  noTLSVerify: true
I think Cloudflared then skips verifying the certificate received from the origin, although it still returns the certificate to Cloudflare, which checks it against my domain.
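For reference, a sketch of the alternative per-rule placement (cloudflared accepts originRequest settings either globally or on an individual ingress rule; values here are taken from the config above):

```yaml
ingress:
  - hostname: example.com
    service: https://localhost
    originRequest:
      noTLSVerify: true   # skip verification only for this origin
  - service: https://localhost
```

Scoping it to one rule avoids silently disabling verification for every other origin the tunnel fronts.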
I am setting up a Health Check in Oracle Cloud Infrastructure for DNS failover. The destination is only reachable through HTTPS and has a valid SSL certificate for the domain name. However, when activating the health check, it always fails with the error message:
x509: cannot validate certificate for x.x.x.x because it doesn't contain any IP SANs
I tried to set the domain name in the HTTP Host header field. However, this does not seem to help with the SSL validation.
Is there a way to ignore SSL errors for the health check? Or do I somehow need to include all IP addresses in the SSL certificate? I would like to avoid having to get a new signed SSL certificate each time an IP address changes or is added.
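If reissuing the certificate is an option, the "doesn't contain any IP SANs" error can be avoided by listing the IPs as subject alternative names. A minimal sketch with a self-signed certificate (health.example.com and 192.0.2.10 are placeholder values; -addext needs OpenSSL 1.1.1+):

```shell
# Issue a self-signed cert whose SANs include both the DNS name and an IP
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 365 -subj "/CN=health.example.com" \
  -addext "subjectAltName=DNS:health.example.com,IP:192.0.2.10"

# Inspect the SAN entries the health checker will validate against
openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```

As you note, though, this means reissuing whenever an IP changes, so a health check that validates against the domain name (or skips verification) is usually the more maintainable route.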
I observed that a site, example.com, has a CNAME mapping to mysite.com. Both example.com and mysite.com have SSL certificates.
Correct me if I am wrong:
When a browser tries to connect to https://example.com, it checks DNS, finds the CNAME mapping to mysite.com, and connects to the mysite.com web server directly.
When I inspected the browser, it showed the SSL certificate for the example.com domain. I am having trouble understanding this case.
If the request did not go to the example.com web server, how could the browser get the SSL certificate for example.com?
Or is my understanding of CNAME mapping wrong?
Or are example.com's private and public keys shared with the mysite.com web server?
DNS and TLS operate completely independently of each other.
TLS is used, among other things like encryption, to verify the identity of a server against its FQDN (Fully qualified domain name). This is done by checking whether the server in question is able to present a certificate, containing the FQDN, signed by a trusted certification authority (CA).
DNS is used to resolve host names to IP addresses, in order to establish network connections (like TCP connections) on a lower layer. How this resolution takes place is completely transparent to other components, like TLS. It does not matter whether the name resolution involves A, AAAA, or the mentioned CNAME record - in our context the input is always a single hostname, the output is always one (or more) IP addresses. Intermediate results, like CNAME mappings, are essentially discarded once name resolution is done.
This means that the TLS client always uses the FQDN initially requested by the user, regardless of any CNAME mappings, to verify the certificate. How to present a valid certificate is up to the server - sticking to your example, the server behind FQDN mysite.com will have to present a certificate valid for example.com in order for the client to accept it. How the private/public key of this certificate is generated, and whether it is shared with other certificates or servers, does not matter.
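This verification step can be reproduced offline with openssl, using a throwaway self-signed certificate (all names here are placeholders): the hostname check passes for the name the user originally requested and fails for the CNAME target, because the CNAME target never enters the TLS layer.

```shell
# Throwaway cert valid only for example.com
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=example.com" -addext "subjectAltName=DNS:example.com"

# Verification is against the requested FQDN, not the CNAME target:
openssl verify -CAfile cert.pem -verify_hostname example.com cert.pem          # cert.pem: OK
openssl verify -CAfile cert.pem -verify_hostname mysite.com cert.pem || true   # hostname mismatch
```

So the server reachable via the mysite.com CNAME simply has to hold (and present) a certificate for example.com; no key sharing between "domains" is implied beyond that.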
We would have to explicitly attach the SSL certificates of both domains to the web server/load balancer for both domains to support HTTPS.
To understand this, it's useful to be aware of SNI:
When multiple websites are hosted on one server and share a single IP
address, and each website has its own SSL certificate, the server may
not know which SSL certificate to show when a client device tries to
securely connect to one of the websites. This is because the SSL/TLS
handshake occurs before the client device indicates over HTTP which
website it's connecting to.
Server Name Indication (SNI) is designed to solve this problem. SNI is
an extension for the TLS protocol (formerly known as the SSL
protocol), which is used in HTTPS. It's included in the TLS/SSL
handshake process in order to ensure that client devices are able to
see the correct SSL certificate for the website they are trying to
reach. The extension makes it possible to specify the hostname, or
domain name, of the website during the TLS handshake, instead of when
the HTTP connection opens after the handshake.
From: https://www.cloudflare.com/en-gb/learning/ssl/what-is-sni/
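SNI can be observed locally with openssl's test server, which accepts a second certificate to serve when the client's SNI name matches (siteA.test, siteB.test, and port 8443 are placeholder values):

```shell
# Two throwaway certs for two hostnames
for n in siteA siteB; do
  openssl req -x509 -newkey rsa:2048 -nodes -keyout "$n.key" -out "$n.crt" \
    -days 1 -subj "/CN=$n.test" -addext "subjectAltName=DNS:$n.test"
done

# One server, one IP/port: siteA.crt by default, siteB.crt when SNI says siteB.test
openssl s_server -accept 8443 -cert siteA.crt -key siteA.key \
  -servername siteB.test -cert2 siteB.crt -key2 siteB.key -www &
SRV=$!; sleep 1

# The certificate returned depends only on the SNI name the client sends
echo | openssl s_client -connect 127.0.0.1:8443 -servername siteB.test 2>/dev/null \
  | openssl x509 -noout -subject

kill $SRV
```

Running the s_client line with -servername siteA.test (or any other name) instead returns the default siteA.test certificate, which is exactly the name-based selection SNI provides.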
I have 2 different Ubuntu VPS instances, each with a different IP address.
One is set up as a chef-server and the other acts as a workstation.
When I use the command
knife configure -i
I do get options to locate the admin.pem and chef-validator.pem files locally.
I am also able to create knife.rb file locally.
While setting up knife, I am asked to enter the chef-server URL, so I enter https://ip_address/ of the VPS instance.
But in the end I get this error message:
ERROR: SSL Validation failure connecting to host: "ip_address of my server host"- hostname "ip_address of my host" does not match the server certificate
ERROR: Could not establish a secure connection to the server.
Use knife ssl check to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
knife ssl fetch to make knife trust the server's certificates.
I used 'knife ssl fetch' to fetch the trusted_certs from the chef-server, but it still doesn't work.
Chef experts, please help.
Your chef-server has a hostname, and the self-signed certificate is generated for that hostname.
The error you get is due to the fact that you are calling an IP address, while the certificate was issued for a hostname.
There are two ways around it: disable SSL validation (you'll get a warning, but it will work), or configure the workstation (using your hosts file, for example) to use the chef-server's hostname instead of its IP address.
This is an SSL configuration point you may run into with other servers too.
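A sketch of the second option, assuming the server's certificate was issued for a hostname like chefserver.example.com (a placeholder, as is the IP): map that name to the server's IP on the workstation, then use the name in knife.rb.

```
# /etc/hosts on the workstation (hypothetical values)
203.0.113.10    chefserver.example.com

# ~/.chef/knife.rb
chef_server_url 'https://chefserver.example.com'
```

After that, knife ssl fetch followed by knife ssl check should succeed, because the hostname in the URL now matches the name in the certificate.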
I've got a domain purchased from GoDaddy (example.com), as well as an SSL certificate from them for that same domain. I have a single machine running a web server with a static IP; the domain from GoDaddy points to that static IP. The SSL cert is installed on that machine, and everything works fine.
Now I need to start hosting from a different machine, which has a different static IP address. I believe all I have to do is change the IP address for my domain in GoDaddy's control panel, and the SSL certificate should still be valid, even though it's a new IP address.
Is there any way to test this beforehand? Is my assumption correct that just changing the IP address in the domain record is all I have to do?
Thanks
SSL certificates are (almost always) associated with domain names, not IP addresses. Assuming you have a standard configuration for your SSL cert, you're fine.
But! You want to test this beforehand. OK:
openssl x509 -in yourcert.crt -text -noout
This command will allow you to examine your certificate. In particular, look for your hostname. Mine says something like:
X509v3 Subject Alternative Name:
DNS:mail.cternus.net, DNS:cternus.net
If your hostname is in there (and your IP address is not), you're golden.
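You can also rehearse the cutover without touching DNS: stand up the new machine, then point only your own client at it with curl's --resolve flag, which overrides DNS for a single request. A local sketch of the idea (self-signed stand-in cert, placeholder names, port 9443) simulating "the domain now points at a new address":

```shell
# Stand-in for your real cert: self-signed, valid for example.com only
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=example.com" -addext "subjectAltName=DNS:example.com"

# Stand-in for the new machine's web server
openssl s_server -accept 9443 -cert cert.pem -key key.pem -www &
SRV=$!; sleep 1

# Pretend DNS already points example.com at 127.0.0.1; verification still
# succeeds because the cert matches the NAME, not the address
curl --resolve example.com:9443:127.0.0.1 --cacert cert.pem \
  -s -o /dev/null -w '%{http_code}\n' https://example.com:9443/

kill $SRV
```

Against the real new machine you would substitute its public IP and port 443, and drop --cacert since your GoDaddy cert chains to a public CA.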