I configured nodemailer to send mail through HostGator (as I learned how to here: https://stackoverflow.com/a/56291143/954986):
const nodemailer = require("nodemailer");

// Recent nodemailer versions take the SMTP options directly; the old
// nodemailer-smtp-transport wrapper is no longer needed.
const transporter = nodemailer.createTransport({
    name: "hostgator",
    host: "mail.mysite.com",
    port: 465,        // implicit TLS (SMTPS)
    secure: true,
    auth: {
        user: "test@mysite.com",
        pass: password, // loaded from configuration, not hard-coded
    },
});
However, when sending messages I'm getting: Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: Host: mail.mysite.com. is not in the cert's altnames: DNS:*.hostgator.com, DNS:hostgator.com
It only works when I add tls: { rejectUnauthorized: false}, which I would like to avoid.
The weird thing is that when I use any online SSL checker to look up "mail.mysite.com" it shows that SSL is configured correctly, and my site domain shows up in the certificate.
It seems like somehow hostgator is serving a different certificate for my supplied host? Any idea what might be happening, or how I can dig deeper into this?
Interesting update:
I did some more digging and found the domain "cloud64.hostgator.com". I used this as the transporter host instead of mail.mysite.com, and it works with TLS enabled! And the email even sends faster.
I want to understand this though. Is this a stable host I can continue using? Is there some sort of redirect happening at the SMTP layer? What's going on?
From what I see, you are connecting to two different hosts:
In the first case you are getting the *.hostgator.com TLS certificate, which is not valid for your domain, so your TLS validation fails.
In the second scenario you are using cloud64.hostgator.com, which probably has some generic MX record, so your domain will work. Which is kind of weird, but I can imagine hacking it up somehow.
It seems to me you have incorrect DNS MX record(s) set for your domain. You have to add the MX record(s) correctly, so that the certificate will match your domain when connecting via TLS SMTP.
For HostGator you can set up MX records like this.
Of course, if you want, you can also read RFC 974 on mail routing with the domain system and RFC 8314 on TLS security, which give the details of how it should work.
Note: which version of TLS will be used depends on the negotiation between server and client; they will agree on the highest version both support. The latest is TLSv1.3.
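If you want to dig deeper, you can inspect which certificate the SMTP host actually presents. A minimal Python sketch (the host name is the one from the question; chain validation is kept, only the host-name check is skipped so the mismatched certificate can still be read):

import socket
import ssl

host = "mail.mysite.com"  # substitute your own mail host

ctx = ssl.create_default_context()
ctx.check_hostname = False  # still validates the chain, but not the name

with socket.create_connection((host, 465)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print([v for k, v in cert.get("subjectAltName", ()) if k == "DNS"])

If this prints ['*.hostgator.com', 'hostgator.com'] instead of your domain, the server really is presenting the shared HostGator certificate for that host name.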
I'm using Cloudflare SSL. When I set SSL to Flexible, everything works fine.
But if I use Full SSL mode, this error occurs instantly:
Note that I set the certificate and the key in my cPanel SSL section, and I think everything is done correctly.
Why is this happening, and how do I fix it?
It is quite simple. All you have to remember is:
Flexible - there should be no SSL installed for that domain on the server (no vhost for port 443 either).
Full - there should be an SSL certificate installed for that domain, but it does not have to be a valid one (you can use a self-signed or expired certificate).
Full (Strict) - there must be a VALID SSL certificate installed for that domain on the server (it has to be an absolutely valid and active certificate).
So, depending on the certificate you have on your origin server, set the Cloudflare SSL mode to one of the above options. If you believe that everything is okay and you still get the issue, I would suggest reaching out to your web host to check it further for you.
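If you want to check from code which of these three situations your origin server is in, here is a rough sketch in Python (the host name is a placeholder; run it against the origin directly, not through the Cloudflare proxy, or you will just see Cloudflare's edge certificate):

import socket
import ssl

def origin_tls_status(host):
    # Probe port 443 and map the outcome onto Cloudflare's SSL modes.
    try:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return "valid certificate: Full (Strict) will work"
    except ssl.SSLCertVerificationError:
        return "certificate present but not valid: Full at most"
    except (ssl.SSLError, ConnectionRefusedError, socket.timeout):
        return "no usable TLS on port 443: Flexible only"

print(origin_tls_status("origin.example.com"))  # placeholder host name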
I am new to HTTPS. To integrate our application with another system, we were given HTTPS URLs along with their certificates. Our team added those certificates to the trust store. Now, when we send requests to those URLs, we get "Unsupported or unrecognized SSL message".
And if I do curl -v on such a URL, I get error:1408F10B:SSL routines:ssl3_get_record:wrong version number.
Is this a problem on our side, or does it need to be fixed by the other system that shared those URLs with us?
Are both of these errors due to the same reason?
It is very likely that the server does not speak TLS at all.
The client will start with the TLS handshake, and the server will reply to this with some non-TLS response. The client, however, expects the server to do its part of the TLS handshake, so it will try to interpret the server's response as TLS. This leads to strange error messages, depending on the TLS stack used by the client.
With OpenSSL-based stacks it often results in wrong version number, since the client tries to extract the TLS version number from the expected TLS record and gets an unexpected value, because the server did not actually send a TLS record.
Is this a problem on our side, or does it need to be fixed by the other system that shared those URLs with us?
If this is exactly the URL you are supposed to use (i.e. you did not simply change http:// to https:// on your side), then it is likely a server-side problem. But it might also be a problem with some middlebox or software in the network path to the server, like an antivirus, a firewall, or a captive portal hijacking your data and denying access to the remote system with an error message.
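You can reproduce the symptom by forcing a TLS handshake against a port that only speaks plain HTTP. A small Python sketch (example.com:80 is just a stand-in for any non-TLS service):

import socket
import ssl

ctx = ssl.create_default_context()
try:
    with socket.create_connection(("example.com", 80)) as sock:
        # Port 80 speaks plain HTTP, so the handshake cannot succeed:
        # the server answers the ClientHello with an HTTP error page.
        with ctx.wrap_socket(sock, server_hostname="example.com"):
            pass
except ssl.SSLError as err:
    print(err)  # typically: [SSL: WRONG_VERSION_NUMBER] wrong version number

OpenSSL parses the first bytes of the HTTP response as if they were a TLS record header, finds an impossible version field, and reports wrong version number.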
In my case, apache2 had another, badly configured virtual host: on that other, wrong virtual host there was a plain-HTTP virtual server on port 443!
The second virtual host was correct, but Apache cannot use different protocols on the same port for different virtual hosts.
After removing the HTTP-on-port-443 configuration, all the other HTTPS hosts worked and the error
error:1408F10B:SSL routines:ssl3_get_record:wrong version number
disappeared.
Can someone kindly help me understand the following and suggest a possible fix?
Problem: A secure websocket (wss) connection fails in the Chrome browser when using a multi-domain (SAN) SSL certificate.
Details: We have a multi-domain SAN SSL certificate that covers, say, webapp.example.com and websocket.example.com. The page https://webapp.example.com/ loads correctly (the domain is verified against the SAN certificate by the browser, and a 'lock' icon indicates that the connection is secure). However, the web application on that page also attempts to make a connection to wss://websocket.example.com/. This connection fails with ERR_CERT_COMMON_NAME_INVALID.
A weak hypothesis for the failure: This error is possibly because
The browser first opens an SSL connection to https://webapp.example.com after verifying webapp.example.com as a valid domain in the SAN certificate
When a connection is made to wss://websocket.example.com, the name 'websocket.example.com' does not match with the domain that has been previously verified (webapp.example.com).
Question: Is it possible to make this work? If yes, how?
Your hypothesis is wrong. The certificate validation is always done against the domain in the currently accessed URL. It is not done based on some URL previously accessed, even if the provided certificate was the same.
It is more likely that the domain you are accessing is actually not contained in the multi-domain certificate. Note that an entry for webapp.example.com or example.com in the certificate does not cover websocket.example.com or similar names in the URL.
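For intuition, here is a deliberately simplified sketch of the host-name matching rule (real validation follows RFC 6125 and has more cases; this only shows that a wildcard covers exactly one left-most label and a bare name covers only itself):

def name_matches(pattern, host):
    # Simplified RFC 6125 matching: a leading "*" stands for exactly one
    # left-most label; everything else must match literally.
    p, h = pattern.lower().split("."), host.lower().split(".")
    if len(p) != len(h):
        return False
    return (p[0] == "*" or p[0] == h[0]) and p[1:] == h[1:]

assert name_matches("websocket.example.com", "websocket.example.com")
assert name_matches("*.example.com", "websocket.example.com")
assert not name_matches("webapp.example.com", "websocket.example.com")
assert not name_matches("example.com", "websocket.example.com")
assert not name_matches("*.example.com", "a.b.example.com")  # "*" is one label only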
I have Windows service that listens for https requests on an end user's machine, is there an accepted way of creating or distributing the private key in this circumstance? Should I be packaging a real key specifically made for localhost requests (e.g. local.mydomain.com) or generating a self signed key and adding a trusted root CA at install time?
If it matters, the service uses Nancy Self Host for handling the requests, and it runs as the SYSTEM user. We have a web app running over HTTPS that will be making CORS requests to the service; the user will be using it in a standard web browser (>=IE10). Only the machine on which the service is installed will be making requests to it.
Thanks.
I have two options for you: doing it the right way, and not doing it at all.
The right way
(Warning: it costs plenty)
Let's say your application is hosted in the cloud, under kosunen.fi. Main portion is served from the cloud at https://www.kosunen.fi.
Buy DNS delegation for this domain. Resolve localhost-a7b3ff.kosunen.fi to either 127.0.0.1 / ::1 or to the client's actual local IP address, 10.0.0.63 / fe80::xxxx.
Buy a subCA, or get a mass certificate purchase agreement, and issue certificates (the former) or have certificates issued (the latter) on demand for each localhost-a7b3ff.kosunen.fi. These certificates will come from a trusted global CA and are therefore trusted by all browsers. Each certificate is only used by one PC.
Set up the CORS/XSRF/etc. bits for *.kosunen.fi.
Done.
Not doing it
Realise that localhost traffic is, in practice, quite secure. Browsers typically refuse http://localhost and http://127.0.0.1 URLs (to prevent JS loaded from the internet from probing your local servers).
You'll still need at least one DNS entry, localhost.kosunen.fi, that resolves to 127.0.0.1 / ::1; browsers will happily accept http://localhost.kosunen.fi even though the host name resolves to 127.0.0.1.
What are the risks?
Someone running Wireshark on the client machine -- but if someone has those privileges, your model is done for anyway.
Someone hijacks or poisons DNS -- they set it up so that www.kosunen.fi resolves to the correct IP, but localhost.kosunen.fi resolves to their internet IP. They can then steal the requests the user's browser makes and can inject JS.
Mitigate that ad hoc -- only serve data from localhost, not scripts, and set up restrictive CORS/XSRF/CSRF rules.
You are still left with CORS for HTTP x HTTPS; there are solutions to that.
Super-simple CORS
Here it is between two ports, 4040 and 5050, which is just as distinct as between different hosts (localhost vs your.com) or protocols (HTTPS vs HTTP). This is the cloud server:
import bottle

@bottle.route("/")
def foo():
    return """
<html><head></head>
<body><p id="42">Foobar</p>
<script>
fetch("http://localhost:5050/").then(
    function(response) {
        console.log("status " + response.status);
        response.json().then(
            function(data) {
                console.log(data);
            }
        );
    }
);
</script>
</body></html>""".strip()

bottle.run(host="localhost", port=4040, debug=True)
And this is the localhost server:
import bottle

@bottle.route("/")
def foo():
    bottle.response.headers["Access-Control-Allow-Origin"] = "*"  # unsafe!
    bottle.response.headers["Access-Control-Allow-Methods"] = "HEAD, GET, POST, PUT, OPTIONS"
    bottle.response.headers["Access-Control-Allow-Headers"] = "Origin, Accept, Content-Type, X-Requested-With, X-CSRF-Token"
    return """{"foo": 42}"""

bottle.run(host="localhost", port=5050, debug=True)
Making it safe(r): in the localhost server, read the request Origin, validate it, e.g. with startswith("https://your.com/"), and then return the same Allow-Origin as the request Origin; see the sketch below. IMO that ensures that a compliant browser will only serve your localhost content to JS loaded in a your.com context. A broken browser, or any program running on the same machine, can of course still trick your server.
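A minimal sketch of that safer variant, assuming https://your.com is the only origin that should be allowed:

import bottle

ALLOWED_ORIGIN = "https://your.com"  # hypothetical; use your real web app origin

@bottle.route("/")
def foo():
    origin = bottle.request.headers.get("Origin", "")
    # Echo the Origin back only when it is the one we expect;
    # otherwise no CORS header is set and the browser blocks the read.
    if origin == ALLOWED_ORIGIN:
        bottle.response.headers["Access-Control-Allow-Origin"] = origin
    return """{"foo": 42}"""

bottle.run(host="localhost", port=5050, debug=True)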
The best way to go about this is to create a self-signed certificate for your local hostname and add an exception for it in IE.
There are a few services that offer 'localhost over SSL', but these all require the private key to be shared by all users of the service, effectively making the key unusable from a security perspective. You might not care about that too much as long as you only allow connections on the local network interface, but CAs try to revoke these certificates because they compromise the integrity of SSL (see for instance http://readme.localtest.me/).
It should be possible to make a mixed-content (HTTPS to HTTP) CORS request in IE11 (see https://social.technet.microsoft.com/Forums/ie/en-US/ffec5b73-c003-4ac3-a0fd-d286d9aee89a/https-to-http-cors-issue-in-ie10?forum=ieitprocurrentver). You just have to make sure that both sites are in a trusted zone and allow mixed content. I'm not so sure the web application can / should be trusted to run mixed content, though; the self-signed certificate is more explicit and provides a higher level of trust.
You probably cannot use a real certificate signed by a trusted CA, since there is no way for a CA to validate your identity based on a hostname that resolves to 127.0.0.1. The exception would be a wildcard certificate for a domain name (i.e. *.mydomain.com) that also has a subdomain local.mydomain.com resolving to your local machine, but this might interfere with SSL infrastructure already in place on mydomain.com.
If you already have a certificate for a hostname, it might be possible to point that hostname to 127.0.0.1 in your client's hosts file to work around this, but again this is more trouble and less explicit than just making a self-signed certificate and trusting it.
BTW: don't rely on the security of the localhost hostname for the self-signed key, as discussed here: https://github.com/letsencrypt/boulder/issues/137. The reason is that some DNS resolvers send localhost requests to the network, so a misconfigured or compromised router could be used to intercept the traffic.
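For completeness, a sketch of generating such a self-signed certificate programmatically with the third-party cryptography package (local.mydomain.com is the hypothetical name from the question; you would still have to install the certificate as trusted at install time):

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

hostname = "local.mydomain.com"  # hypothetical local host name
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer equals subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
    .sign(key, hashes.SHA256())
)

with open("local.key", "wb") as f:
    f.write(key.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.TraditionalOpenSSL,
                              serialization.NoEncryption()))
with open("local.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))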
I'm having some issues with two-way SSL and hoping I can find some guidance. I've tried a lot of the advice I've found online, and while it was helpful, I can't get past this issue.
First some background on what I have done and how I'm setup.
IIS server setup (screenshots omitted): SSL settings, auth settings, mapping config, mapped certs, bindings, server cert, server trusted root, cert path.
OK. So for starters I'm trying to get this to work on the server itself. I've imported the server cert into my personal store, with the same root authority (screenshots omitted).
So... That should cover it, Right?!
Nope. I get an error in IE when I hit the site locally (screenshot omitted).
You seem to use the same certificate for the server and the client. That's not how it works.
You need to have a client certificate in your personal store. IE can't find one, so it doesn't send one to the server. IIS then complains about it with a 403.7, because IE didn't send a client certificate.
So you need to get a client certificate (sometimes known as an S/MIME certificate) from your CA and install it into your personal store.
If you look at the details of your existing certificates it should show:
Enhanced Key Usage: Server Authentication (1.3.6.1.5.5.7.3.1)
What you need there is:
Enhanced Key Usage:
Client Authentication (1.3.6.1.5.5.7.3.2)
Secure Email (1.3.6.1.5.5.7.3.4)
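If it helps, you can also check a certificate's Enhanced Key Usage from code instead of the Windows certificate dialog; a small sketch using the third-party Python cryptography package (the file name is hypothetical):

from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

with open("client.crt", "rb") as f:  # hypothetical file name, PEM format
    cert = x509.load_pem_x509_certificate(f.read())

# Raises ExtensionNotFound if the certificate has no EKU extension.
eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
print("client auth: ", ExtendedKeyUsageOID.CLIENT_AUTH in eku)
print("server auth: ", ExtendedKeyUsageOID.SERVER_AUTH in eku)
print("secure email:", ExtendedKeyUsageOID.EMAIL_PROTECTION in eku)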