I have a small REST server running on a local network, with a bunch of client applications connected (via IP address).
I want to secure this traffic so that access tokens cannot be sniffed.
According to this answer I can create a self-signed cert using an IP address (although it's not common). If I go down this route (assuming no physical access to the server box itself), is this secure?
Yes. It is as secure as a CA-signed certificate, as long as users always install the correct certificate. (Make sure the certificate is distributed securely.)
If distributing it securely seems like too much trouble, then you may look at the answers here and consider using a certificate issued by letsencrypt.org if you can.
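If you stay with the self-signed route, here is a minimal sketch of creating such a certificate with the server's IP address in the subject alternative name (192.168.1.10 is a placeholder for your server's address, and -addext needs OpenSSL 1.1.1 or later):
# Generate a private key and a self-signed certificate whose SAN is the server's IP
$ openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
    -keyout server.key -out server.crt \
    -subj "/CN=192.168.1.10" \
    -addext "subjectAltName=IP:192.168.1.10"
# Distribute server.crt to the clients over a secure channel and add it to their
# trust stores; server.key stays on the server only.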
Related
We have a web application (WAMP stack) on a local Windows server. There are several dependencies and an Oracle DB running on this server, and the server is closed to outside internet traffic.
Clients access the app on the LAN using the server's IP, but we plan to create a virtual hostname in Active Directory to enable access via a hostname across the entire local network.
We would like to secure the traffic and switch to HTTPS, so we need an SSL certificate, preferably from a trusted CA to avoid confusing warnings for the users.
Is there a way to get a trusted-CA SSL certificate for a local host? I was thinking of getting a certificate for a public domain, say myapp.net, then mapping this domain to the actual IP of the local server running the app and installing the certificate in Apache... would that work?
Thank you for any ideas.
Alexander
You have several options:
The public CAs that your browser trusts by default will only sign keys for DNS names. And you can totally have a DNS name that is not accessible from the public internet (e.g. one that resolves to a local IP). If you're using a CA like Let's Encrypt (domain validated), you will need to make the server publicly available during validation/issuance, but you can change the IP immediately afterwards (*). Or simply use one of the other available validation techniques, which are typically paid.
As you're on an intranet, you might be in a situation where you can install your own internal trusted CA on your users' computers. In that case, you can mint such a certificate yourself. This is a common scenario when a proxy running internally also inspects HTTPS traffic.
And, of course, if you can install trusted root CAs on computers, you can also install individual trusted keys/certificates for a single machine. But that doesn't seem to be what you'd like to do.
So, for https://myapp you'll have to do some minting/installation yourself. For https://myapp.example.com, you have options with already trusted root CAs.
(*) That is for the commonly used and documented mode of operation; see Patrick's comment below.
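As a sketch of that alternative mode of operation: Let's Encrypt also supports DNS-based validation, which only requires adding a TXT record to the public DNS zone, so the web server itself never has to be reachable from the internet (the host name below is a placeholder):
# Request a certificate for an internal host name using the DNS-01 challenge;
# certbot prints a TXT record to publish under _acme-challenge.myapp.example.com.
$ certbot certonly --manual --preferred-challenges dns -d myapp.example.com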
A few months ago I created the following website for my company: www.mydomain.co.uk. I created it using Lightsail / Bitnami, and as part of this process I created an SSL certificate using the Let's Encrypt tool. I also have Cloudflare set up, which provides SSL/TLS encryption.
The website was live for a number of months, and yesterday it stopped working. I suspect this is because the Bitnami SSL certificate had expired. So I changed the Cloudflare SSL/TLS setting from 'Full (Strict)' to 'Full' and it works fine now.
So do I need to renew the SSL certificate? Or is it obsolete as I am using Cloudflare?
Thanks :-)
The SSL provided by Cloudflare only protects the connection between the client and Cloudflare, not the one between Cloudflare and your server. To protect that connection you need SSL on your own server, i.e. SSL there is not made obsolete by using Cloudflare.
The chance that some attacker sits between the client and Cloudflare, though, is much higher than an attacker sitting between Cloudflare and your server. In the first case the attacker is usually near the client, i.e. typically in the same network, as with an open WiFi hotspot or a rogue hotspot. Not much effort or expertise is needed for this kind of attack. In contrast, infiltrating connections between Cloudflare and your host is much harder, but governments or other attackers with more resources can do it.
The 'Full' setting does not check the certificate and thus only protects against passive sniffing of the connection. It does not protect against active man-in-the-middle attacks, where the attacker intercepts the traffic, decrypts it so it can be read and modified in the clear, and then re-encrypts it. To protect against such an attack, a properly valid certificate has to be used and Cloudflare needs to be configured to validate it, i.e. 'Full (Strict)'. While active attacks require more access to the network than passive attacks, they are not out of scope for advanced attackers. So you'd better use a valid certificate and switch 'Full (Strict)' back on.
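In practice that means renewing the certificate on the server and then re-enabling 'Full (Strict)'. Roughly, assuming the Bitnami image uses certbot for Let's Encrypt (some images ship their own renewal tooling instead, and the ctlscript.sh path assumes a standard Bitnami stack):
# Check which certificates are installed and when they expire
$ sudo certbot certificates
# Renew anything close to expiry, then restart Apache so it picks up the new files
$ sudo certbot renew
$ sudo /opt/bitnami/ctlscript.sh restart apache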
I have a question I've had for years, and now I've decided to try to understand it. I know that when a user hits a website over SSL, all headers are encrypted, even the Host header.
So, in order to enable SSL on a server, you need a dedicated IP for every certificate you have, because Apache, for example, won't know which vhost it should direct the user to if the Host header is encrypted.
My question is: how does Cloudflare know which domain the user is using to access its CDN if it does not know the Host header before decryption happens?
Server Name Indication (SNI) allows TLS clients to specify the host they are attempting to connect to, giving the server a chance to serve the right certificate. It is supported in most browsers.
CloudFlare's page on their free SSL offering indicates they use SNI.
Now, CloudFlare has multiple offerings. Their paid plans don't actually rely on SNI (that's why they support all browsers). Only the free plans do.
For the paid plans, CloudFlare presumably uses dedicated IPs, though even in that case they can still pool multiple domains under a single certificate (using Subject Alternative Names).
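You can watch SNI in action from the command line; with OpenSSL's s_client, the -servername option is what carries the host name in the handshake, and the server picks the matching certificate based on it (the IP and name below are placeholders):
# Connect to one IP and vary only the SNI value; the certificate subject returned
# in the handshake changes with the -servername argument.
$ openssl s_client -connect 203.0.113.10:443 -servername example.com </dev/null \
    | openssl x509 -noout -subject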
I'm interested in purchasing a wildcard SSL certificate for my public domain (say, example.com), so that we can run intranet web servers using a universally recognized CA (e.g., GoDaddy). I do plan to publish the DNS names publicly (e.g., internal.example.com), but their IP addresses are actually LAN addresses (e.g., 192.168.x.y). We want to use public DNS because these web servers may actually be development laptops that travel around, and thus we will use dynamic DNS to update them. It's our intention that these web servers will only be available on the LAN each one is currently running on.
Will that work universally with all clients, e.g., with TLS v1.2?
Thanks.
As long as the clients can route their traffic to these IP addresses, it will work (otherwise you won't get the connection, of course).
Certificate verification relies on two points:
Verifying that the certificate is genuine, trusted and valid in time.
Verifying that the identity of the certificate matches what you were looking for (host name verification).
This does not depend on how DNS resolution is done. These mechanisms are also orthogonal to the SSL/TLS specifications (although they do recommend verifying the remote party's identity).
I've seen this sort of setup used on various clients and platforms (IE, Chrome, FF, Java clients on Windows/Linux/Mac) and it worked fine.
Of course, whether all implementations do this well is hard to guarantee. There might be some implementation that thinks it's a good idea to perform a reverse DNS lookup, for example.
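One way to convince yourself of this on a given client: curl's --resolve option bypasses DNS entirely and supplies the IP directly, yet certificate and host name verification still happen exactly as usual (the name and address below are placeholders):
# Map internal.example.com to a LAN address without any DNS lookup;
# curl still validates the certificate chain and checks that the cert matches the name.
$ curl --resolve internal.example.com:443:192.168.1.20 https://internal.example.com/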
I'm setting up a web server for a system that needs to be used only through HTTPS, on an internal network (no access from the outside world).
Right now I have it set up with a self-signed certificate, and it works fine, except for the nasty warning that all browsers throw, as the CA used to sign it is naturally not trusted.
Access is provided via a local domain name resolved by a local DNS server (example: https://myapp.local/) that maps that address to 192.168.x.y.
Is there some provider that can issue me a proper certificate for use on an internal domain name (myapp.local)? Or is my only option to use a FQDN on a real domain, and later map it to a local IP address?
Note: I would like an option where it's not necessary to mark the server's public key as trusted in each browser, as I have no control over the workstations.
You have two practical options:
Stand up your own CA. You can do this with OpenSSL (a minimal sketch follows after this list), and there's a lot of guidance out there.
Keep using your self-signed cert, but add the public key to your trusted certs in the browser. If you're in an Active Directory domain, this can be done automatically with group policy.
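Here is that sketch for the first option, using plain OpenSSL commands (names and validity periods are placeholders; a real internal CA would add proper key protection and revocation handling):
# 1. Create the internal CA key and its self-signed CA certificate
$ openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
    -keyout ca.key -out ca.crt -subj "/CN=My Internal CA"
# 2. Create a key and certificate signing request for the web server
$ openssl req -newkey rsa:2048 -sha256 -nodes \
    -keyout myapp.key -out myapp.csr -subj "/CN=myapp.local"
# 3. Sign the request with the CA, adding the internal name as a SAN
$ printf "subjectAltName=DNS:myapp.local" > san.ext
$ openssl x509 -req -in myapp.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 825 -sha256 -extfile san.ext -out myapp.crt
# Distribute ca.crt to the workstations as a trusted root (e.g. via group policy);
# install myapp.key and myapp.crt on the web server.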
I did the following, which worked nicely for me:
I got a wildcard SSL cert for *.mydomain.com (Namecheap, for example, provide this cheaply)
I created a CNAME DNS record pointing "mybox.mydomain.com" at "mybox.local".
I hope that helps - unfortunately you'll have the expense of a wildcard cert for your domain name, but you may already have that.
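If you go this route, it's worth checking from a client on the LAN that both the DNS record and the presented certificate line up; roughly (names are placeholders, and -ext needs OpenSSL 1.1.1 or later):
# Confirm the name resolves on the internal DNS
$ dig +short mybox.mydomain.com
# Confirm the wildcard certificate is served and covers the name
$ openssl s_client -connect mybox.mydomain.com:443 -servername mybox.mydomain.com </dev/null \
    | openssl x509 -noout -subject -ext subjectAltName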
You'd have to ask the typical cert vendors for that. For ease of use I'd go with an FQDN though; you might use a subdomain of your already registered one: https://mybox.example.com
Also you might want to look at wildcard certificates, providing a blanket cert for (e.g.) https://*.example.com/ - even usable for virtual hosting, should you need more than just this one cert.
Certifying sub- or sub-sub-domains of an FQDN should be standard business - maybe not for the point-and-click big players that pride themselves on providing certificates in just 2 minutes.
In short: To make the cert trusted by a workstation you'd have to either
change settings on the workstations (which you don't want) or
use an already trusted party to sign your key (which you're looking for a way around).
That's all your choices. Choose your poison.
I would have added this as a comment but it was a bit long..
This is not really an answer to your questions, but in practice I've found that it's not recommended to use a .local domain - even if it's in your "local" testing environment with your own DNS server.
I know that Active Directory uses the .local name by default when you install DNS, but even people at Microsoft say to avoid it.
If you have control over the DNS server, you can use a .com, .net, or .org domain - even if it's internal and private only. This way, you could actually buy the domain name that you are using internally, then buy a certificate for that domain name and apply it to your local domain.
I had a similar requirement: have our company's browsers trust our internal websites.
I didn't want our public DNS to publish records for our internal sites, so the only way I found to make this work was to use an internal CA.
Here's the write-up for this:
https://medium.com/@mike.reider/getting-firefox-chrome-to-trust-your-internal-websites-internal-certificate-authority-a53ba2d4c2af
I think the answer is no.
Out of the box, browsers won't trust a certificate unless it has ultimately been verified by someone pre-programmed into the browser, e.g. VeriSign or register.com.
You can only get a verified certificate for a globally unique domain.
So I'd suggest that instead of myapp.local you use myapp.local.yourcompany.com, for which you should be able to get a certificate, provided you own yourcompany.com. It'll cost you though, possibly several hundred per year.
Also be warned that wildcard certificates might only go down one level -- so you could use one for a.yourcompany.com and local.yourcompany.com, but maybe not for b.a.yourcompany.com or myapp.local.yourcompany.com, unless you pay more.
(Does anyone know whether this depends on the type of wildcard certificate? Are sub-sub-domains trusted by the major browsers?)
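For what it's worth, the usual rule (RFC 6125) is that a wildcard matches exactly one label, so *.yourcompany.com covers a.yourcompany.com but not b.a.yourcompany.com. You can check which names a given certificate actually covers by inspecting its subject alternative names, e.g. (requires OpenSSL 1.1.1 or later for -ext):
# List the subject and the names the certificate is valid for
$ openssl x509 -in wildcard.crt -noout -subject -ext subjectAltName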
Development purposes only
This Docker image solves the problem (thanks to local-ip.co): https://github.com/medic/nginx-local-ip.
It launches a reverse proxy on port 443 with a public cert that works with any *.my.local-ip.co domain. E.g. if your local IP is 192.168.10.10, then 192-168-10-10.my.local-ip.co already points to it (it's a public domain)! Assuming the app is running on your computer on port 8080, you only need to execute the following to proxy your app and expose it at the URL https://192-168-10-10.my.local-ip.co:
$ APP_URL=http://192.168.10.10:8080 docker-compose up
The domain is resolved by whatever public DNS is configured on the devices from which you want to access the app, but the traffic stays local between your app and the client (through the proxy), so you can even use it to connect with devices within the same LAN without any of the traffic going out to the internet; all the traffic stays local.
The reason this is mostly useful for development is that anybody can launch an application with this same certificate, so it is not really secure, but it is helpful when you need to expose your app over HTTPS while developing or testing (e.g. HTML5 apps on Android loaded in a WebView).