I'm using Elasticsearch 2.2.0 with Shield 2.2 on a 10-node cluster. I need to enable SSL in Elasticsearch for Kibana to work with Shield, and I got stuck on the certificate-signing part.
I do not have a wildcard certificate, so I can't sign just one CSR on a node and copy it to all the other nodes. Following the Elastic tutorial, I tried to use Let's Encrypt to sign a certificate with a common name of node1 and alternative names of node2-10, then copied it to all the other nodes (of course I first created domains for all 10 servers and pointed them at node1, signed the CSR, then pointed the other 9 at the right servers). It didn't work, and I got a lot of "bad certificate" exceptions in the nodes' logs.
As I said, I need SSL for Kibana to work with Shield and for secure connections in general, and I'm planning to add some more nodes to the cluster...
How can I manage to do so?
What would be the best architecture for that purpose?
The problem was that I tried to use the certificates with the private IP seeds of the nodes, which, as the documentation says, is not possible:
If you use a commercial CA, the DNS names and IP addresses used to identify a node must be publicly resolvable. Internal DNS names and private IP addresses are not accepted due to security concerns.
If you need to use private DNS names and IP addresses, using an internal CA is the most secure option. It enables you to specify node identities and ensure node identities are verified when nodes connect. If you must use a commercial CA and private DNS names or IP addresses, you cannot include the node identity in the certificate, so the only option is to disable hostname verification.
So the solution is to use the certificate only for outside requests (like the Kibana UI) by setting, in elasticsearch.yml:
shield.transport.ssl: false
shield.http.ssl: true
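A fuller elasticsearch.yml sketch of this setup, assuming a JKS keystore per node (the keystore path and password below are placeholders; the shield.ssl.* settings come from the Shield 2.x documentation):

```yaml
# Node-to-node transport stays unencrypted (only acceptable on a trusted private network)
shield.transport.ssl: false
# The HTTP layer is encrypted so Kibana and other clients connect over HTTPS
shield.http.ssl: true
# Keystore holding this node's certificate and private key (placeholder path/password)
shield.ssl.keystore.path: /etc/elasticsearch/node01.jks
shield.ssl.keystore.password: changeme
```

Note that with shield.transport.ssl disabled, internode traffic is in the clear, so this only makes sense when the cluster sits on an isolated private network.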
Related
For a project I'm working on I will have multiple servers and lots of subdomains (e.g. *.mydomain.example). I'm thinking of getting this SSL cert from GoDaddy: Unlimited Subdomains (Wildcard), $199.99/yr.
Will I be able to use the cert on all the servers or do I need to buy a cert for each server since they each have a unique IP address?
Certificates are bound to a hostname (or wildcard hostname), so you're fine using the same cert on multiple machines.
However, when requesting a certificate, you usually create a private key on one of the servers. This private key needs to be copied to all machines in addition to the actual certificate that you receive.
One scenario is where you have www.domain.example resolving to an IP of a load-balancer, which in turn forwards the traffic to multiple servers. In that case, you only need a certificate for www.domain.example that you copy (with the private key) to all servers in your cluster.
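You can check the wildcard-matching claim offline by generating a throwaway self-signed wildcard cert and testing names against it (a sketch assuming OpenSSL 1.1.1+ for -addext; the domain is the example one from the question):

```shell
# Create a self-signed cert whose SAN is the wildcard *.mydomain.example
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout wild.key -out wild.crt \
  -subj "/CN=*.mydomain.example" \
  -addext "subjectAltName=DNS:*.mydomain.example"

# Any single-level subdomain matches the wildcard
openssl verify -CAfile wild.crt -verify_hostname www.mydomain.example wild.crt
openssl verify -CAfile wild.crt -verify_hostname api.mydomain.example wild.crt

# The bare domain does NOT match *.mydomain.example (verification fails)
openssl verify -CAfile wild.crt -verify_hostname mydomain.example wild.crt
```

Since the match depends only on the name, not the machine, the same cert and key can be copied to every server that answers for those subdomains.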
I am trying to configure a self-signed SSL certificate on one of my servers. I have 3 servers and use 1 public IP for all three, with port forwarding. I know that with OpenSSL I can configure a self-signed certificate for localhost (a private network). I am using XAMPP on Windows Server. My questions:
Is it possible to configure a self-signed SSL certificate for a public IP address with a port? If yes,
then how can I do that?
I have seen some tutorials on configuring self-signed certificates, but all of them are for private IPs.
Meta: this is really an operations question, not programming or development, and off-topic for SO.
An HTTPS server (like Apache) must use a certificate that includes the name used to access it, i.e. the name in the URL(s) used by clients, excluding any port specification. This can be a domain name or an IP address (any kind: v4 or v6, public or private, as long as using that address from the client(s) reaches the correct server). This means that multiple servers at different ports on the same address can use the same cert if you wish, but they can also be different if you prefer.
A public CA will only issue a cert containing a domain name or address you can prove you own, which is only possible for a fully-qualified domain name in public DNS or a 'permanent' public address, but for a cert you create yourself that restriction doesn't apply.
Since the turn of the century you should use the Subject Alternative Name (SAN) extension, not solely the CommonName attribute of the Subject, as you may find on many websites and blogs that copy outdated or incorrect information. SAN can specify more than one name if desired, but note that each item in it explicitly specifies either a DNS name or an IP address; be sure to use the correct one(s). All popular software has supported SAN for years, but so far only Chrome(ium) requires it. OpenSSL can create certs with extensions like SAN, but not with its simplest method (the one most popular on outdated or wrong websites), namely req -new followed by basic x509 -req. See:
https://security.stackexchange.com/questions/150078/missing-x509-extensions-with-an-openssl-generated-certificate
https://unix.stackexchange.com/questions/371997/creating-a-local-ssl-certificate
https://unix.stackexchange.com/questions/393601/local-ssl-certificates-in-chrome-ium-63
PS: Windows is irrelevant, except that you probably need to install OpenSSL, whereas on many Unix systems it is already present by default.
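To answer the "if yes, how?" part concretely, here is a sketch of a self-signed cert that names a public IP in the SAN (requires OpenSSL 1.1.1+ for -addext; the address 203.0.113.10 is a documentation placeholder):

```shell
# Self-signed certificate identifying the server by IP address.
# Note the SAN entry uses the IP: prefix, not DNS:.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=203.0.113.10" \
  -addext "subjectAltName=IP:203.0.113.10"

# Confirm the IP landed in the Subject Alternative Name
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"
```

The port never appears in the certificate, so the same cert covers all three port-forwarded servers behind that IP; clients will still warn until they import server.crt into their trust stores.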
Can I use a certificate from letsencrypt to sign local certificates?
I'm annoyed by security warnings when accessing routers and APs at 192.168.x.x.
I could create my own root cert, and import it into all my browsers etc, and create certs for all the local servers.
But I'd rather have the chain device -> www.example.com -> letsencrypt -> root
Then also guests could use my local servers/services without this security error.
No, you cannot, because the certificate issued to you by Let's Encrypt will not have the key usage for certificate signing enabled. Without this attribute in the issuer, any browser or SSL client must reject the certificate.
If this were possible, anyone could issue valid certificates for any server simply by having a valid certificate from a trusted CA.
If you want to issue certificates for your local servers, you will need to create your own CA and include its root certificate in the truststore of each client.
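A minimal sketch of that internal-CA approach (the file names, the hostname router.home.example, and the address 192.168.1.1 are hypothetical; assumes OpenSSL 1.1.1+):

```shell
# 1) Create the private root CA (import ca.crt into each client's trust store)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=My Home CA"

# 2) Create a key and CSR for a local device
openssl req -newkey rsa:2048 -nodes \
  -keyout router.key -out router.csr -subj "/CN=router.home.example"

# 3) Sign the CSR with the CA, putting the device's names in the SAN
printf "subjectAltName=DNS:router.home.example,IP:192.168.1.1\n" > san.ext
openssl x509 -req -in router.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -out router.crt -extfile san.ext

# 4) Check that the issued cert chains to the CA
openssl verify -CAfile ca.crt router.crt
```

Once ca.crt is trusted by a browser, router.crt served by the device produces no warning; guests, however, would still need to install the CA cert, which is the limitation the question is getting at.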
Yes, you can... but not like that
Yes, you can get certificates for servers on a private network. The domain must be a real domain with publicly visible TXT records, but the A, AAAA, and CNAME records can be private/non-routable (or live in a private zone).
No, the way to do that isn't by using Let's Encrypt certificates to sign local certificates.
You can accomplish exactly what you want to accomplish using the DNS-01 challenge (setting TXT records for your domain).
Who is your domain / dns provider?
Immediate, but Temporary Solution
If you want to test it out real quick, try https://greenlock.domains and choose DNS instead of HTTP for the "how do you want to do this" step.
Automatable Integration
If you want a configurable, automatable, deployable solution, try greenlock.js (there are Node plugins for Cloudflare, Route 53, Digital Ocean, and a few other DNS providers).
Both use Let's Encrypt under the hood. Certbot can also be used for either case and can use python plugins.
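As a hedged sketch of the manual DNS-01 flow with Certbot (the domain is a placeholder; --manual pauses and tells you the exact TXT record to publish, and this requires a real, publicly registered domain):

```shell
# Request a cert for an internal host, proving control of the domain
# via a DNS TXT record instead of an HTTP challenge.
certbot certonly --manual --preferred-challenges dns \
  -d internal.mydomain.example

# Certbot will ask you to publish something like:
#   _acme-challenge.internal.mydomain.example.  TXT  "<token>"
# Confirm the record is publicly visible before continuing:
dig +short TXT _acme-challenge.internal.mydomain.example
```

The DNS-provider plugins mentioned above automate exactly the publish-TXT-record step that --manual makes you do by hand.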
Possibly related...
P.S. You might also be interested in a service like Telebit, localtunnel, or ngrok.
I am getting the bad certificate error while accessing the server using its IP address instead of its DNS name.
Is this functionality newly introduced in TLS 1.1 and TLS 1.2? It would be good if someone could point out the OpenSSL code where it fails and returns the bad certificate error.
Why do we get a bad certificate error while accessing the server using the IP address instead of the DNS name?
It depends on the issuing/validation policies, user agents, and the version of OpenSSL you are using. So to give you a precise answer, we need to know more about your configuration.
Generally speaking, suppose www.example.com has an IP address of www.xxx.yyy.zzz. If you connect via https://www.example.com/..., then the connection should succeed. If you connect using a browser via https://www.xxx.yyy.zzz/..., then it should always fail. If you connect using another user agent via https://www.xxx.yyy.zzz/..., then it should succeed if the certificate includes www.xxx.yyy.zzz, and fail otherwise.
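You can reproduce the name-matching part of this offline with openssl verify (a sketch using placeholder names and OpenSSL 1.1.1+): a cert whose SAN lists only a DNS name matches that name but not the IP, while one that also lists the IP matches both.

```shell
# Cert with only a DNS name in the SAN
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout dns.key -out dns.crt -subj "/CN=www.example.com" \
  -addext "subjectAltName=DNS:www.example.com"

openssl verify -CAfile dns.crt -verify_hostname www.example.com dns.crt  # succeeds
openssl verify -CAfile dns.crt -verify_ip 203.0.113.10 dns.crt           # fails: no IP in SAN

# Cert that also carries the IP address in the SAN
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout both.key -out both.crt -subj "/CN=www.example.com" \
  -addext "subjectAltName=DNS:www.example.com,IP:203.0.113.10"

openssl verify -CAfile both.crt -verify_ip 203.0.113.10 both.crt         # succeeds
```

This is the same mismatch that shows up as a "bad certificate" style failure when a client connects by IP to a server whose certificate only names the host.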
Issuing/Validation Policies
There are two bodies which dominate issuing/validation policies. They are the CA/Browser Forum, and the Internet Engineering Task Force (IETF).
Browsers, like Chrome, Firefox, and Internet Explorer, follow the CA/B Baseline Requirements (CA/B BR).
Other user agents, like cURL and Wget, follow IETF issuing and validation policies, like RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile, and RFC 6125, Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS). The RFCs are more relaxed than the CA/B issuing policies.
User Agents
Different user agents have different policies that apply to DNS names. Some want a traditional hostname found in DNS, while others allow IP addresses.
Browsers only allow DNS hostnames in the Subject Alternative Name (SAN). If the hostname is missing from the SAN, then the match will not occur. Putting the server name in the Common Name is a waste of time and energy, because browsers require host names in the SAN.
Browsers do not match a public IP address in the SAN. They will sometimes allow a private IP from RFC 1918, Address Allocation for Private Internets.
Other user agents allow any name in the Subject Alternative Name (SAN). They will also match a name in both the Common Name (CN) and the SAN. Names include a DNS name like www.example.com, a public IP address, a private IP address like 192.168.10.10, and a local name like localhost or localhost.localdomain.
OpenSSL Version
OpenSSL version 1.0.2 and below did not perform hostname validation. That is, you had to perform the matching yourself. If you did not perform hostname validation yourself, then it appeared the connection always succeeded. Also see Hostname Validation and TLS Client on the OpenSSL wiki.
OpenSSL 1.1.0 and above perform hostname matching. If you switch to 1.1.0, then you should begin experiencing failures if you were not performing hostname matching yourself, or if you were not strictly following issuing policies.
It would be good if someone would point out OpenSSL code where it fails and return the bad certificate error.
The check-ins occurred in early-2015, and they have been available in Master (i.e., 1.1.0-dev) since that time. The code was also available in 1.0.2, but you had to perform special actions. The routines were not available in 1.0.1 or below. Also see Hostname Validation on the OpenSSL wiki. I don't have the Git check-ins because I'm on a Windows machine at the moment.
More information on the rules for names and their locations can be found at How do you sign a Certificate Signing Request with your Certification Authority and How to create a self-signed certificate with OpenSSL. There are at least four to six more documents covering them, like how things need to be presented for HTTP Strict Transport Security (HSTS) and Public Key Pinning with Overrides for HTTP.
I'm new to Cassandra and just installed a DataStax Community Edition 3-node cluster in our QA environment. I'd like to secure node-to-node and client-to-node communications within my cluster using a GlobalSign wildcard SSL cert that I already have. So far I have found posts showing how to secure a cluster using your own CA, but wasn't able to find any mention of how to use wildcard certs. Basically, I'd like to install my wildcard cert on all nodes in the cluster and use DNS A records to match node IP addresses to DNS names (e.g. 10.100.1.1 > node01.domain.com).
Is that even possible? Any help is greatly appreciated!
Mike
Using anything but certificate pinning as described in the reference is insecure, as Cassandra will not validate whether the hostname the certificate was created for is actually the host trying to connect. See CASSANDRA-9220 for details.
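For reference, the node-to-node and client-to-node settings involved look roughly like this in cassandra.yaml (paths and passwords are placeholders; note that, per the warning above and CASSANDRA-9220, enabling encryption alone does not give you hostname validation):

```yaml
server_encryption_options:
    internode_encryption: all
    keystore: /etc/cassandra/conf/node01.jks
    keystore_password: changeme
    truststore: /etc/cassandra/conf/truststore.jks
    truststore_password: changeme
client_encryption_options:
    enabled: true
    keystore: /etc/cassandra/conf/node01.jks
    keystore_password: changeme
```

With a wildcard cert, every node would present the same identity, which is exactly why per-node pinning is the safer design.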