I've successfully built OpenSSL (multiple versions, up to the current latest v1.1.1p; v3+ builds fine for Linux but was failing for Android, so I am using 1.1.1p for now). I'm generating the static libraries for Linux x64 and Android (all platforms), then building mosquitto against those OpenSSL static libraries to produce the relevant libmosquitto static library as well as the mosquitto_pub/sub clients.
Everything builds fine (apart from the mosquitto timestamp example for Android) and connects without issue to insecure MQTT brokers on port 1883.
However, when I use the library against a secure MQTT broker on port 8883 (such as the free HiveMQ broker), I have to specify the path to the OpenSSL certificates with "--capath /etc/openssl/certs", whereas the default (apt-installed) Ubuntu versions of mosquitto_pub/sub don't need this.
I've tried the --tls-use-os-certs option (and the --insecure option, although that's irrelevant in this instance), but I just get:
Client null sending CONNECT
OpenSSL Error[0]: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
Error: A TLS error occurred.
with the -d option tagged on for more information.
When I specify "--capath /etc/openssl/certs" it works fine, but I certainly don't know how I'm going to specify those paths cross-platform...
I suspect I'm missing something when building OpenSSL, but I can't see it in the documentation, so I'm hoping someone can tell me why I have to specify the path to the OpenSSL certs when the default Ubuntu system versions of mosquitto_pub/sub don't need this option.
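In case it helps to experiment: OpenSSL's default verify locations come from the openssldir chosen at Configure time, so a from-source build often ends up looking in /usr/local/ssl/certs rather than the distro's certificate directory. A minimal sketch, assuming that is the cause (the Configure target shown is for the Linux x64 build; the same --openssldir idea would apply to the Android targets):

# where this OpenSSL build looks for certs by default
openssl version -d

# point a from-source build at the distro certs at run time
export SSL_CERT_DIR=/etc/ssl/certs
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt

# or bake the distro layout in at build time
./Configure linux-x86_64 --openssldir=/usr/lib/ssl

The SSL_CERT_DIR/SSL_CERT_FILE variables are honored when the application loads OpenSSL's default verify paths, which mosquitto presumably does when --tls-use-os-certs is given.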
I have configured a ListenHTTP 1.7.0 processor in NiFi 1.7.0-RC1. It is listening on a custom port behind a reverse proxy. I have configured a StandardRestrictedSSLContextService with a JKS keystore and have added the keystore password. We have not configured the truststore as we don't expect to need mutual TLS. The certificate is signed by an internal enterprise CA and is (or should be!) trusted by the client.
When I test this with Chrome I receive the following:
This site can’t provide a secure connection
my.server uses an unsupported protocol.
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
Unsupported protocol
The client and server don't support a common SSL protocol version or cipher suite.
Troubleshooting:
We have tried both TLS and TLSv1.2 in the ListenHTTP processor.
We have tried using curl (Linux) and Invoke-WebRequest (Windows) but have received variations on the bad cipher/SSL version message above.
I don't see anything in the release notes suggesting that the ListenHTTP processor changed much since 1.7.0, so I'm assuming that I don't need to upgrade NiFi.
Can anyone suggest what to try next or explain why we see this error?
I have read the following:
https://www.simonellistonball.com/technology/nifi-ssl-listenhttp/
https://cwiki.apache.org/confluence/display/NIFI/Release+Notes
Nifi: how to make ListenHTTP work with SSL
What version of Java are you running on? Java 11 provides TLSv1.3, which is the default offering if you have generic TLS selected, but NiFi 1.7.0 doesn't support TLSv1.3 (and doesn't run on Java 11). So assuming you are running on Java 8: recent updates have introduced TLSv1.3 but should still provide TLSv1.2. This error can also indicate that the certificate you have provided is invalid or incompatible with the cipher suite list provided by the client. You can use the following to diagnose the available cipher suites and protocol versions:

$ openssl s_client -connect <host:port> -debug -state -CAfile <path_to_your_CA_cert.pem>

Adding -tls1_2 or -tls1_3, etc. will restrict the connection attempt to the specified protocol version as well.
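For example, to check whether the listener will negotiate TLSv1.2 specifically (host, port, and CA file are placeholders):

$ openssl s_client -connect my.server:9443 -debug -state -CAfile my_ca.pem -tls1_2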
You should definitely upgrade from NiFi 1.7.0 -- it was released over 2 years ago, has known issues, and there have been close to 2000 bug fixes and features added since, including numerous security issues. NiFi 1.12.1 is the latest released version.
I have a microK8s cluster and expose the API server at my domain.
The server.crt and server.key in /var/snap/microk8s/1079/certs need to be replaced with the ones that include my domain.
Otherwise, as expected, I get the error:
Unable to connect to the server: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, not mydonaim.com
With the help of cert-manager I have produced certificates and replaced them, and my system works well.
Problem: every time the server is restarted, server.crt and server.key are regenerated in
/var/snap/microk8s/1079/certs. My custom certs are deleted, making the API server unreachable remotely.
How can I stop the system from doing that all the time?
Workaround?
Should I place my certificates elsewhere and edit config files like /var/snap/microk8s/1079/args/kube-controller-manager with the path to those certificates? Are those config files auto-replaced as well?
Cluster information:
Kubernetes version: 1.16.3
Cloud being used: Bare metal, single-node cluster
Installation method: Ubuntu Server with Snaps
Host OS: Ubuntu 18.04.3 LTS
It looks like there is an existing issue that describes copying and modifying /var/snap/microk8s/current/certs/csr.conf.template to include any extra IP or DNS entries for the generated certificates (a sketch of that edit follows below).
Please be aware of the proposed updates in https://discuss.kubernetes.io/t/services-and-ports/11263/6. The following command also needed to be run in my environment:
sudo microk8s refresh-certs
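A hedged sketch of what that template edit typically looks like (the DNS.6 line is the addition; the surrounding entries are illustrative of the shipped template, and the numbering must continue from whatever is already there):

# /var/snap/microk8s/current/certs/csr.conf.template
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = mydonaim.com

# then regenerate the certificates from the template
sudo microk8s refresh-certs

Because the certs are regenerated from this template on restart, entries added here should survive the regeneration that was deleting the custom certs.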
I am using Kafka version 2.12-2.2.1 on Windows. I have set up TLS on my local Windows system using the signed-certificates process.
Kafka is running fine, and there is a command to check whether the certificates are installed on Kafka:
openssl s_client -debug -connect localhost:9093 -tls1
But when I try to connect to localhost:9093 using a producer or consumer, it throws an error saying:
connection to node -1 failed due to authentication
I have tried everything and I am stuck; even the documentation provided does not give any hints for solving this error.
Note: one more question: how can I list topics, and describe a topic if it exists, over SSL in Kafka? That command is also not working (see the sketch below).
Along with that, I have tried every answer on SO, but still no success.
The documentation I have followed: Installing ssl on kafka
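A minimal sketch of the client-side configuration the CLI tools usually need for an SSL listener (paths, password, and topic name are placeholders; the truststore must contain the CA that signed the broker certificate, and keystore settings are additionally needed if the broker sets ssl.client.auth=required):

# client-ssl.properties
security.protocol=SSL
ssl.truststore.location=C:/kafka/ssl/client.truststore.jks
ssl.truststore.password=changeit

# list and describe topics over the TLS listener
kafka-topics.bat --bootstrap-server localhost:9093 --command-config client-ssl.properties --list
kafka-topics.bat --bootstrap-server localhost:9093 --command-config client-ssl.properties --describe --topic my-topic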
I have a command line application that is using the libcurl-4 DLLs, and currently I can get everything to work by placing my CA certs in my working directory and passing their names to CURLOPT_CAINFO and CURLOPT_SSLCERT with a ./ prefix.
But what I am working on is getting cURL to not use what is in the current directory and instead use the certs that are stored in my computer's system store.
From reading cURL's documentation, I understand that if you configure it without specifying a default ca-bundle or ca-path, it will "auto-detect a setting",
and that CURLOPT_CAINFO is by default set to "built-in system specific".
So can anyone help me understand:
if nothing is specified at configure time with curl, is the default path it detects the system store? Or does curl use its own path for a system store?
what value do you give curl_easy_setopt(m_curlHandle, CURLOPT_CAINFO, <value>) to make CURLOPT_CAINFO use its default value?
Any help is appreciated as I am still learning how this all works.
Thank you.
OpenSSL does not support using the "CA certificate store" that Windows has on its own. If you want your curl build to use that cert store, you need to rebuild curl to use the schannel backend instead (aka "winssl"), which is the Windows-native TLS library and also uses the Windows cert store by default.
If you decide to keep using OpenSSL, you simply must provide CA certs in either a PEM file or a specially crafted directory. Since Windows doesn't provide its system store in that format, you either have to get a suitable store from somewhere (for example, the Mozilla-derived PEM bundle the curl project publishes) or figure out how to convert the Windows cert store to PEM format.
Update
Starting with libcurl 7.71.0, due to ship on June 24, 2020, it will get the ability to use the Windows CA cert store when built to use OpenSSL. You then need to use the CURLOPT_SSL_OPTIONS option and set the correct bit in the bitmask: CURLSSLOPT_NATIVE_CA.
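A minimal sketch of what that looks like in code, assuming libcurl 7.71.0+ built against OpenSSL (the URL and the fallback bundle path are placeholders; error handling omitted):

#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
#if LIBCURL_VERSION_NUM >= 0x074700 /* 7.71.0 or later */
        /* ask the OpenSSL backend to import the Windows CA store */
        curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NATIVE_CA);
#else
        /* older libcurl: fall back to an explicit PEM bundle */
        curl_easy_setopt(curl, CURLOPT_CAINFO, "cacert.pem");
#endif
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}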
Since April 2018, those of you who want to download a file using the Windows command line can use the curl.exe executable. Windows 10 build 17063 and later include curl, so you can execute it directly from Cmd.exe or PowerShell.exe.
curl.exe -V
curl 7.55.1 (Windows) libcurl/7.55.1 WinSSL
Release-Date: [unreleased]
Protocols: dict file ftp ftps http https imap imaps pop3 pop3s smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile SSPI Kerberos SPNEGO NTLM SSL
Be careful when using PowerShell: the cmdlet Invoke-WebRequest is aliased to curl, so remove the alias (Remove-Item alias:curl) or explicitly invoke curl.exe.
As far as I understand, curl.exe is built with Schannel (Microsoft's native TLS engine), so libcurl still performs peer certificate verification, but instead of using a CA cert bundle, it uses the certificates that are built into the OS.
curl.exe "https://www.7-zip.org/a/7z1805-x64.exe" --output c:\temp\7zip.exe
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1404k 100 1404k 0 0 1404k 0 0:00:01 --:--:-- 0:00:01 9002k
I have a site which is served over HTTPS, but which iTunes can't find. My suspicion is that it's related to the iTunes backend server being Java 6, and Java 6 not supporting SNI. SSL Labs seems to hint that my site does require SNI (see this report, and search for SNI), but I can't think why. Have I misunderstood multi-domain certificates? I've got multiple sites running on the same server, but my understanding was that as long as all the URLs were listed as Subject Alternative Names on the certificate, that all would be well.
Does anyone know a good way to check if a URL requires SNI support on the client to access it? I don't have a Windows XP/Java 6 install around to play with sadly.
The reports from SSLLabs regarding SNI are usually correct. Your understanding that SNI is not needed if your certificate contains all possible hosts is correct too. But "not needed in theory" does not mean that your server setup does not require SNI anyway.
I don't have a Windows XP/Java 6 install around to play with sadly.
Given that you only specify what you don't have, I will assume that you have everything else which might be used. A simple way to check is with openssl:
# without SNI
$ openssl s_client -connect host:port
# use SNI
$ openssl s_client -connect host:port -servername host
Compare the output of both calls of openssl s_client. If they differ in the certificate they serve, or if the call without SNI fails to establish an SSL connection, then you need SNI to get the correct certificate or to establish an SSL connection at all.
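A quick way to compare just the served certificate's subject (the host is a placeholder):

# subject of the cert served without SNI
openssl s_client -connect host:443 </dev/null 2>/dev/null | openssl x509 -noout -subject
# subject of the cert served with SNI
openssl s_client -connect host:443 -servername host </dev/null 2>/dev/null | openssl x509 -noout -subject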
An easy way to check if a site relies on SNI is this:
openssl s_client -servername alice.sni.velox.ch -tlsextdebug -msg \
-connect alice.sni.velox.ch:443 2>/dev/null | grep "server name"
And if in that output you see the following, it means the site is using SNI.
TLS server extension "server name" (id=0), len=0
The above is a summary of an answer at serverfault.
Nginx in general, and your site in particular, accepts but doesn't require SNI. To test this you cannot easily use Oracle Java out of the box, because its cacerts does not include DST Root CA X3 which is the root cert used (initially) by 'Let's Encrypt' who issued your site's cert; this is true for all versions of Oracle Java up to current (8u74). Windows (hence IE and Chrome on Windows) and Firefox do have this root cert; I can't say for other OS or browsers.
To fix this so you can easily test, either:
use Oracle Java 6 but modify JRE/lib/security/cacerts to add the DSTX3 cert, obtained either from your OS or browser, or by following the link at https://letsencrypt.org/certificates/ to https://www.identrust.com/certificates/trustid/root-download-x3.html -- except that page nonstandardly gives you only the base64 body of the cert so you must manually add the PEM header and trailer lines before Java keytool will import it.
use Oracle Java 6 as-is but configure your application (with system properties) to use a custom truststore which you create containing the DSTX3 cert as above (see the sketch after this list).
use a version of Java 6 that does include this root cert in cacerts. In particular I use CentOS 6 and its openjdk packages (for 6, 7, and 8) use a systemwide CA 'bundle' that includes DSTX3, which is what made it easy for me to do this test. I expect, but can't confirm, that other RedHat variants do the same. For other distros and platforms I can't say; if not, see above.
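For reference, a hedged sketch of that custom-truststore route (the file names, the password, and the MyTestClient class are placeholders):

# import the DST Root CA X3 cert (PEM, with header/trailer lines added) into a new truststore
keytool -importcert -alias dstrootx3 -file dst-root-x3.pem -keystore mytruststore.jks -storepass changeit

# point the JVM at it via system properties
java -Djavax.net.ssl.trustStore=mytruststore.jks -Djavax.net.ssl.trustStorePassword=changeit MyTestClient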
Then monitor the connection attempt with Wireshark or similar to see that the ClientHello does not contain SNI, but the connection succeeds and is successfully used for an HTTP request.
If you actually want to communicate with the server instead of testing it for SNI, simply omit the final 'monitor' step.