Azure IoTHub_Client with X509 CA-based certificate in Python - azure-iot-hub

I'm playing with Azure IoT Hub/Edge and Python on a Linux client device. I have modified the iothub_client_sample_x509.py sample program from the SDK to use arbitrary X509 certificates (created with OpenSSL) to connect to IoT Hub. The device certificates are part of a CA structure with two layers of certificate chain on top (i.e. a two-tier CA).
Connection establishment works fine as long as I register the device on IoT Hub as 'X.509 Self-Signed' with the SHA1 fingerprint of the device cert as the primary/secondary fingerprint. When I try, however, to connect in 'X.509 CA-Signed' mode, I fail with 'Connection Not Accepted: 0x5: Not Authorized' (in the latter case, I registered the device on IoT Hub as 'X.509 CA-Signed'). The chain (CA) certificate files were uploaded under 'Certificates' in IoT Hub with status 'Verified'.
In both cases, I use a connection string like this: "HostName=<host name with full DNS name>;DeviceId=<device ID>;x509=true".
Is this the correct connection string for the 'X.509 CA-Signed' case? Do I need to transmit the certificate chain (in addition to the device cert) when I connect? How would I do that? The Common Name in the device certificates reads 'CN = <device ID>.westeurope.cloudapp.azure.com', and not 'CN = <device ID>.azure-devices.net'. Could that be a problem?
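For reference, the subject (CN) actually present in a device certificate can be inspected with OpenSSL; the file name here is just an example:
# print the subject of the device certificate (file name is an example)
openssl x509 -in device-cert.pem -noout -subject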
Please note that I cannot easily set up C#/.NET on the client devices for testing, so I'd like to stick with Python (or maybe C) for now. Is there any way to activate more granular logging to understand why connection establishment fails with '0x5: Not Authorized'? Anything else I may have missed?
Thanks for any help, Michael
Update 2018-05-16:
Hi Michael, thanks for your comments. Indeed, I have been able to make connection establishment work in 'X.509 CA-Signed' mode by setting up the certificates with the MS PowerShell scripts (https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-security-x509-create-certificates). Correcting the CN to the device name alone did not fix the problem with my original PKI setup, though. Guess I will have to analyze every little difference in the PKI setups.
Two obvious differences in the PKI setups are:
My setup uses a two-tier PKI (one root CA and one intermediate/signing CA) rather than a single CA. The certificates of both are stored in IoT Hub (verified).
The root CA and signing CA have 384-bit keys (ECC, secp384r1 curve) rather than 256-bit (prime256v1 curve). Both CA certs could be stored and verified without any problems in IoT Hub.
Are any of these differences known to cause problems?
Thanks, Michael

Setting the CN to the device name was only one part of the solution. In addition, I had to correct a subtle error in my root CA certificate.
It contained:
X509v3 extensions:
X509v3 Basic Constraints: critical
CA:TRUE, pathlen:0
'pathlen:0' means that this CA would not allow any subordinate, intermediate CA below itself. And - remember my PKI setup - there is an intermediate CA that would eventually issue the device certificates. Setting pathlen to 1 in the root CA certificate fixed the problem. Apparently, IoT Hub performs a very thorough validation of the certificate chain.
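For illustration, with a recent OpenSSL (3.0 or later, which supports -addext for req), a root CA certificate that permits exactly one level of intermediate CA below it could be produced like this; the key/file names and subject are examples, not the poster's actual setup:
# self-signed root CA cert that allows one intermediate CA below it
openssl req -x509 -new -key rootca.key -days 3650 -out rootca.pem \
  -subj "/CN=Example Root CA" \
  -addext "basicConstraints=critical,CA:TRUE,pathlen:1" \
  -addext "keyUsage=critical,keyCertSign,cRLSign"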
Ironically, simply deleting the root CA certificate in the Certificates section of IoT Hub also fixes the problem. Although the certificate chain is no longer complete then (the signer of the intermediate CA is missing), IoT Hub approves the certificate chain during connection establishment. Maybe this is because the intermediate CA certificate is in state 'verified' in IoT Hub.
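As a local sanity check, OpenSSL can perform the same kind of chain validation, including the pathlen constraint, before anything is uploaded (file names are examples); with pathlen:0 in the root, this should fail with a 'path length constraint exceeded' error:
# validate the device cert against the root, with the intermediate as untrusted chain material
openssl verify -CAfile rootca.pem -untrusted intermediate.pem device-cert.pem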

Even though you have not provided the detailed steps of how you generated the X.509 certificate for the device, I encountered a similar issue a long time ago. Please try to change the Common Name (CN) to the device name that was created in the Azure Portal (IoT Hub -> IoT Devices). If the error still cannot be solved, please feel free to let me know.

Related

Can I use my own Certificate Authority for HTTPS over LAN?

I have a server and a few clients, all running in different docker containers. The users can use the client by entering localhost:3000 in their browser (where the client container is running).
All the containers run on the same LAN. I want to use HTTPS.
Can I sign a public/private key pair using my own CA, then load the CA's public key into the browser?
I want to use the normal flow for public domains, but internally with my own CA.
Or should I look for another solution?
Meta: since you've now disclosed nodejs, that makes it at least borderline for topicality.
In general, the way PKIX (as used in SSL/TLS, including HTTPS) works is that the server must have a private key and matching certificate; this is the same whether you use a public CA or your own (as you desire). The server should also have any intermediate or 'chain' cert(s) needed to verify its cert. A public CA will always need such chain cert(s), because CA/Browser Forum rules (codifying common best practice) prohibit issuing 'subscriber' (EE) certs directly from a root; with your own CA it is up to you -- you can choose to use intermediate(s) or not, although as I say it is considered best practice to use them and keep the root private key 'offline'. In cryptography, that means not on any system that communicates with anybody (such as, in this case, servers that request certificates), thus eliminating one avenue of attack; instead it lives on a specialized device that is 'airgapped' (not connected, or even able to be connected, to any network) and in a locked vault, possibly with 'tamper protection', a polite name for self-destruct. As a known example of the rigor needed to secure something as sensitive as the root key of an important CA, compare Stuxnet.
The client(s) does not need, and should not be configured with, the server cert unless you want to do pinning; it (they) do need the CA root cert. Most clients, and particularly browsers, already have many/most/all public CA root certs built in, so using a cert from such a CA does not require any action on the client(s); OTOH using your own CA requires adding your CA cert to the client(s). Chrome on Windows uses the Microsoft-supplied (Windows) store; you can add to this explicitly (using the GUI dialog, the certutil program, or PowerShell), although in domain-managed environments (e.g. businesses) it is also popular to 'push' a CA cert (or certs) using GPO. Firefox uses its own truststore, which you must add to explicitly.
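For example, on Windows a CA cert can be added to the machine's trusted root store from an elevated prompt (the file name is an example):
# add the CA certificate to the Local Machine 'Trusted Root Certification Authorities' store
certutil -addstore -f Root myca.crt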
In nodejs you configure the private key, server cert, and (if needed) chain cert(s) as documented for the tls/https modules.
PS: note you generally should, and for Chrome (and new Edge, which is actually Chromium) must, have the SubjectAlternativeName (SAN) extension in the server cert specify its domain name(s) or, optionally, IP address(es), NOT (or not only) the CommonName (CN) attribute, as you will find in many outdated and/or incompetent instructions and tutorials on the Web. The OpenSSL command line makes it easy to do CommonName but not quite so easy to do SAN; there are many Qs on several Stacks about this. Any public CA after about 2010 does SAN automatically.
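For what it's worth, OpenSSL 1.1.1 and later make SAN much less painful via -addext; a minimal self-signed server cert carrying SAN entries (host name, IP, and file names are examples) could be generated like this:
# one-shot key + self-signed cert with SAN (requires OpenSSL 1.1.1+ for -addext)
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=myhost.lan" \
  -addext "subjectAltName=DNS:myhost.lan,IP:192.168.1.10"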

Solve boost.asio certificate failed error -without- access to source code, to find out what information Philips Hue Bridge shares

This is a bit of a super duper specific question, but who knows, there may be someone out there who can help me.
I happen to have a Philips Hue Bridge and I would love to know what personal information it is sharing with the outside world. Using tcpdump on my router, I figured out that the Hue Bridge has a rather talkative personality. But because it talks over SSL tunnels, I have no idea what it says. So what I did is: I set up a SonicWall with SSL-DPI with a CA, got root access to the Hue Bridge, and found the application that does the talking to wss://ws.meethue.com (it's called websocketcd). I then replaced the root certificate on the Hue Bridge, adjusted the cipher to match the SonicWall, and now I am stuck because boost.asio throws a validation error on my certificate:
error:14090086:lib(20):func(144):reason(134)
For those not too familiar with the error codes, this is what they mean:
lib(20) is ERR_LIB_SSL
func(144) is SSL_F_SSL3_GET_SERVER_CERTIFICATE
reason(134) is SSL_R_CERTIFICATE_VERIFY_FAILED
To verify it's not my SonicWall or my certificate that is causing the problem, I executed openssl s_client -connect ws.meethue.com:443 -CAfile ca.pem from the Hue Bridge, and that validates the chain perfectly fine, the same way as the original certificate. I also verified that the application is loading my root certificate and cipher correctly (because if I change the cipher, I get a cipher error). Also, in my browser I can visit https://ws.meethue.com without certificate errors. Here's my self-made certificate chain, in case someone wants to check it: https://gofile.io/d/5msjoJ (password for download/key 1020304050; it's a temporary key that only exists in my local test env, so it's safe to share ;-)
If websocketcd weren't a binary file, the problem would be super easy to solve using set_verify_mode, but unfortunately it is a binary, and that makes life significantly more complicated.
Is there anyone who can give me advice on how to make this blob called websocketcd, with boost.asio in it, accept my root certificate? What I also tried: letting it communicate without ssl, and with ssl without encryption (eNULL:aNULL ciphers). I am a bit hesitant to share the blob, but for those who have a Hue Bridge too, it's located at /usr/bin/websocketcd.
Perhaps you can use strace (or maybe even ltrace) to spot which certificate paths it is using for root authorities.
If it uses a single file, you might be able to hack it by replacing it with a CA that verifies your MITM certificate.
Sometimes the file can contain multiple certificates, so it's worth appending/prepending yours.
If you're in luck, there will be a readdir on a directory containing certificates. If so, you should be able to add your root certificate (in PEM form) there; remember to run c_rehash on that directory.
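A minimal sketch of that approach, using the binary path mentioned in the question (the syscall list and grep pattern are assumptions; adjust to taste):
# log file-open syscalls and keep the lines that look certificate-related
strace -f -e trace=open,openat,stat /usr/bin/websocketcd 2>&1 | grep -iE 'cert|pem|crt|ssl'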
For those interested: after some 20 hrs, I figured out that websocketcd requires a certificate revocation list for each CA in the chain (the CRLs do not have to contain any revoked serials). These CRLs need to be included in the root CA file that is loaded using the ca-filename argument. I was not aware that Boost Asio could demand that a CRL be present for each CA, but apparently they (Signify) managed to do so.
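For reference, an empty CRL (no revoked serials) can be generated per CA with OpenSSL and concatenated into the bundle passed via ca-filename; the config and file names are examples and assume a standard 'openssl ca' setup:
# issue an (empty) CRL for each CA in the chain
openssl ca -config rootca.cnf -gencrl -out rootca.crl
openssl ca -config intermediate.cnf -gencrl -out intermediate.crl
# bundle the CA certs and their CRLs into the file loaded via ca-filename
cat rootca.pem intermediate.pem rootca.crl intermediate.crl > ca-bundle-with-crls.pem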

LDAPS Microsoft Active Directory Multiple Certificates RFC6125

We have a Microsoft Active Directory domain with a large pool of domain controllers (DCs) that are set up with LDAP. These are all set up with LDAPS and use Certificate Services via a template to set up a certificate with the domain name (i.e. test.corp) in the Subject Alternative Name (SAN) for the LDAPS server to serve.
Since these are DCs, DNS is set up in a pool for each of these systems to respond to requests for test.corp in a round-robin fashion.
Each of these DCs has multiple templates and multiple certificates in the Local Computer\Personal certificate store.
Upon testing with a nodejs module, ldapjs, when making an LDAPS request using the domain name test.corp, we noticed that a handful of servers fail with the following message:
Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: Host: test.corp. is not in the cert's altnames: othername:<unsupported>, DNS:<hostname>.test.corp
As we investigated, we found that this handful of LDAPS servers are serving the incorrect certificate. We determined this by using the following command:
openssl s_client -connect <hostname>.test.corp:636
If you take the certificate section of the output, put it in a file, and use a tool such as the Certificate Manager or certutil to read the file, you can see the certificate is not the correct one (it does not have the domain "test.corp" SAN). We also verified this by comparing the serial numbers.
As we investigated, since we have DCs that have multiple certificates in the Local Computer\Personal certificate store, we came across the following article:
https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx
It suggests putting the certificate from the Local Computer\Personal certificate store into the Active Directory Domain Services\Personal store. We followed the steps outlined, but we found the same results.
Upon further investigation, it was suggested to use a tool called ldp or adsiedit. We then proceeded to use these tools, and spoofed the hosts file of the local machine we were testing from to point the domain (test.corp) to the IP of one of the DCs that were giving us trouble. After a restart to clear any cache, we tested the "ldp" and "adsiedit" tools by connecting to test.corp. These tools did not report any errors.
We found this odd; we then ran the openssl command to see what certificate was being served by this same system, and we found it was still serving the incorrect certificate.
Upon further research, it appears that the "ldp" tool (upon selecting the SSL checkbox) and the "adsiedit" tool are not compliant with RFC6125, specifically B.3:
https://www.rfc-editor.org/rfc/rfc6125#appendix-B.3
which basically states that the identity of the certificate must match the identity of the request, otherwise the handshake fails. This identity verification is done by using the certificate Common Name (CN) or the SAN.
Based on this, it appears the tools "ldp" and "adsiedit" do not conform to the RFC6125 standard.
All this to say: first, we need to fix the handful of domain controllers that are serving the incorrect certificate. We are open to suggestions, since we have been working on this problem for the past few months. Second, is there a way to get the MS tools in question to work to the RFC6125 standard?
This has been moved to:
https://serverfault.com/questions/939515/ldaps-microsoft-active-directory-multiple-certificates-rfc6125
RFC6125 specifically states that it does not supersede existing RFCs. LDAP cert handling is defined in RFC4513. Outside of that, RFC6125 has significant flaws. See also https://bugzilla.redhat.com/show_bug.cgi?id=1740070#c26
LDP will supposedly validate the SSL certificate against the client store if you toggle the SSL checkbox on the connection screen.
That said, I'm not surprised that neither it nor ADSI Edit enforces that part of the standard, given that they are often used to configure or repair broken configurations. Out of the box and without Certificate Services, DCs use self-signed certs for LDAPS. I would wager 80% of DCs never get a proper certificate for LDAP; if the tools enforced it, most admins wouldn't be able to connect. A better design decision would have been a toggle to turn the validation off.
I use a similar openssl command to verify my own systems. I think it's superior to LDP even if LDP were to validate the certificate. To save you some effort, I would suggest using this variant of the openssl command:
echo | openssl s_client -connect <hostname>.test.corp:636 2>/dev/null | openssl x509 -noout -dates -issuer -subject -text
That should save you having to output to a file and having to read it with other tools.
I've found LDAPS on AD to be a huge pain for the exact reasons you describe. It just seems to pick up the first valid cert it can find. If you've already added it to the AD DS personal store, I'm not sure where else to suggest you go, other than removing some of the other certs from the DC's computer store.

simple Akka ssl encryption

There are several questions on stackoverflow regarding Akka, SSL and certificate management to enable secure (encrypted) peer-to-peer communication between Akka actors.
The Akka documentation on remoting (http://doc.akka.io/docs/akka/current/scala/remoting.html)
points readers to this resource as an example of how to Generate X.509 Certificates.
http://typesafehub.github.io/ssl-config/CertificateGeneration.html#generating-a-server-ca
Since the actors are running on internal servers, the generation of a server CA for example.com (or really any DNS name) seems unrelated.
Most servers (for example EC2 instances running on Amazon Web Services) will be run in a VPC and the initial Akka remotes will be private IP addresses like
remote = "akka.tcp://sampleActorSystem#172.16.0.10:2553"
My understanding is that it should be possible to create a self-signed certificate and generate a trust store that all peers share.
As more Akka nodes are brought online, they should (I assume) be able to use the same self-signed certificate and trust store used by all other peers. I also assume there is no need to trust all peers with an ever-growing list of certificates, even if you don't have a CA, since the trust store would validate that certificate and avoid man-in-the-middle attacks.
The ideal solution, and hope, is that it is possible to generate a single self-signed certificate, without the CA steps, and a single trust store file, and share them among any combination of Akka remotes (both the client calling the remote and the remote itself, i.e. all peers).
There must be a simple-to-follow process to generate certificates for simple internal encryption and client authentication (just trust all peers the same).
Question: can these all be the same file on every peer, which will ensure they are talking to trusted clients, and enable encryption?
key-store = "/example/path/to/mykeystore.jks"
trust-store = "/example/path/to/mytruststore.jks"
Question: Are the X.509 instructions linked above overkill? Is there a simple self-signed / trust store approach without the CA steps? Specifically for internal IP addresses only (no DNS), and without an ever-increasing list of IP addresses in a cert, since servers could autoscale up and down.
First, I have to admit that I do not know Akka, but I can give you the guidelines for identification with X509 certificates in the SSL protocol.
Akka server configuration requires an SSL certificate bound to a hostname:
You will need a server with a DNS hostname assigned, for hostname verification. In this example, we assume the hostname is example.com.
An SSL certificate can be bound to a DNS name or an IP address (not usual). In order for the client's verification to succeed, it must correspond to the IP / hostname of the server.
Akka requires a certificate for each server, issued by a common CA:
CA
- server1: server1.yourdomain.com (or IP1)
- server2: server2.yourdomain.com (or IP2)
To simplify server deployment, you can use a wildcard *.yourdomain.com
CA
- server1: *.yourdomain.com
- server2: *.yourdomain.com
On the client side you need to configure a truststore (JKS) that includes the CA certificate (i.e. the CA's public part). The client will then trust any certificate issued by this CA.
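For example, with the JDK's keytool (alias, file names, and password are examples):
# import the CA certificate into a JKS truststore shared by all clients
keytool -importcert -alias myca -file ca.crt -keystore mytruststore.jks -storepass changeit -noprompt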
In the scheme you have described, I think you do not need a keystore on the client. It is needed when you also want to identify the client with a certificate. The SSL encrypted channel will be established in both cases.
If you do not have a domain name like yourdomain.com and you want to use internal IPs, I suggest issuing a certificate for each server and binding it to the server's IP address.
Depending on how Akka verifies the server certificate, it may be possible to use a single self-signed certificate for all servers. Akka probably delegates trust configuration to the JVM defaults. If you include a self-signed certificate in the truststore (rather than a CA certificate), the SSL socket factory will trust connections presenting this certificate, even if it is expired or if the hostname of the server and the certificate do not match. I do not recommend it.
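That said, if you do go the single shared self-signed certificate route from the question, a minimal keytool sketch would look like this (alias, dname, file names, and password are examples):
# 1. generate a key pair plus self-signed cert in the keystore
keytool -genkeypair -alias akka-peer -keyalg RSA -keysize 2048 -validity 3650 \
  -dname "CN=akka-internal" -keystore mykeystore.jks -storepass changeit
# 2. export the self-signed certificate
keytool -exportcert -alias akka-peer -keystore mykeystore.jks -storepass changeit -file akka-peer.crt
# 3. import it into the truststore every peer will share
keytool -importcert -alias akka-peer -file akka-peer.crt -keystore mytruststore.jks -storepass changeit -noprompt
Every node would then receive copies of the same mykeystore.jks and mytruststore.jks, matching the key-store / trust-store settings quoted in the question.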

How does the sim800 get an SSL certificate?

The SIM800 supports the SSL protocol. The AT command "AT+CIPSSL" sets TCP to use the SSL function.
In the "sim800_series_ssl_application_note_v1.01.pdf" is noted that: "Module will automatic begin SSL certificate after TCP connected."
My problem: what is the exact meaning of "begin SSL certificate"? What does the SIM800 do, exactly? Does the SIM800 get the SSL certificate from the website? Where does the SIM800 save the SSL certificate?
As far as I know, the SIM800 has some certificates in it, and when you use a TCP+SSL or HTTP+SSL connection it will automatically use those certificates.
If those certificates are not OK for you, you will need to use an SD card, save the certificates you want there, and use the command AT+SSLSETCERT to set the certificate you saved on your SD card. Here you can find how to use the file system.
Usually the certificates that come with the module are enough and you won't need this. But they didn't work for me, for example, when I tried to communicate with Azure via MQTT. I had to encrypt the data myself using the wolfSSL library and send it using TCP without SSL.
Note: Not all SIM800 modules have SD card support.
There is very little information about the SIM800 and SSL certificates on the web, and like you I have a lot of questions about it.
Regarding your questions on how the SIM800 gets a certificate and where it saves it: it seems, according to sim800_series_ssl_application_note_v1.01.pdf, that you can create (defining your own path), write, and import an SSL certificate on your own with the AT+FSCREATE, AT+FSWRITE and AT+SSLSETCERT commands. An example is provided in paragraph 3.10.
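A rough sketch of that sequence (the path, size, and timeout values are placeholders, and the exact parameter syntax should be checked against the application notes for your firmware version):
AT+FSCREATE=C:\USER\cacert.pem
AT+FSWRITE=C:\USER\cacert.pem,0,1200,100
(then send the 1200 bytes of certificate data within the write timeout)
AT+SSLSETCERT="C:\USER\cacert.pem"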
I'm sorry, I can't answer your other questions.
Anyway, if you get further information about the SIM800 and SSL, I would be grateful if you shared it with me.
When you use AT+CIPSSL you tell the SIM module to use an SSL connection over TCP. When you then use the +CIPSTART command:
The SIM module requests the TCP connection with the server through SSL.
The server sends the server's SSL certificate.
The authenticity of that certificate is checked against the internal certificate authority certificate (the one that resides inside the SIM module), which is cryptographically connected with the server certificate.
If the authenticity of the certificate cannot be confirmed, the SIM module will close the connection, unless you use the command AT+SSLOPT=0,0 (which forces the SIM module to ignore invalid certificate authentication) prior to the AT+CIPSSL command.
//Key exchange
The SIM module then encrypts its master key (already inside the SIM module; it cannot be changed or read) with the public key (which is part of the server certificate that was already sent) and sends it back to the server.
The server then encrypts its master key with the SIM module's master key and sends it back to the SIM module. The key exchange is now complete, as both sides (server and SIM module) have received master keys.
The SIM module currently doesn't support client authentication, which means that the server cannot authenticate the client. That means there must be some other option for authentication (for example, in MQTT that can be a username and password that only the client knows).
If you want your module to be able to authenticate the server, you will need to create a self-signed certificate for the server and a certificate authority certificate (for the SIM module) which is cryptographically connected to the self-signed certificate, and upload them to the server and the SIM module (through the AT+SSLSETCERT command, from the SD card).
If you only want to encrypt the data traffic, you can ignore an invalid certificate (AT+SSLOPT=0,0), as you will receive the public key nevertheless. But if you want to be sure about the server's authenticity, you will need to upload the right certificate to the module.
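For the encryption-only case, a sketch of that command order (host and port are placeholders):
AT+SSLOPT=0,0
AT+CIPSSL=1
AT+CIPSTART="TCP","example.com","443"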