Connecting a Mosquitto MQTT C client to Azure IoT Hub - SSL

As the title states, I'm having difficulties connecting my Mosquitto MQTT client (written in C) to my Azure IoT-hub. I've managed to connect to many different platforms before (e.g. Amazon EC2, ThingsBoard, TheThings.io, SierraWireless, ...), so I know my client is pretty solid.
The difficulty here is that I need some sort of certificate to be allowed to connect, and I'm not sure how to set that up.
I have added the following configuration in order to get this working:
int ver = MQTT_PROTOCOL_V311;
mosquitto_opts_set(client, MOSQ_OPT_PROTOCOL_VERSION, &ver); /* this option takes a pointer to an int, not a string */
mosquitto_tls_set(client, "/home/ca-certificates.crt", NULL, NULL, NULL, NULL);
mosquitto_tls_insecure_set(client, 1);
mosquitto_tls_opts_set(client, 0, "tlsv1", NULL);
mosquitto_username_pw_set(client, "hubname.azure-devices.net/deviceName", "SharedAccessSignature sr=hubname.azure-devices.net%2Fdevices%2FdeviceName&sig=sigValue&se=1553087157");
In the code above, "hubname", "deviceName" and "sigValue" are of course replaced with real values in my code.
Can any of you point me to what I'm doing wrong, or what other configuration steps I need to take?

I installed Mosquitto on Windows and successfully sent a message with this command:
mosquitto_pub -d -h hubname.azure-devices.net -i "device1" -u "hubname.azure-devices.net/device1" -P "SharedAccessSignature sr=hubname.azure-devices.net%2Fdevices%2Fdevice1&sig=sig&se=1553325061" -m "hi from mosquitto client" -t "devices/device1/messages/events/" -p 8883 --cafile \path-to-cert-file\IoTHubTest.cer -V mqttv311
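As an aside (my addition, not part of the original answer): if you need to mint a fresh SAS token for a test like this, the Azure CLI with the azure-iot extension can generate one, reusing the question's placeholder hub and device names:
az iot hub generate-sas-token -n hubname -d device1 --duration 3600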
Based on the information you provided, the issue may be caused by the certificate.
Azure IoT Hub uses the DigiCert Baltimore Root certificate to secure device connections. (Note that Azure China does not use the CyberTrust root CA; it still uses the WoSign root CA.)
You can create this file by copying the certificate information from certs.c in the Azure IoT SDK for C. Include the lines -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----, remove the " marks at the beginning and end of every line, and remove the \r\n characters at the end of every line.
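If you'd rather script that cleanup, here is a rough sed sketch; the input filename is a placeholder for wherever you pasted the lines from certs.c:
# strip the leading " and the trailing \r\n" from each pasted line
sed -e 's/^"//' -e 's/\\r\\n"$//' pasted-cert-lines.txt > IoTHubTest.cer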
Here I saved it as IoTHubTest.cer.
Hope this is helpful to you.

Finally managed to get things working. It turned out I had cross-compiled my Mosquitto client without SSL support. After recompiling with SSL enabled and reinstalling, these calls now all succeed and I can connect:
int ver = MQTT_PROTOCOL_V311;
mosquitto_opts_set(client, MOSQ_OPT_PROTOCOL_VERSION, &ver);
mosquitto_tls_set(client, NULL, "/etc/ssl/certs", NULL, NULL, NULL);
mosquitto_tls_insecure_set(client, 1);
mosquitto_tls_opts_set(client, 0, "tlsv1", NULL);
mosquitto_username_pw_set(client, "hubname.azure-devices.net/deviceName", "SharedAccessSignature sr=hubname.azure-devices.net%2Fdevices%2FdeviceName&sig=sigValue&se=1553087157");
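Two quick sanity checks I would suggest here (my own suggestion; the library path is an assumption, adjust it to your install prefix):
# on the target device: confirm the cross-compiled libmosquitto is actually linked against OpenSSL
ldd /usr/local/lib/libmosquitto.so.1 | grep -i ssl
# confirm the CA directory can validate the IoT Hub endpoint (hostname is the placeholder from above)
openssl s_client -connect hubname.azure-devices.net:8883 -CApath /etc/ssl/certs </dev/null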

Related

Where do I find the ca certificates for mosquitto_sub and pub?

From the article mosquitto_sub with TLS enabled, I understand that you need to provide a --capath or --cafile option to mosquitto_sub (and mosquitto_pub), but I am having trouble figuring out where those files/paths come from.
Back in October I was able to run mosquitto_sub -h mymosquitto.com -p 8883 -v -t 'jim/#' -u <u> -P <pw> --capath ssl/certs from my desktop computer (running Mint 19). That no longer works. I did an apt install ca-certificates and found the .crt files in /usr/share/ca-certificates/mozilla/ but when I used that path, it still gave me: Error: A TLS error occurred.
This is an Ubuntu 18.04 server using Let's Encrypt. I tried to point --cafile at the chain.pem file that comes from my broker configuration:
allow_anonymous false
password_file /etc/mosquitto/pwfile
listener 1883
listener 8883
certfile /etc/letsencrypt/live/mymosquitto.com/cert.pem
cafile /etc/letsencrypt/live/mymosquitto.com/chain.pem
keyfile /etc/letsencrypt/live/mymosquitto.com/privkey.pem
But that didn't work either. Can someone please help me understand what I should be doing?
From the mosquitto_sub man page:
--capath
Define the path to a directory containing PEM encoded CA certificates that are trusted. Used to enable SSL communication.
For --capath to work correctly, the certificate files must have ".crt" as the file ending and you must run "openssl rehash [path to capath]" each time you add/remove a certificate.
If you want to use a directory of certs, you will have to make sure the openssl rehash command mentioned above has been run on that directory.
If you want to use a file from Let's Encrypt, use --cafile with the fullchain.pem file.
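For example, with the Let's Encrypt path from the question (the ~/certs directory in the second variant is my own placeholder):
# option 1: a single CA file, using the full chain
mosquitto_sub -h mymosquitto.com -p 8883 -v -t 'jim/#' -u <u> -P <pw> --cafile /etc/letsencrypt/live/mymosquitto.com/fullchain.pem
# option 2: a directory of certs, rehashed first
openssl rehash ~/certs
mosquitto_sub -h mymosquitto.com -p 8883 -v -t 'jim/#' -u <u> -P <pw> --capath ~/certs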
I have rethought my situation. Since my certs get regenerated every 3 months or so, I would have to redo my apps with the new files each time, so I decided to just go back to rolling my own. I did that using this site: http://www.steves-internet-guide.com/mosquitto-tls/ and I'm back to where I was in October. Thanks to hardillb for the advice.
Jim.

Adding a certificate to the trusted store does not work via macOS "security add-trusted-cert"

I have a Safari web page connecting to a secured WebSocket server (written in C# .NET Core 3.0).
I add the server certificate to the trusted store by running this command (the same certificate I put on my WebSocket endpoint):
security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain certificate.crt
Everything seems valid (I see a blue cross next to my certificate in the Keychain Access application),
but when I connect from Safari (on Catalina or Mojave) I get an error:
OSStatus Error -9807. Invalid certificate chain
Also, when I import the certificate manually via the Keychain Access GUI, there is no error and everything works.
Can anyone explain whether there is any difference between a "security add-trusted-cert" import and a manual GUI import?
Maybe my add-trusted-cert command is wrong and I need some additional params?
This syntax works perfectly for me on macOS Catalina; however, it must be run with elevated privileges (sudo or similar).
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain <MY_CERTIFICATE_FILE.pem>
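To double-check the trust evaluation from the command line, something like this should work (my suggestion, not part of the original answer; -p ssl evaluates the certificate against the SSL policy):
security verify-cert -c certificate.crt -p ssl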

Setting up a Docker registry with a Let's Encrypt certificate

I'm setting up a Docker registry as described here:
https://docs.docker.com/registry/deploying/
I generated a certificate for docker.mydomain.com and started the registry container with their command on my server:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
I've started the container and pointed it at the certificates I obtained using Let's Encrypt (https://letsencrypt.org/).
Now, when I browse to https://docker.mydomain.com:5000/v2/ I get a page with just '{}' and a green lock (a successful secure page request).
But when I try to do a docker login docker.mydomain.com:5000 from a different server, I see an error in the registry container:
TLS handshake error from xxx.xxx.xxx.xxx:51773: remote error: bad certificate
I've tried some different variations in setting up the certificates, and gotten errors like:
remote error: unknown certificate authority
and
tls: first record does not look like a TLS handshake
What am I missing?
Docker seems not to support SNI: https://github.com/docker/docker/issues/9969
Update: Docker should now support SNI.
This means that during the TLS handshake the Docker client does not specify the domain name, so your server presents its default certificate.
The solution could be to change your server's default certificate to the one that is valid for the Docker domain.
To check whether your (sub-)domain works with clients that are not SNI-aware, you can use ssllabs.com/ssltest: if you DON'T see the message "This site works only in browsers with SNI support.", then it will work.
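You can also compare which certificate the server presents with and without SNI using openssl s_client (the hostname below is the question's placeholder):
# without SNI: the server's default certificate
openssl s_client -connect docker.mydomain.com:5000 </dev/null 2>/dev/null | openssl x509 -noout -subject
# with SNI: the certificate selected for the requested name
openssl s_client -connect docker.mydomain.com:5000 -servername docker.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject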

Docker Registry incorrectly reports an expired CA cert

I followed the Docker Registry installation docs precisely, and have a registry running on a remote Ubuntu VM. On that VM, the Docker container is running with the following command:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/auth:/auth \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
registry:2
On the remote VM, I have the following directory structure:
/home/myuser/
    certs/
        registry.crt
        registry.key
/etc/docker/certs.d/myregistry.example.com:5000/
    ca.crt
    ca.key
The ca.crt is the exact same cert as ~/certs/registry.crt (just renamed); the same goes for ca.key and registry.key. I created the ca.* files per a suggestion from the error output you'll see below.
I am almost 100% sure the CA cert is still valid, although any help ruling that out (e.g. how can I actually tell?) would be appreciated. When I start the container and look at the Docker logs, I don't see any errors.
I then attempt to login from my local laptop (Mac):
docker login myregistry.example.com:5000
It queries me for my username, password and email (although I don't recall ever specifying an email when setting up Basic Auth). After entering these correctly (I have checked and double checked...) I get the following error:
myuser@mymachine:~/tmp$ docker login myregistry.example.com:5000
Username: my_ciuser
Password:
Email: myuser@example.com
Error response from daemon: invalid registry endpoint https://myregistry.example.com:5000/v0/:
unable to ping registry endpoint https://myregistry.example.com:5000/v0/ v2 ping attempt failed with error:
Get https://myregistry.example.com:5000/v2/: x509: certificate has expired or is not yet valid
v1 ping attempt failed with error: Get https://myregistry.example.com:5000/v1/_ping: x509:
certificate has expired or is not yet valid. If this private registry supports only HTTP or
HTTPS with an unknown CA certificate, please add
`--insecure-registry myregistry.example.com:5000` to the daemon's
arguments. In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
So from my perspective, I guess the following are possible:
The CA cert is invalid (if so, why?!?)
The CA cert is an intermediary cert (if so, how can I tell?)
The CA cert is expired (if so, how do I tell?)
This is a bad error message, and some other facet of the registry is not configured properly (if so, how do I troubleshoot further?)
Perhaps my cert is not located in the correct place on the server, or doesn't have the right permissions set (if so, where does the cert need to be?)
Something else that I would never expect in a million years
Any ideas/thoughts?
As said in the error message:
... In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
where myregistry.example.com:5000 is your CN with port.
You should copy your ca.crt onto each Docker daemon host that will connect to your Docker Registry and put it in this folder: /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
After this action you need to restart the Docker daemon, for example via sudo service docker stop && sudo service docker start on CentOS (or the equivalent procedure on your OS).
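To rule out actual expiry (one of the possibilities listed in the question), you can also print the certificate's validity window directly, using the path from the question:
openssl x509 -noout -dates -in /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
# "not yet valid" errors can also come from clock skew, so check the client's clock too
date -u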
I had a similar error. I fixed it by adding my private registry to the insecure-registries list in the Docker daemon configuration (in Docker Desktop, this is in the daemon settings).
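For reference, on a Linux host that list lives in /etc/docker/daemon.json; a minimal sketch, reusing the hostname from the question above:
{
  "insecure-registries": ["myregistry.example.com:5000"]
}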

Amazon AWS EC2 Instance - Can't connect with SSH

This shouldn't be this hard. I cannot connect to my new AWS EC2 instance via SSH clients. I am connecting from a Win 7 box.
Instance OS: Debian 6
AMI: debian-squeeze-i386-20121119-e4554303-3a9d-412e-9604-eae67dde7b76-ami-1977f070.1(ami-a121a6c8)
User: tried root and also ec2-user
Using .pem keypair that AWS generated and I downloaded
Confirmed security group and Key Pair Name on instance
SSH port 22 is OPEN: Nmap says so and Telnet gets a welcome reply
Using 3 different clients: all clients connect ok
PuTTY replies: Server refused our key
MindTerm Java browser add-in replies: Authentication failed, permission denied
Bitvise SSH replies: Attempting 'publickey' auth; auth failed;
Rebooted instance, wash, rinse, repeat...
REBUILT new instance and new keypair, wash, rinse, repeat...
Connecting isn't the issue. Why would the instance not accept the .pem file as the password? Is there an additional step I am missing? I followed EVERY frigging guide I could Google. AWS support is a joke. stackoverflow to the rescue...
TIA.
According to the Debian wiki, which has documentation on the AMI you are using, the username you need to use to log in is 'admin'.
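So, with a key file and hostname standing in for yours (both placeholders), the login would look like:
ssh -i the-keypair.pem admin@ec2-xx-xx-xx-xx.compute-1.amazonaws.com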
I have had many issues with connecting to EC2 via ssh.
ssh -i the-keypair-filename root@yourdomain.com
- The keypair file must be in the current directory (or pass its full path).
- I just used the terminal to connect.
Make sure you generate or assign the keypair when launching the instance.
Also you can verify the keypair you have set in the AWS Management Console, this is done by selecting the running instance and then looking for "Key Pair Name:".
I hope this is helpful.
My problem was that I hadn't added a volume that was expected in the fstab file, so the server didn't start fully and the sshd daemon wasn't running.
Check with:
telnet HOST 22
Check the server logs to make sure it starts properly before you waste lots of time like I did.
Amazon Linux AMIs that use the ec2-user login are listed at the bottom of this page:
http://aws.amazon.com/amazon-linux-ami/
Check that you are using one of those if trying to use ec2-user, or check the documentation for the AMI you are using.
Teri
Try using the "admin" username and ignore the username suggested by Amazon.
I had a similar problem and solved it with the following approach.
1) Edited the knife.rb file in my chef folder, i.e. C:\Users\Administrator\chef-starter\chef-repo\.chef\knife.rb, as below:
knife[:aws_access_key_id] = "xxxxxxxxxxxxxxxxxxxx"
knife[:aws_secret_access_key] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
knife[:region] = 'ap-southeast-1'
knife[:aws_ssh_key_id] = "ChefUser"
knife[:ssh_user]="ec2-user"
2) In the command prompt, issued the command to create an EC2 server:
knife ec2 server create -r "role[webserver]" --image ami-abcd1234 --flavor t1.micro -G ChefClient -x root -N server01 -i H:\Chef-files\ChefUser.pem
Note that, even though I had given all the details in the knife.rb file, I had to give the .pem file path on the command line through the -i option. That solved my problem.
See if this solution helps you.
Cheers,
Chandan
Logging in as "ubuntu" worked for me:
ssh -i private_key.pem ubuntu@myubuntuserver
Hope this helps
--Erin