qpid proton ssl_client_options with ssl_certificate and default trust db - ssl

I am trying to make a secure connection with Qpid Proton for C++. The server requires client certificate authentication, which I can do with the ssl_certificate and ssl_client_options classes.
The problem is that I don't know how to use a client certificate for authentication and, at the same time, use the system's default certificate trust database to verify the server's certificate.
As the reference documentation (https://qpid.apache.org/releases/qpid-proton-0.37.0/proton/cpp/api/classproton_1_1ssl__client__options.html) shows, I can configure ssl_client_options to use a client certificate and a custom trust database, but I cannot set just the client certificate and keep the default certificate trust database.
The only constructor that accepts a certificate also requires a certificate trust database:
ssl_client_options (const ssl_certificate &, const std::string &trust_db, enum ssl::verify_mode=ssl::VERIFY_PEER_NAME)
There are other constructors where the default certificate trust database is used, but they do not accept a client certificate. These are all the constructors from the reference:
Create SSL client with defaults (use system certificate trust database and require name verification)
ssl_client_options ()
Create SSL client with unusual verification policy (but default certificate trust database)
ssl_client_options (enum ssl::verify_mode)
Create SSL client specifying the certificate trust database.
ssl_client_options (const std::string &trust_db, enum ssl::verify_mode=ssl::VERIFY_PEER_NAME)
Create SSL client with a client certificate.
ssl_client_options (const ssl_certificate &, const std::string &trust_db, enum ssl::verify_mode=ssl::VERIFY_PEER_NAME)
And a copy constructor:
ssl_client_options (const ssl_client_options &)
I will probably look into the source code to see how the default certificate database is defined and try something to reach my goal, but that is not a good solution if it changes in a future version of the Qpid Proton library.
I can make a connection if I pass proton::ssl::ANONYMOUS_PEER as the last parameter. However, the server's identity verification is lost that way, which is unacceptable.

The only option I have found so far was to add an ssl_client_options constructor that takes a certificate and does not require a certificate trust database to the Qpid Proton library source code. The change itself is very simple, and I will try to contribute it to the Qpid Proton project. That is actually not so simple, because I have to install a bunch of software in order to compile Qpid Proton and run all the required tests. Then, hopefully, the change will eventually get into a released version of Qpid Proton, and into all major Linux distributions. From what I have seen so far, that can take a very long time. :(
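In the meantime, a workaround with the existing API is to pass the platform's CA location explicitly as trust_db. A minimal sketch (the broker URL, the file names, and the /etc/ssl/certs path are placeholders and vary per platform):

#include <proton/connection_options.hpp>
#include <proton/container.hpp>
#include <proton/ssl.hpp>

void configure_tls(proton::container& cont) {
    // Client certificate and matching private key for client authentication.
    proton::ssl_certificate client_cert("client-cert.pem", "client-key.pem");

    // No overload takes a certificate and keeps the built-in default trust
    // database, so name the system CA location explicitly as trust_db.
    proton::ssl_client_options ssl_opts(client_cert, "/etc/ssl/certs",
                                        proton::ssl::VERIFY_PEER_NAME);

    proton::connection_options opts;
    opts.ssl_client_options(ssl_opts);
    cont.connect("amqps://broker.example.com:5671", opts);
}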

Related

Can I use my own Certificate Authority for HTTPS over LAN?

I have a server and a few clients, all running in different Docker containers. Users access the client by entering localhost:3000 in their browser (on the machine where the client container is running).
All the containers run on the same LAN. I want to use HTTPS.
Can I sign a public/private key pair using my own CA, then load the CA's public key into the browser?
I want to use the normal flow for public domains, but internally with my own CA.
Or should I look for another solution?
Meta: since you've now disclosed nodejs, that makes it at least borderline for topicality.
In general, the way PKIX (as used in SSL/TLS, including HTTPS) works is that the server must have a private key and a matching certificate; this is the same whether you use a public CA or your own (as you want). The server should also have any intermediate or 'chain' cert(s) needed to verify its cert. A public CA will always need such chain cert(s), because CABforum rules (codifying common best practice) prohibit issuing 'subscriber' (EE) certs directly from a root. With your own CA it is up to you: you can choose to use intermediate(s) or not, although as I say it is considered best practice to use them and to keep the root private key 'offline'. In cryptography, that means not on any system that communicates with anybody (such as, in this case, servers that request certificates), thus eliminating one avenue of attack; instead it lives on a specialized device that is 'airgapped' (not connected, or even able to be connected, to any network) and kept in a locked vault, possibly with 'tamper protection', a polite name for self-destruct. As a known example of the rigor needed to secure something as sensitive as the root key of an important CA, compare Stuxnet.
The client(s) do not need, and should not be configured with, the server cert unless you want to do pinning; they do need the CA root cert. Most clients, and particularly browsers, already have many/most/all public CA root certs built in, so using a cert from such a CA does not require any action on the client(s); OTOH using your own CA requires adding your CA cert to the client(s). Chrome on Windows uses the Microsoft-supplied (Windows) store; you can add to it explicitly (using the GUI dialog, the certutil program, or PowerShell), although in domain-managed environments (e.g. businesses) it is also popular to 'push' a CA cert (or certs) using GPO. Firefox uses its own truststore, which you must add to explicitly.
In nodejs you configure the private key, the server cert, and if needed the chain cert(s), as documented.
PS: note that you generally should, and for Chrome (and the new Edge, which is actually Chromium) must, have the SubjectAlternativeName (SAN) extension in the server cert specify its domain name(s), or optionally IP address(es), NOT (or not only) the CommonName (CN) attribute, as you will find in many outdated and/or incompetent instructions and tutorials on the Web. The OpenSSL command line makes it easy to do CommonName but not quite so easy to do SAN; there are many Qs on several Stacks about this. Any public CA after about 2010 handles SAN automatically.
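For example, with OpenSSL 1.1.1 or later a self-signed server cert carrying a SAN can be produced in one command (the names, key size, and validity here are just placeholders):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=myapp.internal" \
  -addext "subjectAltName=DNS:myapp.internal,IP:192.168.1.10"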

simple Akka ssl encryption

There are several questions on Stack Overflow regarding Akka, SSL, and certificate management to enable secure (encrypted) peer-to-peer communication between Akka actors.
The Akka documentation on remoting (http://doc.akka.io/docs/akka/current/scala/remoting.html)
points readers to this resource as an example of how to Generate X.509 Certificates.
http://typesafehub.github.io/ssl-config/CertificateGeneration.html#generating-a-server-ca
Since the actors are running on internal servers, generating a server CA for example.com (or really any DNS name) seems irrelevant here.
Most servers (for example EC2 instances running on Amazon Web Services) will be run in a VPC and the initial Akka remotes will be private IP addresses like
remote = "akka.tcp://sampleActorSystem#172.16.0.10:2553"
My understanding is that it should be possible to create a self-signed certificate and generate a trust store that all peers share.
As more Akka nodes are brought online, they should (I assume) be able to use the same self-signed certificate and trust store used by all other peers. I also assume there is no need to trust every peer with an ever-growing list of certificates, even without a CA, since the trust store would validate that one certificate and avoid man-in-the-middle attacks.
The ideal solution, and hope, is that it is possible to generate a single self-signed certificate without the CA steps, plus a single trust store file, and share them among any combination of Akka remotes (both the client calling the remote and the remote itself, i.e. all peers).
There must be a simple-to-follow process to generate certificates for simple internal encryption and client authentication (just trust all peers the same).
Question: can these all be the same files on every peer, which will ensure they are talking to trusted clients and enable encryption?
key-store = "/example/path/to/mykeystore.jks"
trust-store = "/example/path/to/mytruststore.jks"
Question: Are the X.509 instructions linked above overkill? Is there a simple self-signed / trust store approach without the CA steps, specifically for internal IP addresses only (no DNS), and without an ever-growing list of IP addresses in the cert, since servers could autoscale up and down?
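A minimal sketch of that approach with keytool (the alias, passwords, and distinguished name below are placeholders): generate one self-signed key pair, export its certificate, and import it into a truststore that every peer shares.

keytool -genkeypair -alias akka-peer -keyalg RSA -keysize 2048 -validity 3650 \
  -dname "CN=akka-internal" \
  -keystore mykeystore.jks -storepass changeme -keypass changeme
keytool -exportcert -alias akka-peer -keystore mykeystore.jks -storepass changeme \
  -file akka-peer.cer
keytool -importcert -alias akka-peer -file akka-peer.cer \
  -keystore mytruststore.jks -storepass changeme -noprompt

Every node would then get a copy of the same mykeystore.jks and mytruststore.jks.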
First, I have to admit that I do not know Akka, but I can give you some guidelines on identification with X.509 certificates in the SSL protocol.
The Akka server configuration requires an SSL certificate bound to a hostname
You will need a server with a DNS hostname assigned, for hostname verification. In this example, we assume the hostname is example.com.
An SSL certificate can be bound to a DNS name or to an IP address (less common). For the client's verification to succeed, the certificate must match the hostname or IP of the server.
Akka requires a certificate for each server, issued by a common CA:
CA
- server1: server1.yourdomain.com (or IP1)
- server2: server2.yourdomain.com (or IP2)
To simplify server deployment, you can use a wildcard certificate, *.yourdomain.com:
CA
- server1: *.yourdomain.com
- server2: *.yourdomain.com
On the client side you need to configure a truststore (JKS) containing the CA certificate. The client will then trust any certificate issued by this CA.
In the scheme you have described, I think you do not need the keystore on the client side; it is only needed when you also want to identify the client with a certificate. The SSL encrypted channel will be established in both cases.
If you do not have a domain name like yourdomain.com and you want to use internal IPs, I suggest issuing a certificate for each server and binding it to the server's IP address.
Depending on how Akka verifies the server certificate, it might be possible to use a single self-signed certificate for all servers. Akka probably delegates trust configuration to the JVM defaults. If you include the self-signed certificate itself in the truststore (rather than a CA), the SSL socket factory will trust connections presenting that certificate, even if it has expired or the server's hostname does not match the certificate. I do not recommend it.
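For completeness, the classic Akka remoting settings that point at such a keystore and truststore look roughly like this (a sketch; the paths, passwords, protocol, and cipher list are placeholders to be checked against the remoting documentation linked in the question):

akka.remote {
  enabled-transports = ["akka.remote.netty.ssl"]
  netty.ssl.security {
    key-store = "/example/path/to/mykeystore.jks"
    key-store-password = "changeme"
    key-password = "changeme"
    trust-store = "/example/path/to/mytruststore.jks"
    trust-store-password = "changeme"
    protocol = "TLSv1.2"
    enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
  }
}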

How does the SIM800 get an SSL certificate?

The SIM800 supports the SSL protocol. The AT command "AT+CIPSSL" sets TCP to use SSL.
In "sim800_series_ssl_application_note_v1.01.pdf" it is noted that: "Module will automatic begin SSL certificate after TCP connected."
My problem: What is the exact meaning of "begin SSL certificate"? What does the SIM800 do exactly? Does the SIM800 get the SSL certificate from the website? Where does the SIM800 save the SSL certificate?
As far as I know, the SIM800 has some certificates built in, and when you use a TCP+SSL or HTTP+SSL connection it will automatically use those certificates.
If those certificates are not OK for you, you will need to use an SD card, save the certificates you want there, and use the command AT+SSLSETCERT to set the certificate you saved on your SD card. Here you can find how to use the File System.
Usually the certificates that come with the module are enough and you won't need this. But, for example, they didn't work for me when I tried to communicate with Azure via MQTT; I had to encrypt the data myself using the wolfSSL library and send it over TCP without SSL.
Note: Not all SIM800 modules have SD card support.
There is very little information about the SIM800 and SSL certificates on the web, and like you I have a lot of questions about it.
About your questions on how the SIM800 gets a certificate and where it saves it: it seems, according to sim800_series_ssl_application_note_v1.01.pdf, that you can create (defining your own path), write, and import an SSL certificate on your own with the AT+FSCREATE, AT+FSWRITE and AT+SSLSETCERT commands. An example is provided in paragraph 3.10.
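As a rough sketch of that sequence (the file name, size, and parameter order are my assumptions; check them against paragraph 3.10 of the application note before relying on them):

AT+FSCREATE=C:\USER\ca.crt
AT+FSWRITE=C:\USER\ca.crt,0,1200,10    (then send the 1200 bytes of the certificate)
AT+SSLSETCERT=C:\USER\ca.crt
AT+CIPSSL=1
AT+CIPSTART="TCP","example.com",443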
I'm sorry, I can't answer your other questions.
Anyway, if you get further information about the SIM800 and SSL, I would be grateful if you shared it with me.
When you use AT+CIPSSL you tell the SIM module to use SSL on top of the TCP connection. When you then use the +CIPSTART command:
The SIM module requests the TCP connection with the server and starts the SSL handshake.
The server sends its SSL certificate.
The authenticity of that certificate is checked against the certificate authority certificate stored inside the SIM module, which must be cryptographically linked to the server certificate.
If the authenticity of the certificate cannot be confirmed, the SIM module will close the connection, unless you issue AT+SSLOPT=0,0 (which forces the SIM module to ignore invalid certificate authentication) before the AT+CIPSSL command.
//Key exchange
The SIM module then generates a pre-master secret, encrypts it with the server's public key (which is part of the server certificate it just received), and sends it to the server.
Both sides derive the session keys from that secret, so the key exchange is complete and the traffic that follows is encrypted.
The SIM module currently doesn't support client authentication, which means the server cannot authenticate the client with a certificate. That means there must be some other means of authentication (for example, in MQTT that can be a username and password that only the client knows).
If you want your module to be able to authenticate the server, you will need to create a self-signed certificate for the server and a certificate authority certificate (for the SIM module) that is cryptographically linked to that self-signed certificate, and upload them to the server and the SIM module respectively (through the AT+SSLSETCERT command, from the SD card).
If you only want to encrypt the data traffic, you can ignore an invalid certificate (AT+SSLOPT=0,0), as you will receive the server's public key regardless. But if you want to be sure about the server's authenticity, you will need to upload the right certificate to the module.

How can I set up an FTPS server on my AWS EC2 Ubuntu instance?

1) I am trying to set up an FTPS server on my EC2 Ubuntu instance, but I can only find tutorials for setting up an SFTP server.
2) From what I understand, the SSL certificate only applies to the server. When a user connects to my server with FTPS, should he/she upload a certificate or a public/private key file, similar to SFTP? Or are hostname, port, username, and password sufficient?
You might have better luck searching for "ftp over tls" which is another name for ftps. TLS is the successor protocol to SSL, though often still referred to casually as "SSL."
I use proftpd, and I mention that primarily because their docs discuss some theory and troubleshooting techniques using openssl s_client -connect, which you will find quite handy regardless of which server you deploy.
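For example, an explicit FTPS (AUTH TLS) listener on the standard control port can be probed with something along these lines (the host name is a placeholder); for implicit FTPS you would connect to port 990 and drop the -starttls option:

openssl s_client -connect ftp.example.com:21 -starttls ftp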
The SSL cert is only required at the server side, and if you happen to have a web server "wildcard" cert, you may be able to reuse that, and avoid purchasing a new one.
Client certs are optional; username and password will suffice in many applications. Properly configured, authentication will only happen over encrypted connections. (Don't configure the server to also operate in cleartext mode on the standard ftp port; inevitably you'll find a client who thinks they are using TLS when they are not).
If client certs are required, it is because of your policy rather than for technical reasons. You'll find that SSL client certs operate differently from SSH: typically the client certs are signed by a certificate authority that you create, and then you trust them because they are signed by your certificate authority, as opposed to trusting possession of their public key, as in SSH.

SSL client authentication returning Bad Certificate error

I was trying to connect to my custom SSL server written in C++, which has client authentication enabled. Wireshark shows a "Bad certificate" error. On the server side the error returned was
14560:error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned:s3_srvr.c:2619:
I used the following code to force the server to request a client certificate:
SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
SSL_CTX_set_verify_depth(ctx, 1);
I could see the client returning a certificate in Wireshark.
Which function should be used to set the public key used for verifying the client certificate at the server side?
From the error messages it looks like your client does not present a certificate to the server, even though you explicitly requested (in the server code) that the client present one:
SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
What you probably need is to tell your client code to use a certificate (along with its private key):
SSL_CTX_use_certificate_chain_file(ctx, pcszCertPath);
SSL_CTX_use_PrivateKey_file(ctx, pcszPrivKeyPath,SSL_FILETYPE_PEM);
I hope that helps.
Also make sure that your server trusts the certificate chain the client uses (i.e. that it trusts the same CAs). If this is a problem, let me know and I'll help you do that.
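For the server side, a minimal sketch of pointing verification at the CA that issued the client certificates (the file name is a placeholder, and error handling is trimmed):

/* Trust this CA when verifying the client certificate... */
if (SSL_CTX_load_verify_locations(ctx, "client_ca.pem", NULL) != 1)
    ERR_print_errors_fp(stderr);
/* ...and advertise it in the CertificateRequest so the client knows
   which certificate it is expected to send. */
SSL_CTX_set_client_CA_list(ctx, SSL_load_client_CA_file("client_ca.pem"));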
With Wireshark you can find out whether the server ever requested a certificate from the client; the handshake message is "CertificateRequest".
I was getting a similar error (only the line number was different):
140671281543104:error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned:s3_srvr.c:3292:
I had generated self-signed certificates using the procedure mentioned in https://help.ubuntu.com/community/OpenSSL.
After juggling with the error for one day, I found that it was because the self-generated CA was not in the trust chain of the machine I was using.
To add the CA to the trust chain on RHEL 7, one can follow the procedure below:
To add a certificate in the simple PEM or DER file formats to the list of CAs trusted on the system: copy it to the /etc/pki/ca-trust/source/anchors/ subdirectory, and run the update-ca-trust command.
If your certificate is in the extended BEGIN TRUSTED file format, then place it into the main source/ directory instead.
I think the above procedure can be followed for Fedora too.
If the update-ca-trust command is not available, it might be useful to explore commands like update-ca-certificates.
Hope this will be useful to someone.
Difference Between SSLCACertificateFile and SSLCertificateChainFile
SSLCertificateChainFile is generally the correct option to choose, as it has the least impact; it causes the listed file to be sent along with the certificate to any clients that connect.
To resolve the issue, provide all of the root CA certificates in the following file:
SSLCACertificateFile (hereafter "CACert") does everything SSLCertificateChainFile does (hereafter "Chain"), and additionally permits the use of the cert in question to sign client certificates. This sort of authentication is quite rare (at least for the moment), and if you aren't using it, there's IMHO no reason to augment its functionality by using CACert instead of Chain. On the flipside, one could argue that there's no harm in the additional functionality, and CACert covers all cases. Both arguments are valid.
Needless to say, if you ask the cert vendor, they'll always push for CACert over Chain, since it gives them another thing (client certs) that they can potentially sell you down the line. ;)
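For context, a typical Apache (mod_ssl) fragment using these directives might look like this (the paths are placeholders; the last two lines are only needed if you actually require client-certificate authentication):

SSLEngine on
SSLCertificateFile      /etc/ssl/certs/server.crt
SSLCertificateKeyFile   /etc/ssl/private/server.key
SSLCertificateChainFile /etc/ssl/certs/intermediate.crt
SSLCACertificateFile    /etc/ssl/certs/client-ca.crt
SSLVerifyClient         require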
Modify the server code with the below line:
Server code:
SSL_CTX_load_verify_locations(ctx,"client_certificate_ca.pem", NULL);
Here client_certificate_ca.pem is the CA file that was used to issue the client certificates.
Verify your client certificate against client_certificate_ca.pem using the command below:
openssl verify -CAfile client_certificate_ca.pem client_certificate.pem
Client code:
/* Load the client certificate that will be presented to the server. */
if (SSL_CTX_use_certificate_file(ctx, "client_certificate.pem", SSL_FILETYPE_PEM) <= 0)
{
    ERR_print_errors_fp(stderr);
    exit(EXIT_FAILURE);
}
/* Load the matching private key. */
if (SSL_CTX_use_PrivateKey_file(ctx, "client_certificate.ky", SSL_FILETYPE_PEM) <= 0)
{
    ERR_print_errors_fp(stderr);
    exit(EXIT_FAILURE);
}
/* Confirm that the private key matches the certificate. */
if (!SSL_CTX_check_private_key(ctx))
{
    fprintf(stderr, "Private key does not match the certificate\n");
    exit(EXIT_FAILURE);
}