I am currently developing using DDS with the security plugins enabled.
When the application starts, it looks for the paths to the CA certificate, the local certificate and the private key, and loads them into memory for future use.
Certificates containing the public keys are not sensitive, as they are usually sent in the clear and checked using the CA certificate. So an attacker has no need to get access to them. Is that correct?
However, on an Ubuntu filesystem, how can I protect the private key? The only way I see is to make the key readable only by the specific user that runs the application. But because of privilege escalation, this seems insecure.
Is there a secure way to protect private keys on a filesystem?
About the permissions_ca and Governance/Permissions documents: if those are updated by an attacker (who would create their own CA and sign new Governance/Permissions documents), could an application end up with more permissions? In other words, should those documents also be secured on the filesystem?
Most of your questions are not specific to DDS Security, but are about general Public Key Infrastructure (PKI) mechanisms as leveraged by DDS Security.
Certificates containing the public keys are not sensitive, as they are
usually sent in the clear and checked using the CA certificate. So an
attacker has no need to get access to them. Is that correct?
Yes, that is correct. The built-in plugins as defined by the DDS Security specification use a PKI. The public key certificate does not normally contain any confidential information.
However, on a Ubuntu filesystem, how can I protect the private key?
Using "traditional" Unix permissions to allow only the owner of the file to access it is common practice. For example, SSH on Ubuntu by default stores private keys that way, in ~/.ssh. Additionally, the specification allows for encryption of the private key using a passphrase. That too is common practice.
Whether this is good enough for your scenario depends on your system's requirements. It is possible to integrate with existing, stronger key storage solutions like Windows certificate stores or macOS keychains by implementing customized security plugins. The pluggable architecture as defined in the spec was intended to allow for that, but the actual availability of such solutions depends on the DDS product that you are using.
About the permissions_ca and Governance/Permissions documents, if
those are updated by an attacker (who would create their own CA and
sign new Governance/Permissions documents), could an application end
up with more permissions?
Both the Governance and Permissions documents have to be signed by a signing authority. Tampering with those files would break the signature verification and therefore would be detected by other Participants in the Domain.
All participants in the secured DDS Domain need to trust the same signing authority to make this mechanism work. For an attacker to successfully modify a Governance or Permissions document, they would have to have access to the private keys of the signing authority. Again, this is a common technique in public key infrastructures, similar to public key certificate signing.
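To illustrate why tampering is detected, here is a simplified sketch of a detached-signature check, assuming Python with pyca/cryptography; the XML fragment and the freshly generated key are made up for the example, and real DDS implementations distribute these documents as S/MIME-signed files rather than this raw form:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the signing authority's key pair (normally the permissions CA).
authority_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

governance = b"<dds><domain_rule>...</domain_rule></dds>"
signature = authority_key.sign(governance, padding.PKCS1v15(), hashes.SHA256())

# A participant verifies the document against the trusted authority's public key.
tampered = governance.replace(b"domain_rule", b"domain_rule_with_more_rights")
try:
    authority_key.public_key().verify(
        signature, tampered, padding.PKCS1v15(), hashes.SHA256()
    )
except InvalidSignature:
    print("tampered Governance document rejected")
```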
In spite of the tamper protection, it still makes sense to protect those files. The actual result of tampering or deletion of those files would be a denial of service, which is harmful as well.
Related
I need to authenticate a server using a certificate. I used OpenSSL to generate a certificate, and authentication with the certificate worked after enabling client certificate authentication in IIS.
However, when I exported the certificate from the device and installed it on another device, that device was also able to authenticate to the server. Is there any way to link or generate a certificate so that it only works on one specific machine?
Yes - keep the private key private.
Moving a certificate by itself to another client won't let you authenticate as the owner of that certificate. You would have to move both the certificate and its corresponding private key.
There are generally two ways you can stop the private key being copied:
Use administrative controls to ensure nobody in your organisation copies the keys. This is usually in the form of an agreement between the certificate issuer and the entity named in the certificate, to the effect that "you shall not copy the private key!!". As you can imagine, depending on the scenario, this might not be that enforceable.
If the certificate is certifying a device, generate and store the private key in a hardware device that is a permanent fixture in that client device. A Trusted Platform Module is an example of a device fitted in most modern end-user devices for this purpose.
If the certificate is certifying a person, generate and store the private key in a hardware device that is issued to that person. A smart-card is an example of such a device. You would probably also need administrative controls here to ensure that the user doesn't share their card with others and that they keep any PINs or other authentication data private.
Note that attempting to certify something like the DNS name of the client device as a unique identifier doesn't work, as DNS, MAC addresses etc. can be spoofed.
I'm adding SSL support (currently pushing forward with OpenSSL) to an existing application. I've never done anything with cryptology before and after reading numerous articles and watching videos I'm still a little confused as to how I can implement it for my specific situation.
Our software is client/server, and the end-user purchases both and will install it on their premises.
My first bit of confusion is regarding certificates and private keys, and how to manage these. Should I have one certificate that gets installed along with the app? Should each end-user have their own certificate generated? What about private keys? Do I bake the private key into the server binary? Or should there be a file with the private key?
I'm sure this is a solved problem, but I'm not quite sure where to look, or what to search for.
Thanks for any help and advice.
Adding OpenSSL to an existing app
If all you need is an example of an SSL/TLS client, have a look at the OpenSSL wiki and its TLS Client example.
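If a higher-level illustration helps, roughly the same connect-verify-request flow looks like this in Python's standard ssl module (the host name and request are placeholders, not part of the OpenSSL example):

```python
import socket
import ssl

context = ssl.create_default_context()  # trusts the system CA store

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # The certificate chain and host name were verified during the handshake.
        print("negotiated:", tls.version())
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(4096).decode(errors="replace"))
```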
My first bit of confusion is regarding certificates and private keys, and how to manage these.
Yes, key management and distribution is the hardest problem in crypto.
Public CAs have legally binding documents covering these practices. They are called Certification Practice Statements (CPS). You can have a lot of fun with them because the company lawyers tell you the things you don't want to hear (or that the marketing department refuses to tell you).
For example, here's an excerpt from Apple Inc. Certification Authority Certification Practice Statement:
2.4.2. CA disclaimers of warranties
To the extent permitted by applicable law, Subscriber agreements,
if applicable, disclaim warranties from Apple, including any
warranty of merchantability or fitness for a particular purpose.
2.4.3. CA limitations of liability
To the extent permitted by applicable law, Subscriber agreements,
if applicable, shall limit liability on the part of Apple and shall
exclude liability for indirect, special, incidental, and
consequential damages.
So, Apple is selling you a product with no warranty and accepts no liability!!! And they want you to trust them and give them money... what a racket! And it's not just Apple - other CAs have equally obscene CPSes.
Should I have one certificate that gets installed along with the app?
It depends. If you are running your own PKI (i.e., you are the CA and control the root certificate), then distribute your root X509 certificate with your application and nothing else. There's no need to trust any other CAs, like Verisign or Startcom.
If you are using someone else's PKI (or the Internet's PKI specified in RFC 5280), then distribute only the root X509 certificate needed to validate the chain. In this case, you will distribute one CA's root X509 certificate for validation. You could, however, end up trusting just about any certificate signed by that particular CA (and that's likely to be in the tens of thousands if you are not careful).
If you don't know in advance, then you have to do like the browsers and pick a bunch of CAs to trust, and carry their root certificates around with your application. You can grab a list of them from Mozilla, for example. You could, however, end up trusting just about any certificate signed by any of those CAs (and that's likely to be in the tens of millions if you are not careful).
There's a lot more to using public CAs like browsers, and you should read through Peter Gutmann's Engineering Security. Pay particular attention to the Security Diversification strategies.
When the client connects to your server, your server should send its X509 certificate (the leaf certificate) and any intermediate certificates required to build a valid chain back to the root certificate you distribute.
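As a sketch of what "distribute only your root certificate" means on the client side, assuming Python's ssl module (the file name and host are placeholders):

```python
import socket
import ssl

# Trust only the root certificate shipped with the application;
# "my_root_ca.pem" and the host name are hypothetical.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # hostname and cert checks on by default
context.load_verify_locations(cafile="my_root_ca.pem")

with socket.create_connection(("server.example.internal", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="server.example.internal") as tls:
        # The handshake succeeds only if the server presented a chain
        # (leaf plus intermediates) that verifies up to my_root_ca.pem.
        print("verified peer:", tls.getpeercert()["subject"])
```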
Finally, you can get free SSL/TLS certificates trusted by most major browsers (including mobile) from Eddy Nigg at Startcom. He charges for the revocation (if needed) because that's where the cost lies. Other CAs charge you up front and pocket the proceeds if not needed.
Should each end-user have their own certificate generated?
That is possible, too. That's called client certificates or client authentication. Ideally, you would be running your own PKI because (1) you control everything (including the CA operations) and don't need to trust anyone outside the organization; and (2) it can get expensive to have a commercial CA sign every user's certificate.
If you don't want to use client-side certificates, please look into PSK (Pre-Shared Keys) and SRP (Secure Remote Password). Both beat the snot out of classic X509 using RSA key transport. PSK and SRP do so because they provide mutual authentication and channel binding. In these systems, both the client and server know the secret or password and the channel is set up; or one (or both) does not know it and channel setup fails. The plaintext username and password are never put on the wire as in RSA transport and basic_auth schemes. (I prefer SRP because it's based on Diffie-Hellman, and I have implemented it in a few systems.)
What about private keys?
Yes, you need to manage the private keys associated with certificates. You can (1) store them in the filesystem with permissions or ACLs; (2) store them in a keystore or keychain like the ones Android, Mac OS X, iOS, or Windows provide; (3) store them in a Hardware Security Module (HSM); or (4) store them remotely and keep them online using the Key Management Interoperability Protocol (KMIP).
Note: unattended key storage on a server is a problem without a solution. See, for example, Peter Gutmann's Engineering Security, page 368 under "Wicked Hard Problems" and "Problems without Solutions".
Do I bake the private key into the server binary?
No. You generate them when needed and then store them with the best protection you can provide.
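A minimal sketch of "generate when needed, then protect", assuming Python with pyca/cryptography; the output file name and the placeholder passphrase are illustrative only:

```python
import os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the key at install/first-run time instead of baking it into a binary.
key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    # The passphrase should come from an operator or a secrets manager;
    # b"changeit" is only a placeholder.
    encryption_algorithm=serialization.BestAvailableEncryption(b"changeit"),
)

# Create the file with owner-only permissions before the key material is written.
fd = os.open("server_key.pem", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "wb") as f:
    f.write(pem)
```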
Or should there be a file with the private key?
Yes, something like that. See above.
I'm sure this is a solved problem, but I'm not quite sure where to look, or what to search for.
I'm not sure I would really call it solved because of the key distribution problem.
And some implementations are just really bad, so you would likely wonder how the code passed for production.
The first thing you probably want (since you're focusing on key management) is a treatment of "key management" and "key hierarchies".
You might also want some reference material. From the security engineering point of view, read Gutmann's Engineering Security and Ross Anderson's Security Engineering. From an implementation standpoint, grab a copy of Network Security with OpenSSL and SSL and TLS: Designing and Building Secure Systems.
I have multiple tiny Linux embedded servers on BeagleBone Black (could be a Raspberry Pi, it makes no difference) that need to exchange information with a main server (hosted on the web).
Ideally, each system talks to each other by simple RESTful commands - for instance, the main server sends out new configurations to the embedded servers - and the servers send back data.
Commands could be also issued by a human user from the main server or directly to the embedded servers.
What would be the most "standard" way for these servers to authenticate each other? I'm thinking OAuth, assuming that each machine has its own OAuth user - but I'm not sure if that is the correct pattern to follow.
What would be the most "standard" way for these servers to authenticate each other? I'm thinking OAuth, assuming that each machine has its own OAuth user - but I'm not sure if that is the correct pattern to follow.
Authenticating machines is no different than authenticating users. They are both security principals. In fact, Microsoft made machines a first-class citizen in Windows 2000. They can be a principal on securable objects like files and folders, just like regular users can.
(There is some hand waving since servers usually suffer from the Unattended Key Storage problem described by Gutmann in his Engineering Security book).
I would use a private PKI (i.e., be my own Certification Authority) and utilize mutual authentication based on public/private key pairs like SSL/TLS. This has the added benefit of re-using a lot of infrastructure, so the HTTP/HTTPS/REST "just works" as it always has.
If you use a Private PKI, issue certificates for the machines that include the following key usage:
Digital Signature (Key Usage)
Key Encipherment (Key Usage)
Key Agreement (Key Usage)
Web Client Authentication (Extended Key Usage)
Web Server Authentication (Extended Key Usage)
Or, run a private PKI and only allow communications between servers using a VPN based on your PKI. You can still tunnel your RESTful requests, and no others will be able to establish a VPN to one of your servers. You get the IP filters for free.
Or use a Kerberos style protocol with a key distribution center. You'll need the entire Kerberos infrastructure, including a KDC. Set up secure channels based on the secrets proctored by the KDC.
Or, use a SSH-like system, public/private key pairs and sneaker-net to copy the peer's public keys to one another. Only allow connections from machines whose public keys you have.
I probably would not use an OAuth-like system. In the OAuth-like system, you're going to be both the Provider and Relying Party. In this case, you might as well be a CA and reuse everything from SSL/TLS.
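For the first option (a private PKI with mutual authentication), a minimal server-side sketch in Python's ssl module might look like this; the certificate and CA file names are placeholders:

```python
import socket
import ssl

# Server side of mutual TLS: present our own certificate and require that
# clients present one signed by our private CA.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server_cert.pem", keyfile="server_key.pem")
ctx.load_verify_locations(cafile="private_ca.pem")
ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid certificate

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()
        print("client authenticated:", conn.getpeercert()["subject"])
        conn.close()
```

The client side mirrors this: it loads its own certificate and key with load_cert_chain and points load_verify_locations at the same private CA.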
I think you need to implement mutual authentication between the servers using SSL for your requirement.
I do not know much about M2M environments, but using OAuth to authenticate your servers is overkill.
https://security.stackexchange.com/questions/34897/configure-ssl-mutual-two-way-authentication
Also, encrypting your communication channel while sending commands would make it safer against attacks.
I am looking for a way to encrypt messages between client and server using WCF. WCF offers a lot of built-in security mechanisms to encrypt traffic between client and server, but nothing seems to fit my requirements.
I don't want to use certificates since they are too complicated, so please don't suggest using certificates. I don't need confidentiality, so I thought I'd do best using plain RSA.
I want real security, no hardcoded key or something. I was thinking about having a public/private keypair generated every time the server starts. Both keys will only be stored in RAM.
Then when a client connects, it should work exactly like SSL, just as described here:
1. Exchange some form of a private/public key pair: the server generates a key pair and keeps the private key to itself and shares the public key with the client (e.g. over a WCF message, for instance).
2. Using that private/public key pair, exchange a common shared secret, e.g. an "encryption key" that will symmetrically encrypt your messages (and since it's symmetrical, the server can use the same key to decrypt the messages).
3. Set up infrastructure on your client (e.g. a WCF extension called a behavior) to inspect the message before it goes out and encrypt it with your shared secret.
That would be secure, wouldn't it?
Is there any existing solution to achieve what I described? If not, I'll create it on my own. Where do I best start? Which kind of WCF custom behaviour is the best to implement?
EDIT:
As this is NOT secure, I'll take the following approach:
When installing the server component, a new X509 certificate will be generated and automatically added to the cert store (of the server). The public part of this generated certificate will be dynamically included in the client setup. When running the client setup on the client machine, the certificate will be installed into the trusted Windows certificate store of the client.
So there's no extra work when installing the product and everything should be secure, just as we want it.
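A sketch of the certificate-generation step of that approach, assuming Python with pyca/cryptography (the common name and validity period are placeholders; importing the result into the Windows certificate store is a separate installer step not shown here):

```python
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "myproduct-server")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer equals subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .sign(key, hashes.SHA256())
)

# The installer would then import server_cert.pem into the appropriate store.
with open("server_cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```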
You've said you don't want to use certificates. I won't push certificate use on you, but one thing you are missing is that certificates serve a purpose.
A certificate proves that the key you are negotiating an SSL connection with belongs to the entity you think it belongs to. If you have some way of ensuring this is the case without using certificates, by all means, use raw keys.
The problem is, in step 1:
1. Exchange some form of a private/public key pair: the server generates a key pair and keeps the private key to itself and shares the public key with the client (e.g. over a WCF message, for instance).
How does the client know that the public key it received from the server wasn't intercepted by a man-in-the-middle and replaced with the MITM's key?
This is why certificates exist. If you don't want to use them, you have to come up with another way of solving this problem.
Do you have a small, well-known set of clients? Is it possible to preconfigure the server's public key on the client?
Alexandru Lungu has written an article on CodeProject:
WCF Client Server Application with Custom Authentication, Authorization, Encryption and Compression
No, it would not be secure!
Since the public key is exchanged without any authentication, an attacker could mount a man-in-the-middle attack, and all the security is gone.
The only really secure way of encrypting messages between server and client IS to actually use digital certificates.
I'm sorry, the only two methods of providing secure communications are:
Use a public key infrastructure that includes a chain of trust relationships, a.k.a. certificates
or
Use a shared secret, a.k.a. a hardcoded key.
Nothing else addresses all of the known common attack vectors such as man-in-the-middle, replay attack, etc. That's the hard truth.
On the other hand I can offer you an alternative that may alleviate your problem somewhat: Use both.
Write a very, very simple web service whose only job is to generate symmetric keys. Publish this service via SSL. Require end user credential authentication in order to obtain a symmetric key.
Write the rest of your services without SSL but using the symmetric keys published via the first service.
That way your main app doesn't have to deal with the certificates.
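A language-agnostic sketch of that first service, shown here with Python's standard library rather than WCF; the handler, port, and file names are assumptions, and the authentication step is only indicated by a comment:

```python
import base64
import http.server
import secrets
import ssl

class KeyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # A real service would authenticate the caller here (e.g. credentials
        # sent over the TLS channel) before handing out a key.
        key = secrets.token_bytes(32)  # fresh 256-bit symmetric key
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(base64.b64encode(key))

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server_cert.pem", "server_key.pem")  # placeholder file names

httpd = http.server.HTTPServer(("0.0.0.0", 8443), KeyHandler)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```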
I have stored the client SSL certificate in a database as a file, and its private key's password in a column (not using the certificate store), for each web service that requires a certificate. I preferred this because I don't have to worry about user privileges to access the certificate if the code is moved to another server (Dev/QA/Prod). As the certificate is stored in a centralized location, I don't have to install it on each machine. Moreover, business people can upload a certificate any time they want without involving a developer, and the certificate can differ between the QA and Production environments. My concern now is: does storing the certificate in a database compromise security compared with storing it in the certificate store?
From a security point of view, your strategy of placing the private keys AND the password in the database gives away rights to all the keys to the database admin, the software author AND the system admin.
Placing the private key's password on the system but not in the database only hands total control to the system admin and software author.
One other solution would be to store the private key in a hardware device (HSM) and only store references in your database. Your software would then use a hardware crypto API like PKCS#11 to perform the SSL client handshake crypto and your private keys would never be in system memory or on disk at all.
I guess you mean "I have stored Client SSL certificate in database as a file, its private key and its private key password". Maybe the X.509 certificate and its private key are in a single container, presumably in PKCS#12 format.
(Or do you really want to store only the certificate? Storing just the certificate is fine, but then why would you store the private key's password too?)
This isn't necessarily wrong, but you have to make sure access to this database is well protected.
In general, the private key is really that: private. Some certification authorities even mandate in their policies that no one other than the end-user may know the private key (at least with reasonable protection, e.g. within the browser or in a PKCS#12 file for which only they know the password).
This makes sense when using public and private keys.
I think you need to look carefully at all the angles of your design. It's not clear why you want to store the private keys in a database for the users to retrieve. It may make sense to store private keys in a central place, but this may also defeat the point of using client-certificate to authenticate (it really depends on the full picture and the degree of security you really want).
Storing all your users' private keys in a central place is probably not as secure as having each user use their own store, but it may be acceptable. Ultimately, it depends on the threats you may encounter.