I'm looking to obtain a certificate from an AATL authority to use with iText to apply tamper-evident signatures to PDF documents as part of a cloud application that I'm working on.
As best I can determine, AATL certificates are delivered to customers on USB HSMs after the standard Adobe AATL verification process. Unfortunately, that restricts their use to devices I have physical access to, which obviously isn't feasible for a cloud application.
I've been trying to research what my best options are on this front, but haven't been able to find any clear guidance on best practices or impartial sources of knowledge. I've come up with two possible ideas to illustrate, in slightly more concrete terms, what I am looking for.
Obviously any answer that results in the same outcome of either of these ideas is more than welcome as well!
1st Idea
Is there any way for me to obtain an AATL certificate by generating a CSR from Azure Key Vault or Azure Dedicated HSM (Gemalto) and having an AATL provider issue their response such that the certificate is loaded into Azure's standards-compliant store?
By doing this, my hope is that I could then code my application against the Azure Key Vault APIs or the Gemalto HSM to perform the signatures.
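For concreteness, here is a minimal, untested sketch of what that signing step could look like with the Azure Key Vault Python SDK; the vault URL, key name, and the content being hashed are placeholders, and in a real flow iText's deferred-signing hooks would supply the exact bytes to sign.

```python
# Sketch only: signing a PDF document hash with a key held in Azure Key Vault.
# Assumes the azure-identity and azure-keyvault-keys packages; the vault URL
# and key name are placeholders, not real resources.
import hashlib

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys.crypto import CryptographyClient, SignatureAlgorithm

credential = DefaultAzureCredential()
crypto_client = CryptographyClient(
    "https://my-vault.vault.azure.net/keys/aatl-signing-key", credential
)

# A PDF library with external/deferred signing support hands you the bytes
# to sign; here we just hash placeholder content for illustration.
digest = hashlib.sha256(b"%PDF-1.7 ... byte range to be signed ...").digest()

result = crypto_client.sign(SignatureAlgorithm.rs256, digest)
signature = result.signature  # raw signature bytes to embed in the PDF
```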
2nd Idea
If a USB HSM is my best option, is it possible to derive another certificate from my USB HSM and then load that into Azure Key Vault? Will a key derived from one issued to my company by an AATL authority still pass Acrobat's (and any other) authenticity checks? Or will any certificate with intermediaries between it and the AATL authority fail?
I've been digging into this since I have a very similar requirement at the moment. Yes, it is possible to store an AATL document-signing certificate in Azure Key Vault, because it is a FIPS 140-2 Level 2 compliant HSM. You do not need a dedicated HSM, although that is also supported (Azure Dedicated HSM is FIPS 140-2 Level 3 compliant).
As for the process, you are correct that you would need to issue the CSR from Key Vault directly. If your certificate is delivered on a USB HSM, you will not be able to transfer it to Azure Key Vault, since it will be locked to that HSM.
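As a rough illustration of that CSR flow, here is a hedged sketch using the azure-keyvault-certificates Python SDK; the vault URL, certificate name, subject, and file name are placeholders, and the exact policy settings would need to match your provider's requirements.

```python
# Sketch only: generating a CSR inside Key Vault (the key never leaves the HSM),
# then merging the certificate returned by the AATL provider.
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient, CertificatePolicy

client = CertificateClient(
    vault_url="https://my-vault.vault.azure.net", credential=DefaultAzureCredential()
)

# An "Unknown" issuer tells Key Vault the CSR will be signed externally.
policy = CertificatePolicy(
    issuer_name="Unknown",
    subject="CN=Example Corp Document Signing,O=Example Corp,C=US",
    key_type="RSA-HSM",   # keep the private key HSM-protected
    key_size=2048,
    exportable=False,
)
client.begin_create_certificate("aatl-doc-signing", policy)

# Hand this CSR to the AATL provider during their verification process.
csr_der = client.get_certificate_operation("aatl-doc-signing").csr

# Once the provider returns the signed certificate (plus chain), merge it back.
with open("issued_cert_chain.pem", "rb") as f:
    client.merge_certificate("aatl-doc-signing", [f.read()])
```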
I do not want to list any certificate providers in this answer but I was easily able to find at least 4 that supported my use-case with a quick Google search. I'm currently in the process of getting quotes from each of these vendors.
Related
Each user of our system uses an X509 certificate to sign documents or approve documents.
We issue the certificates ourselves and send them to users in the form of a PKCS#12 file. It has worked perfectly so far.
Now, we want to distribute our certificates on a USB token like other Certificate Authorities do.
Can we make tokens by ourselves using .NET code? If not, which software is used for making such USB tokens?
A USB token is a smartcard with the reader and USB connector fused into a single device (instead of a separate smartcard reader and USB cable).
USB tokens are crypto-capable devices that store a user's private keys securely; public keys and certificates can also be stored on them, although storage space is limited.
Any government-approved Certifying Authority, or your own internal Certifying Authority, can enroll and issue certificates onto a USB token.
I suggest you buy any FIPS-certified USB token or smartcard available in your market.
Please refer to my posts about USB tokens and the APIs available to a Certifying Authority:
https://security.stackexchange.com/a/252698/206413
https://stackoverflow.com/a/68556286/9659885
APIs available for developers:
https://stackoverflow.com/a/63173083/9659885
I've evaluated a solution called EIDVirtual, from mysmartlogon.com, which turns a regular USB drive into a virtual smartcard.
It works in my development environment. However, I'm not sure whether it is straightforward for end users, and the cost needs to be clarified as well. If each end-user PC requires a license, then it is not feasible at all.
We generate PDF documents as part of our server application workflow. We need to be able to sign these documents to prove they are from us and have not been tampered with. We currently do this using a self-signed cert and Syncfusion's PDF module (excellent software, btw!). The problem is (of course) that the self-signed cert is not in the CA trust chain, so although the document is secured, it doesn't automatically validate that it's from us.
I have been researching where to purchase AATL certified certificates and have found several vendors (IdenTrust being one of the more affordable options). However, they all share the same delivery method: they ship it to you on a secure USB or similar token. What I don't understand is how to use this token with our hosted VM. Does anyone have any experience using these types of tokens, i.e. are we simply able to export the private key from the token onto the server?
Thanks
You cannot use the tokens in this scenario.
The certificate issuer should provide you with a web-based API that you integrate into your signing process. Usually you send the document hash and get back the signature, but the actual flow and inputs/outputs depend on the certificate provider.
Then the PDF library you use should let you embed the externally computed signature in the PDF file.
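As a rough sketch of that round trip (the endpoint, request shape, and token below are entirely hypothetical; every provider defines its own API):

```python
# Sketch only: send a document hash to a remote signing service and get back
# a signature to embed via the PDF library's external/deferred signing hooks.
import base64
import hashlib

import requests

# In a real flow the PDF library prepares the document and gives you the exact
# byte ranges (or digest) to sign; hashing the whole file is just for illustration.
with open("invoice.pdf", "rb") as f:
    digest = hashlib.sha256(f.read()).digest()

response = requests.post(
    "https://signing-provider.example.com/api/v1/sign",   # hypothetical endpoint
    headers={"Authorization": "Bearer <api-token>"},
    json={"hashAlgorithm": "SHA256", "hash": base64.b64encode(digest).decode()},
    timeout=30,
)
response.raise_for_status()
signature = base64.b64decode(response.json()["signature"])
```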
I am currently developing with DDS with the security plugins enabled.
When the application starts, it looks for the paths to the CA certificate, the local certificate, and the private key, and loads them into memory for future use.
Certificates containing the public keys are not sensitive as they are usually sent in clear and checked using the CA certificate. So an attacker has no need to get access to it. Is that correct?
However, on an Ubuntu filesystem, how can I protect the private key? The only way I see is to make the key readable only by the specific user that will run the application. But because of privilege escalation, this seems insecure.
Is there a secure way to protect private keys on a filesystem?
About the permissions_ca and Governance/Permissions documents: if those were updated by an attacker (who would create their own CA and sign new Governance/Permissions documents), could an application then gain more permissions? Does that mean those documents should also be secured on the filesystem?
Most of your questions are not specific to DDS Security, but are about general Public Key Infrastructure (PKI) mechanisms as leveraged by DDS Security.
Certificates containing the public keys are not sensitive as they are
usually sent in clear and checked using the CA certificate. So an
attacker has no need to get access to it. Is that correct?
Yes, that is correct. The built-in plugins as defined by the DDS Security specification use a PKI. The public key certificate does not normally contain any confidential information.
However, on a Ubuntu filesystem, how can I protect the private key?
Using "traditional" Unix permissions to allow only the owner of the file to access it is common practice. For example, SSH on Ubuntu by default stores private keys that way, in ~/.ssh. Additionally, the specification allows for encryption of the private key using a passphrase. That too is common practice.
Whether this is good enough for your scenario depends on your system's requirements. It is possible to integrate with existing, stronger key storage solutions like the Windows certificate store or the macOS keychain by implementing custom security plugins. The pluggable architecture defined in the spec was intended to allow for that, but the actual availability of such solutions depends on the DDS product that you are using.
About the permissions_ca and Governance/Permissions documents: if those were updated by an attacker (who would create their own CA and sign new Governance/Permissions documents), could an application then gain more permissions?
Both the Governance and Permissions documents have to be signed by a signing authority. Tampering with those files would break the signature verification and therefore would be detected by other Participants in the Domain.
All participants in the secured DDS Domain need to trust the same signing authority for this mechanism to work. For an attacker to successfully modify a Governance or Permissions document, it would need access to the private keys of the signing authority. Again, this is a common technique in public key infrastructures, similar to public key certificate signing.
In spite of the tamper protection, it still makes sense to protect those files. The actual result of tampering or deletion of those files would be a denial of service, which is harmful as well.
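For illustration, assuming your DDS implementation follows the spec's S/MIME signing of these documents, the verification that participants perform can also be reproduced out-of-band with the openssl CLI (file names below are placeholders):

```python
# Sketch only: verify the S/MIME signature on a Permissions document against
# the permissions CA certificate using the openssl command-line tool.
import subprocess

result = subprocess.run(
    [
        "openssl", "smime", "-verify",
        "-in", "permissions.p7s",              # signed Permissions document
        "-CAfile", "permissions_ca_cert.pem",  # trusted permissions CA
        "-out", "permissions.xml",             # extracted payload on success
    ],
    capture_output=True,
)
if result.returncode != 0:
    raise RuntimeError(f"Permissions document failed verification: {result.stderr.decode()}")
```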
I'm adding SSL support (currently pushing forward with OpenSSL) to an existing application. I've never done anything with cryptography before, and after reading numerous articles and watching videos I'm still a little confused about how to implement it for my specific situation.
Our software is client/server, and the end-user purchases both and will install it on their premises.
My first bit of confusion is regarding certificates and private keys, and how to manage them. Should I have one certificate that gets installed along with the app? Should each end user have their own certificate generated? What about private keys? Do I bake the private key into the server binary? Or should there be a file with the private key?
I'm sure this is a solved problem, but I'm not quite sure where to look, or what to search for.
Thanks for any help and advice.
Adding OpenSSL into existing app
If all you need is an example of an SSL/TLS client, have a look at the OpenSSL wiki and its TLS Client example.
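The wiki example is written in C; for a feel of the overall shape, here is a comparable minimal sketch using Python's standard ssl module (the host name is a placeholder):

```python
# Sketch only: a minimal TLS client that verifies the server certificate
# against the system trust store and fetches a page.
import socket
import ssl

context = ssl.create_default_context()  # verifies the server chain and hostname

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print("Negotiated:", tls.version(), tls.cipher())
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(4096).decode(errors="replace"))
```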
My first bit of confusion is regarding certificates and private keys, and how to manage these.
Yes, key management and distribution is the hardest problem in crypto.
Public CAs have legally binding documents covering these practices. They are called Certification Practice Statements (CPS). You can have a lot of fun with them because the company lawyers tell you what you don't want to hear (or the marketing department refuses to tell you).
For example, here's an excerpt from Apple Inc. Certification Authority Certification Practice Statement:
2.4.2. CA disclaimers of warranties
To the extent permitted by applicable law, Subscriber agreements,
if applicable, disclaim warranties from Apple, including any
warranty of merchantability or fitness for a particular purpose.
2.4.3. CA limitations of liability
To the extent permitted by applicable law, Subscriber agreements,
if applicable, shall limit liability on the part of Apple and shall
exclude liability for indirect, special, incidental, and
consequential damages.
So, Apple is selling you a product with no warranty and accepts no liability!!! And they want you to trust them and give them money... what a racket! And it's not just Apple - other CAs have equally obscene CPSes.
Should I have one certificate that gets installed along with the app?
It depends. If you are running your own PKI (i.e., you are the CA and control the root certificate), then distribute your root X509 certificate with your application and nothing else. There's no need to trust any other CAs, like Verisign or Startcom.
If you are using someone else's PKI (or the Internet's PKI specified in RFC 5280), then distribute only the root X509 certificate needed to validate the chain. In this case, you will distribute one CA's root X509 certificate for validation. You could potentially end up trusting just about any certificate signed by that particular CA, however (and that's likely to be in the tens of thousands if you are not careful).
If you don't know in advance, then you have to do like the browsers and pick a bunch of CAs to trust and carry around their root certificates for your application. You can grab a list of them from Mozilla, for example. You could potentially end up trusting just about any certificate signed by any of those CAs, however (and that's likely to be in the tens of millions if you are not careful).
There's a lot more to using public CAs like browsers, and you should read through Peter Gutmann's Engineering Security. Pay particular attention to the Security Diversification strategies.
When the client connects to your server, your server should send its X509 certificate (the leaf certificate) and any intermediate certificates required to build a valid chain back to the root certificate you distribute.
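As a sketch of what that looks like from the client side (paths and host are placeholders; this assumes Python's standard ssl module):

```python
# Sketch only: a client that trusts only the root certificate shipped with the
# application, rather than the platform's CA bundle.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations(cafile="my_company_root.pem")  # the one distributed root
# For PROTOCOL_TLS_CLIENT, verify_mode defaults to CERT_REQUIRED and
# check_hostname to True.

with socket.create_connection(("app.example.com", 443)) as sock:
    # The server must present its leaf plus any intermediates; the chain is
    # validated up to my_company_root.pem or the handshake fails.
    with context.wrap_socket(sock, server_hostname="app.example.com") as tls:
        print(tls.getpeercert()["subject"])
```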
Finally, you can get free SSL/TLS certificates trusted by most major browsers (including mobile) from Eddy Nigg at Startcom. He charges for the revocation (if needed) because that's where the cost lies. Other CAs charge you up front and pocket the proceeds if not needed.
Should each end-user have their own certificate generated?
That is possible, too. That's called client certificates or client authentication. Ideally, you would be running your own PKI because (1) you control everything (including the CA operations) and don't need to trust anyone outside the organization; and (2) it can get expensive to have a commercial CA sign every user's certificate.
If you don't want to use client-side certificates, please look into PSK (Pre-Shared Keys) and SRP (Secure Remote Password). Both beat the snot out of classic X509 using RSA key transport. PSK and SRP do so because they provide mutual authentication and channel binding. In these systems, both the client and server know the secret or password and the channel is set up; or one (or both) does not know it and channel setup fails. The plain-text username and password are never put on the wire as in RSA transport and basic_auth schemes. (I prefer SRP because it's based on Diffie-Hellman, and I have implemented it in a few systems.)
What about private keys?
Yes, you need to manage the private keys associated with the certificates. You can (1) store them in the filesystem with permissions or ACLs; (2) store them in a keystore or keychain like those Android, Mac OS X, iOS, or Windows provide; (3) store them in a Hardware Security Module (HSM); or (4) store them remotely and keep them online using the Key Management Interoperability Protocol (KMIP).
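A minimal sketch of option (1), assuming the Python 'cryptography' package (the file name and passphrase are placeholders):

```python
# Sketch only: generate a key and keep it on disk encrypted under a passphrase,
# readable only by the service account.
import os

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"not-a-real-passphrase"),
)

# Create the file with owner-only permissions (0600) from the start.
fd = os.open("server_key.pem", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "wb") as f:
    f.write(pem)
```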
Note: unattended key storage on a server is a problem without a solution. See, for example, Peter Gutmann's Engineering Security, page 368 under "Wicked Hard Problems" and "Problems without Solutions".
Do I bake the private key into the server binary?
No. You generate them when needed and then store them with the best protection you can provide.
Or should there be a file with the private key?
Yes, something like that. See above.
I'm sure this is a solved problem, but I'm not quite sure where to look, or what to search for.
I'm not sure I would really call it solved because of the key distribution problem.
And some implementations are just really bad, so you would likely wonder how the code passed for production.
The first thing you probably want (since you're focusing on key management) is a treatment of "key management" and "key hierarchies".
You might also want some reference material. From the security engineering point of view, read Gutmann's Engineering Security and Ross Anderson's Security Engineering. From an implementation standpoint, grab a copy of Network Security with OpenSSL and SSL and TLS: Designing and Building Secure Systems.
I have multiple tiny Linux embedded servers on BeagleBone Black (it could be a Raspberry Pi, it makes no difference) that need to exchange information with a main server (hosted on the web).
Ideally, the systems talk to each other via simple RESTful commands - for instance, the main server sends out new configurations to the embedded servers - and the servers send back data.
Commands could be also issued by a human user from the main server or directly to the embedded servers.
What would be the most "standard" way for each server to authenticate itself to the others? I'm thinking OAuth, assuming that each machine has its own OAuth user - but I'm not sure if that is the correct pattern to follow.
What would be the most "standard" way for each server to authenticate itself to the others? I'm thinking OAuth, assuming that each machine has its own OAuth user - but I'm not sure if that is the correct pattern to follow.
Authenticating machines is no different than authenticating users. They are both security principals. In fact, Microsoft made machines a first-class citizen in Windows 2000. They can be a principal on securable objects like files and folders, just like regular users can.
(There is some hand waving since servers usually suffer from the Unattended Key Storage problem described by Gutmann in his Engineering Security book).
I would use a private PKI (i.e., be my own Certification Authority) and utilize mutual authentication based on public/private key pairs like SSL/TLS. This has the added benefit of re-using a lot of infrastructure, so the HTTP/HTTPS/REST "just works" as it always has.
If you use a private PKI, issue certificates for the machines that include the following key usages (a certificate-request sketch follows the list):
Digital Signature (Key Usage)
Key Encipherment (Key Usage)
Key Agreement (Key Usage)
Web Client Authentication (Extended Key Usage)
Web Server Authentication (Extended Key Usage)
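As a sketch, a certificate request carrying those usages could be built like this with the Python 'cryptography' package (the subject name is a placeholder; your private CA would then sign the request):

```python
# Sketch only: a CSR with the Key Usage and Extended Key Usage values listed above.
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "machine-01.internal")]))
    .add_extension(
        x509.KeyUsage(
            digital_signature=True, key_encipherment=True, key_agreement=True,
            content_commitment=False, data_encipherment=False, key_cert_sign=False,
            crl_sign=False, encipher_only=False, decipher_only=False,
        ),
        critical=True,
    )
    .add_extension(
        x509.ExtendedKeyUsage([ExtendedKeyUsageOID.CLIENT_AUTH, ExtendedKeyUsageOID.SERVER_AUTH]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
```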
Or, run a private PKI and only allow communications between servers using a VPN based on your PKI. You can still tunnel your RESTful requests, and no others will be able to establish a VPN to one of your servers. You get the IP filters for free.
Or use a Kerberos style protocol with a key distribution center. You'll need the entire Kerberos infrastructure, including a KDC. Set up secure channels based on the secrets proctored by the KDC.
Or, use a SSH-like system, public/private key pairs and sneaker-net to copy the peer's public keys to one another. Only allow connections from machines whose public keys you have.
I probably would not use an OAuth-like system. In the OAuth-like system, you're going to be both the Provider and Relying Party. In this case, you might as well be a CA and reuse everything from SSL/TLS.
I think you need to implement mutual authentication between your servers using SSL for your requirement.
I do not know much about M2M environments, but using OAuth to authenticate your servers is overkill.
https://security.stackexchange.com/questions/34897/configure-ssl-mutual-two-way-authentication
Also, encrypting your communication channel while sending commands would make it safer against attacks.
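As a minimal sketch of such a mutually authenticated RESTful call (the URL and file paths are placeholders; both certificates would be issued by your own private CA):

```python
# Sketch only: a RESTful request over mutual TLS. 'ca.pem' is the private CA,
# and the client certificate/key pair was issued by that same CA.
import requests

response = requests.post(
    "https://main-server.example.com/api/config",
    json={"device": "bbb-01", "status": "ok"},
    cert=("client_cert.pem", "client_key.pem"),  # client side of the mutual auth
    verify="ca.pem",                             # trust only the private CA
    timeout=10,
)
response.raise_for_status()
print(response.json())
```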