What is an etoken? - cryptography

I need to write code to check the validity of the digital certificate present on an eToken.
I am not familiar with eTokens. Can anyone please answer the following questions:
How do we access the digital certificate content from the eToken?
Can we access the private key stored on the eToken?
When we plug the eToken into a computer, does it copy the digital certificate to the computer or not? If yes, where does it copy it to?
I need to write a C++ program for this. Can we use the Cryptographic APIs provided by Microsoft (like CryptImportKey() and CryptExportKey()) for the above requirement?

"etoken" was the name of one of first USB cryptotokens produced by Aladdin. What you are asking for is usually referred to as security token. This is a hardware device with it's own memory, in which certificates and private keys are stored.
Tokens need drivers to be installed in order to work properly. The driver set includes implementation of CSP (Cryptographic Service Provider) for CryptoAPI. CSP does the job of presenting certificates, stored in the token, to CryptoAPI. To answer your questions:
Via CryptoAPI or PKCS#11 interface (drivers for both are supplied by the vendor).
You can perform certain operations with the private key by calling the appropriate API. But the key itself is not extractable.
I can't say for sure but for me it looks like certificates are copied to in-memory certificate store for speed of operations.
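For the first question, here is a minimal C++ sketch of how the token's certificates become visible through CryptoAPI once the vendor's drivers and CSP are installed. It assumes the CSP publishes the token's certificates into the current user's "MY" store, which is the usual behaviour but is vendor-dependent:

```cpp
// Sketch: enumerate certificates visible through CryptoAPI and do a basic
// validity-period check. Assumes the token vendor's CSP propagates its
// certificates into the current user's "MY" store (typical, but not guaranteed).
#include <windows.h>
#include <wincrypt.h>
#include <iostream>
#pragma comment(lib, "crypt32.lib")

int main()
{
    HCERTSTORE hStore = CertOpenSystemStoreW(0, L"MY");
    if (!hStore) {
        std::cerr << "CertOpenSystemStore failed: " << GetLastError() << "\n";
        return 1;
    }

    PCCERT_CONTEXT pCert = nullptr;
    while ((pCert = CertEnumCertificatesInStore(hStore, pCert)) != nullptr) {
        wchar_t subject[256];
        CertGetNameStringW(pCert, CERT_NAME_SIMPLE_DISPLAY_TYPE, 0,
                           nullptr, subject, 256);
        std::wcout << L"Subject: " << subject << L"\n";

        // Simplest validity check: is "now" inside the certificate's validity period?
        if (CertVerifyTimeValidity(nullptr, pCert->pCertInfo) != 0)
            std::wcout << L"  -> expired or not yet valid\n";
    }

    CertCloseStore(hStore, 0);
    return 0;
}
```

A full validity check (chain building, revocation) would go through CertGetCertificateChain / CertVerifyCertificateChainPolicy, but the loop above is where the token's certificates first become reachable from your C++ code.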

In relation to your second question, I believe it is possible to access the private key on the security token. The security token had to be pre-programmed and loaded with a private key somehow. Also, the last time we renewed our certificate, we did it online, using the issuer's web interface, which installed an ActiveX module that uploaded the new certificate to the device. I don't know if this procedure also uploaded a new key, but possibly not, since I don't believe you need to change your private key to create a new public certificate for yourself (which needs to be signed by the issuer to be trusted, I believe).
Sorry I might not make much sense as I am new to the whole idea of Public Key Infrastructure.
If someone else could validate/invalidate my claims, please share your knowledge.
EDIT: I found this hardware hack for Aladdin devices: http://seclists.org/bugtraq/2000/May/48
Basically, it is possible to read the data on the eToken, but it requires a direct hardware interface to the device's on-board memory.

Related

Client authentication vs. Password authentication in presence of TPM on the client device

With the goal of better understanding the pros and cons of using certificate-based client authentication vs. password-based authentication I have searched previous posts here and read this one.
Yet I'd like to consider a specific scenario where 1) clients are applications deployed on devices and using unguessable passwords (which are in any case available on the client device if it is taken over by an attacker); 2) certificates are signed by a private CA owned by the organization deploying the server as well as the client applications; 3) clients do not (need to) perform a logout; 4) a TPM is available on the device; and 5) an attacker can get physical access to a device holding the certificate/password.
The way I understand it, the key of a client certificate can be hardware-secured on the client TPM, thereby making it impossible to reuse the same certificate in a different device.
Still, I’m not clear if an attacker with physical access to the device would get a chance to read the secret as it is handed over to the application by the TPM.
Wondering if the same could be applied to passwords.
I did not consider revocation in my context because the server could as easily revoke a password if needed without putting a PKI in place.
Does the presence of a TPM make one option preferable over the other? Are there other aspects that make one preferable over the other?
Still, I’m not clear if an attacker with physical access to the device would get a chance to read the secret as it is handed over to the application by the TPM.
The key point of a TPM (like smart cards and HSMs) is key privacy: the key is never exposed to an application. The TPM exposes interfaces to perform cryptographic operations using the key, but not access to the key material. Instead of taking the key and doing something with it (signing or encrypting data), and thus touching the key material, you ask the TPM to perform the requested operation (sign or encrypt the data) and get the result, without ever touching the key material.
In addition, passwords are often cached somewhere in memory by applications (due to unreleased handles or active references, for example), whereas a TPM performs cryptographic operations inside the chip and the key never leaves it.
This is where a TPM beats passwords. If you have a TPM, go with it.
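To make that concrete, here is a hedged C++ sketch using Windows CNG and the Microsoft Platform Crypto Provider (the TPM-backed key storage provider). The key name "MyTpmKey" is hypothetical and would have been created earlier (for example with NCryptCreatePersistedKey); the point is that the application hands in a hash and gets back a signature, never the key:

```cpp
// Sketch: sign a precomputed SHA-256 hash with a TPM-backed key via CNG.
// The private key never leaves the TPM; we only get the signature back.
// "MyTpmKey" is a hypothetical key name created beforehand.
#include <windows.h>
#include <bcrypt.h>
#include <ncrypt.h>
#include <vector>
#include <iostream>
#pragma comment(lib, "ncrypt.lib")

int main()
{
    NCRYPT_PROV_HANDLE hProv = 0;
    NCRYPT_KEY_HANDLE  hKey  = 0;

    if (NCryptOpenStorageProvider(&hProv, MS_PLATFORM_CRYPTO_PROVIDER, 0) != ERROR_SUCCESS) {
        std::cerr << "No TPM key storage provider available\n";
        return 1;
    }
    if (NCryptOpenKey(hProv, &hKey, L"MyTpmKey", 0, 0) != ERROR_SUCCESS) {
        std::cerr << "Key not found\n";
        NCryptFreeObject(hProv);
        return 1;
    }

    BYTE hash[32] = { /* SHA-256 of the data to sign */ };
    BCRYPT_PKCS1_PADDING_INFO pad = { BCRYPT_SHA256_ALGORITHM };

    // First call asks for the required signature size, second call signs.
    DWORD sigLen = 0;
    NCryptSignHash(hKey, &pad, hash, sizeof(hash), nullptr, 0, &sigLen, BCRYPT_PAD_PKCS1);
    std::vector<BYTE> sig(sigLen);
    SECURITY_STATUS st = NCryptSignHash(hKey, &pad, hash, sizeof(hash),
                                        sig.data(), sigLen, &sigLen, BCRYPT_PAD_PKCS1);

    if (st == ERROR_SUCCESS)
        std::cout << "Got a " << sigLen << "-byte signature; the key itself stayed in the TPM\n";

    NCryptFreeObject(hKey);
    NCryptFreeObject(hProv);
    return 0;
}
```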

what happens to JWT if someone gets my private and public key?

It seems to me that if my private and public key are compromised (which I use to sign and verify JWTs), then anyone can independently generate JWT tokens for themselves to use on my API?
Whereas on the other hand, if I generated my own tokens myself and stored a look-up table of 'one-way-hashed user id' => 'token', then if someone broke into my system, they would not be able to generate tokens to use on my API, and they would also not be able to use the tokens (because they would not know which token belonged to which user).
If someone breaks into your system and it is still secure, then you made a secure system; nothing to worry about.
With JWT, it appears to me that if someone breaks in, I do have something to worry about.
It seems to me that if my private and public key are compromised (which I use to sign and verify JWTs), that anyone can independently generate JWT tokens for themselves to use on my API?
Yes, that's correct.
Public keys are intended to be public and can be distributed.
On the other hand, private keys are supposed to be private and must be kept secure on your server. Anyone who has access to the private key is able to issue tokens.
Disclosing your private key is a huge security breach.
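To see why, here is a hedged C++/OpenSSL sketch (the file private.pem is a stand-in for your compromised signing key) that builds and signs an RS256-style header.payload the way a JWT issuer would. Anyone who can run this against your private key can mint tokens your API will accept:

```cpp
// Sketch: forge/sign an RS256-style JWT given only the private key.
// Assumes a hypothetical RSA key in "private.pem"; error handling trimmed.
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <string>
#include <vector>
#include <cstdio>
#include <iostream>

// base64url without padding, as used by JWTs
static std::string b64url(const unsigned char *data, size_t len)
{
    std::string out(4 * ((len + 2) / 3) + 1, '\0');
    int n = EVP_EncodeBlock((unsigned char *)&out[0], data, (int)len);
    out.resize(n);
    for (char &c : out) { if (c == '+') c = '-'; else if (c == '/') c = '_'; }
    while (!out.empty() && out.back() == '=') out.pop_back();
    return out;
}

int main()
{
    std::string header  = R"({"alg":"RS256","typ":"JWT"})";
    std::string payload = R"({"sub":"anyone-i-like","admin":true})";
    std::string signingInput =
        b64url((const unsigned char *)header.data(), header.size()) + "." +
        b64url((const unsigned char *)payload.data(), payload.size());

    FILE *fp = fopen("private.pem", "r");          // the compromised key
    if (!fp) return 1;
    EVP_PKEY *key = PEM_read_PrivateKey(fp, nullptr, nullptr, nullptr);
    fclose(fp);

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestSignInit(ctx, nullptr, EVP_sha256(), nullptr, key);
    size_t sigLen = 0;
    EVP_DigestSign(ctx, nullptr, &sigLen,
                   (const unsigned char *)signingInput.data(), signingInput.size());
    std::vector<unsigned char> sig(sigLen);
    EVP_DigestSign(ctx, sig.data(), &sigLen,
                   (const unsigned char *)signingInput.data(), signingInput.size());

    std::cout << signingInput << "." << b64url(sig.data(), sigLen) << "\n";

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(key);
    return 0;
}
```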
It seems to me that if my private and public key are compromised (which I use to sign and verify JWTs), that anyone can independently generate JWT tokens for themselves to use on my API?
As also pointed out, you need to keep your private key secure. The best way to keep it secure is to use an HSM for signing your data; in this case you can extend the JWT generator to sign the data through a crypto DLL inside the HSM. This ensures that the private key is never exposed outside the HSM.
Whereas on the other hand if I generated my own tokens myself, and
stored a look-up table of 'one-way-hashed user id' => 'token',
Anyone can generate your unkeyed hash. A secure hash scheme involves a private key, which turns it into a digital signature. Now we've come full circle, because that's exactly what a JWT is.
Alternatively, you store them in a datastore, but now you must query this on every round trip. Most ticket(cookie)/token authentication systems use public key verification, which verifies the validity of the ticket/token without a database roundtrip.
If you store them in a datastore, you must track expiration in the datastore as well. Tickets/tokens can have an expiration built into them. The nice thing about tickets/tokens is that the client holds them. You can expire a session more quickly than the authentication. For example, you might get a ticket that allows you to be logged in for 2 hours, but the web server can expire your session after 10 minutes to reduce memory usage. When you access the web server 15 minutes later, it will see your ticket/token, see that it is still valid, and create a new session. This means that at any point in time the server is tracking far fewer idle users.
JWT issuers are great for distributed systems, where authentication is shared. Rather than reimplementing authentication in every system (exposing multiple systems to the private key, as well as to potential bugs in the authentication), we centralize it in one system. We can also leverage third-party integrators that generate JWTs. All we need to do is get their public key for verifying the JWTs.
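The verifying side needs nothing secret. A hedged C++/OpenSSL sketch, with public.pem standing in for the issuer's published public key; the caller is assumed to have split the token and base64url-decoded the signature already:

```cpp
// Sketch: verify an RS256 signature over "header.payload" using only the
// issuer's public key (hypothetical file "public.pem"). No database lookup
// and no secret material is needed on the verifying side.
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <string>
#include <vector>
#include <cstdio>

bool verify_rs256(const std::string &signingInput,        // "header.payload"
                  const std::vector<unsigned char> &sig)  // decoded signature
{
    FILE *fp = fopen("public.pem", "r");
    if (!fp) return false;
    EVP_PKEY *pub = PEM_read_PUBKEY(fp, nullptr, nullptr, nullptr);
    fclose(fp);

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestVerifyInit(ctx, nullptr, EVP_sha256(), nullptr, pub);
    int ok = EVP_DigestVerify(ctx, sig.data(), sig.size(),
                              (const unsigned char *)signingInput.data(),
                              signingInput.size());

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(pub);
    return ok == 1;   // 1 = good signature, 0 = bad, <0 = error
}
```

A real verifier would also check the registered claims (exp, aud, iss) after the signature passes.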
If someone breaks into your system and it is still secure, then you
made a secure system; nothing to worry about.
I now have the list of nonces you were saving in your database, and can log in as anyone. I also likely have your connection strings; even if you're encrypting your application config, if I have root access then I can access the same key store that the application uses to decrypt them. Now I can get your usernames/passwords from your database and log in as anyone, regardless of what authentication scheme you use.
You'll be hard pressed to find a system that can still be secure after someone's gained root or physical access to the machine.
There's a small handful of systems that have purpose-built hardware for storing keys and that handle requests for encryption operations through an interface, thus ensuring the keys are protected at a hardware level and never accessed directly from software:
https://en.wikipedia.org/wiki/Hardware_security_module

Certificates, Provisioning Profiles, public/private keys demystified

This topic continues to confuse me. I thought I'd write out my current understanding and hopefully find out the things I'm right about/things I'm wrong about.
When you create a development certificate, there is a concept of a public and private key. The certificate available through the provisioning portal holds the public key, while your private key is stored in your keychain. In order to code sign your app, you've got to have both.
In order to run an app, the device must have a provisioning profile, which essentially holds on to an app identifier, a set of recognized certificates (the app must've been signed by one of these certificates), and a set of device identifiers (which indicate which devices are allowed to run the app).
The 'recognized certificates' have references to the public key, while the private key is essentially passed on by the app.
Thus, with regard to the App Store, we can think of a normal device as coming with a default provisioning profile that already has Apple's 'public key', and Apple performs their own code sign operation before distributing, whereby they add their private key.
Perfect? Close? Way off? Insane?
For what it's worth, here is my updated understanding:
A Provisioning profile is a file that tells you which apps (via an AppID), signed by which developer (via the certificate) can run on which devices (the UDIDs).
With certificates, there is a concept of a public and private key. Public and private keys are mathematically linked such that one can encrypt the plain text and the other can decrypt the cipher text. Certificates allow Apple to ensure two things: first, that only registered developers can distribute their code, and second, that the code being distributed isn't altered on its way to your device.
When you build your code in Xcode, you 'code sign' your application with the private key located in your keychain, thereby 'locking' it. In order to unlock/decrypt the code, the destination device must have access to your public key. The device gets the public key from your certificate, which is included in the provisioning profile.
In order to verify that the code remains unchanged on its way from the developer to the device, your certificate identifies an algorithm that can convert your code/data into what is known as a 'digest'. On the developer side, the data/code is run through the algorithm, generating a digest, which is then locked (encrypted) with the private key.
When the app package is received by the device, the device can ensure the code hasn't been altered by doing the following: unlocking (decrypting) the digest with the public key (remember, the device can access the certificate through its provisioning profile), running the received data through the same algorithm, and making sure the result is a digest identical to the one sent over by the developer.
Beyond that, the prov profile need only check the UDID of the phone, and make sure the AppID from the profile matches the identifier in the app.
The reason we don't need separate provisioning profiles for apps from the App Store, I assume, is that each iPhone ships with the public key that matches the private key Apple uses to code sign distribution apps.
Ray Wenderlich has it explained reasonably well here. To improve your description, instead of
The 'recognized certificates' have references to the public key, while
the private key is essentially passed on by the app.
I would say:
The app .ipa includes a developer certificate. The developer
certificate is signed with your private key - as well as with the
official Apple private key.
Thus, by verifying the developer certificate with Apple's and your public keys, the iPhone can verify that:
you are the developer of this app
you have been certified by Apple for app development
this app is allowed to run on the iPhone (as long as there is a provisioning profile on the phone that refers to this developer certificate).
Your private key is not stored in any of the certificates or profiles; it is only used for signing. I'm not sure whether the public keys are stored. In order to be fully secure, the phone should fetch the public keys from Apple when verifying.

Document signed and timestamped locally and then uploaded to the server, does it have same characteristics?

Imagine a web application that lets you digitally sign (with personal digital certificates, PKCS#12, issued by trusted CAs) and timestamp PDF documents with a Java applet or ActiveX control. This must obviously happen on the machine of the user, because the private key of the certificate is stored locally.
So once the PDF is signed and timestamped, it is uploaded to the server.
Does the uploaded file have the same properties as the one created locally? Does it make sense to talk about "the original version of the file"?
I'm a bit confused on this.
Correction:
I mean digitally signing a document with the private key of a personal digital certificate (PKCS#7, PKCS#12) to ensure that it has really been signed by that person and not someone else.
If by "the original version of the file" you mean that you intend to "freeze" the document so that nobody can ever make changes to it again - that is neither possible nor the purpose of a digital signature. Anyone could simply "cut out" the a signature embedded within a document, nobody would notice.
Protecting a document from subsequent modification involves some kind of DRM mechanism. For example, "watermarking" involving steganography is used to protect photos so that noone should be able to claim ownership of a photo, even after having modified it. But the technology is not very advanced yet, most algorithms can be easily broken.
This implies that the notion of "the original version of the file" in let's say a legal dispute is something that the involved parties have to agree upon in consent. There's no way to prove origin without either consent or a trusted third party that will attest the integrity of a document, e.g. if they have an independent copy of the document.
Apart from that, uploading a file should not change its contents. The file will have the exact same properties than the local one including the signature that was added on the client side.
The signature will only attest authenticity and integrity of the document. If it is vital for your application to be able to tell that the signed document received is actually the one that was expected, then I'd advise you to do the following:
Create the PDF on the server
Create a hash of the document (same algorithm that will be used by the signature applet)
Send the PDF to the client
Let the client sign it and send it back
Compare the client's hash with the one previously computed on the server
Validate the signature
Validating the signature will ensure integrity and authenticity, comparing the hashes will guarantee you that the signed document you received on the server is indeed a signed version of the original document previously created.
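As a rough illustration of steps 2 and 5, here is a C++/OpenSSL sketch that computes the SHA-256 digest of the PDF bytes generated on the server. Extracting the digest actually covered by the client's embedded CMS signature is left to whatever signature library you validate with, so the comparison below is shown against a digest obtained from that validation step:

```cpp
// Sketch of step 2: hash the PDF generated on the server and keep the digest.
// In step 5 you compare this stored digest with the digest covered by the
// client's signature (how you extract that one depends on your CMS/PDF library).
#include <openssl/evp.h>
#include <fstream>
#include <iterator>
#include <sstream>
#include <iomanip>
#include <string>
#include <vector>

std::string sha256_hex_of_file(const std::string &path)
{
    std::ifstream in(path, std::ios::binary);
    std::vector<unsigned char> bytes((std::istreambuf_iterator<char>(in)),
                                     std::istreambuf_iterator<char>());

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    EVP_Digest(bytes.data(), bytes.size(), digest, &len, EVP_sha256(), nullptr);

    std::ostringstream hex;
    for (unsigned int i = 0; i < len; ++i)
        hex << std::hex << std::setw(2) << std::setfill('0') << (int)digest[i];
    return hex.str();
}

// Usage (step 5): compare the stored server-side digest with the digest
// reported by your signature validation of the uploaded document.
bool same_document(const std::string &stored_digest,
                   const std::string &digest_from_signature)
{
    return stored_digest == digest_from_signature;
}
```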
Concerning timestamps using local clocks: they're worthless; it's very easy to cheat. What you should actually use there are RFC 3161-compliant, cryptographically secured timestamps issued by a trusted third party. Currently that's the only reliable way to include the notion of time in PDF signatures. There's also built-in support for this in Adobe Reader, for example. As these services are generally not available for free, it would make sense to add such a timestamp on the server after receiving the signed document. It is added as an unsigned attribute to the CMS (Adobe still speaks of PKCS7) signature, so it won't break the signature and can safely be added after signature creation.
Okay, let's try to answer your question (as I understand it).
You have some software which uses some private key (and a clock) to add a signature to a file.
This signature depends on the contents of the file, and thus makes sure that the signer knew (or could have known) the contents of the file at the time they signed it. (There are some ways to have "blind signatures", but I assume this is not the case here.)
Uploading the signed file anywhere does not change anything here.
About the timestamp: the key holder can put in any timestamp they want - so this only helps if you want to prove knowledge of the document at some point in time against the key holder, not if you are the key holder and want to prove that you signed at some point in time and not earlier or later. (Also, are you sure their clock is not skewed?)
About whether this is legally relevant, you will have to ask your lawyer. It might depend on
the jurisdiction in which the signature happened, and the one in which you want the signed document to be valid
whether the owner of the key had a chance to actually read the document before signing
whether the owner of the key had actually a choice of signing or not.
If you use some applet or ActiveX control in the user's browser, I would not be totally sure that the last two points really hold.

Understanding SSL

I have three questions regarding SSL that I don't fully understand.
If I get it correctly, a server A submits a request to a certain CA. Then, it receives (after validation etc.) a digital certificate composed of a public key + identity + an encryption of this information using the CA's private key.
Later on, a client B wants to open an SSL communication with A, so A sends B its digital certificate.
My question is: can't B just take this certificate, thus stealing the identity of A - which would allow them to authenticate as A to C, for example? I understand that C will decrypt the certificate with the CA's public key. It will then encrypt its symmetric key, which will only be decryptable by the real A.
However, I do not see where authentication comes to play if B can actually steal A's identity. Unless I am missing something.
Second question: Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway?
If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers.
Thank you very much.
An SSL identity is characterized by four parts:
A private key, which is not shared with anyone.
A public key, which you can share with anyone.
The private and public key form a matched pair: anything you encrypt with one can be decrypted with the other, but you cannot decrypt something encrypted with the public key without the private key or vice versa. This is genuine mathematical magic.
Metadata attached to the public key that states who it is talking about. For a server key, this would identify the DNS name of the service being secured (among other things). Other data in here includes things like the intended uses (mainly used to limit the amount of damage that someone with a stolen certificate can do) and an expiry date (to limit how long a stolen certificate can be used for).
A digital signature on the combination of public key and metadata so that they can't be messed around with and so that someone else can know how much to trust the metadata. There are multiple ways to handle who does the signature:
Signing with the private key (from part 1, above); a self-signed certificate. Anyone can do this but it doesn't convey much trust (precisely because anyone can do this).
Getting a group of people who trust each other to vouch for you by signing the certificate; a web of trust (so called because the trust relationship is transitive and often symmetric, as people sign each other's certificates).
Getting a trusted third party to do the signing; a certificate authority (CA). The identity of the CA is guaranteed by another higher-level CA in a trust chain back to some root authority that “everyone” trusts (i.e., there's a list built into your SSL library, which it's possible to update at deployment time).
There's no basic technical difference between the three types of authorities above, but the nature of the trust people put in them is extremely variable. The details of why this is would require a very long answer indeed!
Items 2–4 are what make up the digital certificate.
When the client, B, starts the SSL protocol with the server, A, the server's digital certificate is communicated to B as part of the protocol. A's private key is not sent, but because B can successfully decrypt a message sent by the other end with the public key in the digital certificate, B can know that A has the private key that matches. B can then look at the metadata in the certificate and see that the other end claims to be A, and can examine the signature to see how much to trust that assertion; if the metadata is signed by an authority that B trusts (directly or indirectly) then B can trust that the other end has A's SSL identity. If that identity is the one that they were expecting (i.e., they wanted to talk to A: in practice, this is done by comparing the DNS name in the certificate with the name that they used when looking up the server address) then they can know that they have a secured communications channel: they're good to go.
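For the curious, a hedged C++/OpenSSL sketch of what B's checks boil down to in code; server_cert, extra_chain and trusted_store are assumed to have been filled in from the handshake and the locally configured list of trusted roots:

```cpp
// Sketch: what "trusting the metadata" boils down to in code.
// server_cert, extra_chain and trusted_store are assumed to come from the
// TLS handshake and the locally configured list of trusted CAs.
#include <openssl/x509.h>
#include <openssl/x509v3.h>
#include <openssl/x509_vfy.h>

bool server_identity_ok(X509 *server_cert,
                        STACK_OF(X509) *extra_chain,
                        X509_STORE *trusted_store,
                        const char *expected_dns_name)
{
    // 1. Is the certificate signed by an authority we trust (directly or
    //    through a chain back to a trusted root)?
    X509_STORE_CTX *ctx = X509_STORE_CTX_new();
    X509_STORE_CTX_init(ctx, trusted_store, server_cert, extra_chain);
    bool chain_ok = (X509_verify_cert(ctx) == 1);
    X509_STORE_CTX_free(ctx);

    // 2. Does the certificate actually talk about the server we meant to
    //    reach? (Compares expected_dns_name against CN / subjectAltName.)
    bool name_ok = (X509_check_host(server_cert, expected_dns_name, 0, 0, nullptr) == 1);

    // Proof that the peer holds the matching private key is done by the
    // TLS handshake itself, not by this function.
    return chain_ok && name_ok;
}
```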
B can't impersonate A with that information though: B doesn't get A's private key, and so it would all fall apart at the first stage of verification. In order for some third party to impersonate A, they need to have (at least) two of:
The private key. The owner of the identity needs to take care to stop this from happening, but it is ultimately in their hands.
A trusted authority that makes false statements. There's occasional weaknesses here — a self-signed authority is never very trustworthy, a web of trust runs into problems because trust is an awkward thing to handle transitively, and some CAs are thoroughly unscrupulous and others too inclined to not exclude the scum — but mostly this works fairly well because most parties are keen to not cause problems, often for financial reasons.
A way to poison DNS so that the target believes a different server is really the one being impersonated. Without DNSsec this is somewhat easy unfortunately, but this particular problem is being reduced.
As to your other questions…
Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway?
While keys are fairly long, certificates are longer (for one thing, they include the signer's public key anyway, which is typically the same length as the key being signed). Hashing is part of the general algorithm for signing documents anyway, because nobody wants to be restricted to signing only very short things. Given that the algorithm is required, it makes sense to use it for this purpose.
If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers.
If you have several servers serving the same DNS name (there's many ways to do this, one of the simplest being round-robin DNS serving) you can put the same identity on each of them. This slightly reduces security, but only very slightly; it's still one service that just happens to be implemented by multiple servers. In theory you could give each one a different identity (though with the same name) but I can't think of any good reason for actually doing it; it's more likely to worry people than the alternative.
Also note that it's possible to have a certificate for more than one service name at once. There are two mechanisms for doing this (adding alternate names to the certificate or using a wildcard in the name in the certificate) but CAs tend to charge quite a lot for signing certificates with them in.
My question is can't "B" just take this certificate, thus stealing the identity of "A" - which will allow them to authenticate as "A" to "C"
There's also a private part (the private key) that does not get transmitted. Without the private key, B cannot authenticate as A. Similarly, I know your StackOverflow username, but that doesn't let me log in as you.
Why use hashing on the certificate if a part of it is already encrypted by the CA?
By doing it this way, anyone can verify that it was the CA who produced the hash, and not someone else. This proves that the certificate was produced by the CA, and thus, the "validation etc." was performed.
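In code, that check is essentially a one-liner with OpenSSL (a hedged sketch; cert and issuer_cert are assumed to be already-loaded X509 objects):

```cpp
// Sketch: verify that a certificate's signature was really produced with the
// issuing CA's key. cert and issuer_cert are assumed to be loaded already.
#include <openssl/x509.h>
#include <openssl/evp.h>

bool signed_by(X509 *cert, X509 *issuer_cert)
{
    EVP_PKEY *ca_pub = X509_get_pubkey(issuer_cert);   // CA's public key
    int ok = X509_verify(cert, ca_pub);                // 1 = signature checks out
    EVP_PKEY_free(ca_pub);
    return ok == 1;
}
```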
If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers.
It depends on the particular case, but you will likely have identical certificates on each.
First question: You are correct about what you get back from the CA, but you are missing part of what you need before you submit your request to the CA. You need (1) a certificate request, and (2) the corresponding private key. You do not send the private key as part of the request; you keep it secret on your server. Your signed certificate includes a copy of the corresponding public key. Before any client will believe that B "owns" the certificate, B has to prove it by using the secret key to sign a challenge sent by the client. B cannot do that without A's private key.
Second question: Typical public-key cryptography operates on fixed-size data (e.g., 2048 bits) and is somewhat computationally expensive. So in order to digitally sign an arbitrary-size document, the document is hashed down to a fixed-size block which is then encrypted with the private key.
Third question: You can use a single certificate on multiple servers; you just need the corresponding private key on all servers. (And of course the DNS name used to reach the server must match the CN in the certificate, or the client will likely balk. But having one DNS name refer to multiple servers is a common and simple means of load-balancing.)
In general, yes: if the certificate file (together with its private key) gets stolen, nothing will stop someone from installing it on their server and suddenly assuming the stolen site's identity. However, unless the thief also takes over control of the original site's DNS setup, any requests for the site's URL will still go to the original server, and the thief's server will sit idle.
It's the equivalent of building an exact duplicate of the Statue of Liberty in Antarctica with the expectation of stealing away New York's tourist revenue. Unless you start hacking every single tourist guide book and history textbook to replace "New York" with "Antarctica", everyone will still go to New York to see the real statue, and the thief will just have a very big, green, completely idle icicle.
However, when you get a cert from a CA, the private key is password-protected and cannot simply be installed on a webserver. Some places will remove the password so the webserver can restart itself without intervention. But a secure site will keep the password in place, which means that any server restart will take the site down until someone gets to the admin console and enters the password to decrypt the key.
Question N°1
can't B just take this certificate [...] which will allow them to authenticate as A to C
This part of a larger diagram deals with that question.
Mainly: if you only have the public key, then you cannot establish an SSL connection with any client, because you need to exchange a secret key with them, and that secret key has to be encrypted using your public key (which is why the client asks for the certificate in the first place). The client is supposed to encrypt the shared secret key with your public key, and you are supposed to decrypt it using your private key. Since you don't have the private key, you can't decrypt the secret exchange key, hence you can't establish any SSL communication with any client.
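A hedged C++/OpenSSL sketch of that exchange (classic RSA key transport; modern TLS prefers ephemeral Diffie-Hellman, but the point that only the private-key holder can recover the secret is the same). server_public_key is assumed to have been taken from the server's certificate:

```cpp
// Sketch: the client encrypts a secret with the server's public key; only
// the holder of the matching private key can recover it.
#include <openssl/evp.h>
#include <vector>

std::vector<unsigned char> wrap_secret(EVP_PKEY *server_public_key,
                                       const unsigned char *secret, size_t secret_len)
{
    EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new(server_public_key, nullptr);
    EVP_PKEY_encrypt_init(ctx);

    size_t out_len = 0;
    EVP_PKEY_encrypt(ctx, nullptr, &out_len, secret, secret_len);   // query size
    std::vector<unsigned char> wrapped(out_len);
    EVP_PKEY_encrypt(ctx, wrapped.data(), &out_len, secret, secret_len);
    wrapped.resize(out_len);

    EVP_PKEY_CTX_free(ctx);
    return wrapped;   // the server calls EVP_PKEY_decrypt with its private key
}
```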
Question N°2
Why use hashing on the certificate if a part of it is already
encrypted by the CA?
This is also answered in the original diagram by the question "what's a signature?". Basically, we hash the whole certificate to be sure that it hasn't been tampered with (data integrity), that nobody has changed anything in it, and that what you see is really what was delivered by the CA. The diagram shows how hashing makes that possible.
Question N°3
If I am stackoverflow and I have 3 servers [...] do I have to have a
different digital certificate for each of the 3 servers.
This is not necessarily always the case. Consider the situation where all three servers are on the same domain: then you only need one certificate. If each of them is on its own subdomain, you can have one single wildcard certificate installed on all of them.
Conversely, if you have one server that hosts multiple domains, you would use one single multi-domain SSL certificate.
I also have some answers.
Q1) If B steals A's certificate and tries to impersonate A to C:
C will check that the name in the certificate matches the server it intended to connect to, and B will be unable to prove possession of A's private key, so C will abort the SSL connection. Of course, even if C did send an encrypted message, only the real A would be able to decrypt it.
Q2) A certificate is usually represented in plain text using the common X.509 format; all entries are readable by anyone. The hashing process is used to digitally sign the certificate. Digitally signing a certificate lets the end user validate that it has not been altered by anyone after it was created. Hashing the content and then encrypting the hash with the issuer's private key is what creates the digital signature.