Parsing an X509 Certificate - ssl

I currently need to parse the CommonName out of a captured packet. I have code that works up to a point, but I am having trouble skipping over the "issuer" member of a Certificate record in a TLSv1.2 handshake. I have researched the format of the SSL records and inspected a dump in Wireshark; the format is generally a length followed by the data. However, when I try to locate the issuer length I cannot seem to find it, and what I read is inconsistent with the bytes presented. Any ideas, or a better way to skip over the issuer field and go directly to the "subject" of a TLS 1.2 certificate? Coded in C. Thank you for useful responses.

You need to understand ASN.1. Go read this book (it is a free download). Once you have read and understood it, you can write your decoder, following the ASN.1 specification for certificates. This is doable, but requires great implementation care. In fact, this is a bad idea unless you are a demi-god of C programming.
Alternatively, use some library that already knows how to decode a certificate. Typically, OpenSSL.
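With OpenSSL, a minimal sketch might look like the following, assuming you have already isolated the raw DER bytes of one certificate from the handshake's Certificate message (each certificate there carries its own 3-byte length prefix):

#include <stdio.h>
#include <openssl/x509.h>

/* Print the subject CommonName of a DER-encoded certificate. */
int print_common_name(const unsigned char *der, size_t der_len)
{
    const unsigned char *p = der;
    X509 *cert = d2i_X509(NULL, &p, (long)der_len);
    if (cert == NULL)
        return -1;                                  /* not valid DER */

    char cn[256];
    X509_NAME *subject = X509_get_subject_name(cert);
    int n = X509_NAME_get_text_by_NID(subject, NID_commonName, cn, sizeof cn);
    if (n > 0)
        printf("CommonName: %s\n", cn);

    X509_free(cert);
    return n > 0 ? 0 : -1;
}

X509_NAME_get_text_by_NID only returns the first CN and flattens its encoding, but for pulling a CommonName out of a capture that is usually enough.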

Related

AWS S3 SSE GetObject requires secret key

The idea was to generate a random key for every file being uploaded, pass this key to S3 in order to encrypt it and store the key in the database. Once the user wants to access the file, the key is read from the database and passed to S3 once again.
The first part works. My objects are uploaded and encrypted successfully, but I have issues with retrieving them.
Retrieving files with request headers set:
Setting request headers such as x-amz-server-side-encryption-customer-algorithm etc. when performing the GET request to the resource works, and I am able to access the file (sketched below). But since I want to use these resources as the src of an <img> tag, I cannot perform GET requests which require headers to be set.
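For reference, this is the kind of header-based request that does work, sketched here with libcurl; the usual AWS request signing (Authorization header) is omitted and the key values are placeholders:

#include <stdio.h>
#include <curl/curl.h>

/* GET an SSE-C object by supplying the customer key in request headers. */
int fetch_ssec_object(const char *url,
                      const char *base64_key, const char *base64_key_md5)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return -1;

    char key_hdr[512], md5_hdr[256];
    snprintf(key_hdr, sizeof key_hdr,
             "x-amz-server-side-encryption-customer-key: %s", base64_key);
    snprintf(md5_hdr, sizeof md5_hdr,
             "x-amz-server-side-encryption-customer-key-MD5: %s", base64_key_md5);

    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers,
              "x-amz-server-side-encryption-customer-algorithm: AES256");
    headers = curl_slist_append(headers, key_hdr);
    headers = curl_slist_append(headers, md5_hdr);

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    CURLcode rc = curl_easy_perform(curl);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : -1;
}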
Thus, I thought about:
Pre-signing URLs:
To create a pre-signed URL, I built the HMAC-SHA1 of the required string and used it as the signature. The calculated signature is accepted by S3, but I get the following error when requesting the pre-signed URL:
Requests specifying Server Side Encryption with Customer provided keys must provide an appropriate secret key.
The URL has the form:
https://s3-eu-west-1.amazonaws.com/bucket-id/resource-id?x-amz-server-side-encryption-customer-algorithm=AES256&AWSAccessKeyId=MyAccessKey&Expires=1429939889&Signature=GeneratedSignature
The reason why the error is shown seems pretty clear to me: at no point in the signing process was the encryption key used, so the request cannot work. As a result, I added the encryption key (as Base64) and its MD5 representation as parameters to the URL. The URL now has the following format:
https://s3-eu-west-1.amazonaws.com/bucket-id/resource-id?x-amz-server-side-encryption-customer-algorithm=AES256&AWSAccessKeyId=MyAccessKey&Expires=1429939889&Signature=GeneratedSignature&x-amz-server-side-encryption-customer-key=Base64_Key&x-amz-server-side-encryption-customer-key-MD5=Md5_Key
Although the key is now present (as far as I can tell), I still get the same error message.
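For reference, the query-string signature mentioned above is the signature-version-2 scheme, computed roughly as Base64(HMAC-SHA1(SecretAccessKey, StringToSign)). A C sketch using OpenSSL follows; the string-to-sign is simplified here and any x-amz-* values that have to be signed are omitted:

#include <stdio.h>
#include <string.h>
#include <openssl/hmac.h>
#include <openssl/evp.h>

/* Signature version 2 for a pre-signed GET:
 *   StringToSign = "GET\n\n\n<expires>\n<canonical-resource>"
 *   Signature    = Base64(HMAC-SHA1(SecretAccessKey, StringToSign)) */
int sign_v2_query(const char *secret_key, const char *expires,
                  const char *canonical_resource,  /* e.g. "/bucket-id/resource-id" */
                  char *sig_b64, size_t sig_b64_len)
{
    char string_to_sign[1024];
    snprintf(string_to_sign, sizeof string_to_sign,
             "GET\n\n\n%s\n%s", expires, canonical_resource);

    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;
    HMAC(EVP_sha1(), secret_key, (int)strlen(secret_key),
         (const unsigned char *)string_to_sign, strlen(string_to_sign),
         mac, &mac_len);

    if (sig_b64_len < 4 * ((mac_len + 2) / 3) + 1)
        return -1;
    /* The Base64 result still needs URL-encoding before being appended
     * as the Signature query parameter. */
    EVP_EncodeBlock((unsigned char *)sig_b64, mac, (int)mac_len);
    return 0;
}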
Question
Does anyone know, how I can access my encrypted files with a GET request which does not provide any headers such as x-amz-server-side-encryption-customer-algorithm?
It seems intuitive enough to me that what you are trying should have worked.
Apparently, though, when they say "headers"...
you must provide all the encryption headers in your client application.
— http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html#sse-c-how-to-programmatically-intro
... they do indeed actually mean headers: S3 doesn't accept these particular values when delivered as part of the query string, even though you might expect it to, since S3 is sometimes flexible in that regard.
I've tested this, and that's the conclusion I've come to: doing this isn't supported.
A GET request with x-amz-server-side-encryption-customer-algorithm=AES256 included in the query string (and signature), along with the X-Amz-Server-Side-Encryption-Customer-Key and X-Amz-Server-Side-Encryption-Customer-Key-MD5 headers, does work as expected... as I believe you've discovered... but putting the key and key-md5 in the query string, with or without including them in the signature, seems like a dead end.
It seemed somewhat strange, at first, that they wouldn't allow this in the query string, since so many other things are allowed there... but then again, if you're going to the trouble of encrypting something, there seems little point in revealing the encryption key in a link... not to mention that the key would then be captured in the S3 access logs, leaving the encryption seeming fairly well pointless all around -- and perhaps that was their motivation for requiring it to actually be sent in the headers and not the query string.
Based on what I've found in testing, though, I don't see a way to use encrypted objects with customer-provided keys in hyperlinks, directly.
Indirectly, of course, a reverse proxy in front of the S3 bucket could do the translation for you, taking the appropriate values from the query string and placing them into the headers, instead... but it's really not clear to me what's to be gained by using customer-provided encryption keys for downloadable objects, compared to letting S3 handle the at-rest encryption with AWS-managed keys. At-rest encryption is all you're getting either way.

WinVerifyTrust gets stuck when the system time is not accurate

I am developing a C# .net 3.5 application.
I am trying to verify a file signature by using WinVerifyTrust.
I also want a revocation check, so I set the following parameters of the WinTrustData:
FdwRevocationChecks = WTD_REVOKE_WHOLECHAIN;
DwProvFlags = WTD_REVOCATION_CHECK_CHAIN;
Everything works OK, except when I move the system time forward: the method gets stuck and WinVerifyTrust returns an answer only after a very long while.
Do you have any idea why this is happening and how I can prevent it?
Thanks
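For reference, the managed fields being set map onto the native Win32 WINTRUST_DATA structure; a minimal C sketch of the same verification setup looks roughly like this:

#include <windows.h>
#include <wintrust.h>
#include <softpub.h>
#include <string.h>
#pragma comment(lib, "wintrust")

/* Verify a file's Authenticode signature with whole-chain revocation
 * checking, matching the flags described in the question. */
LONG verify_file_signature(const wchar_t *path)
{
    WINTRUST_FILE_INFO file_info;
    memset(&file_info, 0, sizeof file_info);
    file_info.cbStruct      = sizeof file_info;
    file_info.pcwszFilePath = path;

    WINTRUST_DATA wtd;
    memset(&wtd, 0, sizeof wtd);
    wtd.cbStruct            = sizeof wtd;
    wtd.dwUIChoice          = WTD_UI_NONE;
    wtd.fdwRevocationChecks = WTD_REVOKE_WHOLECHAIN;      /* FdwRevocationChecks */
    wtd.dwProvFlags         = WTD_REVOCATION_CHECK_CHAIN; /* DwProvFlags */
    wtd.dwUnionChoice       = WTD_CHOICE_FILE;
    wtd.pFile               = &file_info;
    wtd.dwStateAction       = WTD_STATEACTION_VERIFY;

    GUID policy = WINTRUST_ACTION_GENERIC_VERIFY_V2;
    LONG status = WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &policy, &wtd);

    wtd.dwStateAction = WTD_STATEACTION_CLOSE;            /* release provider state */
    WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &policy, &wtd);
    return status;                                        /* 0 means the signature verified */
}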
This might be happening because revocation information, whether a CRL or an OCSP response, has a thisUpdate field which tells when the revocation information becomes valid. The OS might have downloaded the revocation information and has to block until the time it becomes valid for use.

Is the specification for the DistinguishedName certificate_authorities in RFC2246 (TLS) documented anywhere?

RFC2246 (the TLS specification) lays out all the data structures for SSL/TLS handshakes (as well as a hell of a lot more that I'm not concerned with right now). For the certificate_request handshake type, one of the items that gets sent is a list of DistinguishedName objects (certificate_authorities), which is defined in the spec as an opaque (essentially, seems to be a blob). The spec then says that the DistinguishedName is derived from [X509] - but it doesn't say how close it is, or how it's different.
I'm trying to parse raw certificate_request handshake messages, and running into trouble decoding the DistinguishedName. A sample hex dump of the certificate_authorities is below - but where is the freaking exact spec for the format of a DistinguishedName?
Sample:
007f307d310b300906035504061302494c31163014060355040a130d5374617274436f6d204c74642e312b3029060355040b1322536563757265204469676974616c204365727469666963617465205369676e696e6731293027060355040313205374617274436f6d2043657274696669636174696f6e20417574686f72697479008f30818c310b300906035504061302494c31163014060355040a130d5374617274436f6d204c74642e312b3029060355040b1322536563757265204469676974616c204365727469666963617465205369676e696e67313830360603550403132f5374617274436f6d20436c6173732031205072696d61727920496e7465726d65646961746520436c69656e74204341008f30818c310b300906035504061302494c31163014060355040a130d5374617274436f6d204c74642e312b3029060355040b1322536563757265204469676974616c204365727469666963617465205369676e696e67313830360603550403132f5374617274436f6d20436c6173732032205072696d61727920496e7465726d65646961746520436c69656e74204341008f30818c310b300906035504061302494c31163014060355040a130d5374617274436f6d204c74642e312b3029060355040b1322536563757265204469676974616c204365727469666963617465205369676e696e67313830360603550403132f5374617274436f6d20436c6173732033205072696d61727920496e7465726d65646961746520436c69656e74204341
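For what it's worth, the sample decodes cleanly if each list entry is treated as a 2-byte length followed by a DER-encoded X.501 Name, i.e. the same structure as a certificate's issuer/subject; OpenSSL can decode those directly:

#include <stdio.h>
#include <openssl/x509.h>

/* Walk a certificate_authorities blob: a sequence of entries, each a
 * 2-byte length followed by a DER-encoded X.501 Name. */
void print_certificate_authorities(const unsigned char *buf, size_t len)
{
    size_t off = 0;
    while (off + 2 <= len) {
        size_t dn_len = ((size_t)buf[off] << 8) | buf[off + 1];
        off += 2;
        if (off + dn_len > len)
            break;                               /* truncated/malformed list */

        const unsigned char *p = buf + off;
        X509_NAME *name = d2i_X509_NAME(NULL, &p, (long)dn_len);
        if (name != NULL) {
            char line[512];
            X509_NAME_oneline(name, line, sizeof line);
            printf("%s\n", line);
            X509_NAME_free(name);
        }
        off += dn_len;
    }
}

Run over the bytes above, this should print the StartCom names that are visible in the hex.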

PKCS#7 SignedData and multiple digest algorithms

I'm investigating upgrading an application from SHA1 as the default PKCS#7 SignedData digest algorithm to stronger digests such as SHA256, in ways that preserve backwards compatibility for signature verifiers which do not support digest algorithms other than SHA1. I want to check my understanding of the PKCS#7 format and available options.
What I think I want to do is digest the message content with both SHA1 and SHA256 (or, more generally, a set of digest algorithms) such that older applications can continue to verify via the SHA1, and upgraded applications can begin verifying via the SHA256 (more generally, the strongest digest provided), ignoring the weaker algorithm(s). [If there is a better approach, please let me know.]
It appears that within the PKCS#7 standard, the only way to provide multiple digests is to provide multiple SignerInfos, one for each digest algorithm. Unfortunately, this would seem to lead to a net decrease in security, as an attacker is able to strip all but the SignerInfo with the weakest digest algorithm, which alone will still form a valid signature. Is this understanding correct?
If so, my idea was to use custom attributes within the authenticatedAttributes field of SignerInfo to provide additional message-digests for the additional digest algorithms (leaving SHA1 as the "default" algorithm for backwards compatibility). Since this field is authenticated as a single block, this would prevent the above attack. Does this seem like a viable approach? Is there a way to accomplish this or something similar without going outside of the PKCS standard?
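For what it's worth, a sketch of that custom-attribute idea with OpenSSL's PKCS#7 API might look like this; the OID 1.2.3.4.5 is only a placeholder for whatever private arc would actually be registered:

#include <openssl/pkcs7.h>
#include <openssl/objects.h>
#include <openssl/sha.h>
#include <openssl/asn1.h>

/* Attach a SHA-256 digest of the content as a custom authenticated
 * (signed) attribute, next to the standard SHA-1 messageDigest. */
int add_sha256_attribute(PKCS7_SIGNER_INFO *si,
                         const unsigned char *content, size_t content_len)
{
    int nid = OBJ_create("1.2.3.4.5", "sha256Digest", "SHA-256 message digest");
    if (nid == NID_undef)
        return -1;

    unsigned char md[SHA256_DIGEST_LENGTH];
    SHA256(content, content_len, md);

    ASN1_OCTET_STRING *os = ASN1_OCTET_STRING_new();
    if (os == NULL)
        return -1;
    ASN1_OCTET_STRING_set(os, md, sizeof md);

    /* On success the attribute (and 'os') belongs to the signer info. */
    if (!PKCS7_add_signed_attribute(si, nid, V_ASN1_OCTET_STRING, os)) {
        ASN1_OCTET_STRING_free(os);
        return -1;
    }
    return 0;
}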
Yes, you are right, in the current CMS RFC it says about the message digest attribute that
The SignedAttributes in a signerInfo MUST include only one instance of the message-digest attribute. Similarly, the AuthAttributes in an AuthenticatedData MUST include only one instance of the message-digest attribute.
So it is true that the only way to provide multiple message digest values using the standard signed attributes is to provide several SignerInfos.
And yes, any security system is as strong as its weakest link, so theoretically you will not gain anything by adding a SignerInfo with SHA-256 if you also still accept SHA-1 - as you said, the stronger signatures can always be stripped.
Your scheme with custom attributes is a bit harder to break - but there is still a SHA-1 hash floating around that can be attacked. It's no longer as easy as just stripping the attribute - as it's covered by the signature. But:
There is also the digest algorithm that is used to digest the signed attributes which serves as the basis of the final signature value. What do you intend to use there? SHA-256 or SHA-1? If it's SHA-1, then you will be in the same situation as before:
If I can produce collisions for SHA-1, then I would strip off your custom SHA-256 attribute and forge the SHA-1 attribute in such a way that the final SHA-1 digest for the signature adds up again. This shows that there will only be a gain in security if the signature digest algorithm is SHA-256 too, but I'm guessing this is not an option since you want to stay backwards-compatible.
What I would suggest in your situation is to keep using SHA-1 throughout but apply an RFC 3161-compliant timestamp to your signature as an unsigned attribute. Those timestamps are in fact signatures of their own. The good thing is you can use SHA-256 for the message imprint there and often the timestamp server applies its signature using the same digest algorithm you provided. Then reject any signature that either does not contain such a timestamp or contains only timestamps with message imprint/signature digest algorithms weaker than SHA-256.
What's the benefit of this solution? Your legacy applications should check for the presence of an unsigned timestamp attribute and that a strong digest was used for it, but otherwise ignore it and keep on verifying the signatures the same way they did before. New applications, on the other hand, will verify the signature but additionally verify the timestamp, too. As the timestamp signature "covers" the signature value, there's no longer a way for an attacker to forge the signature. Although the signature uses SHA-1 for the digest values, an attacker would also have to be able to break the stronger digest of the timestamp.
An additional benefit of a timestamp is that you can associate a date of production with your signature - you can safely claim that the signature has been produced before the time of the timestamp. So even if a signature certificate were to be revoked, with the help of the timestamp you could still precisely decide whether to reject or accept a signature based on the time that the certificate was revoked. If the certificate was revoked after the timestamp, then you can accept the signature (add a safety margin (aka "grace period") - it takes some time until the information gets published), if it was revoked prior to the time of the timestamp then you want to reject the signature.
A last benefit of timestamps is that you can renew them over time if certain algorithms become weak. You could, for example, apply a new timestamp every 5-10 years using up-to-date algorithms and have the new timestamps cover all of the older signatures (including older timestamps). This way weak algorithms are then covered by the newer, stronger timestamp signature. Have a look at CAdES (there is also an RFC, but it's outdated by now), which is based on CMS and makes an attempt at applying these strategies to provide for long-term archiving of CMS signatures.

Is this RSA-based signature (with recovery) scheme cryptographically sound?

I am implementing a simple license-file system, and would like to know if there are any mistakes I'm making with my current line of implementation.
The message data is smaller than the key. I'm using RSA with a key size of 3072 bits.
The issuer of the licenses generates the message to be signed and signs it using a straightforward RSA-based approach, then applies a similar approach to encrypt the message. The encrypted message and the signature are stored together as the license file.
Sha512 the message.
Sign the hash with the private key.
Sign the message with the private key.
Concatenate and transmit.
On receipt, the verification process is:
Decrypt the message with the public key
Hash the message
Decrypt the hash from the file with the public key, and compare with the local hash.
The implementation is working correctly so far, and appears to be valid.
I'm currently zero-padding the message to match the key size, which is probably a bad move (I presume I should be using a PKCS padding scheme, like PKCS#1 v1.5?).
Does this strategy seem valid?
Are there any obvious flaws, or perspectives I'm overlooking?
The major flaw I noticed: you must verify the padding is still there when you decrypt.
(If you know the message length in advance then you might be able to get away with using your own padding scheme, but it would probably still be a good idea to use an existing one as you mentioned).
I am not sure why you're bothering to encrypt the message itself - as you've noted it can be decrypted by anyone with the public key anyway so it is not adding anything other than obfuscation. You might as well just send the message and the encrypted-padded-hash.
I would recommend using a high-level library that provides a "sign message" function, like cryptlib or KeyCzar (if you can). These benefit from a lot more eyeballs than your code is likely to see, and they take care of all the niggly padding issues and similar.
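If OpenSSL happens to be an option, a sketch of such a "sign message" / "verify message" pair using its EVP interface (SHA-512 with standard PKCS#1 v1.5 padding, the message itself transmitted in the clear next to the signature) would be roughly:

#include <openssl/evp.h>

/* Sign 'msg' with an RSA private key held in 'pkey'. On entry *sig_len
 * is the capacity of 'sig' (384 bytes for a 3072-bit key); on return it
 * holds the actual signature length. Error handling trimmed. */
int sign_message(EVP_PKEY *pkey, const unsigned char *msg, size_t msg_len,
                 unsigned char *sig, size_t *sig_len)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = ctx != NULL
          && EVP_DigestSignInit(ctx, NULL, EVP_sha512(), NULL, pkey) == 1
          && EVP_DigestSign(ctx, sig, sig_len, msg, msg_len) == 1;
    EVP_MD_CTX_free(ctx);
    return ok ? 0 : -1;
}

/* Verify the signature with the matching public key. */
int verify_message(EVP_PKEY *pkey, const unsigned char *msg, size_t msg_len,
                   const unsigned char *sig, size_t sig_len)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = ctx != NULL
          && EVP_DigestVerifyInit(ctx, NULL, EVP_sha512(), NULL, pkey) == 1
          && EVP_DigestVerify(ctx, sig, sig_len, msg, msg_len) == 1;
    EVP_MD_CTX_free(ctx);
    return ok ? 0 : -1;
}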