Differences between DomainKeys and DKIM?

Please explain the differences between DomainKeys and DKIM.

DomainKeys Identified Mail (DKIM) is the successor to Yahoo DomainKeys. While very similar in functionality to DomainKeys, DKIM also adopted aspects of Cisco's Identified Internet Mail (IIM) standard, and the result is an enhanced standard that provides more flexibility and security than its predecessor. Some of the differences between DomainKeys and DKIM, several of which are visible in the example signature header shown after the list, include:
Multiple signature algorithms (as opposed to the single algorithm available with DomainKeys)
More options with regard to canonicalization of both the header and the body
The ability to delegate signing to third parties
The ability for DKIM to sign the DKIM-Signature header field itself, to protect it against modification
The ability to use wildcards in some parameters
The ability to support signature timeouts in DNS
http://www.socketlabs.com/articles/show/email-authentication-guide?page=6
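For illustration, several of these differences show up directly as tags in the DKIM-Signature header. In the sketch below (all values are placeholders), a= carries the signature algorithm, c= the header/body canonicalization modes, x= an optional signature expiration time, and d= and s= name the signing domain and the DNS selector under which the public key is published:

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=example.com; s=mail2024; t=1388000000; x=1388604800;
        h=from:to:subject:date;
        bh=...; b=...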

Related

What is an X509 certificate profile?

I see the term "x509 profile" being used in a technical document on PKI that I am reading, but no explanation is given. I googled for what an "x509 profile" means, but the results were not helpful. For example, the Wikipedia entry on X.509 contains phrases like:
IPSec can use the RFC 4945 profile for authenticating peers.
The OpenCable security specification defines its own profile of X.509 for use in the cable industry.
And nowhere is the definition of a profile given; it seems to be assumed that the meaning of a profile is a given!
What exactly is a profile in the X.509 context? From the word I can imagine it means some form of classification of X.509 certificates, but the question is: what makes up this classification? What characteristics of an X.509 certificate are used to form these classifications/profiles? Where can one view all available classifications?
A minimal certificate consists of a Name and a Public Key. The CA which signs this certificate asserts that the entity named owns the private key which matches this Public Key.
In addition to this, a certificate can (and more often than not, must) contain additional information. Examples are Issuer and Version fields, and Key Usage, Enhanced Key Usage and Subject Alternate Name extensions.
A certificate profile is a definition of the additional information. For example, section 5.1.3.2 Key Usage of RFC 4945 says:
A summary of the logic flow for peer cert validation follows:
If no KU extension, continue.
If KU present and doesn't mention digitalSignature or nonRepudiation (both, in addition to other KUs, is also fine), reject cert.
Section 5.1.3.6 goes on to describe the Subject Alternate Name expected for IPSec certificates.
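As a concrete illustration of the Key Usage logic quoted above, here is a minimal sketch in C# using the .NET X509Certificates API (my own illustration, not code from the RFC):

using System.Linq;
using System.Security.Cryptography.X509Certificates;

// Illustrative only: applies the RFC 4945 Key Usage check quoted above.
static bool IsAcceptableForIpsecPeer(X509Certificate2 cert)
{
    var ku = cert.Extensions.OfType<X509KeyUsageExtension>().FirstOrDefault();

    // "If no KU extension, continue."
    if (ku == null)
        return true;

    // "If KU present and doesn't mention digitalSignature or nonRepudiation, reject cert."
    return ku.KeyUsages.HasFlag(X509KeyUsageFlags.DigitalSignature)
        || ku.KeyUsages.HasFlag(X509KeyUsageFlags.NonRepudiation);
}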
Basically, the profile is a definition of how a certificate is expected to be generated for a certain use-case.
You can define your own certificate profiles, but you'd need to have a very good reason to do so. Most use-cases have been covered by existing profiles, so you may end up re-inventing the wheel.
RFC 5280 defines a profile for X.509 certificates and CRLs for use on the Internet. It lists what is expected of a certificate by services operating on the Internet (as opposed to other networks such as X.25). The fields are fixed (section 4.1) and it also defines standard extensions. In addition to those, you can also define your own extensions. However, you'd need CAs that can create those certificates and clients that understand what to do with them.

OAuth 2 + Attribute Based Encryption

Can I use Attribute Based Encryption (such as the CP-ABE scheme) together with OAuth 2.0 to implement authorization, confidentiality and authentication (i.e. with FB, Google, Twitter etc.) in a web application?
Is there any example or framework?
Is there any suggestion on using ABE with OAuth?
Thanks
It doesn't make much sense to use OAuth with CP-ABE. Those technologies operate at different levels. In fact, the OAuth standard doesn't mention encryption at all; it just requires HTTPS without specifying an SSL/TLS version or cipher suite. Also, OAuth is concerned with resource access, thus authorization (though it is often misused for authentication).
From the CP-ABE perspective there is no logic in combining them either. The idea of PKI is to establish a secure and trusted channel, not to do authentication or authorization.
If I have misunderstood the question, please clarify.
Update
Yes, it is possible, but it is still a subject of research, and therefore I wouldn't put too much trust in the method yet. What the paper describes is possible, but I doubt it will be used on a large scale.
I am not an expert but want to comment here.
The key feature of OAuth2 is to facilitate authorization.
The key feature of ABE is to facilitate encryption using attributes, which helps in achieving privacy.
[for confidentiality and authentication, explore something else].
Since authorization and encryption/privacy are different aspects of security, they can be combined in innovative ways [such as this].

Clarification on HMAC authentication with WCF

I have been following a couple of articles regarding RESTful web services with WCF and, more specifically, how to go about authentication in these. The main article I have been referencing is Aaron Skonnard's RESTful Web Services with WCF 3.5. Another one that specifically deals with HMAC authentication is Itai Goldstien's article, which is based on Skonnard's article.
I am confused about the "User Key" that is referenced in both articles. I have a client application that is going to require a user to have both a user name and password.
Does this then mean that the key I use to initialise the System.Security.Cryptography.HMACMD5 class is simply the user's password?
Given the method used to create the MAC in Itai's article (shown below), am I right in thinking that key is the user's password and text is the string we are using to confirm that the details are in fact correct?
public static string EncodeText(byte[] key, string text, Encoding encoding)
{
    HMACMD5 hmacMD5 = new HMACMD5(key);
    byte[] textBytes = encoding.GetBytes(text);
    byte[] encodedTextBytes = hmacMD5.ComputeHash(textBytes);
    string encodedText = Convert.ToBase64String(encodedTextBytes);
    return encodedText;
}
In my example, the text parameter would be a combination of the request URI, a shared secret and a timestamp (which will be available as a request header and used to prevent replay attacks).
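For example (purely illustrative, the variable names and values are my own), the value being signed might be built like this before calling EncodeText:

// Hypothetical composition of the signed value: request URI, shared secret and timestamp.
string requestUri = "https://api.example.com/orders/42";   // placeholder
string sharedSecret = "...";                                // placeholder, provisioned out of band
byte[] key = Encoding.UTF8.GetBytes("...");                 // placeholder key (this is part of what I'm asking about)
string timestamp = DateTime.UtcNow.ToString("o");           // also sent as a request header
string text = requestUri + "\n" + sharedSecret + "\n" + timestamp;
string mac = EncodeText(key, text, Encoding.UTF8);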
Is this form of authentication decent? I've come across another thread here that suggests that the method defined in the articles above is "..a (sic) ugly hack." The author doesn't suggest why, but it is discouraging given that I've spent a few hours reading about this and getting it working. However, it's worth noting that the accepted answer on this question talks about a custom HMAC authorisation scheme so it is possible the ugly hack reference is simply the implementation of it rather than the use of HMAC algorithms themselves.
The diagram below is from the Wikipedia article on Message Authentication Code. I feel like this should be a secure way to go, but I just want to make sure I understand its use correctly and also make sure this isn't simply some dated mechanism that has been surpassed by something much better.
The key can be the user's password, but you absolutely should not do this.
First - the key has an optimal length equal to the size of the output hash, and a user's password will rarely be equal to that.
Second, there will never be enough randomness (entropy to use the technical term) in those bytes to be an adequate key.
Third, although you're preventing replay attacks, you're potentially allowing anyone to sign any kind of request until the user changes their password, assuming they can also get hold of the shared secret (is that broadcast by the server at some point, or is it derived only on the client and server? If broadcast, a man-in-the-middle attack can easily grab and store it - the height of paranoia, yes, but I think you should consider it).
Fourth - stop using HMACMD5 - use HMAC-SHA-256 as a minimum.
This key should at the very least be a series of bytes generated from the user's password, typically using something like PBKDF2; however, you should also include something transitory that is session-based and which, ideally, can't be known by an attacker.
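A minimal sketch of that combination (the salt handling, iteration count and key size are my own illustrative assumptions, not something prescribed by the articles being discussed):

using System;
using System.Security.Cryptography;
using System.Text;

// Derive the HMAC key from the password with PBKDF2, then MAC the request
// text with HMAC-SHA-256 rather than HMAC-MD5.
public static string SignRequest(string password, byte[] salt, string text)
{
    // PBKDF2 (RFC 2898); the iteration count here is an illustrative choice.
    var kdf = new Rfc2898DeriveBytes(password, salt, 10000);
    byte[] key = kdf.GetBytes(32);   // 32 bytes to match the SHA-256 output size

    using (var hmac = new HMACSHA256(key))
    {
        byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(text));
        return Convert.ToBase64String(mac);
    }
}

Ideally the salt is per-user and stored server-side, and something session-specific is mixed into the key or the signed text, as described above.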
That said, a lot of people might tell you that I'm being far too paranoid.
Personally I know I'm not an expert in authentication - it's a very delicate balancing act - so I rely on peer-reviewed and proven technologies. SSL (in this case authentication via client certificates), for example, might have its weaknesses, but most people use it, and if one of my systems gets exploited because of an SSL weakness, it's not going to be my fault. However, if an exploit occurs because of some weakness that I wasn't clever enough to identify? I'd kick myself out of the front door.
Incidentally, for my REST services I now use SCRAM for authentication, using SHA512 and 512 bits of random salt for the stretching operation (many people will say that's excessive, but I won't have to change it for a while!), and then use a secure token (signed with an HMAC and encrypted with AES) derived from the authentication and other server-only-known information to persist an authenticated session. The token is stateless in the same way that Asp.Net forms authentication cookies are.
The password exchange works very well indeed, is secure even without SSL (in protecting the password) and has the added advantage of authenticating both client and server. The session persistence can be tuned based on the site and client - the token carries its own expiry and absolute expiry values within it, and these can be tuned easily. By encrypting client ID information into that token as well, it's possible to prevent duplication onto another machine simply by comparing the decrypted values with the client-supplied values. The only thing to watch out for there is IP address information: yes, it can be spoofed, but primarily you have to consider legitimate users on roaming networks.

REST API authentication with query string encryption

I am building a web application that provides an API as its primary function. I have been looking into methods for authentication but have been struggling to make a decision on what to use.
Since this will be a paid service and the API is the service, I need to make it as easy to use as possible so as not to put people off, but obviously I want it to be secure. I have considered using HTTP basic authentication over SSL but would like to avoid the costs/overheads/hassle of SSL early on if possible, and maybe provide it as an option later.
I like the AWS style API authentication (see here) but the problem is I can't have users sending the query string as plain text along with a signature because the parameters may contain things like phone numbers which I think customers would rather not expose. I have thought about providing a secret key to encrypt the string which is sent along with an api key to identify the user.
What do you think the best option is to also encrypt the query string along with the request while maintaining simplicity?
Use HTTPS. It's simple, supported by almost all client libraries, trusted, secure, and it protects the URL and payload.
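For what it's worth, a minimal sketch of the client side (the URL and credentials are placeholders): with HTTP Basic authentication over HTTPS, the query string and the Authorization header both travel inside the TLS connection rather than in the clear.

using System;
using System.IO;
using System.Net;

// Placeholder endpoint and credentials; the query string (including the phone
// number) is encrypted by TLS along with the rest of the request.
var request = (HttpWebRequest)WebRequest.Create(
    "https://api.example.com/v1/messages?to=%2B15551234567");
request.Credentials = new NetworkCredential("apiKey", "apiSecret");
request.PreAuthenticate = true;   // send the Authorization header with the first request

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    Console.WriteLine(reader.ReadToEnd());
}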

Expiration of a digitally signed PDF with multiple signatures

Context
My overall goal is to make a set of PDFs available in such a way that users can be assured of the provenance of the documents (i.e., they came from the origin that they are expected to come from). I'm thinking about doing this by digitally signing the PDFs on the server. These signatures won't be at risk of expiring, because the server can just reissue newly signed PDFs when the certificate is updated. Using SSL to serve the documents wouldn't be enough, because the files can be passed on to third parties, who don't want/need to access the server.
Problem
The expiration issue arises because some of these PDFs will already have one or more digital signatures (e.g., created for legal purposes). My question is, if the server signs the PDFs, will it also be ensuring the continued validity of the previous signatures, even after they expire, as long as the latest signature is valid?
I'm asking more on the theoretical side, although I plan to implement what I describe using iText, so any pointers on how to use it for my purpose are also welcome.
No, in a PDF all signatures should be validated independently. If you open a PDF with multiple signatures in Adobe Reader, all signatures are validated and you will get a warning message if one of the signature validations fails.
If you want to protect against signature validation issues (for instance a validation failure due to signing certificate expiration), you should look at the PAdES standard (PDF Advanced Electronic Signatures), Part 4 (PAdES-LTV Profile - PAdES Long Term Validation). This part of the standard deals with maintaining proof of validity over time, in order to be able to revalidate the signatures in the future.
I don't know iText very well, but it seems that PAdES-LTV is supported, since I found this code sample: How to apply verification according to PAdES-LTV