Key Management Interoperability Protocol (KMIP)

When we say "KMIP is an interoperable protocol", what does this really mean? Please explain in this context only; I already know the general meaning of "interoperable".

As the use of cryptographic techniques has increased, key management has been complicated by inconsistencies and duplication across the systems that support different devices. Each device has its own key management system, and different customers use different Key Lifecycle Management Systems (KLMS), each of which communicates over its own protocol. KMIP establishes a single, comprehensive protocol for communication between enterprise key management servers and cryptographic clients. Because it addresses the need for one standard key management protocol that any compliant client and server can speak, it is called "interoperable". This also helps reduce operational and infrastructure costs.
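To make this concrete, here is a minimal sketch of what "one protocol, many servers" means in practice, using the PyKMIP client library in Python; the hostname, port, and certificate paths are placeholder assumptions. The point is that this same client code should work against any KMIP-compliant server, regardless of vendor.

    # Sketch only: a KMIP client asking a server to create and return an AES key.
    # Any KMIP-compliant server from any vendor can sit on the other end.
    from kmip.pie import client
    from kmip import enums

    with client.ProxyKmipClient(
            hostname='kms.example.com',   # placeholder server address
            port=5696,                    # the standard KMIP port
            cert='client.crt',            # placeholder TLS credentials
            key='client.key',
            ca='ca.crt') as c:
        # Ask the server to create a 256-bit AES key; the server returns its ID.
        key_id = c.create(enums.CryptographicAlgorithm.AES, 256)
        # Any authorized KMIP client can later fetch the same key by ID.
        key = c.get(key_id)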

Related

Is the example proposed by Microsoft for cryptography secure enough, or should I learn more?

This is the article published by Microsoft for encrypting/decrypting data using RSA:
https://learn.microsoft.com/en-us/dotnet/standard/security/walkthrough-creating-a-cryptographic-application
As someone relatively new to the cryptography world, having read a comment on Stack Overflow saying that cryptography should use a hybrid model, I researched this and found that hybrid models use AES and RSA together for encryption. I was wondering whether the example provided by Microsoft fits the hybrid model, since it uses both, and whether it is constructed well enough and not just aimed at novice devs venturing into the world of cryptography.
I already have a working example where an app would encode and another would decode by loading the private key file, similar to the example.
I found an article here:
https://www.codeproject.com/Tips/834977/Using-RSA-and-AES-for-File-Encryption
The author creates signatures and manifests, and I'm wondering whether that is what I actually need, or whether Microsoft's example is generally enough on its own, or weak?
PS: I removed the key container code and persistence, as I don't want to persist or store my keys on the local machine; instead they are exported as standalone files, perhaps to be stored in a DB, so I'm not looking for opinions on that part at the moment.
and not just for novice devs just venturing into the world of cryptography
Well, at least it tries to define some kind of protocol, although a very sparse one. It also uses CBC mode (implicitly, which is never a good idea) and RSA with PKCS#1 v1.5 padding for encryption. Most people would opt for OAEP if RSA is used, and use an authenticated cipher such as AES-GCM.
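For contrast, here is a minimal sketch of the hybrid construction most people would recommend instead (RSA-OAEP wrapping a fresh per-message AES-GCM key). It is written in Python with the pyca/cryptography package rather than .NET, purely for illustration, and the key sizes and message are assumptions:

    # Hybrid encryption sketch: RSA-OAEP wraps a random AES-256-GCM session key.
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    public_key = private_key.public_key()

    # Encrypt: a fresh AES key per message, authenticated with GCM.
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)                       # 96-bit GCM nonce
    ciphertext = AESGCM(session_key).encrypt(nonce, b"secret data", None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Decrypt: unwrap the session key, then authenticate and decrypt the data.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)

Note that this only addresses confidentiality and integrity of a single message; it does not by itself establish trust in the public key, which is the point made below.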
I already have a working example where an app would encode and another would decode by loading the private key file, similar to the example.
Bad idea: the example is for file encryption, not for transport security, for which you need a secure transport protocol. Both the RSA (PKCS#1 v1.5) and CBC constructions are malleable, and both are susceptible to padding oracle attacks as well.
I don't want to persist or store my keys on the local machine
You need to establish trust, something that is missing from the example. And to establish trust you do need to persist your keys, especially if they have been randomly generated.
In the end, asking if something is secure depends on context: you need to know what your goals are and then check if the protocol provides enough protection to achieve these goals.
This is also my problem with these generic examples or wrapper classes; they make no sense to me, as the generic security that they seem to provide may not fit your use case; I'd rather design a protocol specific to the use case.

Authentication tips using NTAG 424 DNA TT

I need to implement an authentication procedure between a reader and an NFC tag, but since my knowledge in this area is limited, I would appreciate some help understanding a few concepts.
Pardon in advance for rewriting the Bible, but I could not summarize it any further.
There are many tag families (ICODE, MIFARE, NTAG...), but after doing some research I think NTAG 424 DNA matches my requirements (I mainly need authentication features).
It comes with AES encryption, the CMAC protocol, and a 3-pass authentication system, and this is where I started to need assistance.
AES -> As far as I understand, this is a block cipher that encrypts plaintext via substitutions and permutations. It is a symmetric standard, and the master key is not used directly; instead, session keys derived from the master key are used. (Q01: What I do not know is where these keys are stored in the tag. Keys must be stored on specialized HW, but no tag "specs" mention this, apart from the MIFARE SAM labels.)
CMAC -> It is a variant of CBC-MAC that makes authentication secure for dynamically sized messages. If the data is not confidential, then a MAC can be applied to the plaintext to verify it, but to gain confidentiality as well as authentication, "encrypt-then-MAC" must be pursued (a sketch of this pattern follows the list). Here session keys are also used, but not the same keys as in the encryption step. (Q02: My overall view is that CMAC is a building block for implementing verification alongside confidentiality; this is my opinion and could be wrong.)
3-pass protocol -> The ISO/IEC 9798-2 scheme in which tag and reader are mutually verified. It may also use a MAC along with session keys to achieve this task. (Q03: I think this is the upper layer of the whole system for verifying tags and readers. The 3-pass protocol relies on a MAC to be functional and, if confidentiality features are also needed, then CMAC might be used instead of a plain MAC. CMAC needs AES to be functional, applying session keys at each step. Please correct me if I am making egregious mistakes.)
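Here is the encrypt-then-MAC sketch referred to above, in Python with pycryptodome (the same Crypto package mentioned in the answers). The freshly generated keys and the block-aligned message are assumptions for brevity; this is not the NTAG's actual session-key derivation:

    # Illustrative encrypt-then-MAC: AES-CBC for confidentiality,
    # then AES-CMAC over IV || ciphertext for authentication.
    from Crypto.Cipher import AES
    from Crypto.Hash import CMAC
    import os

    enc_key = os.urandom(16)   # assumed session key for encryption
    mac_key = os.urandom(16)   # separate assumed session key for the MAC

    iv = os.urandom(16)
    message = b"sixteen byte msg"        # block-aligned for brevity (no padding)
    ciphertext = AES.new(enc_key, AES.MODE_CBC, iv=iv).encrypt(message)

    tag = CMAC.new(mac_key, ciphermod=AES)   # CMAC built on top of AES
    tag.update(iv + ciphertext)              # MAC the ciphertext, not the plaintext
    mac = tag.digest()
    # Transmit iv || ciphertext || mac; the verifier recomputes the CMAC first.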
/*********/
P.S.: I am aware that this is a coding-related forum, but surely I can find someone here with more knowledge about cryptography than me to answer these questions.
P.P.S.: I do not know at all where the master and session keys are kept on the tag side. Do they need to be included in separate HW alongside the main NFC circuit?
(Target)
This is to implement a mutual verification process between tag and reader, using the NTAG 424 DNA TagTamper label. (The target is to prevent third-party copies; authentication is the predominant concern rather than message confidentiality.)
My problem is a lack of cryptography knowledge while trying to understand how AES, CMAC, and the mutual authentication are used on this NTAG.
(Extra Info)
NTAG 424 DNA TT: https://www.nxp.com/products/identification-security/rfid/nfc-hf/ntag/ntag-for-tags-labels/ntag-424-dna-424-dna-tagtamper-advanced-security-and-privacy-for-trusted-iot-applications:NTAG424DNA
ISO 9798-2: http://bcc.portal.gov.bd/sites/default/files/files/bcc.portal.gov.bd/page/adeaf3e5_cc55_4222_8767_f26bcaec3f70/ISO_IEC_9798-2.pdf
3-pass-authentication: https://prezi.com/p/rk6rhd03jjo5/3-pass-mutual-authentication/
Key storage HW: https://www.microchip.com/design-centers/security-ics/cryptoauthentication
The NTAG 424 chips are not particularly easy to use, but they offer some nice features which can be used for different security applications. One important thing to note, however, is that although the chip relies heavily on encryption, the cryptography is not the main implementation challenge, because all of the AES encryption, CMAC computation, and so on is already available as a package or library in most programming languages. Some examples are even given by NXP in their application note; in Python, for instance, you can use the AES package via from Crypto.Cipher import AES, as shown in one of the application note's examples.
My advice is to simply retrace their personalization example, beginning at the initial authentication, and then work your way up to whatever you are trying to achieve. It is also possible to use these examples to test the encryption and the building of APDU commands. Most of the work is not hard, but sometimes the NXP documents can be a bit confusing.
One small note: if you are working with Python, there is some code available on GitHub which you might be able to reuse.
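To give a flavor of what retracing the application note looks like, here is a hedged sketch of the reader's side of the first authentication exchange, following the pattern described in NXP's application note; the all-zero factory-default key and the helper function are assumptions for illustration, not production code:

    # Sketch of the reader's reply in the NTAG 424 DNA first authentication:
    # decrypt the tag's RndB, rotate it, and send back E(K, RndA || RndB').
    from Crypto.Cipher import AES
    import os

    KEY = bytes(16)  # assumed factory-default application key (all zeros)

    def build_auth_reply(e_rndb: bytes):
        """Given E(K, RndB) from the tag, return (RndA, E(K, RndA || RndB'))."""
        rndb = AES.new(KEY, AES.MODE_CBC, iv=bytes(16)).decrypt(e_rndb)
        rndb_rot = rndb[1:] + rndb[:1]       # rotate RndB left by one byte
        rnda = os.urandom(16)                # the reader's random challenge
        reply = AES.new(KEY, AES.MODE_CBC, iv=bytes(16)).encrypt(rnda + rndb_rot)
        return rnda, reply

If the tag then returns a value containing the rotated RndA, both sides have proven knowledge of the key, which is the essence of the 3-pass mutual authentication above.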
For iOS, I'm working on a library for DNA communication, NfcDnaKit:
https://github.com/johnnyb/nfc-dna-kit

Understanding Forward Secrecy

Recently, I was pointed to a post from 2011 by a friend, which described Google's move towards forward secrecy. From what I understand, the essence of forward secrecy seems to lie in the fact that the private keys are not kept in persistent storage.
I have various doubts about how something like this could be implemented.
What if the server goes down without warning - do the key pairs have to be regenerated? Does the public key have to be signed again to create another certificate?
Could someone point me to posts/PDFs where the implementation of something like this is described? Any suggested reading resources?
Are you aware of anyone else that has implemented forward secrecy? Have you tried something similar at your workplace?
Thanks!
In forward secrecy, there are still long-term keys. The only implication is that the compromise of a long-term key will not allow an attacker to compromise temporary session keys once the long-term key has been changed. This means that a long-term key must not be derived from another (older) key.
Here is a good survey on this topic.
According to Wikipedia:
PFS is an optional feature in IPsec (RFC 2412).
SSH.
Off-the-Record Messaging, a cryptography protocol and library for many instant messaging clients, provides perfect forward secrecy as well as deniable encryption.
In theory, Transport Layer Security can choose appropriate ciphers since SSLv3, but in everyday practice many implementations refuse to offer PFS or only provide it with very low encryption grade.
In TLS and many other protocols, forward secrecy is provided through the Diffie-Hellman (DH) algorithm. Vanilla DH is rather simple and provides perfect forward secrecy if the exponents are randomly generated each time, but it provides no authentication. Therefore, in TLS it is used in combination with a signature algorithm, usually RSA.
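As a minimal sketch of the mechanism (ephemeral DH on its own, not TLS), here is a key agreement over X25519 using the pyca/cryptography package in Python; both sides generate fresh key pairs per session and discard them afterwards, which is exactly what makes old session keys unrecoverable even if a long-term signing key leaks later:

    # Ephemeral Diffie-Hellman over X25519: fresh keys per session.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Each side generates a throwaway key pair for this session only.
    client_eph = X25519PrivateKey.generate()
    server_eph = X25519PrivateKey.generate()

    # Both sides derive the same shared secret from the exchanged public keys.
    shared = client_eph.exchange(server_eph.public_key())
    assert shared == server_eph.exchange(client_eph.public_key())

    # Derive the session key; the ephemeral private keys are then discarded.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"example handshake").derive(shared)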
TLS provides many ciphersuites that support PFS and many that do not. Most TLS clients support PFS but many servers do not, because it is thought that PFS takes too much CPU.

Object Oriented module/definition for networking devices/topology?

Is there any module/definition available for a class/schema representing the topology, connections, access details, etc. of networking devices? The intent is to use this for automation, and to manage routers/servers as objects rather than as Tcl keyed lists/arrays, which get unwieldy.
Look at SNMP (Simple Network Management Protocol). Most network devices and services, from IIS to Cisco routers, provide some sort of SNMP interface that may provide the capabilities for which you are searching. Specific implementations and capabilities vary between vendors and devices, but the protocol is standardized and very widely implemented.
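As a hedged example of what that interface looks like from code, here is a minimal SNMP query in Python with the pysnmp library; the community string and device address are placeholder assumptions:

    # Sketch: fetch a device's sysName over SNMPv2c using pysnmp.
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),       # v2c; placeholder community
        UdpTransportTarget(('192.0.2.1', 161)),   # placeholder device address
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysName', 0)),
    ))

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(name.prettyPrint(), '=', value.prettyPrint())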
The word "topology" in the context of a communication network refers to the way in which devices are connected over a network. Its important types are:
BUS
RING
STAR
etc.
Look into MIB-2 (SNMP-based). Note that there exist dozens of different MIBs representing various networking technologies and solutions. You can even devise your own private MIB to suit your needs.
You should refer to the relevant IETF drafts explaining the nomenclature used in MIBs (when I find the reference, I'll post it).
I could also suggest you search on keywords such as "OSS", "Network Management", and "NMS".

Has anybody compared WCF and ZeroC ICE?

ZeroC's ICE (www.zeroc.com) looks interesting, and I would like to evaluate it and compare it to our existing software that uses WCF. In particular, our WCF app uses server callbacks (via HTTP).
Anybody who's compared them? How did it go? I'm particularly interested in the performance aspect, since interoperability isn't much of a concern for us right now. Thanks!
I did a very terse review of ICE a few years ago, and although I haven't compared them directly before, having reasonable knowledge of WCF, my thoughts might have some relevance.
Firstly, it's not entirely fair to compare WCF with ICE, as ICE is a specific remote communication mechanism while WCF is a higher-level remote communications framework.
While WCF is often thought of as implementing SOAP web services, and that is indeed its main use to date, it can also be used for implementing remote services using all manner of encodings and transport channels, which means it can theoretically be used for performant comms between applications.
In comparison, ICE is a cross-platform remote communication mechanism that uses binary encoding for performant communications between applications. It's something of a simplified evolution of CORBA and is more directly comparable to CORBA, DCOM, .NET Remoting, and JNI.
However, even though there's no direct correspondence between ICE and WCF, if you need your .NET app to communicate remotely then they're both contenders. Some of the decision points you might want to consider include:
Resourcing. It'll be easier to find developers with WCF experience than ICE experience.
Performance. If you want performance then ICE performs fast, but WCF can also be used in a performant configuration. Alternatively, .NET Remoting can provide very good performance, and whatever the MS-sponsored benchmarks say I've seen it outperform WCF by 10%.
Cross-platform. If you need to communicate with non-Windows applications then you're limited in the WCF options you can use. In addition, since every SOAP stack seems to implement the standards differently, it can be a pain creating truly generic Web Services (though WS-I helps).
If you don't need every ounce of performance from day one, then I'd personally plump for WCF to start with, and then consider ICE if performance ever becomes critical. Even then it might be cheaper to scale out your service boxes than to move to ICE, and if you don't have any exotic cross-platform needs you could always look at reconfiguring WCF for binary encoding, etc.
Michi Henning from ZeroC has recently published a white paper on just this topic -- "Choosing Middleware: Why Performance and Scalability do (and do not) Matter". It compares Ice, WCF (binary & SOAP), and RMI with various performance metrics, platforms, languages, etc. There's more information on Michi's blog, but the white paper is also quite readable, with all the standard caveats of any benchmark.
Disclaimer: I've used Ice and RMI extensively, but never WCF.
Apache Thrift is another contender to ICE and WCF. It was developed and open-sourced by Facebook. Apache Thrift is nice in some ways because it's not only extremely efficient on the encoding side, it also supports adding fields to structures without breaking all of the clients (something we found extremely useful in our projects).
Google Protocol Buffers would seem not really a contender, as it doesn't mention .NET support on the home page. However, some community add-ons support C#. In addition, ICE provides emulation for Google Protocol Buffers if you're working with existing services.
Data point: we just converted a callback multi-platform and multi-language project from Ice to Thrift with pretty good results. Ice does a lot for you, so we had to implement disconnection listeners, connection events, etc. ourselves. And in one case we got bitten in the proverbial by a big object lock that Ice had been letting us get away with -- this caused a deadlock in the Thrift server, but it was easily fixed by less lazy coding on the C# side.
I've just finished benchmarking, and in our application anything that pushes large amounts of data is faster than, or on par with, Ice. Shorter messages with more overhead (i.e., a "heartbeat" that updates a status over the protocol) are a bit slower.
The most important bit was that in order to implement the callback service correctly, we had to extend the Thrift interfaces and define our own protocol, along with a Thrift "Processor" and a callback client-server. But I freely admit our application is /very/ special. The existing protocols and servers should be sufficient for most uses. But extending them, even to use multiplexed sockets from .NET, was not terribly difficult.
We are using ICE to integrate modules written in C++, Java, and C#. The nice thing is that our server can access components on remote machines as well, so if we need more performance we can shift processing to different machines.
I've used both WCF and ICE, and I'd say that ICE is cleaner on the implementation side. ICE also has very detailed and readable documentation.
ICE supports some things that WCF cannot do, including load balancing, automated remote client updates, etc.