Time-locking and hash-locking in Corda - bitcoin

Comparing Bitcoin and Corda, can the concepts of time-locking and hash-locking be implemented in Corda?
Multisig, time-lock and hash-lock contracts are explained here (Building Block #3, #4 and #5): https://bitcoinmagazine.com/articles/understanding-the-lightning-network-part-building-a-bidirectional-payment-channel-1464710791/

You can sign a Corda tx with a half-open time window on the transaction. This is equivalent to a Bitcoin timelock (the tx is only valid after a certain timestamp, as decided by the notary's clock).
You can make a "hash lock" by encumbering a cash state with a state containing the hash, where the verify function ensures the encumbrance can only be removed if the hash pre-image is supplied in a command. However, there are no use cases for hash locks that I am aware of.
Corda does not need an equivalent of Bitcoin payment channels for two reasons:
1) We don't use PoW, so our consensus mechanism (notaries) will typically be fast enough for realistic use cases.
2) It's not a consumer platform, at least not at the moment, so the primary use case for payment channels (micropayments) is irrelevant. Companies do not typically make micropayments to each other and they do not typically trade with anonymous counterparties, which is the main area in which payment channels are useful.
I don't know the background to this query but Corda does not need anything like the Lightning Network. In fact Bitcoin doesn't either.
I can answer this question somewhat authoritatively because I actually made the first proposals for the Bitcoin micropayment channel protocol back in 2011. See Example 7 on this page and its history:
https://en.bitcoin.it/w/index.php?title=Contract&oldid=21404
There's an implementation in the old Bitcoin library I wrote, and I made an app that used it to do micropayments for file downloads:
https://github.com/mikehearn/PayFile

Related

Using FastRTPS for a Command and Control Application

I'm trying to understand how to use the Fast-RTPS libraries to implement a Command and Control application. The requirement is to allow multiple writers to direct command messages to a single reader that is tasked with controlling a piece of equipment. In this application there can be one or more identical pieces of equipment being controlled, each using a unique instance of the same reader code.
I already understand that I should set the reader's RELIABILITY_QOS to RELIABLE and the OWNERSHIP_QOS to EXCLUSIVE_OWNERSHIP. The part that I am still thinking about is how to configure my application so that when a writer sends a command to the reader controlling the piece of equipment, other readers that might also receive the message will not act on it. I would like to do this at the Fast-RTPS level; that is, configure the application so that only the reader controlling the equipment receives the command message, versus allowing multiple readers to receive the control message while programming these readers so that only the controlling reader will act on it.
My approach so far involves assigning all controlling writers and only the controlling reader to a partition (see Advanced Functionalities in the Fast-RTPS Users Manual). There will be one of these partitions for each piece of equipment. Is this the proper way to implement my requirements, or are there other, better ways?
Thank you.
Since this question was asked under the data-distribution-service tag, this answer references the OMG DDS specification, currently at version 1.4.
Although you could use Partitions to achieve the selective delivery that you are looking for, this would probably not be the recommended approach for your use case. The main disadvantage that comes to mind is a situation where a single writer has to send control messages to multiple pieces of equipment. With your current approach, you need a separate Partition for each piece of equipment, and you additionally need each message to be written into the right Partition. This can only be achieved by attaching a single Partition to each DataWriter, which would consequently require a separate DataWriter per piece of equipment. Depending on your set-up, you may end up with many DataWriters where you would prefer to have a few, from the perspective of resource usage as well as code complexity.
The proper mechanism that is intended for this kind of use-case is the so-called ContentFilteredTopic, as found in section 2.2.2.3.3 ContentFilteredTopic Class in the specification. For your convenience, I quoted some of it:
ContentFilteredTopic describes a more sophisticated subscription that indicates the subscriber does not want to necessarily see all values of each instance published under the Topic. Rather, it wants to see only the values whose contents satisfy certain criteria. This class therefore can be used to request content-based subscriptions. The selection of the content is done using the filter_expression with parameters expression_parameters.
Using ContentFilteredTopics, each DataReader would use a filter_expression that aligns with an identifier of the device that it is associated with. At the application level on the sender side, DataWriters would not be aware of that; they would just be writing their control messages. The middleware would take care of the delivery to those (and only those) DataReaders for which the filter expression matches the data.
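To make that concrete, here is a small Python sketch that mimics what the middleware does with a filter_expression and expression_parameters. This is not the Fast-RTPS API (or that of any particular DDS binding), just the concept; the field name device_id and the reader names are made up:
[code]
# Conceptual sketch of content-based filtering; NOT the Fast-RTPS API.
# Each reader declares a filter; only matching samples are delivered to it.
class FilteredReader:
    def __init__(self, name, filter_expression, expression_parameters):
        # e.g. filter_expression = "device_id = %0", expression_parameters = ["EQ-7"]
        self.name = name
        field, _, placeholder = filter_expression.partition(" = ")
        self.field = field
        self.expected = expression_parameters[int(placeholder.lstrip("%"))]

    def matches(self, sample: dict) -> bool:
        return sample.get(self.field) == self.expected

def deliver(sample: dict, readers) -> None:
    for reader in readers:
        if reader.matches(sample):          # only the controlling reader sees it
            print(reader.name, "received", sample)

readers = [FilteredReader("ctrl-EQ-7", "device_id = %0", ["EQ-7"]),
           FilteredReader("ctrl-EQ-9", "device_id = %0", ["EQ-9"])]
deliver({"device_id": "EQ-7", "command": "START"}, readers)
[/code]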
This is a core feature of many DDS-based systems. Although the DDS specification does not require it, in many cases the implementation is smart enough to do filtering on the DataWriter side, before the message goes onto the wire, in cases where that makes sense.
I do not know how much of this is actually implemented by Fast-RTPS.

Authentication tips using NTAG 424 DNA TT

I need to implement an authentication procedure between a reader and an NFC tag, but my knowledge in this area is limited, so I would appreciate some help understanding a few concepts.
Pardon in advance for writing the Bible, but I could not summarize it any further.
There are many tag families (ICODE, MIFARE, NTAG...), but after doing some research I think NTAG 424 DNA matches my requirements (I mainly need authentication features).
It comes with AES encryption, a CMAC protocol and a 3-pass authentication system, and this is where I started to need assistance.
AES -> As far as I understand, this is a block cipher that encrypts plaintext via permutations and mappings. It is a symmetric standard, and the master key is not used directly; instead, session keys derived from the master key are used. (Q01: What I do not know is where these keys are stored in the tag. Keys must be stored in specialized HW, but no tag "specs" mention this, apart from the MIFARE SAM labels.)
CMAC -> It is an alteration of CBC-MAC to make authentication secure for dynamically sized messages. If data is not confidential, then a MAC can be used on plaintexts to verify them, but to gain both confidentiality and authentication, "encrypt-then-MAC" must be used (see the sketch after this list for what I mean). Here session keys are also used, but not the same keys as in the encryption step. (Q02: My overall view is that CMAC is a way to implement verification along with confidentiality; this is my opinion and could be wrong.)
3-pass protocol -> The ISO/IEC 9798-2 norm, where tag and reader are mutually verified. It may also use a MAC along with session keys to achieve this task. (Q03: I think this is the upper layer of the whole system, used to verify tags and readers. The "3-pass protocol" relies on a MAC to be functional and, if confidentiality features are also needed, then CMAC might be used instead of a plain MAC. CMAC needs AES to be functional, applying session keys at each step. Please correct me if I am making gross mistakes.)
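To make my current understanding of "encrypt-then-MAC" concrete, here is a small sketch using the pycryptodome library. It is not NTAG-specific, and the keys are generated on the spot purely for illustration:
[code]
# Sketch of encrypt-then-MAC with AES-CBC and AES-CMAC (pycryptodome).
# Not NTAG-specific; keys are random here just for illustration.
from Crypto.Cipher import AES
from Crypto.Hash import CMAC
from Crypto.Random import get_random_bytes

enc_key = get_random_bytes(16)   # session key used for encryption
mac_key = get_random_bytes(16)   # separate session key used for the CMAC

def protect(plaintext: bytes):
    cipher = AES.new(enc_key, AES.MODE_CBC)            # a random IV is generated
    padded = plaintext + b"\x80" + bytes((-len(plaintext) - 1) % 16)
    ciphertext = cipher.iv + cipher.encrypt(padded)
    mac = CMAC.new(mac_key, ciphermod=AES)
    mac.update(ciphertext)                             # MAC over the ciphertext
    return ciphertext, mac.digest()

ciphertext, tag = protect(b"some command data")
print(len(ciphertext), tag.hex())
[/code]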
/*********/
P.S.: I am aware that this is a coding-related forum, but surely I can find here someone with more knowledge than me about cryptography to answer these questions.
P.P.S.: I totally do not know where master and session keys are kept on the tag side. Do they need to be included in separate HW along with the main NFC circuit?
(Target)
This is to implement a mutual verification process between tag and reader, using the NTAG 424 DNA TagTamper label. (The target is to avoid third-party copies, with authentication being the predominant part rather than message confidentiality.)
The problem is my lack of knowledge of cryptography while trying to understand how AES, CMAC and mutual authentication are used on this NTAG.
(Extra Info)
NTAG 424 DNA TT: https://www.nxp.com/products/identification-security/rfid/nfc-hf/ntag/ntag-for-tags-labels/ntag-424-dna-424-dna-tagtamper-advanced-security-and-privacy-for-trusted-iot-applications:NTAG424DNA
ISO 9798-2: http://bcc.portal.gov.bd/sites/default/files/files/bcc.portal.gov.bd/page/adeaf3e5_cc55_4222_8767_f26bcaec3f70/ISO_IEC_9798-2.pdf
3-pass-authentication:https://prezi.com/p/rk6rhd03jjo5/3-pass-mutual-authentication/
Keys storage HW:https://www.microchip.com/design-centers/security-ics/cryptoauthentication
The NTAG 424 chips are not particularly easy to use, but they offer some nice features which can be used for different security applications. One important thing to note, however, is that although the chip relies heavily on encryption, that is not the main challenge on the implementation side, because all of the AES encryption, CMAC computation and so on is already available as a package or library in most programming languages. Some examples are even given by NXP in their application note. For example, in Python you can use the AES package (from Crypto.Cipher import AES), as shown in one of the examples of the application note.
My advice is to simply retrace their personalization example, beginning at the initial authentication, and then work your way up to whatever you are trying to achieve. It is also possible to use these examples to test the encryption and the building of APDU commands. Most of the work is not hard, but sometimes the NXP documents can be a bit confusing.
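As a rough illustration of the crypto in that initial authentication, here is a reader-side sketch of the generic ISO/IEC 9798-2 three-pass idea in Python. It is not the exact NTAG 424 DNA byte layout or command framing; the all-zero IV and the structure of the tag's final reply are assumptions, so check the application note for the real details:
[code]
# Reader-side sketch of a three-pass mutual authentication (ISO/IEC 9798-2 style).
# NOT the exact NTAG 424 DNA framing; zero IV and reply layout are assumptions.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

IV = bytes(16)                       # assumed all-zero IV for the initial exchange

def rotl(block: bytes, n: int = 1) -> bytes:
    """RndB -> RndB': rotate the block left by one byte."""
    return block[n:] + block[:n]

def reader_respond(key: bytes, enc_rndb: bytes):
    """Pass 2: the tag sent E(K, RndB); reply with E(K, RndA || RndB')."""
    rndb = AES.new(key, AES.MODE_CBC, IV).decrypt(enc_rndb)
    rnda = get_random_bytes(16)
    reply = AES.new(key, AES.MODE_CBC, IV).encrypt(rnda + rotl(rndb))
    return rnda, rndb, reply

def reader_check(key: bytes, rnda: bytes, enc_tag_reply: bytes) -> bool:
    """Pass 3: the tag proves knowledge of K by sending back data containing RndA'."""
    plain = AES.new(key, AES.MODE_CBC, IV).decrypt(enc_tag_reply)
    return rotl(rnda) in plain       # simplified; the real reply carries more fields
[/code]
Both sides then derive session keys from RndA and RndB, and those session keys are what the later encryption and CMAC steps use.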
One small note: if you are working with Python, there is some code available on GitHub which you might be able to reuse.
For iOS, I'm working on a library for DNA communication, NfcDnaKit:
https://github.com/johnnyb/nfc-dna-kit

Ruminations on highly-scalable and modular distributed server side architectures

Mine is not really a question, it's more of a call for opinions - and perhaps this isn't even the right place to post it. Nevertheless, the community here is very informed, and there's no harm in trying...
I was thinking about ways to create a highly scalable and, above all, highly modular back-end architecture. For example, an entire back-end ecosystem for a large site that had the potential for future-proof evolution into a massive site.
This would entail a very high degree of separation of concerns, to the extent that not only could (say) the underlying DB be replaced (i.e. from Oracle to MySQL), but the actual type of database could be replaced (e.g. SQL to KV, or vice versa).
I envision a situation where each sub-system exposes its own API within the back-end ecosystem. In this way, the API could remain constant, whilst the implementation could change (even radically) over time.
The system must be heterogeneous in that it's not tied to a specific language. It must be able to accommodate modules or entire sub-systems using different languages.
It then occurred to me that what I was imagining was simply the architecture of the web itself.
So here is my discussion point: apart from the overhead of using (mainly) text-based protocols is there any overriding reason why a complex back-end architecture should not be implemented in the manner I describe, or is there some strong rationale I'm missing for using communication protocols such as Twisted, AMQP, Thrift, etc?
UPDATE: Following a comment from @meagar, I should perhaps reformulate the question: are the clear advantages of using a very simple, flexible and well-understood architecture (i.e. all functionality exposed as a series of RESTful APIs) enough to compensate for the obvious performance hit incurred when using this architecture in a back-end context?
[code]the actual type of database could be replaced (e.g. SQL to KV, or vice versa).[/code]
And anyone who wrote a join between two tables will be sad. If you want the "ability" to switch to KV, then you should not expose an API richer than what KV can support.
The answer to your question depends on what it is you're trying to accomplish. You want to keep each module on a reasonably tight rein. Use proper physical layering of code, use defined interfaces with side-effect contracts, use test cases for each success and failure case of each interface. That way, you can depend on things like "when the user enters the blah page, a user-blah fact is generated so that all registered fact listeners will be invoked." This allows you to extend the system without having direct calls from point A to point B, while still having some kind of control over widely disparate dependencies. (I hate code bases where you can't find-all to find all possible references to a symbol!)
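In code, that "registered fact listeners" idea is just a publish/subscribe registry. A minimal Python sketch (the fact name and payload are made up):
[code]
# Minimal sketch of the "registered fact listeners" idea; names are illustrative.
from collections import defaultdict
from typing import Callable, Dict, List

_listeners: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

def register(fact_type: str, listener: Callable[[dict], None]) -> None:
    _listeners[fact_type].append(listener)

def publish(fact_type: str, payload: dict) -> None:
    # Point A never calls point B directly; it only publishes a fact.
    for listener in _listeners[fact_type]:
        listener(payload)

# Two unrelated modules react to the same fact without knowing about each other.
register("user-blah", lambda fact: print("analytics saw", fact))
register("user-blah", lambda fact: print("audit saw", fact))
publish("user-blah", {"user_id": 42, "page": "blah"})
[/code]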
However, the fact that we put lots of code and classes into a single system is because calling between systems is often very, very expensive. You want to think in terms of code modules making requests of each other where you can. The difference in timing between a function call and a REST call is something like one to a million (maybe you can get it as low as one to ten thousand, if you only count cycles, not wallclock time -- but I'm not so sure). Also, anything that goes on a wire in a datacenter may potentially suffer from packet loss, because there is no such thing as a 100% loss-free data center, no matter how hard you try. Packet loss means random latency spikes in the response time for your application.

A smart UDP protocol analyzer?

Is there a "smart" UDP protocol analyzer that can help me reverse engineer a message based protocol?
I'm using Wireshark to do the sniffing, but if there's a tool that can detect regularities in the protocol (repeated strings, bits of the protocol that are CRC/Checksum or length, ...) and aid the process that would help.
You are asking for a universal inference engine. The best way to try to recover the protocol (assuming you are in a jurisdiction that permits this) is to understand the underlying message transfer from the beginning of a session, and then to manually simulate the behaviour of each party through a sequence of ping-pong message trials. This way you develop an understanding of the message structures and their functioning.
Using the UDP frame boundaries is a good place to start looking for structure.
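For example, you can dump the UDP payloads from your Wireshark capture and look for byte offsets that barely change between messages; those are good candidates for magic values, type fields or lengths. A rough sketch using the scapy library (the capture file name and the 90% threshold are arbitrary):
[code]
# Rough sketch: extract UDP payloads from a capture and flag near-constant
# byte offsets, which often correspond to fixed header fields.
from collections import Counter
from scapy.all import rdpcap, UDP

payloads = [bytes(pkt[UDP].payload) for pkt in rdpcap("capture.pcap") if UDP in pkt]

for offset in range(min(len(p) for p in payloads)):
    values = Counter(p[offset] for p in payloads)
    byte, count = values.most_common(1)[0]
    if count / len(payloads) > 0.9:          # arbitrary "mostly constant" threshold
        print(f"offset {offset}: 0x{byte:02x} in {count}/{len(payloads)} messages")
[/code]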
If you have no documentation, then even after you gain a good understanding of the protocol, expect to be surprised many times during the project.
If you can, have your existing systems carry out exactly the scenario you need to use, and then simply replicate the same sequence with payload (and any checksum) changes only. This way you can possibly achieve the requirement without a comprehensive understanding of the protocol.
For an example of the effort in doing this you could look at a historical review of the Samba project at A bit of history and a bit of fun.

Why use AMQP/ZeroMQ/RabbitMQ

as opposed to writing your own library.
We're working on a project here that will be a self-dividing server pool: if one section grows too heavy, the manager would divide it and put it on another machine as a separate process. It would also alert all affected clients to connect to the new server.
I am curious about using ZeroMQ for inter-server and inter-process communication. My partner would prefer to roll his own. I'm looking to the community to answer this question.
I'm a fairly novice programmer myself and just learned about message queues. As I've googled and read, it seems everyone is using message queues for all sorts of things, but why? What makes them better than writing your own library? Why are they so common and why are there so many?
what makes them better than writing your own library?
When rolling out the first version of your app, probably nothing: your needs are well defined and you will develop a messaging system that will fit your needs: small feature list, small source code etc.
Those tools are very useful after the first release, when you actually have to extend your application and add more features to it.
Let me give you a few use cases:
your app will have to talk to a big-endian machine (SPARC/PowerPC) from a little-endian machine (x86, Intel/AMD). Your messaging system had some endian-ordering assumptions: go and fix it (see the sketch after this list)
you designed your app around a text (non-binary) protocol/messaging system and now it is very slow because you spend most of your time parsing messages (the number of messages increased and parsing became a bottleneck): adapt it so it can transport binary/fixed encodings
at the beginning you had 3 machines inside a LAN, no noticeable delays, and everything gets to every machine. Your client/boss/pointy-haired-devil-boss shows up and tells you that you will install the app on a WAN you do not manage - and then you start having connection failures, bad latency etc. You need to store messages and retry sending them later on: go back to the code and plug this stuff in (and enjoy)
messages sent need to have replies, but not all of them: you send some parameters in and expect a spreadsheet as a result, instead of just sends and acknowledges: go back to the code and plug this stuff in (and enjoy)
some messages are critical and their reception/sending needs proper backup/persistence. Why, you ask? Auditing purposes
And many other use cases that I forgot ...
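To illustrate the endianness point above: a library that pins its wire format to network byte order makes the problem disappear, and you can do the same in a few lines if you design for it from the start. A tiny sketch (the message layout is made up):
[code]
# Tiny illustration of a fixed, endian-safe wire format; the layout is made up.
import struct

def encode(msg_type: int, value: int) -> bytes:
    # "!" forces network (big-endian) byte order regardless of the host CPU.
    return struct.pack("!HI", msg_type, value)

def decode(data: bytes) -> tuple:
    return struct.unpack("!HI", data)

assert decode(encode(7, 1234)) == (7, 1234)
[/code]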
You can implement it yourself, but do not spend much time doing so: you will probably replace it later on anyway.
That's very much like asking: why use a database when you can write your own?
The answer is that using a tool that has been around for a while and is well understood in lots of different use cases, pays off more and more over time and as your requirements evolve. This is especially true if more than one developer is involved in a project. Do you want to become support staff for a queueing system if you change to a new project? Using a tool prevents that from happening. It becomes someone else's problem.
Case in point: persistence. Writing a tool to store one message on disk is easy. Writing a persistor that scales and performs well and stably, in many different use cases, and is manageable, and cheap to support, is hard. If you want to see someone complaining about how hard it is then look at this: http://www.lshift.net/blog/2009/12/07/rabbitmq-at-the-skills-matter-functional-programming-exchange
Anyway, I hope this helps. By all means write your own tool. Many many people have done so. Whatever solves your problem, is good.
I'm considering using ZeroMQ myself - hence I stumbled across this question.
Let's assume for the moment that you have the ability to implement a message queuing system that meets all of your requirements. Why would you adopt ZeroMQ (or other third party library) over the roll-your-own approach? Simple - cost.
Let's assume for a moment that ZeroMQ already meets all of your requirements. All that needs to be done is to integrate it into your build, read some doco and then start using it. That's got to be far less effort than rolling your own. Plus, the maintenance burden has been shifted to another company. Since ZeroMQ is free, it's like you've just grown your development team to include (part of) the ZeroMQ team.
If you ran a Software Development business, then I think that you would balance the cost/risk of using third party libraries against rolling your own, and in this case, using ZeroMQ would win hands down.
Perhaps you (or rather, your partner) suffer, as so many developers do, from the "Not Invented Here" syndrome? If so, adjust your attitude and reassess the use of ZeroMQ. Personally, I much prefer the benefits of the "Proudly Found Elsewhere" attitude. I'm hoping I can be proud of finding ZeroMQ... time will tell.
EDIT: I came across this video from the ZeroMQ developers that talks about why you should use ZeroMQ.
what makes them better than writing your own library?
Message queuing systems are transactional, which is conceptually easy to use as a client, but hard to get right as an implementor, especially considering persistent queues. You might think you can get away with writing a quick messaging library, but without transactions and persistence, you'd not have the full benefits of a messaging system.
Persistence in this context means that the messaging middleware keeps unhandled messages in permanent storage (on disk) in case the server goes down; after a restart, the messages can be handled and no retransmit is necessary (the sender does not even know there was a problem). Transactional means that you can read messages from different queues and write messages to different queues in a transactional manner, meaning that either all reads and writes succeed or (if one or more fail) none succeeds. This is not really much different from the transactionality known from interfacing with databases and has the same benefits (it simplifies error handling; without transactions, you would have to assure that each individual read/write succeeds, and if one or more fail, you have to roll back those changes that did succeed).
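To illustrate the "all or nothing" part, here is a toy, in-memory sketch. It is not tied to any real broker, and a real messaging system does the same thing durably on disk:
[code]
# Toy sketch of transactional messaging semantics: reads and writes either all
# take effect (commit) or none do (rollback). Not tied to any real broker.
class Txn:
    def __init__(self, in_q: list, out_q: list):
        self.in_q, self.out_q = in_q, out_q
        self.read, self.wrote = [], []

    def receive(self):
        msg = self.in_q.pop(0)
        self.read.append(msg)
        return msg

    def send(self, msg) -> None:
        self.wrote.append(msg)           # buffered until commit

    def commit(self) -> None:
        self.out_q.extend(self.wrote)    # writes become visible together

    def rollback(self) -> None:
        self.in_q[0:0] = self.read       # consumed messages are put back

incoming, outgoing = ["order-1"], []
txn = Txn(incoming, outgoing)
txn.send("invoice for " + txn.receive())
txn.rollback()                           # error handling is just "roll back"
print(incoming, outgoing)                # ['order-1'] [] - as if nothing happened
[/code]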
Before writing your own library, read the 0MQ Guide here: http://zguide.zeromq.org/page:all
Chances are that you will either decide to install RabbitMQ, or else you will make your library on top of ZeroMQ since they have already done all the hard parts.
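To give a feel for how little code a basic request/reply exchange takes on top of ZeroMQ, here is a minimal sketch using the pyzmq binding; the port and message contents are made up, and in reality the client and server would be separate processes:
[code]
# Minimal ZeroMQ request/reply sketch with pyzmq; port and payloads are made up.
# Client and server are shown in one process only to keep the example short.
import zmq

ctx = zmq.Context()

server = ctx.socket(zmq.REP)
server.bind("tcp://127.0.0.1:5555")

client = ctx.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5555")

client.send(b"split section 42")            # e.g. a command to the pool manager
print("server got:", server.recv())
server.send(b"section 42 moved")            # reply goes back to the same client
print("client got:", client.recv())
[/code]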
If you have a little time, give it a try and roll out your own implementation! The lessons of that exercise will convince you of the wisdom of using an already tested library.