Blockchain: Nubits vs Bitcoin speed - e-commerce

I am researching Nubits and Bitcoin, and I realized that Nubits is faster than Bitcoin! How is this possible when Nubits is implemented on the Bitcoin blockchain?

It's possible because Nubits is not actually built on the Bitcoin blockchain: it is a separate blockchain whose consensus algorithm is Proof of Stake. The objective of this algorithm is to create blocks by consensus among the nodes of the network.
Bitcoin executes the Proof of Work algorithm to achieve consensus. Each node tries to find a nonce that produces a block with a valid hash, i.e. a hash which is smaller than a predefined number (the target). The target allows Bitcoin to control the average time needed to create a block: it is adjusted, via the difficulty, to maintain an average time of 10 minutes per block.
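A toy sketch of that search loop in Python (simplified -- the real 80-byte header layout and target encoding are more involved):

    import hashlib

    def mine(block_data: bytes, target: int) -> int:
        """Brute-force a nonce until the double-SHA256 hash falls below the target."""
        nonce = 0
        while True:
            h = hashlib.sha256(hashlib.sha256(
                block_data + nonce.to_bytes(4, "little")).digest()).digest()
            if int.from_bytes(h, "big") < target:
                return nonce  # a "golden nonce"
            nonce += 1

    # A smaller target means fewer hashes qualify, so mining takes longer on
    # average; the network adjusts the difficulty to keep blocks ~10 minutes apart.
    print(mine(b"example block", target=2**240))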
However, other implementations of blockchain use other algorithms. For example, Hyperledger Fabric uses PBFT, which achieves consensus in about 10 seconds, i.e. it creates each block in roughly 10 seconds.
Nubits, like some other blockchain implementations, uses Proof of Stake. Proof of Stake is a different way to validate transactions and achieve distributed consensus. It is still an algorithm, and its purpose is the same as that of Proof of Work, but the process of reaching the goal is quite different. Unlike Proof of Work, where the algorithm rewards miners who solve mathematical problems in order to validate transactions and create new blocks, with Proof of Stake the creator of a new block is chosen in a deterministic way, depending on its wealth, also defined as stake.
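To make the contrast concrete, here is a toy sketch of deterministic, stake-weighted selection (hypothetical; real Proof-of-Stake designs, such as the coin-age scheme Nubits inherits from Peercoin, are considerably more elaborate):

    import hashlib

    def pick_block_creator(stakes: dict, prev_block_hash: str) -> str:
        """Deterministically pick a block creator, weighted by stake."""
        total = sum(stakes.values())
        # Derive a reproducible number from the previous block so that
        # every node arrives at the same choice -- no mining race needed.
        seed = int.from_bytes(hashlib.sha256(prev_block_hash.encode()).digest(), "big")
        ticket = seed % total
        for node, stake in sorted(stakes.items()):
            if ticket < stake:
                return node
            ticket -= stake

    print(pick_block_creator({"alice": 50, "bob": 30, "carol": 20}, "00ab12..."))

Because no brute-force search is involved, block creation can be much faster than under Proof of Work.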

Related

Is there a scenario in Bitcoin where given all the nonces, you can't solve the cryptographic puzzle?

I just read that the Bitcoin nonce is 4 bytes, meaning there are ~4 billion possibilities for the nonce.
Is there a scenario in Bitcoin where given all the nonces, you can't produce a hash/solution for the cryptographic puzzle where you need a value that begins with the necessary # of zeros?
In other words, if you need 11 leading zeros to solve the puzzle, perhaps no nonce will yield this.
If this case existed, my presumption is you'd need to select different transactions for the block you're mining given that these are selected by the miner.
It'd be great to get some clarity on this as I haven't seen it addressed anywhere.
Not only is this possible, it happens all the time.
So, here's how it works: when a golden nonce isn't found during mining -- when all possibilities have been brute-forced -- the timestamp of the block is updated to a new time and the mining process starts anew.
Changing the date/time of the block changes the header being hashed, and therefore the solution, so the miner has to brute-force all of the nonces (a 32-bit value) again using the new date/time.
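A sketch of that outer loop (simplified; in practice miners also vary the coinbase transaction's extraNonce, which changes the header's Merkle root and has the same effect):

    import hashlib, time

    def mine(header_base: bytes, target: int):
        timestamp = int(time.time())
        while True:
            prefix = header_base + timestamp.to_bytes(4, "little")
            # Exhaust the entire 32-bit nonce space for this timestamp...
            for nonce in range(2**32):
                h = hashlib.sha256(hashlib.sha256(
                    prefix + nonce.to_bytes(4, "little")).digest()).digest()
                if int.from_bytes(h, "big") < target:
                    return timestamp, nonce
            # ...and if no golden nonce exists, bump the timestamp and retry:
            # every header change yields a fresh set of 2^32 candidate hashes.
            timestamp += 1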

What is the need for dynamic consensus in hyperledger projects

I read that Hyperledger Sawtooth supports dynamic consensus, meaning the consensus algorithm can be changed dynamically. My question is: what is the need for this, and when is it necessary to change the consensus dynamically? What forces us to change the consensus dynamically?
I read the Fabric and Sawtooth documentation but could not find the necessity for dynamic consensus.
Nothing forces any blockchain to change consensus--you can keep the same consensus algorithm forever.
However, consensus algorithms are an active area of research. New and more efficient algorithms are being proposed, so a blockchain may want to switch to a new algorithm. Or perhaps the current algorithm is not suitable. For example, some algorithms are efficient with a few nodes (e.g., PBFT) but are O(n^2), meaning the number of messages grows quadratically as nodes are added, so they do not scale.
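A rough count makes the scaling problem obvious (ignoring protocol details -- PBFT's prepare and commit phases are each roughly all-to-all):

    for n in (4, 10, 50, 100):
        print(n, "nodes ->", 2 * n * (n - 1), "messages per round")
    # 4 nodes -> 24, 10 -> 180, 50 -> 4900, 100 -> 19800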
Some consensus algorithms are BFT, Byzantine Fault Tolerant, meaning they withstand bad or malicious actors (nodes). Other algorithms are just CFT, Crash Fault Tolerant, meaning they can withstand a node crashing, but not a bad actor. So one may want to change from a CFT algorithm to a BFT-friendly algorithm (such as PoET SGX).
Hyperledger Sawtooth, by the way, supports PoET, RAFT, and DevMode consensus. The last is for experimental and testing use only--not production. Soon to be added is PBFT consensus. For more detail on Sawtooth consensus, see https://github.com/danintel/sawtooth-faq/blob/master/consensus.rst

Clarification in the Ethereum White Paper

I was going through the Ethereum White Paper, and it mentions that the scripting language implemented in the Bitcoin blockchain has the limitations of value-blindness and blockchain-blindness (points 2 and 4 in the paper). I am finding it hard to comprehend what this means. It would be great if someone could help me understand this with an example.
Value Blindness:
There is no way for a UTXO script to provide
fine-grained control over the amount that can be withdrawn. For
example, one powerful use case of an oracle contract would be a
hedging contract, where A and B put in $1000 worth of BTC and after 30
days the script sends $1,000 worth of BTC to A and the rest to B. This
would require an oracle to determine the value of 1 BTC in USD[Note
3], but even then it is a massive improvement in terms of trust and
infrastructure requirement over the fully centralized solutions that
are available now. However, because UTXO are all-or-nothing, the only
way to achieve this is through the very inefficient hack of having
many UTXO of varying denominations (e.g. one UTXO of 2^k for every k up
to 30) and having O (the oracle) pick which UTXO to send to A and which to B.
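To see why the hack works: with one UTXO of 2^k for each k up to 30, any integer amount up to 2^31 - 1 can be paid out exactly by selecting the UTXOs matching the amount's binary representation. A small illustration:

    def pick_utxos(amount: int, max_k: int = 30) -> list:
        """Select power-of-two UTXOs whose values sum exactly to `amount`."""
        chosen = []
        for k in range(max_k + 1):
            if amount & (1 << k):        # bit k set -> include the 2^k UTXO
                chosen.append(1 << k)
        return chosen

    print(pick_utxos(1337))              # [1, 8, 16, 32, 256, 1024]
    assert sum(pick_utxos(1337)) == 1337

The inefficiency is that all of these UTXOs must be created and tracked up front, just to simulate what a value-aware script could express directly.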
Blockchain-blindness:
UTXO are blind to certain blockchain data such as the nonce and
previous block hash. This severely limits applications in gambling,
and several other categories, by depriving the scripting language of a
potentially valuable source of randomness.
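For example, a script that could read the previous block hash could derive a cheap (if miner-influenceable) random number for a bet; a hypothetical sketch of what that would look like:

    import hashlib

    def roll_die(prev_block_hash: str, bet_id: str) -> int:
        """Derive a pseudo-random die roll (1-6) from on-chain data."""
        digest = hashlib.sha256((prev_block_hash + bet_id).encode()).digest()
        return int.from_bytes(digest, "big") % 6 + 1

    # A Bitcoin UTXO script can never do this -- it simply cannot see the
    # block hash or nonce -- whereas an Ethereum contract can (with caveats).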

How to secure a new public proof-of-work blockchain?

I am fairly new to Bitcoin and blockchain technology and have recently started reading about it, so my understanding and the question below may not be very accurate.
As I have understood so far, proof of work is the basic building block of the Bitcoin blockchain. Because of it, an attacker would have to control more than 50% of the total compute power (hash rate) of the network in order to manipulate the blockchain by being able to consistently produce the longest chain.
Now, the Bitcoin folks were a bit lucky: they were the first, and nobody paid attention in the early days. Once Bitcoin gathered momentum, honest nodes became predominant and the system became inherently secure.
But now, how can someone start a new public blockchain (for a completely different application) safely? If a new blockchain is floated with only a few mining nodes, an attacker can come along with more compute power and hijack the blockchain, since there are only a small number of honest nodes.
It depends on what you want to do. There are many implementations of blockchain, and each of them has its own objective. Bitcoin was the first implementation of blockchain. Bitcoin is a cryptocurrency, and like Bitcoin, many other cryptocurrencies have been developed.
However, blockchain technology is useful for many other things: for example, to control voting in a distributed way in elections. Because of that, there are many implementations of blockchains.
Hyperledger Fabric is a private blockchain, where access to it must be controlled.
Ethereum is a public blockchain for transferring assets. Anyone can create their own tokens and start using them through the Ethereum network. So you would be using an existing chain, and attackers couldn't hijack you. I think that this would be a great start. If I were you, I'd continue reading this.
To avoid the attack you are describing (a 51% attack), where existing miners hijack a new network, there are a couple of approaches.
Merge Mining
The smaller chain includes its block data in the larger chain (e.g. Bitcoin), so its blocks are mined with the hash power of the larger network.
Change the hashing algorithm
For Bitcoin, the hashing algorithm is two rounds of SHA-256. Because there is so much SHA-256 mining power out there, a small chain using it can be attacked fairly easily: Bitcoin miners can simply point their existing hardware at the small network long enough to execute an attack, and then switch back. This happened to Bitcoin Gold recently. So, use something other than SHA-256, or any other algorithm for which a lot of hash power already exists in hardware.
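To get a feel for why hash-power share is the whole game, the Bitcoin white paper gives the probability that an attacker with share q of the hash power ever catches up from z blocks behind: it is 1 when q >= 0.5, and (q/p)^z with p = 1 - q otherwise. A quick sketch:

    def catch_up_probability(q: float, z: int) -> float:
        """Chance an attacker with hash-power share q erases a z-block deficit."""
        p = 1.0 - q
        return 1.0 if q >= 0.5 else (q / p) ** z

    print(catch_up_probability(0.3, 6))  # ~0.006 -- a minority attacker fades out
    print(catch_up_probability(0.6, 6))  # 1.0    -- a majority attacker always wins

On a tiny new chain, an outside miner can trivially hold q > 0.5, which is exactly the situation the question describes.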

Travelling Salesman and Map/Reduce: Abandon Channel

This is an academic rather than practical question. In the Traveling Salesman Problem, or any other problem which involves finding a minimum optimization ... if one were using a map/reduce approach, it seems like there would be some value in having some means for the current minimum result to be broadcast to all of the computational nodes in a manner that allows them to abandon computations which exceed it.
In other words, if we map the problem out, we'd like each node to know when to give up on a given partial result before it's complete, because it has already exceeded some other solution.
One approach that comes immediately to mind would be for the reducer to have a means to provide feedback to the mapper. Consider if we had 100 nodes and millions of paths being fed to them by the mapper. If the reducer feeds the best result back to the mapper, then that value could be included as an argument along with each new path (problem subset). In this approach the granularity is fairly rough ... the 100 nodes will each keep grinding away on their partition of the problem to completion and only get the new minimum with their next request from the mapper. (For a small number of nodes and a huge number of problem partitions/subsets to work across, this granularity would be inconsequential; also, it's likely that one could apply heuristics to the sequence in which the possible routes or problem subsets are fed to the nodes to get a rapid convergence towards the optimum and thus minimize the amount of "wasted" computation performed by the nodes.)
Another approach that comes to mind would be for the nodes to be actively subscribed to some sort of channel, or multicast or even broadcast from which they could glean new minimums from their computational loop. In that case they could immediately abandon a bad computation when notified of a better solution (by one of their peers).
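A minimal sketch of that second idea, using a process-shared value as the "channel" (hypothetical, not tied to any particular map/reduce framework):

    from multiprocessing import Pool, Value

    best = None  # process-shared current minimum, set up per worker

    def init(shared):
        global best
        best = shared

    def evaluate(partial_cost):
        # Abandon this partial result as soon as it exceeds the global bound.
        if partial_cost >= best.value:
            return None
        # ... extend the path; if it completes as a cheaper tour, publish it:
        with best.get_lock():
            if partial_cost < best.value:
                best.value = partial_cost
        return partial_cost

    if __name__ == "__main__":
        bound = Value("i", 10**9)  # effectively "infinity"
        with Pool(4, initializer=init, initargs=(bound,)) as pool:
            pool.map(evaluate, [42, 17, 99, 23])
        print(bound.value)  # 17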
So, my questions are:
Is this concept covered by any terms of art in relation to existing map/reduce discussions?
Do any of the current map/reduce frameworks provide features to support this sort of dynamic feedback?
Is there some flaw with this idea ... some reason why it's stupid?
That's a cool topic that doesn't have much literature on it yet. So this is pretty much a brainstorming post, rather than an answer to all your problems ;)
Every TSP can be expressed as a graph, possibly looking like this one (taken from the German Wikipedia):
Now you can run a graph algorithm on it. MapReduce can be used for graph processing quite well, although it carries a lot of overhead.
You need a paradigm called "message passing". It is described in this paper: Paper.
I also blogged about it in terms of graph exploration; the post explains quite simply how it works: My Blogpost
This is how you can tell the mapper what the current minimum result is (maybe just for the vertex itself).
With all that knowledge in the back of your mind, it is pretty straightforward to arrive at the branch-and-bound algorithm you described: pick a random start vertex and branch to every adjacent vertex. This sends a message to each of those adjacent vertices with the cost at which it can be reached from the start vertex (map step). A vertex only updates its stored cost if the new cost is lower than the currently stored one (reduce step); initially that cost should be set to infinity.
You do this over and over again until you've reached the start vertex again (obviously after visiting every other one). So you have to somehow keep track of the currently best way to reach each vertex; this can be stored in the vertex itself, too. And every now and then you have to bound this branching and cut off branches that are too costly; this can be done in the reduce step after reading the messages.
Basically this is just a mix of graph algorithms in MapReduce and a kind of shortest-path search.
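One superstep of that message-passing scheme, sketched with plain map and reduce functions (toy data structures, not any specific framework's API):

    INF = float("inf")

    def map_step(graph, dist):
        """Each reachable vertex offers its neighbours a path through itself."""
        messages = []
        for v, neighbours in graph.items():
            if dist[v] < INF:
                for u, cost in neighbours:
                    messages.append((u, dist[v] + cost))
        return messages

    def reduce_step(dist, messages, bound=INF):
        """Keep the cheapest offer per vertex; drop branches beyond the bound."""
        for u, offered in messages:
            if offered < dist[u] and offered < bound:
                dist[u] = offered
        return dist

    graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
    dist = {"a": 0, "b": INF, "c": INF}
    for _ in range(len(graph)):           # iterate until the costs stabilise
        dist = reduce_step(dist, map_step(graph, dist))
    print(dist)                           # {'a': 0, 'b': 2, 'c': 3}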
Note that this won't yield the optimal route between the nodes; it is still a heuristic. And you're "just" parallelizing an NP-hard problem.
BUT, a little self-advertising again: maybe you've already read it in the blog post I linked, but there exists an abstraction over MapReduce that has far less overhead for this kind of graph processing. It is called BSP (Bulk Synchronous Parallel), and it is freer in its communication and computation model. So I'm sure this could be implemented a lot better with BSP than with MapReduce; in particular, you could realize those channels you've spoken of much more naturally with it.
I'm currently involved in a Summer of Code project which targets these SSSP problems with BSP. Maybe you'd like to take a look if you're interested; this could then be a partial solution, and it is described very well in my blog, too: SSSP's in my blog
I'm excited to hear some feedback ;)
It seems that Storm implements what I was thinking of. It's essentially a computational topology (think of how each compute node might route results, based on a key/hashing function, to specific reducers).
This is not exactly what I described, but it might be useful if one had a sufficiently low-latency way to propagate the current bound (i.e. local-optimum information) which each node in the topology could update/receive in order to know which results to discard.