The Dfinity Consensus White Paper

Dfinity released the first of what they claim will be many white papers last week.  This particular white paper centres on the consensus protocol.  If Dfinity is in fact planning to release many white papers, it makes sense that they would release the consensus white paper first, as the consensus layer is the foundation for any other innovations that will come from the larger Dfinity tech stack.  The core of Dfinity’s biggest consensus innovation is the threshold relay, which uses BLS cryptography and is described in this white paper.  This post is intended to be a very broad overview of the consensus white paper.  However, I intend to do a follow-up post detailing BLS cryptography and threshold signatures, the main drivers of innovation in this white paper.

Verifiable Random Function

Let’s begin with the Verifiable Random Function (VRF), as this is the smallest building block of the Dfinity protocol.  A VRF is, very simply, a pseudo-random function that provides publicly verifiable proofs of its outputs’ correctness.  If we recall from my previous post “The Blockchain from a Git Perspective”, I point out that from a git perspective, consensus is simply randomizing the selection of the maintainer of the “repo”, where the repo is the block chain.  I then make the claim that proof of work mining is simply a method for distributing the amount of time a node on the network is allowed to be the maintainer of the repo.  But this raises the question hundreds of engineers have asked since Bitcoin: what if, rather than competing to act as the maintainer of the repo by burning electricity to secure the block chain, there were some other method of randomly selecting the maintainer of the repo?  Enter the VRF.

What if there were a way to randomly select a maintainer of the repo without relying on a proof of work competition?  Let’s say every block included the name of the maintainer of the next block, but no one could guess which name would be chosen, not even the current maintainer, until the block was created.  At a very basic level, this is what the VRF enables.  However, rather than the name of a single maintainer being randomly selected, the VRF is used to randomly select a group, which can then be used to randomly select the group after that, and so on.
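To make the idea concrete, here is a toy sketch in Python of the interface a VRF provides.  Note that the keyed hash below is only a stand-in of my own invention: it gives determinism and unpredictability, but it is not publicly verifiable the way a real VRF (such as one built from unique BLS signatures, as Dfinity uses) is, since checking it requires the secret key.

```python
import hashlib
import hmac

# Toy stand-in for a VRF. A keyed hash gives us two of the properties we
# care about: the output is deterministic (same key + input -> same output)
# and unpredictable without the key. A real VRF additionally produces a
# proof that anyone can check against the PUBLIC key; this sketch cannot
# do that, so "verify" below just recomputes with the secret key.

def vrf_evaluate(secret_key: bytes, block_input: bytes) -> bytes:
    """Pseudo-random output bound to this key and this input."""
    return hmac.new(secret_key, block_input, hashlib.sha256).digest()

def vrf_verify(secret_key: bytes, block_input: bytes, output: bytes) -> bool:
    """Illustrative only: a real VRF verifies with the public key."""
    return hmac.compare_digest(vrf_evaluate(secret_key, block_input), output)

out = vrf_evaluate(b"maintainer-secret", b"block-41")
assert vrf_verify(b"maintainer-secret", b"block-41", out)
assert out == vrf_evaluate(b"maintainer-secret", b"block-41")  # deterministic
```

The point of the sketch is only the shape of the interface: no one can predict the output in advance, yet once published it can be checked rather than trusted.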

[Image: threshold_relay]
A broad overview of the Dfinity block chain.  Here the VRFs are the little red rectangles “Rand i – 1”, “Rand i”, “Rand i + 1”, etc.  The outputs of the VRF aid in randomly selecting each group: “Group i”, “Group i + 1”, “Group i + 2”, etc.

If we can devise a method for a decentrally agreed-upon VRF, it should be relatively easy to randomly select a miner without the need for proof of work.  In the Dfinity protocol, the VRF is triggered every block, producing a new output using BLS cryptography.  This per-block VRF is dubbed the random beacon and has various use cases in the Dfinity block chain.

Threshold Relay

This hypothetical decentralized VRF is all well and good, but how is it achieved in practice?  Recall, it must be trustless in the same way proof of work is.  This is where Dfinity has made a major breakthrough.  They’re using BLS rather than RSA or ECDSA, because BLS has a unique* threshold version, as well as a distributed key generation protocol for that threshold version.  This allows a signature to be valid once a threshold of the private key holders has signed the message.

A brief example.  Let’s say 100 nodes are randomly selected to partake in the generation of the next random beacon value, and the threshold has been set to 51.  This means that after 51 of the 100 randomly selected nodes have signed the message, the system will generate the next random beacon value for the entire network.

[Image: selected_nodes]
A screenshot of selected nodes in the Dfinity network.  For our example, green nodes are the nodes selected by the previous random beacon to sign the current random beacon, and there are 100 of them.  The grey nodes are all nodes in the Dfinity network.  Of the 100 green nodes, 51 would have to sign the message to propagate the next random beacon value to the rest of the network.

What’s amazing about this process is that it doesn’t matter which 51 of the 100 nodes sign the message: it will always produce the same random output, and this random output can always be verified for correctness.  The random beacon output generated in our toy example is then used by the system to randomly select the next 100 nodes, who generate the next random beacon value, ad infinitum.  This is called the threshold relay.
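The “any 51 of 100 give the same result” property can be seen in miniature with plain Shamir secret sharing, which relies on the same Lagrange-interpolation machinery that BLS threshold signatures use (there, the combination happens over signature shares rather than integers).  This is my own illustrative sketch, not Dfinity’s code:

```python
import random

# Minimal Shamir secret sharing over a prime field, to illustrate the
# threshold property: ANY t of the n shares reconstruct the same value.
# This integer version only demonstrates the "any subset gives the same
# result" behaviour; it is not a signature scheme.

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret: int, t: int, n: int, rng: random.Random):
    """Embed the secret as f(0) of a random degree t-1 polynomial."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

rng = random.Random(7)
secret = rng.randrange(P)
shares = make_shares(secret, t=51, n=100, rng=rng)
# Two different 51-member subsets reconstruct the identical value.
assert reconstruct(rng.sample(shares, 51)) == secret
assert reconstruct(rng.sample(shares, 51)) == secret
```

In the BLS threshold setting, the analogous combination of 51 signature shares yields one unique group signature, and hashing that signature gives the beacon output.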

Now that we have a trustless, agreed-upon method for generating randomness in the block chain, it’s a simple matter of using this random value to do various things on the block chain, such as selecting block makers, or selecting a random subset of nodes for the random beacon generation of the next round (the 100 randomly selected nodes in our example).
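Putting the pieces together, the relay loop looks roughly like this.  Everything here is a stand-in of mine (the node names, the simulated group signature); in the real protocol the beacon value is derived from a unique BLS threshold signature, so no committee member can bias it:

```python
import hashlib
import random

# Sketch of the relay loop: each round's beacon output seeds the choice of
# the next signing group, whose (here simulated) group signature is hashed
# to become the following beacon value.

ALL_NODES = [f"node-{i}" for i in range(1000)]
GROUP_SIZE = 100

def next_round(beacon: bytes, round_index: int) -> bytes:
    # The previous beacon deterministically selects the next group.
    rng = random.Random(int.from_bytes(beacon, "big"))
    group = rng.sample(ALL_NODES, GROUP_SIZE)
    # Simulated group signature over the round index; in the real protocol
    # this is the unique BLS threshold signature of the committee.
    sig = hashlib.sha256((",".join(sorted(group)) + str(round_index)).encode()).digest()
    return hashlib.sha256(sig).digest()  # the next beacon value

beacon = hashlib.sha256(b"genesis").digest()
for r in range(3):
    beacon = next_round(beacon, r)
```

Because every step is a deterministic function of the previous beacon, any observer can replay the chain of selections and check that each group was the legitimately chosen one.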

[Image: threshold_relay]
Hate to use this again, but it illustrates exactly what the random beacon is and can be used for.  It ties together both the block chain and the threshold relay chain, which is why I focused so heavily on it in this post.  However, a decentrally agreed-upon source of verifiable randomness could be used for a whole host of things.

So what?

But why does this matter?  So we have a different way of selecting a “maintainer” of our repo; who cares?  Well, firstly, this is much more computationally, and therefore economically, efficient than proof of work.  We’ve all heard the stories of Bitcoin mining using more power than Ireland.  Message signing is a constant-time operation, while proof of work is anything but constant.  There are also claims of empty blocks being mined on the Ethereum block chain so that the miner can get the block out in time and receive the block reward.

The threshold relay also allows for faster block times, as block time is simply a system parameter to be tweaked, rather than being dependent on the peculiarities of the crypto-economics of proof of work (see the BCH “emergency” difficulty adjustment).

Perhaps most importantly, however, Dfinity has devised a way to achieve near-instant finality using what they call notarizations.  This is unheard of in the block chain space.  Even if Ethereum manages to roll out proof of stake, it will still be hampered by lengthy finality times, due to the fact that an adversary could theoretically withhold a longer mined chain (this is why you have to wait for X confirmations before your balance shows up on exchanges, by the way).  In Dfinity, this is not possible.

Conclusion

Note that this is an extremely simplistic view of the Dfinity protocol (I’ve left things like block notarization out), but I didn’t want to inundate readers with complex explanations and math proofs.  I understand, however, that Dfinity must go through this pedantry in a white paper, particularly to defend the block speeds they’re claiming to achieve.

Given how important BLS and threshold signatures are to this protocol, I intend to write a second post teasing apart this cryptography in more detail, not only for my own benefit, but also for my readers.  Until then!

*Dfinity defines uniqueness in the white paper as: “A signature scheme is called unique if for every message and every public key there is only one signature that validates successfully.”  This property applies to single-signature schemes and threshold signature schemes alike.
