The Dfinity Consensus White Paper

Dfinity released the first of what they claim will be many white papers last week.  This particular white paper is centered on the consensus protocol.  If Dfinity is in fact planning to release many white papers, it makes sense that they would release the consensus white paper first, as the consensus layer is the foundation for any other innovations that will come from the larger Dfinity tech stack.  The core of Dfinity’s biggest consensus innovation is the threshold relay, which uses BLS cryptography and is laid out in this consensus white paper.  This post is intended to be a very broad overview of the consensus white paper. However, I intend to do a follow-up post detailing BLS cryptography and threshold signatures, the main drivers of innovation in this white paper.

Verifiable Random Function

Let’s begin with the Verifiable Random Function (VRF), as this is the smallest building block of the Dfinity protocol.  A VRF is, very simply, a pseudo-random function that provides publicly verifiable proofs of its outputs’ correctness.  If we recall from my previous post “The Blockchain from a Git Perspective,” I pointed out that from a git perspective, consensus is simply randomizing the selection of the maintainer of the “repo”, where the repo is the block chain.  I then made the claim that proof of work mining is simply a method for distributing the amount of time a node on the network is allowed to be the maintainer of the repo.  But this raises the question hundreds of engineers have asked since Bitcoin: what if, rather than competing to act as the maintainer of the repo by burning electricity to secure the block chain, there were some other method of randomly selecting the maintainer of the repo?  Enter the VRF.

What if there were a way to randomly select a maintainer of the repo without relying on a proof of work competition?  Let’s say every block included the name of the maintainer of the next block, but no one could guess which name would be chosen, not even the current maintainer, until the block was created.  At a very basic level, this is what the VRF enables.  However, rather than the name of a single maintainer being randomly selected, the VRF is used to randomly select a group, which can then be used to randomly select the group after that, and so on.
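
To make the interface concrete, here is a minimal sketch of what a VRF looks like in code.  This is purely illustrative scaffolding of my own, not Dfinity’s construction: the method bodies are left abstract, because instantiating them securely requires public-key cryptography (Dfinity gets this from BLS signatures, whose uniqueness lets the hash of a signature serve as the random output and the signature itself serve as the proof).

# An illustrative sketch of the VRF interface only; the bodies are
# deliberately left abstract, since a secure instantiation needs
# public-key cryptography.
class VRF:
    def keygen(self):
        """Return a (secret_key, public_key) pair."""
        ...

    def evaluate(self, secret_key, alpha):
        """Return (beta, proof): a pseudo-random output beta for the
        input alpha, plus a proof that beta was computed correctly."""
        ...

    def verify(self, public_key, alpha, beta, proof):
        """Anyone holding only the public key can check that beta
        really is the one and only VRF output for alpha."""
        ...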

[Image: threshold_relay]
A broad overview of the Dfinity block chain.  Here the VRFs are the little red rectangles “Rand i – 1”, “Rand i”, “Rand i + 1”, etc.  The outputs of the VRF aid in randomly selecting each group: “Group i”, “Group i + 1”, “Group i + 2”, etc.

If we can devise a method for a decentrally agreed-upon VRF, it should be relatively easy to randomly select a miner without the need for proof of work.  In the Dfinity protocol, the VRF is triggered every block, using BLS cryptography, to produce a new output.  This per-block VRF is dubbed the random beacon and has various use cases in the Dfinity block chain.

Threshold Relay

This hypothetical decentralized VRF is all well and good, but how is it achieved in practice?  Recall, it must be trustless the same way proof of work is.  This is where Dfinity has made a major breakthrough.  They’re using BLS rather than RSA or ECDSA, due to the fact that BLS has a unique* threshold version, as well as a distributed key generation scheme for this unique threshold version.  This allows a group signature to be produced, and to be valid, once a threshold of private key holders have signed the message.

A brief example.  Let’s say 100 nodes are randomly selected to partake in the generation of the next random beacon value, and the threshold has been set to 51.  This means that after 51 of the 100 randomly selected nodes have signed the message, the system will generate the next random beacon value for the entire network.

[Image: selected_nodes]
A screenshot of selected nodes in the Dfinity network.  For our example, green nodes are the nodes selected by the previous random beacon to sign the current random beacon, and there are 100 of them.  The grey nodes are all nodes in the Dfinity network.  Of the 100 green nodes, 51 would have to sign the message to propagate the next random beacon value to the rest of the network.

What’s amazing about this process is that it doesn’t matter which 51 of the 100 nodes sign the message: it will always produce the same random output, and this random output can always be verified for correctness.  The random beacon output generated in our toy example is then used by the system to randomly select the next 100 nodes to generate the next random beacon value, ad infinitum.  This is called the Threshold Relay.
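
I’ll save the BLS details for the follow-up post, but the “any 51 of the 100” property is easy to demonstrate with Shamir secret sharing, the polynomial trick that underlies threshold schemes in general.  The Python sketch below is a toy stand-in of my own: it shows any 51 of 100 shares reconstructing the identical value via Lagrange interpolation.  It is not an actual threshold signature (real threshold BLS performs this interpolation “in the exponent” over signature shares), but it captures why the choice of subset doesn’t matter.

import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def make_shares(secret, t, n):
    # A random polynomial of degree t - 1 with the secret as its
    # constant term; share i is the point (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term,
    # no matter which t shares we were handed.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, t=51, n=100)
# Two different random subsets of 51 shares yield the identical value.
assert reconstruct(random.sample(shares, 51)) == 123456789
assert reconstruct(random.sample(shares, 51)) == 123456789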

Now that we have a trustless, agreed-upon method for generating randomness in the block chain, it’s a simple matter of using this random value to do various things on the block chain, such as selecting block makers, or selecting a random subset of nodes for the next round’s random beacon generation (the 100 randomly selected nodes in our example).
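
As a hedged sketch of that last use case: once every node has the beacon output, committee selection can be as simple as seeding a deterministic shuffle with it, so every honest node derives the identical group with no further communication.  The node registry and group size below are made up for illustration.

import hashlib
import random

def select_group(beacon_output, all_nodes, group_size=100):
    # Seeding a PRNG with the beacon value makes the "random" choice
    # deterministic: every node computes the same committee.
    seed = hashlib.sha256(beacon_output).digest()
    rng = random.Random(seed)
    return rng.sample(sorted(all_nodes), group_size)

nodes = [f"node-{i}" for i in range(10_000)]
committee = select_group(b"random beacon output for round i", nodes)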

[Image: threshold_relay]
Hate to use this again, but it illustrates exactly what the random beacon is and can be used for.  It ties together both the block chain and the threshold relay chain, which is why I focused so heavily on it in this post.  However, a decentrally agreed-upon source of verifiable randomness could be used for a whole host of things.

So what?

But why does this matter? So we have a different way of selecting a “maintainer” of our repo, who cares?  Well firstly, this is much more computationally, and therefore economically, efficient than proof of work.  We’ve all heard the stories of Bitcoin mining using more power than Ireland.  Message signing is a constant-time operation, while proof of work is anything but.  There are also claims of empty blocks being mined on the Ethereum block chain just so the miner can get the block out in time and receive the block reward.

The threshold relay also allows for faster block speeds, as block time is simply a system parameter to be tweaked, rather than dependent on peculiarities of the crypto economics of proof of work (see the BCH “emergency” difficulty adjustment).

Perhaps most importantly, however, Dfinity has devised a way to achieve near-instant finality using what they call notarizations.  This is unheard of in block chain.  Even if Ethereum manages to roll out proof of stake, it will still be hampered by lengthy finality times due to the fact that an adversary could theoretically hide a longer mined chain (this is why you have to wait for X confirmations before your balance shows up on exchanges, by the way).  In Dfinity, this is not possible.

Conclusion

Note that this is an extremely simplistic view of the Dfinity protocol (I’ve left things like block notarization out).  But I didn’t want to inundate readers with complex explanations and math proofs.  I understand, however, that Dfinity must go through this pedantry in a white paper, particularly to defend the block speeds they’re claiming to achieve.

Given how important BLS and threshold signatures are to this protocol, I intend to write a second post teasing apart this cryptography in more detail, not only for my own benefit, but also for my readers.  Until then!

*Dfinity defines uniqueness in the whitepaper as: A signature scheme is called unique if for every message and every public key there is only one signature that validates successfully. This property applies to single signature schemes and threshold signature schemes alike.

Bitcoin is not Cryptocurrency

“4 Strategies for Investing in Bitcoin”

  1. Don’t

With the recent rise in Bitcoin prices, everyone has taken notice.  Even this guy:

[Image: hqdefault]
My favorite part of the video I watched recently was when he said: “Ether, or Ethereum, Ethereum is the more scientific name for Ether”

Because of this, I’m getting a lot of questions about Bitcoin, but almost none about cryptocurrency.  People are having a hard enough time wrapping their heads around the technology of Bitcoin that they’re not even bothering to ask about the larger cryptocurrency ecosystem.  They’re also equating Bitcoin to cryptocurrency.  Bitcoin is a cryptocurrency, yes, but Bitcoin is not the cryptocurrency.  This may seem obvious, and indeed it is.  However, people are conflating the price and notoriety of Bitcoin with the tech of Bitcoin, thinking, “since Bitcoin has the highest market cap, surely it must have the best technology, and the most utility, therefore, whatever Jackson says about Bitcoin must hold at least somewhat for the entirety of the cryptocurrency market”, which is patently false.

An example.  Someone recently told me they asked their financial advisor about buying Bitcoin, and he told them it was a ponzi scheme and to steer clear. I think this financial advisor came to the right conclusion, but probably for the wrong reasons.  If you’d asked him what he thinks about cryptocurrency in general he probably would’ve said the same thing, but that’s because his only frame of reference for cryptocurrency is Bitcoin.  Bitcoin may very well be a “ponzi scheme” (I don’t know if it is or isn’t, so don’t ask me) but that doesn’t say anything about cryptocurrency generally.

But I’ve also had numerous discussions with engineers who view it the same way.  Many call Bitcoin digital gold, which is one step removed from real gold, of which the famous Warren Buffett has this to say:

The second major category of investments involves assets that will never produce anything, but that are purchased in the buyer’s hope that someone else – who also knows that the assets will be forever unproductive – will pay more for them in the future. Tulips, of all things, briefly became a favorite of such buyers in the 17th century.

This type of investment requires an expanding pool of buyers, who, in turn, are enticed because they believe the buying pool will expand still further. Owners are not inspired by what the asset itself can produce – it will remain lifeless forever – but rather by the belief that others will desire it even more avidly in the future.

The major asset in this category is gold, currently a huge favorite of investors who fear almost all other assets, especially paper money (of whose value, as noted, they are right to be fearful). Gold, however, has two significant shortcomings, being neither of much use nor procreative. True, gold has some industrial and decorative utility, but the demand for these purposes is both limited and incapable of soaking up new production. Meanwhile, if you own one ounce of gold for an eternity, you will still own one ounce at its end.

What motivates most gold purchasers is their belief that the ranks of the fearful will grow. During the past decade that belief has proved correct. Beyond that, the rising price has on its own generated additional buying enthusiasm, attracting purchasers who see the rise as validating an investment thesis.

As “bandwagon” investors join any party, they create their own truth – for a while. Over the past 15 years, both Internet stocks and houses have demonstrated the extraordinary excesses that can be created by combining an initially sensible thesis with well-publicized rising prices. In these bubbles, an army of originally skeptical investors succumbed to the “proof” delivered by the market, and the pool of buyers – for a time – expanded sufficiently to keep the bandwagon rolling. But bubbles blown large enough inevitably pop. And then the old proverb is confirmed once again: “What the wise man does in the beginning, the fool does in the end.”

Today the world’s gold stock is about 170,000 metric tons. If all of this gold were melded together, it would form a cube of about 68 feet per side. (Picture it fitting comfortably within a baseball infield.) At $1,750 per ounce – gold’s price as I write this – its value would be $9.6 trillion. Call this cube pile A.

Let’s now create a pile B costing an equal amount. For that, we could buy all U.S. cropland (400 million acres with output of about $200 billion annually), plus 16 Exxon Mobils (the world’s most profitable company, one earning more than $40 billion annually). After these purchases, we would have about $1 trillion left over for walking-around money (no sense feeling strapped after this buying binge). Can you imagine an investor with $9.6 trillion selecting pile A over pile B?

Beyond the staggering valuation given the existing stock of gold, current prices make today’s annual production of gold command about $160 billion. Buyers – whether jewelry and industrial users, frightened individuals, or speculators – must continually absorb this additional supply to merely maintain an equilibrium at present prices.

A century from now the 400 million acres of farmland will have produced staggering amounts of corn, wheat, cotton, and other crops – and will continue to produce that valuable bounty, whatever the currency may be. Exxon Mobil will probably have delivered trillions of dollars in dividends to its owners and will also hold assets worth many more trillions (and, remember, you get 16 Exxons). The 170,000 tons of gold will be unchanged in size and still incapable of producing anything. You can fondle the cube, but it will not respond.

Admittedly, when people a century from now are fearful, it’s likely many will still rush to gold. I’m confident, however, that the $9.6 trillion current valuation of pile A will compound over the century at a rate far inferior to that achieved by pile B.

I can’t blame Mr. Buffett for not investing in Bitcoin.  However, he’s recently said he was wrong about Google and Amazon.  Buffett isn’t a technologist, he’s a financier.  He also claims he only invests in things he understands.  I don’t blame him, or many other people, for not understanding this tech and its nuance.

However, I tend to agree with Mr. Buffett, and the financial advisor, with regard to Bitcoin.  Bitcoin may very well be a ponzi scheme in as much as gold is.  Bitcoin core is positioning itself as a digital gold; as such, if you don’t think gold is a solid investment, you probably shouldn’t invest in Bitcoin, using the logic given above by Buffett.  But this does not mean that all cryptocurrencies are positioning themselves as digital gold, or that you shouldn’t invest in any cryptocurrencies.*

Also, while I’m at it: too many people who haven’t investigated the cryptocurrency ecosystem are treating arguments against Bitcoin as arguments against cryptocurrency in general.  What may hold true for Bitcoin in an argument does not necessarily hold true for all cryptocurrencies.

Building Blocks

Speaking as a software engineer, we’re trained to crystallize logical concepts, and then use these crystallized logical structures to build more logical structures on top of the existing ones, and with the block chain, we’re doing just that.  Block chain engineers are extrapolating from Bitcoin’s block chain structure, building new things with it every day.  One of the breakthroughs for me when learning about cryptocurrency was when I investigated Ethereum and saw the inferences made from Bitcoin’s original block chain.  Once this connection was made I started to realize the possibilities were vast, not just internal to the Ethereum block chain as many before me have already stated, but with the block chain concept as a whole.  We’re still trying to figure out which building blocks fit where, which are useless, and which will form the backbone of this new landscape.  But that’s technology.  There will be many more innovations to come, as this technology is very much still in its infancy.

Yes, we’re in a bubble

Yes, we’re in a cryptocurrency bubble, and everyone is trying to cash in on the craze.  But this doesn’t mean there isn’t still true innovation and disruption happening in computation in this space.  And what’s wrong with a bubble, anyway?  Look at the beautiful infrastructure and technology the last tech bubble gave us.  The cryptocurrency bubble is turning heads, just as the dot-com bubble did before it, and luring brilliant engineers into its fold with these absurd valuations.  Yes, a lot of people are going to lose a lot of money, but a lot of people have made, and will continue to make, a lot of money, just as they did during the dot-com bubble, and I don’t think there is anything wrong with that.

*note: this also doesn’t mean that Bitcoin’s only use is as a digital store of value, but for the sake of keeping this digestible for beginners, I won’t go there.  In fact, the only reason I bring this up is because I know advanced readers from the Bitcoin community are going to pitch a fit about this.  Yes, most of us understand you’re purchasing access to the btc network, but with all the price speculation it remains to be seen whether access to this network is priced according to its value.

My First Crypto Paycheck

Here is a fifth post written in my series of articles for ETHLend, an Ethereum-based lending startup.  Enjoy!

ETHLend

As some of my followers may know, I write for the Ethereum startup ETHLend. I started out blogging about cryptocurrency on the weekends because it is a fascinating subject and there is always something to write about. I felt the best way to learn about the subject was to try to explain it to non-technical readers. Often, the hardest people to convince of this tech are non-technical readers. The technical readers have either investigated the tech themselves and have made up their minds how they feel about it, or, if they haven’t, they aren’t going to come to my blog for their information, at least not initially. For this reason, a majority of my posts are geared toward non-technical readers (with a few exceptions 1, 2, 3, 4). This was mostly a weekend hobby, but when I heard ETHLend was looking for technical writers to write about the development they were doing, I jumped at the chance to get paid for a weekend hobby. I quickly got in touch with Jordan, and we started the discussions with Stani on the ETHLend slack channel.

My “Job”

The majority of my duties involve explaining the breakneck development that the ETHLend dev team is doing in as simple terms as possible, as well as positing thought experiments with regard to where this tech could possibly lead and implementing these ideas in experimental code. This has been an absolute blast, and has allowed me to approach the decentralized ecosystem from the perspective of a lending platform. It’s one thing to read about a Solidity library and follow someone else’s tutorial; it’s another to actually explore it through concrete use.

LEND

One of the most fascinating aspects of working with an Ethereum startup pre-ICO was that I got a bit of insight into the world of ICOs that I otherwise wouldn’t have. During my negotiation with Stani before I started writing for ETHLend, we agreed that payment would happen completely in LEND. Since these tokens were pre-ICO, they hadn’t hit any of the exchanges and had no valuation. To put this into perspective, 1K LEND is equal to $150 now, but at the time of negotiation, neither Stani nor I had any idea what these tokens would actually be worth. The ultimate skin in the game.

[Image: ETHLendPrice]

As crazy as it may sound to some, this was one of the largest motivators, if not the largest, for me to accept the position. I felt I was following in the footsteps of many in the crypto community who came before me, like Vitalik Buterin and Andreas Antonopoulos. The legend of being paid in crypto appealed to me.

I also find it fascinating that a protocol can create its own currency, which it can use to pay its members and sustain itself. This is one of the most important disruptive aspects of cryptocurrency.  Cryptocurrency will be around for a long time because an entire digital economy can be built from software.  This gives software engineers an avenue to monetize open source projects, which incentivizes more developers to go the open source route, driving the creation of more digital micro-economies.

It’s also ruthlessly free market. If the market decides this token (and therefore the protocol, or software backing it) is useless, then the project dies. This is why my excitement at being paid completely in crypto may seem foreign to some, as there is obviously a large amount of risk involved. An agreement to be paid 100 LEND per word can just as easily be worth $100 as it can $.01 after the ICO.

Thank You

I’ve recently received my first payment of LEND tokens for my writings, and for that, I am extremely grateful to the ETHLend team for all of the hard work they’ve put behind the project to give the tokens value in the decentralized community. I’m grateful for the opportunity to join the ranks of writers paid fully in cryptocurrency. I’m also grateful to be a part of this new version of the decentralized economy. Finally, I’m grateful to everyone who’s read any of my material, argued with me, debated me, and most importantly asked questions. Here’s to another year of cryptocurrency!

Jackson

Buying the Dip, or, using ETHLend as a P2P Margin Trading Instrument

Here is a fourth post written in my series of articles for ETHLend, an Ethereum-based lending startup.  Enjoy!

Introduction

When I first joined ETHLend, I mentioned to my coworkers over lunch that I was joining an Ethereum startup as a writer. Being engineers, they were intrigued and wanted to know more about the project, the Ethereum block chain, and what problems ETHLend was aiming to solve. I had just finished this article comparing ETHLend with Prosper, so I explained the use cases of ETHLend from that context. We all thought it was an interesting idea and moved on to other conversation. However, with the recent large influx of new crypto investors, and the bloodbath in crypto today, it got me thinking. Perhaps the most interesting use case for ETHLend is the ability to get quick leverage when the market dips, rather than as a decentralized replacement for micro loans.

Good Luck Buying the Dip

I’ve been in cryptocurrency since Bitcoin was $500, so I know, like many of us do, that this extreme market correction isn’t abnormal. With this correction in prices, I’m interested in buying the dip. I knew the dip was coming eventually, so I pre-emptively initiated a transfer from my bank last week. The funds won’t be available in my exchange until December 28th. Good luck buying the dip. Good luck making any sort of agile investments when this is the case.

Indeed, with the influx of new investors, the worst news to break to them is that they probably won’t be able to buy any cryptocurrency for at least another week, and that’s if they’re lucky. I’ve seen people have to sign up to numerous exchanges, due to the different KYC laws each exchange abides by, in hopes of getting approved on even one. Add to the mix the unique bugs in each exchange’s on-boarding pipeline, and the whole thing is a bit of a nightmare for a new cryptocurrency investor, particularly when a dip like yesterday’s happens.

The Two Week Fiat-Crypto Border

While sitting in the airport yesterday, watching the market correct, and wishing my transfer was complete, I started thinking about ETHLend. Maybe ETHLend’s use case isn’t an Ethereum version of Prosper, but rather some kind of internal block chain leverage instrument. With this massive delay in getting funds into an exchange account, it’s almost like there is a two-week-wide border separating traditional assets and crypto assets. It may be that ETHLend’s primary use case is as an investment leverage instrument for the crypto side of this fiat-crypto border, more so than transparent lending. Don’t get me wrong, transparent lending is one obvious advantage, but sometimes big breakthroughs are more nuanced than the primary goal of a startup.

An example. The market corrected. Because of this, I’m interested in buying more crypto assets. Tokens have gone on sale. I tell my bank to send funds to an exchange in preparation for my discount holiday shopping spree. Two weeks later my funds arrive…and the stores are all closed.

What if, instead of requesting funds from my bank, which moves at a snail’s pace (this is why we’re all so excited about this technology in the first place, isn’t it?), I simply place a loan request against my current token holdings? Maybe I’m certain Dash will recover within the next month and I’m holding 500 BAT. I request a loan to get .2 ETH, or even better, 125 Tether (ETH seems to be a part of the correction, sadly), which I can then use to purchase the Dash I so badly covet. Let’s say the loan lasts for 1 month. If, as I expect it will, the Dash price recovers, I pay the agreed-upon interest on the loan, the lender is happy, and I’m happy because I got my Dash at the holiday discount. If instead I’m wrong, and after a month the price continues to drop, the lender keeps my 500 BAT as collateral. In either case, I still get my Dash.

Conclusion

Maybe ETHLend really shines as a peer-to-peer “bank” with which quick crypto leverage is achieved. It can still have its more obvious use case as a decentralized Prosper, but speaking as an investor, it sure would’ve been nice to have been able to buy this dip. I’ve yet to use my LEND tokens on the ETHLend platform, but I know the next time the market corrects, I won’t be heading to my lethargic bank, but rather to ETHLend.

External Price Feeds in ETHLend using Oraclize

Here is a third post written in my series of articles for ETHLend, an Ethereum-based lending startup.  Enjoy!

ETHLend is an Ethereum-based lending platform that aims to unbank lending and offer blockchain liquidity backed by crypto assets. In short, ETHLend does this by allowing borrowers to put different types of tokens up as collateral for pseudonymous and transparent borrowing and liquidity. While the idea is simple, in practice the engineering necessary to provide a rich set of features to the user is non-trivial, since the Ethereum infrastructure is still in its infancy. Today we’ll investigate one of the challenges.

Price Feeds

When a user provides a token as collateral to the ETHLend smart contract, the question is: what is this collateral really worth, and who gets to decide? Is it the lender, or the borrower? Or someone else entirely?

As an example, perhaps I would like a loan of 10 ETH for my Sunday Lambo shopping. I’m willing to give you 10 DOGE as collateral for this loan. Obviously this is an extreme example, and I would hope no lenders would be willing to take such a laughable loan, however, what about a loan of 1 ETH for 45 OMG?

It becomes clear that we need a way of accessing data outside the ethereum network in order to have an agreed upon standard for fiat, ether, and token values. Rather than letting any one person decide the value, we can allow the market to decide. But how are we going to get this data? It isn’t readily available to the Ethereum blockchain.

Oraclize

Many in blockchain are already familiar with the phenomenal Oraclize service. For those not familiar, a brief explanation from Oraclize themselves:

“Oraclize is a service offering a set of tools and APIs aiming to enhance the power of smart contracts. By providing access to both on-chain and off-chain data, Oraclize allows to find an answer to any query your contract may have.”

Essentially, what Oraclize does is empower smart contracts to access information outside of the blockchain in a trustless way. Want to pay a user ether every time they tweet your hashtag? Oraclize makes this possible. Want to attach a bounty to your Stack Overflow question? Oraclize makes this possible. The possibilities with the brilliant Oraclize service are endless.

Tying Them Together

ETHLend has developed an Oraclize-based smart contract that allows loans to be requested in USD but transacted on the Ethereum blockchain, using the Kraken API for the exchange rate. Internally, ETHLend’s contract keeps the USD/ETH price updated every minute through the EthTicker method. It then exposes this publicly to dApp developers through a call to the smart contract’s “ethPriceInUsd” variable. Any time a user would like to create a loan in USD, or even just get the exchange rate, this value can be readily retrieved, and is accessible to all ETHLend participants as the agreed-upon value. Obviously this contract can easily be extended to any fiat currency currently trading against ETH, as well as any number of ERC20 tokens.
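
As a hedged illustration of what this looks like from the dApp side, here is how a Python client using web3.py might read that exposed variable. The contract address and ABI fragment are placeholders of my own, not ETHLend’s actual deployment details.

from web3 import Web3

# Placeholder values for illustration only; substitute the real
# ETHLend contract address and ABI.
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"
ABI = [{
    "name": "ethPriceInUsd",
    "type": "function",
    "constant": True,
    "inputs": [],
    "outputs": [{"name": "", "type": "uint256"}],
}]

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI)

# Read the Oraclize-maintained USD/ETH price from the contract.
eth_price_in_usd = contract.functions.ethPriceInUsd().call()
print(eth_price_in_usd)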

Edge Cases

There are, and will continue to be, a number of different edge cases that may arise in the lending cycle, and ETHLend is continually closing the gap on these. Let’s take a few short toy examples to illustrate how this external price feed comes in handy.

How do we know if the borrower has enough value in the tokens they’ve offered as collateral for a loan? Easy: have the smart contract check the exchange rate for the collateral against the requested loan amount, ensuring the borrower and lender are copacetic on the value proposition.

What about when the loan has been filled and the collateral drops or rises in value before the loan is repaid? Crypto is highly volatile, after all. With the current ETHLend smart contract, since the loan was presumably created in USD, the volatility of the underlying cryptocurrency is irrelevant. A loan for 500 USD today may involve .625 ETH, and at the time of repayment, 50 ETH; however, as long as the two parties are comfortable treating USD as the pegged currency, they are happy with the outcome. With this new smart contract, they now have the ability to do so, should they choose.
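
To tie both toy examples together, here is a hedged Python sketch of the two checks. The function names are my own illustration, not ETHLend’s actual contract interface; in practice this logic would live on-chain, with prices pulled from the Oraclize feed.

def collateral_covers_loan(collateral_amount, collateral_price_usd,
                           loan_amount_usd, min_ratio=1.0):
    # Is the market value of the pledged tokens at least the value
    # of the requested loan (times any over-collateralization ratio)?
    return collateral_amount * collateral_price_usd >= loan_amount_usd * min_ratio

def repayment_in_eth(loan_amount_usd, eth_price_usd_at_repayment):
    # A USD-pegged loan: the ETH owed floats with the exchange rate.
    return loan_amount_usd / eth_price_usd_at_repayment

# A 500 USD loan is .625 ETH when ETH trades at $800...
assert repayment_in_eth(500, 800) == 0.625
# ...and 50 ETH if ETH has crashed to $10 by repayment time.
assert repayment_in_eth(500, 10) == 50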

Conclusion

As I said, even a simple concept such as lending on the blockchain is actually fraught with different edge cases. Because of this, ETHLend is always on the lookout for talented blockchain engineers to contribute!

As we saw, having a way to value currencies in a loan market is of utmost importance for decentralized lending. With Oraclize, ETHLend has this power, and is able to give a much richer set of functionality to users than it could without Oraclize, making the loan process even more transparent in the process.

The Internet vs Itself

My writing with ETHLend often has me diving into uncharted technology.  Often I’m doing things that have never been done before, such as connecting ETHLend to uPort.  This gives me a unique perspective into the daily lives of decentralized developers.  At the same time, I work for a very large and customer-centric tech company.  The mesh of these two perspectives has led me to ponder the following question: is the first-to-market advantage of centralized user experience too powerful for decentralized tech to overcome?

Web3.0

Ethereum is often touted as “web3.0”.  It’s true that what decentralized technologies are trying to accomplish is a magnificent and gargantuan task.  It’s also true that using Ethereum and other decentralized software feels like using the internet in the early days of the web.  Back then you often had to perform what seemed like mysterious incantations, without much idea as to what they were actually doing.  Obviously most cryptocurrency feels very much the same way, with things like the iota wallet showing no balance (1,2,3,4,5,6) and the DAO hack.  This makes for an easy comparison to the early web; however, I argue that the comparison is a bit too easy.

User experience on the centralized web is miles ahead of where it was originally, which makes it miles ahead of its decentralized counterparts.  I don’t see regular users leaving the comforts of their Mercedes to jump back on the horse and buggy of decentralized user experience.  And, more importantly, I believe that this analogy will always be valid.

This is not to say that evangelists and technocrats won’t fully embrace this tech in much the same way as the early internet was embraced.  This is because most of us understand what’s happening behind the mask of UI and are much more willing to forgive technical mistakes or ineptitudes than the average user.  My concern is not that the tech won’t be adopted, but that the adoption will never grow above a certain percentage of the population.

The Internet vs Itself

As stated above, it’s easy to look to the technological growth of the internet as an example for decentralized tech to emulate. The problem with this perspective is that the internet didn’t have to compete with itself.  The internet competed with print, and I would argue that, until the user experience felt effortless, it fought an uphill battle in much the same way decentralized tech is fighting now.  The problem is, the current state of user experience on the internet is so far ahead of its decentralized counterparts, and will continue to outpace their growth.

Does UX matter?

One could argue that user experience isn’t everything.  The needs decentralized technologies are attempting to fill are not necessarily motivated by user experience (although, I would argue the motivations of a technology should always be driven by user experience, but that’s a different discussion for a different day).  However, in order to achieve mass adoption, a superior, or at least equal, decentralized user experience must be achieved.

Put yourself in a layman’s shoes.  They don’t understand what the decentralized tech is trying to achieve; they just know it is or isn’t working as well as the centralized version, and will therefore go back to the centralized alternative when it isn’t.  Take the case of Steemit.  They’re paying people to use the platform, but the UX is lagging so far behind that of Reddit that it doesn’t matter.  People still continue to stay on Reddit, and, even after trying Steemit, will return to Reddit.

Engineers Are Users Too

This leads me to my final point.  Engineers are users too.  Building good, clean, maintainable software, such as Reddit, is a difficult enough job as it is.  Software developers don’t need to make their difficult jobs any more difficult than they already are, which is exactly what they’d be doing by opting for decentralized tooling.  The current state of affairs with respect to infrastructure surrounding the modern web makes software development a pleasant experience.  The same cannot be said for the decentralized tool kit.  I hope this changes in the future, but I fear that, much like user experience, the tooling of the modern web is so far ahead, and will continue to outpace its decentralized counterparts, that this will never happen.

This is in large part due to the Pareto-law nature of technology.  The decentralized web must compete with the centralized web.  The centralized web is an internet of billions of dollars in capital, allowing it to hire millions of software engineers to work on even the most minute details of its infrastructure, while the decentralized alternatives have a hundred thousand at most, scattered about the globe, working for less, and often for free.  Don’t get me wrong, I commend their efforts, and count myself as one of them, considering I spend my weekends writing about this tech.

Cryptocurrency

With cryptocurrency, however, more and more funding is being poured into the decentralized alternatives, which is why it’s even possible to argue against centralization right now.  This is actually my main motivation for investing in cryptocurrency.  I invest not because I’m interested in buying a lambo, but because I know the only way to spearhead this technology is to put capital into it and see where that takes us.  It remains to be seen if this funding can ever eclipse the centralized counterparts, however.

[Image: mw-fo937_bitcoi_20170621155338_mg]
We’ve all seen these little infographics showing how cryptocurrency actually stacks up against companies like Apple and Amazon.  Granted, this one is dated, but, even with the unprecedented surge in bitcoin price, all of cryptocurrency still doesn’t match Amazon’s market cap.

Conclusion

This is why I won’t be quitting my day job any time soon to join the decentralized army full time.   As an engineer I firmly believe that decentralized technology is a more robust design than the centralized alternatives, but the cat is out of the bag: users have grown too accustomed to having their data right now.  Don’t get me wrong, the engineering involved in bringing centralized software to fruition is absolutely brilliant, and has taken decades to perfect.  I believe decentralized tech will get there one day.  But when that day comes, the bar for user experience will have been moved still higher by centralized technology.

Lessons learned from an ICO

Background

A few months ago I dabbled in Solidity development.  I’m interested in all things cryptocurrency, and Ethereum smart contracts appear to be a very large aspect of the current state of cryptocurrency.  As such, I had no choice but to experiment with the technology.  This experimentation involved using the truffle.js framework to follow the Ethereum Foundation tutorials.

During initial smart contract development it’s common to use the testrpc client, as this speeds up deployment times, allowing for quicker code iterations. There is also less complexity in this chain, so less can go wrong and you can focus explicitly on development.  However, given that the eventual goal is deploying smart contracts to the main net, eventually you begin to turn your attention to the Ethereum test networks.  After testing my contracts using testrpc, my next step is typically to test on the parity development chain, which, similar to the testrpc client, has less complexity but is an actual blockchain.  After testing on the parity dev chain, I move to the kovan or ropsten test networks.  At the time the ropsten network was under attack (and it looks like it is again), so my only choice for testing was kovan.

Kovan

This is where I began to have problems in my development and eventually moved on to other aspects of technology (I think at the time it was react-native).  I was working through a tutorial to deploy a DAO-like smart contract, and everything was going well, until I attempted to deploy the contract to the kovan test network.

When I would deploy via the testrpc client, or even the parity dev chain, the contract would deploy successfully and I could interact with it via javascript.  However, on kovan the deploy would hang, and then after a long period of time, would fail with this cryptic error:

Error encountered, bailing. Network state unknown. Review successful transactions manually.
    Error: Contract transaction couldn't be found after 50 blocks

At the time I didn’t really understand what was going on behind the scenes with regard to addresses and transactions.  I knew that I had to sign the transaction in parity, but I had already deployed on the parity development chain, created two different accounts on the kovan chain (one email-verified in order to get some test ether), and had my main net accounts floating around in the parity files as well.  So I tried a number of different key files, unsure of which was correct; none of them were signing the transaction successfully, or if they were, there wasn’t enough ether in the wallet.

At the time I wasn’t even sure I was barking up the right tree, and given that I wasn’t actually trying to deploy to the main net anyway, I posted to stack overflow, never heard back, and shelved the issue.

Enter the ICO

A few weeks ago I posted about proof of stake, and through this post met the CEO of a blockchain startup.  I ended up proofreading their white paper, and kept in touch during their ICO.  During the pre-sale they were deploying their smart contract for the ICO and were getting the exact same error I was getting months ago.  Since this seemed to be a recurring error, rather than a one-off issue specific to me, I felt this warranted more investigation.

I went back to my original DAO smart contract that was giving me issues and tried deploying it again.  During this time the CEO made a number of observations eventually leading to a successful deployment of their smart contract.

Here are a few lessons to keep in mind when deploying smart contracts to both the main net and the test nets.

Don’t forget to sign your transaction in parity

Since it’d been a few months since my previous test deployment I had to remind myself how the process works.  When you run:

truffle migrate --network kovan

The transaction still needs to be signed by the address set in the truffle.js file for the kovan network settings.  If you forget this, truffle will just wait, and after a very long time finally fail with the cryptic error message mentioned above.

There’s no harm in trying different keys

As users we’ve been trained that after too many password attempts we’ll be locked out of our account.  In my case I had a bunch of different key files for the same address; if I’d just tried all of them, eventually I would’ve found the correct one.

Key management is paramount

I wouldn’t have needed to try every key file if I’d paid more attention to what I was doing with each key while creating test accounts.  As an Ethereum user this is important, but it becomes even more important as a developer, since sloppy key management can introduce inadvertent bugs, adding to the already high cognitive load associated with development.

Make sure you have enough ether to cover the gas costs

I remember this kept coming up while I was troubleshooting the issue a few months ago.  The first question asked in forums was always “what did you set your gas price to?” and “does the signing address have enough ether to cover this cost?”

Usually when you sign the transaction in parity it will allow you to set the gas price at that time, and will show you the current average gas price on the network.

[Image: Screenshot from 2017-09-29 19-47-03]

[Image: Screenshot from 2017-09-29 19-47-20]

You can also set the gas price explicitly in the truffle.js network settings.  For example:


kovan: {
  from: 'KOVAN ADDRESS',
  network_id: 42,
  host: 'localhost',
  port: 8545,
  gas: 4712388,
  gasPrice: 25000000000
}

would set the transaction to use 4712388 gas with a gas price of 25000000000 wei (25 gwei) on the kovan network.
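
One sanity check worth doing before deploying: the maximum fee a transaction can cost is simply gas × gasPrice.  A quick back-of-the-envelope calculation in Python:

# Maximum transaction fee implied by the settings above.
gas = 4712388
gas_price_wei = 25_000_000_000  # 25 gwei
max_fee_wei = gas * gas_price_wei
print(max_fee_wei / 10**18, "ETH")  # ~0.118 ETH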

Use the parity transaction viewer

[Image: Screenshot from 2017-09-29 19-53-17]

This was one of the jewels passed along to me by the CEO that I wasn’t aware of.  Parity comes with a bunch of dApps, and one of them is a transaction viewer.  This allows you to keep tabs on the state of your transactions, as well as view other pending transactions on the blockchain.  I believe this is what led the CEO to the gas price insight in the truffle.js file.

Use etherscan

At the end of the day, when I finally went to check whether my contract deployed successfully on the kovan network, I looked at all transactions made by my kovan address, and some of my transactions from months ago had actually made it onto the blockchain.  Truffle uses what it calls an “initial migration” followed by the actual contract deploy.  Some of my initial migrations made it into the blockchain, but not the actual contracts, until I sorted out the rest of the things discussed above.

This lesson is an obvious one.  Always check etherscan.  Although, sometimes this may add to the confusion.  Since the kovan network was sluggish at the time, it took a while for my transactions to show up; this, coupled with the cryptic truffle error, led me to believe absolutely nothing was happening.

Conclusion

Most of these are trivial to debug on their own, but when combined they lead to difficult debugging; add cryptic error messages to that, and it can often be difficult to break down what’s going wrong in a systematic way.  But this is brand new tech, and because of this, if you’re a developer, any help with these frameworks will speed up development iterations and in turn make this tech easier to work with for other developers in the future.

That’s all for now.  I’ve recently upgraded my miner to a new beta AMD blockchain driver, so I may pass along this bit of info in my next post.  Until then!

Proof Of Stake vs Proof Of Work

I’ve decided to write a post about the differences between proof of stake (a protocol currently being used by Neo and being worked on by Ethereum) and proof of work (a protocol made famous by Bitcoin, and currently in use by coins like ZCash and Monero).  I felt motivated to write this post because there seems to be a bit of confusion when I talk with people about the proof of stake protocol as to what exactly happens.  Many I’ve talked with seem to view it as creating money out of thin air (as if mining wasn’t that already), or at the very least as less secure than proof of work.

Proof of Work

I believe people feel more comfortable with proof of work because it’s the simpler of the two protocols.  The idea is this: Your computer is going to try billions of different inputs to a hash algorithm (it’s going to put in work), and if it comes up with the right output (it’s proved that it’s worked on the puzzle sufficiently), you’ll be rewarded. Here is an example proof of work algorithm from the Ethereum cryptocurrency tutorial:

// The coin starts with a challenge
bytes32 public currentChallenge;
// Variable to keep track of when rewards were given
uint public timeOfLastProof;
//Difficulty starts reasonably low
uint public difficulty = 10**32;

function proofOfWork(uint nonce){
    // Generate a random hash based on input
    bytes8 n = bytes8(sha3(nonce, currentChallenge));
    // Check that the hash satisfies the difficulty target
    require(n >= bytes8(difficulty));
    // Calculate time since last reward was given
    uint timeSinceLastProof = (now - timeOfLastProof);
    // Rewards cannot be given too quickly
    require(timeSinceLastProof >=  5 seconds);
    // The reward to the winner grows by the minute
    balanceOf[msg.sender] += timeSinceLastProof / 60 seconds;
    // Adjusts the difficulty
    difficulty = difficulty * 10 minutes / timeSinceLastProof + 1;
    // Reset the counter
    timeOfLastProof = now;
    // Save a hash that will be used as the next proof
    currentChallenge = sha3(nonce, currentChallenge, block.blockhash(block.number - 1));
}

If you were to mine this coin, you’d essentially send your input (nonce) to the proofOfWork function in this smart contract.  If your input hashes to a value that satisfies the difficulty check, and it’s been long enough since the last reward, you receive a reward; otherwise the function reverts (that’s what a failed require statement does in Solidity) and you try the next input you think might result in a sha3 hash that satisfies the current difficulty.  This is proof of work mining in a nutshell.
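
For the curious, here is a hedged Python sketch of what the miner’s off-chain search loop looks like against a contract shaped like this one.  Note that Solidity’s sha3 is actually keccak256, so hashlib’s sha3_256 below is only a stand-in showing the shape of the loop, not the exact hash the contract would compute.

import hashlib

def first_8_bytes_of_hash(nonce, challenge):
    # A stand-in for bytes8(sha3(nonce, currentChallenge)).
    data = nonce.to_bytes(32, "big") + challenge
    return int.from_bytes(hashlib.sha3_256(data).digest()[:8], "big")

def mine(challenge, difficulty):
    # Try nonces until one satisfies the contract's difficulty check
    # (this tutorial contract happens to require n >= difficulty).
    nonce = 0
    while first_8_bytes_of_hash(nonce, challenge) < difficulty:
        nonce += 1
    return nonce  # this is the input you'd submit to proofOfWork()

challenge = b"\x00" * 32
winning_nonce = mine(challenge, difficulty=2**63)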

Proof of Stake

Proof of stake has the same goal as proof of work: to achieve distributed consensus on the state of the blockchain.  Going back to the git perspective, both protocols are trying to select maintainers of the blockchain “branch” without allowing anyone too much control.  Proof of stake does this by substituting economic power for hash power.  The more coins you have, the more likely you, or the block you’ve chosen, are to be selected, and the more you’ll be rewarded for it.  I believe cryptocurrency developers are moving in this direction because, unlike proof of work, proof of stake has the added property that the more coins you’re holding, the more likely you are to act in solidarity with the will of the users of the blockchain when selecting blocks.  In proof of work there is a tension between miners and users of the blockchain that may not exist in a proof of stake protocol (this is yet to be seen), as often the users will also be the validators (a miner in proof of stake is often called a validator).  There’s also the added benefit that proof of stake doesn’t cost millions of dollars in power and bandwidth every year to maintain the blockchain.

Casper

Let’s use the Ethereum casper protocol as a detailed example for proof of stake, as this one seems to be getting so many people interested in what proof of stake is.

The casper protocol will involve a smart contract being deployed to the Ethereum blockchain.  An address interested in becoming a validator will send the amount of ETH they would like to stake on blocks to the smart contract.  The smart contract will then receive two messages from validator addresses, PREPARE and COMMIT.  PREPARE is essentially a validator saying “I think this set of transactions should be the next block”; if one of these blocks attains a 2/3 economic vote in the smart contract, it becomes a candidate for a COMMIT.  After the possible PREPARE blocks have been selected, validators vote on this set of blocks with the COMMIT message.  Once again, if a 2/3 economic vote is found for a COMMIT block, it will be added to the block chain, and all the validators who took part in selecting this block will be rewarded for minting the block in proportion to the amount of ETH they deposited to the smart contract when joining the validator pool.  As far as I’m aware, there doesn’t yet exist a mechanism for selecting validators*, but it could easily be something like a random subset selection of all possible validators, weighted by the amount of their deposit in each dynasty.
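
Here is a minimal sketch of the 2/3 economic-majority check described above, assuming we can see each validator’s deposit.  This is my own illustration of the deposit-weighted voting idea, not Casper’s actual implementation.

def has_two_thirds_economic_vote(votes_for_block, deposits):
    # votes_for_block: validators that sent PREPARE (or COMMIT)
    # for this block; deposits: validator -> staked ETH.
    total_stake = sum(deposits.values())
    voting_stake = sum(deposits[v] for v in votes_for_block)
    # Votes are weighted by deposit, not by head count.
    return 3 * voting_stake >= 2 * total_stake

deposits = {"alice": 1000, "bob": 400, "carol": 200}
assert has_two_thirds_economic_vote({"alice", "bob"}, deposits)      # 1400 of 1600
assert not has_two_thirds_economic_vote({"bob", "carol"}, deposits)  # 600 of 1600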

Nothing at Stake

One of the problems with proof of stake is the “nothing at stake” problem.  The idea is as follows: if I don’t have to compute any hard hash puzzles, why not bet on every block that comes my way? Since this incentive structure exists for everyone in a naive proof of stake protocol, everyone decides to stake their hard-earned cryptocurrency on every block.  Now we have no consensus: there are 50 different chains all growing at the same rate and all possibly legitimate, because no one wants to take the lead and decide on one.  Also, because of this lack of consensus, double spend attacks become much easier and more likely than they are on a proof of work protocol.

Ethereum’s casper protocol circumvents the nothing at stake problem by locking the funds in the smart contract discussed above, only paying them out after a sufficient amount of time, and destroying the ether, or penalizing the validator, for various kinds of (including malicious) behaviour.

Conclusion

I think people are uneasy about proof of stake due to a misunderstanding of proof of work more so than anything else.  As I stated in my git perspective of the blockchain, the only reason miners exist is to act as the “maintainer” of the blockchain, and since we want this maintainer to change often, mining was used as a mechanism to distribute time as the maintainer evenly.  With proof of stake, the same thing is happening; it’s just that the mechanism to choose maintainers is based on the amount of cryptocurrency a person holds, rather than their hash power.  The 51% attack we saw in the previous post now becomes a 51% currency attack, whereby you’d have to own 51% of the cryptocurrency you’re attacking.  This is presumably a much more difficult feat to accomplish than purchasing 51% of the hash power.  In the currency case, you’ve just purchased 51% of the currency, all the while raising its market price, and you only have the other 49% of the currency left to defraud, at which point news will probably have broken that someone purchased 51% of the currency on the market, and the currency is now socially worthless.  In the case of proof of work, you just secretly buy more computing power, or bribe, or even hack existing mining pools, and rather than defrauding 49% of the currency you’re able to defraud all of it.

As you can see, we aren’t creating money out of thin air.  At least in the casper protocol, there is a very real chance of losing your money, and your money is also stuck in the smart contract, so it’s no different from a government bond gaining interest, or mining for that matter.

Until next time!

* if someone has any information let me know.  There is a reddit discussion here, but since it’s a year old, I hesitate to trust it given how much Ethereum proof of stake has changed; this seems to suggest it’s proportionate to the ETH you deposit, and Vlad also mentions it as a possibility here.  I looked briefly at the casper source code and didn’t see validator selection anywhere, but since I was brief, there’s a very good chance I wasn’t looking in the correct place.

Iota Address Hygiene and Tangle Transaction Lookup

In my previous post I wrote a broad overview of what the tangle is, and compared it with the blockchain.  Well, this post took off, and I had many great discussions and received a lot of great feedback, as well as new information.  Today I’ll be applying some of this feedback, as well as spreading some of this new information I’ve received over the course of my discussions.

Iota Address Hygiene

The section of the previous post that seemed to strike the largest nerve was in regard to criticisms I’d heard about the tangle protocol.  One of these was given by Eric Hop: “The only drawback with iota is that it’s not safe to send multiple transactions to the same address.” Later, Eric produced this forum link explaining the dangers of sending from the same wallet address multiple times.  The reason sending an iota transaction from the same address twice is insecure is that iota has elected to use the quantum-resistant Winternitz One-Time Signature Scheme.  I’m not entirely certain of the details of the Winternitz scheme; however, I do know the security degrades exponentially each time the same key is used to sign.  This is why it is called a “One-Time Signature Scheme”: it is intended to be used only once.  If you click the above forum link and read through, you’ll see that the iota wallet automatically moves your balance to a new address any time you send any iota on the tangle, because of this property of the signature scheme.

While currently I don’t view this as a problem, it is something to be aware of if you’re developing software on the tangle that doesn’t use the wallet for transactions (see phx’s post in the first forum link for how the wallet handles this under the hood).
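
The one-time property is easiest to see in Winternitz’s simpler ancestor, the Lamport signature, where every signature reveals half of the secret key, so each additional signature under the same key leaks more key material to a would-be forger.  A toy Python sketch (Lamport rather than Winternitz, so the mechanics differ, but the reuse hazard is the same in spirit):

import hashlib
import os

def H(x):
    return hashlib.sha256(x).digest()

def keygen(bits=256):
    # Two random secrets per message bit; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, message):
    # Reveal one of the two secrets per bit -- half the secret key.
    digest = int.from_bytes(H(message), "big")
    return [sk[i][(digest >> i) & 1] for i in range(len(sk))]

def verify(pk, message, signature):
    digest = int.from_bytes(H(message), "big")
    return all(H(signature[i]) == pk[i][(digest >> i) & 1]
               for i in range(len(pk)))

sk, pk = keygen()
sig = sign(sk, b"first spend")  # reveals roughly half of sk
assert verify(pk, b"first spend", sig)
# A second signature under the same key reveals much of the other
# half, letting an attacker mix and match to forge new signatures.
sig2 = sign(sk, b"second spend")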

Transaction Lookup

Another criticism mentioned in the previous post was the question of transaction lookup efficiency.  One of the beautiful things about the block chain is that it is really easy to look up how much bitcoin an address is holding.  Simply follow the blocks back from the current one and keep track of all transactions leaving or entering that address.  With the tangle it seems like this problem becomes insurmountable.  Iota solves it with yet another simple, yet novel, idea: toss the concept of order, or time, out the window.  Essentially, the tangle doesn’t care how you got the balance in your address; it simply cares that your balance is never negative.  To do this, a node syncing on the tangle simply iterates over all the transactions known to the tangle and groups them by address, regardless of the order in which they occurred.  This lack of order allows frameworks like map reduce (something we’ve discussed previously) to be used on the tangle, since transactions can be grouped in parallel.
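
Here is a sketch of that order-free balance computation, assuming for illustration that a transaction is just a (sender, receiver, amount) triple.  Since addition is commutative, the fold can happen in any order, or in parallel, which is exactly why the map reduce analogy fits.

from collections import defaultdict

def balances(transactions):
    # Order doesn't matter: fold the transactions in any order (or
    # in parallel, map-reduce style) and check afterwards that no
    # ordinary address ever ends up negative.
    totals = defaultdict(int)
    for sender, receiver, amount in transactions:
        totals[sender] -= amount
        totals[receiver] += amount
    assert all(v >= 0 for k, v in totals.items() if k != "GENESIS")
    return dict(totals)

txs = [("GENESIS", "A", 100), ("A", "B", 30), ("B", "C", 10)]
print(balances(txs))  # {'GENESIS': -100, 'A': 70, 'B': 20, 'C': 10}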

IOT

One thing that’s been annoying me lately about the iota community is the focus on IOT (internet of things).  I know that is the direction the iota devs are pushing the software, and it’s obviously a great use case; however, I don’t feel the tangle should be considered an exclusively IOT protocol.  It has many possible use cases, even simply as a micro payment currency, something other cryptocurrencies are severely lacking.  I don’t feel that iota should be hitching its wagon to a horse that may be dead in the future, as this dead horse could end up an albatross around the protocol’s neck.

This was a short post, but I wanted to have time to fully digest material in the links I was given last week before writing about them.  I also felt that the topics for this post were better treated in isolation, rather than as additions to the tangle vs blockchain material.

I had intended to post about proof of stake next, as it seems to be a hot topic lately; however, after doing some research into the tangle protocol, I might go through some of the tutorials, or even take on a bug bounty!  We’ll see where my curiosity takes me, but hopefully you’ll be along for the ride! Until next time!

NOTE: here’s a paper on the Winternitz scheme if you’re interested in the details, given to me by the BlockchainNation Facebook group moderator Greg Dubela (@DoctorDoobs).

Iota’s Tangle Protocol

In a previous post we looked at the blockchain.  I explained the blockchain data structure from the perspective of git.  Today I’d like to take a look at iota’s brand new protocol, called the tangle, and explore what makes it different from a blockchain, and why, in my opinion, it’s such a simple yet novel idea.

Tangle

When we originally discussed the blockchain I pointed out that it’s essentially a linked list with some very special properties.  It seems like the next logical question we all should’ve been asking ourselves, yet weren’t until now, is: why a linked list?  What other data structures could we apply this same technique to?  This is what the tangle protocol has done.

Rather than storing the transactions in a linked list, the transactions are stored in a DAG (directed acyclic graph).  Often the most simple ideas are the most brilliant.  This was the case with the blockchain, and now is the case with its extension, the tangle protocol.

[Image: 200px-polytree-svg (a simple example of a DAG)]

So what? Why does it matter?

No fees, lower transaction times

Because the transactions are in a DAG, it further decentralizes the transactions.  Each node holds one transaction; because of this, they’re now small enough for other users of the protocol to perform validation and proof of work without needing an ASIC.  As we saw in the blockchain post, the users had to rely on the miners to do the proof of work, because a large portion of transactions were stored in a block, and there was only one branch.  Now the work can be parallelized and decentralized.  Because this consensus can happen on multiple branches at the same time, transaction times are much lower compared to standard blockchain technologies.

[Image: wii-tangle]

Looking at the above picture, it’s as if the blocks have been broken apart and their transactions scattered about in the ever-forward-moving continuous DAG, and they’re all happening in real time.  With the blockchain we viewed time at a block scale: imagine each block being a day, where things could only happen at the end or beginning of a day.  With the tangle, we’re able to inspect transactions on an hour, or even minute, scale.
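
As a toy sketch of the structure itself: each new transaction approves a couple of earlier transactions, so the ledger grows as a DAG rather than a single chain of blocks.  The parent selection below is uniform random purely to keep the toy simple; the real protocol selects among current tips with a more sophisticated, weighted strategy.

import random

class Tangle:
    def __init__(self):
        self.approves = {"genesis": []}  # tx id -> approved tx ids

    def add_transaction(self, tx_id):
        # Each new transaction does a little proof of work of its own
        # and approves up to two earlier transactions, so validation
        # is spread across all users in parallel.
        earlier = sorted(self.approves)
        self.approves[tx_id] = random.sample(earlier, min(2, len(earlier)))

    def tips(self):
        # Tips are transactions that nothing has approved yet.
        approved = {p for ps in self.approves.values() for p in ps}
        return set(self.approves) - approved

tangle = Tangle()
for i in range(10):
    tangle.add_transaction(f"tx{i}")
print(tangle.tips())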

Decentralization

I think it’s hard for anyone in the cryptocurrency community to find anything wrong with the tangle protocol.  It’s increasing decentralization in a big way, and that’s something everyone can agree on, particularly after the recent bitcoin hard fork, which was due in large part to the centralization of mining power.  With the tangle we’re giving the hash power back to the users of the protocol.

I’ve also heard, during this interview with one of the founders of iota, that the protocol was recently attacked with 300% hash power, and actually got faster and more robust from the attack.  This is very important, because as a technologist, my instinct was initially to say “well, it isn’t being mined by special hardware, can it really be that cryptographically secure?”  This protocol is still very new, and there is still a lot that needs to be hashed out.  For example, one thing I’ve heard asked is: “what is the efficiency of transaction lookup?”*  Since transactions are now in a DAG, this certainly raises the complexity associated with finding your transaction in the graph.  It is called a tangle after all, and I don’t know about you, but the word tangle doesn’t necessarily bring ideas of order to mind.  However, I’m very excited about this paradigm shift away from the blockchain from a purely scientific standpoint.

After having further discussions with the tech community, I’ve elected to write another post addressing the details of some of the criticisms of iota.  See you then!

NOTE: here are a few videos I found that were pretty good while doing research into this new topic. [1, 2, 3]

*Since writing this post, here’s another criticism I’ve found from Mr. Eric Hop:

“The only drawback with iota is that it’s not safe to send multiple transactions to the same address.”

My response to this was: “This seems like it would be easy for them to remedy, no?”

Here is Eric’s response after an impressive deep dive into the internals of the protocol:

“You can use an address for receiving as long as you have not used it for any outgoing transaction. What this means is that once you have sent a transaction with a specific address as input, you should never use it again. This is because IOTA uses Winternitz one-time signatures which degrade security exponentially after each reuse.

So I was wrong in that it is unsafe to send multiple transactions to the same address. It only starts to become unsafe once you have spent some of the IOTA on that address.

Spending from the same address multiple times increases the risk that your address will be compromised, but your seed is still secure.

Addresses are generated by the wallet starting at index zero. It increments the index every time it finds an address already in the tangle. When it finds the first unused address that is the address returned.
That is why it is a good idea to connect a receiving address to the tangle already. So that the wallet will not generate the same address again while nothing has been received on that address.

Why the wallet does not simply keep track of the last index used is beyond me. It then could simply start at the next index when a new receive address is required. If I ever find out the reason for this I will follow up.

I also see no particular good reason other than accidentally being able to receive on an address that was spent from already for not being able to generate addresses offline like with Bitcoin wallets.

The seed should be a unique starting point, from which you could generate addresses one after another, incrementing the index every time.
The only security issue that could arise is when you would use the same seed again on a different offline wallet that would then proceed to generate the same string of addresses, or on an online wallet, that would potentially generate the next address in the sequence, in which case the offline wallet does not know about this fact and will happily generate the same next address…”

This made me sceptical of the Winternitz scheme (it seemed to be the cause of the majority of issues).  Eric explained that Winternitz was chosen due to its quantum-resistant qualities.