How to Stake Lisk

Lisk was one of my first crypto purchases.  I’d wanted to invest in cryptocurrency while I was in college, but I didn’t want my cash stuck in a volatile asset in case I needed it before graduation.  After I graduated I went on a crypto buying spree, and during it I kept hearing Lisk come up as an alternative to Ethereum.  I did some quick research, saw that it used JavaScript instead of Solidity, and was sold.

I’m investing in cryptocurrency for two reasons.  First, as Warren Buffett said, “Never invest in a business you cannot understand.” I believe being a developer gives me an edge when evaluating cryptocurrencies to invest in.  Second, I believe in the tech and am interested in it.  Knowing how quickly things can be built with JavaScript, and having seen some of the trouble people have been having with Solidity (myself included), I figured it wasn’t a terrible investment.

Lisk has more than doubled in value since I purchased it, and Lisk can be staked, which is what we’ll be doing today.

The Wallet

I’ll be honest: when I purchased Lisk I didn’t really know what I was doing.  I just knew I wouldn’t learn what to do if I didn’t invest.  Because of this, I currently have my Lisk in FreeWallet, which is not only unsafe, but also doesn’t support staking.  So our first step will be to download the Lisk Nano wallet.  This actually isn’t a bad first step, as you can follow this tutorial with your coins in an exchange wallet, or any non-Lisk wallet for that matter, so long as you know how to transfer them.

Once you have the wallet, either log in or create a new account.  Next, send a transaction to get your LSK into the new Nano wallet so you can start voting.

Voting

Only 101 delegates can forge blocks.  Voting costs 1 LSK, and with that 1 LSK you can vote for up to 33 delegates.  To see all of the delegates available to vote for, click the voting tab in the Nano wallet.  You’ll notice there are far more than 101 delegates in that list; only the top 101 will be able to forge blocks.

When I first looked into staking, I was a bit confused about the 1 LSK voting fee.  I thought, “So what, I have to keep voting 1 LSK at a time until all my LSK is used?”  Nope.  The network knows the LSK balance of your wallet and weights your vote, and your rewards, accordingly.  Voting is also a one-time thing: once your votes are set, you’ll continue to receive rewards.
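To make the weighting concrete, here’s a toy JavaScript sketch of a delegate sharing a reward among voters in proportion to their balances.  This is my own illustration, not Lisk’s actual payout code; the names, balances, and reward amount are all made up:

// Toy illustration only -- not Lisk's actual payout logic.
// A delegate shares a block reward among its voters in proportion
// to each voter's LSK balance.
const voters = { alice: 200, bob: 50, carol: 750 }; // hypothetical LSK balances

function shareReward(reward) {
  const totalWeight = Object.values(voters).reduce((sum, b) => sum + b, 0);
  const payouts = {};
  for (const [name, balance] of Object.entries(voters)) {
    payouts[name] = reward * (balance / totalWeight);
  }
  return payouts;
}

console.log(shareReward(5)); // { alice: 1, bob: 0.25, carol: 3.75 }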

Who to vote for?

Not all of the delegates will pay out rewards to their constituents; some use the rewards to develop the Lisk ecosystem.  This raises the question: who do you vote for?

There are a number of different sites available to help inform a vote.  The first and most common is EarnLisk.com, another is tools.mylisk.com, and finally there’s the official Lisk Delegate Monitor.  In the Delegate Monitor you can click the profile icon next to a username.  This will take you to that delegate’s forum post, which will explain their payout proportion, whether they are a pool, and what they’ll do with the funds if they aren’t.

Validating your vote

After casting your votes, you should be able to go to the Lisk Delegate Monitor, click on a delegate you voted for, and see your wallet address under the Voters header.

I’ve yet to receive any rewards, but I’ll be sure to update this post when I do and explain the process if it involves anything beyond receiving rewards.  Until next time!

Helpful Links

As usual, I like to pass along references to good content that I found while writing my posts.  There really isn’t much about this topic online, but there is one YouTube channel with good Lisk-specific content.

NOTE: since publishing this post I’ve been asked what the rate of return is for staking Lisk.  I’ve yet to see any returns, so I can’t say for sure.  However, this video lays out all of the math and claims it should be around 20% annually, though the block rewards will decrease over five years, and therefore so will the rate of return.
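If you want to play with those numbers yourself, here’s a quick sketch.  Both the 20% starting rate and the decay schedule are assumptions loosely based on the video’s claim, not official figures:

let stake = 100;  // hypothetical LSK staked
let rate = 0.20;  // ~20% annual return, per the video's claim

for (let year = 1; year <= 5; year++) {
  const reward = stake * rate;
  console.log(`year ${year}: +${reward.toFixed(2)} LSK at ${(rate * 100).toFixed(0)}%`);
  stake += reward;
  rate *= 0.8;    // assumed decay as block rewards decrease; the exact schedule isn't specified
}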


Radeon AMD Beta Blockchain Driver for Ubuntu Linux

In my previous post outlining tips about smart contract deployment using Parity and Truffle, I mentioned I’d be passing along a bit of mining news in this post.  AMD has finally released their blockchain-specific driver for Linux, in order to cope with the growing memory requirements of memory-hard mining algorithms.  This will be an extremely short post explaining how to do the upgrade, since it’s almost trivial.

The install

First install the new software:

# Add AMD's ROCm package signing key
wget -qO - http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
# Add the ROCm apt repository
sudo sh -c 'echo deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main > /etc/apt/sources.list.d/rocm.list'
# Update the package list and install the driver
sudo apt-get update
sudo apt-get install rocm

Next, add GRUB_CMDLINE_LINUX="amdgpu.vm_fragment_size=9" to the grub file and regenerate the grub config:

sudo vim /etc/default/grub
# add (or edit) this line in the file:
GRUB_CMDLINE_LINUX="amdgpu.vm_fragment_size=9"
sudo update-grub

and reboot.

Hash Rate

We experienced a ~2 MH/s increase on each GPU after installing this new driver, which seems to be in line with what everyone else is getting.  As of now there don’t appear to be any drawbacks associated with this new driver, so give it a shot!

Notes

A couple of things to note.  When I did this install, I thought it was replacing the amdgpu-pro drivers we installed in this tutorial, so I uninstalled them before the reboot.  This was a mistake.  The rocm package is a kernel-level component of the amdgpu-pro driver package, not a replacement for it.

That’s all for now!  As I said, this one would be short.  Next time I plan on discussing some tweaks we’ve made to our operating system, and why we made them.  Until then!

Lessons learned from an ICO

Background

A few months ago I dabbled in Solidity development.  I’m interested in all things cryptocurrency, and Ethereum smart contracts appear to be a very large aspect of the current state of cryptocurrency.  As such, I had no choice but to experiment with the technology.  This experimentation involved using the Truffle framework to follow the Ethereum Foundation tutorials.

During initial smart contract development it’s common to use the testrpc client, as this speeds up deployment times and allows for quicker code iterations.  There is also less complexity in this chain, so less can go wrong and you can focus explicitly on development.  However, given that the eventual goal is deploying smart contracts to the main net, you eventually turn your attention to the Ethereum test networks.  After testing my contracts using testrpc, my next step is typically to test on the Parity development chain, which, similar to the testrpc client, has less complexity but is an actual blockchain.  After testing on the Parity dev chain, I move to the Kovan or Ropsten test networks.  At the time the Ropsten network was under attack (and it looks like it is again), so my only choice for testing was Kovan.

Kovan

This is where I began to have problems in my development, and I eventually moved on to other aspects of technology (I think at the time it was react-native).  I was working through a tutorial to deploy a DAO-like smart contract, and everything was going well until I attempted to deploy the contract to the Kovan test network.

When I deployed via the testrpc client, or even the Parity dev chain, the contract would deploy successfully and I could interact with it via JavaScript.  However, on Kovan the deploy would hang, and then after a long period of time would fail with this cryptic error:

Error encountered, bailing. Network state unknown. Review successful transactions manually.
    Error: Contract transaction couldn't be found after 50 blocks

At the time I didn’t really understand what was going on behind the scenes with regard to addresses and transactions.  I knew that I had to sign the transaction in Parity, but I had already deployed on the Parity development chain, created two different accounts on the Kovan chain (one verified by email in order to get some test ether), and had my main net accounts floating around in the Parity files as well.  So I tried a number of different key files, unsure of which was correct; none of them were signing the transaction successfully, or if they were, there wasn’t enough ether in the wallet.

At the time I wasn’t even sure I was barking up the right tree, and given that I wasn’t actually trying to deploy to the main net anyway, I posted to Stack Overflow, never heard back, and shelved the issue.

Enter the ICO

A few weeks ago I posted about proof of stake, and through this post met the CEO of a blockchain startup.  I ended up proofreading their white paper and kept in touch during their ICO.  During the pre-sale they were deploying the smart contract for their ICO and got the exact same error I was getting months ago.  Since this seemed to be a recurring error, rather than a one-off issue specific to me, I felt it warranted more investigation.

I went back to my original DAO smart contract that was giving me issues and tried deploying it again.  During this time the CEO made a number of observations eventually leading to a successful deployment of their smart contract.

Here are a few lessons to keep in mind when deploying smart contracts to both the main net and the test nets.

Don’t forget to sign your transaction in Parity

Since it’d been a few months since my previous test deployment, I had to remind myself how the process works.  When you run:

truffle migrate --network kovan

The transaction still needs to be signed by the address set in the truffle.js file for the Kovan network settings.  If you forget this, Truffle will just wait, and after a very long time finally fail with the cryptic error message mentioned above.

There’s no harm in trying different keys

As users, we’ve been trained that after too many password attempts we’ll be locked out of our account.  In my case I had a bunch of different key files for the same address; if I’d just tried all of them, eventually I would’ve found the correct one.

Key management is paramount

I wouldn’t have needed to try every key file if I’d paid more attention to what I was doing with each key while creating test accounts.  This is important for any Ethereum user, but it becomes even more important for developers, as poor key management can introduce inadvertent bugs, adding to the already high cognitive load of development.

Make sure you have enough ether to cover the gas costs

I remember this kept coming up while I was troubleshooting the issue a few months ago.  The first questions asked in forums were always “What did you set your gas price to?” and “Does the signing address have enough ether to cover this cost?”

Usually when you sign the transaction in Parity it will allow you to set the gas price at that time, and will show you the current average gas price on the network.

(screenshots: Parity’s transaction signing dialog, where the gas price can be set)

You can also set the gas price explicitly in the truffle.js network settings.  For example:


kovan: {
  from: 'KOVAN ADDRESS',
  network_id: 42,
  host: 'localhost',
  port: 8545,
  gas: 4712388,
  gasPrice: 25000000000
}

would set the transaction to use up to 4712388 gas with a gas price of 25000000000 wei (25 gwei) on the Kovan network.
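As a quick sanity check on what that config could cost you, here’s the arithmetic (a sketch using the numbers above; the actual cost depends on the gas actually consumed):

// Maximum fee this config allows: gas limit * gas price.
// Plain floating point is fine for a rough estimate.
const gas = 4712388;          // gas limit from the truffle.js snippet above
const gasPrice = 25000000000; // 25 gwei, in wei

const maxFeeWei = gas * gasPrice;
console.log(maxFeeWei / 1e18); // ~0.118 ETH if every unit of gas were consumed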

Use the parity transaction viewer

(screenshot: Parity’s transaction viewer dApp)

This was one of the gems passed along to me by the CEO that I wasn’t aware of.  Parity comes with a bunch of dApps, and one of them is a transaction viewer.  This allows you to keep tabs on the state of your transactions, as well as view other pending transactions on the blockchain.  I believe this is what led the CEO to the truffle.js gas price insight.

Use Etherscan

At the end of the day, when I finally went to check whether my contract had deployed successfully on the Kovan network, I looked at all transactions made by my Kovan address, and it turned out some of my transactions from months ago had actually made it onto the blockchain.  Truffle runs what it calls an “initial migration” before the actual contract deploy.  Some of my initial migrations made it into the blockchain, but the actual contracts didn’t until I sorted out the rest of the issues discussed above.

This lesson is an obvious one: always check Etherscan.  Sometimes, though, it may add to the confusion.  Since the Kovan network was sluggish at the time, it took a while for my transactions to show up; this, coupled with the cryptic Truffle error, led me to believe absolutely nothing was happening.

Conclusion

Each of these is trivial to debug on its own, but combined they make for difficult debugging; add cryptic error messages on top and it can be hard to break down what’s going wrong in a systematic way.  But this is brand new tech, and because of that, if you’re a developer, any help with these frameworks will speed up development iterations and in turn make the tech easier to work with for other developers in the future.

That’s all for now.  I’ve recently upgraded my miner to a new beta AMD blockchain driver, so I may pass along this bit of info in my next post.  Until then!

A Beginner’s Altcoin Mining Setup with AMD Radeon RX470s, Ubuntu, and Claymore Dual Miner

In previous posts I’ve mentioned that in addition to researching cryptocurrency, I also mine it.  This set off a small flurry of questions about the process of printing money with your computer.


Today I’ll begin the first in a series of posts about altcoin mining.

Buying the hardware

Before you can do any kind of setup, you obviously have to invest in the hardware.  There is a boatload of information out there.  We set out to mimic the Ethereum miners that currently exist as a baseline, and plan to explore other hardware now that we’ve accomplished this task.

With mining, you’re building a computer from scratch.  This means you’ll need, at the very least, the following parts: a CPU, a motherboard, RAM, an HDD, a power supply, and, in the case of a miner or gaming computer, GPUs.

The community consensus for a baseline Ethereum mining rig is as follows:

All together, purchasing this gear with all 6 GPUs the motherboard can handle will end up costing you about $2000, and should net you a hash rate of around 120 MH/s.  If you mine Ethereum, this will make you around $200 per month without doing anything but keeping the miner running (which is actually more difficult than it sounds; more on this in another post).  However, I’d recommend you begin by purchasing only 1 GPU to test with, making the initial investment only around $1000, but the hash rate only around 20 MH/s.
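For a rough sense of the economics, here’s the back-of-the-envelope arithmetic behind those numbers (a sketch that ignores electricity, pool fees, and difficulty or price changes):

// Numbers from above: 6-GPU rig, ~$2000, ~120 MH/s, ~$200/month.
const rigCost = 2000;          // USD
const rigHashRate = 120;       // MH/s
const rigMonthlyIncome = 200;  // USD

const incomePerMhs = rigMonthlyIncome / rigHashRate;  // ~$1.67 per MH/s per month
console.log(incomePerMhs * 20);                       // ~$33/month for a single ~20 MH/s GPU
console.log(rigCost / rigMonthlyIncome);              // ~10 months for the full rig to pay for itself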

We came to this hardware consensus by doing a bit of research; here are a few resources we found useful along the way:

The OS

I’m not sure how necessary this is, but we had the OS installed on an SSD prior to the hardware installation (mostly because we were waiting for the hardware to arrive anyway).  Even if it isn’t necessary, it was a good way to test the hardware installation all the way through to OS boot.

We’re using Ubuntu, as we’re planning on scaling beyond one miner and didn’t want to pay the Windows licensing fee for every new miner.  As we’ll see in future posts, this is nice from a customization perspective as well.

There are plenty of tutorials available online for installing Ubuntu; here’s a link to the official one.  As usual, if you run into any snags during the process, don’t hesitate to get in touch!

Assembling the Hardware

Great!  All the hardware has finally arrived!  Now what?

I’m a software developer, so if you’re like me, the computer has always come assembled, ready to be programmed.


This is where the resources mentioned above became useful.  We used buriedOne for info on the hardware setup, as well as EVGA’s official video.  If you’re lucky you won’t hit any snags.  If you’re not, the problem could be anything from a bad GPU, to a bad monitor cable, to you shorting something with static during the install.  If you do have any problems, feel free to get in touch!  I’d be happy to work through any issues or point you in the right direction.

The Driver

If you’ve made it to this point, your miner is now booting into Ubuntu.  However, since the GPUs aren’t the integrated GPU, Ubuntu doesn’t know how to talk to them, so you need to install AMD’s drivers.  Since this information appeared to be pretty sparse online, I’ll go in depth rather than linking to other resources as I typically do.

First, I’d recommend you install ssh, as it might be useful to have remote access to the miner during installation in case you have any hardware issues.

Then go to AMD’s website, find the link to the Ubuntu download (AMDGPU-Pro Driver Version 17.30 for Ubuntu 16.04.3), and download the driver.  Then go to this AMD tutorial explaining how to install the driver.  The steps should work; however, during the step

./amdgpu-pro-install -y

we had to add the --compute flag, as such:


./amdgpu-pro-install --compute


Otherwise, on reboot and login, the Ubuntu desktop failed to load.

If you lose access to your mouse and/or keyboard, you can ssh into the miner and run the following command to get them back:


sudo apt-get install --reinstall xserver-xorg-input-all


And finally, if you ever want to try a different driver, or have simply had enough of mining, you can run this:


amdgpu-pro-uninstall


from anywhere in the terminal (it’s added to the path) to uninstall the drivers.

Verifying the Driver Install

After reboot you can run the following command to ensure that Ubuntu is in fact seeing the AMD GPU:


lspci -vnnn | perl -lne 'print if /^\d+\:.+(\[\S+\:\S+\])/' | grep VGA


You should see a line similar to this for each GPU attached to the miner:


01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:67df] (rev cf) (prog-if 00 [VGA controller])



The Software

If Ubuntu is able to detect and use your GPUs, it’s time to start mining some coins!

The de facto mining software for Ethereum is Claymore’s Dual Miner, so that’s what we’ll use for this tutorial.  This link (starting from Step 5) was immensely helpful for the initial setup, and we followed it as a baseline, but we’ve since experimented with a setup that works better for us.  You can also set up dual mining using this link.

Congrats!  Now you’re mining Ethereum!  Making money while you sleep!  Printing money with your computer!

In the next mining post I’ll walk you through some software customizations we’ve made to our rig, as well as some explorations we’ve made into hardware alternatives to the baseline system.


Criticisms of Proof of Stake

Oh boy.  What a week it’s been.  My previous post, meant to give a brief overview of Proof of Stake for non-technical readers, seemed to strike a nerve.

To start, I think people misunderstood the motive of the post.  I was simply giving a brief technical explanation of proof of stake for non-technical readers, and some readers seemed to interpret this as support for proof of stake.  I’m a software developer.  I don’t care about the moral or ethical implications of power consumption or immutability.  I’m purely interested in the mechanisms, and I’ve yet to find someone who doesn’t agree that the Casper protocol is at least interesting, which is why I blogged about it.

As some of my readers know, one of the main reasons I post is to partake in the discussion that ensues afterwards, and there was a lot of it.  My previous post was on the front page of /r/blockchain for two days, as well as /r/ethermining for a day and a half, which is great because these were my target audiences.  However, I also posted on a whim to /r/btc and /r/ethereum, where it stayed on the front page of controversial for an entire day, which in my opinion is also great.  I wasn’t expecting the feedback I received, but this is exactly my motivation for writing.  I would like to thank everyone who took the time to read and provide feedback.  In this post I’ll spell out some of the criticisms of Proof of Stake that I left out of the previous post (again, it was intended for a less technical reader), as well as some new ones that I’d never considered until now, thanks to my readership.

External/Internal Staking

The major criticism people levied against proof of stake seemed to be that the staking is internal to the system rather than external.  /u/jps_ makes the point concisely:

“Proof of Work has something physical at stake, namely the electricity necessary to probably solve the puzzle. In essence, the bits in the network are secured by activity outside the network: someone has to generate that electricity. This results in a stable equilibrium, because in order to compromise the bits in the network, one must expend considerable external energy. The laws of thermodynamics make it very difficult to profit by consuming energy”

I don’t buy that external staking makes proof of work superior.  If we assume a free market economy, I should be able to exchange my money for electricity or cryptocurrency.  Obviously markets aren’t rational, and the price of a cryptocurrency or of electricity isn’t what it should be at the particular moment I make this trade, but that’s a discussion for a different day.  The point I’m making is that, technically, the $300 of cryptocurrency I bought is of equal staking value to the $300 of electricity I “staked” to mine the cryptocurrency, at the moment I did both.  We can discuss the volatility of electricity prices vs. cryptocurrency prices and decide one is a more stable form of staking than the other, but the fact remains that $300 of electricity is equivalent to $300 in cryptocurrency at the time of mining, and it would therefore take an extra $300 to perform a 51% attack in both cases.*

One argument you could make is that the Ethereum Casper rollout is premature.  If the market cap of the ether being used to stake is less than the cost of the electricity miners are putting forth to mine ether, then you could make a case that this raises the probability of a 51% attack.

Another argument is that once the electricity is used, it’s on the blockchain forever, while the staked cryptocurrency can easily be converted back into fiat.  As we saw with the Casper smart contract from the previous post, the funds are locked for a certain number of blocks.  This may not be completely irreversible as in the proof of work case, but the number can be altered to suit the needs of the blockchain (which, depending on your opinion, may or may not be a good thing; see below), to include irreversibility if need be.

Nothing to Stake

I introduced this briefly in the previous post but didn’t go into much detail, as I didn’t want to confuse non-blockchain readers.  Since it’s already been introduced, I’ll assume everyone is familiar with the problem and why it exists.  I mentioned that Casper purports to solve this problem by use of an arbiter smart contract which penalizes malicious validators.  One argument that kept coming up was concern that a malicious chain could be built and hidden from the rest of the validators, then shown at an opportune time.  Casper handles this by locking the validator funds inside the smart contract and receiving “tattle-tale” transactions from validators in the event that evidence of malicious behaviour is found in previous blocks.  One such behaviour is not betting on a chain; another is betting on the “wrong” chain.  Both of these actions would be necessary to build this “hidden” chain, and since they’re penalized, you’d run out of ether far before you could pull the attack off.

Rather than discussing this particular case (many others were brought up, and many more will follow, I’m sure**), the point is that this smart contract can be altered to ward off any sort of malicious proof of stake behaviour that may arise in the future.

Centralization

This leads us to perhaps the most damning criticism of Casper: the fact that Casper’s proof of stake involves a smart contract that acts as the arbiter of the validators.  There is no clear analogue to this in the proof of work context; it simply doesn’t exist.  It’s obviously a single point of failure, as well as an attack vector, and, depending on your perspective of the blockchain, a terrible case of centralization.  One point continually driven home in my discussions with /u/Erumara was his/her reluctance to support something so complex compared to proof of work.  And I have to admit, I do agree that proof of work is much simpler than every proof of stake solution I’ve seen proposed.

However, as /u/naterush1997 points out:

In both cases, there is “centralizing” code – in that it is code that everyone relies on. However, the Casper contract being public means that we have the benefit of seeing if there is some fatal flaw and/or bug. In the case of the attack on PoW described above, this would be impossible, as the attack described is indistinguishable from someone having a ton of computing power.

And even if it is overly centralized, I still think the technology should be explored.  Obviously there is a large difference in opinion between the Bitcoin and Ethereum communities in this regard, and I intend on exploring these differences in a future post.  For now, let’s just say the Ethereum developers are more willing to take concrete risks with their software, even if it ends badly, and in fact, I agree with this strategy.  It’s one thing to pursue an idea simply for the sake of it (as in the case of pure research); it’s entirely different to have millions of dollars at stake in the pursuit of an idea.  This is part of what got me into cryptocurrency.  This mixture of direct financial skin in the game of all parties involved (investors, users, developers) and interesting technology can’t be found anywhere else in the world.

Conclusion

One of the best outcomes from the post was this Andreas video /u/dietrolldietroll passed me.  He makes the external/internal staking argument, but at the end of the video (at around 42:56), when pressed by a question, he says that both proof of stake and proof of work can coexist in the market due to their different use cases.  I’d say that sums up my opinion of the matter fairly well.  As I said before, tech needs to crash and burn to move forward.  I’m not saying proof of stake will be a catastrophic failure for Ethereum, but even if it is, it will be a success for the blockchain movement at large.

I received a bunch of criticism (bunch of haters, man) in /r/ethermining for an offhand conjecture I made about proof of stake privacy coins, so I intend to fully rectify this in my next post.  Until then!

*After considering this idea during my writing, I came upon a new one.  If it’s true that bitcoin is defended directly by the cost of the electricity used to mine coins, could the sum of all electricity used up to block A be considered the true “value” of bitcoin at that block?

**For example, what if a validator never checked in to the smart contract, and was therefore never penalized, then finally showed up having rewritten the entire blockchain to look much more attractive to the rest of the validators?  They’d need 51% of the currency to pull off this attack, but I believe that using the Casper smart contract, even this might be possible to defend against.


EDIT: /u/jps_ in response to my argument against external staking:

If you buy $300 worth of Electricity and use it to secure a PoW network, it buys a finite time/amount of security. After the expenditure, the electricity is consumed and there is no more security. The only residual value you hold is the rewards earned along the way. These rewards cost you an extrinsic $300 that is not returned to you. This creates an objectively extrinsic value of the reward generated in return for security: basically, the reward is worth the expenditure in electricity generated to consume it.

If you take $300 and buy ETH and stake it, you can stake that $300 for as long as you want. Whenever you cease participating in securing the network, your $300 in ETH is returned to you, in addition to your rewards from staking.

Therefore, when you started staking you had $300 you exchanged for ETH. You finish staking and you hold ETH you can sell for $300. Plus rewards. Your net extrinsic expenditure is zero, and your net gain is the staking rewards.

So PoS is a value tautology. It creates something at no external cost, which has a putative external value greater than zero.

Proof Of Stake vs Proof Of Work

I’ve decided to write a post about the differences between proof of stake (a protocol currently being used by NEO and being worked on by Ethereum) and proof of work (a protocol made famous by Bitcoin, and currently in use by coins like Zcash and Monero).  I felt motivated to write this post because there seems to be a bit of confusion, when I talk with people about the proof of stake protocol, as to what exactly happens.  Many I’ve talked with seem to view it as creating money out of thin air (as if mining weren’t that already), or at the very least as less secure than proof of work.

Proof of Work

I believe people feel more comfortable with proof of work because it’s the simpler of the two protocols.  The idea is this: your computer is going to try billions of different inputs to a hash algorithm (it’s going to put in work), and if it comes up with the right output (it’s proved that it’s worked on the puzzle sufficiently), you’ll be rewarded.  Here is an example proof of work algorithm from the Ethereum cryptocurrency tutorial:

// The coin starts with a challenge
bytes32 public currentChallenge;
// Variable to keep track of when rewards were given
uint public timeOfLastProof;
//Difficulty starts reasonably low
uint public difficulty = 10**32;

function proofOfWork(uint nonce){
    // Generate a random hash based on input
    bytes8 n = bytes8(sha3(nonce, currentChallenge));
    // Check that the hash satisfies the difficulty requirement
    require(n >= bytes8(difficulty));
    // Calculate time since last reward was given
    uint timeSinceLastProof = (now - timeOfLastProof);
    // Rewards cannot be given too quickly
    require(timeSinceLastProof >=  5 seconds);
    // The reward to the winner grows by the minute
    balanceOf[msg.sender] += timeSinceLastProof / 60 seconds;
    // Adjusts the difficulty
    difficulty = difficulty * 10 minutes / timeSinceLastProof + 1;
    // Reset the counter
    timeOfLastProof = now;
    // Save a hash that will be used as the next proof
    currentChallenge = sha3(nonce, currentChallenge, block.blockhash(block.number - 1));
}

If you were to mine this coin, you’d essentially send your input (nonce) to the proofOfWork function in this smart contract.  If the hash of your nonce satisfies the difficulty check, and it’s been long enough since the last block was mined, you receive a reward; otherwise the function reverts (that’s what a failed require statement does in Solidity) and you try the next input you think might produce a passing sha3 hash.  This is proof of work mining in a nutshell.
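To make the miner’s side of this concrete, here’s a minimal JavaScript sketch of the brute-force loop a miner runs against a contract like this.  I’m using Node’s built-in SHA-256 as a stand-in for the contract’s sha3, and an artificially easy difficulty check, so treat it as an illustration rather than a real miner:

const crypto = require('crypto');

// Artificially easy stand-in for the contract's difficulty check:
// accept any hash whose first byte is zero.
function passesDifficulty(hash) {
  return hash[0] === 0;
}

function mine(currentChallenge) {
  // Try nonce after nonce until one hashes to a passing value.
  for (let nonce = 0; ; nonce++) {
    const hash = crypto
      .createHash('sha256') // the real contract uses sha3 (keccak)
      .update(String(nonce) + currentChallenge)
      .digest();
    if (passesDifficulty(hash)) {
      return nonce; // this is the value you'd submit to proofOfWork()
    }
  }
}

console.log(mine('previous challenge goes here'));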

Proof of Stake

Proof of stake has the same goal as proof of work: to achieve distributed consensus on the state of the blockchain.  Going back to the git perspective, both protocols are trying to select maintainers of the blockchain “branch” without allowing anyone too much control.  Proof of stake does this by substituting economic power for hash power.  The more coins you have, the more likely you, or the block you’ve chosen, are to be selected, and the more you’ll be rewarded for it.  I believe cryptocurrency developers are moving in this direction because, unlike proof of work, proof of stake has the added property that the more coins you’re holding, the more likely you are to act in solidarity with the will of the users of the blockchain when selecting blocks.  In proof of work there is a tension between miners and users of the blockchain that may not exist in a proof of stake protocol (this is yet to be seen), as often the users will also be the validators (a miner in proof of stake is usually called a validator).  There’s also the added benefit that proof of stake doesn’t cost millions of dollars in power and bandwidth every year to maintain the blockchain.

Casper

Let’s use the Ethereum Casper protocol as a detailed example of proof of stake, as this one seems to be getting so many people interested in what proof of stake is.

The Casper protocol will involve a smart contract deployed to the Ethereum blockchain.  An address interested in becoming a validator sends the amount of ETH they would like to stake on blocks to the smart contract.  The smart contract then receives two kinds of messages from validator addresses: PREPARE and COMMIT.  PREPARE is essentially a validator saying “I think this set of transactions should be the next block”; if one of these blocks attains a 2/3 economic vote in the smart contract, it becomes a candidate for a COMMIT.  After the candidate PREPARE blocks have been selected, validators vote on this set of blocks with the COMMIT message.  Once again, if a 2/3 economic vote is reached on a COMMIT block, it is added to the blockchain, and all the validators who took part in selecting it are rewarded for minting the block in proportion to the amount of ETH they deposited to the smart contract when joining the validator pool.  As far as I’m aware, there doesn’t yet exist a mechanism for selecting validators*, but it could easily be something like a random subset selection of all possible validators, weighted by the amount of their deposit in each dynasty.
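Here’s a toy sketch of the 2/3 economic vote idea.  It’s my own illustration with made-up validators and stakes, not the actual Casper contract: a block is only selected once validators representing at least two-thirds of the total deposited ETH have voted for it:

// Toy illustration of a 2/3 economic vote -- not the real Casper contract.
const deposits = { alice: 500, bob: 300, carol: 200 }; // hypothetical stakes, in ETH

function hasTwoThirdsVote(votedValidators) {
  const totalStake = Object.values(deposits).reduce((sum, d) => sum + d, 0);
  const votedStake = votedValidators.reduce((sum, v) => sum + deposits[v], 0);
  return votedStake * 3 >= totalStake * 2;
}

console.log(hasTwoThirdsVote(['alice', 'bob'])); // true  (800 of 1000 ETH voted)
console.log(hasTwoThirdsVote(['alice']));        // false (500 of 1000 ETH voted)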

Nothing to Stake

One of the problems with proof of stake is the “nothing to stake” problem.  The idea is as follows: if I don’t have to compute any hard hash puzzles, why not bet on every block that comes my way?  Since this incentive structure exists for everyone, everyone stakes their hard-earned cryptocurrency on every block.  Now we have no consensus; there are 50 different chains all growing at the same rate, all possibly legitimate, because no one wants to take the lead and decide on one.  Because of this lack of consensus, double spend attacks become much easier and more likely than they are under a proof of work protocol.

Ethereum’s Casper protocol circumvents the nothing to stake problem by locking the funds in the smart contract discussed above, only paying them out after a sufficient amount of time, and destroying the ether, or penalizing it, for various kinds of behaviour (including malicious behaviour).


Conclusion

I think people are uneasy about proof of stake due to a misunderstanding of proof of work more than anything else.  As I stated in my git perspective of the blockchain, the only reason miners exist is to act as the “maintainer” of the blockchain, and since we want this maintainer to change often, mining was used as a mechanism to distribute time as the maintainer evenly.  With proof of stake the same thing is happening; the mechanism to choose maintainers is just based on the amount of cryptocurrency a person holds, rather than their hash power.  The 51% attack we saw in the previous post now becomes a 51% currency attack, whereby you’d have to own 51% of the cryptocurrency you’re attacking.  This is presumably a much more difficult feat to accomplish than purchasing 51% of the hash power.  In the currency case, you’ve just purchased 51% of the currency, all the while raising its market price, and you only have the remaining 49% of the currency to defraud; at that point, news will probably have broken that someone purchased 51% of the currency on the market, and the currency is now socially worthless.  In the case of proof of work, you just secretly buy more computing power, or bribe, or even hack existing mining pools, and rather than defrauding 49% of the currency you’re able to defraud all of it.

As you can see, we aren’t creating money out of thin air.  At least in the Casper protocol, there is a very real chance of losing your money, and your money is also stuck in the smart contract, so it’s no different than a government bond gaining interest, or mining for that matter.

Until next time!

*If anyone has more information, let me know.  There is a reddit discussion here, but since it’s a year old I hesitate to trust it given how much Ethereum proof of stake has changed; this seems to suggest it’s proportionate to the ETH you deposit, and Vlad also mentions it as a possibility here.  I looked briefly at the Casper source code and didn’t see validator selection anywhere, but since I was brief, there’s a very good chance I wasn’t looking in the correct place.

Iota Address Hygiene and Tangle Transaction Lookup

In my previous post I wrote a broad overview of what the tangle is and compared it with the blockchain.  Well, that post took off, and I had many great discussions and received a lot of great feedback, as well as new information.  Today I’ll be applying some of this feedback, as well as spreading some of the new information I’ve received over the course of those discussions.

Iota Address Hygiene

The section of the previous post that seemed to strike the largest nerve was the criticisms I’d heard about the tangle protocol.  One of these was given by Eric Hop: “The only drawback with iota is that it’s not safe to send multiple transactions to the same address.”  Later, Eric produced this forum link explaining the dangers of sending from the same wallet address multiple times.  The reason sending an iota transaction from the same address twice is insecure is that iota has elected to use the quantum-resistant Winternitz one-time signature scheme.  I’m not entirely certain of the details of the Winternitz scheme, but I do know the security degrades exponentially each time the same key is used to sign.  This is why it’s called a “one-time signature scheme”: it is intended to be used only once.  If you click the above forum link and read through, you’ll see that the iota wallet automatically moves your balance to a new address any time you send any iota on the tangle, because of this property of the signature scheme.

While I don’t currently view this as a problem, it is something to be aware of if you’re developing software on the tangle that doesn’t use the wallet for transactions (see phx’s post in the first forum link for how the wallet handles this under the hood).

Transaction Lookup

Another criticism mentioned in the previous post was the question of transaction lookup efficiency.  One of the beautiful things about the blockchain is that it is really easy to look up how much bitcoin an address holds: simply follow the blocks back from the current one and keep track of all transactions leaving or entering that address.  With the tangle, this problem seems to become insurmountable.  Iota solves it with yet another simple, yet novel idea: toss the concept of order, or time, out the window.  Essentially, it doesn’t care how you got the balance in your address; it simply cares that your balance is never negative.  To do this, a node syncing on the tangle simply iterates over all the transactions known to the tangle and groups them by address, regardless of the order in which they occurred.  This lack of order allows frameworks like map reduce (something we’ve discussed previously) to be used on the tangle, since transactions can be grouped in parallel.
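Here’s a small sketch of that idea, using the same reduce-style grouping we’ve covered before.  The transaction shape is made up for illustration; it isn’t iota’s actual data model:

// Group transactions by address without caring about their order.
const transactions = [
  { from: 'A', to: 'B', amount: 5 },
  { from: 'B', to: 'C', amount: 2 },
  { from: 'A', to: 'C', amount: 1 },
];

const balances = transactions.reduce((acc, tx) => {
  acc[tx.from] = (acc[tx.from] || 0) - tx.amount;
  acc[tx.to] = (acc[tx.to] || 0) + tx.amount;
  return acc;
}, {});

console.log(balances); // { A: -6, B: 3, C: 3 }
// A real node would reject any history where an address's balance,
// including its starting funds, ever goes negative.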

IoT

One thing that’s been annoying me lately about the iota community is the focus on IoT (the internet of things).  I know that is the direction the iota devs are pushing the software, and it’s obviously a great use case; however, I don’t feel the tangle should be considered an exclusively IoT protocol.  It has many possible use cases, even simply as a micropayment currency, something other cryptocurrencies are severely lacking.  I don’t feel that iota should be hitching its wagon to a horse that may be dead in the future, as this could end up an albatross around the protocol’s neck.

This was a short post, but I wanted time to fully digest the material in the links I was given last week before writing about it.  I also felt that these topics were better treated in isolation, rather than as additions to the tangle vs. blockchain material.

I had intended to post about proof of stake next, as it seems to be a hot topic lately; however, after doing some research into the tangle protocol, I might go through some of the tutorials, or even go on a bug bounty!  We’ll see where my curiosity takes me, but hopefully you’ll be along for the ride!  Until next time!


NOTE: here’s a paper on the Winternitz scheme if you’re interested in the details, given to me by the BlockchainNation Facebook group moderator Greg Dubela (@DoctorDoobs).


Iota’s Tangle Protocol

In a previous post we looked at the blockchain, where I explained the blockchain data structure from the perspective of git.  Today I’d like to take a look at iota’s brand new protocol, called the tangle, and explore what makes it different from a blockchain, and why, in my opinion, it’s such a simple yet novel idea.

Tangle

When we originally discussed the blockchain, I pointed out that it’s essentially a linked list with some very special properties.  The next logical question we all should’ve been asking ourselves, yet weren’t until now, is: why a linked list?  What other data structures could we apply this same technique to?  This is what the tangle protocol has done.

Rather than storing the transactions in a linked list, the transactions are stored in a DAG (directed acyclic graph).  Often the simplest ideas are the most brilliant.  This was the case with the blockchain, and now is the case with its extension, the tangle protocol.

(figure: a simple example of a DAG)

So what?  Why does it matter?

No fees, lower transaction times

Because the transactions are in a DAG, the protocol further decentralizes the work.  Each node holds one transaction; because of this, transactions are now small enough for other users of the protocol to perform validation and proof of work without needing an ASIC.  As we saw in the blockchain post, users had to rely on the miners to do the proof of work, because a large portion of transactions was stored in a block and there was only one branch.  Now the work can be parallelized and decentralized.  Because this consensus can happen on multiple branches at the same time, transaction times are much lower compared to standard blockchain technologies.

(figure: transactions in the tangle)

Looking at the above picture, it’s as if the blocks have been broken apart and their transactions scattered about in the ever forward-moving continuous DAG, all happening in real time.  With the blockchain we viewed time on a block scale; imagine each block being a day, where things could only happen at the end or beginning of a day.  With the tangle, we’re able to inspect transactions on an hour, or even minute, scale.
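To make the structure concrete, here’s a toy sketch of transactions forming a DAG.  It’s my own illustration (the payloads and ids are made up), built around the rule that each new transaction approves two earlier ones, which is how iota’s tangle grows:

// Toy tangle: each new transaction approves two earlier transactions,
// so the structure grows as a DAG rather than a single chain.
let nextId = 0;
function makeTx(payload, approves) {
  return { id: nextId++, payload, approves }; // approves: ids of two earlier txs
}

const genesis = makeTx('genesis', []);
const a = makeTx('A pays B 3i', [genesis.id, genesis.id]);
const b = makeTx('C pays D 7i', [genesis.id, a.id]);
const c = makeTx('E pays F 1i', [a.id, b.id]); // branches grow and re-merge in parallel

console.log([a, b, c].map(tx => tx.approves)); // [ [0, 0], [0, 1], [1, 2] ]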

Decentralization

I think it’s hard for anyone in the cryptocurrency community to find anything wrong with the tangle protocol.  It’s increasing decentralization in a big way, and that’s something everyone can agree on, particularly after the recent bitcoin hard fork, which was due in large part to the centralization of mining power.  With the tangle we’re giving the hash power back to the users of the protocol.

I’ve also heard, during this interview with one of the founders of iota, that the protocol was recently attacked with 300% hash power, and actually got faster and more robust from the attack.  This is very important, because as a technologist my initial instinct was to say, “Well, it isn’t being mined by special hardware; can it really be that cryptographically secure?”  This protocol is still very new, and there is still a lot that needs to be hashed out.  For example, one thing I’ve heard asked is: “What is the efficiency of transaction lookup?”*  Since transactions are now in a DAG, this certainly raises the complexity of finding your transaction in the graph.  It is called a tangle after all, and I don’t know about you, but the word tangle doesn’t exactly bring ideas of order to mind.  However, I’m very excited about this paradigm shift away from the blockchain from a purely scientific standpoint.

After having further discussions with the tech community, I’ve elected to write another post directed at the details of some of the following criticisms of iota.  See you then!


NOTE: here are a few videos I found that were pretty good while doing research into this new topic. [1, 2, 3]

*Since writing this post, here’s another criticism I’ve found from Mr. Eric Hop:

“The only drawback with iota is that it’s not safe to send multiple transactions to the same address.”

My response to this was: “This seems like it would be easy for them to remedy, no?”

Here is Eric’s response after an impressive deep dive into the internals of the protocol:

You can use an address for receiving as long as you have not used it for any outgoing transaction. What this means is that once you have sent a transaction with a specific address as input, you should never use it again. This is because IOTA uses Winternitz one-time signatures which degrade security exponentially after each reuse.

So I was wrong in that it is unsafe to send multiple transactions to the same address. It only starts to become unsafe once you have spent some of the IOTA on that address.

Spending from the same address multiple times increases the risk that your address will be compromised, but your seed is still secure.

Addresses are generated by the wallet starting at index zero. It increments the index every time it finds an address already in the tangle. When it finds the first unused address that is the address returned.
That is why it is a good idea to connect a receiving address to the tangle already. So that the wallet will not generate the same address again while nothing has been received on that address.

Why the wallet does not simply keep track of the last index used is beyond me. It then could simply start at the next index when a new receive address is required. If I ever find out the reason for this I will follow up.

I also see no particular good reason other than accidentally being able to receive on an address that was spent from already for not being able to generate addresses offline like with Bitcoin wallets.

The seed should be a unique starting point, from which you could generate addresses one after another, incrementing the index every time.
The only security issue that could arise is when you would use the same seed again on a different offline wallet that would then proceed to generate the same string of addresses, or on an online wallet, that would potentially generate the next address in the sequence, in which case the offline wallet does not know about this fact and will happily generate the same next address…”

This made me sceptical of the Winternitz scheme (it seemed to be the cause of the majority of issues).  Eric explained that Winternitz was chosen due to its quantum-proof qualities.

The blockchain, from a git perspective.

The blockchain is a revolutionary idea that’s changing the way computation and transactions will be done for decades to come.  There is already plenty of information out there on this topic, so I’ll spare the internet yet another tutorial.  However, I do think I have an interesting perspective to bring to the discussion.  I often view the blockchain as very similar to the wildly popular version control software git.  In this post I’ll be both teaching and defending my unique position.

git

If you’ve spent even a small amount of time in tech, you’ve heard of git.  git is decentralized source control, and it has revolutionized the way software developers collaborate on projects.  The idea behind git is essentially a linked list, where each commit is a new node pointing back to a previous snapshot of the code.  What’s interesting about git is that you can have forks in your linked list, and these can be merged or deleted (or orphaned, in blockchain parlance) at will.

In a sense, git is a controllable linked-list history of your project (note that this can be any project, not just software; even artists should use git).

The Blockchain

Now, let’s talk about the blockchain.  The blockchain is like git, except inside the linked list’s nodes lie transactions, rather than code changes.  That’s all bitcoin is: a big long string of git commits with address balances in the commits rather than code changes.  So what’s the big deal?
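Here’s a toy JavaScript sketch of that idea (not Bitcoin’s actual data layout; the block shape and transactions are made up): each block points back at the hash of its parent, just like a git commit points back at its parent commit:

const crypto = require('crypto');

// Each block stores the hash of its parent, forming a tamper-evident
// linked list -- the same trick a git commit history uses.
function makeBlock(prevHash, transactions) {
  const body = JSON.stringify({ prevHash, transactions });
  const hash = crypto.createHash('sha256').update(body).digest('hex');
  return { prevHash, transactions, hash };
}

const genesis = makeBlock(null, ['alice pays bob 10']);
const block1 = makeBlock(genesis.hash, ['bob pays carol 4']);

// Changing anything in genesis changes its hash, so block1.prevHash
// would no longer match -- rewriting history orphans the descendants.
console.log(block1.prevHash === genesis.hash); // true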

When you’re using git, you don’t really care what a developer does if they fork your code and make changes to it, as long as you get to decide what gets merged back into your original code.  The same can’t be said of financial transactions: you do care if someone makes changes to that history.  For example, let’s say I was able to fork the bitcoin blockchain, and everyone had to follow whatever changes I made.  Well, of course, I’d create a transaction between my address and Satoshi’s claiming a few hundred bitcoin, then merge it back into the master branch.  Sound good?  Of course not.  This is the ingenious part of bitcoin.

Mining the blockchain

Because we don’t want people to have the ability to alter the history or future of the blockchain, we need a way of achieving what’s called consensus.  From a git perspective, consensus is simply the maintainer of the repo.  Great, so who do we trust to maintain the bitcoin blockchain?  Satoshi, maybe?  How about no one.  How about we place our trust in mathematics.

(figure: our blockchain; look familiar? It’s a git history, but with miners competing to add the next “commit”)

The way consensus is achieved without a centralized power having any control is that hundreds of thousands of computers around the globe (more computing power than Google, in fact) compete to cryptographically secure the bitcoin blockchain by solving hash puzzles.  The nature of the puzzles makes it nearly impossible for the same computer to mine two consecutive blocks.  This enforces decentralization mathematically.  These computers are called “miners”, and whenever one finds the solution to one of these puzzles, it gets to “mine” a block (or commit, from the git perspective) on the blockchain.  But what’s in it for them?  Each of the blocks contains a transaction that allows the miner to pay itself a certain amount of bitcoin; this is called the block reward.  Users of bitcoin must also pay transaction fees to move bitcoin from one address to another, and the miner of a block also gets to collect these fees.

Forking

Like git, the blockchain can also be forked.  We saw it happen recently, when the blockchain split into the Bitcoin Cash branch and the Bitcoin branch.  In this case the fork was planned ahead of time due to politics; however, you’ll often hear the phrase “51% attack”.  This is a security concern that blockchain developers always keep in mind when designing new blockchains.

Revisiting our initial question of “who do we trust to maintain our repo?”: what would happen if someone managed to get enough computing power together to guarantee that they could mine every block?  (For simplicity, let’s leave it at that, though it doesn’t actually have to be every block.)  They would position themselves as the “maintainer” of that blockchain.  They could decide what does and does not get added to our master branch of transactions.  This is obviously dangerous.  Luckily, as I said before, the amount of computational power currently securing the bitcoin blockchain is more than Google has.  In effect, an attacker would have to get their hands on more computing power than Google.  Attacks like this must still be considered with the new non-proof-of-work consensus algorithms that teams are currently developing, though.

That’s all for now.  This style of blockchain is called a Proof of Work blockchain: a miner proves that they’ve spent enough computational power to solve a hashing puzzle.  They’ve proved their work.  There are other styles of blockchain that we’ll explore in future posts.  One is called Proof of Stake, whereby users of the blockchain stake their cryptocurrency in place of computational power to “bet” on the next block and are rewarded for correct answers.  Another brand new one is called the tangle, whereby a user of the “blockchain” (in the tangle’s case it’s actually a directed acyclic graph) validates other transactions in place of paying a transaction fee, thus trading a small amount of hash power for a fee-less transaction.


Higher-Order Functions, map, reduce, filter. Yeah, the ones used in Big Data Frameworks like Apache Spark and Hadoop

In my previous post about Functional Programming in Java, I mentioned higher-order functions that are often used in big data frameworks like Hadoop and Spark.  We’ll be discussing only a small subset of the functions available in Spark and Hadoop, because these are the ones I’ve found most useful as a developer not working in big data.  However, as we’ll see, by the end we’ll have enough fundamental knowledge to apply these function calls to a big data framework if necessary.

map()

We’ll begin with perhaps the easiest function to grasp, and the first in the all too familiar phrase map-reduce: the map function.  The way I’ve often heard map described is that we’re “mapping an input to a specified output”.  To do this we’ll need a function that maps a set of inputs to a set of outputs.  In essence, you’re iterating over every object in a data structure and mapping it to a new object.  It often feels very much like a for-each loop that forces the programmer to do something inside of it.  Here’s a quick example:

    var arr = ['1','2','3','4','5'];
    arr.map( i => console.log(i));

Here the function we’re supplying to map the inputs ('1', '2', …, etc.) is a lambda that takes i as a parameter and maps this input to the console as its output.  It’s obviously the same as:


arr.forEach(function(i) {
  console.log(i);
});

or even the age old:


for(var i in arr){
  console.log(arr[i]);
}

Obviously the forEach example could’ve used a lambda in place of the function, and the map function can take a non-lambda function, but I wanted to illustrate the many different ways to write it.  We can also supply an arbitrarily long function in place of the console log.


arr.map( i => {
  var iTimes20 = i * 20;
  if(i > 3){
    console.log((iTimes20 % 20) == 0);
  }
});

“What happens in map, stays in map”

One thing that always bites me in the ass is thinking I’m altering the object passed into the lambda (in this case one of the strings '1', '2', …), just as I would be when using the old for loop.  That isn’t the case.  If you do something like this:

arr.map( i => i = 100);

and print out the result of arr, you’ll see it remains unchanged.  This is one of the main tenets of functional programming: you never want to “cause side effects”.  What this means is you want to avoid changing the state of a program in unintentional places.  What happens in map, stays in map.  If you want to alter the state of the objects in arr, you need to return a new copy of the original array:


var newArr = arr.map( i => { return 100; } );

Now if you console log newArr, you’ll see an array of 100’s in place of the original array, and nothing has changed in arr itself.  This is one advantage map has over the old school for loop: you can be certain the state of the original container holding the objects will be the same after the loop as it was before.

This idea is difficult to wrap your head around as a college student (at least it was for me).  You’re preparing for interviews, and space vs. time complexity is beaten into your head again and again.  The above code looks atrocious from this perspective: you’re unnecessarily creating a new array, making the space 2n (where n is the size of the array).  Yes, you’re correct.  However, as you’ll see when you get into industry, writing bug-free code is often much more important than shaving off a factor of n.  In reality, the code is still O(n), and you can always come back and refactor this bit of code if the bottleneck of the software ends up being this line.  It’s often the case that the bottlenecks appear elsewhere in the software architecture, though, and they’ll have been discovered during design.

reduce()

I often think of reduce as a concise replacement for this programming construct:


var finalSum = 0;

var arr = [10,20,30,40,50];

for(var i in arr){
  finalSum += arr[i];
}

The same thing is accomplished using reduce:


var finalSum = arr.reduce((countSoFar, currentVal) => {
  return countSoFar + currentVal;
});

Here, countSoFar is an “accumulator” which carries the returned value across the function calls, and currentVal is the current object in the collection.  So we’re adding the accumulated sum we’ve seen up to this point to the current value in the array, and returning this as the countSoFar for the next iteration.  This particular example is kind of trivial; however, since you have access to the accumulator you can do some really interesting things.  For example:


var doubleNestedArray = [
  ['Bob', 'White'],
  ['Clark', 'Kent'],
  ['Bilbo','Baggins']
];

var toMap = doubleNestedArray.reduce((carriedObject, currentArrayValue) => {
  carriedObject[currentArrayValue[0]] = currentArrayValue[1];
  return carriedObject;
}, {});

This will return the array as a map object that looks like this: { Bob: 'White', Clark: 'Kent', Bilbo: 'Baggins' }

Here we see a feature of reduce that I didn’t mention previously: the second argument after the lambda function is the initial value for the reduce.  In our case it’s an empty JavaScript object, but you could just as easily have added an initial person to our map:


var toMap = doubleNestedArray.reduce((carriedObject, currentArray) => {
  carriedObject[currentArray[0]] = currentArray[1];
  return carriedObject;
}, {Bruce: 'Wayne'});

I often use reduce on large JSON objects returned by an API, where I want to sum over one attribute across all the objects.  For example, getting the star count from a list of GitHub repos:


return repos.data.reduce(function(count,repo){
  return count + repo.stargazers_count;
},0);

filter()

Finally, let’s throw in one more for good measure, as it’s another that frequently comes up in big data computing (I think I’ve heard the joke somewhere: “it should be map-filter-reduce, but that doesn’t roll off the tongue quite like map-reduce”).  The filter function is a replacement for this construct:


var arr = [10,20,30,40];

var lessThan12 = [];

for(var i in arr){
  if(arr[i] < 12){
    lessThan12.push(arr[i]);
  }
}

This can be shortened to:

var lessThan12 = arr.filter( num => num < 12 );

Naturally, any sort of predicate logic can be put in place of the if statement to select elements from an arbitrary container.

One thing I’ll often forget is that you have to return the predicate.  It seems odd, because you’re returning a boolean but getting back the actual values for which the predicate evaluated to true in the returned container.

A big data example

A famous “hello world” example from big data is the word count.  Let’s use our newfound knowledge on this problem.


var TheCrocodile = `How doth the little crocodile
Improve his shining tail
And pour the waters of the Nile
On every golden scale
How cheerfully he seems to grin
How neatly spreads his claws
And welcomes little fishes in
With gently smiling jaws`;

var stringCount = TheCrocodile
                     .toLowerCase()
                     .split(' ')
                     .reduce((count, word) => {
                       count[word] = count[word] + 1 || 1;
                       return count;
                     }, {});


Hopefully the power of functional programming is becoming more apparent.  Counting the words in a string took us just a few lines of code.  Not only that, but this code could be parallelized across multiple machines using Hadoop or Spark.

That’s all for functional programming!  In the next post we’ll finally start talking about a fundamental topic of this blog: cryptocurrency.  I’m particularly interested in Ethereum as a developer, and therefore the Solidity programming language.  We’ll start this broad and deeply interesting topic with a brief explanation of what the blockchain is, and move forward from there.  See you then!