The Internet vs Itself

My writing for ETHLend often has me diving into uncharted technology.  Often I’m doing things that have never been done before on the planet, such as connecting ETHLend to uPort.  This gives me a unique perspective into the daily lives of decentralized developers.  At the same time, I work for a very large, customer-centric tech company.  The mesh of these two perspectives has led me to ponder the following question: is the first-to-market advantage of centralized user experience too powerful for decentralized tech to overcome?

Web3.0

Ethereum is often touted as “web3.0”.  It’s true that the tasks decentralized technologies are trying to accomplish are magnificent and gargantuan.  It’s also true that using Ethereum and other decentralized software feels like using the internet in the early days of the web.  Back then you often had to perform what seemed like mysterious incantations, without much idea as to what they were actually doing.  Most cryptocurrency feels very much the same way, with things like the iota wallet showing no balance (1,2,3,4,5,6) and the DAO hack.  This makes for an easy comparison to the early web; however, I argue that the comparison is a bit too easy.

User experience on the centralized web is miles ahead of where it started, which puts it miles ahead of its decentralized counterparts.  I don’t see regular users leaving the comfort of their Mercedes to jump back on the horse and buggy of decentralized user experience.  And, more importantly, I believe this analogy will always hold.

This is not to say that evangelists and technocrats won’t fully embrace this tech, in much the same way the early internet was embraced.  Most of us understand what’s happening behind the mask of the UI and are far more willing than the average user to forgive technical mistakes and ineptitudes.  My concern is not that the tech won’t be adopted, but that adoption will never grow beyond a certain percentage of the population.

The Internet vs Itself

As stated above, it’s easy to look to the technological growth of the internet as an example for decentralized tech to emulate.  The problem with this perspective is that the internet didn’t have to compete with itself.  The internet competed with print, and I would argue that, until its user experience felt effortless, it fought an uphill battle in much the same way decentralized tech is fighting now.  The difference is that the current state of user experience on the internet is so far ahead of its decentralized counterparts, and will continue to outpace their growth.

Does UX matter?

One could argue that user experience isn’t everything.  The needs decentralized technologies are attempting to fill are not necessarily motivated by user experience (although I would argue the motivations of a technology should always be driven by user experience, but that’s a discussion for a different day).  However, in order to reach mass adoption, a superior, or at least equal, decentralized user experience must be delivered.

Put yourself in a layman’s shoes.  They don’t understand what the decentralized tech is trying to achieve; they just know whether or not it works as well as the centralized version, and will go back to the centralized alternative when it doesn’t.  Take the case of Steemit.  It pays people to use the platform, but its UX lags so far behind Reddit’s that it doesn’t matter.  People stay on Reddit, and even after trying Steemit, they return to Reddit.

Engineers Are Users Too

This leads me to my final point.  Engineers are users too.  Building good, clean, maintainable software, such as Reddit, is a difficult enough job as it is.  Software developers don’t need to make their difficult jobs any harder, which is exactly what they’d be doing by opting for decentralized tooling.  The current state of the infrastructure surrounding the modern web makes software development a pleasant experience.  The same cannot be said for the decentralized toolkit.  I hope this changes in the future, but I fear that, much like user experience, the tooling of the modern web is so far ahead, and will continue to outpace its decentralized counterparts, that it never will.

This is in large part due to the Pareto-principle nature of technology.  The decentralized web must compete with the centralized web, an internet backed by billions of dollars in capital, able to hire millions of software engineers to work on even the most minute details of its infrastructure.  The decentralized alternatives have, at most, a hundred thousand engineers scattered about the globe, working for less, and often for free.  Don’t get me wrong, I commend their efforts, and count myself among them, considering I spend my weekends writing about this tech.

Cryptocurrency

With cryptocurrency, however, more and more funding is being poured into the decentralized alternatives, which is why it’s even possible to argue against centralization right now.  This is actually my main motivation for investing in cryptocurrency.  I invest not because I’m interested in buying a lambo, but because I know the only way to spearhead this technology is to put capital into it and see where that takes us.  It remains to be seen whether this funding can ever eclipse that of the centralized counterparts, however.

[Infographic: bitcoin’s market value compared with large companies]
We’ve all seen these little infographics showing how cryptocurrency actually stacks up against companies like Apple and Amazon.  Granted, this one is dated, but, even with the unprecedented surge in bitcoin price, all of cryptocurrency still doesn’t match Amazon’s market cap.

Conclusion

This is why I won’t be quitting my day job any time soon to join the decentralized army full time.  As an engineer I firmly believe that decentralized technology is a more robust design than the centralized alternatives, but the cat is out of the bag.  Users have grown too accustomed to having their data now, instantly.  Don’t get me wrong, the engineering involved in bringing centralized software to fruition is absolutely brilliant, and has taken decades to perfect.  I believe decentralized tech will get there one day.  But when that day comes, the bar for user experience will have been moved still higher by centralized technology.


ETHLend & uPort, A Match Made in Heaven

Here is the second post written in my series of articles for ETHLend, an Ethereum-based lending startup.  Enjoy!

In my previous write-up I mentioned that we would dive deeper into what a possible blockchain credit score would look like. Well, in order to have a credit score, you must first have an identity to attach that credit score to. The most logical place to start our investigation of credit scoring is therefore identity. Today we’ll be using uPort to create a loan on ETHLend with our uPort identity. For the sake of brevity, I’m going to assume you’ve followed the steps in the uPort devPortal and have an identity. I’m also going to assume you’re familiar with how ETHLend works.

uport-connect

To start, we’re going to create a quick and dirty node.js cli for illustrative purposes which we’ll use to connect our uPort identity to:

const uport_lib = require('uport-connect');
const qrcode = require('qrcode-terminal');
const SimpleSigner = uport_lib.SimpleSigner;
const Connect = uport_lib.Connect;

const uport = new Connect('ETHLend Integration', {
  clientId: 'UPORT_APP_ADDRESS',
  network: 'rinkeby',
  signer: SimpleSigner('SIGNING_KEY'),
  // render the connection request as a QR code in the terminal
  uriHandler: uri => qrcode.generate(uri, { small: true })
})

// ask the user's uPort mobile app for their 'name' credential
uport.requestCredentials(['name'])
  .then(userProfile => { console.log(userProfile); })
  .catch(err => { console.log(err); })

You’ll obviously need to npm install uport-connect as well as qrcode-terminal.

Put the above code into a file (may I suggest ETHLendRocks.js?) and run:

node ETHLendRocks.js

You should see a QR code generated on your terminal. Scan it, and grant your uPort mobile app access to your uPort identity credentials. You should see your uPort profile data logged to the terminal. Now that we have an identity, we can store different attributes about this identity using IPFS and the uPort-registry package.

uport-registry

In order to integrate uport-registry, we need to do a bit of extra setup in our existing application:

const uport_lib = require('uport-connect');
const MNID = require('mnid');
const qrcode = require('qrcode-terminal');
const registryArtifact = require('uport-registry');
const Contract = require('truffle-contract');
const Registry = Contract(registryArtifact);
const NUMBER_OF_LOANS = 10; // arbitrary number of loans, for illustration
var registryInstance;
const SimpleSigner = uport_lib.SimpleSigner;
const Connect = uport_lib.Connect;

const uport = new Connect('ETHLend Integration', {
  clientId: 'UPORT_APP_ADDRESS',
  network: 'rinkeby',
  signer: SimpleSigner('SIGNING_KEY'),
  uriHandler: uri => qrcode.generate(uri, { small: true })
})

const web3 = uport.getWeb3();
Registry.setProvider(web3.currentProvider);
Registry.deployed()
  .then(reg => {
    // set the global registryInstance variable so it can be accessed anywhere in the app
    registryInstance = reg;
  });

uport.requestCredentials(['name'])
  .then(userProfile => {
    // decode the user's address based on which network they're on
    let addressPayload = MNID.decode(userProfile.address);
    return registryInstance.set('Open Loans', addressPayload.address, NUMBER_OF_LOANS, { from: addressPayload.address });
  })
  .catch(err => { console.log(err) })

Using the registryInstance.set parameters, we can store any sort of financial data we want about a uPort identity.  We can then retrieve it using the registryInstance.get() method:

uport.requestCredentials(['name'])
  .then(userProfile => {
    let addressPayload = MNID.decode(userProfile.address);
    return registryInstance.set('Open Loans', addressPayload.address, NUMBER_OF_LOANS, { from: addressPayload.address });
  })
  .then(tx => {
    const subject = tx.logs[0].args.subject;
    registryInstance.get('Open Loans', subject, subject)
      .then(value => {
        console.log(hexToString(value));
      })
  })
  .catch(err => {
    console.log(err)
  })

function hexToString (hex) {
  var string = '';
  // skip the '0x' prefix if present
  for (var i = hex.indexOf('0x') === 0 ? 2 : 0; i < hex.length; i += 2) {
    string += String.fromCharCode(parseInt(hex.substr(i, 2), 16));
  }
  return string;
}

Obviously, this can be used to store information about a uPort identity any time they do anything in an ETHLend dApp. How many times they’ve defaulted, how many loans are currently open, and what value they’re for, are a few examples.
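
As a hypothetical sketch of what tracking one of those metrics might look like (the 'Defaults' registry key and the recordDefault helper are my own inventions for illustration, not part of uPort or ETHLend), each metric simply lives under its own key for a given subject address:

// hypothetical helper: bump a per-identity default counter in the registry
const recordDefault = (subject) => {
  return registryInstance.get('Defaults', subject, subject)
    .then(value => {
      // registry values come back as hex-encoded bytes32
      const count = parseInt(value, 16) || 0;
      return registryInstance.set('Defaults', subject, count + 1, { from: subject });
    });
};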

ETHLend

To integrate this uPort identity into the ETHLend ecosystem we simply need to create a contract object with whatever contract we’re interested in interacting with:

const web3 = uport.getWeb3();
const CreateLedgerContractObj = () => {
  // build a web3 contract object from the ABI, then point it at the deployed address
  let LedgerContractABI = web3.eth.contract(LEDGER_CONTRACT_ABI);
  let LedgerContractObj = LedgerContractABI.at(LEDGER_CONTRACT_ADDRESS);

  return LedgerContractObj;
}
const LedgerContract = CreateLedgerContractObj();

In this case I’ve chosen ETHLend’s Ledger contract, so you’d fill in LEDGER_CONTRACT_ABI and LEDGER_CONTRACT_ADDRESS with the ABI and address of whichever smart contract you’re interested in interacting with.

Once we have the smart contract object, we can call methods on it inside the .then() block of the requestCredentials call:

uport.requestCredentials(['name'])
  .then(userProfile => {
    let addressPayload = MNID.decode(userProfile.address);
    const usersEthereumAddress = addressPayload.address;
    makeLoanRequest(usersEthereumAddress, 100);
  })
  .catch(err => {
    console.log(err)
  })

const makeLoanRequest = (address, loanAmount) => {
  LedgerContract.createNewLendingRequest({
    from: address,
    value: loanAmount,
    gas: 2000000
  }, (err, txhash) => {
    if (err) throw err;
    // waitForMined is the polling helper from the uPort tutorials
    waitForMined(txhash, { blockNumber: null },
      () => { // pending callback
        console.log('waiting for loan request to be mined...');
      },
      () => { // success callback
        console.log('your loan\'s transaction hash is:', txhash);
        incrementIPFSLoanCount(address);
        incrementIPFSOpenLoanAmount(address, loanAmount);
        // any other financial data you'd like to track can be get and set here...
      })
  })
}

const incrementIPFSLoanCount = (subject) => {
  registryInstance.get('Open Loans', subject, subject)
    .then(value => {
      registryInstance.set('Open Loans', subject, value + 1, { from: subject })
    })
}

const incrementIPFSOpenLoanAmount = (subject, openedLoanAmount) => {
  registryInstance.get('Open Loan Value', subject, subject)
    .then(value => {
      registryInstance.set('Open Loan Value', subject, value + openedLoanAmount, { from: subject })
    })
}

Here I’ve illustrated how one would store the number and value of open loans made by a uPort identity at the time of loan creation, but any number of metrics can be used, at any point in the lending and borrowing process.

Developers

This brings me to my final point. If we’re going to use IPFS as an open, transparent data store, then the ETHLend smart contract is going to need to update the data associated with the uPort identity via IPFS as well, depending on various conditions known only to the ETHLend smart contract. Currently this is not implemented.

If you’re a developer and you’re interested in developing smart contract code, this is a perfect opportunity to make your mark! Check out the ETHLend repo, and see if you can get your pull request merged into the master branch, or, feel free to get in touch with ETHLend via the many social media channels:

https://twitter.com/ethlend1

https://www.facebook.com/ETHLend/

https://www.instagram.com/ethlend1/

https://steemit.com/ethereum/@ethlend

uPort is always interested in hearing from developers as well, and in fact was instrumental in making this article happen.

https://www.uport.me/

https://github.com/uport-project

https://gitter.im/uport-project/Lobby

The Future of Lending is Blockchain

Note

I’ve recently been hired by ETHLend to write about their decentralized lending ecosystem.  I plan on posting all pieces done for them on my blog as well, in order to keep a record of my work.  Below is the first post! Enjoy!

I invest in everything.  I hold stocks, bonds, mutual funds, and cryptocurrency.  I copy trade.  I hold microloans, real estate, cryptocurrency miners, cash, fiat currency, gold, silver; you name it, I hold it.  The variety of my holdings goes beyond diversification.  I’m interested in testing every form of investment, if for nothing more than pure curiosity about the financial instrument.  It’s one thing to read a book about a subject; it’s entirely different to have some skin in the game.  Often you don’t get a full picture of where you want your finances until you’ve tested the waters in a concrete way.

In a sense, many of these holdings are different forms of lending.  In the case of cryptocurrency you’re often supplying value to a decentralized ecosystem in exchange for some kind of utility, or return, in the case of proof of stake.  In the case of bonds, you’re supplying capital to an entity at some agreed upon interest rate.  However, in all my dealings with traditional financial instruments, there is one thing that separates them from blockchain based financial instruments: their opacity.

Prosper

Let’s take as a comparison a centralized lending platform and its decentralized counterpart.

As previously stated, I hold microloans on the Prosper platform.  I initially invested because it seemed to me a novel idea.  I remember when I was younger googling “how to start your own bank”.  I graduated high school during the 2008 financial crisis, and it seemed to me the banks had it made.  They give you money, do nothing, and profit.  If you don’t pay them back, they take your collateral.  Nearly no risk, and a guaranteed return.  Obviously things aren’t that simple, but this desire to be a moneylender stayed with me, and eventually led me to invest some bitcoin profits into Prosper microloans.

The Prosper system offers a number of metrics to a would-be lender.  Among these are the borrower’s state, their FICO score range, their income, and their debt-to-income ratio.  Another metric is the proprietary “Prosper rating”.

You’d think that all of these metrics would make it easy to choose a borrower who won’t default and yields a nice return.  However, the world is a complex place, and even the most risk-averse borrower can have a spell of bad luck.

ETHLend

With Prosper all I get are these metrics, which, granted, are great metrics.  But I have nothing else.  No past purchases, and certainly no detailed tracking of how the money I’ve lent is being spent.  Once the money is lent, it’s a black box that either returns interest, or doesn’t.

And therein lies the difference between the two platforms.  With ETHLend, I don’t need third parties to background check borrowers.  I don’t need to rely on a “proprietary rating” to inform my decision on whether to lend or not.  And even if I did decide to defer to a blockchain-based rating, the rating would be fully open and auditable.  I can follow every step of the process, in much the same way we can audit the bitcoin code, yet have no idea how our Facebook feed is generated for us.  I can trace every transaction made up to the point of requesting a loan from me, and if anything unsavoury surfaces, or a transaction is made that I’m uncomfortable with, I can refuse the loan.  In the case of a centralized lending service, I have no idea who the borrower is, how they spent their money in the past, or how they’re spending the money I’ve currently lent to them.  Sure, the metrics afford me a broad overview, but with the blockchain I can drill down into every minute detail of an address’s history at my discretion.

Let’s say I lend to an IoT vending machine which uses my currency to order goods to sell, and with its profits pays my interest, all via ETHLend.  Currently a credit score on the blockchain doesn’t exist, so let’s assume I know nothing about this vending machine.  Let’s say this device is vending to addresses I’m not comfortable with, or receiving goods from addresses I’m uncomfortable with.  I can blacklist this device as soon as these transactions appear on the blockchain.  And if the device made any transaction I’m not comfortable with prior to requesting a loan, I don’t even need to go through the hassle of loaning to it first and blacklisting it afterwards; I can blacklist it as soon as that unsavoury transaction is made.

The implications of this are obvious.  Since all transactions are open and auditable, every address can have a credit score assigned to it, almost trivially.  In the next post, we’ll discuss the details of what this credit score might look like.  We’ll also take a look at how ETHLend handles collateral, and how ENS domain names function in this context.  Until then!

How to Stake Lisk

Lisk was one of my first crypto purchases.  I’d wanted to invest in cryptocurrency while I was in college, but I didn’t want my cash stuck in a volatile asset in case I needed it.  After I graduated, I went on a crypto buying spree.  During this time I kept hearing Lisk come up as an alternative to Ethereum.  I did some quick research, saw that it used JavaScript instead of Solidity, and was sold.

I’m investing in cryptocurrency for two reasons.  Firstly, as Warren Buffett said, “Never invest in a business you cannot understand.” I believe being a developer gives me an edge when evaluating cryptocurrencies to invest in.  And secondly, I believe in the tech and am interested in it.  Since I know how quickly things can be built with JavaScript, and some of the troubles people have been having with Solidity (myself included), I figured it wasn’t a terrible investment.

Lisk has more than doubled in value since I purchased it, and we can stake it, which is what we’ll be doing today.

The Wallet

When I purchased Lisk, I’ll be honest, I didn’t really know what I was doing.  I just knew I wouldn’t learn what to do if I didn’t invest.  Because of this, I currently have my Lisk in FreeWallet, which is not only unsafe, but also can’t be used for staking.  So our first step will be to download the Lisk Nano wallet.  This is actually a convenient starting point: you can follow this tutorial assuming your coins are in an exchange wallet, or any non-Lisk wallet for that matter, so long as you know how to transfer them.

Once you have the wallet, either log in or create a new account.  Next, send the transaction to get the LSK into your new Nano wallet so you can start voting.

Voting

Only 101 delegates can forge blocks.  A vote costs 1 LSK, and you can vote for up to 33 delegates with that 1 LSK.  To see all of the delegates available for vote, click the voting tab in the Nano wallet.  You’ll see that there are far more than 101 delegates in that list; however, only the top 101 are able to forge blocks.

When I first looked into staking, I was a bit confused about the 1 LSK voting fee.  I thought, “So what, I have to keep voting 1 LSK at a time until all my LSK is used?”  Nope.  The network knows the LSK balance of your wallet and puts the weight of your vote, and your rewards, behind your delegates accordingly.  Also, voting is a one-time thing; once your votes are set, you’ll continue to receive rewards.
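
To make the weighting concrete (as I understand it): if your wallet holds 1,000 LSK and you vote for 33 delegates, each of those delegates is credited with your full 1,000 LSK of approval weight; the balance isn’t split 1,000/33 between them, and the 1 LSK is only the fee for the voting transaction itself.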

Who to vote for?

Not all of the delegates will pay out rewards to their constituents; some use the rewards to develop the Lisk ecosystem.  So this begs the question: who do you vote for?

There are a number of different sites available to help inform a vote.  The first and most common is EarnLisk.com, another is tools.mylisk.com, and finally there’s the official Lisk Delegate Monitor.  In the Delegate Monitor, you can click the profile icon next to a username.  This will take you to that delegate’s forum post, which will explain their payout proportion, whether they are a pool, and what they’ll do with the funds if they aren’t.

Validating your vote

After casting your votes, you should be able to go to the Lisk Delegate Monitor, click on a delegate you voted for, and see your wallet address under the Voters header.

I’ve yet to receive any rewards, but I’ll be sure to update this post when I do, and explain the process if it involves anything other than simply receiving rewards.  Until next time!

Helpful Links

As usual, I like to pass along references to good content that I found while writing my posts.  There really isn’t much about this topic online, but there is one YouTube channel with good Lisk-specific content.

NOTE: since this post went up, I’ve been asked what the rate of return is for staking Lisk.  I’ve yet to get any returns, so I can’t say for sure.  However, this video lays out all of the math and makes the claim that it should be around 20% annually, though the block rewards decrease over 5 years, and therefore so will the rate of return.
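
In other words, if that 20% figure holds, a wallet staking 1,000 LSK would earn on the order of 200 LSK in its first year, with that figure shrinking each year as the block rewards step down.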

Radeon AMD Beta Blockchain Driver for Ubuntu Linux

In my previous post outlining tips about smart contract deployment using Parity and Truffle, I mentioned I’d be passing along a bit of mining news in this post.  AMD has finally released their blockchain-specific drivers for Linux, in order to overcome growing memory sizes for memory-hard mining algorithms.  This will be an extremely short post explaining how to do the upgrade, since it’s almost trivial.

The install

First install the new software:

wget -qO - http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
sudo sh -c 'echo deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main > /etc/apt/sources.list.d/rocm.list'
sudo apt-get update
sudo apt-get install rocm

Next add GRUB_CMDLINE_LINUX="amdgpu.vm_fragment_size=9" to the grub file:

sudo vim  /etc/default/grub
GRUB_CMDLINE_LINUX="amdgpu.vm_fragment_size=9"
sudo update-grub 

and reboot.

Hash Rate

We experienced a ~2 MH/s increase on each GPU after installing this new driver, which seems to be in line with what everyone else is getting.  As of now there don’t appear to be any drawbacks to the new driver, so give it a shot!

Notes

A couple things to note.  When I did this install, I thought it was replacing the amdgpu-pro drivers we installed in this tutorial, so I uninstalled them before the reboot.  This was not correct.  The rocm package is a kernel-level component for the amdgpu-pro driver package, not a replacement for it.

That’s all for now!  As I said, this one would be short.  Next time I plan on discussing some tweaks we’ve made to our operating system, and why we made them.  Until then!

Lessons learned from an ICO

Background

A few months ago I dabbled in Solidity development.  I’m interested in all things cryptocurrency, and Ethereum smart contracts are a very large aspect of the current state of cryptocurrency.  As such, I had no choice but to experiment with the technology.  This experimentation involved using the Truffle framework to follow the Ethereum Foundation tutorials.

During initial smart contract development it’s common to use the testrpc client, as this speeds up deployment times, allowing for quicker code iterations. There is also less complexity in this chain, so less can go wrong and you can focus explicitly on development.  However, given that the eventual goal is deploying smart contracts to the main net, you eventually begin to turn your attention to the Ethereum test networks.  After testing my contracts using testrpc, my next step is typically to test on the Parity development chain, which, similar to the testrpc client, has less complexity but is an actual blockchain.  After testing on the Parity dev chain, I move to the Kovan or Ropsten test networks.  At the time, the Ropsten network was under attack (and it looks like it is again), so my only choice for testing was Kovan.

Kovan

This is where I began to have problems in my development, and I eventually moved on to other aspects of technology (I think at the time it was react-native).  I was working through a tutorial to deploy a DAO-like smart contract, and everything was going well, until I attempted to deploy the contract to the Kovan test network.

When I deployed via the testrpc client, or even the Parity dev chain, the contract would deploy successfully and I could interact with it via JavaScript.  However, on Kovan the deploy would hang, and then, after a long period of time, fail with this cryptic error:

Error encountered, bailing. Network state unknown. Review successful transactions manually.
    Error: Contract transaction couldn't be found after 50 blocks

At the time I didn’t really understand what was going on behind the scenes with regard to addresses and transactions.  I knew that I had to sign the transaction in Parity, but I had already deployed on the Parity development chain, created two different accounts on the Kovan chain (one email-verified in order to get some test ether), and had my main net accounts floating around in the Parity files as well.  So I tried a number of different key files, unsure of which was correct.  None of them were signing the transaction successfully, or if they were, there wasn’t enough ether in the wallet.

At the time I wasn’t even sure I was barking up the right tree, and given that I wasn’t actually trying to deploy to the main net anyway, I posted to Stack Overflow, never heard back, and shelved the issue.

Enter the ICO

A few weeks ago I posted about proof of stake, and through this post met the CEO of a blockchain startup.  I ended up proofreading their white paper and kept in touch during their ICO.  During the pre-sale, they were deploying the smart contract for their ICO and getting the exact same error I’d gotten months ago.  Since this seemed to be a recurring error, rather than a one-off issue specific to me, I felt it warranted more investigation.

I went back to my original DAO smart contract that was giving me issues and tried deploying it again.  During this time the CEO made a number of observations eventually leading to a successful deployment of their smart contract.

Here are a few lessons to keep in mind when deploying smart contracts to both the main net and the test nets.

Don’t forget to sign your transaction in parity

Since it’d been a few months since my previous test deployment, I had to remind myself how the process works.  When you run:

truffle migrate --network kovan

the transaction still needs to be signed by the address set in the truffle.js file for the Kovan network settings.  If you forget this, Truffle will just wait, and after a very long time finally fail with the cryptic error message mentioned above.

There’s no harm in trying different keys

As users, we’ve been trained that after too many password attempts you’ll be locked out of your account.  In my case I had a bunch of different key files for the same address; if I’d just tried all of them, I eventually would’ve found the correct one.

Key management is paramount

I wouldn’t have needed to try every key file if I’d paid more attention to what I was doing with each key while creating test accounts.  As an Ethereum user this is important, but it becomes even more important as a developer, since sloppy key management can introduce inadvertent bugs, adding to the already high cognitive load associated with development.

Make sure you have enough ether to cover the gas costs

I remember this kept coming up while I was troubleshooting the issue a few months ago.  The first questions asked in forums were always “what did you set your gas price to?” and “does the signing address have enough ether to cover this cost?”

Usually, when you sign the transaction in Parity, it will allow you to set the gas price at that time, and will show you the current average gas price on the network.

[Screenshots: setting the gas price while signing a transaction in Parity]

You can also set the gas price explicitly in the truffle.js network settings.  For example:


kovan: {
  from: 'KOVAN ADDRESS',
  network_id: 42,
  host: 'localhost',
  port: 8545,
  gas: 4712388,
  gasPrice: 25000000000
}

would set the transaction to use up to 4,712,388 gas at a gas price of 25,000,000,000 wei (25 gwei) on the Kovan network.
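
Doing the arithmetic on those settings is a useful sanity check: 4,712,388 gas × 25 gwei = 117,809,700 gwei, or roughly 0.118 ETH, so the signing address needs at least that much ether for the deployment to succeed in the worst case.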

Use the parity transaction viewer

[Screenshot: the Parity transaction viewer dApp]

This was one of the jewels passed along to me by the CEO that I wasn’t aware of.  Parity comes with a bunch of dApps, and one of them is a transaction viewer.  This allows you to keep tabs on the state of your transactions, as well as view other pending transactions on the blockchain.  I believe this is what led the CEO to the gas price insight in truffle.js.

Use etherscan

At the end of the day, when I finally went to check whether my contract had deployed successfully on the Kovan network, I looked at all transactions made by my Kovan address, and some of my transactions from months ago had actually made it onto the blockchain.  Truffle performs what it calls an “initial migration” and then the actual contract deploy.  Some of my initial migrations made it into the blockchain, but the actual contracts didn’t, until I sorted out the rest of the things discussed above.

This lesson is an obvious one: always check Etherscan.  Although sometimes this may add to the confusion.  Since the Kovan network was sluggish at the time, it took a while for my transactions to show up; this, coupled with the cryptic Truffle error, led me to believe absolutely nothing was happening.

Conclusion

Most of these issues are trivial to debug on their own, but combined they make for difficult debugging; add cryptic error messages on top, and it can be hard to break down what’s going wrong in a systematic way.  But this is brand new tech, and because of this, if you’re a developer, any help with these frameworks will speed up development iterations and in turn make this tech easier to work with for other developers in the future.

That’s all for now.  I’ve recently upgraded my miner to a new beta AMD blockchain driver, so I may pass along this bit of info in my next post.  Until then!

A Beginner’s Altcoin Mining Setup with AMD Radeon RX470s, Ubuntu, and Claymore Dual Miner

In previous posts I’ve mentioned that in addition to researching cryptocurrency, I also mine it.  This set off a small flurry of questions about the process of printing money with your computer.

Today I’ll begin the first in a series of posts about altcoin mining.

Buying the hardware

Before you can do any kind of setup, you obviously have to invest in the hardware.  There is a boatload of information out there.  We set out to mimic the Ethereum miners that currently exist as a baseline, and plan to explore other hardware now that we’ve accomplished this task.

With mining, you’re building a computer from scratch.  This means you’ll need, at the very least, the following parts: a CPU, a motherboard, RAM, an HDD, a power supply, and, in the case of a miner or gaming computer, GPUs.

The community consensus for a baseline Ethereum mining rig is as follows:

All together, purchasing this gear with all 6 GPUs the motherboard can handle will end up costing you about $2000, and should net you a hash rate of around 120 MH/s.  If you mine Ethereum, this will make you around $200 per month without doing anything but keeping the miner running (which is actually more difficult than it sounds; more on this in another post).  However, I’d recommend you begin by purchasing only 1 GPU to test with, making the initial investment only around $1000, but the hash rate only around 20 MH/s.
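
Since mining income scales roughly linearly with hash rate, that single-GPU starter rig at ~20 MH/s should earn about a sixth of the full rig’s income, somewhere in the neighborhood of $200 × (20/120) ≈ $33 per month at the rates above.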

We came to this hardware consensus by doing a bit of research.  Here are a few resources we found useful during this research:

The OS

I’m not sure how necessary this is, but we had the OS installed on an SSD prior to the hardware installation (mostly because we were waiting for the hardware to arrive anyway).  Even if it isn’t necessary, it was a good way to test the hardware installation all the way through to OS boot.

We’re using Ubuntu as we’re planning on scaling beyond 1 miner and didn’t want to pay the licensing fee for Windows with every new miner.  As we’ll see in future posts, this is nice from a customization perspective as well.

There are plenty of tutorials available online for installing Ubuntu, here’s a link to the official one.  As usual, if you run into any snags during the process don’t hesitate to get in touch!

Assembling the Hardware

Great! All the hardware has finally arrived! Now what?

I’m a software developer, so if you’re like me, the computer has always come assembled ready to be programmed.

This is where the resources mentioned above became useful.  We used buriedOne for info on the hardware set up, as well as EVGA’s official video.  If you’re lucky you won’t have any snags.  If you’re not lucky, the problems could be anywhere from a bad GPU, to a bad display monitor cable, to you shorting something with static while doing the install.  If you do have any problems, feel free to get in touch! I’d be happy to work through any issues or point you in the right direction.

The Driver

If you’ve made it to this point, your miner is now booting into Ubuntu.  However, since the GPUs aren’t the integrated GPU, Ubuntu doesn’t know how to talk to them, so you need to install AMD’s drivers.  Since this information appeared to be pretty sparse online, I’ll go in depth rather than linking to other resources as I typically do.

First, I’d recommend you install ssh, as it might be useful to have remote access to the miner during installation in case you have any hardware issues.

Then go to AMD’s website, find the link to the Ubuntu download (AMDGPU-Pro Driver Version 17.30 for Ubuntu 16.04.3), and download the driver.  Then go to this AMD tutorial explaining how to install the driver.  The steps should work, HOWEVER, during the step

./amdgpu-pro-install -y

we had to add the --compute flag, as such:


./amdgpu-pro-install --compute

Otherwise, on reboot and login, the Ubuntu desktop failed to load.

If you lose access to your mouse and/or keyboard, you can ssh into the miner and run the following command to get them back:


sudo apt-get install --reinstall xserver-xorg-input-all

And finally, if you ever want to try a different driver, or have simply had enough of mining, you can run this:


amdgpu-pro-uninstall

from anywhere in the terminal (it’s added to the path) to uninstall the drivers.

Verifying the Driver Install

After reboot you can run the following command to ensure that Ubuntu is in fact seeing the AMD GPUs:


lspci -vnnn | perl -lne 'print if /^\d+\:.+(\[\S+\:\S+\])/' | grep VGA

You should see a line similar to:


01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:67df] (rev cf) (prog-if 00 [VGA controller])

for each GPU attached to the miner.

The Software

If Ubuntu is able to detect and use your GPUs, it’s time to start mining some coins!

The de facto mining software for Ethereum is Claymore’s Dual Miner, so that’s what we’ll use for this tutorial.  This link (starting from Step 5) was immensely helpful for the initial setup, and we followed it as a baseline, but have since experimented with a setup that works better for us.  You can also set up dual mining using this link.

Congrats! now you’re mining Ethereum!  Making money while you sleep!  Printing money with your computer!

In the next mining post I’ll walk you through some software customizations we’ve made to our rig, as well as some explorations we’ve made into hardware alternatives to the baseline system.

Criticisms of Proof of Stake

Oh boy.  What a week it’s been.  My previous post, meant to give a brief overview of proof of stake for non-technical readers, seemed to strike a nerve.

To start, I think people misunderstood the motive of the post.  I was simply giving a brief technical explanation of proof of stake for non-technical readers, and some readers seemed to interpret this as support for proof of stake.  I’m a software developer.  I don’t care about the moral or ethical implications of power consumption or immutability.  I’m purely interested in the mechanisms, and I’ve yet to find someone who doesn’t agree that the Casper protocol is at least interesting, which is why I blogged about it.

As some of my readers know, one of the main reasons I post is to partake in the discussion that ensues afterwards, and there was a lot of it.  My previous post was on the front page of /r/blockchain for two days, as well as /r/ethermining for a day and a half, which is great because these were my target audiences.  However, I also posted on a whim to /r/btc and /r/ethereum, where it stayed on the front page of controversial for an entire day, which in my opinion is also great.  I wasn’t expecting the feedback I received, but this is exactly my motivation for writing.  I would like to thank everyone who took the time to read and provide feedback.  In this post I’ll spell out some of the criticisms of proof of stake that I left out of the previous post (again, it was intended for a less technical reader), as well as some new ones that I’d never considered until now, thanks to my readership.

External/Internal Staking

It seemed the major criticism people levied against proof of stake was that the staking is internal to the system rather than external.  /u/jps_ makes the point concisely:

“Proof of Work has something physical at stake, namely the electricity necessary to probably solve the puzzle. In essence, the bits in the network are secured by activity outside the network: someone has to generate that electricity. This results in a stable equilibrium, because in order to compromise the bits in the network, one must expend considerable external energy. The laws of thermodynamics make it very difficult to profit by consuming energy”

I don’t buy that external staking makes proof of work superior.  If we assume a free market economy, I should be able to exchange my money for electricity or cryptocurrency.  Obviously markets aren’t rational, and the price of a cryptocurrency or the price of electricity isn’t what it should be at the particular time I make this trade of cryptocurrency for electricity, but that’s a discussion for a different day.  The point I’m making is that, technically, the $300 of cryptocurrency I bought is of equal staking value to the $300 of electricity I “staked” to mine the cryptocurrency, at the moment I did both.  We can discuss the volatility of electricity prices vs cryptocurrency prices, and decide one is a more stable form of staking than the other, but the fact remains that $300 of electricity is equivalent to $300 in cryptocurrency at the time of mining, and it would therefore take an extra $300 to perform a 51% attack in both cases.*

One argument you could make is that the Ethereum Casper rollout is premature.  If the market cap of the ether being used to stake is less than the cost of the electricity miners are putting forth to mine ether, then you could make a case that this raises the probability of a 51% attack.

Another argument is that once the electricity is used, it’s on the blockchain forever, while staked cryptocurrency can easily be converted back into fiat.  As we saw with the Casper smart contract from the previous post, the funds are locked for a certain number of blocks.  This may not be completely irreversible as in the proof of work case, but the number can be altered to suit the needs of the blockchain (which, depending on your opinion, may or may not be a good thing; see below), to include irreversibility if need be.

Nothing to Stake

I introduced this briefly in the previous post, but didn’t go into much detail, as I didn’t want to confuse non-blockchain readers.  Since it’s already been introduced, I’ll assume everyone is familiar with the problem and why it exists.  I mentioned that Casper purports to solve this problem by use of an arbiter smart contract which penalizes malicious validators.  One argument that kept coming up was concern that a malicious chain could be built, hidden from the rest of the validators, and then shown at an opportune time.  Casper handles this by locking the validator funds inside the smart contract and receiving “tattletale” transactions from validators when evidence of malicious behaviour is found in previous blocks.  One of these malicious behaviours is not betting on a chain; another is betting on the “wrong” chain.  These are both actions that would be necessary to build this “hidden” chain, and since they’re penalized, you’d run out of ether far before you could pull the attack off.

Rather than discussing this particular case (many others were brought up, and many more will follow, I’m sure**), the point is that this smart contract can be altered to ward off any sort of malicious proof of stake behaviour that may arise in the future.

Centralization

This leads us to perhaps the most damning criticism of Casper: the fact that Casper’s proof of stake involves a smart contract that acts as the arbiter of the validators.  There is no clear analogue to this in the proof of work context; it simply doesn’t exist.  It’s obviously a single point of failure, as well as an attack vector, and, depending on your perspective on the blockchain, a terrible case of centralization.  One point continually driven home in my discussions with /u/Erumara was his/her reluctance to support something so complex compared to proof of work.  And I have to admit, I do agree that proof of work is much simpler than every proof of stake solution I’ve seen proposed.

However, as /u/naterush1997 points out:

In both cases, there is “centralizing” code – in that it is code that everyone relies on. However, the Casper contract being public means that we have the benefit of seeing if there is some fatal flaw and/or bug. In the case of the attack on PoW described above, this would be impossible, as the attack described is indistinguishable from someone having a ton of computing power.

And even if it is overly centralized, I still think the technology should be explored.  Obviously there is a large difference of opinion between the bitcoin and ethereum communities in this regard, and I intend on exploring these differences in a future post.  For now, let’s just say the Ethereum developers are more willing to take concrete risks with their software, even if it ends badly.  And in fact, I agree with this strategy.  It’s one thing to pursue an idea simply for the sake of it (as in the case of pure research); it’s entirely different to have millions of dollars at stake in the pursuit of an idea.  This is part of what got me into cryptocurrency.  This mixture of interesting technology and direct financial skin in the game for all parties involved (investors, users, developers) can’t be found anywhere else in the world.

Conclusion

One of the best outcomes of the post was this Andreas video /u/dietrolldietroll passed me.  He makes the external/internal staking argument, but at the end of the video (at around 42:56), when pressed by a question, he says that both proof of stake and proof of work can coexist in the market due to their different use cases.  I’d say that sums up my opinion of the matter fairly well.  As I said before, tech needs to crash and burn to move forward.  I’m not saying proof of stake will be a catastrophic failure for Ethereum, but even if it is, it will be a success for the blockchain movement at large.

I received a bunch of criticism (bunch of haters, man) in /r/ethermining for an offhand conjecture I made about proof of stake privacy coins, so I intend to fully rectify this in my next post.  Until then!

*While writing this argument I came upon a new idea.  If it’s true that bitcoin is defended directly by the cost of the electricity used to mine coins, could the sum of all electricity used up to block A be considered the true “value” of bitcoin at that block?

**For example, what if a validator never checked in to the smart contract, and is therefore never penalized, then finally showed up, having rewritten the entire blockchain to look much more attractive to the rest of the validators?  They’d need 51% of the currency to pull off this attack, but I believe that, using the Casper smart contract, even this might be possible to defend against.

EDIT: /u/jps_ in response to my argument against external staking:

If you buy $300 worth of Electricity and use it to secure a PoW network, it buys a finite time/amount of security. After the expenditure, the electricity is consumed and there is no more security. The only residual value you hold is the rewards earned along the way. These rewards cost you an extrinsic $300 that is not returned to you. This creates an objectively extrinsic value of the reward generated in return for security: basically, the reward is worth the expenditure in electricity generated to consume it.

If you take $300 and buy ETH and stake it, you can stake that $300 for as long as you want. Whenever you cease participating in securing the network, your $300 in ETH is returned to you, in addition to your rewards from staking.

Therefore, when you started staking you had $300 you exchanged for ETH. You finish staking and you hold ETH you can sell for $300. Plus rewards. Your net extrinsic expenditure is zero, and your net gain is the staking rewards.

So PoS is a value tautology. It creates something at no external cost, which has a putative external value greater than zero.

Proof Of Stake vs Proof Of Work

I’ve decided to write a post about the differences between proof of stake (a protocol currently being used by NEO and being worked on by Ethereum) and proof of work (a protocol made famous by Bitcoin, and currently in use by coins like Zcash and Monero).  I felt motivated to write this post because there seems to be a bit of confusion, when I talk with people about the proof of stake protocol, as to what exactly happens.  Many I’ve talked with seem to view it as creating money out of thin air (as if mining wasn’t that already), or at the very least as less secure than proof of work.

Proof of Work

I believe people feel more comfortable with proof of work because it’s the simpler of the two protocols.  The idea is this: your computer is going to try billions of different inputs to a hash algorithm (it’s going to put in work), and if it comes up with the right output (it has proved that it worked on the puzzle sufficiently), you’ll be rewarded. Here is an example proof of work algorithm from the Ethereum cryptocurrency tutorial:

// The coin starts with a challenge
bytes32 public currentChallenge;
// Variable to keep track of when rewards were given
uint public timeOfLastProof;
//Difficulty starts reasonably low
uint public difficulty = 10**32;

function proofOfWork(uint nonce){
    // Generate a random hash based on input
    bytes8 n = bytes8(sha3(nonce, currentChallenge));
    // Check if it's under the difficulty
    require(n >= bytes8(difficulty));
    // Calculate time since last reward was given
    uint timeSinceLastProof = (now - timeOfLastProof);
    // Rewards cannot be given too quickly
    require(timeSinceLastProof >=  5 seconds);
    // The reward to the winner grows by the minute
    balanceOf[msg.sender] += timeSinceLastProof / 60 seconds;
    // Adjusts the difficulty
    difficulty = difficulty * 10 minutes / timeSinceLastProof + 1;
    // Reset the counter
    timeOfLastProof = now;
    // Save a hash that will be used as the next proof
    currentChallenge = sha3(nonce, currentChallenge, block.blockhash(block.number - 1));
}

If you were to mine this coin, you’d essentially send your input (nonce) to the proofOfWork function in this smart contract.  If your input passes the difficulty check, and it’s been long enough since the last reward, you receive a reward; otherwise the function reverts (that’s what the require statement does in Solidity) and you try the next input you think might result in a sha3 hash that passes the check.  This is proof of work mining in a nutshell.
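
For a sense of what “trying inputs” looks like from the outside, here’s a minimal, hypothetical miner loop, assuming coin is a deployed truffle-contract instance of the contract above and an unlocked account (a real miner would compute the hash locally and only submit winning nonces, rather than paying gas on every guess):

// naive miner sketch: submit nonces until one passes the require() checks
const mine = async (coin, account) => {
  for (let nonce = 0; ; nonce++) {
    try {
      await coin.proofOfWork(nonce, { from: account, gas: 200000 });
      console.log('reward claimed with nonce', nonce);
      return nonce;
    } catch (err) {
      // the require() reverted: this nonce didn't qualify, try the next one
    }
  }
};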

Proof of Stake

Proof of stake has the same goal as proof of work: to achieve distributed consensus on the state of the blockchain.  Going back to the git perspective, both protocols are trying to select maintainers of the blockchain “branch” without allowing anyone too much control.  Proof of stake does this by substituting economic power for hash power.  The more coins you have, the more likely you, or the block you’ve chosen, is to be used, and the more you’ll be rewarded for it.  I believe cryptocurrency developers are moving in this direction because, unlike proof of work, proof of stake has the added property that the more coins you’re holding, the more likely you are to act in solidarity with the will of the users of the blockchain when selecting blocks.  In proof of work there is a tension between miners and users of the blockchain that may not exist in a proof of stake protocol (this is yet to be seen), as often the users will also be the validators (a miner in proof of stake is usually called a validator).  There’s also the added benefit that proof of stake doesn’t cost millions of dollars in power and bandwidth every year to maintain the blockchain.

Casper

Let’s use the Ethereum Casper protocol as a detailed example of proof of stake, as this one seems to be getting so many people interested in what proof of stake is.

The Casper protocol will involve a smart contract deployed to the Ethereum blockchain.  An address interested in becoming a validator sends the amount of ETH they would like to stake on blocks to the smart contract.  The smart contract then receives two kinds of messages from validator addresses: PREPARE and COMMIT.  A PREPARE is essentially a validator saying “I think this set of transactions should be the next block”; if one of these blocks attains a 2/3 economic vote in the smart contract, it becomes a candidate for a COMMIT.  After the possible PREPARE blocks have been selected, validators vote on this set of blocks with the COMMIT message.  Once again, if a 2/3 economic vote is found for a COMMIT block, it is added to the blockchain, and all the validators who took part in selecting this block are rewarded for minting it, in proportion to the amount of ETH they deposited in the smart contract when joining the validator pool.  As far as I’m aware, there doesn’t yet exist a mechanism for selecting validators*, but it could easily be something like a random subset selection of all possible validators, weighted by the amount of their deposit in each dynasty.
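
To make the “2/3 economic vote” idea concrete, here’s a toy sketch of the threshold check (my own illustration, not Casper code): each validator’s vote counts for their deposit, and a block passes when the deposits voting for it reach two-thirds of all deposits.

// toy illustration: votes weighted by deposit, with a 2/3-of-total threshold
const hasSupermajority = (votersFor, deposits) => {
  const total = Object.values(deposits).reduce((a, b) => a + b, 0);
  const inFavor = votersFor.reduce((sum, v) => sum + deposits[v], 0);
  return inFavor * 3 >= total * 2;
};

// deposits of 500, 300, and 200 ETH: alice and bob alone carry the vote
hasSupermajority(['alice', 'bob'], { alice: 500, bob: 300, carol: 200 }); // true (800/1000)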

Nothing to Stake

One of the problems with proof of stake is the “nothing to stake” problem.  The idea is as follows: if I don’t have to compute any hard hash puzzles, why not bet on every block that comes my way? Since this incentive structure exists for everyone in a nothing-to-stake protocol, everyone stakes their hard-earned cryptocurrency on every block.  Now we have no consensus; there are 50 different chains, all growing at the same rate, and all possibly legitimate, because no one wants to take the lead and decide on one.  Because of this lack of consensus, double spend attacks also become much easier and more likely than they are under a proof of work protocol.

Ethereum’s Casper protocol circumvents the nothing to stake problem by locking the funds in the smart contract discussed above, only paying them out after a sufficient amount of time, and destroying, or penalizing, the ether for various kinds of behaviour (to include malicious).

Conclusion

I think people are uneasy about proof of stake due to a misunderstanding of proof of work more than anything else.  As I stated in my git perspective of the blockchain, the only reason miners exist is to act as the “maintainer” of the blockchain, and since we want this maintainer to change often, mining was used as a mechanism to distribute time as the maintainer evenly.  With proof of stake the same thing is happening; the mechanism for choosing maintainers is just based on the amount of cryptocurrency a person holds, rather than their hash power.  The 51% attack we saw in the previous post now becomes a 51% currency attack, whereby you’d have to own 51% of the cryptocurrency you’re attacking.  This is presumably a much more difficult feat to accomplish than purchasing 51% of the hash power.  In the currency case, you’ve just purchased 51% of the currency, raising its market price all the while, and you only have the other 49% of the currency left to defraud, at which point news will probably have broken that someone purchased 51% of the currency on the market, and the currency is now socially worthless.  In the case of proof of work, you just secretly buy more computing power, or bribe, or even hack existing mining pools, and rather than defrauding 49% of the currency you’re able to defraud all of it.

As you can see, we aren’t creating money out of thin air.  At least in the Casper protocol, there is a very real chance of losing your money, and your money is also stuck in the smart contract, so it’s no different from a government bond gaining interest, or mining for that matter.

Until next time!

* If someone has any information, let me know.  There is a reddit discussion here, but since it’s a year old, I hesitate to trust it given how much Ethereum proof of stake has changed; this seems to suggest it’s proportionate to the ETH you deposit, and Vlad also mentions it as a possibility here.  I looked briefly at the Casper source code and didn’t see validator selection anywhere, but since I was brief, there’s a very good chance I wasn’t looking in the correct place.

Iota Address Hygiene and Tangle Transaction Lookup

In my previous post I wrote a broad overview of what the tangle is, and compared it with the blockchain.  Well, this post took off, and I had many great discussions and received a lot of great feedback, as well as new information.  Today I’ll be applying some of this feedback, as well as spreading some of the new information I’ve received over the course of these discussions.

Iota Address Hygiene

The section of the previous post that seemed to strike the largest nerve was in regard to criticisms I’d heard about the tangle protocol.  One of these was given by Eric Hop: “The only drawback with iota is that it’s not safe to send multiple transactions to the same address.”  Later, Eric produced this forum link explaining the dangers of sending from the same wallet address multiple times.  The reason sending an iota transaction from the same address twice is insecure is that iota has elected to use the quantum-resistant Winternitz One-Time Signature Scheme.  I’m not entirely certain of the details of the Winternitz scheme, but I do know that its security degrades exponentially the more times the same key is used to sign; roughly speaking, each signature reveals part of the private key.  This is why it’s called a “One-Time Signature Scheme”: it is intended to be used only once.  If you click the above forum link and read through, you’ll see that the iota wallet automatically moves your balance to a new address any time you send any iota on the tangle, because of this property of the signature scheme.

While I don’t currently view this as a problem, it is something to be aware of if you’re developing software on the tangle that doesn’t use the wallet for transactions (see phx’s post in the first forum link for how the wallet handles this under the hood).

Transaction Lookup

Another criticism mentioned in the previous post was the question of transaction lookup efficiency.  One of the beautiful things about the blockchain is that it’s really easy to look up how much bitcoin an address is holding: simply follow the blocks back from the current one and keep track of all transactions leaving or entering that address.  With the tangle, this problem seems to become insurmountable.  Iota handles it with yet another simple, yet novel, idea: toss the concept of order, or time, out the window.  Essentially, it doesn’t care how you got the balance in your address; it simply cares that your balance is never negative.  To do this, a node syncing on the tangle simply iterates over all the transactions known to the tangle and groups them by address, regardless of the order in which they occurred.  This lack of order allows frameworks like MapReduce (something we’ve discussed previously) to be used on the tangle, since transactions can be grouped in parallel.
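
As a rough sketch of that order-free bookkeeping (my own illustration, assuming a simplified transaction shape with from, to, and value fields):

// group value transfers by address and sum them; no ordering is needed,
// so the grouping can be split across workers map-reduce style
const balances = (transactions) => {
  return transactions.reduce((acc, tx) => {
    acc[tx.from] = (acc[tx.from] || 0) - tx.value;
    acc[tx.to] = (acc[tx.to] || 0) + tx.value;
    return acc;
  }, {});
};

balances([{ from: 'A', to: 'B', value: 5 }]); // { A: -5, B: 5 }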

IOT

One thing that’s been annoying me lately about the iota community is the focus on IoT (the internet of things).  I know that’s the direction the iota devs are pushing the software, and it’s obviously a great use case; however, I don’t feel the tangle should be considered an exclusively IoT protocol.  It has many possible use cases, even simply as a micropayment currency, something other cryptocurrencies are severely lacking.  I don’t feel that iota should be hitching its wagon to a horse that may be dead in the future, as this dead horse could end up an albatross around the protocol’s neck.

This was a short post, but I wanted to have time to fully digest material in the links I was given last week before writing about them.  I also felt that the topics for this post were better treated in isolation, rather than as additions to the tangle vs blockchain material.

I had intended to post about proof of stake next, as it seems to be a hot topic lately; however, after doing some research into the tangle protocol, I might go through some of the tutorials, or even go on a bug bounty!  We’ll see where my curiosity takes me, but hopefully you’ll be along for the ride!  Until next time!

NOTE: here’s a paper on the Winternitz scheme if you’re interested in the details, given to me by the BlockchainNation Facebook group moderator Greg Dubela (@DoctorDoobs).