Why I Turned Down an Interview with Google

Around the time I was graduating from Utah State University, I went on a massive job hunt.  The aim was less to land a specific job and more to build interviewing chops and experience as many different companies and their interview processes as I could.  During that time I got offers from a number of great companies and visited corporate campuses all over the United States.


The obvious front runner for every computer science major is Google.  To many of us, Google represents the pinnacle of computer science.  They pay really well, they have a great company culture, they’re illustrious, they have strong brand recognition even among non-technical people, and their interview is well known as one of the most difficult. They also have the 20% rule (which doesn’t actually exist), which piqued my interest above all else.

My thought was, I’ll prepare for an interview with Google, and if I don’t manage to land an interview or get an offer, hopefully those skills will be transferable to other interviews at other companies.  As luck would have it, I did manage to interview at Google, and, as luck would have it, I also managed to not receive an offer.  However, I did receive offers from every other company that I interviewed with.


One of the most intriguing processes I went through was at Amazon.  In fact, I never even had an on-site interview.  The interview process for college graduates at Amazon at the time involved an online pre-screen (nearly every other company has this too, including Google).  After this, you’re given a proctored online behavioral assessment, immediately followed by a proctored coding assessment.  At the time, I thought this proctored assessment was taking the place of a phone screen and would ultimately lead to an on-site interview.  I took it and did relatively well.  I knew I did well because, before the interview process, I had searched for what the assessment generally involved and found it contained a certain number of coding questions.  I finished these with time to spare and clicked complete, thinking “great, I’ll get called for an on-site.”  Instead, yet another coding problem came up, this one more difficult than the previous ones.  I couldn’t quite get all of the tests to pass and eventually ran out of time.  After a few weeks I got an email from an Amazon recruiter saying something along the lines of “we’d like to move forward” and nothing else.  Again, I thought “great, I’ll get an on-site at Amazon.”  A few more weeks passed, and I was sent an offer letter directly via email.

IBM Watson

Contrast this to how IBM handled the offer process.  I did a similar online coding assessment, which I had to do in a language I wasn’t terribly familiar with (the choices were Python and Java).  I then had a phone screen that went extremely well.  Rather than moving forward with an on-site or directly to an offer, I was sent an invitation to some bizarre hiring event in Atlanta.  Mind you, I was still a student at USU, and the event was scheduled near midterms or finals (I don’t remember which).  I was also trying to prepare for interviews at the time, spending all of my weekends and after-work/school hours working through algorithm questions.  It felt like a waste of resources, especially after I searched for what the hiring event entailed and found that everyone would receive an offer from IBM regardless of how they performed at the event.  For these reasons I tried to get out of going, but I was told that if I didn’t attend I wouldn’t be given an offer.

The event was very odd to me.  At one point we were sent into a conference room and shown a propaganda-style film (I will say, it was very well made) about how IBM has been around since the beginning of computing.  I also received a text from “Watson” during my flight back, informing me that I’d be receiving my formal offer soon.

It all seemed a bit unnecessary and wasteful.  One thing that really intrigued me about Amazon was that there were no frills and no bullshit.  I met their hiring bar, they sent me an offer, the end.

Why I Turned Down an Interview with Google

Fast forward 8 months.  I’ve been with Amazon as an SDE since graduating.  During this time I’ve been contacted by all manner of recruiters, CEOs, start-ups, blockchain enthusiasts, and competing companies trying to marshal skilled developers behind their cause.  Recently a Google recruiter reached out to me about interviewing at Google again. I’ve elected to pass on the opportunity.  Here’s why.

The Hiring Bar

The hiring bar at these companies is a bit absurd if you ask me.  I’ve often heard this absurdity defended with “it’s the best indicator we have” or “how else will they assess candidates?” I don’t have the solution to this problem, but I do know I’m not going to validate it as an engineer.  I went through the process once in order to make up my own mind using actual experience, and I know which side of the fence I sit on.

A massive number of things go into being a good engineer, let alone a good employee.  I would say coding makes up maybe 40% of this, and algorithmic skill accounts for maybe 5% of that 40%.  I would also say that assessing for this 5% is largely random luck.  Yet for some reason, this 5% of the 40% is what’s most carefully assessed during the interview process.

My time is better spent on things like building businesses, teaching others, and investigating technology that I find interesting rather than preparing for irrelevant algorithm puzzles.  Don’t get me wrong, I’m glad I did it a year ago, but I don’t want to do that now.

The Culture

As I stated in the opening section, the phrase “they have a good culture” is often heard in the tech industry when talking about Google.  However, I feel that’s a marketing ploy that’s worked out quite nicely for Google (see the non-existent 20% rule that lured me in).  Recent controversies suggest as much: the James Damore lawsuit, the $2.7 billion EU fine, and, just a few weeks ago, an article written by a (now) ex-Google engineer.

What intrigued me the most about Amazon was there was no illusion.  This is business and you are an employee that will be leveraged in creating new businesses for Amazon.  For many this may be a bad thing, but for me, that’s just fine.  Google is no different.

My Interests

Since my days as a musician I’ve never made decisions strictly for money.  I took the job at Amazon because I wanted to see how a large-scale tech company functions internally.  I wanted to learn how to write large-scale software using industry best practices.  I wanted to grow as a leader, a coder, and an engineer.  If I’m going to leave my position at Amazon, it’s going to be for a position I feel passionate about, not for more money.  I’m not convinced that Google will give me that.  Sure, they’ll say “pick any team you’re interested in” (and that’s assuming I’m some tech rock star, which I’m not) or whatever other lip service is given to new hires.  But one of the things I learned from interviewing at so many different places is that you rarely get to pick anything, and recruiters will tell you a lot in the initial phone call to get you excited about the company and the position.

My Current Position

I really like my current manager.  This is her first role as a manager, and this is my first role at a large tech company, so I feel like she and I are learning the ropes together.  So far she’s doing a great job, and she herself has a very good manager who I can tell is mentoring her very well.  Together they’ve created a culture of high-quality work without sacrificing the engineers’ work-life balance.

I also like my teammates.  To put it bluntly, there aren’t any assholes.  Everyone is ready and willing to step in and help when a helping hand is necessary.  For the most part no one is overly critical of one another and the senior engineers on the team are always happy to share wisdom.

I’m sure this same scenario could easily play out at Google as well, but why put myself through months of mind-numbing algorithm puzzles, and put my side projects on hold, for a chance to roll the dice on a new team at a new company for slightly more money?




Lessons learned from an ICO


A few months ago I dabbled in solidity development.  I’m interested in all things cryptocurrency, and ethereum smart contracts appear to be a very large aspect of the current state of cryptocurrency.  As such, I had no choice but to experiment with the technology.  This experimentation involved using the truffle.js framework to follow the ethereum foundation tutorials.

During initial smart contract development it’s common to use the testrpc client, as this speeds up deployment times, allowing for quicker code iteration. There is also less complexity in this chain, so less can go wrong and you can focus explicitly on development.  However, given that the eventual goal is deploying smart contracts to the main net, you eventually turn your attention to the ethereum test networks.  After testing my contracts using testrpc, my next step is typically the parity development chain, which, similar to the testrpc client, has less complexity but is an actual blockchain.  After testing on the parity dev chain, I move to the kovan or ropsten test networks.  At the time the ropsten network was under attack (and it looks like it is again), so my only choice for testing was kovan.


This is where I began to have problems in my development, and I eventually moved on to other aspects of technology (I think at the time it was react-native).  I was working through a tutorial to deploy a DAO-like smart contract, and everything was going well until I attempted to deploy the contract to the kovan test network.

When I deployed via the testrpc client, or even the parity dev chain, the contract would deploy successfully and I could interact with it via JavaScript.  On kovan, however, the deploy would hang and then, after a long period of time, fail with this cryptic error:

Error encountered, bailing. Network state unknown. Review successful transactions manually.
    Error: Contract transaction couldn't be found after 50 blocks

At the time I didn’t really understand what was going on behind the scenes with regard to addresses and transactions.  I knew that I had to sign the transaction in parity, but I had already deployed on the parity development chain, created two different accounts on the kovan chain (one email-verified in order to get some test ether), and had my main net accounts floating around in the parity files as well.  So I tried a number of different key files, unsure of which was correct.  None of them signed the transaction successfully, or, if they did, there wasn’t enough ether in the wallet.

At the time I wasn’t even sure I was barking up the right tree, and given that I wasn’t actually trying to deploy to the main net anyway, I posted to Stack Overflow, never heard back, and shelved the issue.

Enter the ICO

A few weeks ago I posted about proof of stake, and through that post met the CEO of a blockchain start-up.  I ended up proofreading their white paper and kept in touch during their ICO.  During the pre-sale they were deploying the smart contract for their ICO and getting the exact same error I’d seen months ago.  Since this seemed to be a recurring error rather than a one-off issue specific to me, I felt it warranted more investigation.

I went back to my original DAO smart contract that was giving me issues and tried deploying it again.  During this time the CEO made a number of observations eventually leading to a successful deployment of their smart contract.

Here are a few lessons to keep in mind when deploying smart contracts to both the main net and the test nets.

Don’t forget to sign your transaction in parity

Since it’d been a few months since my previous test deployment, I had to remind myself how the process works.  When you run:

truffle migrate --network kovan

the transaction still needs to be signed by the address set in the truffle.js file for the kovan network settings.  If you forget this, truffle will just wait and, after a very long time, fail with the cryptic error message mentioned above.

There’s no harm in trying different keys

As users, we’ve been trained to expect that after too many password attempts you’ll be locked out of your account.  That doesn’t apply here.  In my case I had a bunch of different key files for the same address; if I’d just tried all of them, I eventually would’ve found the correct one.

Key management is paramount

I wouldn’t have needed to try every key file if I’d paid more attention to what I was doing with each key while creating test accounts.  As an ethereum user this is important, but it becomes even more important as a developer, since sloppy key management can introduce inadvertent bugs, adding to the already high cognitive load associated with smart contract development.

Make sure you have enough ether to cover the gas costs

I remember this kept coming up while I was troubleshooting the issue a few months ago.  The first questions asked in forums were always “what did you set your gas price to?” and “does the signing address have enough ether to cover this cost?”

Usually when you sign the transaction in parity it will allow you to set the gas price at that time, and will show you the current average gas price on the network.

[Screenshots: parity’s transaction-signing dialog, showing the gas price setting and the current network average.]

You can also set the gas price explicitly in the truffle.js network settings.  For example:


network_id: 42,
gas: 4712388,
gasPrice: 25000000000

would set the transaction to use 4712388 gas with a gas price of 25000000000 wei (25 gwei) on the kovan network.

Use the parity transaction viewer

[Screenshot: parity’s transaction viewer dApp.]

This was one of the jewels the CEO passed along that I wasn’t aware of.  Parity comes with a bunch of dApps, and one of them is a transaction viewer.  It lets you keep tabs on the state of your transactions, as well as view other pending transactions on the blockchain.  I believe this is what led the CEO to the truffle.js gas price insight.

Use etherscan

At the end of the day, when I finally went to check whether my contract had deployed successfully on the kovan network, I looked at all the transactions made by my kovan address, and some of my transactions from months ago had actually made it onto the blockchain.  Truffle performs what it calls an “initial migration” before the actual contract deployment.  Some of my initial migrations made it into the blockchain, but the actual contracts didn’t until I sorted out the things discussed above.

This lesson is an obvious one: always check etherscan.  Although sometimes this may add to the confusion.  Since the kovan network was sluggish at the time, it took a while for my transactions to show up; this, coupled with the cryptic truffle error, led me to believe absolutely nothing was happening.


Most of these issues are trivial to debug on their own, but combined, and topped with cryptic error messages, they can make it difficult to break down what’s going wrong in a systematic way.  But this is brand new tech, and because of this, if you’re a developer, any help with these frameworks will speed up development iterations and in turn make this tech easier to work with for other developers in the future.

That’s all for now.  I’ve recently upgraded my miner to a new beta AMD blockchain driver, so I may pass along this bit of info in my next post.  Until then!

Higher-Order Functions, map, reduce, filter. Yeah, the ones used in Big Data Frameworks like Apache Spark and Hadoop

In my previous post about Functional Programming in Java, I mentioned higher-order functions that are often used in big data frameworks like Hadoop and Spark.  We’ll be discussing only a small subset of the possible functions available in Spark and Hadoop, because this subset contains the functions I’ve found most useful as a developer not working in big data.  However, by the end we’ll have enough fundamental knowledge to apply these function calls to a big data framework if necessary.


We’ll begin with perhaps the easiest function to grasp, and the first in the all-too-familiar phrase map-reduce: the map function.  The way I’ve often heard map described is that we’re “mapping an input to a specified output”.  To do this mapping we supply a function that maps a set of inputs to a set of outputs.  In essence, you’re iterating over every object in a data structure and mapping it to a new object.  To me it often feels like a for-each loop that forces the programmer to do something inside of it.  Here’s a quick example:

    var arr = ['1','2','3','4','5'];
    arr.map( i => console.log(i));

Here the function we’re supplying to map the inputs (‘1’, ‘2’, …, etc.) is a lambda that takes i as a parameter and maps this input to the console as its output.  It’s obviously the same as:

arr.forEach(function(i) {
  console.log(i);
});

or even the age old:

for(var i in arr){
  console.log(arr[i]);
}

Obviously the example using the forEach could’ve used a lambda in place of the function, and the map function can take a non lambda function, but I wanted to illustrate the many different ways to write it.  We can also add an arbitrarily long function in place of the console log.

arr.map( i => {
  var iTimes20 = i * 20;
  if(i > 3){
    console.log((iTimes20 % 20) == 0);
  }
});

“What happens in map, stays in map”

One thing that always bites me in the ass is thinking I’m altering the object passed into the lambda, in this case one of the strings ‘1’, ‘2’, …, just as I would be when using the old for loop.  This isn’t the case.  If you do something like this:

arr.map( i => i = 100);

and print out the result of arr, you’ll see it remains unchanged.  This is one of the main tenets of functional programming: you never want to “cause side effects”.  What this means is you want to avoid changing the state of a program in unintentional places.  What happens in map, stays in map.  If you want to alter the state of the objects in arr, you need to return a new copy of the original array:

var newArr = arr.map( i => 100 );

Now if you console.log newArr, you’ll see an array of 100’s in place of the original values, and nothing is changed in arr itself.  This is one advantage map has over the old-school for loop: you can be certain the state of the original container holding the objects will be the same after the call as it was before.

This idea is difficult to wrap your head around as a college student (at least it was for me).  You’re preparing for interviews and space-versus-time complexity is beaten over your head again and again.  The above code looks atrocious from this perspective: you’re unnecessarily creating a new array, making the space 2n (where n is the size of the array).  Yes, you’re correct.  However, as you’ll see when you get into industry, writing bug-free code is often much more important than shaving off a factor of n.  In reality the code is still O(n), and you can always come back and refactor this bit of code if it ends up being the software’s bottleneck.  It’s often the case that bottlenecks appear elsewhere in the software architecture, though, and they’ll have been discovered during design.


I often think of reduce as a concise replacement for this programming construct:

var finalSum = 0;
var arr = [10,20,30,40,50];

for(var i in arr){
  finalSum += arr[i];
}

The same thing is accomplished using reduce:

var finalSum = arr.reduce((countSoFar, currentVal) => {
  return countSoFar + currentVal;
});

Here, countSoFar is an “accumulator” which carries the returned value throughout the function calls, and currentVal is the current object in the collection.  So we’re adding the accumulated sum we’ve seen up to this point, to the current value in the array, and returning this to the countSoFar for the next iteration.  This particular example is kind of trivial, however, since you have access to the accumulator you can do some really interesting things.  For example:

var doubleNestedArray = [
  ['Bob', 'White'],
  ['Clark', 'Kent'],
  ['Bilbo', 'Baggins']
];

var toMap = doubleNestedArray.reduce((carriedObject, currentArrayValue) => {
  carriedObject[currentArrayValue[0]] = currentArrayValue[1];
  return carriedObject;
}, {});

This will return the array as a map object that looks like this: { Bob: ‘White’, Clark: ‘Kent’, Bilbo: ‘Baggins’ }

Here we see a feature of reduce that I didn’t mention previously: the second argument after the lambda function is the initial value for the reduce.  In our case it’s an empty JavaScript object, but you could just as easily add an initial person to our map:

var toMap = doubleNestedArray.reduce((carriedObject, currentArray) => {
  carriedObject[currentArray[0]] = currentArray[1];
  return carriedObject;
}, {Bruce: 'Wayne'});

I often use reduce on large JSON objects returned by an API, where I want to sum over one attribute across all the objects.  For example, getting the star count from a list of GitHub repos:

return repos.data.reduce(function(count, repo){
  return count + repo.stargazers_count;
}, 0);


Finally, let’s throw in one more for good measure, as it’s another that very frequently comes up in big data computing (I think I’ve heard the joke somewhere: “it should be map-filter-reduce but that doesn’t roll off the tongue quite like map-reduce”).  The filter function is a replacement for this construct:

var arr = [10,20,30,40];
var lessThan12 = [];

for(var i in arr){
  if(arr[i] < 12){
    lessThan12.push(arr[i]);
  }
}

This can be shortened to:

var lessThan12 = arr.filter( num => num < 12 );

Naturally, any sort or predicate logic can be put in place of the if statement to select elements from an arbitrary container.

One thing I’ll often forget is that you have to return the predicate.  It seems odd, because you return a boolean but get back the actual values that evaluate to true in the returned container.
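Since map, filter, and reduce each return a new container, they also chain together nicely.  Here’s a small sketch of my own (the orders data is made up) combining all three:

```javascript
var orders = [
  { item: 'book', price: 12,  qty: 2 },
  { item: 'pen',  price: 2,   qty: 10 },
  { item: 'desk', price: 150, qty: 1 }
];

// Total spent on line items costing less than $100 each:
var total = orders
  .filter(order => order.price < 100)            // drop the desk
  .map(order => order.price * order.qty)         // each order to its line total
  .reduce((sum, lineTotal) => sum + lineTotal, 0);

console.log(total); // 44
```

Each step reads left to right, and no intermediate loop variables or mutable temporaries are needed.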

A big data example

A famous “hello world” example from big data is the word count.  Let’s use our newfound knowledge on this problem.

var TheCrocodile = `How doth the little crocodile
Improve his shining tail
And pour the waters of the Nile
On every golden scale
How cheerfully he seems to grin
How neatly spreads his claws
And welcomes little fishes in
With gently smiling jaws`;

var stringCount = TheCrocodile
                     .split(/\s+/)  // split on any whitespace, including the newlines
                     .reduce((count, word) => {
                       count[word] = (count[word] || 0) + 1;
                       return count;
                     }, {});


Hopefully the power of functional programming is beginning to become more apparent.  Counting the words in a string took us just a handful of lines of code.  Not only that, but this code could be parallelized across multiple machines using Hadoop or Spark.
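The same reduce trick works on the keys of the count object itself.  As a quick follow-up of my own (using a shortened excerpt of the poem), here’s how you might pull out the most frequent word:

```javascript
var excerpt = `How doth the little crocodile
Improve his shining tail
And pour the waters of the Nile`;

var counts = excerpt
  .split(/\s+/)   // split on any whitespace, including newlines
  .reduce((count, word) => {
    count[word] = (count[word] || 0) + 1;
    return count;
  }, {});

// With no initial value, reduce seeds `best` with the first key,
// then keeps whichever word has the higher count:
var topWord = Object.keys(counts)
  .reduce((best, word) => counts[word] > counts[best] ? word : best);

console.log(topWord); // 'the' (appears 3 times in the excerpt)
```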

That’s all for functional programming!  In the next post, we’ll finally start talking about a fundamental topic associated with this blog: cryptocurrency.  I’m particularly interested in Ethereum as a developer, and therefore the solidity programming language.  We’ll start this broad and deeply interesting topic with a brief explanation of what the blockchain is, and move forward from there.  See you then!

Functional Programming in Java

As I stated in my previous post I’m deep diving into Java because that’s the primary language used at Amazon.  The first thing I did was investigate what functional aspects have been added to the language since I last used it, because I enjoy the functional nature of JavaScript.  In this tutorial, I’ll show you what I’ve learned so far.

Our Previous Example

To begin, let’s rewrite the Java example used in the previous post in order to show how Java has evolved to handle functions with Java 8.  Originally we had:

public static Integer addOne(Integer number){
    return number + 1;
}

public static Integer squareNumber(Integer number){
    return number * number;
}

public static Integer composeFunctions(Integer input){
    return addOne(squareNumber(input));
}


This can be rewritten functionally as:

public static Function<Integer,Integer> addOne = (Integer number) -> {
    return number + 1;
};

public static Function<Integer,Integer> squareNumber = (Integer number) -> {
    return number * number;
};

public static Integer composeFunctions(Function f1, Function f2, Integer input){
    return (Integer) f2.apply(f1.apply(input));
}


You’ll notice a few peculiarities that don’t match up with the JavaScript example.  The first is the lambda functions.  In the case of our JavaScript example, we were able to assign our functions to variables:

var fun = function(){
    console.log("I'm a function!");
};



(anyone? anyone? alright…sorry back to the code….)

However, we can use a slick bit of syntax named the “fat-arrow function” to write this another way:

var fun = () => {
    console.log("I'm a Star Wars");
};


That’s all this is in the Java code, but there they’re called “lambda functions” and they use -> instead of =>.  Obviously there are going to be some differences in the details, but for our purposes you can view them as the same.  (Note: they’re also called anonymous functions.)

A second peculiarity you’ll notice is the Function<Integer,Integer> variable type.  This is Java’s way of overcoming the “functions can’t be variables” we talked about in the previous post.  This variable is a Function variable (just like Integer, or String).  The syntax is a throwback to the OG functional programming languages as they define their functions similarly.  What it’s saying is “this is a function that receives an Integer type and returns an Integer type”.

This allows Java to do things like have functions that take Integers and return functions that take Integers and return Integers (Function&lt;Integer,Function&lt;Integer,Integer&gt;&gt;), or functions that take such functions and return Integers (Function&lt;Function&lt;Integer,Integer&gt;,Integer&gt;).  I could go on for days, here simultaneously having a bit of fun as well as foreshadowing my frustrations with the Java language below.

Yet another peculiarity is this .apply() business.  JavaScript has this as well, and honestly I haven’t looked into the difference between .apply() and simply calling the function in JavaScript, although I’m certain there is a difference.  But anyway, that’s all that’s happening here.  f1.apply(input) calls our first function with the input; this returns an Integer, which is passed into f2.apply(), thus calling f2; finally, we return the result when the f2.apply() call finishes.
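For the curious, the JavaScript difference is that .apply() lets you choose the this value explicitly and pass the arguments packed into an array; for a plain function that never touches this, the two calls behave identically.  A quick sketch of my own:

```javascript
function add(a, b) {
  return a + b;
}

// Direct call:
var direct = add(2, 3);

// .apply(thisArg, argsArray): same function, but `this` is set
// explicitly (null here, since add doesn't use it) and the
// arguments arrive as an array:
var applied = add.apply(null, [2, 3]);

console.log(direct, applied); // both 5
```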

The final peculiarity is that the .apply() method returns an Object.  We need to cast this object to an Integer, which should be safe enough given we can see the function we’re passing in returns an Integer.  However, it would pay to be extra careful about this if you were writing any kind of serious code, especially because the parameter is simply a Function, and not a Function&lt;Integer,Integer&gt; specifically.

In the previous post I pointed out that JavaScript is a mess of a language.  Here we see one of the differences between JavaScript and Java, and that’s static typing (I’ve heard this joked about as “JavaScript: everything is everything!”).  Here we are explicitly defining that the first two parameters passed into our composeFunctions method must be functions (okay, I’ll admit, bad naming).  In the equivalent JavaScript code we could very easily pass in an object in place of the function, and we wouldn’t get an error until run time.  This can lead to some nasty bugs and a severe lack of readability when looking at other people’s code.  But I digress, onto the next example!

First Class Functions In Java

public static void testFunction(String name){

    Runnable innerFunction = () -> {
        System.out.printf("hello! ");
    };

    // runs the function it's given, then prints the name
    BiConsumer<Runnable, String> lateExecution = (Runnable inputFunction, String n) -> {
        inputFunction.run();
        System.out.println(n);
    };

    // takes the function but never runs it
    BiConsumer<Runnable, String> lateNoExecution = (Runnable inputFunction, String n) -> {
        System.out.println(n);
    };

    lateExecution.accept(innerFunction, name);
}



We previously discussed the issues with JavaScript, so now let’s bitch about Java.  The equivalent JavaScript code from the previous post was extremely succinct and easy to follow: we defined some inner functions, assigned them to variables, and called them.  The code is literally running off the page in this Java example. Here, you have to know what a Runnable is, and that it’s a functional interface (whatever that is) that takes no parameters and returns nothing.  You also need to know that a BiConsumer takes exactly two parameters and returns nothing.  Don’t you dare try using the Function interface for either of these things.  Don’t think about calling apply() on the BiConsumer either.

The equivalent JavaScript took me a minute to write and didn’t involve any googling or a compiler to double-check.  Sadly, I can’t say the same for the Java.  Obviously the functional aspects of Java were an afterthought (which is fine, it wasn’t intended for that purpose anyway), and you get that feeling when trying to program functionally in it.

Remember in the previous post when I asserted that functional languages are great because the modularity is natural to a programmer, making it more probable to occur?  When you have to memorize 10 different functional interfaces to get anything done, that reduction in cognitive load vanishes.

In the next post I’m thinking we’ll look further into lambda functions, as well as the .map(), .filter(), and .reduce() higher order functions (used in the famous Hadoop big data framework).  Until then!


The Power of Functional Programming (and why we’ll be exploring it in Java)

I dislike Java as a programming language.

I learned C++ at Utah State University (not that I like C++ any better).  We were required to take a Java course to supplement the C++ knowledge, though I had studied Java in my free time before going back to school anyway.  I am grateful to Java, as it was my first experience with Object Oriented Programming (OOP).  C++ is less verbose than Java, but I don’t necessarily love C++ more than Java.  In all honesty, I’m not much of a language snob.  I’m firmly in the “choose the language that’s best for the job” camp, which is why I haven’t complained about using Java.  It really is the right tool for what Amazon is doing.

This is why after school I focused heavily on learning JavaScript.  In my opinion JavaScript is an absolute mess of a language; however, in many cases, particularly for the projects I’m interested in building, it is the best tool for the job.  One thing I particularly like about programming in JavaScript is its functional aspects.  I’m not one for details or pedantry, so I’ll stay out of formally defining functional programming or using programming-language buzzwords.  Here is the Wikipedia page on functional programming, though, and if there is interest I’d be more than happy to dive into other aspects of functional programming not discussed here (we’ll only be covering a very small subset).  For now, I’ll take a very simple definition of functional programming: functions can be treated as variables.  This is the most advantageous aspect the paradigm possesses over procedural languages (C++, Java, etc.).

I remember when I first heard of functional programming.  I thought, “great, so what? I can accomplish anything I need to in a procedural language, why bother learning another paradigm and confusing myself?”  I had dabbled in Haskell, but that was more out of curiosity about all the hype surrounding functional programming than anything else.  It was really certain JavaScript frameworks (looking at you, React.js), which naturally lend themselves to the functional paradigm, that introduced me to it in a pragmatic way.

Higher Order Functions

So functions can be treated as variables, what does this really mean?  Here’s a contrived example to illustrate the point:

var addOne = function(number){
    return number + 1;
};

var squareNumber = function(number){
    return number * number;
};

var composeFunctions = function(func1, func2, input){
    return func2(func1(input));
};
Calling composeFunctions(squareNumber, addOne, 100) will square 100, pass the result (10000) into addOne, and return 10001.  Here’s the equivalent Java code:

public static Integer addOne(Integer number){
    return number + 1;
}

public static Integer squareNumber(Integer number){
    return number * number;
}

public static Integer composeFunctions(Integer input){
    return addOne(squareNumber(input));
}
If you’ve studied functional programming even a little, you’ll be able to spot a limitation with this Java code (hint: notice how in the JavaScript we’ve got three parameters to composeFunctions, but only one in the Java code).  What happens if we want the ability to call the addOne function first, and then the squareNumber function after that?  We’d have to add a second Java method for this and explicitly call these functions in the reverse order:

public static Integer composeFunctionsReverse(Integer input){
    return squareNumber(addOne(input));
}
This is because, in the past, Java couldn’t treat functions as variables, so you couldn’t pass a function into another function as a parameter.  In the case of the JavaScript, if we want to change the order the functions execute in, it’s as simple as swapping the arguments to the function (calling composeFunctions(addOne, squareNumber, 100) instead of composeFunctions(squareNumber, addOne, 100)).
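To make that swap concrete, here’s a minimal, runnable sketch using the same three functions defined above, showing both orderings side by side:

```javascript
var addOne = function(number){
    return number + 1;
};

var squareNumber = function(number){
    return number * number;
};

var composeFunctions = function(func1, func2, input){
    return func2(func1(input));
};

// squareNumber first: (100 * 100) + 1 = 10001
console.log(composeFunctions(squareNumber, addOne, 100));

// addOne first: (100 + 1) * (100 + 1) = 10201
console.log(composeFunctions(addOne, squareNumber, 100));
```

No new function had to be written to reverse the order; only the arguments changed.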

I remember when I was looking into functional programming initially, I saw examples like this and thought “great, so what? I’ll just write another one line function, what’s the big deal?”.  I’ll admit, in this contrived example it doesn’t seem like much.  However, when you start adding things like networking code to make API calls, or threading, the power of this small abstraction becomes enormous.  Not to mention the cleanliness of the code (we didn’t have to write the second function in the JavaScript case).

First Class Functions

Another powerful aspect of this paradigm is that functions can be written inside of other functions, and executed whenever the programmer dictates:

var testFunction = function(name){
    var innerFunction = function(){
        //ignore the process.stdout.write, it's just console.log
        //without the newline
        process.stdout.write("hello! ");
    };

    var lateExecution = function(inputFunction, name){
        inputFunction() + console.log(name);
    };

    var lateNoExecution = function(inputFunction, name){
        inputFunction + console.log(name);
    };

    lateExecution(innerFunction, name);
    lateNoExecution(innerFunction, name);
};
Here is yet another contrived example; however, it illustrates the point quite well, particularly because there is no equivalent Java code to do the same thing (until recently of course, check out the next tutorial for this).

When we call testFunction(“Jackson”), innerFunction is created at the time of the call; we then pass this variable into both our lateExecution and lateNoExecution functions (also both created when testFunction is called).  The lateExecution function receives innerFunction, immediately calls it, then console.log’s the name passed into it.  However, in lateNoExecution, we never execute the function passed to us, so nothing happens other than “Jackson” being printed to the screen, even though we still have access to the function as a variable.
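To make the execute/don’t-execute difference concrete without relying on printed output, here’s a hypothetical variation of the same idea that returns strings instead of printing them (the names are mine, not from the original code):

```javascript
var testFunction = function(name){
    var innerFunction = function(){
        return "hello! ";
    };

    // Calls the function it was handed, then appends the name
    var lateExecution = function(inputFunction, name){
        return inputFunction() + name;
    };

    // Never calls inputFunction, so its "hello! " is simply lost
    var lateNoExecution = function(inputFunction, name){
        return name;
    };

    return [lateExecution(innerFunction, name),
            lateNoExecution(innerFunction, name)];
};

console.log(testFunction("Jackson"));
// → [ 'hello! Jackson', 'Jackson' ]
```

The first element shows what happens when the passed-in function is actually invoked; the second shows it being carried around as a value but never executed.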

I know if you’ve come from procedural programming, you’ll read the above paragraph and think “man, that’s a lot of this calling that, calling this, but maybe not calling that, and only when the function is executed”.  I know because I thought the same thing.  But now that I’m on the other side of the fence, I’m realizing the writers of posts I’d read previously were simply struggling to come up with succinct examples motivating the paradigm.  It just isn’t easy to motivate functional programming in small examples.  The paradigm really shines when code starts to grow very large, because functions can be chosen or removed at will and modularized with one simple keystroke rather than redesigning the software from the ground up (looking at you, class hierarchy diagrams).

As an example, in the composeFunctions portion of this post, what would happen if we needed to do some preprocessing on the number passed in, based on whether the number was even or odd, before calling the other two functions?  We’d do the check on the number and add the preprocessing.  Then next month, a customer decides the preprocessing needs to be done another way.  Between now and then, we’ve added a bit more logic in that if statement block.  Now we need to unravel the entire mess and change whatever preprocessing we were doing, which may affect logic (aka mutate state) later in the scope without us noticing.  If we were using functional programming, we’d have simply added a third preprocessing function as a parameter to composeFunctions and called it inside the if statement.  When the customer changes the preprocessing requirements, we simply pass in the new function.
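Here’s a minimal sketch of that idea; composeWithPreprocess, halve, and double are hypothetical names I’m inventing for illustration, not part of the original code:

```javascript
var addOne = function(number){
    return number + 1;
};

var squareNumber = function(number){
    return number * number;
};

// Hypothetical extension: a third function handles preprocessing,
// applied here only when the input is even
var composeWithPreprocess = function(preprocess, func1, func2, input){
    if (input % 2 === 0) {
        input = preprocess(input);
    }
    return func2(func1(input));
};

// This month's preprocessing rule: halve even numbers
var halve = function(number){
    return number / 2;
};
console.log(composeWithPreprocess(halve, squareNumber, addOne, 100));
// 100 → 50 → 2500 → 2501

// Next month the customer changes the rule: just pass a new function,
// no if-block surgery required
var double = function(number){
    return number * 2;
};
console.log(composeWithPreprocess(double, squareNumber, addOne, 100));
// 100 → 200 → 40000 → 40001
```

The if statement and everything around it stay untouched when the requirement changes; only the argument does.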

Another possibility: what if we wanted to reuse this composeFunctions method for a succession of two different function calls, rather than squareNumber and addOne (perhaps cubeNumber and addTwo?), without losing the functionality gained from the original code?  In Java you’d have to write a new function calling cubeNumber and addTwo specifically, whereas here, we need only call composeFunctions with the newly written cubeNumber and addTwo (or hell, even cubeNumber and addOne, or addOne and addTwo).
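A quick sketch of that reuse, with the hypothetical cubeNumber and addTwo written out; composeFunctions itself is unchanged from above:

```javascript
var composeFunctions = function(func1, func2, input){
    return func2(func1(input));
};

var addOne = function(number){
    return number + 1;
};

var cubeNumber = function(number){
    return number * number * number;
};

var addTwo = function(number){
    return number + 2;
};

console.log(composeFunctions(cubeNumber, addTwo, 3));  // 27 + 2 = 29
console.log(composeFunctions(cubeNumber, addOne, 3));  // 27 + 1 = 28
console.log(composeFunctions(addOne, addTwo, 3));      // 4 + 2 = 6
```

Every new pairing is just another call, not another method.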

This paradigm lends itself to modularity in a way procedural programming does not.  I’m not saying you couldn’t accomplish the same things in a procedural language, but what I am saying is, it’s much easier to do in a functional language, making a programmer much more apt to reach for it.

Hopefully I’ve managed to motivate the power of functional programming (although after reading the above paragraphs, I’m not so sure I have! ha ha!).  This is a very small piece of the larger paradigm, and in fact, the power isn’t done justice by these contrived examples; the power lies in the completely new way of looking at code, rather than specific use cases or academic ammunition for water cooler arguments over which language is better.  As we’ll see in the tutorial, even traditionally procedural languages are adding functional aspects because of this power.