Hosting a Tor Relay Node on an AWS EC2 instance

Back to Business

Ok, let’s pick up where we left off.  We’re now ssh’d into our EC2 Ubuntu server.  The next step is installing tor.  Do not run apt-get install tor.  Instead, follow the directions found here (we’re running Xenial Xerus, just so you don’t have to check).  As a sanity check, type tor into the terminal after following those steps.  If you see an error like “command tor not found”, back up and retrace your steps, or get in touch with me for help; something went wrong.


Just like we did in the Windows tutorial, we’re going to edit the torrc file to configure the node.  I use vim, but feel free to use whatever command-line editor you’re familiar with.  If you’ve never used one before, again, feel free to get in touch!  I’m a missionary for The Church of Vim.

The torrc file is located in the /etc/tor/ directory. So run:

sudo vim /etc/tor/torrc

Don’t forget the sudo, otherwise the file will be read only.

Paste the following lines at the bottom of the file:

Nickname <YOUR NODE NICKNAME>

ORPort 9001

RelayBandwidthRate 75 KBytes # Throttle traffic to 75KB/s (600Kbps)

RelayBandwidthBurst 200 KBytes # But allow bursts up to 200KB (1600Kb)

AccountingMax 1 GBytes

AccountingStart month 3 15:00

ExitPolicy reject *:* # no exits allowed

Here <YOUR NODE NICKNAME> should be a unique name for your node that you’ll remember, because we’ll use it to search for the node on atlas.  We’re also directing tor to listen on the port we opened in the previous tutorial, limiting the bandwidth the tor network can use through us, and disallowing exit traffic.  We have to cap the bandwidth at these specific numbers, otherwise we’ll accrue charges from Amazon for our EC2 instance.  This also illustrates why the tor network is slow, as discussed in our previous post.  Sadly, we’re now one of the slow nodes.

Finally, run:

sudo service tor reload

Sanity Check

The tor documentation says the logs will be output to the /var/log/tor/ directory.  I never saw them there; instead, I had to use journalctl.  Just like in the Windows tutorial, we need to ensure the line “Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.” is being output by the tor node; otherwise the node isn’t working.  So we’ll run:

journalctl | grep "Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor."

If nothing is output to the console, your tor service isn’t running.  Here are a few troubleshooting tips.  First, run:

sudo service tor stop && sudo tor

If you never see that line output, there is a problem with your tor installation; consider uninstalling tor and reinstalling it.  If, however, it runs fine this way, there is a problem with the init.d script that daemonizes the tor process.  Consider this reddit link.

Going back to AWS cost from the previous post, we can also monitor our tor node bandwidth usage using the following command:

journalctl | grep "Accounting enabled."

You should see output like:

Heartbeat: Accounting enabled. Sent: 94.42 MB, Received: 121.84 MB, Used: 122.38 MB / 1.00 GB, Rule: max. The current accounting interval ends on 2017-09-03 15:00:00, in 21 days 2:08 hours.

Here the node is telling us that it’s used 122.38 MB of the 1GB we’ve defined as the monthly max in the torrc (the Accounting lines).  When we reach 1GB of traffic, the node will stop accepting connections for the rest of the month, thus keeping our EC2 cost inside the free tier metrics.


If the “Self-testing indicates...” line showed up after running the journalctl command, your node is up and running!  We can now head over to atlas and search for the Nickname you set in the torrc file.  Keep in mind that, like in the last tutorial, it will take a few hours for your node to show up on the atlas webpage.

If you named your node something unique, when you search for it, you’ll be taken directly to your details page.  Here’s a quick rundown.

OR Addresses is the public IP address (and port) of your node.  Contact is your contact information, if you set it in the torrc file (we didn’t, but it’s explained in the Windows tutorial).  Advertised Bandwidth is how much bandwidth you’re telling the rest of the network you’re willing to relay.  Exit Policy is set to reject all exit traffic; as we said, exit nodes get put under extra scrutiny.  Uptime is how long our node has been active on the tor network.  You can see the AS Name is, Inc., since we’re using Amazon’s hardware.  Below this you can see metric graphs for your node.  Since ours is so new, nothing shows up yet.


You’ve just joined a unique brotherhood.  You are a protector of internet freedom.  You’ve managed to do what only a very small portion of the population has done, and that’s take concrete action toward a free and open internet.  Anyone can click like, or retweet, but few actually look into the details and understand what it takes for a free and open internet.  I’m proud of you for following along, and if you know me at all, you’ll know this may be the first time I’ve ever said that.


Let me know if I can make this tutorial better or if you managed to make it through!


Creating an EC2 Instance on AWS to use as a Tor Relay node

But This One Goes to Eleven

A few weeks ago I documented the steps necessary to get a tor relay node up and running using Windows.  This was mainly because I wanted to give a gentle introduction to anyone interested in cryptography and internet freedom who may not have a large chunk of experience in tech.  I strongly believe anyone can become an expert in any niche given the scale of information currently available on the internet, and I don’t think technology is any different.  All it takes is a little bit of inspiration, which I was hoping to provide by taking it slow.  Now we’re going to turn up the volume a bit.



One large drawback of hosting a relay node on your personal laptop is your connection to the tor network is not persistent.  Every time you close your laptop or shut it off and turn it back on, your tor software loses all the connections that were so helpfully shuttling network traffic for everyone.  This time, we’re going to set up a Free Tier AWS EC2 instance in the cloud, and run the relay node software using this server space.  This will ensure our node will forever stay up supporting the tor network and internet freedom!

One thing to note is, once you’ve got your EC2 instance provisioned, you can put any software you want on it and it will automatically be connected to the rest of the internet!  I use this frequently for a lot of my side projects.

Also, I’m going to assume you’re running some flavor of Linux and have some familiarity with the command line moving forward.  If you don’t, please don’t fret, just google, and as always, don’t hesitate to reach out to me and ask for help!

Welcome to The Cloud

First, we’ll go to and click “Sign In to Console”.  After entering your email address on the next page, click “I am a new user.”  Here you’ll enter your first name, retype your email address, and set your password.  Then click “Create account”.  You’ll then be redirected to a Contact Information page.  Select the Personal Account radio button and fill out the information in the form.  When you’re done, click the Agreement check box and then Create Account (if it took you more than two tries to get the captcha don’t worry….I did too).  Next, AWS asks for your payment information.  I know when I first signed up I was hesitant; I’m always hesitant about entering credit card info.  However, I assure you, your account will not be charged throughout this tutorial.  In fact, I once inadvertently racked up $1100 in AWS charges after uploading my IAM keys to GitHub (yes, I’m that stupid) and had them all refunded.  Amazon rocks with refunds.

Next, you’ll fill out the captcha and click the button to receive the phone call from Amazon herself.  Type in the PIN shown on the screen and end the phone call.  Then click Continue.  We want the Basic support plan, so select that and again click Continue.  You’ll be taken to the homepage; click Sign In to Console.  Enter your email and the password you set earlier in the tutorial.

You now have an AWS account.  Welcome to the cloud.


There is an endless supply of cloud computing resources offered by Amazon.  We’ll only be looking at one today, but I recommend you check out as many as you can.  In fact, if you don’t have a job, or dislike the one you’ve got, and you learn the ins and outs of AWS, you could easily land yourself a DevOps role at a great company.


We’ll be using the EC2 service provided by AWS, so click that one under AWS services.  EC2 stands for Elastic Compute Cloud.  The elastic part means that your computational resources can scale up or down depending on how much you’re using.  I believe you can also bid on spare computing capacity (spot instances), whereby the price fluctuates with demand for Amazon’s servers.  Since we’ll be doing very minimal computing, ours will be free!

Click the blue Launch Instance button.  You’ll be presented with the various operating systems you can put on your EC2 instance.  We’ll be using Ubuntu, so scroll down until you see Ubuntu Server 16.04 and click Select.  Next we’ll choose the computing resources we’d like to use on the server.  Leave it as is, which should be the t2.micro setting, which, as Amazon has so kindly informed us, is free tier eligible.  Now click Review and Launch, and finally, Launch.

Private Key File

You’ll now be presented with a pop up informing you that you need to create a private key file in order to SSH (secure shell) into your Ubuntu server.  As I said, we’re turning the volume up for this tutorial.  Here’s the general idea of what we’re doing.

You have a laptop that you’re reading this blog post on.  Somewhere in New York or Oregon, or hell, even Tokyo, Amazon has a bunch of servers it uses for various tasks on its website.  Well, it’s decided to rent some of those servers out to the public to make money when it isn’t using them.  You’ve just been given a server somewhere in the deepest depths of Mordor.

Great, we have a server!  But how do we access it?  We have to use a networking protocol that allows us to pass commands to the computer over the internet.  However, we don’t want just anyone to be able to pass commands to our computer, so we need a way of securing it.  Enter: the private key file.  Using roughly the same technique that keeps your traffic secure while you do your online banking, we can keep our commands to the cloud computer safe.

Bombs Away!

Select Create a new key pair, then name it whatever you like.  I’m going to name mine “torRelayNode”, because that’s what I’ll be using this EC2 instance for.  Then click “Download Key Pair”.  Save this file somewhere safe; if anyone gets hold of it, they’ll have access to your server and can do whatever they’d like with your AWS account.


Now click Launch Instances, then View Instances.  You should see your new t2.micro instance, with an Instance State of running.

While we’re here, we’re going to open the port we’ll be using on the tor node.  On the left, you’ll see Network & Security, and under that, Security Groups; click that.  You should see two security groups; we want the one with the Group Name launch-wizard-1.  When you click it you should see its current network rules.  The only one should be of type SSH, using the TCP protocol, on port 22, from any source (  Click the Edit button above that, then Add Rule.  From the dropdown under Type, choose Custom TCP; the protocol is TCP; the port will be 9001 (as we’ll see in the next tutorial).  Using the Source dropdown, select “Anywhere”.  You’ve now added the firewall rule that allows incoming connections from the tor network to reach your node on port 9001.  If at any point you decide to change the tor node’s port in the torrc file, don’t forget to change it here as well!

You now have your very own cloud computer!


But how do we access it?  We use that private key file we downloaded, and ssh into it. First we need to get the public IP address associated with our EC2 instance.  On the left (where you saw Network & Security) you should see Instances, click Instances under that.  You should see “Public DNS:” below where all of your EC2 instances are listed, and a big long string starting with “ec2-“.  That’s your shiny new server’s address on the internet.  Copy it, we’ll need it later.

Next, we’ll open up the command line, cd into whatever directory you downloaded or moved the private key file to, then lock down that file’s permissions with chmod (ssh will refuse a key file that other users can read):

chmod 400 <The name of your .pem file, in my case torRelayNode.pem>

and finally:

ssh -i <The name of your .pem file> ubuntu@ec2-the-ip-address-you-copied-from-before

Note that you’ll need to be in the same directory as your .pem file, or pass the command the path to your .pem file.  If it’s your first time logging into your EC2 instance, you’ll be asked whether you want to trust it.  Type yes and hit enter.  Now you’re on your new computer!

Shutting it Down

Before I forget: if you do somehow end up getting charged by AWS somewhere down the road, you can kill your instance by going to the Instances page, clicking your instance, selecting the Actions dropdown, and changing the Instance State from Start to Stop (or, the nuclear option, Terminate).  You’ll still have to cough up the existing charges (they should be small), but this will stop you from accruing more.  On that note, I’d also recommend setting up a billing alarm to send you an email if you ever do get charged.  I’ll monitor this EC2 instance for a month and post an update if it does end up charging me, so we can fix it together if it does.  And again, don’t hesitate to get in touch if you have any problems with the charges!

Next Steps

In the next blog post, we’ll get the tor relay software up and running on your new cloud server, and you can now brag at parties that you’re actually protecting revolutionaries in dictatorial countries, just like I do any time someone asks me which VPN to use.  Also, don’t forget to buy a tor sticker to put on your laptop, as you’ll have joined the brotherhood after the next tutorial.









Functional Programming in Java

As I stated in my previous post I’m deep diving into Java because that’s the primary language used at Amazon.  The first thing I did was investigate what functional aspects have been added to the language since I last used it, because I enjoy the functional nature of JavaScript.  In this tutorial, I’ll show you what I’ve learned so far.

Our Previous Example

To begin, let’s rewrite the Java example used in the previous post in order to show how Java has evolved to handle functions with Java 8.  Originally we had:

public static Integer addOne(Integer number){
    return number + 1;

public static Integer squareNumber(Integer number){
    return number * number;

public static Integer composeFunctions(Integer input){
    return addOne(squareNumber(input));

This can be rewritten functionally as:

public static Function<Integer,Integer> addOne = (Integer number) -> {
    return number + 1;

public static Function<Integer,Integer> squareNumber = (Integer number) -> {
    return number * number;

public static Integer composeFunctions(Function f1, Function f2, Integer input){
    return (Integer)f2.apply(f1.apply(input));

You’ll notice a few peculiarities that don’t match up with the JavaScript example.  The first is the lambda functions.  In the case of our JavaScript example, we were able to assign our functions to variables:

var fun = function(){
    console.log("I'm a function!");


(anyone? anyone? alright…sorry back to the code….)

However, we can use a slick bit of syntax named the “fat-arrow function” to write this another way:

var fun = () => {
    console.log("I'm a Star Wars");

That’s all this is in the Java code, but there they’re called “lambda functions” and they use -> instead of =>.  Obviously there are going to be some differences in the details, but for our purposes, you can view them as the same.  (Note: they’re also called anonymous functions.)

A second peculiarity you’ll notice is the Function<Integer,Integer> variable type.  This is Java’s way of overcoming the “functions can’t be variables” limitation we talked about in the previous post.  This variable is a Function variable (just like an Integer, or a String).  The syntax is a throwback to the OG functional programming languages, as they define their functions similarly.  What it’s saying is: “this is a function that receives an Integer type and returns an Integer type”.

This allows Java to do things like have functions which take Integers and return functions that take Integers and return Integers (Function<Integer,Function<Integer,Integer>>), or functions that take functions from Integers to Integers and return Integers (Function<Function<Integer,Integer>,Integer>).  I could go on for days, here simultaneously having a bit of fun, as well as foreshadowing my frustrations with the Java language below.
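To make that nesting concrete, a Function<Integer,Function<Integer,Integer>> is essentially a curried two-argument add.  Here’s a quick illustration (my own example, not from the original post):

```java
import java.util.function.Function;

public class CurryDemo {
    public static void main(String[] args) {
        // A function that takes an Integer and returns another
        // function, which takes an Integer and returns an Integer.
        Function<Integer, Function<Integer, Integer>> add = a -> b -> a + b;

        // apply(2) returns the inner function; apply(3) finally runs it.
        Function<Integer, Integer> addTwo = add.apply(2);
        System.out.println(addTwo.apply(3));        // prints 5
        System.out.println(add.apply(10).apply(5)); // prints 15
    }
}
```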

Yet another peculiarity is this .apply() business.  JavaScript has an .apply() as well, and honestly I haven’t looked into the difference between .apply() and simply calling the function in JavaScript, although I’m certain there is a difference.  But anyway, that’s all that’s happening here: f1.apply(input) calls our first function with the input, which returns an Integer; that Integer is passed into f2.apply(), thus calling f2; and, finally, we return the result when the f2.apply() call finishes.
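It’s also worth knowing that Java’s Function interface ships with composition built in, via compose and andThen, so you often don’t need a hand-written composeFunctions at all.  A minimal sketch (my example):

```java
import java.util.function.Function;

public class ComposeDemo {
    public static void main(String[] args) {
        Function<Integer, Integer> addOne = n -> n + 1;
        Function<Integer, Integer> squareNumber = n -> n * n;

        // andThen runs squareNumber first, then addOne: (10 * 10) + 1
        System.out.println(squareNumber.andThen(addOne).apply(10)); // prints 101

        // compose runs its argument first: addOne(10) = 11, then 11 * 11
        System.out.println(squareNumber.compose(addOne).apply(10)); // prints 121
    }
}
```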

The final peculiarity is that the .apply() method returns an Object.  We need to cast this Object to an Integer, which should be safe enough given we can see the function we’re passing in returns an Integer.  However, it would pay to be extra careful about this if you were writing any kind of serious code, especially because the parameter is simply a Function, and not a Function<Integer,Integer> specifically.
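For what it’s worth, fully typing the parameters makes the cast unnecessary, and turns a mismatched function into a compile-time error.  A sketch (my tweak, not the original listing):

```java
import java.util.function.Function;

public class TypedCompose {
    // Fully typed parameters: no cast needed, and passing a function
    // of the wrong shape won't even compile.
    public static Integer composeFunctions(Function<Integer, Integer> f1,
                                           Function<Integer, Integer> f2,
                                           Integer input) {
        return f2.apply(f1.apply(input));
    }

    public static void main(String[] args) {
        Function<Integer, Integer> addOne = n -> n + 1;
        Function<Integer, Integer> squareNumber = n -> n * n;
        System.out.println(composeFunctions(squareNumber, addOne, 100)); // prints 10001
    }
}
```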

In the previous post I pointed out that JavaScript is a mess of a language.  Here we see one of the differences between JavaScript and Java, and that’s static typing (I’ve heard this joked about as “JavaScript: everything is everything!”).  Here we are explicitly declaring that the first two parameters passed into our composeFunctions method must be functions (ok, I’ll admit, bad naming).  In the equivalent JavaScript code, we could very easily pass an object in place of a function, and we wouldn’t get an error until run time; this can lead to some nasty bugs, and a severe lack of readability when looking at other people’s code.  But I digress; on to the next example!

First Class Functions In Java

public static void testFunction(String name){

    Runnable innerFunction = () -> {
        System.out.printf("hello! ");
    };

    BiConsumer<Runnable, String> lateExecution = (Runnable inputFunction, String n) -> {;
        System.out.println(n);
    };

    BiConsumer<Runnable, String> lateNoExecution = (Runnable inputFunction, String n) -> {
        // the function is received here, but never run
        System.out.println(n);
    };

    lateExecution.accept(innerFunction, name);
    lateNoExecution.accept(innerFunction, name);



We previously discussed the issues with JavaScript, so now let’s bitch about Java.  The equivalent JavaScript code from the previous post was extremely succinct and easy to follow: we defined some inner functions, assigned them to variables, and called them.  The code is literally running off the page in this Java example.  In the Java case, you have to know what a Runnable is, and that it’s a functional interface (whatever that is) that takes no parameters and returns nothing.  You also need to know that a BiConsumer takes exactly two parameters and returns nothing.  Don’t you dare try using the Function interface for either of these things.  Don’t think about calling apply() on the BiConsumer either; it’s accept().

The equivalent JavaScript took me 1 minute to write and didn’t involve any googling or a compiler to double check.  Sadly, I can’t say the same for the Java.  Obviously the functional aspects of Java were an afterthought (which is fine, it wasn’t intended for that purpose anyway), and you get that feeling when trying to program functionally in it.

Remember in the previous post when I asserted that functional languages are great because the modularity is natural to a programmer, making it more probable to occur?  When you have to memorize 10 different functional interfaces to get anything done, that reduction in cognitive load vanishes.
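To soften that memorization burden a little, here’s a quick reference for the interfaces complained about above (my own example; each of these is a real java.util.function or java.lang type):

```java
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class InterfaceZoo {
    public static void main(String[] args) {
        Runnable r = () -> System.out.println("no args, no result");
        Supplier<String> s = () -> "no args, one result";
        Consumer<String> c = text -> System.out.println(text);
        Function<Integer, Integer> f = n -> n + 1;              // one arg, one result
        BiConsumer<String, String> bc = (a, b) -> System.out.println(a + b);
        BiFunction<Integer, Integer, Integer> bf = (a, b) -> a + b;;                       // run() for Runnable
        c.accept(s.get());             // accept() and get()
        System.out.println(f.apply(41));       // apply() prints 42
        bc.accept("two args, ", "no result");
        System.out.println(bf.apply(20, 22));  // prints 42
    }
}
```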

In the next post I’m thinking we’ll look further into lambda functions, as well as the .map(), .filter(), and .reduce() higher order functions (used in the famous Hadoop big data framework).  Until then!
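As a postscript, here’s a tiny taste of those three higher-order functions in Java 8 stream form, so you know what’s coming (my example):

```java
import java.util.Arrays;
import java.util.List;

public class StreamPreview {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

        int result =
                .filter(n -> n % 2 == 1)   // keep the odd numbers: 1, 3, 5
                .map(n -> n * n)           // square them: 1, 9, 25
                .reduce(0, Integer::sum);  // add them up

        System.out.println(result); // prints 35
    }
}
```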


The Power of Functional Programming (and why we’ll be exploring it in Java)

I dislike Java as a programming language.

I learned C++ at Utah State University (not that I like C++ any better).  We were required to take a Java course to supplement the C++ knowledge, but I had studied Java in my free time before going back to school anyway.  I am grateful to Java, as it was my first experience with Object Oriented Programming (OOP).  C++ is less verbose than Java, but I don’t necessarily love C++ more than Java.  In all honesty, I’m not much of a language snob.  I’m firmly in the “choose the language that’s best for the job” camp, which is why I haven’t complained about using Java.  It really is the right tool for what Amazon is doing.

This is why, after school, I focused heavily on learning JavaScript.  In my opinion JavaScript is an absolute mess of a language; however, in many cases, particularly the projects I’m interested in building, it is the best tool for the job.  One thing I particularly like about programming in JavaScript is its functional aspects.  I’m not one for details or pedantry, so I’ll stay out of defining functional programming or using any programming-language buzzwords.  Here is the Wikipedia article on Functional Programming though, and if there is interest, I’d be more than happy to dive into other details of functional programming not discussed here (we’ll only be covering a very small subset).  For now, I’ll take a very simple definition of functional programming: functions can be treated as variables in functional languages, as this is the most advantageous aspect the paradigm possesses over procedural languages (C++, Java, etc.).

I remember when I first heard of functional programming.  I thought “great, so what? I can accomplish anything I need to in a procedural language, why bother learning another paradigm and confusing myself?”.  In fact, although I’d dabbled in Haskell, this was more out of curiosity about all the hype surrounding functional programming than anything else.  It was really certain frameworks in JavaScript (looking at you React.js) that naturally lend themselves to the functional paradigm that introduced me in a pragmatic way.

Higher Order Functions

So functions can be treated as variables, what does this really mean?  Here’s a contrived example to illustrate the point:

var addOne = function(number){
    return number + 1;

var squareNumber = function(number){
    return number * number;

var composeFunctions = function(func1, func2, input){
    return func2(func1(input));
Calling composeFunctions(squareNumber, addOne, 100) will square 100, pass the result into addOne, and add 1 to 10000.  Here’s the equivalent Java code:

public static Integer addOne(Integer number){
    return number + 1;

public static Integer squareNumber(Integer number){
    return number * number;

public static Integer composeFunctions(Integer input){
    return addOne(squareNumber(input));
If you’ve studied functional programming in any small sense, you’ll be able to spot a limitation in this Java code (hint: notice how in the JavaScript we’ve got three parameters to composeFunctions, but only one in the Java code).  What happens if we want the ability to call the addOne function first, and then the squareNumber function after that?  We’d have to add a second Java method for this and explicitly call these functions in the reverse order:

public static Integer composeFunctionsReverse(Integer input){
    return squareNumber(addOne(input));
This is because in the past, Java couldn’t treat functions as variables, and because of this, you can’t pass a function into another function as a parameter.  In the case of the JavaScript, if we wanted to change the order the functions executed in, it’s as simple as swapping the parameters to the function (calling composeFunctions(addOne, squareNumber, 100) instead of composeFunctions(squareNumber, addOne, 100)).

I remember when I was looking into functional programming initially, I saw examples like this and thought “great, so what? I’ll just write another one line function, what’s the big deal?”.  I’ll admit, in this contrived example it doesn’t seem like much.  However, when you start adding things like networking code to make API calls, or threading, the power of this small abstraction becomes enormous.  Not to mention the cleanliness of the code (we didn’t have to write the second function in the JavaScript case).

First Class Functions

Another powerful aspect of this paradigm is, the functions can be written inside of other functions, and executed whenever the programmer dictates:

var testFunction = function(name){
    var innerFunction = function(){
        //ignore the process.stdout.write, it's just console.log
        //without the newline
        process.stdout.write("hello! ");
    };

    var lateExecution = function(inputFunction, name){
        inputFunction(); // call the function we were handed...
        console.log(name);
    };

    var lateNoExecution = function(inputFunction, name){
        inputFunction;   // ...or just hold the reference without calling it
        console.log(name);
    };

    lateExecution(innerFunction, name);
    lateNoExecution(innerFunction, name);
Here is yet another contrived example, however, it illustrates the point quite well, particularly because there is no equivalent Java code to do the same thing (until recently of course, check out the next tutorial for this).

When we call testFunction(“Jackson”), innerFunction is created at the time of the call; we then pass this variable into both our lateExecution and lateNoExecution functions (also both created when testFunction is called).  The lateExecution function receives innerFunction, immediately calls it, then console.log’s the name passed into it.  In lateNoExecution, however, we never execute the function passed to us, so nothing extra happens; only “Jackson” is printed to the screen, even though we still had access to the function as a variable.

I know if you’ve come from procedural programming, you’ll read the above paragraph and think “man, that’s a lot of this calling that, calling this, but maybe not calling that, and only when the function is executed”.  I know because I thought the same thing.  But now that I’m on the other side of the fence, I realize the writers of the posts I’d read previously were simply struggling to come up with succinct examples motivating the paradigm.  It just isn’t easy to motivate functional programming in small examples.  The paradigm really shines when code starts to grow very large, because functions can be chosen or removed at will and modularized with one simple keystroke, rather than the software being redesigned from the ground up (looking at you, class hierarchy diagrams).

As an example, in the composeFunctions portion of this post, what would happen if we needed to do some preprocessing on the number passed to us, based on whether it was even or odd, before calling the other two functions?  We’d do the check on the number and add the preprocessing.  Well, next month a customer decides the preprocessing needs to be done another way.  Between now and then, we’ve added a bit more logic to that if statement block.  Now we need to unravel the entire mess and change whatever preprocessing we were doing, which may affect logic (aka mutate state) later in the scope without us noticing.  If we were using functional programming, we’d have simply added a third preprocessing function as a parameter to composeFunctions and called it inside the if statement.  When the customer changes the preprocessing requirements, we simply pass in a new function.
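To make that concrete, here’s a sketch in Java 8 syntax (my own example, using the Function type from the next tutorial; the helper names are made up for illustration): the preprocessing step becomes just one more parameter, so when the requirements change, you swap the lambda instead of unraveling an if block.

```java
import java.util.function.Function;

public class PreprocessDemo {
    // Preprocessing is just another function parameter.
    public static Integer composeFunctions(Function<Integer, Integer> preprocess,
                                           Function<Integer, Integer> f1,
                                           Function<Integer, Integer> f2,
                                           Integer input) {
        return f2.apply(f1.apply(preprocess.apply(input)));
    }

    public static void main(String[] args) {
        Function<Integer, Integer> squareNumber = n -> n * n;
        Function<Integer, Integer> addOne = n -> n + 1;

        // This month: halve even numbers before composing.
        Function<Integer, Integer> halveIfEven = n -> n % 2 == 0 ? n / 2 : n;
        System.out.println(composeFunctions(halveIfEven, squareNumber, addOne, 100)); // prints 2501

        // Next month the customer changes their mind: just swap the lambda.
        Function<Integer, Integer> clampIfEven = n -> n % 2 == 0 ? 10 : n;
        System.out.println(composeFunctions(clampIfEven, squareNumber, addOne, 100)); // prints 101
    }
}
```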

Another possibility: what if we wanted to reuse this composeFunctions method for a succession of two different function calls, rather than squareNumber and addOne (perhaps cubeNumber and addTwo?), without losing the functionality of the original code?  In Java you’d have to write a new method calling cubeNumber and addTwo specifically, whereas here we need only call composeFunctions with the newly written cubeNumber and addTwo (or hell, even cubeNumber and addOne, or addOne and addTwo).

This paradigm lends itself to modularity in a way procedural programming does not.  I’m not saying you couldn’t accomplish the same things in a procedural language, but what I am saying is, it’s much easier to do in a functional language, making a programmer much more apt to reach for it.

Hopefully I’ve managed to motivate the power of functional programming (although after reading the above paragraphs, I’m not so sure I have! ha ha!).  This is a very small piece of the larger paradigm, and in fact the power isn’t done justice by these contrived examples; it lies in a completely new way of looking at code, rather than in specific use cases or academic ammunition for water-cooler arguments over which language is better.  As we’ll see in the next tutorial, even traditionally procedural languages are adding functional aspects because of this power.

Hosting a Tor Node on Windows 7

Please note, this tutorial is not for the technologically faint of heart.  However, I’ll give you my personal guarantee that if you run into trouble, I will respond to every request I receive for help.

First, let’s go to this URL:

and download the Expert Bundle for Microsoft Windows (that’s right, you’re now a tor expert, go ahead and tell your friends).  We’re going to follow the steps for Windows, because if you’re on Linux, I’m going to assume you can figure all of this out on your own.  If not, feel free to get in touch and we can walk through the steps together.  (In fact, the steps on the torproject website are written for Linux.)

Once the download is finished, extract the contents of the zip file.  You should now see two folders, Tor and Data.  Go to the Tor folder and create a new text file named torrc.txt.  Inside of it, paste this:

ORPort 443
ExitPolicy reject *:*
Nickname ididntedittheconfig
ContactInfo <your email address>

where <your email address> is your email address.


Now begins the faint-of-heart part.  We’re going to log into your router and enable port forwarding for the port tor uses to connect to the rest of the tor network (443).  Look into your specific router’s configuration.  A quick google search for something along the lines of “linksys WRT160nl ip address” should tell you which IP address your router resides on.  Type this IP address into your browser and log into the router admin page.  Every router is different, but again, a quick google search for something like “belkin n600 enable port forwarding” should yield good results.  The tricky part is that you’ll have to know what IP address your computer currently has, to type into the device IP portion of the port-forwarding inputs; your router will then pass all the traffic it receives on port 443 on to your computer.  To get that IP address, follow these steps.


Hurray, you got port forwarding working!  This is useful for all sorts of things outside of tor.  If you want to make any device at your house accessible from the internet, you’d simply add the port forwarding in your router and forward the traffic to that device.

Ok, if you followed the steps to find your IP address, you’ve at least opened the terminal now and run one command.  Things are going to get a little ambiguous because we’re all on different computers and the files are probably in different places, but here’s the general idea (I’d recommend googling a lot if you get stuck, as well as getting in touch with me).  We’re going to change directories (cd) into the extracted tor-win32- folder.  When you get there, cd one more time into the Tor folder.  You should see your newly created torrc.txt file, as well as a tor.exe file.  Now, type this:

tor.exe -f torrc.txt

If you get the error 'tor.exe' is not recognized... then you’re not in the correct directory.

Tor will now spew a bunch of information to your terminal, one line of which is “Guessed our IP address as <Ip address>”.  Remember this address, as we’ll use it to validate that our node is actually working.  Eventually, you should see the line “Self-testing indicates your ORPort is reachable from the outside.  Excellent.  Publishing server descriptor.”  This means you did everything right.
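If the output scrolls by too fast, you can always filter for the sentence we care about. A sketch using a simulated fragment of tor’s startup output (the real lines come from tor itself; the IP is a placeholder):

```shell
cd "$(mktemp -d)"
# Fake two of tor's notice lines so we have something to filter.
cat > tor.log <<'EOF'
[notice] Guessed our IP address as <Ip address>
[notice] Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.
EOF
grep 'ORPort is reachable' tor.log   # any output here means the relay passed its self-test
```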



Another way to check is atlas.  Click the link, and in the top right corner search for “ididntedittheconfig” (notice above, that’s the Nickname we added to the torrc file at the very beginning; you can name your tor node whatever you want, and you’d use that new nickname to search for your node on atlas).  You should see a bunch of nodes with that name, since it’s the default setting, but one of them will have the IP address I told you to remember in the previous section.  Please note, atlas takes a while to update, so you probably won’t see it immediately; don’t panic.

Tor Network

Welcome to the tor network!  The only remaining issue is that when you turn off your computer or log out, you’ll be disconnected from the tor network.  Ideally, we’d reconnect any time this happens, which would involve running the tor.exe -f torrc.txt command when the computer wakes up or powers on.  But we’ll save this for another tutorial, or, if you’re feeling adventurous, for you to try (a quick search for “run command on power up windows” should give you a good place to start).  Alternatively, you could use a free-tier AWS EC2 instance to persistently host a node.
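If you do go the EC2/Linux route, one common way to get this restart-on-boot behavior is a cron @reboot entry (Windows has the Task Scheduler for the same job). A hypothetical crontab line, assuming tor lives at /usr/local/bin/tor with its config at /etc/tor/torrc — both paths are assumptions, check your own install:

```
# added with `crontab -e`; binary and config paths are hypothetical
@reboot /usr/local/bin/tor -f /etc/tor/torrc
```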

Motivating The Hosting of a Tor Relay Node

It seems as though my rant on cryptography has struck a nerve. I got a lot of good feedback and have had the opportunity to have some really great conversations as a result, which is a by-product I wasn’t expecting from starting this blog.

One of the more interesting conversations was with a long-time friend, we’ll call him Jake, for the sake of anonymity. Many people have many different reasons for their interest in cryptography. I wasn’t aware at the time, but the person asking about VPNs who spurred the tutorial on TunnelBear VPN was asking because of security. They’d been hacked on various platforms and wanted to make sure that never happened again. Jake was coming at the problem from a different, and in my opinion more interesting, perspective: one of the most fascinating aspects of cryptography.

Jake was asking which VPN I use, and I said something along the lines of “I don’t use one, I host a tor relay node”, which resulted in a whole host of questions, many of which I’m sure you’re also wondering. Some of you may be familiar with tor, and it’s quite possible a lot of you have used tor in the past. It may also be that you used tor, it was slow as hell, and you vowed never to use it again. Once again, I’ve elected to split this post into two: a tutorial explaining how to set up a tor node, and a deep dive into the details of how tor works and why everyone should host a tor node. This post will be the deep dive, and in my opinion is the more interesting of the two.

Jake’s interest in cryptography was: what happens when the government decides to shut off the internet in a country, and does cryptography have the power to circumvent this? In fact it does. Tor doesn’t solve this problem (although it does solve the government firewall problem), but Firechat does. If there is interest, I’d be happy to dive further into Firechat in another post, and how it’s empowering revolutionaries to get data out of a country after the internet is completely shut down.  The general idea that ties both technologies together is this: the more people use them, the more robust and faster they become, as we’ll see in this “case study” of tor.

What Is Tor?

I thought for a second about writing a big long explanation of tor and how it works. I’d much prefer to link to information whenever necessary in this blog. The reason is twofold. First, if you can get the information straight from the horse’s mouth, the information stays more “pure”. I don’t want us to play telephone, where the developers of tor say “tor rocks” and out the other end pops “tea socks” from me. Secondly, the majority of this information already exists all over the internet, and rather than adding to the glut of data, I view myself more as a filter or conduit to it, even if that costs you a few extra clicks. Of course, it’s also just more efficient.

So, without further ado:

This is what tor is, straight from the tor project themselves. Read it. If you have any questions, do not hesitate to get in touch, @sjkelleyjr.

The TLDR version is: tor is a network running under the existing internet. A subset of the internet. This subset is sometimes called the deep web, a term that’s been loaded with political connotations thanks to politicians and the media. If you read the torproject link, you now understand how the dark web works. See? Not so scary. This explanation of the deep web, and the difference between the deep web and the dark web, is also pretty good.

Deep Web

The majority of tor users aren’t using tor to access the deep web. As a user of tor, I can say the deep web is more of a by-product of tor’s engineering goals. The goal is to allow for untraceable web surfing (remember that haystack thing we talked about?), and to allow for this, you have to allow deep web content to also be accessible on your network.

One way to think of it: the internet is an infrastructure owned by telecom companies. We’re all using their computers to route our packets; we pay for that, and historically we haven’t had much say in what happens on those computers. Tor is an internet owned by people like you and me hosting nodes, rather than by telecom companies.

Why Host a Node?

Let’s finally cut to the chase. The whole reason for this blog post is to convince you to host your own tor node, and then make it easy for you to do so.

Remember above when I pointed out some users of tor may have opened it, found it slow as hell, and closed it never to open it again? The reason it was so slow was the lack of nodes on the tor network. If you look back at the torproject explanation of what tor is, you’ll see a grid of 9 computers in the image. Those are the tor nodes. They’re letting the tor network use their computers to route packets from Alice’s computer to Jane’s or Bob’s.

Let’s say 500 million people are trying to use the tor network (but aren’t hosting nodes); they’re the Alices. If you’ve ever torrented, they’re the leechers. Let’s also say there’s only 1 computer hosting a node. See the problem here? If 500 million people all try to route traffic through 1 computer, that computer is not going to be able to keep up at all. Not only that, if anyone wanted to trace anyone’s traffic, they would simply go to that 1 computer and look at all the packets coming in and all the packets going out. A tor network of 1 computer is useless. It’s essentially your home router.

Now, let’s say everyone using the tor network is also hosting a tor node. 500 million people are using the network, but also contributing their bandwidth to it. Now packets can choose among 500 million different computers to get from Alice to Bob rather than just the 1, greatly reducing the load on each computer and making it practically impossible to trace traffic through any route in the network.

So you see, the tor network is only as fast as the number of nodes in it. If you use tor and never host a node, you’re a leecher, and really, you can’t be surprised or annoyed at the speed or robustness of the network, because you’re part of the reason for that slowness.

But the reason for hosting a node is even deeper than that. Yeah, you want to torrent and you’re annoyed that your ISP is throttling or even blocking you, I get it. But what if your government was murdering the children of its dissidents? What if that government also had a stranglehold on the internet in your country, and decided it was time to cut off access to the outside world? What if you wanted the rest of the world to know about these murders? What if you found out about tor, figured out how to use it, and were uploading a video to twitter about the atrocities? And what if you weren’t able to complete the upload because the tor network couldn’t handle the bandwidth? Luckily, someone in the US was able to pirate all the seasons of Breaking Bad using the tor network, though. Again, this would be much less of a problem if everyone on the internet also hosted a tor node.

It’s interesting to me that everyone wants to talk about progressive politics and the plight of the working man, but no one is willing to take 30 seconds out of their day to figure out how to set up and host a tor relay node. We’re all willing to take a day off work to go march about Trump’s election, or will eat lunch somewhere that claims to send its profits to Uganda, but won’t consider a few small technological steps that mathematically produce results. We would have no idea what the NSA was doing behind closed doors if it weren’t for the tor network. Edward Snowden leaked with tor.

I get it, we humans have certain biases, and actions whose reactions we can’t see matter less to us; that’s why we go march. It gives us a much stronger psychological boost. But really, hosting a node is just too easy not to do.  And once it’s up, it’s up for life.

I’m not trying to scare you off from using tor to pirate Breaking Bad. The last thing I want is for anyone to feel guilty using tor. However, I do feel that everyone should host a tor node. If you really want to be a progressive patriot, or care about people in oppressive governments, this is what you should be doing this weekend.

A Rant on the NSA and VPNs (or, your civic duty to put on a pair of pants)

I’ve gotten enough family and friends asking about VPNs that I’ve decided it’s time for a blog post about it.  I can’t explain how happy I am to hear people asking unprovoked about VPNs.  As anyone who’s had a conversation with me at any kind of gathering knows, I care deeply about a free and anonymous internet.  I strongly believe that cryptography is the way to achieve this.  I read cypherpunks on my honeymoon.  I’m invested in cryptocurrency (no, not bitcoin, and yes, including ICOs), and I’m in the process of building a miner.  I spent my downtime after leaving SDL learning solidity.  That’s how much I care.

With the Snowden leaks it became known that the NSA was surveilling public communications en masse.  We often hear the tired trope of “well, I’m not doing anything wrong on the internet, so why would I care?”.  And in response to that I’ll trot out the equally tired trope of “perhaps you’re not doing anything wrong under the current regime, but what happens when that regime changes and they don’t like something you’ve done in the past on the internet?”.

There are more nuanced reasons for the necessity of secure communications online, however.  One is brought to light by the recent cyber attacks [1,2].  It isn’t safe for all communication to be stored in a centralized manner anywhere, as this opens the possibility of massive cyber attacks.  Both of the above attacks used the NSA’s own tools.  The internet wasn’t originally designed to be centralized, and any centralization of power or data is extremely dangerous.

Another nuanced point will play to your civic duties as a progressive, libertarian, patriot, or whatever you people are calling yourselves nowadays.  As it stands now, it is relatively easy for the NSA to target individuals, nefarious or otherwise, because of the nature of their internet traffic.  If a user is using a VPN, or tor, or any kind of abnormal encryption, they’re immediately given a jaundiced eye, even if they’re simply browsing reddit.  If, however, everyone on the internet decided tomorrow to tunnel all their traffic through tor or a VPN, or to use nonstandard encryption in place of vanilla TLS, this would create a massive computational haystack in which the NSA would have to search just to check for the existence of needles.

In a sense, we’d all be pretending to carry around bombs in the NSA’s eyes, making anyone actually carrying a bomb look less suspicious.  And therein lies the quandary.  We want the NSA to find bombs, but at what cost?  The verdict on whether the NSA is effective at that goal is hotly contested [1,2,3,4,…I could go on for days].  I don’t want to stray too far off topic (this was actually intended to be an introduction to the post walking a user through the basic setup of the TunnelBear VPN), but there are numerous discussions pointing out that the NSA’s effectiveness is irrelevant to the moral question of mass surveillance.

But I digress.

I personally believe that the internet was meant to be fully anonymous and fully encrypted.  Rather than thinking of it as “we’re all pretending to carry around bombs”, I believe it’s more similar to “we’re all wearing clothes, and anyone not encrypting their traffic is buck ass naked”.  Don’t you think that’s a bit suspicious?

And this is why I believe everyone should use tor or VPN for every form of internet communication.  It is your civic duty to put on a pair of pants.

TunnelBear VPN

Phew.  I just got done writing the introduction to this post.

It turned into a cryptography rant, which I eventually moved to its own post.  The theme of this post will be staying out of the weeds.  If you’re interested in my cryptography rant, I’ll be posting it sometime soon, so feel free to subscribe or follow me on twitter and get updated when I do.  For this one, we’ll dive right in.

A number of friends and family have asked about VPNs (which, if you read the rant, you’ll know brings me a lot of happiness), usually in the form of “which VPN is the best?”, to which I typically respond “just use tor“.  The problem with VPNs is they cost money.  If you go the free route, your internet is either as slow as it is using tor, or there is a data cap on the service.  In both cases you’re better off using tor.  However, what if you’ve got cash to shell out for your online anonymity? Enter…


I’ve used this in the past when I was just toying around with VPNs and found it insanely easy to use and set up.  For that reason I chose to use TunnelBear for this little tutorial, because I’d like to direct it toward less technical people.  If there is interest, I’ll be more than happy to do a follow up with a more serious service, but for now TunnelBear will be great to get our feet wet with a VPN.

Getting an Account

The first step is creating an account.  I lucked out when I started writing this and found a referral link to get 5GB of data free instead of the typical 500MB you’re given.  So we’ll use it!  Click the link, create an account, and the download for the installer will immediately start.  Once it’s done, go ahead and run it.  Agree to the service terms and install!  After the installation finishes, log in with the username and password you created at the above link.  Before you can actually use the software, they’ll send a confirmation link to your email.

The Fun

Now begins the fun.  This is why I chose TunnelBear.  People like to see feedback in the user interface to let them know something is actually happening.  Well TunnelBear certainly does a fantastic job of this.

After your account has been verified, simply turn the VPN on.  You’ll see in the UI that your sheep has tunneled to a new continent and is now a vicious bear.  You’ll also see your monthly data cap counting down from 5.5GB as you navigate to new pages.  You’ll notice that your internet is now presumably a bit slower.  See these (1, 2, 3) for an explanation.

Maybe your traffic got tunneled to Canada but you’d like it to appear as though you’re in India.  With TunnelBear this is super simple.  Simply use the map to navigate to another tunnel, and now all of your traffic appears to come from that country!

If you read either of the two links above explaining the speed loss, you’ll now understand why when you tunnel to a continent farther from you, your speed degrades a bit more.


There you have it, you’ve got a VPN up and running in a matter of seconds!  Well, kind of.  Remember what I was saying about shelling out cash?  Once your 5.5GB of data runs out, TunnelBear will no longer let you use their services for that month.  For reference, that’s about 2 hours of HD streaming on netflix.  If you’re willing to pay $10 a month, however, you’ll have unlimited anonymous internet to your heart’s content.  You can also tweet at TunnelBear and receive an extra GB of data per month on your free account.


I recently heard back from a friend who pointed me to this video.  In it JayzTwoCents recommends Astrill.  I haven’t tried Astrill, but JayzTwoCents is very scientific about the process and says he tried a bunch and Astrill was the best.


The friend ended up pulling the trigger on NordVPN after doing their own research.  Another pointed out one of the flaws of TunnelBear is you can’t tunnel torrent traffic using their service.  Apparently the Bear isn’t too keen on pirates.

AWS Access Keys on GitHub Public Repo

Well, it finally happened. After years of playing fast and loose with API keys on GitHub, I’ve been hacked.

I woke up to an unusually large billing alarm from my AWS account. Since the code running on this account was storing almost no data, I thought this was a bit odd. So I logged in to my billing account, was redirected to the service center to submit a claim, and found this:

Screenshot from 2017-07-12 13-27-39

Well, the temporary limiting of my account didn’t appear to help much. The hacker had created about 20 m3.2xlarge EC2 instances and a 35gb snapshot for every. single. region. available on AWS (Mumbai, Tokyo, Ireland, the list goes on and on), using the keys I uploaded to GitHub.  Needless to say, I spent the morning terminating all the EC2 instances, clearing the security groups, and deleting the leaked key from the account.  Sounds like a blast, right?

The funny thing is, the chunk of commented out code holding the keys was only a quick test written months ago, that I never used again, which is why I forgot it even existed in the code.

This was back when I was still learning the ropes. Now, whenever I need an API key, I immediately put it into a JSON file, and add that JSON file to the .gitignore for the project, so I don’t have to worry about this stuff while pushing and pulling.  I’ve heard there are better tools out there for storing ssh keys, but I’m curious if there are any for API keys.  If anyone has any suggestions I’d love to hear them in the comments.
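The workflow looks roughly like this (a sketch in a scratch repo; config.json and the key value are made-up examples, not a real key or a prescribed file name):

```shell
# Scratch repo for the demo; in real life this is your project's repo.
cd "$(mktemp -d)" && git init -q .
# Park the key in a JSON file the code can read at runtime...
printf '{ "api_key": "EXAMPLE-NOT-A-REAL-KEY" }\n' > config.json
# ...and tell git to never track that file.
echo 'config.json' >> .gitignore
git add .
git status --porcelain   # config.json is absent: it can't be pushed by accident
```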

I submitted a claim to get a refund for the charges, we’ll see what happens.


The charges racked up by the hacker ended up being $1100 by the time I managed to kill all the instances!  I’ve since received this from Amazon AWS

Screenshot from 2017-07-13 09-37-46

Thank God for Amazon’s customer support.  The hack was obviously my own fault, but they’re still willing to cover the costs.

Firebase Hosting and Google Domains

I decided my real first post was going to be a tutorial describing the steps I took to get my portfolio page built and hosted on Google’s firebase, along with my domain name pointing to this web page.


I was initially drawn to creating a portfolio page after taking this Udemy freelance bootcamp.  Honestly, I can’t recommend it.  But it did open my eyes to the fact that building a freelance client base takes time and a lot of marketing (hence the blogging and portfolio page).  Since my jumping-off point was this bootcamp, I tried to follow the design of the instructor’s portfolio page.  In the course, he mentions that wordpress or wix are fine for something like this.  So I poked around those websites looking at all the flashy UX templates with coffee and mouse pictures in the backgrounds.  Finally, I came to the conclusion that I wanted something simple and easily maintainable.  This led me to this little number.  Perfect.  Simple, no extra animations, short and to the point, almost like an online resume.


The first thing I did was clone the repo.  I then used the two demo projects available at the above link to change different aspects of the HTML.  Since the page is a simple single page application, I could simply open the HTML in the browser to see my updated changes (word of caution, sometimes the browser will cache the assets, so I’d switch between firefox and chrome when that happened, or simply close the browser and open it again).

A few bugs

Of course, things never go as planned.  The HTML featured a github activity calendar and feed functionality, however, this wasn’t working on my page.  So I did some digging and found a few problems with the code.

The problem lines were here and here.  I’m not sure if you can source a script without the “http:”, but it looks like that’s what was left out of these lines (if anyone knows anything about this, please drop a comment!).  They’re also using old versions of the libraries they’re importing.

Finally, the version of github-calendar located in the assets/plugins/ directory of the repo had an old version of the code in it, so the library was scraping nonexistent elements in the github HTML.

I forked the original repo, made the changes, and submitted a pull request; however, the repo hasn’t been committed to since 2016, so it’s highly unlikely the pull request will be merged.


Now that I had my HTML looking slick, I wanted to use firebase to host the webpage.  I’d used firebase previously for hosting a project I built called Github Battle to learn react.js from Tyler McGinnis, who’s since gone on to be an instructor of the Udacity react nanodegree.  This guy knows his stuff, can’t give him enough praise.

Deploying to firebase is dead simple.  Simply follow these instructions (be sure to run the firebase commands inside the directory you’ll be deploying from; in the case of this tutorial that’s the Developer-Theme directory).  I typically click the Firebase console link, then Add Project, and create the project via the console before running firebase init.

When you run

firebase init

You’ll be prompted with this:

Screenshot from 2017-07-11 21-12-55

You’ll press down twice to hover over ‘Hosting’, press the spacebar, then press enter.  If you created the project in the firebase console prior to running the init command, you’ll see your newly created project in the list of firebase projects to choose from.  As you can see, I’ve got quite a few projects.

Screenshot from 2017-07-11 21-18-29


Here’s a gotcha if you’ve been following along with the portfolio website building.  Firebase will ask which directory you want to use as your public directory.  The repo we cloned doesn’t have a public directory, so I created one.  I made a public directory and copied the assets directory, favicon.ico, and index.html into it.  I then allowed firebase to proceed as usual, using public as the public directory.  You’ll then be asked if you want to rewrite all urls (this means that if, for example, a user went to, they’ll be redirected to the index.html page).  Since this is a single page application, we do.  You might also be told that there already exists an index.html page and be asked if you want to overwrite it; of course, you don’t.
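Concretely, the public-directory shuffle is just a mkdir and a cp. A sketch in a scratch directory standing in for the cloned Developer-Theme repo (the mkdir/touch lines only fake the repo’s contents for the demo):

```shell
cd "$(mktemp -d)"
mkdir -p assets/plugins      # fake repo contents for the demo
touch favicon.ico index.html
# Firebase wants a public directory and the repo doesn't ship one, so build it:
mkdir -p public
cp -r assets favicon.ico index.html public/
ls public
```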


You should now be able to run

firebase deploy

and see something like this

Screenshot from 2017-07-11 21-26-10

If you navigate your browser to your Hosting URL, you and anyone else on the internet should now see your website!


Now comes the fun.

Great, you’ve got a hosted URL, but you don’t want to make users type that long firebase URL every time you want them to check out your portfolio, do you?  You need a domain name (something cool and flashy, like, maybe

I poked around the web for a bit; everyone and their dog sells domains nowadays.  I’d thought about using Amazon’s Route 53 (hoorah!).  However, since I was using firebase to host the page, I thought it might streamline the process to use google domains, and I was right.  I searched for my domain ( and paid the $1 a month to reserve it.  After my purchase cleared I was redirected to where you’ll see something like this

Screenshot from 2017-07-11 21-42-08

Now, log into your firebase console and select your newly created project.  You’ll see a sidebar like this

Screenshot from 2017-07-11 21-43-35

Click Hosting, then Connect Domain, and type in your newly reserved domain name.  If you elected to reserve the domain name via google domains, you won’t need to verify ownership (I told you it would streamline the process), since google already knows you own it.  If you didn’t, you’ll see something like this

Screenshot from 2017-07-12 12-16-10

And you’re on your own (however, I assume the next steps are VERY similar).  If you did, you’ll see something like this

Credit to this stack overflow post for the image; in fact, I’d highly recommend going to it.  He has essentially done all the work for us, with a slight gotcha, which I suspect was his problem (and you’ll see my first stack overflow post there hoping to fix the issue!).

The Final Gotcha

Go back to your page and click the DNS button.  This will drop down a bunch of different options you can customize in the DNS portion of your domain name.  The one we’re interested in is Custom Resource Records, so scroll down to it.  You should see something like this.

Screenshot of the Custom Resource Records form

(again from the stack overflow post).  Here’s where I believe the poster went wrong.  Type in your URL exactly as it was in the Connect Domain modal on your firebase console.  The SO poster has his crossed out in red; for our example you’d type it where it says @, then type the IP address given in firebase where it says IPv4 address (in this case  Now, instead of clicking Add, click the + next to IPv4 and add the second IP address given in the firebase console.  You’ll notice the Name entry automatically converts “” to “@”.  You’ll also notice that the poster has “”, which is not correct.
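Put together, the record ends up looking something like this (the second address is whatever your firebase console lists; the 1h TTL is the default google domains suggests, so treat it as an assumption):

```
Name: @    Type: A    TTL: 1h    Data:
                                       <second IPv4 address from the firebase console>
```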


If you’re like me, you’re curious what @ means in the context of DNS, so I did a bit of research.  @ simply means “this domain name”, so in our case @ =.  Feel free to google this yourself for a more detailed explanation, and I may even do a post at a later date if this is something that interests readers.


These DNS changes take time to propagate through DNS servers, so you won’t be able to type your URL into a browser and get directed to your firebase page until they do (it shouldn’t take more than 2 hours, from what I’ve read online).  Once they have, try typing www . <YOURURL> . com (where YOURURL is the URL you registered).  You’ll notice it didn’t work.  What gives?

Not Quite

Back in firebase, you should see the option to add a redirect.  You want to enable the redirect of the www. version of your URL to your firebase URL as well.  This involves exactly the same steps as the original URL; however, you’ll add www. in front of the URL and use the IP addresses given to you for the www. URL in firebase.  When you click Add, the record should once again convert “” to “www”.  These changes must also propagate, but I noticed they take less time.

Really Done