Higher-Order Functions: map, reduce, filter. Yeah, the ones used in big data frameworks like Apache Spark and Hadoop

In my previous post about Functional Programming in Java, I mentioned higher-order functions that are often used in big data frameworks like Hadoop and Spark.  We’ll be discussing only a small subset of the functions available in Spark and Hadoop, because these are the functions I’ve found most useful as a developer not working in big data.  However, as we’ll see, by the end we’ll have enough fundamental knowledge to apply these function calls to a big data framework if necessary.


We’ll begin with perhaps the easiest function to grasp, and the first in the all too familiar phrase map-reduce: the map function.  The way I’ve often heard map described is that we’re “mapping an input to a specified output”.  To do this mapping we’ll need a function that maps a set of inputs to a set of outputs.  In essence, you’re iterating over every object in a data structure and mapping it to a new object.  It often feels very much to me like a for each loop that forces the programmer to do something inside of it.  Here’s a quick example:

    var arr = ['1','2','3','4','5'];
    arr.map( i => console.log(i));

Here the function we’re supplying to map the inputs ('1', '2', …, etc.) is a lambda that takes i as a parameter and maps this input to the console as its output.  It’s obviously the same as:

    arr.forEach(function(i) {
      console.log(i);
    });

or even the age old:

    for(var i in arr){
      console.log(arr[i]);
    }

Obviously the example using the forEach could’ve used a lambda in place of the function, and the map function can take a non-lambda function, but I wanted to illustrate the many different ways to write it.  We can also add an arbitrarily long function in place of the console log.

    arr.map( i => {
      var iTimes20 = i * 20;
      if(i > 3){
        console.log((iTimes20 % 20) == 0);
      }
    });

“What happens in map, stays in map”

One thing that always bites me in the ass is thinking I’m altering the object passed in to the lambda (in this case one of the strings ‘1’, ‘2’, …), just as I would be when using the old for loop.  But this isn’t the case.  If you do something like this:

arr.map( i => i = 100);

and print out the result of arr, you’ll see it remains unchanged.  This is one of the main tenets of functional programming.  You never want to “cause side effects”.  What this means is, you want to avoid changing the state of a program in unintentional places.  What happens in map, stays in map.  If you want to alter the state of the object in arr, you need to return out a new copy of the original array:

var newArr = arr.map( i => 100 );

Now if you console log newArr, you’ll see an array of 100’s in place of the original array, and nothing is changed in arr itself.  This is one advantage map has over the old school for loop, you can be certain the state of the original container holding the objects will be the same after the for loop as it was before.

This idea is difficult to wrap your head around as a college student (at least it was for me).  You’re preparing for interviews and space vs time complexity is beat over your head again and again.  The above code looks atrocious from this perspective.  You’re unnecessarily creating a new array, making the space 2n (where n is the size of the array).  Yes, you’re correct.  However, as you’ll see when you get into industry, writing bug free code is often much more important than shaving off a factor of n.  In reality, the code is still O(n), and you can always come back to refactor this bit of code if the bottleneck of the software ends up being this line.  It’s often the case that the bottlenecks appear elsewhere in the software architecture though, and they’ll have been discovered in design.
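And if this map call ever did become the hot spot, the refactor back to constant extra space is mechanical.  Here's a sketch reusing the array of strings from earlier; note this version mutates arr, so the no-side-effects guarantee is deliberately given up in exchange for memory:

```javascript
// In-place version: O(1) extra space instead of allocating a new array,
// at the cost of mutating arr (a side effect).
var arr = ['1', '2', '3', '4', '5'];
for (var j = 0; j < arr.length; j++) {
  arr[j] = 100;
}
console.log(arr); // [ 100, 100, 100, 100, 100 ]
```

Same big-O time, smaller constant-factor memory, but now every reader of arr downstream has to know it was changed here.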


I often think of reduce as a concise replacement for this programming construct:

var finalSum = 0;

var arr = [10,20,30,40,50];

for(var i in arr){
  finalSum += arr[i];
}

The same thing is accomplished using reduce:

var finalSum = arr.reduce((countSoFar,currentVal) => {
  return countSoFar + currentVal;
});

Here, countSoFar is an “accumulator” which carries the returned value throughout the function calls, and currentVal is the current object in the collection.  So we’re adding the accumulated sum we’ve seen up to this point, to the current value in the array, and returning this to the countSoFar for the next iteration.  This particular example is kind of trivial, however, since you have access to the accumulator you can do some really interesting things.  For example:

var doubleNestedArray = [
  ['Bob', 'White'],
  ['Clark', 'Kent'],
  ['Bilbo', 'Baggins']
];

var toMap = doubleNestedArray.reduce((carriedObject, currentArrayValue) => {
  carriedObject[currentArrayValue[0]] = currentArrayValue[1];
  return carriedObject;
}, {});

This will return the array as a map object that looks like this: { Bob: 'White', Clark: 'Kent', Bilbo: 'Baggins' }

Here we see a feature of reduce that I didn’t mention previously.  The second argument after the lambda function is the initial value for the reduce.  In our case it’s an empty javascript object, however you could’ve easily added an initial person to our map:

var toMap = doubleNestedArray.reduce((carriedObject, currentArray) => {
  carriedObject[currentArray[0]] = currentArray[1];
  return carriedObject;
}, {Bruce: 'Wayne'});

I often use reduce on large JSON objects returned by an API, where I want to sum over one attribute across all the objects.  For example, getting the star count from a list of GitHub repos:

return repos.data.reduce(function(count,repo){
  return count + repo.stargazers_count;
}, 0);


Finally, let’s throw in one more for good measure, as it’s another that very frequently comes up in big data computing (I think I’ve heard the joke somewhere: “it should be map-filter-reduce but that doesn’t roll off the tongue quite like map-reduce”).  The filter function is a replacement for this construct:

var arr = [10,20,30,40];

var lessThan12 = [];

for(var i in arr){
  if(arr[i] < 12){
    lessThan12.push(arr[i]);
  }
}

This can be shortened to:

var lessThan12 = arr.filter( num => num < 12 );

Naturally, any sort of predicate logic can be put in place of the if statement to select elements from an arbitrary container.

One thing I’ll often forget is that you have to return the predicate’s result.  It seems odd, because the lambda returns a boolean, yet the container you get back holds the actual values for which the predicate evaluated to true.
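A minimal sketch of that gotcha: the lambda hands filter a boolean, but the array you get back contains the original elements that passed the test:

```javascript
var nums = [5, 10, 15, 20];
var small = nums.filter(num => num < 12); // the predicate returns true/false...
console.log(small); // [ 5, 10 ] ...but the result holds the values themselves
```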

A big data example

A famous “hello world” example from big data is the word count.  Let’s use our newfound knowledge on this problem.

var TheCrocodile = `How doth the little crocodile
Improve his shining tail
And pour the waters of the Nile
On every golden scale
How cheerfully he seems to grin
How neatly spreads his claws
And welcomes little fishes in
With gently smiling jaws`;

var stringCount = TheCrocodile
                     .split(/\s+/)  // split on any whitespace, including the newlines
                     .reduce((count, word) => {
                       count[word] = count[word] + 1 || 1;
                       return count;
                     }, {});


Hopefully now the power of functional programming is beginning to become more apparent.  Counting the words in a string took us just a handful of lines of code.  Not only that, but this code could be parallelized across multiple machines using Hadoop or Spark.
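For a taste of how all three compose, here’s a sketch chaining map, filter, and reduce in one pipeline.  The repo objects are made up, but shaped like the GitHub API response from the star-count example earlier:

```javascript
// Hypothetical repo data (stargazers_count mirrors the GitHub API field)
var repos = [
  { language: 'JavaScript', stargazers_count: 120 },
  { language: 'Java',       stargazers_count: 45  },
  { language: 'JavaScript', stargazers_count: 30  }
];

// filter the repos we care about, map them to star counts, reduce to a total
var jsStars = repos
  .filter(repo => repo.language === 'JavaScript')
  .map(repo => repo.stargazers_count)
  .reduce((count, stars) => count + stars, 0);

console.log(jsStars); // 150
```

Each stage returns a new array (or value) without touching repos, which is exactly what makes pipelines like this safe to split across machines.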

That’s all for functional programming!  In the next post, we’ll finally start talking about a fundamental topic associated with this blog: cryptocurrency.  I’m particularly interested in Ethereum as a developer, and therefore the solidity programming language.  We’ll start this broad and deeply interesting topic with a brief explanation of what the blockchain is, and move forward from there.  See you then!


Hosting a Tor Relay Node on an AWS EC2 instance

Back to Business

Ok, let’s pick up where we left off.  We’re now ssh’d into our EC2 Ubuntu server.  The next step is installing tor. Do not run apt-get install tor.  Instead, follow the directions found here (we’re running Xenial Xerus, just so you don’t have to check).  As a sanity check, type tor into the terminal after following those steps.  If you see an error like “command tor not found”, back up and retrace your steps, or get in touch with me for help; something went wrong.


Just like we did in the Windows tutorial, we’re going to edit the torrc file to configure the node.  I use vim, but feel free to use whatever command line editor you’re familiar with.  If you’ve never used one before, again, feel free to get in touch!  I’m a missionary for The Church of Vim.

The torrc file is located in the /etc/tor/ directory. So run:

sudo vim /etc/tor/torrc

Don’t forget the sudo, otherwise the file will be read only.

Paste the following lines at the bottom of the file:

Nickname <YOUR NODE NICKNAME>

ORPort 9001

RelayBandwidthRate 75 KBytes # Throttle traffic to 75KB/s (600Kbps)

RelayBandwidthBurst 200 KBytes # But allow bursts up to 200KB (1600Kb)

AccountingMax 1 GBytes

AccountingStart month 3 15:00

ExitPolicy reject *:* # no exits allowed

Here <YOUR NODE NICKNAME> should be a unique name for your node that you’ll remember, because we’ll use it to search for the node using atlas.  We’re also directing tor to listen on the port we opened in the previous tutorial, as well as limiting the bandwidth that the tor network can use, and disallowing exit traffic.  We have to limit the bandwidth to these specific metrics, otherwise we’ll accrue charges from Amazon for our EC2 instance.  This explains why the tor network is slow, as discussed in our previous post.  Sadly, we’re one of the slow nodes.

Finally, run:

sudo service tor reload

Sanity Check

The tor documentation tells you the logs will be output to the /var/log/tor/ directory.  I never saw them there; instead, I had to use journalctl.  Just like the Windows tutorial, we need to ensure the line “Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.” is being output by the tor node; otherwise the node isn’t working.  So we’ll run:

journalctl | grep "Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor."

If nothing is output to the console, your tor service isn’t running.  Just a few troubleshooting tips.  Run:

sudo service tor stop && sudo tor

If you never see that line output, there is a problem with your tor installation, consider uninstalling tor and reinstalling it.  If, however, it runs fine, there is a problem with the init.d script to daemonize the tor process.  Consider this reddit link.

Going back to AWS cost from the previous post, we can also monitor our tor node bandwidth usage using the following command:

journalctl | grep "Accounting enabled."

You should see output like:

Heartbeat: Accounting enabled. Sent: 94.42 MB, Received: 121.84 MB, Used: 122.38 MB / 1.00 GB, Rule: max. The current accounting interval ends on 2017-09-03 15:00:00, in 21 days 2:08 hours.

Here the node is telling us that it’s used 122.38 MB of the 1GB we’ve defined as the monthly max in the torrc (the Accounting lines).  When we reach 1GB of traffic, the node will stop accepting connections for the rest of the month, thus keeping our EC2 cost inside the free tier metrics.


If the Self-testing indicates... line showed up after running the journalctl command, your node is up and running!  We can now head over to atlas and search for the Nickname you set in the torrc file.  Keep in mind, that like the last tutorial, it will take a few hours to show up on the atlas webpage.

If you named your node something unique, when you search for it, you’ll be taken directly to your details page.  Here’s a quick rundown.

OR Addresses is the public IP address of your node.  Contact is your contact information, if you set it in the torrc file (we didn’t, but it’s explained in the Windows tutorial).  Advertised Bandwidth is how much bandwidth you’re telling the rest of the network you’re willing to allow through you.  Exit Policy is set to reject all exit traffic; as we said, those nodes get put under extra scrutiny.  Uptime is how long our node has been active on the tor network.  You can see AS Name is Amazon.com, Inc., since we’re using Amazon’s hardware.  Below this you can see metric graphs for your node.  Since ours is so new, nothing shows up.


You’ve just joined a unique brotherhood.  You are a protector of internet freedom.  You’ve managed to do what only a very small portion of the population has done, and that’s take concrete action toward a free and open internet.  Anyone can click like, or retweet, but few actually look into the details and understand what it takes for a free and open internet.  I’m proud of you for following along, and if you know me at all, you’ll know this may be the first time I’ve ever said that.


Let me know if I can make this tutorial better or if you managed to make it through! twitter.com/sjkelleyjr instagram.com/sjkelleyjr


Creating an EC2 Instance on AWS to use as a Tor Relay node

But This One Goes to Eleven

A few weeks ago I documented the steps necessary to get a tor relay node up and running using Windows.  This was mainly because I wanted to give a gentle introduction to anyone interested in cryptography and internet freedom who may not have a large chunk of experience in tech.  I strongly believe anyone can become an expert in any niche given the scale of information currently available on the internet, and I don’t think technology is any different.  All it takes is a little bit of inspiration, which I was hoping to provide by taking it slow.  Now we’re going to turn up the volume a bit.



One large drawback of hosting a relay node on your personal laptop is your connection to the tor network is not persistent.  Every time you close your laptop or shut it off and turn it back on, your tor software loses all the connections that were so helpfully shuttling network traffic for everyone.  This time, we’re going to set up a Free Tier AWS EC2 instance in the cloud, and run the relay node software using this server space.  This will ensure our node will forever stay up supporting the tor network and internet freedom!

One thing to note is, once you’ve got your EC2 instance provisioned, you can put any software you want on it and it will automatically be connected to the rest of the internet!  I use this frequently for a lot of my side projects.

Also, I’m going to assume you’re running some flavor of Linux and have some familiarity with the command line moving forward.  If you don’t, please don’t fret, just google, and as always, don’t hesitate to reach out to me and ask for help!

Welcome to The Cloud

First, we’ll go to https://aws.amazon.com/ and click “Sign In to Console”.  You’ll click “I am a new user.” after being directed to the next page and entering your email address.  Here you’ll enter your first name, retype your email address, and set your password.  Then click “Create account”.  You’ll then be redirected to a Contact Information page.  Select the Personal Account radio button.  Fill out the information in the form.  When you’re done, click the Agreement check box and then Create Account (if it took you more than two tries to get the captcha don’t worry… I did too).  Next AWS asks for your payment information.  I know when I first signed up I was hesitant.  I’m always hesitant about inputting credit card info.  However, I assure you, your account will not be charged throughout this tutorial.  In fact, I’ve inadvertently racked up $1100 in AWS charges after uploading my IAM keys to GitHub (yes, I’m that stupid) and had them all refunded.  Amazon rocks with refunds.

Next, you’ll fill out the captcha and click the button to receive the phone call from Amazon herself.  Type in the pin shown on the screen and end the phone call.  Then click Continue.  We want the Basic support plan, so select that and again click Continue.  You’ll be taken to the homepage; click Sign In to Console.  Enter your email and the password you set earlier in the tutorial.

You now have an AWS account.  Welcome to the cloud.


There is an endless supply of cloud computing resources supplied by Amazon.  We’ll only be looking at one today, but I recommend you check out as much as you can.  In fact, if you don’t have a job, or dislike the one you’ve got, and you learn the ins and outs of AWS, you could easily land yourself a dev-ops role at a great company.


We’ll be using the EC2 service provided by AWS, so click that one, under AWS services.  EC2 stands for Elastic Compute Cloud (the 2 counts the two C’s).  The elastic part means that your computational resources can scale up or down depending on how much you’re using.  I believe you can also bid on computing resources, whereby the price fluctuates with demand for Amazon’s servers.  Since we’ll be doing very minimal computing, this means ours will be free!

Click the blue Launch Instance button.  You’ll be presented with various different Operating Systems that you can put on your EC2 instance.  We’ll be using Ubuntu, so scroll down until you see Ubuntu Server 16.04 and click select.  Here we’ll choose the computing resources that we’d like to use on the server.  Leave it as is, which should be the t2.micro settings, which as Amazon has so kindly informed us is free tier eligible.  Now, click Review and Launch, and finally, Launch.

Private Key File

You’ll now be presented with a pop up informing you that you need to create a private key file in order to SSH (secure shell) into your Ubuntu server.  As I said, we’re turning the volume up for this tutorial.  Here’s the general idea of what we’re doing.

You have a laptop that you’re reading this blog post on.  Somewhere in New York or Oregon, or hell, even Tokyo, Amazon has a bunch of servers it uses for various tasks on its website.  Well, it’s decided to sell some of those servers to the public to make money when they aren’t using them.  You’ve just been given a server somewhere in the deepest depths of Mordor.

Great, we have a server!  But how do we access it?  We have to use a networking protocol that allows us to pass commands to the computer through the internet.  However, we don’t want just anyone to be able to pass commands to our computer, so we need a way of securing it.  Enter: the private key file.  Using roughly the same technique that keeps your traffic secure while you do your online banking, we can keep our commands to the cloud computer safe.

Bombs Away!

Select Create a new key pair, then name it whatever you like.  I’m going to name mine “torRelayNode” because that’s what I’ll be using this EC2 instance for.  Then click “Download Key Pair”.  Save this file somewhere safe, if anyone gets ahold of it, they’ll have access to your server and can do whatever they’d like with your AWS account.


Now click Launch Instances, then View Instances.  You should see your new t2.micro instance, with an Instance State of running.

While we’re here, we’re going to set the port forwarding for the port we’ll be using on the tor node.  On the left, you’ll see Network & Security, and under that Security Groups; click that.  You should see two security groups; we want the one that belongs to the launch-wizard-1 Group Name.  When you click it you should see its current network rules.  The only one should be of type SSH, using the TCP protocol, on port 22, from any source.  Click the Edit button above that, and Add Rule.  From the drop down under Type, choose Custom TCP; the protocol is TCP; the port will be 9001 (as we’ll see in the next tutorial).  Using the Source dropdown, select “Anywhere”.  You’ve now added the firewall rule to allow for incoming connections on the tor network to connect to your node on port 9001.  If at any point you decide to change the tor node’s port in the torrc file, don’t forget to change it here as well!

You now have your very own cloud computer!


But how do we access it?  We use that private key file we downloaded, and ssh into it. First we need to get the public IP address associated with our EC2 instance.  On the left (where you saw Network & Security) you should see Instances, click Instances under that.  You should see “Public DNS:” below where all of your EC2 instances are listed, and a big long string starting with “ec2-“.  That’s your shiny new server’s address on the internet.  Copy it, we’ll need it later.

Next, we’ll open up the command line, cd into whatever directory you downloaded or moved the private key file to, then restrict the permissions of that file with chmod (ssh will refuse to use a private key that other users can read):

chmod 400 <The name of your .pem file, in my case torRelayNode.pem>

and finally:

ssh -i <The name of your .pem file> ubuntu@ec2-the-ip-address-you-copied-from-before

Note that you’ll need to be in the same directory as your .pem file, or pass the command the path to your .pem file.  If it’s your first time logging into your EC2 instance, you’ll be asked if you want to trust it.  Type yes and hit enter.  Now you’re on your new computer!

Shutting it Down

Before I forget, if you do somehow end up getting charged somewhere down the road from AWS, you can kill your instance by going to the Instances page, clicking your instance, selecting the Actions drop down, and changing the Instance State from Start to Stop (and the nuclear option, Terminate).  You’ll still have to cough up the charges (they should be small) but this should stop you from getting any more.  Also, on that note, I’d recommend looking into setting a billing alarm to send you an email if you ever do get charged.  I’ll also monitor this EC2 instance for a month and post an update if it does end up charging me, so we can fix it together if it does.  Also, again, don’t hesitate to get in touch if you have any problems with the charges!

Next Steps

In the next blog post, we’ll get the tor relay software up and running on your new cloud server, and you can now brag at parties that you’re actually protecting revolutionaries in dictatorial countries, just like I do any time someone asks me which VPN to use.  Also, don’t forget to buy a tor sticker to put on your laptop, as you’ll have joined the brotherhood after the next tutorial.









AWS Access Keys on GitHub Public Repo

Well, it finally happened. After years of playing fast and loose with API keys on GitHub, I’ve been hacked.

I woke up to an unusually large billing alarm from my AWS account. Since the code running on this account was storing almost no data, I thought this was a bit odd. So I logged in to my billing account, was redirected to the service center to submit a claim, and found this:

Screenshot from 2017-07-12 13-27-39

Well, the temporary limiting of my account didn’t appear to help much. The hacker had created about 20 m3.2xlarge EC2 instances and a 35gb snapshot for every. single. region. available on AWS (Mumbai, Tokyo, Ireland, the list goes on and on), using the keys I uploaded to GitHub.  Needless to say, I spent this morning terminating all the EC2 instances, clearing the security groups, and deleting the public key from the account.  Sounds like a blast right?

The funny thing is, the chunk of commented out code holding the keys was only a quick test written months ago, that I never used again, which is why I forgot it even existed in the code.

This was back when I was still learning the ropes. Now, whenever I need an API key, I immediately put it into a JSON file, and add that JSON file to the .gitignore for the project, so I don’t have to worry about this stuff while pushing and pulling.  I’ve heard there are better tools out there for storing ssh keys, but I’m curious if there are any for API keys.  If anyone has any suggestions I’d love to hear them in the comments.
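As a concrete sketch of that workflow (the file name secrets.json is just my own convention):

```shell
# Keep the key in a file git never tracks
echo '{ "apiKey": "REPLACE_ME" }' > secrets.json
# Tell git to ignore it so it can't sneak into a commit
echo 'secrets.json' >> .gitignore
```

The code then reads the key at runtime (e.g. require('./secrets.json') in Node) instead of carrying it in source, so a push to a public repo can never leak it.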

I submitted a claim to get a refund for the charges, we’ll see what happens.


The charges racked up by the hacker ended up being $1100 by the time I managed to kill all the instances!  I’ve since received this from Amazon AWS:

Screenshot from 2017-07-13 09-37-46

Thank God for Amazon’s customer support.  The hack was obviously my own fault, but they’re still willing to cover the costs.

Firebase Hosting and Google Domains

I decided my real first post was going to be a tutorial describing the steps I took to get my portfolio page built and hosted on Google’s firebase, along with my domain name pointing to this web page.


I was initially drawn to creating a portfolio page after taking this Udemy freelance bootcamp.  Honestly, I wouldn’t recommend it.  But it did open my eyes to the fact that building a freelance client base takes time, and a lot of marketing (hence the blogging and portfolio page).  Since my jumping off point was this bootcamp, I tried to follow the design of the instructor’s portfolio page.  In the course, he mentions that WordPress or Wix are fine for something like this.  So I poked around the websites looking at all the flashy UX templates with coffee and mouse pictures in the backgrounds.  Finally, I came to the conclusion that I wanted something simple and easily maintainable.  This led me to this little number.  Perfect.  Simple, no extra animations, short and to the point, almost like an online resume.


The first thing I did was clone the repo.  I then used the two demo projects available at the above link to change different aspects of the HTML.  Since the page is a simple single page application, I could simply open the HTML in the browser to see my updated changes (word of caution: sometimes the browser will cache the assets, so I’d switch between Firefox and Chrome when that happened, or simply close the browser and open it again).

A few bugs

Of course, things never go as planned.  The HTML featured a github activity calendar and feed functionality, however, this wasn’t working on my page.  So I did some digging and found a few problems with the code.

The problem lines were here and here.  I’m not sure whether you can source a script without the “http:”, but it looks like that’s what was left out of these lines (if anyone knows anything about this, please drop a comment!).  They’re also using old versions of the libraries they’re importing.
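For what it’s worth, a src beginning with “//” is a protocol-relative URL: the browser fills in whatever scheme the page itself was loaded over.  Something like this (the CDN path here is a made-up placeholder) is valid on a hosted page:

```html
<!-- "//" inherits the page's scheme, so this works over http: or https: -->
<script src="//cdn.example.com/github-calendar.min.js"></script>
```

The catch is that a page opened straight from disk has a file: scheme, so the URL resolves to file://cdn.example.com/… and the request fails, which is a plausible culprit when previewing the HTML locally the way we did above.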

Finally, the version of github-calendar located in the assets/plugins/ directory of the repo had an old version of the code in it, so the library was scraping nonexistent elements in the GitHub HTML.

I forked the original repo, made the changes and submitted a pull-request, however, the repo hasn’t been committed to since 2016, so it’s highly unlikely that the pull-request will be merged.


Now that I had my HTML looking slick, I wanted to use firebase to host the webpage.  I’d used firebase previously for hosting a project I built called Github Battle to learn react.js from Tyler McGinnis, who’s since gone on to be an instructor of the Udacity react nanodegree.  This guy knows his stuff, can’t give him enough praise.

To deploy to firebase is dead simple.  Simply follow these instructions (be sure to run the firebase commands inside the directory you’ll be deploying from, in the case of this tutorial it would be the Developer-Theme directory). I will typically click the Firebase console link, then Add Project and create the project via the console before running firebase init.

When you run

firebase init

You’ll be prompted with this:

Screenshot from 2017-07-11 21-12-55

You’ll press down twice to hover over ‘Hosting’ and press the spacebar then press enter.  If you’ve created the project in the firebase console prior to running the init command you’ll see your newly created project in the list of default firebase projects to choose from.   As you can see I’ve got quite a few projects.

Screenshot from 2017-07-11 21-18-29


Here’s a gotcha if you’ve been following along with the portfolio website building.  Firebase will ask which directory you want to use as your public directory.  The repo we cloned doesn’t have a public directory, so I created one.  I made a public directory and copied the assets directory, favicon.ico, and index.html into it.  I then allowed firebase to proceed as usual, using public as the public directory.  You’ll then be asked if you want to rewrite all urls (this means that if, for example, a user went to sjkelleyjr.com/TEST, they’ll be redirected to the index.html page).  Since this is a single page application, we do.  You might also be told that there already exists an index.html page, and then asked if you want to overwrite it; of course, you don’t.


You should now be able to run

firebase deploy

and see something like this

Screenshot from 2017-07-11 21-26-10

If you navigate your browser to your Hosting URL, you and anyone else on the internet should now see your website!


Now comes the fun.

Great, you’ve got a hosted URL, but you don’t want to navigate users to https://portfolio-2447c.firebaseapp.com/ any time you want them to check out your portfolio, do you?  You need a domain name (something cool and flashy, like, maybe scorpiocode.com).

I poked around the web for a bit, everyone and their dog sells domains nowadays.  I’d thought about using Amazon’s Route 53 (hoorah!).  However, since I was using firebase to host the page, I thought it might streamline the process to use google domains, and I was right.  I searched for my domain (sjkelleyjr.com) and paid the $1 a month to reserve it.  After my purchase cleared I was redirected to https://domains.google.com/registrar where you’ll see something like this

Screenshot from 2017-07-11 21-42-08

Now, log into your firebase console and select your newly created project.  You’ll see a sidebar like this

Screenshot from 2017-07-11 21-43-35

Click Hosting, then Connect Domain and type in your newly reserved domain name.  If you elected to reserve the domain name via google domains, you won’t need to verify ownership (I told you it would streamline the process), since google already knows you own the domain name.  If you didn’t you’ll see something like this

Screenshot from 2017-07-12 12-16-10

And you’re on your own (however, I assume the next steps are VERY similar).  If you did you’ll see something like this

Credit to this stack overflow post for the image, and in fact I’d highly recommend going to it.  He has essentially done all the work for us, with a slight gotcha, which I suspect is his problem (and you’ll see my first stack overflow post there hoping to fix the issue!).

The Final Gotcha

Go back to your https://domains.google.com/registrar page and click the DNS button.  This will drop down a bunch of different options you can customize with the DNS portion of your Domain Name.  The one we’re interested in is Custom Resource Records, so scroll down to it.  You should see something like this.

enter image description here

(again from the stack overflow post).  Here’s where I believe the poster went wrong.  Type in your URL exactly as it was in the Connect Domain modal on your firebase console.  The SO poster has his crossed out in red, but for our example it would be sjkelleyjr.com, where it says @, then type in the IP address given in firebase where it says IPv4 address.  Now instead of clicking Add, click the + next to IPv4 and add the second IP address given in the firebase console.  You’ll notice the Name entry automatically converts “sjkelleyjr.com” to “@”.  You’ll also notice that the poster has “@sjkelleyjr.com”, which is not correct.


If you’re like me you’re curious as to what @ means in the context of DNS, so I decided to do a bit of research.  @ simply means “this domain name” so in our case @ = sjkelleyjr.com.  Feel free to Google this yourself for a more detailed explanation, and I may even do a post at a later date if this is something that interests readers.


These DNS changes take time to propagate throughout DNS servers, so you won’t be able to type your URL into a browser and get directed to your firebase page until then (it shouldn’t take any more than 2 hours from what I’ve read online).  Once it does, try typing www.<YOURURL>.com (where YOURURL is the URL you registered).  You’ll notice it didn’t work.  What gives?

Not Quite

Back in firebase you should see the option to add a redirect.  You want to enable the redirect of the www. version of your URL to your firebase URL as well.  This involves exactly the same steps as the original URL, however, in https://domains.google.com/registrar you’ll add www. in front of the URL, and use the IP addresses given to you for the www. URL in firebase.  When you click Add, the record should once again convert “www.sjkelleyjr.com” to “www”.  These changes must also propagate, but I noticed they take less time.

Really Done