How I Built My Own Data Center and Became My Own ISP

“Wait, what?”

You read it right: I built a data center in my house, and became my own ISP. But, understand: I didn’t wake up one morning and think, “You know what I’d like to do today? Become my own Internet Service Provider.” It was something that happened incrementally over time. It was a Stone Soup type thing, where I looked up and realized—holy smokes! I've made my own data center!
If you're asking, "Why?" my quick counter is: "Why not, eh?"

The main reason this all started is that I do a lot of internet scanning. Not like creeper scanning, but like cool scanning. I perform a deep scan of every IP on the internet, and I do it extremely quickly. This is incredibly taxing on my network, as well as on the hardware. Scanning through a traditional ISP is nearly impossible due to the tight limitations they impose, as well as the lack of sufficient bandwidth. And, sure, there are services out there that sell access to pre-collected scan data, but at this volume that gets really pricey, really fast.
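To give a flavor of what "scanning" means in miniature, here's a tiny asynchronous TCP probe in Python. To be clear, this isn't my actual tooling; real internet-wide scans use purpose-built, stateless scanners, and the addresses and port below are just placeholders.

```python
import asyncio

# Tiny sketch of a concurrent TCP "is this port open?" probe.
# NOT my actual tooling: internet-wide scanning uses purpose-built,
# stateless scanners. Targets and port below are placeholders only.
TARGETS = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]  # RFC 5737 test addresses
PORT = 443
CONCURRENCY = 512  # cap simultaneous connection attempts

async def probe(ip, port, sem, timeout=2.0):
    async with sem:
        try:
            _, writer = await asyncio.wait_for(asyncio.open_connection(ip, port), timeout)
            writer.close()
            await writer.wait_closed()
            return ip, True
        except (OSError, asyncio.TimeoutError):
            return ip, False

async def main():
    sem = asyncio.Semaphore(CONCURRENCY)
    results = await asyncio.gather(*(probe(ip, PORT, sem) for ip in TARGETS))
    for ip, is_open in results:
        print(f"{ip}:{PORT} {'open' if is_open else 'closed/filtered'}")

asyncio.run(main())
```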

I had $250,000 worth of credits with a VPS provider, credits that came with an expiry date (and how I came by them is a whole other story in itself). But when they ran out—and they would, quickly—I needed another plan ready.

So I started looking into alternative avenues, and, after crunching some numbers, I discovered that buying a server is actually way cheaper in my situation—so much so that it essentially paid for itself in roughly six months. Most businesses just assume it's better to use a cloud provider—or maybe they don't care enough to investigate—and are content to pay the stiff bill that comes with it. For tech companies though—companies like ours that do way more than just host a website—this isn't the case.
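For the curious, the math looks something like this. I'm not going to publish my actual bills, so every figure below is a made-up placeholder, but it shows the shape of the break-even calculation.

```python
# Illustrative break-even math with placeholder numbers (not my real bills).
cloud_monthly = 3_000        # hypothetical monthly cloud spend
hardware_upfront = 15_000    # hypothetical one-time server purchase
self_hosted_monthly = 500    # hypothetical power + connectivity costs

months_to_break_even = hardware_upfront / (cloud_monthly - self_hosted_monthly)
print(f"Break even after ~{months_to_break_even:.0f} months")  # ~6 with these numbers
```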

“So what was the process?”

Since this wasn’t a cloak-and-dagger situation, I went through the proper channels (wah-wah). I applied to the Canadian Radio-television and Telecommunications Commission (CRTC), which is Canada’s version of the United States’ Federal Communications Commission (FCC), for a Basic International Telecommunications Services (BITS) license—so… many… acronyms…

It required some legal work, notaries, and paperwork in triplicate, but there wasn’t an absurd number of hoops to jump through; overall, it was a very manageable process—Oh! And there’s a public comment period on all license applications where people can say what they think of it… I received not a single comment…

[wipes a single tear from cheek]

Anyway, after it was all finalized, I was legally licensed to sell internet, and this is where the story gets really interesting. I was licensed to sell internet, but I had no internet to actually sell. The process of getting some was frustrating, and the cost of internet I could buy to resell was much higher than I’d expected. O! what am I to do?

Well, a few years back, I met an ISP owner, who introduced me to “wholesale” internet from Bell (a big ISP in Canada), which ISPs don’t really talk about (for obvious reasons; don’t give away what you can sell, amiright?). There was a major issue, though: getting wholesale internet would require running a fiber line from a source to my house, which would mean construction costs, city permits, and a number of other expenses, pushing the monthly cost well above what a normal person would consider, and that, for a minimum six-year commitment…

So shoot... In the eternal words of Axl Rose, “Where do we go now?”

There was another option out there: a reseller of Bell internet had an offer for substantially less, but it was still very expensive. I know what you’re thinking: “Hey Jason, how come you don’t rent out a rack at a colocation facility eh?” A fair question. I tried that, but every new server would've increased the rental fee considerably. And the network was decent, but I had to share it with other people, which meant that if I had a software bug, or if they did, it could take the whole network down, and then nobody's happy. Besides all that, it was still very costly, and I would be paying for features I was never going to use. It would be viable if my hardware requirements were fixed and I wasn’t going to change anything. But therein lies the issue: my scanning, among many other operations, is anything but static; on the contrary, it's very dynamic.

This, then, would require me to alter and manipulate the settings and features constantly, causing more of a headache than it would be worth. And not only that, [big inhale] colocation facilities advertise 99.999% uptime, also called "five 9s," but if you're willing to run it yourself and settle for 99.9%—and the gap between the two is bigger than it sounds—you can save yourself quite a bit of money, and you don't have to pay for unwanted features that are baked into the price.
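If you want to see just how big that gap is, here's the quick arithmetic; the only assumption is a plain 365-day year.

```python
# Allowed downtime per year at different availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (0.99999, 0.999):  # "five 9s" vs "three 9s"
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime allows ~{downtime:,.1f} minutes of downtime per year")
# 99.999% -> about 5 minutes/year; 99.9% -> about 526 minutes (~8.8 hours)/year
```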

“So then what did you do?”

I built.

I learned how to build my own network, put together a build-out complete with a network diagram, and posted it for comments on Reddit at /r/homelab.
And yes, my Reddit username really is 420SwagBootyWizzard. Deal with it.

Moving along, there are five major components to the network, all of which bear a Star Trek-themed name.
(Data Center V. 1.0)

  1. USS Defiant - A Dell R710, used as my primary messaging gateway. It hosts, parses, and manages all of my data ingestion. Given the internet-scale data collection I do, this server chews through quite a lot of work.
  2. Picard - My deep learning rig, which has multiple Graphics Processing Units (GPUs) and the most powerful CPUs in the data center. It functions almost entirely as a deep learning and analysis machine. This server is a real time saver when I’m working on a new analysis or ML project.
  3. Janeway - This is a very high-powered database server with a large Solid State Drive (SSD) volume; she sports several 'enterprise grade' (chortle, chortle) SSDs. The database needs a very high write speed to keep up with the incoming data, as well as many terabytes of storage. Building this server was actually quite challenging: the database has to sustain very fast writes, fast reads, and heavy write endurance, all while offering large capacity. Meeting all those requirements wasn’t easy. I calculated that a common consumer SSD (like a Samsung 860 EVO, for example) would likely die within the first year of use (a rough version of that math is sketched just after this list).
  4. Then there's the Delta Flyer, my other Dell R710. This one is my virtual machine server, which runs the Proxmox virtualization environment.
  5. And lastly, there's Databanks (I planned to name it after the computer on the Enterprise, but couldn't find a name for it—if anyone can help me out here, I'd be much obliged). This is basically my Domain Name System (DNS) server and a Synology Network Attached Storage (NAS). It has a very large storage capacity and is used mostly for backups and archiving, but it can also be hooked into any VM or other server that needs to hold some large folders.
    (Also, side note: if you want to know more about DNS servers, or how they can be used in a cyber attack, check out my blog about them.)
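And here's the back-of-the-envelope endurance math I mentioned for Janeway. Both numbers are ballpark assumptions, not measurements: consumer SSDs carry a TBW (terabytes written) endurance rating, something on the order of 600 TBW is typical for a roughly 1 TB consumer drive, and I'm assuming an ingest load that writes a couple of terabytes a day.

```python
# Back-of-the-envelope SSD endurance check (ballpark assumptions, not measurements).
tbw_rating_tb = 600     # assumed TBW rating for a ~1 TB consumer SSD
daily_writes_tb = 2     # assumed sustained database write load per day

days_until_worn_out = tbw_rating_tb / daily_writes_tb
print(f"~{days_until_worn_out:.0f} days (~{days_until_worn_out / 365:.1f} years) to exhaust the rating")
# ~300 days, i.e. under a year at this write rate
```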

(Data Center V. 2.0)

We happened to be remodeling our home, which, I mean, c’mon, it’s the perfect time to add some new wiring to the house—I should point out here, too, that up until this point, my wife thought I was being a little more nuts than usual; but now, she totally digs it. So with home renovations taking place and wires being run, I went ahead and splurged on a glass-enclosed data center and a couple of business-grade WiFi access points. It ended up being a very small room, but with plenty of space for my needs.

(WiFi access points)

Now, as I’m sure you’ve guessed already, the costs of such an undertaking are significant: the servers, the casing, the rails, the power distribution, an Uninterruptible Power Supply (basically a big battery so that when your house loses power, your computers keep running), etc.; it all adds up, and fast. But on the upside, since I am technically an authentic internet service provider, I can, in fact, sell internet to my neighbors, legally, which I think is pretty stinkin’ rad. Most importantly, however, it also facilitates my research in a cost-effective way. Although it’s costly up front, it’s cost effective over the long run, and I could feasibly break even within the first couple of years—I know, I know, I jumped like twenty coolness levels just now, but try to keep a lid on it, yeah? :P

“What are you gonna do now?”

Since this is still a work in progress, I’m not sure what the future will look like. I imagine though that I’ll continue to build it out further and further, because, if I’m being honest here, I rather enjoy being my own ISP—king of my own domain, you might say. It gives me the freedom, the speed, and the bandwidth to scan what I please, as often as I please. Now, I’m not saying you should go out there and do the same thing, I’m just saying that you could. It was surprisingly easy, and a lot of fun.

One of the original ideas was to offer it to infosec researchers, because, like, that's what we do around here, and now that it's up and running, we'd certainly like to do that. Once I figured out how difficult it can be to do infosec work on the major cloud providers, I thought it would be a good idea to support research by providing access to common data sets and shared findings. We've since backed away from that idea a little due to the logistics and client management involved, but it's certainly still an option for the future.

Dictated by Jason, written by DD Towler