
How running websites has changed in the last two decades (for an Ars IT guru)

Ars' IT guru Jason Marlin has 20+ years in information infrastructure—game's changed a bit.

Jason Marlin
The Pit, a BBS door game. In this shot, Lee Hutchinson was attacking these guys. Or, maybe they're attacking him. Credit: Lee Hutchinson

I was a true nerd growing up in the 1980s—not in the hipster way but in the 10-pound-issue-of-Computer-Shopper-under-my-arm way (these things were seriously huge). I was thoroughly addicted to BBSes (Bulletin Board Systems) by the time I was 10. Maybe it's no surprise I ended up as a technical director for a science and tech site.

In fact, I'd draw a direct line from the job of managing your own BBS (aka SysOping) to managing a modern Web infrastructure. And with everyone around Ars looking back for the site's 20th anniversary, let's make that line a bit clearer. This won't be an exhaustive history of websites, but here's how my own experience of managing them has evolved over the past two decades—plus how the tools and thinking have changed along the way.

LOAD “*”, 8, 1

My first SysOp experience was powered by a Commodore 128 (in 64 mode, of course) running Greg Pfountz's Color 64 software. I sent Greg my check—well, my mom's check—and received back a single 5.25-inch floppy diskette along with a hand-bound, dot-matrix-printed manual. It was on.

Color 64 was an amazing feat of ANSI-colored ASCII, unlike most of the BBS software available at the time, which was bland, colorless text. With Color 64, it felt like you were crafting a user experience. I can't recall the name of my BBS anymore, but I can assure you the theme was dragon- and/or kung-fu-related. I'm vaguely ashamed to admit that my handle was DragonMaster, but I was doing my part to solidify nerd stereotypes.

Unfortunately, my network infrastructure consisted of a single phone line, meaning I had to disable any ringers (read: unplug the rotary-dial hanging wall phone) and operate between the hours of 11pm and 5am. This also meant the BBS wasn’t terribly interactive. With only one line, and a single Commodore 1571 disk drive, users could neither chat nor download more than a single game at a time.

The Commodore 1670 still makes the heart race. Credit: Canada's Personal Computer Museum

In my dreams, I’d soon be running a real BBS, like the famous Fear & Loathing in Las Vegas to which I was a very frequent dialer. I’d have 10 lines so users could chat in real-time, all connected to 1200—nay!—2400 baud modems. And there'd be an endless supply of games stored on the mythical 10 MB Lt. Kernal hard drive.

Alas, this was all far out of my reach, but I had definitely been bitten by some kind of new bug that included the unusual desire to build digital places where users could gather.

1990s

I continued to tinker with BBS software, including Excalibur BBS, a very interesting precursor to HTML. A quick Google image search will give you a feel for just how ahead of the curve this software was.

$ cd ~/public_html

I first became familiar with HTML in college in the mid-'90s, when I would compose assignments and upload them to my public home directory, where professors could view them with Netscape or Mosaic at their leisure. The bonus +10 points for "use of technology" was a great motivator.

Apache + Perl + XML + Shared Hosting

One of the first actual “applications” I wrote as a Web developer was a newsroom for a telecommunications company. The underlying technology was a common stack at the time: Apache as the HTTP server, Perl for the server-side language, and a flat-file database. I didn’t have any familiarity with real databases at this point, but I knew how to write and parse XML files. All of this was hosted on a surprisingly capable shared platform where any serving rules I needed could be included in an .htaccess file. I soon learned that .htaccess gave my inexperienced hands far too much power!
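Just to give a flavor of what that power looked like (the paths and rules here are invented for illustration, not recovered from the original site), a few lines of .htaccess could hide the flat-file XML "database" from direct download and route clean URLs to the Perl CGI script that rendered them:

    # Hypothetical sketch of the kind of rules .htaccess allowed on a shared host.
    # Keep the flat-file XML "database" from being served directly.
    # (Apache 2.2-era syntax; modern Apache would use "Require all denied".)
    <FilesMatch "\.xml$">
        Order allow,deny
        Deny from all
    </FilesMatch>

    # Route clean URLs to the Perl CGI script that renders the newsroom pages.
    RewriteEngine On
    RewriteRule ^news/([0-9]+)$ /cgi-bin/newsroom.pl?id=$1 [L]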

While shared hosting did the trick, developers at the time were at the mercy of admins when it came to software versions and extensions. You also had to worry about what your neighbors were doing within those same shared resources, including various unsavory ventures. A hack into a single machine could easily compromise hundreds of sites.

IIS, FrontPage Extensions, and Access

Eventually, the agency I worked at had enough clients to necessitate its very own server. To my chagrin, this was a Windows machine running IIS (Internet Information Services). This was completely foreign territory for me, but after firing up the FrontPage IDE (integrated development environment), I was blown away to see how simple Microsoft had made the ordinarily complex task of saving validated input to a database. (Seriously, amazing.) This sent me on a real tangent of pursuing the perfect graphical IDE, including a brief and regrettable dalliance with Macromedia Dreamweaver. I soon learned that tools that generate code for you tend to produce a significant amount of spaghetti that can only be untangled by the same tool.

Managing IIS within Windows NT 3.51 also seemed dangerously easy for someone coming from a Unix background. At the same time, it felt limiting—where were the .conf files that made granular customizations (and phenomenal screwups) possible?

This became my platform for a while, as we built a number of custom CMSes (content management systems) for clients, never with any forward-looking consideration for managing a common codebase, long-term upkeep, or even version control. The horror.

Early 2000s, starting with ColdFusion

I realize I'll be alienating myself at this point, but I really liked Allaire's ColdFusion environment and used it for at least four years to build some fairly large-scale applications and intranets. The underlying language was CFML (ColdFusion Markup Language). It was like HTML, but it made trivial work of querying databases and integrating with external technologies like Java Servlets or CORBA components. ColdFusion had plenty of haters, but I've always been fairly agnostic about technology, choosing whatever would get the job done fastest.
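To show what I mean (the datasource and table names below are made up for the example), pulling rows out of a database and rendering them took just a couple of CFML tags, with none of the CGI plumbing I was used to:

    <!--- Minimal CFML sketch: query a datasource, then loop over the results. --->
    <cfquery name="news" datasource="newsroom">
        SELECT title, published_on FROM articles ORDER BY published_on DESC
    </cfquery>

    <ul>
    <cfoutput query="news">
        <li>#title# (#DateFormat(published_on, "mmm d, yyyy")#)</li>
    </cfoutput>
    </ul>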

Enter Web Frameworks

ColdFusion's very low barrier to productivity earned it infamy as a simpleton's language that brought poorly trained programmers into the arena, much like PHP. While I can't argue with that, it's ironic that my first real exposure to a proper Web framework came in the form of Fusebox. Fusebox began as a way to organize your application with simple file-naming and directory-layout conventions. This sounds obvious, but, like most Web developers at the time, I tended toward a constantly evolving personal approach to application layout and struggled with separating concerns such as database queries and display components. I had tinkered with Struts, but since Java wasn't an option at my day job, I never fully grokked it. Fusebox, however, opened my eyes to the language-transcending paradigm of MVC (model-view-controller) frameworks. This was years before the mind-blowing 15-minute Ruby on Rails blog demonstration by David Heinemeier Hansson.

These days, I would never consider starting a large application without choosing a framework, and there are many exciting options to choose from. Laravel is a personal favorite for PHP.
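As a rough sketch of what that separation of concerns looks like today (the route, controller, and model names are purely illustrative, not from a real codebase), a Laravel controller only hands data from the model to a view, with the SQL and the HTML living elsewhere:

    <?php
    // app/Http/Controllers/ArticleController.php, an illustrative sketch.
    // In routes/web.php a single line wires it up:
    //     Route::get('/articles/{id}', [ArticleController::class, 'show']);

    namespace App\Http\Controllers;

    use App\Models\Article;

    class ArticleController extends Controller
    {
        // The controller's only job: ask the model for data, hand it to a view.
        // The query lives behind the Article model; the markup lives in the view.
        public function show(int $id)
        {
            return view('articles.show', [
                'article' => Article::findOrFail($id),
            ]);
        }
    }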

Really, that Rails demo was mind-blowing at the time.

Clustered Webservers

My first experience with a high-traffic website came around 2002. With higher traffic came more responsibility and more middle-of-the-night phone calls requiring me to reboot servers. I finally decided to learn about load balancing, caching, and clustered servers. This was another revelation, as it opened up the possibility of nearly endless scalability.

If one machine went offline, we now had backup machines to keep things humming. We also had analytics and detailed cluster metrics. Life was good and so was my sleep.
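The pattern itself is simple enough to sketch. Something like the following Apache mod_proxy_balancer configuration (the backend addresses are invented, and this is today's Apache rather than whatever we ran in 2002) spreads requests across a pool of identical Web servers, so one dead machine no longer takes the site with it:

    # Illustrative load-balancer sketch using Apache's mod_proxy_balancer.
    # Requires mod_proxy, mod_proxy_http, and mod_lbmethod_byrequests.
    <Proxy "balancer://webcluster">
        BalancerMember "http://10.0.0.11:8080"
        BalancerMember "http://10.0.0.12:8080"
        BalancerMember "http://10.0.0.13:8080"
        ProxySet lbmethod=byrequests
    </Proxy>

    ProxyPass        "/" "balancer://webcluster/"
    ProxyPassReverse "/" "balancer://webcluster/"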

Rise of the Virtual Machines: AWS, Vagrant

AWS (Amazon Web Services) seemed to come from nowhere, providing developers with exactly the tools we needed. It also cut out the middleman of traditional hosting environments. No longer did we need to ask what technologies we were allowed to use; the sky was suddenly the limit. Want to try building an app in Django or Node.js? No problem! Fire up a couple of VMs (virtual machines) and go to town. You could do it all with AWS: virtual firewalls, load balancers, specialized database clusters, CDNs (content delivery networks) for static assets, and just about anything else you could dream up. It became a DIY datacenter—which was a curse and a blessing. With every new service you add in this type of environment, you need monitoring and someone with the knowledge to bring it back online when things go awry. (They always will.) It's very easy for an overzealous developer to bite off more than they can chew.
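For a sense of how low the barrier became, this is all it takes with the AWS CLI to get two fresh VMs (the AMI ID below is a placeholder, and the modern CLI postdates those early EC2 days):

    # Hypothetical example: launch two small VMs on demand with the AWS CLI.
    aws ec2 run-instances \
        --image-id ami-0abcdef1234567890 \
        --instance-type t3.micro \
        --count 2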

What AWS made possible in the cloud, Vagrant made possible on your workstation. With Vagrant, we gained easy, scriptable control over a selection of VM providers. I could finally test new flavors of Linux and all manner of software packages in an environment that was easily reproducible in the cloud when ready to deploy. If something went wrong during setup, a simple vagrant destroy let you start again from the same image. This made development so much more enjoyable than running servers directly on my workstation OS, or even managing them by hand within VMware or VirtualBox.
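A minimal Vagrantfile captures the whole idea (the box name and provisioning step here are just examples): the machine's definition lives in the project, so vagrant up gives everyone the same VM and vagrant destroy throws it away cleanly.

    # Hypothetical minimal Vagrantfile: a reproducible local VM, defined in code.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/focal64"
      config.vm.network "forwarded_port", guest: 80, host: 8080
      config.vm.provision "shell", inline: "apt-get update && apt-get install -y apache2"
    end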

The 2010s—from Webmasters to DevOps

Can we pause for a moment and talk about how much I’ve always hated the term webmaster?

Person: “What do you do for a living?”

Me: “I build sweet web applications using different programming languages and I also work with infrastructure…”

Person: “Ohhh, so you’re a WEBMASTER!” :(

I move to strike this abhorrent word from the vernacular.

Besides sounding like it involves a 20-sided die, webmaster didn't capture the role many of us filled across both programming and infrastructure management. That's why I was thrilled to see a shift in the last decade toward a more respectable term: DevOps. DevOps engineers use programming to build, manage, and document actual infrastructure.

Soon, I'd join Ars Technica (and get struck by lightning, which is only sort of related). Credit: Aurich Lawson / Derek Riggs

Chef + Ansible

When I first started at Ars Technica eight years ago, we wanted to be able to easily add or remove Web and database servers within our virtualized environment. After looking through a number of approaches, we settled on Chef, which makes infrastructure management simple through a hierarchy of mostly kitchen-oriented analogies (knife, cookbooks, recipes, etc.). Once you learn how variable properties, or "attributes" as they're called in Chef, cascade from roles down to individual nodes, it becomes very easy to keep all your server software and versions managed from a single platform. Chef allowed us to stop micromanaging individual servers within clusters and made it much easier to upgrade machines en masse.
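A recipe is just Ruby. This sketch (the attribute names are invented, not lifted from our actual cookbooks) installs a Web server at whatever version the role's attributes dictate, which is exactly what let us stop touching machines one at a time:

    # Illustrative Chef recipe: the package version comes from a cascading
    # attribute (set at the role or environment level) instead of being hard-coded.
    package 'apache2' do
      version node['web']['apache_version']
      action :install
    end

    service 'apache2' do
      action [:enable, :start]
    end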

My personal preference these days is Red Hat's Python-based Ansible, which I find a bit easier to work with and slightly less fragile for smaller organizations. In contrast to Chef's central-server requirement, Ansible works over SSH from a management machine (in our case, our development laptops), though it does have a server product called Tower for larger setups. Ansible also lets you write most of your configuration data in YAML, making it very readable.
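The same idea in Ansible reads almost like documentation. A playbook along these lines (the group and package names are illustrative) gets pushed to every host in the cluster over SSH:

    # Illustrative Ansible playbook: configure every host in the "webservers" group.
    - hosts: webservers
      become: true
      tasks:
        - name: Install Apache
          apt:
            name: apache2
            state: present

        - name: Make sure Apache is enabled and running
          service:
            name: apache2
            state: started
            enabled: true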

We continue to host our environment with ServerCentral Turing Group, as we have for several years now. They help us strike the right balance between the things we’re good at (writing the code and configuring VMs) and the things we just can’t competently manage, like diesel backup generators, redundant networks, or realtime replication to a failover datacenter.

2019 and beyond

I'll always be nostalgic for the halcyon days of hand-typed modem commands, when the promise of The Lawnmower Man and HAL 9000 loomed large on the horizon. But the here and now also holds exciting developments for the next phase in infrastructure evolution. There are so many promising tools that Ars has yet to put into production: Docker Swarm and Kubernetes, for instance. It's amazing to reflect on just how far we've come and how much more productive a single developer can be these days versus those long, dial-tone-riddled BBS days. From here, I expect we'll see more and more abstraction away from the complex layers of technology required to run a modern Web presence.

Here's to the next 20 years!

Jason Marlin, Technical Director
Jason is the Technical Director for Ars Technica. If you ask him nicely he'll tell you wild stories about when he lived in Atlanta.