Large Moon Models (LMMs)

What kind of bug would make machine learning suddenly 40% worse at NetHack?

One day, a roguelike-playing system just kept biffing it, for celestial reasons.

Kevin Purdy
Moon rendered in ASCII text, with "You are lucky! Full moon tonight" in the middle in yellow.
Credit: Aurich Lawson

Members of the Legendary Computer Bugs Tribunal, honored guests, if I may have your attention? I would, humbly, submit a new contender for your esteemed judgment. You may or may not find it novel, you may even deign to call it a "bug," but I assure you, you will find it entertaining.

Consider NetHack. It is one of the all-time great roguelikes, and I mean "roguelike" in the stricter sense of the term. The content is procedurally generated, deaths are permanent, and the only thing you keep from game to game is your skill and knowledge. I do understand that the only thing two roguelike fans can agree on is how wrong the third roguelike fan is in their definition of roguelike, but, please, let us move on.

NetHack is great for machine learning…

Being a difficult game full of consequential choices and random challenges, as well as a "single-agent" game that can be generated and played at lightning speed on modern computers, NetHack is great for those working in machine learning—or imitation learning, actually, as detailed in Jens Tuyls' paper on how compute scaling affects single-agent game learning. Using Tuyls' model of expert NetHack behavior, Bartłomiej Cupiał and Maciej Wołczyk trained a neural network to play and improve itself using reinforcement learning.

By mid-May of this year, the two had their model consistently scoring 5,000 points by their own metrics. Then, on one run, the model suddenly got worse, on the order of 40 percent: it scored 3,000 points. Machine learning performance on problems like this generally moves in one direction, and gradually. A sudden drop didn't make sense.

Cupiał and Wołczyk tried quite a few things: reverting their code, restoring their entire software stack from a Singularity backup, and rolling back their CUDA libraries. The result? 3,000 points. They rebuilt everything from scratch, and it was still 3,000 points.

Example NetHack screen from a Linux terminal window
NetHack, played by a regular human. Credit: xmodulo.com (CC BY 2.0)

… except on certain nights

As detailed in Cupiał's X (formerly Twitter) thread, this was several hours of confused trial and error for him and Wołczyk. "I am starting to feel like a madman. I can't even watch a TV show constantly thinking about the bug," Cupiał wrote. In desperation, he asked model author Tuyls if he knew what could be wrong, then woke up in Kraków to an answer:

"Oh yes, it's probably a full moon today."

In NetHack, the game in which the DevTeam has thought of everything, if the game detects from your system clock that it should be a full moon, it will generate a message: "You are lucky! Full moon tonight." A full moon imparts a few player benefits: a single point added to Luck, and werecreatures mostly kept to their animal forms.
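For the curious, the check requires nothing more exotic than the system clock and the average length of a lunar cycle. Below is a rough Python sketch of the idea. To be clear, this is not NetHack's actual code (the game computes a bucketed moon phase in its C source); the reference timestamp and the width of the "full" window here are illustrative assumptions.

    import time

    SYNODIC_MONTH = 29.530588853  # mean days from one new moon to the next
    KNOWN_NEW_MOON = 947_182_440  # Unix time of the new moon on 2000-01-06 18:14 UTC

    def moon_age_days(now=None):
        """Days elapsed since the most recent new moon."""
        now = time.time() if now is None else now
        return ((now - KNOWN_NEW_MOON) / 86_400.0) % SYNODIC_MONTH

    def is_full_moon(now=None):
        """True within about 1.8 days of mid-cycle (one-eighth of the
        cycle), loosely mirroring NetHack's eight bucketed moon phases."""
        return abs(moon_age_days(now) - SYNODIC_MONTH / 2) < SYNODIC_MONTH / 16

    if is_full_moon():
        print("You are lucky! Full moon tonight.")

The point is that the input changed underneath the model: nothing in the code, the weights, or the CUDA stack had to differ for the game itself to behave differently.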

It's an easier game, all things considered, so why would the learning agent's score drop? The model simply had no full-moon games in its training data, so a branching series of unfamiliar decisions likely led to worse outcomes, or just confusion. It was indeed a full moon in Kraków when the 3,000-ish scores started showing up. What a terrible night to have a learning model.

Of course, "score" is not a real metric for success in NetHack, as Cupiał himself noted. Ask a model to get the best score, and it will farm the heck out of early-stage monsters because it never gets bored. "Finding items required for [ascension] or even [just] doing a quest is too much for pure RL agent," Cupiał wrote. Another neural network, AutoAscend, does a better job of progressing through the game, but "even it can only solve sokoban and reach mines end," Cupiał notes.

Is it a bug?

I submit to you that, although NetHack responded to the full moon in its intended way, this quirky, very hard-to-fathom stop on a machine-learning journey was indeed a bug and a worthy one in the pantheon. It's not a Harvard moth, nor a 500-mile email, but what is?

Because the team used Singularity to back up and restore their stack, every restored environment still ran against the same real-world clock, carrying the full moon, and the resulting bug, through each attempted fix. The machine's resulting behavior was so bizarre, and seemingly driven by unseen forces, that it drove a coder into fits. And the story has a beginning, a climactic middle, and a denouement that teaches us something, however obscure.
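If there is a practical moral, it is to pin anything your environment reads from the wall clock. Here is one hedged sketch of how that could look, using libfaketime, an LD_PRELOAD shim that feeds a process a frozen date; the library path and the train_agent.py entry point below are illustrative assumptions, not the team's actual setup.

    import os
    import subprocess

    env = os.environ.copy()
    # The libfaketime path varies by distro; this one is typical of Debian/Ubuntu.
    env["LD_PRELOAD"] = "/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1"
    # A leading "@" tells libfaketime this is an absolute date, not an offset.
    env["FAKETIME"] = "@2024-05-01 12:00:00"

    # train_agent.py stands in for whatever launches the training run.
    subprocess.run(["python", "train_agent.py"], env=env, check=True)

Run under a frozen clock, a full moon cannot sneak into half of your experiments and skip the other half.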

The NetHack Lunar Learning Bug is, I submit, quite worth memorializing. Thank you for your time.


Kevin Purdy, Senior Technology Reporter
Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.
Staff Picks
Peevester
5000 points is pretty sad, probably 2000 of it is finding an elven dagger and naming it "Sting". And the rest is a lot of rats and goblins.

Looking at what happens on the full moon, I still don't get why the bot scored lower. Maybe a higher chance of being bitten by a wererat and dropping all your stuff? Or maybe it's because throwing tripe at attacking dogs works less often to tame them (Vs 100% normally).

edit: aha, I think this is it - attacked werecreatures are much more likely to summon help on full moons. Poor bot probably got overrun.