

Was recently watching MythBusters, a first-ballot hall of fame TV show, when I got to wondering what their rate of actually busting myths is. Thankfully, MythResults keeps a beautiful archive of episode summaries! So using R, I was able to pull every result from the show's original run and group it into one of three categories: Busted, Confirmed, and Plausible. Let's take a look!
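The core of that step is just normalizing each scraped verdict string into one of the three buckets and tallying. Here's a minimal sketch of that grouping logic — in Python rather than the R I actually used, and with made-up sample strings standing in for the scraped MythResults text:

```python
from collections import Counter

def categorize(result: str) -> str:
    """Map a raw verdict string onto one of the three buckets.

    The raw strings here are hypothetical; the real scraped
    text from MythResults may be formatted differently."""
    r = result.strip().lower()
    if "bust" in r:
        return "Busted"
    if "confirm" in r:
        return "Confirmed"
    if "plausible" in r:
        return "Plausible"
    raise ValueError(f"unrecognized verdict: {result!r}")

# Sample verdicts standing in for the full 928-myth archive
sample = ["Busted", "Confirmed", " plausible ", "BUSTED", "Confirmed"]
counts = Counter(categorize(v) for v in sample)
print(counts)
```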

Of the 928 myths in question, about 57% are busted by Adam, Jamie, and co., while 27% are confirmed, and 16% are deemed plausible. This seems like a pretty optimal mix, no? Give the people what they paid for by mostly busting myths (it's in the name, after all), throw in a healthy mix of confirmations (always a fun outcome), and keep plausibles (which always felt like the show weaseling its way out of an answer) to a minimum.

The other thing I wanted to check was how this distribution changed over time. So let's plot the cumulative rates of each result across episode number:
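Conceptually, the cumulative rate after the nth myth is just the running count of each verdict divided by n. A rough sketch of that computation (again in Python rather than R, on a toy list of verdicts in airing order):

```python
from collections import Counter

def cumulative_rates(verdicts):
    """Running share of each verdict category after every myth.

    `verdicts` is assumed to be a list of category labels in
    airing order -- a stand-in for the real scraped data."""
    totals = Counter()
    out = []
    for i, v in enumerate(verdicts, start=1):
        totals[v] += 1
        out.append({k: totals[k] / i
                    for k in ("Busted", "Confirmed", "Plausible")})
    return out

rates = cumulative_rates(["Busted", "Busted", "Confirmed", "Plausible"])
print(rates[-1])  # {'Busted': 0.5, 'Confirmed': 0.25, 'Plausible': 0.25}
```

Each entry of `rates` gives the mix of results up to that point, which is exactly what gets plotted against episode number.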

If you'll pardon the noisy spikes toward the start, it's clear that the show did more busting of myths early on. Fortunately, it seems like the decrease in busts was mostly absorbed by confirmations rather than plausible results.

I have no idea why this is. It's possible that it's random, though this trend does register as statistically significant. It's also possible that the show started to run out of common myths, which were perhaps more bustable than the ultra-specific ones that came later. In any case, I'm glad that you and I can now safely go about our days with a deeper understanding of this pressing issue.
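One simple way to check a trend like this is a two-proportion z-test comparing the bust rate in the first half of the run against the second half. A sketch of that test — the counts below are illustrative, not the actual per-half numbers from the show:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two proportions,
    using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical split of the 529 total busts across two halves of 464 myths
z = two_proportion_z(290, 464, 239, 464)
print(round(z, 2))  # 3.38 -- well past the usual 1.96 cutoff
```

A |z| above about 1.96 corresponds to significance at the 5% level, so a drop in bust rate of this size over this many myths would not look like chance.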
