At Public Books, Ted Underwood defends the humanities against charges of irrelevance in an increasingly technological, commodified, and engineered world. Underwood begins by clarifying the meaning of “machine learning” and helpfully distinguishing it from mere algorithms. He then goes on to demonstrate how the methodology behind machine learning is surprisingly similar to the methodology behind academic disciplines like history, literary studies, and philosophy. Ultimately, suggests Underwood, the humanities and machine learning are more complementary than antagonistic. Check out an excerpt from the piece below.
Machine learning increasingly shapes human culture: the votes we cast, the shows we watch, the words we type on Facebook all become food for models of human behavior, which in turn shape what we see online. Since this cycle can amplify existing biases, any critique of contemporary culture needs to include a critique of machine learning. But to prepare students for this new world, we need to do more than wag our fingers and warn them that algorithms are problematic. Generalized suspicion about technology doesn’t necessarily help people understand the media—as presidential attacks on biased search engines and fake news have recently made clear. Telling students that new technologies are out to fool them can just make them hungry for an easy cure—a so-called “red pill.” (In fact, danah boyd has argued that attempts to teach media literacy often backfire in exactly this way.) To be appropriately wary, without succumbing to paranoia, students need to understand both the limits and the valid applications of technology. Humanists can contribute to both halves of this educational project, because we’re already familiar with one central application of machine learning—the task of modeling fuzzy, changeable patterns implicit in human behavior. That’s also a central goal of the humanities.
This may sound like a bizarre claim, if we believe the stereotype that represents math as alien to history and literature. But humanists have always been more flexible than the stereotype implies: economic historians, for instance, often use numbers. Cultural historians haven’t done so in the past, because the simple quantitative methods available in the 20th century genuinely couldn’t do much to illuminate culture. We can’t write a simple algorithm to recognize a literary genre, for instance, because most genres lack crisp definitions. Humility on this topic is hard-earned: dozens of 20th-century critics spent years of their lives trying to define “science fiction” before conceding that the phrase has meant different things at different times. Scholars have reluctantly abandoned the quest for an essential feature that unifies genres, in order to acknowledge that genres are loose family resemblances, organized by a host of overlapping features, and changing their meaning from one decade to the next.
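As a rough illustration of that contrast (not from Underwood's piece; the four-sentence corpus and the scikit-learn model below are invented for this post), here is a minimal sketch of why a crisp rule that demands one essential feature fails at genre recognition, while a statistical model that weights many overlapping features can still find the pattern:

```python
# Illustrative sketch only: the corpus and the "spaceship" rule are
# invented for this post, not drawn from Underwood's piece.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def crisp_rule(text: str) -> bool:
    """A 'simple algorithm': one essential feature defines the genre."""
    return "spaceship" in text.lower()

# Tiny invented corpus: 1 = science fiction, 0 = not.
texts = [
    "The colony ship drifted past the dying star.",
    "Androids dreamed in the neon rain of the arcology.",
    "She inherited the manor and its quiet grief.",
    "The detective lit a cigarette and studied the ledger.",
]
labels = [1, 1, 0, 0]

# The crisp rule misses both science-fiction sentences: genres
# rarely turn on a single defining trait.
print([crisp_rule(t) for t in texts])  # [False, False, False, False]

# The machine-learning alternative weights many overlapping word
# features instead of demanding one essential one.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

new_text = ["Past the dying star, the colony ship drifted on."]
print(model.predict(vectorizer.transform(new_text)))  # [1]
```

With a handful of sentences this is of course a toy, but the design point survives scaling up: the classifier never needs a definition of science fiction, only weighted, overlapping cues, which is exactly the "family resemblance" structure the critics describe.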