We took great interest in the announcement of the star-studded task force on artificial intelligence at Yale. We share the feeling of excitement and opportunity at pooling interdisciplinary talent to tackle questions that have massive implications for the future of the world as we know it.
It is for this reason that we have recently been working closely with an interdisciplinary group of like-minded faculty, including many in the humanities, to propose an undergraduate certificate we are calling Critical Computing Studies. We humbly submit that a critical approach also has a lot to offer in University-wide deliberations at Yale about AI.
Why critical? This term can trip people up if it is taken to mean mere negativity or fault-finding, or as a catch-all for “important,” as it often is in ordinary talk. We mean to draw on three productive ways of thinking about the term.
First, criticism is a method, as in literary criticism, that involves close attention to the ways that words and ideas operate, often in mysterious ways we fail to recognize. Humanists have been thinking about core concepts such as art, artificiality, intelligence and mind for a long time. We can’t just pick up these words free of their historical legacies, which can predetermine our assumptions. Indeed, thinking about artificial intelligence, writ large, has a history in the Western world at least 2,400 years old. We ignore this tradition at our peril. Humanists bring expertise in the ways that histories unconsciously pre-form our thoughts. The art of interpretation can be a remedy for being captive to inherited pictures. The discipline of analyzing language’s stubborn, sometimes buggy meaningfulness is common to poets and coders.
Second, critique is a philosophical project of asking about “conditions of possibility.” We owe this way of thinking to the great Enlightenment philosopher Immanuel Kant. To write the critique of pure reason, as he famously did, was not to deny rationality: it was to show what it could do and where it ran up against its limits. After Kant, a long line of very diverse thinkers marched under the banner of critique to ask about the limits and possibilities of society, art, politics, science and more. To be critical is to ask what deep grounds make our questions possible. It is to constantly ask where our blind spots are. It is to resist an embrace of things as they are and to ask: why not? In this way, critique is not in the least opposed to scientific research; both share a deep commitment to open-ended inquiry that is not afraid to put our very starting points into question.
Third, critique is a tradition of thought that asks how our thinking is complicit with power. Whose interests are served by the questions we ask — and don’t ask? Scholars in the booming field of critical computing studies are concerned with how industry hype and funding can guide research agendas. One of us has shown how the very algorithms of computer graphics encode unconsidered biases about skin color, leading to obviously racist disparities in how people are depicted in our vibrant visual culture. Historical critique can reveal how unexamined assumptions from Jim Crow-era photography got carried over into the digital age. Critique can thus play a role in bending the arc of justice.
Academic-industry collaboration can have productive synergies if we avoid mission creep. But it is the unique societal mission of the university to be able to ask big questions, tap the brakes and puncture hype. Just as the military-industrial-academic complex brought us the internet, AI is emerging from deep entanglements of technology and power; we need to ask probing questions about its implications for the distribution of power in our societies and world, including the university’s role. AI is perhaps not only a bonanza to “capitalize” on; perhaps it is a trap, a distraction or the tip of the iceberg.
We admit that we critical humanists sometimes earn our reputation as troublemakers. Before we go straight to the solution, we want to know what the problem is, how it is defined, and by whom. It can feel insulting to problem-solvers and system-builders to hear that their language might not be totally under their control, or that their taken-for-granted ideas carry implications, sometimes dangerous ones, that stretch far beyond their labs and design screens. And yet, splashing cold water on our most fundamental biases can be bracing and can uncover new scientific questions.
The first rule of inquiry in any field is to check your assumptions. Interpreting basic concepts, understanding deep historical legacies, asking about possibilities, and probing power: these are the comparative advantages that critical humanists can bring to the conversation about artificial intelligence at Yale.
Dr. John Durham Peters is the Maria Rosa Menocal Professor of English and of Film and Media Studies, and writes on media history and theory. You may contact him at john.peters@yale.edu.
Dr. Theodore Kim is an associate professor of Computer Science who co-leads the Yale Computer Graphics Group and conducts research in physics-based simulation. You may contact him at theodore.kim@yale.edu.