March 22, 2024 | V. Juggy Jagannathan
I recently read a fascinating editorial in Nature titled "Why scientists trust AI too much – and what to do about it." The editorial is based on a research Perspective penned by Lisa Messeri, an anthropologist at Yale, and Molly Crockett, a cognitive scientist at Princeton. What risks have they identified with using AI? Let's dive in.
The editorial highlights a Perspective article by Messeri and Crockett that examines more than 100 peer-reviewed papers on researchers' use of AI over the past five years. Their warning is clear: "The proliferation of AI tools in science risks introducing a phase of scientific inquiry in which we produce more but understand less." They identify four dominant themes for how scientists use AI tools: as an oracle that surveys and summarizes the ever-growing literature, a surrogate that stands in for human study participants, a quant that analyzes datasets too large and complex for humans to handle, and an arbiter that evaluates and adjudicates research.
The proliferation of these uses can create a vicious cycle: as the literature grows exponentially, so does the need for oracles, surrogates, quants and arbiters, potentially resulting in producing more while understanding less. But that's not all. Messeri and Crockett describe how this can lead to epistemic risks – a broad class of risks arising from holding incorrect beliefs. Such beliefs can create the illusion that one understands more than one actually does, is exploring a broader range of hypotheses than one actually is, and is more objective than one actually is.
Epistemic risks can lead to scientific monocultures (a term I hadn't heard before reading this article). Messeri and Crockett describe scientific monocultures through a metaphor from agriculture: in monoculture farming, only one crop species is grown at a time. This makes the process efficient and raises yields, but over time the crop becomes more susceptible to pests and disease. A similar affliction can befall science: when research questions and methods narrow to those best suited to AI tools, errors and biases can spread unchecked.
Messeri and Crockett offer scientists a call to action: be careful with how you use AI, and be aware of the risks of adopting AI technology. To paraphrase guidance I recently heard, AI use isn't an equal collaboration between a human and the technology. We need to transition from the mindset of "human-in-the-loop" to one in which the human retains control of the technology – "human-in-the-top." I call that good advice.
Coincidentally, I saw a blog post by Gary Marcus, an outspoken critic of large language models (LLMs). Marcus presents a series of examples showing that a lot of author-submitted research contains obvious LLM-generated content. It appears that the use of LLMs as surrogates to help write papers is in full swing – another clear example of humans not remaining in control.
“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.