AI’s growing threat to challenging ideas
Our public conversation about artificial intelligence oscillates between fantasies of human-like intelligence and fears of sentient AI overlords taking over the world. This overlooks the real danger posed by large language models, argues Armin Schulz. Since they work by reproducing the average outputs of human language-users, they have an inbuilt tendency to produce conventional, conservative outputs. As we integrate these AIs into our lives, there's therefore a danger that we suppress our distinctively human ability to think creatively and outside the box. If the AI revolution continues to be driven by disembodied LLMs, it risks stifling rather than propelling progress.

This is part 1 of a 2-part series on AI and Human Creativity. Part 2, on the potential of AI for creativity, by Jessie Hall and Karina Vold, is out now.

It may seem that large language models (LLMs) are coming for us: not a day goes by without another headline about how ChatGPT, DeepSeek and their ilk have achieved another stunning feat (Writing jokes! Winning poetry contests! Passing law school entrance exams!). Is it true that LLMs will soon reach human-level intelligence? Will they make human cognition obsolete?

The answer to both questions is no. However, this doesn't mean that we don't need to be careful in how we use these systems—for LLMs' very structure may actually limit, rather than enhance, our thinking.

To see this, let's start by considering what it means to think like a human. Human cognition is a particular combination of concepts ("He is just hangry"; "That's institutional corruption"), cognitive abilities (while getting a PhD, we may delay gratification for years, and for highly uncertain and abstract gains), and dispositions for social learning and engineering (we study at a university to figure out how to build fMRI machines). While there are many interesting details about how this works, what is key in this context is that these concepts, abilities, and dispositions interact to make human engagement with the world possible and efficient.

Making our decisions dependent on whether something is a case of "institutional corruption" (say) is difficult: it might require historical analysis, legal research, and formal reasoning. Much the same is true for most of our decisions—whether they concern where to go for dinner (should we eat vegetarian?) or what to do for fun (is going to a concert contributing to climate change and inequality?). If we had to figure out, from scratch and by ourselves, how to apply our concepts to the world, we would never actually get to do anything—we would be dithering our days away.

Luckily, we figured out a solution to this: we build tools and rely on others' insights. We can learn from others which decisions are good to make, and when. We can rely on technologies like calculators, newspapers, algebra, and accounting conventions to make our thinking better and more efficient. Importantly, building these tools also requires learning from others: we take existing inventions and insights, and change them slightly. In this way, our concepts, abilities, and dispositions ratchet each other up to new heights.

All of this matters when it comes to LLMs. On the one hand, LLMs live in a world of words. This makes them very restricted in their ability to approach human cognition.
It is not just that ChatGPT can't make you a cup of tea. It is that ChatGPT cannot function at all unless it is provided with verbal prompts. It will never decide to do anything by itself—because, without verbal prompts, it lives in an empty universe. (Incidentally, this is something increasingly recognized by AI pioneers such as Stanford's Fei-Fei Li, https://www.ted.com/talks/fei_f