www.scientificamerican.com /article/new-clues-about-the-origins-of-biological-intelligence/

New Clues about the Origins of Biological Intelligence

Rafael Yuste, Michael Levin

In the middle of his landmark book On the Origin of Species, Darwin had a crisis of faith. In a bout of honesty, he wrote, “To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I confess, absurd in the highest degree.” While scientists are still working out the details of how the eye evolved, we are also still stuck on the question of how intelligence emerges in biology. How can a biological system ever generate coherent and goal-oriented behavior from the bottom up when there is no external designer?

In fact, intelligence—a purposeful response to available information, often anticipating the future—is not restricted to the minds of some privileged species. It is distributed throughout biology, at many different spatial and temporal scales. There are not just intelligent people, mammals, birds and cephalopods. Intelligent, purposeful problem-solving behavior can be found in parts of all living things: single cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks. Arguably, understanding the origin of intelligence is the central problem in biology—one that is still wide open. In this piece, we argue that progress in developmental biology and neuroscience is now providing a promising path to show how the architecture of modular systems underlies evolutionary and organismal intelligence.

Biologists are trained to focus on the mechanisms of living systems and not on their purpose. As biologists, we are supposed to work out the “how” rather than the “why,” pursuing causality rather than goals. Yet the “why” is not only always present; it is precisely what drives particular “how”s to be chosen, enabling organisms to survive by selecting and exploiting specific mechanisms out of an astronomically large space of possibilities. In the case of the human eye, for example, the optical properties of the lens only make sense if they help focus the light on the retina. If you don’t ask why the lens is transparent, you will never understand its function, no matter how long you study how it becomes transparent.

In fact, the problem of understanding how intelligence emerges is becoming more acute with the “omics” revolution, which is generating systematic, quantitative data on genomes, transcriptomes, proteomes and connectomes. Biological systems are being dissected into their ultimate complexity, but no magic answer is appearing at the end of the tunnel. The race to big data is not providing a better explanation of living systems. If anything, it’s making it harder.

Modern biology faces a fundamental knowledge gap when trying to explain meaningful, intelligent behavior. How can a system composed of cells and electrical signals generate a well-adapted body with behavior and mental states? If cells are not intelligent, how can intelligent behavior emerge from a distributed system composed of them? This fundamental mystery permeates biology. All biological phenomena are, in a sense, “group decisions” because organisms are made of individual parts—organs, tissues, cells, organelles, molecules. What properties of living systems enable components to work together toward higher-level goals?

A common solution is emerging in two different fields: developmental biology and neuroscience. The argument proceeds in three steps. The first rests on one of natural selection’s first and best design ideas: modularity. Modules are self-contained functional units like apartments in a building. Modules implement local goals that are, to some degree, self-maintaining and self-controlled. Modules have a basal problem-solving intelligence, and their relative independence from the rest of the system enables them to achieve their goals despite changing conditions. In our building example, a family living in an apartment could carry on its normal life and pursue its goals, sending the children to school for example, regardless of what is happening in the other apartments. In the body, likewise, organs such as the liver carry out a specific low-level function, such as controlling nutrient levels in the blood, largely independently of what is happening, say, in the brain.

The second step in the argument is that modules can be assembled in a hierarchy: lower-level modules combine to form increasingly sophisticated higher-level modules, which then become new building blocks for even higher-level modules, and so on. In our apartment building, families could belong to a local association, like a local chapter of a political party, whose goals could be to ensure the future welfare of all the families in the area. And this party could belong to a parliament, whose goal could be to shape the policy of the entire country, and so on. In biology, different organs could belong to the same body of an organism, whose goal would be to preserve itself and reproduce, and different organisms could belong to a community, like a beehive, whose goal would be to maintain a stable environment for its members. Similarly, the local metabolic and signaling goals of cells integrate toward a morphogenetic outcome of building and repairing complex organs. Thus, increasingly sophisticated intelligence emerges from hierarchies of modules.

This may seem to solve the problem, except that hierarchical modularity still does not explain how evolution, changing only one element at a time at a lower level, can ever manipulate the upper levels. Given that the upper levels are built from lower-level modules, wouldn’t you still need to modify a slew of things at the same time to change an upper-level module? A third step in our argument addresses this problem: each module has a few key elements that serve as control knobs or trigger points that activate the module. This is known as pattern completion, where the activation of a part of the system turns on the entire system. In our apartment building, the family would have one central figure, let’s say one of the parents, who would represent the family in meetings and engage it when needed. These trigger points serve to represent the entire module and thus enable these modules to be activated, altered, inactivated or deployed in novel circumstances without having to manipulate or recreate all their parts. Moreover, pattern completion naturally emerges in systems of densely interconnected elements.
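The way pattern completion falls out of interconnected elements can be illustrated with a toy model from physics and computer science, a Hopfield-style network. This sketch is an analogy, not the biological mechanism itself: units that were co-active in a stored pattern excite one another, so activating just part of the pattern pulls the whole network back into the full stored state.

```python
# Toy pattern completion in a Hopfield-style network (pure Python).
# Units take values +1/-1; a stored activity pattern acts as the "module."

def train(patterns, n):
    # Hebbian rule: units that fire together in a stored pattern
    # get mutually excitatory connections.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    # Repeatedly update every unit from the weighted sum of its inputs;
    # the dynamics settle into the nearest stored pattern.
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, 1, 1, -1, -1, -1]   # the full "module" activity pattern
w = train([stored], len(stored))
cue = [1, 1, -1, -1, -1, -1]     # a corrupted, partial trigger
print(recall(w, cue))            # -> [1, 1, 1, -1, -1, -1]: the full pattern re-emerges
```

Here a single flipped unit in the cue is enough to degrade the pattern, yet the recurrent connections restore it, much as a few trigger elements can reactivate an entire module.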

In recent years, researchers have found evidence for pattern completion in both neural circuits and developmental biology. For example, when Luis Carrillo-Reid and his colleagues at Columbia University studied how mice respond to visual stimuli, they found that activating as few as two neurons in the middle of a mouse brain—which contains more than 100 million neurons—could artificially trigger visual perceptions that led to particular behaviors. These fascinating pattern-completion neurons activated small modules of cells that encoded visual perceptions, which were interpreted by the mouse as real objects. Similarly, in work published in 2018, Michael Levin of Tufts University and Christopher Martyniuk of the University of Florida reviewed data showing how triggering a simple bioelectric pattern in nonneural tissues induced cells to build an eye or other complex organs in novel locations, such as on the gut of a tadpole.

The idea of hierarchical modularity to explain biological intelligence has been explored before by economist Herbert Simon, neuroscientist Valentino Braitenberg, computer scientist Marvin Minsky, evolutionary biologists Leo Buss, Richard Dawkins and David Haig, and philosopher Daniel C. Dennett, among many others. These recent experiments from developmental biology and neuroscience can now provide a common mechanism of how this could work via key nodes that generate pattern completion. While there is still much to learn about how pattern completion units work, they could provide a solution to the problem of how to repurpose a system of modules without having to change it all. The manipulation of local goal-pursuing modules, to make them cooperate at multiple scales of organization in the body, is a powerful engine. It enables evolution to exploit the collective intelligence of cell networks, using and recombining tricks discovered at the lower level while operating with robustness despite noise and uncertainty.

Like a ratchet, evolution can thus effectively climb the intelligence ladder, stretching all the way from simple molecules to cognition. Hierarchical modularity and pattern completion can help us understand the decision-making of cells and neurons during morphogenesis and brain processes, generating well-adapted animals and behaviors. Studying how collective intelligence emerges in biology not only can help us better understand the process and products of evolution and design but could also be pertinent for the design of artificial intelligence systems and, more generally, for engineering and even the social sciences.


Suggested Reading:

Imprinting and Recalling Cortical Ensembles. Luis Carrillo-Reid et al. in Science, Vol. 353, pages 691–694; August 12, 2016.

The Bioelectric Code: An Ancient Computational Medium for Dynamic Control of Growth and Form. Michael Levin and Christopher J. Martyniuk in BioSystems, Vol. 164, pages 76–93; February 2018.

Controlling Visually Guided Behavior by Holographic Recalling of Cortical Ensembles. Luis Carrillo-Reid et al. in Cell, Vol. 178, No. 2, pages 447–457; July 11, 2019.

The Computational Boundary of a “Self”: Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition. Michael Levin in Frontiers in Psychology, Vol. 10, Article No. 2688; December 2019.