How we reason about innateness
Iris Berent (Northeastern University)
Abstract: Few questions in science are as controversial as the origins of knowledge. Whether ideas (propositional attitudes, e.g., “objects are cohesive”) are innate or acquired has been debated for centuries. Here, I ask whether our difficulties with innate ideas could be grounded in human cognition itself. I first demonstrate that people are systematically biased against the possibility that ideas are innate. They consider ideas (i.e., epistemic traits, as opposed to horizontal faculties, such as attention) less likely to be innate than non-epistemic (sensorimotor or emotive) traits, whether those traits belong to humans, birds, or aliens, and they maintain this belief despite explicit evidence that the traits in question are in fact innate. I then trace this bias to the collision between two principles of core cognition: Dualism and Essentialism. Dualism (Bloom, 2004) renders ideas disembodied; per Essentialism, the innate essence of living things must be embodied (Newman & Keil, 2008; Berent, 2021). It thus follows that ideas cannot be innate. A second series of experiments tests these predictions. The results show for the first time that people are selectively biased in reasoning about the origins of ideas. While these findings from adults cannot ascertain the origins of the biases themselves, they open up the possibility that our resistance to innate ideas could be in our nature. I conclude by briefly considering how the dissonance between Dualism and Essentialism can further account for a wide range of other phenomena: why we are seduced by neuroscience, why we fear the takeover of humanity by AI, and what we think happens when we die.
The Contingent Animal: Why behavioral development still doesn’t need innateness
Gregory Kohn (University of North Florida)
Abstract: Despite almost universal condemnation, the specter of nature versus nurture has remained resilient. The dichotomy between nature and nurture depends on partitioning organisms into innate and acquired traits. Recent defenses of the concept of innateness have emerged in the field of artificial intelligence, where it is claimed that the limitations of unsupervised artificial neural networks (ANNs) reflect their lack of task-specific programs. These limitations are often compared with the behavior of animals, which is assumed to largely reflect innate task-specific programs that (1) develop independently of individual experience and (2) are prefunctional, preparing the organism for an environment it has not yet experienced. I argue that these recent defenses ignore the conceptual flaws of previous ethological innateness concepts. I highlight how constructive experiences are necessary for the development of species-typical behaviors, challenging the assumption of experiential independence. I show that prefunctionality contrasts with the antecedent-consequent process of development. Development proceeds prospectively through a chain of situated ontogenetic niches with no clear analog in either artificial neural networks or task-specific programs. At no point in this process can we demarcate an innate or acquired trait, as the functional properties of current organisms in current ontogenetic niches provide the foundation for the emergence of later organismal states. Behavioral expression is thus non-local: it is a product of coordination and relationships across the whole organism and its environment, with no privileged locus of causation or control.
Reconstructing constructivism: How probabilistic models can reconcile nativism and empiricism in cognitive science and AI
Alison Gopnik (UC Berkeley)
Abstract: The last few years have seen dramatic progress in artificial intelligence, particularly in machine learning, most notably in new work in the connectionist tradition, such as deep learning, but also in work on inferring structured generative models from data. Nevertheless, this new work is still limited to relatively narrow and well-defined spaces of hypotheses. In contrast, human beings, and human children in particular, characteristically generate new, uninstructed, and unexpected, yet relevant and plausible, hypotheses. My hypothesis is that the evolution of our distinctively long, protected human childhood allows an early period of broad hypothesis search, exploration, and creativity, before the demands of goal-directed action set in. Our innate endowment appears to lead to flatter priors rather than the more peaked priors one would expect in classic nativism. This evolutionary solution to the search problem may have implications for AI solutions.
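The contrast between flatter and more peaked priors can be made concrete with a toy beta-binomial sketch. This is a hypothetical illustration, not material from the talk: a learner with a flat prior lets surprising data dominate its beliefs, while a learner with a peaked prior resists revision.

```python
# Toy beta-binomial model of a coin's bias (illustrative only).
# A "flat" prior Beta(1, 1) encodes weak commitments; a "peaked"
# prior Beta(50, 50) encodes a strong expectation of fairness.
def posterior_mean(alpha, beta, heads, tails):
    """Posterior mean of the bias after observing heads/tails counts."""
    return (alpha + heads) / (alpha + beta + heads + tails)

# Both learners observe the same surprising data: 9 heads, 1 tail.
flat = posterior_mean(1, 1, 9, 1)      # flat prior: belief tracks the data
peaked = posterior_mean(50, 50, 9, 1)  # peaked prior: belief barely moves

print(round(flat, 3), round(peaked, 3))  # → 0.833 0.536
```

The flat-prior learner ends up close to the empirical frequency, while the peaked-prior learner stays near its initial expectation of 0.5, mirroring the exploration-versus-commitment trade-off the abstract describes.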
The long evolutionary history of the brain’s functional architecture
Paul Cisek (University of Montreal)
Abstract: Despite the dramatic differences between the lifestyle of modern humans and non-human primates in the wild, the major features of their functional neural architectures appear to be remarkably similar. This suggests that our learned abilities are strongly constrained by an inherited architecture that has evolved over hundreds of millions of years through a process of gradual refinement. In my talk, I will review some of the major stages of that process and suggest the functional adaptations they made possible. I will begin with the formation of the chordate neural tube and its differentiation in early vertebrates into a spinal cord, a sensorimotor midbrain, and a modulatory forebrain capable of reinforcement learning. I will then discuss the transition to land and the specialization of forebrain segments to support local exploitation and long-range exploration in early tetrapods. Next, I will discuss the retreat into a nocturnal niche in mammals, accompanied by the expansion of the dorsal pallium into a neocortex subdivided into a dorsomedial sector supporting sensorimotor action maps for search, handling, ingestion, and defense, and a ventrolateral sector supporting key stimulus learning and interoceptive integration. Finally, I will discuss the primate return to diurnal life in the arboreal niche, and the expansion of fronto-parietal action maps for visually guided interaction and temporo-orbital mechanisms for object classification and valuation. I will suggest how the resulting hypothesis on the brain’s inherited functional architecture can potentially explain many features of neural activity that pose a challenge to classical psychological theories, and how it may constrain theories of learning.
The role of self-organization mechanisms in the emergence of behavioral regularity and diversity
Clément Moulin-Frier (INRIA – Flowers Team)
Abstract: The concept of innateness has, in some disciplines, been opposed to the concept of acquired behavior. Any behavior, whether acquired or not, is nevertheless necessarily shaped by pre-existing internal and external mechanisms: it self-organizes out of dynamic interactions over multiple spatiotemporal scales. In this talk, I will highlight the key role of self-organization mechanisms in the generation of behavioral regularity and diversity. To this end, I will present computational models from developmental robotics and multi-agent systems, illustrating how behavior emerges from the coupling of environmental, morphological, sensorimotor, cognitive, developmental, social, cultural, and evolutionary processes.
Born to Learn: Combining Innate Mechanisms and Learning in Evolving Agents
Sebastian Risi (IT University of Copenhagen)
Abstract: While originally inspired by the way animals learn, most reinforcement learning algorithms assume nothing is known a priori about the particular domain and therefore these algorithms often require millions of learning trials. On the other hand, evolved innate knowledge can help animals to quickly and robustly learn, abilities our current AI systems still struggle with. In this talk, I will review some of our recent approaches that take a step in this direction, from deep neuroevolution methods that directly describe the neural wiring diagrams of agents, to approaches that combine innate mechanisms and learning to allow agents to quickly self-organize their weights and continually adapt. Finally, to improve robustness further, I present our recent evolve&merge approach, which is able to encode complex policies through a genomic bottleneck.
Abstraction and Analogy in AI: The Role of Core Knowledge Systems
Melanie Mitchell (Santa Fe Institute)
Abstract: The abilities to form and reason about abstract concepts and to make analogies are among the requirements for AI systems with humanlike intelligence. In this talk I will survey different task domains, both idealized and real-world, that have been proposed to evaluate such abilities in machines. I will argue that several (possibly innate) core knowledge systems are essential to all these domains, and that imbuing AI systems with such core knowledge is an essential step in enabling humanlike thought. Furthermore, I will propose a hybrid symbolic/subsymbolic architecture that enables flexible analogy-making based on core knowledge.
Innateness in Machine Learning
Thomas Dietterich (Oregon State University)
Abstract: It is an easy theorem in learning theory that a machine learning system must have some form of innate knowledge or constraint in order to learn efficiently. But what should it be, and what form should it take? I’ll discuss what answers research in machine learning can suggest and reflect on the broader methodology of the field. Much like biological evolution, machine learning research is currently generating and evaluating mechanisms and refining those that work. While this gives us (moderately) successful mechanisms, it doesn’t provide answers to the fundamental questions of innateness.
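The "easy theorem" alluded to here is often glossed as the need for inductive bias (cf. no-free-lunch results). The following toy sketch is my own illustrative setup, not material from the talk: a learner with no constraint on its hypothesis space treats every completion of the unseen data as equally consistent, while a learner constrained to a small "innate" hypothesis class (here, conjunctions of literals) pins down the target concept from the same data.

```python
# Toy illustration: innate constraint enables generalization (hypothetical example).
from itertools import product

bits = list(product([0, 1], repeat=3))   # all 8 possible 3-bit inputs
target = lambda x: x[0] & x[1]           # "secret" concept: x0 AND x1
train = [(1, 1, 0), (1, 1, 1), (0, 1, 0), (1, 0, 1)]  # learner sees 4 inputs
labels = [target(x) for x in train]

# Unconstrained learner: any labeling of the 4 unseen inputs is consistent
# with the training data, so the data says nothing about them.
unseen = [x for x in bits if x not in train]
free_completions = 2 ** len(unseen)

# Constrained learner: hypotheses are conjunctions of literals. Each variable
# is required true (1), required false (0), or ignored (None): 3**3 = 27 total.
def predict(h, x):
    return int(all(x[i] == v for i, v in enumerate(h) if v is not None))

conjunctions = list(product([None, 0, 1], repeat=3))
consistent = [h for h in conjunctions
              if all(predict(h, x) == y for x, y in zip(train, labels))]

print(free_completions, len(consistent))  # → 16 1
```

The unconstrained learner faces 16 equally consistent completions, while the constrained learner is left with exactly one hypothesis, `(1, 1, None)`, i.e., the target concept x0 AND x1. Which constraint to build in, of course, is precisely the open question the abstract raises.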
Innate Necessarily So: On the indispensable role of the innateness concept in cognitive science, and perhaps also AI
Richard Samuels (Ohio State University)
Abstract:
The notion of innateness is a notoriously vexed one that has been characterized in a wide array of non-equivalent ways, and roundly rejected by some as confused or even incoherent. For all that, it retains an apparently significant role in cognitive science and allied disciplines.
In this talk I first briefly discuss some influential critiques of the innateness concept. I then describe what I take to be the central role that, as a matter of fact, the notion of innateness plays within the cognitive sciences – roughly, to pick out those psychological primitives on which learning depends. Next, I argue that, given widespread assumptions, the deployment of a concept that fills this role is very likely required by an adequate science of cognition. Finally, I conclude by speculating that, for analogous reasons, such a concept may also play a central role in the development of AI.