1. Setting the Stage: Latent Recurrent Reasoning in AI and the Human Inner Voice

Recent AI research (as described in the “latent reasoning” or “recurrent-depth” papers) proposes iterative, depth-recurrent computations in large language models—where the model loops through a hidden state, effectively “thinking” multiple times in continuous latent space before producing the next token. This allows the AI to scale its “compute budget” at inference: if a question is hard, the model can iterate more. Crucially, the internal “chain-of-thought” is not necessarily verbalized or visible, but hidden in high-dimensional states.
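The core idea can be sketched in a few lines. This is a minimal, purely illustrative toy in plain Python, not the papers' actual architecture: `recurrent_block`, the convergence test, and all constants are invented for illustration. The point it demonstrates is the one above: the same block is applied repeatedly to a hidden state, and "harder" inputs naturally consume more iterations before the loop settles.

```python
# Toy sketch of depth-recurrent "latent reasoning" (hypothetical; not the
# published architecture). One shared block is iterated over a hidden state,
# so inference-time compute scales with input difficulty.

def recurrent_block(state, x):
    # One shared "reasoning step": mix the current hidden state with the input.
    return [0.5 * s + 0.5 * xi for s, xi in zip(state, x)]

def latent_reason(x, max_iters=64, tol=1e-6):
    """Iterate the shared block until the state stops changing, or the
    compute budget (max_iters) runs out. Returns (state, steps_used)."""
    state = [0.0] * len(x)
    for step in range(1, max_iters + 1):
        new_state = recurrent_block(state, x)
        delta = max(abs(a - b) for a, b in zip(new_state, state))
        state = new_state
        if delta < tol:  # converged: no need to "think" longer
            return state, step
    return state, max_iters
```

Note that nothing in the loop is ever verbalized: the intermediate `state` values are the hidden "chain-of-thought," and only the final state would be read out into a token.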

Human cognition, in parallel, uses inner speech as a core substrate for reflection and self-regulation: we talk ourselves through problems, plan, daydream, or “imagine the other side” in an internal dialogue. But the human mind also comprises non-verbal forms of reasoning—intuition, mental imagery, emotional hunches—some of which do not surface as a self-reportable chain-of-thought. Contemporary neuroscience shows that when we do silently “talk,” we really do engage language-production areas (Broca’s area, etc.), but we also have a more “condensed,” sublinguistic reasoning layer that may not appear as a crisp, fully spelled-out sentence in the mind.

Hence, the new AI approach of hidden, iterative computation mirrors, in broad strokes, this layered human architecture of verbal and sublinguistic thought.

Let’s see how pushing these analogies further can generate new insights on human evolution, the nature of the “self,” and even new philosophical or scientific frameworks.

2. Evolutionary Echoes: “Latent Depth” as a Parallel to Gradual Internalization in Humans

In Childhood Development

In Human Evolution

A Novel Evolutionary Perspective?

  1. Social vs. Internal: The AI approach suggests that hidden iterative reasoning might precede or at least exist alongside the “speaking out” portion. Could it be that in early hominin groups, individuals first developed fast, silent hypothetical simulations (like hidden recurrences) before they had the impetus to externalize them socially? Usually, we assume language predates internal speech, but maybe a partial “latent chain-of-thought” in raw conceptual or sensorimotor terms existed, which eventually found expression in outward speech.
  2. Adaptive Efficiency: The latent loop in AI is a way to reuse the same parameters repeatedly for deeper processing instead of enlarging the architecture. For human ancestors, it may likewise have been cognitively cheaper to “iterate mentally” over an existing speech capacity than to build an ever-larger memory or purely instinctive repertoire. Internalized speech in Homo sapiens could thus be an analog of “recurrent depth”: a way to support flexible multi-step reasoning without further inflating brain mass.
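The efficiency argument in point 2 is easy to make concrete with a back-of-the-envelope comparison (the block size here is an invented, illustrative number): stacking N distinct layers multiplies the parameter count by N, while iterating one shared block N times leaves it constant, trading parameters for compute.

```python
# Illustrative parameter-count comparison (numbers are assumptions):
# a stack of distinct layers vs. one shared block applied repeatedly.
PARAMS_PER_BLOCK = 1_000_000  # assumed size of a single transformer-style block

def stacked_params(depth):
    """Distinct weights at every layer: parameters grow linearly with depth."""
    return depth * PARAMS_PER_BLOCK

def recurrent_params(iterations):
    """One shared block reused at every step: parameters stay constant."""
    return PARAMS_PER_BLOCK
```

At an effective depth of 24, the stacked model needs 24x the weights of the recurrent one for the same number of sequential processing steps, which is the sense in which mental iteration is “cheaper” than structural growth.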

3. Dialogical or “Multivoice” Reasoning: AI’s Iteration vs. Internal Debates

Dialogical Phenomena in Humans