1. Setting the Stage: Latent Recurrent Reasoning in AI and the Human Inner Voice
Recent AI research (as described in the “latent reasoning” or “recurrent-depth” papers) proposes iterative, depth-recurrent computations in large language models—where the model loops through a hidden state, effectively “thinking” multiple times in continuous latent space before producing the next token. This allows the AI to scale its “compute budget” at inference: if a question is hard, the model can iterate more. Crucially, the internal “chain-of-thought” is not necessarily verbalized or visible, but hidden in high-dimensional states.
Human cognition, in parallel, uses inner speech as a core substrate for reflection and self-regulation: we talk ourselves through problems, plan, daydream, or “imagine the other side” in an internal dialogue. But the human mind also comprises non-verbal forms of reasoning—intuition, mental imagery, emotional hunches—some of which do not surface as a self-reportable chain-of-thought. Contemporary neuroscience shows that when we do silently “talk,” we really do engage language-production areas (Broca’s area, etc.), but we also have a more “condensed,” sublinguistic reasoning layer that may not appear as a crisp, fully spelled-out sentence in the mind.
Hence, the new AI approach rests on three ideas:
- Hidden, Recurrent “Deep Thinking” – The AI recycles its hidden representation multiple times before output, effectively modeling the idea of “thinking silently for a while” before speaking.
- Scaling with Complexity – More steps for harder tasks, fewer steps for simpler ones, akin to how humans might ruminate longer on trickier problems, or “blurt out” quick answers for easier ones.
- Sub-verbal vs. Verbal – The actual iterative states remain hidden to an observer—mirroring how human reasoning can be partially or entirely non-verbal, even if we have an inner voice as one gateway to reflection.
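The three ideas above can be sketched in toy form. The snippet below is a hypothetical illustration, not the architecture from the recurrent-depth papers: the update rule, the convergence test, and all names are my own stand-ins. The key property it demonstrates is the adaptive compute budget: the same hidden-state loop runs more iterations for inputs that take longer to settle, fewer for ones that converge immediately.

```python
def recurrent_latent_step(state, token_embedding):
    """One pass of a toy recurrent core: mix the hidden state with the
    input and clip back into range. (Illustrative update rule only.)"""
    return [max(-1.0, min(1.0, 0.5 * s + 0.5 * t))
            for s, t in zip(state, token_embedding)]

def latent_reason(token_embedding, max_iters=32, tol=1e-4):
    """Loop the hidden state through the SAME block until it stops
    changing (or a budget runs out), then 'emit'. Harder inputs take
    more silent iterations; trivial ones settle almost at once."""
    state = [0.0] * len(token_embedding)  # fresh latent state
    for step in range(1, max_iters + 1):
        new_state = recurrent_latent_step(state, token_embedding)
        delta = max(abs(a - b) for a, b in zip(new_state, state))
        state = new_state
        if delta < tol:  # latent "thought" has converged
            break
    return state, step
```

Running this on a zero embedding converges in a single step, while a non-trivial embedding takes many more iterations before the loop settles, mirroring the "ruminate longer on trickier problems" intuition. None of the iterations are visible to an outside observer; only the final state is.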
Let’s see how pushing these analogies further can generate new insights on human evolution, the nature of the “self,” and even new philosophical or scientific frameworks.
2. Evolutionary Echoes: “Latent Depth” as a Parallel to Gradual Internalization in Humans
In Childhood Development
- Vygotsky’s “inner speech”: The progression from overt private speech to covert inner speech in children resonates with an AI that can run extra recurrent steps internally without “speaking them out.” The child, too, eventually conceals that once-audible problem-solving chatter into a hidden internal loop.
- Scaling Complexity: Just as a 5-year-old might mutter through a puzzle with plenty of external talk, then by age 7–8 transition to silent, internal "loops" of reflection, so the AI's training might similarly shift from an "out-loud" chain-of-thought (supervised fine-tuning style) to purely latent loops once the system is stable and confident.
In Human Evolution
- Humans’ ability to “recur in depth,” perhaps in an internal monologue, may have followed from the invention and internalization of language: early hominins who internalized speech could “double-check” a plan or rehearse multiple steps before acting.
- Mercier & Sperber’s Argumentative Theory: If reasoning evolved socially (to argue and persuade), then private, hidden “iterative thought” could be an evolutionary spin-off: we rehearse arguments in an internal “dialogue,” refining them. Similarly, the new recurrent-depth AI can do many sub-iterations to figure out the best “answer” (like “winning an internal argument”) before producing a token.
A Novel Evolutionary Perspective?
- Social vs. Internal: The AI approach suggests that hidden iterative reasoning might precede or at least exist alongside the “speaking out” portion. Could it be that in early hominin groups, individuals first developed fast, silent hypothetical simulations (like hidden recurrences) before they had the impetus to externalize them socially? Usually, we assume language predates internal speech, but maybe a partial “latent chain-of-thought” in raw conceptual or sensorimotor terms existed, which eventually found expression in outward speech.
- Adaptive Efficiency: The latent loop in AI is a tool for re-using the same parameters repeatedly for deeper processing, instead of making the architecture huge. For human ancestors, it might likewise have been cognitively cheaper to “iterate mentally” within an existing capacity for speech than to build an ever-larger memory or purely instinctive repertoire. “Recurrent depth” in Homo sapiens could thus be an analog of internalized speech: flexible multi-step reasoning without inflating brain mass further.
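The efficiency point can be made concrete with a toy parameter count (my own illustration, with hypothetical function names): a stack of distinct layers pays for every extra step of depth, while a weight-tied recurrent block pays once and can then be iterated as many times as the task demands.

```python
def param_count_stacked(hidden_dim, depth):
    """Feed-forward stack: each of the `depth` layers has its own
    hidden_dim x hidden_dim weight matrix, so parameters grow
    linearly with depth."""
    return depth * hidden_dim * hidden_dim

def param_count_recurrent(hidden_dim, depth):
    """Weight-tied recurrent block: one hidden_dim x hidden_dim matrix
    reused `depth` times, so parameters are independent of how many
    iterations are actually run."""
    return hidden_dim * hidden_dim
```

Doubling the number of "thinking" steps doubles the stacked model's size but leaves the recurrent model unchanged, which is the architectural analog of iterating mentally rather than growing the brain.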
3. Dialogical or “Multivoice” Reasoning: AI’s Iteration vs. Internal Debates
Dialogical Phenomena in Humans