This was a fun journey sparked by these two papers, which prompted the question:
What can we learn about our own minds from how LLMs are learning to think?
If we set aside the many counterarguments (shared use of language, human-directed rather than evolutionary pressures, etc.), we can take a fun walk through whether these papers have any bearing on understanding ourselves.
And so it was that somewhere over the Bering Sea on a flight to JFK, Deep Research and I talked about it.
First, we have some research on:
The Development of Internal Monologue and Human Reasoning
Evolution of Vygotsky’s Inner Speech Theory in Light of Modern Research
Now we have Gemini Pro and o1 pro run through what we have and look for connections.
If we isolate some of the fun parts and rewrite them with comments from the HITL (the human in the loop — me), we get:
This report delves into the intriguing intersections between human inner speech/reasoning and recent advancements in artificial intelligence, specifically the techniques of latent space reasoning and recurrent depth in large language models (LLMs). Rather than focusing on superficial similarities, we'll explore deeper, more speculative connections, drawing on cognitive science, philosophy, evolutionary theory, and even unconventional perspectives. The goal is to use each domain (human cognition and AI) to illuminate and challenge our understanding of the other.
Before diving into the parallels, let's briefly define the key concepts: