
[DRAFT] Do Androids Dream of Electric Sheep?

I’ve always thought that the reason we haven’t created a truly “intelligent” machine yet is that we don’t actually understand our own intelligence. It’s impossible until we know how the brain works at a level where we can reverse-engineer it. And despite what the hype-y headlines say about AGI, we’re nowhere CLOSE to that point.

We say Large Language Models are “intelligent” because they are capable of generating language about relatively complex topics. But language isn’t thought. Language is an extremely fuzzy, unreliable, and imprecise abstraction of thought: so much complex information that is experienced absent of language, like memory, experiences, nuance, and emotion, is consequently lost in the process of putting it into words. This is probably obvious, but hang with me.

Language was a CRITICAL invention by humans. Language enabled thought to be externalized and synchronized across humanity. Communicating concepts like time, space, causality, and social trust allowed us to create systems of religion, government, and economic collaboration. But language is not thought, and thought is not language. Language is only a medium.

Our collective understanding of how we use language to process information is limited, but we DO know that it’s definitely not the way LLMs process data and generate language: auto-regressively, one token at a time.
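To make “auto-regressive” concrete, here is a minimal toy sketch of the loop: each new token is predicted from the tokens emitted so far, appended, and fed back in. The hand-written bigram table is a hypothetical stand-in for a real model; an actual LLM replaces the lookup with a neural network over the whole context.

```python
# Toy auto-regressive generation loop. The "model" here is a
# hypothetical hand-written bigram table, NOT a real LLM; it stands
# in for any next-token predictor conditioned on prior output.

BIGRAMS = {
    "<s>": "the",
    "the": "cat",
    "cat": "sat",
    "sat": "<e>",  # end-of-sequence marker
}

def generate(max_tokens=10):
    tokens = ["<s>"]
    for _ in range(max_tokens):
        # Predict the next token from the context generated so far
        nxt = BIGRAMS.get(tokens[-1], "<e>")
        if nxt == "<e>":
            break
        tokens.append(nxt)  # append and feed back in: the AR loop
    return " ".join(tokens[1:])

print(generate())  # deterministic toy output: "the cat sat"
```

The point of the sketch is the shape of the computation: there is no plan, model of the world, or pre-linguistic thought being translated into words; there is only a sequence extended one token at a time.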

So what was the MOST essential prerequisite to intelligence? Vision.

Before language, all organisms evolved the capacity to parse the visual world: to identify objects, infer motion, anticipate threat, and recognize affordances. The ability to see allowed us to compress an unfathomable amount of environmental data into structures that are actionable and meaningful to us. This took billions of years, and without it, cognition wouldn’t be possible.

This is why many of AI’s original pioneers reject the idea that LLMs will become more intelligent over time simply through more data centers or more data fueling them, and why they’re gravitating toward research on “world models.”

“Thought does not require Language. Language is an expression of thought. Intelligence requires thought more than it requires language.” — Yann LeCun