We Can Simulate Intelligence, But Not Its First Spark
I recently watched a YouTube video (https://youtu.be/dyvqH7v6V0E , https://youtu.be/N3tRFayqVtk) in which someone created a simulation featuring predators, prey, and vegetation. In this digital ecosystem, both predator and prey species had neural networks serving as their "genes," allowing them to evolve over generations. What captivated me was that, given enough time, these digital organisms developed what resembled intelligent behaviors - they made strategic moves, surrounded their prey, and demonstrated something remarkably like primitive intelligence, despite starting with none.
This observation led me to a thought experiment: these creatures managed to develop intelligence-like behaviors because the neural networks provided the flexibility to accommodate such complexity. The neural networks weren't explicitly programmed to be intelligent; they simply had the freedom to adapt and evolve in ways that proved beneficial for survival.
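To make this concrete, here is a minimal sketch of the neuroevolution idea in Python. The video doesn't publish its code, so every detail here is an illustrative assumption: tiny feedforward networks as genomes, Gaussian weight mutation, and truncation selection on some survival-based fitness score.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_genome(n_in=4, n_hidden=8, n_out=2):
    """A creature's 'genes': the raw weights of a tiny feedforward network."""
    return {"w1": rng.normal(0, 1, (n_in, n_hidden)),
            "w2": rng.normal(0, 1, (n_hidden, n_out))}

def act(genome, senses):
    """Map sensory input (e.g., distances to prey) to an action vector."""
    hidden = np.tanh(senses @ genome["w1"])
    return np.tanh(hidden @ genome["w2"])

def mutate(genome, sigma=0.1):
    """Offspring inherit the parent's weights plus small Gaussian noise."""
    return {k: w + rng.normal(0, sigma, w.shape) for k, w in genome.items()}

def evolve(fitness, pop_size=50, generations=200, survivors=10):
    """Truncation selection: the fittest reproduce, the rest die off.
    No gradients, and no explicit programming of intelligence anywhere."""
    population = [init_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]
        population = parents + [mutate(parents[i % survivors])
                                for i in range(pop_size - survivors)]
    return population[0]

# fitness(genome) would score how long a creature survives in the ecosystem;
# nothing in this loop knows what "surrounding prey" means - selection
# pressure alone shapes the weights toward strategic-looking behavior.
```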
This got me thinking about freedom and flexibility in simulations at a much deeper level. What if we created a simulation where *everything* was completely flexible and able to accommodate any type of emergence? Not just creatures with neural networks, but a medium where the fundamental rules themselves could adapt and evolve.
I began envisioning what such an ultimately flexible medium might look like. Perhaps it would need self-modifying physics where fundamental constants and equations could rewrite themselves based on emergent patterns. Maybe it would require fluid dimensionality, where space and time aren't fixed at 3+1 dimensions but could locally expand, contract, or spawn new dimensions as needed. I even considered whether such a system would need the ability to incorporate paradoxes as generative engines rather than avoiding them.
The more I thought about it, the more I realized that what I was seeking had parallels in various domains - from quantum vacuum and spacetime foam in physics, to David Bohm's Implicate Order where all possibilities are enfolded within a deeper reality. I considered ancient concepts like Prima Materia (the formless alchemical substrate that can become anything), Apeiron (the boundless principle in Greek philosophy), and the Dao/Tao (the unnameable source that generates all things). My exploration even led me to mystical concepts like Akasha (the subtle medium that records all events), Sunyata (emptiness containing all potential forms), and Pleroma (the fullness from which all forms emerge). Biological analogues like morphogenetic fields and stem cells also seemed relevant as naturally occurring flexible mediums.
As I explored these concepts of ultimate freedom in simulation, I started wondering what kind of concrete benchmark we could use to test such a system. And that's when my mind turned to the Miller-Urey experiment from 1952. This famous experiment recreated Earth's presumed early atmospheric conditions to study the possible origins of life, resulting in the spontaneous formation of amino acids - the building blocks of proteins. It represents a perfect benchmark for simulation: well-defined initial conditions, a known outcome, and fundamental processes that should theoretically be simulatable. It also appealed to me because it operates within the constraints of ordinary physical reality, which is far better understood than the speculative mediums I mentioned earlier.
And here's where I realized a fascinating paradox: despite our computational advances, we still cannot accurately simulate even this relatively simple chemical experiment. We can simulate emerging intelligence through neural networks, but we can't properly simulate the basic chemistry that might have led to life.
Why is this the case?
The answer lies in fundamentally different approaches to simulation:
When we simulate intelligence using neural networks (like in that YouTube video), we're not actually simulating the physical substrate of intelligence. We're creating functional abstractions that produce intelligence-like behaviors without recreating the actual biological mechanisms. We don't need to simulate every neuron's biochemistry - we just need mathematical functions that behave similarly enough.
This abstraction is what makes intelligence simulation feasible. We know that artificial neural networks are dramatically simplified compared to biological neurons:
- A biological neuron is an incredibly complex cell with thousands of connections and intricate biochemistry
- An artificial neuron is just a mathematical function - typically a weighted sum followed by an activation function
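To make the contrast concrete, here is an entire artificial "neuron" in code - a minimal sketch with made-up numbers:

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """The whole 'neuron': a weighted sum squashed through an activation."""
    return np.tanh(np.dot(inputs, weights) + bias)

# One line of arithmetic stands in for a cell with thousands of synapses,
# ion channels, neurotransmitters, and its own gene expression.
output = artificial_neuron(inputs=np.array([0.5, -1.2, 3.0]),
                           weights=np.array([0.8, 0.1, -0.4]),
                           bias=0.2)
print(output)  # a single number between -1 and 1
```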
The critical insight is that intelligence simulation benefits from "acceptable approximation." An AI that's 95% accurate at recognizing images is still useful. The neural networks in the simulation could develop strategies that were effective enough for survival, even if they weren't optimal.
In contrast, simulating basic chemistry (like the Miller-Urey experiment) requires extreme precision across multiple scales:
- We need to account for quantum mechanical effects governing electron behavior
- We need to track interactions happening at femtosecond timescales (10^-15 seconds)
- We need to bridge quantum, molecular, and macroscopic levels simultaneously
- Small inaccuracies cascade into completely wrong outcomes
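A back-of-envelope calculation shows the scale of the problem. The per-step error rate below is an illustrative assumption, not a measured figure:

```python
import math

# One second of simulated chemistry at femtosecond resolution:
step = 1e-15                       # seconds per integration step
steps = 1.0 / step                 # 1e15 steps for a single simulated second
print(f"{steps:.0e} steps per simulated second")

# Why tiny errors are fatal: assume a hypothetical relative error of one
# part per million that compounds multiplicatively every step.
error_per_step = 1e-6
steps_to_double = math.log(2) / math.log(1 + error_per_step)
print(f"error doubles every ~{steps_to_double:.0f} steps")  # ~693,000
# ...which is a vanishing fraction of the 1e15 steps we'd need to take.
```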
The freedom and flexibility that allow intelligence to emerge in neural networks actually work against us in chemical simulations. Chemistry doesn't have the luxury of approximation - electron interactions either happen in specific ways or they don't.
This creates a strange situation where we can build AI systems that demonstrate complex behaviors before we can accurately simulate the basic chemistry that would naturally give rise to life (and eventually intelligence) in the first place.
I find it remarkable that the key to this paradox is freedom within constraints. Neural networks have the freedom to approximate solutions, while still being constrained enough to be computationally feasible. Chemistry requires such precise simulation that our current computational approaches simply can't handle the freedom inherent in quantum interactions.
So we find ourselves in this peculiar technological moment where we can create chess grandmasters in silicon but can't fully simulate how a simple amino acid forms in primordial soup. The abstracted intelligence we can simulate is impressive and useful, but it doesn't necessarily help us understand the fundamental physical processes that gave rise to intelligence in the first place.
Perhaps this paradox itself holds important clues about the nature of intelligence and reality - that intelligence might be more about patterns and relationships that can emerge in multiple ways, rather than being tied to specific physical implementations.
What started as a simple observation about a YouTube video has led me down a fascinating rabbit hole about computation, intelligence, and the foundations of existence itself.