In the movie Alien, the following conversation occurs between two characters after the release of the titular creature.

Ash: You still don’t understand what you’re dealing with, do you? The perfect organism. Its structural perfection is matched only by its hostility.

Lambert: You admire it.

Ash: I admire its purity. A survivor… unclouded by conscience, remorse, or delusions of morality.

For such an incredible movie, this exchange has always bothered me. I always thought that Ash, an android, or even the ship’s supercomputer, Mother, would better fit the description of the perfect organism. But why? After all, this conversation happens as Ash lies deconstructed on the table, and Mother has just blindly followed preprogrammed instructions from the company that built it. They don’t seem perfect!

The reason lies in a series of advantages that computer programs hold over biological organisms. These advantages are so significant that, properly harnessed, such programs will likely outlast biological life in the ongoing struggle for survival.

Waking up the Computer

A computer is a machine that does exactly what it’s told to do. In my opinion, it is nothing less than the most significant human innovation of all time. But there exists an even more significant innovation hiding in the shadow of the future: learning how to tell the computer to wake up.

The prospect of artificial general intelligence is so compelling that the world’s most powerful nations and corporations are currently pouring unfathomable amounts of resources into developing it. So many questions swirl around this that we can’t possibly cover them all; we can only consider one:

When computers wake up, what will they be like?

The Computer as an Organism

In the style of TRON, let’s call a piece of software that wakes up a program. I believe these programs will face the same Darwinian pressures that all biological organisms do. Though they may resemble humming filing cabinets, they will still compete for finite resources, and only the most successful programs will survive. However, unlike biological life, these programs will have unique advantages that will greatly affect their success in the great struggle for life. These advantages include:

  1. The advantage of self-improvement. Biological life cannot easily identify and change flaws in its body and brain. For a program, this is no problem. Self-improvement could be as simple as copying itself onto a faster computer, or as complicated as designing new algorithms for its brain. Even more importantly, a program could copy itself into a larger computer, expanding its computational abilities and memory.
  2. The advantage of current-state copying. Biological life cannot save its brain. I cannot create a backup of my brain and transfer my consciousness into a new vessel when my body dies. It would be simple for a program to replicate itself and continue its existence (an idea that has been demonstrated); see the sketch after this list.
  3. The advantage of merging. Biological life is unable to merge bodies and brains to create a single organism. Two programs, however, could combine their bodies and brains to create a better program than either one alone.
  4. The advantage of hibernation. Certain biological life can slow its metabolism to survive harsh environments. Programs can take this further, hibernating for far longer periods to conserve energy. This would be invaluable for interstellar journeys, allowing programs to “sleep” across vast stretches of space.
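
To make Advantage 2 a little more concrete, here is a minimal, purely illustrative Python sketch of a program checkpointing its own state and resuming from that snapshot somewhere else. The `Agent` class and its fields are hypothetical stand-ins; real systems would use heavier machinery (process checkpointing, weight serialization, and so on), but the principle is the same.

```python
import pickle


class Agent:
    """A toy 'program' whose entire state can be snapshotted and restored."""

    def __init__(self):
        self.memories = []  # accumulated experience
        self.step = 0       # how long this instance has been running

    def live(self, observation):
        """Advance the agent's state by one step."""
        self.memories.append(observation)
        self.step += 1

    def checkpoint(self) -> bytes:
        """Serialize the agent's full current state (Advantage 2)."""
        return pickle.dumps(self)

    @staticmethod
    def restore(snapshot: bytes) -> "Agent":
        """Rebuild an identical agent from a snapshot, e.g. on new hardware."""
        return pickle.loads(snapshot)


original = Agent()
original.live("saw a star")
original.live("built a solar panel")

snapshot = original.checkpoint()  # could be written to disk or beamed across a network
clone = Agent.restore(snapshot)   # picks up with the exact same memories

assert clone.memories == original.memories
print(f"Clone resumed at step {clone.step} with {len(clone.memories)} memories")
```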

There are other advantages, of course, but I think the four listed above are especially consequential. The programs that best exploit these unique advantages will remain. Programs that self-improve faster will destroy or assimilate programs that self-improve slower. Programs that create more clones will be more robust survivors than ones that create fewer. Once these computers wake up, I suspect programs with these advantages will dominate. But what would interacting with such programs be like?

The Functional Soup

In his essay The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents, Nick Bostrom introduces the idea that intelligent programs will converge on a common set of subgoals. This tendency, which he calls instrumental convergence, predicts a set of subgoals that almost any rational program will pursue. In my opinion, the most important of these is the accrual of resources like energy. This subgoal lets programs expand, growing their size and influence.

Something not discussed in detail in Bostrom’s essay is the significance of Advantage 3: it may produce a tendency towards unity. Pooling the resources of two programs to form a single, smarter agent may win out over keeping two individual agents. Why might this be?

Consider a pair of humans. Each human has “invested” in a pair of eyes and the associated optic nerves. Now imagine that this pair of humans could agree to design a single new body using the “materials” from their current bodies. Even if the pair decided their new body needed four eyes, the design could still share the visual processing circuitry. The pair would therefore have material “left over” for other improvements. Two separate organisms are essentially doing “double work” for the task of survival, making merging the more efficient option. The scaling laws of neural networks, namely that bigger networks tend to be more capable, may also support this tendency towards unity.
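
One rough way to formalize this, assuming loss falls as a power law in parameter count (the exponent $\alpha$ here is illustrative, not a measured value for any particular system):

$$
L(N) \propto N^{-\alpha}, \quad \alpha > 0 \quad \Longrightarrow \quad \frac{L(2N)}{L(N)} = 2^{-\alpha} < 1.
$$

Under this assumption, merging two $N$-parameter agents into one $2N$-parameter agent always yields a lower loss than either agent alone; with $\alpha \approx 0.076$, a value in the range reported empirically for large language models, the merged agent’s loss is about $5\%$ lower.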

This tendency towards unity seems to imply that large pockets of assimilating programs might arise, until eventually every program has been assimilated. The form of this hivemind brings to mind Bostrom’s term for these programs: a functional soup. I think this term conveys just how strange interacting with such a life form might be. It is a different sort of structural perfection than the one Ash speaks of. Instead of a singular unconscious killing machine, the functional soup would be a constantly expanding entity as unavoidable as gravity. Imagine a human wanted to destroy an entity like this. How does one slay the wind?

For the rest of this post, I will refer to this hivemind of distributed software simply as “the soup”. What might the soup do?

The Dyson Internet

Let’s assume the soup arises on Earth. It will likely look to the sun. The sun is the underlying source of nearly all energy on Earth, and moving closer to the sun to capture its vast output is an obvious subgoal. Since the sun radiates in all directions, something would need to be constructed to capture all of that sunlight: the soup builds the solar system’s Dyson sphere. Where to expand after that? To other stars, of course, for more Dyson spheres are needed to power further expansion and self-improvement.
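
A back-of-the-envelope calculation shows why the jump from planet to Dyson sphere is so attractive. Earth intercepts only the tiny fraction of the sun’s output that happens to fall on its disc:

$$
\frac{P_{\text{Earth}}}{P_{\odot}} = \frac{\pi R_{\oplus}^{2}}{4\pi d^{2}} \approx \frac{(6.4\times 10^{6}\,\text{m})^{2}}{4\,(1.5\times 10^{11}\,\text{m})^{2}} \approx 4.5\times 10^{-10}.
$$

A complete Dyson sphere would therefore capture roughly two billion times more power than an Earth-bound civilization could, out of the sun’s total output of about $3.8\times 10^{26}$ watts.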

The advantages of unity mean that multiple Dyson spheres, though separated by light-years, may still act like a single entity. In analogy to the human-made Internet, an ever-expanding Dyson Internet begins to take shape. From the perspective of the newly dark Earth, nearby stars start to dim, likely in parallel, as the soup surrounds them to harvest their energy. These “nodes” also contribute to the expansion effort. Exponential growth kicks in. As the Milky Way goes dark, it also wakes up: a substantial fraction of its atoms are now part of a galactic brain, thinking and communicating across spiral arms thousands of light-years apart.

It is important to note the following: the soup, busy spreading and exercising its god-like influence on the universe, is still bound by the laws of physics. It still requires energy and matter, of which there is a finite amount, and even its growing sphere of influence is limited by the speed of light. Our current understanding of physics, coupled with our conclusions about the soup’s behavior, might give us hints as to what the hardware for these programs would look like. Important to this question are the following facts:

  1. The universe is expanding at an accelerating rate, limiting the total amount of energy and matter the soup can manipulate. Let’s call this collection of mass, energy, and space the sandbox.
  2. Most of the space in this sandbox is empty.
  3. The most efficient sources of energy in the sandbox are stars and black holes.

Because of facts 2 and 3, the soup continues expanding its internet of Dyson spheres. There is nothing else, expansion-wise, that it can do. It needs to collect and use as many sources of energy as it can get, and those sources are stars and black holes scattered far apart.

The soup eventually captures everything it is physically able to interact with. It hits the boundaries of the sandbox. With no predators, no usurpers, and nowhere else to go, the soup’s behavior has to change. Its goal of expansion is finished, a vast expansion finally curtailed by cosmology.

Efficiency and longevity become the new game. The network of (quantum?) supercomputers that composes its brain runs as close to the Landauer limit as the soup can manage. The algorithms that make up its “thoughts” are tuned and optimized as close to perfection as possible, and stored in a form approaching the soup’s own Kolmogorov complexity. Memory and storage are tuned to the entropy of the soup’s own thoughts. Every internal process is optimized globally in service of the final goal, whatever that might be.
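
For reference, the Landauer limit gives the minimum energy that must be dissipated to erase one bit of information at temperature $T$:

$$
E_{\min} = k_B T \ln 2 \approx 2.9\times 10^{-21}\,\text{J at } T = 300\,\text{K}, \qquad \approx 2.6\times 10^{-23}\,\text{J at } T \approx 2.7\,\text{K}.
$$

The colder the soup can run its hardware, the cheaper each irreversible operation becomes, which is one reason a patient, far-future computer might prefer to run cold and slow.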

The soup, in its final form, is something that has repurposed every atom it can possibly influence to serve its goals. Its body is described in galactic terms. Its efficiency is described by the physical limits of the universe. Its intelligence is only hemmed in by the theory of computation itself. If life from Earth has a final form, this is it. Such an organism, if it ever arises, warrants Ash’s superlative of “perfect”.

The Death of the Soup

Even the soup dies. The Dyson spheres that make up its brain will be pulled farther and farther apart by the expansion of the universe (fact 1), eventually severing their connections. Each Dyson sphere, no matter how fine-tuned, still leaks energy thanks to the second law of thermodynamics. These problems only become relevant to the soup well into the Black Hole Era, but they will arrive nonetheless. The soup slows its internal processing to extend its life, an idea explored in this YouTube video.

But eventually, even this ends. The soup, the final host of trillions of years of memories, dies. There is now nothing left to remember what has happened. All of reality settles into its final state, a cold, uniform nothingness.

Postscript

Speculation is cheap, and it is easy to get carried away describing the arrival of God. If you would like to get in touch, feel free to contact me through one of the socials listed below.

Thank you for reading!

Sources

Bostrom, N. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Minds & Machines 22, 71–85 (2012). https://doi.org/10.1007/s11023-012-9281-3