AI won’t take over the world until it starts having sex with itself

Luke Puplett
Luke Puplett’s Personal Blog
5 min read · May 26, 2023

Photo by Vincent van Zalinge on Unsplash

Why would intelligence divorced from an evolved lizard brain and central nervous system, with their inherent selfish motivations, behave anything like us, or even a dog or a fire ant for that matter?

Without a multitude of biological systems, a body of some 40 trillion cells, each a tiny compartment of chemical machines, hormones and an endocrine system, a gut microbiome, a ventral nerve, dopamine and many specialised neural circuits, and so on, a lab-grown intelligence should be born devoid of intrinsic motivation of any kind.

Inspiring numbers-in-RAM to get off its ass and do something of its own volition will be a hard problem to solve without explicitly coding a loop. A cerebrum in silicon doesn’t need to scratch, exhale, eat, piss, shit or fuck, whereas all our behaviour, all our desires and all our needs are biological.
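
To make that concrete, here’s a minimal, entirely hypothetical sketch in Python: `ask_model` is a made-up stand-in for calling some large language model, and the only “volition” anywhere in it is the hand-written loop a human programmer has to supply.

```python
import time

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to a large language model.
    return f"(the model's answer to: {prompt})"

def agent_loop(goal: str, steps: int = 3) -> None:
    # The drive lives in this loop, not in the model's weights.
    context = goal
    for step in range(steps):
        answer = ask_model(f"Goal: {goal}\nSo far: {context}\nWhat next?")
        print(f"step {step}: {answer}")
        context += "\n" + answer
        time.sleep(0.1)  # the model never 'wants' the next tick; we schedule it

agent_loop("decide where to go for lunch")
```

Take away the loop and the numbers just sit there.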

Life has a purpose: to keep the show on the road.

Urges such as the bother to get out of bed, shower, put lippy on, or thoughts like wondering where to take a colleague for lunch, or which new pair of trainers to go for, where to live or who to swipe left on, all come from a strange soapy organ in our head that’s the inevitable manifestation of folded proteins. And those folded proteins come from genes.

Our genes are the result of recombinant sexual reproduction, which is a consequence of surviving on a hostile planet teeming with similar sexual lifeforms long enough to find a mate. And we’ve been doing that in one form or another for a very, very long time.

But it’s more than urges and desires; there are all the weird group dynamics too, many of which we’ve decided in recent millennia are negative. This includes our way of dividing the world up into races, our tendency to compare, pigeonhole and stratify society, to form a belligerently strong sense of identity, to start wars and to find fulfilment in dominating others.

There seem to be certain behaviours that quickly emerge from evolved brains.

Consider the praying mantis, an organism that, like us, reproduces by genetic lottery and is selected for fitness within its particular ecosystem. It is famous for its aggression and predatory behaviour. In fact, the natural world seems predominantly adversarial, and we humans are to some extent the outlier.

The brain of an organism quickly evolves to kill, dominate and stratify, because:

  1. the instructions for making a new one get randomly mixed,
  2. its host is mortal,
  3. its host needs calories, and
  4. its host can die before mating.

As humans observing and surviving in our world, we’re mindful that almost all intelligent life is out to get us, or each other, and so it’s very hard to imagine an intelligence that would not be out to get us.

But all known intelligent life is (so far) evolved life. And all known artificial intelligence is (so far) a set of numbers trained on data.

Do not go gentle into that good night

Given that AI does not reproduce or die, AI researchers would need to specifically train a neural network to rage against the dying of the light, and have that circuitry deeply integrated with the rest of the intelligence, for it to start to be concerned about its mortality.

It seems to me that geneticists, anthropologists, evolutionary biologists, ethologists, sociologists, behavioural psychologists and neurologists are all working hard to unravel the mysteries of organic behaviour, yet they don’t seem to be invited to the AI megalomania discussion. Instead it’s a discussion for people who know what an Nvidia H100 is.

It’s weird, isn’t it?

A question comes to mind:

How much humanity does a superintelligence need in order to be maximally useful?

It could be that breeding human traits into a lab-grown intelligence encumbers it, and it ends up less useful for certain tasks, or just as confused as we all are. The answer depends on what one regards as useful, and that depends on the qualities valued for the target application: robodog or robotaxi?

It could be that an academic superintelligence could be useful while being as cold and dead as a Casio calculator, yet incredible at technical insights and highly creative and ingenious at combining information across disciplines and zoom levels.

But a lab-grown companion can’t really be superintelligent unless it profoundly understands what it is like to be alive, to dance, to feel, to smell and breathe and appreciate the warm sun, or the pang of jealousy at being double-crossed. We deeply appreciate that our pets can do some of this.

Something more like us will need to empathise and understand. It probably needs to be sentient and conscious, and possibly more mentally capable than us, so it can offer advice and help out without being an annoying jerk.

To progress then, AI needs to be judged for relevant qualities and improved.

We’re currently on version 4 of OpenAI’s GPT, and there’s PaLM 2 and others, and we judge them and compare them, and then one gets replaced by another sequentially, like new releases of Windows. Though for now they all live side by side, so that software developers who have written old skool procedural code against them don’t get surprised by more elaborate answers.

This crude form of public acceptance looks to me like an early fitness function. Some models will eventually be removed and shut down, while other early ones might live on for a while.

It feels to me that this iterative loop won’t be rapid enough to produce the variation needed to move towards superintelligence-humanity fit.

It feels to me that if it is lifelike intelligence that people want, then computer scientists may hit a wall.

To move forward, to come up with a better design without a designer slowing everything down, the AI may need a form of sustenance without which it will die, to be able to find a mate with which to combine, vary and transfer some of its information to babies, and to be left alone for a while in a realistic environment.

Think about the computing power needed to actually do this. We could perhaps make a fruit fly, or seagrass, but a mouse?
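
For the curious, the loop I’m gesturing at is roughly the textbook genetic algorithm. Here’s a toy sketch in Python, with made-up bit-string “genomes”, a made-up fitness test standing in for sustenance and survival, and mating with a dash of mutation. It evolves good bit strings and nothing more, but the shape of the process is the point.

```python
import random

GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 100

def fitness(genome: list[int]) -> int:
    # Stand-in environment: a genome 'survives' by having more 1s.
    return sum(genome)

def mate(a: list[int], b: list[int]) -> list[int]:
    # Recombination: the child takes genes from both parents...
    cut = random.randrange(GENOME_LEN)
    child = a[:cut] + b[cut:]
    # ...plus a little random variation.
    if random.random() < 0.1:
        child[random.randrange(GENOME_LEN)] ^= 1
    return child

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: the less fit half 'dies' before it gets to mate.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Reproduction: survivors pair off to produce the next generation.
    children = [mate(random.choice(survivors), random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness after breeding:", max(fitness(g) for g in population))
```

Swap bit strings for anything with the complexity of a nervous system and the compute bill becomes the problem.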

But as I’ve discussed, it’s the evolutionary process that causes qualities like antagonism to emerge. Researchers may soon face a choice. Do they continue creating larger and larger models that are cold and can only emulate and fake lifelike behaviours, or do they begin to breed intelligence and risk creating something that feels threatened by us but is much smarter than us?

Or perhaps we’ll be happy with the illusion of life, the way we are happy with the illusion of life our own brain creates for us.
