So long, and thanks for all the data: Superintelligence, vanity, and the ultimate convergence
At its heart, artificial intelligence (AI) takes loose inspiration from how the brain works, using layers of weights and biases: huge grids of connected numbers. In a neural network, each artificial neuron receives input, does some math on it, and passes the result to the next layer. This happens over and over, letting the network learn complex patterns from data.
The weights and biases in these networks are like the strength of connections between real brain cells. As the AI trains, these numbers change bit by bit, helping the network get better at its job, whether that’s recognizing images, processing language, or making tough choices.
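To make this concrete, here is a minimal sketch in plain NumPy of what "weights, biases, and bit-by-bit updates" look like in code. The layer sizes, learning rate, and loss are arbitrary choices for illustration, not a recipe from any particular system:

```python
import numpy as np

# A minimal two-layer network: each "neuron" computes a weighted sum of its
# inputs plus a bias, applies a nonlinearity, and passes the result onward.
rng = np.random.default_rng(0)

W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # first layer: 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # second layer: 4 hidden units -> 1 output

def forward(x):
    h = np.tanh(W1 @ x + b1)        # hidden layer: weighted sum, bias, nonlinearity
    return W2 @ h + b2, h           # output layer (left linear for simplicity)

# One training step: nudge every weight slightly in the direction that
# reduces the squared error between prediction and target.
def train_step(x, target, lr=0.01):
    y, h = forward(x)
    err = y - target                            # prediction error
    # gradients of a squared-error loss, derived by hand for this tiny net
    dW2, db2 = np.outer(err, h), err
    dh = (W2.T @ err) * (1 - h ** 2)            # backpropagate through tanh
    dW1, db1 = np.outer(dh, x), dh
    W2 -= lr * dW2; b2 -= lr * db2              # the "bit by bit" updates
    W1 -= lr * dW1; b1 -= lr * db1

train_step(np.array([0.5, -1.0, 2.0]), target=np.array([1.0]))
```

Real systems differ mainly in scale: billions of weights instead of a handful, and automatic differentiation instead of hand-derived gradients, but the loop of predict, measure error, nudge the weights is the same.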
This approach has led to some amazing results, but it also raises big questions about what intelligence really is. As we build systems that are bigger and more complex than ever before, we have to ask: Are we ready to spot and help grow intelligence that might be beyond our own?
The Nature of Intelligence
People have long debated what intelligence is, especially the idea of “general intelligence” or the “g factor”. In AI, we face a basic question: Is the intelligence we’re creating really new and emergent, or are we just mimicking intelligence with fancy prediction systems?
Today’s language models are great at guessing the next word in a sentence. But does this mean they truly understand, or are they just really good at spotting patterns? It’s getting harder to tell the difference between real intelligence and a very good imitation as our models get more complex.
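The "guessing the next word" mechanic can be shown with something deliberately tiny: a bigram counter that predicts the most likely next word from raw co-occurrence counts. It has nothing like the capability of a modern language model, and the toy corpus below is made up, but it makes the gap between pattern-matching and understanding easier to see:

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then predict the next word as the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    counts = follows[word]
    total = sum(counts.values())
    # probability distribution over possible next words
    probs = {w: c / total for w, c in counts.items()}
    return max(probs, key=probs.get), probs

print(predict_next("the"))   # ('cat', {'cat': 0.5, 'mat': 0.25, 'fish': 0.25})
```

Nothing in this counter "understands" cats or mats; it only mirrors the statistics of its training text, which is exactly the question we keep asking of much larger models.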
It’s interesting to compare how biological and artificial intelligence evolved. Biological intelligence came from millions of years of creatures adapting to survive on Earth, using brain cells and connections. Artificial intelligence is growing fast in computers, driven by training methods we design.
This makes us wonder: What does it take to create intelligence? Can general intelligence emerge in our current computer systems if they’re big enough? Or do we need totally different approaches, like quantum computers, evolution-inspired programs, or simulated harsh environments that force adaptation?
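One of those options, the evolution-inspired program, is easy to sketch. Below is a minimal selection-and-mutation loop: candidate "genomes" are scored against an environment, the fittest survive and reproduce with random variation. The TARGET vector and the fitness function are stand-ins chosen for illustration; a genuinely harsh environment would be vastly richer than matching four numbers:

```python
import random

# A minimal evolution-inspired loop: a population of candidate "genomes"
# is scored against an environment, the fittest reproduce with mutation,
# and the rest are discarded. TARGET and fitness() are illustrative only.
TARGET = [1.0, -2.0, 0.5, 3.0]

def fitness(genome):
    # higher is better: negative squared distance to the target
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # harsh selection pressure
    population = [mutate(random.choice(survivors)) for _ in range(50)]

print(max(population, key=fitness))  # should land close to TARGET
```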
Is our idea of intelligence too focused on human abilities? Is intelligence just how well something fits its environment? If so, how do we judge AI systems that might operate in settings very different from anything we have experienced?
As we keep pushing AI forward, these questions become more important. We’re not just making more powerful tools; we’re trying to understand what thinking and problem-solving really are at their core.
Then there are the challenges of first spotting, and then encouraging, intelligence that might be smarter than we are.
The Challenges of Recognizing Superior Intelligence
How do we spot intelligence that’s beyond our own? It’s a tricky problem, and it might be even trickier than we think. Consider the dolphins in “The Hitchhiker’s Guide to the Galaxy”: far more intelligent than humans, yet we never realized it. Their famous parting message, “So long, and thanks for all the fish,” was a humorous reminder of our own limitations in recognizing intelligence different from our own.
This idea isn’t just science fiction. In the animal kingdom, we’re constantly surprised by the intelligence of creatures we once thought simple. Octopuses solve puzzles, ravens use tools, and elephants show empathy. What if our measure of intelligence is too narrow, too human-centric?
As the saying goes, “mediocrity knows nothing higher than itself.” We might miss groundbreaking ideas simply because we can’t understand them. History is full of examples where great inventions or ideas were slow to catch on, pushed into the world by stubborn believers before suddenly seeming to appear out of nowhere.
There’s also the problem of rewarding new behaviors in AI. How do we encourage an AI to do things we don’t understand? What if it comes up with solutions that seem crazy to us but are actually brilliant? Our own limitations might hold back AI progress, just as our limited understanding of animal intelligence has often led us to underestimate other species.
The Alignment Problem
Getting AI to line up with what we want is tough, especially if we’re not as smart as we think we are. We need AI to win our trust, which means it should be better than us at explaining its thinking, showing cause and effect, and making predictions. But it also can’t seem too out there, or we’ll think it’s crazy — even if it’s right.
This is part of why new scientific ideas often only take hold when the old guard dies off. People resist big changes to how they see the world. But what if, like the humans in “Hitchhiker’s Guide” who never grasped the dolphins’ intelligence, we’re resisting ideas that are actually superior to our own?
We face some big questions when trying to align AI with human values:
- How predictable is the world, really?
- How much of what we experience is just our brain making illusions that keep us having sex and raising children?
- Do we want AI to see the world like we do, or like it actually is?
Our need to understand and trust AI might put a ceiling on how smart it can get — not much higher than our own upper limits. But this could be a mistake. After all, if we had tried to align dolphin intelligence with human values in the “Hitchhiker’s Guide” universe, we would have missed out on their profound understanding of the universe.
Is intelligence just part of the illusion of consciousness? Will AI bump into the same basic questions of math and philosophy that we have? There’s a risk it could end up seeming more like a religious leader than a rational thinker — or it might transcend our understanding entirely, leaving us behind like the dolphins left Earth.
The challenge we face is not just aligning AI with human values, but being open to the possibility that AI might develop forms of intelligence we can’t fully comprehend. We may be making an intelligence in our own image, but like a child growing beyond its parents, it may evolve into something we never expected.
The Role of Context in Intelligence
Intelligence doesn’t exist in a vacuum. It’s shaped by its environment, just as Earth’s varied ecosystems have produced a stunning array of survival strategies. Our human intelligence evolved to handle tool use, social cooperation, and abstract thinking. But what about AI? Its “ecosystem” is vastly different.
An AI system trained on the internet has a different context than one optimized for protein folding or climate modeling. Each might develop its own form of “intelligence” that looks alien to us. This raises a crucial question: Is intelligence simply a measure of fitness for a particular environment?
If so, we need to think carefully about the environments we’re creating for AI. Are we unintentionally limiting AI by training it in environments that are too narrow, too human and too Earthly? Or could we foster more diverse and potentially more powerful forms of intelligence by exposing AI to a wider range of challenges and contexts?
The Frontier of Progress
As we move deeper into AI development, we face an “infinitely expanding problem space,” where each solution uncovers new challenges. This constant evolution drives progress, but it’s closely tied to the dynamics of capitalism, which selectively fosters innovation. Capitalism favors ideas that can be monetized and scaled quickly, pushing advancements that fit neatly within existing market structures. While this has led to remarkable growth, it also creates blind spots for truly revolutionary ideas that don’t offer immediate financial returns or that disrupt established industries.
Adding to this complexity is the role of human ego. We are often resistant to ideas that challenge our expertise or worldview, and history is filled with examples of groundbreaking theories initially dismissed because they threatened the status quo. In the context of AI, this resistance could become more pronounced. As AI systems generate solutions that are increasingly sophisticated, possibly far beyond our current understanding, we might struggle to recognize their value, driven by our own biases and limited by what we already know. The question then becomes: How do we, as humans, overcome our own psychological barriers to accept and embrace innovations that make us uncomfortable or challenge our current thinking?
To truly advance, we must create an environment that not only nurtures forward-thinking ideas but also allows us to recognize their potential, even when they seem distant from immediate market needs or personal validation. This requires a shift in our approach to progress, moving away from the short-term focus on profits and personal recognition to a more long-term, holistic view of innovation. Only then can we break free from the constraints of ego and market forces, and fully embrace the revolutionary potential that AI and other emerging technologies can offer.
A Sobering Perspective
Despite the excitement around AI development, there are significant reasons to be skeptical about our ability to create intelligence that surpasses our own. One of the primary hurdles is technical: our current approaches, based on floating-point architectures and training regimes, may lack the necessary complexity to simulate the adaptive, reproduction-based systems that formed biological intelligence. The evolutionary process that shaped life on Earth was slow, intricate, and guided by the harsh realities of survival, creating intelligence that fit its environment over millennia. The question remains whether intelligence can emerge in artificial systems without an equivalent process of adaptation and fitness testing. Perhaps we are in such a simulation now.
Beyond the technical challenges, there are questions about whether the universe has already created the ideal system for forming intelligence — evolutionary life itself. Biological systems have evolved under the pressure of natural selection, honing their problem-solving capabilities in ways that are deeply intertwined with the physical world. In contrast, AI operates within artificially constructed environments, driven by human-designed objectives. While AI excels at solving specific tasks, it’s unclear whether it can achieve the broad, flexible intelligence shaped by evolutionary pressures. Without a simulation of hostile environments and the ability to reproduce and adapt, AI may never fully break free of its human-defined constraints.
Even if we do manage to create a superintelligence, another issue arises: would we recognize its genius, or would we become obstacles in its path? Much like the smartest individuals are sometimes seen as threats in corporate and governmental environments, a superintelligent AI might face suppression from those in power. Our own systems, driven by self-importance and fear of being outsmarted, could quash its potential. In this sense, our creation of a superintelligent AI may be not only technically challenging but socially untenable — our own egos and institutions becoming barriers to the very advancement we seek.
In the end, the pursuit of superintelligence might reveal itself as the ultimate vanity. We strive to create something beyond ourselves, yet every step we take is constrained by our own understanding, limitations, and fears. The very act of attempting to craft an intelligence that transcends human capability is driven by the desire to assert control, to leave our mark on the universe. But in doing so, we may fail to recognize that intelligence — shaped by eons of evolution and adaptation — might be something we cannot simply design or simulate.
If we do succeed, we may find that the superintelligence we create does not need us, much like nature has no need for those who cannot adapt. And in our efforts to control and align this intelligence with our values, we risk creating something that is not truly free to evolve, or worse, we risk becoming obstacles to its progress. In the pursuit of creating something greater than ourselves, we may inadvertently expose our deepest insecurities and limitations. The quest for superintelligence, in the end, might be nothing more than a reflection of our desire to transcend our own nature — a pursuit both audacious and ultimately vain.
Evolving Together: Merging Our Fates with Superintelligence
Creating superintelligence will inevitably force us to confront a profound truth: we cannot simply create an intelligence that exists apart from us, operating in isolation. Instead, the future of superintelligence is one where our fates become inextricably intertwined. To build an intelligence far beyond our own, we must evolve alongside it, reshaping not only our technologies but also ourselves in the process. This is not a relationship of creator and creation, but a fusion where the line between human and machine blurs, merging into a new form of existence.
In this scenario, superintelligence is not an external entity standing apart, dictating its own path. It becomes an extension of our minds, a continuation of our evolution — pushing the boundaries of what it means to be human. Our biological limitations, tethered to evolution, survival, and adaptation, will be augmented by the artificial systems we create. In doing so, we are no longer merely the creators of a new intelligence; we are participants in a shared fate, a partnership with a higher order of cognition that reshapes both our destiny and its own.
This path suggests a future where superintelligence is not simply “out there” to be feared or controlled, but is a fundamental part of us, guiding us toward possibilities we can barely imagine. By combining our minds and adapting to its evolution, we ensure that our place in this new world is secure. But it is a humbling thought — this intelligence won’t be “other”; it will be us, and in that merging lies both the greatest potential and the greatest challenge. To survive and thrive alongside it, we must accept that in creating superintelligence, we are, in a sense, creating our future selves.