“But our god said”

Luke Puplett
8 min read · Aug 12, 2024


Introduction

This week, I saw a tweet from the economist Paul Krugman. He was fact-checking Donald Trump’s claims about bacon prices, of all things, and saying “It isn’t hard to check”. It got me thinking: while it might be easy for some of us to find this data, many people probably have no clue where to even start looking. I ended up tweeting about it, mentioning that most people won’t know how to access this information, or even that it exists at all.

Here’s Krugman’s original post:

And as I thought more about it, I realised that AI could play a significant role here. Imagine an AI assistant that could guide you through understanding the data, making it more accessible, and even pre-empting your questions, keeping a lookout on your behalf and making sure you see “the facts”.
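To make Krugman’s “it isn’t hard to check” concrete: the lookup an assistant like this would automate is already only a few lines of code. Here’s a minimal sketch in Python against the US Bureau of Labor Statistics public API; the series ID (APU0000704111, “Bacon, sliced, per lb.”) is an assumption worth verifying against the BLS series catalogue.

```python
import requests

# A minimal fact-checking sketch: pull official average-price data for
# bacon from the US Bureau of Labor Statistics public API (v1, no API key).
# The series ID below is an assumption; verify it against the BLS
# series catalogue before trusting the output.
BLS_V1 = "https://api.bls.gov/publicAPI/v1/timeseries/data/"
BACON_SERIES = "APU0000704111"

def fetch_bacon_prices() -> list[dict]:
    """Return recent monthly observations for the bacon price series."""
    response = requests.get(BLS_V1 + BACON_SERIES, timeout=30)
    response.raise_for_status()
    payload = response.json()
    return payload["Results"]["series"][0]["data"]

if __name__ == "__main__":
    # Observations arrive most-recent first.
    for obs in fetch_bacon_prices()[:6]:
        print(f"{obs['year']}-{obs['period']}: ${obs['value']} per lb")
```

The point is not the code itself, but that the raw material for “the facts” sits behind public APIs most people will never know exist.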

But then it hit me: people will just choose AI assistants that confirm their beliefs.

People have always sought comfort, whether from religious leaders or politicians, and soon from AI. I think most people prefer not to be challenged but to have their existing beliefs validated. We know this already, and we call it Confirmation Bias. In fact, our brains crave a consistent world view so strongly that studies have shown we tend not to change our minds even when confronted with irrefutable evidence.

Facts don’t change minds — and there’s data to prove it

https://www.turing.ac.uk/blog/facts-dont-change-minds-and-theres-data-prove-it

It left me reflecting on human nature. Even with all this new technology, some things never change; there’s nothing new under the sun. AI, in the form of assistants, has the potential to act as a smart colleague or friend, digesting the news or your social feeds, surfacing interesting material, and checking what people are saying and what you’re saying back.

Such an assistant could access multiple reputable data sources and make that information accessible to all, at just the right moment, guiding us through complex charts and providing context in plain language. Yet, with this promise comes peril.
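As a sketch of that “plain language” step, here is how an assistant might hand the figures from the earlier snippet to a language model for a neutral summary. This assumes the OpenAI Python SDK with an API key in the environment; the model name is illustrative, and any chat-completion API would do.

```python
from openai import OpenAI

def explain_prices(observations: list[dict]) -> str:
    """Turn raw price observations into a plain-English summary via an LLM.

    Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the
    environment; the model name is illustrative, not a recommendation.
    """
    client = OpenAI()
    table = "\n".join(
        f"{o['year']}-{o['period']}: ${o['value']}" for o in observations
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap for any chat model
        messages=[
            {
                "role": "system",
                "content": (
                    "Explain official price statistics in plain language, "
                    "noting the overall trend and any caveats. "
                    "Do not editorialise."
                ),
            },
            {
                "role": "user",
                "content": f"Monthly average bacon prices (USD per lb):\n{table}",
            },
        ],
    )
    return completion.choices[0].message.content
```

Note the system prompt: whoever writes that one line decides what “neutral” means, which is exactly where the peril described below creeps in.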

The rise of AI as information guardians could lead to new forms of ideological stratification, where people choose their “God” based on comfort rather than truth. While AI might be new, the human behaviour driving our choices remains stubbornly familiar, echoing age-old patterns of belief and validation.

The Promise and Peril of AI as Information Guardians

The rapid advancement of AI has introduced a powerful new tool into our lives — AI assistants capable of processing vast amounts of data and delivering instant answers to complex questions. On the surface, this seems like a step forward in making information more accessible to everyone. While I’m certain that AI will further democratise knowledge, letting us make informed decisions and hold those in power accountable, we must remember the law of unintended consequences.

The first and perhaps most pressing issue is trust. Just as we have learned to approach Google search results with a healthy dose of skepticism, recognising the biases that can influence what we see, we must approach AI-generated information with similar caution.

The sheer complexity of AI makes it difficult for the average person to understand how decisions are made or why certain pieces of information are prioritised over others. I suspect the “black box” nature of AI will erode trust from the get-go, particularly if people start to feel that the AI is not reflecting things they believe to be true. If users begin to sense that an AI is presenting information that feels biased or doesn’t align with their expectations, they may lose faith in that model and seek out alternatives that confirm their pre-existing beliefs.

For example, what if the Google model that presented mostly brown-skinned scientists and black founding fathers of the United States was actually telling the truth, and everything you and your friends had been taught was a lie? That seems preposterous. But if you go along with the thought experiment, it can feel uncomfortable, even repulsive, because the mind abhors inconsistency. We were upset by it, even offended, and we demanded the AI be “fixed”, brought into alignment with our existing shared beliefs.

The profound human tendency to seek out validating information will lead to the creation of AI “echo chambers,” where people align with the models that validate their worldview while avoiding those that challenge it. The risk here is not just that people will be comforted by familiar narratives, but that they will become increasingly insulated from opposing viewpoints, deepening ideological divides. In essence, AI could become a tool not for enlightenment, but for entrenchment.

The promise of AI as a neutral, objective guardian of information is tempered by the very real risks of bias, trust erosion, and a loss of accountability. As we integrate AI more deeply into our lives, we must remain vigilant, ensuring that these tools serve as aids to human judgment rather than replacements for it.

The peril lies not just in how we use AI, but in how we might allow it to use us — shaping our beliefs, decisions, and ultimately, our worldviews.

The New Gods of Information — AI as Modern Prophets

As AI disappears into objects around us, it will transcend its role as a mere tool and emerge as a trusted authority, akin to a modern-day oracle. The diversity in AI models’ data and algorithms will transform them into new “Gods” of information, with each AI catering to distinct audiences, mirroring the historical rise of religious denominations. It may become a basic human right to be able to switch the model in your toaster to one that better reflects your cultural heritage.

Photo by James Wainscoat on Unsplash

Human nature has a longstanding tendency to seek out and align with those who share similar beliefs and values. The saying “birds of a feather flock together” captures this instinct perfectly. Just as people naturally gravitate toward like-minded individuals, they will also gravitate toward AI models that reflect and reinforce their own worldviews. This behaviour isn’t just likely; I think it’s guaranteed.

Imagine working alongside someone who holds completely opposing values — a situation many of us have experienced. The constant bickering, the friction, and the tension make for a long day. You wouldn’t choose to spend time with this person outside of work, and similarly, you would not choose to engage with an AI that continually challenges your beliefs or makes you feel gaslighted.

This natural human inclination will lead to a form of AI segregation, where users cluster around models that validate their perspectives. Just as people find comfort in communities that share their religious or ideological beliefs, they will find comfort in AI models that provide the answers they want to hear. This segregation will create distinct “denominations” of AI users, each loyal to their chosen model. These AI “Gods” will offer certainty, validation, and a sense of belonging, reinforcing the echo chambers that already exist in our digital lives.

The consequences of this are profound. Rather than AI serving as a bridge to understanding, fostering dialogue between opposing viewpoints, it could deepen the divides, making it even harder for people to engage with ideas outside their comfort zone, as the AI takes on the role of a divine authority.

The allure of AI lies in its ability to provide clear, seemingly authoritative answers in a world that is often confusing and complex. This “algorithmic certainty” is deeply comforting, especially in times of uncertainty or crisis.

But this comfort comes at a cost. As people increasingly rely on AI for information, they may become less inclined to question the data they’re presented with, leading to intellectual complacency. The easier it becomes to get answers from AI, the harder it will seem to put in the work of finding alternatives, and the less we may engage in the critical thinking that is essential to a healthy democracy. What’s more, information comes at us through all kinds of channels. Will we read a book, or have AI make up a new story, or a short film, for us?

This dynamic is reminiscent of the way religious texts have been used throughout history — not just as sources of spiritual guidance, but as tools to validate and reinforce existing beliefs. The difference is that while religious texts require interpretation, AI models deliver answers that appear objective and definitive, even when they’re not.

AI is poised to become the new arbiter of truth, shaping not just what we know, but how we think. The danger is that instead of using AI to challenge our assumptions and broaden our perspectives, we use it to insulate ourselves from uncomfortable truths, deepening the ideological rifts that already divide us.

You may conclude that the skill of critical thinking will be more important than ever, and that all children will need to be taught to resist the temptation to depend solely upon an AI that makes them feel comfortable. However, not all families place the same value on questioning, open-mindedness, and a scientific outlook, or they may have their limits; the “Scout Mindset” is itself an anti-ideology ideology.

Nothing New Under the Sun — Human Nature and Technological Evolution

Despite the revolutionary advancements AI represents, it is ultimately another chapter in the story of human behaviour. The tools may be new, but the patterns of how we use them are deeply rooted in our evolutionary and psychological makeup. Throughout history, humans have sought out sources of certainty, validation, and belonging. Whether it was through religion, ideology, or community, the drive to align with like-minded individuals and belief systems has always been a core part of our social fabric.

Looking back through history, every major technological or ideological shift has been accompanied by a similar phenomenon. The invention of the printing press, while making information more accessible, also opened the door for mass dissemination of biased narratives. The rise of the internet promised to connect us globally, yet it also gave birth to echo chambers where people could reinforce their beliefs without challenge. I’m sure AI will follow this same trajectory.

We can expect the development of models tailored to specific nations and cultures. These models will cater to unique societal contexts, languages, and norms, creating an even more diverse AI landscape and raising the risk that the ideological stratifications we’ve always seen will become even more pronounced.

In the end, the phrase “nothing new under the sun” rings true. Despite all the technological advances we’ve made, until we advance our firmware, human nature remains fundamentally unchanged. I have said for a while that AI will expose deeper truths.

Perhaps the more profound realisation is not that we tend to deceive ourselves, but rather that there may not have been any singular, absolute truth from the start. Could it be that we’re all enmeshed in the grand illusion of life itself, where notions of objective reality dissolve into a sea of subjective perspectives?

The truth is not out there, but in here.


Luke Puplett

Zipwire - time journalling, approval and pay built by techies for techies - https://zipwire.io