The Invisible Power of the AI Era: Can Humans Retain Control?
- Schools ON AIR


In my last column, I explored Brave New World and what values humans must protect most in the age of AI. The declaration made by John the "Savage" — "I claim the right to be unhappy" — resonates even more deeply today. The right to fail, the right to be alone, the right to be hurt, and the right to face uncomfortable truths. These are the essential elements that make us human.
But at the end of that piece, I posed a question: Would the tech giants and power structures of our world simply stand by and watch as humans try to reclaim that agency? Whether technology remains a tool that serves humanity or becomes a system that manages it is, ultimately, a question of power and vested interests. Today, I want to examine that possibility in more concrete terms.
The first thing we need to recognize is a form of control far more subtle than outright coercion. Most people, when they imagine power trying to control humans, think of direct methods — dictatorship or mass surveillance. But modern power operates in far more sophisticated ways. It works not through oppression, but through convenience.
The first method is co-optation through convenience. When people begin to raise concerns about over-reliance on technology — talking about digital detox or criticizing algorithms — corporations are far more likely to respond not by suppressing those movements, but by offering even more attractive services. Faster, more personalized, more seamless experiences. The message becomes: "Why go through the trouble of doing it yourself? We can do it better." At that point, the very act of exercising choice starts to look inefficient. Resistance is not suppressed. Instead, it is made to seem unnecessary.
The second method is invisible exclusion. This is a structure that creates subtle distinctions between those who cooperate with technological systems and those who do not. Individuals who resist certain algorithms or push back against data collection in pursuit of autonomy face quiet penalties. Much like the social credit systems already operating in some countries, those who fail to cooperate with the system may find their loan rates raised or their access to public services slowed. Rather than blocking protest outright, the strategy is to make people feel — personally, viscerally — how much their quality of life deteriorates when they resist, nudging them toward voluntary surrender.
The third method is the commodification of resistance. Interestingly, corporations can absorb resistance itself as a new market. We already see many companies promoting slogans like "ethical AI" or "human-centered technology." Not all of these efforts lack sincerity — that would be unfair to say. But at the same time, there is a real possibility that they function simply as mechanisms to make users feel they are in control, while the core structure of the system remains unchanged.
These three methods share a common trait: they do not oppress humans. Instead, they guide people into choosing convenience of their own accord. This is precisely where Aldous Huxley's insight resurfaces. He warned that a society where people cheerfully conform of their own free will may be more dangerous than one where they are openly oppressed.
So in this kind of world, is there no hope for humanity?
Perhaps surprisingly, there are a few qualities of human beings that remain genuinely difficult to control, no matter how advanced technology becomes. One is unpredictability. AI operates on data — it analyzes past patterns to predict the future. But humans sometimes make choices that data simply cannot explain. Uncalculated creativity, irrational courage, unexpected solidarity — these are behaviors that algorithms cannot fully anticipate.
Another source of hope is community. Most human relationships today are mediated through platforms. But real-world communities — people meeting face to face — are something data systems cannot fully capture. Local communities, offline gatherings, and civic networks of people working together to solve shared problems can serve as vital counterweights in a technology-driven society.
Ultimately, human agency will be determined far more by our choices than by the pace of technological development. The moment we accept the conveniences technology offers without reflection, we surrender our own autonomy. But when we begin to understand the structures behind those conveniences and make conscious choices, the situation changes.
The power of the AI era operates in places we cannot see. That may be precisely what makes it so dangerous. And yet, humans remain unpredictable, relationship-forming beings who sometimes make inefficient choices. Those very qualities have been the force that has sustained human society until now.
Perhaps the era ahead will be decided not by a competition between technology and humanity, but by how consciously humans engage with technology. In the end, the question comes down to this: Will we choose a society managed by convenience — or will we fight to preserve a human society where we choose for ourselves, imperfectly but freely?