We often debate whether artificial intelligence will replace us—but philosopher Mark Coeckelbergh asks a more troubling question: What if AI simply makes us irrelevant—to ourselves?
In his 2025 paper in AI & Society, “Artificial Intelligence—A Threat to Human Dignity and Autonomy?”, Coeckelbergh offers one of the most nuanced, disturbing, and timely critiques of AI yet published: not because he worries about killer robots, but because he understands how fragile the idea of "human" really is in a world where we slowly surrender our moral, political, and philosophical agency to machines.
“What is at stake is not just some abstract moral principle. It is who we are and who we are becoming.” —Mark Coeckelbergh
What’s the Paper About?
The paper offers a philosophical account of how AI—particularly decision-making AI—is already reshaping what it means to be human. Coeckelbergh argues that AI systems do not just displace jobs or automate workflows. They are beginning to redefine the human role in ethics, politics, and responsibility.
At its core, the paper warns of a creeping loss of human autonomy and dignity, as we build and deploy systems that increasingly:
Make decisions on our behalf,
Judge other humans as datasets,
Reduce morality to algorithmic logic,
And centralize authority in opaque, technical infrastructures.
Coeckelbergh warns that these effects accumulate invisibly: we do not notice what we have lost until it is already gone.
Five Philosophical Threats of AI
1. The De-skilling of Human Judgment
AI’s growing role in areas like hiring, healthcare, and law enforcement erodes our need—and ability—to make critical decisions. Coeckelbergh calls this "decision deskilling" and links it to a decline in moral reasoning:
“We will not be able to understand, question, and take responsibility for the decisions anymore.” —Coeckelbergh
The paradox? The more powerful AI becomes, the less incentive we have to think for ourselves.
2. The Moral Displacement Effect
As responsibility shifts from humans to machines, so does blame. When AI systems fail—whether through bias in hiring algorithms or wrongful targeting in drone strikes—humans are tempted to say, “It wasn’t me. It was the machine.”
This moral distancing is corrosive:
“The problem is not that humans are replaced. It is that they are still involved but in a more distant, passive, and less responsible way.” —Coeckelbergh
This echoes what philosopher Hannah Arendt called “the banality of evil”: evil that thrives not in monstrous individuals, but in systems without accountability.
3. Quantifying the Soul
AI’s dependency on data reduces the human being to a dataset. This “datafication” means we are understood not as stories, communities, or moral agents, but as patterns to predict and optimize. The same logic sits at the heart of Mark Zuckerberg’s AI manifesto: the data we have willingly surrendered to him becomes the foundation for his AI models and engines. It cuts both ways. Such a system can make personal decisions based on how you might actually decide, but it also surfaces your thoughts, ideals, and moral values to the Surveillance State, leaving you more exposed while dismantling the concept of privacy in dramatic terms.
“There is no room for ambiguity, vulnerability, and narrative complexity in data models.” —Coeckelbergh
This isn’t just an epistemological problem—it’s an attack on human dignity.
4. Surveillance and Political Control
Coeckelbergh places today’s AI within the larger frame of power, state surveillance, and authoritarian drift. He notes how AI’s role in predictive policing, facial recognition, and digital ID systems increases state control—often without public consent.
“AI, when deployed in governance and security, can be used to control, monitor, and repress.” —Coeckelbergh
This isn’t science fiction. China’s Social Credit System and Western deployments of Palantir’s predictive analytics are already laying the groundwork.
5. The Disappearance of Ethical Deliberation
Perhaps most importantly, Coeckelbergh worries that AI changes how we frame problems. The logic of efficiency, optimization, and algorithmic clarity replaces messier but more human ways of thinking.
“What disappears is not only responsibility but also the ambiguity, complexity, and humanity of moral deliberation itself.” —Coeckelbergh
In essence, we are forgetting how to be human—not because machines are better, but because we are becoming indifferent.
What Is To Be Done?
Coeckelbergh offers no cheap techno-fixes. His recommendations go beyond transparency or auditability and call for a transformation in how we design, regulate, and live with AI:
Re-center human judgment in system design. AI should assist—not replace—deliberation.
Build “slow AI” that leaves room for reflection, ambiguity, and democratic input.
Refuse reductionism. Humans are not data points.
Reinforce moral education. The capacity for ethical reflection must be cultivated, not outsourced.
This is a radical message for a society addicted to novelty, speed, and efficiency.
The Paper’s Relevance in 2025
What makes Coeckelbergh’s warning so resonant now is that we’re seeing his concerns unfold in real time:
OpenAI and Palantir are being woven into national defense, policing, and immigration enforcement.
Trump’s “AI Action Plan” overrides state-level AI regulations and guardrails, centralizing unchecked AI power at the federal level.
Meta’s Andrew “Boz” Bosworth and OpenAI’s Kevin Weil have been sworn into Detachment 201, the Army Reserve’s Executive Innovation Corps. With personal information harvested from Meta products like Facebook and Instagram and fed into Palantir’s AI identification systems, your private life and thoughts can end up in the hands of federal law enforcement, without your consent or active participation. This is Orwell’s thought-crime policing writ large.
Ultimately, the question isn’t whether AI will change the human condition—but whether it already has.
Final Thought: The Machines Are Not Coming. They’re Here.
To read Coeckelbergh is to be reminded that the greatest threat AI poses is not murder or manipulation, but moral numbness. Not killer drones, but decision fatigue and quiet dehumanization. It’s not the Technology per se; it’s the People behind the Technology and the purpose to which they apply it.
In his words:
“AI does not merely help us do things better. It changes what it means to be a person.” —Coeckelbergh
As Frank Herbert wrote in Dune, “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” If we fail to grasp this now, we may soon find ourselves in a world where no one remembers how to say no.
Smash the Machines!
Citation:
Coeckelbergh, M. (2025). Artificial Intelligence—A Threat to Human Dignity and Autonomy? AI & Society. https://doi.org/10.1007/s00146-025-02490-9