So much of what we read about AI these days floats in abstraction: cloud computing, neural nets, silicon wafers. But in 2025, a few articles from Le Monde diplomatique brought us back to earth. They reminded us that AI isn’t just code and circuitry. It’s becoming a tool of power and control, and, more quietly, a trigger for moral recalibration in our societies.

One piece from January, a review of Félix Tréguer’s book Technopolice, doesn’t just document the rise of algorithmic policing—it walks us through how we got here. From paper files and human informants to facial recognition and predictive analytics, we’re now in an age where the boundary between the police and private tech firms is blurring fast. In France and beyond, Tréguer argues, we’re seeing the emergence of a new kind of partnership—one that’s quiet, data-rich, and largely invisible to the public. AI doesn’t just make policing more efficient; it changes what’s considered legitimate surveillance. And that’s the heart of the concern. It’s not just about watching—it’s about how the very idea of being watched is shifting in democratic societies.

Then, in April, Le Monde diplomatique published a sharp commentary by Philippe Leymarie called De la guerre en zone grise (“On war in the grey zone”). It explores how AI is now woven into so-called “grey zone” operations—not quite war, not quite peace. Claude Serfati, a French economist cited in the piece, warns that we’re drifting toward a new military paradigm in which AI enhances everything from target selection to battlefield coordination. And it doesn’t stop there: these systems are increasingly integrated with quantum computing, hypersonic weapons, even additive manufacturing. The deeper worry? As AI makes weapons more autonomous and decisions more data-driven, the transparency of those decisions disappears. Who’s accountable when an algorithm makes the wrong call? What happens when escalation moves faster than diplomacy?

And finally, a March article zoomed in on the use of AI in military targeting, comparing how Israel and the U.S. use AI to improve precision and reduce civilian casualties. Israeli officials reportedly use AI to assess civilian presence and avoid strikes that might cause high collateral damage. But that same framing—the idea that AI protects civilians—can also be a rhetorical shield. The article asks whether this narrative masks deeper issues: are we becoming too comfortable with the idea that “smart war” is ethical war? And can algorithms ever truly make war humane?

These pieces all circle around one uncomfortable truth: AI is already shifting how power is exercised—from how police patrol a neighborhood to how a nation decides to engage in conflict. And the technology doesn’t just move faster than policy—it moves faster than our ability to think through the implications.

If you’re in the Global South, or part of a small country trying to define its digital sovereignty, the stakes are different—but no less real. Many of these technologies are being exported or embedded via partnerships, sometimes with little debate or oversight. Surveillance tools designed in the West or East often land in places with far less institutional capacity to regulate them—or resist them. And in military alliances or donor relationships, AI tools may come bundled with defense assistance, shaping norms before public debate ever catches up.

So as AI becomes a defining force in geopolitics, the challenge isn’t just who gets the best chips or writes the fastest code. It’s about who defines the terms—who gets to decide what AI is for, and who gets left out of that conversation.

That’s why voices like Tréguer, Serfati, and others matter. They’re asking the hard questions now—questions about power, accountability, and the role of humans in the loop—before the answers get automated, locked in, and quietly forgotten.