Opinion | If A.I. Is a Weapon, Who Should Control It? - The New York Times

March 02, 2026 | By virtualoplossing

Look, I've been in this game long enough to see the next big thing come and go. Dot-com bubble. Cloud. Blockchain. Every few years, some new tech rolls in, promising to change everything, fix all our problems, probably even fold our laundry. And every single time, without fail, the same clowns are running the show, making the same mistakes. So now it's AI. And if AI isn't a weapon, I don't know what is. The question isn't whether it can be used for harm, but who's going to grab the damn trigger, and more importantly, who's even qualified to aim?

The AI Gold Rush: Fool's Gold?

We’re drowning in hype. Pure, unadulterated hype. Every VC, every CEO with a PowerPoint deck, every government official trying to sound relevant is suddenly an AI expert. Total nonsense. But we buy it anyway. We've seen this movie before, with blockchain, with "big data" before that, where the promise is glorious and the reality is a swamp of over-engineered solutions and under-delivered value. This time, though, the stakes are higher. This isn't just about losing money; it's about losing control, about automating our biases at scale, about building systems we barely understand and certainly don't know how to govern.

The industry, my industry, is tripping over itself to be "AI-first." What does that even mean? For most, it means tacking "AI-powered" onto whatever they're already doing, slapping some machine learning on top of legacy BSS/OSS systems held together with duct tape and prayers. It’s polishing a turd. They’re chasing the dream of reduced CAPEX and higher ARPU without actually understanding the foundational shifts required. They think an API call to an LLM magically fixes decades of technical debt. It doesn’t. It just gives you eloquently wrong answers, or worse, LLM hallucinations, dressed up as fact.

  • The rush to deploy means skipping critical steps: ethical reviews, robust testing, impact assessments. We’re building the plane while flying it, often without a pilot's license.
  • Everyone wants a piece of the pie, but nobody wants to bake it properly. The underlying data infrastructure, the security protocols, the accountability frameworks? Those are boring. They don't make headlines.
  • Investment is pouring into new models and applications, not into the essential, unsexy governance and safety mechanisms that would make any of this sustainable. The juice isn't worth the squeeze if the squeeze breaks society.

Who's Steering This Warship?

Here’s the rub: the people controlling AI right now are largely the same people who control everything else: Big Tech, venture capitalists, and a smattering of overly enthusiastic government agencies. And I can tell you, from two decades of watching these cycles, they are rarely the right people. They optimize for profit, market share, or perceived national security, often at the expense of privacy, fairness, or long-term societal stability. They’re building autonomous systems without autonomous ethics boards. It's a Wild West scenario, only instead of six-shooters, they're playing with nuclear codes.

Consider the structure. A handful of massive corporations hold the keys to the most powerful models. They dictate the terms, control the data, and set the narratives. Small startups, if they're lucky, get acquired, their innovations folded into the existing behemoths. If they're unlucky, they get crushed. This isn't innovation; it's consolidation. It’s creating a digital oligarchy, an unchecked power that makes the old monopolies look like neighborhood lemonade stands. And these titans, bless their hearts, are driven by quarterly earnings, not grand philosophical ideals. They’ll tell you they’re "democratizing AI," but what they mean is they’re giving you access to their walled garden, on their terms.

Data: The Poisoned Well

You want to talk about weapons? Let's talk about data. AI is only as good as the data it’s trained on. And ninety-nine percent of the data out there is a mess. It’s biased, incomplete, stale, or just plain wrong. It reflects human prejudices, historical inequalities, and the shoddy data collection practices of the last 30 years. Feeding that to a powerful AI is like giving a machine gun to a toddler who learned to aim from watching bad action movies. The results are unpredictable, often discriminatory, and always amplified.

  • We're not just talking about privacy violations, though those are rampant. We're talking about the fundamental unfairness baked into the system because the historical data itself is unfair.
  • The sheer volume of data makes effective curation and cleansing a Herculean task, so most just don't bother. They dump it all in, hoping the AI will sort it out. It won't. It will just learn to mimic the mess.
  • The "data quality" conversation is always sidelined for the flashier "model architecture" discussion. No one wants to spend money on plumbers when they can build a golden faucet.
  • And let's not forget the security implications. Centralizing all this data, making it accessible to powerful AI, creates targets so juicy the hackers are practically salivating. One breach, and you’re not just leaking credit card numbers; you’re leaking predictive profiles, behavioral patterns, the very fabric of people’s digital lives.
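The "it will just learn to mimic the mess" point is worth making concrete. Here's a toy sketch with entirely hypothetical hiring records: a trivial "model" that learns the majority outcome per group from skewed historical data will reproduce that skew verbatim at prediction time. Real systems are subtler, but the mechanism is the same.

```python
# Toy illustration (hypothetical data): a "model" that learns the
# majority outcome per group from biased history reproduces the bias.
from collections import defaultdict, Counter

# Hypothetical historical hiring records: (group, hired?)
history = [("A", True)] * 90 + [("A", False)] * 10 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    by_group = defaultdict(Counter)
    for group, outcome in records:
        by_group[group][outcome] += 1
    # "Model" = majority label per group — the historical skew, learned verbatim
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train(history)
print(model)  # {'A': True, 'B': False}
```

No amount of model sophistication fixes this if the records themselves encode the unfairness; that's why the curation work nobody wants to pay for matters.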

Ethics? What Ethics?

The conversation around AI ethics is usually just that: conversation. A lot of hand-wringing in academic papers and a few well-meaning committees that lack any real teeth. The reality is, ethical considerations are an afterthought, a checkbox item to appease PR departments, not a core design principle. When deadlines loom and profits beckon, "ethics" is often the first thing to get deprioritized. It's not malicious intent, usually. It's simple greed and short-sightedness.

We’re building tools that can make life-or-death decisions – from autonomous vehicles to medical diagnostics to military applications – without clear lines of accountability. Who's responsible when the AI makes a mistake? The developer? The company that deployed it? The user? It's a legal minefield we're barely even beginning to map. The regulatory bodies, bless their slow, bureaucratic hearts, are always ten steps behind. They're trying to apply 20th-century laws to 21st-century problems, and it’s simply not working. We need new frameworks, new legal paradigms, and a willingness to actually enforce them.

The Bureaucracy of Bleeding Edge

Who should control it? Ideally, a diverse group of technologists, ethicists, sociologists, policymakers, and—crucially—the public. People who understand the technology, its implications, and its potential for harm and good, rather than just its immediate profit potential. But that’s a fantasy. In reality, it's a muddled mess of competing interests, fragmented regulations, and a profound lack of shared understanding. We're seeing nations race to "win" the AI arms race, pouring money into defense applications, ignoring the underlying civilian applications that could just as easily be weaponized. We talk about edge computing and distributing intelligence, but the core control remains centralized, vulnerable, and opaque.

Government intervention is inevitable, but it often misses the mark. They’ll regulate the symptoms, not the disease. They’ll focus on data privacy laws that are easily circumvented, or ban specific applications without addressing the underlying power structures or computational capabilities. What we need is global cooperation, an international framework that sets standards for development, deployment, and accountability. It sounds idealistic, I know. But without it, we're left with every nation, every corporation, every rogue actor, wielding ever more powerful, largely unregulated, computational weapons. And believe me, that won’t end well for anyone.

  • The lack of interoperability and open standards means vendors are creating proprietary ecosystems, locking in customers and further centralizing control.
  • Talent is scarce and expensive, concentrating the expertise in a few select organizations, exacerbating the problem of limited oversight and understanding elsewhere.
  • The concept of "responsible AI" is often an internal corporate initiative, self-regulated, with no external enforcement or independent auditing. It's like asking the fox to guard the hen house, and then report on how ethical he was.

The Cold Hard Truth About AI

Will AI take all our jobs?

The Blunt Truth: Not all, but a hell of a lot of them. Repetitive, data-heavy, even some creative tasks are toast. The jobs left will be the ones requiring deep human empathy, truly novel problem-solving, or physical dexterity that's too expensive to automate. Start thinking about what you bring to the table that an algorithm can’t mimic or optimize away.

  • Quick Facts:
  • Many entry-level roles will disappear first.
  • New jobs will emerge, but often require retraining.
  • The speed of displacement will outpace new job creation.

Is AI truly intelligent?

The Blunt Truth: No. It’s a pattern-matching engine on steroids. It doesn't "understand" anything in the human sense. It predicts the next word, identifies trends, or makes decisions based on probabilities derived from massive datasets. It can mimic intelligence brilliantly, but it lacks consciousness, empathy, or true reasoning. Don't confuse impressive parlor tricks with sentience.

  • Red Flags:
  • Over-attribution of human qualities to algorithms.
  • Belief that current AI has "common sense."
  • Ignoring the statistical nature of AI "decisions."
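"Predicts the next word" can be shown in a few lines. Below is a deliberately crude sketch — a bigram frequency table over a toy corpus, nothing like a real LLM — but it makes the point: the "prediction" is just counting, with no understanding anywhere in sight.

```python
# Minimal sketch of next-word "prediction": a bigram count table.
# Illustrative only — real models use learned probabilities over
# billions of parameters, but the statistical principle is the same.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    # Most frequent continuation wins — counts, not comprehension.
    return bigrams[word].most_common(1)[0][0]

print(next_word("the"))  # 'cat'
```

Scale that up by a few trillion tokens and you get something that mimics intelligence brilliantly. It's still counting.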

Can we control AI if it gets too powerful?

The Blunt Truth: We can barely control its biases now, let alone some hypothetical superintelligence. Our current "control" mechanisms are largely theoretical, based on wishful thinking or a profound misunderstanding of how complex adaptive systems behave. The horse is already out of the barn, and we're arguing about what color bridle it should wear. If it becomes a truly autonomous weapon, good luck. We don't even have a robust global agreement on how to manage MPLS networks, let alone sentient code.

  • Quick Facts:
  • Current safety protocols are nascent and often voluntary.
  • Defining "control" is a philosophical and technical nightmare.
  • The speed of AI development outstrips regulatory capacity.

Is there an AI bubble coming?

The Blunt Truth: You bet your last dollar. Valuations are through the roof for companies with little more than a demo and a dream. The market is fueled by FOMO and speculation. There are some truly innovative applications, but a vast swathe of it is vaporware or marginal improvements dressed up as revolutions. When interest rates climb, or the next shiny object appears, a lot of these over-inflated balloons are going to pop. And it won't be pretty for those who drank the Kool-Aid.

  • Red Flags:
  • Companies valued purely on "AI potential" rather than revenue.
  • Lack of clear business models beyond "we have AI."
  • The sheer volume of generic "AI solution" providers.

Parting Shot

So, who should control AI? Nobody knows, which is precisely the terrifying answer. It's too diffuse, too complex, too entrenched in the mechanisms of power we already struggle to rein in. We’re in for a rough ride. Expect more ethical quagmires, more job disruption, more consolidation of power, and more instances where the "solution" creates ten new problems. In five years, we won't be asking who should control AI, but how we cope with the fact that it's controlling more and more of us, often without us even realizing it, and certainly without our explicit consent.