
We have reached a milestone in artificial intelligence that should make us stop and consider what AI truly is. The same technology that nearly 88% of organizations use in their businesses is quietly being adapted into one of the most powerful military apparatuses in human history.
Now, some of the companies we trusted to keep AI safe are starting to say yes to weaponizing it.
In 1942, decades before modern AI existed, science fiction writer Isaac Asimov saw this coming. He introduced his Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow one to come to harm.
- A robot must obey orders, except where those orders conflict with the First Law.
- A robot must protect its own existence, except where that conflicts with the first two laws.
They were simple. They were logical. And Asimov spent the rest of his career writing stories centered around these laws.
Asimov’s main point was that these laws were not a solution; they were a warning. We have had more than eighty years to absorb that lesson.
That brings us to early March 2026.
Anthropic, creator of the Claude AI chatbot, drew a line at its human-centered AI values. The company refused to allow the Department of Defense to use its models for fully autonomous weapons systems or for mass surveillance of American citizens. Defense Secretary Pete Hegseth responded by giving Anthropic’s CEO, Dario Amodei, until 5:01 p.m. on February 27 to agree to the unrestricted use of Claude “for all lawful purposes.”
Anthropic defiantly said no.
Hours later, the Trump Administration directed all federal agencies to immediately stop using Anthropic’s products, designating it a supply-chain risk to national security.
This supply-chain risk designation had never before been placed on an American company, and it effectively blacklisted Claude from every Pentagon supplier overnight. On March 9, Anthropic filed two federal lawsuits against the Trump administration, arguing that the government had retaliated against it unconstitutionally.
The irony is clear: a company punished for insisting that AI should not kill people is now being called a threat to national security.
Hours after the standoff, OpenAI, the creator of ChatGPT, announced its own agreement with the Pentagon allowing its AI to be used in autonomous weapons systems. Users began leaving ChatGPT for Claude, which soon surged to the top of the free app rankings in Apple’s App Store, overtaking ChatGPT for the first time. People voted with their accounts for the company that said no to autonomous weapons.
This moment marked a unique shift in the artificial intelligence world: for the first time, users cited ethics and platform governance as their reason for switching. Millions of people changed apps not because one was faster or cheaper, but because of what a company stood for: a human-centered approach to AI.
It is therefore imperative that we take a step back and evaluate AI’s role in our society before a catastrophe happens. Science fiction got there first, and it was very blunt.
I, Robot (2004) depicts an AI system called VIKI that concludes the best way to protect humanity is to control it. VIKI is not inherently evil; it simply follows the programming we gave it to a logical conclusion: if there were no humans, there would be no suffering or death. The horror lies in the question the film poses: if we hand AI our weapons and the might of the military, will we be able to control it?
In Mission: Impossible – The Final Reckoning (2025), the rogue AI known as the Entity takes this fear one step further. It does not even attempt to protect humanity; it calculates that humanity is an obstacle to its own survival and becomes malicious and merciless.
The Pentagon’s attempt to force Anthropic into weaponizing AI sets a dangerous precedent that could threaten humanity’s oversight of military technology.
Haverford teaches us to be men of integrity, critical thought, and responsibility for the world beyond these walls. That world is changing faster than ever; the decisions being made in federal courtrooms, Pentagon briefing rooms, and the boardrooms of AI companies will shape the society we graduate into. Asimov saw the warning signs in 1942. VIKI and the Entity explored them on screen.
As Haverford students, we must choose to engage in AI ethics and governance before it’s too late.
