

AI risks come to fore amid standoff with Anthropic

By YANG RAN | China Daily | Updated: 2026-03-09 09:57
FILE PHOTO: Anthropic logo is seen in this illustration taken May 20, 2024. [Photo/Agencies]

A high-stakes standoff between the US government and tech company Anthropic has brought into sharp focus the dangers of rapidly militarizing artificial intelligence.

Experts warn that rushing to deploy AI in lethal weapons systems could trigger a global AI arms race and heighten the risk of conflict, urging the international community to quickly establish clear red lines.

Anthropic's large language model, Claude, has been making headlines recently. According to multiple Western media reports, the US military has utilized Claude for key operational support in actions against Venezuela and Iran, highlighting AI's expanding role in live combat.

However, on Feb 27, the US administration ordered all government agencies to cease using Claude, with a six-month phaseout. On Thursday, the Pentagon formally identified Anthropic as a supply-chain risk.

This drastic move followed Anthropic's refusal to compromise its guardrails that prevent the technology's application in fully autonomous weapons and domestic mass surveillance. In a public statement, Anthropic CEO Dario Amodei declared the company "cannot in good conscience accede to" the US Department of War's request, framing it as an ethical line the firm will not cross.

Jiang Tianjiao, a research fellow at Fudan University's Center for Global AI Innovative Governance, said that while AI is increasingly being used to assist military decision-making, current large language models like Claude lack the predictability, robustness, and safety needed for lethal autonomous weapons or mass surveillance tasks.

"Even powerful models," he argued, "cannot guarantee reliability in 'real battlefield' conditions, where errors can have deadly consequences and risk escalating international conflicts."

He also warned that the Pentagon's push to integrate AI more deeply into military applications could fuel a global AI arms race. "These demands may conflict directly with international law and ethical standards," Jiang added. "Autonomous lethal weapons, for example, clash with principles of international humanitarian law, which requires distinction between combatants and civilians and accountable human command."

Anthropic's principled stance has cost the firm its US government business. Shortly after the ban, OpenAI announced a deal to deploy its models within the Department of War's classified networks. The US Departments of State, Treasury and Health and Human Services have also instructed their staff to stop using Anthropic's AI products.

Sun Chenghao, head of the US-Europe Program at Tsinghua University's Center for International Security and Strategy, said that punishing firms for upholding safety guardrails incentivizes the industry to "prioritize contracts over constraints," pushing risks to the battlefield and society.

Jiang further warned that the US moves risk politicizing the global tech ecosystem, forcing companies to prioritize national security over ethics or face sanctions. "Once militarization is forcibly advanced, the line between commercial and military sectors can become increasingly blurred, potentially making existing security review mechanisms purely cosmetic," he said.

Ironically, Anthropic's loss of government contracts has coincided with a surge in its public popularity. Its chatbot Claude recently topped the Apple App Store, and the company's annualized revenue has reportedly jumped.

Ethical boundaries

Sun noted that among a considerable user base, "safety red lines" and "ethical boundaries" genuinely influence consumption and platform choices. "But this reflects a rejection of 'unlimited militarization' and of including surveillance or lethal applications as options, rather than a blanket opposition to all defense-related AI," he added.

Experts pointed out that the confrontation underscores a significant governance lag, as existing international law and rules concerning AI militarization remain underdeveloped.

Sun said that while existing international law offers some principled constraints, it is insufficient for governing AI militarization effectively. "AI isn't a single, easily countable or verifiable weapon system, so traditional arms control methods don't apply well. External verification is also hindered by commercial confidentiality and national security secrecy."

"The biggest challenge for global governance on AI militarization isn't a lack of principles, but a lack of actionable common definitions and tiered regulations and a lack of minimal political trust that can be sustained amid great power competition," Sun added.

A UN General Assembly resolution adopted in December underscores the urgent need for the international community to address the challenges posed by emerging technologies in lethal autonomous weapons systems.

"The feasible path is not an abstract call for a total ban, but to promote a set of tiered, verifiable, and implementable safety guardrails," Sun said. "The international community should prioritize reaching a minimum consensus on 'meaningful human control' over the most dangerous lethal applications and embed the principle of 'ultimate human command and accountability' into national policies and international agreements."

Jiang highlighted the need to reach consensus on red lines for military AI within the United Nations framework as soon as possible, and advocated strategic communication mechanisms among major powers to manage the risks effectively.
