In light of increasing safety concerns, Anthropic has revised its usage policy for the Claude AI chatbot. The updated guidelines introduce enhanced cybersecurity measures and explicitly outline certain types of weapons that should not be developed with the aid of Claude.
While the company’s announcement does not specifically highlight the changes regarding weapons, a review of both the previous and current usage policies indicates a significant addition. The earlier version prohibited using Claude to create or distribute weapons and materials intended to cause harm, but the new policy explicitly bans the development of high-yield explosives, as well as chemical, biological, radiological, and nuclear (CBRN) weapons.
In May, Anthropic enacted “AI Safety Level 3” protections alongside the rollout of its Claude Opus 4 model. These safeguards are aimed at making the model harder to jailbreak and preventing it from assisting in the creation of CBRN weapons.
Anthropic’s post also recognizes the potential dangers associated with certain AI tools, such as Computer Use, which allows Claude to control a user’s machine, and Claude Code, which integrates the AI directly into developer terminals. The company notes that these advanced functionalities could lead to risks including mass abuse, malware creation, and cyber attacks.
To address these threats, the company is adding a “Do Not Compromise Computer or Network Systems” section to its usage policy. This section prohibits using Claude to find or exploit vulnerabilities, develop or distribute malware, or create tools for denial-of-service attacks, among other restrictions.
Furthermore, Anthropic is modifying its stance on political content. Rather than a blanket ban on all political content creation, the updated policy forbids only uses deemed deceptive or disruptive to democratic processes, as well as activities involving voter and campaign targeting. The company also clarified that its requirements for “high-risk” use cases will apply only to consumer-facing recommendations, not to business contexts.