The risk of artificial intelligence becoming a tool for catastrophic crimes is no longer just a theoretical concern: AI developers are calling for urgent security measures, reports ZME Science.
A new report from Anthropic notes that current AI models already possess capabilities that could be exploited to prepare for “heinous crimes.” The developers believe this risk is significant enough to warrant a new protective system. The security issue has become so pressing that some experts consider this development more frightening than nuclear weapons.
Threat Classification by ASL Levels
The company has implemented its own safety scale, AI Safety Levels (ASL), which determines the level of risk based on a model’s capabilities:
- ASL-1 and ASL-2: pertain to current models that already have built-in filters but cannot independently plan a large-scale attack.
- ASL-3: models that could significantly assist in creating biological weapons or conducting destructive cyberattacks.
- ASL-4 and above: theoretical future systems capable of independently executing strategies to destabilize governmental structures or global networks.

Four Key Vectors of Danger
The report’s authors, including lead researcher Alistair Stewart, highlight specific areas where AI could become a critical threat:
- Biological Security: assisting in the cultivation of dangerous pathogens and developing methods for their covert dissemination.
- Cyberattacks: creating next-generation malware capable of bypassing modern antivirus systems and autonomously seeking vulnerabilities in the defenses of banks or energy networks. While this technology already actively aids law enforcement, bad actors are attempting to turn the same algorithms against the system.
- Chemical Threats: providing instructions for synthesizing toxic substances from readily available ingredients.
- Radiological and Nuclear Risks: simplifying calculations for creating devices that utilize radioactive materials.

Management’s Position and Countermeasures
Anthropic’s CEO, Dario Amodei, emphasizes that the company is already dedicating a significant portion of its computing resources not to developing new features but to “red teaming”: a process in which the company’s own experts try to push the AI into breaching its protocols in order to identify vulnerabilities. The problem is that AI is becoming increasingly sophisticated in its manipulations, which complicates verifying its “honesty.”
According to the report’s findings, developers propose:
- Implementing mandatory pre-release checks of each model’s “criminal potential.”
- Creating physically secure servers for storing the most powerful algorithms.
- Developing international standards that would restrict access to certain knowledge in biology and chemistry through chatbots.
Today, AI is not just a creative tool but also a potentially autonomous entity capable of unpredictable actions. This is why Anthropic is calling for strict international standards and physical protection for the servers housing the most powerful models.