Top AI companies’ safety practices fall short, says new report
As leading artificial intelligence companies release increasingly capable AI systems, a new report is sounding the alarm about what it says are some of those companies’ lagging safety practices.
The Winter 2025 AI Safety Index, which examines the safety protocols of eight leading AI companies, found that their approaches “lack the concrete safeguards, independent oversight and credible long-term risk-management strategies that such powerful systems demand.”
Sabina Nong, an AI safety investigator at the nonprofit Future of Life Institute (FLI), which organized the report and works to address large-scale risks from technologies such as nuclear weapons and AI, said in an interview at the San Diego Alignment Workshop that the analysis revealed a clear divide in how companies approach safety.
“We see two clusters of companies in terms of their safety promises and practices,” Nong said. “Three companies are leading: Anthropic, OpenAI, Google DeepMind, in that order, and then five other companies are on the next tier.”
The lower tier of five companies includes xAI and Meta, along with the Chinese AI companies Z.ai, DeepSeek and Alibaba Cloud. Chinese models have been increasingly adopted in Silicon Valley as their capabilities have advanced rapidly, and they are readily available because most are released as open-weight models.