OpenAI has updated its Preparedness Framework, the internal system it uses to assess the safety of its AI models and determine the safeguards needed during development and deployment. The update arrives amid mounting competitive pressure in the AI industry, where companies are racing to ship ever more powerful models.

A key change is that OpenAI may “adjust” its safety requirements if a competing AI lab releases a “high-risk” system without similar protections in place, a move that underscores the tension between pushing the boundaries of AI capabilities and ensuring those systems remain safe and aligned with human values. OpenAI has already faced accusations of lowering its safety standards to enable faster releases and has been criticized for failing to deliver timely reports detailing its safety testing. Adding fuel to the fire, twelve former OpenAI employees recently filed a brief in Elon Musk’s case against OpenAI, arguing that the company would be incentivized to cut even more corners on safety if its planned corporate restructuring is completed.

Anticipating criticism, OpenAI maintains that any policy adjustments would not be made lightly and that its safeguards would remain at “a level more protective.” In a recent blog post, the company stated, “If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements. However, we would first rigorously confirm that the risk landscape has actually changed, publicly acknowledge that we are making an adjustment, assess that the adjustment does not meaningfully increase the overall risk of severe harm, and still keep safeguards at a level more protective.” The statement is clearly aimed at reassuring stakeholders that safety remains a top priority even in a competitive environment.

The updated framework also reveals a greater reliance on automated evaluations to accelerate product development. While OpenAI says it has not abandoned human-led testing entirely, it has built “a growing suite of automated evaluations” designed to “keep up with [a] faster [release] cadence.” Some reports contradict this picture, however. The Financial Times reported that OpenAI gave testers less than a week to run safety checks on an upcoming major model, a significantly shorter window than for previous releases. Sources cited by the publication also alleged that many of OpenAI’s safety tests are now conducted on earlier versions of models rather than the versions ultimately released to the public, raising concerns about how meaningful those evaluations are.

Other notable changes concern how the company categorizes models by risk, with a focus on models that can conceal their capabilities, evade safeguards, prevent their own shutdown, or even self-replicate. OpenAI will now concentrate on whether models meet one of two thresholds: “high” capability or “critical” capability.
According to OpenAI, a model with “high” capability could “amplify existing pathways to severe harm,” while a model with “critical” capability could “introduce unprecedented new pathways to severe harm.” The company says that systems reaching “high” capability must have safeguards that sufficiently minimize the associated risk before they are deployed, while systems reaching “critical” capability must have such safeguards in place during development as well.

These updates to the Preparedness Framework, the first since 2023, signal OpenAI’s ongoing effort to refine its approach to AI safety in a rapidly evolving landscape. While the company emphasizes its commitment to responsible AI development, the potential for adjusting safety requirements in response to competitive pressure raises important questions about the future of AI safety standards and the need for transparency and collaboration across the industry. It remains to be seen how these changes will shape the development and deployment of future AI models, and whether they will adequately address the challenges posed by increasingly powerful systems.