Policy & Regulation

Mira Murati tells the court that she couldn’t trust Sam Altman’s words

May 6, 2026

Mira Murati, former CTO of OpenAI, testified in the Musk v. Altman trial that she could not trust statements made by OpenAI CEO Sam Altman. Under oath, Murati said Altman lied about safety processes for a new AI model, falsely claiming that the company's legal team had determined the model did not need review from OpenAI's deployment safety board. The revelation came in a video deposition shown in court and points to serious internal disputes over how rigorously new AI tools should be vetted before release.

The testimony matters because safety protocols in AI development protect both users and the broader public from unintended consequences. AI models can behave unpredictably, so organizations like OpenAI maintain safety boards to catch potential risks before release. If leadership bypasses those checks or misleads its own engineers, rushed or unsafe deployments become more likely. For developers, startups, and businesses that rely on AI, the case underscores the importance of transparency and adherence to safety standards over shortcuts driven by competitive pressure.

The backdrop is OpenAI's push to ship increasingly powerful AI models at speed. Balancing development pace with robust safeguards has always been difficult, and disputes over how far to go on safety compliance have grown sharper. The trial itself stems from long-running tensions between Elon Musk and Sam Altman over trust, control, and responsibility for AI's future. Murati's testimony sheds light on the internal challenges AI companies face in protecting users while keeping pace with rapid development cycles.

The episode is a broader warning for the AI industry: leadership accountability and clear safety governance will be essential to maintaining public trust as AI grows more powerful. Developers and policymakers should watch how companies handle these internal conflicts. The Musk v. Altman trial could prompt tighter regulation and closer scrutiny of whether AI companies actually follow their own safety commitments. The risk is that unresolved internal dissent leads either to more reckless AI releases or to fractures that slow genuine progress.

Expect calls for new standards and external audits of AI safety to follow. OpenAI and other labs will need to close internal transparency gaps and ensure engineers can rely on truthful communication from executives. How the trial ends could reshape governance models for AI projects industry-wide, and companies that ignore safety commitments or mislead their own staff risk serious backlash, legal and otherwise, from regulators, the AI community, and users.

— AI Quick Briefs Editorial Desk
