Musk files a lawsuit to remove OpenAI’s CEO

(SeaPRwire) –   The billionaire has alleged that his former co-founder executed an “unlawful shift to for-profit operations” at the AI company

Court documents submitted on Tuesday reveal that tech magnate Elon Musk is aiming to remove OpenAI CEO Sam Altman and President Greg Brockman from their roles through his legal action against the AI leader.

Musk initiated a lawsuit against OpenAI in 2024, claiming the company defrauded him of $38 million in seed capital he provided at its 2015 founding, based on the premise it would stay a nonprofit. The AI firm, now valued at $852 billion, underwent a restructuring late last year and currently operates as a nonprofit entity owning a 26% share in its for-profit division, which houses ChatGPT.

According to the recent court filing, Musk’s legal representatives are pursuing a motion to “remove Sam Altman and Greg Brockman from their positions of power and recover the personal financial gains they obtained from OpenAI’s illegal for-profit activities and transformation.”

Musk’s attorneys stated that both divisions of OpenAI must also uphold pledges to pursue “AI development that prioritizes safety and open research for humanity’s widespread benefit.” The updated legal complaint notes that any financial penalties granted would be directed to the AI company’s nonprofit side. The trial is scheduled to begin later this month.

In response, OpenAI has charged Musk with trying to damage the company’s reputation with “completely baseless claims,” and it is reported to have asserted that he is conspiring with Meta CEO Mark Zuckerberg to weaken rivals.

Musk departed from OpenAI in 2018 following conflicts with Altman, acquired Twitter (currently X) in 2022, and established his own AI venture, xAI, the next year.

This past February, both xAI and OpenAI disclosed agreements with the Pentagon to incorporate their AI technologies into the U.S. military’s classified systems. Altman stated that his firm consented to the partnership on the condition that its tools would not be deployed for mass surveillance or in fully autonomous weaponry.

These same two conditions, however, have proven to be sticking points in the Pentagon’s dispute with Anthropic, the military’s previous preferred AI provider. The U.S. Department of Defense formally labeled Anthropic a supply chain hazard endangering national security after the tech firm declined to remove protective measures from its Claude AI model.

Anthropic asserted on Wednesday that its latest AI model is “highly autonomous,” capable of analytical reasoning on par with an expert security researcher, and too powerful for general public access, as the company continues its legal battle against the Pentagon.

This article is provided by a third-party content provider. SeaPRwire (https://www.seaprwire.com/) makes no warranties or representations regarding its content.

Category: Top News, Daily News

SeaPRwire provides global press release distribution services for companies and organizations, covering more than 6,500 media outlets, 86,000 editors and journalists, and over 3.5 million end-user desktop and mobile apps. SeaPRwire supports multilingual press release distribution in English, Japanese, German, Korean, French, Russian, Indonesian, Malay, Vietnamese, Chinese, and more.