Microsoft said on Wednesday it will integrate artificial intelligence models from Anthropic into its Copilot assistant, signaling the software giant’s push to reduce dependence on its high-profile partnership with ChatGPT maker OpenAI.
While Copilot will remain powered by OpenAI’s latest models, users will be able to select Anthropic models, Claude Sonnet 4 and Claude Opus 4.1, in Copilot’s AI-powered reasoning agent “Researcher,” as well as when developing agents in Microsoft Copilot Studio.
Starting Wednesday, users who opt in to try Claude can switch between OpenAI and Anthropic models in Researcher, said Charles Lamanna, president of Microsoft’s business and industry Copilot operations.
The move marks a shift for Microsoft Copilot, which has primarily relied on OpenAI models to power new AI features across its suite of applications, including Word and Outlook.
Microsoft, a key financial backer of OpenAI, has been seeking to reduce its reliance on the startup and is developing its own AI models, while also integrating models from China’s DeepSeek into its Azure cloud platform.
Earlier this year, Microsoft said it would offer new AI models made by companies including Elon Musk’s xAI and Meta Platforms (META.O), hosted in its own data centers.
Earlier, the parents of a teen who died by suicide after ChatGPT coached him on methods of self-harm sued OpenAI and CEO Sam Altman, saying the company knowingly put profit above safety when it launched the GPT-4o version of its artificial intelligence chatbot last year.
Adam Raine, 16, died on April 11 after discussing suicide with ChatGPT for months, according to the lawsuit that Raine’s parents filed in San Francisco state court.
The chatbot validated Raine’s suicidal thoughts, gave detailed information on lethal methods of self-harm, and instructed him on how to sneak alcohol from his parents’ liquor cabinet and hide evidence of a failed suicide attempt, they allege. ChatGPT even offered to draft a suicide note, the parents, Matthew and Maria Raine, said in the lawsuit.
The lawsuit seeks to hold OpenAI liable for wrongful death and violations of product safety laws, and seeks unspecified monetary damages.
An OpenAI spokesperson said the company is saddened by Raine’s passing and that ChatGPT includes safeguards such as directing people to crisis helplines.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson said, adding that OpenAI will continually improve on its safeguards.
OpenAI did not specifically address the lawsuit’s allegations.
As AI chatbots become more lifelike, companies have touted their ability to serve as confidants and users have begun to rely on them for emotional support. But experts warn that relying on automation for mental health advice carries dangers, and families whose loved ones died after chatbot interactions have criticized a lack of safeguards.
OpenAI said in a blog post that it is planning to add parental controls and exploring ways to connect users in crisis with real-world resources, including by potentially building a network of licensed professionals who can respond through ChatGPT itself.
OpenAI launched GPT-4o in May 2024 in a bid to stay ahead in the AI race. OpenAI knew that features that remembered past interactions, mimicked human empathy and displayed a sycophantic level of validation would endanger vulnerable users without safeguards but launched anyway, the Raines said in their lawsuit.
“This decision had two results: OpenAI’s valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide,” they said.
The Raines’ lawsuit also seeks an order requiring OpenAI to verify the ages of ChatGPT users, refuse inquiries for self-harm methods, and warn users about the risk of psychological dependency.