Despite OpenAI’s repeated reassurances that its relationship with Microsoft remains strong and central, recent developments suggest that Redmond isn’t convinced.
According to reports, the tech giant is considering legal action against OpenAI and Amazon over the $50 billion cloud deal the two recently struck to make Amazon Web Services (AWS) the exclusive third-party cloud distribution provider for OpenAI Frontier.
This third-party exclusivity agreement could conflict with OpenAI’s existing Azure partnership. Unnamed Microsoft execs purportedly consider the AWS arrangement unworkable, and say it breaches their agreement with the AI darling, if not explicitly, then in principle.
The three companies are said to be in discussions to resolve the issue, without resorting to litigation, before Frontier exits its limited preview and goes live.
“This is a tricky issue, and prospective early adopters of the OpenAI-AWS Frontier capabilities will need to proceed with caution,” said Scott Bickley, advisory fellow at Info-Tech Research Group. The OpenAI-Microsoft agreement is “quite convoluted, and contains several provisions that lack absolute clarity in terms of where boundaries reside for IP use and IP sharing, likely by design.”
Is OpenAI double-dipping with Microsoft and AWS?
In late February, AWS and OpenAI announced their intentions to “co-create” a stateful runtime environment, powered by OpenAI models, that would be made available on Amazon Bedrock for AWS customers. “Stateful AI” is meant to overcome the challenges of so-called “stateless AI,” where models offer one-off answers without factoring in context from previous sessions.
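The stateless-vs-stateful distinction can be illustrated with a toy sketch (this is not OpenAI’s or AWS’s actual API, just a hypothetical stand-in): a stateless call sees only the current prompt, while a stateful session retains context from earlier turns.

```python
# Toy illustration of stateless vs. stateful AI calls. Hypothetical code,
# not a real OpenAI or AWS interface.

def stateless_answer(prompt: str) -> str:
    # A stateless call sees only the current prompt -- no prior context.
    return f"answer({prompt})"

class StatefulSession:
    """A session that retains context across calls, as a stateful runtime would."""

    def __init__(self) -> None:
        self.history: list[str] = []  # context carried between turns

    def answer(self, prompt: str) -> str:
        # Each reply factors in everything said earlier in the session.
        self.history.append(prompt)
        context = " | ".join(self.history)
        return f"answer({context})"

session = StatefulSession()
session.answer("book a flight")
followup = session.answer("make it business class")
# The stateful follow-up still carries the earlier request as context;
# the equivalent stateless call would see only the follow-up itself.
```

The practical upshot, and the reason stateful behavior matters for agents, is that multi-step tasks can build on earlier steps without the caller resending the full history every time.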
According to the agreement, AWS would not only invest another $50 billion in OpenAI, but would be the exclusive third-party cloud provider for Frontier, which is currently in limited preview with a small group of AI-native companies including Abridge, Ambience, Clay, Decagon, Harvey, and Sierra. OpenAI says it will soon expand the program to other AI builders.
AWS has also agreed to give OpenAI 2 GW of Trainium capacity to support demand for the stateful environment, Frontier, and “other advanced workloads.” Further, the two companies will develop models specifically for Amazon applications, and expand their existing $38 billion multi-year agreement to $100 billion over eight years.
However, at the time of that announcement, OpenAI also felt the need to concurrently announce that nothing about its collaborations with other tech companies “in any way” changed the terms of its partnership with Microsoft. Azure would remain the exclusive cloud provider of stateless OpenAI APIs.
The two companies stressed that, as in their original agreement:
- OpenAI has the flexibility to commit to compute elsewhere, including through infrastructure initiatives like the Stargate project.
- Both companies can independently pursue new opportunities.
- The ongoing revenue-share arrangement will stay the same; however, that agreement has “always” included revenue-sharing from partnerships between OpenAI and other cloud providers.
OpenAI and Microsoft also underscored the fact that the tech giant will maintain an exclusive license and access to intellectual property (IP) across OpenAI models and products, and that OpenAI’s Frontier and other first-party products would continue to be hosted on Azure.
They stated that their ongoing partnership “was designed to give Microsoft and OpenAI room to pursue new opportunities independently, while continuing to collaborate, which each company is doing, together and independently.”
This re-affirmation followed yet another affirmation of the “next chapter” of the companies’ collaboration in October 2025. Microsoft was one of OpenAI’s earliest financial backers, investing $1 billion in 2019 and $10 billion in 2023.
Concerns about potential future lock-in
Clearly, OpenAI has for some time sought to maintain its independence, while seeking out strategic partnerships with the biggest names in tech. The ChatGPT builder seems to have struck (or is in the midst of striking) deals with nearly every big company out there, including Nvidia, Cerebras, Cisco, Accenture, Snowflake, Oracle, and many others.
“OpenAI is seeking to exploit a loophole between what rights Microsoft has to ‘stateless’ versus ‘stateful’ implementations of LLM models,” Info-Tech’s Bickley observed. Stateful is essential to multi-step agentic workflows, he noted, as it allows AI agents to retain memory and context over time.
But, as with many things, “the devil may reside in the details,” he said, as the AWS announcement calls for the creation of a stateful runtime environment. So, for instance, if Frontier is simply an orchestration layer designed to ensure that API calls are made to an Azure-hosted LLM, Microsoft would get paid for that usage.
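Bickley’s scenario can be sketched in hypothetical code (the class names and billing mechanics below are illustrative assumptions, not anything either company has described): a stateful orchestration layer holds agent memory itself but delegates every model call to an Azure-hosted endpoint, so Azure-side usage is still metered.

```python
# Hypothetical sketch of the scenario Bickley describes: an orchestration
# layer keeps agent state locally, but every inference call still lands on
# an Azure-hosted model, generating metered (billable) usage there.

class AzureHostedModel:
    """Stand-in for a stateless model endpoint hosted on Azure."""

    def __init__(self) -> None:
        self.billed_calls = 0  # usage Microsoft would get paid for

    def complete(self, prompt: str) -> str:
        self.billed_calls += 1
        return f"completion({prompt})"

class FrontierOrchestrator:
    """Stand-in for a stateful orchestration layer running elsewhere."""

    def __init__(self, model: AzureHostedModel) -> None:
        self.model = model
        self.memory: list[str] = []  # state lives in the orchestrator

    def step(self, instruction: str) -> str:
        # Bundle retained context into a single stateless model call.
        prompt = " | ".join(self.memory + [instruction])
        result = self.model.complete(prompt)
        self.memory.append(instruction)
        return result

model = AzureHostedModel()
agent = FrontierOrchestrator(model)
agent.step("draft contract")
agent.step("add indemnity clause")
# Both agentic steps were routed through the Azure-hosted endpoint,
# so its billed_calls counter reflects the full usage.
```

In this reading, the stateful layer is new IP, but the underlying inference revenue still flows through Azure.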
The reality is that OpenAI has little choice but to “push the boundaries” of its agreement with Microsoft, and to develop products hosted and used on other hyperscaler clouds, Bickley said. The market is “too big to ignore the AWS and [Google Cloud Platforms] of the world.” Additionally, OpenAI’s massive forecasts of its requirements for capacity (250 GW of data center demand), revenue, and expense/cash burn all demand a global footprint.
“OpenAI is dependent on raising massive amounts of capital to fund this growth trajectory,” said Bickley, and the $50 billion Amazon investment is predicated on the delivery of Frontier.
However, the recent reaffirmation of the Microsoft relationship “muddies the waters,” because it grants OpenAI the right to strike deals with cloud rivals, as long as Microsoft retains its rich revenue-sharing agreement and exclusive hold over stateless models, he noted. This seems to imply that stateful models “may be out of this exclusive IP scope.”
Ultimately, “Microsoft’s aggressive legal response is standard fare for IP disputes among large tech firms, and should not scare away would-be customers,” Bickley emphasized, adding that it will likely be resolved via negotiations.
However, an additional looming issue is the potential for vendor lock-in, he noted. Frontier is tied to OpenAI’s architecture, and now adds “additional lock-in layers” for customer data stored in AWS, along with proprietary orchestration layers through which AI agents will flow. Therefore, as these agentic workflows begin to manage critical enterprise processes, customers’ business workflows could be “distinctly tied” to AWS.
“This will be quite sticky and difficult to migrate off of in the future, assuming there is an alternative to migrate to,” said Bickley.
This article originally appeared on NetworkWorld.