When OpenAI launched Frontier in February, the announcement framed it as a platform for enterprise AI agents. What it actually signalled was a challenge to the revenue architecture underpinning the software industry.
Frontier is designed to act as a semantic layer across an organisation’s existing systems, connecting data warehouses, CRM platforms, ticketing tools, and internal applications so AI agents can operate with the same business context a human employee would have. OpenAI describes these agents as “AI coworkers” that can be onboarded, assigned identities, granted permissions, and reviewed for performance.
Early customers include Uber, State Farm, Intuit, and Thermo Fisher Scientific. OpenAI CFO Sarah Friar has said that enterprise customers currently account for roughly 40% of the company’s revenue, a figure she aims to push closer to 50% by year-end, with Frontier as the vehicle.
Frontier in enterprise workflows
The case for Frontier is that agents deployed in isolation add complexity rather than remove it. Each new agent is a point of integration, requiring its own data connections and governance controls, and the result is fragmentation. OpenAI’s answer is shared business context: rather than each agent building its own understanding of how an organisation works, Frontier provides a centralised layer that all agents can reference.
Fidji Simo, OpenAI’s CEO of Applications, speaking at the launch briefing, referred to her time running Instacart. “We spent months integrating each of the ones that we selected. We didn’t even get what we actually wanted, because each tool was good for one use case, but they weren’t integrated or talking to one another, so we were just reinforcing silos on silos.”
The results OpenAI cites from early deployments include a global investment firm that used Frontier agents in its sales process to free up more than 90% of the time salespeople previously spent on administrative tasks. A technology customer reported saving 1,500 hours a month in product development. At a major manufacturer, agents compressed a production optimisation process from six weeks to a single day.
Frontier manages agents built by OpenAI, by in-house enterprise teams, and by third-party providers. That openness is both a design principle and a positioning move: it makes Frontier harder to dismiss on grounds of vendor lock-in and expands the surface area it can govern.
The seat-licence problem
The deeper concern for incumbents is structural. The per-seat licence model that has made SaaS enormously profitable assumes that software use maps to headcount. If an AI agent handles a workflow that previously required a human employee logging into Salesforce, the justification for that seat licence weakens. Fortune described market fears that models like Frontier could render SaaS software “invisible” and consequently less valuable.
Salesforce’s stock has declined more than 27% this year, a fall analysts attribute more to fears of agentic AI disruption than to any weakness in the company’s underlying financials. Revenue reached $11.2 billion in the most recent quarter, Agentforce’s annual recurring revenue hit $800 million, and the company closed 29,000 Agentforce deals. The stock fell after guidance came in below Wall Street’s expectations.
The incumbents are not standing still. Salesforce has introduced what it calls the Agentic Enterprise License Agreement, a fixed-price, all-you-can-eat model for Agentforce that attempts to make consumption more predictable for enterprise buyers.
ServiceNow has moved to consumption-based pricing for some of its AI agent offerings, and in January signed a multiyear agreement with OpenAI to embed frontier-model capabilities directly into its platform. Microsoft has introduced consumption-based pricing alongside its per-user model for Copilot Studio.
These pricing pivots signal that the incumbents understand the seat-licence model cannot survive agentic AI unchanged. The question is whether repricing is enough, or whether the architecture itself needs to change.
Two ideas of where the intelligence layer should sit
Should AI agents live inside systems of record, or above them? Salesforce and ServiceNow are betting on the embedded model, arguing that agents are most effective when they sit closest to the data, and that CIOs will trust governance and compliance controls more readily from vendors already managing their workflows.
Marc Benioff, CEO of Salesforce, has described Agentforce as the “operating system for the agentic enterprise.” ServiceNow positions its AI Control Tower as a centralised governance layer for all agents, regardless of origin.
OpenAI, and to a similar degree, Anthropic with Claude Cowork, is betting on the overlay model. Frontier sits above existing systems, using open standards to connect them. The pitch is that enterprises should not have to re-platform to get production-grade agents running in their operations.
Both arguments have merit, and enterprises evaluating these platforms will find genuine trade-offs. The embedded approach offers tighter data control and faster time to value in a known ecosystem. The overlay approach offers flexibility and avoids the problem of agents that can only see one vendor’s data.
What the incumbents have that OpenAI does not is decades of institutional trust and existing contracts. What OpenAI has is a model-capability advantage and an argument that it can run the intelligence layer across the whole enterprise.
Frontier is currently available to a limited set of customers, with broader availability expected over the coming months. Pricing has not been disclosed, with OpenAI directing interested organisations to its enterprise sales team.
Many large enterprises run Salesforce, ServiceNow, and Microsoft infrastructure simultaneously. The immediate question is whether Frontier becomes an orchestration layer that connects systems, or a platform that displaces them.
OpenAI’s chief revenue officer, Denise Dresser, said: “What’s really missing still for most companies is just a simple way to free the power of agents as teammates that can operate inside the business without the need to rework everything underneath.”
Every platform in this space claims to close the gap. SaaS incumbents have a head start on trust and data. Whether that proves sufficient is the central question for enterprise software through to the end of 2026.
See also: OpenAI’s enterprise push: The hidden story behind AI’s sales race