
The promise of AI is that we can accelerate and automate large swaths of undifferentiated work by augmenting human effort or creating agentic AI “workers” who can complete tasks on their own.
To realize that vision, there is a lot of work ahead to take agents from the lab to production-grade quality and safety. We will also have to pay attention to the ways in which the needs of AI agents go beyond those of traditional software. One of the most paradoxical: agentic AI thrives on interoperability, yet tooling vendors often want to lock you into their own system.
While closed ecosystems may offer short-term convenience, they’re fundamentally incompatible with how modern engineering teams, and now AI agents, actually work. Closed ecosystems tie organizations to a single vendor stack, limiting interoperability and cutting off the real-time visibility agents need to perform accurately. The result is fragmentation, slower innovation, and reduced agility. Yet many AI vendors still promote closed ecosystems under the banner of control or security, asking teams to trade flexibility for confinement.
Open ecosystems, meanwhile, foster transparency and interoperability across the tools engineers already rely on, like GitHub and ServiceNow. Developers are far more likely to embrace AI when it fits seamlessly into their existing workflows, rather than forcing them to toggle between disconnected systems. Beyond convenience, open ecosystems make agentic AI more powerful: they allow agents to gather context across the entire tech stack, collaborate with other systems, and act with greater accuracy. With agent-based AI projected to automate tasks worth over $6 trillion by 2030, vendors would be wise to prioritize the open ecosystems that make this kind of cross-stack data gathering and action possible.
In the tech industry specifically, leaders say that open ecosystems are critical to implementing new and business-critical innovations like AI. More than half of those already deploying such technology believe open ecosystems will become standard within two years, according to a Salesforce survey.
Interoperability standards like the Model Context Protocol (MCP) have fueled the shift toward open ecosystems. MCP, a standard developed by Anthropic in 2024, enables software engineers to build secure, two-way connections between their data sources and AI-powered tools. By reducing vendor lock-in and enabling composable AI architectures, MCP accelerates the development of a flexible, open AI landscape.
Additionally, MCP simplifies how large language models connect to external data, tools, and applications. By democratizing access to AI, it enables users to build new workflows and tools more easily. The ultimate goal is agent-to-agent collaboration: agents that can solve complex problems together and accelerate human productivity. When intelligent agents can freely interoperate, innovation thrives.
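As a rough sketch of what that looks like in practice, the example below uses the FastMCP helper from the official Python SDK to expose a single tool that any MCP-capable client (an IDE assistant, a coding agent) can discover and call. The server name, tool, and ticket data are hypothetical, and the SDK's surface may differ between releases.

```python
# Minimal, hypothetical MCP server exposing one tool to AI clients.
# Assumes the official Python SDK ("mcp" package); names and data are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-lookup")  # hypothetical server name


@mcp.tool()
def get_ticket_status(ticket_id: str) -> str:
    """Return the status of an internal ticket (stubbed for illustration)."""
    fake_tickets = {"TCK-101": "open", "TCK-102": "resolved"}
    return fake_tickets.get(ticket_id, "unknown")


if __name__ == "__main__":
    # Serves the tool over stdio so any MCP-capable client can connect to it.
    mcp.run()
```

The point is less the specific SDK than the shape of the contract: the server advertises what it can do, and any client that speaks MCP can use it without bespoke integration work.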
Compared to the complex (and full-featured) RPC protocols of the past, MCP is extremely simple, but it has still driven a huge change because it standardized how different pieces work together and did so in a way anyone could implement. Where else do we need this pattern? Perhaps semi-structured documentation, to make it easier for agents to keep up with changes. Or going beyond development into production by adding observability.
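To make "extremely simple" concrete: MCP is built on JSON-RPC 2.0, so invoking a tool is a single `tools/call` request. The sketch below shows roughly what that request looks like on the wire; the tool name and arguments are hypothetical, and the spec is the authority on exact fields.

```python
import json

# Roughly the shape of an MCP tool invocation (JSON-RPC 2.0).
# Tool name and arguments are hypothetical; see the MCP spec for exact fields.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_ticket_status",           # a tool advertised via tools/list
        "arguments": {"ticket_id": "TCK-101"},
    },
}
print(json.dumps(request, indent=2))
```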
After all, like interoperability, observability is also key to ensuring agents are trustworthy and efficient. When agents act autonomously and learn from their environments, teams need a clear line of sight into how those decisions are made. Without visibility into these actions, it’s difficult to notice when something goes wrong or when an agent is behaving in ways that aren’t aligned with an organization’s goals. Real-time agentic AI monitoring allows teams to track an agent’s actions, understand its decision-making processes, and identify any patterns that could indicate biases or errors.
Observability builds trust with developers and external users by showing the logic behind an agent’s decisions and providing a clear activity trail, reassuring people the system isn’t veering off course. When integrations between observability platforms and engineering tools are seamless, observability data can be brought directly into agentic applications like autonomous coding agents. OpenTelemetry, an open source observability framework hosted by the nonprofit Cloud Native Computing Foundation, also ensures compatibility across implementations. It’s beneficial for agentic AI, especially since it promotes consistent data collection across applications and languages. Without this standardized data, AI agents would fly blind, unable to act on complex IT issues.
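A minimal sketch of what that instrumentation might look like: the snippet below wraps a hypothetical agent planning step in an OpenTelemetry span using the Python SDK, exporting to the console to stay self-contained. In production you would swap in an exporter pointed at your observability backend, and the attribute names here are assumptions rather than an established schema.

```python
# Minimal sketch: tracing one hypothetical agent step with OpenTelemetry's Python SDK.
# The attribute names and the agent step are illustrative, not a standard schema.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Console exporter keeps the example runnable on its own; use an OTLP exporter
# to send spans to a real backend.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("agent-demo")


def plan_next_action(goal: str) -> str:
    """Stand-in for an agent's reasoning step."""
    return f"look up open tickets related to '{goal}'"


with tracer.start_as_current_span("agent.plan") as span:
    span.set_attribute("agent.goal", "reduce incident backlog")  # hypothetical attribute
    action = plan_next_action("reduce incident backlog")
    span.set_attribute("agent.chosen_action", action)
```

Because the span data follows an open standard, the same trail can feed dashboards, audits, and even other agents without being locked to one vendor's format.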
Through the end of this year, an estimated 30% of GenAI projects will be abandoned after proof of concept due to poor data quality, inadequate risk controls, escalating costs, or unclear business value, according to a recent Gartner report. Agentic AI can avoid a similar fate if developers design agents that work across tools, train them responsibly, and use observability to make sure they operate smoothly.
Open ecosystems for agentic AI are essential in an interconnected, composable future. Embracing them fuels innovation, strengthens resilience, and ensures seamless data flow across tools and tech stacks. Ultimately, open ecosystems accelerate progress and foster a more inclusive, collaborative digital future.
