
The adoption of AI in software engineering is accelerating rapidly, yet organizations frequently struggle to translate early-stage experimentation into meaningful production results. In a recent SD Times Live! webinar, Will Lytle, chief operating officer of Plandek, said the challenge isn’t with the tools themselves, but in “how they’re applied within the system.” High-performing teams are recognizing that AI gains often get absorbed by system constraints, preventing them from improving delivery outcomes.
The AI Adoption Surge and the Perception Gap
AI adoption across engineering organizations has become nearly universal. Polling data from Plandek shows a significant surge: six months ago, 30% of respondents had rolled out AI across at least half of their engineering teams, but in a poll conducted a month ago, that number had jumped to 93%. Furthermore, 48% of organizations have now deployed AI across 90% or more of their teams, up from 12% six months earlier. The push aims to have engineers, product owners, and product teams all using AI in their respective roles.
Despite this surge in adoption, Lytle pointed out a major disconnect: engineers often feel they are faster, generating code and running tests more efficiently, but this doesn’t consistently translate to organizational speed. In fact, a METR study found that while experienced developers felt they were about 20% faster, a systems-level analysis of their delivery showed they were about 19% slower.
Shifting Bottlenecks: Why AI Gains Are Absorbed
The core issue is that AI does not automatically fix underlying team dynamics or system flaws. “It’s because AI doesn’t fix the team, right? AI really amplifies what’s already there,” Lytle explained.
Historically, bottlenecks often related to engineering capacity, but AI has shifted this constraint. Delivery performance frequently remains flat because the constraints are now located in parts of the system where AI has yet to have a direct influence. Lytle noted that these new constraints are exposed by AI’s accelerating effect: “AI is accelerating how individuals are delivering. But the constraints are now shifting to review cycles, planning, dependencies, ideation as part of the product development life cycle, as well as other elements as part of your continuous delivery and continuous integration ecosystem,” he said.
Measuring Success: The Four Pillars of Productivity
For organizations to drive meaningful change, they must first establish a standardized way to measure productivity. Plandek utilizes a framework called the four pillars of productivity to measure software engineering performance. These pillars are:
- Focus: Ensuring investment and capacity are directed toward things that drive the business forward, such as new revenues or customer satisfaction, while monitoring time spent on support and maintenance.
- Flow: Driving an efficient flow state using metrics like lead time to value, cycle time, and the new throughput and PR quotients introduced in the 2026 benchmarks.
- Predictability: Measuring reliability and consistency, ensuring delivery aligns with customer expectations using metrics such as sprint capacity accuracy and velocity volatility.
- Quality: Focusing on building a quality product, and critically, driving fast feedback loops to minimize the time a bug or defect spends in the backlog. Addressing quality correlates directly with optimizing time spent on support and maintenance.
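To make the flow and predictability metrics above concrete, here is a minimal sketch of how two of them, cycle time and velocity volatility, might be computed from ticket and sprint data. The formulas are common industry interpretations (elapsed days per ticket, and the coefficient of variation of sprint velocity), not Plandek’s published definitions, and all data values are invented for illustration.

```python
from datetime import datetime
from statistics import mean, stdev

def cycle_time_days(started: str, finished: str) -> int:
    """Cycle time: elapsed days between when work started and when it finished."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(finished, fmt) - datetime.strptime(started, fmt)).days

def velocity_volatility(velocities: list[float]) -> float:
    """Velocity volatility as the coefficient of variation of sprint velocity
    (sample standard deviation divided by the mean). Lower is more predictable."""
    return stdev(velocities) / mean(velocities)

# Hypothetical ticket start/finish dates and sprint velocities (story points).
tickets = [("2025-01-02", "2025-01-06"), ("2025-01-03", "2025-01-10")]
cycle_times = [cycle_time_days(s, f) for s, f in tickets]
print(mean(cycle_times))                          # average cycle time: 5.5 days

sprints = [30, 42, 28, 35]
print(round(velocity_volatility(sprints), 2))     # 0.18
```

A team tracking these numbers over time could then watch whether AI adoption actually moves them, rather than relying on how fast individual engineers feel.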
Tackling System Constraints
Identifying bottlenecks requires combining quantitative and qualitative data. Quantitative data (cycle time, KPIs) reveals where the system is slowing down, while qualitative signals (developer frustration, stakeholder feedback) explain why.
Lytle outlined seven common categories of constraints, emphasizing that the top barriers have evolved. They are governance and compliance, workflow and process, codebase and architecture, tooling, documentation, training, and culture.
The most impactful change over the last six months is the rise of governance and compliance and workflow and process as the leading constraint categories, reflecting increased regulatory demands and complex processes. Additionally, codebase and architecture constraints have risen sharply, as modern AI tools expose the difficulty of working within legacy or non-modularized codebases.
Ultimately, Lytle advises organizations to change their operating model outright rather than running slow, multi-year change management programs. The focus should be on driving speed and pace, with a tight feedback loop to quickly evaluate the impact of each change.
“I would say lead with the change, rather than trying to change manage everything over a 1-year, 2-year, 3-year program,” Lytle concluded.
Watch the full webinar here.




