Moving from KTLO to Net-New Delivery: A CTO's Guide to Resource Allocation


CTOs rarely struggle to explain why the roadmap matters. The harder question is why so much of the organization’s engineering effort never reaches it.
In many enterprise environments, the answer is not a lack of ambition or even a lack of budget. It is that a large share of capacity has already been committed elsewhere, to platform upkeep, production support, risk containment, dependency management, aging architecture, compliance work, internal service requests, and the steady stream of operational exceptions that keep the business running but do not expand what the business can do next.
That distinction is what makes the shift from KTLO (keep the lights on) to net-new delivery so difficult. The problem is not merely that teams want to innovate while still carrying maintenance work. The problem is that most organizations do not allocate resources according to the real cost structure of their operating environment.
McKinsey has framed this as the run-versus-change problem. Based on a survey of technology leaders at global companies, its analysis argues that technology leaders will need to rethink how they allocate spend between run activities and change activities to capture value from AI-era innovation, and it describes this tension as a core budget trade-off rather than a side issue.
That framing is useful because it moves the conversation away from generic productivity language. Net-new delivery is not unlocked by asking teams to work faster inside the same system. It is unlocked when leadership stops allowing run pressure to consume the same capacity that future-facing delivery depends on.
The resource problem is rarely a headcount problem in isolation
When product delivery stalls, many organizations default to the most visible explanation. They assume they need more people. Sometimes they do. But headcount alone is an incomplete answer because engineering capacity is not the same thing as the number of engineers employed.
Capacity is shaped by what the system forces those engineers to spend time on.
McKinsey noted in May 2025 that enterprise technology spending in the United States has been growing by 8 percent per year on average since 2022, while labor productivity has grown by close to 2 percent over the same period. The point is not that spending is excessive by default. The point is that increasing investment does not automatically translate into proportionate productive output.
That gap is often what CTOs feel when teams appear fully staffed and still cannot create enough room for strategic initiatives. The organization has people, but too little discretionary capacity. It has delivery plans, but too little stable bandwidth to execute them cleanly.
A more accurate way to read the problem is this:
- Some engineering time preserves current operations.
This includes incident response, application support, patching, compliance remediation, environment care, ticket-driven requests, and the broad class of work required to keep existing systems viable.
- Some engineering time pays for complexity already created.
Technical debt, brittle integrations, inconsistent tooling, duplicated workflows, and fragmented ownership structures do not show up as strategy, but they absorb the time strategy needs.
- Only a subset of engineering time can be directed toward net-new capability.
That portion is usually much smaller than portfolio plans assume, especially when allocation is based on nominal team capacity rather than actual available delivery capacity.
This is why the move from KTLO to net-new delivery is not a motivational problem. It is a portfolio accounting problem disguised as a throughput problem.
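That accounting can be made concrete with a back-of-envelope model. The sketch below is illustrative only: the function name and the percentages are assumptions for demonstration, not benchmarks from any of the sources cited here.

```python
# Back-of-envelope capacity accounting. Figures are illustrative
# assumptions, not benchmarks from McKinsey or DORA.

def net_new_capacity(engineers: int,
                     run_share: float,
                     debt_tax: float,
                     interrupt_share: float) -> float:
    """Estimate engineer-equivalents actually available for net-new work.

    run_share       -- fraction of time preserving current operations (KTLO)
    debt_tax        -- fraction lost to technical debt and rework
    interrupt_share -- fraction lost to unplanned support interrupts
    """
    available = 1.0 - run_share - debt_tax - interrupt_share
    return engineers * max(available, 0.0)  # clamp: capacity cannot go negative

# A 50-engineer org that plans as though all 50 build new capability:
capacity = net_new_capacity(50, run_share=0.45, debt_tax=0.15, interrupt_share=0.10)
print(f"{capacity:.1f}")  # about 15 of 50 engineer-equivalents
```

Even with moderate assumed drags, less than a third of nominal headcount is left for net-new delivery, which is exactly the gap between planned and actual capacity the text describes.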
KTLO grows when operating friction is funded by delivery capacity
KTLO work becomes unmanageable when leadership treats it as background noise instead of as a first-class consumer of strategic capacity.
That usually happens gradually. A support burden increases. A platform requires more manual upkeep than planned. A legacy integration becomes fragile. Risk controls add approval steps. Technical debt begins to tax every release. Each item looks individually reasonable. Together, they create a persistent draw on delivery.
McKinsey’s 2024 banking technology analysis is especially useful here because it connects technology spend, productivity pressure, and the hidden costs of new build decisions. It notes that in banking, global technology spending had been increasing 9 percent a year on average, outpacing revenue growth of 4 percent, while labor productivity at US banks had been falling 0.3 percent annually on average since 2010. It also warns that standard ROI calculations often fail to acknowledge maintenance of newly built applications, increased technical debt from complexity created, and future infrastructure expenses, all of which can outstrip the expected benefits of building one more application.
That observation matters beyond banking. It clarifies why net-new delivery often fails to materialize even after a launch, modernization project, or tooling investment. New work can increase future KTLO load if the organization does not account for the run costs it creates.
Where KTLO usually hides in plain sight
The most damaging KTLO load is often the work nobody explicitly budgets as KTLO. It tends to sit in categories that look temporary, tactical, or too small to escalate.
- Exception handling disguised as routine support.
Engineers spend hours each week resolving edge cases, production quirks, and one-off internal requests that should have been reduced through better architecture, automation, or ownership design.
- Manual operations inserted into supposedly automated systems.
Teams inherit repeated tasks around deployments, data fixes, permission changes, reconciliation, or reporting because the last implementation never fully eliminated the human step.
- Release drag created by technical debt.
McKinsey notes that companies often pay an additional 10 to 20 percent on top of project costs to address tech debt, creating a drag on productivity. That cost is not abstract. It reappears in slower delivery, more rework, and less room for new initiatives.
- Coordination overhead treated as normal engineering work.
The 2025 Microsoft WorkLab special report describes an “infinite workday” built from overloaded inboxes, fragmented priorities, and constant communication flow. Its data shows the average worker receives 117 emails daily, and that mass emails with 20 or more recipients rose 7 percent over the prior year. This is broader than engineering, but the implication for technology teams is clear: fragmented operating environments consume attention before delivery work even begins.
As KTLO grows, delivery capacity is not usually re-estimated with enough honesty. Teams are still assigned new commitments as though the operational baseline had not changed. That is the moment where roadmaps become fiction.
Net-new delivery requires protected capacity, not optimistic planning
The move from KTLO to net-new delivery begins when leaders stop assuming that strategic work will somehow fit between operating demands.
Protected capacity is not simply a percentage written into quarterly planning. It is capacity that has been made real through structural choices: fewer manual burdens, clearer ownership, reduced support noise, lower dependency drag, and explicit refusal to let every operational request crowd out product work.
This is where high-performing delivery organizations differ. They do not only plan for innovation. They create the conditions under which innovation work can survive contact with the rest of the portfolio.
DORA’s 2024 report remains useful here because it keeps the discussion anchored in delivery system performance rather than output theater. It reiterates that DORA’s four key metrics remain the industry standard for software delivery performance and that performance must be assessed across workflow, team, and product dimensions.
The strategic implication is that CTOs need to allocate against capacity leakage, not only against project demand.
A more disciplined allocation model usually includes several moves:
- Separate operating load from strategic delivery in planning language.
If KTLO is buried inside team-level estimates, leadership will never see how much portfolio capacity is already spoken for.
- Fund reduction work as capacity creation, not as maintenance charity.
Paying down technical debt, consolidating platforms, eliminating manual workflows, or redesigning brittle services should be treated as actions that buy future delivery space.
- Protect a net-new lane with explicit trade-off rules.
The point is not to ignore production reality. It is to define what kinds of work are allowed to interrupt strategic delivery and what kinds must be queued, routed, or solved elsewhere.
- Reallocate ownership where support demand is chronic.
If the same teams are repeatedly carrying both the operational burden and the product roadmap, the organization is using its most expensive engineering talent as a shock absorber.
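Explicit trade-off rules of this kind can be written down as a simple routing policy. The sketch below is one hypothetical version: the severity labels and routing targets are assumptions for illustration, not a prescribed standard, and a real policy would be negotiated with the teams involved.

```python
# Hypothetical interrupt policy for a protected net-new lane.
# Severity levels and routing targets are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Request:
    severity: str   # "sev1", "sev2", or "routine"
    source: str     # "production" or "internal"

def route(request: Request) -> str:
    """Decide whether a request may interrupt strategic delivery."""
    if request.severity == "sev1":
        return "interrupt"           # production down: always interrupts
    if request.severity == "sev2" and request.source == "production":
        return "on_call_rotation"    # handled without touching the lane
    return "queued_backlog"          # routine work waits for planning

print(route(Request("routine", "internal")))  # prints queued_backlog
```

The value of encoding the policy, even this crudely, is that it forces leadership to state in advance which work is allowed to breach the net-new lane, rather than deciding under pressure each time.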
What leadership should measure before claiming delivery capacity has improved
A temporary reduction in incident count or a brief increase in story completion does not necessarily mean the organization has created real net-new capacity. Leadership needs evidence that the system now preserves strategic delivery time more reliably than before.
Useful indicators include:
- Share of engineering time spent on run versus change.
This is the most basic allocation signal, and many organizations still track it too loosely.
- Interrupt load on product and platform teams.
A team that is constantly redirected by unplanned support work does not truly own a roadmap, regardless of what planning documents say.
- Lead-time trend for net-new initiatives.
If new work still moves slowly despite staffing or tooling investments, the operating burden likely remains unresolved.
- Recurring support themes by application, domain, or dependency.
Patterns in KTLO work often reveal where structural fixes will free the most capacity.
- Post-launch run-cost growth.
If each new capability expands ongoing support and maintenance requirements faster than it creates value, the organization is adding future KTLO while calling it innovation.
The important point is that net-new delivery should be measured as preserved capacity and sustained flow, not as a temporary burst of project activity.
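The first of those indicators, the run-versus-change share, can be computed from any time-tracking or ticketing data that tags work by category. The sketch below assumes hypothetical category names; the point is the measurement shape, not the taxonomy.

```python
# Run-versus-change allocation signal from tagged time records.
# Category names are hypothetical; map them to your own tracking data.

from collections import Counter

RUN = {"incident", "support", "patching", "compliance", "manual_ops"}
CHANGE = {"feature", "platform_improvement", "debt_paydown"}

def run_share(entries):
    """entries: iterable of (category, hours) tuples; returns run fraction."""
    totals = Counter()
    for category, hours in entries:
        if category in RUN:
            totals["run"] += hours
        elif category in CHANGE:
            totals["change"] += hours
    spent = totals["run"] + totals["change"]
    return totals["run"] / spent if spent else 0.0

week = [("incident", 12), ("feature", 40), ("patching", 6),
        ("support", 10), ("debt_paydown", 8)]
print(f"run share: {run_share(week):.0%}")  # prints run share: 37%
```

Tracked weekly per team, the trend of this single number is usually more informative than any one snapshot: a rising run share is the early warning that KTLO is quietly repossessing delivery capacity.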
The allocation decision changes again once AI enters the portfolio
AI has made the KTLO versus net-new question more urgent, not less.
Many leadership teams view AI as a force multiplier that can help engineering organizations do more with the same resources. Sometimes that is true. But AI does not remove the underlying allocation challenge. It changes the economics of it.
The run-versus-change conundrum becomes sharper because AI investments compete for attention with existing operational burdens, and because value capture depends on freeing enough change capacity to act on them.
There is also a more tactical risk. AI layered onto unstable delivery systems can accelerate the wrong things.
DORA's 2024 findings showed that while higher AI adoption was associated with improvements in documentation quality, code quality, and code review speed, it was also associated with lower delivery throughput and lower delivery stability. The lesson is not that AI is counterproductive. It is that local gains do not guarantee system-level gains when the surrounding operating model is still constrained.
That has a direct implication for CTO resource allocation:
- AI should not be funded as a separate innovation layer detached from core constraints.
If teams remain trapped in support load, debt drag, and fragmented workflow, AI will sit on top of unresolved capacity problems.
- The first AI investment may not be the most visible AI investment.
In some organizations, the better move is to direct resources into the data, platform, or workflow layer that allows future AI delivery to work cleanly.
- Role choices become allocation choices.
The decision on which AI role to hire first frames a strategic trade-off around capacity, leverage, and organizational maturity. The wrong first AI hire can create blind spots, while the right one acts as a force multiplier only if it matches the actual bottleneck.
This is why the shift from KTLO to net-new delivery cannot be solved by enthusiasm for new initiatives alone. A CTO’s allocation model has to answer a more disciplined question: what portion of engineering effort is preserving the present, what portion is paying for past complexity, and what portion is truly building the next source of value.
Until that allocation becomes visible, most roadmaps will continue to overstate how much net-new delivery the organization can actually afford.