Kyndryl Just Bet Everything on Autonomous IT Agents -- Dedicated Teams Are the Trust Layer They Left Out

Kyndryl launched its "Agentic Service Management" offering on April 2nd. The pitch: move enterprises from ticket-based IT operations to autonomous agent workflows -- AI-native infrastructure managed by fleets of software agents instead of humans filing tickets.

The interesting part isn't the product. It's what they launched alongside it.

A separate add-on service called "Agentic AI Digital Trust" -- specifically for governing the autonomous agents they just told you to deploy. The governance layer isn't built in. It's upsold.

That gap between "deploy agents" and "control agents" is where most businesses will get hurt.

The Numbers Nobody's Marketing Around

McKinsey's 2026 AI Trust Survey found that 80% of organizations have already encountered risky behavior from AI agents. Only 28% are satisfied with their existing guardrails. That means roughly three out of four companies running agents don't trust their own ability to control what those agents are doing.

It gets worse at the infrastructure level. 57% of IT security leaders lack confidence in the accuracy and explainability of agentic AI outputs. 59% haven't established mature responsible-usage guidelines. These aren't theoretical concerns. These are people running production systems admitting they don't have control.

Gartner projects 40% of agentic AI projects will fail by 2027 from cost overruns and poor risk controls. Carnegie Mellon benchmarks show leading agents complete only 30-35% of multi-step tasks successfully. And Monte Carlo Data reports that 91% of data engineering teams experience data quality incidents every single week.

Agents making autonomous decisions on bad data, in environments nobody fully understands, with governance available as a premium add-on. That's the current state of agentic infrastructure.

Why Governance Can't Be an Add-On

Here's a real example. An asset management firm deployed an AI agent for automated compliance reporting in 2025. The agent hit inconsistent entity naming conventions across their data sources and started generating reports that double-counted some exposures while missing others entirely. It took weeks of manual remediation to untangle.

The agent worked exactly as designed. It processed data and produced reports. But it didn't understand the business context behind those naming conventions. It didn't know that "Smith Capital" and "Smith Capital Partners LLC" were the same entity, or that they weren't. That kind of judgment requires someone who knows the stack, knows the data, and knows what "wrong" looks like in that specific environment.
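The failure mode is easy to reproduce. Here is a minimal sketch (not the firm's actual pipeline, and the suffix list is an assumption for illustration) of why raw-name aggregation splits one entity into several, and how a crude canonicalization step at least surfaces the candidate duplicate -- while leaving the actual same-entity-or-not judgment to a human who knows the book:

```python
import re

# Legal-suffix noise that commonly splits one entity across data sources.
# This list is illustrative; a real one is business-specific.
SUFFIXES = {"llc", "lp", "ltd", "inc", "partners", "capital"}

def canonical_key(name: str) -> str:
    """Lowercase, strip punctuation and legal suffixes to get a crude match key."""
    tokens = re.sub(r"[^a-z0-9 ]", "", name.lower()).split()
    core = [t for t in tokens if t not in SUFFIXES]
    return " ".join(core) if core else " ".join(tokens)

def total_exposure(rows: list[tuple[str, float]]) -> dict[str, float]:
    """Aggregate exposures by canonical key instead of raw name string."""
    totals: dict[str, float] = {}
    for name, amount in rows:
        key = canonical_key(name)
        totals[key] = totals.get(key, 0.0) + amount
    return totals

rows = [("Smith Capital", 1_000_000.0),
        ("Smith Capital Partners LLC", 250_000.0),
        ("Jones Holdings", 500_000.0)]

# Raw names yield three "entities"; canonical keys collapse the first two.
# Whether they really ARE one entity still needs a human who knows the data.
print(total_exposure(rows))
```

The point of the sketch is what it can't do: the code can flag that two names collapse to the same key, but only someone who knows the environment can say whether merging them is correct.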

Research from the Partnership on AI makes this even more concrete. In simulated multi-agent systems, a single compromised agent poisoned 87% of downstream decision-making within four hours. Traditional incident response couldn't contain the spread because the failure cascaded through agent-to-agent interactions that no human was monitoring in real time.
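The cascade mechanism itself is simple graph reachability: each agent trusts upstream output, so one compromised node taints everything downstream of it. This toy model (purely illustrative, with a hypothetical pipeline; it does not reproduce the study's methodology) shows why per-agent monitoring misses the spread:

```python
from collections import deque

# Hypothetical dependency graph: agent -> agents that consume its output.
DOWNSTREAM = {
    "ingest":   ["clean", "enrich"],
    "clean":    ["report"],
    "enrich":   ["report", "alerting"],
    "report":   ["archive"],
    "alerting": [],
    "archive":  [],
}

def poisoned(start: str) -> set[str]:
    """All agents whose decisions are tainted once `start` is compromised."""
    tainted, queue = {start}, deque([start])
    while queue:
        for nxt in DOWNSTREAM[queue.popleft()]:
            if nxt not in tainted:
                tainted.add(nxt)
                queue.append(nxt)
    return tainted

# Compromising the single ingest agent taints all six agents;
# compromising a leaf like alerting taints only itself.
print(poisoned("ingest"))
print(poisoned("alerting"))
```

Each individual agent in the tainted set still looks healthy to a single-agent monitor -- it is processing inputs and producing outputs as designed. The corruption lives in the edges, which is exactly where nobody was watching.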

The fix for this isn't a governance assessment you buy after deployment. It's people embedded in your environment who understand context before something breaks.

The Accountability Question Nobody's Answering

"Liability follows control" is becoming the regulatory consensus. The UK has taken this position explicitly, and it's consistent with the EU AI Act's Article 14 requirements. If you deploy autonomous agents, you own what they do.

OWASP released a 2026 Top 10 for Agentic Applications with threat categories that didn't exist a year ago: Agent Goal Hijack, Tool Misuse, Identity and Privilege Abuse, Supply Chain Compromise through malicious tool servers impersonating trusted integrations.

These aren't edge cases. They're the natural consequences of giving software agents autonomous access to production infrastructure without humans who understand what normal looks like in your specific environment.

Deloitte found that only 11% of organizations are actively using agentic AI in production. 38% are piloting. The hype is running years ahead of the reality. And the companies jumping in fastest are often the ones with the least governance in place.

Dedicated Teams Are the Missing Layer

The industry is splitting into two camps. One camp says automate everything, then buy governance tools to manage the automation. The other says start with people who understand your business, then automate the things that make sense to automate.

LTFI sits in the second camp.

A dedicated technology team isn't a help desk that responds to tickets. It's engineers embedded in your stack who know your infrastructure, your data, your business logic, and your risk tolerance. When an agent produces output that looks technically correct but is contextually wrong -- like that compliance report with double-counted exposures -- a dedicated team catches it because they know what "right" looks like for your specific situation.

This isn't about being anti-automation. LTFI runs 40+ custom internal tools and automated pipelines across its managed infrastructure. Automation is the point. But automation without accountability is just faster failure.

Every LTFI client gets isolated infrastructure with 30+ automated verification checks per deployment. Hardened servers with automated security monitoring. A team that maintains, updates, and understands the systems they built. That's the difference between "we deployed agents" and "we know what our agents are doing."
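The shape of a pre-deployment verification gate is worth making concrete. This is a hedged sketch, not LTFI's actual checks -- the check names and config format here are hypothetical -- but it shows the pattern: independent checks, every failure named, and the deploy blocked unless all of them pass:

```python
from typing import Callable

# Illustrative checks against a hypothetical deployment config dict.
def check_image_pinned(cfg: dict) -> bool:
    return cfg.get("image_tag") not in (None, "latest")  # no floating tags

def check_no_build_secrets(cfg: dict) -> bool:
    return not any(k.lower().endswith("_secret") for k in cfg.get("build_args", {}))

def check_healthcheck_defined(cfg: dict) -> bool:
    return "healthcheck" in cfg

CHECKS: dict[str, Callable[[dict], bool]] = {
    "image_pinned": check_image_pinned,
    "no_build_secrets": check_no_build_secrets,
    "healthcheck_defined": check_healthcheck_defined,
}

def run_gate(cfg: dict) -> tuple[bool, list[str]]:
    """Return (deploy_allowed, names_of_failed_checks)."""
    failed = [name for name, fn in CHECKS.items() if not fn(cfg)]
    return (not failed, failed)

cfg = {"image_tag": "v2.3.1", "build_args": {}, "healthcheck": "/status"}
print(run_gate(cfg))  # a fully passing deployment
```

The design choice that matters is that the gate reports *which* checks failed, not just a pass/fail bit -- because the value of the gate is handing a named failure to a human who understands the system behind it.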

The 80% of organizations encountering risky agent behavior aren't failing because their agents are broken. They're failing because nobody in the loop understands the environment well enough to know when an agent's output is wrong.

What This Means for Your Business

KPMG projects that labor-led outsourcing will drop from 55% to 37% of delivery over the next two years, with software-based delivery jumping from 14% to 30%. The shift toward automation is real and it's accelerating.

But the model that actually works isn't full autonomy. It's dedicated humans plus intelligent automation. People who know your stack, running tools that extend their reach. Not the other way around.

If your current technology partner is selling you autonomous agents and governance as separate line items, ask yourself: who's accountable when the agent makes the wrong call at 2 AM?

If the answer is "we'll figure it out," you don't have a technology partner. You have a vendor.

Talk to us about your infrastructure.