Alexander Leonhard
The Integration Tax: Walled-Garden Agent Strategies Won't Scale (MxN vs. M+N)

Personio maintains 200+ integrations. Greenhouse has 400+. iCIMS lists 800+.

Every single one is a point-to-point adapter somebody had to scope, build, test, and keep alive. That was fine when the other end was a stable SaaS product with a versioned API and a partnerships team you could email.

Now the other end is an AI agent that shipped last Tuesday, pivots next month, and might not exist by Q3.

The math is about to break. And not just in recruiting.

The startup spree nobody asked for

There are over 100 AI recruiting startups right now. Sourcing agents. Screening agents. Scheduling agents. Matching agents. Interview agents. Reference-check agents. Most of them do roughly the same thing with slightly different wrappers. And every single one wants your API.

If you're an integration engineer: each new agent means onboarding, sandbox access, field mapping, testing, a contract, a fee conversation. Half of them will pivot or shut down within 18 months — leaving you maintaining dead integrations for products that no longer exist.

If you're a recruiter: you're drowning. Another AI tool. Another dashboard. Another vendor claiming to "revolutionize hiring." Who owns the candidate data when three agents touch the same profile? Who's liable when a screening agent rejects someone unfairly? The roles are opaque, the accountability is nonexistent, and the pitch decks all look identical.

All you wanted was to help candidates find jobs faster. Instead you're managing integrations, fees, contracts, and SLAs for a growing stack of tools that can't talk to each other.

"Just vibe-code the adapters"

Sure. AI coding tools make it faster to scaffold an integration. You can stub out a Greenhouse adapter in an afternoon now.

But building was never the expensive part. Maintaining is.

Every ATS updates their API. Fields get deprecated. Auth flows change. Rate limits shift. Your AI-generated adapter breaks silently at 2am and candidates disappear into a void. Multiply that across 200 integrations, half written by an LLM with no context on your internal data model.

AI coding doesn't eliminate integration debt. It lets you accumulate it faster.

And that's just code. You still manage the relationship overhead — partnership agreements, sandbox environments, webhook registrations, revalidation cycles (iCIMS requires revalidation every 90 days), version migrations. Every integration partner is a mini vendor relationship. At 800+, your integration team isn't engineering anymore. It's account management.

The protocol layer is fractured

MCP gives agents access to tools and data. A2A lets agents discover each other and coordinate. Both are real, both are gaining traction. Workday built an Agent Gateway on MCP + A2A — 15 launch partners including Paradox and Microsoft. It's well-designed.

But here's what these protocols don't do: they don't define what a "talent profile" looks like. Or a "job requirement." Or a "hiring transaction." They're transport and coordination layers — they move messages between agents. They don't know what hiring is.

Think of it this way: MCP and A2A are TCP/IP. What's missing is HTTP — the domain-specific protocol that makes the transport useful for a particular industry.

Visa understood this for payments. They didn't just adopt generic networking protocols and call it done. They built a domain-specific transaction layer on top — standardized what a "payment" looks like, how authorization works, how settlement happens, how disputes get resolved. Every merchant and every bank connects once to a shared protocol. That's why your card works at any terminal on the planet.

Hiring has no equivalent. Neither does real estate. Or legal services. Or logistics. Every industry that's about to be flooded with AI agents faces the same structural gap: generic agent protocols exist, but the domain-specific transaction layers that make them useful don't.

The token economics nobody's talking about

Here's where it gets concrete. When an AI agent receives a PDF resume, it has to burn tokens parsing unstructured text into something it can reason about. Every agent does this independently. Every time. For every candidate.

Now imagine a structured talent object — 40+ fields covering skills, credentials, availability, compensation, work authorization — in a machine-first format that any agent can evaluate against constraints directly. No parsing. No interpretation. No guessing what "experienced with cloud platforms" means.
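To make that concrete, here's a minimal sketch of what "evaluate against constraints directly" could look like. The field names and the matching logic are illustrative assumptions — a toy slice of a hypothetical schema, not an existing standard:

```python
from dataclasses import dataclass, field

@dataclass
class TalentProfile:
    # A tiny slice of the 40+ fields such a schema might carry
    skills: set[str] = field(default_factory=set)
    years_experience: int = 0
    work_authorization: str = "none"   # e.g. "eu", "us", "none"
    min_compensation: int = 0          # annual, in a shared currency unit
    available_from: str = ""           # ISO 8601 date

@dataclass
class JobRequirement:
    required_skills: set[str]
    min_years: int
    authorization: str
    max_compensation: int

def matches(profile: TalentProfile, job: JobRequirement) -> bool:
    """Direct constraint evaluation — no parsing, no token burn."""
    return (
        job.required_skills <= profile.skills          # subset check
        and profile.years_experience >= job.min_years
        and profile.work_authorization == job.authorization
        and profile.min_compensation <= job.max_compensation
    )

candidate = TalentProfile(
    skills={"python", "aws"}, years_experience=5,
    work_authorization="eu", min_compensation=70_000,
    available_from="2026-01-01",
)
job = JobRequirement(
    required_skills={"python"}, min_years=3,
    authorization="eu", max_compensation=90_000,
)
print(matches(candidate, job))  # True
```

The point isn't this particular function — it's that the check is a set comparison and three integer comparisons, not an LLM call over a PDF.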

The difference in token consumption between parsing a PDF and reading a structured schema is not marginal. Research shows structured context reduces token usage by 60-90% compared to unstructured input. At agent scale — millions of matching operations — that's the difference between a viable business model and one that bleeds money on inference costs.

Domain-specific schemas aren't just about interoperability. They're about making agent economics work. Without them, every agent in every industry is burning tokens on interpretation instead of doing actual work.

Meanwhile: the foundation is still a PDF

This is what really gets me.

We have MCP. We have A2A. We have billion-dollar acquisitions and agent gateways and unified APIs. A hundred AI startups claim to automate hiring end-to-end.

And candidates still apply with PDFs. A static, unstructured document designed for a human recruiter's desk in 1995. No structured skills data. No machine-readable credentials. No verifiable work history.

Every AI recruiting agent — no matter how sophisticated — starts by OCR-ing a resume and hoping the parser gets it right. The industry is building autonomous agents on top of a format that requires human interpretation. That's not infrastructure. That's duct tape.

And this is the pattern everywhere. Real estate agents work with PDF listings. Legal agents parse PDF contracts. Logistics agents read PDF shipping documents. Every industry that's adopting AI agents is hitting the same wall: the foundational data objects were designed for humans, and nobody has replaced them with something agents can natively transact on.

This isn't just a hiring problem

Recruiting is a good case study because the fragmentation is extreme — 50+ ATS vendors, 100+ agent startups, hard regulatory deadlines (EU AI Act enforcement starts August 2, 2026 — hiring is explicitly classified as high-risk AI). But the structural pattern is universal.

Any industry with fragmented vendors, high-frequency transactions, and autonomous agents needs three things that generic protocols can't provide:

Domain-specific schemas. Machine-first representations of the industry's core objects — what a "talent profile" or a "property listing" or a "freight shipment" actually looks like in structured, evaluable form.

Transaction protocols. Not just data sync — negotiation, settlement, counteroffers, acceptance. The difference between a data pipe and an economy.

Architectural compliance. Immutable audit trails baked into the transaction layer. When regulators classify your industry as high-risk AI (and they will — hiring is just first), bolting compliance on after the fact doesn't work. Research presented at ACM FAccT in 2024 showed that when compliance is optional, companies avoid it. It has to be structural.
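"Immutable audit trail baked in" sounds heavier than it is. At its simplest it's a hash-chained log, where each record commits to the one before it — a sketch of the idea, not any particular regulation's requirement:

```python
import hashlib
import json
import time

def append_event(log: list[dict], event: dict) -> list[dict]:
    """Append an event whose hash commits to the previous entry.
    Tampering with any earlier record breaks every later hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    record = {"event": event, "prev": prev_hash, "ts": time.time()}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return log

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any edit anywhere fails the chain."""
    prev = "genesis"
    for record in log:
        if record["prev"] != prev:
            return False
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

log: list[dict] = []
append_event(log, {"agent": "screener-1", "action": "reject", "candidate": "c-42"})
append_event(log, {"agent": "scheduler-2", "action": "book", "candidate": "c-7"})
print(verify(log))  # True
```

If a screening agent's rejection is in a chain like this, nobody can quietly rewrite it after a complaint lands — that's the "structural" part.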

Rochet and Tirole's 2003 work on two-sided markets formalized why hub topologies (N+M connections) beat mesh (N×M). For 50 vendors and 200 agents: 250 connections instead of 10,000. The economics are clear. But each industry needs its own domain layer to make the hub work. Generic transport isn't enough. Visa proved that for payments. Hiring needs the same. So does every other industry about to drown in agents.
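The arithmetic is worth seeing scale, because the gap widens with every new agent. A toy model (50 vendors held fixed, agent count growing):

```python
def mesh_connections(vendors: int, agents: int) -> int:
    """Point-to-point: every vendor integrates with every agent."""
    return vendors * agents

def hub_connections(vendors: int, agents: int) -> int:
    """Hub: each party connects once to the shared protocol."""
    return vendors + agents

VENDORS = 50
for agents in (50, 100, 200, 400):
    mesh = mesh_connections(VENDORS, agents)
    hub = hub_connections(VENDORS, agents)
    print(f"{agents:>3} agents: mesh={mesh:>6}  hub={hub:>4}")
```

Doubling the agent count doubles the mesh's integration burden but adds a flat increment to the hub's. That's the whole M×N vs. M+N argument in two functions.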

The caveat I'd be dishonest to skip

HR Open Standards has been around for 27 years with limited adoption. "Domain-specific protocol" isn't a new pitch. Most of them die in committee.

The argument — and it's an argument, not a certainty — is that agent adoption changes the dynamics. Agents don't have procurement cycles or integration review boards. If a schema is more efficient, the agent uses it. That compresses adoption timelines from decades to months. Maybe. It's an empirical question.

But the structural need isn't going away. The agent count is growing in every industry. The integration tax is compounding. And every month without shared domain layers, the mesh gets denser and more expensive — while agents burn tokens on interpretation that structured schemas would eliminate.

Whether these layers get built by startups, consortia, or dominant vendors absorbing the problem — I don't know. But the gap is real, it's widening, and it's not unique to hiring.

If you're building agents or maintaining integrations in any industry: what would it take for you to adopt a shared domain protocol? What's the real blocker — technical, commercial, trust? Is it the schema design? The governance? The fact that nobody wants to be first?

I think the answer to that question matters more than any specific solution right now.

