Generative AI · February 26, 2026

Trace raises $3M to fix the part of AI agents enterprises keep getting wrong

Trace, a London startup from Y Combinator’s summer 2025 batch, has raised a $3 million seed round to tackle a problem enterprise AI teams already know well. Models keep improving. Adoption still drags.

The pitch is simple enough. Agents fail inside companies because they lack usable context. They can draft, summarize, and call tools, but they don’t know who owns what, which system is authoritative, what approvals are required, or what data they’re allowed to touch. Trace wants to sit above that sprawl with a knowledge-graph-based orchestration layer that maps the company, breaks work into steps, and routes those steps to software agents or humans.

Investors include Y Combinator, Zeno Ventures, Transpose Platform Management, Goodwater Capital, Formosa Capital, and WeFunder, plus angel backers Benjamin Bryant and Kevin Moore.

It’s an early bet, but the thesis holds up. Enterprise agent projects rarely fail because the LLM can’t write copy or generate code. They fail because the surrounding system is messy.

Context is still the hard part

Trace is pushing a phrase that’s more useful than “prompt engineering”: context engineering.

That fits the actual problem. Most enterprise agent demos are still a model with a few API calls taped on. Ask one to “launch a microsite” and it may produce a clean plan that falls apart on contact with the company. Which design system should it use? Which repo template is approved? Where are the current brand docs? Who signs off on legal copy? What happens when marketing owns the page but platform engineering owns the deployment pipeline?

A general-purpose agent won’t know any of that unless you feed it in. Doing that with giant prompts goes nowhere fast. Context windows are bigger now, but stuffing an internal wiki, Slack history, Jira backlog, and access-control policy into one model input is still expensive, noisy, and unreliable.

Trace’s answer is to build a graph of the organization.

That graph would include entities like Person, Team, Project, Service, Dataset, Document, and Runbook, plus relationships such as owns, depends_on, assigned_to, uses, and produces. That’s not flashy. It is, however, the kind of systems modeling enterprise AI has badly needed.
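A minimal version of that schema can be sketched in Python. This is illustrative only; Trace hasn't published its data model. The entity and relationship names come from the article's examples, and everything else below is invented:

```python
from dataclasses import dataclass, field

# Entity and relationship types named in the article; the rest is illustrative.
ENTITY_TYPES = {"Person", "Team", "Project", "Service", "Dataset", "Document", "Runbook"}
RELATION_TYPES = {"owns", "depends_on", "assigned_to", "uses", "produces"}

@dataclass
class Node:
    id: str
    type: str                       # one of ENTITY_TYPES
    props: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: str                        # source node id
    rel: str                        # one of RELATION_TYPES
    dst: str                        # destination node id

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: Node):
        assert node.type in ENTITY_TYPES, f"unknown entity type: {node.type}"
        self.nodes[node.id] = node

    def add_edge(self, src: str, rel: str, dst: str):
        assert rel in RELATION_TYPES, f"unknown relation: {rel}"
        assert src in self.nodes and dst in self.nodes
        self.edges.append(Edge(src, rel, dst))

    def neighbors(self, src: str, rel: str):
        """Deterministic traversal: what does `src` relate to via `rel`?"""
        return [self.nodes[e.dst] for e in self.edges if e.src == src and e.rel == rel]

# Example: marketing owns the microsite project, which depends on the deploy service.
g = Graph()
g.add_node(Node("team:marketing", "Team"))
g.add_node(Node("proj:microsite", "Project"))
g.add_node(Node("svc:deploy", "Service", {"owner": "platform-eng"}))
g.add_edge("team:marketing", "owns", "proj:microsite")
g.add_edge("proj:microsite", "depends_on", "svc:deploy")
```

The point of the typed edges is that "who owns this" and "what does this depend on" become lookups, not retrieval guesses.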

Why a graph makes sense

A graph is a practical way to represent how work moves through a company. Documents live in one system. Permissions live somewhere else. Operational history sits in tickets, repos, calendars, and chats. The useful questions are relational: who owns this, what depends on it, what changed, and who’s allowed to act on it?

A graph handles that better than dumping everything into a vector database and hoping semantic search recovers the structure later.

Trace appears to be combining a property graph store with embeddings and retrieval. That’s the right shape for this kind of problem. You want semantic lookup for fuzzy discovery, but you also want deterministic relationships and permission-aware traversal. “Find me the most relevant launch checklist” is one query. “Find the launch checklist for a microsite owned by marketing that deploys through this GitHub org and requires legal review in the EU region” is another. The second query needs structure.
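The difference between the two queries is easy to see in code. Semantic search can rank checklists by similarity, but the second query is a conjunction of hard constraints evaluated against structured fields. A toy sketch, with every field name invented for illustration:

```python
# Toy records standing in for graph nodes; all field names are invented.
checklists = [
    {"id": "cl-1", "owner_team": "marketing", "deploy_org": "acme-github",
     "requires_legal_review": True, "region": "EU"},
    {"id": "cl-2", "owner_team": "marketing", "deploy_org": "other-org",
     "requires_legal_review": False, "region": "US"},
    {"id": "cl-3", "owner_team": "platform-eng", "deploy_org": "acme-github",
     "requires_legal_review": True, "region": "EU"},
]

def structured_query(records, **constraints):
    """Deterministic filter: every constraint must match exactly.
    Semantic search would instead return a ranked, fuzzy list."""
    return [r for r in records
            if all(r.get(k) == v for k, v in constraints.items())]

matches = structured_query(
    checklists,
    owner_team="marketing",
    deploy_org="acme-github",
    requires_legal_review=True,
    region="EU",
)
# Only cl-1 satisfies every constraint; a purely semantic ranking could
# plausibly surface cl-2 or cl-3 first, since all three "look like" the query.
```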

There’s a clear downside too. Graph systems are only as good as their schema and sync quality. If identity resolution is weak, permissions are stale, or duplicate entities pile up, the agent gets confused in ways that are harder to catch than a broken API call.

Garbage context still gives you garbage output. It’s just better organized.

What Trace likely has to build

The company describes three core functions: breaking goals into tasks, injecting the right context into each task, and orchestrating execution across humans and agents. None of that is easy.

Planning

Turning “design a new microsite” into executable work means some mix of LLM planning and deterministic workflow rules. A decent planner should identify steps like content collection, brand review, information architecture, component selection, repo setup, CI config, copywriting, asset creation, QA, and launch prep.

The hard part is making that plan company-specific. Generic task lists are cheap. A useful plan knows your approved templates, your team boundaries, and the escalation path when something gets blocked.
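One way to picture the gap between a generic plan and a company-specific one: the LLM proposes steps, and deterministic company knowledge grounds each step with an owner, a template, and an approval gate. The step names echo the article; every config value is made up:

```python
# Generic steps an LLM planner might emit for "launch a microsite".
generic_plan = ["repo setup", "CI config", "copywriting", "QA", "launch prep"]

# Deterministic company knowledge the plan must be grounded in.
# All of these values are invented for illustration.
company = {
    "repo setup":  {"owner": "platform-eng", "template": "acme/microsite-template"},
    "CI config":   {"owner": "platform-eng", "template": "acme/ci-pipelines"},
    "copywriting": {"owner": "marketing",    "approval": "legal"},
    "QA":          {"owner": "marketing"},
    "launch prep": {"owner": "marketing",    "approval": "vp-marketing"},
}

def ground_plan(steps, knowledge):
    """Attach owners, templates, and approval gates to each generic step.
    Steps the graph knows nothing about get flagged for escalation."""
    plan = []
    for step in steps:
        info = knowledge.get(step)
        if info is None:
            plan.append({"step": step, "status": "needs-escalation"})
        else:
            plan.append({"step": step, "status": "ready", **info})
    return plan

grounded = ground_plan(generic_plan, company)
```

The escalation branch is the part that matters: a useful planner admits what the organization hasn't told it rather than improvising.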

Context selection

This is where most agent products still look weak.

Each subtask needs a narrow packet of relevant information. A copywriting agent might need the brand playbook, recent microsite examples, campaign goals, and product messaging. A CI setup agent needs the repo location, pipeline templates, secrets-handling rules, and role-based access constraints.

That means retrieval across documents and structured relationships, with aggressive filtering. Good systems don’t dump everything they have into the model. They send the minimum useful context and keep it fresh with caching, summarization, and TTL policies.
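A sketch of the "minimum useful context" idea, with a TTL cache so packets stay fresh. The task-to-source mapping, source names, and TTL value are all invented:

```python
import time

# Which sources each task type is allowed to pull from -- the filtering step.
# The mapping and source names are invented for illustration.
TASK_SOURCES = {
    "copywriting": ["brand_playbook", "campaign_goals"],
    "ci_setup":    ["repo_location", "pipeline_templates", "secrets_policy"],
}

SOURCES = {
    "brand_playbook":     "Tone: plain, direct. No superlatives.",
    "campaign_goals":     "Q3 goal: 10k signups from the microsite.",
    "repo_location":      "github.com/acme/microsite",
    "pipeline_templates": "Use pipeline template v3.",
    "secrets_policy":     "Secrets via vault only; never in env files.",
}

class ContextCache:
    """Cache assembled context packets and rebuild them after `ttl` seconds."""
    def __init__(self, ttl=300):
        self.ttl = ttl
        self._cache = {}   # task_type -> (timestamp, packet)

    def packet(self, task_type, now=None):
        now = time.time() if now is None else now
        hit = self._cache.get(task_type)
        if hit and now - hit[0] < self.ttl:
            return hit[1]
        # Send only the sources this task type needs, nothing else.
        pkt = {src: SOURCES[src] for src in TASK_SOURCES[task_type]}
        self._cache[task_type] = (now, pkt)
        return pkt

cache = ContextCache(ttl=300)
copy_ctx = cache.packet("copywriting", now=0)
```

Note what the copywriting packet excludes: the secrets policy and repo details never reach that agent at all, which is the "aggressive filtering" part.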

Performance matters here too. Once an orchestration layer sits between users and dozens of systems, latency adds up fast. Every context lookup, permission check, and tool invocation adds cost and delay. If Trace can’t keep those paths tight, people will go around it.

Orchestration

The orchestration piece may be the stronger part of the product.

Most enterprise workflows should stay partly human. Approvals, policy exceptions, sensitive content review, access grants, and ambiguous edge cases are not places where teams want full autonomy. Trace seems built around that assumption, with an event-driven workflow engine coordinating tasks, dependencies, handoffs, and escalations.
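That human-in-the-loop assumption can be sketched as a small state machine: tasks marked as requiring approval block until the designated person signs off, and everything else completes on its own. Task names and the approval rule are invented:

```python
# Each task declares whether a human must approve before it completes.
tasks = {
    "draft_copy":      {"needs_approval": True,  "approver": "legal",  "state": "pending"},
    "set_up_repo":     {"needs_approval": False, "state": "pending"},
    "grant_db_access": {"needs_approval": True,  "approver": "secops", "state": "pending"},
}

def run(task_id):
    """An agent finishes its work; the task either completes or waits on a human."""
    t = tasks[task_id]
    t["state"] = "awaiting_approval" if t["needs_approval"] else "done"

def approve(task_id, approver):
    """Only the designated approver can move a waiting task forward."""
    t = tasks[task_id]
    if t["state"] == "awaiting_approval" and approver == t.get("approver"):
        t["state"] = "done"
        return True
    return False

run("set_up_repo")                          # no gate: completes immediately
run("draft_copy")                           # gated: parks until legal signs off
ok = approve("draft_copy", "legal")
bad = approve("grant_db_access", "legal")   # wrong approver, and task never ran
```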

That may be the part buyers pay for first. Companies are often more willing to fund reliable workflow coordination than fully autonomous agents.

The market is getting crowded

Trace is far from the only company chasing this layer.

Anthropic announced enterprise agents this week with domain-focused plugins for areas like finance, engineering, and design. Atlassian has also gone deeper into agent workflows inside Jira. OpenAI and others keep shipping stronger general-purpose models that can handle intern-level tasks with less hand-holding.

A small startup still has an opening. The market still has an integration problem. Big model vendors are improving quickly, but they don’t automatically control the enterprise layer where permissions, workflow, and system boundaries live. Tool vendors like Atlassian have distribution, but their agents still skew toward their own products. That leaves room for a layer that coordinates across Slack, Notion, Google Workspace, Microsoft 365, Airtable, Jira, GitHub, and internal databases without forcing a company into one suite.

It’s a credible opening. It’s also a difficult one.

The obvious risk is that platforms with existing identity, workflow, and metadata advantages absorb this over time. Microsoft, Atlassian, ServiceNow, Salesforce, and even GitHub all have paths into the same territory. Trace has to move faster, integrate more broadly, and show better workflow intelligence than incumbents with much larger footprints.

What technical buyers should watch

If you’re evaluating Trace or building similar infrastructure in-house, a few issues matter a lot more than polished demos.

Permission fidelity

This is the first test. If the system can’t mirror RBAC cleanly, handle revocations quickly, and avoid exposing documents a user or agent shouldn’t see, it isn’t ready for enterprise use. Prompt logs also become sensitive data once they include internal context, so auditability and retention policy matter.
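A rough picture of what permission fidelity means at retrieval time: check access at read time against the current grant set, so a revocation takes effect on the next lookup rather than surviving in a cache. All identities and documents below are invented, and a real system would mirror the source-of-truth RBAC system rather than hold grants in memory:

```python
# Current grants: principal -> set of document ids. Invented for illustration.
grants = {
    "alice": {"doc:brand", "doc:salary-bands"},
    "agent:copywriter": {"doc:brand"},
}

documents = {
    "doc:brand": "Brand guidelines...",
    "doc:salary-bands": "CONFIDENTIAL compensation data...",
}

def retrieve(principal, doc_id):
    """Check the grant set at read time, for humans and agents alike."""
    if doc_id not in grants.get(principal, set()):
        return None   # deny by default; never fall back to "probably fine"
    return documents[doc_id]

def revoke(principal, doc_id):
    grants.get(principal, set()).discard(doc_id)

before = retrieve("alice", "doc:salary-bands")
revoke("alice", "doc:salary-bands")
after = retrieve("alice", "doc:salary-bands")        # revocation holds immediately
leak = retrieve("agent:copywriter", "doc:salary-bands")  # agents get no special path
```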

Observability

You need to know what the system retrieved, what it passed to the model, what tools it called, how much it cost, and where it failed. Agent products without serious telemetry are still toys. Teams should expect token tracking, latency metrics, task outcome logs, quality scoring, and replayable traces.
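That telemetry list translates directly into a per-step trace record. A minimal sketch of what a replayable trace might capture; the field names are invented and real products will differ:

```python
import json
import time

class AgentTrace:
    """Append-only log of what an agent run retrieved, sent, called, and spent."""
    def __init__(self, task_id):
        self.task_id = task_id
        self.steps = []

    def log(self, kind, **detail):
        self.steps.append({"t": time.time(), "kind": kind, **detail})

    def totals(self):
        return {
            "tokens": sum(s.get("tokens", 0) for s in self.steps),
            "cost_usd": round(sum(s.get("cost_usd", 0.0) for s in self.steps), 4),
            "steps": len(self.steps),
        }

    def replay(self):
        """Serialize the full trace so a failed run can be inspected step by step."""
        return json.dumps({"task": self.task_id, "steps": self.steps}, default=str)

trace = AgentTrace("microsite-copy-001")
trace.log("retrieval", source="brand_playbook", docs=3)
trace.log("model_call", model="some-llm", tokens=1200, cost_usd=0.018)
trace.log("tool_call", tool="cms.publish_draft", status="ok")
totals = trace.totals()
```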

Schema discipline

The graph schema has to be useful, not academically elegant. Over-modeling slows adoption. Under-modeling leaves gaps that agents fill with guesswork. Start with a small set of core entities and relationships, then expand where it improves retrieval or task routing.

API quality in the existing stack

Some companies will find that the blocker isn’t the agent layer. It’s their SaaS stack. Tools with weak metadata, poor event streams, or coarse permission models are much harder to plug into a context-aware orchestration system. That’s going to become painfully obvious over the next year.

The bet

Trace is betting on the part of enterprise AI that causes most of the operational pain: context, permissions, and coordination.

That doesn’t make the company a sure thing. Building a clean, current, permission-aware graph across messy enterprise systems is miserable work. Keeping it synced is worse. Doing that while delivering low-latency orchestration and useful agent behavior is a lot for a seed-stage startup.

Still, this is the kind of infrastructure idea that sounds dull until you try to deploy agents at scale. Then it starts to look serious. Companies that can deliver context cleanly and control workflow will have an edge over teams still arguing about prompt phrasing.
