Generative AI · May 5, 2026

CopilotKit raises $27M to build app-native AI agents beyond the chat panel
CopilotKit raises $27M on a bet that AI agents need real UI, not chat boxes

CopilotKit has raised a $27 million Series A led by Glilot Capital, NFX, and SignalFire. Its argument is simple: a chat panel is a bad interface for a lot of software.

A lot of enterprise AI still comes down to "user asks in natural language, model replies with text." That's fine for support queries. It's weak for workflows that already have structure, permissions, state, and established UI patterns. Booking travel, reviewing revenue, triaging tickets, editing CRM records: those jobs usually need something tighter than a blob of generated text.

CopilotKit wants agents to work inside the app, with access to UI state and the ability to return interactive components instead of paragraphs. The part that matters beyond the funding round is its open-source AG-UI protocol. The company is trying to define a standard link between agents and front ends, much like other protocols are trying to standardize model and tool access elsewhere in the stack.

That's the part worth watching.

What CopilotKit is building

CopilotKit sells tooling for developers who want app-native agents instead of chatbot overlays. Its open protocol, AG-UI, handles communication between an AI agent and a user interface, including:

  • streaming chat
  • front-end tool calls
  • shared state between agent and UI
  • human-in-the-loop flows

In practice, AG-UI gives an agent a way to understand what's happening in the app and respond with something more useful than text. That might mean rendering a company-defined pie chart for a revenue breakdown, surfacing a form with prefilled fields, or showing a task-specific UI component the user can edit before the action completes.
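To make the four capabilities above concrete, here is a minimal sketch of what a client-side event loop over an agent stream might look like. This is illustrative only: the event names, shapes, and state model are hypothetical, not AG-UI's actual wire format.

```typescript
// Hypothetical event types mirroring the four capabilities listed above.
type AgentEvent =
  | { type: "text.delta"; delta: string }                              // streaming chat
  | { type: "tool.call"; tool: string; args: Record<string, unknown> } // front-end tool call
  | { type: "state.patch"; key: string; value: unknown }               // shared state
  | { type: "approval.request"; action: string };                      // human-in-the-loop

interface UiState {
  transcript: string;
  shared: Record<string, unknown>;
  pendingApproval: string | null;
}

// Pure reducer: the UI applies each streamed event to its local state.
function applyEvent(state: UiState, ev: AgentEvent): UiState {
  switch (ev.type) {
    case "text.delta":
      return { ...state, transcript: state.transcript + ev.delta };
    case "tool.call":
      // A real client would dispatch to a registered front-end tool here.
      return state;
    case "state.patch":
      return { ...state, shared: { ...state.shared, [ev.key]: ev.value } };
    case "approval.request":
      return { ...state, pendingApproval: ev.action };
  }
}
```

The point of the reducer shape is that the agent never mutates the UI directly; it emits typed events, and the app decides how each one maps to rendered state.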

On top of that, CopilotKit is selling the commercial layer: support, enterprise features, and self-hosted deployment. It's also launching CopilotKit Enterprise Intelligence, a self-hostable package for companies that want to deploy in-app agents without handing the stack to a SaaS vendor.

That split makes sense. Open protocol underneath, paid enterprise hardening above it. Plenty of companies say some version of that, but it's still the right model if you want adoption and revenue.

Why AG-UI stands out

The protocol market is getting crowded. MCP handles context and tool access. Google's A2A focuses on agent interoperability. OpenAI has its own Apps SDK, although that mostly matters inside ChatGPT. Vercel's AI SDK helps developers build AI-native apps, and Assistant-ui gives them components for chat interfaces.

AG-UI is aimed at a narrower problem: how an agent talks to the front end.

That's a real gap. Most agent frameworks are strongest on orchestration, tool calling, and back-end integration. The front end usually gets chat bubbles, markdown, and maybe some streamed JSON. Fine for demos. Less fine when you're building product features people have to use every day.

CopilotKit says AG-UI is already supported by Google, Microsoft, Amazon, and Oracle, along with frameworks like LangChain, Mastra, PydanticAI, and Agno. If that support holds, AG-UI has a credible shot at becoming a protocol layer developers can use without committing to one vendor stack.

That's a better place to be than "yet another agent framework."

What developers will care about

CopilotKit's main product claim is control over UI generation. Developers can define a catalog of approved components and decide how much freedom the agent gets when composing interfaces.

That matters because fully generative UI gets messy fast in production. Teams don't want a model inventing layout, actions, and interaction patterns without guardrails. They want constraints. Use these components. Follow these rules. Stay within this action model. Respect this state.
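The catalog idea can be sketched in a few lines. Everything here is hypothetical — component names, props, and the validation function are illustrations of the constraint pattern, not CopilotKit's API.

```typescript
// Approved component specs a team might expose to the agent.
type ComponentSpec =
  | { component: "PieChart"; props: { series: { label: string; value: number }[] } }
  | { component: "RecordForm"; props: { fields: Record<string, string> } };

const approvedCatalog = new Set<string>(["PieChart", "RecordForm"]);

// Gate every agent proposal: anything outside the catalog is rejected
// before it ever reaches the renderer.
function validateProposal(raw: { component: string; props: unknown }): ComponentSpec {
  if (!approvedCatalog.has(raw.component)) {
    throw new Error(`Component not in approved catalog: ${raw.component}`);
  }
  return raw as ComponentSpec;
}
```

The design choice is the same one type systems make: the model can compose freely, but only inside a vocabulary the team has already vetted for layout, actions, and access rules.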

CopilotKit says developers can move between tight, pixel-level control and looser assembly from building blocks. That's sensible. In enterprise apps, the hard part usually isn't getting a model to generate a panel. It's making sure that panel respects business rules, access controls, validation, and audit requirements.

If AG-UI makes those rules easier to wire into an app-native agent flow, that has real value. It also makes human review less awkward. A user can inspect a generated chart, edit a proposed action, approve a change, or reject it inside the existing product instead of bouncing between a chat transcript and the actual software.

That sounds minor until you ship it. Friction kills a lot of AI features before model quality does.

The open-source tension

CopilotKit says AG-UI gets millions of installs per week and that many Fortune 500 companies are already using the protocol and its tools in production. It also names Deutsche Telekom, Docusign, Cisco, and S&P Global as customers.

Those are good signals, with the usual caveats. Install numbers can be noisy, and "used in production" can mean anything from a narrow internal workflow to a large customer-facing system. The named customers matter more than the ecosystem vanity metrics.

The harder question is whether CopilotKit can keep AG-UI credible as neutral infrastructure while selling enterprise features on top. That's where open-source companies often run into trouble. If the protocol starts looking too tied to one vendor's commercial roadmap, other ecosystem players start hedging. Adoption slows. Forks show up. Politics follows.

The founders say the commercial product is meant to harden the open stack for large customers, not replace it. Reasonable answer. Still unproven.

Where it sits against Vercel, OpenAI, and others

CopilotKit is going after a real buyer concern: optionality.

A lot of enterprises already run some mix of Google, Azure, AWS, Oracle, LangChain, homegrown orchestration, and a pile of compliance requirements. They don't want an AI UI layer that quietly pulls them toward one cloud, one model vendor, or one deployment model. Self-hosting still matters a lot here.

That's where CopilotKit has an opening.

Vercel's AI SDK is strong, popular, and pleasant to use, but its center of gravity is still the Vercel ecosystem and modern web app workflows. OpenAI's Apps SDK can build richer interfaces, but only inside ChatGPT, which is a very specific constraint. Assistant-ui is useful too, though it leans harder toward chat surfaces and components than a broader agent-to-app protocol layer.

CopilotKit's horizontal pitch should land with teams that already have infrastructure choices in place and don't want to reshuffle them just to add in-app agents.

There's a trade-off. Horizontal platforms are harder to package cleanly. The more a tool promises to work with everything, the more integration burden usually shifts back to the developer. Large teams can live with that. Smaller teams may still prefer a more opinionated stack that gets them to production faster.

The hard parts

Even with a solid protocol, wiring agents into the UI introduces fresh problems, and CopilotKit doesn't solve them by itself.

Security comes first. If an agent can read app state, invoke front-end tools, and mutate the interface, permission boundaries have to be explicit. The UI layer can't become a side door around authorization checks that belong in the back end. Every action still needs server-side enforcement. Every generated component that triggers behavior needs a clear trust model.
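That trust model reduces to one rule: the server re-checks permissions on every action, no matter what the agent-driven UI proposed. A minimal sketch, with hypothetical names throughout:

```typescript
// An action the UI (agent-composed or not) asks the server to perform.
interface ActionRequest {
  userId: string;
  action: string;
  target: string;
}

// Server-side permission table; in practice this would come from the
// existing authorization system, not an in-memory map.
const permissions: Record<string, Set<string>> = {
  "user-1": new Set(["ticket.assign"]),
};

function execute(req: ActionRequest): string {
  const granted = permissions[req.userId];
  if (!granted || !granted.has(req.action)) {
    // Never assume the UI already filtered this out: the front end
    // is a convenience layer, not a security boundary.
    throw new Error("forbidden");
  }
  return `executed ${req.action} on ${req.target}`;
}
```

The generated component can hide a button, but only this server-side check actually prevents the action.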

Then there's state consistency. Agents are probabilistic. UIs usually can't be. Once an agent starts composing interfaces dynamically, developers need guardrails for stale state, interrupted flows, retries, and partial failures. Streaming a response is easy enough. Recovering cleanly when the user changes context halfway through an agent-driven workflow is harder.
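One common guardrail for the stale-state problem is optimistic concurrency: version-stamp the shared state and reject any agent write based on an outdated snapshot. A sketch of the pattern, under assumed names:

```typescript
// Shared state wrapped with a monotonically increasing version.
interface Versioned<T> {
  version: number;
  value: T;
}

// An agent write carries the version it read before composing its change.
function applyAgentWrite<T>(
  current: Versioned<T>,
  write: { baseVersion: number; next: T },
): Versioned<T> {
  if (write.baseVersion !== current.version) {
    // The user (or another flow) changed state mid-run;
    // the agent must re-read and retry rather than clobber.
    throw new Error("stale write: re-fetch state and retry");
  }
  return { version: current.version + 1, value: write.next };
}
```

It doesn't make the agent deterministic, but it turns "silently overwrote the user's edit" into an explicit, recoverable failure.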

Performance matters too. App-native agents feel good when the UI updates quickly with useful structured output. They feel awful when every interaction turns into a slow round trip through orchestration, tool calls, and model inference. A protocol can standardize the interaction. It can't fix your latency budget.

And teams still have to answer a blunt product question: should this workflow be agentic at all? Sometimes a form is just a form.

Why the round matters

Because the market is moving past chatbot wrappers.

The first wave of enterprise AI was retrieval, summarization, and support copilots. The next step is agents that can take actions. That puts pressure on interface design. If an agent is going to do work inside software, developers need a standard way to expose state, render structured output, and keep users in control.

That's the category CopilotKit wants to own.

If AG-UI keeps picking up support across clouds and frameworks, it could become one of those boring but important standards people stop talking about because they assume it's there. That's usually how standards win.

For engineering leaders, the immediate takeaway is simpler. If your team is building AI into an existing product and the plan is still "add a chat panel and some tool calls," the UI problem probably needs more attention than it's getting. CopilotKit is betting that's where the next useful layer of developer tooling gets built. The bet looks reasonable. Execution is the hard part.
