Generative AI February 3, 2026

Linq raises $20M to build AI assistant infrastructure for messaging apps

Linq’s $20M raise bets that the next AI interface is your text thread

Linq has raised a $20 million Series A to build infrastructure for AI assistants that operate inside messaging apps instead of standalone products. The round was led by TQ Ventures, with Mucker Capital and angel investors participating.

The funding matters. The product bet matters more.

A lot of AI teams still build as if users are happy to open another app, learn another interface, and put up with the usual friction. Linq is betting on the opposite. People already live in iMessage, RCS, and SMS, so put the assistant there and make it behave like the channel belongs to it. For customer support, scheduling, lead gen, and other transaction-heavy workflows, that’s a pretty sensible call.

It also comes with plenty of baggage.

Why Linq got interesting

Linq didn’t start here. It began as a digital business card and lead capture startup. In February 2025, it launched an API for messaging customers natively in iMessage, with features plain SMS doesn’t really match: group chats, threaded replies, reactions, images, voice notes.

That’s already useful on its own. Brand outreach tends to work better when it looks like a normal conversation. A blue-bubble thread gets attention. A generic marketing text usually doesn’t.

Then Poke, an iMessage-based assistant, took off last September. Linq suddenly had teams asking for AI agents inside chat threads. So it changed direction. Now it sells itself as a messaging-first infrastructure layer for AI assistants across iMessage, RCS, and SMS.

The growth numbers are strong:

  • Customer base up 132% quarter over quarter
  • Average account expansion up 34%
  • 134,000 monthly active users reached through customer agents
  • More than 30 million messages per month
  • 295% net revenue retention, with zero churn claimed

Those numbers stand out. They also sit on top of platforms Linq doesn’t control.

Messaging is a strong interface for a certain kind of AI

For some tasks, chat is the right UI.

If someone wants to reschedule an appointment, check an order, approve a quote, start a return, or update a reservation, a text thread is often better than a web app. There’s less ceremony. The prompt is the interface. The conversation history holds the state. The user already knows how to use it.

That’s a good fit for the current generation of agents. Models are finally decent at bounded, tool-heavy workflows where the goal is clear and the action space is narrow.

It also cuts product surface area. Teams don’t need to maintain native iOS and Android apps for every interaction. They don’t need to fight for installs. They can iterate on prompts, routing, tool calls, and fallbacks without waiting on app store review.

That’s the appeal. It’s practical.

The hard part is the plumbing

“AI in iMessage” is the flashy line. The real work is underneath.

A messaging-native agent platform has to normalize very different channels into one event model. iMessage, RCS, and SMS don’t behave the same way. iMessage supports richer interactions like threaded replies and reactions. RCS adds read receipts, better media, and typing indicators, though carrier support is still uneven. SMS is plain, durable, and limited.

So you end up building a common envelope for inbound events. Fields like message_id, thread_id, sender, channel, timestamp, content_type, maybe reply_to, plus metadata for receipts or reactions. Then the system has to render responses back out in whatever form each channel supports.
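
A minimal sketch of such an envelope, in Python with hypothetical field and channel names (the article names the fields; the exact types here are assumptions):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Channel(Enum):
    IMESSAGE = "imessage"
    RCS = "rcs"
    SMS = "sms"

@dataclass
class InboundMessage:
    # Fields common to every channel.
    message_id: str
    thread_id: str
    sender: str
    channel: Channel
    timestamp: float           # epoch seconds, as reported by the provider
    content_type: str          # "text", "image", "voice", ...
    body: str = ""
    # Channel-specific extras; plain SMS simply leaves these empty.
    reply_to: Optional[str] = None                 # threaded replies (iMessage)
    metadata: dict = field(default_factory=dict)   # receipts, reactions, typing
```

The point of normalizing early is that downstream logic branches on capabilities ("does this event carry a reply_to?") instead of on channel names scattered through the codebase.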

That gets messy fast. Message ordering slips. Networks retry deliveries. Duplicate events show up. Read receipts are inconsistent. Group chats create identity and permission problems. Add AI on top and latency becomes a product problem almost immediately.

If a reply takes five seconds, users assume something broke. Under two seconds for a first acknowledgment is a reasonable target. In practice that usually means an event-driven pipeline with webhooks feeding an orchestration layer, staged responses, typing indicators where available, and careful tool use. Long synchronous workflows are a bad fit. Acknowledge first, run the job in the background, then post updates back into the thread.
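
The acknowledge-first shape can be sketched with asyncio; the send and workflow functions here are stand-ins, not any real API:

```python
import asyncio

sent: list[tuple[str, str]] = []  # (thread_id, text) pairs, in delivery order

async def send(thread_id: str, text: str) -> None:
    # Placeholder for the channel delivery call.
    sent.append((thread_id, text))

async def run_workflow(thread_id: str) -> str:
    # Stand-in for a slow tool call or model invocation.
    await asyncio.sleep(0.1)
    return "Done: your appointment is moved to 3:30 PM."

async def handle_inbound(thread_id: str) -> None:
    # Acknowledge inside the latency budget, before doing any real work.
    await send(thread_id, "On it, one moment...")
    # Run the slow job, then post the result back into the same thread.
    result = await run_workflow(thread_id)
    await send(thread_id, result)

asyncio.run(handle_inbound("t1"))
```

The user sees a reply in well under two seconds even when the underlying job takes much longer; the thread itself absorbs the wait.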

The backend pattern will look familiar to anyone who’s built chat or workflow systems:

  • webhook ingestion
  • queueing and deduplication
  • state keyed by thread_id
  • model inference and tool execution
  • delivery retries with jitter
  • per-thread ordering controls
  • fallbacks when channel features aren’t available
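
The deduplication step in that list reduces to an idempotency check keyed on message_id. A sketch (in production the set would be a TTL'd external store, since process memory grows without bound):

```python
processed: set[str] = set()

def ingest(message_id: str, handler) -> bool:
    """Run handler exactly once per message_id; drop duplicate deliveries."""
    if message_id in processed:
        return False  # network retry or double-fired webhook; already handled
    processed.add(message_id)
    handler(message_id)
    return True
```

Everything downstream can then assume each event arrives once, which is what makes retries and at-least-once delivery safe.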

None of this is glamorous. All of it matters.

Scale looks fine until traffic spikes

Linq says it handles more than 30 million messages a month. On average, that's manageable: roughly 11 to 12 messages per second. The harder part is burstiness.

Traffic won’t show up evenly. It’ll spike around campaigns, outages, launches, and events. If one assistant goes viral, the load profile changes overnight. Designing for bursts in the 100 to 200 messages per second range sounds reasonable for a platform at this stage.

The architecture is the familiar one: partitioned queues to preserve ordering within a conversation while unrelated threads run in parallel, idempotency keys to kill duplicate processing, and enough caching or templating to avoid routing every boring interaction through an expensive LLM call.
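
Partitioning by conversation can be as simple as a stable hash of thread_id, so one thread always lands on the same partition while unrelated threads spread out (names are illustrative):

```python
import hashlib

NUM_PARTITIONS = 16

def partition_for(thread_id: str) -> int:
    # A stable hash, not Python's built-in hash(), which is salted
    # per-process and would scatter a thread across restarts.
    digest = hashlib.sha256(thread_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS
```

Per-partition workers then process messages in order within each conversation without serializing the whole system.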

That last point matters for margins. A lot of messaging workflows don’t need freeform generation. “Your appointment is moved to 3:30 PM” does not require a frontier model. If the platform can mix deterministic flows, retrieval, and selective model use, the economics improve quickly.
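
A routing layer that short-circuits the boring cases can be very small. A sketch with hypothetical intents and templates:

```python
# Deterministic templates for high-volume, low-variance intents.
TEMPLATES = {
    "reschedule_confirm": "Your appointment is moved to {time}.",
    "order_status": "Order {order_id} is {status}.",
}

def respond(intent: str, slots: dict) -> str:
    template = TEMPLATES.get(intent)
    if template is not None:
        # Cheap path: no model call at all.
        return template.format(**slots)
    # Only open-ended requests fall through to the expensive path.
    return call_llm(intent, slots)

def call_llm(intent: str, slots: dict) -> str:
    # Placeholder for a real model invocation.
    return "(model-generated reply)"
```

If most traffic hits the template path, per-message cost drops to roughly nothing, and the LLM budget is spent where it actually adds value.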

Security and privacy get messy in chat

This is where messaging-native AI starts to look a lot less clean than the demo.

Users often treat messaging as private, especially in products associated with end-to-end encryption. But a server-side assistant processing those messages will see plaintext somewhere in the flow. If a business is acting on the user’s behalf, or routing messages through its own systems, that content becomes application data. Logs, prompts, retention settings, tool outputs, images, voice-note transcripts, OCR output, extracted entities. It all counts.

So the basic hygiene is mandatory:

  • short-lived credentials
  • encrypted tokens and API keys at rest
  • aggressive PII redaction in logs
  • retention limits and TTLs on conversation history
  • explicit consent flags if chat history feeds training, analytics, or RAG systems
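
Log redaction, for instance, is easier to enforce at the logging boundary than to trust to every call site. A sketch with two illustrative patterns (a real deployment needs a far broader, tested set):

```python
import re

# Illustrative patterns only; not an exhaustive PII taxonomy.
PII_PATTERNS = [
    (re.compile(r"\+?\d[\d\s().-]{8,}\d"), "[PHONE]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
]

def redact(text: str) -> str:
    for pattern, replacement in PII_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

def log_event(message: str) -> None:
    print(redact(message))  # only redacted text ever reaches the log
```

Centralizing it this way means a missed redaction is a bug in one function, not a leak scattered across the codebase.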

Voice notes and images make the stack harder. Fast handling usually means low-latency ASR and OCR. Running those at the edge can help with speed and sometimes privacy, but deployment gets harder. Cloud batching is simpler, slower, and a tougher sell in regulated environments.

Any team building here should also have kill switches. If a model starts hallucinating or a tool integration goes off the rails, there needs to be a quick path to disable agent actions and fall back to human or template-driven responses.

The biggest risk is policy

Linq’s pitch gets stronger if businesses can operate in native iMessage threads that look and feel like ordinary conversations. Apple controls that ground. Apple can change the rules whenever it wants.

That risk is not hypothetical. WhatsApp banned general-purpose chatbots in October 2025. A channel can look open one quarter and close the next. If your product depends on a nonstandard route into a messaging platform, you need a fallback plan before the policy change lands.

There’s also channel-specific compliance to deal with. SMS means opt-in flows, 10DLC registration, carrier scrutiny, and STOP keyword enforcement. RCS is still fragmented across carriers and regions. iMessage is the wildcard. Apple has official business messaging paths, but any system promising broad blue-bubble automation raises obvious questions about identity, approval, and enforcement.

That’s why channel diversity matters. If your whole agent strategy depends on one messaging platform, you’re building on rented land.

What developers should take from this

Linq’s raise is a useful market signal, but the product lesson matters more.

If you’re building customer-facing AI, stop assuming the main interface has to be a dedicated app or a website chat widget. For a lot of workflows, messaging is the better target. It cuts friction, shortens feedback loops, and fits the kind of narrow task automation current models handle best.

It still needs discipline.

Pick tasks with clear completion criteria. Define tool schemas tightly. Store conversation state per thread. Expect duplicates and out-of-order events. Budget for sub-two-second first responses. Assume channel policies can change. And don’t send every incoming message to an LLM by default.

The teams that win here will probably be the ones that make scheduling, support, and transaction flows feel boringly reliable inside the chat apps people already use. Linq seems to understand that. The next question is whether the platforms will let it keep going.
