Read AI wants your inbox to act like a coordinator, and Ada is the first serious version of that idea
Read AI has spent the past few years in the same lane as a lot of meeting AI startups: capture calls, transcribe them, summarize them. Useful, yes. Also crowded. Its new product, Ada, moves up a layer. It lives in email, reads thread context, checks availability, drafts replies, answers questions from company knowledge, and pushes follow-ups along.
That shift matters because summaries are easy to show in a demo. The harder part is doing useful work without making a mess of it.
Ada launches as an email-based assistant. You start by sending “Get me started” to ada@read.ai. From there, it works inside normal email threads, which is a smart choice. Email is old, ugly, and still the default coordination layer for most companies. Every team, tool, customer, and vendor already uses it.
Why email makes sense
Most AI assistants still live off to the side. A panel, a chatbot, a tab people forget to open. Ada drops into the place where the work already happens.
That gives Read AI a few obvious advantages.
The first is behavior. Nobody has to learn a new habit. If an executive, customer, recruiter, or sales prospect sends an email about scheduling or asks for an update, Ada can work inside that thread.
The second is latency. Chat feels broken if it takes 15 seconds to answer. Email gives the system more room to retrieve context, check calendars, and draft something reasonable. That matters when the model has to reason across meetings, documents, and availability before it replies.
And email still crosses company boundaries better than Slack or Teams. Read says support for those is coming, but inboxes remain the common denominator. If you want an agent that coordinates with people outside your org, email is still the cleanest place to put it.
What Ada does
Read’s pitch is straightforward: Ada acts as a digital twin for routine communication.
The practical feature list matters more:
- It can negotiate meeting times in an email thread using your calendar's free/busy data.
- It can answer questions using company knowledge, recent meeting context, and public web sources.
- It can draft responses for your review before you send them.
- It can handle out-of-office replies with some awareness of context.
Scheduling sounds dull until you remember how much manual effort still goes into it. “Are you free next Tuesday?” turns into four replies, a timezone problem, and somebody eventually falls back to Calendly. If Ada can handle real back-and-forth inside a thread instead of tossing out a booking link, that’s genuinely useful.
Read says Ada only uses calendar availability and doesn’t expose meeting details. That’s the bare minimum privacy bar. No one wants an assistant hinting that “Tuesday at 2 is blocked for legal review” to an outside contact. Free/busy access is enough to schedule competently, and it makes the enterprise security story simpler than full calendar-content access.
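Read hasn't published how its scheduling works, but free/busy-only negotiation reduces to a simple interval problem: merge everyone's busy blocks, then find shared gaps long enough for the meeting. Here's a minimal sketch under that assumption; the function names and data shapes are illustrative, and a real system would fetch the busy intervals from a calendar API's free/busy endpoint.

```python
from datetime import datetime, timedelta

def merge_busy(intervals):
    """Merge overlapping (start, end) busy intervals into a sorted list."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def free_slots(busy, window_start, window_end, duration):
    """Yield free gaps of at least `duration` inside the window."""
    cursor = window_start
    for start, end in merge_busy(busy):
        if start - cursor >= duration:
            yield (cursor, start)
        cursor = max(cursor, end)
    if window_end - cursor >= duration:
        yield (cursor, window_end)

def mutual_slots(busy_a, busy_b, window_start, window_end, duration):
    """Intersect two calendars' free/busy data to find shared open slots."""
    return list(free_slots(busy_a + busy_b, window_start, window_end, duration))
```

Note that nothing in this logic needs event titles or attendees, which is the point of the free/busy-only privacy posture: the scheduler sees blocked time, not why it's blocked.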
The bigger technical decision: a knowledge graph, not MCP
The more interesting product call is in the backend.
Read AI says Ada doesn’t use MCP connectors. It builds a knowledge graph from meetings and connected services, then uses that graph for retrieval and reasoning. That tells you a lot about how the company sees agent products.
MCP, or Model Context Protocol, has become the standard pitch for AI tools that need to plug into lots of systems. The appeal is obvious. Standards cut custom integration work and make it easier to drop models into existing stacks.
Read is going the other way: it wants tighter control over how context is represented, and a generic connector protocol doesn't give it that.
That tracks for a company built around meeting data. A graph can connect people, projects, customers, decisions, action items, deadlines, and supporting documents with timestamps and relationships that plain vector search tends to flatten. If someone asks, “How are we doing against Q1 goals for the Acme account?” a graph-aware system has a better chance of pulling together the right meetings, the latest goal state, the account owner, and recent blockers.
That’s where a lot of generic enterprise RAG systems still struggle. They can retrieve text that looks relevant. They’re worse at relational memory. Who committed to what? Which decision replaced the old one? What happened after that customer escalation two weeks ago? Graph structure helps with that.
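To make the "relational memory" point concrete, here's a toy knowledge graph as subject-relation-object triples. The data and relation names are entirely hypothetical (this is not Read's schema, and real systems add timestamps and provenance), but it shows what graph edges buy you: a `supersedes` link makes the newer decision authoritative, where vector search over notes might surface both decisions with equal confidence.

```python
# Hypothetical mini-graph: (subject, relation, object) triples.
TRIPLES = [
    ("acme", "account_owner", "dana"),
    ("acme", "discussed_in", "meeting_0312"),
    ("meeting_0312", "decision", "push launch to Q2"),
    ("meeting_0312", "supersedes", "meeting_0228"),
    ("meeting_0228", "decision", "launch in Q1"),
]

def objects(subject, relation):
    """All objects reachable from `subject` via `relation`."""
    return [o for s, r, o in TRIPLES if s == subject and r == relation]

def latest_decision(account):
    """Walk from account to its meetings, skipping superseded ones."""
    superseded = {o for _, r, o in TRIPLES if r == "supersedes"}
    for meeting in objects(account, "discussed_in"):
        if meeting not in superseded:
            return objects(meeting, "decision")[0]
    return None
```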
It also creates work.
Custom connectors and a custom schema mean more maintenance, more integration burden, and more lock-in. MCP caught on for a reason. If every vendor builds its own context layer, admins get another pile of plumbing to manage. Read may get better retrieval quality, but it’s also choosing the harder engineering path.
What’s probably happening under the hood
Read hasn’t published a full architecture breakdown, but the broad shape is easy enough to infer.
Ada likely ties user identity together through OAuth, then monitors email through provider APIs such as Gmail or Microsoft Graph. For scheduling, it only needs calendar metadata: availability windows, timezone, working hours, and maybe user-set buffers or meeting-length preferences.
For question answering, the system probably combines three sources:
- structured facts from its internal knowledge graph
- semantic retrieval over meeting transcripts and connected documents
- web search for public information when internal context falls short
That blend matters. Pure vector RAG over documents is often fine for summaries and shaky for operational memory. A graph can ground the answer in entities and events, while a vector index fills in the surrounding language. The model then drafts a response and hands it back to the user for approval.
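The routing logic for that blend can be sketched in a few lines. This is a guess at the shape, not Read's implementation: the keyword filter over graph facts is deliberately naive (a real system would query edges), and `vector_search` and `web_search` stand in for whatever retrievers the product actually uses.

```python
def answer(question, graph_facts, vector_search, web_search):
    """Illustrative retrieval blend: graph facts and internal vector
    hits first, public web search only when both come up empty."""
    words = question.lower().split()
    # Naive keyword filter; a real system would traverse graph edges.
    facts = [f for f in graph_facts if any(w in f.lower() for w in words)]
    passages = vector_search(question)
    source = "internal"
    if not facts and not passages:
        passages = web_search(question)  # fall back to public sources
        source = "web"
    return {"facts": facts, "passages": passages, "source": source}
```

Tracking `source` alongside the answer also feeds the provenance story later: a reviewer can see whether a draft leaned on internal knowledge or the public web.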
That approval step matters. A lot.
Human review should be the default for anything customer-facing or externally visible, especially when the system can pull from internal knowledge. The risk isn’t only hallucination. It’s oversharing. A model can retrieve the right answer and still include the wrong details for the audience.
Read also says Ada will get more proactive, such as noticing follow-up commitments from meetings and nudging users to schedule them. That follows naturally from meeting intelligence. If the system already extracts action items from calls, the next step is wiring those tasks back into email, calendars, CRMs, and project tools.
At that point you’re edging into workflow automation.
Meeting notes aren’t enough anymore
Read isn’t alone in this. Microsoft has Copilot in Outlook and across Microsoft 365. Google is pushing Gemini through Workspace. Superhuman is turning email into a more agent-like interface. Scheduling vendors such as Calendly, Clockwise, and Motion have all moved toward smarter coordination. Meeting startups like Granola and Quill are trying to turn transcripts into repeatable actions and connected workflows.
So Ada lands in a crowded category. Read’s angle is specific: use email as the control plane and use meeting-derived memory as the context layer.
That’s a credible position because Read already has access to a lot of conversational data. The company says it has 5 million monthly active users, 50,000 daily sign-ups, and more than $81 million raised. It’s aiming for 10 million users. The U.S. is its biggest revenue market, though 60% of users are outside the U.S. Those numbers don’t prove Ada works, but they do mean Read has enough footprint to try turning passive meeting data into active coordination.
This part of the market comes down to outcomes. Nobody cares if an AI writes a polished draft that still needs heavy cleanup. They care whether the meeting gets booked correctly, whether the KPI answer is current, and whether the follow-up actually gets sent.
A lot of AI products still miss that.
What teams should check before rollout
If you’re evaluating Ada, treat it less like a note taker and more like a junior operator with access to your inbox and company memory.
A few questions come first.
How much can it read, and what can it send?
Start with OAuth scopes, mailbox access, token revocation, and audit trails. You want a clear record of what data the system touched, which sources informed an answer, and whether messages were auto-sent or user-approved.
Where does it pull answers from?
A knowledge graph sounds good until the system starts pulling from stale meeting notes, duplicate docs, or the wrong CRM object. You need to know which systems are authoritative for KPIs, account status, legal language, and customer commitments.
What happens when it gets something wrong?
The failure modes are boring and ugly: wrong timezone, outdated metric, wrong account owner, accidental exposure of internal details, AI-to-AI auto-reply loops. This isn’t exotic model safety work. It’s operations. That’s usually where enterprise trust is won or lost.
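The auto-reply loop in particular has a well-known mitigation: RFC 3834 says automated mail should carry an `Auto-Submitted` header, and well-behaved responders refuse to answer mail that already looks automated. A sketch of that guard, with a per-thread cap as a backstop (the extra header names are commonly seen conventions, not a standard):

```python
# Non-standard but commonly seen auto-responder markers.
INFORMAL_AUTO_HEADERS = {"x-autoreply", "x-autorespond"}

def should_auto_reply(headers, thread_auto_count, max_auto=2):
    """Refuse to reply automatically to mail that already looks automated,
    or once the thread hits a cap on consecutive bot replies."""
    normalized = {k.lower(): v.lower() for k, v in headers.items()}
    # RFC 3834: any Auto-Submitted value other than "no" marks automation.
    if normalized.get("auto-submitted", "no") != "no":
        return False
    if INFORMAL_AUTO_HEADERS & normalized.keys():
        return False
    if normalized.get("precedence") in {"bulk", "junk", "auto_reply"}:
        return False
    return thread_auto_count < max_auto
```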
Can you keep it in draft mode?
For most teams, yes, at least at first. Run a shadow period. Measure draft acceptance rate, scheduling success rate, response latency, and correction patterns. If customer-facing emails still need regular fixes, don’t let it send on its own.
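A shadow-period report doesn't need tooling, just consistent bookkeeping per draft. A minimal sketch, assuming each reviewed draft is tagged with whether it was sent as-is or sent after edits:

```python
def shadow_report(drafts):
    """Summarize a shadow-mode trial. Each draft is a dict with boolean
    'accepted' (sent as-is) and 'edited' (sent after changes) flags."""
    total = len(drafts)
    return {
        "drafts": total,
        "acceptance_rate": sum(d["accepted"] for d in drafts) / total,
        "edit_rate": sum(d["edited"] for d in drafts) / total,
    }
```

If the edit rate on customer-facing drafts stays high after a few weeks, that is the signal to keep the assistant in draft mode.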
The signal here
Read AI is betting that the next useful agent will live inside the systems people already use to coordinate work, and that it needs a better memory model than a bag of embeddings and a prompt template.
That’s a solid bet. Email is messy, but it’s real. Knowledge graphs are harder to build than slapping RAG on top of a document store, but they fit relational context better, and that’s what this kind of work depends on.
Ada could still run into the usual problems: brittle integrations, stale knowledge, overreach, security reviews that drag out adoption. But the direction makes sense. Meeting AI was always going to move from “here’s what happened” to handling the follow-up. Read picked the inbox as the place to do it.