Generative AI March 26, 2026

Granola raises $125M as it moves from AI meeting notes to enterprise software

Granola raises $125M to turn meeting notes into infrastructure

Granola has raised a $125 million Series C led by Index Ventures, with Kleiner Perkins participating, pushing the company to a $1.5 billion valuation. Total funding now sits at $192 million.

That valuation makes more sense once you stop thinking about Granola as a note-taking app.

The company is moving from a local-first meeting transcription tool toward a shared context layer for teams. It now has Spaces, new APIs, and an updated MCP server so AI assistants and internal agents can pull meeting context directly. For teams building enterprise AI systems, that shift matters more than the round itself.

Meeting notes are cheap. Structured context with permissions is where the value starts.

How Granola got traction

Granola’s original pitch was straightforward: record and transcribe meetings on the user’s machine, then turn them into useful notes without dropping a visible bot into the call.

That mattered. People dislike meeting bots for good reason. They clutter calls, make customers uneasy, and add friction around recording. Granola’s on-device approach avoided a lot of that, which helped it spread through product, engineering, and founder circles.

The company says it’s now used inside teams at Vanta, Gusto, Thumbtack, Asana, Cursor, Lovable, Decagon, and Mistral AI. That fits the product’s reputation. It caught on with people who spend half their day in meetings and then have to turn those conversations into specs, tickets, follow-ups, and decisions.

That’s also why Granola had to move beyond personal note capture. Once a tool starts holding a company’s operational memory, people want to search it, share it, automate against it, and govern it.

A standalone note app doesn’t get you far enough.

Where the product is heading

Granola’s biggest additions have little to do with prettier summaries. The point is to make meeting data usable by teams and software.

The headline features:

  • Spaces, which act as team workspaces with folders, access controls, and scoped search
  • A personal API for users on business and enterprise plans
  • An enterprise API for org-wide access and admin use cases
  • An updated Model Context Protocol (MCP) server that understands folder structure and sharing

Granola is turning meeting output into a permissioned data source.

That may sound dry, but it has real consequences. An agent can query notes from the sales folder for ACME, pull the last two weeks of customer risk discussion, and feed that into a CRM update or follow-up draft without scraping random text from someone’s laptop.

For enterprise AI, that’s the line between demo plumbing and something you can actually ship.

Spaces matter most

The most interesting addition is Spaces.

Granola is effectively building scoped retrieval for team knowledge. A Space contains folders and notes, with granular access controls. Users and, presumably, agents can query within a specific Space or even a specific folder path. That lowers the odds that an LLM pulls in irrelevant or sensitive material from elsewhere in the company.

This is least-privilege access applied to retrieval.
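What least-privilege retrieval looks like in practice can be sketched in a few lines. This is a minimal illustration, not Granola's implementation; the field names (`space_id`, `folder_path`) follow the article's example query and are assumptions about the data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scope:
    space_id: str
    folder_path: str  # e.g. "Sales/ACME"

def allowed(note: dict, scope: Scope, user_spaces: set) -> bool:
    """Least-privilege check: a note is retrievable only if the caller
    belongs to the note's Space AND the query is scoped to its folder."""
    return (
        note["space_id"] == scope.space_id
        and note["space_id"] in user_spaces
        and note["folder_path"].startswith(scope.folder_path)
    )

notes = [
    {"id": "n1", "space_id": "space_123", "folder_path": "Sales/ACME/Q1"},
    {"id": "n2", "space_id": "space_123", "folder_path": "Finance/Budget"},
    {"id": "n3", "space_id": "space_999", "folder_path": "Sales/ACME/Q1"},
]

scope = Scope(space_id="space_123", folder_path="Sales/ACME")
visible = [n["id"] for n in notes if allowed(n, scope, user_spaces={"space_123"})]
# only n1 survives: n2 is outside the folder scope, n3 is outside the Space
```

The point is that the boundary is enforced before retrieval, not after generation: the finance note and the other Space's note never reach the model at all.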

It also addresses one of the ugliest failure modes in enterprise AI: overbroad context. Plenty of RAG systems look fine in internal demos because the corpus is small and trust is assumed. In production, teams need boundaries. Finance data can’t bleed into sales. M&A notes can’t surface in product planning. Customer A data can’t contaminate Customer B.

Granola seems to understand where the work is now. Text generation is easy enough. Context selection, access control, and provenance are harder.

That’s why the folder hierarchy matters. Same for metadata. Same for shared-note visibility inside MCP.

MCP as an integration layer

Granola’s updated MCP server could look like standards-chasing. I don’t think it is. Model Context Protocol is becoming a practical way to connect assistants and agents to external tools without building one-off integrations for every product pair.

If Granola exposes notes, folders, and access scopes over MCP, tools that already speak MCP can request context in a predictable way. That lowers integration cost for agent builders.
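To make that concrete: an MCP tool call is a JSON-RPC 2.0 message, so a scoped notes query might look like the fragment below. The envelope follows the MCP specification; the tool name `search_notes` and its arguments are hypothetical, since Granola hasn't published its tool surface.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_notes",
    "arguments": {
      "space_id": "space_123",
      "folder_path": "Sales/ACME",
      "query": "open risks"
    }
  }
}
```

Any MCP-aware client can issue that call without knowing anything Granola-specific beyond the tool's declared schema, which is exactly the integration-cost argument.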

The basic flow looks like this:

  1. An agent gets a task, like “prepare a follow-up for ACME with open risks.”
  2. It calls Granola through MCP or the API, scoped to the relevant space or folder.
  3. Granola returns snippets, metadata, action items, participants, links, and timestamps.
  4. The agent uses that material to draft the output or trigger downstream actions in a CRM, ticketing system, or email client.
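The four steps above can be sketched end to end. The Granola call is stubbed out, and the function names and response fields are assumptions for illustration, not a real API.

```python
def fetch_meeting_context(space_id: str, folder_path: str, query: str) -> dict:
    """Stub standing in for a scoped MCP/API call to Granola (step 2)."""
    return {
        "snippets": ["ACME flagged a rollout risk on the SSO migration."],
        "action_items": ["Confirm new go-live date with ACME IT"],
        "participants": ["alice@acme.example", "bob@vendor.example"],
    }

def draft_follow_up(context: dict) -> str:
    """Turn the structured material (step 3) into a draft (step 4)."""
    risks = "; ".join(context["snippets"])
    todos = "; ".join(context["action_items"])
    return f"Open risks: {risks} Next steps: {todos}"

def run_task() -> str:
    # Step 1: the task arrives ("prepare a follow-up for ACME with open risks").
    ctx = fetch_meeting_context("space_123", "Sales/ACME",
                                "open risks AND next steps")
    return draft_follow_up(ctx)

draft = run_task()
```

In a real system the draft would go to a CRM, ticketing system, or email client; the structure of the loop stays the same.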

None of that is exotic. What matters is the source material. Granola is packaging a high-signal stream of operational memory in a format agent systems can consume.

The company already lists integrations with Claude, ChatGPT, Lovable, Figma Make, Replit, Manus, v0, Bolt.new, Duckbill, and Dreamer. Some of those ecosystems are still early and messy, but the pattern is clear. Granola wants to sit underneath the apps where people already build and work.

Trust matters too

Granola didn’t get here cleanly.

Power users had built local workflows on top of the app’s on-disk cache. When Granola changed storage behavior, those setups broke and users pushed back. Co-founder Chris Pedregal reportedly said the local cache was never meant to be a supported API and promised proper data access. Now the company is shipping it.

That matters.

Developer trust is easy to lose, especially when a tool quietly becomes part of personal or team workflow infrastructure. If users depend on your data model, “that was never official” only gets you so far. Shipping a real API is the right fix, but it also confirms something larger: the product had already become programmable before Granola fully acknowledged it.

That’s a familiar pattern with good platforms. Users force the issue.

The technical trade-offs

Granola’s original appeal was its on-device design. That helped with privacy, friction, and probably latency. Once you add shared workspaces, org-wide search, APIs, and admin controls, you’re running synchronized server-side infrastructure whether you like it or not.

That comes with a different set of engineering requirements.

Sync and indexing

To support scoped search and team sharing, Granola likely needs server-side indexing over synchronized notes, not just a local cache. The system has to keep folder state, note metadata, permission boundaries, and searchable chunks in sync across users and workspaces.

For developers building on top of this, consistency windows matter. If a meeting just ended, how long until notes, action items, and summaries are queryable? If two systems update metadata, what wins? These are dull questions until an automation writes stale information into Salesforce.
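A defensive pattern for that consistency window is to poll for a terminal state before syncing anywhere. The status endpoint here is hypothetical; the point is the shape of the guard, not Granola's actual API.

```python
import time

def poll_until_final(get_status, timeout_s: float = 120.0,
                     interval_s: float = 1.0) -> bool:
    """Wait out the consistency window: don't push notes downstream until
    the source reports the summary as finalized. `get_status` stands in
    for a hypothetical status endpoint."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == "finalized":
            return True
        time.sleep(interval_s)
    return False  # caller should retry later rather than sync a stale draft

# Simulated status sequence: two in-flight polls, then done.
states = iter(["processing", "processing", "finalized"])
ok = poll_until_final(lambda: next(states), timeout_s=5.0, interval_s=0.0)
```

Webhooks are the better long-term answer, but a timeout-bounded poll like this keeps an automation from writing a half-finished summary into a system of record.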

Retrieval quality

Granola’s value depends on whether its retrieval layer holds up under enterprise sprawl.

The obvious architecture is some mix of chunking, embeddings, keyword search, and metadata filtering. The filters matter as much as the embeddings. A query constrained by space_id, folder_path, date range, participants, and maybe privacy flags is much more useful than broad semantic search across the entire company corpus.

Provenance matters too. If returned snippets include note IDs, timestamps, meeting URLs, and source spans, developers can build systems that cite their inputs instead of bluffing certainty. That cuts hallucination risk and gives humans a way to verify output before it gets sent or synced elsewhere.
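A small sketch of what provenance buys you: if each snippet carries its identifiers, the output can cite its inputs mechanically. The snippet fields mirror the metadata described above (note IDs, timestamps) but are assumed, not Granola's actual schema.

```python
def render_with_citations(snippets: list) -> str:
    """Attach provenance so a human can trace each claim back to its
    source note before the draft is sent or synced anywhere."""
    lines = []
    for s in snippets:
        lines.append(f'- {s["text"]} [{s["note_id"]} @ {s["timestamp"]}]')
    return "\n".join(lines)

cited = render_with_citations([
    {"text": "ACME asked to delay rollout.", "note_id": "note_42",
     "timestamp": "2026-03-18T14:03Z"},
])
```

A reviewer seeing `[note_42 @ 2026-03-18T14:03Z]` can open the note and check, which is the difference between a citable system and one that bluffs certainty.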

Security and governance

Once meeting notes become machine-readable team context, they stop being casual notes. They become records.

That brings the usual enterprise checklist: SSO, SCIM, RBAC, audit logs, encryption at rest, TLS in transit, retention policies, export controls, and probably data residency questions from larger customers. If regulated data is involved, you also need redaction, consent capture, and a credible story for legal hold and eDiscovery.

Granola can’t hide behind its on-device origins here. The moment notes sync for collaboration and API access, the governance bar rises fast.

What developers should care about

If you’re evaluating Granola as part of an AI stack, three things matter more than the fundraising headline.

First, access scoping. Can your agent get only the notes it should, based on user identity and workspace membership? If that answer is fuzzy, don’t connect it to anything sensitive.

Second, API shape. Does Granola expose useful structured fields, or blobs of summary text? Action items, participants, links, note IDs, timestamps, and folder metadata are what make automation reliable.

Third, events and throughput. If you want workflows on top of meetings, you need webhooks or event streams like note.created, summary.finalized, or action_item.extracted. You also need sane rate limits, pagination, retries, and idempotency controls. Without those pieces, the product is still a UI, not much of a platform.
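The idempotency piece is worth spelling out, because webhook senders retry. A minimal dedup guard looks like this; the event names follow the examples above, and the payload shape is an assumption.

```python
processed = set()  # in production this would be durable storage, not memory

def handle_event(event: dict) -> bool:
    """Idempotent webhook handler: event IDs are deduplicated so sender
    retries don't trigger duplicate downstream writes."""
    if event["id"] in processed:
        return False  # already handled; safe to acknowledge again
    processed.add(event["id"])
    if event["type"] == "summary.finalized":
        pass  # e.g. kick off the CRM sync here
    return True

first = handle_event({"id": "evt_1", "type": "summary.finalized"})
retry = handle_event({"id": "evt_1", "type": "summary.finalized"})
```

Without a guard like this, a single retried `summary.finalized` event can write the same follow-up into a CRM twice.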

A representative query probably looks something like this:

POST /v1/search
{
  "space_id": "space_123",
  "filters": {
    "folder_path": "Sales/ACME",
    "date_range": {
      "from": "2026-03-01",
      "to": "2026-03-26"
    }
  },
  "query": "open risks AND next steps",
  "top_k": 8
}

That’s the right abstraction. Search a bounded corpus. Return snippets plus metadata. Feed it into your agent or internal tool. Keep the source traceable.

A more serious category

There’s no shortage of AI meeting products. Fireflies, Read AI, and a pile of startups all do transcription, summaries, and action items. That market is crowded and getting commoditized fast.

Granola is betting the durable product lives in the context system around the transcript: permissions, scope, APIs, provenance, and handoffs into the tools where work happens.

That’s a better bet than prettier summaries.

The open question is whether Granola can keep the product pleasant while adding enterprise machinery. Plenty of tools get worse once they start serving admins, security teams, and platform buyers. Granola’s early success came from individual users liking it. If that disappears and it turns into another corporate memory system with a nice brand, the edge fades.

For now, the direction makes sense. If AI agents need grounded, permissioned context to be useful at work, meeting data is one of the richest sources available. Granola is trying to own that layer before a bigger company turns it into a feature.
