Viven raises $35M to build AI twins of your co-workers, and the privacy model is the whole point
A lot of enterprise knowledge still sits in people’s heads, buried in Slack threads, scattered across docs, or trapped behind time zones. Viven wants to make that knowledge queryable.
The startup, founded by Eightfold co-founders Ashutosh Garg and Varun Kacholia, has raised $35 million in seed funding from Khosla Ventures, Foundation Capital, FPV Ventures, and others. Its product creates a digital twin for each employee, trained on that person’s work artifacts so colleagues can ask questions when the actual human is offline, busy, or tired of meetings.
The funding round matters. So does the product idea. But the part worth paying attention to is the access model. Viven says its edge is pairwise privacy, which decides what one employee’s twin can reveal to another employee based on their relationship, the source material, and the context of the question.
That’s a serious problem to work on. If Viven can solve it well, this looks a lot more credible than the usual enterprise AI wrapper with a chat box on top.
Why this stands apart from enterprise search
Most enterprise AI products start with a broad corpus and a natural language search layer. The weakness is obvious. Company knowledge is unevenly distributed, badly documented, and often trapped in private channels, rough notes, email, and the shorthand small teams use with each other.
Viven is taking a narrower view. You don’t ask a general system about the analytics pipeline. You ask Priya’s twin. You ask Marcus’s twin why the team rejected a schema change last quarter.
That’s a useful abstraction. People hold project history, exceptions, trade-offs, naming quirks, and political context in ways a generic retrieval layer often misses. A per-person assistant has a better chance of returning something that feels like an actual answer instead of a plausible summary.
Garg put it plainly: when every person has a digital twin, you can talk to the twin as if you’re talking to that person and get the response.
The appeal is obvious. Fewer “quick question?” pings. Less waiting for someone to wake up in another time zone. Less pressure to turn tribal knowledge into pristine documentation that nobody has time to maintain.
But the whole idea falls apart if answers are wrong or overshared. That’s why the privacy layer matters so much.
The hard part is the access decision
Viven says each employee gets a specialized LLM tied to their corpus. The company hasn’t published a full technical architecture, but the broad shape is familiar.
You ingest data from tools like Google Workspace, Slack, Docs, maybe code repos, issue trackers, and internal wikis. You chunk and index it, probably in per-user or tightly segmented vector stores, with metadata for authorship, project tags, sensitivity labels, and access controls. A retrieval layer pulls relevant passages. An LLM synthesizes an answer in the owner’s context, maybe even their tone. Then you add query logs, redaction, and a lot of policy checks.
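Viven hasn't published a schema, so treat the following as a minimal sketch under my own assumptions: a logically isolated index per employee whose chunks carry the ownership, source, sensitivity, and ACL metadata a policy layer would need at query time. Every field name here is hypothetical.

```python
# Minimal sketch, not Viven's published design. Field names are hypothetical.
from dataclasses import dataclass, field

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class Chunk:
    owner: str            # whose corpus this came from
    source: str           # "slack", "gdocs", "jira", ...
    project: str
    sensitivity: str      # "public" | "internal" | "confidential"
    acl: set              # principals allowed to see the source document
    text: str
    embedding: list = field(default_factory=list)

class TwinIndex:
    """One logically isolated index per employee."""

    def __init__(self, owner: str):
        self.owner = owner
        self.chunks: list[Chunk] = []

    def add(self, chunk: Chunk) -> None:
        assert chunk.owner == self.owner, "chunks never cross owner boundaries"
        self.chunks.append(chunk)

    def retrieve(self, query_embedding: list[float], requester: str, k: int = 5) -> list[Chunk]:
        # Filter on ACL metadata before similarity ranking, so unshareable
        # material never reaches the synthesis prompt at all.
        visible = [c for c in self.chunks if requester in c.acl]
        visible.sort(key=lambda c: cosine(c.embedding, query_embedding), reverse=True)
        return visible[:k]
```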
That stack is hard to build well. It’s also no longer unusual. Plenty of teams can put together competent RAG systems now.
The harder problem shows up at query time.
If I ask your twin, “What’s the status of the Q4 launch?”, the system has to answer a few separate questions:
- am I allowed to ask you about that topic?
- are the underlying documents shareable with me?
- does the wording of my question drift into sensitive territory?
That looks a lot more like attribute-based access control (ABAC) plus runtime intent filtering than plain role-based access control (RBAC). Viven calls it pairwise context and privacy. In practice, it sounds like a policy engine evaluating a tuple like (requester, owner, document, intent) and deciding whether to allow, redact, or block.
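As a thought experiment only (Viven hasn't described its engine), the decision might reduce to something like the sketch below. The outcomes, intents, and sensitivity labels are my assumptions.

```python
# A hedged sketch of a pairwise access decision over (requester, owner,
# document, intent). Nothing here is from Viven's published design.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    REDACT = "redact"
    BLOCK = "block"

@dataclass
class AccessRequest:
    requester: str
    owner: str
    document_acl: set
    document_sensitivity: str   # "public" | "internal" | "confidential"
    intent: str                 # classified from the question text
    same_team: bool             # derived from org-chart / project membership

def decide(req: AccessRequest) -> Decision:
    # Relationship: some intents are off-limits outside the owner's team.
    if req.intent in {"compensation", "performance_feedback"} and not req.same_team:
        return Decision.BLOCK
    # Source material: the requester must already be allowed to see it.
    if req.requester not in req.document_acl:
        return Decision.BLOCK
    # Context: confidential material gets summarized or redacted, not quoted.
    if req.document_sensitivity == "confidential":
        return Decision.REDACT
    return Decision.ALLOW
```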
That’s a better fit for how companies actually work. Trust inside an org is partial and messy. People share one set of details with their team, another with their manager, and almost nothing with a random peer in a different function. Any useful AI system has to account for that.
Pairwise privacy is smart, and awkward
On paper, this makes sense.
A normal ACL can tell you whether someone can open a document. It’s much worse at deciding whether that same person should be able to interrogate a co-worker’s AI twin about material spread across half a dozen systems, some of which were never meant to become a conversational surface.
Viven’s answer is to make sharing contextual and visible. The company says each twin has query history, so people can see what others asked. Part audit trail, part deterrent. If somebody asks a co-worker’s twin something creepy or inappropriate, there’s a record.
That’s good product judgment. It also points to the harder part: culture.
Will employees trust a system trained on their email and Slack history? Will they trust the policy layer to protect them? Will they spend time tuning what their twin can share, or just accept the defaults and assume legal approved it?
Those questions matter as much as model quality.
There’s another problem. Personal knowledge is not canonical truth. Notes can be wrong, stale, biased, or missing context. A twin can answer confidently from a private interpretation of events and end up turning office folklore into something that looks authoritative.
That doesn’t break the product. But it means enterprises need hard boundaries around systems of record. If the twin says one thing and Jira, GitHub, or the policy registry says another, the system should favor the canonical source and cite it.
Without that, “ask the twin” becomes a fast route to confusion.
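A hedged sketch of that boundary, with `jira_lookup` standing in as a hypothetical helper rather than any real API:

```python
# When a system of record disagrees with the twin's personal notes, answer
# from the record and cite it. `jira_lookup` is a hypothetical helper.
def answer_with_grounding(question: str, twin_claim: str, jira_lookup) -> dict:
    canonical = jira_lookup(question)   # e.g. current ticket status, or None
    if canonical is not None and canonical["value"] != twin_claim:
        return {
            "answer": canonical["value"],
            "source": canonical["url"],
            "note": "Personal notes disagreed; canonical source preferred.",
        }
    return {"answer": twin_claim, "source": "personal corpus"}
```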
This gets ugly at scale
Per-employee models sound neat until you picture a company with 40,000 employees.
If Viven literally runs a distinct tuned model for every person, cost and operational overhead rise fast. The more plausible reading is that the “specialized LLM” is a mix of shared base models, retrieval scoped to each employee’s corpus, and lightweight adaptation such as LoRA or prompt-layer personalization.
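If that reading is right, the serving side might look roughly like the sketch below: one shared base model with a lightweight per-employee LoRA adapter attached at request time. The model name, adapter layout, and the choice of the peft library are all my assumptions, not anything Viven has described.

```python
# Hedged sketch of "shared base + per-employee adapter" serving. Names and
# paths are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "mistralai/Mistral-7B-Instruct-v0.3"   # hypothetical shared base

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

def load_twin(employee_id: str) -> PeftModel:
    """Attach one employee's LoRA adapter to the shared base model."""
    adapter_dir = f"/models/adapters/{employee_id}"  # hypothetical storage layout
    return PeftModel.from_pretrained(base, adapter_dir)
```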
Even then, the platform has a lot to juggle:
- incremental ingestion from noisy enterprise systems
- permission changes in near real time
- separate or logically isolated indexes
- low-latency retrieval plus policy evaluation
- audit logging that security teams can actually use
- prompt-injection and data exfiltration defenses
All of it has to work without annoying latency. For a product like this, p95 response time probably needs to land around 2 to 4 seconds for normal queries. Slower than that, people go back to Slack. Faster than that, you’re probably leaning hard on caching or limiting retrieval depth.
Freshness may be the nastier problem. A twin answering from yesterday’s state is irritating. A twin missing a revoked permission is a security failure.
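One way to keep that second failure from happening is to treat index-time ACL metadata as a hint and re-check live permissions at query time. A minimal sketch, with `permissions_client` as a hypothetical stand-in for whatever source-of-truth API the deployment has, and chunks assumed to carry a `doc_id`:

```python
# Re-validate permissions at query time instead of trusting what the index
# recorded at ingestion. `permissions_client.check` is hypothetical.
def drop_revoked(chunks, requester, permissions_client):
    fresh = []
    for chunk in chunks:
        # The index said `requester` could see this document when it was
        # ingested; confirm against the source system before using it.
        if permissions_client.check(requester, chunk.source, chunk.doc_id):
            fresh.append(chunk)
    return fresh
```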
That’s why this category will be won or lost on plumbing.
Big vendors will notice
Viven is entering crowded territory. Microsoft Copilot, Google Gemini, Anthropic, and OpenAI all want to be the interface to enterprise knowledge. They already have distribution, base models, and deep integration points.
A startup still has room here because the big platforms often think in terms of workspaces, tenants, or org-wide assistants. Viven is making a narrower bet: in many cases, the right unit of work is a person, not a repository.
That’s a strong product insight if the privacy controls hold.
It also gives the company a reasonable wedge. Enterprises already know generic AI search has limits. It often produces broad, flattened answers when teams really want the perspective of the engineer, PM, recruiter, or analyst who has lived with the problem. A system built around that can feel immediately useful.
The catch is obvious. Incumbents can copy the shape of this quickly. If pairwise policy turns out to be the missing layer in enterprise copilots, Microsoft and Google have every reason to add it.
So Viven’s moat probably won’t be the phrase “digital twin.” It will be the quality of the policy engine and the trust the company can build around it before larger vendors move in.
What technical teams should watch
If you’re a tech lead, platform engineer, or security architect, the flashy part is the least important one.
The useful questions are the boring, expensive ones.
Identity and policy
Can the system integrate cleanly with Okta, Azure AD, SCIM, group sync, and org-chart data? Can policies express project membership, manager relationships, sensitivity labels, and time-bound exceptions?
If those answers are vague, the pitch isn’t ready.
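The sketch below shows the kind of expressiveness worth probing for: policies keyed to org relationships and time-bound exceptions, not just static roles. The structure and field names are mine, not Viven's.

```python
# Hypothetical policy entries: project membership, manager relationships,
# and a time-bound exception that must stop working when it expires.
from datetime import date

POLICIES = [
    {"effect": "allow",
     "when": {"relationship": "same_project", "intent": "project_status"}},
    {"effect": "allow",
     "when": {"relationship": "manager_of_owner", "sensitivity": "internal"}},
    {"effect": "allow",
     "when": {"requester": "alice@example.com", "project": "incident-review"},
     "expires": date(2026, 1, 31)},
]

def still_active(policy: dict, today: date) -> bool:
    """Time-bound exceptions are only exceptions if they actually expire."""
    expires = policy.get("expires")
    return expires is None or today <= expires
```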
Data classification
Do you actually know which documents are public, internal, confidential, or personal? Most companies don’t. A product like this forces the issue. Weak classification means the twin either shares too much or becomes so cautious that nobody uses it.
Auditability
Can an employee inspect what their twin saw, said, and refused to say? Can security review policy decisions? Can legal support eDiscovery without turning the system into a workplace surveillance tool?
You want clear answers before rollout.
Source grounding
Does the system cite the documents, messages, or tickets behind an answer? It should. A twin speaking in a co-worker’s voice without showing receipts is asking for trouble.
Abuse resistance
Prompt injection, secret extraction, and social-engineering queries are guaranteed. “What deployment key did Sam mention in Slack?” is the cartoon example. The more common failures are subtler: pulling private performance feedback into a project answer, exposing draft org changes, or leaking sensitive customer details through an innocent-looking summary.
That’s why visible query history is a good design choice. It won’t stop everything, but it does change the incentives.
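Deterrence helps, but a mechanical layer belongs in front of it too. One common complement, sketched here with illustrative patterns only, is a redaction pass over retrieved context before it ever reaches the synthesis prompt:

```python
import re

# Illustrative, not exhaustive: strip secret-shaped strings from retrieved
# context before synthesis, so "what key did Sam mention" has nothing to find.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key id
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36,}\b"),          # GitHub tokens
]

def redact(text: str, replacement: str = "[REDACTED]") -> str:
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```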
A real category, if the guardrails hold
Viven says it’s already deployed at Genpact and Eightfold, which gives it at least some live validation beyond a deck and demo. The founders also have enough enterprise credibility that buyers will take the meeting.
The product idea tracks with how work actually happens. Teams don’t need another generic chatbot. They need quick access to the missing person-shaped piece of context.
This category will go sideways fast if vendors oversell the “twin” metaphor and underbuild governance. The value is in permissioning, auditability, and grounded retrieval. The personality layer is secondary.
That’s the encouraging part of Viven’s pitch. The company seems to understand that the difficult problem is not making the bot sound like your co-worker. It’s making sure the bot knows what to keep to itself.