Artificial Intelligence October 1, 2025

Deta launches Surf, an AI browser with a NotebookLM-style research notebook

Deta Surf treats the browser like a local AI notebook, and that’s a smarter idea than another chat sidebar

Deta has launched Surf in beta. The pitch is straightforward: part AI browser, part NotebookLM-style research workspace. You open web pages, PDFs, and YouTube videos, ask questions across them, and get an editable notebook instead of a throwaway chat reply.

That shift in interface matters. Most AI browsers still feel like chat products attached to tabs. They summarize pages, answer questions, maybe automate a few clicks. Useful enough. But the basic unit is still the prompt and response. Surf centers the notebook instead. For research work, that makes a lot more sense. Most of that work is accumulation, revision, comparison, annotation, then synthesis.

Deta seems to get that.

What Surf does

Surf lets users create topic-based notebooks and fill them with sources from the web, PDFs, and YouTube. From there, it can generate a summary report, pull out key points, answer questions across multiple tabs, and produce small interactive artifacts like charts or widgets inside the notebook.

A few parts stand out:

  • Notebook-first workflow: AI output lands in an editable document, not a chat log.
  • Cross-tab reasoning: multiple open tabs can be used as one shared context pool.
  • Multimodal sources: web pages, PDFs, and videos all feed the same workspace.
  • Code generation for embedded views: charts and mini apps can sit next to the source material.
  • Local-first storage: notebooks and data stay on the device, with offline use built into the pitch.

It’s free in beta for now. Deta says image generation is coming, and a paid tier may later add cloud backup, collaboration, and multi-device support.

That product path is sensible. Local-first sounds great until a team wants shared research workflows. Then sync becomes the hard part.

The product label undersells it

Calling Surf an “AI browser” misses the interesting part. It looks more like a retrieval-backed notebook runtime inside a browser shell.

The likely architecture will feel familiar to anyone building RAG systems:

  • content extraction from DOM, PDFs, and transcripts
  • chunking into passages with metadata
  • local embeddings and indexing
  • retrieval across sources for query-time context
  • LLM synthesis into notebook entries with some level of citation or provenance
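
That pipeline can be sketched in a few lines. This is a toy illustration, not Deta's implementation: the hash-based `embed` is a stand-in for whatever on-device embedding model a real app would use, and all names are invented.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Stand-in embedding: hash character trigrams into a fixed-size vector.
    # A local-first product would run a real on-device embedding model here.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def chunk(text: str, size: int = 200) -> list[str]:
    # Naive fixed-size chunking; production systems split on structure
    # (headings, paragraphs, transcript segments) and attach metadata.
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(query: str, index: list[tuple[str, list[float]]], k: int = 3):
    # Score every indexed chunk against the query and keep the top k.
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, v)), c) for c, v in index]
    return [c for _, c in sorted(scored, reverse=True)[:k]]

# Treat two open "tabs" as one shared context pool.
sources = {
    "tab1": "Surf stores notebooks locally on the device.",
    "tab2": "A paid tier may later add cloud backup.",
}
index = [(c, embed(c)) for text in sources.values() for c in chunk(text)]
top = retrieve("where are notebooks stored", index)
```

The LLM synthesis step would then take `top` as context and write into the notebook rather than a chat log.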

None of that is new on its own. What matters is how Deta combines it.

NotebookLM treats documents as source material for synthesis. AI browsers treat tabs as live context for answers. Surf pulls those ideas together and keeps the workspace local. For developers, that points to a useful pattern: the browser session becomes a temporary knowledge graph, and the notebook becomes the durable interface on top of it.

If Deta gets the execution right, this is a better fit for real work than another assistant parked in the corner of the screen.

Why local-first matters

The local-first angle goes beyond privacy.

If Surf stores notebook content, indices, and retrieval data on-device, probably through something like IndexedDB, SQLite via WASM, or browser file system APIs, the trust model changes. Users can inspect their material, edit it directly, and keep it offline. Teams can also draw a cleaner line between what stays local and what gets sent to a remote model. That matters if you’re dealing with internal docs, customer notes, research PDFs, or legal material that shouldn’t be drifting into some vendor pipeline.
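
As a rough analogy for what such an on-device store could look like, here is a sketch using Python's built-in sqlite3 in place of SQLite-over-WASM. The schema and the local/remote flag are invented for illustration, not Surf's actual design.

```python
import sqlite3

# In-memory stand-in for an on-device store (a browser app would reach
# SQLite through WASM, or fall back to IndexedDB).
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE chunks (
        id INTEGER PRIMARY KEY,
        notebook TEXT,
        source_url TEXT,
        passage TEXT,
        sent_to_remote_model INTEGER DEFAULT 0  -- local/remote boundary flag
    )
""")
db.execute(
    "INSERT INTO chunks (notebook, source_url, passage) VALUES (?, ?, ?)",
    ("market-research", "https://example.com/report", "Q3 revenue grew 12%."),
)
# Everything stays inspectable and editable locally; only rows explicitly
# flagged would ever leave the device for a remote model call.
local_only = db.execute(
    "SELECT COUNT(*) FROM chunks WHERE sent_to_remote_model = 0"
).fetchone()[0]
```

The point of the flag column is the cleaner line mentioned above: a team can audit exactly which passages were ever sent to a vendor API.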

It also creates real engineering pressure.

Local indexing sounds good until someone dumps fifty PDFs and a four-hour transcript into a notebook. Then you're dealing with chunking jobs, embedding latency, storage growth, memory pressure, and UI responsiveness. Background processing has to be handled carefully or the app starts to feel sluggish fast. Incremental indexing, smart caching, and hard resource limits matter a lot in a local-first product.
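
One common mitigation is to index incrementally under a hard per-tick work budget, so the UI thread never stalls on a bulk import. A toy version of that scheduling idea (function and variable names are illustrative):

```python
from collections import deque

def index_incrementally(docs: dict[str, str], budget_per_tick: int = 5):
    """Chunk all documents up front, then process a bounded batch per tick."""
    queue = deque()
    for doc_id, text in docs.items():
        for i in range(0, len(text), 100):           # naive 100-char chunks
            queue.append((doc_id, text[i:i + 100]))
    index, ticks = [], 0
    while queue:
        for _ in range(min(budget_per_tick, len(queue))):
            doc_id, passage = queue.popleft()
            index.append((doc_id, passage))          # embed + store would go here
        ticks += 1                                   # yield to the UI between ticks
    return index, ticks

# Fifty PDFs and a long transcript reduce to a queue, not a freeze.
docs = {"pdf-1": "x" * 1000, "transcript-1": "y" * 300}
index, ticks = index_incrementally(docs)
```

The same queue structure gives you natural places to enforce storage caps and drop or defer work when memory pressure rises.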

There’s a second problem: these systems usually feel great for one person and awkward for teams until sync is solved properly. “Cloud backup” is easy copy. Conflict resolution, permissions, encryption, offline merges, and reproducibility across devices are where products usually get exposed.

The code generation piece matters

Surf can apparently generate code for graphs, charts, and small widgets that live inside the notebook. For engineers, that may matter more than the summarization features.

Summaries are table stakes now. Every AI knowledge product can summarize. The more useful move is turning source material into something executable.

Say you’re reading three market reports and a YouTube interview with an executive. A standard AI assistant gives you bullets. A notebook that can generate a comparison chart, a timeline, or an interactive filter in the same workspace starts to look like a lightweight analysis tool.

There are a few plausible ways Deta could be doing this:

  • plain JavaScript widgets rendered in the notebook
  • charting through D3.js, Plotly, or Vega-Lite
  • sandboxed execution via iframe and a tight CSP
  • optional Pyodide or WebAssembly for Python-style data transforms

That last option matters. If generated code can manipulate notebook data in place, the notebook starts behaving like a constrained computational workspace. Not Jupyter. Still enough for a lot of day-to-day research and product analysis without forcing people into another tool.
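
The kind of code such a sandbox would run is mundane: a transform over notebook data that feeds a chart widget. A sketch with invented data and field names:

```python
# Hypothetical rows extracted from three sources in a notebook.
rows = [
    {"source": "report-a", "year": 2023, "revenue": 120},
    {"source": "report-a", "year": 2024, "revenue": 150},
    {"source": "report-b", "year": 2024, "revenue": 140},
]

def yearly_totals(rows: list[dict]) -> list[tuple[int, int]]:
    # Aggregate per year; the output could feed a chart widget directly.
    totals: dict[int, int] = {}
    for r in rows:
        totals[r["year"]] = totals.get(r["year"], 0) + r["revenue"]
    return sorted(totals.items())

series = yearly_totals(rows)   # [(2023, 120), (2024, 290)]
```

Nothing here needs Jupyter; it needs a runtime that can see notebook data and hand results back to a rendering layer.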

It also raises obvious security questions.

Generated code inside a notebook is dangerous if the sandbox is weak. Loose network access or sloppy script isolation turns a “helpful chart” into a path for data exfiltration or supply-chain nonsense. Any product doing this needs strict execution boundaries, limited permissions, and clear visibility into what code was generated and what it can touch.
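
On the web platform, the standard containment tools are the iframe `sandbox` attribute and a restrictive Content-Security-Policy. A sketch of wrapping generated widget code that way (the wrapper function is illustrative; the sandbox and CSP semantics are real platform behavior):

```python
def wrap_generated_widget(js_code: str) -> str:
    # sandbox="allow-scripts" withholds same-origin access, form submission,
    # popups, and top-level navigation; the CSP blocks all network fetches.
    csp = "default-src 'none'; script-src 'unsafe-inline'"
    doc = (
        "<html><head>"
        f"<meta http-equiv=\"Content-Security-Policy\" content=\"{csp}\">"
        f"</head><body><script>{js_code}</script></body></html>"
    )
    # Escape double quotes so the document can sit in the srcdoc attribute.
    escaped = doc.replace(chr(34), "&quot;")
    return f'<iframe sandbox="allow-scripts" srcdoc="{escaped}"></iframe>'

frame = wrap_generated_widget("document.body.textContent = 'chart here';")
```

Even with those boundaries, a product still needs to surface what code was generated and what data it was handed, so users can review it.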

For technical buyers, that should be high on the test list.

The editable output is one of the best parts

The editable notebook may matter more than the browser wrapper.

A lot of AI tools still treat generated output like a one-shot performance. Teams usually want drafts they can correct, annotate, and keep. Surf’s notebook pushes model output into something people can refine.

That’s a healthier default.

It also helps with auditability. If a summary report is tied back to source chunks, URLs, timestamps, or transcript segments, you at least have a usable provenance trail. Not perfect, but a lot better than the usual “the model said so” fog. For engineering orgs and data teams, that difference matters. A synthesized note with visible sources can survive a handoff. A polished chat answer usually can’t.
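
A usable provenance trail can be as simple as storing, next to each synthesized claim, the source locations it was grounded on. A minimal sketch (schema and data invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    # Each citation points back to a concrete location in a source:
    # a URL, an optional PDF page, an optional transcript timestamp.
    citations: list[dict] = field(default_factory=list)

claim = Claim(
    text="Vendor X shipped on-device indexing in Q2.",
    citations=[
        {"url": "https://example.com/changelog", "pdf_page": None, "ts": None},
        {"url": "https://youtube.com/watch?v=abc", "pdf_page": None, "ts": "12:34"},
    ],
)

def is_auditable(claim: Claim) -> bool:
    # A claim survives a handoff only if every citation is resolvable.
    return bool(claim.citations) and all(c.get("url") for c in claim.citations)
```

A report built from such records can be checked after the fact; a bare chat answer cannot.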

The weak spot is citation quality. YouTube is messy. Descriptions don’t always match what was said, transcripts can be rough, and timestamp attribution is often shaky. If Surf wants serious research users, source grounding has to be solid.

Pressure on AI browsers and NotebookLM

Surf lands in a real gap.

Google’s NotebookLM is good at source-based synthesis, but live browsing isn’t its core interface. AI browsers like Comet, Dia, and Opera Neon are built around active web context, but many still optimize for convenience over durable knowledge work.

Deta’s product sits between those categories and, on paper, improves on both.

That doesn’t guarantee anything. Hybrid product categories can get muddy fast. But if users like the notebook model, competitors will copy it quickly. Expect more browser tools to add persistent workspace modes, tighter source attribution, offline caches, and built-in data visualization. The browser tab is already becoming a context provider for LLMs. The next obvious step is making that context editable, reusable, and shareable.

What technical teams should watch

If you’re evaluating Surf or tools like it, don’t spend much time on whether the summaries sound polished. Demo summaries usually do.

Watch these instead:

  • Source control inside the notebook: can you trace claims back to URLs, PDF pages, or transcript timestamps?
  • Local versus remote processing: what stays on-device, and what gets sent to a model API?
  • Execution sandboxing: how isolated is generated code, and can it call out to the network?
  • Large corpus behavior: does the app stay usable after dozens of long PDFs and transcripts?
  • Reproducibility: can another teammate recover the same notebook state and get the same answers later?
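
One concrete way to probe the reproducibility item is to fingerprint notebook state (sources, model identifier, prompts) and check that two devices agree. This is a hypothetical evaluation harness, not a Surf feature:

```python
import hashlib
import json

def notebook_fingerprint(sources: list[str], model: str, prompts: list[str]) -> str:
    # Canonical JSON keeps the hash stable across devices and insertion order.
    state = {"sources": sorted(sources), "model": model, "prompts": prompts}
    blob = json.dumps(state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()

a = notebook_fingerprint(["https://a.example", "https://b.example"],
                         "local-7b", ["summarize"])
b = notebook_fingerprint(["https://b.example", "https://a.example"],
                         "local-7b", ["summarize"])
# Same inputs in any source order give the same fingerprint;
# any divergence between teammates flags non-reproducible state.
```

Matching fingerprints do not guarantee identical model outputs, but mismatched ones reliably locate where two notebooks diverged.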

There’s also a workflow question. Surf makes the most sense for research-heavy product work, competitive analysis, technical investigation, standards tracking, or early-stage synthesis. It’s less compelling if your team already has a disciplined docs pipeline and mostly wants deterministic search.

Still, Deta is pulling in the right direction. The AI browser category has spent too much time proving it can answer questions about tabs. Surf asks for something more useful: a browsing session that turns into working memory you can keep, edit, and compute on.

That’s a stronger product thesis than another chatbot glued to the web.

What to watch

The risk is overreading early beta progress as operational proof. For research-grade use, reliability, source grounding, data quality, and human review matter more than a clean product story. The useful question is where Surf reduces friction without weakening accountability.
