Artificial Intelligence November 27, 2025

Onton raises $7.5M to take its AI shopping platform beyond furniture

Onton raises $7.5M for AI shopping, and its stack is more interesting than the usual chat wrapper

Onton, the startup formerly known as Deft, has raised $7.5 million to expand its AI shopping product beyond furniture into apparel and later consumer electronics. The round was led by Footwork, with Liquid 2, Parable Ventures, and 43 participating. Total funding is now about $10 million.

The funding is one part of the story. The technical bet is the more interesting one.

Onton says it grew monthly active users from 50,000 to more than 2 million, and it claims 3x to 5x higher conversion than traditional e-commerce flows. That conversion figure needs context on traffic mix and attribution before anyone takes it at face value. Still, the product direction deserves attention. Onton is building the kind of shopping system a lot of larger AI assistants still handle poorly: multimodal search with explicit constraints, grounded explanations, and category-specific reasoning.

That matters in retail. A chatbot can talk fluently about sofas. The harder job is explaining to a pet owner why one upholstery fabric is likely to wear better than another.

Why it stands out

Most AI shopping demos still rely too much on fluent language. Ask for "a durable sectional under $2,000 that works with walnut floors and won't get destroyed by cats," and a general LLM can give you a plausible answer. The problems show up when you need exact attributes, price filters, material trade-offs, style matching, or a rationale you can actually check.

Onton is pushing a neuro-symbolic approach. Neural models handle the fuzzy parts: multimodal retrieval, image understanding, semantic matching. Symbolic logic handles the structured parts: constraints, ontology, product attributes, and rule-based reranking.

That setup fits shopping well. Furniture already exposes the limits of pure chat. Apparel and electronics will expose them even faster.

Apparel brings sizing, fit, stretch, fabric behavior, and returns. Electronics brings connector standards, compatibility rules, and a lot of products that look interchangeable until they fail in specific, annoying ways. If Onton wants to move from interesting demo to genuinely useful system, those categories are the real test.

Beyond the chat box

Onton's interface includes text prompts, image uploads, image generation, and an infinite canvas for visual ideation. Users can upload a room photo, generate design directions, and place products into the scene.

That's a good fit for how people actually shop. Intent often isn't verbal. People may not know the right words for a style, but they know it when they see it. A screenshot, a room photo, or a rough visual composition usually carries more signal than "modern but warm."

It also raises the technical bar. Once the interface goes multimodal, the backend has to do much more than rank product titles. It needs shared text-image embeddings, product normalization across messy retailer feeds, scene understanding, generation pipelines, and probably some kind of cached, tiered rendering so latency doesn't ruin the experience.

For anyone building similar products, the hard part isn't adding a prompt field. It's getting all of those inputs to land in the same retrieval and ranking system.

What Onton's stack probably looks like

The company describes its approach as neuro-symbolic. In practice, that likely means four major layers.

Product ingestion and normalization

Retail data is a mess. Different merchants describe the same item with different units, names, and missing fields. A "cream performance fabric sofa" on one site becomes a "beige polyester blend couch" on another.

So you need a canonical schema. That could map to schema.org/Product, GS1 attributes, and category-specific fields layered on top. Then comes extraction and cleanup:

  • entity extraction for materials, finishes, dimensions, or hardware
  • synonym resolution such as sofa, couch, loveseat
  • unit normalization
  • category taxonomies that hold up under edge cases

That usually ends in a product knowledge graph, or at least a strongly structured attribute store. Without that, explanations get shaky and filters get brittle.

Neural retrieval

This is the part most teams like to demo.

A likely setup uses a CLIP-style or newer multimodal encoder to embed images and text into a shared vector space, plus a lexical index like BM25 for exact matches and broader recall. Vector search could sit on top of FAISS, ScaNN, or Milvus.

That gets Onton from vague user intent to candidate products. It also helps with screenshot search and room-based discovery, which are now close to table stakes for AI-native shopping products.
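Running a vector index and a lexical index side by side means the candidates have to be merged. One common, simple way is reciprocal rank fusion; the sketch below assumes each index has already returned ranked product IDs (the IDs are invented):

```python
# Reciprocal rank fusion (RRF) over candidate lists from a vector index
# and a lexical index. Higher fused score = appears earlier and in more lists.

def rrf(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    """Fuse multiple ranked lists; k dampens the influence of top ranks."""
    scores: dict[str, float] = {}
    for ranking in ranked_lists:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["sofa-182", "sofa-044", "sofa-310"]   # from embedding search
lexical_hits = ["sofa-044", "sofa-771", "sofa-182"]  # from BM25

print(rrf([vector_hits, lexical_hits]))
```

Items that appear high in both lists, like "sofa-044" here, win; items seen by only one index still survive with lower scores, which preserves recall.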

Symbolic reasoning and constraints

This is where the system has to prove itself.

Shopping queries are full of explicit and implied rules. Pet-friendly fabrics. Mid-century silhouettes. Sectionals that fit a wall. Dining tables that seat six without exceeding 72 inches. TV mounts that support a VESA pattern. USB-C hubs that actually deliver the promised power.

LLMs can talk about all of that. They still aren't reliable enough to enforce it.

A symbolic layer can. That might be a rules engine, a constraint solver, or a narrower domain-specific ranking system that applies structured filters and reranks candidates using product facts. If the user says "pet friendly," certain fabric properties should move up. If they imply a budget, price fairness should matter. If they want a specific style, silhouette and material cues should feed the scorer.

For furniture, that means cleaner results. For electronics, it helps avoid bad recommendations. For apparel, it may help with fit and return reduction if Onton can get good enough sizing data from suppliers.
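A minimal sketch of that filter-then-rerank pass, with invented attribute names and rule weights standing in for whatever domain rules a real system would use:

```python
# Hypothetical constraint-and-rerank pass over retrieval candidates:
# hard filters first, then rule-based score boosts from product facts.

CANDIDATES = [
    {"id": "sofa-044", "price": 1899, "fabric": "performance weave", "pet_safe": True},
    {"id": "sofa-182", "price": 2400, "fabric": "velvet", "pet_safe": False},
    {"id": "sofa-771", "price": 1650, "fabric": "polyester blend", "pet_safe": True},
]

def apply_constraints(items, max_price=None, pet_friendly=False):
    kept = [p for p in items if max_price is None or p["price"] <= max_price]
    if pet_friendly:
        kept = [p for p in kept if p["pet_safe"]]

    def score(p):
        s = 0.0
        if pet_friendly and "performance" in p["fabric"]:
            s += 1.0                              # durable weaves rank higher for pet owners
        if max_price is not None:
            s += 1.0 - p["price"] / max_price     # reward price headroom under the budget
        return s

    return sorted(kept, key=score, reverse=True)

result = apply_constraints(CANDIDATES, max_price=2000, pet_friendly=True)
print([p["id"] for p in result])
# → ['sofa-044', 'sofa-771']
```

The point is that the constraints are enforced in code over structured facts, not merely mentioned in generated prose.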

Explanation generation

This is one place where small, tightly scoped generation is genuinely useful.

If the system says a sofa is a good match for pet owners, it should point to material composition or weave properties from structured data, not invent a tidy justification in prose. A light RAG layer can turn product facts into readable explanations without drifting.

That's good UX. It's also basic risk control.
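A toy version of that grounding discipline, using templates where a production system might put a small LLM over the same retrieved facts. Every clause in the output maps to a stored attribute, so nothing is invented:

```python
# Grounded explanation sketch: each sentence fragment is generated only
# if the backing product fact exists. Attribute names are hypothetical.

FACT_TEMPLATES = {
    "fabric": "the {value} upholstery resists snags and abrasion",
    "rub_count": "it is rated at {value} double rubs, well above everyday wear",
    "cushion_covers": "the cushion covers are {value}",
}

def explain(product: dict, intent: str) -> str:
    facts = [tmpl.format(value=product[key])
             for key, tmpl in FACT_TEMPLATES.items() if key in product]
    if not facts:
        return "No supporting product data available."
    return f"Good match for {intent}: " + "; ".join(facts) + "."

sofa = {"fabric": "performance weave", "rub_count": 50000,
        "cushion_covers": "removable and washable"}
print(explain(sofa, "pet owners"))
```

When the data is missing, the honest move is to say so rather than let a language model fill the gap.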

Why developers should care

A lot of teams still treat retail AI as a chat integration problem. Onton's approach points somewhere else. Search, recommendation, and visual discovery are collapsing into one system.

That changes the engineering priorities.

Data quality is product quality

If the attribute model is weak, every layer above it gets worse. Retrieval gets noisy, rules misfire, explanations look suspicious, and conversion claims fall apart.

The boring work matters. Schema design. Ontology maintenance. Synonym dictionaries. Unit normalization. Provenance tracking. If you're running a commerce search stack, a lot of the money goes here.

Latency budgets get ugly fast

Text search alone is manageable. Add image understanding, generation, scene composition, reranking, and explanatory output, and each request starts fanning out across several expensive services.

Onton will need to keep that under control if it wants the product to feel usable at scale. That probably means:

  • aggressive caching for common prompts and product embeddings
  • asynchronous generation for heavier visual tasks
  • tiered inference, with cheaper models for candidate generation and heavier models only when needed
  • careful GPU allocation, especially if image generation is part of the default flow

A rich interface is expensive to run.
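The caching and tiered-inference ideas above can be sketched with stand-in models: a cached cheap scorer prunes candidates, and only the survivors hit the expensive path. The model functions here are placeholders, not real services:

```python
# Illustrative tiered-inference dispatch. cheap_model stands in for a
# distilled bi-encoder; expensive_model for a heavy cross-encoder that
# should only see a handful of items. lru_cache stands in for a result cache.
from functools import lru_cache

@lru_cache(maxsize=50_000)
def cheap_model(query: str, product_id: str) -> float:
    """Fast, low-cost relevance proxy: crude token overlap."""
    return float(sum(tok in product_id for tok in query.split()))

def expensive_model(query: str, product_id: str) -> float:
    """Pretend refinement of the cheap score; called on few items."""
    return cheap_model(query, product_id) + 0.5

def rank(query: str, product_ids: list[str], escalate_top_k: int = 2) -> list[str]:
    coarse = sorted(product_ids, key=lambda p: cheap_model(query, p), reverse=True)
    head, tail = coarse[:escalate_top_k], coarse[escalate_top_k:]
    head = sorted(head, key=lambda p: expensive_model(query, p), reverse=True)
    return head + tail

print(rank("walnut sofa", ["walnut-sofa-01", "oak-table-07", "sofa-velvet-02"]))
```

The economics come from the shape, not these toy functions: most requests never touch the expensive tier, and repeated queries never recompute the cheap one.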

Auditability matters more than people admit

Shopping recommendations look low risk next to healthcare or finance. They're not harmless. Bad advice in commerce hits trust, refunds, support costs, and, in some categories, compliance.

A neuro-symbolic system gives teams something many LLM-first stacks still struggle to provide: a traceable reason for why an item appeared, why it ranked highly, and which rules shaped the outcome. That's useful for debugging. It's also useful when merchants want to know why a product is buried.

Apparel is where this gets harder

Furniture is a sensible beachhead. The ontology is tricky, but the category changes slowly compared with fashion, and the value per purchase is high enough to support richer recommendation flows.

Apparel is messier.

Taxonomies change faster. Inventory turns over constantly. Fit is partly objective and partly personal. "True to size" tells you almost nothing without body shape, garment cut, brand variance, fabric stretch, and return behavior. If Onton wants to stand out in apparel, visual search and style matching won't be enough. It needs solid fit logic and better supplier data than the average marketplace gets.

Otherwise the product risks turning into a polished inspiration engine with the usual conversion leak at checkout and the usual return problem a week later.

That doesn't make the expansion a bad idea. It means the technical bar just got higher.
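Fit logic of the kind apparel demands can start very simply, for example with per-brand sizing offsets learned from returns data. Everything below, brands, offsets, and size ladder, is invented for illustration:

```python
# Toy fit-logic sketch: shift a shopper's base size by a per-brand
# offset (positive = brand runs small, so size up). All numbers invented.

BRAND_OFFSETS = {"brand-a": 0, "brand-b": +1, "brand-c": -1}
SIZES = ["XS", "S", "M", "L", "XL"]

def recommend_size(base_size: str, brand: str) -> str:
    idx = SIZES.index(base_size) + BRAND_OFFSETS.get(brand, 0)
    return SIZES[max(0, min(idx, len(SIZES) - 1))]

print(recommend_size("M", "brand-b"))  # brand runs small → recommend "L"
```

A real system would condition on garment cut, fabric stretch, and body measurements, but even a one-dimensional brand offset beats "true to size" taken at face value.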

A wider shift

The center of gravity in AI shopping is moving away from pure chat toward multimodal systems with structured grounding. OpenAI, Google, Amazon, Perplexity, Daydream, and others are all pushing parts of that. Onton's angle is narrower and, frankly, more credible than many generic shopping copilots because it starts with category logic instead of language polish.

That alone won't make it a winner. Building ontologies and rule systems is expensive. Keeping them current is tedious. Constrain ranking too much and discovery gets sterile. Constrain it too little and the system slips back into plausible nonsense.

The core idea is sound. In commerce, users need systems that retrieve the right items, apply the right constraints, and explain the result without bluffing.

Onton's funding round is small by late-2025 AI standards. The product thesis is sharper than a lot of better-funded efforts. If the company can carry its furniture discipline into apparel and electronics without getting buried by taxonomy and latency problems, retailers should pay attention.
