Artificial intelligence · April 29, 2026

Scout AI raised $100M for military autonomy. Its real focus is data

Scout AI raises $100M to train military autonomy in the dirt, not the lab

Scout AI has raised a $100 million Series A to build models for military autonomy, starting with logistics and aiming eventually at weapons systems. The funding is significant. The more revealing detail is how the company is spending its time: on a training range, driving ATVs over rough trails, logging human interventions, and teaching a vision-language-action model to deal with terrain that has no lanes, no reliable maps, and plenty of ugly failure cases.

That points to a broader shift in defense autonomy. The old approach was to bolt perception and planning onto a vehicle. The newer bet is to train a general control model, then layer guardrails and mission software on top. Scout thinks that stack can get into the field faster and adapt better once it does.

Maybe. The caveats are real.

What Scout is building

Scout, founded in 2024 by Colby Adcock and Collin Otis, says it’s building an AI model called Fury to operate and coordinate military assets. Near term, that means logistics: autonomous resupply, convoy support, surveillance, and remote command across mixed fleets of drones and ground vehicles. Longer term, the company is plainly going after autonomous weapons applications too.

That matters. Scout is not building software for military paperwork or warehouse optimization. It wants control over machines in the field, and the stakes rise quickly once “command military assets” stops meaning “move supplies.”

The company also says it has won $11 million in military technology development contracts from DARPA, the Army Applications Laboratory, and other DoD customers. It’s one of 20 autonomy companies in U.S. Army training cycles with the 1st Cavalry Division at Fort Hood, with a shot at deployment with the unit in 2027.

For a two-year-old startup, that’s serious traction. It also says something about Pentagon demand. The military wants autonomy that still works with bad comms, weak visibility, rough terrain, and constant change. Consumer autonomy stacks were not built for that.

Why VLAs sit at the center

Scout is using vision-language-action models, or VLAs. If you’ve followed robotics lately, that tracks. The idea is to start with a broadly pretrained model, then fine-tune it so perception, instruction, and motor control are linked in a more general way than older autonomous vehicle stacks usually allow.

That’s the attraction. A VLA can, in theory, take something like “go to this waypoint and watch for enemy forces” and turn it into a sequence of actions across sensors, controls, and mission context. It can also borrow broad priors from pretraining instead of learning every behavior through brittle task-specific code.

Google DeepMind helped popularize the category in 2023, and robotics startups have been chasing it since. Scout is applying the same line of thinking to military vehicles, where the environment is far less structured than city driving.

That difference matters. Road autonomy has lanes, signs, maps, and at least some predictable behavior from other actors. Off-road military autonomy has loose sand, narrow trails, confusing forks, poor GPS, shifting obstacles, and situations where the right behavior depends on mission context, not traffic law. Hug the edge here. Stay centered there. Slow down when confidence drops. Keep moving under degraded communications. Hand-coded rules get messy fast.

A VLA gives Scout a plausible way to learn those behaviors from demonstrations and corrections. It doesn’t solve the problem. It does make it look less hopeless than trying to code every case by hand.

Why the ATV demo matters

TechCrunch’s visit to Scout’s training site in central California included a useful detail: the company has only been training these ATV driving models for about six weeks, initially on civilian vehicles, and the system can already complete a 6.5 km loop with some recognizably human trail behavior. It keeps to the right on wider trails, stays centered on narrow ones, and slows when uncertain.

That’s impressive for a simple reason: off-road driving is messy, and learned policies tend to show their value first in messy environments.

It’s also easy to overread a demo like this.

Scout’s own testing suggests the system is comfortable on trails but not ready for full off-road operation. That limit matters. Trail following is already hard, but the trail itself is doing real work as a prior: it constrains where the vehicle can go. True off-road autonomy means handling terrain with no path at all, making route choices from partial perception, and doing it reliably enough that a mistake doesn’t strand a vehicle or expose a unit.

That gap helps explain why defense autonomy companies keep ending up with hybrid systems. Scout does too. Nobody serious is handing a stochastic policy full control of a military vehicle with no safeguards. The practical stack mixes learned control where generalization helps with deterministic safety logic where failure is expensive and obvious.
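That hybrid pattern can be pictured as a thin deterministic wrapper around a learned policy. The sketch below is illustrative, not Scout's architecture; every name and threshold in it is a hypothetical stand-in. The learned model proposes an action, and hard-coded safety logic clamps or vetoes it before anything reaches the vehicle.

```python
from dataclasses import dataclass

@dataclass
class Action:
    steering: float   # radians, positive = left
    throttle: float   # 0.0 to 1.0

# Hypothetical limits a deterministic safety layer might enforce.
MAX_STEERING = 0.5
LOW_CONFIDENCE_THROTTLE = 0.2
CONFIDENCE_FLOOR = 0.3

def safe_action(proposed: Action, confidence: float, obstacle_close: bool) -> Action:
    """Deterministic guardrail over a learned policy's proposal."""
    if obstacle_close or confidence < CONFIDENCE_FLOOR:
        # Hard stop: the learned model never gets to override this branch.
        return Action(steering=0.0, throttle=0.0)
    throttle = proposed.throttle
    if confidence < 0.6:
        # Degraded confidence: keep moving, but slowly.
        throttle = min(throttle, LOW_CONFIDENCE_THROTTLE)
    # Clamp steering regardless of what the policy asked for.
    steering = max(-MAX_STEERING, min(MAX_STEERING, proposed.steering))
    return Action(steering=steering, throttle=throttle)
```

The point of the structure is that the stochastic part only ever proposes; the deterministic part decides what is allowed to execute.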

Less elegant than the end-to-end pitch. More likely to ship.

The product that likely ships first

Scout says its first widely adopted product will probably be Ox, a command-and-control layer bundled with hardened compute, communications, cameras, and GPUs. The pitch is simple enough: give soldiers a way to orchestrate multiple drones and autonomous ground vehicles through high-level prompts.

That’s a sensible product sequence.

For military AI companies, vehicle control is only part of the problem. They also have to fit into existing units, hardware, workflows, and procurement cycles. A command layer has a cleaner path because it can sit above the hardware mess and deliver value before anyone fully trusts autonomy at the edge.

It also gives Scout a wedge. If it owns the operator interface and local compute package, it gets the data, the usage patterns, and a path to push Fury deeper into the stack over time.

For technical teams, this part looks familiar. Start with orchestration. Ship the control plane. Become the software layer across a fragmented fleet.

The trade-off is obvious. Command-and-control software built around natural-language prompts sounds flexible, but military environments punish ambiguity. “Watch for enemy forces” is a bad API if the system can’t clearly expose confidence, escalation rules, and handoff conditions. Promptability is useful. It’s also a liability if the operator thinks they issued a precise instruction and the model interpreted it probabilistically.
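One way to make a language-issued instruction legible is to parse it into a structured command with explicit thresholds and handoff rules, so the operator can see exactly what the system committed to. This is a sketch under assumed semantics; the field names are invented, not Scout's schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Command:
    """Structured form of a prompt like
    'go to this waypoint and watch for enemy forces'."""
    waypoint: Tuple[float, float]   # (lat, lon), hypothetical
    watch_for: List[str]
    min_confidence_to_act: float    # below this, never act autonomously
    escalate_above: float           # detections above this alert the operator
    handoff_on: List[str]           # conditions that force human control

cmd = Command(
    waypoint=(35.21, -120.43),
    watch_for=["vehicle", "personnel"],
    min_confidence_to_act=0.8,
    escalate_above=0.6,
    handoff_on=["gps_degraded", "comms_lost", "low_confidence"],
)

def should_escalate(detection_confidence: float, cmd: Command) -> bool:
    # Ambiguity is surfaced to the operator, not silently
    # resolved inside the model.
    return detection_confidence >= cmd.escalate_above
```

The design choice is the interesting part: the prompt stays flexible, but what the system actually agreed to do is inspectable and auditable rather than buried in a model's interpretation.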

That puts a lot of weight on the UI and policy layer. The hard part isn’t only getting a robot to respond to language. It’s keeping intent, action, override, and accountability legible when people are tired and under pressure.

Why the Army cares

The first use case is easy to grasp: resupply.

Moving water, ammunition, batteries, and equipment is dangerous, repetitive work. It takes time and people. A convoy with one crewed vehicle and several autonomous followers is plausible. So is sending uncrewed vehicles to remote observation posts instead of asking soldiers to make every trip by hand, especially at night or in bad weather.

That’s where defense autonomy starts to feel practical. The return is not abstract. It’s less manual labor, less exposure, less fatigue, and better throughput.

It also helps explain why startups like Scout, Field AI, and Overland AI have traction. DARPA’s RACER program pushed companies toward high-speed off-road autonomy and seeded this part of the market. The pattern is familiar: a government program proves demand, sets the performance bar, and a small group of startups tries to commercialize the resulting ideas faster than the primes.

Scout has credible people for that race. Otis came from autonomous trucking company Kodiak, and Adcock has ties to Figure through his brother Brett Adcock and his own board role there. The Figure link matters less as status than as evidence that model-centric robotics assumptions are moving from humanoids and warehouse systems into defense.

Whether that transfer holds up at scale is still open.

What technical teams should watch

Data collection is becoming the moat

Scout’s Foundry training range exists because simulation is not enough. Drivers work eight-hour shifts, take over when the system fails, and feed those interventions back into reinforcement learning and other training loops. That’s expensive and operationally annoying. It’s also the work that matters.

Plenty of companies say they’re building autonomy models. Far fewer can build a repeatable pipeline for collecting edge-case data in the environments they actually care about. In off-road defense robotics, the dirt is part of the dataset.

Hybrid stacks are winning for now

The industry has spent years arguing over end-to-end learning versus modular robotics software. Fielded systems are landing in the middle. Learned models handle perception and action. Deterministic systems handle safety, constraints, and mission-critical controls.

That’s not a philosophical compromise. It’s what happens when failure has consequences.

Defense AI can deploy sooner than consumer robotics

Military buyers will tolerate systems that are imperfect if they’re useful, bounded, and supervised. Consumer products usually need cleaner UX and tighter safety margins before anyone accepts them. A muddy autonomy stack that still needs frequent intervention can be valuable to a military unit if it removes enough dangerous manual work.

That makes defense a particularly good market for this generation of robotics AI. It doesn’t mean the tech is mature. It means the bar for adoption is different.

Where skepticism is warranted

Scout’s thesis depends on relatively limited real-world data, plus simulation, being enough to produce a capable driving agent through VLA-based training. That may be enough for constrained trail tasks. It’s a much harder claim once the environment gets less structured, adversaries start interfering, or operators expect consistent behavior across unfamiliar terrain and weather.

Military deployment also adds security problems that polished autonomy demos tend to skate past. Sensor spoofing, jammed comms, degraded GPS, adversarial obstacles, and capture risk all change the design problem. If your orchestration layer depends on local GPUs, cameras, and radios in the field, those components are attack surfaces and logistics burdens too.

And then there’s the obvious issue. Once a company says weapons are on the roadmap, every discussion about reliability, override, and human control gets sharper. “Good enough to test with soldiers in the field” is one threshold. “Good enough for lethal decisions around ambiguous targets” is another.

Scout has raised enough money to chase that future. Whether the models deserve that level of trust is a separate question, and ATV laps on a California training ground won’t answer it.

Still, those laps matter. They show where military AI is headed: toward real-world data collection, model tuning, and software that tries to turn a fleet of machines into something one soldier can direct. That shift is worth paying attention to.
