SoftBank’s reported Lordstown factory buy shows where the AI bottleneck has moved
SoftBank has reportedly bought Foxconn’s factory in Lordstown, Ohio, through a shell entity called Crescent Dune LLC, and plans to turn it into an AI server manufacturing hub for the Stargate project with OpenAI and Oracle.
If Bloomberg’s report is right, the move says something pretty clear about the next phase of AI infrastructure. Getting GPUs is still hard. Turning chips, networking, cooling, firmware, and power into working clusters on schedule is becoming just as important.
Lordstown is a failed EV plant being redirected toward a market that still swallows huge amounts of capital. EV contract manufacturing in Ohio went nowhere. AI server integration probably has a better shot.
Why Lordstown matters
Foxconn’s Lordstown site has had a rough few years. It was supposed to help anchor an EV manufacturing comeback after General Motors pulled out. Instead, Lordstown Motors collapsed, Fisker collapsed, IndiEV went nowhere, and Foxconn never found a durable contract manufacturing business there. Bloomberg now says SoftBank is the buyer.
The reported plan is server manufacturing for Stargate, the large AI data center effort SoftBank is backing with OpenAI and Oracle. The group has already broken ground in Texas and has been explicit about building more capacity.
The timing tracks. Demand for AI infrastructure still exceeds supply, and U.S. trade policy now complicates basic planning. If tariffs and import friction make every rack slower or more expensive to land, moving final assembly closer to deployment starts to look sensible.
That doesn’t fix the upstream choke points. HBM is still tight. Top-end GPUs are still rationed. Optics and liquid-cooling parts still come with long lead times. But domestic integration gives you tighter control over one part of the process that keeps slipping.
That matters.
AI server manufacturing is a systems job
Too much AI hardware coverage still treats the accelerator as the whole story. It isn’t. A modern training cluster is a systems integration job, and a messy one.
A factory like Lordstown would likely handle work such as:
- assembling GPU-heavy nodes around parts like NVIDIA B200 or GB200-class systems
- integrating Arm-based Grace or x86 CPUs, high-bandwidth memory, NICs, and storage
- building liquid-cooled racks with cold plates, manifolds, and rack-level coolant distribution
- burning in systems under load with telemetry tools such as DCGM
- provisioning firmware, TPM, measured boot, BMC settings, and secure boot policies
- pre-wiring 400G to 800G optical links for InfiniBand or RoCEv2 Ethernet fabrics
None of that is glamorous. It’s where a lot of expensive failures start.
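The burn-in step is the most automatable of the bunch. The tool named above is DCGM; as a rough illustration of the same idea, here is a minimal sketch that polls GPU temperature and power through NVML via the pynvml bindings while a load generator runs elsewhere. The thresholds are illustrative assumptions, not vendor specs.

```python
# Minimal burn-in telemetry sketch using NVML via the pynvml bindings.
# Polls temperature and power draw on every visible GPU while a separate
# load generator stresses the node; thresholds are illustrative only.
import time
import pynvml

TEMP_LIMIT_C = 85       # illustrative alarm threshold
POWER_LIMIT_W = 1000    # illustrative per-GPU ceiling

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(count)]
    for _ in range(60):  # one sample per second for a minute
        for i, h in enumerate(handles):
            temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
            power = pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0  # mW -> W
            if temp > TEMP_LIMIT_C or power > POWER_LIMIT_W:
                print(f"gpu{i}: FLAG temp={temp}C power={power:.0f}W")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```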
At 80 to 150 kW per rack, small assembly mistakes become reliability problems fast. Uneven cold plate pressure. Sloppy thermal interface application. A bad manifold seal. Marginal optics. Weak BMC hardening. Firmware drift across nodes. Bad cable handling. Any of those can turn a very expensive cluster into an unstable one.
You want to catch that before the rack lands on a data center floor in Texas.
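Firmware drift is one of the few items on that list you can catch with plain software. A minimal sketch, assuming a per-node inventory has already been collected; the inventory dict here is hypothetical, and in practice it would come from BMC or Redfish queries:

```python
# Sketch of a firmware-drift check across a rack: flag any component whose
# version disagrees with the fleet majority. Inventory data is invented.
from collections import Counter

# Hypothetical inventory: node -> {component: firmware version}
inventory = {
    "node01": {"bios": "2.4.1", "bmc": "1.12", "nic": "28.39.1002"},
    "node02": {"bios": "2.4.1", "bmc": "1.12", "nic": "28.39.1002"},
    "node03": {"bios": "2.3.9", "bmc": "1.12", "nic": "28.39.1002"},  # drifted BIOS
}

def find_drift(inv):
    drift = []
    components = {c for versions in inv.values() for c in versions}
    for comp in sorted(components):
        versions = Counter(inv[n].get(comp) for n in inv)
        expected, _ = versions.most_common(1)[0]   # fleet majority version
        for node, fw in inv.items():
            if fw.get(comp) != expected:
                drift.append((node, comp, fw.get(comp), expected))
    return drift

for node, comp, got, want in find_drift(inventory):
    print(f"{node}: {comp} is {got}, fleet majority is {want}")
```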
Why an EV plant is a plausible fit
The EV-to-AI angle sounds odd in headline form. On the factory floor, it makes more sense.
Server integration at this scale needs the kind of manufacturing discipline automotive plants already know well: statistical process control, repeatable assembly, end-of-line QA, supplier coordination, traceability, throughput planning. Lordstown already has the basics for heavy industrial work, including loading, floor space, power infrastructure, and tooling. Not all of that carries over cleanly, but enough does to make conversion faster than building a new facility from scratch.
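To make the statistical process control point concrete, here is a minimal sketch of an X-bar style control check on one invented assembly parameter, cold-plate mounting torque. The readings and limits are illustrative, not real process data.

```python
# X-bar control check: flag samples outside mean +/- 3 sigma.
# The parameter (cold-plate torque, Nm) and readings are invented.
import statistics

readings_nm = [8.1, 8.0, 8.2, 7.9, 8.1, 8.3, 8.0, 8.2, 7.8, 8.1]

mean = statistics.mean(readings_nm)
sigma = statistics.stdev(readings_nm)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

print(f"center {mean:.2f} Nm, limits [{lcl:.2f}, {ucl:.2f}]")
for i, r in enumerate(readings_nm):
    if not lcl <= r <= ucl:
        print(f"sample {i}: {r} Nm out of control")
```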
The overlap also exists at the systems level. EV production already deals with thermal, electrical, and software validation under tight constraints. AI hardware assembly now has similar pressure, except the cost of mistakes is worse. A bad rack can sideline tens of millions of dollars in compute or drag down utilization across an entire fabric.
That’s why this reported move is worth more attention than the usual “AI factory” story. Stargate appears to want tighter control over the last mile of cluster construction, where deployment schedules usually slip.
The pain is in networking and cooling
For developers, factory news can sound remote. It isn’t, especially if you run training jobs or build platform tooling for inference fleets.
Hardware topology shapes software behavior. It affects scheduling, checkpointing, placement, failure handling, and the economics of scaling a model.
If Stargate is assembling dense GPU pods itself, the likely target is pre-integrated liquid-cooled racks with tightly controlled network layouts. Think NVLink and NVSwitch inside the node or chassis, then InfiniBand or RoCEv2 Ethernet between racks using 800G optics and very fast spine-leaf fabrics.
The InfiniBand versus Ethernet decision still matters. InfiniBand remains strong for large all-reduce workloads and low-latency consistency. RoCEv2 has improved a lot and fits cloud operations better, assuming the operator can tune congestion control, ECN, and queue behavior without wrecking performance under load. Hyperscalers like Ethernet because it lines up with the rest of the network. AI teams prefer whatever keeps distributed training moving.
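A back-of-envelope model shows why those fabric details dominate training throughput. In the standard ring all-reduce, each worker moves roughly 2(N-1)/N of the payload and pays 2(N-1) latency hops. The numbers below are illustrative assumptions, not benchmarks of any real fabric.

```python
# Back-of-envelope ring all-reduce cost model: transfer time plus latency hops.
# Illustrates why link speed and latency shape distributed training economics.

def allreduce_seconds(payload_bytes, n_workers, link_gbps, latency_us):
    bw = link_gbps * 1e9 / 8  # bytes per second
    transfer = 2 * (n_workers - 1) / n_workers * payload_bytes / bw
    latency = 2 * (n_workers - 1) * latency_us * 1e-6
    return transfer + latency

# Syncing 10 GB of gradients across 512 workers under assumed link specs:
for gbps, lat in [(400, 2.0), (800, 2.0), (800, 10.0)]:
    t = allreduce_seconds(10e9, 512, gbps, lat)
    print(f"{gbps}G links, {lat}us latency: {t:.3f}s per all-reduce")
```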
Cooling is the other hard limit. Air cooling is running out of headroom at the top end. Liquid cooling has shifted from premium option to basic requirement for many training-class systems. A factory that installs cold plates, validates flow rates, checks leak integrity, and ships racks in a known-good thermal state cuts deployment time and reduces ugly field work.
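The flow-rate validation is first-order physics: heat removed equals mass flow times specific heat times temperature rise, so Q = ṁ·c_p·ΔT fixes how much coolant a rack needs. A quick sketch with assumed pure-water properties; real loops use treated water or glycol mixes:

```python
# Required coolant flow for a given rack power and temperature rise:
# m_dot = Q / (c_p * dT). Water properties assumed for illustration.

C_P_WATER = 4186.0  # J/(kg*K)
DENSITY = 1.0       # kg/L, approximate for water

def flow_lpm(rack_kw, delta_t_k):
    kg_per_s = rack_kw * 1000.0 / (C_P_WATER * delta_t_k)
    return kg_per_s / DENSITY * 60.0  # liters per minute

for kw in (80, 120, 150):
    print(f"{kw} kW rack at 10 K rise: {flow_lpm(kw, 10):.0f} L/min")
```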
It also creates lock-in. Once a data center and its service model are built around a specific liquid-cooling design, switching vendors gets harder.
Why SoftBank would want this onshore
There’s also a straightforward policy case for doing this in Ohio. SoftBank has reportedly run into funding and planning friction around Stargate amid tariffs and trade tension under the current U.S. administration. Final assembly in the U.S. gives it a hedge.
That hedge covers a few things:
- lower exposure to tariff swings on finished systems
- better provenance and auditability for enterprise and government customers
- easier supply-chain security controls
- tighter scheduling between manufacturing and data center build-outs
The security angle deserves more attention than it usually gets. In AI infrastructure, signed firmware, SBOMs, measured boot, and hardware root of trust are now standard requirements for serious buyers. Domestic assembly makes chain-of-custody controls easier to bake into the process instead of bolting them on at the end.
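As a minimal sketch of what one chain-of-custody control could look like at end of line: hash every firmware image shipping on a node and compare against a signed manifest. The manifest format and paths here are hypothetical, and a real pipeline would also verify the manifest's own signature against a hardware-backed key.

```python
# Sketch: verify firmware image digests against a manifest at final assembly.
# Manifest layout is hypothetical: {"images": [{"path": ..., "sha256": ...}]}
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest_path: Path) -> bool:
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for entry in manifest["images"]:
        digest = sha256_file(Path(entry["path"]))
        if digest != entry["sha256"]:
            print(f"MISMATCH {entry['path']}: {digest}")
            ok = False
    return ok
```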
That could matter a lot if Stargate goes after regulated workloads. In this case, onshore manufacturing has actual technical value, not just political messaging.
There’s an Arm angle here too
SoftBank owns Arm, and that’s relevant whether the company says it out loud or not.
The main GPU story still runs through NVIDIA, and Grace already gives Arm a place in high-end AI systems. But if SoftBank gets influence over the server integration pipeline from the factory floor upward, it also gets a chance to shape which CPUs show up in orchestration nodes, storage servers, control planes, and some inference fleets.
That won’t change the market overnight. x86 still has deep validation, huge inertia, and plenty of operator trust. Still, power efficiency keeps mattering more, and Arm is getting harder to dismiss in parts of the AI stack that sit adjacent to the accelerator.
It’s worth watching whether a Lordstown-built stack gradually expands Neoverse-class deployments around large GPU clusters.
What it means for OEMs, cloud buyers, and engineering teams
Traditional OEMs and ODMs aren’t going away because SoftBank takes over one plant. But moves like this do put pressure on their place in the stack.
If the biggest AI builders start pulling rack integration in-house, outside vendors get pushed toward narrower roles: chassis, power shelves, cooling modules, optics, managed lifecycle services, or white-box reference designs. The best margins stay with whoever controls the full cluster package and can ship it on time.
For cloud and enterprise buyers, the takeaway is practical. Deployment risk now sits in the physical layer every bit as much as in software. Teams planning large GPU estates should be asking basic questions early:
- Can the facility support liquid cooling at training-rack densities?
- Are floor loading, power delivery, and redundancy designed for racks over 100 kW?
- Is the network team actually ready for 800G optics and congestion tuning at scale?
- Can the ops stack enforce firmware consistency and attestation across thousands of nodes?
Those aren’t procurement details. They determine how much useful compute you get after spending a fortune.
Developers are stuck with the consequences too. Model performance is increasingly constrained by infrastructure choices made far below the application layer. Sequence length, batch sizing, checkpoint cadence, parallelism strategy, even which jobs can co-reside cleanly all depend on fabric behavior, memory layout, and thermal stability underneath.
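Checkpoint cadence is a good example of that bleed-through. The classic Young/Daly approximation puts the optimal interval between checkpoints at roughly the square root of twice the checkpoint cost times the mean time between failures, and fleet MTBF shrinks as node count grows. A sketch with illustrative numbers, not figures from any real deployment:

```python
# Young/Daly approximation: optimal checkpoint interval is about
# sqrt(2 * checkpoint_cost * MTBF). Inputs below are illustrative.
import math

def optimal_checkpoint_interval_s(checkpoint_cost_s, node_mtbf_h, n_nodes):
    # Fleet MTBF shrinks with node count: any single failure stops the job.
    fleet_mtbf_s = node_mtbf_h * 3600.0 / n_nodes
    return math.sqrt(2.0 * checkpoint_cost_s * fleet_mtbf_s)

# 4,096 nodes, 50,000-hour per-node MTBF, 120 s to write a checkpoint:
tau = optimal_checkpoint_interval_s(120.0, 50_000, 4096)
print(f"checkpoint roughly every {tau / 60:.0f} minutes")
```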
That used to sit neatly with hardware and ops teams. At this scale, it bleeds upward into software.
Lordstown may end up as a fairly accurate symbol of the AI buildout in 2026. A factory built for one industrial push, stranded by another, now being reused for the market still willing to spend aggressively on compute. If SoftBank can get that plant producing fast, the message is simple enough: the next AI race also depends on who can turn chips into working clusters first.