Technology August 21, 2025

Catalyst Fund’s engineering thesis for African climate tech: solar cold chains and carbon MRV

Catalyst Fund is backing African climate startups with an engineering playbook that deserves attention

Catalyst Fund’s latest push into African climate tech stands out because it puts engineering discipline at the center.

The Earthshot Prize-linked firm is backing startups such as Keep IT Cool, which builds solar-powered cold-chain infrastructure, alongside newer carbon-removal and climate-adaptation companies. The shared constraint is obvious once you look at the systems: they have to keep working through bad connectivity, intermittent power, tight margins, and constant pressure to prove impact.

Africa is a serious proving ground for systems that many teams elsewhere still treat as edge cases.

Why this stands out

A lot of climate-tech coverage still fixates on funding rounds and market-size slides. Catalyst Fund’s model is more grounded. Early-stage support here looks like platform architecture, field telemetry, payment rails, and MRV pipelines that can survive scrutiny.

Developers will recognize the shape of the work. The failure modes are just harsher.

If you’re building solar cold storage in a city with unreliable grid power, or across rural routes with patchy GSM coverage, "cloud-native" doesn’t get you very far. A cold room that loses state when the uplink drops is a bad product. A PAYG entitlement system that can’t reconcile mobile money delays will create churn. A carbon-removal startup that can’t show auditable uncertainty bounds is going to struggle with buyers and standards bodies.

The technical bar is higher than the pitch decks make it look.

Cold-chain startups are distributed systems companies

Keep IT Cool is a useful example because the problem is concrete. The company works on cold storage and logistics for food supply chains using solar-powered infrastructure to cut spoilage. In practice, that means running an embedded stack, an energy management stack, and fleet software at the same time.

A real deployment usually includes:

  • sensors for temperature, humidity, door state, battery health, and compressor current
  • a microcontroller handling local control loops and fail-safe logic
  • a gateway or modem using GSM, 3G, NB-IoT, or sometimes LoRa backhaul
  • cloud ingestion for telemetry, alerts, billing, and diagnostics
  • OTA update tooling, because field visits are slow and expensive

The architecture has to be offline-first because the device still has to do its job when the network disappears for hours.

That pushes teams toward durable patterns. MQTT over TLS with compact payloads. Store-and-forward buffers on the edge. Deduplication on the broker side. Signed firmware updates. Feature flags that let you change behavior without sending someone into the field. If the hardware sits in the ESP32 or STM32 class, every byte and every wake cycle counts.
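The store-and-forward pattern above can be sketched concretely. This is a minimal, illustrative edge buffer (not any specific vendor's implementation): readings are persisted locally first, then drained to the uplink when connectivity returns, and each message carries a stable ID so the broker side can deduplicate retries.

```python
import json
import sqlite3
import time
import uuid

class StoreAndForwardBuffer:
    """Durable edge buffer: telemetry is written locally first, then
    drained to the uplink when connectivity returns. Each message gets
    a stable ID so the backend can deduplicate retried deliveries."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox ("
            "id TEXT PRIMARY KEY, ts REAL, payload TEXT, sent INTEGER DEFAULT 0)"
        )

    def record(self, reading: dict) -> str:
        """Persist a reading locally; safe even with no network at all."""
        msg_id = str(uuid.uuid4())
        self.db.execute(
            "INSERT INTO outbox (id, ts, payload) VALUES (?, ?, ?)",
            (msg_id, time.time(), json.dumps(reading)),
        )
        self.db.commit()
        return msg_id

    def drain(self, publish) -> int:
        """Try to publish pending messages in order; mark sent only on
        a confirmed publish (e.g. an acked MQTT QoS 1 delivery)."""
        sent = 0
        rows = self.db.execute(
            "SELECT id, payload FROM outbox WHERE sent = 0 ORDER BY ts"
        ).fetchall()
        for msg_id, payload in rows:
            if publish(msg_id, payload):
                self.db.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (msg_id,))
                sent += 1
            else:
                break  # uplink dropped again; keep the rest for later
        self.db.commit()
        return sent
```

SQLite stands in for whatever durable store the device class supports; the important property is that a power cycle or hours-long outage loses nothing.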

This is where climate-tech software gets separated from slideware. Stable infrastructure can’t be assumed. The system has to be built around that fact.

Cheap data transport is part of the product

African climate startups are often forced into design choices that enterprise teams elsewhere should probably copy.

Verbose JSON over flaky cellular links is expensive and sloppy. Compact encodings such as CBOR or Protocol Buffers make sense when airtime costs money and the link drops out. QoS 1 MQTT with retry logic also makes sense, provided the backend is replay-safe and idempotent.
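The savings are easy to see with a toy payload. This sketch uses stdlib `struct` packing to stand in for a fixed-schema binary encoding; CBOR or Protocol Buffers give similar savings while also handling schema evolution. The field names and values are illustrative.

```python
import json
import struct

# A typical cold-room telemetry reading (illustrative fields).
reading = {"temp_c": 4.2, "humidity": 78.5, "door_open": 0, "battery_pct": 91}

# Verbose JSON: human-readable, but every key name rides the cellular link.
json_bytes = json.dumps(reading).encode("utf-8")

# Compact fixed-schema encoding: both ends agree on field order and types.
# Two 4-byte floats plus two single bytes = 10 bytes on the wire.
packed = struct.pack(
    "<ffBB",
    reading["temp_c"], reading["humidity"],
    reading["door_open"], reading["battery_pct"],
)

print(len(json_bytes), len(packed))  # roughly 68 bytes vs 10 bytes
```

At thousands of readings per device per day, a 6x payload reduction translates directly into airtime cost and battery budget.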

Local control matters for the same reason. If a cold room’s compressor logic depends on cloud round trips, the system is poorly designed. PID loops belong on-device. So do basic anomaly checks that catch obvious trouble, like abnormal current draw or temperature drift after a door event.
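An on-device control loop of the kind described is small. This is a minimal discrete PID sketch with illustrative, untuned gains, not a production controller; the point is that nothing in the control path touches the network.

```python
class CompressorPID:
    """Minimal discrete PID loop for holding a cold-room setpoint.
    Runs entirely on-device; no cloud round trip in the control path.
    Gains and setpoint are illustrative, not tuned for real hardware."""

    def __init__(self, kp=2.0, ki=0.1, kd=0.5, setpoint_c=4.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint_c
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_c: float, dt: float) -> float:
        """Return a duty-cycle command in [0, 1] for the compressor."""
        error = measured_c - self.setpoint  # positive = too warm
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(1.0, output))  # clamp to a valid duty cycle
```

The same structure ports directly to C on an ESP32- or STM32-class MCU; a real deployment would add integral windup protection and sensor-fault handling.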

TinyML can help, but it’s easy to overdo it. A lot of field maintenance problems don’t need a clever model. A threshold-based fallback on the MCU is often worth more than a fancy classifier sitting in the cloud. When you’re trying to protect food inventory in a remote deployment, robustness wins.
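A threshold-based fallback of that kind might look like the following. All limits and fault codes here are illustrative assumptions, but the shape is the point: a few lines a field technician can read, reason about, and trust.

```python
def check_cold_room(temp_c, door_open, compressor_amps,
                    max_temp_c=8.0, max_amps=12.0, door_grace_temp_c=12.0):
    """Threshold-based fault check meant to run on the MCU alongside
    (or instead of) any learned model. All limits are illustrative.
    Returns human-readable fault codes a technician can act on."""
    faults = []
    # A door event legitimately raises temperature, so allow more headroom.
    limit = door_grace_temp_c if door_open else max_temp_c
    if temp_c > limit:
        faults.append("TEMP_HIGH")
    if compressor_amps > max_amps:
        faults.append("CURRENT_HIGH")  # possible failing compressor
    if compressor_amps > 0.5 and temp_c > limit:
        faults.append("COOLING_INEFFECTIVE")  # running but not cooling
    return faults
```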

That trade-off comes up constantly in climate tech. The better answer is usually the one a field technician can understand and debug.

MRV is becoming a data engineering problem

Catalyst Fund’s work with carbon-removal and adaptation startups points to another shift. Measurement, reporting, and verification is moving into core product infrastructure.

That changes the stack.

A startup tracking biomass gains, water savings, or reduced spoilage needs more than dashboards. It needs reproducible pipelines, data lineage, and uncertainty estimates that hold up under audit. The source material mentions STAC catalogs, xarray, Dask, and Zarr, which tracks with where serious geospatial work has gone. Those tools matter because they make large Earth observation datasets queryable, chunked, and rerunnable without turning every analysis into notebook sprawl.

The NDVI example in the reference material is deliberately basic. Pull Sentinel-2 imagery from a STAC API, compute NDVI, store it as Zarr, and you have a cheap baseline for vegetation monitoring. Add ground-truth calibration and tighter QA around clouds, temporal compositing, and uncertainty, and you have the start of a commercial MRV pipeline.
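The index computation itself fits in a few lines. This sketch shows the NDVI math on plain NumPy arrays; in a real pipeline the red (B04) and near-infrared (B08) bands would come from a STAC query (e.g. via pystac-client) and the result would be written out as chunked Zarr.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from Sentinel-2-style
    red (B04) and near-infrared (B08) reflectance arrays."""
    red = red.astype("float64")
    nir = nir.astype("float64")
    denom = nir + red
    # Guard against divide-by-zero over water / no-data pixels.
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Toy 2x2 scene: healthy vegetation reflects strongly in NIR.
red = np.array([[0.05, 0.20], [0.10, 0.0]])
nir = np.array([[0.45, 0.25], [0.30, 0.0]])
print(ndvi(red, nir))  # values in [-1, 1]; ~0.8 for the vegetated pixel
```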

The vegetation index itself is not the hard part. Reproducibility is. Can you rerun the same job six months later? Can you show which imagery was used, how it was transformed, and where the confidence interval came from? If not, you have analytics, not verification.
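One cheap way to make that rerun answerable is to hash the exact inputs and parameters of every pipeline run. This is a minimal provenance-record sketch with illustrative field names, not a full lineage system, but it gives each output a stable fingerprint an auditor can check against a rerun.

```python
import hashlib
import json

def lineage_record(input_uris, params, output_stats):
    """Minimal provenance record for an MRV pipeline run: hash the
    exact inputs and parameters so a rerun months later can be
    verified against the original. Field names are illustrative."""
    manifest = {
        "inputs": sorted(input_uris),  # e.g. Sentinel-2 scene URIs
        "params": params,              # cloud threshold, compositing window...
        "stats": output_stats,         # means, confidence intervals...
    }
    blob = json.dumps(manifest, sort_keys=True).encode("utf-8")
    manifest["run_hash"] = hashlib.sha256(blob).hexdigest()
    return manifest
```

Identical inputs and parameters produce an identical `run_hash`; any silent change to imagery or configuration changes the fingerprint.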

That gap will matter even more as carbon markets keep fighting over quality.

Payments are infrastructure

One of the sharper parts of Catalyst Fund’s approach is the overlap between climate tech and fintech.

A lot of these businesses depend on pay-as-you-go or lease-to-own pricing. So integration with mobile money systems such as M-Pesa, MTN MoMo, and Airtel Money is core product behavior.

Engineers who haven’t worked on these systems tend to underestimate the mess. Webhooks arrive late or twice. Customer balances drift. Devices go offline in the middle of a payment state change. Field agents need manual overrides. Finance wants reconciliation that matches actual cash movement, not app-level assumptions.

The technical requirements are plain and unforgiving:

  • idempotent payment event handling
  • signed commands for device entitlement changes
  • local grace periods when payment confirmation is delayed
  • audit logs that survive retries and operator mistakes
  • reconciliation jobs that can unwind mismatches across telecom and internal ledgers
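The first requirement on that list, idempotent event handling, reduces to deduplicating on the provider's transaction ID. This in-memory sketch (production would use a durable store, and the field names are illustrative) shows the invariant: a webhook that arrives twice credits the customer exactly once.

```python
class PaymentEventHandler:
    """Idempotent handler for mobile-money webhooks: the same event,
    identified by the provider's transaction ID, may arrive late or
    twice, and must credit the customer exactly once. In-memory here;
    production would back this with a durable store."""

    def __init__(self):
        self.processed = {}  # txn_id -> credited amount (the dedup set)
        self.balances = {}   # account -> balance

    def handle(self, event: dict) -> bool:
        """Return True if the event was applied, False if duplicate."""
        txn_id = event["txn_id"]
        if txn_id in self.processed:
            return False  # duplicate delivery: ack it, change nothing
        self.balances[event["account"]] = (
            self.balances.get(event["account"], 0) + event["amount"]
        )
        self.processed[txn_id] = event["amount"]
        return True
```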

That plumbing decides whether the unit economics work outside a spreadsheet.

Hardware still sets the ceiling

Software teams like to treat hardware as an implementation detail. In climate infrastructure, hardware often determines the margin.

Battery chemistry matters. LFP is emerging as the practical default because it’s safer and lasts longer, though sodium-ion is worth watching for stationary use if costs keep moving. Sensor quality matters too. Cheap sensors are attractive until calibration drift starts poisoning your model inputs and your operating decisions.

Ruggedization matters for the same reason. Dust, heat, vibration, and rough handling are normal conditions. A polished cloud stack attached to fragile field hardware is still a bad system.

That’s also why predictive maintenance is interesting only up to a point. Compressor and inverter failures can often be spotted with simple trend analysis on current draw, duty cycles, or battery state-of-charge patterns. You don’t need a huge model ingesting every signal to get useful results. Start with the small set of signals that correlate with failure and can actually be collected reliably.
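The trend analysis described above can be as simple as a rolling baseline. This sketch flags a drift in compressor current draw without any learned model; the window size and drift threshold are illustrative and would be tuned per equipment class.

```python
from collections import deque

class CurrentTrendMonitor:
    """Flags an upward drift in compressor current draw, a common
    early sign of mechanical wear, using a rolling baseline rather
    than a learned model. Window and threshold are illustrative."""

    def __init__(self, window=50, drift_ratio=1.25):
        self.readings = deque(maxlen=window)
        self.drift_ratio = drift_ratio

    def observe(self, amps: float) -> bool:
        """Return True when the latest reading drifts above baseline."""
        self.readings.append(amps)
        if len(self.readings) < self.readings.maxlen:
            return False  # still building a baseline
        baseline = sum(self.readings) / len(self.readings)
        return amps > baseline * self.drift_ratio
```

The same logic works on battery state-of-charge curves or duty cycles, and it degrades gracefully: if telemetry gaps appear, the baseline simply rebuilds.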

What technical teams should take from this

There’s a clear lesson in Catalyst Fund’s portfolio logic. The engineering patterns travel well.

Embedded teams should care about low-power design, secure OTA updates, and calibration under ugly real-world conditions. Platform engineers should care about durable ingestion, event replay, fleet observability, and intermittent connectivity as a first-class requirement. Data teams should get serious about geospatial tooling, lineage, and uncertainty instead of shipping glossy impact charts. Full-stack teams should stop treating offline UX and payments as edge concerns.

A few practical bets look especially good right now:

Build for sync later

PWAs, local caches, queued actions, and explicit conflict handling still beat "please reconnect and try again" in field workflows.
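One way to make "queued actions with explicit conflict handling" concrete is optimistic versioning: each queued action records the record version the user saw offline, and stale writes are surfaced instead of silently overwriting. This sketch is an assumption about structure, not any specific product's sync protocol.

```python
import time

class OfflineActionQueue:
    """Queue field-agent actions locally and replay them on reconnect.
    Conflicts are explicit: each action carries the record version it
    was based on, and stale writes are surfaced rather than silently
    clobbered. All names are illustrative."""

    def __init__(self):
        self.pending = []

    def enqueue(self, record_id, base_version, change):
        self.pending.append({
            "record_id": record_id,
            "base_version": base_version,  # version the user saw offline
            "change": change,
            "queued_at": time.time(),
        })

    def sync(self, server_versions):
        """Replay the queue; return (applied, conflicts) so the UI can
        show conflicts to the user instead of guessing."""
        applied, conflicts = [], []
        for action in self.pending:
            current = server_versions.get(action["record_id"], 0)
            if current == action["base_version"]:
                applied.append(action)  # safe: nothing changed upstream
                server_versions[action["record_id"]] = current + 1
            else:
                conflicts.append(action)  # surface it, don't overwrite
        self.pending = []
        return applied, conflicts
```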

Keep edge models humble

TinyML is useful when it cuts truck rolls or catches equipment faults early. It stops being useful when it burns power, adds brittleness, or turns into something nobody can debug in the field.

Treat MRV like regulated data infrastructure

Version datasets. Log lineage. Store uncertainty. If an auditor or buyer asks how a number was produced, "we ran a notebook" is not an acceptable answer.

Put security where the risk is

Mutual TLS for devices, signed firmware bundles, scoped API keys for diagnostics, and command authorization are basic requirements when remote assets can be switched on, off, or degraded by an attacker.
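Command authorization, the last item above, can be sketched with an HMAC over the command body. This uses a shared per-device secret for simplicity; a production deployment would likely prefer asymmetric signatures (e.g. Ed25519) so the signing key never lives on the device. The secret and field names here are placeholders.

```python
import hashlib
import hmac
import json

# Shared per-device secret, provisioned at manufacture. Placeholder value.
DEVICE_SECRET = b"per-device-provisioned-secret"

def sign_command(command: dict, secret: bytes = DEVICE_SECRET) -> dict:
    """Backend side: attach an HMAC-SHA256 signature so the device can
    reject entitlement changes that don't come from the backend."""
    body = json.dumps(command, sort_keys=True).encode("utf-8")
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return {"command": command, "sig": sig}

def verify_command(envelope: dict, secret: bytes = DEVICE_SECRET) -> bool:
    """Device side: verify before acting on enable/disable commands.
    compare_digest avoids leaking the signature via timing."""
    body = json.dumps(envelope["command"], sort_keys=True).encode("utf-8")
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])
```

A real scheme would also include a nonce or monotonic counter in the signed body to block replay of old "enable" commands.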

Why this matters beyond the portfolio

Catalyst Fund is backing startups. It’s also surfacing a set of engineering habits shaped by tougher operating conditions than most software teams ever face.

These companies are building edge systems that assume failure, payment stacks that assume delay, and impact pipelines that assume scrutiny. Those are sensible assumptions. A lot of the industry still builds around stable networks, cheap compute, and easy trust.

Those assumptions don’t hold up for long. Teams that learn from this playbook will build better systems almost anywhere.
