Web Development March 4, 2026

India's Section 69A order is disrupting Supabase access for developers

India’s Supabase block breaks real apps, and it exposes a weak point in modern web stacks

India’s February 24 blocking order under Section 69A is disrupting access to parts of Supabase inside the country, and the impact goes well beyond a few developers losing a dashboard. Reports show *.supabase.co endpoints are intermittently unreachable on major Indian ISPs including ACT Fibernet, JioFiber, and Airtel, while supabase.com itself remains accessible.

That split matters because the marketing site is irrelevant if your production app depends on https://<project>.supabase.co for auth, PostgREST, Realtime, Storage, or Edge Functions. For affected users, signups fail, API calls time out, websocket connections drop, uploads break, and anything built around direct browser-to-Supabase traffic starts to come apart.

Supabase says India represents roughly 9% of its global traffic. That's a meaningful market for the company, and a real operational problem for teams serving Indian users.

Why this hurts more than a GitHub block

India has blocked developer services before. GitHub has seen temporary restrictions in the past. But GitHub usually isn't sitting in the live request path of a customer-facing app. Supabase often is.

A lot of teams use Supabase as a public backend, wired straight into the client. The browser talks directly to Supabase for session handling, row-level protected reads, file uploads, realtime events, and serverless functions. It's clean when everything works. It also means one blocked hostname can take out a big slice of your product.

Teams still underrate this. They think they're buying managed Postgres plus some extras. What they're really adopting is a dependency on a public network endpoint. If that endpoint gets filtered at the ISP layer, your database can be perfectly healthy while the app is effectively down for part of your audience.

What the network behavior suggests

The pattern being reported looks like hostname-level filtering, not a broad infrastructure outage.

A few common enforcement methods fit:

  • DNS tampering. Requests for *.supabase.co can return NXDOMAIN, a null route, or some other poisoned response.
  • SNI filtering. During the TLS handshake, the client still exposes the requested hostname through Server Name Indication. TLS 1.3 doesn't hide that unless Encrypted Client Hello is in play, and ECH support is still limited in the wild.
  • Host header filtering for plaintext or proxied traffic.
  • IP blocking, though that's a blunter tool and can create collateral damage if addresses are shared.

That's why switching to a different DNS resolver often doesn't fully fix it. DNS-over-HTTPS or DNS-over-TLS might get around poisoned DNS. They won't help if the connection is dropped because supabase.co is visible in SNI.

For a typical project, all of these can fail together:

  • https://<project>.supabase.co/rest/v1
  • https://<project>.supabase.co/auth/v1
  • wss://<project>.supabase.co/realtime/v1
  • https://<project>.supabase.co/storage/v1
  • https://<project>.supabase.co/functions/v1

From the user side, it feels random. Some pages load. Others hang. The failures cluster around anything interactive.
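
The endpoint list above all hangs off one project hostname. A minimal sketch makes the coupling explicit (the project ref "abcd1234" is hypothetical):

```typescript
// Sketch: every Supabase service endpoint is derived from one hostname,
// so a single hostname-level block takes out all of them at once.
type SupabaseEndpoints = {
  rest: string;
  auth: string;
  realtime: string;
  storage: string;
  functions: string;
};

function projectEndpoints(projectRef: string): SupabaseEndpoints {
  const host = `${projectRef}.supabase.co`;
  return {
    rest: `https://${host}/rest/v1`,
    auth: `https://${host}/auth/v1`,
    realtime: `wss://${host}/realtime/v1`,
    storage: `https://${host}/storage/v1`,
    functions: `https://${host}/functions/v1`,
  };
}

// Count the distinct hosts behind all five services.
const endpoints = projectEndpoints("abcd1234");
const hosts = new Set(Object.values(endpoints).map((u) => new URL(u).host));
console.log(hosts.size); // 1 — a single point of network failure
```

Five services, one host: that is exactly why the failures cluster around anything interactive rather than looking like one broken feature.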

Supabase’s architecture is also the failure mode

Supabase is popular because it compresses a messy backend stack behind one project URL. Under that URL you get a bundle of open source components:

  • PostgREST for REST over PostgreSQL
  • GoTrue for auth and JWT issuance
  • Realtime for websocket updates from logical replication
  • Storage for object handling
  • Edge Functions on a Deno runtime
  • A gateway layer handling routing, CORS, and rate limiting

That's great for developer speed. It's also why a block like this hurts so much.

You don't lose one service. You lose the front door.

This matters beyond Supabase. Any app that sends browser traffic directly to third-party infrastructure domains is taking on this kind of exposure. A client app that talks to APIs behind your own domain has a very different risk profile from one that depends on a third-party hostname for every authenticated action.

Custom domains may help, but don't assume too much

Some Supabase plans support custom domains. In theory, that can avoid a block aimed specifically at supabase.co, because the client presents your hostname in SNI rather than Supabase's.

Whether that works depends on how the block is enforced.

If the filtering is mostly SNI or domain-based, a custom domain may work. If the ISP is blackholing IP ranges or inspecting traffic more deeply, it may not. There's also a bunch of operational cleanup that teams like to ignore until it breaks production:

  • CORS rules need to be updated correctly
  • cookie domains can break auth in subtle ways
  • JWT audiences and callback URLs need to line up
  • if the custom domain just CNAMEs into a blocked target path, you may still be stuck
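
That cleanup list can be turned into a pre-cutover sanity check. A sketch under assumptions: the `CutoverConfig` shape is hypothetical, so adapt it to wherever your CORS rules, cookie settings, and auth URLs actually live:

```typescript
// Hypothetical config shape for a custom-domain cutover check.
interface CutoverConfig {
  apiDomain: string;          // e.g. "api.example.com" (the new custom domain)
  corsOrigins: string[];      // browser origins allowed to call the API
  cookieDomain: string;       // domain scope for auth/session cookies
  authRedirectUrls: string[]; // OAuth / magic-link callback URLs
}

function cutoverProblems(cfg: CutoverConfig): string[] {
  const problems: string[] = [];
  // Cookies scoped to the wrong domain break sessions in subtle ways.
  if (!cfg.apiDomain.endsWith(cfg.cookieDomain.replace(/^\./, ""))) {
    problems.push(`cookie domain ${cfg.cookieDomain} does not cover ${cfg.apiDomain}`);
  }
  // Callback URLs still pointing at *.supabase.co keep you in the blocked path.
  for (const url of cfg.authRedirectUrls) {
    if (new URL(url).host.endsWith("supabase.co")) {
      problems.push(`redirect URL still targets supabase.co: ${url}`);
    }
  }
  // Missing CORS rules fail only at runtime, from the browser.
  if (cfg.corsOrigins.length === 0) {
    problems.push("no CORS origins configured for the new domain");
  }
  return problems;
}
```

None of this replaces testing from real networks, but it catches the misconfigurations that otherwise surface as confusing auth failures after the DNS change.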

There's also a policy question here. A technical workaround isn't automatically a safe long-term business move. Teams operating in India need legal guidance before treating routing tricks as the answer.

Direct-to-backend architectures have a regulatory weak spot

This is bigger than one vendor.

Over the past few years, a lot of web architecture has moved logic and data access closer to the client. That includes direct browser calls to hosted auth providers, managed databases exposed through REST layers, vector stores, storage services, and edge runtimes. It cuts backend code. It also puts regulatory and network exposure directly in the user's request path.

That trade-off has been cheap on paper and expensive in moments like this.

If your app depends on a public third-party domain from the browser, you inherit every DNS block, SNI filter, certificate problem, and regional policy fight that domain runs into. Users feel it immediately. Your backend team often can't patch around it quickly because the browser is already wired to the dependency.

AI products are especially exposed. Plenty use Supabase for chat history, embeddings metadata, RAG document stores, usage tracking, and auth. Lose access to the project URL and you don't just lose persistence. You break login, context retrieval, sync, and rate-limiting logic in the same hit.

What engineering teams should do this week

If you have users in India, or in any market with a history of network-level service restrictions, this needs triage now.

Start with an inventory. Search for supabase.co, NEXT_PUBLIC_SUPABASE_URL, SUPABASE_URL, and any direct client SDK calls. Check web, mobile, background jobs, and upload flows. A lot of teams have more direct dependency here than they realize.
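
The inventory step is mechanical enough to script. A minimal sketch using the search patterns named above (the `createClient(` pattern is an assumption based on the usual `@supabase/supabase-js` entry point):

```typescript
// Patterns indicating a direct Supabase dependency, per the inventory
// list in this article; extend for your own stack.
const SUPABASE_PATTERNS: RegExp[] = [
  /supabase\.co/,
  /NEXT_PUBLIC_SUPABASE_URL/,
  /SUPABASE_URL/,
  /createClient\(/, // assumed @supabase/supabase-js entry point
];

// Returns the 1-based line numbers in a file's text that reference
// Supabase directly. Run it over web, mobile, and job code alike.
function findSupabaseRefs(fileText: string): number[] {
  return fileText
    .split("\n")
    .flatMap((line, i) =>
      SUPABASE_PATTERNS.some((p) => p.test(line)) ? [i + 1] : []
    );
}
```

Pipe your repo's files through this (or just grep for the same patterns) and you have a concrete blast-radius map instead of a guess.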

Then test from inside India across multiple networks. Don't trust a single synthetic probe. Enforcement like this is often inconsistent by ISP and city.

After that, the options get pretty practical.

Put an indirection layer under your own domain

For many teams, the cleanest short-term move is to stop sending the browser directly to Supabase where possible. Route requests through your own API and let your server talk to Supabase.

That gives you control over DNS, observability, retries, and possible failover. It also adds latency, server cost, and code you probably avoided by choosing Supabase in the first place. That's the bill for the simplicity.
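
A minimal sketch of that indirection layer, assuming Node 18+ with global `fetch`; the `/api/sb` prefix and the `SUPABASE_BASE` env var are arbitrary choices, not Supabase conventions:

```typescript
// The browser calls /api/sb/* on YOUR domain; the server forwards to
// Supabase. The browser never resolves a supabase.co hostname, so an
// ISP-level block on that domain doesn't reach it.
const SUPABASE_BASE =
  process.env.SUPABASE_BASE ?? "https://<project>.supabase.co";

// Pure mapping from a public path on your domain to the upstream URL.
function upstreamUrl(publicPath: string): string {
  if (!publicPath.startsWith("/api/sb/")) {
    throw new Error(`unexpected path: ${publicPath}`);
  }
  return SUPABASE_BASE + publicPath.slice("/api/sb".length);
}

// Server-side forwarder. Real code should also strip hop-by-hop
// headers and enforce its own auth before forwarding.
async function proxyToSupabase(req: Request): Promise<Response> {
  const url = upstreamUrl(new URL(req.url).pathname);
  const body =
    req.method === "GET" || req.method === "HEAD"
      ? undefined
      : await req.arrayBuffer();
  return fetch(url, { method: req.method, headers: req.headers, body });
}
```

The client SDK can usually be pointed at the proxy by overriding its base URL, which keeps application code mostly unchanged while you regain control of the hostname.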

Use custom domains if your plan supports them

Worth trying. But verify from real Indian networks, across multiple ISPs, before you declare victory.

Move sensitive paths server-side

Uploads are a good example. Direct browser uploads to Supabase Storage are convenient until the storage endpoint disappears. A server-mediated signed URL flow or proxy upload path is heavier, but at least it gives you somewhere to intervene.
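
A sketch of the server-mediated upload path, under assumptions: the `/storage/v1/object/<bucket>/<path>` shape follows Supabase's storage API, but the `uploads` bucket name and the exact auth handling here are illustrative, not prescriptive:

```typescript
// Reject traversal and absolute paths before the object name goes
// anywhere near storage. Pure function, easy to test.
function safeObjectPath(userPath: string): string {
  const normalized = userPath.replace(/\\/g, "/");
  if (normalized.startsWith("/") || normalized.split("/").includes("..")) {
    throw new Error(`rejected object path: ${userPath}`);
  }
  return normalized;
}

// Server-side upload: the browser posts the file to your own API, and
// your server forwards it with the service key. Heavier than a direct
// browser upload, but it gives you a place to intervene when the
// storage endpoint is unreachable for some users.
async function uploadViaServer(
  path: string,
  data: Blob,
  storageBase: string, // e.g. https://<project>.supabase.co/storage/v1
  serviceKey: string
): Promise<number> {
  const object = safeObjectPath(path);
  const res = await fetch(`${storageBase}/object/uploads/${object}`, {
    method: "POST",
    headers: { Authorization: `Bearer ${serviceKey}` },
    body: data,
  });
  return res.status;
}
```

The same structure works for a signed-URL flow: the server issues the URL, and you can swap which backend the URL points at without touching the client.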

Prepare a second home for your data plane

If the disruption drags on, some teams will need an exit ramp.

The least painful route is usually to keep Postgres semantics and rebuild the API layer around that. That could mean self-hosting the Supabase stack, or moving to a managed Postgres provider like Neon, Aiven, Crunchy Bridge, Timescale, or RDS and putting your own backend in front of it.

The database migration work is familiar:

  • dump schema and roles
  • restore row-level security policies
  • move data
  • validate sequences, indexes, and extensions
  • cut over during a read-only window if you need consistency

The harder part is replacing the behavioral pieces teams forget they're relying on: auth flows, storage metadata, websocket semantics, function triggers, and the assumptions baked into client SDK usage.

The architecture lesson

Treat managed endpoint DNS as a production dependency on the same level as a database primary or a payment provider. If the hostname disappears in one country, what still works? What degrades cleanly? What has a first-party fallback?
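One concrete form of those questions is an explicit preference order with a clean degraded state. A minimal sketch, where both base URLs are hypothetical:

```typescript
// Preference order for the data plane: your own domain first (the
// indirection layer), the direct endpoint second, null last.
const CANDIDATE_BASES: string[] = [
  "https://api.example.com/sb",    // first-party domain, under your control
  "https://<project>.supabase.co", // direct, may be blocked regionally
];

// `reachable` is whatever health check you already run (synthetic
// probe, recent success rate, etc.). Returning null forces the caller
// to degrade cleanly — read-only cache, queued writes — instead of
// hard-failing on a hostname it cannot reach.
function pickBase(reachable: (base: string) => boolean): string | null {
  for (const base of CANDIDATE_BASES) {
    if (reachable(base)) return base;
  }
  return null;
}
```

The point is not this particular helper; it is that the fallback decision exists as explicit code you can test, rather than as an assumption buried in a hardcoded URL.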

A lot of teams don't have a good answer because the happy path has been so easy.

Supabase is still a strong product. But this episode makes one thing plain: direct third-party backend access from the client creates a nasty single-hostname failure mode. If one blocked domain can wipe out auth, storage, realtime, and your main API in a major market, the architecture isn't graceful. The DX is.
