Google’s AI scam defenses in India are smart, overdue, and still too narrow
Google is rolling out two fraud protections in India, and they matter for different reasons. One is technically interesting. The other is likely to help more people sooner. Both are late. Both have clear limits.
The first is on-device scam detection for phone calls on Pixel 9 and newer, using Gemini Nano to watch live calls from unknown numbers for signs of social engineering. The second is screen-sharing scam alerts inside financial apps like Navi, Paytm, and Google Pay on Android 11+, aimed at a common fraud pattern where scammers talk users through a payment while watching the screen in real time.
That focus makes sense. India’s digital payments system is huge, fast, and heavily targeted. RBI data shows digital transaction fraud accounted for more than half of reported bank fraud in 2024. The Ministry of Home Affairs estimated online scam losses at ₹70 billion in just the first five months of 2025. A lot of fraud now happens through live persuasion: calls, screen sharing, remote access tools, fake urgency, and step-by-step coaching.
Google is finally putting defenses inside that path.
The call detection is the harder problem
Google says the Pixel feature runs fully on-device. Audio isn’t recorded or sent to Google. A periodic beep plays during analysis. It’s opt-in and, at launch, English only.
That setup matters. Live scam detection on calls has three ugly constraints: privacy, latency, and false positives.
Cloud processing would turn privacy into a problem fast. On-device processing that reacts too slowly won’t help much. A detector that fires too often gets ignored, and then the warning system is dead weight.
So the architecture choice is sensible. A lightweight local pipeline probably looks roughly like this:
- short-window ASR transcribes speech incrementally
- a small classifier or prompt-style detector checks for scam patterns
- the phone decides whether to interrupt with a warning
The patterns are common enough to model: requests for OTPs, UPI PINs, account details, “your account will be blocked” pressure, instructions to install remote access apps like AnyDesk, fake bank verification flows. The hard part is doing it fast enough to matter while dealing with noisy audio, interruptions, mixed accents, and code-switching.
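To make the scoring stage concrete, here is a minimal Kotlin sketch of that loop, assuming a keyword-weighted detector over a sliding transcript window. The patterns, weights, and threshold are invented for illustration; Google hasn't published how Gemini Nano actually flags a call, and a real system would use a trained classifier rather than a regex list.

```kotlin
// Hypothetical sketch: score incremental ASR output against known scam
// patterns. Pattern lists, weights, and the threshold are illustrative,
// not Google's actual model.
class ScamCallScorer(
    private val windowSize: Int = 40,        // words of context to keep
    private val warnThreshold: Double = 3.0  // cumulative score that triggers a warning
) {
    // Weighted phrases seen in coached-payment fraud; a production system
    // would use a trained classifier, not a keyword list.
    private val patterns = mapOf(
        Regex("\\botp\\b") to 1.5,
        Regex("\\bupi pin\\b") to 2.0,
        Regex("\\baccount (will be )?(blocked|frozen)\\b") to 2.0,
        Regex("\\b(anydesk|teamviewer|remote access)\\b") to 2.5,
        Regex("\\bkyc (update|verification)\\b") to 1.5
    )

    private val window = ArrayDeque<String>()

    // Feed each incremental transcript chunk; returns true if the phone
    // should interrupt the call with a warning.
    fun onTranscriptChunk(chunk: String): Boolean {
        chunk.lowercase().split(Regex("\\s+")).forEach { word ->
            window.addLast(word)
            if (window.size > windowSize) window.removeFirst()
        }
        val text = window.joinToString(" ")
        val score = patterns.entries.sumOf { (re, weight) ->
            re.findAll(text).count() * weight
        }
        return score >= warnThreshold
    }
}
```

Even this toy version exposes the localization problem: every pattern above is English, and a Hindi or Bengali call that code-switches only on the key terms will score unevenly.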
That’s where the English-only launch starts to look weak. Scam calls in India don’t stay neatly in one language. A caller might speak Hindi or Bengali and drop in “OTP,” “KYC,” “account freeze,” or “UPI verification” in English. Small on-device models are already running on a tight compute budget. Multilingual speech, slang, and overlap make the job much harder without server-side inference.
The technical direction is still right. If you want live call intervention at scale, local inference is the cleanest way to do it.
Effective scam detection on calls is a streaming NLP problem under strict latency budgets. Privacy makes it harder. India’s language mix makes it harder again.
The periodic beep is worth noting too. Part of it is transparency. Part of it is legal cover. It also signals that real-time audio analysis has moved from research demo to consumer security feature.
The screen-sharing alerts may matter more
The flashy feature is the AI call detector. The one with broader near-term value may be the screen-share warning.
This is being piloted with Navi, Paytm, and Google Pay. If a user is sharing their screen during a sensitive flow inside the app, they see an alert with a one-tap option to end the call and stop sharing.
That lines up with how a lot of financial fraud now works. The scammer doesn’t need malware if they can keep the victim on a call, get them to share the screen, and walk them through the payment. It’s cheap, scalable, and effective.
On Android, screen capture usually runs through the MediaProjection API. Apps can detect active projection sessions and decide when to warn, especially around OTP entry, PIN pads, account recovery, KYC steps, and beneficiary setup. Add FLAG_SECURE on sensitive views and you make screen-based coaching attacks much harder.
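A hedged sketch of both mechanics follows, assuming Android 15's screen-recording callback (which needs the DETECT_SCREEN_RECORDING permission and API 35) alongside the long-standing FLAG_SECURE flag. The warning UI is a hypothetical placeholder; how Navi, Paytm, and Google Pay actually detect sharing isn't public.

```kotlin
import android.app.Activity
import android.os.Build
import android.os.Bundle
import android.view.WindowManager
import java.util.function.Consumer

class PaymentActivity : Activity() {
    // Fires when this app's windows become visible in a screen recording
    // or sharing session (Android 15+).
    private val recordingCallback = Consumer<Int> { state ->
        if (state == WindowManager.SCREEN_RECORDING_STATE_VISIBLE) {
            showEndScreenShareWarning()  // hypothetical one-tap warning UI
        }
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Block screenshots and hide this window from capture/projection.
        window.setFlags(
            WindowManager.LayoutParams.FLAG_SECURE,
            WindowManager.LayoutParams.FLAG_SECURE
        )
    }

    override fun onStart() {
        super.onStart()
        if (Build.VERSION.SDK_INT >= 35) {
            // The return value reports whether recording is already active.
            val state = windowManager.addScreenRecordingCallback(mainExecutor, recordingCallback)
            recordingCallback.accept(state)
        }
    }

    override fun onStop() {
        super.onStop()
        if (Build.VERSION.SDK_INT >= 35) {
            windowManager.removeScreenRecordingCallback(recordingCallback)
        }
    }

    private fun showEndScreenShareWarning() { /* app-specific alert */ }
}
```

Note the tension in the design: FLAG_SECURE blanks the sensitive view out of the shared feed entirely, while the callback lets the app keep the view visible but warn the user, which is closer to what the pilot describes.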
It won’t stop every attack. A manipulated user can still push past warnings. But this targets a common fraud path, and the mechanics are straightforward. It’s also much easier to deploy across Android than a Pixel-only AI model.
Google says Indian language support and more partners are coming. That needs to move quickly. Security warnings fail when users only partly understand them, and India is unforgiving about weak localization.
The usual Android defenses still matter
The new features arrive alongside the standard enforcement tools still doing quieter work.
Google says Play Protect blocked 115 million installation attempts this year from sideloaded apps requesting sensitive permissions commonly abused in fraud. Google Pay generates more than a million warnings a week for potentially fraudulent transactions. The company is also continuing its DigiKavach awareness push and maintaining a list of authorized digital lending apps with the RBI.
Those numbers sound good, but they need context. Play Protect helps, especially against sideloaded junk asking for READ_SMS, QUERY_ALL_PACKAGES, SYSTEM_ALERT_WINDOW, or abusive AccessibilityService access. But app review and policy enforcement are still patchy. Bad actors keep adapting. Fraudulent lending and investment apps still get through, stay up too long, and often disappear only after users or police force the issue.
The blocking matters. It doesn’t come close to solving the problem on its own.
The biggest issue is reach
Google is trying to address a mass-market fraud problem with a premium-device rollout.
That’s the central weakness here. Android has roughly 96% smartphone share in India, but Pixel’s share was under 1% in 2024. Limiting live call scam detection to Pixel 9 and newer means the strongest protection lands on a tiny part of the market. The people most exposed to phone-led financial scams are often using lower-cost Android phones, not flagship Pixels running Gemini Nano.
There are obvious reasons for that choice. On-device inference has real hardware demands. You need enough memory, decent sustained performance, and ideally an NPU that can handle streaming workloads without draining the battery. Rolling out first on Google’s own devices gives the company tighter control over latency, UX, and model behavior.
Still, the coverage is narrow.
The language gap is just as serious. English-only support means the detector will miss a large share of scam calls or catch them unevenly. That’s risky in a security feature because users tend to read silence as safety.
Google’s move is good. It just isn’t broad enough to change the fraud equation yet.
What Android and fintech teams should take from this
If you build financial products on Android, assume the OS will help a little and build your own layered defenses anyway.
A few priorities stand out:
- Detect active MediaProjection sessions and warn aggressively during OTP, PIN, login, and beneficiary flows.
- Use FLAG_SECURE on views that expose secrets or payment credentials.
- Treat AccessibilityService and overlay abuse as active fraud signals, not edge cases (a sketch of one such signal follows this list).
- Avoid automatic OTP reading. Don't ask for READ_SMS if you can help it.
- Feed client-side risk signals into server-side checks like transaction holds, step-up auth, or beneficiary cooling periods.
- Track how users respond to warnings. If everybody dismisses them, the wording or timing is wrong.
- Localize every high-risk warning properly, early.
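For the accessibility bullet above, a minimal sketch of one signal: enumerating enabled accessibility services and flagging anything outside an allowlist. The allowlist entry is a placeholder; a real deployment ships a curated list and feeds hits into the server-side checks mentioned earlier.

```kotlin
import android.accessibilityservice.AccessibilityServiceInfo
import android.content.Context
import android.view.accessibility.AccessibilityManager

// Known-good services (e.g. TalkBack); anything else raises the risk score.
// Placeholder allowlist for illustration only.
private val knownServices = setOf(
    "com.google.android.marvin.talkback"
)

fun suspiciousAccessibilityServices(context: Context): List<String> {
    val am = context.getSystemService(Context.ACCESSIBILITY_SERVICE) as AccessibilityManager
    return am.getEnabledAccessibilityServiceList(AccessibilityServiceInfo.FEEDBACK_ALL_MASK)
        .mapNotNull { it.resolveInfo.serviceInfo?.packageName }
        .filter { it !in knownServices }
}
```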
Play Integrity API belongs in the stack too, especially if you’re already scoring device trust, rooting, tampering, or app association risk. It won’t stop social engineering, but it does help connect on-device signals to backend enforcement.
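If you're adopting it, the classic request flow is small. A minimal sketch with the official Play Integrity client library, where the nonce, backend call, and failure handling are placeholders you'd wire to your own server; the token itself must be decrypted and verified server-side:

```kotlin
import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

fun requestIntegrityToken(context: Context, serverNonce: String) {
    val integrityManager = IntegrityManagerFactory.create(context)
    integrityManager
        .requestIntegrityToken(
            IntegrityTokenRequest.builder()
                .setNonce(serverNonce)  // bind the token to this transaction
                .build()
        )
        .addOnSuccessListener { response ->
            // The token is opaque on-device; send it to your backend for verification.
            sendToBackend(response.token())
        }
        .addOnFailureListener { e ->
            // Treat failures as a risk signal rather than a hard block, at least initially.
        }
}

fun sendToBackend(token: String) { /* app-specific transport */ }
```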
Another practical step: watch for remote desktop and screen-control apps during sensitive actions. AnyDesk-style fraud keeps showing up because it works.
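A crude but cheap version of that check is a presence scan for known remote-control packages at the moment of a sensitive action; a sketch, assuming the package names below are current (verify them, and declare them in a `<queries>` manifest element on API 30+). Presence isn't proof of active control, so treat a hit as one more risk signal, not a verdict.

```kotlin
import android.content.Context
import android.content.pm.PackageManager

// Small illustrative sample; a real list would be curated and updated.
private val remoteControlPackages = listOf(
    "com.anydesk.anydeskandroid",               // AnyDesk
    "com.teamviewer.teamviewer.market.mobile"   // TeamViewer
)

fun installedRemoteControlApps(context: Context): List<String> =
    remoteControlPackages.filter { pkg ->
        try {
            context.packageManager.getPackageInfo(pkg, 0)
            true
        } catch (e: PackageManager.NameNotFoundException) {
            false
        }
    }
```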
Where this goes next
Google is pushing more security logic onto the device and into the app flow. That’s the right direction. Fraud happens in real time, so defenses need to show up there too.
Now the pressure shifts to OEMs, Apple, carriers, and payment platforms. Caller identity systems, STIR/SHAKEN quality, app integrity checks, UPI risk scoring, screen-sharing controls, and behavioral warnings all matter here. The strongest version of this stack will span multiple layers. One classifier or one app prompt won’t carry it.
For now, Google has shipped two useful ideas and one obvious frustration. The ideas are good. The scale still isn’t there. In India, that matters more than anything else.