AI Deepfake Scam Changes Aadhaar Mobile Without OTP

AI-enabled fraudsters are now using deepfake tools to change Aadhaar details, such as the mobile number linked to an account, without victims noticing, enabling identity theft and loan fraud.

In Ahmedabad, cybercrime investigators uncovered a racket that quietly replaced victims’ Aadhaar-linked mobile numbers and then used those new numbers to intercept OTPs and take control of digital services, including DigiLocker and banking apps. The gang reportedly collected Aadhaar numbers, photographs and other personal data from leaks and social media, then used AI software to turn still photos into short “blink” videos that mimic liveness checks and fool verification systems. 
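The weakness the gang exploited is that a motion-based liveness check only tests for a blink-like signal, not for whether that signal came from a live person. The following is a purely illustrative sketch (not any real Aadhaar/UIDAI verifier; function names and thresholds are assumptions) of why an AI-animated photo passes such a check just as easily as a real face:

```python
# Hypothetical blink-based liveness check: the verifier watches the
# eye-aspect-ratio (EAR) across video frames and declares "live" if the
# EAR dips below a threshold for a few consecutive frames (a blink).
# All names and thresholds here are illustrative assumptions.

def naive_liveness_check(ear_series, blink_threshold=0.2, min_blink_frames=2):
    """Return True if the EAR drops below blink_threshold for at least
    min_blink_frames consecutive frames, i.e. a blink-like dip occurs."""
    run = 0
    for ear in ear_series:
        if ear < blink_threshold:
            run += 1
            if run >= min_blink_frames:
                return True
        else:
            run = 0
    return False

# A genuine blink and an AI-animated still photo produce the same
# signal shape, so both pass the check:
real_blink = [0.30, 0.28, 0.15, 0.12, 0.29, 0.31]
ai_animated_photo = [0.31, 0.29, 0.14, 0.13, 0.30, 0.30]

print(naive_liveness_check(real_blink))         # True
print(naive_liveness_check(ai_animated_photo))  # True: motion alone proves nothing
```

Because the check inspects only the motion signal, any tool that can synthesise a plausible blink defeats it; robust detection has to look for generation artifacts or use challenge-response prompts instead.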

Once the fraudsters changed the registered mobile number, they could receive OTPs and update KYC details, effectively hijacking victims’ digital identities and applying for loans or accessing accounts in their names. Police say the operation was organised with distinct roles: some members sourced data and photos, others used Aadhaar update kits—often through Common Service Centres (CSCs)—to make unauthorised changes, and specialists created deepfake clips to pass biometric checks.

Authorities arrested several suspects after a businessman reported that his Aadhaar-linked number was altered without any OTP or call alerts, revealing how smoothly the criminals combined social engineering, physical update kits, and AI manipulation to bypass safeguards. Reports indicate the attackers exploited weaknesses in offline update workflows and gaps in liveness-detection systems that still accept AI-generated motion as genuine.

Safety recommendations 

To protect yourself, regularly verify the mobile number linked to your Aadhaar, and lock your biometrics through the official mAadhaar app or UIDAI website when you are not actively using them. Monitor DigiLocker and bank accounts for unexpected changes and set up transaction alerts with your bank; if you spot unusual activity, report it immediately to local cybercrime units or UIDAI's helplines. Avoid uploading Aadhaar photos or documents to unfamiliar platforms, and be cautious about sharing personal information on social media, which criminals can reuse to create convincing deepfakes.

Longer-term fixes will require stricter controls on Aadhaar update kits at CSCs, better audit trails for demographic changes, and liveness-detection algorithms that can distinguish AI-generated clips from genuine facial movement. Experts and regulators also urge faster data-breach notification rules and tighter access controls on identity databases so criminals cannot easily assemble the building blocks for such attacks. Until those systemic changes arrive, vigilance, biometric locks, and prompt reporting remain citizens' best defences.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents