In May of this year, an anonymous person called and texted elected lawmakers and business executives while pretending to be a senior White House official. U.S. senators were among the recipients who believed they were speaking with White House chief of staff Susie Wiles. In reality, the caller was an impostor.
The scammer used AI-powered deepfake software to replicate Wiles’ voice. These tools are easily accessible and inexpensive: given only a short public speech clip, they can generate new audio in the speaker’s voice convincing enough to deceive a target.
Why are deepfakes so convincing?
Deepfakes are alarming because of how authentic they appear. AI models can analyse public photographs or recordings of a person (for example, from social media or YouTube) and then generate a fake that mimics their face or voice with striking accuracy. As a result, many people overestimate their ability to detect fakes. In an iProov poll, 43% of respondents said they could not tell the difference between a real video and a deepfake, and nearly one-third did not know what a deepfake was, highlighting a vast pool of potential victims.