Identity Hijack: The Next Generation of Identity Theft

Synthetic representations of people’s likenesses, or “deepfake” technology, are not new. Picture the 2020 episode of “The Mandalorian” in which a digitally de-aged Mark Hamill appeared as a youthful Luke Skywalker.

Similarly, artificial intelligence is not a novel concept. 

However, ChatGPT’s launch at the end of 2022 made generative AI widely available at low cost, which in turn sparked a race among nearly all of the mega-cap tech companies (as well as a number of startups) to develop more powerful models.

For months, experts have been warning about the risks and active threats posed by the current expansion of AI, including widening socioeconomic inequality, economic upheaval, algorithmic discrimination, misinformation, political instability, and a new era of fraud.

Over the past year, there have been numerous reports of AI-generated deepfake fraud in a variety of formats, including schemes to extort money from unsuspecting consumers, mock artists, and embarrass celebrities at scale.