North Korean Threat Actors Leverage ChatGPT in Deepfake Identity Scheme

The North Korean hacking group Kimsuky is using ChatGPT to create convincing deepfake South Korean military identification cards, a troubling instance of how artificial intelligence can be weaponised in state-backed cyber warfare.

As part of its cyber-espionage campaign, the group embedded the falsified documents in phishing emails targeting defence institutions and individuals, lending an additional layer of credibility to its espionage activities.
The AI-generated IDs made the attacks, which aimed to deceive recipients, deliver malicious software, and exfiltrate sensitive data, considerably more effective. Security monitors have categorised the incident as an AI-related hazard, noting that the misuse of ChatGPT directly caused harm through the breach of confidential information and the violation of personal rights.
The case highlights growing concern about the use of generative AI in sophisticated state-sponsored operations. By combining deepfake technology with phishing tactics, these attacks become harder to detect and far more damaging.
Palo Alto Networks’ Unit 42 has observed a disturbing increase in the use of real-time deep

[…]