Malware Can Be Written With ChatGPT, It Turns Out


With its multi-talented AI chatbot, ChatGPT, OpenAI now has another skill to add to its LinkedIn profile: creating sophisticated “polymorphic” malware.
According to a new report from cybersecurity firm CyberArk, the chatbot has proven both skilled and resourceful at developing malicious programs that can cause serious trouble for a victim's systems.
Upcoming AI-powered tools are expected to change the game in the battle against cybercrime, but the use of chatbots to create more complex types of malware has not yet been discussed extensively, and many security professionals have raised concerns about the potential implications.
The researchers at CyberArk report that code developed with the help of ChatGPT displayed “advanced capabilities” that could “easily evade security products,” behavior characteristic of a specific type of malware known as “polymorphic.” Security firm CrowdStrike describes this category as follows:
There are many different types of viruses, but the most common is a polymorphic virus. This is sometimes called a metamorphic virus due to its capability to change its appearance repeatedly by altering decryption

[…]
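The defining trait described above, the same underlying payload repeatedly re-encoded so that its bytes never look the same twice, can be illustrated with a deliberately benign sketch. The snippet below is not from the CyberArk report; the `xor_encode` helper, the single-byte keys, and the harmless stand-in payload are all illustrative assumptions, used only to show why a signature computed over raw bytes fails when the encoding changes between “generations.”

```python
def xor_encode(payload: bytes, key: int) -> bytes:
    # Re-encode the same payload under a different single-byte XOR key.
    # XOR is self-inverse, so applying the same key again decodes it.
    return bytes(b ^ key for b in payload)

# Entirely benign stand-in for whatever code a real sample would carry.
PAYLOAD = b"print('hello')"

# Two "generations": identical behavior once decoded, different bytes on disk.
gen_a = xor_encode(PAYLOAD, 0x21)
gen_b = xor_encode(PAYLOAD, 0x5C)

assert gen_a != gen_b                       # byte-level signatures differ
assert xor_encode(gen_a, 0x21) == PAYLOAD   # yet each decodes to the same payload
assert xor_encode(gen_b, 0x5C) == PAYLOAD
```

A real polymorphic sample also mutates the decryption routine itself, not just the key, which is precisely what makes static byte signatures so brittle against it.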
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
