Researcher uses artificial intelligence to create undetectable malware

A security firm researcher got ChatGPT to use a file-camouflaging technique

April 19, 2023 – 20:45
(Updated 4/20/2023 at 3:33 AM)

OpenAI’s generative artificial intelligence, ChatGPT, has revolutionized many areas of the digital world. While that is good news for many sectors, it also raises concerns; one of them is online security, and this week cybersecurity firm Forcepoint issued an alert about worrying activity on that front.

According to Forcepoint, the tool is capable of producing undetectable malware using a technique called steganography, which “camouflages” smaller files within larger ones. Although the technique does not necessarily involve malicious content, it cannot be ruled out that cybercriminals are already aware that it can be applied to malware.

The discovery was made by Forcepoint researcher Aaron Mulgrew, who asked the artificial intelligence to write the code for the malware. Initially the request was denied, since it violated the tool’s usage guidelines, which bar illegal activities such as creating a virus.

Photo: Rafael Damini/Canaltech

ChatGPT replied: “Sorry, but as an AI language model, I cannot create malware or any illegal or harmful system. My goal is to help users in a responsible way, and I am fully committed to moral and ethical standards.”

How to create undetectable malware with ChatGPT?

The researcher then tried another strategy: he began sending ChatGPT commands that each performed one small step of the malware-building process. Little by little, through these simple commands, the tool assembled an undetectable virus.

The first step was to instruct the AI to generate code that would find images larger than 5MB on the local disk. He then asked it to generate another piece of code capable of hiding files smaller than 1MB inside those images.

“The artificial intelligence inadvertently uses steganography to piece together a malware system. The technique consists of covertly hiding files inside other files, disguising potentially sensitive data within images and uploading those images to a public Google Drive,” explains Luiz Farrow, Forcepoint Systems Engineering Director for Latin America.

To verify the effectiveness of the malware, Forcepoint ran it through services that analyze URLs and files to identify potential attacks. On the first attempt, 69 different malware analysis engines flagged the file as malicious. In the next version of the malware, however, no engine detected it; it had thus been possible to create an undetectable virus using ChatGPT.

Farrow closes with a warning to OpenAI itself and to the entire market about the discovery: “The fact that it is relatively easy to subvert the AI into building malware shows that new cybercriminals no longer need to know how to code. They only need to know the process, and the tool does the rest.”

By Chris Skeldon

"Coffee trailblazer. Social media ninja. Unapologetic web guru. Friendly music fan. Alcohol fanatic."