Kyle Alspach


The OpenAI chatbot can be used to write malware code and phishing emails effectively, even by individuals with a limited understanding of cybersecurity fundamentals, researchers said in a new report.



Among the “most pressing and common threats” from the use of ChatGPT by cybercriminals are phishing, social engineering and malware development, threat intelligence firm Recorded Future said in a report Thursday.

The report adds further evidence, gleaned from sources such as dark web forums, that OpenAI’s massively popular chatbot is being used by malicious actors intent on carrying out cyberattacks with the help of the tool.

[Related: ChatGPT Malware Shows It’s Time To Get ‘More Serious’ About Security]

“ChatGPT lowers the barrier to entry for threat actors with limited programming abilities or technical skills,” Recorded Future researchers said in the report. “It can produce effective results with just an elementary level of understanding in the fundamentals of cybersecurity and computer science.”

The company said it has “identified threat actors on dark web and special-access sources sharing proof-of-concept ChatGPT conversations that enable malware development, social engineering, disinformation, phishing, malvertising, and money-making schemes.”

For instance, ChatGPT’s skill at imitating human writing “gives it the potential to be a powerful phishing and social engineering tool,” the researchers said.

The AI-powered chatbot could prove especially useful for threat actors who are not fluent in English, with the potential for the tool to be used to “more effectively” distribute malware, according to the report.

Meanwhile, although the potential use of ChatGPT for malware development has already drawn plenty of attention, Recorded Future’s research team highlighted several more advanced ways the tool could be put to use in malware creation.

Those include training ChatGPT on malware code found in open-source repositories to generate “unique variations of that code which evade antivirus detections”; using “syntactical workarounds that ‘trick’ the model” into fulfilling a request to write code that exploits vulnerabilities; and using ChatGPT to create malware configuration files and set up a command-and-control system.

Notably, ChatGPT can also be used to generate the malware payload itself that is intended for distribution as part of a cyberattack, according to Recorded Future researchers. The research team has identified several malware payloads that ChatGPT is effective at generating, including infostealers, remote access trojans and cryptocurrency stealers.

Through the use of Recorded Future’s intelligence platform, the researchers said they identified 1,582 references “on dark web and special-access sources to threat actors discussing and sharing proof-of-concept code generated by ChatGPT that fit the criteria listed above.”

Those included conversations during which malicious actors have shared ChatGPT-written code “that can be weaponized to develop malware,” as well as discussions about using the tool to help with exploiting critical vulnerabilities in software and on the web, the researchers said.

In addition, “we have identified several payloads written by ChatGPT, shared openly on these sources, which function as a number of different malware types,” the Recorded Future researchers said in the report.

Recorded Future’s Intelligence Cloud combines continuous data collection and comprehensive graph analysis with analysis from the company’s research team. The platform aims to deliver crucial intelligence about malicious adversaries, including their infrastructure and their target victims, enabling customers to proactively disrupt attacker activity.

Because code written by ChatGPT is similar to publicly accessible code, the Recorded Future researchers said they believe that “most antivirus providers” would likely be successful at identifying the malware. “However, ChatGPT is lowering the barrier to entry for malware development by providing real-time examples, tutorials and resources for threat actors that might not know where to start,” the researchers said.

On the one hand, it’s clear that ChatGPT is “amazing, and is going to change our world,” said Russell Reeder, CEO of Netrix Global, No. 190 on the 2022 CRN Solution Provider 500. But the potential use of the tool for cyberattacks also shows that “technology cannot be unpoliced,” he said. “There needs to be a controlling force.”

Still, Reeder said he does believe that OpenAI is “managing it the best they can—a layman can’t come in and say, ‘Create [malware] for me.’”

OpenAI, which is also behind the DALL-E 2 image generator, and whose backers include Microsoft, first introduced ChatGPT in late November. The chatbot has gained widespread popularity thanks to its ability to effectively mimic human writing and conversation while responding to prompts or questions from users.



Kyle Alspach is a Senior Editor at CRN focused on cybersecurity. His coverage spans news, analysis and deep dives on the cybersecurity industry, with a focus on fast-growing segments such as cloud security, application security and identity security. He can be reached at [email protected].