Cybercriminals are using ChatGPT to create malware

Malicious actors have been using artificial intelligence (AI)-powered chatbots such as OpenAI’s ChatGPT to build malware, dark web sites and other tools for carrying out cyber attacks, research by threat intelligence company Check Point Research has found. 

When asked by Cyber Security Hub, cyber security experts predicted that a top threat to cyber security in 2023 would be crime-as-a-service: platforms where malicious actors offer their services to those who would otherwise be unable to carry out cyber attacks. As ChatGPT can expedite the process of creating malware for free, it could make crime-as-a-service even more lucrative for cyber criminals.

Screenshot provided by Check Point Research

While OpenAI has placed restrictions on ChatGPT’s use, including barring its use to create malware, posts on a dark web hacking forum have revealed that these restrictions can still be circumvented. One user alludes to this, saying that “there’s still work around”, while another said “the key to getting it to create what you want is by specifying what the program should do and what steps should be taken, consider it like writing pseudo-code for your comp[uter] sci[ence] class.”  

Using this method, the user said they had been able to create a “python file stealer that searches for common file types” that can self-delete after the files are uploaded or if any errors occur while the program is running, “therefore removing any evidence”. 

Screenshot provided by Check Point Research

Another user described being able to use ChatGPT to create a dark web marketplace script. Dark web marketplaces can be used in a number of different ways, including selling personal information obtained in data breaches, selling illegally obtained payment card information or selling cyber crime-as-a-service products.

Screenshot provided by Check Point Research

Many more users have posted to the forum, touting ChatGPT as a way to “make money”, with claims that it can make users more than US$1,000 per day. According to Forbes, these methods include using ChatGPT to pose as young women in order to carry out social engineering attacks on vulnerable targets.

Adam Levin, cyber security expert and host of the cybercrime podcast What the Hack with Adam Levin, explains that malicious actors creating “increasingly sophisticated software” and selling it as a service is so dangerous because it “allows anyone, regardless how tech savvy, to conduct phishing, ransomware, distributed denial of service and other cyber attacks”. 

Levin predicts that throughout 2023, “criminal software enterprises will continue to threaten enterprises of any size”. Furthermore, he says the cyber-crime syndicates behind current as-a-service platforms are set to grow over the next 12 months, as “they can make more money enabling entry-level cyber criminals to commit crimes than they can directly targeting victims and with less risk”.  

However, Levin notes that these types of attacks can be mitigated through the use of multifactor authentication, the implementation of zero-trust architecture, and regular cyber security training and penetration testing. 
