We are less than a year away from a cyber attack credited to ChatGPT

Jonathan Jackson, director of sales engineering APJ at BlackBerry Cybersecurity, writes about why cyber attacks linked to artificial intelligence are inevitable.

ChatGPT has answers for almost everything, but there is one answer we may not know for a while: will its unintended consequences for cyber security turn this tool into a genie that its creators regret taking out of the bottle?

BlackBerry surveyed 1,500 IT decision makers across North America, the UK and Australia, and half (51 percent) predicted we are less than a year away from a cyber attack credited to ChatGPT. Three-quarters of respondents believe foreign states are already using ChatGPT for malicious purposes against other nations.

The survey also exposed a tension in perceptions: while respondents see ChatGPT as generally being used for 'good' purposes, 73 percent acknowledge its potential threat to cyber security and are either 'very' or 'fairly' concerned, proving artificial intelligence (AI) is a double-edged sword.

The emergence of chatbots and AI-powered tools presents new challenges in cyber security, especially when such tools end up in the wrong hands. There are plenty of benefits to using this kind of advanced technology and we are just scratching the surface of its potential, but we also cannot ignore the ramifications. As the platform matures and hackers become more experienced, it will become more difficult to defend without also using AI to level the playing field.

AI-armed cyber attacks

It is no surprise that people with malicious intent are testing the waters, but over the course of this year I expect we will see hackers get a much better handle on how to use ChatGPT for nefarious purposes.

AI is fast-tracking practical knowledge mining, but the same is true for malware coders. The ever-evolving cyber security industry is often likened to a never-ending game of whack-a-mole, with bad actors re-emerging as quickly as they are mitigated. In the past, those actors relied on their own experience, forums and security researchers' blog posts to understand different malicious techniques and convert them into code. Programs like ChatGPT have given them another arrow in their quiver, and they are now testing its efficacy for wreaking digital havoc.

AI can be used in several ways to carry out cyber attacks, for example automated vulnerability scanning and the rapid trialling of new attack techniques. Through AI, advanced persistent threats (APTs) can carry out highly targeted attacks to steal sensitive data or disrupt operations. APTs typically involve a sustained attack on a single organization and are often launched by nation-states or highly sophisticated threat actors.

AI can also be used to create convincing phishing emails, text messages and social media posts that trick people into providing sensitive information or installing malware, while AI-generated deepfake videos can impersonate officials or organizations in phishing attacks. AI can likewise help launch distributed denial of service (DDoS) attacks, which overwhelm an organization's systems with traffic to disrupt operations, or be used to gain control of critical infrastructure, causing real-world damage.

AI for an AI

The growing use of AI in developing threats makes it even more critical to stay one step ahead by also using AI to proactively fight threats.

Organizations need to continue to focus on improving prevention and detection, and this is a good opportunity to look at how to include more AI in different threat classification processes and cyber security strategies.

One of the key advantages of using AI in cyber security is its ability to analyze vast amounts of data in real time. The sheer volume of data generated by modern networks makes it impossible for humans to keep up. AI can process data much faster, making it more efficient at identifying threats.
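To make that idea concrete, the sketch below flags anomalous traffic volumes against a simple statistical baseline. It is a toy stand-in for the far richer machine-learned models commercial AI security tools apply to network telemetry, and the sample data and threshold are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.5):
    """Return (index, value) pairs that deviate sharply from the baseline.

    A z-score over the sample mean is a deliberately simple proxy for
    the learned models real AI tooling uses on network telemetry.
    """
    mu, sigma = mean(samples), stdev(samples)
    return [
        (i, value)
        for i, value in enumerate(samples)
        if sigma > 0 and abs(value - mu) / sigma > threshold
    ]

# Hypothetical requests-per-minute telemetry containing one sudden spike.
traffic = [120, 115, 130, 125, 118, 122, 127, 119, 5000, 124, 121]
print(flag_anomalies(traffic))  # only the spike at index 8 is flagged
```

The point is not the statistics but the shape of the workflow: a machine scores every data point continuously, and humans only look at what it surfaces.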

As cyber attacks become more severe and sophisticated and threat actors evolve their tactics, techniques, and procedures (TTP), traditional security measures become obsolete. AI can learn from previous attacks and adapt its defenses, making it more resilient against future threats.

AI can also be used to mitigate APTs, which are highly targeted and often difficult to detect, allowing organizations to identify threats before they cause significant damage. Using AI to automate repetitive security management tasks also allows cyber security professionals to focus on more strategic work, such as threat hunting and incident response.
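As a small illustration of the kind of repetitive task worth automating, the sketch below triages failed-login events from an auth log, surfacing only the source addresses that exceed an attempt threshold so an analyst reviews outliers rather than raw logs. The log format, sample lines and threshold are assumptions for the example.

```python
from collections import Counter
import re

# Hypothetical syslog-style failed-login lines for the example.
LOG_LINES = [
    "Jan 10 03:11:02 host sshd: Failed password for root from 203.0.113.7",
    "Jan 10 03:11:04 host sshd: Failed password for admin from 203.0.113.7",
    "Jan 10 03:11:07 host sshd: Failed password for root from 203.0.113.7",
    "Jan 10 08:22:41 host sshd: Failed password for alice from 198.51.100.2",
]

FAILED_LOGIN = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

def triage(lines, threshold=3):
    """Return source IPs with at least `threshold` failed logins."""
    counts = Counter(
        m.group(1) for line in lines if (m := FAILED_LOGIN.search(line))
    )
    return {ip: n for ip, n in counts.items() if n >= threshold}

print(triage(LOG_LINES))  # the repeated attempts from 203.0.113.7 stand out
```

In practice this kind of aggregation runs continuously inside a detection platform; the value is that no human has to read every line to spot the brute-force candidate.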

The future of cyber security

In security, AI matters more than ever now that cyber criminals are using it to up their game. BlackBerry's research reveals that the majority (82 percent) of IT decision makers plan to invest in AI-driven cyber security in the next two years, and almost half (48 percent) plan to invest before the end of 2023. This reflects growing concern that signature-based protection solutions are no longer effective against an increasingly sophisticated threat.

IT decision makers are optimistic that ChatGPT will enhance cyber security for business, but our survey also shows 85 percent of respondents believe governments have a moderate-to-high responsibility to regulate advanced technologies.

Both cyber security professionals and hackers will continue to investigate how best to use this technology, and only time will tell whose use of it is more effective. In the meantime, for those wishing to get ahead before it is too late, it is time to put AI at the top of your cyber technology wish list and learn to fight fire with fire.

Learn more from Jonathan Jackson on the growing threat of ChatGPT and why AI is crucial to prevent it by registering for BlackBerry Security’s webinar, How to use AI to prevent a ChatGPT attack
