The rise of generative AI tools such as ChatGPT is helping cybercriminals create more convincing, sophisticated scams, cybersecurity experts have warned.
As ChatGPT marks the first anniversary of its launch to the public, a number of industry experts have said the technology is being leveraged by bad actors online.
They warn that generative AI tools for text and image creation are making it easier for criminals to create convincing scams, but also note that AI is being used to strengthen cyber defences by identifying evolving threats as they appear.
At the UK’s AI Safety Summit earlier this month, the threat of more sophisticated cyber attacks powered by AI was highlighted as a key risk going forward, with world leaders agreeing to work together on the issue.
The UK’s National Cyber Security Centre (NCSC) has also highlighted the use of AI to create and spread disinformation as a key threat in years to come, especially around elections.
James McQuiggan, security awareness advocate at cyber security firm KnowBe4, said the impact of generative AI, and the large language models (LLMs) which power it, was already being felt.
“ChatGPT has revolutionised the threat landscape, open source investigations, and cybersecurity in general,” he told the PA news agency.
“Cybercriminals leverage LLMs to generate well-written documents with proper grammar and no spelling mistakes to level up their attacks and circumvent one of the biggest red flags taught in security awareness programmes – the notion that poor grammar and spelling mistakes are indicative of social engineering email or phishing attacks.
“Unsurprisingly, there has been an increase in the sophistication and volume of phishing attacks in various styles, creating challenges for businesses and consumers alike.
“With generative AI also lowering the technical barrier to creating convincing profile pictures, impeccable text and even malware, AI and LLMs like ChatGPT are increasingly being used to create more convincing phishing messages at scale.”
The next generation of generative AI models is expected to start appearing in 2024, with experts predicting they will be significantly more capable than current-generation models.
Looking ahead to potential future uses of generative AI by bad actors, Borja Rodriguez, manager of threat intelligence operations at cyber security firm Outpost24, said hackers could develop AI tools to write malicious code for them.
“Currently, tools like Copilot from GitHub help developers generate code automatically,” he said.
“Not far from that, someone could create a similar tool specifically to assist in creating malicious code, scripts, backdoors and more, aiding script kiddies (novice hackers) with low levels of technical knowledge to achieve things they weren’t capable of in the past.
“These tools will assist underground communities in executing complex attacks without much expertise, lowering the skill requirements for those executing them.”
The rate of advancement of generative AI, and the largely unknown potential of the technology in the years to come, has created uncertainty around it, the experts say.
Many governments and world leaders have begun discussions on how to regulate AI, but without knowing more about the technology's capabilities, piecing together successful regulation is likely to prove difficult.
Etay Maor, senior director of security strategy at Cato Networks, said the issue of trust remained key in regard to LLMs, which are trained on large amounts of text data, and how they are programmed.
“As the excitement surrounding LLMs settles into a more balanced perspective, it becomes imperative to acknowledge both their strengths and limitations,” he said.
“Users must verify critical information from reliable sources, recognising that, despite their prowess, LLMs are not immune to errors.
“LLMs such as ChatGPT and Bard have already reshaped the landscape.
“However, a lingering uncertainty persists as the industry grapples with understanding where these tools source their information and whether they can be fully trusted.”