Artificial intelligence will make it difficult to spot whether emails are genuine or sent by scammers and malicious actors, including messages that ask computer users to reset their passwords, the UK’s cybersecurity agency has warned, Report informs, citing The Guardian.
The National Cyber Security Centre (NCSC) said people would struggle to identify phishing messages – where users are tricked into handing over passwords or personal details – due to the sophistication of AI tools.
It said generative AI and large language models – the technology that underpins chatbots – will complicate efforts to identify different types of attack such as spoof messages and social engineering, the term for manipulating people into handing over confidential material.
“To 2025, generative AI and large language models will make it difficult for everyone, regardless of their level of cybersecurity understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts,” the agency said.
Ransomware attacks, which have hit institutions such as the British Library and Royal Mail over the past year, were also expected to increase, the NCSC said.
It warned that the sophistication of AI “lowers the barrier” for amateur cybercriminals and hackers to access systems and gather information on targets, enabling them to paralyse a victim’s computer systems, extract sensitive data and demand a cryptocurrency ransom.
However, it said generative AI – which has already emerged as a competent coding tool – would not enhance the effectiveness of ransomware code, but would help attackers sift through data and identify targets.