
95% of Brits don’t understand AI data privacy risks

by Simon Jones, Tech Reporter
27th January 2026, 4:53 pm

As artificial intelligence rapidly becomes part of everyday working life, an annual report from the cybersecurity firm NordVPN suggests UK organisations may be facing growing privacy risks driven by low awareness.

To coincide with Data Privacy Day (28 January), NordVPN has published new findings from its National Privacy Test (NPT) covering the whole of 2025, showing that 95% of people in the UK do not understand what privacy issues to consider when using AI for work.

With employees increasingly using generative AI tools to draft documents, summarise information, and support decision-making, experts warn that sensitive business and personal data may be exposed, leaked, or exploited by malicious actors.

The risks of AI extend beyond internal data handling.

The report also reveals that just over a quarter of people in the UK (27%) cannot correctly identify common scams enabled by AI technology, such as deepfake videos and voice cloning. As these tools become more accessible, cybercriminals are deploying them to enhance phishing attacks and social engineering campaigns targeting employees at all levels.

More broadly, the results point to persistent weaknesses in fundamental cybersecurity awareness.

Only a fifth (21%) of UK respondents know where passwords should be stored securely, underlining how gaps in basic digital hygiene may increase the risks introduced by AI.

Industry experts warn that without clearer governance, employee training, and controls around AI usage, organisations may struggle to manage the expanding attack surface created by new technologies.

Marijus Briedis, Chief Technology Officer at NordVPN, said, “The pace at which AI tools are being adopted in the workplace far outstrips people’s understanding of the privacy and security implications.

“Employees are increasingly relying on generative AI to speed up everyday tasks, but many don’t fully consider what happens to the data they input into these systems.

“Unlike traditional workplace software, AI tools can retain, analyse, and potentially reuse information in ways that aren’t always clear to the user. When staff share sensitive business data, client information, or internal discussions with AI systems, organisations may lose visibility and control over where that data is stored and how it is used.

“This presents a significant challenge for businesses, particularly in highly regulated sectors. When some of the biggest companies in Britain, such as M&S and The Co-Op, are experiencing high-profile cyberattacks, you can never be too secure.

“Without clear policies and employee education, organisations risk exposing themselves to compliance issues, data leaks, and long-term reputational damage as AI becomes more deeply embedded in business processes.

“At the same time, the same technology is being used by cybercriminals to make scams more convincing and harder to detect. AI-enabled phishing, voice cloning, and deepfake content are increasing the likelihood that employees will be targeted and manipulated, especially when awareness of these threats remains low.

“To manage these risks, organisations need to treat AI as a core security and privacy issue, not just a productivity tool. That means setting clear rules around AI use, investing in staff training, and ensuring employees understand both the benefits and the risks before these tools become business as usual.”
