Digital Crime and AI: Is the UK Ready to Respond?
The Alan Turing Institute has warned that UK law enforcement agencies are currently ill-equipped to tackle AI-enabled crimes. The institute's Centre for Emerging Technologies and Security (CETaS) has published a report recommending that a dedicated taskforce be established within five years to address these crimes. The National Crime Agency (NCA) has recognised the growing threat and is considering the recommendations carefully.
Key points:
- UK law enforcement agencies must close the gap in AI adoption to counter technologically advanced crimes.
- CETaS recommends creating a specialist taskforce within five years to tackle AI-related crimes.
- The NCA has highlighted the growing use of AI in serious crimes such as child abuse, cybercrime and fraud.
- Adopting AI is vital if law enforcement is to counter emerging threats effectively.
The concerns centre on the ability of UK law enforcement to respond to emerging AI-facilitated crimes. The CETaS report found that the UK is not adequately prepared for such threats and recommended that the NCA establish a dedicated AI crime taskforce within the next five years. The NCA acknowledged the importance of these recommendations and said it would review them carefully.
The CETaS report noted that while AI-enabled crime is still in its infancy, criminals are rapidly upgrading their capabilities. Two academics interviewed as part of the research voiced alarm: one described a "huge gap between the technical capacity of UK law enforcement and the nature of the problem", while another questioned the police's ability to understand and manage AI threats.
The report also highlights that AI is increasingly being used to commit serious crimes, including child sexual abuse, cybercrime and fraud. For example, the use of deepfakes has allowed scammers to impersonate company executives, leading to fraudulent transfers of large sums of money. In one recent case, an employee was tricked into transferring HK$200 million following a video call in which scammers impersonated the company’s chief financial officer using deepfake technology.
To address these challenges, CETaS suggests that UK law enforcement agencies need to adopt, acquire and integrate AI as part of their routine crime-fighting efforts. In essence, they need to fight AI with AI. Ardi Janjeva, senior research associate at the Alan Turing Institute, said: “As AI tools continue to advance, criminals and fraudsters will exploit them, challenging law enforcement and making it even harder for potential victims to distinguish between real and fake.”
The NCA has acknowledged the threat of AI-enabled crime and said it is working to counter it. Alex Murray, the NCA's director of threat leadership and the national policing lead for AI, is exploring how AI can empower investigators and increase efficiency. Murray said criminal use of AI is expanding rapidly and that police must "move quickly" to address the threat.
The Alan Turing Institute has also highlighted the need to protect the UK’s AI research ecosystem from hostile threats, such as intellectual property theft and deceptive collaboration. The report highlights that AI is a dual-use technology, making UK research a priority target for hostile state actors seeking technological advantage.
Both the Alan Turing Institute and the NCA recognise the growing threat posed by AI-enabled crime, and both stress the need for proactive steps to strengthen law enforcement capabilities against these emerging challenges.