Organised Crime

New Report Finds Cross-Border Criminals Leverage AI for Malicious Use

Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro have released a jointly developed report on current and predicted criminal uses of artificial intelligence (AI). The report provides law enforcers, policymakers and other organisations with information on existing and potential attacks leveraging AI, along with recommendations on how to mitigate these risks.

“AI promises the world greater efficiency, automation and autonomy. At a time when the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology,” said Edvardas Šileris, Head of Europol’s European Cybercrime Centre. “This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems.”

The report concludes that cybercriminals will leverage AI both as an attack vector and an attack surface. Deep fakes are currently the best-known use of AI as an attack vector. However, the report warns that new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.

For example, AI could be used to support:

  • convincing social engineering attacks at scale;
  • document-scraping malware to make attacks more efficient;
  • evasion of image recognition and voice biometrics;
  • ransomware attacks, through intelligent targeting and evasion;
  • data pollution, by identifying blind spots in detection rules.

“As AI applications start to make a major real-world impact, it’s becoming clear that this will be a fundamental technology for our future,” said Irakli Beridze, Head of the Centre for AI and Robotics at UNICRI. “However, just as the benefits to society of AI are very real, so is the threat of malicious use. We’re honoured to stand with Europol and Trend Micro to shine a light on the dark side of AI and stimulate further discussion on this important topic.”

The paper also warns that AI systems are being developed to enhance the effectiveness of malware and to disrupt anti-malware and facial recognition systems.

“Cybercriminals have always been early adopters of the latest technology and AI is no different. As this report reveals, it is already being used for password guessing, CAPTCHA-breaking and voice cloning, and there are many more malicious innovations in the works,” said Martin Roesler, head of forward-looking threat research at Trend Micro. “We’re proud to be teaming up with Europol and UNICRI to raise awareness about these threats, and in so doing help to create a safer digital future for us all.”

The three organisations make several recommendations to conclude the report:

  • harness the potential of AI technology as a crime-fighting tool to future-proof the cybersecurity industry and policing;
  • continue research to stimulate the development of defensive technology;
  • promote and develop secure AI design frameworks;
  • de-escalate politically loaded rhetoric on the use of AI for cybersecurity purposes;
  • leverage public-private partnerships and establish multidisciplinary expert groups.

Read the full report: Malicious Uses and Abuses of Artificial Intelligence at https://www.europol.europa.eu/publications-documents/malicious-uses-and-abuses-of-artificial-intelligence