Artificial Intelligence: The New Perpetrator of Fraud?

ACFE’s 2024 Report to the Nations reveals that organizations lose an average of 5% of their revenue to fraud each year. This time, however, a new actor has entered the picture: artificial intelligence. ACFE’s 2024 trend analysis shows that technology is no longer just a tool; in some cases it has become the perpetrator of fraud itself.
1) AI-powered scams are becoming a trend
ACFE’s analysis of the Top 5 Fraud Trends of 2024 reveals the rapid rise of AI-enabled fraud. Deepfake audio and video, fake emails and automated document generation can be used to manipulate company data and fool traditional audit controls.
Example:
In 2024, an AI-generated voice call impersonating the CFO of a European company triggered a fraudulent transfer of USD 15 million. The fraudster established trust in a call lasting just ten minutes.
2) Fraudsters are adopting AI before organizations do
The Anti-Fraud Technology Benchmarking Report (2024), a joint ACFE-SAS study, reveals that only 18% of organizations are actively using artificial intelligence, while the vast majority plan to integrate it in the coming years. In other words, fraudsters are adopting technology first, while control systems are lagging behind.
Example:
According to the same report, fraudsters can create fake supplier records in seconds with generative artificial intelligence, while many organizations still rely on manual analysis to identify suspicious transactions.
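To illustrate the gap between generative fraud and manual review: even a very simple automated check, such as flagging supplier records that share the same bank account, can be run continuously over a master file. The records, field names and the single-rule check below are hypothetical, a minimal sketch rather than a real anti-fraud system:

```python
# Sketch: flag supplier records that share a bank account (IBAN).
# Field names ("id", "name", "iban") and the sample data are assumptions
# for illustration only; real supplier master data differs per ERP system.
from collections import defaultdict

suppliers = [
    {"id": 1, "name": "Acme Supplies Ltd", "iban": "DE89370400440532013000"},
    {"id": 2, "name": "Acme Suppl1es Ltd", "iban": "DE89370400440532013000"},
    {"id": 3, "name": "Borealis Parts GmbH", "iban": "DE02120300000000202051"},
]

def flag_shared_bank_accounts(records):
    """Return IBANs that appear in more than one supplier record."""
    by_iban = defaultdict(list)
    for rec in records:
        by_iban[rec["iban"]].append(rec["id"])
    return {iban: ids for iban, ids in by_iban.items() if len(ids) > 1}

print(flag_shared_bank_accounts(suppliers))
# {'DE89370400440532013000': [1, 2]}
```

A check like this catches only one narrow pattern (duplicate payment details behind near-identical names), but it runs in milliseconds on every update, which is exactly the kind of speed advantage the report suggests fraudsters currently hold over manual reviewers.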
3) Deepfake fraud cases cause millions of dollars in losses
Cases reported in Fraud Magazine include a USD 25 million fraud at a Hong Kong-based company carried out via a deepfake video conference. The fraudsters obtained approval through an AI-generated video impersonating a company executive.
Example:
In the post-incident investigation, no fraudster could be identified, because the “perpetrator” was software: the face, voice and facial expressions had been generated entirely by an algorithm.
4) Opportunities for fraud increase as technology becomes mainstream
ACFE’s analysis of technology and fraud clearly shows that the proliferation of technology is creating new opportunities for fraudsters. This is particularly evident in voice cloning, automated signature and document production systems.
Example:
In a 2024 cybersecurity test, a fake executive voice generated from just three seconds of recorded audio obtained payment approval from the finance department. Artificial intelligence now backs up forgery with digital “proof”.
5) The actor of fraud is changing: code, not human hands
The concept of the “fraud enabler”, prominent in ACFE’s 2024 assessments, shows that it is increasingly systems, not people, that facilitate fraud. Automated document generation, email chains and algorithmic accounting manipulations are turning software into a direct perpetrator.
Example:
In 2024, an AI-based document automation system in a government organization was found to be generating fake invoices from manipulated parameters and passing them through the approval process. No individual perpetrator was identified, because the processing had been carried out by a software “module”.
Conclusion: Artificial Intelligence – The New Perpetrator of Fraud
ACFE 2024 findings are clear: the nature of fraud has changed. It’s no longer about an employee taking money from the till; it’s about gaming the system. Artificial intelligence is a new kind of threat, generating false documents, replicating identities and moving at speeds that exceed human control.
Organizations are still preparing for this transformation. While many see AI as a “prevention tool”, fraudsters have already turned it into a “fraud tool”. Today, AI may be a footnote in reports, but soon it will be the subject of investigations.