OPINION | AI frauds: The big danger in small, invisible criminals

In the last two years, artificial intelligence has become a central part of how many businesses operate. At the same time, criminals are using it to commit more sophisticated types of fraud. Most of what we hear about in the news focuses on fake voices and deepfake videos used to trick people, but that is just the tip of the iceberg. Behind the scenes, more dangerous and less visible methods are at work: AI systems that help criminals evade detection, slip fake data into systems, and create convincing forged records that pass as genuine. These schemes are hard to catch because they look just like normal business activity.
For example, fraudsters are now using AI bots that act like real users in online systems, especially on cryptocurrency and digital finance platforms. These bots learn from each action they take, quietly probing how websites and systems work and looking for weaknesses. They might siphon off small amounts of money over time, carefully timed to avoid raising any red flags. Because they behave like real people, current fraud detection tools notice nothing unusual.
Another growing issue is fake legal documents created by AI. These include false company paperwork, forged digital signatures that look like real ones, and even entire fake companies. Criminals train their AI on fragments of real, stolen legal data, then use it to produce fake documents that can pass automated checks. Because many businesses now rely on automated systems and rarely have humans review documents unless something looks suspicious, these fakes often go unnoticed. In these cases, AI isn't just helping forge documents; it is creating fabricated realities that appear consistent across multiple digital systems, which makes it even harder to tell what is real and what isn't.
There is also a new kind of fraud in which fake people are inserted into large data sets, such as health or shopping records. These fake entries are carefully crafted to look real. For instance, a criminal might create a fake patient in a healthcare database and then use that patient to claim fraudulent insurance payouts. Such schemes don't always involve stealing money directly; they can cause bigger problems by corrupting the data that systems rely on, leading to wrong decisions in healthcare, insurance, or lending.
These schemes aren't being carried out by amateurs. Well-organised groups with funding and access to the right tools are building their own small AI systems focused on specific types of fraud. In one real case, a group in Southeast Asia trained an AI on refund-request data from online shopping sites in their region. Their system could generate refund requests that looked exactly like real customer complaints, even matching the usual writing style and timing. It worked so well that 90 per cent of their requests were approved before anyone noticed and tightened the rules.
Unfortunately, the tools companies use to catch fraud are not keeping up. Most rely on fixed rules or try to spot statistical anomalies, but such systems cannot cope with fraud that adapts quickly or hides within normal-looking behaviour. What is needed now are tools that can track where data comes from, compare AI outputs against real-world facts, and audit systems regularly. That will require both better technology and new ways of managing risk in finance and on digital platforms.
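The gap can be illustrated with a small, purely hypothetical sketch in Python (the account names, thresholds, and transaction data below are invented for illustration, not drawn from any real detection product): a rule that judges each transfer in isolation misses a "low and slow" pattern of small, well-timed transfers, while a simple rolling-window total over the same data surfaces it.

```python
# Illustrative sketch only: a naive per-transaction rule misses "low and slow"
# fraud, while an aggregate check over a rolling window can surface it.
# All names, thresholds, and data here are hypothetical.

from collections import defaultdict
from datetime import datetime, timedelta

PER_TXN_LIMIT = 500.0          # naive rule: flag only single large transfers
WINDOW = timedelta(days=30)    # behavioural check: look at totals over time
WINDOW_LIMIT = 2000.0

def flag_per_transaction(transactions):
    """Rule-based check: each transfer is judged in isolation."""
    return [t for t in transactions if t["amount"] > PER_TXN_LIMIT]

def flag_rolling_totals(transactions):
    """Behavioural check: many small transfers to one account add up."""
    flagged = set()
    by_dest = defaultdict(list)
    for t in sorted(transactions, key=lambda t: t["time"]):
        by_dest[t["dest"]].append(t)
        window = [x for x in by_dest[t["dest"]] if t["time"] - x["time"] <= WINDOW]
        if sum(x["amount"] for x in window) > WINDOW_LIMIT:
            flagged.add(t["dest"])
    return flagged

# Twenty transfers of 150 each: none trips the per-transaction rule,
# but the rolling total to the same destination does.
txns = [
    {"dest": "acct-x", "amount": 150.0,
     "time": datetime(2025, 1, 1) + timedelta(days=i)}
    for i in range(20)
]
print(flag_per_transaction(txns))   # [] -- nothing looks unusual in isolation
print(flag_rolling_totals(txns))    # {'acct-x'} -- the pattern emerges over time
```

Real detection systems are far more elaborate, but the principle is the same: catching adaptive fraud means looking at behaviour over time and across records, not at single events.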
"What we are witnessing is not the automation of fraud, but its evolution into a cognitive system, an adversary that learns, reflects, and self-corrects faster than our defences adapt."
Governments and companies also need to plan for the long-term risks. AI-generated fake identities are now being sold on illicit online markets. These are not just fake ID cards; they are fully developed digital personas that can even pass video and voice identity checks. Their strength is not hyper-realism; it is being believable enough to pass as a quiet, average user who never stands out.
Another growing risk comes from inside companies. Employees with access to important systems are starting to use AI to cover their tracks, alter records, and test how far they can push the rules before actually committing fraud. These insider threats don't get much attention yet, partly because companies are reluctant to admit their own systems might be vulnerable from within.
In short, today’s AI-driven fraud isn’t about big, dramatic hacks. It’s about quiet, sneaky actions that hide inside normal business processes. Many current security systems are built to catch obvious red flags or large spikes in activity. But they miss the fraud that looks routine.
"Fraud is no longer an act; it is a process model, running parallel to compliance and often indistinguishable from it until you ask the system why it exists at all."
“AI-driven fraud is not just an evolution; it is a transformation of the threat landscape, requiring equally transformative countermeasures,” I often remark. That captures the core issue: as AI becomes easier for everyone to use, it also becomes easier to misuse in sophisticated, dangerous ways.
"The arms race in AI fraud versus AI security is accelerating, demanding relentless innovation and vigilance from all cybersecurity stakeholders," is a call to action for the industry. Moving forward, we must focus on creating ethical AI systems and stronger security rules to stay ahead of these growing threats.
Sivasubramani is a senior member of the US-based Institute of Electrical & Electronics Engineers, the world’s largest technical professional organisation.
Opinions expressed in this article are those of the author and do not purport to reflect the opinions or views of THE WEEK.