More Criminals, More Crimes, More Victims.


It’s not hype or exaggeration to suggest that we’re looking at the dawn of an entirely new generation of crime: AI-enabled, AI-assisted, and eventually even AI-operated. Cybercrime on autopilot.

A world where criminals might show up for work, press a button, then sit back while a collection of customized AI tools does all the work that until now has been tedious and expensive. And that’s meant literally.

In a recent demonstration by security firm Sophos, one of their research teams used AI tools to create hundreds of fully functioning e-commerce phishing sites, in just a few minutes, at the single press of a button.

Most of that work will be aimed at stealing your money. And it’s not just consumers: around a third of businesses say they’ve already fallen victim to deepfake voice fraud and fake videos.


We spent more than a year reviewing hundreds of discussions about AI crime in all its forms, including on hacker forums on the dark web, to figure out which crimes are real and which are fantasy, and which may still be years away versus those that are here now.

And while some of the claims about AI cybercrime range from premature and speculative to outright fantasy, AI-assisted scams are very real, and we’re already seeing real victims of crimes created or assisted by AI.


The criminal world is drowning in stolen data:

– In 2021 alone it’s estimated that more than 40 billion records were exposed or stolen in data breaches.

– According to Juniper Research, nearly 150 billion records were compromised in just the last 5 years.

– There are an estimated 24 billion stolen credentials (username and password combos) currently circulating on the dark web. That’s roughly three complete sets of credentials for every human on earth.

– In January 2024 a stash of more than 26 billion records was discovered on an unprotected server, an aggregation of data from multiple recent and older data breaches.

Until recently, criminals were limited by time and tools in what they could do with all this information. But AI is making it much easier for cyber criminals to sort through these billions of records and solve one of their biggest challenges – connecting the dots: analyzing those vast troves of stolen information to find the pieces that match, then combining them to commit crimes that are not just convincing but carried out at scale.
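To see why this “connecting the dots” is so powerful, here is a toy sketch (entirely made-up data and field names) of how records from two separate breach dumps, each only partially useful on its own, can be joined on a shared field like an email address to build a much more complete profile:

```python
# Toy illustration: two fictional breach dumps hold different fragments
# of data about the same people. Joining them on a shared key (email)
# yields a combined profile far more useful to a criminal than either
# dump alone.

breach_a = [  # e.g. a retailer breach: emails and passwords
    {"email": "pat@example.com", "password": "hunter2"},
    {"email": "sam@example.com", "password": "letmein"},
]
breach_b = [  # e.g. a marketing-data leak: emails, names, phone numbers
    {"email": "pat@example.com", "name": "Pat Doe", "phone": "555-0100"},
]

# Index one dump by the shared key, then merge every matching record.
by_email = {rec["email"]: rec for rec in breach_b}
profiles = [
    {**rec, **by_email[rec["email"]]}
    for rec in breach_a
    if rec["email"] in by_email
]
print(profiles)
```

The point of the sketch is the scale question: doing this by hand across billions of records was impractical, but it is exactly the kind of bulk matching that automated tools excel at.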


Speaking of stolen data, identity theft has been the top consumer crime for more than a decade and relies on a constant feed of personal information. The more information, and the more accurate that information, the better. And that’s where AI comes in.

Not only is AI making it easier to capitalize on the billions of personal records already stolen in data breaches, it’s making it much easier to launch more (and more convincing) identity thefts.


Synthetic identity theft, in which criminals use a mixture of real and concocted information to create entirely new identities, is nothing new. But like so much else in crime, AI is making it much easier to grow and scale.

For example, it could include a real Social Security number and address, combined with entirely made-up information like photos and utility bills. Using these hybrid identities, thieves are able to open up multiple bank accounts, credit card accounts, and lines of credit.

These identities could also be used to create entire personas. One security expert predicted that a synthetic identity could be used to apply for employment benefits, housing assistance, food stamps, and other benefits totaling more than $2 million per identity.


In the year following the launch of ChatGPT there was a reported 1,265% increase in phishing emails and a nearly 1,000% rise in credential phishing.

More of these phishing attacks are successfully tricking recipients because AI is making it easier to create, launch, and manage massive spam and phishing campaigns that are so well researched and convincing, they’re almost impossible to spot.

And accurately translating these phishing and business email compromise (BEC) emails into multiple languages is also a breeze for AI.


One of the best ways to verify whether a person is real is to look at their past. What does the Internet say about them? What evidence is there to prove, or at least suggest, that they really exist?

Before AI, it was almost impossible to create a believable fake Internet history. But thanks to AI, it’s much easier to create very detailed and believable online profiles and histories, from professional websites to complete social media profiles, LinkedIn pages, employment history, and even certifications.


In just the last few months we’ve seen a huge increase in reports of AI successfully creating very realistic fake versions of real humans: photos, voices, and videos so realistic and lifelike that it’s almost impossible for even friends and family members to distinguish them from the real person.

And with the growth in use of video for everything from training to marketing and PR to social media, snippets of our voices and faces exist everywhere. Scammers are now able to use just a few seconds of these snippets to create a complete clone of a voice.

And the scams are working. In February 2024 a multinational company revealed it had lost $25 million to a scam based on a video conference in which the other participants were completely fabricated by deepfake technology.

In one recent security demonstration, a security expert was able to trick an employee of the 60 Minutes program into sharing the passport number of one of their correspondents, simply by using a clone of that correspondent’s voice. The attack took less than 5 minutes to construct.


Not all targets are created equal, and whether the mark is a CEO or other executive, a wealthy consumer, or their advisers, AI will be much better at identifying and sorting the best targets. That includes doing the in-depth background research and setting up a social engineering or phishing attack that will be very hard to detect or defend against.


For most of us, the humble password is often the first and only line of defense guarding the things we value most.

And AI is setting its sights on them. In some recent demonstrations, AI-driven password crackers were programmed to break a collection of millions of passwords stolen in recent data breaches.

According to reports, 81% of the passwords were cracked in less than a month, 71% in less than a day, and 65% in less than an hour. Any seven-character password could be cracked in six minutes or less.

AI can also sort through hundreds of millions of exposed and compromised passwords and usernames, and quickly find which other sites are using the same password/username combos.


For criminals, finding and exploiting the millions of security holes that exist every day can be a costly, time-consuming, and repetitive task. A task that AI is ideally suited for.

AI can also scan billions of lines of code almost instantly to discover flaws, weaknesses, or mistakes. It’s also very good at writing exploits to take advantage of the vulnerabilities it discovers.


ChatGPT, perhaps the most popular of all AI tools, has been used to not only create malicious code but also code that’s capable of changing quickly and automatically to evade antivirus software.

In early 2023, security researchers launched a proof-of-concept malware called BlackMamba that used AI to both eliminate the need for the command-and-control infrastructure typically used by cyber criminals, and to generate new malware on the fly in order to evade detection.

And AI is also helping to make malware smarter and more capable, able to do more damage, infiltrate more deeply into a network, morph and hide, and find and steal the most valuable data.


Creating and spreading misinformation and disinformation is something that AI seems born to excel at. Using the same techniques and tactics as phishing campaigns, AI can be deployed to create and optimize the distribution of all kinds of misinformation, disinformation, fake news and images, and conspiracy theories.

And it will also present a frightening threat to elections and democracies. One leading AI expert admitted to being completely terrified of the 2024 election and an expected misinformation tsunami, while another expert suggested that AI will turn elections into a train wreck.

Misleading AI-generated content will be at the forefront of these unsettling attacks.


In 2023, the FBI and the Department of Justice warned that the fastest-growing crime against children was sextortion – using fake but highly realistic social media profiles to trick teens and kids into sharing sensitive or sexually explicit photos and videos, and then extorting them for money with the threat of sharing that content with family or publicly.

In 2021 the National Center for Missing and Exploited Children received 139 reports of sextortion. Two years later, that number had jumped to 26,000. AI is expected to take that kind of crime even further by generating deepfake pornographic photos and videos that appear to include the face or likeness of the victim.

One West African gang is believed to be responsible for nearly half of all global sextortion targeting minors, even advertising “how to” guides in chat rooms and on social media sites.


In late 2023, global security firm Sophos showed how they were able to create a complete fraudulent website using only AI. The site included hyper-realistic images, audio, and product descriptions, a fake Facebook login, and a checkout page able to steal user login credentials and credit card details. They were also able to create hundreds of similar websites in a matter of minutes, with a single button press.


AI will make it much easier for unsophisticated and entry-level criminals or wannabes to scale up more advanced and complex attacks with fewer resources and costs. According to security firm Trend Micro: “One thing we can derive from this [AI] is that the bar to becoming a cybercriminal has been tremendously lowered. Anyone with a broken moral compass can start creating malware without coding know-how.”

Juniper Research estimates that global losses from e-commerce fraud from 2023 to 2027 will surpass $343 billion. Those losses will likely be shared by businesses and consumers.


Another AI advantage that will make crime easier and life harder is deepfake forgeries. AI is very capable of forging and counterfeiting the most complicated documents, including birth certificates, driver’s licenses, and even passports.

It’s also capable of forging all the stuff that’s supposed to make counterfeiting much more difficult – things like watermarks, holograms, microprinting, special fonts and logos, and of course, a user’s photo and even signature.

AI can also forge utility bills, which will make identity theft and other frauds much easier to commit. And it can easily forge and create paper trails of invoices that can be used to trick companies into inadvertently paying scammers.


Where AI is already shining in the criminal underworld is in helping criminals improve their hacking tools. Monitoring of hacker chat rooms on the dark web has shown a keen interest by criminals in using currently available AI tools like ChatGPT to make existing malware better, write better code, write quality code faster, and even create entirely new tools.


AI learns and grows from nothing but data, and has an insatiable appetite for more. That will include your personal information.

With so much of this information, chances are AI will know far more about you than you’re comfortable with: your behavior, habits, choices, preferences, political and social opinions, locations and connections, and so on. It may also, perhaps mistakenly, make inferences about you based on inaccurate or incomplete data.


AI doesn’t just have the potential to change crime; it could change the world. Not just increasing the odds that everyone will fall victim to some kind of cybercrime or fraud sooner rather than later, but changing the way we think, interact, communicate, socialize, bank, vote, trust, and live.


As consumers, we’re not helpless against these crimes. If there is a silver lining, it’s that the threat of AI crime might finally persuade more people to take cybercrime and scams a little more seriously. History has shown that most consumers still do very little to protect themselves from these crimes, often in the mistaken belief that it’s simply never going to happen to them.

AI makes it more likely that these crimes will eventually happen to all of us, and might be more devastating too. The best defense remains the same – things like greater vigilance and awareness, and a handful of good habits and behaviors.


ABOUT THE AUTHOR

Neal O’Farrell is widely regarded as one of the longest-serving cyber security experts on the planet, 40 years and counting.

He served on the Federal Communications Commission’s Cybersecurity Roundtable, advised half a dozen governments, and led the fight against identity theft for more than two decades.
