Is The AI Crimewave Already Here?

It’s not hype or exaggeration to suggest that we’re looking at the dawn of an entirely new generation of crime: AI-enabled, AI-assisted, and eventually even AI-operated cybercrime on autopilot.

A world where criminals metaphorically or even literally show up for work, press a button, then sit back while a crew of AI minions does all the work that until now has been tedious and expensive.

And that work will be mainly about trying to steal your money.

And just to be clear: while some claims about AI cybercrime range from premature and speculative to outright fantasy, AI-assisted scams are very real, and we’re already seeing victims of crimes created or assisted by AI. It already walks among us.

MAKING THE TREASURE TROVE OF STOLEN DATA USABLE

The criminal underworld is drowning in stolen data, and the rest of us are losing count of how many sensitive records have already made it into criminal hands.

In 2021 alone, an estimated 40 billion or more records were exposed or stolen in data breaches, and Juniper Research estimates that nearly 150 billion records were compromised in just the last five years.

Until recently, criminals were limited by time and tools in what they could do with all this information. But AI is making it much easier for cybercriminals to sort through these billions of records and solve one of their biggest challenges: analyzing vast troves of stolen information to find the pieces that match, then putting them together so they can be used not just to commit convincing crimes, but to commit them at scale.
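
To see why that correlation problem is suddenly cheap to solve, consider how little code an exact-match join takes. The sketch below uses made-up data and hypothetical column names; the exact join it shows is the easy case, and where AI really earns its keep is the fuzzy matching (nicknames, typos, aliases, partial fields) that used to consume analyst hours.

```python
# A minimal illustration (made-up data, hypothetical column names) of
# why correlating breach dumps is now trivial: one join on a shared
# key, such as an email address, stitches records from separate leaks
# into a single, richer profile.
import pandas as pd

breach_a = pd.DataFrame({
    "email": ["alice@example.com", "bob@example.com"],
    "password_hash": ["5f4dcc3b...", "e99a18c4..."],
})
breach_b = pd.DataFrame({
    "email": ["alice@example.com", "carol@example.com"],
    "phone": ["+1-555-0100", "+1-555-0199"],
    "dob": ["1988-04-02", "1990-11-23"],
})

# Inner join: one row per identity that appears in both dumps.
merged = breach_a.merge(breach_b, on="email", how="inner")
print(merged)
```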

A NEW GENERATION OF PHISHING ATTACKS

According to some recent reports, there was a 1,200% increase in phishing emails from 2022 to 2023, and more of these phishing attacks are successfully tricking recipients.

That’s because AI is making it easier to create, launch, and manage massive spam and phishing campaigns that are so well-researched and convincing, they’re almost impossible to spot.

And of course, accurately translating phishing and BEC (business email compromise) emails into multiple languages is a breeze for AI.

FILLING IN THE BLANKS

One of the best ways to verify whether a person is real is to look at their past: what the Internet says about them, and what evidence exists to prove, or at least suggest, that they really exist.

Before AI, it was almost impossible to create a fake Internet history. But thanks to AI, it’s much easier to create very detailed and believable online profiles and histories, from professional websites to complete social media profiles and LinkedIn pages.

Deepfakes are not just fake photos, voices, and videos, but entire fake life and work histories that can fool even the most careful among us.

A REAL WORLD OF DEEPFAKES

In just the last few months we’ve seen a huge increase in reports of AI successfully producing very realistic fake versions of real humans: photos, voices, and even videos so lifelike that it’s almost impossible for even friends and family to distinguish them from the real person.

And with the growth of video for everything from training to marketing, PR, and social media, snippets of our voices are everywhere. Scammers can now use just a few seconds of audio to create not just short deepfake sentences, but entire live conversations.

And the scams are working. According to Finextra, deepfake scams have cost individual victims anywhere from $243,000 to $35 million.

In one recent security demonstration, a researcher was able to trick an employee of the 60 Minutes program into sharing the passport number of one of its correspondents, simply by using a clone of the correspondent’s voice. The attack took less than five minutes to put together.

BEATING PASSWORDS IS GETTING MUCH EASIER

For many of us, the humble password is often the first and only line of defense guarding the things we value most.

And AI is setting its sights on it. In some recent demonstrations, AI-driven password crackers were turned loose on a collection of millions of passwords stolen in recent data breaches.

According to reports, 81% of the passwords were cracked in less than a month, 71% in less than a day, and 65% in less than an hour. Any seven-character password could be cracked in six minutes or less.
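
Some rough arithmetic shows why seven characters fall so fast. The sketch below assumes a 95-character printable alphabet and a rate of one trillion guesses per second, a plausible figure for a modern GPU rig against a fast hash; neither number comes from the demonstrations above.

```python
# Back-of-the-envelope: how long exhaustive search takes by length.
# ALPHABET and GUESSES_PER_SEC are assumptions for illustration, not
# figures from the password-cracking reports cited in the article.
ALPHABET = 95            # printable ASCII characters
GUESSES_PER_SEC = 1e12   # plausible for a GPU rig vs. a fast hash

for length in range(6, 10):
    keyspace = ALPHABET ** length
    seconds = keyspace / GUESSES_PER_SEC
    print(f"{length} chars: {keyspace:.2e} combinations, "
          f"~{seconds:,.0f} seconds to exhaust")
```

At that rate the full seven-character keyspace falls in roughly 70 seconds, comfortably inside the six-minute figure above, and every extra character multiplies the work by 95.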

AI can also sort through hundreds of millions of exposed and compromised usernames and passwords, and quickly find where the same username/password combos are being reused on other sites.
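
There’s a defensive flip side: anyone can check whether a password already circulates in breach dumps. Below is a minimal sketch using the public Have I Been Pwned range API, which works on a k-anonymity model, so only the first five characters of the password’s SHA-1 hash ever leave your machine.

```python
import hashlib
import requests  # third-party: pip install requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches,
    via the Have I Been Pwned range API (k-anonymity: only the first
    5 hex chars of the SHA-1 hash are sent over the network)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(
        f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10
    )
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = pwned_count("password123")
    print(f"Seen {hits:,} times in known breaches" if hits
          else "Not found in known breaches")
```

If the count comes back nonzero, treat that password as burned everywhere, not just on the site where it leaked.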

GREAT AT FINDING SECURITY HOLES

For criminals, finding and exploiting the millions of security holes that surface every day has been a costly, time-consuming, and repetitive job, and that’s exactly the kind of task AI is ideally suited for.

AI can scan billions of lines of code almost instantly to discover flaws, weaknesses, and mistakes. And it’s very good at writing exploits to take advantage of the vulnerabilities it discovers.
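
That capability cuts both ways: defenders can point the same models at their own code before attackers do. Here’s a hedged sketch of LLM-assisted code review using the openai Python package (version 1.x); the model name, the prompt, and the deliberately vulnerable snippet are all illustrative assumptions, not a reference to any specific tool or workflow.

```python
# LLM-assisted code review, shown from the defender's side. Requires
# the `openai` package (>= 1.0) and an OPENAI_API_KEY environment
# variable. The model name and prompt are assumptions for the sketch.
from openai import OpenAI

# A deliberately vulnerable snippet (classic SQL injection).
SNIPPET = '''
def get_user(conn, user_id):
    return conn.execute("SELECT * FROM users WHERE id = " + user_id)
'''

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; any capable model works
    messages=[{
        "role": "user",
        "content": "Review this Python snippet for security flaws "
                   "and briefly explain each one:\n" + SNIPPET,
    }],
)
print(resp.choices[0].message.content)
```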

EVADING ANTIVIRUS SOFTWARE

ChatGPT, perhaps the most popular of all AI tools, has been used not only to create malicious code, but also to create code that’s capable of changing quickly and automatically to evade antivirus software.

In early 2023, security researchers released a proof-of-concept malware called BlackMamba that used AI both to eliminate the need for the command-and-control infrastructure typically used by cybercriminals and to generate new malicious code on the fly in order to evade detection.

And AI is also helping to make malware smarter and more capable, able to do more damage, infiltrate more deeply into a network, morph and hide, and find and steal the most valuable data.

MASTERS OF MISINFORMATION

Creating and spreading misinformation and disinformation is something that AI seems born to excel at. Using the same techniques and tactics as phishing campaigns, AI can be deployed to create and optimize the distribution of all kinds of misinformation, disinformation, fake news and images, and conspiracy theories.

And it will also present a frightening threat to elections and democracies. One leading AI expert admitted to being completely terrified of the 2024 election and an expected misinformation tsunami, while another expert suggested that AI will turn elections into a train wreck.

Misleading AI-generated content will be at the forefront of these unsettling attacks.

A HOTBED OF EXTORTION

In 2023, the FBI and the Department of Justice warned that the fastest-growing crime against children was sextortion: using fake but highly realistic social media profiles to trick teens and kids into sharing sensitive or sexually explicit photos and videos, then extorting them for money under the threat of sharing that content with family or the public.

AI is expected to take that kind of crime even further by generating deepfake pornographic photos and videos that appear to show the victim’s face or likeness.

A THREAT TO ECOMMERCE

In late 2023, global security firm Sophos showed how it was able to create a complete fraudulent ecommerce website using only AI. The site included hyper-realistic images, audio, and product descriptions, a fake Facebook login, and a checkout page able to steal login credentials and credit card details. The researchers were also able to create hundreds of similar websites in a matter of minutes, with a single button press.

A NEW GENERATION OF CRIMINALS

According to security firm Trend Micro, “One thing we can derive from this is that the bar to becoming a cybercriminal has been tremendously lowered. Anyone with a broken moral compass can start creating malware without coding know-how. To quote a tweet from Andrej Karpathy, the Director of Artificial Intelligence and Autopilot Vision at Tesla, ‘The hottest new programming language is English.’”

MAKING BAD BETTER

Where AI is already shining in the criminal underworld is in helping criminals improve their hacking tools. Monitoring of hacker chat rooms on the dark web shows keen criminal interest in using currently available AI tools like ChatGPT to make existing malware better, write quality code faster, and even create entirely new tools.

Other conversations and tools in hacker forums focus on “jailbreaking” ChatGPT: working around the restrictions its developers have put in place to prevent the tool from being used to do bad things.

CONCLUSION

A word of caution: while the AI crimewave is already here, it’s not yet a tsunami. Many in the criminal underworld, like the rest of us, are still learning what AI can and can’t do. New tools are being developed quickly, but they’re often slow, clumsy, and hardly feature-packed.

But most observers agree that they’re a promising, if worrisome, start. At least if you’re a criminal. And one thing we know about criminals: if there’s money to be made, they learn fast. The more money they can make, the faster they learn.

In upcoming articles in this series, we’ll be offering tips on how to prevent this crimewave from drowning you.
