AI Image Generators: An Emerging Cybersecurity Threat

Artificial intelligence (AI) has the potential to change the very nature of our society. And if the AI tools we currently have at our disposal are any indication of what’s to come, we have a lot to look forward to.

We also have a lot to be wary of. Namely, the weaponization of AI by cybercriminals and other threat actors. This isn’t a theoretical concern, and not even AI image generators are immune to abuse.

What Are AI Image Generators? How Do They Work?

If you’ve ever used an AI image generator, you have a pretty good idea what they’re all about. Even if you’ve never used one, you’ve most likely come across AI-generated images on social media and elsewhere. Today’s popular tools work on a simple principle: the user types in a text prompt, and the AI generates an image based on it.

What goes on under the hood is far more complex. AI has improved dramatically in recent years, and most of today’s text-to-image generators are so-called diffusion models. This means they are “trained” over a long period on enormous datasets of paired text and images, which is what makes their creations so impressive and stunningly realistic.
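The core trick behind diffusion models can be illustrated with a toy sketch: during training, increasing amounts of noise are added to a clean image, and the model learns to reverse that process. The snippet below is purely illustrative and assumes a standard linear noise schedule; the function names (`noise_schedule`, `forward_diffuse`) are made up for this example, the “image” is a one-dimensional signal, and the learned neural denoiser that real generators rely on is omitted entirely.

```python
import math
import random

def noise_schedule(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; returns the cumulative signal fraction alpha_bar[t]."""
    alpha_bar = []
    prod = 1.0
    for t in range(num_steps):
        beta = beta_start + (beta_end - beta_start) * t / (num_steps - 1)
        prod *= 1.0 - beta
        alpha_bar.append(prod)
    return alpha_bar

def forward_diffuse(x0, t, alpha_bar, rnd):
    """Jump straight to noising step t: keep sqrt(alpha_bar[t]) of the signal,
    and fill in the rest with Gaussian noise."""
    a = alpha_bar[t]
    return [math.sqrt(a) * x + math.sqrt(1.0 - a) * rnd.gauss(0, 1) for x in x0]

def correlation(a, b):
    """Pearson correlation, used here to measure how much signal survives."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

rnd = random.Random(0)
x0 = [math.sin(2 * math.pi * i / 64) for i in range(64)]  # stand-in for a clean image
ab = noise_schedule()

slightly_noisy = forward_diffuse(x0, 10, ab, rnd)
mostly_noise = forward_diffuse(x0, 999, ab, rnd)

# Early steps keep most of the original signal; by the final step it is
# almost pure noise. A trained model learns to run this process backward.
print("correlation with original at t=10: ", round(correlation(x0, slightly_noisy), 3))
print("correlation with original at t=999:", round(correlation(x0, mostly_noise), 3))
```

Generation then works in the opposite direction: the model starts from pure noise and, guided by the text prompt, removes a little noise at each step until an image emerges.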

What makes these AI tools even more impressive is that they don’t merely modify existing images or stitch thousands of images into one; they create new, original images from scratch. And the more people use these text-to-image generators, the more data they are fed, and the better their creations become.

Some of the best-known AI image generators are Dream by WOMBO, DALL-E, Stable Diffusion, Midjourney, DeepAI, Fotor, and Craiyon. New ones are popping up left and right, and tech giants—including Google—are releasing their own, so we can only speculate as to what the future will bring.

4 Ways Threat Actors Weaponize AI Image Generators


Like pretty much all technology, AI image generators can be abused by malevolent actors. In fact, they are already being used for all kinds of nefarious purposes. But exactly what types of scams and cyberattacks can criminals carry out with the help of AI image generators?

1. Social Engineering

One obvious thing threat actors could do with AI image generators is engage in social engineering; for example, create fake social media profiles. Some of these programs can create incredibly realistic images that look just like genuine photographs of real people, and a scammer could use these fake social media profiles for catfishing.

Unlike photos of real people, AI-generated ones cannot be traced via reverse image search, and the cybercriminal doesn’t have to work with a limited set of photographs to con their target—using AI, they can generate as many as they like, building a convincing online identity from scratch.

And there are already real-life examples of threat actors using AI image generators to scam people. In April 2022, TechTalks blogger Ben Dickson received an email from a law firm claiming that he had used an image without permission. The lawyers emailed a DMCA Copyright Infringement Notice, telling Dickson that he needed to link back to a client of theirs or remove the image.

Dickson googled the law firm and found its official website. Everything seemed completely legitimate; the site even had photos of 18 lawyers, complete with their biographies and credentials. But none of it was real. The photos were all generated by AI, and the copyright infringement notices were sent out by someone looking to extort backlinks from unsuspecting bloggers as part of an unethical, black hat SEO (Search Engine Optimization) strategy.

2. Charity Scams

When devastating earthquakes hit Turkey and Syria in February 2023, millions of people around the world expressed their solidarity with the victims by donating clothes, food, and money.

According to a BBC report, scammers took advantage of this, using AI to create realistic images and solicit donations. One scammer showed AI-generated images of ruins on TikTok Live and asked viewers for donations. Another posted an AI-generated image of a Greek firefighter rescuing an injured child from the rubble and asked his followers for donations in Bitcoin.

One can only imagine what type of charity scams criminals will run with the help of AI in the future, but it’s safe to assume they’ll only get better at abusing this software.

3. Deepfakes and Disinformation

Governments, activist groups, and think tanks have long warned about the dangers of deepfakes. AI image generators add another component to this problem, given how realistic their creations are. In fact, in the UK, there’s even a comedy show called Deep Fake Neighbour Wars which finds humor in unlikely celebrity pairings. What would stop a disinformation agent from creating a fake image and promoting it on social media with the help of bots?

This can have real-life consequences, as it nearly did in March 2022, when a fake video of Ukrainian President Volodymyr Zelensky telling Ukrainians to surrender circulated online, per NPR. And that is just one example; the possibilities are nearly endless, and there are countless ways a threat actor could damage someone’s reputation, promote a false narrative, or spread fake news with the help of AI.

4. Ad Fraud

In 2022, Trend Micro researchers discovered that scammers were using AI-generated content to create misleading advertisements and promote shady products. They created images suggesting that popular celebrities use certain products, and ran ad campaigns based on those images.

For example, one ad for a “financial advisement opportunity” featured Elon Musk, the CEO of Tesla. Of course, Musk never endorsed the product in question, but the AI-generated footage made it seem that way, presumably luring unsuspecting viewers into clicking the ads.

AI and Cybersecurity: A Complex Issue We Need to Tackle

Going forward, government regulators and cybersecurity experts will probably have to work together to address the emerging threat of AI-powered cybercrime. But how can we regulate AI and protect ordinary people, without stifling innovation and restricting digital freedoms? That question will loom large for years to come.

Until there is an answer, do what you can to protect yourself: carefully vet any information you see online, avoid shady websites, use safe software, keep your devices up to date, and learn to use artificial intelligence to your advantage.

