How Generative AI is Supporting the Creation of Lures & Scams

A Guide for Value Added Resellers

Copyright © 2024 Gen Digital Inc. All rights reserved. Avast is part of Gen™.


A long time ago, I worked for an antivirus company that has since been acquired by Avast. Knowing many of the people working in this area of artificial intelligence, I pay attention when they publish a white paper. The use of AI in scams is something we should all be concerned about, and I am excited to share this one in our Tectonic Insights.

Executive Summary

The capabilities and global usage of both large language models (LLMs) and generative AI are rapidly increasing. While these tools offer significant benefits to the general public and businesses, they also carry the risk of misuse by malicious actors, including abuse of tools like OpenAI’s ChatGPT and other GPTs.

This document explores how the ChatGPT brand is exploited for lures, scams, and other social engineering threats. Generative AI is expected to play a significant role in the cyber threat landscape, particularly in creating highly believable, multilingual text for phishing and scams. These advances give even unsophisticated scammers more opportunities for convincing social engineering than ever before.

Conversely, we believe generative AI will not drastically change the landscape of malware generation in the near term. Despite numerous proofs of concept, the complexity of generative AI methods still makes traditional, simpler methods more practical for malware creation. In short, for malware authors the payoff does not yet outweigh the effort.

Recognizing the value of generative AI for legitimate purposes is important. AI-based security and assistant tools with various levels of maturity and specialization are already emerging in the market. As these tools evolve and become more widely available, substantial improvements in their capabilities are anticipated.


AI-Generated Lures and Scams

AI-generated lures and scams are increasingly prevalent. Cybercriminals use AI to create lures and to conduct phishing attempts and scams across many kinds of text: emails, social media content, e-shop reviews, SMS messages, and more. AI improves the credibility of social scams by producing trustworthy, authentic texts, eliminating traditional phishing red flags like broken language and awkward addressing.

These advanced threats have exploited societal issues and initiatives, including cryptocurrencies, Covid-19, and the war in Ukraine. The popularity of ChatGPT among attackers stems more from its widespread recognition than from its AI capabilities, which makes the brand a prime candidate for abuse.


How is Generative AI Supporting the Creation of Lures and Scams?

Generative AI, particularly ChatGPT, enhances the language used in scams, enabling cybercriminals to create more advanced texts than they could otherwise. AI can correct grammatical errors, provide multilingual content, and generate multiple text variations to improve believability.

For sophisticated phishing attacks, attackers must integrate the AI-generated text into credible templates. They can purchase functional, well-designed phishing kits or use web archiving tools to replicate legitimate websites, altering URLs to phish victims.

Currently, attackers still need to build parts of their campaigns manually; ChatGPT is not yet an “out-of-the-box” solution for advanced malware creation. However, the emergence of multimodal models, which combine outputs such as images, audio, and video, will enhance the capabilities of generative AI for creating believable phishing and scam campaigns.


Malvertising

Malvertising, or “malicious advertising,” involves disseminating malware through online ads. Cybercriminals exploit the widespread reach and interactive nature of digital ads to distribute harmful content. Instances have been observed where ChatGPT’s name is used in malicious vectors on platforms like Facebook, leading users to fraudulent investment portals.

Users who provide personal information become vulnerable to identity theft, financial fraud, account takeovers, and further scams. The collected data is often sold on the dark web, contributing to the broader cybercrime ecosystem. Recognizing and mitigating these deceptive tactics is crucial.


YouTube Scams

YouTube, one of the world’s most popular platforms, is not immune to cybercrime. Fake videos featuring prominent figures are used to trick users into harmful actions. This strategy, known as the “Appeal to Authority,” exploits trust and credibility to phish personal details or coerce victims into sending money.

For example, videos featuring Elon Musk discussing OpenAI have been modified to scam victims. A QR code displayed in the video redirects users to a scam page, often a cryptocurrency scam or phishing attempt. As AI models like Midjourney and DALL-E mature, the use of fake images, videos, and audio is expected to increase, enhancing the credibility of these scams.
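
On the defensive side, the destination hidden behind a QR code can be checked before anyone visits it. Below is a minimal sketch in Python, assuming the third-party pyzbar and Pillow libraries; the screenshot file name and the small allow-list of trusted domains are placeholders for illustration only.

    # Minimal sketch: decode a QR code from a screenshot and inspect the embedded
    # URL before visiting it. Assumes the third-party pyzbar and Pillow libraries
    # (pip install pyzbar pillow); pyzbar also needs the zbar system library.
    from urllib.parse import urlparse

    from PIL import Image
    from pyzbar.pyzbar import decode

    # Placeholder allow-list; in practice this would come from your own policy.
    TRUSTED_DOMAINS = {"openai.com", "avast.com"}

    def inspect_qr(image_path: str) -> None:
        for result in decode(Image.open(image_path)):
            url = result.data.decode("utf-8", errors="replace")
            host = urlparse(url).hostname or ""
            trusted = any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
            verdict = "matches a trusted domain" if trusted else "UNRECOGNIZED - do not visit"
            print(f"QR payload: {url} -> {verdict}")

    if __name__ == "__main__":
        inspect_qr("video_screenshot.png")  # hypothetical file name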


Typosquatting

Typosquatting involves minor changes in URLs to redirect users to different websites, potentially leading to phishing attacks or the installation of malicious applications. An example is an Android app named “Open Chat GBT: AI Chat Bot,” where a subtle alteration of the name can deceive users into downloading harmful software.
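
As a rough illustration of how such look-alike names can be flagged, here is a minimal sketch using only the Python standard library. The brand list and the similarity threshold are assumptions chosen for the example, not a complete detection method; real tooling would also normalize full URLs down to their host labels first.

    # Minimal sketch: flag names that are close to, but not exactly, a known brand.
    # Uses only the Python standard library; the brand list and threshold are
    # illustrative assumptions.
    from difflib import SequenceMatcher

    KNOWN_BRANDS = ["chatgpt", "openai", "avast"]  # placeholder brand list

    def looks_typosquatted(name: str, threshold: float = 0.8) -> bool:
        candidate = name.lower().replace(" ", "")
        for brand in KNOWN_BRANDS:
            similarity = SequenceMatcher(None, candidate, brand).ratio()
            if candidate != brand and similarity >= threshold:
                return True
        return False

    print(looks_typosquatted("Chat GBT"))  # True: one letter away from "chatgpt"
    print(looks_typosquatted("ChatGPT"))   # False: exact match to the real brand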


Browser Extensions

The popularity of ChatGPT has led to the emergence of numerous browser extensions. While many are legitimate, others are malicious, designed to lure victims. Attackers create extensions with names resembling ChatGPT to deceive users into downloading harmful software, such as adware or spyware. These extensions can also subscribe users to services that periodically charge fees, known as fleeceware.

For instance, a malicious extension mimicking “ChatGPT for Google” was reported by Guardio. This extension stole Facebook sessions and cookies but was removed from the Chrome Web Store after being reported.
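
For readers who want to review what is already installed, the following is a minimal sketch that scans a Chrome profile’s Extensions folder and flags extensions requesting high-risk permissions such as reading cookies or running on every site. The profile path shown is a common Windows location and is an assumption; it differs on macOS and Linux, and extension names often appear as localization placeholders.

    # Minimal sketch: list installed Chrome extensions that request high-risk
    # permissions (for example, reading cookies or running on every site). The
    # profile path is a common Windows location and is an assumption; adjust it
    # for your OS. Uses only the Python standard library.
    import json
    import os
    from pathlib import Path

    EXTENSIONS_DIR = Path(os.path.expandvars(
        r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Extensions"))
    RISKY = {"cookies", "webRequest", "tabs", "<all_urls>"}

    def audit_extensions() -> None:
        for manifest_path in EXTENSIONS_DIR.glob("*/*/manifest.json"):
            manifest = json.loads(manifest_path.read_text(encoding="utf-8-sig"))
            # Extension names are often localization placeholders such as "__MSG_appName__".
            name = manifest.get("name", "unknown")
            requested = {p for p in manifest.get("permissions", [])
                         + manifest.get("host_permissions", []) if isinstance(p, str)}
            flagged = requested & RISKY
            if flagged:
                print(f"{name} ({manifest_path.parent.parent.name}): requests {sorted(flagged)}")

    if __name__ == "__main__":
        audit_extensions()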


Installers and Cracks

Malicious installers often mimic legitimate tools, tricking users into installing malware. These installers promise to install ChatGPT but instead deploy malware like NodeStealer, which steals passwords and browser cookies. Cracked or unofficial software versions pose similar risks, hiding malware that can steal personal information or take control of computers.

This particular method of distributing malware has been around for decades. However, the popularity of ChatGPT and other free-to-download tools has given it new life.


Fake Updates

Fake updates are a common tactic where users are prompted to update their browser to access content. Campaigns like SocGholish use ChatGPT-related articles to lure users into downloading remote access trojans (RATs), giving attackers control over infected devices. These pages are often hosted on vulnerable WordPress sites or sites with weak admin credentials.
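
Site owners can reduce their exposure by keeping WordPress itself current. As a rough self-check, here is a minimal sketch, assuming the third-party requests library, that looks for the version a WordPress site advertises in its default generator meta tag. Hardened sites often remove the tag, so an empty result does not mean the site is up to date.

    # Minimal sketch: check whether a WordPress site publicly advertises its
    # version via the default <meta name="generator"> tag. Assumes the third-party
    # requests library (pip install requests). Hardened sites often strip this
    # tag, so finding nothing does not prove the site is current.
    import re
    import requests

    def wordpress_version(url: str) -> str | None:
        response = requests.get(url, timeout=10)
        match = re.search(
            r'<meta name="generator" content="WordPress ([\d.]+)"', response.text)
        return match.group(1) if match else None

    version = wordpress_version("https://example.com")  # placeholder URL
    print(version or "No generator tag found")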


Precautions for SMB Customers

To protect against ChatGPT-related hacks and scams, SMB customers should take the following precautions:

  • Keep software updated: Regular updates protect against vulnerabilities.
  • Choose products carefully: Download AI tools from official sources and be cautious of offers asking for payment.
  • Beware of offers: Avoid deals that seem too good to be true.
  • Verify publishers and reviews: Check the authenticity of apps and extensions, and compare downloads against vendor-published checksums (see the sketch after this list).
  • Avoid cracked software: Pirated software increases malware risks.
  • Get protected: Keep a valid, up-to-date antivirus program on your computers and network.
  • Report suspicious activity: Use report buttons to inform providers of suspicious ads, applications, or extensions.
  • Consult resellers or service providers: Get information on the latest threats and protection measures.
  • Consider managed services: A managed service provider can monitor your systems and keep protections current on your behalf.
  • Trust cybersecurity providers: Use round-the-clock protection from providers like Avast.
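
One practical way to act on the “verify publishers” advice above is to compare a downloaded installer against the checksum the vendor publishes before running it. The following is a minimal sketch using only the Python standard library; the file name and expected digest are placeholders.

    # Minimal sketch: verify a downloaded installer against the SHA-256 checksum
    # published by the vendor before running it. The file name and expected digest
    # are placeholders. Uses only the Python standard library.
    import hashlib

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"  # from the vendor's site
    actual = sha256_of("ai_assistant_setup.exe")  # hypothetical download
    print("Checksum matches" if actual == EXPECTED else "Checksum mismatch - do not run")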

About Avast Business

Avast delivers award-winning cybersecurity solutions for small and growing businesses, protecting devices, data, applications, and networks. With over 30 years of innovation, Avast operates one of the largest threat detection networks globally. For more information, visit Avast Business.
