Phishing - gettectonic.com
ChatGPT and Politics?

ChatGPT and Politics?

ChatGPT has also appeared in influence operations, with groups using it to generate political content for social media. OpenAI observed an Iranian-led operation, Storm-2035, using ChatGPT to publish politically charged content about U.S. elections and global conflicts. Yet, OpenAI noted that these AI-driven influence efforts often lack audience engagement.

Read More
Acceptable AI Use Policies

Acceptable AI Use Policies

With great power comes—when it comes to generative AI—significant security and compliance risks. Discover how AI acceptable use policies can safeguard your organization while leveraging this transformative technology. AI has become integral across various industries, driving digital operations and organizational infrastructure. However, its widespread adoption brings substantial risks, particularly concerning cybersecurity. A crucial aspect of managing these risks and ensuring the security of sensitive data is implementing an AI acceptable use policy. This policy defines how an organization handles AI risks and sets guidelines for AI system usage. Why an AI Acceptable Use Policy Matters Generative AI systems and large language models are potent tools capable of processing and analyzing data at unprecedented speeds. Yet, this power comes with risks. The same features that enhance AI efficiency can be misused for malicious purposes, such as generating phishing content, creating malware, producing deepfakes, or automating cyberattacks. An AI acceptable use policy is essential for several reasons: Crafting an Effective AI Acceptable Use Policy An AI acceptable use policy should be tailored to your organization’s needs and context. Here’s a general guide for creating one: Essential Elements of an AI Acceptable Use Policy A robust AI acceptable use policy should include: An AI acceptable use policy is not just a document but a dynamic framework guiding safe and responsible AI use within an organization. By developing and enforcing this policy, organizations can harness AI’s power while mitigating its risks to cybersecurity and data integrity, balancing innovation with risk management as AI continues to evolve and integrate into our digital landscapes. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more Top Ten Reasons Why Tectonic Loves the Cloud The Cloud is Good for Everyone – Why Tectonic loves the cloud You don’t need to worry about tracking licenses. Read more

Read More
Slack AI Exploit Prevented

Slack AI Exploit Prevented

Slack AI Exploit Prevented. Slack has patched a vulnerability in its Slack AI assistant that could have been for insider phishing attacks, according to an announcement made by the company on Wednesday. This update follows a blog post by PromptArmor, which detailed how an insider attacker—someone within the same Slack workspace as the target—could manipulate Slack AI into sending phishing links to private channels that the attacker does not have access to. The vulnerability is an example of an indirect prompt injection attack. In this type of attack, the attacker embeds malicious instructions within content that the AI processes, such as an external website or an uploaded document. In this case, the attacker could plant these instructions in a public Slack channel. Slack AI, designed to use relevant information from public channels in the workspace to generate responses, could then be tricked into acting on these malicious instructions. While placing such instructions in a public channel poses a risk of detection, PromptArmor pointed out that an attacker could create a rogue public channel with only one member—themselves—potentially avoiding detection unless another user specifically searches for that channel. Salesforce, which owns Slack, did not directly reference PromptArmor in its advisory and did not confirm to SC Media that the issue it patched is the same one described by PromptArmor. However, the advisory does mention a security researcher’s blog post published on August 20, the same day as PromptArmor’s blog. “When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data. We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data,” a Salesforce spokesperson told SC Media. How the Slack AI Exploit Could Have Extracted Secrets from Private Channels PromptArmor demonstrated two proof-of-concept exploits that would require the attacker to have access to the same workspace as the victim, such as a coworker. The attacker would create a public channel and lure the victim into clicking a link delivered by the AI. In the first exploit, the attacker aimed to extract an API key stored in a private channel that the victim is part of. The attacker could post a carefully crafted prompt in the public channel that indirectly instructs Slack AI to respond to a request for the API key with a fake error message and a URL controlled by the attacker. The AI would unknowingly insert the API key from the victim’s private channel into the URL as an HTTP parameter. If the victim clicks on the URL, the API key would be sent to the attacker’s domain. “This vulnerability shows how a flaw in the system could let unauthorized people see data they shouldn’t see. This really makes me question how safe our AI tools are,” said Akhil Mittal, Senior Manager of Cybersecurity Strategy and Solutions at Synopsys Software Integrity Group, in an email to SC Media. “It’s not just about fixing problems but making sure these tools manage our data properly. As AI becomes more common, it’s important for organizations to keep both security and ethics in mind to protect our information and keep trust.” In a second exploit, PromptArmor demonstrated how similar crafted instructions could be used to deliver a phishing link to a private channel. The attacker would tailor the instructions to the victim’s workflow, such as asking the AI to summarize messages from their manager, and include a malicious link. PromptArmor reported the issue to Slack on August 14, with Slack acknowledging the disclosure the following day. Despite some initial skepticism from Slack about the severity of the vulnerability, the company patched the issue on August 21. “Slack’s security team had prompt responses and showcased a commitment to security and attempted to understand the issue. Given how new prompt injection is and how misunderstood it has been across the industry, this is something that will take the industry time to wrap our heads around collectively,” PromptArmor wrote in their blog. New Slack AI Feature Could Pose Further Prompt Injection Risk PromptArmor concluded its testing of Slack AI before August 14, the same day Slack announced that its AI assistant could now reference files uploaded to Slack when generating search answers. PromptArmor noted that this new feature could create additional opportunities for indirect prompt injection attacks, such as hiding malicious instructions in a PDF file by setting the font color to white. However, the researchers have not yet tested this scenario and noted that workspace admins can restrict Slack AI’s ability to read files. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
AI in scams

AIs Role in Scams

How Generative AI is Supporting the Creation of Lures & Scams A Guide for Value Added Resellers Copyright © 2024 Gen Digital Inc. All rights reserved. Avast is part of Gen™. A long, long time ago, I worked for an antivirus company who has since been acquired by Avast.  Knowing many of the people involved in this area of artificial intelligence, I pay attention when they publish a white paper. AI in scams is something we all should be concerned about. I am excited to share it in our Tectonic Insights. Executive Summary The capabilities and global usage of both large language models (LLMs) and generative AI are rapidly increasing. While these tools offer significant benefits to the general public and businesses, they also pose potential risks for misuse by malicious actors, including the misuse of tools like OpenAI’s ChatGPT and other GPTs. This document explores how the ChatGPT brand is exploited for lures, scams, and other social engineering threats. Generative AI is expected to play a crucial role in the cyber threat world challenges, particularly in creating highly believable, multilingual texts for phishing and scams. These advancements provide more opportunities for sophisticated social engineering by even less sophisticated scammers than ever before. Conversely, we believe generative AI will not drastically change the landscape of malware generation in the near term. Despite numerous proofs of concept, the complexity of generative AI methods still makes traditional, simpler methods more practical for malware creation. In short, the good may not outweigh the bad – just yet. Recognizing the value of generative AI for legitimate purposes is important. AI-based security and assistant tools with various levels of maturity and specialization are already emerging in the market. As these tools evolve and become more widely available, substantial improvements in their capabilities are anticipated. AI-Generated Lures and Scams AI-generated lures and scams are increasingly prevalent. Cybercriminals use AI to create lures and conduct phishing attempts and scams through various texts—emails, social media content, e-shop reviews, SMS scams, and more. AI improves the credibility of social scams by producing trustworthy, authentic texts, eliminating traditional phishing red flags like broken language and awkward addressing. These advanced threats have exploited societal issues and initiatives, including cryptocurrencies, Covid-19, and the war in Ukraine. The popularity of ChatGPT among hackers stems more from its widespread recognition than its AI capabilities, making it a prime target for investigation by attackers. How is Generative AI Supporting the Creation of Lures and Scams? Generative AI, particularly ChatGPT, enhances the language used in scams, enabling cybercriminals to create more advanced texts than they could otherwise. AI can correct grammatical errors, provide multilingual content, and generate multiple text variations to improve believability. For sophisticated phishing attacks, attackers must integrate the AI-generated text into credible templates. They can purchase functional, well-designed phishing kits or use web archiving tools to replicate legitimate websites, altering URLs to phish victims. Currently, attackers need to manually build some aspects of their attempts. ChatGPT is not yet an “out-of-the-box” solution for advanced malware creation. However, the emergence of multi-type models, combining outputs like images, audio, and video, will enhance the capabilities of generative AI for creating believable phishing and scam campaigns. Malvertising Malvertising, or “malicious advertising,” involves disseminating malware through online ads. Cybercriminals exploit the widespread reach and interactive nature of digital ads to distribute harmful content. Instances have been observed where ChatGPT’s name is used in malicious vectors on platforms like Facebook, leading users to fraudulent investment portals. Users who provide personal information become vulnerable to identity theft, financial fraud, account takeovers, and further scams. The collected data is often sold on the dark web, contributing to the broader cybercrime ecosystem. Recognizing and mitigating these deceptive tactics is crucial. YouTube Scams YouTube, one of the world’s most popular platforms, is not immune to cybercrime. Fake videos featuring prominent figures are used to trick users into harmful actions. This strategy, known as the “Appeal to Authority,” exploits trust and credibility to phish personal details or coerce victims into sending money. For example, videos featuring Elon Musk discussing OpenAI have been modified to scam victims. A QR code displayed in the video redirects users to a scam page, often a cryptocurrency scam or phishing attempt. As AI models like Midjourney and DALL-E mature, the use of fake images, videos, and audio is expected to increase, enhancing the credibility of these scams. Typosquatting Typosquatting involves minor changes in URLs to redirect users to different websites, potentially leading to phishing attacks or the installation of malicious applications. An example is an Android app named “Open Chat GBT: AI Chat Bot,” where a subtle URL alteration can deceive users into downloading harmful software. Browser Extensions The popularity of ChatGPT has led to the emergence of numerous browser extensions. While many are legitimate, others are malicious, designed to lure victims. Attackers create extensions with names resembling ChatGPT to deceive users into downloading harmful software, such as adware or spyware. These extensions can also subscribe users to services that periodically charge fees, known as fleeceware. For instance, a malicious extension mimicking “ChatGPT for Google” was reported by Guardio. This extension stole Facebook sessions and cookies but was removed from the Chrome Web Store after being reported. Installers and Cracks Malicious installers often mimic legitimate tools, tricking users into installing malware. These installers promise to install ChatGPT but instead deploy malware like NodeStealer, which steals passwords and browser cookies. Cracked or unofficial software versions pose similar risks, hiding malware that can steal personal information or take control of computers. This particular method of installing malware has been around for decades. However the usage of ChatGPT and other free to download tools has given it a resurrection. Fake Updates Fake updates are a common tactic where users are prompted to update their browser to access content. Campaigns like SocGholish use ChatGPT-related articles to lure users into downloading remote access trojans (RATs), giving attackers control over infected devices. These pages are often hosted on vulnerable WordPress sites or sites with

Read More
How AI is Raising the Stakes in Phishing Attacks

How AI is Raising the Stakes in Phishing Attacks

Cybercriminals are increasingly using advanced AI, including tools like ChatGPT, to execute highly convincing phishing campaigns that mimic legitimate communications with uncanny accuracy. As AI-powered phishing becomes more sophisticated, cybersecurity practitioners must adopt AI and machine learning defenses to stay ahead. What are AI-Powered Phishing Attacks? Phishing, a long-standing cybersecurity issue, has evolved from crude scams into refined attacks that can mimic trusted entities like Amazon, postal services, or colleagues. Leveraging social engineering, these scams trick people into clicking malicious links, downloading harmful files, or sharing sensitive information. However, AI is elevating this threat by making phishing attacks more convincing, timely, and challenging to detect. General Phishing Attacks Traditionally, phishing emails were often easy to spot due to grammatical errors or poor formatting. AI, however, eliminates these mistakes, creating messages that appear professionally written. Additionally, AI language models can gather real-time data from news and corporate sites, embedding relevant details that create urgency and heighten the attack’s credibility. AI chatbots can also generate business email compromise attacks or whaling campaigns at a massive scale, boosting both the volume and sophistication of these threats. Spear Phishing Spear phishing involves targeting specific individuals with highly customized messages based on data gathered from social media or data breaches. AI has supercharged this tactic, enabling attackers to craft convincing, personalized emails almost instantly. During a cybersecurity study, AI-generated phishing emails outperformed human-crafted ones in terms of convincing recipients to click on malicious links. With the help of large language models (LLMs), attackers can create hyper-personalized emails and even deepfake phone calls and videos. Vishing and Deepfakes Vishing, or voice phishing, is another tactic on the rise. Traditionally, attackers would impersonate someone like a company executive or trusted colleague over the phone. With AI, they can now create deepfake audio to mimic a specific person’s voice, making it even harder for victims to discern authenticity. For example, an employee may receive a voice message that sounds exactly like their CFO, urgently requesting a bank transfer. How to Defend Against AI-Driven Phishing Attacks As AI-driven phishing becomes more prevalent, organizations should adopt the following defense strategies: How AI Improves Phishing Defense AI can also bolster phishing defenses by analyzing threat patterns, personalizing training, and monitoring for suspicious activity. GenAI, for instance, can tailor training to individual users’ weaknesses, offer timely phishing simulations, and assess each person’s learning needs to enhance cybersecurity awareness. AI can also predict potential phishing trends based on data such as attack frequency across industries, geographical locations, and types of targets. These insights allow security teams to anticipate attacks and proactively adapt defenses. Preparing for AI-Enhanced Phishing Threats Businesses should evaluate their risk level and implement corresponding safeguards: AI, and particularly LLMs, are transforming phishing attacks, making them more dangerous and harder to detect. As digital footprints grow and personalized data becomes more accessible, phishing attacks will continue to evolve, including falsified voice and video messages that can trick even the most vigilant employees. By proactively integrating AI defenses, organizations can better protect against these advanced phishing threats. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
gettectonic.com