Slack AI Exploit Prevented. Slack has patched a vulnerability in its Slack AI assistant that could have been for insider phishing attacks, according to an announcement made by the company on Wednesday. This update follows a blog post by PromptArmor, which detailed how an insider attacker—someone within the same Slack workspace as the target—could manipulate Slack AI into sending phishing links to private channels that the attacker does not have access to.

The vulnerability is an example of an indirect prompt injection attack. In this type of attack, the attacker embeds malicious instructions within content that the AI processes, such as an external website or an uploaded document. In this case, the attacker could plant these instructions in a public Slack channel. Slack AI, designed to use relevant information from public channels in the workspace to generate responses, could then be tricked into acting on these malicious instructions.

While placing such instructions in a public channel poses a risk of detection, PromptArmor pointed out that an attacker could create a rogue public channel with only one member—themselves—potentially avoiding detection unless another user specifically searches for that channel.

Salesforce, which owns Slack, did not directly reference PromptArmor in its advisory and did not confirm to SC Media that the issue it patched is the same one described by PromptArmor. However, the advisory does mention a security researcher’s blog post published on August 20, the same day as PromptArmor’s blog.

“When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data. We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data,” a Salesforce spokesperson told SC Media.

How the Slack AI Exploit Could Have Extracted Secrets from Private Channels

PromptArmor demonstrated two proof-of-concept exploits that would require the attacker to have access to the same workspace as the victim, such as a coworker. The attacker would create a public channel and lure the victim into clicking a link delivered by the AI.

In the first exploit, the attacker aimed to extract an API key stored in a private channel that the victim is part of. The attacker could post a carefully crafted prompt in the public channel that indirectly instructs Slack AI to respond to a request for the API key with a fake error message and a URL controlled by the attacker. The AI would unknowingly insert the API key from the victim’s private channel into the URL as an HTTP parameter. If the victim clicks on the URL, the API key would be sent to the attacker’s domain.

“This vulnerability shows how a flaw in the system could let unauthorized people see data they shouldn’t see. This really makes me question how safe our AI tools are,” said Akhil Mittal, Senior Manager of Cybersecurity Strategy and Solutions at Synopsys Software Integrity Group, in an email to SC Media. “It’s not just about fixing problems but making sure these tools manage our data properly. As AI becomes more common, it’s important for organizations to keep both security and ethics in mind to protect our information and keep trust.”

In a second exploit, PromptArmor demonstrated how similar crafted instructions could be used to deliver a phishing link to a private channel. The attacker would tailor the instructions to the victim’s workflow, such as asking the AI to summarize messages from their manager, and include a malicious link.

PromptArmor reported the issue to Slack on August 14, with Slack acknowledging the disclosure the following day. Despite some initial skepticism from Slack about the severity of the vulnerability, the company patched the issue on August 21.

“Slack’s security team had prompt responses and showcased a commitment to security and attempted to understand the issue. Given how new prompt injection is and how misunderstood it has been across the industry, this is something that will take the industry time to wrap our heads around collectively,” PromptArmor wrote in their blog.

New Slack AI Feature Could Pose Further Prompt Injection Risk

PromptArmor concluded its testing of Slack AI before August 14, the same day Slack announced that its AI assistant could now reference files uploaded to Slack when generating search answers. PromptArmor noted that this new feature could create additional opportunities for indirect prompt injection attacks, such as hiding malicious instructions in a PDF file by setting the font color to white. However, the researchers have not yet tested this scenario and noted that workspace admins can restrict Slack AI’s ability to read files.

Related Posts
Salesforce OEM AppExchange
Salesforce OEM AppExchange

Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more

The Salesforce Story
The Salesforce Story

In Marc Benioff's own words How did salesforce.com grow from a start up in a rented apartment into the world's Read more

Salesforce Jigsaw
Salesforce Jigsaw

Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more

Health Cloud Brings Healthcare Transformation
Health Cloud Brings Healthcare Transformation

Following swiftly after last week's successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more