Novel Threat Model to Secure LLM
Salesforce AI Research Proposes a Novel Threat Model to Secure LLM Applications Against Prompt Leakage Attacks Large Language Models (LLMs) have gained widespread attention in recent years but face a critical security challenge known as prompt leakage. This vulnerability allows adversaries to extract sensitive information from LLM prompts through targeted attacks. Prompt leakage risks exposing