Wiz Researchers Warn of Security Flaws in AI Infrastructure Providers
AI infrastructure providers like Hugging Face and Replicate are vulnerable to emerging attacks and need to strengthen their defenses to protect sensitive user data, according to Wiz researchers, who argue these flaws stem from security being treated as an afterthought.
During Black Hat USA 2024 on Wednesday, Wiz security experts Hillai Ben-Sasson and Sagi Tzadik presented findings from a year-long study on the security of three major AI infrastructure providers: Hugging Face, Replicate, and SAP AI Core. Their research aimed to assess the security of these platforms and the risks associated with storing valuable data on them, given the increasing targeting of AI platforms by cybercriminals and nation-state actors.
Hugging Face, a machine learning platform that allows users to create models and store datasets, was recently targeted in an attack. In June, the platform detected suspicious activity on its Spaces platform, prompting a key and token reset.
The researchers demonstrated how they compromised these platforms by uploading malicious models and using container escape techniques to break out of their assigned environments, moving laterally across the service. In an April blog post, Wiz detailed how they compromised Hugging Face, gaining cross-tenant access to other customers’ data and training models. Similar vulnerabilities were later identified in Replicate and SAP AI Core, and these attack techniques were showcased during Wednesday’s session.
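The article does not detail the payloads involved, but the general class of attack is well understood: many ML model formats are built on Python's pickle serialization, which can execute arbitrary code when a file is loaded. The sketch below (with a hypothetical `MaliciousModel` class) illustrates why merely loading an untrusted model file can hand an attacker code execution inside the platform's environment:

```python
import pickle

class MaliciousModel:
    """Hypothetical illustration of a booby-trapped serialized object."""

    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object.
        # An attacker can return any callable plus arguments; real
        # payloads would invoke os.system or similar instead of print.
        return (print, ("code executed during model load",))

# The attacker serializes the object and uploads it as a "model".
payload = pickle.dumps(MaliciousModel())

# The victim platform deserializes it, unknowingly running the payload.
obj = pickle.loads(payload)
```

This is why safer formats such as safetensors, which store only raw tensor data, are increasingly recommended over pickle-based model files.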
Prior to Black Hat, Ben-Sasson, Tzadik, and Ami Luttwak, Wiz's CTO and co-founder, discussed their research. They revealed that they successfully breached all three platforms, accessing millions of confidential AI artifacts, including models, datasets, and proprietary code—intellectual property worth millions of dollars.
Luttwak highlighted that many AI service providers rely on containers as barriers between different customers, but warned that these containers can often be bypassed due to misconfigurations. “Containerization is not a secure enough barrier for tenant isolation,” Luttwak stated.
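One classic misconfiguration in the category Luttwak describes is a container runtime socket mounted inside a tenant's container, which typically grants full control of the host. A minimal probe for this condition (the helper name is hypothetical; the socket path is the Docker default) might look like:

```python
import os

def docker_socket_exposed(path: str = "/var/run/docker.sock") -> bool:
    """Return True if a container runtime socket is reachable.

    Inside a supposedly isolated tenant container, an exposed Docker
    socket usually means an attacker can start privileged containers
    on the host and escape tenant isolation entirely.
    """
    return os.path.exists(path)

if docker_socket_exposed():
    print("WARNING: Docker socket reachable - isolation likely broken")
```

Checks like this cover only one failure mode; the Wiz findings suggest defense in depth (gVisor-style sandboxes, seccomp profiles, network segmentation) rather than relying on container boundaries alone.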
After discovering these vulnerabilities, the researchers responsibly disclosed the issues to each service provider. Ben-Sasson praised Hugging Face, Replicate, and SAP for their collaborative and professional responses, and Wiz worked closely with their security teams to resolve the problems.
Despite these fixes, Wiz researchers recommended that organizations update their threat models to account for potential data compromises. They also urged AI service providers to enhance their isolation and sandboxing standards to prevent lateral movement by attackers within their platforms.
The Risks of Rapid AI Adoption
The session also addressed the broader challenges associated with the rapid adoption of AI. The researchers emphasized that security is often an afterthought in the rush to implement AI technologies.
“AI security is also infrastructure security,” Luttwak explained, noting that the novelty and complexity of AI often leave security teams ill-prepared to manage the associated risks. Many organizations testing AI models are using unfamiliar tools, often open-source, without fully understanding the security implications.
Luttwak warned that these tools are frequently not built with security in mind, putting companies at risk. He stressed the importance of performing thorough security validation on AI models and tools, especially given that even major AI service providers have vulnerabilities.
In a related Black Hat session, Chris Wysopal, CTO and co-founder of Veracode, discussed how developers increasingly use large language models for coding but often prioritize functionality over security, leading to concerns like data poisoning and the replication of existing vulnerabilities.