Orca Report: AI Services and Models Show Security Shortcomings

Recent research by Orca Security reveals significant security vulnerabilities in AI services and models deployed in the cloud. The "2024 State of AI Security Report" underscores the urgent need for improved security practices as AI technologies advance rapidly.

AI usage is exploding. Gartner predicts that the AI software market will grow 19.1% annually, reaching $297.9 billion by 2027. In many ways, AI is now at a stage reminiscent of where cloud computing was over a decade ago.

Orca’s analysis of cloud assets across major platforms—AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud—has highlighted troubling risks associated with AI tools and models. Despite the surge in AI adoption, many organizations are neglecting fundamental security measures, potentially exposing themselves to significant threats.

The report indicates that while 56% of organizations use their own AI models for various purposes, a substantial portion of these deployments contain at least one known vulnerability. Orca’s findings suggest that although most vulnerabilities are currently classified as low to medium risk, they still pose a serious threat. Notably, 62% of organizations have implemented AI packages with vulnerabilities, which have an average CVSS score of 6.9. Only 0.2% of these vulnerabilities have known public exploits, compared to the industry average of 2.5%.
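
Surfacing these package-level flaws is largely a matter of inventory plus lookup. As a minimal illustration (not Orca's methodology), the hedged Python sketch below queries the public OSV vulnerability database for pinned versions of common AI packages; the package list and versions are hypothetical placeholders, and a real scan would start from your actual lockfiles.

```python
import requests

# Hypothetical inventory of AI packages pinned in an environment.
# In practice this would come from a real dependency scan (e.g. a lockfile).
AI_PACKAGES = {
    "transformers": "4.30.0",
    "torch": "2.0.1",
    "langchain": "0.0.200",
}

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str) -> list[str]:
    """Return OSV vulnerability IDs affecting a given PyPI package version."""
    resp = requests.post(
        OSV_QUERY_URL,
        json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

for pkg, ver in AI_PACKAGES.items():
    vulns = known_vulnerabilities(pkg, ver)
    if vulns:
        print(f"{pkg}=={ver}: {len(vulns)} known vulnerabilities, e.g. {vulns[0]}")
    else:
        print(f"{pkg}=={ver}: no known vulnerabilities in OSV")
```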

Insecure Configurations and Controls

Orca’s research reveals concerning security practices among widely used AI services. For instance, Azure OpenAI, a popular choice for building custom applications, was found to be improperly configured in 27% of cases. This lapse could allow attackers to access or manipulate data transmitted between cloud resources and AI services.
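
One way to catch this class of misconfiguration is to enumerate accounts and flag those still reachable over the public network. The sketch below is an assumption-laden illustration using the azure-mgmt-cognitiveservices Python SDK, not a complete audit; the subscription ID is a placeholder, and the public_network_access field is read defensively since its availability depends on the API version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID,
)

# Flag OpenAI accounts that still allow public network access.
for account in client.accounts.list():
    if account.kind != "OpenAI":
        continue
    access = getattr(account.properties, "public_network_access", None)
    if access != "Disabled":
        print(f"{account.name}: public network access is {access!r}; "
              "consider private endpoints and disabling public access")
```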

The report also criticizes default settings in Amazon SageMaker, a prominent machine learning service. It highlights that 45% of SageMaker buckets use non-randomized default names, and 98% of organizations have not disabled default root access for SageMaker notebook instances. These defaults create vulnerabilities that attackers could exploit to gain unauthorized access and perform actions on the assets.
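
Both defaults are straightforward to audit. The hedged boto3 sketch below flags notebook instances where root access is still enabled and S3 buckets matching SageMaker's predictable default naming scheme (sagemaker-<region>-<account-id>); adapt the checks to your own environment.

```python
import re
import boto3

sagemaker = boto3.client("sagemaker")
s3 = boto3.client("s3")
sts = boto3.client("sts")

# Flag notebook instances where the default root access is still enabled.
paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for nb in page["NotebookInstances"]:
        detail = sagemaker.describe_notebook_instance(
            NotebookInstanceName=nb["NotebookInstanceName"]
        )
        if detail.get("RootAccess") == "Enabled":
            print(f"notebook {nb['NotebookInstanceName']}: root access enabled")

# Flag buckets that match SageMaker's predictable default naming scheme.
account_id = sts.get_caller_identity()["Account"]
default_pattern = re.compile(rf"^sagemaker-[a-z0-9-]+-{account_id}$")
for bucket in s3.list_buckets()["Buckets"]:
    if default_pattern.match(bucket["Name"]):
        print(f"bucket {bucket['Name']}: non-randomized default SageMaker name")
```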

Additionally, the report points out a widespread lack of self-managed encryption keys. For instance, 98% of organizations using Google Vertex AI have not configured customer-managed encryption keys (CMEK) for encryption at rest, potentially exposing sensitive data to unauthorized access or alteration.
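
On Vertex AI, a customer-managed key can be set once at initialization so that resources created afterwards are encrypted with it. Below is a minimal sketch with the google-cloud-aiplatform SDK, assuming a pre-existing Cloud KMS key; the project, region, bucket, and key path are placeholders.

```python
from google.cloud import aiplatform

# Placeholders: substitute your own project, region, and Cloud KMS key.
KMS_KEY = (
    "projects/my-project/locations/us-central1/"
    "keyRings/my-ring/cryptoKeys/my-key"
)

# Resources created after init inherit this customer-managed key (CMEK)
# for encryption at rest, instead of the default Google-managed key.
aiplatform.init(
    project="my-project",
    location="us-central1",
    encryption_spec_key_name=KMS_KEY,
)

# Example: a dataset created now is encrypted with the key above.
dataset = aiplatform.TabularDataset.create(
    display_name="cmek-protected-dataset",
    gcs_source=["gs://my-bucket/train.csv"],
)
```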

Exposed Access Keys and Platform Risks

Security issues extend to popular AI platforms such as OpenAI and Hugging Face. Orca’s report found that 20% of organizations using OpenAI and 35% of organizations using Hugging Face have exposed access keys, heightening the risk of unauthorized access. This echoes recent research by Wiz, presented at Black Hat USA 2024, which demonstrated vulnerabilities in Hugging Face that could expose sensitive data.
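
Exposed keys of this kind are often easy to catch before they leak, because both providers use recognizable token prefixes (OpenAI secret keys start with sk-, Hugging Face tokens with hf_). The sketch below is a hedged illustration of that idea, not a replacement for a dedicated secret scanner.

```python
import re
import sys
from pathlib import Path

# Simplified patterns: OpenAI keys begin with "sk-", Hugging Face tokens
# with "hf_". Real scanners use stricter rules and entropy checks.
PATTERNS = {
    "OpenAI": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "Hugging Face": re.compile(r"\bhf_[A-Za-z0-9]{20,}\b"),
}

def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for provider, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {provider} key ({match.group()[:8]}…)")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```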

Addressing the Security Challenge

Orca co-founder and CEO Gil Geron emphasizes the need for clear roles and responsibilities in managing AI security. He stresses that security practitioners must recognize and address these risks by setting policies and boundaries. According to Geron, while the challenges are not new, the rapid development of AI tools makes it crucial to address security from both engineering and practitioner perspectives.

Geron also highlights the importance of reviewing and adjusting default settings to enhance security, advocating for rigorous permission management and network hygiene. As AI technology continues to evolve, organizations must remain vigilant and proactive in safeguarding their systems and data.

In conclusion, the Orca report serves as a critical reminder of the security risks associated with AI services and models. Organizations must take concerted action to secure their AI deployments and protect against potential vulnerabilities.

Balance Innovation and Security in AI

  1. Beware of default settings: Always check the default settings of new AI resources and restrict these settings whenever appropriate (see the hardening sketch after this list).
  2. Limit privileges: Excessive privileges give attackers freedom of movement and a platform to launch multi-phased attacks. Protect against lateral movement and other threats by eliminating redundant privileges and restricting access.
  3. Manage vulnerabilities: While AI services are new, most of the vulnerabilities affecting them are not. AI services often rely on existing software with known vulnerabilities, so detecting and mapping those vulnerabilities across your environments is essential to managing and remediating them appropriately.
  4. Secure data: Favor more restrictive settings for data protection, such as opting for self-managed encryption keys while ensuring you enable encryption at rest. Also, offer awareness training that instructs users on data security best practices.
  5. Isolate networks: Limit network access to your assets, open them to network activity only when necessary, and precisely define what traffic is allowed in and out.
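
Several of these recommendations can be applied at resource-creation time. As one hedged example, the boto3 sketch below provisions a SageMaker notebook instance with root access disabled, direct internet access turned off in favor of a VPC subnet, and a customer-managed KMS key; the role ARN, subnet, security group, and key ID are placeholders.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Placeholders: substitute your own role, subnet, security group, and key.
sagemaker.create_notebook_instance(
    NotebookInstanceName="hardened-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
    RootAccess="Disabled",            # item 1: override the permissive default
    DirectInternetAccess="Disabled",  # item 5: reach the network only via the VPC
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",  # item 4
)
```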

Tectonic notes that Salesforce was not included in the sampling.

Content updated September 2024.
