Salesforce Advanced AI Models

Salesforce has introduced two advanced AI models, xGen-Sales and xLAM, designed to enhance its Agentforce platform, which integrates human agents with autonomous AI for greater business efficiency.

xGen-Sales, a proprietary AI model, is tailored for sales tasks such as generating customer insights, summarizing calls, and managing pipelines. By automating routine sales activities, it frees sales teams to focus on strategic priorities and strengthens Agentforce's capacity to autonomously handle customer interactions, nurture leads, and support sales teams with greater speed and precision.

The xLAM (Large Action Model) family comprises AI models designed to perform complex tasks and trigger actions within business systems. Unlike traditional Large Language Models (LLMs), which focus on content generation, xLAM models excel at function-calling, enabling AI agents to autonomously execute tasks such as initiating workflows or processing data without human input. The models vary in size and capability, from smaller on-device versions to large-scale models suited to industrial applications.

Salesforce AI Research developed the xLAM models using APIGen, a proprietary data-generation pipeline that significantly improves model performance. Early xLAM models have already outperformed other large models on key benchmarks: the xLAM-8x22B model, for example, ranked first in function-calling on the Berkeley Function-Calling Leaderboard, surpassing even larger models such as GPT-4.

These innovations are designed to help businesses scale AI-driven workflows efficiently. Organizations adopting the models can automate complex tasks, improve sales operations, and optimize resource allocation. The non-commercial xLAM models are available for community review on Hugging Face, while proprietary versions will power Agentforce. xGen-Sales has completed its pilot phase and will soon be available for general use.
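To make the distinction between content generation and function-calling concrete, here is a minimal sketch using one of the openly released xLAM models. The repository name is real, but the tool schema and prompt format below are simplified illustrations, not Salesforce's official template.

```python
# A minimal, illustrative sketch of function-calling with a small xLAM model.
# "Salesforce/xLAM-1b-fc-r" is a real Hugging Face repository; the tool schema
# and prompt are simplified assumptions, not the official format.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/xLAM-1b-fc-r"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Describe a tool the agent may call; a function-calling model answers with a
# structured call instead of free-form prose.
tools = [{
    "name": "get_pipeline_summary",  # hypothetical business-system function
    "description": "Summarize open opportunities for a sales rep.",
    "parameters": {"rep_id": {"type": "string"}},
}]
query = "Summarize the pipeline for rep 0042."

prompt = (
    "You are an assistant that answers by emitting a JSON function call.\n"
    f"Available tools: {json.dumps(tools)}\n"
    f"User: {query}\nCall:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
# Expected shape (illustrative): {"name": "get_pipeline_summary", "arguments": {"rep_id": "0042"}}
```

The design point is that the model's output is constrained toward a machine-parseable call that downstream systems can execute, which is what lets an agent act rather than merely respond.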

AI Services and Models Security Shortcomings

Orca Report: AI Services and Models Show Security Shortcomings

Recent research by Orca Security reveals significant security vulnerabilities in AI services and models deployed in the cloud. The "2024 State of AI Security Report" underscores the urgent need for improved security practices as AI technologies advance rapidly.

AI usage is exploding. Gartner predicts that the AI software market will grow 19.1% annually, reaching roughly $298 billion by 2027. In many ways, AI is now at a stage reminiscent of where cloud computing was over a decade ago. Orca's analysis of cloud assets across major platforms (AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud) highlights troubling risks associated with AI tools and models. Despite the surge in AI adoption, many organizations are neglecting fundamental security measures, potentially exposing themselves to significant threats.

The report indicates that while 56% of organizations use their own AI models for various purposes, a substantial portion of these deployments contain at least one known vulnerability. Orca's findings suggest that although most of these vulnerabilities are currently classified as low to medium risk, they still pose a serious threat. Notably, 62% of organizations have deployed AI packages with vulnerabilities, which carry an average CVSS score of 6.9. Only 0.2% of these vulnerabilities have known public exploits, compared with the industry average of 2.5%.

Insecure Configurations and Controls

Orca's research reveals concerning security practices among widely used AI services. For instance, Azure OpenAI, a popular choice for building custom applications, was found to be improperly configured in 27% of cases. This lapse could allow attackers to access or manipulate data transmitted between cloud resources and AI services.

The report also criticizes default settings in Amazon SageMaker, a prominent machine learning service: 45% of SageMaker buckets use non-randomized default names, and 98% of organizations have not disabled default root access for SageMaker notebook instances. These defaults create openings that attackers could exploit to gain unauthorized access and act on the underlying assets.

Additionally, the report points out a lack of self-managed encryption keys and encryption protection. For instance, 98% of organizations using Google Vertex have not enabled encryption at rest for their self-managed keys, potentially exposing sensitive data to unauthorized access or alteration.

Exposed Access Keys and Platform Risks

Security issues extend to popular AI platforms such as OpenAI and Hugging Face. Orca found that 20% of organizations using OpenAI and 35% using Hugging Face have exposed access keys, heightening the risk of unauthorized access. This follows recent research by Wiz, presented at Black Hat USA 2024, which demonstrated vulnerabilities in Hugging Face through which sensitive data could be compromised.

Addressing the Security Challenge

Orca co-founder and CEO Gil Geron emphasizes the need for clear roles and responsibilities in managing AI security. He stresses that security practitioners must recognize and address these risks by setting policies and boundaries. According to Geron, while the challenges are not new, the rapid development of AI tools makes it crucial to address security from both engineering and practitioner perspectives.
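As a concrete illustration of that engineering perspective, here is a minimal audit sketch, assuming AWS credentials and region come from the environment, that checks for two of the SageMaker defaults the report flags: notebook instances with root access still enabled, and buckets using the predictable default naming scheme. It uses only standard boto3 calls.

```python
# A minimal audit sketch for two SageMaker defaults flagged in the report:
# root access left enabled on notebook instances, and predictable
# default-named buckets. Credentials/region are assumed from the environment.
import boto3

sm = boto3.client("sagemaker")
s3 = boto3.client("s3")

# Flag notebook instances that still allow root access (the default).
paginator = sm.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for nb in page["NotebookInstances"]:
        detail = sm.describe_notebook_instance(
            NotebookInstanceName=nb["NotebookInstanceName"]
        )
        if detail.get("RootAccess") == "Enabled":
            print(f"root access enabled: {nb['NotebookInstanceName']}")

# Flag buckets using the guessable default naming scheme
# ("sagemaker-<region>-<account-id>").
for bucket in s3.list_buckets()["Buckets"]:
    if bucket["Name"].startswith("sagemaker-"):
        print(f"default-named bucket: {bucket['Name']}")
```

Checks like these are cheap to run periodically and map directly onto two of the misconfiguration categories Orca quantifies.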
Geron also highlights the importance of reviewing and adjusting default settings to enhance security, advocating rigorous permission management and network hygiene. As AI technology continues to evolve, organizations must remain vigilant and proactive in safeguarding their systems and data.

In conclusion, the Orca report serves as a critical reminder of the security risks associated with AI services and models. Organizations must take concerted action to secure their AI deployments, protect against potential vulnerabilities, and balance innovation with security in AI.

Tectonic notes that Salesforce was not included in the sampling. Content updated September 2024.

AI Infrastructure Flaws

Wiz Researchers Warn of Security Flaws in AI Infrastructure Providers

AI infrastructure providers such as Hugging Face and Replicate are vulnerable to emerging attacks and need to strengthen their defenses to protect sensitive user data, according to Wiz researchers. These AI infrastructure flaws stem from security being treated as an afterthought.

During Black Hat USA 2024 on Wednesday, Wiz security experts Hillai Ben-Sasson and Sagi Tzadik presented findings from a year-long study of the security of three major AI infrastructure providers: Hugging Face, Replicate, and SAP AI Core. Their research aimed to assess the security of these platforms and the risks of storing valuable data on them, given the increasing targeting of AI platforms by cybercriminals and nation-state actors.

Hugging Face, a machine learning platform that allows users to create models and store datasets, was recently targeted in an attack. In June, the platform detected suspicious activity on its Spaces platform, prompting a key and token reset.

The researchers demonstrated how they compromised these platforms by uploading malicious models and using container-escape techniques to break out of their assigned environments and move laterally across the service. In an April blog post, Wiz detailed how it compromised Hugging Face, gaining cross-tenant access to other customers' data and training models. Similar vulnerabilities were later identified in Replicate and SAP AI Core, and the attack techniques were showcased during Wednesday's session.

Prior to Black Hat, Ben-Sasson, Tzadik, and Ami Luttwak, Wiz's CTO and co-founder, discussed their research. They revealed that in all three cases they successfully breached the platform (Hugging Face, Replicate, and SAP AI Core), accessing millions of confidential AI artifacts, including models, datasets, and proprietary code: intellectual property worth millions of dollars.

Luttwak highlighted that many AI service providers rely on containers as barriers between different customers, but warned that these containers can often be bypassed due to misconfigurations. "Containerization is not a secure enough barrier for tenant isolation," Luttwak stated.

After discovering these vulnerabilities, the researchers responsibly disclosed the issues to each service provider. Ben-Sasson praised Hugging Face, Replicate, and SAP for their collaborative and professional responses, and Wiz worked closely with their security teams to resolve the problems. Despite these fixes, Wiz researchers recommended that organizations update their threat models to account for potential data compromises. They also urged AI service providers to strengthen their isolation and sandboxing standards to prevent lateral movement by attackers within their platforms.

The Risks of Rapid AI Adoption

The session also addressed the broader challenges of rapid AI adoption. The researchers emphasized that security is often an afterthought in the rush to implement AI technologies. "AI security is also infrastructure security," Luttwak explained, noting that the novelty and complexity of AI often leave security teams ill-prepared to manage the associated risks. Many organizations testing AI models are using unfamiliar tools, often open source, without fully understanding the security implications. Luttwak warned that these tools are frequently not built with security in mind, putting companies at risk.
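The malicious-model attack path also carries a lesson for model consumers. As one illustrative defensive habit (a sketch under my own assumptions, not a Wiz recommendation): when pulling third-party weights with the transformers library, requiring the safetensors format refuses pickle-based checkpoints, which can execute arbitrary code at deserialization time. The repository name below is a hypothetical placeholder.

```python
# A minimal sketch: load an untrusted third-party model while refusing
# pickle-based (.bin) checkpoints, which can run arbitrary code on load.
# "some-org/some-model" is a hypothetical placeholder repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

untrusted_model = "some-org/some-model"

tokenizer = AutoTokenizer.from_pretrained(untrusted_model)
# use_safetensors=True makes loading fail outright rather than fall back
# to pickle serialization, closing off deserialization-time code execution.
model = AutoModelForCausalLM.from_pretrained(
    untrusted_model,
    use_safetensors=True,
)
```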
Luttwak stressed the importance of performing thorough security validation on AI models and tools, especially given that even major AI service providers have had vulnerabilities.

In a related Black Hat session, Chris Wysopal, CTO and co-founder of Veracode, discussed how developers increasingly use large language models for coding but often prioritize functionality over security, leading to concerns such as data poisoning and the replication of existing vulnerabilities.
