Anthropic’s New Approach to RAG
This advanced RAG methodology demonstrates how AI can overcome traditional retrieval challenges, delivering more precise, context-aware responses while maintaining efficiency and scalability.
Not long ago, financial services companies were still struggling with customer data trapped in silos. Though it feels like a distant issue, this problem remains for many large organizations whose divisions deal separately with the same customers. The solution is a concept known as a "single source of truth."

This theme took center stage at Dreamforce 2024 in San Francisco, hosted by Salesforce (NYSE: CRM). The event showcased Salesforce's latest AI innovations, including Agentforce, which is set to revolutionize customer engagement through its advanced AI capabilities. Agentforce, which becomes generally available on October 25, enables businesses to deploy autonomous AI agents to manage a wide variety of tasks. These agents differ from earlier Salesforce-based AI tools by leveraging Atlas, a new reasoning engine designed to let the bots reason more like human beings. Unlike generative AI models, which might write an email based on prompts, Agentforce's AI agents can answer complex, high-order questions such as, "What should I do with all my customers?" The agents break down these queries into actionable steps, whether that's sending emails, making phone calls, or texting customers, thanks to the deep capabilities of Atlas.

Atlas is at the heart of what makes these AI agents so powerful. It combines multiple large language models (LLMs), large action models (LAMs), and retrieval-augmented generation (RAG) modules, along with REST APIs and connectors to various datasets. This system processes user queries through multiple layers, checking for validity and then expanding the query into manageable chunks for processing. Once a query passes through the chit-chat detector, which filters out non-relevant inputs, it enters the evaluation phase, where the AI determines whether it has enough data to provide a meaningful answer.
If not, the system loops back to the user for more information in a process Salesforce calls the agentic loop. The fewer loops required, the more efficient the AI becomes, making the experience seamless for users. Phil Mui, Senior Vice President of Salesforce AI Research, explained that the AI agents created via Agentforce are powered by the Atlas reasoning engine, which makes use of several key tools: a re-ranker, a refiner, and a response synthesizer. These tools ensure that the AI retrieves, ranks, and synthesizes relevant information to generate high-quality, natural language responses for the user.

But Salesforce's AI agents don't stop at automation; they also emphasize trust. Before responses reach users, they go through additional checks for toxicity detection, bias prevention, and personally identifiable information (PII) masking. This ensures that the output is both accurate and safe.

The potential of Agentforce is massive. According to Wedbush, Salesforce's AI strategy could generate over $4 billion annually by 2025. Wedbush analysts recently increased their price target for Salesforce stock to $325, reflecting the strong customer reception of Agentforce's AI ecosystem. While some analysts, such as Yiannis Zourmpanos from Seeking Alpha, have expressed caution due to Salesforce's high valuation and slower revenue growth, the company's continued focus on AI and multi-cloud solutions places it in a strong position for the future.

Robin Fisher, Salesforce's head of growth markets for Europe, the Middle East, and Africa, highlighted two major takeaways from Dreamforce for African businesses: Data Cloud and AI. Data Cloud provides a 360-degree view of the customer, consolidating data into a single source of truth without requiring full data migration. Meanwhile, Agentforce's autonomous AI agents will drive operational efficiency across industries, especially in markets like Africa.
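The agentic loop described above can be sketched in a few lines. This is an illustrative toy, not Salesforce's actual Atlas implementation: the chit-chat detector, evaluation check, and loop bound are hypothetical stand-ins for the components the article names.

```python
def is_chit_chat(query: str) -> bool:
    """Stand-in for the chit-chat detector, filtering non-actionable inputs."""
    small_talk = {"hi", "hello", "thanks", "how are you"}
    return query.strip().lower() in small_talk

def has_enough_context(query: str, context: dict) -> bool:
    """Stand-in for the evaluation phase: is there data to answer with?"""
    return bool(context)

def agentic_loop(query: str, context: dict, ask_user, max_loops: int = 3) -> str:
    """Loop back to the user until enough context is gathered, then answer."""
    if is_chit_chat(query):
        return "Filtered: conversational input, no action taken."
    for _ in range(max_loops):
        if has_enough_context(query, context):
            # In Atlas, retrieval, re-ranking, refining, and response
            # synthesis would happen at this point.
            return f"Answer synthesized from {len(context)} context item(s)."
        # The agentic loop: ask the user for more information.
        context.update(ask_user("Could you share more detail?"))
    return "Unable to gather enough context."
```

The fewer iterations of the `for` loop a query needs, the cheaper and more seamless the interaction, which is exactly the efficiency property the article attributes to the agentic loop.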
Zuko Mdwaba, Salesforce's managing director for South Africa, added that the company's decade-long AI journey is culminating in its most advanced AI offerings yet. This new wave of AI, he said, is transforming not just customer engagement but also internal operations, empowering employees to focus on more strategic tasks while AI handles repetitive ones. The future is clear: as AI evolves with tools like Agentforce and Atlas, businesses across sectors, from banking to retail, are poised to harness the transformative power of autonomous technology and data-driven insights, finally breaking free from the silos of the past.
Exploring Small Language Models (SLMs): Capabilities and Applications

Large Language Models (LLMs) have been prominent in AI for some time, but Small Language Models (SLMs) are now enhancing our ability to work with natural and programming languages. While LLMs excel at general language understanding, certain applications require more accuracy and domain-specific knowledge than these models can provide. This has created demand for custom SLMs that offer LLM-like performance while reducing runtime costs and providing a secure, manageable environment.

In this insight, we dig into the world of SLMs, exploring their unique characteristics, benefits, and applications. We also discuss fine-tuning methods applied to Llama-2-13b, an SLM, to address specific challenges. The goal is to investigate how to make the fine-tuning process platform-independent. We selected Databricks for this purpose due to its compatibility with major cloud providers like Azure, Amazon Web Services (AWS), and Google Cloud Platform.

What Are Small Language Models?

In AI and natural language processing, SLMs are lightweight generative models with a focus on specific tasks. The term "small" refers to the model's comparatively modest parameter count and hardware footprint: SLMs like Google Gemini Nano, Microsoft's Orca-2-7b, and Meta's Llama-2-13b run efficiently on a single GPU, with parameter counts in the low billions rather than the hundreds of billions seen in the largest LLMs.

Applications of SLMs

SLMs are increasingly used across various sectors, including healthcare, technology, and beyond.

Fine-Tuning Small Language Models

Fine-tuning involves additional training of a pre-trained model to make it more domain-specific. This process updates the model's parameters with new data to enhance its performance in targeted applications, such as text generation or question answering.

Hardware Requirements for Fine-Tuning

The hardware needed depends on the model size, project scale, and dataset.
Data Preparation

Preparing data involves extracting text from PDFs, cleaning it, generating question-and-answer pairs, and then fine-tuning the model. Although GPT-3.5 was used for generating Q&A pairs, SLMs can also be used for this purpose, depending on the use case.

Fine-Tuning Process

You can use HuggingFace tools for fine-tuning Llama-2-13b-chat-hf. The dataset was converted into a HuggingFace-compatible format, and quantization techniques were applied to optimize performance. The fine-tuning took about 16 hours over 50 epochs and cost around $100/£83, excluding trial runs.

Results and Observations

The fine-tuned model demonstrated strong performance, with over 70% of answers being highly similar to those generated by GPT-3.5. The SLM achieved comparable results despite having fewer parameters, and the process succeeded on both AWS and Databricks, showcasing the approach's portability.

SLMs have some limitations compared to LLMs, such as more restricted knowledge bases, but they offer benefits in efficiency, versatility, and environmental impact. As SLMs continue to evolve, their relevance and popularity are likely to increase, especially with new models like Gemini Nano and Mixtral entering the market.
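As a concrete illustration of the data-preparation step, the sketch below converts generated Q&A pairs into the `[INST]`/`<<SYS>>` chat template that Llama-2-chat models expect, producing records in a shape a HuggingFace dataset can load. The system prompt is a placeholder assumption, not taken from the article.

```python
# Assumed placeholder system prompt; replace with your domain instructions.
SYSTEM_PROMPT = "You are a helpful domain assistant."

def format_example(question: str, answer: str) -> str:
    """Render one Q&A pair in the Llama-2-chat prompt template."""
    return (
        f"<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n"
        f"{question} [/INST] {answer} </s>"
    )

def build_dataset(qa_pairs):
    """Return records in a HuggingFace-datasets-friendly shape
    (e.g. for datasets.Dataset.from_list)."""
    return [{"text": format_example(q, a)} for q, a in qa_pairs]
```

A list produced this way can be fed to a supervised fine-tuning trainer as a single `text` column, which is a common convention for instruction tuning.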
Growing Energy Consumption in Generative AI, but ROI Impact Remains Unclear

The rising energy costs associated with generative AI aren't always central in enterprise financial considerations, yet experts suggest IT leaders should take note. Building a business case for generative AI involves both obvious and hidden expenses. Licensing fees for large language models (LLMs) and SaaS subscriptions are visible expenses, but less apparent costs include data preparation, cloud infrastructure upgrades, and managing organizational change.

One under-the-radar cost is the energy required by generative AI. Training LLMs demands vast computing power, and even routine AI tasks like answering user queries or generating images consume energy. These intensive processes require robust cooling systems in data centers, adding to energy use. While energy costs haven't been a focus for GenAI adopters, growing awareness has prompted the International Energy Agency (IEA) to predict a doubling of data center electricity consumption by 2026, attributing much of the increase to AI. Goldman Sachs echoed these concerns, projecting data center power consumption to more than double by 2030.

For now, generative AI's anticipated benefits outweigh energy cost concerns for most enterprises, with hyperscalers like Google bearing the brunt of these costs. Google recently reported a 13% increase in greenhouse gas emissions, citing AI as a major contributor and suggesting that reducing emissions might become more challenging with AI's continued growth. While not a barrier to adoption, energy costs play into generative AI's long-term viability, noted Scott Likens, global AI engineering leader at PwC, emphasizing that "there's energy being used — you don't take it for granted."

Energy Costs and Enterprise Adoption

Generative AI users might not see a line item for energy costs, yet these are embedded in fees.
Ryan Gross of Caylent points out that the costs are mainly tied to model training and inferencing, with each model query, though individually minor, adding up over time. These expenses are often spread across the customer base, as companies pay for generative AI access through a licensing model. A PwC sustainability study showed that GenAI power costs, particularly from model training, are distributed among licensees. Token-based pricing for LLM usage also reflects inferencing costs, though these charges have decreased. Likens noted that the largest expenses still come from infrastructure and data management rather than energy.

Potential Efficiency Gains

Though energy isn't a primary consideration, enterprises could reduce consumption indirectly through technological advancements. Newer, more cost-efficient models like OpenAI's GPT-4o mini are 60% less expensive per token than prior versions, enabling organizations to deploy GenAI at larger scale while keeping costs lower. Small, fine-tuned models can be used to address latency and lower energy consumption as part of a "multimodel" approach that provides different accuracy and latency levels with varying energy demands. Agentic AI also offers opportunities for cost and energy savings. By breaking down tasks and routing them through specialized models, companies can minimize latency and reduce power usage. According to Likens, an agentic architecture could cut costs and consumption, particularly when tasks are routed to more efficient models.

Rising Data Center Energy Needs

While enterprises may feel shielded from direct energy costs, data centers bear the growing power demand. Cooling solutions are evolving, with liquid cooling systems becoming more prevalent for AI workloads. As data centers enter the "AI growth cycle," the demand for energy-efficient cooling has fueled a resurgence in thermal management investment.
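The "multimodel" routing idea can be sketched as a tiny dispatcher: cheap requests go to a small model, complex ones to a large model. Everything here is an illustrative assumption, including the word-count complexity heuristic and the generic model labels; real routers use learned classifiers or task metadata.

```python
def estimate_complexity(prompt: str) -> int:
    """Crude proxy for task complexity: longer, multi-question prompts
    are treated as harder (a real system would use a classifier)."""
    return len(prompt.split()) + 10 * prompt.count("?")

def route(prompt: str, threshold: int = 40) -> str:
    """Send simple prompts to a cheaper, lower-energy small model and
    reserve the large model for complex requests."""
    return "large-model" if estimate_complexity(prompt) > threshold else "small-model"
```

Because most routine queries fall below the threshold, the bulk of traffic lands on the cheaper model, which is the cost-and-energy argument the article attributes to multimodel and agentic architectures.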
Liquid cooling, being more efficient than air cooling, is gaining traction due to the power demands of AI and high-performance computing. IDTechEx projects that data center liquid cooling revenue could exceed $50 billion by 2035. Meanwhile, data centers are exploring nuclear power, with AWS, Google, and Microsoft among those considering nuclear energy as a sustainable way to meet AI's power demands.

Future ROI Considerations

While enterprises remain shielded from the full energy costs of generative AI, careful model selection and architectural choices could help curb consumption. PwC, for instance, factors the "carbon impact" into its GenAI deployment strategy, recognizing that energy considerations are now part of the generative AI value proposition. As organizations increasingly weigh sustainability in their tech decisions, energy efficiency might soon play a larger role in generative AI ROI calculations.
From Science Fiction to Reality: AI's Game-Changing Role in Service and Sales

AI for service and sales has reached a critical tipping point, driving rapid innovation. At Dreamforce in San Francisco, hosted by Salesforce, we explored how Salesforce clients are leveraging CRM, Data Cloud, and AI to extract real business value from their Salesforce investments.

In previous years, AI features branded under "Einstein" had been met with skepticism. These features, such as lead scoring, next-best-action suggestions for service agents, and cross-sell/upsell recommendations, often required substantial quality data in the CRM and knowledge base to be effective. However, customer data was frequently unreliable, with duplicate records and missing information, and the Salesforce knowledge base was underused. Building self-service capabilities with chatbots was also challenging, requiring accurate predictions of customer queries and well-structured decision trees.

This year's Dreamforce revealed a transformative shift. The advancements in AI, especially for customer service and sales, have become exceptionally powerful. Companies now need to take notice of Salesforce's capabilities, which have expanded significantly.

Agentforce: AI's New Role in Sales and Service

At Dreamforce, we participated in a workshop where we built an AI agent capable of responding to customer cases using product sheets and company knowledge, all within 90 minutes. The experience demonstrated how accessible AI solutions have become, no longer requiring developers or LLM experts to set up. The key challenge lies in mapping external data sources to a unified data model in Data Cloud, but once that is achieved, the potential for customer service and sales is immense.
How AI and Data Integrate to Transform Service and Sales

Businesses can harness integrated CRM, Data Cloud, and AI components to build a comprehensive solution.

Real-World Success and AI Implementation

OpenTable shared a successful example of building an AI agent for its app in just two months, using a small team of four. This was a marked improvement over the company's previous chatbot projects, highlighting the efficiency of the latest AI tools. Most CEOs of large enterprises are exploring AI strategies, whether by developing their own LLMs or using pre-existing models. However, many of these efforts are siloed, and engineering costs are high, leading to clunky transitions between AI and human agents.

Tectonic is well-positioned to help our clients quickly deploy AI-powered solutions that integrate seamlessly with their existing CRM and ERP systems. By leveraging AI agents to streamline customer interactions, enhance sales opportunities, and provide smooth handoffs to human agents, businesses can significantly improve customer experiences and drive growth. Tectonic is ready to help businesses achieve similar success with AI-driven innovation.
OpenAI has established itself as a leading force in the generative AI space, with ChatGPT being one of the most widely recognized AI tools. Powered by the GPT series of large language models (LLMs), ChatGPT primarily uses GPT-4o and GPT-3.5 as of September 2024. This insight provides an OpenAI update.

In August and September 2024, rumors circulated about a new model from OpenAI, codenamed "Strawberry." Initially, it was unclear whether this model would be a successor to GPT-4o or something entirely different. On September 12, 2024, the mystery was resolved with the official launch of OpenAI's o1 models, including o1-preview and o1-mini.

What Is OpenAI o1?

OpenAI o1 is a new family of LLMs optimized for advanced reasoning tasks. Unlike earlier models, o1 is designed to improve problem-solving by reasoning through queries rather than just generating quick responses. This deeper processing aims to produce more accurate answers to complex questions, particularly in fields like STEM (science, technology, engineering, and mathematics). The o1 models, currently available in preview form, are intended to provide a new type of LLM experience beyond what GPT-4o offers. Like all OpenAI LLMs, the o1 series is built on a transformer architecture and can be used for tasks such as content summarization, new content generation, question answering, and writing code.

Key Features of OpenAI o1

The standout feature of the o1 models is their ability to engage in multistep reasoning. By adopting a "chain-of-thought" approach, o1 models break down complex problems and reason through them iteratively. This makes them particularly adept at handling intricate queries that require a more thoughtful response. The initial September 2024 launch included two models: o1-preview and o1-mini.

Use Cases for OpenAI o1

The o1 models can perform many of the same functions as GPT-4o, such as answering questions, summarizing content, and generating text.
They are particularly suited, however, to tasks that benefit from enhanced reasoning.

Availability and Access

The o1-preview and o1-mini models are available to ChatGPT Plus and Team users as of September 12, 2024. OpenAI plans to extend access to ChatGPT Enterprise and Education users starting September 19, 2024. While free ChatGPT users do not have access to these models at launch, OpenAI intends to introduce o1-mini to free users in the future. Developers can also access the models through OpenAI's API, and third-party platforms such as Microsoft Azure AI Studio and GitHub Models offer integration.

Limitations of OpenAI o1

As preview models, the o1 family comes with certain limitations, including slower responses and a narrower feature set than GPT-4o.

Enhancing Safety with OpenAI o1

To ensure safety, OpenAI released a System Card that outlines how the o1 models were evaluated for risks like cybersecurity threats, persuasion, and model autonomy.

GPT-4o vs. OpenAI o1

Here's a quick comparison between GPT-4o and OpenAI's new o1 models:

Feature | GPT-4o | o1 Models
Release Date | May 13, 2024 | Sept. 12, 2024
Model Variants | Single model | Two variants: o1-preview and o1-mini
Reasoning Capabilities | Good | Enhanced, especially for STEM fields
Mathematics Olympiad Score | 13% | 83%
Context Window | 128K tokens | 128K tokens
Speed | Faster | Slower due to in-depth reasoning
Cost (per million tokens) | Input: $5; Output: $15 | o1-preview: $15 input, $60 output; o1-mini: $3 input, $12 output
Safety and Alignment | Standard | Enhanced safety, better jailbreak resistance

OpenAI's o1 models bring a new level of reasoning and accuracy, making them a promising advancement in generative AI.
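The per-million-token prices quoted in the comparison make per-query cost easy to estimate. The helper below is a simple calculator using those quoted rates; token counts in the example are arbitrary illustrations.

```python
# (input, output) USD per million tokens, as quoted in the comparison table.
PRICES = {
    "gpt-4o": (5.00, 15.00),
    "o1-preview": (15.00, 60.00),
    "o1-mini": (3.00, 12.00),
}

def query_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the quoted per-million-token rates."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Example: 2,000 input and 1,000 output tokens
# gpt-4o:     $0.025
# o1-preview: $0.09  (the extra reasoning tokens o1 generates would raise
#                     output counts further in practice)
```

Note that o1's longer internal reasoning tends to inflate the billed output side, so the 3-4x sticker difference understates the practical gap for reasoning-heavy prompts.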
Is Agentforce Different?

The Salesforce hype machine is in full swing, with product announcements like Chatter, Einstein GPT, and Data Cloud all positioned as revolutionary tools that promise to transform how we work. However, it's often difficult to separate fact from fiction in the world of Salesforce. The cloud giant thrives on staying ahead of technological advancements, which means reinventing itself every year, or even three times per year with the major releases.

Over the past decade, Salesforce product launches have been hit or miss, primarily miss. Offerings like IoT Cloud, Work.com, and NFT Cloud have faded into obscurity. This contrasts sharply with Salesforce's earlier successes, such as Service Cloud, the AppExchange, Force.com, Salesforce Lightning, and Chatter, which defined its first decade in business. One notable exception is Data Cloud. This product has seen significant success and now serves as the cornerstone of Salesforce's future AI and data strategy. With Salesforce's growth slowing quarter over quarter, the company must find new avenues to generate substantial revenue. Artificial intelligence seems to be its best shot at reclaiming a leadership position in the next technological wave.

While Salesforce has been an AI leader for over a decade, the hype surrounding last year's Dreamforce announcements didn't deliver the growth the company was hoping for. The Einstein Copilot Studio, comprising Copilot, Prompt Builder, and Model Builder, hasn't fully lived up to expectations. This can be attributed to a lack of AI readiness among enterprises, the relatively basic capabilities of large language models (LLMs), and the absence of fully developed use cases. In Salesforce's keynote, it was revealed that over 82 billion flows are launched weekly, compared to just 122,000 prompts executed.
While Flow has been around for years, this stat highlights that the use of AI-powered prompts is still far from mainstream: less than one prompt per Salesforce customer per week, on average. When ChatGPT launched at the end of 2022, many predicted the dawn of a new AI era, expecting a swift and dramatic transformation of the workplace. Two years later, it's clear that AI's impact has yet to fully materialize, especially when it comes to influencing global productivity and GDP.

However, Salesforce's latest release feels different. While AI agents may seem new to many, the concept has been discussed in AI circles for decades. Marc Benioff's recent statements during Dreamforce reflect a shift in strategy, including a direct critique of Microsoft's Copilot product, signaling the intensifying AI competition. This year's marketing strategy around Agentforce feels like it could be the transformative shift we've been waiting for. While tools like Salesforce Copilot will continue to evolve, agents capable of handling service cases, answering customer questions, and booking sales meetings instantly promise immediate ROI for organizations.

Is the Future of Salesforce in the Hands of Agents?

Despite the excitement, many questions remain. Are Salesforce customers ready for agents? Can organizations implement this technology effectively? Is Agentforce a real breakthrough or just another overhyped concept? Agentforce may not be vaporware: reports suggest that its development was influenced by Salesforce's acquisition of Airkit.AI, a platform that claims to resolve 90% of customer queries. Salesforce has even set up dedicated launchpads at Dreamforce to help customers start building their own agents. Yet concerns remain, especially regarding Salesforce's complexity, technical debt, and platform sprawl. These issues, highlighted in this year's Salesforce developer report, cannot be overlooked. Still, it's hard to ignore Salesforce's strategic genius.
The platform has matured to the point where it offers nearly every functionality an organization could need, though at times the components feel a bit disconnected. Salesforce is even hinting at usage-based pricing, with a potential $2 charge per conversation, an innovation that could reshape its pricing model.

Will Agents Be Salesforce's Key to Future Growth?

With so many unknowns, only time will tell whether agents will be the breakthrough Salesforce needs to regain the momentum of its first two decades. Regardless, agents appear to be central to the future of AI. Leading organizations like Copado are also launching their own agents, signaling that this trend will define the next phase of AI innovation. In today's macroeconomic environment, where companies are overstretched and workforce demands are high, AI's ability to streamline operations and improve customer service has never been more critical. Whoever cracks customer service AI first could lead the charge in the inevitable AI spending boom. We're all waiting to see if Salesforce has truly cracked the AI code. But one thing is certain: the race to dominate AI in customer service has begun, and Salesforce may be at the forefront.
OpenAI has firmly established itself as a leader in the generative AI space, with ChatGPT being one of the most well-known applications of AI today. Powered by the GPT family of large language models (LLMs), ChatGPT's primary models, as of September 2024, are GPT-4o and GPT-3.5. In August and September 2024, rumors surfaced about a new model from OpenAI, codenamed "Strawberry." Speculation grew as to whether this was a successor to GPT-4o or something else entirely. The mystery was resolved on September 12, 2024, when OpenAI launched its new o1 models, including o1-preview and o1-mini.

What Is OpenAI o1?

The OpenAI o1 family is a series of large language models optimized for enhanced reasoning capabilities. Unlike GPT-4o, the o1 models are designed to offer a different type of user experience, focusing more on multistep reasoning and complex problem-solving. As with all OpenAI models, o1 uses a transformer-based architecture that excels at tasks such as content summarization, content generation, coding, and answering questions. What sets o1 apart is its improved reasoning ability. Instead of prioritizing speed, the o1 models spend more time "thinking" about the best approach to solve a problem, making them better suited for complex queries. The o1 models use chain-of-thought prompting, reasoning step by step through a problem, and employ reinforcement learning techniques to enhance performance.

Initial Launch

On September 12, 2024, OpenAI introduced two versions of the o1 models: o1-preview and o1-mini.

Key Capabilities of OpenAI o1

OpenAI o1 can handle a variety of tasks, but it is particularly well suited to certain use cases because of its advanced reasoning functionality. There are several ways to access the o1 models, including ChatGPT and OpenAI's API, though as an early iteration the models carry several limitations.

How OpenAI o1 Enhances Safety

OpenAI released a System Card alongside the o1 models, detailing the safety and risk assessments conducted during their development.
This includes evaluations in areas like cybersecurity, persuasion, and model autonomy, and the o1 models incorporate several key safety features.

GPT-4o vs. OpenAI o1: A Comparison

Here's a side-by-side comparison of GPT-4o and OpenAI o1:

Feature | GPT-4o | o1 Models
Release Date | May 13, 2024 | Sept. 12, 2024
Model Variants | Single model | Two: o1-preview and o1-mini
Reasoning Capabilities | Good | Enhanced, especially in STEM fields
Performance Benchmarks | 13% on Math Olympiad | 83% on Math Olympiad, PhD-level accuracy in STEM
Multimodal Capabilities | Text, images, audio, video | Primarily text, with developing image capabilities
Context Window | 128K tokens | 128K tokens
Speed | Fast | Slower due to more reasoning processes
Cost (per million tokens) | Input: $5; Output: $15 | o1-preview: $15 input, $60 output; o1-mini: $3 input, $12 output
Availability | Widely available | Limited to specific users
Features | Includes web browsing, file uploads | Lacks some GPT-4o features, like web browsing
Safety and Alignment | Focus on safety | Improved safety, better resistance to jailbreaking

OpenAI o1 marks a significant advancement in reasoning capabilities, setting a new standard for complex problem-solving with LLMs. With enhanced safety features and the ability to tackle intricate tasks, the o1 models offer a distinct upgrade over their predecessors.
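For developers accessing o1 through the API, a request can be as simple as a single user message, since the model performs its chain-of-thought reasoning internally rather than needing step-by-step instructions in the prompt. The sketch below only builds the request payload as a plain dict (no network call); treating a lone user message as sufficient is a simplifying assumption for illustration.

```python
def build_o1_request(prompt: str, model: str = "o1-preview") -> dict:
    """Assemble a minimal chat-style request payload for an o1 model.

    No explicit "think step by step" instruction is included: the o1
    models reason internally before answering, unlike earlier GPT models
    that often benefited from chain-of-thought prompting in the text.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
```

A caller would pass this payload to an API client; the slower, reasoning-heavy response and the higher output-token billing noted in the table above are the trade-offs for the improved answers.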
Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more Top Ten Reasons Why Tectonic Loves the Cloud The Cloud is Good for Everyone – Why Tectonic loves the cloud You don’t need to worry about tracking licenses. Read more
The introduction of LAMs marks a significant advancement in AI, focusing on actionable intelligence. By enabling robust, dynamic interactions through function calling and structured output generation, LAMs are set to redefine the capabilities of AI agents across industries.
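The function-calling pattern that distinguishes LAMs from content-generating LLMs can be sketched as follows. This is a toy illustration: the "model output" is a hard-coded JSON string standing in for what an action model would emit, and the tool names and registry are invented.

```python
import json

# Toy registry of callable tools. A LAM is trained to emit structured
# calls that target entries like these instead of free-form text.
def send_email(to: str, subject: str) -> str:
    return f"email queued to {to}: {subject}"

def create_task(title: str) -> str:
    return f"task created: {title}"

TOOLS = {"send_email": send_email, "create_task": create_task}

def dispatch(model_output: str) -> str:
    """Parse a structured function call emitted by the model and run it."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]           # look up the named tool
    return fn(**call["arguments"])     # invoke it with the model's arguments

# Stand-in for a LAM's structured output (structured action, not prose):
emitted = ('{"name": "send_email", '
           '"arguments": {"to": "ava@example.com", "subject": "Renewal reminder"}}')
print(dispatch(emitted))  # email queued to ava@example.com: Renewal reminder
```

The design point is that the model's output is machine-actionable: because it names a function and supplies typed arguments, an agent runtime can execute it directly rather than asking a human to interpret generated text.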
Salesforce has announced several major updates to its Data Cloud at the annual Dreamforce conference, aimed at enhancing data analysis and boosting the capabilities of its new AI agents, Agentforce. These updates include support for unstructured data in audio and video formats, 50 new data connectors, and a low-latency pipeline to ensure faster AI responses. Salesforce describes Data Cloud as a customer data platform designed to unify data from various sources, providing actionable insights for sales, service, and marketing teams. The latest features will further improve this functionality, according to Rahul Auradkar, Salesforce's General Manager of Unified Data Services and Einstein.

Key Updates

1. Unstructured Data Support: Audio and Video Formats

Salesforce has introduced the ability to process unstructured audio and video data. This allows enterprises to analyze information from sources like customer calls, training sessions, product demos, and webinars. The unstructured data can enrich customer profiles, provide deeper insights into behavior, and enhance the performance of AI agents within Agentforce by improving response accuracy. IDC's Hayley Sutherland highlighted that vectorization of this content enables retrieval-augmented generation (RAG), enhancing the use of large language models (LLMs) to generate more effective and context-aware responses.

2. 50 New Data Connectors

Salesforce has expanded its connector library with 50 new options, raising the total to 200. This helps businesses integrate diverse data sources, facilitating improved personalization and decision-making. These connectors are also critical for feeding data to AI agents, enabling them to generate more accurate insights.

3. Sub-second Data Layer for Real-time Responses

A new sub-second data layer has been rolled out, allowing enterprises to ingest, unify, analyze, and act on data in real time.
This layer powers Einstein Personalization, enabling instant AI-driven recommendations and analytics. It's especially important for the low-latency needs of Agentforce's autonomous AI agents, ensuring faster, more accurate customer interactions.

4. Governance and Security Enhancements

Salesforce has introduced AI tagging and classification features for unstructured data, automating labeling and organization to ensure secure access. Policy-based governance will offer granular control over data access based on tags, metadata, and user attributes. Additionally, customer-managed encryption keys ensure data security, while the newly available private connect feature allows secure data sharing between Data Cloud and public cloud environments.

5. Data Cloud One and Hybrid Search

Data Cloud One, a new feature, will allow enterprises to connect multiple Salesforce accounts across different departments or regions into a single instance, simplifying data management. The hybrid search feature, which combines vector and keyword search, is designed to improve the speed and accuracy of finding industry-specific information. Other updates include Tableau Semantics, which helps organize data based on its meaning, and a new data community platform to foster collaboration among data professionals. These updates reflect Salesforce's commitment to enhancing data-driven decision-making and delivering powerful AI capabilities for enterprises.
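The retrieval step that vectorized unstructured data enables for RAG can be sketched with toy embeddings. In a real pipeline a learned embedding model would encode transcribed calls or webinars into a vector store; here bag-of-words vectors, and the document chunks themselves, are invented stand-ins.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words vector (real systems use learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented chunks, standing in for vectorized transcripts in a data store.
chunks = [
    "customer call about billing dispute on annual invoice",
    "webinar recording covering new product demo features",
    "training session on support escalation process",
]
index = [(c, embed(c)) for c in chunks]

def retrieve(query: str, k: int = 1) -> list:
    """Return the k chunks most similar to the query. In RAG, an LLM then
    answers the query grounded in these retrieved chunks."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

print(retrieve("question about an invoice billing issue"))
```

This is the mechanism behind "vectorization of this content enables RAG": once audio and video are transcribed and embedded, a relevance search like the one above selects the context the LLM sees, which is what makes its responses context-aware.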
Apple Unveils New AI Features at "Glowtime" Event

In typical fashion, Apple revealed its latest product updates on Monday with a pre-recorded keynote titled "Glowtime," referencing the glowing ring around the screen when Apple Intelligence is activated. Though primarily a hardware event, the real highlight was the suite of AI-powered features coming to the new iPhone models this fall. The 98-minute presentation covered updates to iPhones, AirPods, and the Apple Watch, with Apple Intelligence being the thread tying together user experiences across all devices. MacRumors has published a detailed list of all announcements, including the sleep apnea detection feature for the Apple Watch and new hearing health tools for AirPods Pro 2.

Key AI Developments for Brand Marketers

Apple Intelligence was first introduced at the WWDC event in June, focusing on using Apple's large language model (LLM) to perform tasks on-device with personalized results. It draws from user data in native apps like Calendar and Mail, enabling AI to handle tasks like image generation, photo searches, and AI-generated notifications. The keynote also introduced a new "Visual Intelligence" feature for iPhone 16 models, acting as a native visual search tool. By pressing the new "camera control" button, users can access this feature to perform searches directly from their camera, such as getting restaurant info or recognizing a dog breed. Apple's AI-powered visual search offers a strategic opportunity for brands. The information for local businesses is pulled from Apple Maps, which relies on sources like Yelp and Foursquare. Brands should ensure their listings are well-maintained on these platforms and consider optimizing their digital presence for visual search tools like Google Lens, which integrates with Apple's search.
The Camera as an Input Device and the Rise of Spatial Content

The camera's role as an input device has been expanding, with Apple emphasizing photography as a key feature of its new iPhones. This year, the iPhone 16 introduces a new camera control button, offering enhanced haptic feedback for smoother control. Third-party apps like Snapchat will also benefit from this addition, giving users more refined camera capabilities. More importantly, iPhone 16 models can now capture spatial content, including both photos and audio, optimized for the Vision Pro mixed-reality headset. Apple's move to integrate spatial content aligns with its goal to position the iPhone as a professional creator tool. Brands can capitalize on this by exploring augmented reality (AR) features or creating immersive user-generated content experiences.

Apple's Measured Approach to AI

While Apple is clearly pushing AI, it is taking a cautious, phased approach. Though the new iPhones will hit the market soon, the full range of Apple Intelligence features will roll out gradually, starting in October with tools like the AI writing assistant and photo cleanup. More advanced features will debut next spring. This measured approach allows Apple to fine-tune its AI, avoiding rushed releases that could compromise user experience. For brands, this offers a lesson in pacing AI adoption: prioritize quality and customer experience over speed. Rather than rushing to integrate AI, companies should take time to understand how it can meaningfully enhance user interactions, focusing on trust and consistency to maintain customer loyalty. By following Apple's lead and gradually introducing AI capabilities, brands can build trust, sustain anticipation, and ensure they offer technology that genuinely improves the customer experience.
Reflection 70B: HyperWrite's Breakthrough AI That Thinks About Its Own Thinking

In the rapid advancement of AI, we've seen models that can write, code, and even create art. Now an AI has arrived that does something genuinely new: it reflects on its own thinking. Enter Reflection 70B, HyperWrite's latest large language model (LLM), which is not just pushing the boundaries of AI but redefining them.

Tackling AI Hallucinations: A Critical Issue

AI hallucinations, the generation of false or misleading information, are like digital conspiracy theories: they sound plausible until you pause to scrutinize them. And unlike people, AI doesn't get embarrassed when it's wrong; it confidently continues, which is more than frustrating; it's potentially dangerous. As AI plays an increasing role in everything from content creation to medical diagnoses, having models that produce reliable, fact-based outputs is vital.

Reflection 70B: An AI That Checks Its Own Work

HyperWrite's Reflection 70B is built to directly address this issue. It does something uniquely human: it reflects on its thought process. The model is designed to check its own work, functioning like an AI with a conscience, minus the existential crisis.

Reflection-Tuning: The Game-Changing Technology

At the core of Reflection 70B is a new technique called Reflection-Tuning, a major shift in how AI processes information. Here's how it works: the model first generates an initial answer along with the reasoning behind it, then inspects that reasoning for errors, and finally corrects any mistakes it finds. This entire process happens in real time, before the model delivers its final answer, ensuring a higher degree of accuracy.

Why Reflection 70B is a Game-Changer

What sets this model apart is that its self-correction happens before the answer is delivered, making its outputs more reliable than those of models that cannot check their own work.

Real-World Applications: How Reflection 70B Can Improve Lives

Reflection 70B's accuracy and self-correction abilities can have a transformative impact in fields where hallucinations are costly, from content creation to medical decision support.

Looking Forward: What's Next for Reflection 70B?
HyperWrite is already working on Reflection 405B, an even more advanced model that promises to further elevate AI accuracy and reliability. They're not just building a better AI; they're redefining how AI works.

Conclusion: The AI That Reflects

Reflection 70B marks a major leap in AI by introducing self-reflection and correction capabilities. This model isn't just smarter; it's more trustworthy. As AI continues to permeate our daily lives, this kind of reliability is no longer optional; it's essential. HyperWrite's Reflection 70B gives us a glimpse into a future where AI isn't just intelligent but wise: an AI that understands the information it generates and ensures it's accurate. This is the kind of AI we've been waiting for, and it's a future worth getting excited about.
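The generate-reflect-correct loop described above can be sketched with a toy generator and checker. To be clear about assumptions: Reflection-Tuning trains the model to do this internally in one pass, whereas the "draft" and "reflect" functions below are hand-written stand-ins that just illustrate the control flow, with a deliberately buggy solver playing the role of a hallucinating model.

```python
# Toy sketch of a generate -> reflect -> correct loop.
# The "model" is a deliberately buggy arithmetic solver; the reflection
# step re-derives the answer and replaces the draft when they disagree.

def draft_answer(a: int, b: int) -> int:
    """Stand-in for the model's first-pass answer (contains a bug)."""
    return a + b + 1  # off-by-one "hallucination"

def reflect(a: int, b: int, draft: int) -> tuple:
    """Stand-in for the reflection pass: re-check the reasoning.

    Returns (draft_was_correct, recomputed_answer)."""
    recomputed = a + b
    return (draft == recomputed, recomputed)

def answer_with_reflection(a: int, b: int) -> int:
    """Deliver the draft only if reflection confirms it; else the correction."""
    draft = draft_answer(a, b)
    ok, corrected = reflect(a, b, draft)
    return draft if ok else corrected

print(answer_with_reflection(2, 3))  # 5, after self-correction
```

The key property, mirrored in the sketch, is that the check runs before anything is shown to the user: the erroneous draft never leaves the loop.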
Salesforce and IBM are advancing their longstanding partnership by focusing on transforming sales and service processes with AI, particularly for organizations in regulated industries that seek to leverage enterprise data for automation. The collaboration aims to deliver pre-built AI agents and tools that integrate seamlessly within customers' IT environments, enabling them to use their proprietary data while maintaining full control over their systems. By merging Salesforce's Agentforce, a suite of autonomous agents, with IBM's watsonx capabilities, the partnership will empower businesses to utilize AI agents within their daily applications. IBM's watsonx Orchestrate will enhance Agentforce with autonomous agents that improve productivity, security, and regulatory compliance. Additionally, IBM customers will have the ability to interact with these agents via Slack, facilitating dynamic conversational experiences. Planned integrations between Salesforce Data Cloud and IBM Data Gate for watsonx will enable access to business data from IBM Z mainframes and Db2 databases, supporting AI workflows across the Agentforce platform. This integration will enhance data analysis and fuel AI-driven processes. Customers will also benefit from a broader range of AI model and deployment options through integration with IBM watsonx.ai. This will include access to IBM's Granite foundation models, designed for enterprise applications.

Enhancing Business Automation with Tailored Autonomous Agents

Through the Agentforce Partner Network, businesses can develop and customize AI agents to interact with various enterprise tools and platforms. These agents are designed to perform multi-step tasks, make decisions based on triggers or interactions, and seek user approval for actions beyond their scope. They will help automate routine tasks, increase efficiency, streamline operations, and enhance customer service.
IBM's watsonx Orchestrate will integrate with Salesforce Agentforce to develop new pre-built agents for specific business challenges. These agents will leverage data and AI from both Salesforce and IBM to address a range of business needs.

Expanding Data Integration for AI

Salesforce and IBM are also advancing data integration strategies through the Zero Copy integration between Salesforce Data Cloud and watsonx.data. This allows data to remain in place while being utilized for AI use cases, without duplication. Joint customers, particularly in financial services, insurance, manufacturing, and telecommunications, will leverage this integration to access and use mainframe datasets from IBM Z and Db2 databases on Salesforce's platform. IBM will be the first Zero Copy partner to facilitate data flow between IBM Z and Salesforce Cloud, offering secure access to critical enterprise data and enhancing AI agent functionality. With IBM Z handling over 70% of global transaction value, this partnership ensures high standards of security, privacy, and compliance.

Improving Efficiency with Slack and IBM watsonx Orchestrate

IBM customers will now engage with watsonx Orchestrate agents directly within Slack, supporting AI app experiences with a new interface. This integration allows for seamless interaction with AI agents, automating tasks and enhancing collaboration across systems without leaving Slack.

Expanding AI Model and Deployment Options with watsonx.ai

A new integration with watsonx.ai will enable customers to deploy customized large language models (LLMs) within Salesforce Model Builder. This includes access to a range of third-party models and IBM's Granite foundation models, which offer transparency and compliance with regulatory requirements. IBM Granite models are expected to be available within the Salesforce ecosystem by October.
Partnering with IBM Consulting for Tailored AI Solutions

IBM Consulting will leverage its expertise in Salesforce and AI to help joint customers accelerate the implementation of Agentforce. Through IBM Consulting Advantage, the AI-powered delivery platform, businesses will receive support in selecting, customizing, deploying, and scaling AI agents to meet specific industry needs.

Customer Perspective

Tectonic is transforming its service stations into preferred journey stops with the help of Salesforce and IBM. The collaboration offers unprecedented flexibility in AI utilization, enabling Tectonic to deliver hyper-personalized services through Agentforce and IBM's watsonx AI, enhancing customer engagement and satisfaction.
Copado Unveils AI Agents to Automate Key DevOps Tasks for Salesforce Applications

Copado has introduced a suite of generative AI agents designed to automate common tasks that DevOps teams frequently encounter when building and deploying applications on Salesforce's software-as-a-service (SaaS) platform. This announcement comes ahead of the Dreamforce 2024 conference hosted by Salesforce. These AI agents are the result of over a decade of data collection by Copado, according to David Brooks, Copado's vice president of products. The initial AI agents will focus on code generation and test automation, with future agents tackling user story creation, deployment scripts, and application environment optimization. Unlike AI co-pilot tools that assist with code generation, Copado's agents will fully automate tasks, Brooks explained. DevOps teams will be able to orchestrate these AI agents to streamline workflows, making best DevOps practices more accessible to a wider range of development teams. As AI continues to reshape DevOps, more tasks will be automated using agentic AI. This approach involves creating AI agents trained on a specific, narrow dataset, ensuring higher accuracy compared to general-purpose large language models (LLMs) that pull data from across the web. While it's unclear how quickly agentic AI will transform DevOps, Brooks noted that in the future, teams will consist of both human engineers and AI agents assigned to specific tasks. DevOps engineers will still be essential for overseeing the accuracy of these tasks, but many of the repetitive tasks that often lead to burnout will be automated. As the burden of routine work decreases, organizations can expect the pace of code writing and application deployment to significantly accelerate. This could lead to a shift in how DevOps teams approach application backlogs, enabling the deployment of more applications that might have previously been sidelined due to resource constraints.
In the interim, Brooks advises DevOps teams to begin identifying which routine tasks can be assigned to AI agents. Doing so will free up human engineers to manage workflows at a scale that was once unimaginable, positioning teams to thrive in the AI-driven future of DevOps.
Salesforce has introduced two advanced AI models—xGen-Sales and xLAM—designed to enhance its Agentforce platform, which seamlessly integrates human agents with autonomous AI for greater business efficiency. xGen-Sales, a proprietary AI model, is tailored for sales tasks such as generating customer insights, summarizing calls, and managing pipelines. By automating routine sales activities, it enables sales teams to focus on strategic priorities. This model enhances Agentforce’s capacity to autonomously handle customer interactions, nurture leads, and support sales teams with increased speed and precision. The xLAM (Large Action Model) family introduces AI models designed to perform complex tasks and trigger actions within business systems. Unlike traditional Large Language Models (LLMs), which focus on content generation, xLAM models excel in function-calling, enabling AI agents to autonomously execute tasks like initiating workflows or processing data without human input. These models vary in size and capability, from smaller, on-device versions to large-scale models suitable for industrial applications. Salesforce AI Research developed the xLAM models using APIGen, a proprietary data-generation pipeline that significantly improves model performance. Early xLAM models have already outperformed other large models in key benchmarks. For example, the xLAM-8x22B model ranked first in function-calling tasks on the Berkeley Leaderboards, surpassing even larger models like GPT-4. These AI innovations are designed to help businesses scale AI-driven workflows efficiently. Organizations adopting these models can automate complex tasks, improve sales operations, and optimize resource allocation. The non-commercial xLAM models are available for community review on Hugging Face, while proprietary versions will power Agentforce. xGen-Sales has completed its pilot phase and will soon be available for general use. 
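Function-calling benchmarks such as the Berkeley leaderboard mentioned above score a model on whether the call it emits matches the expected tool invocation. A minimal scoring sketch, with invented ground-truth calls and model outputs:

```python
import json

def calls_match(expected: str, emitted: str) -> bool:
    """Compare two JSON function calls structurally, ignoring key order."""
    return json.loads(expected) == json.loads(emitted)

# Invented evaluation pairs: (ground-truth call, model-emitted call).
cases = [
    ('{"name": "get_weather", "arguments": {"city": "Austin"}}',
     '{"arguments": {"city": "Austin"}, "name": "get_weather"}'),   # match
    ('{"name": "get_weather", "arguments": {"city": "Austin"}}',
     '{"name": "get_weather", "arguments": {"city": "Dallas"}}'),   # mismatch
]

accuracy = sum(calls_match(e, m) for e, m in cases) / len(cases)
print(f"function-calling accuracy: {accuracy:.0%}")  # 50%
```

Parsing both sides to JSON before comparing is the reason the first pair counts as correct despite different key order: what matters for an action model is whether the right function gets the right arguments, not the surface form of the string.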