Vector Database Archives - gettectonic.com - Page 2
Deep Dive Summer 24 Release

Get ready, Salesforce fans! The Summer ’24 release is here, and it’s like Christmas morning for tech geeks. We’re talking about new features, enhancements, and improvements that will make you wonder how you ever lived without them. This Tectonic insight is your ultimate guide to the exciting updates, changes, and key considerations in this release. So hang on tight to your keyboard and let’s dive into the treat bag of goodies coming your way!

Key Highlights

What’s New in Einstein AI?

1. Einstein for Flow
Meet your new best friend for building Salesforce workflows. Just describe what you need in plain English, and Einstein will whip up the flow for you. Say “Notify sales reps when a lead converts,” and boom, it’s done. Einstein for Flow makes complex processes feel like a walk in the park, letting you deliver solutions faster than you can say “workflow.”

2. Einstein for Formulas
No more tearing your hair out over formula syntax errors. Einstein for Formulas will not only tell you what’s wrong but also suggest fixes, saving you endless hours of debugging and speeding up formula creation.

UI/UX Enhancements

1. Add New Custom Fields to Dynamic Forms-Enabled Pages
Say goodbye to limitations! You can now add new custom fields directly to Dynamic Forms-enabled pages, aligning fields with your ever-changing business needs.

2. Use Blank Spaces to Align Fields on Dynamic Forms-Enabled Pages
Finally, a way to make your Dynamic Forms pages look neat and tidy, with blank spaces for perfect alignment.

3. Set Conditional Visibility for Individual Tabs in Lightning App Builder
Now you can make specific tabs visible based on user profiles, record types, or other criteria. Customization just got a whole lot more precise.

4. Create Rich Text Headings in Lightning App Builder
Make your headings pop with bold, italics, and varied font sizes. Your Lightning pages are about to get a visual upgrade.

Flow Updates

1. Automation Lightning App
A one-stop shop for managing and executing all your automation tools and processes.

2. Lock and Unlock Records with Action
Gain more control over your processes by locking records during critical stages and unlocking them when done.

3. Check for Matching Records (Upsert) When Creating Records
Avoid duplicates by checking for existing records before creating new ones. One can never have too many de-dupe tools.

4. Transform Your Data in Flows (Generally Available)
Now generally available, the Transform element in Flow Builder lets you perform calculations, data transformations, and more.

Admin Enhancements

1. Field History Tracking
Manage tracked objects and fields more efficiently with a centralized page in Setup.

2. See What’s Enabled in Permission Sets and Permission Set Groups (Generally Available)
Enhanced permission set viewing improves visibility and control over security configurations.

3. Get a Summary of a User’s Permissions and Access
Quickly view user permissions, public groups, and queues from the user’s detail page. Salesforce is simplifying Permission Set management by phasing out Profiles.

Data Cloud Vector Database

Vector search capabilities allow the creation of searchable “vector embeddings” from unstructured data, enhancing AI applications’ understanding of semantic similarities and context.
The Salesforce Summer ’24 release is packed with features designed to enhance your Salesforce experience. From a sleek new interface to powerful automation tools, enhanced analytics, and expanded integration options, this release aims to elevate workflow efficiency and data protection. Jump into the exciting updates, and let’s make automation simpler and more user-friendly together!

Related Posts

AI Automated Offers with Marketing Cloud Personalization
AI-Powered Offers Elevate the relevance of each customer interaction on your website and app through Einstein Decisions. Driven by a… Read more

Salesforce OEM AppExchange
Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more

The Salesforce Story
In Marc Benioff’s own words: how did salesforce.com grow from a start-up in a rented apartment into the world’s… Read more

Salesforce Jigsaw
Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for… Read more

Adopt a Large Language Model

In 2023, Algo Communications, a Canadian company, faced a significant challenge. With rapid growth on the horizon, it struggled to train customer service representatives (CSRs) quickly enough to keep pace. To address this, Algo turned to an innovative solution: generative AI, adopting a large language model (LLM) to accelerate the onboarding of new CSRs.

However, to ensure CSRs could accurately and fluently respond to complex customer queries, Algo needed more than a generic, off-the-shelf LLM. These models, typically trained on public internet data, lack the specific business context required for accurate answers. This led Algo to retrieval-augmented generation, or RAG.

Many people have already used generative AI models like OpenAI’s ChatGPT or Google’s Gemini (formerly Bard) for tasks like writing emails or crafting social media posts. But achieving the best results can be challenging without mastering the art of crafting precise prompts. An AI model is only as effective as the data it’s trained on: for optimal performance, it needs accurate, contextual information rather than generic data. Off-the-shelf LLMs often lack up-to-date, reliable access to your specific data and customer relationships. RAG addresses this by embedding the most current and relevant proprietary data directly into LLM prompts.

RAG isn’t limited to structured data like spreadsheets or relational databases. It can retrieve all types of data, including unstructured data such as emails, PDFs, chat logs, and social media posts, enhancing the AI’s output quality.

How RAG Works

RAG enables companies to retrieve and utilize data from various internal sources for improved AI results. By using your own trusted data, RAG reduces or eliminates hallucinations and incorrect outputs, ensuring responses are relevant and accurate.
This process involves a specialized database, called a vector database, which stores data in a numerical format suitable for AI and retrieves it when prompted. “RAG can’t do its job without the vector database doing its job,” said Ryan Schellack, Director of AI Product Marketing at Salesforce. “The two go hand in hand. Supporting retrieval-augmented generation means supporting a vector store and a machine-learning search mechanism designed for that data.”

RAG, combined with a vector database, significantly enhances LLM outputs. However, users still need to understand the basics of crafting clear prompts.

Faster Responses to Complex Questions

In December 2023, Algo Communications began testing RAG with a few CSRs, using a small sample of about 10% of its product base. They incorporated vast amounts of unstructured data, including chat logs and two years of email history, into their vector database. After about two months, CSRs became comfortable with the tool, leading to a wider rollout.

In just two months, Algo’s customer service team improved case resolution times by 67%, allowing them to handle new inquiries more efficiently. “Exploring RAG helped us understand we could integrate much more data,” said Ryan Zoehner, Vice President of Commercial Operations at Algo Communications. “It enabled us to provide detailed, technically savvy responses, enhancing customer confidence.”

RAG now touches 60% of Algo’s products and continues to expand. The company is continually adding new chat logs and conversations to the database, further enriching the AI’s contextual understanding. This approach has halved onboarding time, supporting Algo’s rapid growth.

“RAG is making us more efficient,” Zoehner said. “It enhances job satisfaction and speeds up onboarding. Unlike other LLM efforts, RAG lets us maintain our brand identity and company ethos.” RAG has also allowed Algo’s CSRs to focus more on personalizing customer interactions.
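Loading two years of email history into a vector database starts with splitting long documents into smaller, overlapping pieces so each embedding covers one coherent slice of context. A minimal sketch of that preprocessing step (illustrative only; real pipelines typically split on sentences or tokens rather than raw words):

```python
def chunk_text(text: str, max_words: int = 50, overlap: int = 10) -> list[str]:
    # Slide a window of max_words across the document, stepping by
    # max_words - overlap so adjacent chunks share context at the seam.
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words])
            for i in range(0, max(len(words) - overlap, 1), step)]

# A 120-word document yields three overlapping 50-word chunks.
email_thread = " ".join(f"w{i}" for i in range(120))
chunks = chunk_text(email_thread)
print(len(chunks))
```

Each chunk is then embedded and stored; the overlap keeps a sentence that straddles a boundary retrievable from either side.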
“It allows our team to ensure responses resonate well,” Zoehner said. “This human touch aligns with our brand and ensures quality across all interactions.”

Write Better Prompts

If you want to learn how to craft effective generative AI prompts or use Salesforce’s Prompt Builder, check out Trailhead, Salesforce’s free online learning platform, starting with the trail Get Started with Prompts and Prompt Builder.

Gen AI Unleashed With Vector Database

Salesforce Unveils Data Cloud Vector Database with GenAI Integration

Salesforce has officially launched its Data Cloud Vector Database, leveraging GenAI to rapidly process a company’s vast collection of PDFs, emails, transcripts, online reviews, and other unstructured data. Rahul Auradkar, Executive Vice President and General Manager of Salesforce Unified Data Services and Einstein Units, highlighted the efficiency gains in a one-on-one briefing with InformationWeek, demonstrating the new capabilities through a live demo.

Enhanced Efficiency and Data Utilization

The new Data Cloud integrates with the Einstein 1 platform, combining unstructured and structured data for rapid analysis by sales, marketing, and customer service teams. This integration significantly enhances the accuracy of Einstein Copilot, Salesforce’s enterprise conversational AI assistant.

Auradkar demonstrated how a customer service query could retrieve multiple relevant results within seconds. This process, which typically takes hours of manual effort, now leverages unstructured data, which makes up 90% of customer data, to deliver swift and accurate results. “This advancement allows our customers to harness the full potential of 90% of their enterprise data—unstructured data that has been underutilized or siloed—to drive use cases, AI, automation, and analytics experiences across both structured and unstructured data,” Auradkar explained.

Comprehensive Data Management

Using Salesforce’s Einstein 1 platform, Data Cloud enables users to ingest, store, unify, index, and perform semantic queries on unstructured data across all applications. This data encompasses diverse unstructured content from websites, social media platforms, and other sources, resulting in more accurate outcomes and insights.
Auradkar emphasized, “This represents an order of magnitude improvement in productivity and customer satisfaction. For instance, a large shipping company with thousands of customer cases can now categorize and access necessary information far more efficiently.”

Additional Announcements

Alongside the Vector Database, Salesforce introduced several new AI and Data Cloud features. Auradkar noted that these innovations enhance Salesforce’s competitive edge by prioritizing flexibility and enabling customers to take control of their data. “We’ll continue on this journey,” Auradkar said. “Our future investments will focus on how this product evolves and scales. We’re building significant flexibility for our customers to use any model they choose, including any large language model.”

For more insights and updates, visit Salesforce’s official announcements and stay tuned for further developments.

Data Cloud Vector Database and Hyperforce

Salesforce World Tour Highlights: Data Cloud Vector Database and Hyperforce

At the Salesforce World Tour on June 6, 2024, at the ExCeL Centre in east London, the focus was on advancements in the Data Cloud and Slack platforms. The event, sponsored by AWS, Cognizant, Deloitte, and PwC, showcased significant innovations, particularly for GenAI enthusiasts.

Vector Database in Data Cloud

A key highlight was the announcement of the general availability of a Vector Database capability within Data Cloud, integrated into the Einstein 1 Platform. This capability enhances Salesforce’s CRM platform, Customer 360, by combining structured and unstructured data about end users. The Vector Database collects, ingests, and unifies data, allowing enterprises to deploy GenAI across all applications without needing to fine-tune an off-the-shelf large language model (LLM).

Addressing Data Fragmentation

Salesforce reports that approximately 80% of customer data is dispersed across corporate departments in an unstructured format, trapped in PDFs, emails, chat conversations, and transcripts. The Vector Database unifies this fragmented data, creating a comprehensive profile of the customer journey. This unified approach not only improves customer engagement but also enhances organizational agility: by consolidating data from all corporate silos, companies can quickly and efficiently address issues such as product recalls and returns.

During the keynote, Salesforce emphasized the importance of personalization in customer engagement and the benefits of deploying GenAI in customer-facing sectors. The event highlighted the need to overcome fear and mistrust of GenAI, and showcased how enterprises can enhance employee productivity through upskilling in GenAI technologies.

Hyperforce: Enhancing Data Residency and Compliance
One notable announcement was the general availability of Hyperforce, a solution designed to address data residency issues by bringing all Salesforce applications under the same compliance, security, privacy, and scalability standards. Built for the public cloud and composed of code rather than hardware, Hyperforce ensures safe delivery of applications worldwide, offering a common layer for deploying all application stacks and handling data compliance in a fragmented technology landscape.

Salesforce AI Center

The Salesforce AI Center was also introduced at the event. The first of its kind, located in the Blue Fin Building near Blackfriars, London, the center will support AI experts, Salesforce partners, and customers, facilitating training and upskilling programs. Set to open on June 18, 2024, it aims to upskill 100,000 developers worldwide and is part of Salesforce’s $4 billion investment in the UK and Ireland.

Industry Reactions and Future Prospects

GlobalData senior analyst Beatriz Valle commented on Salesforce’s continued integration of GenAI across its portfolio, including platforms like Tableau, Einstein for analytics, and Slack for collaboration. According to Salesforce, the Data Cloud tool leverages all metadata in the Einstein 1 Platform, connecting unstructured and structured data, reducing the need for fine-tuning LLMs, and enhancing the accuracy of results delivered by Einstein Copilot, Salesforce’s conversational AI assistant.

Vector databases, while not new, have gained prominence with the GenAI revolution. They power the retrieval-augmented generation (RAG) technique, linking proprietary data with large language models like OpenAI’s GPT-4 and enabling enterprises to generate more accurate results. Competitors such as Oracle, Amazon, Microsoft, and Google also offer vector databases, but Salesforce’s early investments in GenAI are proving fruitful with the launch of the Data Cloud Vector Database.
Salesforce’s AI-powered integration solutions, highlighted during the World Tour, underscore the company’s commitment to advancing digital transformation. By leveraging GenAI and innovative tools like the Vector Database and Hyperforce, Salesforce is enabling enterprises to overcome the challenges of data fragmentation and compliance, paving the way for a more agile and competitive digital future.

AI Design Beyond the Chatbot

As AI continues to advance, designers, builders, and creators are confronted with profound questions about the future of applications and how users will engage with digital experiences.

Generative AI has opened up vast possibilities, empowering people to use AI for tasks such as writing articles, generating marketing materials, building teaching assistants, and summarizing data. Alongside its benefits, however, there are challenges. Sometimes generative AI produces unexpected or inaccurate responses, a phenomenon known as hallucination. In response, approaches like retrieval augmented generation (RAG) have emerged as effective solutions. RAG leverages a vector database, like SingleStore, to retrieve relevant information and provide users with contextually accurate responses.

Looking ahead, the evolution of AI may lead to a future where users interact with a central LLM operating system, fostering more personalized and ephemeral experiences. Concepts like Mercury OS offer glimpses into this potential future. We can also anticipate the rise of multimodal experiences, including voice and gesture interfaces, making technology more ubiquitous in our lives. Imran Chaudhri’s demonstration of a screen-less future, where humans interact with computers through natural language, exemplifies this trend.

Amid these exciting prospects, the current state of AI integration in businesses varies. While some are exploring innovative ways to leverage AI, others simply add AI chat interfaces without considering contextual integration. To harness AI effectively, it’s crucial to identify the right use cases and prioritize user value: AI should enhance experiences by reducing task time, simplifying tasks, or personalizing experiences.

Providing contextual assistance is another key aspect. AI models can offer tailored suggestions and recommendations based on user context, enriching the user experience.
Notion and Coda exemplify this by seamlessly integrating AI recommendations into user workflows. Optimizing for creativity and control also ensures users feel empowered in creation experiences; tools like Adobe Firefly strike a balance between creative freedom and control over generated content.

Building good prompts is essential for obtaining quality results from AI models. Educating users on how to construct effective prompts, and managing expectations about AI limitations, are critical considerations.

Ultimately, as AI becomes more integrated into daily workflows, it’s vital to ensure seamless integration into user experiences. Responsible AI design requires ongoing dialogue and exploration to navigate this rapidly evolving landscape effectively.
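One widely used pattern for constructing effective prompts is to state the model’s role, the task, the relevant context, and the desired output format explicitly. A minimal sketch follows; the template wording and the example values are illustrative, not tied to any particular product.

```python
def make_prompt(role: str, task: str, context: str, output_format: str) -> str:
    # Role, task, context, and format each get an explicit line, which
    # tends to produce more predictable output than a one-line request.
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format: {output_format}"
    )

print(make_prompt(
    role="a support agent for a telecom company",
    task="draft a reply to a customer whose router keeps disconnecting",
    context="known issue on firmware 2.1; fixed by the 2.3 update",
    output_format="a short, friendly email",
))
```

Templates like this are also a natural place to inject retrieved facts, which is exactly where RAG plugs in.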

Summer 24 The AI Release

Salesforce Unveils Summer 2024 Release with Generative AI at the Forefront

Salesforce has announced its Summer 2024 release, featuring generative AI (GenAI) as a key highlight. Set to be generally available on June 17, 2024, this release promises enhanced productivity and access to large language models (LLMs) on an open platform. Read on to see why we call Summer ’24 the AI release.

Key Features of the Summer 2024 Release

1. Bring Your Own LLM Expansion
2. Slack AI
3. Zero Copy Integration with Amazon Redshift
4. Vector Database
5. Data Cloud for Commerce
6. Digital Wallet

Enhanced Security with Einstein Trust Layer

The Einstein Trust Layer ensures enhanced protection for customer and company data, making the new features more secure.

Upcoming Pre-Summer Releases

In addition to the major features coming in June, Salesforce has already introduced several innovations, including a Unified Knowledge solution and a Salesforce–Vonage partnership.

Conclusion

Salesforce’s Summer 2024 release is packed with generative AI enhancements, robust integrations, and new tools aimed at boosting productivity, security, and data insights. With features gradually rolling out and pre-summer innovations already available, Salesforce continues to lead in delivering cutting-edge AI solutions to its users.

Public Sector Einstein 1 for Service

Salesforce, a prominent provider of cloud-based software solutions, has unveiled Public Sector Einstein 1 for Service, a specialized software platform tailored for government employees. Targeting governmental customer service improvements, the offering is built on Salesforce’s Einstein 1 platform and integrates a variety of artificial intelligence-driven capabilities aimed at streamlining administrative tasks within the public sector. It is designed to leverage data and automation to improve worker efficiency, reduce or eliminate repetitive tasks, and improve workers’ ability to interact with systems, data, and the people they serve.

Public Sector Einstein 1 for Service presents a suite of AI-powered features crafted to enhance efficiency and productivity for government entities. These encompass Caseworker Narrative Generation, which uses generative AI to synthesize data summaries; Service Cloud Voice, enabling real-time transcription of conversations; and Einstein Activity Capture for Public Sector, which documents case interactions through natural language processing. The platform also incorporates Data Cloud for Public Sector and Interaction Notes for Public Sector, providing comprehensive note-taking functionality.

Salesforce’s Executive Vice President and General Manager for the Public Sector, Nasi Jazayeri, underscored the significance of harnessing trusted AI to enhance operational effectiveness, data management, and service delivery for government agencies, empowering employees to better serve constituents.
Having previously delivered FedRAMP-compliant products, including Field Service and Security Center, Salesforce’s newest solution uses trusted conversational and generative AI (GenAI) to improve agent efficiency. It promises public sector organizations the ability to swiftly generate case reports, record real-time call transcriptions, and document and format case interactions, all through a single unified solution.

Another key aspect of the tool is the inclusion of Salesforce’s Data Cloud, which is designed to capture, connect, and harmonize an organization’s entire corpus of data, from sources including benefits, education, and healthcare, into a common data model. This can be used to create unified constituent profiles that serve as a single source of truth for the organization, enabling it to personalize outreach and interactions.

A new feature is Interaction Notes for Public Sector, which allows caseworkers to take detailed notes of their meetings and conversations with constituents or other case participants, specify the confidentiality level of the notes, add action items or next steps, and then search for and filter summaries to find notes from previous interactions, all in one place. This feature takes a common practice at many public sector agencies and organizes information that is often lost when managed through manual processes. It also brings in the Salesforce Vector Database, allowing public service organizations to create specific profiles for their constituents and personalize their customer service offerings accordingly.

If you have contemplated adding Salesforce Nonprofit Cloud, check out Tectonic’s Salesforce Implementation Solutions.

Salesforce Spiff Announced

Salesforce unveiled Salesforce Spiff yesterday, introducing incentive compensation management directly into the world’s leading AI CRM to automate commissions and boost seller motivation. With this enhancement, Sales Cloud now offers sellers and sales leaders a comprehensive growth platform covering the entire journey from pipeline development to paycheck delivery.

Recently integrated into Salesforce’s ecosystem through acquisition, Spiff empowers organizations to increase revenue by aiding sales leaders in managing intricate incentive compensation plans and understanding the multiple factors influencing revenue performance. The product boasts an intuitive user interface, real-time visibility, transparency into critical financial data, comprehensive analytics and reporting capabilities, and seamless integration with other Salesforce applications.

Significance of Salesforce Spiff

Many organizations struggle with setting accurate quotas for sales compensation programs, with 64% citing this as a major challenge. Incentive-based pay is a fundamental component of total compensation, with 90% of top-performing companies employing incentive programs to reward sales associates. Overcoming these hurdles is vital for optimizing sales performance and achieving organizational objectives. Additionally, incentive packages often vary by level and business objective, making manual management challenging without compensation management technology.

Innovative Features of Salesforce Spiff

Commission Estimator: Sales reps can view estimated commissions and align them with business objectives while creating customer quotes in Sales Cloud.

Rep Dashboards & Mobile App: Real-time dashboards allow sales reps to track commission trajectories seamlessly within their workflows.
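Spiff’s actual calculation engine is proprietary, but the kind of tiered commission logic such tools automate can be illustrated with a small sketch. The rates and tier thresholds below are invented for the example.

```python
def estimate_commission(deal_amount: float, quota_attainment: float) -> float:
    # Illustrative accelerator tiers (not Spiff's actual logic): the
    # commission rate steps up once a rep passes 100% of quota, and
    # again at 150%.
    if quota_attainment >= 1.5:
        rate = 0.10
    elif quota_attainment >= 1.0:
        rate = 0.08
    else:
        rate = 0.05
    return round(deal_amount * rate, 2)

# A rep at 120% of quota closing a $40,000 deal lands in the 8% tier.
print(estimate_commission(40_000, quota_attainment=1.2))
```

Real plans layer on splits, clawbacks, and per-product rates, which is why visual plan builders like Spiff’s Commission Designer exist.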
Time-saving Tools for Commission Administrators: Salesforce Spiff Commission Designer: Employ a low-code commission builder to visualize the potential impact of various plan amendments. Salesforce Spiff Assistant: Leverage a conversational AI assistant to gain quick insights into sales plans, rules, and calculations. This tool provides logic, error, and filter explanations as well as formula optimization in natural language, simplifying plan building and management. Salesforce’s Perspective on Spiff: “Sales leaders understand the critical role of compensation in driving sales rep behavior. The challenge lies in aligning compensation plans with desired outcomes while navigating data across fragmented point solutions,” stated Ketan Karkhanis, EVP & GM of Sales Cloud. “Spiff bridges the gap between what sellers desire—transparent compensation—and what sales leaders seek—compensation planning integrated into CRM that aligns behaviors with strategic outcomes.” “One of our biggest challenges was engaging our sales reps with their compensation plans to motivate them to achieve their goals. Spiff has provided us with a platform to showcase our commitment to our culture and employees. Spiff has truly transformed our commission program,” Lindsey Sanford, Senior Director of Sales and Marketing at RadNet Availability: Salesforce Spiff will be accessible as an add-on for Sales Cloud customers in the upcoming months. Non-Salesforce customers can also purchase the product by visiting Salesforce.com/salesforcespiff. Like Related Posts AI Automated Offers with Marketing Cloud Personalization AI-Powered Offers Elevate the relevance of each customer interaction on your website and app through Einstein Decisions. Driven by a Read more Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. 


Generative AI Prompts with Retrieval Augmented Generation

By now, you’ve likely experimented with generative AI large language models (LLMs) such as OpenAI’s ChatGPT or Google’s Gemini to aid in composing emails or crafting social media content. Yet achieving optimal results can be challenging, particularly if you haven’t mastered the art and science of formulating effective prompts. The effectiveness of an AI model hinges on its training data: to excel, it requires precise context and substantial factual information rather than generic details. This is where Retrieval Augmented Generation (RAG) comes into play, enabling you to seamlessly integrate your most current and pertinent proprietary data directly into your LLM prompt. Here’s a closer look at how RAG operates and the benefits it can offer your business. Why RAG Matters: An off-the-shelf LLM lacks the real-time updates and trustworthy access to proprietary data essential for precise responses. RAG addresses this gap by embedding up-to-date and pertinent proprietary data directly into LLM prompts, enhancing response accuracy. How RAG Works: RAG leverages powerful semantic search technologies within Salesforce to retrieve relevant information from internal data sources like emails, documents, and customer records. This retrieved data is then fed into a generative AI model (such as CodeT5 or Einstein Language), which uses its language understanding capabilities to craft a tailored response based on the retrieved facts and the specific context of the user’s query or task. Case Study: Algo Communications In 2023, Canada-based Algo Communications faced the challenge of rapidly onboarding customer service representatives (CSRs) to support its growth. 
Seeking a robust solution, the company turned to generative AI, adopting an LLM enhanced with RAG for training CSRs to accurately respond to complex customer inquiries. Algo integrated extensive unstructured data, including chat logs and email history, into its vector database, enhancing the effectiveness of RAG. Within just two months of adopting RAG, Algo’s CSRs exhibited greater confidence and efficiency in addressing inquiries, resulting in a 67% faster resolution of cases. Key Benefits of RAG for Algo Communications: Efficiency Improvement: RAG enabled CSRs to complete cases more quickly, allowing them to address new inquiries at an accelerated pace. Enhanced Onboarding: RAG reduced onboarding time by half, facilitating Algo’s rapid growth trajectory. Brand Consistency: RAG empowered CSRs to maintain the company’s brand identity and ethos while providing AI-assisted responses. Human-Centric Customer Interactions: RAG freed up CSRs to focus on adding a human touch to customer interactions, improving overall service quality and customer satisfaction. Retrieval Augmented Generation (RAG) enhances the capabilities of generative AI models by integrating current and relevant proprietary data directly into LLM prompts, resulting in more accurate and tailored responses. This technology not only improves efficiency and onboarding but also enables organizations to maintain brand consistency and deliver exceptional customer experiences.
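The retrieve-then-prompt flow behind RAG can be sketched in a few lines. This is a minimal illustration, not a real product API: the corpus, the naive keyword-overlap scoring (standing in for semantic search), and the prompt template are all hypothetical.

```python
# Minimal sketch of the RAG flow: retrieve relevant text, then inject it
# into the prompt sent to the LLM. Scoring here is naive keyword overlap,
# a stand-in for the semantic search a real system would use.

def retrieve(query, corpus, top_k=2):
    """Score documents by keyword overlap with the query and keep the best."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, context_docs):
    """Assemble the final LLM prompt: retrieved context plus the user query."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm Eastern.",
    "Premium plans include phone support.",
]
query = "How long do refunds take?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

The grounding fact about refunds ends up inside the prompt, so the model answers from proprietary data rather than from its frozen training set.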


LLM Knowledge Test

Large Language Models. How much do you know about them? Take the LLM Knowledge Test to find out. Question 1: Do you need to have a vector store for all your text-based LLM use cases? A. Yes B. No Correct Answer: B Explanation: A vector store is used to store the vector representation of a word or sentence. These vector representations capture the semantic meaning of the words or sentences and are used in various NLP tasks. However, not all text-based LLM use cases require a vector store. Some tasks, such as summarization, sentiment analysis, and translation, do not need context augmentation. Question 2: Which technique helps mitigate bias in prompt-based learning? A. Fine-tuning B. Data augmentation C. Prompt calibration D. Gradient clipping Correct Answer: C Explanation: Prompt calibration involves adjusting prompts to minimize bias in the generated outputs. Fine-tuning modifies the model itself, while data augmentation expands the training data. Gradient clipping prevents exploding gradients during training. Question 3: Which of the following is NOT a technique specifically used for aligning Large Language Models (LLMs) with human values and preferences? A. RLHF B. Direct Preference Optimization C. Data Augmentation Correct Answer: C Explanation: Data Augmentation is a general machine learning technique that involves expanding the training data with variations or modifications of existing data. While it can indirectly impact LLM alignment by influencing the model’s learning patterns, it’s not specifically designed for human value alignment. Incorrect Options: A. Reinforcement Learning from Human Feedback (RLHF) is a technique where human feedback is used to refine the LLM’s reward function, guiding it towards generating outputs that align with human preferences. B. Direct Preference Optimization (DPO) is another technique that directly compares different LLM outputs based on human preferences to guide the learning process. 
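Question 1's explanation hinges on vector stores comparing embeddings by semantic similarity. A toy sketch of that lookup follows; the 3-dimensional "embeddings" are hand-made stand-ins for real model outputs, and the stored phrases are hypothetical.

```python
import math

# Toy illustration of the similarity lookup a vector store performs.
# The 3-d vectors are invented for the example, not real embeddings.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

store = {
    "reset a password": [0.9, 0.1, 0.0],
    "update billing info": [0.1, 0.9, 0.2],
    "change login credentials": [0.7, 0.3, 0.2],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "forgot my password"

# Retrieve the stored entry whose embedding is closest to the query.
best = max(store, key=lambda text: cosine(store[text], query_vec))
print(best)
```

Because the query vector points in nearly the same direction as the "reset a password" vector, that entry wins even though the query shares no exact keywords with it, which is precisely why semantic search needs a vector store while tasks like summarization do not.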
Question 4: In Reinforcement Learning from Human Feedback (RLHF), what describes “reward hacking”? A. Optimizes for desired behavior B. Exploits reward function Correct Answer: B Explanation: Reward hacking refers to a situation in RLHF where the agent discovers unintended loopholes or biases in the reward function to achieve high rewards without actually following the desired behavior. The agent essentially “games the system” to maximize its reward metric. Why Option A is Incorrect: While optimizing for the desired behavior is the intended outcome of RLHF, it doesn’t represent reward hacking. Option A describes a successful training process. In reward hacking, the agent deviates from the desired behavior and finds an unintended way to maximize the reward. Question 5: When fine-tuning a GenAI model for a task (e.g., creative writing), which factor significantly impacts the model’s ability to adapt to the target task? A. Size of fine-tuning dataset B. Pre-trained model architecture Correct Answer: B Explanation: The architecture of the pre-trained model acts as the foundation for fine-tuning. A complex and versatile architecture like those used in large models (e.g., GPT-3) allows for greater adaptation to diverse tasks. The size of the fine-tuning dataset plays a role, but it’s secondary. A well-architected pre-trained model can learn from a relatively small dataset and generalize effectively to the target task. Why A is Incorrect: While the size of the fine-tuning dataset can enhance performance, it’s not the most crucial factor. Even a massive dataset cannot compensate for limitations in the pre-trained model’s architecture. A well-designed pre-trained model can extract relevant patterns from a smaller dataset and outperform a less sophisticated model with a larger dataset. Question 6: What does the self-attention mechanism in transformer architecture allow the model to do? A. Weigh word importance B. Predict next word C. Automatic summarization Correct Answer: A Explanation: The self-attention mechanism in transformers acts as a spotlight, illuminating the relative importance of words within a sentence. In essence, self-attention allows transformers to dynamically adjust the focus based on the current word being processed. Words with higher similarity scores contribute more significantly, leading to a richer understanding of word importance and sentence structure. This empowers transformers for various NLP tasks that heavily rely on context-aware analysis. Question 7: What is one advantage of using subword algorithms like BPE or WordPiece in Large Language Models (LLMs)? A. Limit vocabulary size B. Reduce amount of training data C. Make computation more efficient Correct Answer: A Explanation: LLMs deal with massive amounts of text, leading to a very large vocabulary if you consider every single word. Subword algorithms like Byte Pair Encoding (BPE) and WordPiece break down words into smaller meaningful units (subwords), which are then used as the vocabulary. This significantly reduces the vocabulary size while still capturing the meaning of most words, making the model more efficient to train and use. Question 8: Compared to Softmax, how does Adaptive Softmax speed up large language models? A. Sparse word representations B. Exploiting Zipf’s law C. Pre-trained embeddings Correct Answer: B Explanation: Standard Softmax struggles with vast vocabularies, requiring expensive calculations for every word. Imagine a large language model predicting the next word in a sentence: Softmax multiplies massive matrices for each word in the vocabulary, leading to billions of operations. Adaptive Softmax leverages Zipf’s law (common words are frequent, rare words are infrequent) to group words by frequency. Frequent words get precise calculations in smaller groups, while rare words are grouped together for more efficient computations. 
This significantly reduces the cost of training large language models. Question 9: Which configuration parameter for inference can be adjusted to either increase or decrease randomness within the model output layer? A. Max new tokens B. Top-k sampling C. Temperature Correct Answer: C Explanation: During text generation, large language models (LLMs) rely on a softmax layer to assign probabilities to potential next words. Temperature acts as a key parameter influencing the randomness of these probability distributions. Question 10: What transformer model uses masking and bi-directional context for masked token prediction? A. Autoencoder B. Autoregressive C. Sequence-to-sequence Correct Answer: A Explanation: Autoencoder models are pre-trained using masked language modeling. They use randomly masked tokens in the input sequence, and the pretraining objective is to predict the masked tokens to reconstruct the original sentence. Question 11: What technique allows you to scale model
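The effect of temperature on the softmax distribution (Question 9) can be shown with a short sketch. The logits are made-up values for illustration.

```python
import math

# How temperature reshapes the softmax distribution over next-token logits:
# dividing logits by T < 1 sharpens the distribution (less random sampling),
# dividing by T > 1 flattens it (more random sampling).

def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # invented logits for three candidate tokens
low = softmax_with_temperature(logits, 0.5)   # sharper: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flatter: probability spreads out
print(low)
print(high)
```

At temperature 0.5 the highest-logit token takes most of the probability mass; at 2.0 the mass spreads across all candidates, which is what "increasing randomness" means in practice.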

Read More

January ’24 Einstein Data Cloud Updates

Utilize Generative AI to Target Audiences Effectively Harness the power of generative AI with Einstein Segment Creation in Data Cloud to create precise audience segments. Describe your target audience, and Einstein Segment Creation swiftly produces a segment using trusted customer data available in Data Cloud. This segment can be easily edited and fine-tuned as necessary. Where: This enhancement is applicable to Data Cloud in Developer, Enterprise, Performance, and Unlimited editions. Einstein generative AI is accessible in Lightning Experience. When: This functionality is rolling out gradually, starting in Spring ’24. How: In Data Cloud, create a new segment and choose Einstein Segment Creation. In the Einstein panel, input a description of your segment using simple text, review the draft, and make adjustments as needed. Gain Insights into Segment Performance with Segment Intelligence Analyze segment data efficiently with Segment Intelligence, an in-platform intelligence tool for Data Cloud for Marketing. Offering a straightforward setup process, out-of-the-box data connectors, and pre-built visualizations, Segment Intelligence aids in optimizing segments and activations across various channels, including Marketing Cloud Engagement, Google Ads, Meta Ads, and Commerce Cloud. Where: This update applies to Data Cloud in Developer, Enterprise, Performance, and Unlimited editions. Utilizing Segment Intelligence requires a Data Cloud Starter license. When: For details regarding timing and eligibility, contact your Salesforce account executive. How: To configure Segment Intelligence, navigate to Salesforce Setup. To view Segment Intelligence dashboards, go to Data Cloud and select the Segment Intelligence tab. Activate Audiences on Google DV360 and LinkedIn Effortlessly activate audiences on Google DV360 and LinkedIn as native activation destinations in Data Cloud. 
Directly use segments for targeted advertising campaigns and insights reporting. Where: This change is applicable to Data Cloud in Developer, Enterprise, Performance, and Unlimited editions. Requires an Ad Audiences license. When: This functionality is available starting in March 2024. Enhance Identity Resolution with More Frequent Ruleset Processing Experience more timely ruleset processing as rulesets now run automatically whenever your data changes. This improvement eliminates the need to wait for a daily ruleset run, ensuring efficient and cost-effective processing. Where: This update applies to Data Cloud in Developer, Enterprise, Performance, and Unlimited editions. Refine Identity Resolution Match Rules with Fuzzy Matching Extend the use of fuzzy matching to more fields, allowing fuzzy matching on any text field in your identity resolution match rules. Up to two fuzzy match fields, other than first name, can be used in a match rule, with a total of six fuzzy match fields in any ruleset. Enhance match rules by updating to the “Fuzzy Precision – High” method for fields like last name, city, and account. Where: This enhancement applies to Data Cloud in Developer, Enterprise, Performance, and Unlimited editions. Salesforce Einstein’s AI Capabilities Salesforce Einstein stands out as a comprehensive AI solution for CRM. Notable features include being data-ready, eliminating the need for data preparation or model management. Simply input data into Salesforce, and Einstein seamlessly operates. Additionally, Salesforce introduces the Data Cloud, formerly known as Genie, as a significant AI-powered product. This platform, combining Data Cloud and AI in Einstein 1, empowers users to manage unstructured data efficiently. The introduction of the Data Cloud Vector Database allows for the storage and retrieval of unstructured data, enabling Einstein Copilot to search and interpret vast amounts of information. 
Salesforce also unveils Einstein Copilot Search, currently in closed beta, enhancing AI search capabilities to respond to complex queries from users. This groundbreaking offering addresses the challenge of managing unstructured data, a substantial portion of business data, and complements it with the capability to use familiar automation tools such as Flow and Apex to monitor and trigger workflows based on changes in this data. Overall, Salesforce aims to revolutionize how organizations handle unstructured data with these innovative additions to the Data Cloud.
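The fuzzy matching described above for identity resolution match rules can be approximated with Python's standard-library `difflib`. This is an illustrative analogy only, not Salesforce's actual matching algorithm; the threshold and example records are hypothetical.

```python
from difflib import SequenceMatcher

# Analogy for a fuzzy match rule in identity resolution: two field values
# match when their character-level similarity ratio clears a threshold.
# The 0.8 threshold and the names below are invented for illustration.

def fuzzy_match(a, b, threshold=0.8):
    """Return True when the two strings are similar enough to count as a match."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

print(fuzzy_match("Jonathan Smith", "Jonathon Smith"))  # spelling variant
print(fuzzy_match("Jonathan Smith", "Maria Garcia"))    # unrelated record
```

A rule like this catches records that exact matching would miss (typos, spelling variants) while still rejecting clearly different people, which is the trade-off a "Fuzzy Precision – High" setting tunes.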


Salesforce Enhances Einstein 1 Platform with New Vector Database and AI Capabilities

Salesforce (NYSE: CRM) has announced major updates to its Einstein 1 Platform, introducing the Data Cloud Vector Database and Einstein Copilot Search. These new features aim to power AI, analytics, and automation by integrating business data with large language models (LLMs) across the Einstein 1 Platform. Unifying Business Data for Enhanced AI The Data Cloud Vector Database will unify all business data, including unstructured data like PDFs, emails, and transcripts, with CRM data. This will enable accurate and relevant AI prompts and Einstein Copilot, eliminating the need for expensive and complex fine-tuning of LLMs. Built into the Einstein 1 Platform, the Data Cloud Vector Database allows all business applications to harness unstructured data through workflows, analytics, and automation. This enhances decision-making and customer insights across Salesforce CRM applications. Introducing Einstein Copilot Search Einstein Copilot Search will provide advanced AI search capabilities, delivering precise answers from the Data Cloud in a conversational AI experience. This feature aims to boost productivity for all business users by interpreting and responding to complex queries with real-time data from various sources. Addressing the Data Challenge With 90% of enterprise data existing in unstructured formats, accessing and leveraging this data for business applications and AI models has been challenging. As Forrester predicts, the volume of unstructured data managed by enterprises will double by 2024. Salesforce’s new capabilities address this by enabling businesses to effectively harness their data, driving AI innovation and improved customer experiences. 
Salesforce’s Vision Rahul Auradkar, EVP and GM of Unified Data Services & Einstein, stated, “The Data Cloud Vector Database transforms all business data into valuable insights. This advancement, coupled with the power of LLMs, fosters a data-driven ecosystem where AI, CRM, automation, Einstein Copilot, and analytics turn data into actionable intelligence and drive innovation.” Customer Success Story Shohreh Abedi, EVP at AAA – The Auto Club Group, highlighted the impact: “With Salesforce automation and AI, we’ve reduced response time for roadside events by 10% and manual service cases by 30%. Salesforce AI helps us deliver faster support and increased productivity.” Salesforce’s new Data Cloud Vector Database and Einstein Copilot Search promise to revolutionize how businesses utilize their data, driving AI-powered innovation and improved customer experiences.


Retrieval Augmented Generation Techniques

A comprehensive study has been conducted on advanced retrieval augmented generation techniques and algorithms, systematically organizing various approaches. This insight includes a collection of links referencing various implementations and studies mentioned in the author’s knowledge base. If you’re familiar with the RAG concept, skip to the Advanced RAG section. Retrieval Augmented Generation, known as RAG, equips Large Language Models (LLMs) with retrieved information from a data source to ground their generated answers. Essentially, RAG combines search with LLM prompting: the model is asked to answer a query given information retrieved by a search algorithm as context, and both the query and the retrieved context are injected into the prompt sent to the LLM. RAG emerged as the most popular architecture for LLM-based systems in 2023, with numerous products built almost exclusively on RAG. These range from question-answering services that combine web search engines with LLMs to hundreds of apps allowing users to interact with their data. Even the vector search domain experienced a surge in interest, despite embedding-based search engines being developed as early as 2019. Vector database startups such as Chroma, Weaviate, and Pinecone have leveraged existing open-source search indices, mainly Faiss and Nmslib, and added extra storage for input texts and other tooling. Two prominent open-source libraries for LLM-based pipelines and applications are LangChain and LlamaIndex, founded within a month of each other in October and November 2022, respectively. Both were inspired by the launch of ChatGPT and gained massive adoption in 2023. The purpose of this Tectonic insight is to systemize key advanced RAG techniques with references to their implementations, mostly in LlamaIndex, to facilitate other developers’ exploration of the technology. 
The problem addressed is that most tutorials focus on individual techniques, explaining in detail how to implement them, rather than providing an overview of the available tools. Naive RAG The starting point of the RAG pipeline described in this article is a corpus of text documents. The process begins with splitting the texts into chunks, followed by embedding these chunks into vectors using a Transformer Encoder model. These vectors are then indexed, and a prompt is created for an LLM to answer the user’s query given the context retrieved during the search step. At runtime, the user’s query is vectorized with the same Encoder model, and a search is executed against the index. The top-k results are retrieved, the corresponding text chunks are fetched from the database, and they are fed into the LLM prompt as context. An overview of advanced RAG techniques, illustrated with core steps and algorithms. 1.1 Chunking Texts are split into chunks of a certain size without losing their meaning. Various text splitter implementations capable of this task exist. 1.2 Vectorization A model is chosen to embed the chunks, with options including search-optimized models like bge-large or the E5 embeddings family. 2.1 Vector Store Index Various indices are supported, including flat indices and vector indices like Faiss, Nmslib, or Annoy. 2.2 Hierarchical Indices Efficient search within large databases is facilitated by creating two indices: one composed of summaries and another composed of document chunks. 2.3 Hypothetical Questions and HyDE An alternative approach involves asking an LLM to generate a question for each chunk, embedding these questions in vectors, and performing query search against this index of question vectors. 2.4 Context Enrichment Smaller chunks are retrieved for better search quality, with surrounding context added for the LLM to reason upon. 2.4.1 Sentence Window Retrieval Each sentence in a document is embedded separately to provide accurate search results. 
2.4.2 Auto-merging Retriever Documents are split into smaller child chunks referring to larger parent chunks to enhance context retrieval. 2.5 Fusion Retrieval or Hybrid Search Keyword-based old-school search algorithms are combined with modern semantic or vector search to improve retrieval results. Encoder and LLM Fine-tuning Fine-tuning of Transformer Encoders or LLMs can further enhance the RAG pipeline’s performance, improving context retrieval quality or answer relevance. Evaluation Various frameworks exist for evaluating RAG systems, with metrics focusing on retrieved context relevance, answer groundedness, and overall answer relevance. The next big step in building a RAG system that can handle more than a single query is chat logic, which takes the dialogue context into account, just as classic chatbots did in the pre-LLM era. This is needed to support follow-up questions, anaphora, and arbitrary user commands relating to the previous dialogue context. It is solved by a query compression technique that takes the chat context into account along with the user query. Query routing is the step of LLM-powered decision-making about what to do next given the user query: the options are usually to summarize, to perform a search against some data index, or to try a number of different routes and then synthesize their output in a single answer. Query routers are also used to select an index or, more broadly, a data store to send the user query to: you may have multiple sources of data (for example, a classic vector store plus a graph database or a relational DB), or you may have a hierarchy of indices. For multi-document storage, a classic case is an index of summaries plus another index of document chunk vectors. This insight aims to provide an overview of core algorithmic approaches to RAG, offering insights into techniques and technologies developed in 2023. 
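The naive RAG pipeline described above (chunk, embed, index, retrieve top-k, assemble the prompt) can be sketched end to end. The "embedding" here is a toy bag-of-words vector standing in for a real Transformer encoder such as the bge or E5 families mentioned in the text, and the document and query are invented for the example.

```python
import math
from collections import Counter

# End-to-end sketch of the naive RAG pipeline:
# chunk -> embed -> index -> retrieve top match -> assemble the LLM prompt.
# The bag-of-words "embedding" is a toy stand-in for a Transformer encoder.

def chunk(text, size=8):
    """Split text into fixed-size word chunks (a real splitter preserves meaning)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text, vocab):
    """Toy embedding: word counts over a shared vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

document = (
    "Faiss and Annoy are approximate nearest neighbour indices. "
    "Hierarchical indices keep one index of summaries and one of chunks. "
    "Sentence window retrieval embeds each sentence separately."
)
chunks = chunk(document)
vocab = sorted({w for c in chunks for w in c.lower().split()})
index = [(c, embed(c, vocab)) for c in chunks]              # indexing step

query = "Which libraries provide nearest neighbour indices?"
q_vec = embed(query, vocab)                                 # vectorize the query
top = max(index, key=lambda item: cosine(item[1], q_vec))   # retrieval step

prompt = f"Context: {top[0]}\n\nQuestion: {query}"          # prompt assembly
print(prompt)
```

Every advanced technique in the article refines one of these stages: chunking strategies change `chunk`, hierarchical indices and HyDE change how `index` is built, and fusion retrieval replaces the single `cosine` ranking with a blend of keyword and vector scores.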
It emphasizes the importance of speed in RAG systems and suggests potential future directions, including exploration of web search-based RAG and advancements in agentic architectures.
