Large Language Model - gettectonic.com - Page 10

Evaluating RAG With Needle in Haystack Test

Retrieval-Augmented Generation (RAG) in Real-World Applications

Retrieval-augmented generation (RAG) is at the core of many large language model (LLM) applications, from companies making headlines to developers solving problems for small businesses. Evaluating RAG systems is critical to their development and deployment: trust in AI cannot be achieved without proof that AI can be trusted.

One innovative approach to this trust evaluation is the “Needle in a Haystack” test, introduced by Greg Kamradt. The test assesses an LLM’s ability to identify and use specific information (the “needle”) embedded within a larger, complex body of text (the “haystack”).

In RAG systems, context windows often teem with information: large pieces of context from a vector database are combined with instructions, templating, and other elements in the prompt. The Needle in a Haystack test evaluates how well an LLM can pinpoint specific details within this clutter. Even if a RAG system retrieves relevant context, it is ineffective if it overlooks crucial specifics.

Conducting the Needle in a Haystack Test

Aparna Dhinakaran conducted this test multiple times across several major language models, documenting her test setup and key findings. We extended her tests to additional models and configurations, including tests similar to those run by Lars Wiik.

Result

The Needle in a Haystack test effectively measures an LLM’s ability to retrieve specific information from dense contexts. It highlights the importance of tailored prompting and continuous evaluation when developing and deploying LLMs, especially those connected to private data. Small changes in prompt structure can lead to significant performance differences, underscoring the need for precise tuning and testing.
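To make the mechanics concrete, here is a minimal sketch of such a test harness, assuming the OpenAI Python SDK; the model name, needle text, filler text, and scoring rule are illustrative placeholders, not the setup used in the experiments above. It buries a known fact at varying depths in filler text and checks whether the model’s answer recovers it:

```python
from openai import OpenAI  # assumes the OpenAI SDK is installed and OPENAI_API_KEY is set

client = OpenAI()

NEEDLE = "The secret launch code is MAGENTA-42."  # hypothetical needle
QUESTION = "What is the secret launch code?"
FILLER = "Grace Hopper pioneered compiler design. " * 400  # arbitrary haystack text

def run_trial(depth: float, model: str = "gpt-4o-mini") -> bool:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end) and query the model."""
    cut = int(len(FILLER) * depth)
    haystack = FILLER[:cut] + NEEDLE + " " + FILLER[cut:]
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": f"Document:\n{haystack}\n\nQuestion: {QUESTION}"},
        ],
    )
    answer = response.choices[0].message.content or ""
    return "MAGENTA-42" in answer  # simple exact-match scoring

if __name__ == "__main__":
    for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"depth={depth:.2f} retrieved={run_trial(depth)}")
```

A fuller harness would also vary the total context length, repeat each trial, and chart retrieval rate across depth and length, which is how results of this test are typically reported.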


Unfolding AI Revolution

Ways the AI Revolution is Unfolding

The transformative potential of artificial intelligence (AI) is explored by James Manyika, Senior VP of Research, Technology, and Society at Google, and Michael Spence, Nobel laureate in economics and professor at NYU Stern School of Business, in their recent article, “The Coming AI Economic Revolution: Can Artificial Intelligence Reverse the Productivity Slowdown?” Published in Foreign Affairs, the article outlines the conditions necessary for an AI-powered economy to thrive, including policies that augment human capabilities, promote widespread adoption, and foster organizational innovation.

Manyika and Spence highlight AI’s potential to reverse stagnating productivity growth in advanced economies, stating, “By the beginning of the next decade, the shift to AI could become a leading driver of global prosperity.” However, the authors caution that this economic revolution will require robust policy frameworks to prevent harm and unlock AI’s full potential. Here are the key insights from their analysis:

1. The Great Slowdown

The rapid advancements in AI arrive at a critical juncture for the global economy. While technological innovations have surged, productivity growth has stagnated. For instance, total factor productivity (TFP), a key contributor to GDP growth, grew by 1.7% in the U.S. between 1997 and 2005 but has since slowed to just 0.4%. This slowdown is exacerbated by aging populations and shrinking labor forces in major economies like China, Japan, and Italy. Without a transformative force like AI, economic growth could remain stifled, characterized by higher inflation, reduced labor supply, and elevated capital costs.

2. A Different Digital Revolution

Unlike the rule-based automation of the 1990s digital revolution, AI has shattered previous technological constraints. Advances in AI now enable tasks that were previously unprogrammable, such as pattern recognition and decision-making. AI systems have surpassed human performance in areas like image recognition, cancer detection, and even strategic games like Go. This shift extends the impact of technology to domains previously thought to require exclusively human intuition and creativity.

3. Quick Studies

Generative AI, particularly large language models (LLMs), offers exceptional versatility, multimodality, and accessibility, making its economic impact potentially transformative. Applications range from digital assistants drafting documents to ambient intelligence systems that automate homes or generate health records based on patient-clinician interactions.

4. Creative Instruction

Despite its promise, AI has drawn criticism for issues like bias, misinformation, and the potential for job displacement. Critics highlight that AI systems may amplify societal inequities or produce unreliable outputs. However, research suggests that AI will primarily augment work rather than eliminate it. While about 10% of jobs may decline, two-thirds of occupations will likely see AI enhancing specific tasks. This shift emphasizes collaboration between humans and intelligent machines, requiring workers to develop new skills. Studies, such as MIT’s Work of the Future task force, reinforce that automation will not lead to a jobless future but rather to evolving roles and opportunities.

5. With Us, Not Against Us

The full benefits of AI will not materialize if its deployment is left solely to market forces. Proactive measures are necessary to maximize AI’s positive impact while mitigating risks.
This includes fostering widespread adoption of AI in ways that empower workers, enhance productivity, and address societal challenges. Policies should prioritize accessibility and equitable diffusion to ensure AI serves as a force for inclusive economic growth.

6. The Real AI Challenge

Generative AI has the potential to spark a productivity renaissance at a time when the global economy urgently needs it. Yet Manyika and Spence caution that AI could exacerbate existing economic disparities if not guided effectively. They argue that focusing solely on existential threats overlooks the broader risks posed by inequitable AI deployment. Instead, a positive vision is needed, one that prioritizes AI as a tool for global economic progress, equitable growth, and generational prosperity. “Harnessing the power of AI for good will require more than simply focusing on potential damage,” the authors conclude. “It will demand effective measures to turn that vision into reality.”

The unfolding AI revolution offers immense opportunities, but realizing its full potential requires thoughtful action. By addressing risks and fostering innovation, AI could reshape the global economy for the better.


Salesforce Enhances Einstein 1 Platform with New Vector Database and AI Capabilities

Salesforce (NYSE: CRM) has announced major updates to its Einstein 1 Platform, introducing the Data Cloud Vector Database and Einstein Copilot Search. These new features aim to power AI, analytics, and automation by integrating business data with large language models (LLMs) across the Einstein 1 Platform.

Unifying Business Data for Enhanced AI

The Data Cloud Vector Database will unify all business data, including unstructured data like PDFs, emails, and transcripts, with CRM data. This will enable accurate and relevant AI prompts and Einstein Copilot responses, eliminating the need for expensive and complex fine-tuning of LLMs. Built into the Einstein 1 Platform, the Data Cloud Vector Database allows all business applications to harness unstructured data through workflows, analytics, and automation, enhancing decision-making and customer insights across Salesforce CRM applications.

Introducing Einstein Copilot Search

Einstein Copilot Search will provide advanced AI search capabilities, delivering precise answers from the Data Cloud in a conversational AI experience. This feature aims to boost productivity for all business users by interpreting and responding to complex queries with real-time data from various sources.

Addressing the Data Challenge

With 90% of enterprise data existing in unstructured formats, accessing and leveraging this data for business applications and AI models has been challenging. As Forrester predicts, the volume of unstructured data managed by enterprises will double by 2024. Salesforce’s new capabilities address this by enabling businesses to effectively harness their data, driving AI innovation and improved customer experiences.

Salesforce’s Vision

Rahul Auradkar, EVP and GM of Unified Data Services & Einstein, stated, “The Data Cloud Vector Database transforms all business data into valuable insights. This advancement, coupled with the power of LLMs, fosters a data-driven ecosystem where AI, CRM, automation, Einstein Copilot, and analytics turn data into actionable intelligence and drive innovation.”

Customer Success Story

Shohreh Abedi, EVP at AAA – The Auto Club Group, highlighted the impact: “With Salesforce automation and AI, we’ve reduced response time for roadside events by 10% and manual service cases by 30%. Salesforce AI helps us deliver faster support and increased productivity.”

Salesforce’s new Data Cloud Vector Database and Einstein Copilot Search promise to revolutionize how businesses utilize their data, driving AI-powered innovation and improved customer experiences.


Retrieval Augmented Generation Techniques

A comprehensive study has been conducted on advanced retrieval-augmented generation techniques and algorithms, systematically organizing the various approaches. This insight includes a collection of links referencing the implementations and studies mentioned in the author’s knowledge base. If you’re familiar with the RAG concept, skip to the Advanced RAG section.

Retrieval Augmented Generation, known as RAG, equips Large Language Models (LLMs) with information retrieved from a data source to ground their generated answers. Essentially, RAG combines search with LLM prompting: the model is asked to answer a query given information retrieved by a search algorithm as context. Both the query and the retrieved context are injected into the prompt sent to the LLM.

RAG emerged as the most popular architecture for LLM-based systems in 2023, with numerous products built almost exclusively on it. These range from question-answering services that combine web search engines with LLMs to hundreds of apps that let users interact with their own data. Even the vector search domain experienced a surge in interest, despite embedding-based search engines being developed as early as 2019. Vector database startups such as Chroma, Weaviate, and Pinecone have leveraged existing open-source search indices, mainly Faiss and Nmslib, and added extra storage for input texts and other tooling.

Two prominent open-source libraries for LLM-based pipelines and applications are LangChain and LlamaIndex, founded within a month of each other in October and November 2022, respectively. Inspired by the launch of ChatGPT, both gained massive adoption in 2023.

The purpose of this Tectonic insight is to systemize key advanced RAG techniques with references to their implementations, mostly in LlamaIndex, to facilitate other developers’ exploration of the technology. The problem addressed is that most tutorials focus on individual techniques, explaining in detail how to implement them, rather than providing an overview of the available tools.

Naive RAG

The starting point of the RAG pipeline described here is a corpus of text documents. The process begins with splitting the texts into chunks, followed by embedding these chunks into vectors using a Transformer encoder model. These vectors are then indexed, and a prompt is created for an LLM to answer the user’s query given the context retrieved during the search step. At runtime, the user’s query is vectorized with the same encoder model, and a search is executed against the index. The top-k results are retrieved, the corresponding text chunks are fetched from the database, and they are fed into the LLM prompt as context.

(Figure: an overview of advanced RAG techniques, illustrating the core steps and algorithms.)

1.1 Chunking

Texts are split into chunks of a certain size without losing their meaning. Various text splitter implementations capable of this task exist.

1.2 Vectorization

A model is chosen to embed the chunks, with options including search-optimized models like bge-large or the E5 embeddings family.

2.1 Vector Store Index

Various indices are supported, including flat indices and vector indices like Faiss, Nmslib, or Annoy.

2.2 Hierarchical Indices

Efficient search within large databases is facilitated by creating two indices: one composed of summaries and another composed of document chunks.
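Pulling sections 1.1 through 2.2 together, here is a compact sketch of the naive pipeline: chunk, embed, index, retrieve, prompt. The libraries (sentence-transformers, Faiss), the encoder model name, and the toy corpus are illustrative choices, not the stack prescribed by this overview:

```python
# A minimal naive-RAG sketch: chunk -> embed -> index -> retrieve -> prompt.
# Assumes `sentence-transformers` and `faiss-cpu` are installed.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any search-tuned encoder works

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with a small overlap."""
    return [text[i : i + size] for i in range(0, len(text), size - overlap)]

documents = ["...your corpus text here..."]
chunks = [c for doc in documents for c in chunk(doc)]

# Embed the chunks and build a flat (exact-search) vector index.
vectors = encoder.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(vectors.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(vectors, dtype="float32"))

def retrieve(query: str, k: int = 4) -> list[str]:
    """Vectorize the query with the same encoder and fetch the top-k chunks."""
    q = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [chunks[i] for i in ids[0] if i >= 0]  # faiss pads with -1 when k > corpus size

query = "What does the refund policy say?"
context = "\n---\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` is then sent to the LLM of your choice.
```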
2.3 Hypothetical Questions and HyDE

An alternative approach involves asking an LLM to generate a question for each chunk, embedding those questions into vectors, and performing the query search against this index of question vectors.

2.4 Context Enrichment

Smaller chunks are retrieved for better search quality, with surrounding context added for the LLM to reason upon.

2.4.1 Sentence Window Retrieval

Each sentence in a document is embedded separately to provide highly accurate search results.

2.4.2 Auto-merging Retriever

Documents are split into smaller child chunks that refer to larger parent chunks, enhancing context retrieval.

2.5 Fusion Retrieval or Hybrid Search

Old-school keyword-based search algorithms are combined with modern semantic or vector search to improve retrieval results (see the fusion retrieval sketch after this overview).

Encoder and LLM Fine-tuning

Fine-tuning of Transformer encoders or LLMs can further enhance the RAG pipeline’s performance, improving context retrieval quality or answer relevance.

Evaluation

Various frameworks exist for evaluating RAG systems, with metrics focusing on retrieved context relevance, answer groundedness, and overall answer relevance.

Chat Logic and Query Routing

The next big step in building a RAG system that works across more than a single query is chat logic, which takes the dialogue context into account, just as classic chatbots did in the pre-LLM era. This is needed to support follow-up questions, anaphora, and arbitrary user commands that relate to the previous dialogue context. It is solved by a query compression technique that considers the chat context along with the user query.

Query routing is the step of LLM-powered decision-making about what to do next given the user query. The options are usually to summarize, to perform a search against some data index, or to try a number of different routes and then synthesize their output into a single answer. Query routers are also used to select an index or, more broadly, a data store for the user query: either you have multiple sources of data (for example, a classic vector store alongside a graph database or a relational DB), or you have a hierarchy of indices. For multi-document storage, a fairly classic case is an index of summaries plus another index of document-chunk vectors.

This insight aims to provide an overview of core algorithmic approaches to RAG, offering insights into techniques and technologies developed in 2023. It emphasizes the importance of speed in RAG systems and suggests potential future directions, including exploration of web-search-based RAG and advancements in agentic architectures.
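As promised in section 2.5, here is a small sketch of fusion retrieval: a keyword ranking and a semantic ranking are merged with Reciprocal Rank Fusion (RRF). The rank-bm25 library, the toy corpus, and the constant k=60 are illustrative assumptions, not requirements of the technique:

```python
# Fusion retrieval sketch: merge BM25 keyword results with vector-search
# results via Reciprocal Rank Fusion. Assumes `rank-bm25` is installed.
from rank_bm25 import BM25Okapi

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists; items ranked high in any list float to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

corpus = ["the cat sat on the mat", "dogs chase cats", "stock markets fell today"]
bm25 = BM25Okapi([doc.split() for doc in corpus])

def keyword_retrieve(query: str, n: int = 2) -> list[str]:
    """Classic keyword ranking over the corpus."""
    scores = bm25.get_scores(query.split())
    top = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:n]
    return [corpus[i] for i in top]

query = "cats"
fused = rrf_fuse([
    keyword_retrieve(query),
    ["dogs chase cats", "the cat sat on the mat"],  # stand-in for a vector retriever's ranking
])
print(fused[:2])
```

RRF is a common fusion choice precisely because it needs no score calibration between the two retrievers: only ranks matter.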


Fine Tune Your Large Language Model

Revamping your LLM? There is a superior approach to fine-tuning your large language model; in fact, the next evolution in AI fine-tuning might transcend fine-tuning altogether. Vector databases present an efficient means to access and analyze all your business data.

Have you ever received redundant emails promoting a product you’ve already purchased, or encountered repeated questions in different service interactions? Large language models (LLMs), like OpenAI’s ChatGPT and Google’s Bard, aim to alleviate such issues by enhancing information-sharing and personalization within your company’s operations. However, off-the-shelf LLMs, built on generic internet data, lack access to your proprietary data, limiting the nuance of the customer experience. These models also might not incorporate the latest information: ChatGPT’s knowledge, for instance, only extends up to January 2022.

To customize an off-the-shelf LLM for your company, fine-tuning requires integrating your proprietary data, but this process is costly, time-consuming, and may raise trust concerns. A superior alternative is a vector database, described as “a new kind of database for the AI era.” This database offers the benefits of fine-tuning while addressing privacy concerns, promoting data unification, and saving time and resources.

Fine-tuning involves training an LLM for specific tasks, such as analyzing customer sentiment or summarizing a patient’s health history. However, it is resource-intensive and fails to resolve the fundamental issue of fragmented data across your organization. A vector database, organized around vectors that describe different types of data, can seamlessly integrate with an LLM or its prompt. By storing and organizing data with an emphasis on vectors, this database streamlines access to relevant information, eliminating the need for fine-tuning and unifying enterprise data with your CRM. This is pivotal for the accuracy, completeness, and efficiency of AI outputs.

Unstructured data, comprising 90% of corporate data, poses a challenge for LLMs due to its varied formats. A vector database resolves this by allowing AI to process unstructured and structured data alike, delivering enhanced business value and ROI. Ultimately, a company’s proprietary data serves as the cornerstone for constructing an enterprise LLM, and a vector database ensures seamless storage and processing of this data, facilitating better decision-making across all business applications.
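As a sketch of this pattern, the snippet below stores proprietary records in a vector database and injects the retrieved matches into the prompt at question time, with no model retraining. Chroma is used only as an example store; the records, IDs, and collection name are hypothetical:

```python
# The "vector database instead of fine-tuning" pattern: embed proprietary
# records once, retrieve them per question, and fold them into the prompt.
# Assumes `chromadb` is installed; data and names are placeholders.
import chromadb

client = chromadb.Client()  # in-memory store; persistent clients also exist
docs = client.create_collection("company_docs")

docs.add(
    ids=["case-001", "case-002"],
    documents=[
        "Customer Acme Corp purchased the Pro plan on 2024-03-01.",
        "Acme Corp opened a billing ticket about duplicate invoices.",
    ],
)

question = "What plan is Acme Corp on?"
hits = docs.query(query_texts=[question], n_results=2)  # semantic nearest neighbors
context = "\n".join(hits["documents"][0])

prompt = (
    "Answer from the company records below; say 'unknown' if they don't cover it.\n"
    f"Records:\n{context}\n\nQuestion: {question}"
)
# `prompt` now carries fresh proprietary context to any off-the-shelf LLM,
# with no model weights retrained.
```

Updating the assistant’s knowledge then becomes a matter of adding or replacing records in the collection, which is the operational advantage over periodic fine-tuning runs.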


RAG – Retrieval Augmented Generation in Artificial Intelligence

Salesforce has introduced advanced capabilities for unstructured data in Data Cloud and Einstein Copilot Search. By leveraging semantic search and prompts in Einstein Copilot, Large Language Models (LLMs) now generate more accurate, up-to-date, and transparent responses, with company data secured through the Einstein Trust Layer. These features are supported by the AI framework called Retrieval Augmented Generation (RAG), which has taken Salesforce’s Einstein and Data Cloud to new heights, allowing companies to enhance trust and relevance in generative AI using both structured and unstructured proprietary data.

RAG Defined

RAG assists companies in retrieving and utilizing their data, regardless of its location, to achieve superior AI outcomes. The RAG pattern coordinates queries and responses between a search engine and an LLM, working specifically on unstructured data such as emails, call transcripts, and knowledge articles.

Salesforce’s Implementation of RAG

RAG begins with Salesforce Data Cloud, which has expanded to support storage of unstructured data like PDFs and emails. A new unstructured data pipeline enables teams to select and utilize unstructured data across the Einstein 1 Platform. The Data Cloud Vector Database combines structured and unstructured data, facilitating efficient processing.

RAG for Enterprise Use

RAG aids in processing internal documents securely. Its four-step process involves ingestion, natural language query, augmentation, and response generation; a sketch of this flow follows below. RAG prevents arbitrary answers, known as “hallucinations,” and ensures relevant, accurate responses.

Applications of RAG

RAG offers a pragmatic and effective approach to using LLMs in the enterprise, drawing on internal or external knowledge bases to create a range of assistants that enhance employee and customer interactions. Retrieval-augmented generation is an AI technique that improves the quality of LLM-generated responses by including trusted sources of knowledge, outside the original training set, in the prompt. Implementing RAG in an LLM-based question-answering system has three main benefits: 1) assurance that the LLM has access to the most current, reliable facts; 2) reduced hallucination rates; and 3) source attribution, which increases user trust in the output.

Content updated July 2024.
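The four steps named above (ingestion, natural language query, augmentation, and response generation) can be sketched as plain functions. This is a generic outline of the RAG pattern, not Salesforce’s implementation; the keyword search and the placeholder LLM call are deliberate simplifications:

```python
# The generic four-step RAG flow, sketched as plain functions.

def ingest(documents: list[str], store: list[str]) -> None:
    """Step 1: ingestion - add source documents to a searchable store."""
    store.extend(documents)

def search(query: str, store: list[str]) -> list[str]:
    """Step 2: natural language query - find documents relevant to the question."""
    terms = set(query.lower().split())
    return [d for d in store if terms & set(d.lower().split())]

def augment(query: str, passages: list[str]) -> str:
    """Step 3: augmentation - fold retrieved passages into the prompt."""
    context = "\n".join(passages) or "(no relevant records found)"
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer only from the context."

def generate(prompt: str) -> str:
    """Step 4: response generation - hand the grounded prompt to an LLM."""
    return f"[LLM call with grounded prompt of {len(prompt)} chars]"  # placeholder

store: list[str] = []
ingest(["Invoice 1042 was paid on 2024-06-01.", "Refunds take 5 business days."], store)
print(generate(augment("When was invoice 1042 paid?", search("invoice 1042", store))))
```

Production systems replace the keyword search with semantic retrieval over a vector index, but the four-step shape stays the same.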


Communicating With Machines

For as long as machines have existed, humans have struggled to communicate effectively with them. The rise of large language models (LLMs) has transformed this dynamic, making “prompting” the bridge between our intentions and AI’s actions. By providing pre-trained models with clear instructions and context, we can ensure they understand and respond correctly. As UX practitioners, we now play a key role in facilitating this interaction, helping humans and machines truly connect.

The UX discipline was born alongside graphical user interfaces (GUIs), offering a way for the average person to interact with computers without needing to write code. We introduced familiar concepts like desktops, trash cans, and save icons to align with users’ mental models, while complex code ran behind the scenes. Now, with the power of AI and the transformer architecture, a new form of interaction has emerged: natural language communication. This shift has changed the design landscape, moving us from pure graphical interfaces to an era where text-based interactions dominate. As designers, we must reconsider where our focus should lie in this evolving environment.

A Mental Shift

In the era of command-based design, we focused on breaking down complex user problems, mapping out customer journeys, and creating deterministic flows. Now, with AI at the forefront, our challenge is to provide models with the right context for optimal output and to refine the responses through iteration.

Shifting Complexity to the Edges

Successful communication, whether with a person or a machine, hinges on context. Just as you would clearly explain your needs to a salesperson to get the right product, AI models also need clear instructions. Expecting users to input all the necessary information in their prompts won’t lead to widespread adoption of these models. Here, UX practitioners play a critical role. We can design user experiences that integrate context, some visible to users, others hidden, shaping how AI interacts with them. This ensures that users can seamlessly communicate with machines without the burden of detailed, manual prompts.

The Craft of Prompting

As designers, our role in crafting prompts falls into three main areas. Even if your team isn’t building custom models, there’s still plenty of work to be done: you can help select pre-trained models that align with user goals and design a seamless experience around them.

Understanding the Context Window

A key concept for UX designers to understand is the “context window”: the information a model can process to generate an output. Think of it as the amount of memory the model retains during a conversation. Companies can use this to include hidden prompts, helping guide AI responses to align with brand values and user intent. Context windows are measured in tokens, not time, so even if you return to a conversation weeks later, the model remembers previous interactions, provided they fit within the token limit. With innovations like Gemini’s 2-million-token context window, AI models are moving toward effectively infinite memory, which will bring new design challenges for UX practitioners.

How to Approach Prompting

Prompting is an iterative process: you craft an instruction, test it with the model, and refine it based on the results. Depending on the scenario, you’ll either use direct, simple prompts (for user-facing interactions) or broader, more structured system prompts (for behind-the-scenes guidance).
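Returning to the context window idea above, one way to picture it is as a token budget: a hidden system prompt is always kept, and the oldest conversation turns are dropped once the limit is reached. In the sketch below, the tiktoken tokenizer and the 8,000-token budget are illustrative assumptions, not a specific model’s limit:

```python
# Managing a context window as a token budget. Assumes `tiktoken` is installed.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
BUDGET = 8_000  # tokens the model can attend to, minus room for its reply

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

def fit_to_window(system_prompt: str, turns: list[str]) -> list[str]:
    """Keep the system prompt plus as many recent turns as the budget allows."""
    kept: list[str] = []
    used = count_tokens(system_prompt)
    for turn in reversed(turns):  # walk from newest to oldest
        used += count_tokens(turn)
        if used > BUDGET:
            break  # this turn and everything older is dropped
        kept.append(turn)
    return [system_prompt] + list(reversed(kept))

history = ["user: hello"] * 5 + ["user: summarize our chat"]
window = fit_to_window("system: you are a brand-safe assistant.", history)
print(len(window), "messages fit in the window")
```

For designers, the practical takeaway is that the hidden system prompt competes with user history for the same budget, which is a real constraint on how much invisible context a product can carry.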
Get Organized

As prompting becomes more common, teams need a unified approach to avoid conflicting instructions. Proper documentation of system prompting is crucial, especially in larger teams, and helps prevent errors and hallucinations in model responses. Prompt experimentation may also reveal limitations in the AI models themselves, and there are several ways to address these.

Looking Ahead

The UX landscape is evolving rapidly. Many organizations, particularly smaller ones, have yet to realize the importance of UX in AI prompting. Others may not allocate enough resources, underestimating the complexity and importance of UX in shaping AI interactions. As John Culkin said, “We shape our tools, and thereafter, our tools shape us.” The responsibility of integrating UX into AI development goes beyond individual organizations: it is shaping the future of human-computer interaction. This is a pivotal moment for UX, and how we adapt will define the next generation of design.

Content updated October 2024.


The Crucial Role of Data and Integration in AI at Dreamforce

At this year’s Dreamforce, AI is the star of the show, but two essential supporting actors are data and integration. Enterprises are increasingly recognizing the importance of unifying their diverse data sources for effective analysis and swift action, and the race to harness AI makes this integration even more critical. Integration is key not only for merging data but also for automating end-to-end processes, enabling organizations to move faster and deliver better outcomes to customers.

It’s no surprise that MuleSoft, acquired by Salesforce five years ago, is now a major contributor to Salesforce’s growth. Brian Millham, President and COO at Salesforce, highlighted this during the company’s recent Q2 earnings call: “In Q2, nearly half of our greater than $1 million deals included MuleSoft. As customers integrate data from all sources to drive efficiency, growth, and insights, MuleSoft has become mission-critical and was included in half of our top 10 deals.”

Breaking Down Silos

Param Kahlon, EVP and General Manager for Automation and Integration at Salesforce, recently discussed the investments customers are making in data and integration. He emphasized the importance of breaking down operational silos: “We are in the business of breaking silos across systems to ensure that data can travel seamlessly through multiple systems and people for processes like order-to-cash or procure-to-pay. Our technology connects these dots.” The surge in AI interest has increased the urgency to act, as Kahlon explained: “Creating data repositories for AI algorithms requires real-time data across silos, driving significant demand for our integration solutions.”

Consolidating Data

Enterprises have long struggled with data consolidation due to monolithic application stacks with separate data stores. This has been a challenge even within Salesforce’s own products. Last year, Salesforce introduced a Customer Data Platform (CDP) called Data Cloud, which includes a real-time data layer named Genie. Kahlon elaborated on its significance: “Data Cloud’s strength lies in its understanding and storage of Salesforce metadata. This native integration allows for real-time actions within Salesforce, enhancing the ability to aggregate, reason over, and act on data.” For example, when a customer contacts a bank, Data Cloud can compile their ATM usage, website interactions, and recent support cases, providing the agent with a comprehensive view to better assist the customer.

Leveraging Metadata for AI

Salesforce’s metadata layer, which has been fine-tuned over two decades, gives it a distinct advantage. Kahlon noted: “This metadata-based architecture allows us to create meaningful AI algorithms that are natively consumed within Salesforce, enabling visualization and action based on real-time data.” This is crucial for training the underlying Large Language Model (LLM) accurately, ensuring generated content is contextually grounded and trustworthy. Kahlon emphasized: “The trust layer is essential. We need to ensure no hallucination or toxicity in the LLM’s responses, and that communications align with our company’s values.”

Real-Time Data and API Management

Data Cloud’s ability to connect to other data sources like Snowflake without duplicating data is a significant benefit. Kahlon commented: “Duplicating data is not desirable.
Customers need real-time access to the actual source of truth.” On the integration front, APIs have simplified connecting applications and data sources, but managing API sprawl is crucial. Kahlon explained: “Standardizing API use and publishing them in a centralized portal is essential for reusability and consistency. Low-code platforms and connectors are becoming increasingly relevant, enabling business users to access data without relying on IT.”

Automation and AI

The demand for automation is growing, and low-code tools are vital. Instead of integration experts being overwhelmed, organizations should establish Centers of Excellence focused on creating reusable connectors and automations. Kahlon added: “Companies need low-code tools to involve more business users in the transformation journey without slowing down due to legacy applications.” In the future, AI may further ease the workload on integration specialists. MuleSoft recently introduced an API Experience Hub to make APIs discoverable, and AI might eventually help monitor execution logs and manage APIs more effectively. Kahlon concluded: “AI could help developers find and use APIs efficiently, enhancing security and governance while simplifying access to data across the organization.”


Reporting With Own

In any Salesforce organization, vast amounts of data are generated constantly from sales activities, customer interactions, marketing campaigns, and more. Summarizing and digesting this information quickly is crucial, especially when presenting the big picture to leadership. This is where Salesforce reports come into play.

The Salesforce Reports feature enables organizations to analyze, visualize, and summarize data in real time. By pulling data from across your Salesforce environment, reports consolidate information into easily digestible formats such as charts, tables, and graphs.

How Historical Data Can Improve Reporting in Salesforce

While real-time reports are valuable, incorporating historical data can significantly enhance reporting by offering deeper insights into your organization’s long-term performance.

Challenges of Reporting with Historical Data in Salesforce

While incorporating historical data is smart, Salesforce’s native reporting capabilities impose certain limitations.

Don’t Let Salesforce Reporting Limitations Hold You Back

With Own Discover, customers can effortlessly generate time-series datasets from any objects and fields over any time period in just a few clicks. These datasets can be accessed using standard query and reporting tools, without requiring a data warehouse or the need to enrich existing data warehouses, overcoming Salesforce’s native limitations.
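To illustrate the underlying idea of historical, time-series reporting (not how Own Discover itself works), the sketch below snapshots live Salesforce records on a schedule so trends can be charted later. It assumes the simple-salesforce library; the credentials, object, and fields are placeholders:

```python
# Sketch: build a time-series dataset by snapshotting live records daily.
# Assumes `simple-salesforce` is installed; credentials are placeholders.
import csv
from datetime import date
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")

def snapshot_opportunities(path: str = "opportunity_history.csv") -> int:
    """Append today's Opportunity amounts to a growing time-series file."""
    rows = sf.query("SELECT Id, StageName, Amount FROM Opportunity")["records"]
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for r in rows:
            writer.writerow([date.today().isoformat(), r["Id"], r["StageName"], r["Amount"]])
    return len(rows)

# Run on a schedule (cron, a job scheduler, etc.); each run adds one dated
# snapshot, letting standard reporting tools chart pipeline changes over time.
print(snapshot_opportunities(), "records snapshotted")
```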


Announcing Tableau Pulse and Tableau GPT

It’s fair to say that many are familiar with ChatGPT, the groundbreaking Large Language Model from OpenAI that has transformed how we work and interact with AI. At TC 2023, Tableau announced a new tool called Tableau GPT. But what exactly is Tableau GPT, and how does it fit into Tableau’s suite of products?

Tableau GPT

Tableau GPT is an assistant leveraging the advanced capabilities of generative AI to simplify and democratize data analysis. Built from Einstein GPT, a Salesforce product developed in collaboration with OpenAI, Tableau GPT integrates generative AI into Tableau’s user experience. This integration aims to help users work smarter, learn faster, and communicate more effectively. During the Devs on Stage segment of the keynote at TC, Matthew Miller, Senior Director of Product Management, showcased Tableau GPT’s ability to generate calculations. For example, given a prompt like “Extract email addresses from JSON,” Tableau GPT quickly produces a calculation that users can copy into the calculation window.

Tableau Pulse

Tableau GPT also powers a new tool called Tableau Pulse, designed to generate powerful insights swiftly. Tableau Pulse provides “data digests” on a personalized metrics homepage, offering a curated, newsfeed-like experience of key KPIs. As users interact with Pulse, it learns to deliver more personalized results based on their interests. For example, Tableau Pulse highlights metrics that require attention, derived from recent data trends identified by Tableau GPT. The tool provides the latest metric values, visual trends, and AI-generated insights for user-selected KPIs.

Tableau Pulse also enables users to ask questions about their data in natural language. For instance, when asked, “What is driving change in Appliance Sales?” Tableau Pulse responded with a brief answer and visualization. Further inquiries, such as “What else should I know about air fryers?” revealed that the “inventory fill rate” for air fryers was forecast to fall below a set threshold, providing actionable insights that users can share across their organization.

Future Impact and Availability

Tableau GPT and Pulse promise to revolutionize interactions with Tableau products, enabling quicker visualization creation and making data accessible to non-technical users. Salesforce announced that Tableau Pulse and Tableau GPT would enter pilot testing later this year. When they do, we’ll be ready to share new insights. Follow us on LinkedIn to stay updated on all the latest developments and features in Tableau!


AI Large Language Models

What Exactly Constitutes a Large Language Model?

Picture having an exceptionally intelligent digital assistant that has extensively combed through text, encompassing books, articles, websites, and various other written content up to the year 2021. Yet, unlike a library that houses entire books, this digital assistant processes patterns from the textual data it encounters. This assistant, a large language model (LLM), is an advanced computer model tailored to comprehend and generate text with humanlike qualities. Its training involves exposure to vast amounts of text data, allowing it to discern patterns, language structures, and relationships between words and sentences.

How Do These Large Language Models Operate?

Fundamentally, large language models, exemplified by GPT-3, make predictions on a token-by-token basis, sequentially building a coherent sequence. Given a request, they strive to predict the subsequent token, utilizing the patterns acquired during training. These models showcase remarkable pattern recognition, generating contextually relevant content across diverse topics.

The “large” aspect of these models refers to their extensive size and complexity, necessitating substantial computational resources like powerful servers equipped with multiple processors and ample memory. This capability enables the model to manage and process vast datasets, enhancing its proficiency in comprehending and generating high-quality text. While the sizes of LLMs vary, they typically house billions of parameters: variables learned during the training process that embody the knowledge extracted from the data. The greater the number of parameters, the more adept the model becomes at capturing intricate patterns. For instance, GPT-3 boasts around 175 billion parameters, marking a significant advancement in language processing capabilities, while GPT-4 is purported to exceed 1 trillion parameters.

While these numerical feats are impressive, the challenges associated with such mammoth models include resource-intensive training, environmental implications, potential biases, and more. Large language models serve as virtual assistants with profound knowledge, aiding in a spectrum of language-related tasks. They contribute to writing, offer information, provide creative suggestions, and engage in conversations, aiming to make human-technology interactions more natural. However, users should be cognizant of their limitations and regard them as tools rather than infallible sources of truth.

What Constitutes the Training of Large Language Models?

Training a large language model is analogous to instructing a robot in comprehending and utilizing human language.

Fine-Tuning: A Closer Look

Fine-tuning involves further training a pre-trained model on a more specific and compact dataset than the original. It is akin to training a robot proficient in various cuisines to specialize in Italian dishes using a dedicated cookbook.

Versioning and Progression

Large language models evolve through versions, with changes in size, training data, or parameters. Each iteration aims to address weaknesses, handle a broader task spectrum, or minimize biases and errors. In essence, large language model versions emulate successive editions of a book series, each release striving for refinement, expansiveness, and more captivating capabilities.
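The token-by-token prediction described above can be observed directly with a small open model. The sketch below uses GPT-2 from Hugging Face transformers as a stand-in for larger LLMs and greedily picks one token at a time:

```python
# Next-token prediction made visible: the model repeatedly scores every
# vocabulary token and the single best one is appended to the sequence.
# Assumes `torch` and `transformers` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The capital of France is"
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                   # generate ten tokens, one at a time
        logits = model(ids).logits        # scores for every vocabulary token
        next_id = logits[0, -1].argmax()  # greedy: take the single best token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Production systems usually sample from the score distribution instead of always taking the top token, which is what temperature and similar settings control, but the loop itself is the same.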
