Linear Archives - gettectonic.com
AI Agent Workflows

AI Agent Workflows

AI Agent Workflows: The Ultimate Guide to Choosing Between LangChain and LangGraph

Explore two transformative libraries—LangChain and LangGraph—both created by the same developer, designed to build Agentic AI applications. This guide dives into their foundational components, differences in handling functionality, and how to choose the right tool for your use case.

Language Models as the Bridge
Modern language models have unlocked revolutionary ways to connect users with AI systems and enable AI-to-AI communication via natural language. Enterprises aiming to harness Agentic AI capabilities often face the pivotal question: “Which tools should we use?” For those eager to begin, this question can become a roadblock.

Why LangChain and LangGraph?
LangChain and LangGraph are among the leading frameworks for crafting Agentic AI applications. By understanding their core building blocks and approaches to functionality, you’ll gain clarity on how each aligns with your needs. Keep in mind that the rapid evolution of generative AI tools means today’s truths might shift tomorrow.

Note: Initially, this guide intended to compare AutoGen, LangChain, and LangGraph. However, AutoGen’s upcoming 0.4 release introduces a foundational redesign. Stay tuned for insights post-launch!

Understanding the Basics

LangChain
LangChain offers two primary methods:
Key components include:

LangGraph
LangGraph is tailored for graph-based workflows, enabling flexibility in non-linear, conditional, or feedback-loop processes. It’s ideal for cases where LangChain’s predefined structure might not suffice.
Key components include:

Comparing Functionality
Tool Calling
Conversation History and Memory
Retrieval-Augmented Generation (RAG)
Parallelism and Error Handling

When to Choose LangChain, LangGraph, or Both
LangChain Only
LangGraph Only
Using LangChain + LangGraph Together

Final Thoughts
Whether you choose LangChain, LangGraph, or a combination, the decision depends on your project’s complexity and specific needs. By understanding their unique capabilities, you can confidently design robust Agentic AI workflows.
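To ground the comparison above, here is a minimal, hedged sketch of the two styles side by side: a linear LangChain chain and a LangGraph state graph with an explicit conditional edge. It assumes the langchain-openai, langchain-core, and langgraph packages with their current Python APIs (ChatOpenAI, ChatPromptTemplate, StateGraph) and an OPENAI_API_KEY in the environment; the node names and the toy review check are invented for illustration.

```python
# Hedged sketch: a linear LangChain pipeline vs. a LangGraph state machine.
# Assumes langchain-openai, langchain-core, and langgraph are installed and
# OPENAI_API_KEY is set; adjust imports to the versions you actually use.
from typing import TypedDict

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph

llm = ChatOpenAI(model="gpt-4o-mini")

# --- LangChain: a predefined, linear chain (prompt -> model) ---
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | llm
print(chain.invoke({"text": "LangChain composes LLM calls into linear pipelines."}).content)

# --- LangGraph: an explicit graph with a conditional edge (non-linear flow) ---
class State(TypedDict):
    question: str
    draft: str
    approved: bool

def draft_answer(state: State) -> dict:
    reply = llm.invoke(f"Answer briefly: {state['question']}")
    return {"draft": reply.content}

def review(state: State) -> dict:
    # Toy check standing in for a real reviewer node or human-in-the-loop step.
    return {"approved": len(state["draft"]) < 400}

def route(state: State) -> str:
    return "done" if state["approved"] else "retry"

graph = StateGraph(State)
graph.add_node("draft", draft_answer)
graph.add_node("review", review)
graph.set_entry_point("draft")
graph.add_edge("draft", "review")
graph.add_conditional_edges("review", route, {"done": END, "retry": "draft"})
app = graph.compile()
print(app.invoke({"question": "When would I pick LangGraph over LangChain?"})["draft"])
```

The chain fixes its structure up front, while the graph makes routing and the retry loop explicit in code, which is the practical trade-off the guide describes.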

Read More
Salesforce LlamaRank

Salesforce LlamaRank

Document ranking remains a critical challenge in information retrieval and natural language processing. Effective document retrieval and ranking are crucial for enhancing the performance of search engines, question-answering systems, and Retrieval-Augmented Generation (RAG) systems. Traditional ranking models often struggle to balance result precision with computational efficiency, especially when dealing with large datasets and diverse query types. This challenge underscores the growing need for advanced models that can provide accurate, contextually relevant results in real-time from continuous data streams and increasingly complex queries.

Salesforce AI Research has introduced a cutting-edge reranker named LlamaRank, designed to significantly enhance document ranking and code search tasks across various datasets. Built on the Llama3-8B-Instruct architecture, LlamaRank integrates advanced linear and calibrated scoring mechanisms, achieving both speed and interpretability.

The Salesforce AI Research team developed LlamaRank as a specialized tool for document relevancy ranking. Enhanced by iterative feedback from their dedicated RLHF data annotation team, LlamaRank outperforms many leading APIs in general document ranking and sets a new standard for code search performance. The model’s training data includes high-quality synthesized data from Llama3-70B and Llama3-405B, along with human-labeled annotations, covering a broad range of domains from topic-based search and document QA to code QA.

In RAG systems, LlamaRank plays a crucial role. Initially, a query is processed using a less precise but cost-effective method, such as semantic search with embeddings, to generate a list of potential documents. The reranker then refines this list to identify the most relevant documents, ensuring that the language model is fine-tuned with only the most pertinent information, thereby improving accuracy and coherence in the output responses.

LlamaRank’s architecture, based on Llama3-8B-Instruct, leverages a diverse training corpus of synthetic and human-labeled data. This extensive dataset enables LlamaRank to excel in various tasks, from general document retrieval to specialized code searches. The model underwent multiple feedback cycles from Salesforce’s data annotation team to achieve optimal accuracy and relevance in its scoring predictions. During inference, LlamaRank predicts token probabilities and calculates a numeric relevance score, facilitating efficient reranking.

Demonstrated on several public datasets, LlamaRank has shown impressive performance. For instance, on the SQuAD dataset for question answering, LlamaRank achieved a hit rate of 99.3%. It posted a hit rate of 92.0% on the TriviaQA dataset. In code search benchmarks, LlamaRank recorded a hit rate of 81.8% on the Neural Code Search dataset and 98.6% on the TrailheadQA dataset. These results highlight LlamaRank’s versatility and efficiency across various document types and query scenarios.

LlamaRank’s technical specifications further emphasize its advantages. Supporting up to 8,000 tokens per document, it significantly outperforms competitors like Cohere’s reranker. It delivers low-latency performance, ranking 64 documents in under 200 ms with a single H100 GPU, compared to approximately 3.13 seconds on Cohere’s serverless API. Additionally, LlamaRank features linear scoring calibration, offering clear and interpretable relevance scores.
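As a rough sketch of the retrieve-then-rerank pipeline described above, the example below retrieves candidates with inexpensive bi-encoder embeddings and then reorders them with a cross-encoder reranker. A generic open-source cross-encoder from sentence-transformers stands in for LlamaRank, whose client API is not shown here; only the two-stage shape of the pipeline is the point.

```python
# Hedged sketch of the two-stage retrieve-then-rerank pattern described above.
# A generic open-source cross-encoder stands in for LlamaRank; only the
# pipeline shape is the point, not the specific models used.
from sentence_transformers import CrossEncoder, SentenceTransformer, util

docs = [
    "LlamaRank is a reranker built on Llama3-8B-Instruct.",
    "Salesforce ships CRM software for sales and service teams.",
    "Rerankers refine a candidate list produced by cheap semantic search.",
]
query = "How does a reranker fit into a RAG pipeline?"

# Stage 1: cheap, recall-oriented retrieval with bi-encoder embeddings.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = embedder.encode(docs, convert_to_tensor=True)
query_emb = embedder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, doc_emb, top_k=3)[0]
candidates = [docs[h["corpus_id"]] for h in hits]

# Stage 2: precise, slower reranking of the shortlist (LlamaRank's role).
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, doc) for doc in candidates])
for score, doc in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {doc}")
```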
While LlamaRank’s size of 8 billion parameters contributes to its high performance, it is approaching the upper limits of reranking model size. Future research may focus on optimizing model size to balance quality and efficiency.

Overall, LlamaRank from Salesforce AI Research marks a significant advancement in reranking technology, promising to greatly enhance RAG systems’ effectiveness across a wide range of applications. With its powerful performance, efficiency, and clear scoring, LlamaRank represents a major step forward in document retrieval and search accuracy. The community eagerly anticipates its broader adoption and further development.

Read More
Predictive Analytics

Predictive Analytics in Salesforce

Predictive Analytics in Salesforce: Enhancing Decision-Making with AI

In an ever-changing business environment, companies seek tools to forecast trends and anticipate challenges, enabling them to remain competitive. Predictive analytics, powered by Salesforce’s AI capabilities, offers a cutting-edge solution for these needs. In this guide, we’ll explore how predictive analytics works and how Salesforce empowers businesses to make smarter, data-driven decisions.

What is Predictive Analytics?
Predictive analytics uses historical data, statistical modeling, and machine learning to forecast future outcomes. With the vast amount of data organizations generate—ranging from transaction logs to multimedia—unifying this information can be challenging due to data silos. These silos hinder the development of accurate predictive models and limit Salesforce’s ability to deliver actionable insights. The result? Missed opportunities, inefficiencies, and impersonal customer experiences.

When organizations implement proper integrations and data management practices, predictive analytics can harness this data to uncover patterns and predict future events. Techniques such as logistic regression, linear regression, neural networks, and decision trees help businesses gain actionable insights that enhance planning and decision-making.

Einstein Prediction Builder
A key component of the Salesforce Einstein Suite, Einstein Prediction Builder enables users to create custom AI models with minimal coding or data science expertise. Using in-house data, businesses can anticipate trends, forecast customer behavior, and predict outcomes with tailored precision.

Key Features of Einstein Prediction Builder
Note: Einstein Prediction Builder requires an Enterprise or Unlimited Edition subscription to access.

Predictive Model Types in Salesforce
Salesforce employs various predictive models tailored to specific needs:

Building Custom Predictions
Salesforce supports custom predictions tailored to unique business needs, such as forecasting regional sales or calculating appointment attendance rates.

Tips for Building Predictions

Prescriptive Analytics: Turning Predictions into Actions
Predictive insights are only as valuable as the actions they inspire. Einstein Next Best Action bridges this gap by providing context-specific recommendations based on predictions.

How Einstein Next Best Action Works

Data Quality: The Foundation of Accurate Predictions
The effectiveness of predictive analytics depends on the quality of your data. Poor data—whether due to errors, duplicates, or inconsistencies—can skew results and undermine trust.

Best Practices for Data Quality
Modern tools like DataGroomr can automate data validation and cleaning, ensuring that predictions are based on trustworthy information.

Empowering Smarter Decisions with Predictive Analytics
Salesforce’s AI-driven predictive analytics transforms decision-making by providing actionable insights from historical data. Businesses can anticipate trends, improve operational efficiency, and deliver personalized customer experiences. As predictive analytics continues to evolve, companies leveraging these tools will gain a competitive edge in an increasingly dynamic marketplace. Embrace the power of predictive analytics in Salesforce to make faster, more strategic decisions and drive sustained success.
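For readers curious about the statistics underneath, here is a hedged, purely illustrative sketch of a binary churn prediction built with logistic regression in scikit-learn. Einstein Prediction Builder configures comparable models declaratively inside Salesforce without code; the column names below (tenure_months, support_cases, churned) and the tiny dataset are invented for the example.

```python
# Illustrative only: a tiny logistic-regression churn model in scikit-learn,
# mirroring the kind of binary prediction Einstein Prediction Builder
# configures without code. Column names and data are invented for the example.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

data = pd.DataFrame({
    "tenure_months": [2, 36, 5, 48, 12, 60, 3, 24, 7, 30],
    "support_cases": [8, 1, 6, 0, 3, 1, 9, 2, 7, 1],
    "churned":       [1, 0, 1, 0, 0, 0, 1, 0, 1, 0],
})

X = data[["tenure_months", "support_cases"]]
y = data["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

model = LogisticRegression().fit(X_train, y_train)
print("AUC on held-out rows:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score a hypothetical new account (4 months tenure, 5 support cases).
new_account = pd.DataFrame({"tenure_months": [4], "support_cases": [5]})
print("Churn probability:", model.predict_proba(new_account)[0, 1])
```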

Read More
Exploring Emerging LLM

Exploring Emerging LLM

Exploring Emerging LLM Agent Types and Architectures

The Evolution Beyond ReAct Agents
The shortcomings of first-generation ReAct agents have paved the way for a new era of LLM agents, bringing innovative architectures and possibilities. In 2024, agents have taken center stage in the AI landscape. Companies globally are developing chatbot agents, tools like MultiOn are bridging agents to external websites, and frameworks like LangGraph and LlamaIndex Workflows are helping developers build more structured, capable agents.

However, despite their rising popularity within the AI community, agents are yet to see widespread adoption among consumers or enterprises. This leaves businesses wondering: How do we navigate these emerging frameworks and architectures? Which tools should we leverage for our next application? Having recently developed a sophisticated agent as a product copilot, we share key insights to guide you through the evolving agent ecosystem.

What Are LLM-Based Agents?
At their core, LLM-based agents are software systems designed to execute complex tasks by chaining together multiple processing steps, including LLM calls. These agents:

The Rise and Fall of ReAct Agents
ReAct (reason, act) agents marked the first wave of LLM-powered tools. Promising broad functionality through abstraction, they fell short due to their limited utility and overgeneralized design. These challenges spurred the emergence of second-generation agents, emphasizing structure and specificity.

The Second Generation: Structured, Scalable Agents
Modern agents are defined by smaller solution spaces, offering narrower but more reliable capabilities. Instead of open-ended design, these agents map out defined paths for actions, improving precision and performance. Key characteristics of second-gen agents include:

Common Agent Architectures

Agent Development Frameworks
Several frameworks are now available to simplify and streamline agent development:
While frameworks can impose best practices and tooling, they may introduce limitations for highly complex applications. Many developers still prefer code-driven solutions for greater control.

Should You Build an Agent?
Before investing in agent development, consider these criteria:
If you answered “yes,” an agent may be a suitable choice.

Challenges and Solutions in Agent Development
Common Issues:
Strategies to Address Challenges:

Conclusion
The generative AI landscape is brimming with new frameworks and fervent innovation. Before diving into development, evaluate your application needs and consider whether agent frameworks align with your objectives. By thoughtfully assessing the tools and architectures available, you can create agents that deliver measurable value while avoiding unnecessary complexity.
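To make the idea of a smaller solution space concrete, here is a hedged sketch of a constrained, second-generation-style agent: the model can only choose between two explicitly registered tools, and the control loop is ordinary, bounded code rather than an open-ended ReAct trajectory. It uses the OpenAI Python SDK's tool-calling interface; the tools themselves (lookup_order, refund_order) and their behavior are invented for illustration.

```python
# Hedged sketch of a constrained, second-generation-style agent: the model can
# only pick from a small, explicit tool set, and the loop is ordinary code.
# Tool names and behavior are invented; requires OPENAI_API_KEY to run.
import json

from openai import OpenAI

client = OpenAI()

def lookup_order(order_id: str) -> str:
    # Stand-in for a real order-management lookup.
    return json.dumps({"order_id": order_id, "status": "delayed", "days_late": 3})

def refund_order(order_id: str) -> str:
    # Stand-in for a real refund action.
    return json.dumps({"order_id": order_id, "refunded": True})

TOOLS = {"lookup_order": lookup_order, "refund_order": refund_order}
TOOL_SPECS = [
    {"type": "function", "function": {
        "name": name,
        "description": f"{name.replace('_', ' ')} by order id",
        "parameters": {"type": "object",
                       "properties": {"order_id": {"type": "string"}},
                       "required": ["order_id"]}}}
    for name in TOOLS
]

messages = [{"role": "user", "content": "Order 42 is late. What should happen?"}]
for _ in range(5):  # bounded loop instead of an open-ended ReAct trajectory
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=TOOL_SPECS
    )
    msg = reply.choices[0].message
    if not msg.tool_calls:
        print(msg.content)
        break
    messages.append(msg)
    for call in msg.tool_calls:
        result = TOOLS[call.function.name](**json.loads(call.function.arguments))
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```

Because the routing and the tool registry live in plain code, failures are easier to trace and the agent cannot wander outside the paths you have defined.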

Read More
Chatbot-less AI-ifying

Chatbot-less AI-ifying

AI-ify Your Product Without Adding a Chatbot: Inspiration from Top AI Use Cases

Artificial intelligence doesn’t always need to look like a chatbot. Some of the most innovative implementations of AI have created intuitive user experiences (UX) without relying on traditional conversational interfaces. Here are seven standout patterns from leading companies and startups that demonstrate how AI can elevate your product in ways that feel natural and empowering for users. These are just a preview of the 24 trending AI-UX patterns featured in the “Trending AI-UX Patterns” ebook by AIverse—perfect for borrowing (or expensing to your company).

Pattern 1: Linear Back-and-Forth (Classic Chat)
While chat interfaces revolutionized access to AI, this pattern is just the beginning. Think of ChatGPT—its conversational simplicity opened the door to powerful LLMs for non-tech audiences. But beyond basic chat, consider integrating generative UI commands or API-based functionality into your product to transform linear data access into something seamless and engaging.

Pattern 2: Non-Linear Conversations
Inspired by Subform, this pattern mirrors how humans think—connecting ideas in a web, not a straight line. Non-linear exploration allows users to navigate through information like dots on a map, offering a flexible, intuitive flow. For example, imagine an AI that surfaces related ideas or actions based on user input—ideal for creative tools or brainstorming apps.

Pattern 3: Context Bundling
Why stop at simple text input when you can bundle context visually? Figma’s dual-tone matrix simplifies tone adjustments for text by letting users drag across a 2D grid. It eliminates the need for complex prompts while maintaining control over customization. Think of ways to integrate pre-bundled prompts directly into your UI to create an intuitive, visually driven experience.

Pattern 4: Living Documents
Tools like Elicit bring AI into familiar interfaces like spreadsheets by enhancing workflows without disrupting them. Elicit’s bulk data extraction uses subtle animations and transparency—highlighting “low confidence” answers for clarity. This hybrid approach integrates AI in a way that feels natural and predictable, making it a great choice for data-heavy tools or reporting systems.

Pattern 5: Work With Me
One of the most human-centered AI patterns comes from Granola, which uses meeting summaries based on your rough notes. Instead of overwhelming users with full transcriptions, it creates concise, actionable insights, perfectly blending human oversight with AI-powered efficiency. This pattern exemplifies the “human-in-the-loop” trend, ensuring collaboration between the user and AI.

Pattern 6: Highlight and Curate
Take inspiration from Lex’s “@lex” comment feature, which allows users to highlight and comment directly in the flow of their work—no app switching or disruption required. By building on familiar text-interaction patterns, this approach integrates AI subtly, offering suggestions or enhancements without breaking the user’s autonomy.

Pattern 7: Invisible AI (Agentive UX)
AI can work quietly in the background until needed, as demonstrated by Ford’s lane assist. This feature seamlessly takes control during critical moments (e.g., steering) and hands it back to the user effortlessly. Visual, auditory, and haptic feedback make the transition intuitive and reassuring. This “agentive” pattern is perfect for products where AI acts as a silent partner, ready to assist only when necessary.
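As one way to picture context bundling in code, the sketch below maps a position on a two-axis tone grid to a pre-assembled prompt, so the user drags a point instead of writing a prompt. The axis names and the compose_prompt helper are invented for illustration and are not taken from Figma's implementation.

```python
# Hedged sketch of context bundling: translate a point on a 2D tone grid into a
# pre-bundled rewriting prompt, so users adjust controls instead of writing prompts.
# The axis names and helper are invented; real products will differ.
def compose_prompt(text: str, formality: float, length: float) -> str:
    """formality and length are grid coordinates in the range [0, 1]."""
    tone = "formal and professional" if formality >= 0.5 else "casual and friendly"
    size = "more concise" if length < 0.5 else "more detailed"
    return (
        f"Rewrite the text below so it sounds {tone} and is {size}. "
        f"Keep the original meaning.\n\nText: {text}"
    )

# A drag on the grid just changes the two numbers; the prompt is assembled for the user.
print(compose_prompt("hey, the report is kinda late, sorry!", formality=0.9, length=0.3))
```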
Tectonic Conclusions
These patterns prove that AI can elevate your product without resorting to a chatbot. Whether through non-linear exploration, visual bundling, or seamless agentive experiences, the key is to integrate AI in a way that feels intuitive, empowering, and aligned with user needs.

Read More
Einstein Discovery Dictionary

Einstein Discovery Dictionary

Familiarize yourself with terminology that is commonly associated with Einstein Discovery.

Actionable Variable
An actionable variable is an explanatory variable that people can control, such as deciding which marketing campaign to use for a particular customer. Contrast these variables with explanatory variables that can’t be controlled, such as a customer’s street address or a person’s age. If a variable is designated as actionable, the model uses prescriptive analytics to suggest actions (improvements) the user can take to improve the predicted outcome.

Actual Outcome
An actual outcome is the real-world value of an observation’s outcome variable after the outcome has occurred. Einstein Discovery calculates model performance by comparing how closely predicted outcomes come to actual outcomes. An actual outcome is sometimes called an observed outcome.

Algorithm
See modeling algorithm.

Analytics Dataset
An Analytics dataset is a collection of related data that is stored in a denormalized, yet highly compressed, form. The data is optimized for analysis and interactive exploration.

Attribute
See variable.

Average
In Einstein Discovery, the average represents the statistical mean for a variable.

Bias
If Einstein Discovery detects bias in your data, it means that variables are being treated unequally in your model. Removing bias from your model can produce more ethical and accountable models and, therefore, predictions. See disparate impact.

Binary Classification Use Case
The binary classification use case applies to business outcomes that are binary: categorical (text) fields with only two possible values, such as win-lose, pass-fail, public-private, retain-churn, and so on. These outcomes separate your data into two distinct groups. For analysis purposes, Einstein Discovery converts the two values into Boolean true and false. Einstein Discovery uses logistic regression to analyze binary outcomes. Binary classification is one of the main use cases that Einstein Discovery supports. Compare with multiclass classification.

Cardinality
Cardinality is the number of distinct values in a category. Variables with high cardinality (too many distinct values) can result in complex visualizations that are difficult to read and interpret. Einstein Discovery supports up to 100 categories per variable. You can optionally consolidate the remaining categories (categories with fewer than 25 observations) into a category called Other. Null values are put into a category called Unspecified.

Categorical Variable
A categorical variable is a type of variable that represents qualitative values (categories). A model that represents a binary or multiclass classification use case has a categorical variable as its outcome. See category.

Category
A category is a qualitative value that usually contains categorical (text) data, such as Product Category, Lead Status, and Case Subject. Categories are handy for grouping and filtering your data. Unlike measures, you can’t perform math on categories. In Salesforce Help for Analytics datasets, categories are referred to as dimensions.

Causation
Causation describes a cause-and-effect relationship between things. In Einstein Discovery, causality refers to the degree to which variables influence each other (or not), such as between explanatory variables and an outcome variable. Some variables can have an obvious, direct effect on each other (for example, how price and discount affect the sales margin). Other variables can have a weaker, less obvious effect (for example, how weather can affect on-time delivery). Many variables have no effect on each other: they are independent and mutually exclusive (for example, win-loss records of soccer teams and currency exchange rates). It’s important to remember that you can’t presume a causal relationship between variables based simply on a statistical correlation between them. In fact, correlation provides you with a hint that indicates further investigation into the association between those variables. Only with more exploration can you determine whether a causal link between them really exists and, if so, how significant that effect is.

Coefficient
A coefficient is a numeric value that represents the impact that an explanatory variable (or a pair of explanatory variables) has on the outcome variable. The coefficient quantifies the change in the mean of the outcome variable when there’s a one-unit shift in the explanatory variable, assuming all other variables in the model remain constant.

Comparative Insight
Comparative insights are insights derived from a model. Comparative insights reveal information about the relationships between explanatory variables and the outcome variable in your story. With comparative insights, you isolate factors (categories or buckets) and compare their impact with other factors or with global averages. Einstein Discovery shows waterfall charts to help you visualize these comparisons.

Correlation
A correlation is simply the association—or “co-relationship”—between two or more things. In Einstein Discovery, correlation describes the statistical association between variables, typically between explanatory variables and an outcome variable. The strength of the correlation is quantified as a percentage. The higher the percentage, the stronger the correlation. However, keep in mind that correlation is not causation. Correlation merely describes the strength of association between variables, not whether they causally affect each other.

Count
A count is the number of observations (rows) associated with an analysis. The count can represent all observations in the dataset, or the subset of observations that meet associated filter criteria.

Dataset
See Analytics dataset.

Date Variable
A date variable is a type of variable that contains date/time (temporal) data.

Dependent Variable
See outcome variable.

Deployment Wizard
The Deployment Wizard is the Einstein Discovery tool used to deploy models into your Salesforce org.

Descriptive Insights
Descriptive insights are insights derived from historical data using descriptive analytics. Descriptive insights show what happened in your data. For example, Einstein Discovery in Reports produces descriptive insights for reports.

Diagnostic Insights
Diagnostic insights are insights derived from a model. Whereas descriptive insights show what happened in your data, diagnostic insights show why it happened. Diagnostic insights drill deeper into correlations to help you understand which variables most significantly impacted the business outcome you’re analyzing. The term why refers to a high statistical correlation, not necessarily a causal relationship.

Disparate Impact
If Einstein Discovery detects disparate impact in your data, it means that the data reflects discriminatory practices toward a particular demographic. For example, your data can reveal gender disparities in starting salaries. Removing disparate impact from your model can produce more accountable and ethical insights and, therefore, predictions that are fair and equitable.

Dominant Values
If Einstein Discovery detects dominant values in a variable, it means that the data is unbalanced. Most values are in the same category, which can limit the value of the analysis.

Drift
Over time, a deployed model’s performance can drift, becoming less accurate in predicting outcomes. Drift can occur due to changing factors in the data or in your business environment. Drift also results from now-obsolete assumptions built into the story.
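Several of these behaviors are simple data transformations. As a hedged illustration of the cardinality handling described under Cardinality (folding categories with fewer than 25 observations into Other and nulls into Unspecified), here is a small pandas sketch; the data and code are invented and are not Einstein Discovery's implementation.

```python
# Hedged sketch of the cardinality handling described above: categories with
# fewer than 25 observations are folded into "Other" and nulls become
# "Unspecified". Thresholds follow the text; the code is illustrative only.
import pandas as pd

def consolidate_categories(s: pd.Series, min_count: int = 25) -> pd.Series:
    counts = s.value_counts(dropna=True)
    rare = counts[counts < min_count].index
    s = s.where(~s.isin(rare), "Other")   # fold low-frequency categories
    return s.fillna("Unspecified")        # label missing values

lead_source = pd.Series(["Web"] * 40 + ["Referral"] * 30 + ["Event"] * 5 + [None] * 3)
print(consolidate_categories(lead_source).value_counts())
```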

Read More
gettectonic.com