LLMs Archives - gettectonic.com

From Generative AI to Agentic AI

Understanding the Coming Shift: From Generative AI to Agentic AI

Large Language Models (LLMs), such as GPT, excel at generating text, answering questions, and supporting various tasks. However, they operate reactively, responding only to the input they receive based on learned patterns. LLMs cannot make decisions independently, adapt to new situations, or plan ahead.

Agentic AI addresses these limitations. Unlike Generative AI, Agentic AI can set goals for itself, take initiative on its own, and learn from its experiences. It is proactive, capable of adjusting its actions over time, and can manage complex, evolving tasks that demand continuous problem-solving and decision-making. This transition from reactive to proactive AI unlocks exciting new possibilities across industries. In this insight, we explore the differences between Agentic AI and Generative AI, examining their distinct impacts on technology and industries. Let's begin by understanding what sets them apart.

What is Agentic AI?

Agentic AI refers to systems capable of autonomous decision-making and action to achieve specific goals. These systems go beyond generating content: they interact with their environments, respond to changes, and complete tasks with minimal human guidance. For example:

What is Generative AI?

Generative AI focuses on creating content (text, images, music, or video) by learning from large datasets to identify patterns, styles, or structures. For instance:

Generative AI acts like a creative assistant, producing content based on what it has learned, but it remains reactive and task-specific.

Key Differences in Workflows

Agentic AI employs an iterative, cyclical workflow that includes stages like "Thinking/Research" and "Revision." This adaptive process involves self-assessment, testing, and refinement, enabling the system to learn from each phase and tackle complex, evolving tasks effectively.
Generative AI, in contrast, follows a linear, single-step workflow, moving directly from input to output without iterative improvements. While efficient for straightforward tasks, it lacks the ability to revisit or refine its results, limiting its effectiveness for dynamic or nuanced challenges.

Characteristics of Agentic AI vs. Generative AI

- Autonomy: Agentic AI acts independently, making decisions and executing tasks; Generative AI requires human input to generate responses.
- Behavior: Agentic AI is goal-directed, proactively working toward specific objectives; Generative AI is task-oriented, reacting to immediate prompts.
- Adaptation and Learning: Agentic AI learns from experiences, adjusting its actions dynamically; Generative AI operates on pre-trained patterns, without learning.
- Decision-Making: Agentic AI handles complex decisions, weighing multiple outcomes; Generative AI makes basic decisions, selecting outputs based on patterns.
- Environmental Perception: Agentic AI understands and interacts with its surroundings; Generative AI lacks awareness of the physical environment.

Case Study: Agentic Workflow in Action

Andrew Ng highlighted the power of the Agentic Workflow in a coding task. Using the HumanEval benchmark, his team tested two approaches:

This illustrates how iterative methods can enhance performance, even for older AI models.

Conclusion

As AI becomes increasingly integrated into our lives and workplaces, understanding the distinction between Generative AI and Agentic AI is essential. Generative AI has transformed tasks like content creation, offering immediate, reactive solutions. However, it remains limited to following instructions without true autonomy.

Agentic AI represents a significant leap in technology, taking us from yesterday's chatbots to today's autonomous systems. By setting goals, making decisions, and adapting in real time, it can tackle complex, dynamic tasks without constant human oversight. Approaches like the Agentic Workflow further enhance AI's capabilities, enabling iterative learning and continuous improvement.
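The contrast between the two workflows can be shown in a short sketch. This is an illustrative toy, not a real system: `generate` and `critique` are invented stand-ins for an LLM call and a self-assessment step.

```python
# Toy contrast between the two workflows. generate() and critique() are
# hypothetical stand-ins: a real system would call an LLM to draft and
# to self-assess.
def generate(prompt):
    return f"draft for: {prompt}"

def critique(draft):
    # Stand-in self-assessment: flag drafts that look too thin.
    return "too short" if len(draft) < 40 else None

def linear_workflow(prompt):
    # Generative AI: one pass, input straight to output.
    return generate(prompt)

def agentic_workflow(prompt, max_rounds=3):
    # Agentic AI: draft, self-assess, revise; a think/research/revise loop.
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback is None:
            break
        draft = generate(f"{prompt} (revised per feedback: {feedback})")
    return draft

result = agentic_workflow("write a haiku")
```

The linear path returns its first draft unconditionally; the agentic loop keeps revising until its own critique passes, which is the essence of the iterative workflow described above.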
Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more Top Ten Reasons Why Tectonic Loves the Cloud The Cloud is Good for Everyone – Why Tectonic loves the cloud You don’t need to worry about tracking licenses. Read more

Python-Based Reasoning


Introducing a Python-Based Reasoning Engine for Deterministic AI

As demand for deterministic systems grows, foundational ideas are being revived for the age of large language models (LLMs).

The Challenge

One of the critical issues with modern AI systems is establishing constraints around how they validate and reason about incoming data. As we increasingly rely on stochastic LLMs to process unstructured data, enforcing rules and guardrails becomes vital for ensuring reliability and consistency.

The Solution

To address this, a company has developed a Python-based reasoning and validation framework inspired by Pydantic, designed to empower developers and non-technical domain experts to create sophisticated rule engines. The system is:

By transforming Standard Operating Procedures (SOPs) and business guardrails into enforceable code, this symbolic reasoning framework addresses the need for structured, interpretable, and reliable AI systems.

Key Features

System Architecture

The framework includes five core components:

Types of Engines

Case Studies

1. Validation Engine: Mining Company Compliance

A mining company needed to validate employee qualifications against region-specific requirements. The system was configured to check rules such as minimum age and required certifications for specific roles.

Input Example: Employee data and validation rules were modeled as JSON:

    {
      "employees": [
        { "name": "Sarah", "age": 25, "documents": [{ "type": "safe_handling_at_work" }] },
        { "name": "John", "age": 17, "documents": [{ "type": "heavy_lifting" }] }
      ],
      "rules": [
        { "type": "min_age", "parameters": { "min_age": 18 } }
      ]
    }

Output: Violations, such as "Minimum age must be 18," were flagged immediately, enabling quick remediation.

2. Reasoning Engine: Solving the River Crossing Puzzle

To showcase its capabilities, we modeled the classic river crossing puzzle, where a farmer must transport a wolf, a goat, and a cabbage across a river without leaving incompatible items together.
Steps Taken:

Enhanced Scenario: Adding a new rule, "Wolf cannot be left with a chicken," created an unsolvable scenario. By introducing a compensatory rule, "Farmer can carry two items at once," the system adapted and solved the puzzle with fewer moves.

Developer Insights

The system supports rapid iteration and debugging. For example, adding rules is as simple as defining Python classes:

    class GoatCabbageRule(Rule):
        def evaluate(self, state):
            return not (state.goat == state.cabbage and state.farmer != state.goat)

        def get_description(self):
            return "Goat cannot be left alone with cabbage"

Real-World Impact

This framework accelerates development by enabling non-technical stakeholders to contribute to rule creation through natural language, with developers approving and implementing these rules. This process reduces development time by up to 5x and adapts seamlessly to varied use cases, from logistics to compliance.
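To make the rule example self-contained, here is a minimal sketch of how such rules might be evaluated against a puzzle state. The `State` class, the `Rule` base class, the second rule, and the `violations` helper are assumptions made for this illustration; the framework's actual API is not shown in the post.

```python
# Illustrative sketch only: State, Rule, WolfGoatRule, and violations()
# are invented for this example, not the framework's real API.
from dataclasses import dataclass

@dataclass
class State:
    farmer: str   # which river bank each actor is on: "left" or "right"
    goat: str
    cabbage: str
    wolf: str

class Rule:
    def evaluate(self, state) -> bool:
        raise NotImplementedError

    def get_description(self) -> str:
        raise NotImplementedError

class GoatCabbageRule(Rule):  # the rule from the post
    def evaluate(self, state):
        return not (state.goat == state.cabbage and state.farmer != state.goat)

    def get_description(self):
        return "Goat cannot be left alone with cabbage"

class WolfGoatRule(Rule):
    def evaluate(self, state):
        return not (state.wolf == state.goat and state.farmer != state.goat)

    def get_description(self):
        return "Wolf cannot be left alone with goat"

def violations(state, rules):
    """Return the descriptions of all rules the state breaks."""
    return [r.get_description() for r in rules if not r.evaluate(state)]

rules = [GoatCabbageRule(), WolfGoatRule()]
unsafe = State(farmer="left", goat="right", cabbage="right", wolf="left")
```

Calling `violations(unsafe, rules)` flags the goat-and-cabbage pairing; a search over valid boat moves that keeps this list empty at every step is what solves the puzzle.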

Salesforce Foundations


We are excited that Agentforce Service Agents are now live! Agentforce Service Agent is the autonomous conversational AI assistant that helps your customers with their service and support needs.

What does this mean for Foundations customers?

Salesforce Foundations is required for all customers in order to try or buy Agentforce. Additionally, customers who have Foundations can try Agentforce Agents for free with a limited number of credits to test a use case or deploy a proof of concept.

Salesforce Foundations is not a standalone product or add-on. It's a multi-cloud feature set added to Sales and Service Cloud, with no integration needed and no additional upfront cost for our customers. It includes foundational features from Sales, Service, Marketing, Commerce, and Data Cloud. Salesforce Foundations provides a 360-degree view of your customer relationships across sales, service, marketing, and commerce through integrated applications and unified data. It also boosts productivity with streamlined, visually friendly user interface improvements that you can turn on or off per your requirements.

If you're a Salesforce Sales Cloud or Service Cloud customer, you've become accustomed to the power, convenience, and full-featured functionality of our trusted CRM. Adding the functionality and engagement capabilities of a new Salesforce Cloud is exciting, but it's also a big change for your organization to consider when you're not sure about the value it brings. So, what if you could use essential features in the most popular Salesforce Clouds and turn them on when you're ready? Now you can with Salesforce Foundations.

Salesforce Foundations is a new, no-cost addition to your existing CRM that equips you to expand your business reach. The suite gives Salesforce customers on Enterprise, Unlimited, and Einstein 1 editions the power of Data Cloud, and access to essential Salesforce sales, service, Agentforce, marketing, and commerce capabilities.
This suite is built into your existing CRM and provides new functionality to give you a more robust 360-degree view of your customers. This chart shows the Salesforce Foundations features you get with your current Sales Cloud or Service Cloud package.

                                                Sales   Service   Marketing   Commerce   Data Cloud   Agentforce
    If you already have Sales Cloud               *       Yes        Yes         Yes        Yes          Yes
    If you already have Service Cloud            Yes       *         Yes         Yes        Yes          Yes
    If you already have Sales & Service Clouds    *        *         Yes         Yes        Yes          Yes

*Your current Salesforce product.

Benefits of Salesforce Foundations

The features you get with Salesforce Foundations open doors to all sorts of new ways your teams can work more efficiently and engage with your customers on a more personal level. The benefits listed below are only a few of the ways Salesforce Foundations can help your business grow and thrive. Check out Discover Salesforce Foundations to see the full list of capabilities included with Salesforce Foundations. With Salesforce Foundations, your organization benefits from:

- Sales features that help you take care of your entire sales pipeline, from prospecting to closing. You can manage your leads, opportunities, accounts, and contacts in the preconfigured Sales Console.
- Service features that make it easy to provide proactive, personalized support to your customers through the preconfigured Service Console. Omni-channel case routing makes sure the most qualified agents work each case, Knowledge Management helps agents provide accurate and relevant help articles to customers, and macros help agents complete repetitive tasks with a single click.
- Agentforce brings the power of conversational AI to your business. Try out an intelligent, trusted, and customizable AI agent and help your users get more done with Salesforce.
Agentforce's autonomous apps use LLMs and context to assist customers and human agents.

- Marketing features that allow you to join data from disparate sources, better understand and analyze your customers, and choose how to connect with your audiences. You can create customized marketing campaigns powered by Salesforce Flows to send at the right time.
- Commerce features that help boost sales with a Direct to Customer (D2C) online storefront. You can define customer experiences like search, carts, and checkout. Pay Now lets you generate secure payment links for customers when opportunities close, so you get paid faster.
- Data Cloud functionality that creates unified profiles by aggregating data from all of your data sources into a single view so you can better understand your customers. Create customer segments to more accurately target campaigns, analyze your customers, and manage consent data. Data Cloud also powers features so you can send online store order confirmation emails and marketing messages.

Agentforce Redefines Generative AI


Agentforce: Redefining Generative AI in Salesforce

Many Dreamforce attendees who expected to hear about Einstein Copilot were surprised when Salesforce introduced Agentforce just a week before the conference. While it might seem like a rebranding of Copilot, Agentforce marks a significant evolution by enabling more autonomous agents that go beyond summarizing or generating content to perform specific actions. Here's a breakdown of the transition and what it means for Salesforce users:

Key Vocabulary Updates

How Agentforce Works

Agents take user input, known as an "utterance," and translate it into actionable steps based on predefined configurations. This allows the system to enhance performance over time while delivering responses tailored to user needs.

Understanding Agentforce

1. Topics: Organizing Agent Capabilities

Agentforce introduces "Topics," a new layer of organization that categorizes actions by business function. When a user provides an utterance, the agent identifies the relevant topic first, then determines the best actions to address it.

2. Actions: What Agents Can Do

Actions remain largely unchanged from Einstein Copilot. These are the tasks agents perform to execute plans.

3. Prompts: The Key to Better Results

LLMs rely on prompts to generate outputs, and crafting effective prompts is essential for reducing irrelevant responses and optimizing agent behavior.

How Generative AI Enhances Salesforce

Agentforce unlocks several benefits across productivity, personalization, standardization, and efficiency:

Implementing Agentforce: Tips for Success

Getting Started

Start by using standard Agent actions. These out-of-the-box tools, such as opportunity summarization or close plan creation, provide a strong foundation. You can make minor adjustments to optimize their performance before diving into more complex custom actions.

Testing and Iteration

Testing AI agents is different from testing traditional workflows.
Agents must handle various phrasings of the same user request (utterances) while maintaining consistency in their responses.

The Future of Salesforce with Agentforce

As you gain expertise in planning, developing, testing, and deploying Agentforce actions, you'll unlock new possibilities for transforming your Salesforce experience. With generative AI tools like Agentforce, Salesforce evolves from a traditional point-and-click interface into an intelligent, agent-driven platform with streamlined, conversational workflows. This isn't just an upgrade; it's the foundation for reimagining how businesses interact with their CRM in an AI-assisted world.

Statement Accuracy Prediction based on Language Model Activations


When users first began interacting with ChatGPT, they noticed an intriguing behavior: the model would often reverse its stance when told it was wrong. This raised concerns about the reliability of its outputs. How can users trust a system that appears to contradict itself?

Recent research has revealed that large language models (LLMs) not only generate inaccurate information (often referred to as "hallucinations") but are also aware of their inaccuracies. Despite this awareness, these models proceed to present their responses confidently.

Unveiling LLM Awareness of Hallucinations

Researchers discovered this phenomenon by analyzing the internal mechanisms of LLMs. Whenever an LLM generates a response, it transforms the input query into a numerical representation and performs a series of computations before producing the output. At intermediate stages, these numerical representations are called "activations." These activations contain significantly more information than what is reflected in the final output. By scrutinizing these activations, researchers can identify whether the LLM "knows" its response is inaccurate.

A technique called SAPLMA (Statement Accuracy Prediction based on Language Model Activations) has been developed to explore this capability. SAPLMA examines the internal activations of LLMs to predict whether their outputs are truthful.

Why Do Hallucinations Occur?

LLMs function as next-word prediction models. Each word is selected based on its likelihood given the preceding words. For example, starting with "I ate," the model might predict the next words as follows:

The issue arises when earlier predictions constrain subsequent outputs. Once the model commits to a word, it cannot go back to revise its earlier choice. For instance:

In another case:

This mechanism reveals how the constraints of next-word prediction can lead to hallucinations, even when the model "knows" it is generating an incorrect response.
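The commitment problem can be illustrated with a toy greedy decoder. The probability table below is invented purely for illustration; a real LLM scores its full vocabulary at every step.

```python
# Toy next-word model: the bigram probability table is invented.
MODEL = {
    ("I", "ate"): {"a": 0.6, "the": 0.4},
    ("ate", "a"): {"sandwich": 0.7, "salad": 0.3},
}

def greedy_next(context):
    """Pick the most likely next word given the last two words."""
    choices = MODEL.get(context, {})
    return max(choices, key=choices.get) if choices else None

def greedy_decode(seed, steps):
    tokens = list(seed)
    for _ in range(steps):
        nxt = greedy_next(tuple(tokens[-2:]))
        if nxt is None:
            break
        # The model commits: earlier tokens are never revised, so a bad
        # early choice constrains everything that follows.
        tokens.append(nxt)
    return tokens

sentence = greedy_decode(["I", "ate"], steps=3)
```

Once "a" is emitted, only continuations compatible with "a" remain reachable; this is the mechanism by which an early misstep can snowball into a confidently stated hallucination.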
Detecting Inaccuracies with SAPLMA

To investigate whether an LLM recognizes its own inaccuracies, researchers developed the SAPLMA method. Here's how it works:

The classifier itself is a simple neural network with three dense layers, culminating in a binary output that predicts the truthfulness of the statement.

Results and Insights

The SAPLMA method achieved an accuracy of 60–80%, depending on the topic. While this is a promising result, it is not perfect and has notable limitations. For example:

However, if LLMs can learn to detect inaccuracies during the generation process, they could potentially refine their outputs in real time, reducing hallucinations and improving reliability.

The Future of Error Mitigation in LLMs

The SAPLMA method represents a step forward in understanding and mitigating LLM errors. Accurate classification of inaccuracies could pave the way for models that can self-correct and produce more reliable outputs. While the current limitations are significant, ongoing research into these methods could lead to substantial improvements in LLM performance. By combining techniques like SAPLMA with advancements in LLM architecture, researchers aim to build models that are not only aware of their errors but capable of addressing them dynamically, enhancing both the accuracy and trustworthiness of AI systems.
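The train-a-probe-on-activations idea can be sketched in a few lines. This is not SAPLMA itself: the "activations" below are fabricated (real ones would come from an LLM's hidden layers), and a single logistic layer stands in for the paper's three-dense-layer classifier.

```python
# Toy probe in the spirit of SAPLMA: fabricated "activations", and a
# one-layer logistic probe instead of the real three-dense-layer network.
import math
import random

random.seed(0)
DIM = 4

def fake_activation(truthful):
    # Assumption for the demo: truthful statements cluster around +1,
    # false ones around -1, in every dimension.
    center = 1.0 if truthful else -1.0
    return [center + random.gauss(0, 0.3) for _ in range(DIM)]

data = [(fake_activation(label), 1 if label else 0)
        for label in [True, False] * 50]

# Train a logistic-regression probe with plain gradient descent.
w, b, lr = [0.0] * DIM, 0.0, 0.1

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(200):
    for x, y in data:
        err = predict(x) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
```

On this cleanly separable toy data the probe is near-perfect; real activations are far messier, which is why SAPLMA's reported accuracy lands in the 60–80% range.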


Autonomy, Architecture, and Action

Redefining AI Agents: Autonomy, Architecture, and Action

AI agents are reshaping how technology interacts with us and executes tasks. Their mission? To reason, plan, and act independently: following instructions, making autonomous decisions, and completing actions, often without user involvement. These agents adapt to new information, adjust in real time, and pursue their objectives autonomously. This evolution in agentic AI is revolutionizing how goals are accomplished, ushering in a future of semi-autonomous technology.

At their foundation, AI agents rely on one or more large language models (LLMs). However, designing agents is far more intricate than building chatbots or generative assistants. While traditional AI applications often depend on user-driven inputs, such as prompt engineering or active supervision, agents operate autonomously.

Core Principles of Agentic AI Architectures

To enable autonomous functionality, agentic AI systems must incorporate:

Essential Infrastructure for AI Agents

Building and deploying agentic AI systems requires robust software infrastructure that supports:

Agent Development Made Easier with Langflow and Astra DB

Langflow simplifies the development of agentic applications with its visual IDE. It integrates with Astra DB, which combines vector and graph capabilities for ultra-low-latency data access. This synergy accelerates development by enabling:

Transforming Autonomy into Action

Agentic AI is fundamentally changing how tasks are executed by empowering systems to act autonomously. By leveraging platforms like Astra DB and Langflow, organizations can simplify agent design and deploy scalable, effective AI applications. Start building the next generation of AI-powered autonomy today.

Scope of Generative AI

Exploring Generative AI

Like most employees at most companies, I wear a few different hats around Tectonic. Whether I'm building a data model, creating and scheduling an email campaign, or standing up a platform, generative AI is always at my fingertips. At my very core, I'm a marketer. I have been for so long that I do it without even thinking. Or at least, everything I do has a hat tip to its future marketing needs. Today I want to share some of the AI content generators I've been using, am looking to use, or have just heard about. But before we rip into the insight, here's a primer.

Types of AI Content Generators

ChatGPT, a powerful AI chatbot, drew significant attention upon its November 2022 release. While the GPT-3 language model behind it had existed for some time, ChatGPT made this technology accessible to nontechnical users, showcasing how AI can generate content. Over two years later, numerous AI content generators have emerged to cater to diverse use cases.

This rapid development raises questions about the technology's impact on work. Schools are grappling with fears of plagiarism, while others are embracing AI. Legal debates about copyright and digital media authenticity continue. President Joe Biden's October 2023 executive order addressed AI's risks and opportunities in areas like education, workforce, and consumer privacy, underscoring generative AI's transformative potential.

What is AI-Generated Content?

AI-generated content, also known as generative AI, refers to algorithms that automatically create new content across digital media. These algorithms are trained on extensive datasets and require minimal user input to produce novel outputs. ChatGPT, for instance, sets a standard for AI-generated content. Based on GPT-4o, it processes text, images, and audio, offering natural language and multimodal capabilities. Many other generative AI tools operate similarly, leveraging large language models (LLMs) and multimodal frameworks to create diverse outputs.
What are the Different Types of AI-Generated Content?

AI-generated content spans multiple media types:

Despite their varied outputs, most generative AI systems are built on advanced LLMs like GPT-4 and Google Gemini. These multimodal models process and generate content across multiple formats, with enhanced capabilities evolving over time.

How Generative AI is Used

Generative AI applications span industries:

These tools often combine outputs from various media for complex, multifaceted projects.

AI Content Generators

AI content generators exist across various media. Below are good examples organized by type of generative AI:

Written Content Generators

Image Content Generators

Music Content Generators

Code Content Generators

Other AI Content Generators

These tools showcase how AI-powered content generation is revolutionizing industries, making content creation faster and more accessible. I do hope you will comment below on your favorites, other AI tools not showcased above, or anything else AI-related that is on your mind.

Written by Tectonic's Marketing Operations Director, Shannan Hearne.

From Chatbots to Agentic AI


The transition from LLM-powered chatbots to agentic systems, or agentic AI, can be summed up by the old saying: "Less talk, more action."

Keeping up with advancements in AI can be overwhelming, especially when managing an existing business. The speed and complexity of innovation can make it feel like the first day of school all over again. This insight offers a comprehensive look at AI agents, their components, and key characteristics. The introductory section breaks down the elements that form the term "AI agent," providing a clear definition. After establishing this foundation, we explore the evolution of LLM applications, particularly the shift from traditional chatbots to agentic systems. The goal is to understand why AI agents are becoming increasingly vital in AI development and how they differ from LLM-powered chatbots. By the end of this guide, you will have a deeper understanding of AI agents, their potential applications, and their impact on organizational workflows. For those of you with a technical background who prefer to get hands-on, click here for the best repository for AI developers and builders.

What is an AI Agent?

Components of AI Agents

To understand the term "AI agent," we need to examine its two main components. First, let's consider artificial intelligence, or AI. Artificial Intelligence (AI) refers to non-biological intelligence that mimics human cognition to perform tasks traditionally requiring human intellect. Through machine learning and deep learning techniques, algorithms, especially neural networks, learn patterns from data. AI systems are used for tasks such as detection, classification, and prediction, with content generation becoming a prominent domain due to transformer-based models. These systems can match or exceed human performance in specific scenarios.

The second component is "agent," a term commonly used in both technology and human contexts.
In computer science, an agent refers to a software entity with environmental awareness, able to perceive and act within its surroundings. A computational agent typically has the ability to:

In human contexts, an agent is someone who acts on behalf of another person or organization, making decisions, gathering information, and facilitating interactions. Agents often play intermediary roles in transactions and decision-making.

To define an AI agent, we combine these two perspectives: it is a computational entity with environmental awareness, capable of perceiving inputs, acting with tools, and processing information using foundation models backed by both long-term and short-term memory.

Key Components and Characteristics of AI Agents

From LLMs to AI Agents

Now, let's take a step back and understand how we arrived at the concept of AI agents, particularly by looking at how LLM applications have evolved. The shift from traditional chatbots to LLM-powered applications has been rapid and transformative.

Form Factor Evolution of LLM Applications

Traditional Chatbots to LLM-Powered Chatbots

Traditional chatbots, which existed before generative AI, were simpler and relied on heuristic responses: "If this, then that." They followed predefined rules and decision trees to generate responses. These systems had limited interactivity, with the fallback option of "Speak to a human" for complex scenarios.

LLM-Powered Chatbots

The release of OpenAI's ChatGPT on November 30, 2022, marked the introduction of LLM-powered chatbots, fundamentally changing the game. These chatbots, like ChatGPT, were built on GPT-3.5, a large language model trained on massive datasets. Unlike traditional chatbots, LLM-powered systems can generate human-like responses, offering a much more flexible and intelligent interaction. However, challenges remained.
LLM-powered chatbots struggled with personalization and consistency, often generating plausible but incorrect information, a phenomenon known as "hallucination." This led to efforts to ground LLM responses through techniques like retrieval-augmented generation (RAG).

RAG Chatbots

RAG is a method that combines data retrieval with LLM generation, allowing systems to access real-time or proprietary data, improving accuracy and relevance. This hybrid approach addresses the hallucination problem, ensuring more reliable outputs.

LLM-Powered Chatbots to AI Agents

As LLMs expanded, their abilities grew more sophisticated, incorporating advanced reasoning, multi-step planning, and the use of external tools (function calling). Tool use refers to an LLM's ability to invoke specific functions, enabling it to perform more complex tasks.

Tool-Augmented LLMs and AI Agents

As LLMs became tool-augmented, the emergence of AI agents followed. These agents integrate reasoning, planning, and tool use into an autonomous, goal-driven system that can operate iteratively within a dynamic environment. Unlike traditional chatbot interfaces, AI agents leverage a broader set of tools to interact with various systems and accomplish tasks.

Agentic Systems

Agentic systems, computational architectures that include AI agents, embody these advanced capabilities. They can autonomously interact with systems, make decisions, and adapt to feedback, forming the foundation for more complex AI applications.

Components of an AI Agent

AI agents consist of several key components:

Characteristics of AI Agents

AI agents are defined by the following traits:

Conclusion

AI agents represent a significant leap from traditional chatbots, offering greater autonomy, complexity, and interactivity. However, the term "AI agent" remains fluid, with no universal industry standard. Instead, it exists on a continuum, with varying degrees of autonomy, adaptability, and proactive behavior defining agentic systems.
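The tool-use pattern described above can be sketched as a minimal plan-act-observe step. Everything here is a stand-in: `fake_llm` plays the role of an LLM planner that, in a real system, would be a model call returning a function-calling payload, and the tool registry is invented for illustration.

```python
# Minimal sketch of one tool-augmented agent step (all names invented).
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def fake_llm(goal):
    """Stand-in planner: a real agent would ask an LLM to pick a tool and
    its arguments, typically via a function-calling API."""
    if "weather" in goal:
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"tool": "add", "args": {"a": 2, "b": 3}}

def run_agent(goal):
    plan = fake_llm(goal)               # 1. plan: choose a tool
    tool = TOOLS[plan["tool"]]          # 2. act: look up and invoke it
    observation = tool(**plan["args"])  # 3. observe the result
    return observation                  # a real agent would loop from here
```

A full agentic system wraps this step in a loop, feeding each observation back to the planner until the goal is met, which is what distinguishes an agent from a single-turn chatbot.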
Value and Impact of AI Agents

The key benefits of AI agents lie in their ability to automate manual processes, reduce decision-making burdens, and enhance workflows in enterprise environments. By "agentifying" repetitive tasks, AI agents offer substantial productivity gains and the potential to transform how businesses operate. As AI agents evolve, their applications will only expand, driving new efficiencies and enabling organizations to leverage AI in increasingly sophisticated ways.

Related Posts

Salesforce OEM AppExchange: Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers.
The Salesforce Story: In Marc Benioff's own words, how salesforce.com grew from a start-up in a rented apartment into the world's...
Salesforce Jigsaw: Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for...
Health Cloud Brings Healthcare Transformation: Following swiftly after last week's successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series...

ThoughtSpot AI agent Spotter enables conversational BI


ThoughtSpot Unveils Spotter: A Generative AI-Powered Data Agent

ThoughtSpot, a leading analytics vendor, has launched Spotter, an advanced generative AI-powered agent designed to revolutionize how users interact with data. Spotter enables conversational data exploration, contextual understanding, and autonomous analysis, making it a significant leap forward in the analytics landscape.

Spotter's Role in ThoughtSpot's Evolution

Spotter replaces Sage, ThoughtSpot's earlier generative AI-powered interface, which debuted in March 2023. Despite moving from private to public preview and gaining new capabilities, Sage never reached general availability. Spotter is now generally available for ThoughtSpot Analytics, while its embedded version is in beta testing. Unlike earlier AI tools that focused on question-and-answer interactions, such as Sage and Microsoft's copilots, Spotter takes the concept further by integrating contextual awareness and autonomous decision-making. Spotter doesn't just respond to queries; it suggests follow-up questions, identifies anomalies, and provides proactive insights, functioning more like a virtual analyst than a reactive chatbot.

Key Features of Spotter

Spotter is built to enhance productivity and insight generation through the following capabilities:

Generative AI's Growing Impact on BI

ThoughtSpot has long aimed to make analytics accessible to non-technical users through natural language search. However, previous NLP tools often required users to learn specific vocabularies, limiting widespread adoption. Generative AI bridges this gap. By leveraging extensive vocabularies and LLM technology, tools like Spotter enable users of all skill levels to access and analyze data effortlessly. Spotter stands out with its ability to deliver proactive insights, identify trends, and adapt to user behavior, enhancing the decision-making process.
Expert Perspectives on Spotter

Donald Farmer, founder of TreeHive Strategy, highlighted Spotter's autonomy as a game-changer: "Spotter is a big move forward for ThoughtSpot and AI. The natural language interface is more conversational, but the key advantage is its autonomous analysis, which identifies trends and insights without users needing to ask."

Mike Leone, an analyst at TechTarget's Enterprise Strategy Group, emphasized Spotter's ability to adapt to users: "Spotter's ability to deliver personalized and contextually relevant responses is critical for organizations pursuing generative AI initiatives. This goes a long way in delivering unique value across a business."

Farmer also pointed to Spotter's embedded capabilities, noting its growing appeal as an embedded analytics solution integrated with productivity tools like Salesforce and ServiceNow.

Competitive Positioning

Spotter aligns ThoughtSpot with other vendors embracing agentic AI in analytics. Google recently introduced Conversational Analytics in Looker, and Salesforce's Tableau platform now includes Tableau Agent. ThoughtSpot's approach builds on its core strength in search-based analytics while expanding into generative AI-driven capabilities. Leone observed: "ThoughtSpot is right in line with the market in delivering an agentic experience and is laying the groundwork for broader AI functionality over time."

A Step Toward the Future of Analytics

With Spotter, ThoughtSpot is redefining the role of AI in business intelligence. The tool combines conversational ease, proactive insights, and seamless integration, empowering users to make data-driven decisions more efficiently. As generative AI continues to evolve, tools like Spotter demonstrate how businesses can unlock the full potential of their data.


Why Build a General-Purpose Agent?

A general-purpose LLM agent serves as an excellent starting point for prototyping use cases and establishing the foundation for a custom agentic architecture tailored to your needs.

What is an LLM Agent?

An LLM (Large Language Model) agent is a program whose execution logic is governed by the underlying model. Unlike approaches such as few-shot prompting or fixed workflows, LLM agents adapt dynamically. They can determine which tools to use (e.g., web search or code execution), how to use them, and iterate based on results. This adaptability enables handling diverse tasks with minimal configuration.

Agentic Architectures Explained

Agentic systems range from the reliability of fixed workflows to the flexibility of autonomous agents. For instance:

Your architecture choice will depend on the desired balance between reliability and flexibility for your use case.

Building a General-Purpose LLM Agent

Step 1: Select the Right LLM

Choosing the right model is critical for performance. Evaluate based on:

Model Recommendations (as of now):

For simpler use cases, smaller models running locally can also be effective, albeit with limited functionality.

Step 2: Define the Agent's Control Logic

The system prompt differentiates an LLM agent from a standalone model. This prompt contains the rules, instructions, and structures that guide the agent's behavior.

Common Agentic Patterns:

Starting with ReAct or Plan-then-Execute patterns is recommended for general-purpose agents.

Step 3: Define the Agent's Core Instructions

To optimize the agent's behavior, clearly define its features and constraints in the system prompt.

Example Instructions:

Step 4: Define and Optimize Core Tools

Tools expand an agent's capabilities. Common tools include:

For each tool, define:

Example: implementing an Arxiv API tool for scientific queries.

Step 5: Memory Handling Strategy

Since LLMs have limited memory (a finite context window), a strategy is necessary to manage past interactions.
Common approaches include:

For personalization, long-term memory can store user preferences or critical information.

Step 6: Parse the Agent's Output

To make raw LLM outputs actionable, implement a parser to convert outputs into a structured format like JSON. Structured outputs simplify execution and ensure consistency.

Step 7: Orchestrate the Agent's Workflow

Define orchestration logic to handle the agent's next steps after receiving an output.

Example Orchestration Code:

```python
def orchestrator(llm_agent, llm_output, tools, user_query):
    while True:
        action = llm_output.get("action")
        if action == "tool_call":
            tool_name = llm_output.get("tool_name")
            tool_params = llm_output.get("tool_params", {})
            if tool_name in tools:
                try:
                    tool_result = tools[tool_name](**tool_params)
                    llm_output = llm_agent({"tool_output": tool_result})
                except Exception as e:
                    return f"Error executing tool '{tool_name}': {str(e)}"
            else:
                return f"Error: Tool '{tool_name}' not found."
        elif action == "return_answer":
            return llm_output.get("answer", "No answer provided.")
        else:
            return "Error: Unrecognized action type from LLM output."
```

This orchestration ensures seamless interaction between tools, memory, and user queries.

When to Consider Multi-Agent Systems

A single-agent setup works well for prototyping but may hit limits with complex workflows or extensive toolsets. Multi-agent architectures can:

Starting with a single agent helps refine workflows, identify bottlenecks, and scale effectively. By following these steps, you'll have a versatile system capable of handling diverse use cases, from competitive analysis to automating workflows.
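The parser from Step 6 can be sketched as follows. The `action`/`tool_name`/`answer` field names mirror the orchestration example, but they are an illustrative convention, not a fixed standard; real deployments often enforce a schema with a validation library instead.

```python
import json

def parse_agent_output(raw_output):
    # Extract the first JSON object from the model's raw text and check
    # that it carries the "action" field the orchestrator dispatches on.
    start = raw_output.find("{")
    end = raw_output.rfind("}")
    if start == -1 or end == -1:
        return {"action": "error", "detail": "no JSON object found"}
    try:
        parsed = json.loads(raw_output[start:end + 1])
    except json.JSONDecodeError as e:
        return {"action": "error", "detail": f"invalid JSON: {e}"}
    if "action" not in parsed:
        return {"action": "error", "detail": "missing 'action' field"}
    return parsed
```

Because models often wrap their JSON in free-form reasoning text, slicing from the first `{` to the last `}` before parsing is a pragmatic (if blunt) way to tolerate that, and every failure mode degrades to a structured error the orchestrator can handle uniformly.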

Python-Based Reasoning Engine


Introducing a Python-Based Reasoning Engine for Deterministic AI

In the age of large language models (LLMs), there is a growing need for deterministic systems that enforce rules and constraints while reasoning about information. We've developed a Python-based reasoning and validation framework that bridges the gap between traditional rule-based logic and modern AI capabilities, inspired by frameworks like Pydantic. This approach is designed for developers and non-technical experts alike, making it easy to build complex rule engines that translate natural language instructions into enforceable code. Our fine-tuned model automates the creation of rules while ensuring human oversight for quality and conflict detection. The result? Faster implementation of rule engines, reduced developer overhead, and flexible extensibility across domains.

The Framework at a Glance

Our system consists of five core components:

To analogize, this framework operates like a game of chess:

Our framework supports two primary use cases:

Key Features and Benefits

Case Studies

Validation Engine: Ensuring Compliance

A mining company needed to validate employee qualifications based on age, region, and role.

Example Data Structure:

```json
{
  "employees": [
    {
      "name": "Sarah",
      "age": 25,
      "role": "Manager",
      "documents": ["safe_handling_at_work", "heavy_lifting"]
    },
    {
      "name": "John",
      "age": 17,
      "role": "Laborer",
      "documents": ["heavy_lifting"]
    }
  ]
}
```

Rules:

```json
{
  "rules": [
    { "type": "min_age", "parameters": { "min_age": 18 } },
    { "type": "dozer_operator", "parameters": { "document_type": "dozer_qualification" } }
  ]
}
```

Outcome: The system flagged violations, such as employees under 18 or missing required qualifications, ensuring compliance with organizational rules.

Reasoning Engine: Solving the River Crossing Puzzle

The classic river crossing puzzle demonstrates the engine's reasoning capabilities.
Problem Setup: A farmer must ferry a goat, a wolf, and a cabbage across a river, adhering to specific constraints (e.g., the goat cannot be left alone with the cabbage).

Steps:

Output: The engine generated a solution in 0.0003 seconds, showcasing its efficiency in navigating complex logic.

Advanced Features: Dynamic Rule Expansion

The system supports real-time rule adjustments. For instance, adding a "wolf cannot be left with a chicken" constraint introduces a conflict. By extending the rules (e.g., allowing the farmer to carry two items), the engine dynamically resolves previously unsolvable scenarios.

Sample Code Snippet:

```python
class CarryingCapacityRule(Rule):
    def evaluate(self, state):
        items_moved = sum(
            1 for item in ['wolf', 'goat', 'cabbage', 'chicken']
            if getattr(state, item) == state.farmer
        )
        return items_moved <= 2

    def get_description(self):
        return "Farmer can carry up to two items at a time"
```

Result: The adjusted engine solved the puzzle in three moves, down from seven, while maintaining rule integrity.

Collaborative UI for Rule Creation

Our user interface empowers domain experts to define rules without writing code. Developers validate these rules, which are then seamlessly integrated into the system.

Visual Workflow:
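To make the mining-company case study concrete, here is a minimal sketch of how a validation engine could apply the `min_age` and document rules to the employee records shown above. The class and method names are illustrative assumptions about the framework's internals, not its published API.

```python
class MinAgeRule:
    # Mirrors the {"type": "min_age", "parameters": {"min_age": 18}} rule.
    def __init__(self, min_age):
        self.min_age = min_age

    def evaluate(self, employee):
        # Return a violation message, or None if the employee passes.
        if employee["age"] < self.min_age:
            return f"{employee['name']} is under the minimum age of {self.min_age}"
        return None

class RequiredDocumentRule:
    # Requires a given qualification document for a given role.
    def __init__(self, role, document_type):
        self.role = role
        self.document_type = document_type

    def evaluate(self, employee):
        if employee["role"] == self.role and self.document_type not in employee["documents"]:
            return f"{employee['name']} is missing document '{self.document_type}'"
        return None

def validate(employees, rules):
    # Collect every violation across all employees and rules.
    return [msg for emp in employees for rule in rules
            if (msg := rule.evaluate(emp)) is not None]
```

Running this over the example data flags John twice (under 18, and missing a qualification), which matches the outcome described in the case study.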


Deploying Large Language Models in Healthcare

Study Identifies Cost-Effective Strategies for Deploying Large Language Models in Healthcare

Efficient deployment of large language models (LLMs) at scale in healthcare can streamline clinical workflows and reduce costs by up to 17 times without compromising reliability, according to a study published in NPJ Digital Medicine by researchers at the Icahn School of Medicine at Mount Sinai. The research highlights the potential of LLMs to enhance clinical operations while addressing the financial and computational hurdles healthcare organizations face in scaling these technologies.

To investigate solutions, the team evaluated 10 LLMs of varying sizes and capacities using real-world patient data. The models were tested on chained queries and increasingly complex clinical notes, with outputs assessed for accuracy, formatting quality, and adherence to clinical instructions.

"Our study was driven by the need to identify practical ways to cut costs while maintaining performance, enabling health systems to confidently adopt LLMs at scale," said Dr. Eyal Klang, director of the Generative AI Research Program at Icahn Mount Sinai. "We aimed to stress-test these models, evaluating their ability to manage multiple tasks simultaneously and identifying strategies to balance performance and affordability."

The team conducted over 300,000 experiments, finding that high-capacity models like Meta's Llama-3-70B and GPT-4 Turbo 128k performed best, maintaining high accuracy and low failure rates. However, performance began to degrade as task volume and complexity increased, particularly beyond 50 tasks involving large prompts. The study further revealed that grouping tasks, such as identifying patients for preventive screenings, analyzing medication safety, and matching patients for clinical trials, enabled LLMs to handle up to 50 simultaneous tasks without significant accuracy loss.
This strategy also led to dramatic cost savings, with API costs reduced by up to 17-fold, offering a pathway for health systems to save millions annually.

"Understanding where these models reach their cognitive limits is critical for ensuring reliability and operational stability," said Dr. Girish N. Nadkarni, co-senior author and director of The Charles Bronfman Institute of Personalized Medicine. "Our findings pave the way for the integration of generative AI in hospitals while accounting for real-world constraints."

Beyond cost efficiency, the study underscores the potential of LLMs to automate key tasks, conserve resources, and free up healthcare providers to focus more on patient care. "This research highlights how AI can transform healthcare operations. Grouping tasks not only cuts costs but also optimizes resources that can be redirected toward improving patient outcomes," said Dr. David L. Reich, co-author and chief clinical officer of the Mount Sinai Health System.

The research team plans to explore how LLMs perform in live clinical environments and assess emerging models to determine whether advancements in AI technology can expand their cognitive thresholds.
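The task-grouping strategy behind those savings can be illustrated schematically. The study does not publish its exact prompt format, so the batch size, note text, and prompt layout below are assumptions; the point is simply that one API call carrying many questions about the same note replaces many single-question calls.

```python
def group_tasks(tasks, batch_size=50):
    # Split a task list into batches no larger than the size the study
    # found models could handle without significant accuracy loss.
    return [tasks[i:i + batch_size] for i in range(0, len(tasks), batch_size)]

def batched_prompt(clinical_note, task_batch):
    # One prompt answers a whole batch of tasks about the same note,
    # instead of paying for the note's tokens once per task.
    numbered = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(task_batch))
    return (f"Clinical note:\n{clinical_note}\n\n"
            f"Answer each of the following tasks:\n{numbered}")
```

With 120 tasks and a batch size of 50, this yields three API calls instead of 120, which is where reductions on the order the study reports come from.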

AI Agents Set to Break Through in 2025


2025: The Year AI Agents Transform Work and Life

Despite years of hype around artificial intelligence, its true disruptive impact has so far been limited. However, industry experts believe that is about to change in 2025, as autonomous AI agents prepare to enter and reshape nearly every facet of our lives.

Since OpenAI's ChatGPT took the world by storm in late 2022, billions of dollars have been funneled into the AI sector. Big tech and startups alike are racing to harness the transformative potential of the technology. Yet, while millions now interact with AI chatbots daily, turning them into tools that deliver tangible business value has proven challenging. A recent study by Boston Consulting Group revealed that only 26% of companies experimenting with AI have progressed beyond proof of concept to derive measurable value. This lag reflects the limitations of current AI tools, which serve primarily as copilots, capable of assisting but requiring constant oversight and remaining prone to errors.

AI Agents Set to Break Through in 2025

The status quo, however, is poised for a radical shift. Autonomous AI agents, capable of independently analyzing information, making decisions, and taking action, are expected to emerge as the industry's next big breakthrough.

"For the first time, technology isn't just offering tools for humans to do work," Salesforce CEO Marc Benioff wrote in Time. "It's providing intelligent, scalable digital labor that performs tasks autonomously. Instead of waiting for human input, agents can analyze information, make decisions, and adapt as they go."

At their core, AI agents leverage the same large language models (LLMs) that power tools like ChatGPT. But these agents go further, acting as reasoning engines that develop step-by-step strategies to execute tasks. Armed with access to external data sources like customer records or financial databases, and equipped with software tools, agents can achieve goals independently.
While current LLMs still face reasoning limitations, advancements are on the horizon. New models like OpenAI's "o1" and DeepSeek's "R1" are specialized for reasoning, sparking hope that 2025 will see agents grow far more capable.

Big Tech and Startups Betting Big

Major players are already gearing up for this new era. Startups are also eager to carve out their share of the market. According to PitchBook, funding deals for agent-focused ventures surged by over 80% in 2024, with the median deal value increasing nearly 50%.

Challenges to Overcome

Despite the enthusiasm, significant hurdles remain.

2025: A Turning Point

Despite these challenges, many experts believe 2025 will mark the mainstream adoption of AI agents.

A New World of Work

No matter the pace, it's clear that AI agents will dominate the industry's focus in 2025. If the technology delivers on its promise, the workplace could undergo a profound transformation, enabling entirely new ways of working and automating tasks that once required human intervention. The question isn't if agents will redefine the way we work; it's how fast. By the end of 2025, the shift could be undeniable.

Transforming the Role of Data Science Teams


GenAI: Transforming the Role of Data Science Teams

Challenges, Opportunities, and the Evolving Responsibilities of Data Scientists

Generative AI (GenAI) is revolutionizing the AI landscape, offering faster development cycles and reduced technical overhead, and enabling groundbreaking use cases that once seemed unattainable. However, it also introduces new challenges, including the risk of hallucinations and reliance on third-party APIs. For Data Scientists and Machine Learning (ML) teams, this shift directly impacts their roles. GenAI-driven projects, often powered by external providers like OpenAI, Anthropic, or Meta, blur traditional lines. AI solutions are increasingly accessible to non-technical teams, but this accessibility raises fundamental questions about the role and responsibilities of data science teams in ensuring effective, ethical, and future-proof AI systems. Let's explore how this evolution is reshaping the field.

Expanding Possibilities Without Losing Focus

While GenAI unlocks opportunities to solve a broader range of challenges, not every problem warrants an AI solution. Data Scientists remain vital in assessing when and where AI is appropriate, selecting the right approaches (GenAI, traditional ML, or hybrid solutions), and designing reliable systems. Although GenAI broadens the toolkit, two factors shape its application:

For example, incorporating features that enable user oversight of AI outputs may prove more strategic than attempting full automation with extensive fine-tuning. Differentiation will come not from simply using LLMs, which are widely accessible, but from the unique value and functionality they enable.

Traditional ML Is Far from Dead: It's Evolving with GenAI

While GenAI is transformative, traditional ML continues to play a critical role. Many use cases, especially those unrelated to text or images, are best addressed with ML.
GenAI often complements traditional ML, enabling faster prototyping, enhanced experimentation, and hybrid systems that blend the strengths of both approaches. For instance, traditional ML workflows, which require extensive data preparation, training, and maintenance, contrast with GenAI's simplified process: prompt engineering, offline evaluation, and API integration. This allows rapid proof of concept for new ideas. Once an idea is proven, teams can refine the solution using traditional ML to optimize costs or latency, or transition to Small Language Models (SLMs) for greater control and performance.

Hybrid systems are increasingly common. For example, DoorDash combines LLMs with ML models for product classification: LLMs handle cases the ML model cannot classify confidently, and the ML system is retrained with the new insights, a powerful feedback loop.

GenAI Solves New Problems, But Still Needs Expertise

The AI landscape is shifting from bespoke in-house models to fewer, large multi-task models provided by external vendors. While this simplifies some aspects of AI implementation, it requires teams to remain vigilant about GenAI's probabilistic nature and inherent risks. Key challenges unique to GenAI include:

Data Scientists must ensure robust evaluations, including statistical and model-based metrics, before deployment. Monitoring tools like Datadog now offer LLM-specific observability, enabling teams to track system performance in real-world environments. Teams must also address ethical concerns, applying frameworks like ComplAI to benchmark models and incorporating guardrails to align outputs with organizational and societal values.

Building AI Literacy Across Organizations

AI literacy is becoming a critical competency for organizations. Beyond technical implementation, competitive advantage now depends on how effectively the entire workforce understands and leverages AI.
Data Scientists are uniquely positioned to champion this literacy by leading initiatives such as internal training, workshops, and hackathons. These efforts can:

The New Role of Data Scientists: A Strategic Pivot

The role of Data Scientists is not diminishing but evolving. Their expertise remains essential to ensure AI solutions are reliable, ethical, and impactful. Key responsibilities now include:

By adapting to this new landscape, Data Scientists will continue to play a pivotal role in guiding organizations to harness AI effectively and responsibly. GenAI is not replacing them; it's expanding their impact.
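The hybrid LLM/ML feedback loop described earlier, in which an LLM backstops an ML classifier on low-confidence cases (as in the DoorDash example), can be sketched as follows. The function names, confidence threshold, and retraining queue are illustrative assumptions, not any vendor's actual pipeline.

```python
def classify(product, ml_model, llm_classify, training_queue, threshold=0.8):
    # Route through the cheap ML model first.
    label, confidence = ml_model(product)
    if confidence >= threshold:
        return label, "ml"
    # Low confidence: defer to the LLM, and queue its answer as a new
    # labeled example so the ML model can be retrained on it later
    # (the feedback loop).
    label = llm_classify(product)
    training_queue.append((product, label))
    return label, "llm"
```

The design choice is cost-driven: the ML model handles the confident bulk of traffic cheaply, the LLM is paid for only on hard cases, and each LLM answer shrinks the set of hard cases over time.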
