Large Language Model Archives - gettectonic.com


AI Data Cloud and Integration

It is Time to Implement Data Cloud

With Salesforce Data Cloud you can solve three common data problems. With incomplete data, your 360-degree customer view is limited and often leads to multiple sales reps working on the same lead. Slow access to the right leads at the right time means missed opportunities and delayed closings. And if your team cannot trust the data because of silos and inaccuracies, they avoid using it. It is time to implement Data Cloud.

Unified: Connect and harmonize data from all your Salesforce applications and external data systems, then activate your data with insights and automation across every customer touchpoint.

Powerful: With Data Cloud and Agentforce, you can create the most intelligent agents possible, giving them access to the exact data they need to deliver any employee or customer experience.

Secure: Securely connect your data to any large language model (LLM) without sacrificing data governance and security, thanks to the Einstein 1 Trust Layer.

Open: Data Cloud is fully open and extensible – bring your own data lake or model to reduce complexity and leverage what’s already been built. Plus, share out to popular destinations like Snowflake, Google Ads, or Meta Ads.

Salesforce Data Cloud is the only hyperscale data engine native to Salesforce. It is more than a CDP and goes beyond a data lake – you can do more with Data Cloud. Your Agentforce journey begins with Data Cloud: agents need the right data to work, and Data Cloud lets you use any data in your organization with Agentforce in a safe and secure manner, again thanks to the Einstein 1 Trust Layer.

Datablazers are Salesforce community members who are passionate about driving business growth with data and AI powered by Data Cloud. Sign up to join a growing group of members to learn, connect, and grow with Data Cloud. Join today. The path to AI success begins and ends with quality data.
Business, IT, and analytics decision makers with high data maturity were twice as likely as low-maturity leaders to have the quality data needed to use AI effectively, according to our State of Data and Analytics report. “What’s data maturity?” you might wonder. Hang tight – we explain in chapter 1 of this guide. Your data strategy isn’t just important; it is critical to putting you at the head of the market with new AI technology by your side. That’s why this Salesforce guide is based on recent industry findings and provides best practices to help your company get the most from your data. Tectonic will be sharing a focus on the 360-degree customer view with Salesforce Data Cloud in our insights. Stay tuned.

Related Posts: Salesforce OEM AppExchange – expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. The Salesforce Story – in Marc Benioff’s own words, how salesforce.com grew from a start-up in a rented apartment into the world’s… Salesforce Jigsaw – Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database. Service Cloud with AI-Driven Intelligence – Salesforce enhances Service Cloud with an AI-driven intelligence engine as data science and analytics become standard features in enterprise applications.


Generative AI in Marketing

Generative Artificial Intelligence (GenAI) continues to reshape industries, providing product managers (PMs) across domains with opportunities to embrace AI-focused innovation and enhance their technical expertise. Over the past few years, GenAI has gained immense popularity: AI-enabled products have proliferated across industries like a rapidly expanding field of dandelions, fueled by abundant venture capital investment. From a product management perspective, AI offers numerous ways to improve productivity and deepen strategic domain knowledge. However, the fundamentals of product management remain paramount. This discussion underscores why foundational PM practices continue to be indispensable, even in the evolving landscape of GenAI, and how these core skills can elevate PMs navigating this dynamic field.

Why PM Fundamentals Matter, AI or Not

Three core reasons highlight the enduring importance of PM fundamentals and actionable methods for excelling in the rapidly expanding GenAI space.

1. Product Development is Inherently Complex

While novice PMs might assume product development is straightforward, the reality reveals a web of interconnected and dynamic elements. These may include team dependencies, sales and marketing coordination, internal tooling managed by global teams, data telemetry updates, and countless other tasks influencing outcomes. A skilled product manager identifies and orchestrates these moving pieces, ensuring product growth and delivery. This ability is often more impactful than deep technical AI expertise (though having both is advantageous). The complexity of modern product development is further amplified by the rapid pace of technological change. Incorporating AI tools such as GitHub Copilot can accelerate workflows but demands a strong product culture to ensure smooth integration.
PMs must focus on fundamentals like understanding user needs, defining clear problems, and delivering value to avoid chasing fleeting AI trends instead of solving customer problems. While AI can automate certain tasks, it is limited by costs, specificity, and nuance. A PM with strong foundational knowledge can effectively manage these limitations and identify areas for automation or improvement.

2. Interpersonal Skills Are Irreplaceable

As AI product development grows more complex, interpersonal skills become increasingly critical. PMs work with diverse teams, including developers, designers, data scientists, marketing professionals, and executives. While AI can assist in specific tasks, strong human connections are essential for success. Stakeholder management remains a cornerstone of effective product management: PMs must build trust and tailor their communication to various audiences – a skill AI cannot replicate.

3. Understanding Vertical Use Cases is Essential

Vertical use cases focus on niche, specific tasks within a broader context. In the GenAI ecosystem, this specificity is exemplified by AI agents designed for narrow applications. For instance, Microsoft Copilot includes a summarization agent that excels at analyzing Word documents. The vertical AI market has experienced explosive growth, valued at .1 billion in 2024 and projected to reach .1 billion by 2030. PMs are crucial in identifying and validating these vertical use cases. For example, the team at Planview developed the AI Assistant “Planview Copilot” by hypothesizing specific use cases and iteratively validating them through customer feedback and data analysis. This approach required continuous application of fundamental PM practices, including discovery, prioritization, and feedback internalization. PMs must be adept at discovering vertical use cases and crafting strategies to deliver meaningful solutions.
Conclusion

Foundational product management practices remain critical, even as AI transforms industries. These core skills ensure that PMs can navigate the challenges of GenAI, enabling organizations to accelerate customer value in work efficiency, time savings, and quality of life. By maintaining strong fundamentals, PMs can lead their teams to thrive in an AI-driven future.

AI Agents on Madison Avenue: The New Frontier in Advertising

AI agents, hailed as the next big advancement in artificial intelligence, are making their presence felt in the world of advertising. Startups like Adaly and Anthrologic are introducing personalized AI tools designed to boost productivity for advertisers, offering automation for tasks that are often time-consuming and tedious. Retail brands such as Anthropologie are already adopting this technology to streamline their operations.

How AI Agents Work

In simple terms, AI agents operate like advanced AI chatbots. They can handle tasks such as generating reports, optimizing media budgets, or analyzing data. According to Tyler Pietz, CEO and founder of Anthrologic, “They can basically do anything that a human can do on a computer.” Big players like Salesforce, Microsoft, Anthropic, Google, and Perplexity are also championing AI agents. Perplexity’s CEO, Aravind Srinivas, recently suggested that businesses will soon compete for the attention of AI agents rather than human customers. “Brands need to get comfortable doing this,” he remarked to The Economic Times.

AI Agents Tailored for Advertisers

Both Adaly and Anthrologic have developed AI software specifically trained for advertising tasks. Built on large language models like ChatGPT, these platforms respond to voice and text prompts. Advertisers can train these AI systems on internal data to automate tasks like identifying data discrepancies or analyzing economic impacts on regional ad budgets.
Pietz noted that an AI agent can be set up in about a month and take on grunt work like scouring spreadsheets for specific figures. “Marketers still log into 15 different platforms daily,” said Kyle Csik, co-founder of Adaly. “When brands in-house talent, they often hire people to manage systems rather than think strategically. AI agents can take on repetitive tasks, leaving room for higher-level work.” Both Pietz and Csik bring agency experience to their ventures, having crossed paths at MediaMonks.

Industry Response: Collaboration, Not Replacement

The targets for these tools differ: Adaly focuses on independent agencies and brands, while Anthrologic is honing in on larger brands. Meanwhile, major holding companies like Omnicom and Dentsu are building their own AI agents. Omnicom, on the verge of merging with IPG, has developed internal AI solutions, while Dentsu has partnered with Microsoft to create tools like Dentsu DALL-E and Dentsu-GPT. Havas is also developing its own AI agent, according to Chief Activation Officer Mike Bregman. Bregman believes AI tools won’t immediately threaten agency jobs. “Agencies have a lot of specialization that machines can’t replace today,” he said. “They can streamline processes, but


Gen AI Trust Layers

Addressing the Generative AI Production Gap with Trust Layers

Despite the growing excitement around generative AI, only a small percentage of projects have successfully moved into production. A key barrier is the persistent concern over large language models (LLMs) generating hallucinations – responses that are inconsistent or completely disconnected from reality. To address these issues, organizations are increasingly adopting AI trust layers to enhance reliability and mitigate risk.

Understanding the Challenge

Generative AI models, like LLMs, are powerful tools trained on vast amounts of unstructured data, enabling them to answer questions and complete tasks based on text, documents, recordings, images, and videos. This capability has revolutionized the creation of chatbots, co-pilots, and even semi-autonomous agents. However, these models are inherently non-deterministic, meaning they don’t always produce consistent outputs. This lack of predictability leads to the infamous phenomenon of hallucination – what the National Institute of Standards and Technology (NIST) terms “confabulation.” While hallucination is a byproduct of how generative models function, its risks in mission-critical applications cannot be ignored.

Implementing AI Trust Layers

To address these challenges, organizations are turning to AI trust layers – frameworks designed to monitor and control generative AI behavior. These trust layers vary in implementation.

Galileo: Building AI Trust from the Ground Up

Galileo, founded in 2021 by Yash Sheth, Atindriyo Sanyal, and Vikram Chatterji, has emerged as a leader in developing AI trust solutions. Drawing on his decade of experience at Google building LLMs for speech recognition, Sheth recognized early on that non-deterministic AI systems needed robust trust frameworks to achieve widespread adoption in enterprise settings.

The Need for Trust in Mission-Critical AI

Sheth explained: “Generative AI doesn’t give you the same answer every time.
To mitigate risk in mission-critical tasks, you need a trust framework to ensure these models behave as expected in production.” Enterprises, which prioritize privacy, security, and reputation, require this level of assurance before deploying LLMs at scale.

Galileo’s Approach to Trust Layers

Galileo’s AI trust layer is built on its proprietary foundation model, which evaluates the behavior of target LLMs. This approach is bolstered by metrics and real-time guardrails to block undesirable outcomes, such as hallucinations, data leaks, or harmful outputs.

Key Products in Galileo’s Suite

Sheth described the underlying technology: “Our evaluation foundation models are dependable, reliable, and scalable. They run continuously in production, ensuring bad outcomes are blocked in real time.” By combining these components, Galileo provides enterprises with a trust layer that gives them confidence in their generative AI applications, mirroring the reliability of traditional software systems.

From Research to Real-World Impact

Unlike vendors who quickly adapted traditional machine learning frameworks for generative AI, Galileo spent two years conducting research and developing its Generative AI Studio, launched in August 2023. This thorough approach has started to pay off.

A Crucial Moment for AI Trust Layers

As enterprises prepare to move generative AI experiments into production, trust layers are becoming essential. These frameworks address lingering concerns about the unpredictable nature of LLMs, allowing organizations to scale AI while minimizing risk. Sheth emphasized the stakes: “When mission-critical software starts becoming infused with AI, trust layers will define whether we progress or regress to the stone ages of software. That’s what’s holding back proof-of-concepts from reaching production.” With Galileo’s innovative approach, enterprises now have a path to unlock the full potential of generative AI – responsibly, securely, and at scale.
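Conceptually, a trust layer sits between the model and the user, scoring each response against guardrail checks before it is released. The sketch below is a minimal illustration of that pattern; the function and the keyword-based guardrails are hypothetical stand-ins for the evaluation models a product like Galileo’s would actually run.

```python
def apply_trust_layer(response, checks):
    """Run a model response through a list of guardrail checks.

    Each check is a (name, predicate) pair whose predicate returns True
    when the response violates the rule. Purely illustrative: a real
    trust layer would use evaluation models, not keyword predicates.
    """
    violations = [name for name, is_violation in checks if is_violation(response)]
    return (len(violations) == 0, violations)

# Hypothetical guardrails: block an obvious PII leak and empty answers.
GUARDRAILS = [
    ("pii_leak", lambda r: "ssn:" in r.lower()),
    ("empty_response", lambda r: not r.strip()),
]
```

In this shape, a blocked response never reaches the user; the caller can retry the model, fall back to a canned answer, or escalate to a human.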


Reward-Guided Speculative Decoding

Salesforce AI Research Unveils Reward-Guided Speculative Decoding (RSD): A Breakthrough in Large Language Model (LLM) Inference Efficiency

Addressing the Computational Challenges of LLMs

The rapid scaling of large language models (LLMs) has led to remarkable advancements in natural language understanding and reasoning. However, inference – the process of generating responses one token at a time – remains a major computational bottleneck. As LLMs grow in size and complexity, latency and energy consumption increase, posing challenges for real-world applications that demand cost efficiency, speed, and scalability. Traditional decoding methods, such as greedy and beam search, require repeated evaluations of large models, leading to significant computational overhead. Even parallel decoding techniques struggle to balance efficiency with output quality. These challenges have driven research into hybrid approaches that combine lightweight models with more powerful ones, optimizing speed without sacrificing performance.

Introducing Reward-Guided Speculative Decoding (RSD)

Salesforce AI Research introduces Reward-Guided Speculative Decoding (RSD), a novel framework designed to enhance LLM inference efficiency. RSD employs a dual-model strategy: a lightweight draft model proposes candidate tokens, and a more powerful target model verifies or corrects them, with a process reward model (PRM) scoring the draft’s outputs. Unlike traditional speculative decoding, which enforces strict token matching between draft and target models, RSD introduces a controlled bias that prioritizes high-reward outputs – tokens deemed more accurate or contextually relevant. This strategic bias significantly reduces unnecessary computations. RSD’s mathematically derived threshold mechanism dictates when the target model should intervene. By dynamically blending outputs from both models based on a reward function, RSD accelerates inference while maintaining or even enhancing response quality. This innovation addresses the inefficiencies inherent in sequential token generation for LLMs.
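The acceptance logic can be sketched as follows. This is a minimal, illustrative rendering of the idea described above – a cheap draft model proposes each token, a reward function scores it, and the expensive target model is consulted only when the reward falls below a threshold. The callable names, signatures, and the fixed threshold are assumptions for illustration, not the paper’s actual interface.

```python
def rsd_generate(draft_model, target_model, reward_fn, prompt,
                 max_tokens=16, threshold=0.7):
    """Sketch of the reward-guided acceptance rule in RSD.

    draft_model and target_model map a token list to a next token;
    reward_fn scores a candidate token in context. All names here are
    illustrative stand-ins, not a real decoding API.
    """
    tokens = list(prompt)
    for _ in range(max_tokens):
        candidate = draft_model(tokens)              # cheap draft proposal
        if reward_fn(tokens, candidate) >= threshold:
            tokens.append(candidate)                 # high reward: keep the draft token
        else:
            tokens.append(target_model(tokens))      # low reward: defer to the target model
    return tokens
```

The speedup comes from how often the first branch is taken: every accepted draft token is one fewer forward pass through the large target model.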
Technical Insights and Benefits of RSD

RSD integrates two models in a sequential, cooperative manner. The mechanism is guided by a binary step weighting function, ensuring that only high-quality tokens bypass the target model, significantly reducing computational demands. The theoretical foundation of RSD, including the probabilistic mixture distribution and adaptive acceptance criteria, provides a robust framework for real-world deployment across diverse reasoning tasks.

Empirical Results: Superior Performance Across Benchmarks

Experiments on challenging datasets – such as GSM8K, MATH500, OlympiadBench, and GPQA – demonstrate RSD’s effectiveness. Notably, on the MATH500 benchmark, RSD achieved 88.0% accuracy using a 72B target model and a 7B PRM, outperforming the target model’s standalone accuracy of 85.6% while reducing FLOPs by nearly 4.4×. These results highlight RSD’s potential to surpass traditional methods, including speculative decoding (SD), beam search, and Best-of-N strategies, in both speed and accuracy.

A Paradigm Shift in LLM Inference

Reward-Guided Speculative Decoding (RSD) represents a significant advancement in LLM inference. By intelligently combining a draft model with a powerful target model and incorporating a reward-based acceptance criterion, RSD effectively mitigates computational costs without compromising quality. This biased acceleration approach strategically bypasses expensive computations for high-reward outputs, ensuring an efficient and scalable inference process. With empirical results showcasing up to 4.4× faster performance and superior accuracy, RSD sets a new benchmark for hybrid decoding frameworks, paving the way for broader adoption in real-time AI applications.


Decision Domain Management

Roger’s first week in the office felt wilder than an eight-second ride on a raging rodeo bull. Armed with top-notch academic achievements, he hoped to breeze through operational routines and impress his new managers. What he didn’t expect was to land in a whirlwind of half-documented processes, half-baked ideas, and near-constant firefighting. While the organization had detailed SOPs for simple, routine tasks – approving invoices, updating customer records, and shipping standard orders – Roger quickly realized that behind the structured facade there was a deeper level of uncertainty. Every day, he heard colleagues discuss “strategic pivots” or “risky product bets.” There were whispers about AI-based initiatives that promised to automate entire workflows. Yet when the conversation shifted to major decisions – like selecting the right AI use cases – leaders often seemed to rely more on intuition than any structured methodology. One afternoon, Roger was invited to a cross-functional meeting about the company’s AI roadmap. Expecting an opportunity to showcase his knowledge, he instead found himself in a room filled with brilliant minds pulling in different directions. Some argued that AI should focus on automating repetitive tasks aligned with existing SOPs. Others insisted that AI’s real value lay in predictive modeling – helping forecast new market opportunities. The debate went in circles, with no consensus on where or how to allocate AI resources. After an hour of heated discussion, the group dispersed, each manager still convinced of the merit of their own perspective but no closer to a resolution. That evening, as Roger stood near the coffee machine, he muttered to himself, “We have SOPs for simple tasks, but nothing for big decisions. How do we even begin selecting which AI models or agents to develop first?” His frustration led him to a conversation with a coworker who had been with the company for years. “We’re missing something fundamental here,” Roger said.
“We’re rushing to onboard AI agents that can mimic our SOPs – like some large language model trained to follow rote instructions – but that’s not where the real value lies. We don’t even have a framework for weighing one AI initiative against another. Everything feels like guesswork.” His coworker shrugged. “That’s just how it’s always been. The big decisions happen behind closed doors, mostly based on experience and intuition. If you’re waiting for a blueprint, you might be waiting a long time.” That was Roger’s lightbulb moment. Despite all his academic training, he realized the organization lacked a structured approach to high-level decision-making. Sure, they had polished SOPs for operational tasks, but when it came to determining which AI initiatives to prioritize, there were no formal criteria, classifications, or scoring mechanisms in place. Frustrated but determined, Roger decided he needed answers. Two days later, he approached a coworker known for their deep understanding of business strategy and technology. After a quick greeting, he outlined his concerns – the disorganized AI roadmap meeting, the disconnect between SOP-driven automation and strategic AI modeling, and his growing suspicion that even senior leaders were making decisions without a clear framework. His coworker listened, then gestured for him to take a seat. “Take a breath,” they said. “You’re not the first to notice this gap. Let me explain what’s really missing.”

Why SOPs Aren’t Enough

The coworker acknowledged that the organization was strong in SOPs. “We’re great at detailing exactly how to handle repetitive, rules-based tasks – like verifying invoices or updating inventory. In those areas, we can plug in AI agents pretty easily. They follow a well-defined script and execute tasks efficiently.
But that’s just the tip of the iceberg.” They leaned forward and continued, “Where we struggle, as you’ve discovered, is in decision-making at deeper levels – strategic decisions like which new product lines to pursue, or tactical decisions like selecting the right vendor partnerships. There’s no documented methodology for these. It’s all in people’s heads.” Roger tilted his head, intrigued. “So how do we fix something so basic yet so far-reaching?” “That’s where Decision Domain Management comes in,” they explained. In the context of data governance and management, data domains are the high-level blocks that data professionals use to define master data. Simply put, data domains help data teams logically group data that is of interest to their business or stakeholders. “Think of it as the equivalent of SOPs – but for decision-making. Instead of prescribing exact steps for routine tasks, it helps classify decisions, assess their importance, and determine whether AI can support them – and if so, in what capacity.” They broke it down further.

The Decision Types

“First, we categorize decisions into three broad types – operational, tactical, and strategic. Once we correctly classify a decision, we get a clearer picture of how critical it is and whether it requires an AI agent (good at routine tasks) or an AI model (good at predictive and analytical tasks).”

The Cynefin Framework

The coworker then introduced the Cynefin Framework, explaining how it helps categorize decision contexts, from clear and complicated through complex and chaotic. By combining Decision Types with the Cynefin Framework, organizations can determine exactly where AI projects will be most beneficial.

Putting It into Practice

Seeing the spark of understanding in Roger’s eyes, the coworker provided some real-world examples:

✅ AI agents are ideal for simple SOP-based tasks like invoice validation or shipping notifications.
✅ AI models can support complicated decisions, like vendor negotiations, by analyzing performance metrics.
✅ Strategic AI modeling can help navigate complex decisions, such as predicting new market trends, but human judgment is still required.

“Once we classify decisions,” the coworker continued, “we can score and prioritize AI investments based on impact and feasibility. Instead of throwing AI at random problems, we make informed choices.”

The Lightbulb Moment

Roger exhaled, visibly relieved. “So the problem isn’t just that we lack a single best AI approach – it’s that we don’t have a shared structure for decision-making in the first place,” he said. “If we build that structure, we’ll know which AI investments matter most, and we won’t keep debating in circles.” The coworker nodded. “Exactly. Decision Domain Management is the missing blueprint. We can’t expect AI to handle what even humans haven’t clearly defined. By categorizing


Einstein Service Agent

It’s been a little over a year since the global surge in GenAI chatbots, sparked by the excitement around ChatGPT. Since then, numerous vendors, both large and mid-sized, have invested heavily in the technology, and many users have already adopted AI-powered chatbots. The competition is intensifying, with CRM giant Salesforce releasing its own GenAI chatbot software, Einstein Service Agent. Einstein Service Agent, built on the Einstein 1 Platform, is Salesforce’s first fully autonomous AI agent. It interacts with large language models (LLMs) by analyzing the context of customer messages to determine the next actions. Utilizing GenAI, the agent generates conversational responses grounded in a company’s trusted business data, including Salesforce CRM data. Salesforce claims that service organizations can now significantly reduce the number of tedious inquiries that hinder productivity, allowing human agents to focus on more complex tasks. For customers, this means getting answers faster without waiting for human agents. Additionally, the service promises 24/7 availability for customer communication in natural language, with an easy handoff to human agents for more complicated issues. Businesses are increasingly turning to AI-based chatbots because, unlike traditional chatbots, they don’t rely on specific programmed queries and can understand context and nuance. Alongside Salesforce, other tech leaders like AWS and Google Cloud have released their own chatbots, such as Amazon Lex and Vertex AI, continuously enhancing their software. Recently, AWS updated its chatbot with the QnAIntent capability in Amazon Lex, allowing integration with a knowledge base in Amazon Bedrock. Similarly, Google released Vertex AI Agent Builder earlier this year, enabling organizations to build AI agents with no code, which can function together with one main agent and subagents. The AI arms race is just beginning, with more vendors developing software to meet market demands. 
For users, this means that while AI takes over many manual and tedious tasks, the primary challenge will be choosing the right vendor that best suits the needs and resources of their business.


From Generative AI to Agentic AI

Understanding the Coming Shift: From Generative AI to Agentic AI

Large Language Models (LLMs), such as GPT, excel at generating text, answering questions, and supporting various tasks. However, they operate reactively, responding only to the input they receive based on learned patterns. LLMs cannot make decisions independently, adapt to new situations, or plan ahead. Agentic AI addresses these limitations. Unlike Generative AI, Agentic AI can set its own goals, take initiative, and learn from its experiences. It is proactive, capable of adjusting its actions over time, and can manage complex, evolving tasks that demand continuous problem-solving and decision-making. This transition from reactive to proactive AI unlocks exciting new possibilities across industries. In this insight, we will explore the differences between Agentic AI and Generative AI, examining their distinct impacts on technology and industries. Let’s begin by understanding what sets them apart.

What is Agentic AI?

Agentic AI refers to systems capable of autonomous decision-making and action to achieve specific goals. These systems go beyond generating content – they interact with their environments, respond to changes, and complete tasks with minimal human guidance.

What is Generative AI?

Generative AI focuses on creating content – text, images, music, or video – by learning from large datasets to identify patterns, styles, or structures. Generative AI acts like a creative assistant, producing content based on what it has learned, but it remains reactive and task-specific.

Key Differences in Workflows

Agentic AI employs an iterative, cyclical workflow that includes stages like “Thinking/Research” and “Revision.” This adaptive process involves self-assessment, testing, and refinement, enabling the system to learn from each phase and tackle complex, evolving tasks effectively.
Generative AI, in contrast, follows a linear, single-step workflow, moving directly from input to output without iterative improvements. While efficient for straightforward tasks, it lacks the ability to revisit or refine its results, limiting its effectiveness for dynamic or nuanced challenges.

Characteristics of Agentic AI vs. Generative AI

- Autonomy: Agentic AI acts independently, making decisions and executing tasks; Generative AI requires human input to generate responses.
- Behavior: Agentic AI is goal-directed, proactively working toward specific objectives; Generative AI is task-oriented, reacting to immediate prompts.
- Adaptation and Learning: Agentic AI learns from experiences, adjusting actions dynamically; Generative AI operates on pre-trained patterns, without learning.
- Decision-Making: Agentic AI handles complex decisions, weighing multiple outcomes; Generative AI makes basic decisions, selecting outputs based on patterns.
- Environmental Perception: Agentic AI understands and interacts with its surroundings; Generative AI lacks awareness of the physical environment.

Case Study: Agentic Workflow in Action

Andrew Ng highlighted the power of the Agentic Workflow in a coding task, testing two approaches on the HumanEval benchmark. This illustrates how iterative methods can enhance performance, even for older AI models.

Conclusion

As AI becomes increasingly integrated into our lives and workplaces, understanding the distinction between Generative AI and Agentic AI is essential. Generative AI has transformed tasks like content creation, offering immediate, reactive solutions. However, it remains limited to following instructions without true autonomy. Agentic AI represents a significant leap beyond today’s chatbots. By setting goals, making decisions, and adapting in real time, it can tackle complex, dynamic tasks without constant human oversight. Approaches like the Agentic Workflow further enhance AI’s capabilities, enabling iterative learning and continuous improvement.
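The contrast between the two workflows can be sketched in a few lines. The toy functions below are hypothetical stand-ins for LLM calls: the linear path makes one pass, while the agentic path loops through draft, critique, and revision until the critique passes or a round budget is exhausted.

```python
def linear_generate(generate, prompt):
    """Generative workflow: a single pass from input to output."""
    return generate(prompt)

def agentic_generate(generate, critique, revise, prompt, max_rounds=3):
    """Agentic workflow sketch: draft, self-assess, and revise until the
    critique passes or the round budget runs out. The three callables
    stand in for LLM calls; names and signatures are illustrative."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback is None:          # critique found no issues: accept the draft
            return draft
        draft = revise(draft, feedback)
    return draft
```

The iterative loop is what lets a weaker model close the quality gap: each pass folds the critique back into the next draft, which is the essence of the Agentic Workflow result described above.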
Related Posts

Salesforce OEM AppExchange: Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers.

Salesforce Jigsaw: Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database…

Service Cloud with AI-Driven Intelligence: Salesforce enhances Service Cloud with an AI-driven intelligence engine. Data science and analytics are rapidly becoming standard features in enterprise applications…

Health Cloud Brings Healthcare Transformation: Following swiftly after last week's successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series…

Python-Based Reasoning


Introducing a Python-Based Reasoning Engine for Deterministic AI

As the demand for deterministic systems grows, it is time to revive foundational ideas for the age of large language models (LLMs).

The Challenge

One of the critical issues with modern AI systems is establishing constraints around how they validate and reason about incoming data. As we increasingly rely on stochastic LLMs to process unstructured data, enforcing rules and guardrails becomes vital for ensuring reliability and consistency.

The Solution

A company has developed a Python-based reasoning and validation framework, inspired by Pydantic, designed to empower developers and non-technical domain experts to create sophisticated rule engines. By transforming Standard Operating Procedures (SOPs) and business guardrails into enforceable code, this symbolic reasoning framework addresses the need for structured, interpretable, and reliable AI systems. The architecture comprises five core components and supports multiple types of engines, including validation engines and reasoning engines.

Case Studies

1. Validation Engine: Mining Company Compliance

A mining company needed to validate employee qualifications against region-specific requirements. The system was configured to check rules such as minimum age and required certifications for specific roles.

Input example: employee data and validation rules were modeled as JSON:

```json
{
  "employees": [
    { "name": "Sarah", "age": 25, "documents": [{ "type": "safe_handling_at_work" }] },
    { "name": "John",  "age": 17, "documents": [{ "type": "heavy_lifting" }] }
  ],
  "rules": [
    { "type": "min_age", "parameters": { "min_age": 18 } }
  ]
}
```

Output: violations, such as "Minimum age must be 18," were flagged immediately, enabling quick remediation.

2. Reasoning Engine: Solving the River Crossing Puzzle

To showcase its capabilities, the team modeled the classic river crossing puzzle, in which a farmer must transport a wolf, a goat, and a cabbage across a river without leaving incompatible items together.
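To make the validation step concrete, here is a self-contained sketch of how a `min_age` rule like the one above could be evaluated. The function name and structure are illustrative only, not the framework's actual API.

```python
def check_min_age(employees, min_age):
    """Flag employees below the configured minimum age."""
    violations = []
    for emp in employees:
        if emp["age"] < min_age:
            violations.append(f"{emp['name']}: minimum age must be {min_age}")
    return violations

# Mirrors the JSON input shown above (documents omitted for brevity).
data = {
    "employees": [
        {"name": "Sarah", "age": 25},
        {"name": "John", "age": 17},
    ],
    "rules": [{"type": "min_age", "parameters": {"min_age": 18}}],
}

rule = data["rules"][0]
print(check_min_age(data["employees"], rule["parameters"]["min_age"]))
# -> ['John: minimum age must be 18']
```

A real engine would dispatch on the rule's `type` field to a registered rule class; the point here is only that each rule reduces to a deterministic, testable check over structured data.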
Enhanced scenario: adding a new rule, "Wolf cannot be left with a chicken," created an unsolvable scenario. By introducing a compensatory rule, "Farmer can carry two items at once," the system adapted and solved the puzzle with fewer moves.

Developer Insights

The system supports rapid iteration and debugging. For example, adding rules is as simple as defining Python classes:

```python
class GoatCabbageRule(Rule):
    def evaluate(self, state):
        return not (state.goat == state.cabbage and state.farmer != state.goat)

    def get_description(self):
        return "Goat cannot be left alone with cabbage"
```

Real-World Impact

This framework accelerates development by enabling non-technical stakeholders to contribute to rule creation through natural language, with developers approving and implementing these rules. The process reduces development time by up to 5x and adapts to varied use cases, from logistics to compliance.
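For illustration, the goat-and-cabbage rule can be exercised end to end with minimal stand-ins for the framework's `Rule` and `State` classes (both of which are assumptions here, not the framework's real types):

```python
class Rule:
    # Minimal stand-in base class for the framework's Rule type.
    def evaluate(self, state) -> bool:
        raise NotImplementedError

    def get_description(self) -> str:
        raise NotImplementedError

class State:
    # Each attribute records which river bank an item is on.
    def __init__(self, farmer, wolf, goat, cabbage):
        self.farmer, self.wolf = farmer, wolf
        self.goat, self.cabbage = goat, cabbage

class GoatCabbageRule(Rule):
    def evaluate(self, state) -> bool:
        # Violated when goat and cabbage share a bank without the farmer.
        return not (state.goat == state.cabbage and state.farmer != state.goat)

    def get_description(self) -> str:
        return "Goat cannot be left alone with cabbage"

rule = GoatCabbageRule()
safe = State("left", "right", "left", "left")     # farmer stays with goat and cabbage
unsafe = State("right", "right", "left", "left")  # goat and cabbage left alone
print(rule.evaluate(safe), rule.evaluate(unsafe))  # -> True False
```

A reasoning engine would then search over moves, keeping only states for which every registered rule's `evaluate` returns True.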

Agentforce Redefines Generative AI


Agentforce: Redefining Generative AI in Salesforce

Many Dreamforce attendees who expected to hear about Einstein Copilot were surprised when Salesforce introduced Agentforce just a week before the conference. While it might seem like a rebranding of Copilot, Agentforce marks a significant evolution by enabling more autonomous agents that go beyond summarizing or generating content to perform specific actions. Here's a breakdown of the transition and what it means for Salesforce users.

How Agentforce Works

Agents take user input, known as an "utterance," and translate it into actionable steps based on predefined configurations. This allows the system to improve performance over time while delivering responses tailored to user needs.

Understanding Agentforce

1. Topics: Organizing Agent Capabilities. Agentforce introduces "Topics," a new layer of organization that categorizes actions by business function. When a user provides an utterance, the agent identifies the relevant topic first, then determines the best actions to address it.

2. Actions: What Agents Can Do. Actions remain largely unchanged from Einstein Copilot. These are the tasks agents perform to execute their plans.

3. Prompts: The Key to Better Results. LLMs rely on prompts to generate outputs, and crafting effective prompts is essential for reducing irrelevant responses and optimizing agent behavior.

How Generative AI Enhances Salesforce

Agentforce unlocks benefits across productivity, personalization, standardization, and efficiency.

Implementing Agentforce: Tips for Success

Getting started: begin with standard Agent actions. These out-of-the-box tools, such as opportunity summarization or close plan creation, provide a strong foundation. You can make minor adjustments to optimize their performance before diving into more complex custom actions.

Testing and iteration: testing AI agents is different from testing traditional workflows.
Agents must handle varied phrasings of the same user request (utterances) while maintaining consistency in their responses.

The Future of Salesforce with Agentforce

As you gain expertise in planning, developing, testing, and deploying Agentforce actions, you'll unlock new possibilities for transforming your Salesforce experience. With generative AI tools like Agentforce, Salesforce evolves from a traditional point-and-click interface into an intelligent, agent-driven platform with streamlined, conversational workflows. This isn't just an upgrade; it's the foundation for reimagining how businesses interact with their CRM in an AI-assisted world.
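The topic-then-action flow described earlier can be sketched as a two-stage lookup. To be clear, the topic names, action names, and keyword matching below are all hypothetical: Agentforce's own planner is LLM-driven, not keyword-based.

```python
# Illustrative topic and action tables (hypothetical names).
TOPICS = {
    "opportunity_management": ("opportunity", "close plan", "pipeline"),
    "case_handling": ("case", "ticket", "escalate"),
}
ACTIONS = {
    "opportunity_management": "summarize_opportunity",
    "case_handling": "summarize_case",
}

def route(utterance: str) -> str:
    text = utterance.lower()
    for topic, keywords in TOPICS.items():   # stage 1: identify the topic
        if any(k in text for k in keywords):
            return ACTIONS[topic]            # stage 2: pick an action in it
    return "ask_for_clarification"           # fallback when no topic matches

print(route("Summarize this opportunity for me"))  # -> summarize_opportunity
```

The two-stage structure is the point: scoping the action search to a topic first keeps the agent's choices constrained to one business function at a time.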


On Premise Gen AI

In 2025, enterprises moving generative AI (GenAI) into production after years of experimentation are increasingly considering on-premises deployment as a cost-effective alternative to the cloud. Since OpenAI ignited the AI revolution in late 2022, organizations have tested the large language models powering GenAI services on platforms like AWS, Microsoft Azure, and Google Cloud. These experiments demonstrated GenAI's potential to enhance business operations while exposing the substantial costs of cloud usage.

To avoid difficult conversations with CFOs about escalating cloud expenses, CIOs are exploring on-premises AI as a financially viable alternative. Advances in software from startups and packaged infrastructure from vendors such as HPE and Dell are making private data centers an attractive option for managing costs.

A survey conducted by Menlo Ventures in late 2024 found that 47% of U.S. enterprises with at least 50 employees were developing GenAI solutions in-house. Similarly, Informa TechTarget's Enterprise Strategy Group reported a rise in enterprises considering on-premises and public cloud equally for new applications, from 37% in 2024 to 45% in 2025.

This shift is reflected in hardware sales. HPE reported a 16% revenue increase in AI systems, reaching $1.5 billion in Q4 2024. During the same period, Dell recorded a record $3.6 billion in AI server orders, with its sales pipeline expanding by over 50% across various customer segments. "Customers are seeking diverse AI-capable server solutions," noted David Schmidt, senior director of Dell's PowerEdge server line.

While heavily regulated industries have traditionally relied on on-premises systems to ensure data privacy and security, broader adoption is now driven by the need for cost control. Fortune 2000 companies are leading this trend, opting for private infrastructure over the cloud because its expenses are more predictable.
It is not unusual to see cloud bills running to tens of thousands of dollars, or even $1 million, per month, according to John Annand, an analyst at Info-Tech Research Group.

Global manufacturing giant Jabil primarily uses AWS for GenAI development but emphasizes ongoing cost management. "Does moving to the cloud provide a cost advantage? Sometimes it doesn't," said CIO May Yap. Jabil employs a continuous cloud financial optimization process to maximize efficiency.

On-Premises AI: Technology and Trends

Enterprises now have alternatives to cloud infrastructure, including as-a-service offerings like Dell APEX and HPE GreenLake, which provide flexible pay-per-use pricing for AI servers, storage, and networking tailored to private data centers or colocation facilities. "The high cost of cloud drives organizations to seek more predictable expenses," said Tiffany Osias, vice president of global colocation services at Equinix.

Walmart exemplifies in-house AI development, creating tools like a document summarization app for its benefits help desk and an AI assistant for corporate employees. Startups are also enabling enterprises to build AI applications with turnkey solutions. "About 80% of GenAI requirements can now be addressed with push-button solutions from startups," said Tim Tully, partner at Menlo Ventures. Companies like Ragie (RAG-as-a-service) and Lamatic.ai (GenAI platform-as-a-service) are driving this innovation. Others, like Squid AI, integrate custom AI agents with existing enterprise infrastructure.

Open-source frameworks like LangChain further empower on-premises development, offering tools for building chatbots, virtual assistants, and intelligent search systems. Its extension, LangGraph, adds functionality for building multi-agent workflows.

As enterprises develop AI applications internally, consulting services will play a pivotal role. "Companies offering guidance on effective AI tool usage and aligning them with business outcomes will thrive," Annand said.
This evolution in AI deployment highlights the growing importance of balancing technological innovation with financial sustainability.

Statement Accuracy Prediction based on Language Model Activations


When users first began interacting with ChatGPT, they noticed an intriguing behavior: the model would often reverse its stance when told it was wrong. This raised concerns about the reliability of its outputs. How can users trust a system that appears to contradict itself?

Recent research has revealed that large language models (LLMs) not only generate inaccurate information (often referred to as "hallucinations") but are also, in a measurable sense, aware of their inaccuracies. Despite this awareness, these models proceed to present their responses confidently.

Unveiling LLM Awareness of Hallucinations

Researchers discovered this phenomenon by analyzing the internal mechanisms of LLMs. Whenever an LLM generates a response, it transforms the input query into a numerical representation and performs a series of computations before producing the output. At intermediate stages, these numerical representations are called "activations." Activations contain significantly more information than what is reflected in the final output, and by scrutinizing them, researchers can identify whether the LLM "knows" its response is inaccurate. A technique called SAPLMA (Statement Accuracy Prediction based on Language Model Activations) has been developed to explore this capability: it examines the internal activations of LLMs to predict whether their outputs are truthful.

Why Do Hallucinations Occur?

LLMs function as next-word prediction models. Each word is selected based on its likelihood given the preceding words; starting with "I ate," for example, the model ranks candidate continuations such as "a," "the," or "breakfast" by probability and commits to one. The issue arises when earlier predictions constrain subsequent outputs: once the model commits to a word, it cannot go back and revise that choice. This mechanism shows how the constraints of next-word prediction can lead to hallucinations, even when the model "knows" it is generating an incorrect response.
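The commit-and-never-backtrack behavior can be shown with a toy greedy decoder. The probability table is fabricated for illustration; a real model scores the entire vocabulary at every step.

```python
# Fabricated next-word probabilities keyed by the last two words.
NEXT = {
    ("I", "ate"): {"a": 0.40, "the": 0.35, "breakfast": 0.25},
    ("ate", "a"): {"sandwich": 0.6, "salad": 0.4},
}

def next_word(pair):
    probs = NEXT.get(pair)
    if not probs:
        return None
    return max(probs, key=probs.get)   # greedy: commit and never backtrack

sentence = ["I", "ate"]
while (w := next_word(tuple(sentence[-2:]))) is not None:
    sentence.append(w)
print(" ".join(sentence))  # -> I ate a sandwich
```

Note that once "a" is chosen, continuations compatible only with "breakfast" are unreachable: each greedy commitment prunes the space of everything that can follow, which is the structural root of the hallucination problem described above.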
Detecting Inaccuracies with SAPLMA

To investigate whether an LLM recognizes its own inaccuracies, researchers developed the SAPLMA method: a classifier is trained on the LLM's hidden-layer activations to predict whether a given statement is true. The classifier itself is a simple neural network with three dense layers, culminating in a binary output that predicts the truthfulness of the statement.

Results and Insights

The SAPLMA method achieved an accuracy of 60-80%, depending on the topic. While this is a promising result, it is not perfect and has notable limitations. However, if LLMs can learn to detect inaccuracies during the generation process, they could potentially refine their outputs in real time, reducing hallucinations and improving reliability.

The Future of Error Mitigation in LLMs

The SAPLMA method represents a step forward in understanding and mitigating LLM errors. Accurate classification of inaccuracies could pave the way for models that self-correct and produce more reliable outputs. While the current limitations are significant, ongoing research into these methods could lead to substantial improvements in LLM performance. By combining techniques like SAPLMA with advances in LLM architecture, researchers aim to build models that are not only aware of their errors but capable of addressing them dynamically, enhancing both the accuracy and trustworthiness of AI systems.
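The shape of such a probe is easy to sketch. The layer widths and activation dimension below are illustrative assumptions, and the weights are random, so this shows only the architecture (three dense layers into a sigmoid), not a trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096  # assumed width of the probed LLM hidden layer

# Three dense layers with ReLU, then a sigmoid for the binary verdict.
W1, b1 = rng.normal(size=(D, 256)) * 0.01, np.zeros(256)
W2, b2 = rng.normal(size=(256, 128)) * 0.01, np.zeros(128)
W3, b3 = rng.normal(size=(128, 64)) * 0.01, np.zeros(64)
Wo, bo = rng.normal(size=(64, 1)) * 0.01, np.zeros(1)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_truthful(activation):
    # Probability that the statement behind this activation is true.
    h = relu(activation @ W1 + b1)
    h = relu(h @ W2 + b2)
    h = relu(h @ W3 + b3)
    return float(sigmoid(h @ Wo + bo))

p = predict_truthful(rng.normal(size=D))
print(round(p, 3))
```

In the actual method, the probe is trained on activations from statements with known truth values and then applied to new statements; the LLM itself is left frozen, which is what makes the 60-80% accuracy figure a measurement of information already present in the activations.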


Salesforce and AWS: The Agentic Enterprise

Salesforce and AWS: Driving the Future of the Agentic Enterprise

As AI-powered agents redefine the way businesses operate, strategic partnerships are playing a pivotal role in harnessing the power of data and artificial intelligence. Salesforce and AWS, two industry leaders, have taken significant steps toward building a smarter, agentic enterprise through their expanded collaboration. One year into this strategic partnership, their joint efforts are delivering transformative AI and data solutions, helping customers like Buyers Edge Platform unlock new efficiencies and capabilities.

A Partnership Fueling Agentic AI

Salesforce and AWS are aligning their AI and data initiatives to pave the way for advanced agentic systems: autonomous AI agents designed to enhance business operations and customer experiences. Their joint innovations over the past year are creating an ecosystem that supports the delivery of agentic AI, enabling businesses to streamline operations and tap new value from their data.

"By integrating data and AI capabilities across our platforms, Salesforce and AWS are building a strong foundation for the future of agentic systems," said Brian Landsman, EVP of Global Business Development and Technology Partnerships at Salesforce. "With a majority of large companies planning to implement agents by 2027, organizations need trusted partners to help them achieve their vision of a smarter enterprise."

Making AI More Accessible

Salesforce is simplifying access to AI technology through the AWS Marketplace, offering customers an integrated solution that includes Agentforce, the agentic layer of the Salesforce platform. Agentforce enables businesses to deploy autonomous AI agents across various operations, streamlining workflows and delivering measurable results.
Available in 23 countries, Salesforce's presence on AWS Marketplace offers customers key advantages. By removing barriers to adoption, Salesforce and AWS let companies focus on leveraging technology for growth rather than navigating complex procurement systems.

A New Era of Enterprise Efficiency

As businesses increasingly rely on data and AI to remain competitive, the Salesforce-AWS partnership is setting the stage for enterprises to achieve more with agentic systems, executing complex tasks with unprecedented efficiency and maximizing the ROI of technology investments.

"Our partnership with Salesforce empowers mutual customers to realize the full potential of their data and AI investments," said Chris Grusz, Managing Director of Technology Partnerships at AWS. "Together, we're delivering immediate, actionable insights with agentic AI, enabling organizations to automate strategically and unlock more value across their operations."

Looking Ahead

By seamlessly integrating data and AI capabilities, Salesforce and AWS are not just building technology solutions; they are reshaping how enterprises operate and thrive in the digital age. As agentic AI becomes an essential part of business strategy, this partnership provides a blueprint for leveraging technology to drive smarter, more agile, and more effective enterprises.


Autonomy, Architecture, and Action

Redefining AI Agents: Autonomy, Architecture, and Action

AI agents are reshaping how technology interacts with us and executes tasks. Their mission: to reason, plan, and act independently, following instructions, making autonomous decisions, and completing actions, often without user involvement. These agents adapt to new information, adjust in real time, and pursue their objectives autonomously. This evolution in agentic AI is revolutionizing how goals are accomplished, ushering in a future of semi-autonomous technology.

At their foundation, AI agents rely on one or more large language models (LLMs). However, designing agents is far more intricate than building chatbots or generative assistants. While traditional AI applications often depend on user-driven inputs, such as prompt engineering or active supervision, agents operate autonomously. Enabling that autonomy places demands on both the agentic architecture itself and the software infrastructure used to build and deploy it.

Agent Development Made Easier with Langflow and Astra DB

Langflow simplifies the development of agentic applications with its visual IDE. It integrates with Astra DB, which combines vector and graph capabilities for ultra-low-latency data access. This synergy accelerates development.

Transforming Autonomy into Action

Agentic AI is fundamentally changing how tasks are executed by empowering systems to act autonomously. By leveraging platforms like Astra DB and Langflow, organizations can simplify agent design and deploy scalable, effective AI applications. Start building the next generation of AI-powered autonomy today.


Liar Liar Apple on Fire

Apple Developing Update After AI System Generates Inaccurate News Summaries

Apple is working on a software update to address inaccuracies generated by its Apple Intelligence system after multiple instances of false news summaries were reported. The BBC first alerted Apple in mid-December to significant errors, including a fabricated summary that falsely attributed a statement to BBC News. The summary suggested that Luigi Mangione, accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself, a claim entirely unsubstantiated. Other publishers, such as ProPublica, also raised concerns about Apple Intelligence producing misleading summaries.

While Apple did not respond immediately to the BBC's December report, it issued a statement after pressure mounted from groups like the National Union of Journalists and Reporters Without Borders, both of which called for the removal of Apple Intelligence. Apple assured stakeholders it is working to refine the technology.

A Widespread AI Issue: Hallucinations

Apple joins the ranks of other AI vendors struggling with generative AI hallucinations, instances in which AI produces false or misleading information. In October 2024, Perplexity AI faced a lawsuit from Dow Jones & Co. and the New York Post over fabricated news content attributed to their publications. Similarly, Google had to improve its AI summaries after providing users with inaccurate information. On January 16, Apple temporarily disabled AI-generated summaries for news apps on iPhone, iPad, and Mac devices.

The Core Problem: AI Hallucination

Chirag Shah, a professor of Information Science at the University of Washington, emphasized that hallucination is inherent to the way large language models (LLMs) function. "The nature of AI models is to generate, synthesize, and summarize, which makes them prone to mistakes," Shah explained.
"This isn't something you can debug easily. It's intrinsic to how LLMs operate."

While Apple plans to introduce an update that clearly labels summaries as AI-generated, Shah believes this measure falls short. "Most people don't understand how these headlines or summaries are created. The responsible approach is to pause the technology until it's better understood and mitigation strategies are in place," he said.

Legal and Brand Implications for Apple

The hallucinated summaries pose significant reputational and legal risks for Apple, according to Michael Bennett, an AI adviser at Northeastern University. Before launching Apple Intelligence, the company was perceived as lagging in the AI race, and the release of the system was intended to position Apple as a leader. Instead, the inaccuracies have damaged its credibility.

"This type of hallucinated summarization is both an embarrassment and a serious legal liability," Bennett said. "These errors could form the basis for defamation claims, as Apple Intelligence misattributes false information to reputable news sources." Bennett criticized Apple's seemingly minimal response: "It's surprising how casual Apple's reaction has been. This is a major issue for their brand and could expose them to significant legal consequences."

Opportunity for Publishers

The incident highlights the need for publishers to protect their interests when partnering with AI vendors like Apple and Google. Publishers should demand stronger safeguards to prevent false attributions and negotiate new contractual clauses to minimize brand risk. "This is an opportunity for publishers to lead the charge, pushing AI companies to refine their models or stop attributing false summaries to news sources," Bennett said. He suggested legal action as a potential recourse if vendors fail to address these issues.
Potential Regulatory Action

The Federal Trade Commission (FTC) may also scrutinize the issue, since consumers paying for products like AI-capable iPhones could argue they are not receiving the promised service. However, Bennett believes Apple will likely act to resolve the problem before regulatory involvement becomes necessary.

gettectonic.com