Large Language Model - gettectonic.com

Salesforce Prompt Builder

Salesforce Prompt Builder: Field Generation Prompt Template

What is a Prompt?

A prompt is a set of detailed instructions designed to guide a Large Language Model (LLM) in generating relevant and high-quality output. Just as chefs fine-tune their recipes through testing and adjustments, prompt design involves iterating on instructions to ensure that the LLM delivers accurate, actionable results.

Effective prompt design involves "grounding" your prompts with specific data, such as business context, product details, and customer information. By tailoring prompts to your particular needs, you help the LLM provide responses that align with your business goals. Like a well-crafted recipe, an effective prompt consists of both ingredients and instructions that work together to produce optimal results. A great prompt offers clear directions to the LLM, ensuring it generates output that meets your expectations. But what does an ideal prompt template look like? Here's a breakdown.

What is a Field Generation Prompt Template?

The Field Generation Prompt Template is a tool that integrates AI-powered workflows directly into fields within Lightning record pages. This template allows users to populate fields with summaries or descriptions generated by an LLM, streamlining interactions and enhancing productivity during customer conversations. Let's explore how to set up a Field Generation Prompt Template using an example: generating a summary of case comments to help customer service agents efficiently review a case.

Steps to Create a Field Generation Prompt Template

1. Create a new rich text field on the Case object.
2. Enable Einstein Setup.
3. Create a prompt template with the Field Generation template type.
4. Configure the prompt template workspace. Optional: you can also use Flow or Apex to incorporate additional merge fields.
5. Preview the LLM's response.

Example Prompt

Scenario: You are a customer service representative at a company called ENForce.com, and you need a quick summary of a case's comments.

Record Merge Fields:

Instructions:

- Follow these instructions precisely. Do not add information not provided.
- Refer to the "contact" as "client" in the summary.
- Use clear, concise, and straightforward language in the active voice with a friendly, informal, and informative tone.
- Include an introductory sentence and closing sentence, along with several bullet points.
- Use a variety of emojis as bullet points to make the list more engaging.
- Limit the summary to no more than seven sentences.
- Do not include any reference to missing values or incomplete data.

6. Add the "Case Summary" field to the Lightning record page.
7. Generate the summary.

By following these steps, you can leverage Salesforce's Prompt Builder to enhance case management processes and improve the efficiency of customer service interactions through AI-assisted summaries.
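As a sketch of how the pieces above fit together, a grounded field-generation template might combine the scenario, record merge fields, and instructions as shown below. The merge-field syntax and field names (e.g. `{!$Input:Case.Subject}`) are illustrative assumptions, not taken from the original article; check the Prompt Builder documentation for the exact resource names in your org.

```text
You are a customer service representative at ENForce.com and need a quick
summary of a case's comments.

Case Subject:  {!$Input:Case.Subject}        <-- merge field (assumed syntax)
Case Comments: {!$Input:Case.CaseComments}   <-- merge field (assumed syntax)

Follow these instructions precisely. Do not add information not provided.
- Refer to the "contact" as "client" in the summary.
- Use clear, concise language in the active voice.
- Include an intro sentence, emoji bullet points, and a closing sentence.
- Limit the summary to no more than seven sentences.
```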


Empowering LLMs with a Robust Agent Framework

PydanticAI: Empowering LLMs with a Robust Agent Framework

As the generative AI landscape evolves at a historic pace, AI agents and multi-agent systems are expected to dominate 2025. Industry leaders like AWS, OpenAI, and Microsoft are racing to release frameworks, but among these, PydanticAI stands out for its unique integration of the powerful Pydantic library with large language models (LLMs).

Why Pydantic Matters

Pydantic, a Python library, simplifies data validation and parsing, making it indispensable for handling external inputs such as JSON, user data, or API responses. By automating data checks (e.g., type validation and format enforcement), Pydantic ensures data integrity while reducing errors and development effort. For instance, instead of manually validating fields like age or email, Pydantic lets you define models that automatically enforce structure and constraints. Consider the following example:

```python
from pydantic import BaseModel, EmailStr


class User(BaseModel):
    name: str
    age: int
    email: EmailStr


# The email address is a placeholder; the original value was obfuscated.
user_data = {"name": "Alice", "age": 25, "email": "alice@example.com"}
user = User(**user_data)

print(user.name)   # Alice
print(user.age)    # 25
print(user.email)  # alice@example.com
```

If invalid data is provided (e.g., a non-numeric string for age), Pydantic raises a detailed ValidationError, making debugging straightforward.

What Makes PydanticAI Special

Building on Pydantic's strengths, PydanticAI brings structured, type-safe responses to LLM-based AI agents.

Building an AI Agent with PydanticAI

Below is an example of creating a PydanticAI-powered bank support agent. The agent interacts with customer data, evaluates risks, and provides structured advice.
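To make the error-handling claim above concrete, here is a minimal sketch of catching a Pydantic ValidationError (assuming Pydantic v2); the model and field values are illustrative:

```python
from pydantic import BaseModel, ValidationError


class User(BaseModel):
    name: str
    age: int


try:
    # "twenty-five" cannot be coerced to an int, so validation fails
    User(name="Alice", age="twenty-five")
except ValidationError as exc:
    # each error reports the offending field ("loc") and an error type
    for err in exc.errors():
        print(err["loc"], err["type"])
```

Note that Pydantic's lax mode still coerces numeric strings like `"25"` to integers; only genuinely unparseable values trigger the error.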
Installation

```bash
pip install 'pydantic-ai-slim[openai,vertexai,logfire]'
```

Example: Bank Support Agent

```python
from dataclasses import dataclass

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext

from bank_database import DatabaseConn


@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn


class SupportResult(BaseModel):
    support_advice: str = Field(description="Advice for the customer")
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description="Risk level of the query", ge=0, le=10)


support_agent = Agent(
    "openai:gpt-4o",
    deps_type=SupportDependencies,
    result_type=SupportResult,
    system_prompt=(
        "You are a support agent in our bank. "
        "Provide support to customers and assess risk levels."
    ),
)


@support_agent.system_prompt
async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"


@support_agent.tool
async def customer_balance(
    ctx: RunContext[SupportDependencies], include_pending: bool
) -> float:
    return await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id, include_pending=include_pending
    )


async def main():
    deps = SupportDependencies(customer_id=123, db=DatabaseConn())

    result = await support_agent.run("What is my balance?", deps=deps)
    print(result.data)

    result = await support_agent.run("I just lost my card!", deps=deps)
    print(result.data)
```

Why PydanticAI Matters

PydanticAI simplifies the development of production-ready AI agents by bridging the gap between unstructured LLM outputs and structured, validated data. Its ability to handle complex workflows with type safety and its seamless integration with modern AI tools make it an essential framework for developers.
As we move toward a future dominated by multi-agent AI systems, PydanticAI is poised to be a cornerstone in building reliable, scalable, and secure AI-driven applications.


Real-World Insights and Applications

Salesforce's Agentforce empowers businesses to create and deploy custom AI agents tailored to their unique needs. Built on a foundation of flexibility, the platform leverages both Salesforce's proprietary AI models and third-party models like those from OpenAI, Anthropic, Amazon, and Google. This versatility enables businesses to automate a wide range of tasks, from generating detailed sales reports to summarizing Slack conversations.

AI in Action: Real-World Insights and Applications

The "CXO AI Playbook" by Business Insider explores how organizations across industries and sizes are adopting AI. Featured companies reveal their challenges, the decision-makers driving AI initiatives, and their strategic goals for the future. Salesforce's approach with Agentforce aligns with this vision, offering advanced tools to address dynamic business needs and improve operational efficiency.

Building on Salesforce's Legacy of Innovation

Salesforce has long been a leader in AI integration. It introduced Einstein in 2016 to handle scripted tasks like predictive analytics. As AI capabilities evolved, Salesforce launched Einstein GPT and later Einstein Copilot, which expanded into decision-making and natural language processing. By early 2024, these advancements culminated in Agentforce, a platform designed to provide customizable, prebuilt AI agents for diverse applications.

"We recognized that our customers wanted to extend our AI capabilities or create their own custom agents," said Tyler Carlson, Salesforce's VP of Business Development.

A Powerful Ecosystem: Agentforce's Core Features

Agentforce is powered by the Atlas Reasoning Engine, Salesforce's proprietary technology that employs ReAct prompting to enable AI agents to break down problems, refine their responses, and deliver more accurate outcomes. The engine integrates seamlessly with Salesforce's own large language models (LLMs) and external models, ensuring adaptability and precision.
Agentforce also emphasizes strict data privacy and security. For example, data shared with external LLMs is subject to limited retention policies and content filtering to ensure compliance and safety.

Key Applications and Use Cases

Businesses can leverage tools like Agentbuilder to design and scale AI agents with specific functionalities.

Seamless Integration with Slack

Currently in beta, Agentforce's Slack integration brings AI automation directly to the workplace. This allows employee-facing agents to execute tasks and answer queries within the communication tool. "Slack is valuable for employee-facing agents because it makes their capabilities easily accessible," Carlson explained.

Measurable Impact: Driving Success with Agentforce

Salesforce measures the success of Agentforce by tracking client outcomes. Early adopters report significant results, such as a 90% resolution rate for customer inquiries managed by AI agents. As adoption grows, Salesforce envisions a robust ecosystem of partners, AI skills, and agent capabilities. "By next year, we foresee thousands of agent skills and topics available to clients, driving broader adoption across our CRM systems and Slack," Carlson shared.

Salesforce's Agentforce represents the next generation of intelligent business automation, combining advanced AI with seamless integrations to deliver meaningful, measurable outcomes at scale.
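The ReAct-style loop described above (break a problem into steps, act, observe, refine) can be sketched as a simple iterate-until-answer loop. This is a generic illustration, not Salesforce's actual Atlas Reasoning Engine; the `model_step` function is a scripted stand-in for a real LLM call, and the tool is a stub:

```python
def lookup_balance(customer_id: str) -> str:
    # stand-in tool; a real agent would query a CRM or database
    return "balance=$120.50"

TOOLS = {"lookup_balance": lookup_balance}

def model_step(transcript: str) -> str:
    # scripted stand-in for an LLM: first reason and request a tool call,
    # then produce a final answer once an observation is available
    if "Observation:" not in transcript:
        return "Thought: I need the balance.\nAction: lookup_balance[c-123]"
    return "Final Answer: Your balance is $120.50."

def react_agent(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = model_step(transcript)
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            # parse "Action: tool[arg]" and feed the result back as an observation
            action = step.split("Action:", 1)[1].strip()
            name, arg = action.split("[", 1)
            result = TOOLS[name](arg.rstrip("]"))
            transcript += f"\nObservation: {result}"
    return "No answer within step budget."

print(react_agent("What is my balance?"))
```

The key design point is the feedback loop: each tool result is appended to the transcript as an observation, so the next model step can reason over it before answering.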


Agentforce Custom AI Agents

Salesforce Introduces Agentforce: A New AI Platform to Build Custom Digital Agents

Salesforce has unveiled Agentforce, its latest AI platform designed to help companies build and deploy intelligent digital agents to automate a wide range of tasks. Building on Salesforce's generative AI advancements, Agentforce integrates seamlessly with its existing tools, enabling businesses to enhance efficiency and decision-making through automation. With applications like generating reports from sales data, summarizing Slack conversations, and routing emails to the appropriate departments, Agentforce offers businesses unprecedented flexibility in automating routine processes.

The Problem Agentforce Solves

Salesforce's journey in AI began in 2016 with the launch of Einstein, a suite of AI tools for its CRM software. While Einstein automated some tasks, its capabilities were largely predefined and lacked the flexibility to handle complex, dynamic scenarios. The rapid evolution of generative AI opened new doors for improving natural language understanding and decision-making. This led to innovations like Einstein GPT and later Einstein Copilot, which laid the foundation for Agentforce. With Agentforce, businesses can now create prebuilt or fully customizable agents that adapt to unique business needs.

"We recognized that our customers want to extend the agents we provide or build their own," said Tyler Carlson, Salesforce's Vice President of Business Development.

How Agentforce Works

At the heart of Agentforce is the Atlas Reasoning Engine, a proprietary technology developed by Salesforce. It leverages advanced techniques like ReAct prompting, which allows AI agents to break down problems into steps, reason through them, and iteratively refine their actions until they meet user expectations.
Ensuring Security and Compliance

Given the potential risks of integrating third-party LLMs, Salesforce has implemented robust safeguards.

AI in Action: Real-World Applications

One notable use case of Agentforce is its collaboration with Workday to develop an AI Employee Service Agent. This agent helps employees find answers to HR-related questions using a company's internal policies and documents. Another example involves agents autonomously managing general email inboxes by analyzing message intent and forwarding emails to relevant teams.

"These agents are not monolithic or tied to a single LLM," Carlson explained. "Their versatility lies in combining different models and technologies for better outcomes."

Measuring Success

Salesforce gauges Agentforce's success through client outcomes and platform adoption. For example, some users report that Agentforce resolves up to 90% of customer inquiries autonomously. Looking ahead, Salesforce aims to expand the Agentforce ecosystem significantly. "By next year, we want thousands of agent skills and topics available for customers to leverage," Carlson added.

A Platform for the Future of AI

Agentforce represents Salesforce's vision of creating autonomous AI agents that empower businesses to work smarter, faster, and more efficiently. With tools like Agentbuilder and integrations across its ecosystem, Salesforce is positioning Agentforce as a cornerstone of AI-led innovation, helping businesses stay ahead in a rapidly evolving technological landscape.


Salesforce Agents are Transforming Internal Workflows

How Salesforce Agents are Transforming Internal Workflows

Salesforce CIO and Executive Vice President Juan Perez, with three decades of IT leadership experience, is leading the charge in deploying generative AI solutions like Agentforce within Salesforce. Perez's approach reflects lessons learned during his tenure at UPS, where he oversaw IT operations for a global enterprise. His strategies emphasize scalability, data strategy, and modernization to support growth, with AI now playing a pivotal role.

UPS Lessons Applied to Salesforce

Perez draws on his UPS experience in managing IT at scale to navigate Salesforce's needs as a growing enterprise. At UPS, he managed a complex, global IT organization supporting diverse operations, from running an airline to ensuring timely package delivery. Similarly, Salesforce's IT strategy prioritizes scalable solutions, robust data strategies, and AI integration. "Salesforce intelligently realized the importance of leveraging its own technologies, including AI, to modernize and support growth," Perez explains.

Generative AI's Transformative Potential

Perez views generative AI (GenAI) as a transformative force on par with the internet's emergence in the 1990s. By reducing the time spent on data analysis and decision-making, AI enables teams to focus on actions that improve productivity and customer service. While GenAI isn't a solution in itself, Perez sees it as an enabler that amplifies human efforts.

Evaluating and Integrating AI in Salesforce's Stack

Salesforce adopts a rigorous, multi-step approach to evaluate new technologies, including large language models (LLMs) and generative AI tools. Perez outlines a "filtering mechanism" for implementation. This structured approach ensures AI investments are both impactful and sustainable.

Measuring AI's ROI

To quantify the impact of AI, Salesforce evaluates metrics like lines of code generated using AI tools and time saved through automation.
In one example, approximately 26% of production-ready code in a recent deployment was AI-generated. This efficiency is factored into planning and budgeting, allowing resources to be reallocated to other initiatives.

Mitigating "Shadow AI" Risks

Perez warns against "shadow AI," where decentralized or unmanaged AI implementations can lead to security, data privacy, and investment inefficiencies. He stresses the need for visibility and governance to prevent these risks. To address this, Salesforce has established an AI Council that is evolving into an Agentforce Center of Excellence. This body ensures responsible development, aligns projects with organizational goals, and maintains oversight of AI implementations across the enterprise.

Responsible and Scalable AI Adoption

Salesforce's commitment to using its own products extends to Agentforce, a generative AI suite designed to streamline internal workflows. With a focus on governance, scalability, and measurable impact, Salesforce sets a benchmark for AI adoption. As Perez explains, "We ensure our AI solutions are safe, effective, and capable of driving significant value while remaining aligned with our strategic goals." By combining rigorous evaluation, measurable outcomes, and proactive governance, Salesforce demonstrates how AI can transform workflows while mitigating risks.


Is Your LLM Agent Enterprise-Ready?

Customer Relationship Management (CRM) systems are the backbone of modern business operations, orchestrating customer interactions, data management, and process automation. As businesses embrace advanced AI, the potential for transformative growth is clear—automating workflows, personalizing customer experiences, and enhancing operational efficiency. However, deploying large language model (LLM) agents in CRM systems demands rigorous, real-world evaluations to ensure they meet the complexity and dynamic needs of professional environments.


Ready or Not Here AI Agents Come

As organizations embrace the growing presence of AI agents, leaders must address concerns about allowing autonomous systems to operate in sensitive environments. AI agents, often viewed as the future of how enterprises deploy large language models, raise important questions around security and identity management.

The rise of agentic AI has been notable in 2024, with Google launching its Vertex AI Agents, Salesforce introducing Agentforce, and AWS rolling out Agents for Amazon Bedrock. These agents promise to deliver significant value by executing tasks using natural language commands, reasoning through the best solutions, and taking action without human intervention.

However, as Katie Norton, research manager for DevSecOps & Software Supply Chain Security at IDC, highlighted at Venafi's Machine Identity Conference, AI agents present unique security challenges. Unlike robotic process automation (RPA), AI agents act autonomously, creating a need for secure machine identities, especially as they access sensitive data across multiple systems.

Matt McLarty, CTO at Boomi, added that the complexity of managing agentic AI revolves around ensuring proper authentication and authorization. He pointed out scenarios where agents dynamically interact with systems, such as opening support tickets, which require secure verification of agent access rights. While these agents offer significant potential, businesses are not yet prepared to issue credentials for autonomous agents, according to McLarty. The current reliance on existing authentication and authorization systems needs to evolve to support these new AI capabilities. He also emphasized the importance of pairing agents with human oversight, ensuring that access and actions are traceable.

As AI advances into its third wave, characterized by autonomous agents capable of reasoning and action, companies need to rethink their approaches to workforce collaboration.
These agents will handle low-value, time-consuming tasks, while human workers focus on strategic initiatives. In sales, for example, AI agents will manage customer interactions, schedule meetings, and resolve basic issues, allowing salespeople to build deeper relationships.

At Dreamforce 2024, Salesforce unveiled Agentforce, a platform that empowers organizations to build and deploy customized AI agents across service, sales, marketing, and commerce. This suite aims to increase efficiency, productivity, and customer satisfaction. However, for AI agents to succeed, they must complement human skills and operate within established guardrails. Organizations need to implement audit trails to ensure accountability and develop training programs for employees to effectively collaborate with AI.

Ultimately, the future of work will feature a hybrid workforce where humans and AI agents work together to drive innovation and success. As companies move forward, they must ensure AI agents understand their limits and recognize when human intervention is necessary. This balance between AI-driven efficiency and human oversight will enable businesses to thrive in an ever-evolving landscape.


AI Productivity Paradox

The AI Productivity Paradox: Why Aren't More Workers Using AI Tools Like ChatGPT?

The Real Barrier Isn't Technical Skills — It's Time to Think

Despite the transformative potential of tools like ChatGPT, most knowledge workers aren't utilizing them effectively. Those who do tend to use them for basic tasks like summarization. Less than 5% of ChatGPT's user base subscribes to the paid Plus version, indicating that only a small fraction of potential professional users are tapping into AI for more complex, high-value tasks.

To someone who has spent over a decade building AI products at companies such as Google Brain and Shopify Ads, the evolution of AI is clearly evident. With the advent of ChatGPT, AI has transitioned from being an enhancement for tools like photo organizers to becoming a significant productivity booster for all knowledge workers.

Most executives are aware that today's buzz around AI is more than just hype. They're eager to make their companies AI-forward, recognizing that it's now more powerful and user-friendly than ever. Yet, despite this potential and enthusiasm, widespread adoption remains slow. The real issue lies in how organizations approach work itself. Systemic problems are hindering the integration of these tools into the daily workflow. Ultimately, the question executives need to ask isn't, "How can we use AI to work faster?" or "Can this feature be built with AI?" but rather, "How can we use AI to create more value? What are the questions we should be asking but aren't?"

Real-World Impact

Recently, large language models (LLMs), the technology behind tools like ChatGPT, were used to tackle a complex data structuring and analysis task. This task would typically require a cross-functional team of data analysts and content designers, taking a month or more to complete; it was accomplished in just one day using Google AI Studio. However, the process wasn't just about pressing a button and letting AI do all the work.
It required focused effort, detailed instructions, and multiple iterations. Hours were spent crafting precise prompts, providing feedback, and redirecting the AI when it went off course. In this case, the task was compressed from a month-long process to a single day. While it was mentally exhausting, the result wasn't just a faster process; it was a fundamentally better and different outcome. The LLMs uncovered nuanced patterns and edge cases within the data that traditional analysis would have missed.

The Counterintuitive Truth

Here lies the key to understanding the AI productivity paradox: the success in using AI was possible because leadership allowed for a full day dedicated to rethinking data processes with AI as a thought partner. This provided the space for deep, strategic thinking, exploring connections and possibilities that would typically take weeks. However, this quality-focused work is often sacrificed under the pressure to meet deadlines. Ironically, most people don't have time to figure out how they could save time.

This lack of dedicated time for exploration is a luxury many product managers (PMs) can't afford. Under constant pressure to deliver immediate results, many PMs don't have even an hour for strategic thinking. For many, the only way to carve out time for this work is by pretending to be sick. This continuous pressure also hinders AI adoption. Developing thorough testing plans or proactively addressing AI-related issues is viewed as a luxury, not a necessity. This creates a counterproductive dynamic: why use AI to spot issues in documentation if fixing them would delay launch? Why conduct further user research when the direction has already been set from above?

Charting a New Course: Investing in People

Providing employees time to "figure out AI" isn't enough; most need training to fully understand how to leverage ChatGPT beyond simple tasks like summarization. Yet the training required is often far less than what people expect.
While the market is flooded with AI training programs, many aren't suitable for most employees. These programs are often time-consuming, overly technical, and not tailored to specific job functions. The best results come from working closely with individuals for brief periods, 10 to 15 minutes, to audit their current workflows and identify areas where LLMs could streamline processes. Understanding the technical details behind token prediction isn't necessary to create effective prompts.

It's also a myth that AI adoption is only for those with technical backgrounds under 40. In fact, attention to detail and a passion for quality work are far better indicators of success. By setting aside biases, companies may discover hidden AI enthusiasts within their ranks. For example, a lawyer in his sixties, after just five minutes of explanation, grasped the potential of LLMs. By tailoring examples to his domain, the technology helped him draft a law review article he had been putting off for months.

It's likely that many companies already have AI enthusiasts: individuals who've taken the initiative to explore LLMs in their work. These "LLM whisperers" could come from any department: engineering, marketing, data science, product management, or customer service. By identifying these internal innovators, organizations can leverage their expertise. Once these experts are found, they can conduct "AI audits" of current workflows, identify areas for improvement, and provide starter prompts for specific use cases. These internal experts often better understand the company's systems and goals, making them more capable of spotting relevant opportunities.

Ensuring Time for Exploration

Beyond providing training, it's crucial that employees have the time to explore and experiment with AI tools. Companies can't simply tell their employees to innovate with AI while demanding that another month's worth of features be delivered by Friday at 5 p.m.
Ensuring teams have a few hours a month for exploration is essential for fostering true AI adoption. Once the initial hurdle of adoption is overcome, employees will be able to identify the most promising areas for AI investment. From there, organizations will be better positioned to assess the need for more specialized training.

Conclusion

The AI productivity paradox is not about the complexity of the technology but rather how organizations approach work and innovation. Harnessing AI's potential is simpler than "AI influencers" often suggest, requiring only


AI platform for automated task management

Salesforce Doubles Down on AI Innovation with Agentforce

Salesforce, renowned for its CRM software used by over 150,000 businesses, including Amazon and Walmart, continues to push the boundaries of innovation. Beyond its flagship CRM, Salesforce also owns Slack, the popular workplace communication app. Now, the company is taking its AI capabilities to the next level with Agentforce, a platform that empowers businesses to build and deploy AI-powered digital agents for automating tasks such as creating sales reports and summarizing Slack conversations.

What Problem Does Agentforce Solve?

Salesforce has been leveraging AI for years, starting with the launch of Einstein in 2016. Einstein's initial capabilities were limited to basic, scriptable tasks. However, the rise of generative AI created an opportunity to tackle more complex challenges, enabling tools to make smarter decisions and interpret natural language. This evolution led to a series of innovations, from Einstein GPT to Einstein Copilot and now Agentforce, a flexible platform offering prebuilt and customizable agents designed to meet diverse business needs.

"Our customers wanted more. Some wanted to tweak the agents we offer, while others wanted to create their own," said Tyler Carlson, Salesforce's VP of Business Development.

The Technology Behind Agentforce

Agentforce is powered by Salesforce's Atlas Reasoning Engine, developed in-house to drive smarter decision-making. The platform integrates with AI models from leading providers like OpenAI, Anthropic, Amazon, and Google, offering businesses a variety of tools to choose from. Slack, which Salesforce acquired in 2021, plays a pivotal role as a testing ground for these AI agents. Currently in beta, Agentforce's Slack integration allows businesses to implement automations directly where employees work, enhancing usability. "Slack makes these tools easy to use and accessible," Carlson noted.
How Agentforce Stands Out

Customizing AI for Business Needs

With tools like Agentbuilder, businesses can create AI agents tailored to specific tasks. For instance, an agent could prioritize and sort incoming emails, respond to HR inquiries, or handle customer support using internal data. One standout example is Salesforce’s partnership with Workday to develop an AI-powered service agent for employee questions.

Driving Results and Adoption

Salesforce has already seen promising results from early trials, with Agentforce resolving 90% of customer inquiries autonomously. The company aims to expand adoption and functionality, allowing these agents to handle even larger workloads. “We’re building a bigger ecosystem of partners and skills,” Carlson emphasized. “By next year, we want Agentforce to be a must-have for businesses.”

With Agentforce, Salesforce continues to cement its role as a leader in AI innovation, helping businesses work smarter, faster, and more effectively.

AI Agent Rivalry

Microsoft and Salesforce’s AI Agent Rivalry Heats Up

The battle for dominance in the AI agent space has escalated, with Salesforce CEO Marc Benioff intensifying his criticism of Microsoft’s AI solutions. Following remarks at Dreamforce 2024, Benioff took to X (formerly Twitter) to call out Microsoft for what he called “rebranding Copilot as ‘agents’ in panic mode.”

Benioff didn’t hold back, labeling Microsoft’s Copilot “a flop,” citing issues like data leaks, inaccuracies, and the need for customers to build their own large language models (LLMs). In contrast, he touted Salesforce’s Agentforce as a solution that autonomously drives sales, service, marketing, analytics, and commerce without the complications he attributes to Microsoft’s offerings.

Microsoft’s Copilot: A New UI for AI

Microsoft recently unveiled new autonomous agent capabilities for Copilot Studio and Dynamics 365, positioning these agents as tools to enhance productivity across teams and functions. CEO Satya Nadella described Copilot as “the UI for AI” and emphasized its flexibility, allowing businesses to create, manage, and integrate agents seamlessly. Despite the fanfare, Benioff dismissed Copilot’s updates, likening it to “Clippy 2.0” and claiming it fails to deliver accuracy or transformational impact.

Salesforce Expands Agentforce with Strategic Partnerships

At Dreamforce 2024, Salesforce unveiled its Agentforce Partner Network, a global ecosystem featuring collaborators like AWS, Google Cloud, IBM, and Workday. The move aims to bolster the capabilities of Agentforce, Salesforce’s AI-driven platform that delivers tailored, autonomous business solutions. Agentforce allows businesses to deploy customizable agents without complex coding.
With features like the Agent Builder, users can craft workflows and instructions in natural language, making the platform accessible to both technical and non-technical teams.

Flexibility and Customization: Salesforce vs. Microsoft

Both Salesforce and Microsoft emphasize AI’s transformative potential, but their approaches differ.

Generative AI vs. Predictive AI

Salesforce has doubled down on generative AI, with Einstein GPT producing personalized content using CRM data while also providing predictive analytics to forecast customer behavior and sales outcomes. Microsoft, on the other hand, combines generative and predictive AI across its ecosystem. Copilot not only generates content but also performs autonomous decision-making in Dynamics 365 and Azure, positioning itself as a comprehensive enterprise solution.

The Rise of Multi-Agent AI Systems

The competition between Microsoft and Salesforce reflects a broader trend in AI-driven automation. Companies like OpenAI are experimenting with frameworks like Swarm, which simplifies the creation of interconnected AI agents for tasks such as lead generation and marketing campaign development. Similarly, startups like DevRev are introducing conversational AI builders to design custom agents, offering enterprises up to 95% task accuracy without the need for coding.

What Lies Ahead in the AI Agent Landscape?

As Salesforce and Microsoft push the boundaries of AI integration, businesses are evaluating these tools for their flexibility, customization, and impact on operations. While Salesforce leads in CRM-focused AI, Microsoft’s integrated approach appeals to enterprises seeking cross-functional AI solutions. In the end, the winner may be determined not by flashy features but by delivering tangible, transformative outcomes for businesses navigating the complexities of AI adoption.

LLM Economies

Throughout history, disruptive technologies have been the catalyst for major social and economic revolutions. The invention of the plow and irrigation systems 12,000 years ago sparked the Agricultural Revolution, while Johannes Gutenberg’s 15th-century printing press fueled the Protestant Reformation and helped propel Europe out of the Middle Ages into the Renaissance. In the 18th century, James Watt’s steam engine ushered in the Industrial Revolution. More recently, the internet has revolutionized communication, commerce, and information access, shrinking the world into a global village. Similarly, smartphones have transformed how people interact with their surroundings.

Now we stand at the dawn of the AI revolution. Large Language Models (LLMs) represent a monumental leap forward, with significant economic implications at both macro and micro levels. These models are reshaping global markets, driving new forms of currency, and creating a novel economic landscape.

The reason LLMs are transforming industries and redefining economies is simple: they automate both routine and complex tasks that traditionally require human intelligence. They enhance decision-making processes, boost productivity, and facilitate cost reductions across various sectors. This enables organizations to allocate human resources toward more creative and strategic endeavors, resulting in the development of new products and services. From healthcare to finance to customer service, LLMs are creating new markets and driving AI-driven services like content generation and conversational assistants into the mainstream.

To truly grasp the engine driving this new global economy, it’s essential to understand the inner workings of this disruptive technology. These posts will provide both a macro-level overview of the economic forces at play and a deep dive into the technical mechanics of LLMs, equipping you with a comprehensive understanding of the revolution happening now.

Why Now?
The Connection Between Language and Human Intelligence

AI did not begin with ChatGPT’s arrival in November 2022. Developers were building machine learning classification models as far back as 1999, and the roots of AI go back even further. Artificial Intelligence was formally born in 1950, when Alan Turing, considered the father of theoretical computer science and famed for cracking the Nazi Enigma code during World War II, created the first formal definition of intelligence. This definition, known as the Turing Test, demonstrated the potential for machines to exhibit human-like intelligence through natural language conversations. The test involves a human evaluator who engages in conversations with both a human and a machine. If the evaluator cannot reliably distinguish between the two, the machine is considered to have passed the test. Remarkably, after 72 years of gradual AI development, ChatGPT simulated this very interaction, passing the Turing Test and igniting the current AI explosion.

But why is language so closely tied to human intelligence, rather than, for example, vision? Even though a large share of the brain’s processing is devoted to vision, OpenAI’s pioneering image generation model, DALL-E, did not trigger the same level of excitement as ChatGPT. The answer lies in the profound role language has played in human evolution.

The Evolution of Language

The development of language was the turning point in humanity’s rise to dominance on Earth. As Yuval Noah Harari points out in his book Sapiens: A Brief History of Humankind, it was the ability to gossip and discuss abstract concepts that set humans apart from other species. Complex communication, such as gossip, requires a shared, sophisticated language. Human language evolved from primitive cave signs to structured alphabets, which, along with grammar rules, created languages capable of expressing thousands of words.
In today’s digital age, language has further evolved with the inclusion of emojis, and now, with the advent of GenAI, tokens have become the latest cornerstone in this progression. These shifts highlight the extraordinary journey of human language, from simple symbols to intricate digital representations. In the next post, we will explore the intricacies of LLMs, focusing specifically on tokens. But before that, let’s delve into the economic forces shaping the LLM-driven world.

The Forces Shaping the LLM Economy

AI Giants in Competition

Karl Marx and Friedrich Engels argued that those who control the means of production hold power. The tech giants of today understand that AI is the future means of production, and the race to dominate the LLM market is well underway. This competition is fierce, with industry leaders like OpenAI, Google, Microsoft, and Facebook battling for supremacy. New challengers such as Mistral (France), AI21 (Israel), Elon Musk’s xAI, and Anthropic are also entering the fray. The LLM industry is expanding exponentially, with billions of dollars of investment pouring in. For example, Anthropic has raised $4.5 billion from 43 investors, including major players like Amazon, Google, and Microsoft.

The Scarcity of GPUs

Just as Bitcoin mining requires vast computational resources, training LLMs demands immense computing power, driving a search for new energy sources. Microsoft’s recent investment in nuclear energy underscores this urgency. At the heart of LLM technology are Graphics Processing Units (GPUs), essential for powering deep neural networks. These GPUs have become scarce and expensive, adding to the competitive tension.

Tokens: The New Currency of the LLM Economy

Tokens are the currency driving the emerging AI economy. Just as money facilitates transactions in traditional markets, tokens are the foundation of LLM economics. But what exactly are tokens? Tokens are the basic units of text that LLMs process.
They can be single characters, parts of words, or entire words. For example, the word “Oscar” might be split into two tokens, “os” and “car.” The performance of LLMs (quality, speed, and cost) hinges on how efficiently they generate these tokens. LLM providers price their services based on token usage, with different rates for input (prompt) and output (completion) tokens. As companies rely more on LLMs, especially for complex tasks like agentic applications, token usage will significantly impact operational costs.

With fierce competition and the rise of open-source models like Llama-3.1, the cost of tokens is rapidly decreasing. For instance, OpenAI reduced its GPT-4 pricing by about 80% over the past year and a half. This trend enables companies to expand their portfolio of AI-powered products, further fueling the LLM economy.
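The token-based pricing just described can be sketched with a few lines of arithmetic. The rates below are placeholders for illustration, not any provider's actual price list:

```python
# Hypothetical per-token prices, in USD per 1,000,000 tokens.
# Real rates vary by provider and model; output tokens typically cost more.
PRICES = {"input": 2.50, "output": 10.00}

def estimate_cost(input_tokens: int, output_tokens: int, prices=PRICES) -> float:
    """Estimate one request's cost from prompt and completion token counts."""
    return (input_tokens * prices["input"]
            + output_tokens * prices["output"]) / 1_000_000

# A 1,200-token prompt that yields a 300-token completion:
cost = estimate_cost(1_200, 300)
print(f"${cost:.6f}")  # → $0.006000 under the placeholder rates above
```

At scale the asymmetry matters: because completion tokens are billed at a higher rate, an agentic workload that generates long outputs can cost far more than one that mostly reads.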

RAGate

RAGate: Revolutionizing Conversational AI with Adaptive Retrieval-Augmented Generation

Building Conversational AI systems is challenging. It is feasible, but it is complex, resource-intensive, and time-consuming. The difficulty lies in creating systems that can not only understand and generate human-like responses but also adapt effectively to conversational nuances, ensuring meaningful engagement with users.

Retrieval-Augmented Generation (RAG) has already transformed Conversational AI by combining the internal knowledge of large language models (LLMs) with external knowledge sources. By leveraging RAG with business data, organizations empower their customers to ask natural language questions and receive insightful, data-driven answers.

The challenge? Not every query requires external knowledge. Over-reliance on external sources can disrupt conversational flow, much like consulting a book for every question during a conversation, even when internal knowledge is sufficient. Worse, if no external knowledge is available, the system may respond with “I don’t know,” despite having relevant internal knowledge to answer.

The solution? RAGate, an adaptive mechanism that dynamically determines when to use external knowledge and when to rely on internal insights. Developed by Xi Wang, Procheta Sen, Ruizhe Li, and Emine Yilmaz and introduced in their July 2024 paper on Adaptive Retrieval-Augmented Generation for Conversational Systems, RAGate addresses this balance with precision.

What Is Conversational AI?

At its core, conversation involves exchanging thoughts, emotions, and information, guided by tone, context, and subtle cues. Humans excel at this due to emotional intelligence, socialization, and cultural exposure. Conversational AI aims to replicate these human-like interactions by leveraging technology to generate natural, contextually appropriate, and engaging responses.
These systems adapt fluidly to user inputs, making the interaction dynamic, like conversing with a human.

Internal vs. External Knowledge in AI Systems

To understand RAGate’s value, we need to differentiate between two key concepts: internal knowledge, which the model acquired during training, and external knowledge, which is retrieved from outside sources at query time.

Limitations of Traditional RAG Systems

RAG integrates LLMs’ natural language capabilities with external knowledge retrieval, often guided by “guardrails” to ensure responsible, domain-specific responses. However, strict reliance on external knowledge can lead to disrupted conversational flow, added latency, and unnecessary “I don’t know” responses when no relevant external source is available.

How RAGate Enhances Conversational AI

RAGate, or Retrieval-Augmented Generation Gate, adapts dynamically to determine when external knowledge retrieval is necessary. It enhances response quality by intelligently balancing internal and external knowledge, ensuring conversational relevance and efficiency.

Traditional RAG vs. RAGate: An Example

Scenario: A healthcare chatbot offers advice based on general wellness principles and up-to-date medical research. This adaptive approach improves response accuracy, reduces latency, and enhances the overall conversational experience.

RAGate Variants

RAGate offers three implementation methods, each tailored to optimize performance:

– RAGate-Prompt: uses natural language prompts to decide when external augmentation is needed. Lightweight and simple to implement.
– RAGate-PEFT: employs parameter-efficient fine-tuning (e.g., QLoRA) for better decision-making. Fine-tunes the model with minimal resource requirements.
– RAGate-MHA: leverages multi-head attention to interactively assess context and retrieve external knowledge. Optimized for complex conversational scenarios.

How to Implement RAGate

Key Takeaways

RAGate represents a breakthrough in Conversational AI, delivering adaptive, contextually relevant, and efficient responses by balancing internal and external knowledge.
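The core gating idea can be sketched in a few lines. This is an illustrative simplification of the decision logic only; the paper realizes the gate with prompts, parameter-efficient fine-tuning, or multi-head attention, not a fixed threshold:

```python
def ragate(confidence: float, has_external: bool, threshold: float = 0.75) -> str:
    """Toy gate: answer from internal knowledge when the model is confident,
    retrieve external knowledge only when confidence is low and a source exists.
    The confidence score and 0.75 threshold are illustrative assumptions."""
    if confidence >= threshold:
        return "internal"      # confident: skip retrieval, keep the conversation flowing
    if has_external:
        return "retrieve"      # uncertain and a knowledge source is available
    return "internal"          # fall back to internal knowledge instead of "I don't know"

print(ragate(0.92, has_external=True))   # confident query → "internal"
print(ragate(0.30, has_external=True))   # uncertain query → "retrieve"
print(ragate(0.30, has_external=False))  # no source → still answer internally
```

The third branch captures RAGate's key improvement over strict RAG: when retrieval has nothing to offer, the system still tries its internal knowledge rather than refusing outright.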
Its potential spans industries like healthcare, education, finance, and customer support, enhancing decision-making and user engagement. By intelligently combining retrieval-augmented generation with nuanced adaptability, RAGate is set to redefine the way businesses and individuals interact with AI.

Salesforce AI Research Introduces LaTRO

Salesforce AI Research Introduces LaTRO: A Breakthrough in Enhancing Reasoning for Large Language Models

Large Language Models (LLMs) have revolutionized tasks such as answering questions, generating content, and assisting with workflows. However, they often struggle with advanced reasoning tasks like solving complex math problems, logical deduction, and structured data analysis. Salesforce AI Research has addressed this challenge by introducing LaTent Reasoning Optimization (LaTRO), a groundbreaking framework that enables LLMs to self-improve their reasoning capabilities during training.

The Need for Advanced Reasoning in LLMs

Reasoning, especially sequential, multi-step reasoning, is essential for tasks that require logical progression and problem-solving. While current models excel at simpler queries, they often fall short in tackling more complex tasks due to a reliance on external feedback mechanisms or runtime optimizations. Enhancing reasoning abilities is therefore critical to unlocking the full potential of LLMs across diverse applications, from advanced mathematics to real-time data analysis.

Existing techniques like Chain-of-Thought (CoT) prompting guide models to break problems into smaller steps, while methods such as Tree-of-Thought and Program-of-Thought explore multiple reasoning pathways. Although these techniques improve runtime performance, they don’t fundamentally enhance reasoning during the model’s training phase, limiting the scope of improvement.

LaTRO: A Self-Rewarding Framework

LaTRO shifts the paradigm by transforming reasoning into a training-level optimization problem. It introduces a self-rewarding mechanism that allows models to evaluate and refine their reasoning pathways without relying on external feedback or supervised fine-tuning. This intrinsic approach fosters continual improvement and empowers models to solve complex tasks more effectively.
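The self-rewarding idea can be sketched with a toy update rule. This is a deliberately simplified stand-in, assumed for illustration; LaTRO's actual method optimizes a variational objective over a latent distribution of reasoning paths, not a lookup table of weights:

```python
def self_reward_update(sample_path, path_reward, weights, lr=0.1, k=4):
    """One toy self-rewarding step: sample k reasoning paths, score each with
    the model's own reward (e.g. its likelihood of the gold answer given the
    path), and nudge path weights toward paths that beat the average reward.
    All names here are illustrative, not the paper's API."""
    paths = [sample_path() for _ in range(k)]
    rewards = [path_reward(p) for p in paths]
    baseline = sum(rewards) / k                      # average reward as a baseline
    for p, r in zip(paths, rewards):
        # Reinforce above-average paths, suppress below-average ones.
        weights[p] = weights.get(p, 0.0) + lr * (r - baseline)
    return weights

# Demo with a fixed sequence of sampled paths and fixed rewards:
candidates = iter(["path_a", "path_b", "path_a", "path_b"])
rewards = {"path_a": 1.0, "path_b": 0.0}
weights = self_reward_update(lambda: next(candidates), rewards.get, {})
# weights now favor path_a, whose reasoning earned the higher self-reward
```

The point of the sketch is the loop itself: no external reward model or human label appears anywhere; the model's own scoring of its sampled reasoning drives the update.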
How LaTRO Works

LaTRO’s methodology centers on sampling reasoning paths from a latent distribution and optimizing these paths using variational techniques. This self-rewarding cycle ensures that the model continuously refines its reasoning capabilities during training. Unlike traditional methods, LaTRO’s framework operates autonomously, without the need for external reward models or costly supervised feedback loops.

Key Benefits of LaTRO

Performance Highlights

LaTRO’s effectiveness has been validated across various datasets and models.

Applications and Implications

LaTRO’s ability to foster logical coherence and structured reasoning has far-reaching applications in fields requiring robust problem-solving. By enabling LLMs to autonomously refine their reasoning processes, LaTRO brings AI closer to achieving human-like cognitive abilities.

The Future of AI with LaTRO

LaTRO sets a new benchmark in AI research by demonstrating that reasoning can be optimized during training, not just at runtime. This advancement by Salesforce AI Research highlights the potential for self-evolving AI models that can independently improve their problem-solving capabilities. As the field of AI progresses, frameworks like LaTRO pave the way for more autonomous, intelligent systems capable of navigating complex reasoning tasks across industries. LaTRO represents a significant leap forward, moving AI closer to achieving true autonomous reasoning.

Healthcare Can Prioritize AI Governance

As artificial intelligence gains momentum in healthcare, it’s critical for health systems and related stakeholders to develop robust AI governance programs. AI’s potential to address challenges in administration, operations, and clinical care is drawing interest across the sector. As this technology evolves, the range of applications in healthcare will only broaden.

Where LLMs Fall Short

Large Language Models (LLMs) have transformed natural language processing, showcasing exceptional abilities in text generation, translation, and various language tasks. Models like GPT-4, BERT, and T5 are based on transformer architectures, which enable them to predict the next word in a sequence by training on vast text datasets.

How LLMs Function

LLMs process input text through multiple layers of attention mechanisms, capturing complex relationships between words and phrases. Here’s an overview of the process.

Tokenization and Embedding

Initially, the input text is broken down into smaller units, typically words or subwords, through tokenization. Each token is then converted into a numerical representation known as an embedding. For instance, the sentence “The cat sat on the mat” could be tokenized into [“The”, “cat”, “sat”, “on”, “the”, “mat”], each assigned a unique vector.

Multi-Layer Processing

The embedded tokens are passed through multiple transformer layers, each containing self-attention mechanisms and feed-forward neural networks.

Contextual Understanding

As the input progresses through layers, the model develops a deeper understanding of the text, capturing both local and global context, which lets it relate distant parts of the input to one another.

Training and Pattern Recognition

During training, LLMs are exposed to vast datasets, learning patterns related to grammar, syntax, and semantics.

Generating Responses

When generating text, the LLM predicts the next word or token based on its learned patterns. This process is iterative, where each generated token influences the next. For example, if prompted with “The Eiffel Tower is located in,” the model would likely generate “Paris,” given its learned associations between these terms.

Limitations in Reasoning and Planning

Despite their capabilities, LLMs face challenges in areas like reasoning and planning.
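Before turning to those limitations, the tokenization-and-embedding step described above can be sketched with a toy whitespace tokenizer. Real models use learned subword vocabularies (such as BPE) and dense vectors rather than bare integer ids; this is only the shape of the pipeline:

```python
def tokenize(text: str) -> list:
    """Toy whitespace tokenizer; production models split into subwords instead."""
    return text.split()

def embed(tokens: list, vocab: dict) -> list:
    """Map each token to an integer id, a stand-in for an embedding-table lookup."""
    return [vocab.setdefault(tok, len(vocab)) for tok in tokens]

vocab = {}
ids = embed(tokenize("The cat sat on the mat"), vocab)
# Each distinct token gets its own id; note "The" and "the" differ by case,
# so this toy scheme treats them as two separate tokens.
print(ids)  # → [0, 1, 2, 3, 4, 5]
```

In a real model, each id would index a row of a trained embedding matrix, and those vectors, not the ids, flow into the transformer layers.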
Research by Subbarao Kambhampati highlights several limitations.

Lack of Causal Understanding

LLMs struggle with causal reasoning, which is crucial for understanding how events and actions relate in the real world.

Difficulty with Multi-Step Planning

LLMs often struggle to break down tasks into a logical sequence of actions. Kambhampati’s research on the Blocksworld problem, which involves stacking and unstacking blocks, shows that LLMs like GPT-3 struggle with even simple planning tasks. When tested on 600 Blocksworld instances, GPT-3 solved only 12.5% of them using natural language prompts. Even after fine-tuning, the model solved only 20% of the instances, highlighting its reliance on pattern recognition rather than a true understanding of the planning task.

Temporal and Counterfactual Reasoning

LLMs also struggle with temporal reasoning (e.g., understanding the sequence of events) and counterfactual reasoning (e.g., constructing hypothetical scenarios).

Token and Numerical Errors

LLMs also exhibit errors in numerical reasoning due to inconsistencies in tokenization and their lack of true numerical understanding. Numbers are often tokenized inconsistently: “380” might be one token, while “381” might split into two tokens (“38” and “1”), leading to confusion in numerical interpretation. LLMs can likewise struggle with decimal comparisons. For example, comparing 9.9 and 9.11 may produce incorrect conclusions because the model processes these numbers as token strings rather than numerically.

Hallucinations and Biases

LLMs are prone to generating false or nonsensical content, known as hallucinations. This can happen when the model produces irrelevant or fabricated information.
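Returning to the numerical errors above: the 9.9 vs. 9.11 pitfall can be reconstructed with a deliberately buggy comparator. This is an illustrative model of the failure mode, an assumption about how the mistake arises, not how any LLM actually computes:

```python
def naive_decimal_compare(a: str, b: str) -> str:
    """Buggy comparison mimicking the failure mode: treat the integer and
    fractional parts as separate whole numbers, so '11' beats '9'."""
    a_int, a_frac = a.split(".")
    b_int, b_frac = b.split(".")
    if int(a_int) != int(b_int):
        return a if int(a_int) > int(b_int) else b
    # Bug: 11 > 9 as integers, even though 0.11 < 0.9 as fractions.
    return a if int(a_frac) > int(b_frac) else b

print(naive_decimal_compare("9.9", "9.11"))   # → 9.11  (wrong)
print(max(["9.9", "9.11"], key=float))        # → 9.9   (correct numeric comparison)
```

Converting to an actual number before comparing, as the second line does, is exactly the step a token-by-token pattern matcher has no guarantee of performing.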
Biases

LLMs can perpetuate biases present in their training data, which can lead to the generation of biased or stereotypical content.

Inconsistencies and Context Drift

LLMs often struggle to maintain consistency over long sequences of text or tasks. As the input grows, the model may prioritize more recent information, leading to contradictions or neglect of earlier context. This is particularly problematic in multi-turn conversations or tasks requiring persistence.

Conclusion

While LLMs have advanced the field of natural language processing, they still face significant challenges in reasoning, planning, and maintaining contextual accuracy. These limitations highlight the need for further research and development of hybrid AI systems that integrate LLMs with other techniques to improve reasoning, consistency, and overall performance.
