Cohere Archives - gettectonic.com



From AI Workflows to Autonomous Agents

From AI Workflows to Autonomous Agents: The Path to True AI Autonomy

Building functional AI agents is often portrayed as a straightforward task: chain a large language model (LLM) to some APIs, add memory, and declare autonomy. Yet anyone who has deployed such systems in production knows the reality: agents that perform well in controlled demos often falter in the real world, making poor decisions, entering infinite loops, or failing entirely when faced with unanticipated scenarios.

AI Workflows vs. AI Agents: Key Differences

The distinction between workflows and agents, as highlighted by Anthropic and LangGraph, is critical. Workflows dominate because they work reliably. But to achieve true agentic AI, the field must overcome fundamental challenges in reasoning, adaptability, and robustness.

The Evolution of AI Workflows

1. Prompt Chaining: Structured but Fragile
Breaking tasks into sequential subtasks improves accuracy by enforcing step-by-step validation. However, this approach introduces latency and cascading failures, and sometimes leads to verbose but incorrect reasoning.

2. Routing Frameworks: Efficiency with Blind Spots
Directing tasks to specialized models (e.g., math to a math-optimized LLM) enhances efficiency. Yet LLMs struggle with self-assessment; they often attempt tasks beyond their capabilities, leading to confident but incorrect outputs.

3. Parallel Processing: Speed at the Cost of Coherence
Running multiple subtasks simultaneously speeds up workflows, but merging conflicting results remains a challenge. Without robust synthesis mechanisms, parallelization can produce inconsistent or nonsensical outputs.

4. Orchestrator-Worker Models: Flexibility Within Limits
A central orchestrator delegates tasks to specialized components, enabling scalable multi-step problem-solving. However, the system remains bound by predefined logic; true adaptability is still missing.

5. Evaluator-Optimizer Loops: Limited by Feedback Quality
These loops refine performance based on evaluator feedback. But if the evaluation metric is flawed, optimization merely entrenches errors rather than correcting them.

The Four Pillars of True Autonomous Agents

For AI to move beyond workflows and achieve genuine autonomy, four critical challenges must be addressed:

1. Self-Awareness
Current agents lack the ability to recognize uncertainty, reassess faulty reasoning, or know when to halt execution. A functional agent must self-monitor and adapt in real time to avoid compounding errors.

2. Explainability
Workflows are debuggable because each step is predefined. Autonomous agents, however, require transparent decision-making; they should justify their reasoning at every stage, enabling developers to diagnose and correct failures.

3. Security
Granting agents API access introduces risks beyond content moderation. True agent security requires architectural safeguards that prevent harmful or unintended actions before execution.

4. Scalability
While workflows scale predictably, autonomous agents become unstable as complexity grows. Solving this demands more than bigger models; it requires agents that handle novel scenarios without breaking.

The Road Ahead: Beyond the Hype

Today’s “AI agents” are largely advanced workflows masquerading as autonomous systems. Real progress won’t come from larger LLMs or longer context windows, but from agents that can:
✔ Detect and correct their own mistakes
✔ Explain their reasoning transparently
✔ Operate securely in open environments
✔ Scale intelligently to unforeseen challenges

The shift from workflows to true agents is closer than it seems, but only if the focus remains on real decision-making, not just incremental automation improvements.

Related Posts
Salesforce OEM AppExchange: Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more
The Salesforce Story: In Marc Benioff’s own words. How did salesforce.com grow from a start-up in a rented apartment into the world’s… Read more
Salesforce Jigsaw: Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for… Read more
Service Cloud with AI-Driven Intelligence: Salesforce Enhances Service Cloud with AI-Driven Intelligence Engine. Data science and analytics are rapidly becoming standard features in enterprise applications… Read more
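The workflow patterns described above can be pictured in a few lines of Python. This is only a sketch of the control flow, not a production design: `call_llm` is a canned stand-in for a real model call, and the routes and quality metric are invented for illustration.

```python
# Toy sketches of three workflow patterns: prompt chaining, routing, and an
# evaluator-optimizer loop. call_llm is a canned stand-in for a real model call.

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM API call; returns canned text per task keyword."""
    if "outline" in prompt:
        return "1. intro 2. body 3. conclusion"
    if "math" in prompt:
        return "42"
    return "draft text"

# 1. Prompt chaining: each step consumes the previous step's output.
def chained_workflow(topic: str) -> str:
    outline = call_llm(f"Write an outline about {topic}")
    return call_llm(f"Expand this outline into a draft: {outline}")

# 2. Routing: send each task type to a specialized handler.
ROUTES = {
    "math": lambda q: call_llm("math: " + q),
    "writing": lambda q: call_llm("outline: " + q),
}

def route(task_type: str, query: str) -> str:
    return ROUTES[task_type](query)

# 3. Evaluator-optimizer loop: refine until the evaluator accepts or we give up.
def evaluate(text: str) -> bool:
    return len(text) > 5  # deliberately crude quality metric

def optimize(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(task)
    for _ in range(max_rounds):
        if evaluate(draft):
            break  # a flawed evaluate() would entrench errors here, as noted above
        draft = call_llm(f"Improve: {draft}")
    return draft
```

Note how each pattern keeps the model on rails with predefined logic; none of them lets the system decide for itself what to do next, which is precisely the gap between workflows and agents.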


Exploring 3 Types of Natural Language Processing in Healthcare

Healthcare generates vast amounts of unstructured, text-based data, primarily in the form of clinical notes stored in electronic health records (EHRs). While this data holds immense potential for improving patient outcomes, extracting meaningful insights from it remains a challenge. Natural language processing (NLP) offers a solution by enabling healthcare stakeholders to analyze and interpret this data efficiently. NLP technologies can support population health management, clinical decision-making, and medical research by transforming unstructured text into actionable insights.

Despite the excitement around NLP in healthcare, particularly amid clinician burnout and EHR inefficiencies, its two core components, natural language understanding (NLU) and natural language generation (NLG), receive less attention. This insight explores NLP, NLU, and NLG, highlighting their differences and healthcare applications.

Understanding NLP, NLU, and NLG

While related, these three concepts serve distinct purposes:

Healthcare Applications

NLP technologies offer diverse benefits across clinical, administrative, and research settings:

1. NLP in Clinical and Operational Use Cases
Real-World Examples:

2. NLU for Research & Chatbots
While less widely adopted than NLP, NLU shows promise in:

3. NLG for Generative AI in Healthcare

Challenges & Barriers to Adoption

Despite their potential, NLP technologies face several hurdles:

1. Data Quality & Accessibility
2. Bias & Fairness Concerns
3. Regulatory & Privacy Issues
4. Performance & Clinical Relevance

The Future of NLP in Healthcare

Despite these challenges, NLP, NLU, and NLG hold tremendous potential to revolutionize healthcare by:
✔ Enhancing clinical decision-making
✔ Streamlining administrative workflows
✔ Accelerating medical research

As the technology matures, addressing data, bias, and regulatory concerns will be key to unlocking its full impact.
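To make the NLP/NLU/NLG split concrete, here is a deliberately toy sketch on an invented clinical note: sentence-level processing (NLP), interpretation of a medication mention with negation awareness (NLU), and regeneration of a text summary from the structured result (NLG). Real healthcare systems use trained clinical language models; the note text and regex patterns below are assumptions made purely for the example.

```python
import re

# Invented clinical note used only for this illustration.
NOTE = "Pt reports chest pain. Started metformin 500 mg daily. Denies fever."

# NLP: surface processing -- split the note into sentences.
def split_sentences(text: str) -> list:
    return [s.strip() for s in re.split(r"\.\s*", text) if s]

# NLU: interpretation -- pull out a (drug, dose) mention.
def extract_medication(sentence: str):
    m = re.search(r"(metformin)\s+(\d+\s*mg)", sentence, re.IGNORECASE)
    return (m.group(1), m.group(2)) if m else None

def is_negated(sentence: str) -> bool:
    # Toy negation check: sentences like "Denies fever" are not findings.
    return bool(re.match(r"(denies|no)\b", sentence, re.IGNORECASE))

# NLG: generation -- turn structured findings back into text.
def summarize(meds: list) -> str:
    items = ", ".join(f"{drug} {dose}" for drug, dose in meds)
    return f"Active medications: {items}." if meds else "No medications recorded."

sentences = split_sentences(NOTE)
meds = [m for s in sentences
        if not is_negated(s) and (m := extract_medication(s))]
print(summarize(meds))  # Active medications: metformin 500 mg.
```

The point of the toy is the division of labor: NLP handles form, NLU assigns meaning (including what is explicitly denied), and NLG produces new text from structured data.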


Salesforce Launches Nature-Focused AI Accelerator to Power Environmental Nonprofits

San Francisco, April 2025 – Salesforce has unveiled a groundbreaking nature-focused AI accelerator, empowering mission-driven organizations to scale their impact in forest conservation, regenerative agriculture, water access, and corporate sustainability. The new initiative, part of Salesforce’s Agents for Impact program, leverages agentic AI, an advanced form of autonomous artificial intelligence capable of independent decision-making, learning, and real-time action, to help nonprofits overcome resource barriers and amplify their environmental efforts.

Why AI for Nature?

Climate change and biodiversity loss demand urgent, scalable solutions. Yet many nonprofits struggle with limited staffing, funding, and technical expertise. According to Salesforce research:

“To fully harness nature’s potential for global resilience, we need innovation that matches the scale of the challenge,” says Sunya Norman, SVP of Impact at Salesforce. “Agentic AI enables nonprofits to achieve more with fewer resources, transforming how we protect and restore our planet.”

Meet the AI-Powered Nonprofits

Salesforce’s accelerator supports five organizations deploying AI for measurable environmental impact:
🌳 Forest Stewardship Council (FSC)
🌾 Rare
💧 Global Water Center
⚖️ Fair Trade USA
🏢 Ceres

Beyond the Accelerator: Salesforce’s Broader Sustainability Push

The Agents for Impact program aligns with Salesforce’s commitment to responsible AI development. Recently, the company:

“Transparency, like the AI Energy Score, is critical,” says Ariane Thomas, Global Tech Director of Sustainability at L’Oréal. “By sharing energy data, we can collectively reduce AI’s environmental footprint.”

The Future of AI for Good

This accelerator marks a major leap in using AI to protect ecosystems, support farmers, and drive corporate sustainability. With Salesforce’s support, nonprofits can now scale their impact like never before, proving that technology and nature can work hand in hand.
Ready to see AI drive real environmental change? Learn more about Salesforce’s Agents for Impact program.


AI vs Human Intelligence

Artificial Intelligence vs. Human Intelligence: Key Differences Explained

Artificial intelligence (AI) often mimics human-like capabilities, but there are fundamental differences between natural human intelligence and artificial systems. While AI has made remarkable strides in replicating certain aspects of human cognition, it operates in ways that are distinct from how humans think, learn, and solve problems. Below, we explore three key areas where AI and human intelligence diverge.

Defining Intelligence

Human Intelligence
Human intelligence is often described using terms like smartness, understanding, brainpower, reasoning, sharpness, and wisdom. These concepts reflect the complexity of human cognition, which has been debated for thousands of years. At its core, human intelligence is a biopsychological capacity to acquire, apply, and adapt knowledge and skills. It encompasses not only logical reasoning but also emotional understanding, creativity, and social interaction.

Artificial Intelligence
AI refers to machines designed to perform tasks traditionally associated with human intelligence, such as learning, problem-solving, and decision-making. Over the past few decades, AI has advanced rapidly, particularly in areas like machine learning and generative AI. However, AI lacks the depth and breadth of human intelligence, operating instead through algorithms and data processing.

Human Intelligence: What Humans Do Better

Humans excel in areas that require empathy, judgment, intuition, and creativity. These qualities are deeply rooted in our evolution as social beings. For example:

These capabilities make human intelligence uniquely suited for tasks that involve emotional connection, ethical decision-making, and creative thinking.
Artificial Intelligence: What AI Does Better

AI outperforms humans in several areas, particularly those involving data processing, pattern recognition, and speed:

However, AI’s strengths are limited to the data it is trained on and the algorithms it uses, lacking the adaptability and contextual understanding of human intelligence.

3 Key Differences Between AI and Human Intelligence

AI and Human Intelligence: Working Together

The future lies in human-AI collaboration, where the strengths of both are leveraged to address complex challenges. For example:

While some may find the idea of integrating AI into decision-making unsettling, the scale of global challenges, from climate change to healthcare, demands the combined power of human and artificial intelligence. By working together, humans and AI can amplify each other’s strengths while mitigating weaknesses.

Conclusion

AI and human intelligence are fundamentally different, each excelling in areas where the other falls short. Human intelligence is unparalleled in creativity, empathy, and ethical reasoning, while AI dominates in data processing, pattern recognition, and speed. The key to unlocking the full potential of AI lies in human-AI collaboration, where the unique strengths of both are harnessed to solve the world’s most pressing problems. As we move forward, this partnership will likely become not just beneficial but essential.


The Rise of AI Agents: 2024 and Beyond

In 2024, we witnessed major breakthroughs in AI agents. OpenAI’s o1 and o3 models demonstrated the ability to deconstruct complex tasks, while Claude 3.5 showcased AI’s capacity to interact with computers like humans, navigating interfaces and running software. These advancements, alongside improvements in memory and learning systems, are pushing AI beyond simple chat interactions into the realm of autonomous systems.

AI agents are already making an impact in specialized fields, including legal analysis, scientific research, and technical support. While they excel in structured environments with defined rules, they still struggle with unpredictable scenarios and open-ended challenges. Their success rates drop significantly when handling exceptions or adapting to dynamic conditions.

The field is evolving from conversational AI to intelligent systems capable of reasoning and independent action. Each step forward demands greater computational power and introduces new technical challenges. This article explores how AI agents function, their current capabilities, and the infrastructure required to ensure their reliability.

What Is an AI Agent?

An AI agent is a system designed to reason through problems, plan solutions, and execute tasks using external tools. Unlike traditional AI models that simply respond to prompts, agents possess:

Understanding the shift from passive responders to autonomous agents is key to grasping the opportunities and challenges ahead. Let’s explore the breakthroughs that have fueled this transformation.

2024’s Key Breakthroughs

[Image: OpenAI o3’s high score on the ARC-AGI benchmark]

Three pivotal advancements in 2024 set the stage for autonomous AI agents:

AI Agents in Action

These capabilities are already yielding practical applications. As Reid Hoffman observed, we are seeing the emergence of specialized AI agents that extend human capabilities across various industries:

Recent research from Sierra highlights the rapid maturation of these systems.
AI agents are transitioning from experimental prototypes to real-world deployment, capable of handling complex business rules while engaging in natural conversations.

The Road Ahead: Key Questions

As AI agents continue to evolve, three critical questions emerge:

The next wave of AI innovation will be defined by how well we address these challenges. By building robust systems that balance autonomy with oversight, we can unlock the full potential of AI agents in the years ahead.
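The reason-plan-execute loop behind an agent, as described above, can be sketched minimally. The planner below is a hard-coded stand-in for an LLM, and the tool names and order-lookup scenario are invented for illustration; note the bounded loop, which guards against the infinite cycling that trips up real agents.

```python
# Minimal sketch of an agent's reason -> act -> observe loop.
# The planner and tools are invented stand-ins, not a real framework.

def search_orders(order_id: str) -> str:
    return f"order {order_id}: shipped"  # stub for an external tool call

def calculator(expr: str) -> str:
    # Toy arithmetic tool; never eval untrusted input in real systems.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"search_orders": search_orders, "calculator": calculator}

def plan(goal: str, observations: list):
    """Stand-in planner: a real agent would ask an LLM which tool to call next."""
    if not observations:
        return ("search_orders", "A-17")
    return None  # enough information gathered -> stop

def run_agent(goal: str, max_steps: int = 5) -> list:
    observations = []
    for _ in range(max_steps):  # bounded loop guards against infinite cycling
        step = plan(goal, observations)
        if step is None:
            break
        tool, arg = step
        observations.append(TOOLS[tool](arg))  # act, then observe the result
    return observations

print(run_agent("Where is order A-17?"))  # ['order A-17: shipped']
```

The contrast with the workflow patterns earlier in this archive is that here the planner, not predefined logic, chooses the next action at each step; in practice that planner is the LLM, which is exactly where reliability problems enter.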


Salesforce Service Assistant

Salesforce Service Assistant is an AI-powered tool that helps service representatives resolve cases faster. It’s available on Service Cloud and is designed to save time for agents.

Benefits

The assistant helps agents resolve cases faster, saves time for service representatives, is grounded in the organization’s knowledge base and data, and adheres to company policies.

Additional Information

Alongside agent guidance, the Service Assistant provides two other notable features. The first enables agents to create conversation summaries with “just a click” after using the solution to complete a case. The second allows agents to request that the assistant auto-craft a new knowledge article when its guidance proved insufficient, based on how they resolved the query. Thanks to this second feature, the Service Assistant may get better with time, aiding agent proficiency, customer satisfaction, and, ultimately, average handling time (AHT).

Despite this capability, Salesforce has pledged to advance the solution further. During a recent webinar, Kevin Qi, Associate Product Manager at Salesforce, teased what will come in June. Pointing to Service Cloud’s Summer ‘25 release wave, Qi said:

“The next phase of Service Assistant involves actionable plans. So, not only will it help guide the service rep, but it’ll also take actions to automate various steps, so it can look up orders, check eligibilities, and more to help speed up the efficiency of tackling that case.”

Beyond the summer, Salesforce plans to have the Assistant blend modalities, guiding customer conversations across channels to further streamline the interaction. “The Service Assistant will become even more adaptive, support more channels, including messaging and voice, being able to adapt to changes in case context,” concluded Qi.

The Latest AI Solutions on Service Cloud

Alongside the Service Assistant, Salesforce has released several other AI and Agentforce capabilities embedded across Service Cloud.
Qi singled out the “Freeform Instructions in Service Email Assistant” feature for special mention. “If the agent doesn’t have a template already made for a particular instance, they can type, in natural language, the sort of email they’d want to generate and have Agentforce create that email in the flow of work,” he said.

That capability may prove highly beneficial in helping agents piece their thoughts together when resolving a tricky case. After all, they can note some key points in natural language, and the feature will create a coherent customer response. Alongside this comes a solution, now in beta, to quickly summarize case activity for wrap-up.

Yet most new features focus on improving the knowledge that feeds into AI solutions like the Service Assistant. For starters, there’s a flow orchestrator in beta that helps contact center leaders build a process for approving new knowledge articles and updates. Additionally, there’s an “Update Knowledge Content with AI” feature. This ingests prompts and, as it says on the tin, updates the tone, style, and length of particular knowledge articles.

Last comes the “Knowledge Sync to Data Cloud” tool, which pulls contact center knowledge into the Salesforce customer data platform (CDP). Not only does this democratize service insights, but it also supports contact centers in grounding the Service Assistant and other AI agents. Both of these final knowledge capabilities are now generally available.
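To picture the “actionable plans” idea Qi describes, here is a hypothetical sketch of an assistant executing a sequence of case-resolution steps. The step names (`look_up_order`, `check_eligibility`) and case fields are invented stand-ins for illustration, not Salesforce APIs.

```python
# Hypothetical sketch of an assistant running an "actionable plan": a fixed
# sequence of steps that each enrich a shared case record. Step names and
# fields are invented for illustration.

def look_up_order(case: dict) -> None:
    case["order_status"] = "delivered"  # stub: would query an order system

def check_eligibility(case: dict) -> None:
    # Stub policy check: refunds only apply to delivered orders.
    case["refund_eligible"] = case.get("order_status") == "delivered"

PLAN = [look_up_order, check_eligibility]

def run_plan(case: dict) -> dict:
    for step in PLAN:
        step(case)  # each step reads and extends the case record
    return case

resolved = run_plan({"id": "00123"})
print(resolved["refund_eligible"])  # True
```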


Introducing TACO

Advancing Multi-Modal AI with TACO: A Breakthrough in Reasoning and Tool Integration

Developing effective multi-modal AI systems for real-world applications demands mastering diverse tasks, including fine-grained recognition, visual grounding, reasoning, and multi-step problem-solving. However, current open-source multi-modal models fall short in these areas, especially when tasks require external tools like OCR or mathematical calculations. These limitations largely stem from the reliance on single-step datasets that fail to provide a coherent framework for multi-step reasoning and logical action chains. Addressing these shortcomings is crucial for unlocking multi-modal AI’s full potential in tackling complex challenges.

Challenges in Existing Multi-Modal Models

Most existing multi-modal models rely on instruction tuning with direct-answer datasets or few-shot prompting approaches. Proprietary systems like GPT-4 have demonstrated the ability to navigate CoTA (chains of thought and actions) reasoning effectively, but open-source models struggle due to limited datasets and tool integration. Earlier efforts, such as LLaVa-Plus and Visual Program Distillation, faced barriers like small dataset sizes, poor-quality training data, and a narrow focus on simple question-answering tasks. These limitations hinder their ability to address complex, multi-modal challenges requiring advanced reasoning and tool application.

Introducing TACO: A Multi-Modal Action Framework

Researchers from the University of Washington and Salesforce Research have introduced TACO (Training Action Chains Optimally), an innovative framework that redefines multi-modal learning by addressing these challenges.
TACO introduces several advancements that establish a new benchmark for multi-modal AI performance:

Training and Architecture

TACO’s training process used a carefully curated CoTA dataset of 293K instances drawn from 31 sources, including Visual Genome, covering a diverse range of tasks such as mathematical reasoning, OCR, and visual understanding. The system employs:

Benchmark Performance

TACO demonstrated significant performance improvements across eight benchmarks, achieving an average accuracy increase of 3.6% over instruction-tuned baselines and gains as high as 15% on MMVet tasks involving OCR and mathematical reasoning. Key findings include:

Transforming Multi-Modal AI Applications

TACO represents a transformative step in multi-modal action modeling by addressing critical deficiencies in reasoning and tool-based actions. Its approach leverages high-quality synthetic datasets and advanced training methodologies to unlock the potential of multi-modal AI in real-world applications, from visual question answering to complex multi-step reasoning tasks. By bridging the gap between reasoning and action integration, TACO paves the way for AI systems capable of tackling intricate scenarios with greater accuracy and efficiency.
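A chain of thought and actions of the kind TACO trains on can be pictured as alternating reasoning steps and tool calls. The sketch below is illustrative only: the tools are stubs (the “OCR” returns a canned string), and the trace format is an assumption for the example, not TACO’s actual data schema.

```python
# Illustrative CoTA (chain of thought and actions) trace: each entry pairs a
# reasoning step with a tool call. Tools are stubs for the example.

def ocr(image_id: str) -> str:
    return "TOTAL: 12.50"  # stub: a real system would run OCR on the image

def calculate(expr: str) -> str:
    return str(eval(expr, {"__builtins__": {}}))  # toy arithmetic tool

TOOLS = {"ocr": ocr, "calculate": calculate}

# A chain: (thought, (tool, argument)) pairs, executed in order.
chain = [
    ("Read the total from the receipt image first.", ("ocr", "img_001")),
    ("Double the total to cover two identical receipts.", ("calculate", "12.50 * 2")),
]

def run_chain(chain: list) -> list:
    observations = []
    for thought, (tool, arg) in chain:
        observations.append(TOOLS[tool](arg))  # each action yields an observation
    return observations

print(run_chain(chain))  # ['TOTAL: 12.50', '25.0']
```

The contrast with single-step, direct-answer data is visible even in this toy: the second step only makes sense given the first step’s observation, which is exactly the multi-step structure the article says single-step datasets fail to teach.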


Salesforce’s AI Energy Score

Salesforce’s AI Energy Score: Setting a New Standard for AI Sustainability

Understanding AI’s Environmental Impact

As AI adoption accelerates globally, concerns about its environmental footprint have grown. Because AI relies on power-intensive data centers, the technology consumes vast amounts of energy and water, raising sustainability challenges. To address this, Salesforce, in collaboration with Hugging Face, Cohere, and Carnegie Mellon University, has introduced the AI Energy Score, a pioneering tool designed to measure and compare AI models’ energy efficiency.

The AI Energy Score Launch

The AI Energy Score will debut at the AI Action Summit on February 10, 2025, where leaders from over 100 countries, along with private sector and civil society representatives, will convene to discuss AI’s role in sustainability. Recognized by the French Government and the Paris Peace Forum, this initiative marks a significant step toward transparent and accountable AI development.

“We are at a critical moment where the rapid acceleration of both the climate crisis and AI innovation intersect,” says Boris Gamazaychikov, Head of AI Sustainability at Salesforce. “AI’s environmental impact has remained largely opaque, with little transparency around its energy consumption. The AI Energy Score provides a standardized framework to disclose and compare these impacts, removing a key blocker to making sustainable AI the norm.”

What Is the AI Energy Score?

Developed in partnership with Hugging Face, Cohere, and Carnegie Mellon University, the AI Energy Score aims to establish clear and standardized energy consumption metrics for AI models. “The AI Energy Score is a major milestone for sustainable AI,” says Dr. Sasha Luccioni, AI & Climate Lead at Hugging Face. “By creating a transparent rating system, we address a key blocker for reducing AI’s environmental impact.
We’re excited to launch this initiative and drive industry-wide adoption.”

Key features of the AI Energy Score include:
✅ Standardized energy ratings: a framework for evaluating AI models’ energy efficiency
✅ Public leaderboard: a ranking of 200+ AI models across 10 common tasks (e.g., text and image generation)
✅ Benchmarking portal: a platform for submitting and assessing AI models, both open and proprietary
✅ Recognizable energy use label: a 1–5 star system for easy identification of energy-efficient models
✅ Label generator: a tool for AI developers to create and share standardized energy labels

The Impact of the AI Energy Score

The introduction of this score is expected to have far-reaching implications for the AI industry:
🔹 Driving market preference: transparency will push demand for more energy-efficient AI models
🔹 Incentivizing sustainable development: public disclosure will encourage AI developers to prioritize efficiency
🔹 Empowering informed decisions: AI users and businesses can make better choices based on energy efficiency data

Salesforce’s Commitment to Sustainable AI

Salesforce is leading by example, becoming the first AI model developer to disclose energy efficiency data for its proprietary models under this framework. This aligns with the company’s broader sustainability goals and ethical AI approach.

Agentforce: AI Efficiency at Scale

Salesforce’s Agentforce platform, introduced in 2024, is designed to deploy autonomous AI agents across business functions while maintaining energy efficiency.
“Agentforce is built with sustainability at its core, delivering high performance while minimizing environmental impact,” explains Boris Gamazaychikov. “Unlike DIY AI approaches that require energy-intensive model training for each customer, Agentforce is optimized out of the box, reducing costly and carbon-heavy training.”

Organizations are already leveraging Agentforce for impact-driven efficiencies:
✅ Good360 uses Agentforce to allocate donated goods more efficiently, cutting waste and emissions while saving 1,000+ employee hours annually
✅ Businesses can reduce operational costs by optimizing AI model energy consumption

“Reducing AI energy use isn’t just good for the environment. It lowers costs, optimizes infrastructure, and improves long-term profitability,” says Suzanne DiBianca, EVP & Chief Impact Officer at Salesforce. “We’re proud to work with industry leaders to build a more transparent AI ecosystem.”

Addressing the AI Energy Challenge

With AI-driven data center power usage projected to double by 2026, the AI Energy Score is a timely solution to help organizations manage and reduce their AI-related environmental impact. “The AI Energy Score isn’t just an energy-use metric; it’s a strategic business advantage,” adds Boris Gamazaychikov. “By helping organizations assess and optimize AI model energy consumption, it supports lower costs, better infrastructure efficiency, and long-term profitability.”

As AI continues to evolve, sustainability must be part of the equation. The AI Energy Score is a major step in ensuring that the AI industry moves toward a more responsible, energy-efficient future.
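A 1–5 star energy label of the kind described above could, in principle, be derived from measured energy per unit of work. The sketch below illustrates the shape of such a mapping; the cutoff values are invented for the example, and the real AI Energy Score defines its own methodology and thresholds.

```python
# Illustrative mapping from measured energy (Wh per 1,000 queries) to a
# 1-5 star label. The cutoffs are hypothetical, not the AI Energy Score's.

def star_rating(wh_per_1000_queries: float) -> int:
    # Lower energy -> more stars; thresholds invented for illustration.
    cutoffs = [(10, 5), (50, 4), (200, 3), (1000, 2)]
    for limit, stars in cutoffs:
        if wh_per_1000_queries <= limit:
            return stars
    return 1  # anything above the last cutoff gets the lowest rating

for model, wh in [("small-model", 8.0), ("mid-model", 120.0), ("large-model", 2500.0)]:
    print(model, "★" * star_rating(wh))
```

The design point such a label encodes is comparability: once every model reports the same metric under the same benchmark conditions, a single scalar (or star count) is enough for buyers to prefer efficient models.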

Generative AI Energy Consumption Rises

Generative AI Tools

Generative AI Tools: A Comprehensive Overview of Emerging Capabilities

The widespread adoption of generative AI services like ChatGPT has sparked immense interest in leveraging these tools for practical enterprise applications. Today, nearly every enterprise app integrates generative AI capabilities to enhance functionality and efficiency. A broad range of AI, data science, and machine learning tools now supports generative AI use cases. These tools assist in managing the AI lifecycle, governing data, and addressing security and privacy concerns. While such capabilities also aid traditional AI development, this discussion focuses on tools specifically designed for generative AI.

Not all generative AI relies on large language models (LLMs). Emerging techniques generate images, videos, audio, synthetic data, and translations using methods such as generative adversarial networks (GANs), diffusion models, variational autoencoders, and multimodal approaches.

Here is an in-depth look at the top categories of generative AI tools, their capabilities, and notable implementations. Many leading vendors are expanding their offerings to support multiple categories through acquisitions or integrated platforms, so enterprises may want to explore comprehensive platforms when planning their generative AI strategies.

1. Foundation Models and Services: Generative AI tools increasingly simplify the development and responsible use of LLMs, initially pioneered through transformer-based approaches by Google researchers in 2017.
2. Cloud Generative AI Platforms: Major cloud providers offer generative AI platforms to streamline development and deployment.
3. Use Case Optimization Tools: Foundation models often require optimization for specific tasks.
4. Quality Assurance and Hallucination Mitigation: Hallucination detection tools address the tendency of generative models to produce inaccurate or misleading information.
5. Prompt Engineering Tools: Prompt engineering tools optimize interactions with LLMs and streamline testing for bias, toxicity, and accuracy.
6. Data Aggregation Tools: Generative AI tools have evolved to handle larger data contexts efficiently.
7. Agentic and Autonomous AI Tools: Developers are creating tools to automate interactions across foundation models and services, paving the way for autonomous AI.
8. Generative AI Cost Optimization Tools: These tools aim to balance performance, accuracy, and cost effectively. Martian’s Model Router is an early example, while traditional cloud cost optimization platforms are expected to expand into this area.

Generative AI tools are rapidly transforming enterprise applications, with foundational, cloud-based, and domain-specific solutions leading the way. By addressing challenges like accuracy, hallucination, and cost, these tools unlock new potential across industries and use cases, enabling enterprises to stay ahead in the AI-driven landscape.

Related Posts
Salesforce OEM AppExchange: Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more
The Salesforce Story: In Marc Benioff’s own words, how did salesforce.com grow from a start-up in a rented apartment into the world’s … Read more
Salesforce Jigsaw: Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for … Read more
Service Cloud with AI-Driven Intelligence: Salesforce Enhances Service Cloud with AI-Driven Intelligence Engine. Data science and analytics are rapidly becoming standard features in enterprise applications, … Read more

Salesforce AI Research Introduces LaTRO


Salesforce AI Research Introduces LaTRO: A Breakthrough in Enhancing Reasoning for Large Language Models

Large Language Models (LLMs) have revolutionized tasks such as answering questions, generating content, and assisting with workflows. However, they often struggle with advanced reasoning tasks like solving complex math problems, logical deduction, and structured data analysis. Salesforce AI Research has addressed this challenge by introducing LaTent Reasoning Optimization (LaTRO), a groundbreaking framework that enables LLMs to self-improve their reasoning capabilities during training.

The Need for Advanced Reasoning in LLMs
Reasoning—especially sequential, multi-step reasoning—is essential for tasks that require logical progression and problem-solving. While current models excel at simpler queries, they often fall short on more complex tasks due to a reliance on external feedback mechanisms or runtime optimizations. Enhancing reasoning abilities is therefore critical to unlocking the full potential of LLMs across diverse applications, from advanced mathematics to real-time data analysis. Existing techniques like Chain-of-Thought (CoT) prompting guide models to break problems into smaller steps, while methods such as Tree-of-Thought and Program-of-Thought explore multiple reasoning pathways. Although these techniques improve runtime performance, they don’t fundamentally enhance reasoning during the model’s training phase, which limits the scope of improvement.

LaTRO: A Self-Rewarding Framework
LaTRO shifts the paradigm by transforming reasoning into a training-level optimization problem. It introduces a self-rewarding mechanism that allows models to evaluate and refine their reasoning pathways without relying on external feedback or supervised fine-tuning. This intrinsic approach fosters continual improvement and empowers models to solve complex tasks more effectively.
How LaTRO Works
LaTRO’s methodology centers on sampling reasoning paths from a latent distribution and optimizing those paths using variational techniques. This self-rewarding cycle ensures that the model continuously refines its reasoning capabilities during training. Unlike traditional methods, LaTRO operates autonomously, without the need for external reward models or costly supervised feedback loops.

Key Benefits of LaTRO

Performance Highlights
LaTRO’s effectiveness has been validated across various datasets and models.

Applications and Implications
LaTRO’s ability to foster logical coherence and structured reasoning has far-reaching applications in fields requiring robust problem-solving. By enabling LLMs to autonomously refine their reasoning processes, LaTRO brings AI closer to achieving human-like cognitive abilities.

The Future of AI with LaTRO
LaTRO sets a new benchmark in AI research by demonstrating that reasoning can be optimized during training, not just at runtime. This advancement by Salesforce AI Research highlights the potential for self-evolving AI models that can independently improve their problem-solving capabilities. As the field of AI progresses, frameworks like LaTRO pave the way for more autonomous, intelligent systems capable of navigating complex reasoning tasks across industries. LaTRO represents a significant leap forward, moving AI closer to achieving true autonomous reasoning.
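The post does not include an implementation, so the self-rewarding loop can only be illustrated with a toy stand-in (every name and number below is hypothetical, not from Salesforce's code): the "model" holds a distribution over candidate reasoning paths, samples a path, scores it by its own likelihood of reaching the correct answer given that path, and shifts probability toward higher-scoring paths, with no external reward model in the loop.

```python
import math
import random

# Toy sketch of a self-rewarding training loop in the spirit of LaTRO.
# The "model" is just a softmax over two candidate reasoning paths; the
# reward for a path is the model's own likelihood of the correct answer
# given that path -- no external reward model is consulted.

ANSWER_LIKELIHOOD = {"step-by-step": 0.9, "guess": 0.3}  # P(correct | path), made up

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train(steps=200, lr=0.5, seed=0):
    rng = random.Random(seed)
    paths = list(ANSWER_LIKELIHOOD)
    logits = [0.0, 0.0]  # uniform prior over reasoning paths
    for _ in range(steps):
        probs = softmax(logits)
        i = rng.choices(range(len(paths)), probs)[0]   # sample a reasoning path
        reward = ANSWER_LIKELIHOOD[paths[i]]           # self-assigned reward
        baseline = sum(p * ANSWER_LIKELIHOOD[q] for p, q in zip(probs, paths))
        # REINFORCE-style update: push probability toward higher-reward paths
        for j in range(len(logits)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * (reward - baseline) * grad
    return softmax(logits)

probs = train()
# After training, the model should strongly prefer the higher-reward path.
```

The key property the sketch captures is that the reward signal comes from the model's own answer likelihood rather than a separate supervised or reward model; the actual LaTRO framework formulates this as variational optimization over latent reasoning paths.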

Where LLMs Fall Short


Large Language Models (LLMs) have transformed natural language processing, showcasing exceptional abilities in text generation, translation, and various language tasks. Models like GPT-4, BERT, and T5 are based on transformer architectures, which enable them to predict the next word in a sequence by training on vast text datasets.

How LLMs Function
LLMs process input text through multiple layers of attention mechanisms, capturing complex relationships between words and phrases. Here’s an overview of the process:

Tokenization and Embedding
Initially, the input text is broken down into smaller units, typically words or subwords, through tokenization. Each token is then converted into a numerical representation known as an embedding. For instance, the sentence “The cat sat on the mat” could be tokenized into [“The”, “cat”, “sat”, “on”, “the”, “mat”], with each token assigned a unique vector.

Multi-Layer Processing
The embedded tokens are passed through multiple transformer layers, each containing self-attention mechanisms and feed-forward neural networks.

Contextual Understanding
As the input progresses through layers, the model develops a deeper understanding of the text, capturing both local and global context.

Training and Pattern Recognition
During training, LLMs are exposed to vast datasets, learning patterns related to grammar, syntax, and semantics.

Generating Responses
When generating text, the LLM predicts the next word or token based on its learned patterns. This process is iterative: each generated token influences the next. For example, if prompted with “The Eiffel Tower is located in,” the model would likely generate “Paris,” given its learned associations between these terms.

Limitations in Reasoning and Planning
Despite their capabilities, LLMs face challenges in areas like reasoning and planning.
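The tokenize-then-embed step can be mimicked with a deliberately tiny sketch, using whitespace splitting and random vectors as stand-ins for a real subword tokenizer (such as BPE) and learned embeddings:

```python
import random

# Toy illustration of tokenization + embedding (not a real tokenizer).
# Each distinct token gets a fixed 4-dimensional vector from a lookup table.

def tokenize(text):
    return text.split()  # production tokenizers use subword schemes like BPE

def build_embeddings(vocab, dim=4, seed=0):
    rng = random.Random(seed)
    # sorted() keeps the table deterministic across runs
    return {tok: [rng.uniform(-1, 1) for _ in range(dim)] for tok in sorted(vocab)}

sentence = "The cat sat on the mat"
tokens = tokenize(sentence)            # ['The', 'cat', 'sat', 'on', 'the', 'mat']
table = build_embeddings(set(tokens))
embedded = [table[t] for t in tokens]  # one vector per token, in sentence order
```

Note that “The” and “the” map to different vectors here, just as a case-sensitive tokenizer would assign them different token IDs.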
Research by Subbarao Kambhampati highlights several limitations:

Lack of Causal Understanding
LLMs struggle with causal reasoning, which is crucial for understanding how events and actions relate in the real world.

Difficulty with Multi-Step Planning
LLMs often struggle to break down tasks into a logical sequence of actions.

The Blocksworld Problem
Kambhampati’s research on the Blocksworld problem, which involves stacking and unstacking blocks, shows that LLMs like GPT-3 struggle with even simple planning tasks. When tested on 600 Blocksworld instances, GPT-3 solved only 12.5% of them using natural language prompts. Even after fine-tuning, the model solved only 20% of the instances, highlighting its reliance on pattern recognition rather than true understanding of the planning task.

Performance on GPT-4

Temporal and Counterfactual Reasoning
LLMs also struggle with temporal reasoning (e.g., understanding the sequence of events) and counterfactual reasoning (e.g., constructing hypothetical scenarios).

Token and Numerical Errors
LLMs also exhibit errors in numerical reasoning due to inconsistencies in tokenization and their lack of true numerical understanding.

Tokenization and Numerical Representation
Numbers are often tokenized inconsistently. For example, “380” might be one token, while “381” might split into two tokens (“38” and “1”), leading to confusion in numerical interpretation.

Decimal Comparison Errors
LLMs can struggle with decimal comparisons. For example, comparing 9.9 and 9.11 may produce incorrect conclusions because the model processes these numbers as strings of tokens rather than numerically.

Examples of Numerical Errors

Hallucinations and Biases

Hallucinations
LLMs are prone to generating false or nonsensical content, known as hallucinations. This can happen when the model produces irrelevant or fabricated information.
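The decimal pitfall is easy to reproduce outside an LLM. Comparing the integer and fractional parts as separate tokens, the way a version string is compared, yields the opposite of the numeric answer; this is an illustration of the failure mode, not of any specific model's internals:

```python
# Two ways to compare "9.9" and "9.11": numerically, and piece-by-piece
# the way a version string (or a token-split number) would be compared.

def numeric_compare(a: str, b: str) -> str:
    # Correct: treat the strings as real numbers.
    return a if float(a) > float(b) else b

def piecewise_compare(a: str, b: str) -> str:
    # Failure mode: compare integer and fractional parts as separate
    # integer tokens, so 11 > 9 and "9.11" wrongly wins.
    pa = [int(p) for p in a.split(".")]
    pb = [int(p) for p in b.split(".")]
    return a if pa > pb else b

numeric_compare("9.9", "9.11")    # "9.9"  (correct: 9.9 > 9.11)
piecewise_compare("9.9", "9.11")  # "9.11" (the familiar LLM-style mistake)
```

The piecewise comparison is perfectly sensible for version numbers (where 9.11 really does come after 9.9), which is one reason a model trained on mixed text can conflate the two conventions.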
Biases
LLMs can perpetuate biases present in their training data, which can lead to the generation of biased or stereotypical content.

Inconsistencies and Context Drift
LLMs often struggle to maintain consistency over long sequences of text or tasks. As the input grows, the model may prioritize more recent information, leading to contradictions or neglect of earlier context. This is particularly problematic in multi-turn conversations or tasks requiring persistence.

Conclusion
While LLMs have advanced the field of natural language processing, they still face significant challenges in reasoning, planning, and maintaining contextual accuracy. These limitations highlight the need for further research and the development of hybrid AI systems that integrate LLMs with other techniques to improve reasoning, consistency, and overall performance.

Evaluating RAG With Needle in Haystack Test

Agentic RAG

Agentic RAG: The Next Evolution of AI-Powered Knowledge Retrieval

From RAG to Agentic RAG: A Paradigm Shift in AI Applications
While Retrieval-Augmented Generation (RAG) dominated AI advancements in 2023, agentic workflows are driving the next wave of innovation in 2024. By integrating AI agents into RAG pipelines, developers can build more powerful, adaptive, and intelligent LLM-powered applications. This article explores:
✔ What is Agentic RAG?
✔ How it works (single-agent vs. multi-agent architectures)
✔ Implementation methods (function calling vs. agent frameworks)
✔ Enterprise adoption & real-world use cases
✔ Benefits & limitations

Understanding the Foundations: RAG & AI Agents

What is Retrieval-Augmented Generation (RAG)?
RAG enhances LLMs by retrieving external knowledge before generating responses, reducing hallucinations and improving accuracy.
Traditional (Vanilla) RAG Pipeline:
Limitations of Vanilla RAG:
❌ Single knowledge source (no dynamic tool integration).
❌ One-shot retrieval (no iterative refinement).
❌ No reasoning over retrieved data quality.

What Are AI Agents?
AI agents are autonomous LLM-driven systems with:
The ReAct Framework (Reason + Act)

What is Agentic RAG?
Agentic RAG embeds AI agents into RAG pipelines, enabling:
✅ Multi-source retrieval (databases, APIs, web search).
✅ Dynamic query refinement (self-correcting searches).
✅ Validation of results (quality checks before generation).

How Agentic RAG Works
Instead of a static retrieval step, an AI agent orchestrates:

Agentic RAG Architectures
1. Single-Agent RAG (Router)
2. Multi-Agent RAG (Orchestrated Workflow)

Implementing Agentic RAG

Option 1: LLMs with Function Calling
Example: Function Calling with Ollama

```python
def ollama_generation_with_tools(query, tools_schema):
    # LLM decides tool use → executes → refines response
    ...
```

Option 2: Agent Frameworks

Why Enterprises Are Adopting Agentic RAG

Real-World Use Cases
🔹 Replit’s AI Dev Agent – Helps debug & write code.
🔹 Microsoft Copilots – Assist users in real-time tasks.
🔹 Customer Support Bots – Multi-step query resolution.

Benefits
✔ Higher accuracy (validated retrievals).
✔ Dynamic tool integration (APIs, web, databases).
✔ Autonomous task handling (reducing manual work).

Limitations
⚠ Added latency (LLM reasoning steps).
⚠ Unpredictability (agents may fail without safeguards).
⚠ Complex debugging (multi-agent coordination).

Conclusion: The Future of Agentic RAG
Agentic RAG represents a leap beyond traditional RAG, enabling:
🚀 Smarter, self-correcting retrieval.
🤖 Seamless multi-tool workflows.
🔍 Enterprise-grade reliability.
As frameworks mature, expect AI agents to become the backbone of next-gen LLM applications, transforming industries from customer service to software development. Ready to build your own Agentic RAG system? Explore frameworks like LangChain, CrewAI, or OpenAI’s function calling to get started.
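The single-agent "router" pattern described above can be sketched without any model at all. In the toy below, a keyword check stands in for the LLM's routing and validation decisions, and the tools are stub functions rather than real retrievers (all names and data are hypothetical), so the example runs without a model or network access:

```python
# Minimal single-agent ("router") RAG loop with stub tools -- an
# illustrative sketch, not a real framework.

def search_docs(query):
    # Stand-in for a vector-store lookup over internal documents.
    docs = {"return policy": "Items may be returned within 30 days."}
    return docs.get(query.lower(), "")

def search_web(query):
    # Stand-in for a live web-search tool.
    return f"(web results for: {query})"

TOOLS = {"docs": search_docs, "web": search_web}

def route(query):
    # A real agent would ask an LLM which tool to call; here a simple
    # keyword check plays that role.
    return "docs" if "policy" in query.lower() else "web"

def agentic_rag(query, max_retries=1):
    tool = route(query)
    context = TOOLS[tool](query)
    for _ in range(max_retries):
        if context:  # validation step: retry elsewhere if retrieval came back empty
            break
        tool = "web"
        context = TOOLS[tool](query)
    return f"Answer based on [{tool}]: {context}"

agentic_rag("return policy")  # routes to the docs tool, answer cites the policy
```

The two properties that distinguish this from vanilla RAG are visible even at toy scale: the retrieval source is chosen per query rather than fixed, and the result is checked (and re-retrieved from a fallback source) before it reaches generation.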

Cohere-Powered Slack Agents


Salesforce AI and Cohere-Powered Slack Agents: Seamless CRM Data Interaction and Enhanced Productivity

Slack agents, powered by Salesforce AI and integrated with Cohere, enable seamless interaction with CRM data within the Slack platform. These agents allow teams to use natural language to surface data insights and take action, simplifying workflows. With Slack’s AI Workflow Builder and support for third-party AI agents, including Cohere, productivity is further enhanced through automated processes and customizable AI assistants. By leveraging these technologies, Slack agents give users direct access to CRM data and AI-powered insights, improving efficiency and collaboration.

Key Features of Slack Agents: Salesforce AI and Cohere
Productivity Enhancements with Slack Agents: Salesforce AI and Cohere
AI Agent Capabilities in Slack: Salesforce and Cohere
Data Security and Compliance for Slack Agents

FAQ

What are Slack agents, and how do they integrate with Salesforce AI and Cohere?
Slack agents are AI-powered assistants that enable teams to interact with CRM data directly within Slack. Salesforce AI agents allow natural language data interactions, while Cohere’s integration enhances productivity with customizable AI assistants and automated workflows.

How do Salesforce AI agents in Slack improve team productivity?
Salesforce AI agents enable users to interact with both CRM and conversational data, update records, and analyze opportunities using natural language. This integration improves workflow efficiency, leading to a reported 47% productivity boost.

What features does the Cohere integration with Slack AI offer?
Cohere integration offers customizable AI assistants that can help generate workflows, summarize channel content, and provide intelligent responses to user queries within Slack.
How do Slack agents handle data security and compliance?
Slack agents leverage cloud-native DLP solutions, automatically detecting sensitive data across different file types and setting up automated remediation processes for enhanced security and compliance.

Can Slack agents work with AI providers beyond Salesforce and Cohere?
Yes, Slack supports AI agents from various providers. In addition to Salesforce AI and Cohere, integrations include Adobe Express, Anthropic, Perplexity, IBM, and Amazon Q Business, offering users a wide array of AI-powered capabilities.
