Agentic AI Archives - gettectonic.com - Page 2
What is Explainable AI


Building a trusted AI system starts with ensuring transparency in how decisions are made. Explainable AI is vital not only for addressing trust issues within organizations but also for navigating regulatory challenges. According to research from Forrester, many business leaders express concerns over AI, particularly generative AI, which surged in popularity following the 2022 release of ChatGPT by OpenAI.

“AI faces a trust issue,” explained Forrester analyst Brandon Purcell, underscoring the need for explainability to foster accountability. He highlighted that explainability helps stakeholders understand how AI systems generate their outputs. “Explainability builds trust,” Purcell stated at the Forrester Technology and Innovation Summit in Austin, Texas. “When employees trust AI systems, they’re more inclined to use them.”

Implementing explainable AI does more than encourage usage within an organization—it also helps mitigate regulatory risks, according to Purcell. Explainability is crucial for compliance, especially under regulations like the EU AI Act. Forrester analyst Alla Valente emphasized the importance of integrating accountability, trust, and security into AI efforts. “Don’t wait for regulators to set standards—ensure you’re already meeting them,” she advised at the summit.

Purcell noted that explainable AI varies depending on whether the AI model is predictive, generative, or agentic.

Building an Explainable AI System

AI explainability encompasses several key elements, including reproducibility, observability, transparency, interpretability, and traceability. For predictive models, transparency and interpretability are paramount. Transparency involves using “glass-box modeling,” where users can see how the model analyzed the data and arrived at its predictions. This approach is likely to be a regulatory requirement, especially for high-risk applications.
Interpretability is another important technique, useful for lower-risk cases such as fraud detection or explaining loan decisions. Techniques like partial dependence plots show how specific inputs affect predictive model outcomes. “With predictive AI, explainability focuses on the model itself,” Purcell noted. “It’s one area where you can open the hood and examine how it works.”

In contrast, generative AI models are often more opaque, making explainability harder. Businesses can address this by documenting the entire system, a process known as traceability. For those using models from vendors like Google or OpenAI, tools like transparency indexes and model cards—which detail the model’s use case, limitations, and performance—are valuable resources.

Lastly, for agentic AI systems, which autonomously pursue goals, reproducibility is key. Businesses must ensure that the model’s outputs can be consistently replicated with similar inputs before deployment. These systems, like self-driving cars, will require extensive testing in controlled environments before being trusted in the real world. “Agentic systems will need to rack up millions of virtual miles before we let them loose,” Purcell concluded.
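The partial dependence technique mentioned above is easy to sketch by hand: pin the feature of interest at each grid value, average the model's predictions over the rest of the data, and plot the resulting curve. The toy "loan score" model and dataset below are entirely hypothetical, purely to illustrate the idea.

```python
# Partial dependence, sketched by hand for a toy "loan score" model.
# The model and data are hypothetical, purely for illustration.

def model(income, debt_ratio):
    """A stand-in predictive model: higher income helps, higher debt hurts."""
    return 0.01 * income - 2.0 * debt_ratio

# Toy dataset: (income in $k, debt-to-income ratio)
data = [(40, 0.4), (60, 0.3), (80, 0.2), (100, 0.1)]

def partial_dependence_income(grid):
    """Average the model's output over the dataset while pinning income
    at each grid value -- the marginal effect of income alone."""
    curve = []
    for income in grid:
        avg = sum(model(income, d) for _, d in data) / len(data)
        curve.append(round(avg, 3))
    return curve

print(partial_dependence_income([40, 60, 80, 100]))
```

In practice the same idea is available off the shelf (for example via scikit-learn's partial dependence utilities); the point here is only that the technique explains a model by probing it, not by opening it up.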

Recent advancements in AI


Recent advancements in AI have been propelled by large language models (LLMs) containing billions to trillions of parameters. Parameters—variables used to train and fine-tune machine learning models—have played a key role in the development of generative AI. As the number of parameters grows, models like ChatGPT can generate human-like content that was unimaginable just a few years ago. Parameters are sometimes referred to as “features” or “feature counts.”

While it’s tempting to equate the power of AI models with their parameter count, similar to how we think of horsepower in cars, more parameters aren’t always better. An increase in parameters can lead to additional computational overhead and even problems like overfitting. There are various ways to increase the number of parameters in AI models, but not all approaches yield the same improvements. For example, Google’s Switch Transformers scaled to trillions of parameters, but some of their smaller models outperformed them in certain use cases. Thus, other metrics should be considered when evaluating AI models.

The exact relationship between parameter count and intelligence is still debated. John Blankenbaker, principal data scientist at SSA & Company, notes that larger models tend to replicate their training data more accurately, but the belief that more parameters inherently lead to greater intelligence is often wishful thinking. He points out that while these models may sound knowledgeable, they don’t actually possess true understanding.

One challenge is the misunderstanding of what a parameter is. It’s not a word, feature, or unit of data but rather a component within the model’s computation. Each parameter adjusts how the model processes inputs, much like turning a knob in a complex machine. In contrast to parameters in simpler models like linear regression, which have a clear interpretation, parameters in LLMs are opaque and offer no insight on their own.
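To make "parameter" concrete: in a plain fully connected network, the parameter count is just the weights plus biases of each layer, and each one is a single learned "knob." The layer sizes below are hypothetical, chosen only to show the arithmetic.

```python
# Parameter counting for a toy fully connected network.
# The architecture is hypothetical; the point is that a "parameter"
# is one learned weight or bias, not a word or a unit of data.

def count_parameters(layer_sizes):
    """Each dense layer holds an (n_in x n_out) weight matrix plus n_out biases."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# A small net: 512 inputs -> 256 hidden units -> 10 outputs
print(count_parameters([512, 256, 10]))

# Compare a linear regression with 3 inputs: 3 weights + 1 intercept
print(count_parameters([3, 1]))
```

The same counting logic, scaled up across attention blocks and embedding tables, is where the "billions to trillions" figures for LLMs come from.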
Christine Livingston, managing director at Protiviti, explains that parameters act as weights that allow flexibility in the model. However, more parameters can lead to overfitting, where the model performs well on training data but struggles with new information.

Adnan Masood, chief AI architect at UST, highlights that parameters influence precision, accuracy, and data management needs. However, due to the size of LLMs, it’s impractical to focus on individual parameters. Instead, developers assess models based on their intended purpose, performance metrics, and ethical considerations. Understanding the data sources and pre-processing steps becomes critical in evaluating the model’s transparency.

It’s important to differentiate between parameters, tokens, and words. A parameter is not a word; rather, it’s a value learned during training. Tokens are fragments of words, and LLMs are trained on these tokens, which are transformed into embeddings used by the model.

The number of parameters influences a model’s complexity and capacity to learn. More parameters often lead to better performance, but they also increase computational demands. Larger models can be harder to train and operate, leading to slower response times and higher costs. In some cases, smaller models are preferred for domain-specific tasks because they generalize better and are easier to fine-tune.

Transformer-based models like GPT-4 dwarf previous generations in parameter count. However, for edge-based applications where resources are limited, smaller models are preferred as they are more adaptable and efficient. Fine-tuning large models for specific domains remains a challenge, often requiring extensive oversight to avoid problems like overfitting. There is also growing recognition that parameter count alone is not the best way to measure a model’s performance.
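The token-versus-parameter distinction can be illustrated with a deliberately tiny sketch. The vocabulary and embedding vectors below are invented, and real LLM tokenizers are learned rather than hand-written; the sketch only shows the path a word takes: word, to token fragments, to token ids, to embedding vectors.

```python
# A tiny sketch of the word -> token -> embedding path.
# Vocabulary and vectors are made up; real tokenizers and embeddings are learned.

vocab = {"free": 0, "dom": 1, "un": 2, "told": 3}
embeddings = [  # one small vector per token id (real models use thousands of dims)
    [0.1, 0.2], [0.3, 0.1], [0.0, 0.5], [0.4, 0.4],
]

def tokenize(word):
    """Greedy longest-match split of a word into known fragments."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return tokens

tokens = tokenize("freedom")
ids = [vocab[t] for t in tokens]
vectors = [embeddings[i] for i in ids]
print(tokens, ids)  # the model only ever sees the vectors, never the word
```

Note that "freedom" becomes two tokens here: the model is trained on fragments, which is why token counts, not word counts, drive both context limits and pricing.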
Alternatives like Stanford’s HELM and benchmarks such as GLUE and SuperGLUE assess models across multiple factors, including fairness, efficiency, and bias.

Three trends are shaping how we think about parameters. First, AI developers are improving model performance without necessarily increasing parameters. A study of 231 models between 2012 and 2023 found that the computational power required for LLMs has halved every eight months, outpacing Moore’s Law. Second, new neural network approaches like Kolmogorov-Arnold Networks (KANs) show promise, achieving comparable results to traditional models with far fewer parameters. Lastly, agentic AI frameworks like Salesforce’s Agentforce offer a new architecture where domain-specific AI agents can outperform larger general-purpose models.

As AI continues to evolve, it’s clear that while parameter count is an important consideration, it’s just one of many factors in evaluating a model’s overall capabilities. To stay on the cutting edge of artificial intelligence, contact Tectonic today.

Commerce Cloud and Agentic AI


Recognizing the demand from both B2B and B2C buyers for seamless, consistent commerce experiences across online and offline channels, Salesforce has introduced an AI-powered, unified commerce version of its Commerce Cloud platform. Salesforce, a leader in merging ecommerce and CRM software, has taken a significant step toward unified commerce with this next-generation update to Salesforce Commerce Cloud. This move aligns with the expectations of both B2B buyers and consumers, who increasingly seek integrated and personalized interactions.

The company states that Commerce Cloud now “natively connects all aspects of commerce—B2C, direct-to-consumer, and B2B commerce; order management; and payments—with sales, service, and marketing, all on a single platform.” This integration offers businesses a complete view of the customer journey through a shared catalog and user profile. By unifying elements like catalogs, pricing, orders, and marketing segments, companies can deliver personalized interactions, boost customer loyalty, and drive revenue across all touchpoints.

Unified Commerce: A $1.5 Trillion Opportunity

Salesforce cites research from Adyen, which indicates that adopting unified commerce strategies could present a $1.5 trillion opportunity for retailers globally. In North America, 76 of the top 2000 online retailers use Salesforce’s ecommerce platform. In 2023, these retailers generated over $6 billion in web sales. Salesforce’s B2B clients include major companies such as Siemens, Schneider Electric, GE Renewable Energy, and Chambers Gasket.

AI-Powered Commerce Cloud

Salesforce emphasizes that AI powers key aspects of its next-generation Commerce Cloud, enabling the platform to autonomously manage tasks like product recommendations and order lookups by leveraging data from digital and in-store interactions, orders, inventory levels, customer reviews, unified profiles, and CRM information.
The AI-backed “Agentforce” agents are designed to assist employees in delivering personalized interactions, strengthening customer relationships, and improving profit margins. According to Justin Racine, Principal of Unified Commerce at Perficient, Salesforce’s efforts to unify the commerce experience across its broad range of products align with the needs of both B2B buyers and consumers. He notes that modern buyers expect brands to connect and communicate with them based on their previous behaviors, preferences, and purchases.

Unlocking Revenue with Agentforce

Michael Affronti, Senior Vice President and General Manager of Commerce Cloud, highlights that this new version embodies unified commerce by providing businesses with a single, integrated platform. The platform consolidates the entire commerce journey, with AI-powered Agentforce agents unlocking new revenue streams and delivering personalized experiences across every channel.

Furniture designer and manufacturer MillerKnoll has already benefited from the unified platform. Frank DeMaria, Vice President of Digital Engineering & Platforms, mentions that the integration of sales, service, marketing, and other functions has helped the company offer personalized experiences and improve online sales and customer satisfaction across its portfolio of brands, including HermanMiller.

Key Features of the New Commerce Cloud

Racine adds that Salesforce’s new release unifies its product suite under a cohesive platform, providing marketers and business users with a comprehensive 360-degree view of the customer. This enables brands to build experiences and ordering workflows that are predictive rather than reactive. The integration of Agentforce represents a breakthrough, blending AI with brand interactions to unlock potential gains for merchandisers and buyers, and Racine is excited to see how these technologies enhance revenue and customer loyalty.


Growing Energy Consumption in Generative AI

Growing Energy Consumption in Generative AI, but ROI Impact Remains Unclear

The rising energy costs associated with generative AI aren’t always central in enterprise financial considerations, yet experts suggest IT leaders should take note. Building a business case for generative AI involves both obvious and hidden expenses. Licensing fees for large language models (LLMs) and SaaS subscriptions are visible expenses, but less apparent costs include data preparation, cloud infrastructure upgrades, and managing organizational change.

One under-the-radar cost is the energy required by generative AI. Training LLMs demands vast computing power, and even routine AI tasks like answering user queries or generating images consume energy. These intensive processes require robust cooling systems in data centers, adding to energy use.

While energy costs haven’t been a focus for GenAI adopters, growing awareness has prompted the International Energy Agency (IEA) to predict a doubling of data center electricity consumption by 2026, attributing much of the increase to AI. Goldman Sachs echoed these concerns, projecting data center power consumption to more than double by 2030.

For now, generative AI’s anticipated benefits outweigh energy cost concerns for most enterprises, with hyperscalers like Google bearing the brunt of these costs. Google recently reported a 13% increase in greenhouse gas emissions, citing AI as a major contributor and suggesting that reducing emissions might become more challenging with AI’s continued growth. While not a barrier to adoption, energy costs play into generative AI’s long-term viability, noted Scott Likens, global AI engineering leader at PwC, emphasizing that “there’s energy being used — you don’t take it for granted.”

Energy Costs and Enterprise Adoption

Generative AI users might not see a line item for energy costs, yet these are embedded in fees.
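The scale of these hidden costs can be sketched with back-of-envelope arithmetic. Every figure below is an assumption chosen purely for illustration; published per-query energy estimates vary widely and change as hardware improves.

```python
# Rough per-query energy arithmetic -- all numbers are illustrative
# assumptions, not measured values for any real service.

WH_PER_QUERY = 3.0           # assumed watt-hours per generative AI query
QUERIES_PER_DAY = 10_000_000  # assumed daily query volume

def daily_mwh(wh_per_query, queries):
    """Convert per-query watt-hours into megawatt-hours per day."""
    return wh_per_query * queries / 1_000_000  # Wh -> MWh

print(daily_mwh(WH_PER_QUERY, QUERIES_PER_DAY), "MWh/day at these assumptions")
```

Even small per-query figures multiply into utility-scale daily loads at this volume, which is why the IEA and Goldman Sachs projections above focus on aggregate data center demand rather than individual queries.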
Ryan Gross of Caylent points out that the costs are mainly tied to model training and inferencing, with each model query, though individually minor, adding up over time. These expenses are often spread across the customer base, as companies pay for generative AI access through a licensing model. A PwC sustainability study showed that GenAI power costs, particularly from model training, are distributed among licensees. Token-based pricing for LLM usage also reflects inferencing costs, though these charges have decreased. Likens noted that the largest expenses still come from infrastructure and data management rather than energy.

Potential Efficiency Gains

Though energy isn’t a primary consideration, enterprises could reduce consumption indirectly through technological advancements. Newer, more cost-efficient models like OpenAI’s GPT-4o mini are 60% less expensive per token than prior versions, enabling organizations to deploy GenAI on a larger scale while keeping costs lower. Small, fine-tuned models can be used to address latency and lower energy consumption, part of a “multimodel” approach that can provide different accuracy and latency levels with varying energy demands.

Agentic AI also offers opportunities for cost and energy savings. By breaking down tasks and routing them through specialized models, companies can minimize latency and reduce power usage. According to Likens, using agentic architecture could cut costs and consumption, particularly when tasks are routed to more efficient models.

Rising Data Center Energy Needs

While enterprises may feel shielded from direct energy costs, data centers bear the growing power demand. Cooling solutions are evolving, with liquid cooling systems becoming more prevalent for AI workloads. As data centers face the “AI growth cycle,” the demand for energy-efficient cooling solutions has fueled a resurgence in thermal management investment.
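Token-based pricing accumulates in a straightforward way. The per-token rates below are assumptions for illustration only; actual vendor prices vary by model and change frequently.

```python
# Back-of-envelope token pricing. The rates are hypothetical placeholders,
# not any vendor's actual price list.

PRICE_PER_1K_INPUT = 0.0005   # assumed $ per 1K input tokens
PRICE_PER_1K_OUTPUT = 0.0015  # assumed $ per 1K output tokens

def monthly_cost(queries, in_tokens, out_tokens):
    """Total cost of `queries` calls, each with the given token counts."""
    per_call = (in_tokens / 1000) * PRICE_PER_1K_INPUT \
             + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return queries * per_call

# e.g. one million queries a month at 500 input + 200 output tokens each
print(f"${monthly_cost(1_000_000, 500, 200):.2f}")
```

This is the mechanism by which training and inferencing costs, energy included, get spread across the customer base: each individually tiny per-call charge carries a slice of the provider's fixed costs.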
Liquid cooling, being more efficient than air cooling, is gaining traction due to the power demands of AI and high-performance computing. IDTechEx projects that data center liquid cooling revenue could exceed $50 billion by 2035. Meanwhile, data centers are exploring nuclear power, with AWS, Google, and Microsoft among those considering nuclear energy as a sustainable solution to meet AI’s power demands.

Future ROI Considerations

While enterprises remain shielded from the full energy costs of generative AI, careful model selection and architectural choices could help curb consumption. PwC, for instance, factors in the “carbon impact” as part of its GenAI deployment strategy, recognizing that energy considerations are now a part of the generative AI value proposition. As organizations increasingly factor sustainability into their tech decisions, energy efficiency might soon play a larger role in generative AI ROI calculations.

AI Agents Connect Tool Calling and Reasoning


AI Agents: Bridging Tool Calling and Reasoning in Generative AI

Exploring Problem Solving and Tool-Driven Decision Making in AI

Introduction: The Emergence of Agentic AI

Recent advancements in libraries and low-code platforms have simplified the creation of AI agents, often referred to as digital workers. Tool calling stands out as a key capability that enhances the “agentic” nature of Generative AI models, enabling them to move beyond mere conversational tasks. By executing tools (functions), these agents can act on your behalf and tackle intricate, multi-step problems requiring sound decision-making and interaction with diverse external data sources. This insight explores the role of reasoning in tool calling, examines the challenges associated with tool usage, discusses common evaluation methods for tool-calling proficiency, and provides examples of how various models and agents engage with tools.

Reasoning as a Means of Problem-Solving

Successful agents rely on two fundamental expressions of reasoning: reasoning through evaluation and planning, and reasoning through tool use. While both reasoning expressions are vital, they don’t always need to be combined to yield powerful solutions.

For instance, OpenAI’s o1 model excels in reasoning through evaluation and planning, having been trained to utilize chain of thought effectively. This has notably enhanced its ability to address complex challenges, achieving human PhD-level accuracy on benchmarks like GPQA across physics, biology, and chemistry, and ranking in the 86th-93rd percentile on Codeforces contests. However, the o1 model currently lacks explicit tool calling capabilities.

Conversely, many models are specifically fine-tuned for reasoning through tool use, allowing them to generate function calls and interact with APIs effectively. These models focus on executing the right tool at the right moment but may not evaluate their results as thoroughly as the o1 model.
The Berkeley Function Calling Leaderboard (BFCL) serves as an excellent resource for comparing the performance of various models on tool-calling tasks and provides an evaluation suite for assessing fine-tuned models against challenging scenarios. The recently released BFCL v3 now includes multi-step, multi-turn function calling, raising the standards for tool-based reasoning tasks.

Both reasoning types are powerful in their own right, and their combination holds the potential to develop agents that can effectively deconstruct complex tasks and autonomously interact with their environments. For more insights into AI agent architectures for reasoning, planning, and tool calling, check out my team’s survey paper on ArXiv.

Challenges in Tool Calling: Navigating Complex Agent Behaviors

Creating robust and reliable agents necessitates overcoming various challenges. In tackling complex problems, an agent often must juggle multiple tasks simultaneously, including planning, timely tool interactions, accurate formatting of tool calls, retaining outputs from prior steps, avoiding repetitive loops, and adhering to guidelines to safeguard the system against jailbreaks and prompt injections.

Such demands can easily overwhelm a single agent, leading to a trend where what appears to an end user as a single agent is actually a coordinated effort of multiple agents and prompts working in unison to divide and conquer the task. This division enables tasks to be segmented and addressed concurrently by distinct models and agents, each tailored to tackle specific components of the problem.

This is where models with exceptional tool-calling capabilities come into play. While tool calling is a potent method for empowering productive agents, it introduces its own set of challenges.
Agents must grasp the available tools, choose the appropriate one from a potentially similar set, accurately format the inputs, execute calls in the correct sequence, and potentially integrate feedback or instructions from other agents or humans. Many models are fine-tuned specifically for tool calling, allowing them to specialize in selecting functions accurately at the right time, and there are several key considerations to weigh when fine-tuning a model for this purpose.

Common Benchmarks for Evaluating Tool Calling

As tool usage in language models becomes increasingly significant, numerous datasets have emerged to facilitate the evaluation and enhancement of model tool-calling capabilities. Two prominent benchmarks include the Berkeley Function Calling Leaderboard and the Nexus Function Calling Benchmark, both utilized by Meta to assess the performance of their Llama 3.1 model series. The recent ToolACE paper illustrates how agents can generate a diverse dataset for fine-tuning and evaluating model tool use.

Each of these benchmarks enhances our ability to evaluate model reasoning through tool calling. They reflect a growing trend toward developing specialized models for specific tasks and extending the capabilities of LLMs to interact with the real world.

Practical Applications of Tool Calling

If you’re interested in observing tool calling in action, here are some examples to consider, categorized by ease of use, from simple built-in tools to utilizing fine-tuned models and agents with tool-calling capabilities. While the built-in web search feature is convenient, most applications require defining custom tools that can be integrated into your model workflows. This leads us to the next complexity level.

To observe how models articulate tool calls, you can use the Databricks Playground. For example, select the Llama 3.1 405B model and grant access to sample tools like get_distance_between_locations and get_current_weather.
When prompted with, “I am going on a trip from LA to New York. How far are these two cities? And what’s the weather like in New York? I want to be prepared for when I get there,” the model will decide which tools to call and what parameters to provide for an effective response. In this scenario, the model suggests two tool calls. Since the model cannot execute the tools, the user must input a sample result to simulate the tool output.

Suppose you employ a model fine-tuned on the Berkeley Function Calling Leaderboard dataset. When prompted, “How many times has the word ‘freedom’ appeared in the entire works of Shakespeare?” the model will successfully retrieve and return the answer, executing the required tool calls without the user needing to define any input or manage the output format. Such models handle multi-turn interactions adeptly, processing past user messages, managing context, and generating coherent, task-specific outputs.

As AI agents evolve to encompass advanced reasoning and problem-solving capabilities, they will become increasingly adept at managing
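The loop described above can be sketched in a few lines: the model proposes structured tool calls, and the calling code dispatches them and feeds results back. The tool names below mirror the Playground samples, but their implementations here are hypothetical canned stubs, as is the "model output", since a real LLM can't be invoked inline.

```python
# A minimal tool-calling dispatch loop. The tools and the model's
# structured output are stubs, purely to show the mechanics.
import json

def get_distance_between_locations(origin, destination):
    # Hypothetical stand-in; a real tool would query a maps API.
    if (origin, destination) == ("LA", "New York"):
        return {"miles": 2451}
    return {"miles": None}

def get_current_weather(location):
    # Canned sample result, like the one a user pastes into the Playground.
    return {"location": location, "forecast": "42F, partly cloudy"}

TOOLS = {
    "get_distance_between_locations": get_distance_between_locations,
    "get_current_weather": get_current_weather,
}

# What a model's structured output might look like for the trip prompt:
tool_calls = [
    {"name": "get_distance_between_locations",
     "arguments": {"origin": "LA", "destination": "New York"}},
    {"name": "get_current_weather", "arguments": {"location": "New York"}},
]

def execute_tool_calls(calls):
    """Dispatch each requested call and collect results for the model's next turn."""
    results = []
    for call in calls:
        fn = TOOLS[call["name"]]
        results.append({"tool": call["name"], "result": fn(**call["arguments"])})
    return results

print(json.dumps(execute_tool_calls(tool_calls), indent=2))
```

In a production agent, the `results` list would be appended to the conversation as tool messages so the model can compose its final answer; here the user plays that role by hand.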

Tableau Einstein is Here


Tableau Einstein marks a new chapter for Tableau, transforming the analytics experience by moving beyond traditional reports and dashboards to deliver insights directly within the flow of a user’s work. This new AI-powered analytics platform blends existing Tableau and Salesforce capabilities with innovative features designed to revolutionize how users engage with data.

The platform is built around four key areas: autonomous insight delivery through AI, AI-assisted development of a semantic layer, real-time data access, and a marketplace for data and AI products, allowing customers to personalize their Tableau experience. Some features, like Tableau Pulse and Tableau Agent, which provide autonomous insights, are already available. Additional tools, such as Tableau Semantics and a marketplace for AI products, are expected to launch in 2025. Access to Tableau Einstein is provided through a Tableau+ subscription, though pricing details remain private.

Since being acquired by Salesforce in 2019, Tableau has shifted its focus toward AI, following the trend of many analytics vendors. In February, Tableau introduced Tableau Pulse, a generative AI-powered tool that delivers insights in natural language. In July, it also rolled out Tableau Agent, an AI assistant to help users prepare and analyze data.

With AI at its core, Tableau Einstein reflects deeper integration between Tableau and Salesforce. David Menninger, an analyst at Ventana Research, commented that these new capabilities represent a meaningful step toward true integration between the two platforms. Donald Farmer, founder of TreeHive Strategy, agrees, highlighting that while the robustness of Tableau Einstein’s AI capabilities compared to its competitors remains to be seen, the platform offers more than just incremental add-ons. “It’s an impressive release,” he remarked.
A Paradigm Shift in Analytics

A significant aspect of Tableau Einstein is its agentic nature, where AI-powered agents deliver insights autonomously, without user prompts. Traditionally, users queried data and analyzed reports to derive insights. Tableau Einstein changes this model by proactively providing insights within the workflow, eliminating the need for users to formulate specific queries.

The concept of autonomous insights, represented by tools like Tableau Pulse and Agentforce for Tableau, allows businesses to build autonomous agents that deliver actionable data. This aligns with the broader trend in analytics, where the market is shifting toward agentic AI and away from dashboard reliance. Menninger noted, “The market is moving toward agentic AI and analytics, where agents, not dashboards, drive decisions. Agents can act on data rather than waiting for users to interpret it.”

Farmer echoed this sentiment, stating that the integration of AI within Tableau is intuitive and seamless, offering a significantly improved analytics experience. He specifically pointed out Tableau Pulse’s elegant design and the integration of Agentforce AI, which feels deeply integrated rather than a superficial add-on.

Core Features and Capabilities

One of the most anticipated features of Tableau Einstein is Tableau Semantics, a semantic layer designed to enhance AI models by enabling organizations to define and structure their data consistently. Expected to be generally available by February 2025, Tableau Semantics will allow enterprises to manage metrics, data dimensions, and relationships across datasets with the help of AI. Pre-built metrics for Salesforce data will also be available, along with AI-driven tools to simplify semantic layer management.

Tableau is not the first to offer a semantic layer—vendors like MicroStrategy and Looker have similar features—but the infusion of AI sets Tableau’s approach apart.
According to Tableau’s chief product officer, Southard Jones, AI makes Tableau’s semantic layer more agile and user-friendly compared to older, labor-intensive systems.

Real-time data integration is another key component of Tableau Einstein, made possible through Salesforce’s Data Cloud. This integration enables Tableau users to securely access and combine structured and unstructured data from hundreds of sources without manual intervention. Unstructured data, such as text and images, is critical for comprehensive AI training, and Data Cloud allows enterprises to use it alongside structured data efficiently.

Additionally, Tableau Einstein will feature a marketplace launching in mid-2025, which will allow users to build a composable infrastructure. Through APIs, users will be able to personalize their Tableau environment, share AI assets, and collaborate across departments more effectively.

Looking Forward

As Tableau continues to build on its AI-driven platform, Menninger and Farmer agree that the vendor’s move toward agentic AI is a smart evolution. While Tableau’s current capabilities are competitive, Menninger noted that the platform doesn’t necessarily set Tableau apart from competitors like Qlik, MicroStrategy, or Microsoft Fabric. However, the tight integration with Salesforce and the focus on agentic AI may provide Tableau with a short-term advantage in the fast-changing analytics landscape.

Farmer added that Tableau Einstein’s autonomous insight generation feels like a significant leap forward for the platform. “Tableau has done great work in creating an agentic experience that feels, for the first time, like the real deal,” he said.

Looking ahead, Tableau’s roadmap includes a continued focus on agentic AI, with the goal of providing each user with their own personal analyst. “It’s not just about productivity,” said Jones.
“It’s about changing the value of what can be delivered.” Menninger concluded that Tableau’s shift away from dashboards is a reflection of where business intelligence is headed. “Dashboards, like data warehouses, don’t solve problems on their own. What matters is what you do with the information,” he said. “Tableau’s push toward agentic analytics and collaborative decision-making is the right move for its users and the market as a whole.”

Copado Unveils AI Agents

Copado Unveils AI Agents to Automate Key DevOps Tasks for Salesforce Applications Copado has introduced a suite of generative AI agents designed to automate common tasks that DevOps teams frequently encounter when building and deploying applications on Salesforce’s software-as-a-service (SaaS) platform. This announcement comes ahead of the Dreamforce 2024 conference hosted by Salesforce. These AI agents are the result of over a decade of data collection by Copado, according to David Brooks, Copado’s vice president of products. The initial AI agents will focus on code generation and test automation, with future agents tackling user story creation, deployment scripts, and application environment optimization. Unlike AI co-pilot tools that assist with code generation, Copado’s agents will fully automate tasks, Brooks explained. DevOps teams will be able to orchestrate these AI agents to streamline workflows, making best DevOps practices more accessible to a wider range of development teams. As AI continues to reshape DevOps, more tasks will be automated using agentic AI. This approach involves creating AI agents trained on a specific, narrow dataset, ensuring higher accuracy compared to general-purpose large language models (LLMs) that pull data from across the web. While it’s unclear how quickly agentic AI will transform DevOps, Brooks noted that in the future, teams will consist of both human engineers and AI agents assigned to specific tasks. DevOps engineers will still be essential for overseeing the accuracy of these tasks, but many of the repetitive tasks that often lead to burnout will be automated. As the burden of routine work decreases, organizations can expect the pace of code writing and application deployment to significantly accelerate. This could lead to a shift in how DevOps teams approach application backlogs, enabling the deployment of more applications that might have previously been sidelined due to resource constraints. 
In the interim, Brooks advises DevOps teams to begin identifying which routine tasks can be assigned to AI agents. Doing so will free up human engineers to manage workflows at a scale that was once unimaginable, positioning teams to thrive in the AI-driven future of DevOps.

Salesforce AI Agents Explained

Salesforce’s AI Agents: Revolutionizing Enterprise Sales and Service for the Future In the rapidly evolving landscape of artificial intelligence (AI), Salesforce continues to lead the charge, transforming enterprise operations with cutting-edge AI agents. With the introduction of Agentforce, Salesforce is not just enhancing sales and service departments but reshaping business processes across sectors. This comprehensive exploration highlights how Salesforce’s AI agents are changing the game, offering enterprise-level executives insights into their revolutionary potential. AI Agents: Beyond Autonomous Vehicles A fitting analogy to grasp the progression of AI agents is the evolution of autonomous vehicles. Just as self-driving cars advance from basic driver assistance to full autonomy, AI agents evolve from simple automation to more complex decision-making. Salesforce’s Chief Product Officer, David Schmaier, draws this comparison: “In the autonomous driving world, we have levels of autonomy, from level zero to level five. AI agents for enterprises follow a similar path.” At the core of this evolution is what Salesforce defines as the “agentic” phase of AI. Unlike generative AI that follows instructions to create content, agentic AI autonomously determines and takes actions based on broader goals. Schmaier notes, “We’re at the point where AI not only creates content but takes strategic actions. It’s like having an infinite pool of interns handling mundane tasks so human employees can focus on higher-value activities.” Agentforce: Salesforce’s Next-Generation AI Platform Agentforce is the latest addition to Salesforce’s AI arsenal, unveiled during their Q2 ’25 earnings call and now positioned as a significant milestone in AI development. With Agentforce, organizations can build and manage autonomous agents for tasks across various business functions—not just customer service.
This versatility is highlighted by Marc Benioff, Salesforce’s CEO, who described the energy around Agentforce during a recent briefing as “palpable.” Agentforce builds on Salesforce’s data management, security, and customization expertise, uniting these capabilities into an AI framework. Schmaier explains, “It’s about creating trusted, enterprise-ready agents, not just deploying a large language model. We’ve developed over 100 out-of-the-box use cases, from sales account summaries to service reply recommendations, all customizable and easy to deploy.” Agentforce “In Every App” A key announcement is the integration of Agentforce in every app across Salesforce’s product suite, including Sales, Service, Marketing, and Commerce Agents. The Atlas reasoning engine, Agent Builder, and a partner network were also introduced to further enhance its capabilities. The Atlas Reasoning Engine acts as the “brain” behind Agentforce, autonomously generating plans and refining them based on actions it needs to perform, such as running business processes or engaging customers through preferred channels. What Makes an AI Agent? Building an AI agent with Agentforce requires five key elements: These components leverage existing Salesforce infrastructure, making it easier for businesses to deploy agents through Agent Builder, which is part of the new Agentforce Studio. Agents vs. Chatbots Unlike traditional chatbots, which provide pre-programmed responses, Salesforce’s AI agents use large language models (LLMs) and generative AI to interpret and autonomously execute customer requests based on CRM data. This distinction allows AI agents to perform tasks that go beyond simple queries, driving efficiency in customer service, sales, and other business areas. Practical Applications: Sales, Service, and Marketing Salesforce’s AI agents offer tangible business benefits.
For instance, Sales Agent, available as both a Sales Development Representative (SDR) and Sales Coach, automates lead nurturing and inquiry management. It utilizes CRM data to deliver personalized pitches, handle objections, and even suggest meeting times—freeing sales teams to focus on more strategic tasks. In customer service, AI agents manage routine inquiries, allowing human representatives to address more complex customer needs. In marketing, AI agents generate data-driven insights to personalize campaigns, improving customer engagement and conversion rates. The Security and Trust Foundation Security and trust remain core to Salesforce’s approach to AI. The Einstein Trust Layer ensures that data protection, privacy, and ethical guidelines are maintained throughout AI interactions. Schmaier emphasizes, “Our platform defines what data agents can access and how they use it, adhering to strict data integrity standards.” The Trust Layer also prevents AI from training on customer data without consent, ensuring transparency and security. A Partnership Between Humans and AI Salesforce’s vision emphasizes the synergy between human employees and AI agents. As Schmaier points out, “AI agents handle routine tasks and deliver insights, allowing employees to focus on more creative and strategic work.” This human-AI partnership boosts productivity and innovation, ultimately improving business outcomes. The Future of AI in Business As AI technology advances, Salesforce is already working on next-generation capabilities for Agentforce, including predictive analytics and more sophisticated autonomous agents. Schmaier forecasts, “These agents will handle a wider range of tasks and provide deeper insights and recommendations.” With Agentforce launching in October 2024, businesses can expect significant returns on investment, thanks to its cost-efficient model starting at $2 per conversation.
In summary, Salesforce’s Agentforce is a game-changing innovation, blending AI and human intelligence to transform sales, service, and marketing. As more details unfold, it’s clear that Agentforce will redefine the future of business operations—driving efficiency, personalization, and strategic success.

Small Language Models

Large language models (LLMs) like OpenAI’s GPT-4 have gained acclaim for their versatility across various tasks, but they come with significant resource demands. In response, the AI industry is shifting focus towards smaller, task-specific models designed to be more efficient. Microsoft, alongside other tech giants, is investing in these smaller models. Science often involves breaking complex systems down into their simplest forms to understand their behavior. This reductionist approach is now being applied to AI, with the goal of creating smaller models tailored for specific functions. Sébastien Bubeck, Microsoft’s VP of generative AI, highlights this trend: “You have this miraculous object, but what exactly was needed for this miracle to happen; what are the basic ingredients that are necessary?” In recent years, the proliferation of LLMs like ChatGPT, Gemini, and Claude has been remarkable. However, smaller language models (SLMs) are gaining traction as a more resource-efficient alternative. Despite their smaller size, SLMs promise substantial benefits to businesses. Microsoft introduced Phi-1 in June last year, a smaller model aimed at aiding Python coding. This was followed by Phi-2 and Phi-3, which, though larger than Phi-1, are still much smaller than leading LLMs. For comparison, Phi-3-medium has 14 billion parameters, while GPT-4 is estimated to have 1.76 trillion parameters—about 125 times more. Microsoft touts the Phi-3 models as “the most capable and cost-effective small language models available.” Microsoft’s shift towards SLMs reflects a belief that the dominance of a few large models will give way to a more diverse ecosystem of smaller, specialized models. For instance, an SLM designed specifically for analyzing consumer behavior might be more effective for targeted advertising than a broad, general-purpose model trained on the entire internet. SLMs excel in their focused training on specific domains. 
“The whole fine-tuning process … is highly specialized for specific use-cases,” explains Silvio Savarese, Chief Scientist at Salesforce, another company advancing SLMs. To illustrate, using a specialized screwdriver for a home repair project is more practical than a multifunction tool that’s more expensive and less focused. This trend towards SLMs reflects a broader shift in the AI industry from hype to practical application. As Brian Yamada of VML notes, “As we move into the operationalization phase of this AI era, small will be the new big.” Smaller, specialized models or combinations of models will address specific needs, saving time and resources. Some voices express concern over the dominance of a few large models, with figures like Jack Dorsey advocating for a diverse marketplace of algorithms. Philippe Krakowsky of IPG also worries that relying on the same models might stifle creativity. SLMs offer the advantage of lower costs, both in development and operation. Microsoft’s Bubeck emphasizes that SLMs are “several orders of magnitude cheaper” than larger models. Typically, SLMs operate with around three to four billion parameters, making them feasible for deployment on devices like smartphones. However, smaller models come with trade-offs. Fewer parameters mean reduced capabilities. “You have to find the right balance between the intelligence that you need versus the cost,” Bubeck acknowledges. Salesforce’s Savarese views SLMs as a step towards a new form of AI, characterized by “agents” capable of performing specific tasks and executing plans autonomously. This vision of AI agents goes beyond today’s chatbots, which can generate travel itineraries but not take action on your behalf. Salesforce recently introduced a 1 billion-parameter SLM that reportedly outperforms some LLMs on targeted tasks.
Salesforce CEO Marc Benioff celebrated this advancement, proclaiming, “On-device agentic AI is here!”
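The parameter counts quoted in this article can be turned into a back-of-envelope memory estimate, which is what makes on-device deployment plausible. The bytes-per-parameter figures below are assumptions about common precisions (fp16 = 2 bytes, 4-bit quantization = 0.5 bytes), and real footprints also include activations and KV cache:

```python
# Back-of-envelope sketch: approximate weight memory for models of
# different sizes. Bytes-per-parameter values are assumptions about
# common precisions (fp16 = 2 bytes, 4-bit quantized = 0.5 bytes);
# actual memory use also includes activations and the KV cache.

def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9

for name, params in [("xLAM-1B", 1), ("Phi-3-medium", 14), ("GPT-4 (rumored)", 1760)]:
    print(f"{name:16s} fp16: {weight_gb(params, 2.0):>7.1f} GB   "
          f"4-bit: {weight_gb(params, 0.5):>6.1f} GB")

# A ~1B-parameter model is roughly 0.5 GB of weights at 4-bit, which is
# plausible for a phone; a ~1.76T-parameter model needs terabyte-scale
# memory and therefore data-center hardware.
```

This arithmetic is the whole economic argument for SLMs in one line: weights scale linearly with parameter count, so a model 1,000x smaller is roughly 1,000x cheaper to hold in memory.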

Autonomous AI Service Agents

Salesforce Set to Launch Autonomous AI Service Agents. Considering Tectonic first wrote about agentic AI in late June, it’s like Christmas in July! Salesforce is gearing up to introduce a new generation of customer service chatbots that leverage advanced AI tools to autonomously navigate through various actions and workflows. These bots, termed “autonomous AI agents,” are currently in pilot testing and are expected to be released later this year. Named Einstein Service Agent, these autonomous AI bots aim to utilize generative AI to understand customer intent, trigger workflows, and initiate actions within a user’s Salesforce environment, according to Ryan Nichols, Service Cloud’s chief product officer. By integrating natural language processing, predictive analytics, and generative AI, Einstein Service Agents will identify scenarios and resolve customer inquiries more efficiently. Traditional bots require programming with rules-based logic to handle specific customer service tasks, such as processing returns, issuing refunds, changing passwords, and renewing subscriptions. In contrast, the new autonomous bots, enhanced by generative AI, can better comprehend customer issues (e.g., interpreting “send back” as “return”) and summarize the steps to resolve them. Einstein Service Agent will operate across platforms like WhatsApp, Apple Messages for Business, Facebook Messenger, and SMS text, and will also process text, images, video, and audio that customers provide. Despite the promise of these new bots, their effectiveness is crucial, emphasized Liz Miller, an analyst at Constellation Research. If these bots fail to perform as expected, they risk wasting even more customer time than current technologies and damaging customer relationships. Miller also noted that successful implementation of autonomous AI agents requires human oversight for instances when the bots encounter confusion or errors.
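The "send back" versus "return" point above is the crux of the rules-based-versus-generative distinction, and it can be sketched in a few lines. Note the phrase sets below merely stand in for what an LLM would infer; they are invented for illustration, not Salesforce's logic:

```python
# Illustrative sketch of the rules-based vs. intent-based contrast.
# A rules-based bot only matches exact keywords; an intent-based agent
# maps many phrasings onto one canonical action. The phrase lists here
# stand in for what an LLM would infer -- they are not Salesforce's.

RULES = {"return": "start_return_workflow"}  # exact-keyword bot

INTENTS = {  # intent mapping over paraphrases (hypothetical)
    "start_return_workflow": {"return", "send back", "send it back", "ship back"},
    "issue_refund": {"refund", "money back"},
}

def rules_bot(message: str):
    for keyword, action in RULES.items():
        if keyword in message.lower():
            return action
    return None  # dead end: customer gets escalated or stuck

def intent_agent(message: str):
    text = message.lower()
    for action, phrases in INTENTS.items():
        if any(p in text for p in phrases):
            return action
    return None

msg = "I want to send back the shoes I ordered"
print(rules_bot(msg))     # None  (the keyword 'return' never appears)
print(intent_agent(msg))  # start_return_workflow
```

In a real generative system the paraphrase mapping is learned rather than enumerated, which is exactly why Nichols says the new agents avoid hand-building a conversational decision tree.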
Customers, whether in B2C or B2B contexts, are often frustrated with the limitations of rules-based bots and prefer direct human interaction. It is annoying enough to be on the telephone repeating “live person” over and over again. It would be tragic to have to do it online, too. “It’s essential that these bots can handle complex questions,” Miller stated. “Advancements like this are critical, as they can prevent the bot from malfunctioning when faced with unprogrammed scenarios. However, with significant technological advancements like GenAI, it’s important to remember that human language and thought processes are intricate and challenging to map.” Nichols highlighted that the forthcoming Einstein Service Agent will be simpler to set up, as it reduces the need to manually program thousands of potential customer requests into a conversational decision tree. This new technology, which can understand multiple word permutations behind a service request, could potentially lower the need for extensive developer and data scientist involvement for Salesforce users. The pricing details for the autonomous Einstein Service Agent will be announced at its release.

Einstein Service Agent is Coming

Salesforce is entering the AI agent arena with a new service built on its Einstein AI platform. Introducing the Einstein Service Agent, a generative AI-powered self-service tool designed for end customers. This agent provides a conversational AI interface to answer questions and resolve various issues. Similar to the employee-facing Einstein Copilot used internally within organizations, the Einstein Service Agent can take action on behalf of users, such as processing product returns or issuing refunds. It can handle both simple and complex multi-step interactions, leveraging approved company workflows already established in Salesforce. Initially, Einstein Service Agent will be deployed for customer service scenarios, with plans to expand to other Salesforce clouds in the future. What sets Einstein Service Agents apart from other AI-driven workflows is their seamless integration with Salesforce’s existing customer data and workflows. “Einstein Service Agent is a generative AI-powered, self-service conversational experience built on our Einstein trust layer and platform,” Clara Shih, CEO of Salesforce AI, told VentureBeat. “Everything is grounded in our trust layer, as well as all the customer data and official business workflows that companies have been adding into Salesforce for the last 25 years.” Distinguishing AI Agent from AI Copilot Over the past year, Salesforce has detailed various aspects of its generative AI efforts, including the development of the Einstein Copilot, which became generally available at the end of April. The Einstein Copilot enables a wide range of conversational AI experiences for Salesforce users, focusing on direct users of the Salesforce platform. “Einstein Copilot is employee-facing, for salespeople, customer service reps, marketers, and knowledge workers,” Shih explained. 
“Einstein Service Agent is for our customers’ customers, for their self-service.” The concept of a conversational AI bot answering basic customer questions isn’t new, but Shih emphasized that Einstein Service Agent is different. It benefits from all the data and generative AI work Salesforce has done in recent years. This agent approach is not just about answering simple questions but also about delivering knowledge-based responses and taking action. With a copilot, multiple AI engines and responses can be chained together. The AI agent approach also chains AI models together. For Shih, the difference is a matter of semantics. “It’s a spectrum toward more and more autonomy,” Shih said. Driving AI Agent Approach with Customer Workflows As an example, Shih mentioned that Salesforce is working with a major apparel company as a pilot customer for Einstein Service Agent. If a customer places an online order and receives the wrong item, they could call the retailer during business hours for assistance from a human agent, who might be using the Einstein Copilot. If the customer reaches out when human agents aren’t available or chooses a self-service route, Einstein Service Agent can step in. The customer will be able to ask about the issue and, if enabled in the workflow, get a resolution. The workflow that understands who the customer is and how to handle the issue is already part of the Salesforce Service Cloud. Shih explained that Einstein Studio is where all administrative and configuration work for Einstein AI, including Service Agents, takes place, utilizing existing Salesforce data. The Einstein Service Agent provides a new layer for customers to interact with existing logic to solve issues. “Everything seemingly that the company has invested in over the last 25 years has come to light in the last 18 months, allowing customers to securely take advantage of generative AI in a trusted way,” Shih said. 

Salesforce Tiny Giant LLM

‘On-device Agentic AI is Here!’: Salesforce Announces the ‘Tiny Giant’ LLM Salesforce CEO Marc Benioff is excited about the company’s latest innovation in AI, introducing the ‘Tiny Giant’ LLM, which he claims is the world’s top-performing “micro-model” for function-calling. Salesforce’s new slimline “Tiny Giant” LLM reportedly outperforms larger models, marking a significant advancement in on-device AI. According to a paper published on arXiv by Salesforce’s AI Research department, the xLAM-7B LLM model ranked sixth among 46 models, including those from OpenAI and Google, in a competition testing function-calling (execution of tasks or functions through API calls). The xLAM-7B LLM has just seven billion parameters, a small fraction compared to the 1.7 trillion parameters rumored to be used by GPT-4. However, Salesforce highlights the xLAM-1B, a smaller model, as its true star. Despite having just one billion parameters, the xLAM-1B model finished 24th, surpassing GPT-3.5-Turbo and Claude-3 Haiku in performance. CEO Marc Benioff proudly shared these results on X (formerly Twitter), stating: “Meet Salesforce Einstein ‘Tiny Giant.’ Our 1B parameter model xLAM-1B is now the best micro-model for function-calling, outperforming models 7x its size… On-device agentic AI is here. Congrats Salesforce Research!” Salesforce’s research emphasizes that function-calling agents represent a significant advancement in AI and LLMs. Models like GPT-4, Gemini, and Mistral already execute API calls based on natural language prompts, enabling dynamic interactions with various digital services and applications. While many popular models are large and resource-intensive, requiring cloud data centers and extensive infrastructure, Salesforce’s new models demonstrate that smaller, more efficient models can achieve state-of-the-art performance.
To test function-calling LLMs, Salesforce developed APIGen, an “Automated Pipeline for Generating verifiable and diverse function-calling datasets,” to synthesize data for AI training. Salesforce’s findings indicate that models trained on relatively small datasets can outperform those trained on larger datasets. “Models trained with our curated datasets, even with only seven billion parameters, can achieve state-of-the-art performance… outperforming multiple GPT-4 models,” the paper states. The ultimate goal is to create agentic AI models capable of function-calling and task execution on devices, minimizing the need for extensive external infrastructure and enabling self-sufficient operations. Dr. Eli David, Co-Founder of the cybersecurity firm Deep Instinct, commented on X, “Smaller, more efficient models are the way to go for widespread deployment of LLMs.”
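Function-calling, as benchmarked here, means the model replies with a structured call rather than prose, and the application validates and executes it. The sketch below illustrates the pattern generically; the JSON shape and tool names are invented for this example, not the actual xLAM or OpenAI wire format:

```python
import json

# Generic sketch of function-calling: the model emits a structured
# call instead of free text, and the application dispatches it to a
# real function. The JSON shape is illustrative only -- not the actual
# xLAM or OpenAI format.

def get_weather(city: str) -> str:
    return f"22C and sunny in {city}"        # stub for an external API

def book_meeting(day: str, hour: int) -> str:
    return f"meeting booked {day} at {hour}:00"

TOOLS = {"get_weather": get_weather, "book_meeting": book_meeting}

def dispatch(model_output: str) -> str:
    """Parse the model's structured call and run the matching tool."""
    call = json.loads(model_output)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['name']}")
    return fn(**call["arguments"])

# Pretend the LLM turned "What's the weather in Paris?" into this call:
output = dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}')
print(output)  # 22C and sunny in Paris
```

The benchmark cited in the article measures exactly this skill: how reliably a model maps a natural-language request to the right tool name and argument values.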

AI Agents in Line at HR

AI Agents in Line at HR may only be a satirical cartoon for a very short time. Sorry, Far Side, but your AI bits may not be able to keep up with AI. July, 2034 — A new software unicorn has just emerged from behind a bar in a pub in East London. Unicorn, by the way, describes a startup company valued at over $1 billion, not necessarily one with a billion-dollar concept. Back to East London behind the soggy bar. Hey, it’s our fantasy. Besides, if Amazon can start in a garage, isn’t anything possible? The CEO logs in as usual and gathers daily updates from the team. The Chief Technology Officer is suggesting a new feature to deploy. The Chief Product Officer wants to redesign the CRM (or whatever CRM has evolved to) integration. The Chief Revenue Officer is showing off the new pipeline, forecast by Accountant in a Box. The Chief Customer Officer is discussing the latest customer levitation tools and product feedback. The Chief Information Security Officer has found a new privacy conflict, which they are addressing with a newly-revised infrastructure set-up. And the Head of HR is fretting about the latest round of IT candidates. This sounds like every software business you’ve ever heard of. But the difference is that the CEO’s teammates are entirely AI, not human: The CTO is Lovable. The CPO is Cogna. The CCO is Gradient Labs. The CRO is 11x. The CISO is Zylon. Back to 2024: The Rise of AI Agents In 2024, the hottest topic in software is AI agents, or Agentic AI. Founders are rapidly standing up agentic applications that can solve specific needs in functions like sales and customer services — without a human required. Software buyers, seeing real opportunities to quickly improve their P&L, are swiftly building or purchasing these agentic products. Investors have poured hundreds of millions of dollars into startups in this space in recent months. Even Salesforce wasn’t launched with a silver AI spoon in its mouth.
Salesforce began investing in artificial intelligence (AI) in 2014, when the company started acquiring machine learning startups and announced its Customer Success Platform. In 2016, Salesforce launched Einstein, its AI platform that supports several of its cloud services. Einstein is built into Salesforce products and includes features like natural language processing, machine learning, and predictive analytics. It helps organizations automate processes, make decisions based on insights, and improve the customer experience. Less than ten years ago, folks. Salesforce’s large store of data has helped the company address AI challenges quickly and with quality. The company’s data cloud offering provides AI with the right information at the right time, which can reduce friction and improve the customer experience. To catalyze this evolution, Salesforce strategically acquired RelateIQ in 2014. This move injected machine learning into the Salesforce ecosystem, capturing workplace communications data and providing valuable insights. Europe is home to many of these exciting companies. For example, H, a French AI agent startup, raised a $220 million seed round in May.
Beyond RPA: The New Wave of AI Agents AI agents represent a significant step-change from Robotic Process Automation (RPA) bots, which, as explored last year, have several limitations due to their deterministic nature. Next-generation AI agents are non-deterministic, meaning that instead of stopping at a “dead end,” they can learn from mistakes and adjust their series of tasks. Not entirely unlike the mouse running the same maze over and over for the cheese. Eventually Mr. Squeakers learns which paths are dead ends and avoids them by making better choices at intersections. In AI agents, this makes them suited to complex and unstructured tasks and means they can transform the journey from intent to implementation in software development. They can deliver “pure work,” rather than acting only as a helpful co-pilot. The rise of AI agents is not only an opportunity to expand automation beyond what is possible with RPA but also to broadly redefine how knowledge work is performed. And by whom. And even how it is defined. Given the right guardrails, next-generation AI agents have the potential to effectively and safely replace knowledge workers in many business scenarios. AI Agents in Action These agents are about to revolutionize the world of work as we know it and are already getting started. For example, Klarna recently revealed that its AI agent system handled two-thirds of customer chats in its first month in operation. While HR may not be swamped with AI CVs yet, it is certainly fathomable. One would suppose those candidates would have to be reviewed and interviewed by IT, not just HR. Here’s another deep thought. The term internet of things (IoT) first appeared in a speech by Peter T. Lewis in September 1985. The Internet of Things (IoT) is a network of physical devices that can collect and transmit data over the internet using sensors, software, and other technologies.
IoT devices can communicate with each other and with the cloud, and can even perform data analysis and be controlled remotely. The IoT concept originally centered on smart homes, health care environments, office spaces, and transportation. Only recently have we begun to think of the IoT as including the actual computers, or AI, in addition to sensor-equipped devices. It isn't exactly a chicken-and-egg question, but more of a
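The maze analogy above (non-deterministic retries plus a memory of dead ends) can be sketched in a few lines of Python. The maze, function names, and learning rule here are illustrative assumptions for the Mr. Squeakers story, not any vendor's actual agent framework:

```python
import random

# A tiny maze: each cell maps to the cells reachable from it.
MAZE = {
    "start": ["a", "b"],
    "a": [],                  # dead end
    "b": ["c", "d"],
    "c": [],                  # dead end
    "d": ["cheese"],
}

def run_episode(known_dead_ends):
    """One attempt at the maze; returns (path, newly discovered dead ends)."""
    path, cell = ["start"], "start"
    while cell != "cheese":
        options = [n for n in MAZE[cell] if n not in known_dead_ends]
        if not options:                # stuck: remember this cell is a dead end
            return path, {cell}
        cell = random.choice(options)  # non-deterministic choice at each fork
        path.append(cell)
    return path, set()

def learn_maze(max_episodes=20):
    """Retry the maze, accumulating dead-end knowledge, until the cheese is found."""
    dead_ends = set()
    for _ in range(max_episodes):
        path, found = run_episode(dead_ends)
        if path[-1] == "cheese":
            return path, dead_ends
        dead_ends |= found             # learn from the mistake
    return None, dead_ends

path, learned = learn_maze()
print(path)   # a successful route to the cheese, avoiding known dead ends
```

Unlike a deterministic RPA bot, which would fail the same way every run, this agent's later attempts differ from earlier ones because it carries forward what it learned.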

Agentic AI is Here
Embracing the Era of Agentic AI: Redefining Autonomous Systems

A new paradigm in artificial intelligence, known as "Agentic Artificial Intelligence," is poised to revolutionize the capabilities of autonomous systems. This cutting-edge technology represents a significant leap forward in AI-driven decision-making and action, promising transformative impacts across industries including healthcare, manufacturing, IT, finance, marketing, and HR. Agents are the way to go! There are no two ways about it. Looking at the progression of large language model based applications since last year, it's not hard to see that the agentic approach (agents as reusable, specific, dedicated single units of work) is the way to build generative AI applications.

What is Agentic AI?

Agentic Artificial Intelligence marks a departure from traditional AI models that primarily focus on passive observation and analysis. Unlike its predecessors, which often require human intervention to execute tasks, Agentic AI systems possess the autonomy to initiate actions independently based on their own assessments. This allows them to navigate much more complex environments and undertake tasks with a level of initiative and adaptability previously unseen. At least outside of sci-fi movies.

Real-World Applications of Agentic Artificial Intelligence

Healthcare

In healthcare, Agentic AI systems are transforming patient care. These systems autonomously monitor vital signs, administer medication, and assist in surgical procedures with unparalleled precision. By augmenting healthcare professionals' capabilities, these AI-driven agents enhance patient outcomes and streamline care processes. Augmenting is the key word here.

Manufacturing and Logistics

In manufacturing and logistics, Agentic AI optimizes operations and boosts efficiency. Intelligent agents handle predictive maintenance of machinery, autonomous inventory management, and robotic assembly.
Leveraging advanced algorithms and sensor technologies, these systems anticipate issues, coordinate complex workflows, and adapt to real-time production demands, driving a shift towards fully autonomous production environments.

Customer Service

Within enterprises, AI agents are revolutionizing business operations across various departments. In customer service, AI-powered chatbots with Agentic Artificial Intelligence capabilities engage with customers in natural language, providing personalized assistance and resolving queries efficiently. This enhances customer satisfaction and allows human agents to focus on more complex tasks.

Marketing and Sales

Agentic Artificial Intelligence empowers marketing and sales teams to analyze vast datasets, identify trends, and personalize campaigns with unprecedented precision. By understanding customer behavior and preferences at a granular level, AI agents optimize advertising strategies, maximize conversion rates, and drive revenue growth.

Finance and Accounting

In finance and accounting, Agentic AI streamlines processes like invoice processing, fraud detection, and risk management. These AI-driven agents analyze financial data in real time, flag anomalies, and provide insights that enable faster, more informed decision-making, thereby improving operational efficiency.

Ethical Considerations of Agentic Artificial Intelligence

The rise of Agentic AI also brings significant ethical and societal challenges. Concerns about data privacy, algorithmic bias, and job displacement necessitate robust regulation and ethical frameworks to ensure responsible and equitable deployment of AI technologies.

Navigating the Future with Agentic AI

The advent of Agentic AI ushers in a new era of autonomy and innovation in artificial intelligence. As these intelligent agents permeate various facets of our lives and enterprises, they present both challenges and opportunities.
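The finance use case above boils down to an agent-style observe, assess, act loop over a stream of transactions. A minimal sketch, assuming a simple z-score test against a running baseline (the data, thresholds, and function names are illustrative, not a production fraud-detection method):

```python
import statistics

def assess(history, amount, threshold=3.0):
    """Flag `amount` if it sits more than `threshold` std devs from the baseline mean."""
    if len(history) < 2:
        return False                    # not enough baseline yet
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > threshold

def monitor(transactions):
    """Observe each transaction, assess it, and act by collecting flagged items."""
    history, flagged = [], []
    for amount in transactions:
        if assess(history, amount):
            flagged.append(amount)      # act: escalate for human review
        history.append(amount)          # keep updating the baseline
    return flagged

print(monitor([102, 98, 101, 99, 100, 5000, 97, 103]))  # → [5000]
```

A real agent would act autonomously on the flag (hold the payment, open a case) rather than just collecting it, and would use far richer models, but the loop structure is the same.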
To navigate this new world, we must approach it with foresight, responsibility, and a commitment to harnessing technology for the betterment of humanity.
