PHI Archives - gettectonic.com - Page 4
Salesforce Flow Tests

Salesforce Flow is Here

Hello, Salesforce Flow. Goodbye, Workflow Rules and Process Builder. As Bob Dylan famously sang, “The times they are a-changin’.” If your nonprofit is still relying on Workflow Rules and Process Builder to automate tasks in Salesforce, it’s time to prepare for change. These tools are being retired, but there’s no need to panic: Salesforce Flow, a more powerful and versatile automation tool, is ready to take the lead.

Why Move to Salesforce Flow?

Salesforce is consolidating its automation features into one unified platform: Flow. This shift comes with significant benefits for nonprofits:

What This Means for Nonprofits

While existing Workflow Rules and Process Builders will still function for now, Salesforce plans to end support by December 31, 2025. After that there will be no more updates or bug fixes, and unsupported automations could break unexpectedly soon after the deadline. To avoid disruptions, nonprofits should start migrating their automations to Flow sooner rather than later.

How to Transition to Salesforce Flow

Resources to Simplify Migration:

Planning Your Migration: Start by auditing your existing automations to determine which Workflow Rules and Process Builders need to be transitioned. Think strategically about how to improve processes and leverage Flow’s expanded capabilities.

What Can Flow Do for Your Nonprofit?

Salesforce Flow empowers nonprofits to automate processes in innovative ways:

Don’t Go It Alone

Transitioning to Salesforce Flow may seem overwhelming, but it’s a chance to elevate your nonprofit’s automation capabilities. Whether you need help with migration tools, strategic planning, or Flow development, you don’t have to do it alone. Reach out to our support team or contact us to get started. Together, we can make this transition seamless and set your nonprofit up for long-term success with Salesforce Flow.
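The audit step above can be sketched in code. This is a minimal, hypothetical Python sketch of triaging an automation inventory against the end-of-support date; the record layout and the `plan_migration` helper are illustrative stand-ins, not a Salesforce API:

```python
# Hypothetical sketch: triaging legacy automations ahead of the
# December 31, 2025 end-of-support date. The record format and the
# helper name `plan_migration` are illustrative, not a Salesforce API.
from datetime import date

LEGACY_TYPES = {"WorkflowRule", "ProcessBuilder"}

def plan_migration(automations, today=date(2025, 1, 1)):
    """Split an inventory of automations into those that must move to Flow."""
    deadline = date(2025, 12, 31)
    to_migrate = [a for a in automations if a["type"] in LEGACY_TYPES and a["active"]]
    return {
        "to_migrate": to_migrate,
        "days_until_end_of_support": (deadline - today).days,
    }

inventory = [
    {"name": "Notify on new donor", "type": "WorkflowRule", "active": True},
    {"name": "Grant intake", "type": "Flow", "active": True},
    {"name": "Old email alert", "type": "ProcessBuilder", "active": False},
]
plan = plan_migration(inventory)
print(len(plan["to_migrate"]))  # only the active legacy automation is flagged
```

Inactive legacy automations are skipped here on the assumption that they can simply be deleted rather than rebuilt; a real audit would review those too.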
Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
AI Agents and Digital Transformation

Ready for AI Agents

Brands that can effectively integrate agentic AI into their operations stand to gain a significant competitive edge. But as with any innovation, success will depend on balancing the promise of automation with the complexities of trust, privacy, and user experience.

Read More
AI Risk Management

AI Risk Management

Organizations must acknowledge the risks associated with implementing AI systems in order to use the technology ethically and minimize liability. Throughout history, companies have had to manage the risks of adopting new technologies, and AI is no exception. Some AI risks resemble those encountered when deploying any new technology or tool: poor strategic alignment with business goals, a lack of necessary skills to support initiatives, and failure to secure buy-in across the organization. For these challenges, executives should rely on the best practices that have guided the successful adoption of other technologies. In the case of AI, this includes:

However, AI introduces unique risks that must be addressed head-on. Here are 15 areas of concern that can arise as organizations implement and use AI technologies in the enterprise:

Managing AI Risks

While AI risks cannot be eliminated, they can be managed. Organizations must first recognize and understand these risks, then implement policies to minimize their negative impact. These policies should ensure the use of high-quality data, require testing and validation to eliminate biases, and mandate ongoing monitoring to identify and address unexpected consequences. Furthermore, ethical considerations should be embedded in AI systems, with frameworks in place to ensure AI produces transparent, fair, and unbiased results. Human oversight is essential to confirm these systems meet established standards.

For successful risk management, the involvement of the board and the C-suite is crucial. As noted, “This is not just an IT problem, so all executives need to get involved in this.”
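One of the policies named above, testing and validation to catch bias, can be illustrated with a toy check. This sketch compares positive-outcome rates across groups and flags the model when the gap is too wide; the data and the 0.2 threshold are assumptions for illustration, not a standard:

```python
# Illustrative bias check: compare positive-outcome rates across groups
# and flag the model for review when the gap exceeds a policy threshold.
# The decisions and the 0.2 threshold are made up for demonstration.
def parity_gap(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    return max(rates.values()) - min(rates.values())

decisions = {
    "group_a": [1, 1, 0, 1],  # 75% positive outcomes
    "group_b": [1, 0, 0, 1],  # 50% positive outcomes
}
gap = parity_gap(decisions)
needs_review = gap > 0.2  # assumed policy threshold
print(gap, needs_review)
```

A check like this is one input to the human oversight the article calls for, not a substitute for it.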

Read More
Cohere-Powered Slack Agents

Cohere-Powered Slack Agents

Salesforce AI and Cohere-Powered Slack Agents: Seamless CRM Data Interaction and Enhanced Productivity

Slack agents, powered by Salesforce AI and integrated with Cohere, enable seamless interaction with CRM data within the Slack platform. These agents allow teams to use natural language to surface data insights and take action, simplifying workflows. With Slack’s AI Workflow Builder and support for third-party AI agents, including Cohere, productivity is further enhanced through automated processes and customizable AI assistants. By leveraging these technologies, Slack agents give users direct access to CRM data and AI-powered insights, improving efficiency and collaboration.

Key Features of Slack Agents: Salesforce AI and Cohere

Productivity Enhancements with Slack Agents: Salesforce AI and Cohere

AI Agent Capabilities in Slack: Salesforce and Cohere

Data Security and Compliance for Slack Agents

FAQ

What are Slack agents, and how do they integrate with Salesforce AI and Cohere?
Slack agents are AI-powered assistants that enable teams to interact with CRM data directly within Slack. Salesforce AI agents allow natural language data interactions, while Cohere’s integration enhances productivity with customizable AI assistants and automated workflows.

How do Salesforce AI agents in Slack improve team productivity?
Salesforce AI agents enable users to interact with both CRM and conversational data, update records, and analyze opportunities using natural language. This integration improves workflow efficiency, leading to a reported 47% productivity boost.

What features does the Cohere integration with Slack AI offer?
Cohere integration offers customizable AI assistants that can help generate workflows, summarize channel content, and provide intelligent responses to user queries within Slack.
How do Slack agents handle data security and compliance?
Slack agents leverage cloud-native DLP solutions, automatically detecting sensitive data across different file types and setting up automated remediation processes for enhanced security and compliance.

Can Slack agents work with AI providers beyond Salesforce and Cohere?
Yes. Slack supports AI agents from various providers. In addition to Salesforce AI and Cohere, integrations include Adobe Express, Anthropic, Perplexity, IBM, and Amazon Q Business, offering users a wide array of AI-powered capabilities.
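The DLP behavior described in the FAQ can be sketched as a simple pattern scan. Real cloud DLP services use far richer detectors and remediation flows; the two regex patterns here (US SSN and email) are illustrative only:

```python
# Minimal sketch of the kind of pattern-based detection a DLP pipeline
# performs before triggering remediation. Real cloud DLP services use
# far richer detectors; these two patterns are illustrative.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_message(text):
    """Return the sensitive-data categories detected in a message."""
    return sorted(kind for kind, pat in PATTERNS.items() if pat.search(text))

findings = scan_message("Patient SSN 123-45-6789, contact nurse@example.org")
print(findings)
```

In a real pipeline, a non-empty findings list would route the message to quarantine or redaction rather than just printing.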

Read More
AI in Networking

AI in Networking

AI Tools in Networking: Tailoring Capabilities to Unique Needs

AI tools are becoming increasingly common across industries, offering a wide range of functionality. However, network engineers may not require every capability these tools provide. Each network has distinct requirements that align with specific business objectives, so network engineers and developers must select AI toolsets tailored to their networks’ needs. While network teams often want similar AI capabilities, they also encounter common challenges in integrating these tools into their systems.

The Rise of AI in Networking

Though AI is not a new concept, having existed for decades in the form of automated and expert systems, it is gaining unprecedented attention. According to Jim Frey, principal analyst for networking at TechTarget’s Enterprise Strategy Group, many organizations have not fully grasped AI’s potential in production environments over the past three years. “AI has been around for a long time, but the interesting thing is, only a minority—not even half—have really said they’re using it effectively in production for the last three years,” Frey noted.

Generative AI (GenAI) has significantly contributed to this renewed interest. Shamus McGillicuddy, vice president of research at Enterprise Management Associates, categorizes AI tools into two main types: GenAI and AIOps (AI for IT operations). “Generative AI, like ChatGPT, has recently surged in popularity, becoming a focal point of discussion among IT professionals,” McGillicuddy explained. “AIOps, on the other hand, encompasses machine learning, anomaly detection, and analytics.”

The increasing complexity of networks is another factor driving the adoption of AI in networking. Frey highlighted that the demands of modern network environments are beyond human capability to manage manually, making AI engines a vital solution.
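AIOps, as McGillicuddy describes it, leans heavily on anomaly detection. A toy sketch of one classic approach, flagging latency samples far from the mean, makes the idea concrete; the two-sigma threshold and the sample data are assumptions, and production AIOps platforms use far more sophisticated models:

```python
# Toy AIOps-style anomaly check: flag latency samples more than two
# standard deviations from the mean. Threshold and data are illustrative;
# real platforms use far more sophisticated models.
from statistics import mean, stdev

def anomalies(samples, threshold=2.0):
    """Return the samples that deviate strongly from the rest."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - mu) > threshold * sigma]

latency_ms = [12, 13, 11, 12, 14, 13, 12, 95]  # one obvious spike
print(anomalies(latency_ms))
```

Consistent with the preference for manual oversight quoted below, a tool like this would surface the spike for an engineer to approve a fix, not remediate automatically.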
Essential AI Tool Capabilities for Networks

While individual network needs vary, many network engineers seek similar functionality when integrating AI. Commonly desired capabilities include:

According to McGillicuddy’s research, network optimization and automated troubleshooting are among the most popular use cases for AI. However, many professionals prefer to retain manual oversight over fixes. “Automated troubleshooting can identify and analyze issues, but typically, people want to approve the proposed fixes,” McGillicuddy stated.

Many of these capabilities are critical for enhancing security and mitigating threats. Frey emphasized that networking professionals increasingly view AI as a tool to improve organizational security. DeCarlo echoed this sentiment, noting that network managers share similar objectives with security professionals regarding proactive problem recognition. Frey also mentioned alternative use cases for AI, such as documentation and change recommendations, which, while less popular, can offer significant value to network teams.

Ultimately, the relevance of any AI capability hinges on its fit within the network environment and team needs. “I don’t think you can prioritize one capability over another,” DeCarlo remarked. “It depends on the tools being used and their effectiveness.”

Generative AI: A New Frontier

Despite its recent emergence, GenAI has quickly become an asset in the networking field. McGillicuddy noted that in the past year and a half, network professionals have adopted GenAI tools, with ChatGPT among the most recognized examples. “One user reported that leveraging ChatGPT could reduce a task that typically takes four hours down to just 10 minutes,” McGillicuddy said. However, he cautioned that users must understand the limitations of GenAI, as mistakes can occur.
“There’s a risk of errors or ‘hallucinations’ with these tools, and having blind faith in their outputs can lead to significant network issues,” he warned.

In addition to ChatGPT, vendors are developing GenAI interfaces for their products, including virtual assistants. According to McGillicuddy’s findings, common use cases for vendor GenAI products include:

DeCarlo added that GenAI tools offer valuable training capabilities due to their rapid processing speeds and in-depth analysis, which can expedite knowledge acquisition within the network. Frey highlighted that GenAI’s rise is attributed to its ability to outperform older systems lacking sophistication. Nevertheless, the complexity of GenAI infrastructures has led to demand for AIOps tools to manage these systems effectively. “We won’t be able to manage GenAI infrastructures without the support of AI tools, as human capabilities cannot keep pace with rapid changes,” Frey asserted.

Challenges in Implementing AI Tools

While AI tools present significant benefits for networks, network engineers and managers must navigate several challenges before integration.

Data Privacy, Collection, and Quality

Data usage remains a critical concern for organizations considering AIOps and GenAI tools. Frey noted that the diverse nature of network data, combining operational information with personally identifiable information, heightens data privacy concerns. For GenAI, McGillicuddy pointed out the importance of validating AI outputs and ensuring that high-quality data is used for training. “If you feed poor data to a generative AI tool, it will struggle to accurately understand your network,” he explained.

Complexity of AI Tools

Frey and McGillicuddy agreed that the complexity of both AI and network systems can hinder effective deployment. Frey mentioned that AI systems, especially GenAI, require careful tuning and strong recommendations to minimize inaccuracies.
McGillicuddy added that intricate network infrastructures, particularly those involving multiple vendors, can limit the effectiveness of AIOps components, which are often specialized for specific systems.

User Uptake and Skills Gaps

User adoption of AI tools poses a significant challenge. Proper training is essential to realize the full benefits of AI in networking. Some network professionals may be resistant to using AI, while others may lack the knowledge to integrate these tools effectively. McGillicuddy noted that AIOps tools are often less intuitive than GenAI, so users need a certain level of expertise to extract value. “Understanding how tools function and identifying potential gaps can be challenging,” DeCarlo added. The learning curve can be steep, particularly for teams accustomed to longstanding tools.

Integration Issues

Integration challenges can further complicate user adoption. McGillicuddy highlighted two dimensions of this issue: tools and processes. On the tools side, concerns arise about harmonizing GenAI with existing systems. “On the process side, it’s crucial to ensure that teams utilize these tools effectively,” he said. DeCarlo cautioned that organizations might need to create in-house supplemental tools to bridge integration gaps, complicating the synchronization of vendor AI

Read More
LLMs and AI

LLMs and AI

Large Language Models (LLMs): Revolutionizing AI and Custom Solutions

Large Language Models (LLMs) are transforming artificial intelligence by enabling machines to generate and comprehend human-like text, making them indispensable across numerous industries. The global LLM market is experiencing explosive growth, projected to rise from $1.59 billion in 2023 to $259.8 billion by 2030. This surge is driven by advances in AI and Natural Language Processing (NLP), large datasets, rising demand for automated content creation, and the growing importance of seamless human-machine interaction. Additionally, private LLMs are gaining traction as businesses seek more control over their data and customization. These private models provide tailored solutions, reduce dependency on third-party providers, and enhance data privacy. This guide will walk you through building your own private LLM, offering valuable insights for both newcomers and seasoned professionals.

What Are Large Language Models?

Large Language Models are advanced AI systems that generate human-like text by processing vast amounts of data with sophisticated neural networks, such as transformers. These models excel at tasks such as content creation, language translation, question answering, and conversation, making them valuable across industries, from customer service to data analysis. LLMs are generally classified into three types:

LLMs learn language rules by analyzing vast text datasets, much as reading numerous books helps someone understand a language. Once trained, these models can generate content, answer questions, and engage in meaningful conversations.
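The idea that models learn language patterns from text can be illustrated with a deliberately tiny stand-in: a bigram counter that predicts the most frequent next word. Real LLMs use transformer networks, not counts, so treat this strictly as an analogy:

```python
# Toy illustration of the statistical idea behind language modeling:
# count which word tends to follow which, then predict the most likely
# next word. Real LLMs use transformer networks, not bigram counts.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Build a table of next-word counts from a training text."""
    words = text.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    return table[word].most_common(1)[0][0] if table[word] else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Scaling this counting idea up to billions of parameters and trillions of tokens, with neural networks in place of count tables, is what gives LLMs their fluency.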
For example, an LLM can write a story about a space mission based on knowledge gained from reading space adventure stories, or explain photosynthesis using information drawn from biology texts.

Building a Private LLM

Data Curation for LLMs

Recent LLMs, such as Llama 3 and GPT-4, are trained on massive datasets: Llama 3 on 15 trillion tokens and GPT-4 on 6.5 trillion tokens. These datasets are drawn from diverse sources, including social media (140 trillion tokens), academic texts, and private data, with sizes ranging from hundreds of terabytes to multiple petabytes. This breadth of training enables LLMs to develop a deep understanding of language, covering diverse patterns, vocabularies, and contexts. Common data sources for LLMs include:

Data Preprocessing

After data collection, the data must be cleaned and structured. Key steps include:

LLM Training Loop

Key training stages include:

Evaluating Your LLM

After training, it is crucial to assess the LLM’s performance using industry-standard benchmarks. When fine-tuning LLMs for specific applications, tailor your evaluation metrics to the task. For instance, in healthcare, matching disease descriptions with appropriate codes may be a top priority.

Conclusion

Building a private LLM provides unmatched customization, enhanced data privacy, and optimized performance. From data curation to model evaluation, this guide has outlined the essential steps to create an LLM tailored to your specific needs. Whether you’re just starting or seeking to refine your skills, building a private LLM can empower your organization with state-of-the-art AI capabilities. For expert guidance or to kickstart your LLM journey, feel free to contact us for a free consultation.
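Two of the preprocessing steps involved, cleaning and deduplication, can be sketched minimally. Production pipelines also handle near-duplicate detection, quality filtering, and tokenization; this is a simplified illustration:

```python
# Sketch of two common LLM data-preprocessing steps: normalizing
# whitespace and dropping exact-duplicate documents. Production
# pipelines also do near-duplicate detection, filtering, tokenization.
def preprocess(docs):
    """Clean each document and keep only the first copy of each."""
    seen, cleaned = set(), []
    for doc in docs:
        text = " ".join(doc.split())   # collapse runs of whitespace
        if text and text not in seen:  # drop empties and exact dupes
            seen.add(text)
            cleaned.append(text)
    return cleaned

corpus = ["Hello   world", "Hello world", "", "Photosynthesis 101"]
print(preprocess(corpus))
```

Deduplication matters at LLM scale because repeated documents skew what the model memorizes; even this exact-match version removes a surprising fraction of web-scraped text.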

Read More
Generative AI Energy Consumption Rises

Generative AI Energy Consumption Rises

Generative AI Energy Consumption Rises, but Impact on ROI Unclear

The energy costs associated with generative AI (GenAI) are often overlooked in enterprise financial planning. However, industry experts suggest that IT leaders should account for the power consumption that comes with adopting this technology. When building a business case for generative AI, some costs are evident, like large language model (LLM) fees and SaaS subscriptions. Other costs, such as preparing data, upgrading cloud infrastructure, and managing organizational change, are less visible but significant.

One often overlooked cost is the energy consumption of generative AI. Training LLMs and responding to user requests, whether answering questions or generating images, demands considerable computing power. These tasks generate heat and require sophisticated cooling systems in data centers, which in turn consume additional energy.

Despite this, most enterprises have not focused on the energy requirements of GenAI, though the issue is gaining attention at a broader level. The International Energy Agency (IEA) has forecast that electricity consumption from data centers, AI, and cryptocurrency could double by 2026. By that time, data centers’ electricity use could exceed 1,000 terawatt-hours, equivalent to Japan’s total electricity consumption. Goldman Sachs has also flagged the growing energy demand, attributing it partly to AI; the firm projects that global data center electricity use could more than double by 2030, fueled by AI and other factors.

ROI Implications of Energy Costs

The extent to which rising energy consumption will affect GenAI’s return on investment (ROI) remains unclear. For now, the perceived benefits of GenAI seem to outweigh concerns about energy costs. Most businesses have not been directly affected, as these costs tend to fall on hyperscalers.
For instance, Google reported a 13% increase in greenhouse gas emissions in 2023, largely due to AI-related energy demands in its data centers. Scott Likens, PwC’s global chief AI engineering officer, noted that while energy consumption isn’t a barrier to adoption, it should still be factored into long-term strategies. “You don’t take it for granted. There’s a cost somewhere for the enterprise,” he said.

Energy Costs: Hidden but Present

Although energy expenses may not appear on an enterprise’s invoice, they are still present. Generative AI’s energy consumption is tied to both model training and inference: each time a user makes a query, the system expends energy to generate a response. While the energy used for an individual query is minor, the cumulative effect across millions of users can add up. How these costs are passed to customers is somewhat opaque. Licensing fees for enterprise versions of GenAI products likely include energy costs, spread across the user base. According to PwC’s Likens, the costs associated with training models are shared among many users, reducing the burden on individual enterprises. On the inference side, GenAI vendors charge for tokens, which correspond to computational power. Although increased token usage signals higher energy consumption, the financial impact on enterprises has so far been minimal, especially as token costs have decreased. It is a bit like buying an EV to save on gas, only to spend hundreds of dollars and lose hours at charging stations.

Energy as an Indirect Concern

While energy costs haven’t been top of mind for GenAI adopters, organizations may address the issue indirectly by focusing on other deployment challenges, such as reducing latency and improving cost efficiency. Newer models, such as OpenAI’s GPT-4o mini, are more economical and have helped organizations scale GenAI without prohibitive costs. Organizations may also use smaller, fine-tuned models to decrease latency and energy consumption.
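The cumulative-effect point can be made concrete with back-of-the-envelope arithmetic. The per-query energy figure below is an assumed illustrative value, not a measured one:

```python
# Back-of-the-envelope arithmetic: tiny per-query energy use adds up
# at scale. The 0.3 Wh/query figure is an assumption for illustration,
# not a measurement of any particular model.
WH_PER_QUERY = 0.3          # assumed energy per LLM query, watt-hours
queries_per_day = 10_000_000

daily_kwh = WH_PER_QUERY * queries_per_day / 1000   # Wh -> kWh
annual_mwh = daily_kwh * 365 / 1000                 # kWh -> MWh
print(daily_kwh, round(annual_mwh))
```

Under these assumptions, ten million daily queries consume about 3,000 kWh per day, on the order of 1,100 MWh per year, before counting training or cooling overhead.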
By adopting multimodel approaches, enterprises can choose models based on the complexity of a task, optimizing for both speed and energy efficiency.

The Data Center Dilemma

As enterprises weigh GenAI’s energy demands, data centers face the challenge head-on, investing in more sophisticated cooling systems to handle the heat generated by AI workloads. According to the Dell’Oro Group, the data center physical infrastructure market grew in the second quarter of 2024, signaling the start of the “AI growth cycle” for infrastructure sales, particularly thermal management systems. Liquid cooling, which is more efficient than air cooling, is gaining traction as a way to manage the heat from high-performance computing, and it is expected to grow rapidly in the coming years as demand for AI workloads continues to increase.

Nuclear Power and AI Energy Demands

To meet AI’s growing energy demands, some hyperscalers are exploring nuclear energy for their data centers. AWS, Google, and Microsoft are among the companies pursuing this option, with AWS acquiring a nuclear-powered data center campus earlier this year. Nuclear power could help these tech giants keep pace with AI’s energy requirements while also meeting sustainability goals. Then again, tying broader AI adoption to a build-out of nuclear power plants may cost the technology some fans.

As GenAI continues to evolve, both energy costs and efficiency are likely to play a greater role in decision-making. PwC has already begun including carbon impact in its GenAI value framework, which assesses the full scope of generative AI deployments. “The cost of carbon is in there, so we shouldn’t ignore it,” Likens said.

Read More
AI Agents and Digital Transformation

AI Agents and Digital Transformation

In the rapidly developing world of technology, Artificial Intelligence (AI) is revolutionizing industries and reshaping how we interact with digital systems. One of the most promising advancements within AI is the development of AI agents. These intelligent entities, often powered by Large Language Models (LLMs), are driving the next wave of digital transformation by enabling automation, personalization, and enhanced decision-making across various sectors. AI agents and digital transformation are here to stay.

What Is an AI Agent?

An AI agent, or intelligent agent, is a software entity capable of perceiving its environment, reasoning about its actions, and autonomously working toward specific goals. These agents mimic human-like behavior using advanced algorithms, data processing, and machine-learning models to interact with users and complete tasks.

From LLMs to AI Agents: An Evolution

The evolution of AI agents is closely tied to the rise of Large Language Models (LLMs). Models like GPT (Generative Pre-trained Transformer) have showcased remarkable abilities to understand and generate human-like text. This development has enabled AI agents to interpret complex language inputs, facilitating advanced interactions with users.

Key Capabilities of LLM-Based Agents

LLM-powered agents possess several key advantages:

Two Major Types of LLM Agents

LLM agents are classified into two main categories:

Multi-Agent Systems (MAS)

A Multi-Agent System (MAS) is a group of autonomous agents working together to achieve shared goals or solve complex problems. MAS applications span robotics, economics, and distributed computing, where agents interact to optimize processes.
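The perceive-reason-act cycle that defines an agent can be sketched with a deliberately simple rule-based example. The thermostat "agent" is a stand-in; a real LLM-based agent would replace the rule in `reason()` with model calls and tool use:

```python
# Minimal sketch of an agent's perceive-reason-act loop. The thermostat
# rule is a stand-in: real LLM-based agents replace reason() with model
# calls, planning, and tool use.
class SimpleAgent:
    def __init__(self, goal_temp):
        self.goal_temp = goal_temp  # the goal the agent works toward

    def perceive(self, environment):
        """Observe the environment."""
        return environment["temperature"]

    def reason(self, temp):
        """Decide on an action given the observation and the goal."""
        if temp < self.goal_temp - 1:
            return "heat_on"
        if temp > self.goal_temp + 1:
            return "heat_off"
        return "idle"

    def act(self, environment):
        """Run one perceive-reason-act cycle."""
        return self.reason(self.perceive(environment))

agent = SimpleAgent(goal_temp=21)
print(agent.act({"temperature": 17}))  # cold room -> "heat_on"
```

Even this trivial loop has the defining shape: the agent observes, decides autonomously against a goal, and acts without a human in each cycle.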
AI Agent Architecture and Key Elements

AI agents generally follow a modular architecture comprising:

Learning Strategies for LLM-Based Agents

AI agents use various learning techniques, including supervised, reinforcement, and self-supervised learning, to adapt and improve their performance in dynamic environments.

How Autonomous AI Agents Operate

Autonomous AI agents act independently of human intervention by perceiving their surroundings, reasoning through possible actions, and making decisions autonomously to achieve set goals.

AI Agents’ Transformative Power Across Industries

AI agents are transforming numerous industries by automating tasks, enhancing efficiency, and providing data-driven insights. Here’s a look at some key use cases:

Platforms Powering AI Agents

The Benefits of AI Agents and Digital Transformation

AI agents offer several advantages, including:

The Future of AI Agents

The potential of AI agents is immense. As AI technology advances, we can expect more sophisticated agents capable of complex reasoning, adaptive learning, and deeper integration into everyday tasks. The future promises a world where AI agents collaborate with humans to drive innovation, enhance efficiency, and unlock new opportunities for growth in the digital age. By partnering with AI development specialists at Tectonic, organizations can access cutting-edge solutions tailored to their needs, positioning themselves to stay ahead in the rapidly evolving AI-driven market.

Read More
GPUs and AI Development

GPUs and AI Development

Graphics processing units (GPUs) have become widely recognized for their growing role in AI development. However, a lesser-known but critical technology is also gaining attention: high-bandwidth memory (HBM). HBM is a high-density memory designed to overcome bottlenecks and maximize data transfer speeds between storage and processors. AI chipmakers like Nvidia rely on HBM for its superior bandwidth and energy efficiency. Its placement next to the GPU’s processor chip gives it a performance edge over traditional server RAM, which resides between storage and the processing unit. HBM’s lower power consumption makes it ideal for AI model training, which demands significant energy.

However, as the AI landscape transitions from model training to AI inferencing, HBM’s widespread adoption may slow. According to Gartner’s 2023 forecast, the use of accelerator chips incorporating HBM for AI model training is expected to decline from 65% in 2022 to 30% by 2027, as inferencing becomes more cost-effective with traditional technologies.

How HBM Differs from Other Memory

HBM shares similarities with other memory technologies, such as graphics double data rate (GDDR) memory, in delivering high bandwidth for graphics-intensive tasks. But HBM stands out because of its positioning. Unlike GDDR, which sits on the printed circuit board of the GPU, HBM is placed directly beside the processor, improving speed by reducing the signal delays caused by longer interconnections. This proximity, combined with its stacked DRAM architecture, boosts performance compared to GDDR’s side-by-side chip design.

However, the stacked approach adds complexity. HBM relies on through-silicon vias (TSVs), electrical connections drilled through the stacked DRAM chips, which require larger die sizes and increase production costs. According to analysts, this makes HBM more expensive and less efficient to manufacture than server DRAM, leading to higher yield losses during production.
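The bandwidth advantage of HBM's wide, stacked interface comes down to simple arithmetic: peak bandwidth is bus width times transfer rate. The figures below are representative of one HBM2e stack versus one GDDR6 chip, and are assumptions for illustration rather than specifications of any particular part:

```python
# Illustrative arithmetic: peak bandwidth = bus width (bytes) x transfer
# rate. Figures approximate one HBM2e stack vs one GDDR6 chip and are
# assumptions for illustration, not vendor specifications.
def peak_bandwidth_gb_s(bus_bits, gigatransfers_per_s):
    """Peak bandwidth in GB/s for a memory interface."""
    return bus_bits / 8 * gigatransfers_per_s

hbm_stack = peak_bandwidth_gb_s(1024, 3.2)  # very wide, modest clock
gddr6_chip = peak_bandwidth_gb_s(32, 16.0)  # narrow, fast clock
print(hbm_stack, gddr6_chip)
```

The wide-but-slow interface wins by roughly 6x per device here, which is why HBM sits beside the processor: routing a 1024-bit bus across a circuit board to a distant GDDR-style socket would be impractical.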
AI’s Demand for HBM

Despite its manufacturing challenges, demand for HBM is surging due to its importance in AI model training. Major suppliers like SK Hynix, Samsung, and Micron have expanded production to meet this demand, with Micron reporting that its HBM is sold out through 2025. TrendForce predicts that HBM will contribute to record revenues for the memory industry in 2025. The high demand for GPUs, especially from Nvidia, drives the need for HBM as AI companies focus on accelerating model training, and hyperscalers looking to monetize AI are investing heavily in HBM to speed up the process.

HBM’s Future in AI

While HBM has proven essential for AI training, its future is less certain as the focus shifts to AI inferencing, which requires less intensive memory resources. As inferencing becomes more prevalent, companies may opt for more affordable and widely available memory solutions. Experts also see HBM following the trajectory of other memory technologies, with continuous efforts to increase bandwidth and density: the next generation, HBM3E, is already in production, and HBM4 is planned for 2026, promising even higher speeds. Ultimately, the adoption of HBM will depend on market demand, especially from hyperscalers. If AI continues to push the limits of GPU performance, HBM could remain a critical component; if businesses prioritize cost efficiency over peak performance, its growth may level off.


Data Labeling

Data Labeling: Essential for Machine Learning and AI

Data labeling is the process of identifying and tagging data samples so they can be used to train machine learning (ML) models. While it can be done manually, software often assists in automating the process. Data labeling is critical for helping machine learning models make accurate predictions and is widely used in fields like computer vision, natural language processing (NLP), and speech recognition.

How Data Labeling Works

The process begins with collecting raw data, such as images or text, which is then annotated with specific labels to provide context for ML models. These labels need to be precise, informative, and independent to ensure high-quality model training. In computer vision, for instance, labeled images of animals let a model learn common features and correctly identify animals in new, unlabeled data. Similarly, in autonomous vehicles, labeling helps the AI differentiate between pedestrians, cars, and other objects, supporting safe navigation.

Why Data Labeling Is Important

Data labeling is integral to supervised learning, the branch of machine learning in which models are trained on labeled data. Through labeled examples, the model learns the relationships between input data and the desired output, which improves its accuracy in real-world applications. A machine learning algorithm trained on labeled emails, for example, can classify future emails as spam or not based on those labels. Labeling is also used in more advanced applications like self-driving cars, where the model must understand its surroundings by recognizing and labeling objects such as roads, signs, and obstacles.

The Data Labeling Process

Data labeling involves several key steps, from collection through annotation and review. Errors in labeling can negatively affect a model’s performance, so many organizations adopt a human-in-the-loop approach, involving people in quality control to improve the accuracy of labels.

Data Labeling vs. Data Classification vs. Data Annotation

Types of Data Labeling

Benefits and Challenges

Methods of Data Labeling

Companies can label data through various methods. Each organization must choose a method that fits its needs, based on factors like data volume, staff expertise, and budget.

The Growing Importance of Data Labeling

As AI and ML become more pervasive, the need for high-quality data labeling increases. Data labeling not only helps train models but also creates new jobs in the AI ecosystem. Companies like Alibaba, Amazon, Facebook, Tesla, and Waymo all rely on data labeling for applications ranging from e-commerce recommendations to autonomous driving.

Looking Ahead

Labeling tools are becoming more sophisticated, reducing the need for manual work while ensuring higher data quality. As data privacy regulations tighten, businesses must also ensure that labeling practices comply with local, state, and federal laws. In short, labeling is a crucial step in building effective machine learning models, driving innovation, and ensuring that AI systems perform accurately across a wide range of applications.
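The spam example above can be sketched as a tiny supervised learner: count word frequencies per label from labeled emails, then score new text with a naive Bayes-style calculation. This is an illustrative toy with made-up data, not a production labeling pipeline:

```python
from collections import Counter
import math

def train(labeled_emails):
    """Build per-label word counts from (text, label) training pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in labeled_emails:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher log-probability score (add-one smoothing)."""
    best_label, best_score = None, float("-inf")
    for label in counts:
        vocab = len(counts[label]) + 1
        size = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))  # label prior
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (size + vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

data = [("win a free prize now", "spam"),
        ("free money click now", "spam"),
        ("meeting notes attached", "ham"),
        ("lunch tomorrow with the team", "ham")]
counts, totals = train(data)
print(classify("claim your free prize", counts, totals))  # -> spam
```

The quality of the labels directly bounds the quality of the classifier, which is the point of the human-in-the-loop review step described above: a mislabeled training email propagates straight into the word counts.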


Third Wave of AI at Salesforce

The Third Wave of AI at Salesforce: How Agentforce Is Transforming the Landscape

At Dreamforce 2024, Salesforce unveiled several exciting innovations, with Agentforce taking center stage. This insight explores the key changes and enhancements designed to improve efficiency and elevate customer interactions.

Introducing Agentforce

Agentforce is a customizable AI agent builder that empowers organizations to create and manage autonomous agents for various business tasks. But what exactly is an agent? An agent is akin to a chatbot but goes beyond traditional capabilities. While typical chatbots are restricted to scripted responses and predefined questions, Agentforce agents leverage large language models (LLMs) and generative AI to understand customer inquiries in context. This enables them to make independent decisions, whether processing requests or resolving issues, using real-time data from your company’s customer relationship management (CRM) system.

The Role of Atlas

At the heart of Agentforce’s functionality lies the Atlas reasoning engine, which acts as the operational brain. Unlike standard assistive tools, Atlas is an agentic system with the autonomy to act on behalf of the user. Atlas formulates a plan based on the necessary actions and can adjust that plan in response to evaluations or new information. When it is time to engage, Atlas knows which business processes to activate and connects with customers or employees via their preferred channels. This sophisticated approach allows Agentforce to significantly enhance operational efficiency: by automating routine inquiries, it frees up your team to focus on more complex tasks, delivering a smoother experience for both staff and customers.

Speed to Value

One of Agentforce’s standout features is its emphasis on rapid implementation. Many AI projects can be resource-intensive and take months or even years to launch.
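The plan-evaluate-act cycle described for Atlas can be caricatured as a simple agent loop. Everything here, the step names and the tool registry alike, is invented for illustration; the real reasoning engine is proprietary:

```python
def run_agent(goal, tools, max_steps=5):
    """Toy agentic loop: plan steps for a goal, execute each with a tool,
    and adjust the plan (here: escalate to a human) on new information."""
    plan = list(tools)            # naive plan: try every registered tool in order
    transcript = []
    for step in plan[:max_steps]:
        result = tools[step](goal)
        transcript.append((step, result))
        if result == "escalate":  # evaluation step: hand off when policy demands it
            transcript.append(("handoff", "summary sent to human agent"))
            break
    return transcript

# Hypothetical tools a service agent might have:
tools = {
    "lookup_crm": lambda goal: f"records found for '{goal}'",
    "check_policy": lambda goal: "escalate" if "refund" in goal else "ok",
    "send_reply": lambda goal: "reply sent",
}
print(run_agent("refund request", tools))
```

The interesting property is the conditional branch: unlike a scripted chatbot, the loop decides at runtime whether to finish autonomously or hand off with a transcript, which mirrors the handoff behavior the article describes.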
However, Agentforce enables quick deployment by leveraging existing Salesforce infrastructure, allowing organizations to implement solutions rapidly and with greater control. Salesforce also offers pre-built Agentforce agents tailored to specific business needs, such as Service Agent, Sales Development Representative Agent, Sales Coach, Personal Shopper Agent, and Campaign Agent, all customizable with the Agent Builder. Agentforce for Service and Sales will be generally available starting October 25, 2024, with certain elements of the Atlas Reasoning Engine rolling out in February 2025. Pricing begins at $2 per conversation, with volume discounts available.

Transforming Customer Insights with Data Cloud and Marketing Cloud

Dreamforce also highlighted enhancements to Data Cloud, the backbone of Salesforce’s cloud products. The platform now supports processing unstructured data, which constitutes up to 90% of company data and is often overlooked by traditional reporting systems. With new capabilities for analyzing unstructured formats, such as video, audio, sales demos, customer service calls, and voicemails, businesses can derive valuable insights and make informed decisions across Customer 360. Furthermore, Data Cloud One enables organizations to connect siloed Salesforce instances effortlessly, promoting seamless data sharing through a no-code, point-and-click setup.

The newly announced Marketing Cloud Advanced edition serves as the “big sister” to Marketing Cloud Growth, equipping larger marketing teams with enhanced features like Path Experiment, which tests different content strategies across channels, and Einstein Engagement Scoring, which provides deeper insights into customer behavior. Together, these enhancements empower companies to engage customers more meaningfully and measurably across all touchpoints.

Empowering the Workforce Through Education

Salesforce is committed to making AI accessible to all.
They recently announced free instructor-led courses and AI certifications, available through 2025, aimed at equipping the Salesforce community with essential AI and data management skills. To support this initiative, Salesforce is establishing AI centers in major cities, starting with London, to provide hands-on training and resources that foster AI expertise. They also launched a global Agentforce World Tour to promote understanding and adoption of the new capabilities introduced at Dreamforce, featuring repackaged sessions from the conference and opportunities for specialists to answer questions.

The Bottom Line

What does this mean for businesses? With the rollout of Agentforce, along with enhancements to Data Cloud and Marketing Cloud, organizations can operate more efficiently and connect with customers in more meaningful ways. Coupled with the focus on education through free courses and global outreach, getting on board has never been easier. If you would like to discuss how we can help your business maximize its potential with Salesforce through data and AI, connect with us and schedule a meeting with our team.

Legacy systems can create significant gaps between operations and employee needs, slowing lead processes and producing siloed, out-of-sync data that hampers business efficiency. Responding to an inquiry within five minutes offers a 75% chance of converting the lead into a customer, underscoring the need for rapid, effective marketing responses. Salesforce aims to help customers strengthen relationships, enhance productivity, and boost margins through its premier AI CRM for sales, service, marketing, and commerce, while also achieving these goals internally.
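Speed to lead, responding while the inquiry is minutes old, is at bottom a routing rule evaluated at capture time. A minimal sketch with hypothetical assignment rules (the team names and thresholds are invented for illustration):

```python
import datetime

ROUTING_RULES = [  # hypothetical assignment rules, evaluated top-down
    (lambda lead: lead["employees"] > 1000, "enterprise-team"),
    (lambda lead: lead["region"] == "EMEA", "emea-team"),
    (lambda lead: True, "smb-team"),  # catch-all default
]

def assign_lead(lead):
    """Return (team, elapsed) so speed-to-lead can be measured per assignment."""
    start = datetime.datetime.now()
    team = next(team for rule, team in ROUTING_RULES if rule(lead))
    elapsed = datetime.datetime.now() - start
    return team, elapsed

team, elapsed = assign_lead({"employees": 5000, "region": "AMER"})
print(team, elapsed.total_seconds() < 60)
```

Rule evaluation itself is instant; in practice the 20-minute handoffs described below come from data hops between systems, which is why consolidating onto one profile is what actually buys the speed.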
Recognizing the complexity of its decade-old processes, including lead assignment spread across three systems and 2 million lines of custom code, Salesforce took on the role of “customer zero,” leveraging Data Cloud to create a unified view of customers known as the “Customer 360 Truth Profile.” Consolidating this disparate data laid the groundwork for enterprise-wide AI and automation, improving marketing automation and reducing lead time by 98%. As Michael Andrew, SVP of Marketing Decision Science at Salesforce, noted, the initiative enabled the company to deliver high-quality leads to its sales team with enriched data and AI scoring, while accelerating time to market and enhancing data quality.

Embracing Customer Zero

“Almost exactly a year ago, we set out with a beginner’s mind to transform our lead automation process with a solution that would send the best leads to the right sales teams within minutes of capturing their data and support us for the next decade,” said Andrew. The initial success metric was “speed to lead,” aiming to cut the handoff time from 20 minutes to less than one minute. Another focus was integrating customer and lead data into a more complete 360-degree profile for each prospect, enhancing lead assignment and sales rep productivity. A further objective was to boost business agility by cutting the average time to implement assignment changes from four weeks to mere days.

Accelerating Success with


Battle of Copilots

Salesforce is directly challenging Microsoft in the growing battle of AI copilots, which are designed to enhance customer experience (CX) across key business functions like sales and support. In this competitive landscape, Salesforce is taking on not only Microsoft but also major AI rivals such as Google Gemini, OpenAI GPT, and IBM watsonx. At the heart of this strategy is Salesforce Agentforce, a platform that leverages autonomous decision-making to meet enterprise demands for data and AI abstraction.

Salesforce Dreamforce Highlights

One of the most significant takeaways from last month’s Dreamforce conference in San Francisco was the unveiling of autonomous agents, bringing advanced GenAI capabilities to the app development process. CEO Marc Benioff and other Salesforce executives made it clear that Salesforce is positioning itself to compete with Microsoft’s Copilot, rebranding and advancing its own AI assistant, previously known as Einstein AI. Microsoft’s stronghold, however, lies in Copilot’s seamless integration with widely used products like Teams, Outlook, PowerPoint, and Word. Furthermore, Microsoft has established itself as a developer’s favorite, especially with GitHub Copilot and the Azure portfolio, which are integral to app modernization in many enterprises.

“Salesforce faces an uphill battle in capturing market share from these established players,” says Charlotte Dunlap, Research Director at GlobalData. “Salesforce’s best chance lies in highlighting the autonomous capabilities of Agentforce: enabling businesses to automate more processes, moving beyond basic chatbot functions, and delivering a personalized customer experience.” This emphasis on autonomy is vital, given that many enterprises are still grappling with the complexities of emerging GenAI technologies.
Dunlap points out that DevOps teams are struggling to find third-party expertise that understands how GenAI fits within existing IT systems, particularly around security and governance concerns. Salesforce’s focus on automation, combined with the integration prowess of MuleSoft, positions it as a key player in making GenAI tools more accessible and intuitive for businesses.

Elevating AI Abstraction and Automation

Salesforce has increasingly focused on abstracting data and AI, exemplified by its Data Cloud and low-code UI capabilities. Now, with models like the Atlas Reasoning Engine, Salesforce is looking to push beyond traditional AI assistants. These tools are designed to automate complex, previously human-dependent tasks spanning sales, service, and marketing.

Simplifying the Developer Experience

The true measure of Salesforce’s success in its GenAI strategy will emerge in the coming months. The company is well aware that its ability to simplify the developer experience is critical. Enterprises are looking for more than AI innovation; they want thought leadership that can help secure budget and executive support for AI initiatives. Many companies report ongoing struggles to gain that internal buy-in, further underscoring the importance of strong, strategic partnerships with technology providers like Salesforce. In its pursuit to rival Microsoft Copilot, Salesforce’s future hinges on how effectively it can build on its track record of simplifying the developer experience while promoting the unique autonomous qualities of Agentforce.


Collaboration Between Humans and AI

The Future of AI: What to Expect in the Next 5 Years

In the next five years, AI will accelerate human life, reshape behaviors, and transform industries; these changes are inevitable. For much of the early 20th century, AI existed mainly in science fiction, where androids, sentient machines, and futuristic societies intrigued fans of the genre. From films like Metropolis to books like I, Robot, AI was the subject of speculative imagination. Fiction often over-dramatized reality, stretching our sense of what was and was not possible. But by the mid-20th century, scientists began working to bring AI into reality.

A Brief History of AI’s Impact on Society

The 1956 Dartmouth Summer Research Project on Artificial Intelligence marked a key turning point: John McCarthy coined the term “artificial intelligence” and helped establish a community of AI researchers. Although the initial excitement about AI often outpaced its actual capabilities, significant breakthroughs began emerging by the late 20th century. One such moment was IBM’s Deep Blue defeating chess champion Garry Kasparov in 1997, signaling that machines could perform complex cognitive tasks. The rise of big data and Moore’s Law, which fueled the exponential growth of computational power, enabled AI to process vast amounts of information and tackle tasks previously handled only by humans. By 2022, generative AI models like ChatGPT proved that machine learning could yield highly sophisticated and captivating technologies.

AI’s influence is now everywhere. It is no longer discussed only in IT circles; it features in nearly every new product hitting the market and is part of, if not the creation tool behind, most commercials. Voice assistants like Alexa, recommendation systems like Netflix’s, and autonomous vehicles represent just a glimpse of AI’s current role in society.
Yet over the next five years, AI’s development is poised to introduce far more profound societal changes.

How AI Will Shape the Future

Industries Most Affected by AI

Long-term Risks of Collaboration Between Humans and AI

AI’s potential to pose existential risks has long been a topic of concern. However, the more realistic danger lies in human societies voluntarily ceding control to AI systems. Algorithmic trading in finance, for example, shows how human decisions are already being replaced by AI systems that operate at unimaginable speeds. Still, fear of AI should not overshadow the opportunities it presents. If organizations shy away from AI out of anxiety, they risk missing out on innovations and efficiency gains. The future of AI depends on a balanced approach that embraces its potential while mitigating its risks. In the coming years, collaboration between humans and AI will drive profound changes across industries, legal frameworks, and societal norms, creating both challenges and opportunities for the future. Tectonic can help you map your AI journey for the best collaboration between humans and AI.


Fivetran’s Hybrid Deployment

Fivetran’s Hybrid Deployment: A Breakthrough in Data Engineering

In the data engineering world, balancing efficiency with security has long been a challenge. Fivetran aims to shift this dynamic with its Hybrid Deployment solution, designed to move data seamlessly across any environment while maintaining control and flexibility.

The Hybrid Advantage: Flexibility Meets Control

Fivetran’s Hybrid Deployment offers a new approach for enterprises, particularly those handling sensitive data or operating in regulated sectors. These businesses often struggle to adopt data-driven practices because of security concerns. Hybrid Deployment changes this by enabling the secure movement of data across cloud and on-premises environments, giving businesses full control over their data while retaining the agility of the cloud. As George Fraser, Fivetran’s CEO, notes, “Businesses no longer have to choose between managed automation and data control. They can now securely move data from all their critical sources—like Salesforce, Workday, Oracle, SAP—into a data warehouse or data lake, while keeping that data under their own control.”

How It Works: A Secure, Streamlined Approach

Fivetran’s Hybrid Deployment relies on a lightweight local agent that moves data securely within the customer’s environment, while the Fivetran platform handles management and monitoring. This separation of the control plane from the data plane ensures that sensitive information stays within the customer’s secure perimeter. Vinay Kumar Katta, a managing delivery architect at Capgemini, highlights the flexibility this provides, enabling businesses to design pipelines without sacrificing security.

Beyond Security: Additional Benefits

Hybrid Deployment’s benefits go beyond security, and early adopters are already seeing its value. Troy Fokken, chief architect at phData, praises how it “streamlines data pipeline processes,” especially for customers in regulated industries.
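The control-plane/data-plane split described above can be sketched as follows: the vendor-hosted control plane sends only configuration and receives only status metadata, while rows move entirely inside the customer's network. All names here are illustrative, not Fivetran's actual API:

```python
def control_plane_config():
    """Vendor side: returns pipeline configuration, never touches row data."""
    return {"source": "orders_db", "destination": "warehouse", "tables": ["orders"]}

def local_agent(config, read_rows, write_rows):
    """Customer side: moves data source -> destination inside the secure
    perimeter, reporting only row counts (metadata) back to the control plane."""
    moved = 0
    for table in config["tables"]:
        rows = read_rows(config["source"], table)
        write_rows(config["destination"], table, rows)
        moved += len(rows)
    return {"status": "ok", "rows_moved": moved}  # status only, no payload

# Stand-in I/O for the sketch:
store = {("orders_db", "orders"): [{"id": 1}, {"id": 2}]}
sink = {}
status = local_agent(control_plane_config(),
                     read_rows=lambda src, t: store[(src, t)],
                     write_rows=lambda dst, t, rows: sink.setdefault((dst, t), rows))
print(status)  # {'status': 'ok', 'rows_moved': 2}
```

The design point is in the return value: the agent's report contains counts and status, never rows, so the vendor can manage and monitor pipelines without sensitive data ever leaving the customer's environment.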
AI Agent Architectures: Defining the Future of Autonomous Systems

In the rapidly evolving world of AI, a new framework is emerging: AI agents designed to act autonomously, adapt dynamically, and explore digital environments. These agents are built on core architectural principles, bringing the next generation of autonomy to AI-driven tasks.

What Are AI Agents?

AI agents are systems designed to perform tasks autonomously or semi-autonomously, leveraging tools to achieve their objectives. They may call APIs, perform web searches, or interact with digital environments. At their core, AI agents use large language models (LLMs) and foundation models (FMs) to break down complex tasks, similar to human reasoning.

Large Action Models (LAMs)

Just as LLMs transformed natural language processing, large action models (LAMs) are changing how AI agents interact with environments. These models excel at function calling: turning natural language into structured, executable actions, which enables agents to perform real-world tasks like scheduling or triggering API calls. Salesforce AI Research, for instance, has open-sourced several LAMs designed to facilitate meaningful actions. LAMs bridge the gap between unstructured inputs and structured outputs, making AI agents more effective in complex environments.

Model Orchestration and Small Language Models (SLMs)

Model orchestration complements LAMs by using smaller, specialized models (SLMs) for niche tasks. Instead of relying on resource-heavy models for everything, agents can call on these smaller models for specific functions, such as summarizing data or executing commands, creating a more efficient system. Combined with techniques like retrieval-augmented generation (RAG), SLMs can perform comparably to their larger counterparts on knowledge-intensive tasks.
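Function calling, as described here, means the model emits a structured action rather than free text, and the host program dispatches it. A sketch of the host side, with a hypothetical action format and tool (the JSON shape and function names are invented for illustration):

```python
import json

def schedule_meeting(topic: str, when: str) -> str:
    """Example tool the agent is allowed to invoke."""
    return f"meeting '{topic}' booked for {when}"

REGISTRY = {"schedule_meeting": schedule_meeting}  # allow-list of callable tools

def dispatch(model_output: str) -> str:
    """Parse the model's structured action and execute the named tool."""
    action = json.loads(model_output)
    fn = REGISTRY[action["name"]]       # unknown names fail fast (KeyError)
    return fn(**action["arguments"])

# What an action model might emit for "book a sync about HBM tomorrow at 10":
raw = '{"name": "schedule_meeting", "arguments": {"topic": "HBM sync", "when": "10:00"}}'
print(dispatch(raw))  # meeting 'HBM sync' booked for 10:00
```

The registry is the safety boundary: the model can only name functions the host has explicitly exposed, which is what makes structured outputs easier to govern than free-form text instructions.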
Vision-Enabled Language Models for Digital Exploration

AI agents are becoming even more capable with vision-enabled language models, which let them interact with digital environments directly. Projects like Apple’s Ferret-UI and WebVoyager exemplify this: agents can navigate user interfaces, recognize elements via OCR, and explore websites autonomously.

Function Calling: Structured, Actionable Outputs

A fundamental shift is happening with function calling in AI agents, moving from unstructured text to structured, actionable outputs. This allows AI agents to interact with systems more efficiently, triggering specific actions like booking meetings or executing API calls.

The Role of Tools and Human-in-the-Loop

AI agents rely on tools, whether algorithms, scripts, or humans-in-the-loop, to perform tasks and guide actions. This approach is particularly valuable in high-stakes industries like healthcare and finance, where precision is crucial.

The Future of AI Agents

With the advent of large action models, model orchestration, and function calling, AI agents are becoming powerful problem solvers. They are evolving to explore, learn, and act within digital ecosystems, bringing us closer to a future where AI mimics human problem-solving processes. As agents become more sophisticated, they will redefine how we approach digital tasks and interactions.


Revolutionizing Customer Service with Agentforce

Agentforce stole the spotlight at Dreamforce, but it is not just about replacing human workers. Equally significant for Service Cloud was the focus on how AI can be leveraged to make agents, dispatchers, and field service technicians more productive and proactive. During the Dreamforce Service Cloud keynote, GM Kishan Chetan emphasized the dramatic shift over the past year, with AI moving from theoretical to practical applications. He challenged customer service leaders to embrace AI agents, arguing that AI-driven solutions can take customer service from delivering “good” benefits to achieving exponential growth. He noted that AI agents can handle common customer requests like tech support, scheduling, and general inquiries, as well as more complex tasks such as de-escalation, billing inquiries, and even cross-selling and upselling.

In practice, research by Valoir shows that most Service Cloud customers are still in the early stages of AI adoption, particularly with generative AI. While progress has accelerated recently, most companies are seeing only incremental gains in individual productivity rather than the exponential improvements highlighted at Dreamforce. To achieve those higher-level returns, customers must move beyond simple automation and summarization to AI-driven transformation, powered by Agentforce. Chetan and his team outlined four key steps to make this transition.

“Agentforce represents the Third Wave of AI—advancing beyond copilots to a new era of highly accurate, low-hallucination intelligent agents that actively drive customer success. Unlike other platforms, Agentforce is a revolutionary and trusted solution that seamlessly integrates AI across every workflow, embedding itself deeply into the heart of the customer journey.
This means anticipating needs, strengthening relationships, driving growth, and taking proactive action at every touchpoint,” said Marc Benioff, Chair and CEO, Salesforce. “While others require you to DIY your AI, Agentforce offers a fully tailored, enterprise-ready platform designed for immediate impact and scalability, with advanced security features, compliance with industry standards, and unmatched flexibility. Our vision is bold: to empower one billion agents with Agentforce by the end of 2025. This is what AI is meant to be.”

In contrast to now-outdated copilots and chatbots that rely on human requests and struggle with complex or multi-step tasks, Agentforce offers a new level of sophistication: it operates autonomously, retrieves the right data on demand, builds action plans for any task, and executes those plans without requiring human intervention. Like a self-driving car, Agentforce uses real-time data to adapt to changing conditions and operates independently within an organization’s customized guardrails, ensuring every customer interaction is informed, relevant, and valuable. And when desired, Agentforce seamlessly hands off to human employees with a summary of the interaction, an overview of the customer’s details, and recommendations for what to do next.

Deploy AI agents across channels

Agentforce Service Agent is more than a chatbot: it is an autonomous AI agent capable of handling both simple and complex requests, understanding text, video, and audio. Customers were invited to build their own Service Agents during Dreamforce, and many took up the challenge. Service-related agents are a natural fit, as research shows Service Cloud customers are generally more prepared for AI adoption, thanks to the volume and quality of customer data available in their CRM systems.
Turn insights into action

Launching in October 2024, Customer Experience Intelligence provides an omnichannel supervisor Wall Board that lets supervisors monitor conversations in real time, complete with sentiment scores and metrics organized by topic and region. Supervisors can then instruct Service Agent to dig into root causes, suggest proactive messaging, or even offer discounts. This development represents the next stage of Service Intelligence, combining Data Cloud, Tableau, and Einstein Conversation Mining to give supervisors real-time insights. It mirrors capabilities offered by traditional contact center vendors like Verint, which also blend interaction, sentiment, and other data in real time, highlighting the convergence of contact centers and Service Cloud service operations.

Empower teams to become trusted advisors

Salesforce continues to navigate the delicate balance between digital and human agents, especially within Service Cloud. The key lies in the intelligent handoff of customer data when escalating from a digital agent to a human one. Service Planner guides agents step by step through issue resolution, powered by Unified Knowledge. The demo also showed how Service Agent can merge Commerce and Service by suggesting that agents offer complimentary items based on a customer’s shopping cart.

Enable field teams to be proactive

Salesforce also announced improvements in field service, designed to help dispatchers and field service agents operate more proactively and efficiently. Agentforce for Dispatchers enhances the ability to address urgent appointments quickly. Asset Service Prediction leverages AI to forecast asset failures and upcoming service needs, while AI-generated prework briefs provide field techs with asset health scores and critical information before they arrive on site.
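At its core, the supervisor Wall Board described above is an aggregation of per-conversation sentiment by topic and region. A toy version, assuming sentiment scores have already been attached to each conversation upstream:

```python
from collections import defaultdict

def wall_board(conversations):
    """Average sentiment per (topic, region) from scored conversations."""
    sums = defaultdict(lambda: [0.0, 0])   # (topic, region) -> [total, count]
    for convo in conversations:
        key = (convo["topic"], convo["region"])
        sums[key][0] += convo["sentiment"]
        sums[key][1] += 1
    return {key: total / n for key, (total, n) in sums.items()}

# Hypothetical scored conversations (sentiment in [-1, 1]):
convos = [
    {"topic": "billing", "region": "NA", "sentiment": -0.6},
    {"topic": "billing", "region": "NA", "sentiment": -0.2},
    {"topic": "tech support", "region": "EU", "sentiment": 0.5},
]
board = wall_board(convos)
print(board[("billing", "NA")])  # strongly negative: a candidate for proactive outreach
```

A real implementation would stream updates rather than batch them, but the grouping logic is the same: a persistently negative cell on the board is the trigger for the root-cause and proactive-messaging actions described above.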
Setting a clear roadmap for adopting Agentforce across these four areas is an essential step toward helping customers realize more than incremental gains in their service operations. Equally important will be helping customers develop a data strategy that harnesses the power of Data Cloud and Salesforce’s partner ecosystem, enabling a truly data-driven service experience. Investments in capabilities like My Service Journeys will also be critical in guiding customers toward the AI features that will deliver the greatest returns for their specific needs.

Agentforce leverages Salesforce’s generative AI, such as Einstein GPT, to automate routine tasks, provide real-time insights, and offer personalized recommendations, enhancing efficiency and enabling agents to deliver exceptional customer experiences. Agentforce is not just another traditional chatbot; it is a next-generation, AI-powered solution that understands complex queries and acts autonomously to enhance operational efficiency. Unlike conventional chatbots, Agentforce is intelligent and adaptive, capable of managing a wide range of customer issues with precision. It offers 24/7 support, responds in a natural, human-like manner, and escalates seamlessly to human agents when needed, redefining customer service with faster, smarter, and more effective support experiences.
