AI Risk Management

Organizations must acknowledge the risks of implementing AI systems in order to use the technology ethically and minimize liability. Throughout history, companies have had to manage the risks of adopting new technologies, and AI is no exception. Some AI risks resemble those of deploying any new technology or tool: poor strategic alignment with business goals, a lack of the skills needed to support initiatives, and failure to secure buy-in across the organization. For these challenges, executives can rely on the best practices that have guided the successful adoption of other technologies. AI, however, also introduces unique risks that must be addressed head-on; numerous distinct areas of concern can arise as organizations implement and use AI technologies in the enterprise.

Managing AI Risks

While AI risks cannot be eliminated, they can be managed. Organizations must first recognize and understand these risks, then implement policies to minimize their negative impact. These policies should ensure the use of high-quality data, require testing and validation to root out biases, and mandate ongoing monitoring to identify and address unexpected consequences. Furthermore, ethical considerations should be embedded in AI systems, with frameworks in place to ensure AI produces transparent, fair, and unbiased results. Human oversight is essential to confirm these systems meet established standards. Successful risk management requires the involvement of the board and the C-suite. As noted, “This is not just an IT problem, so all executives need to get involved in this.”

What is Explainable AI

Building a trusted AI system starts with ensuring transparency in how decisions are made. Explainable AI is vital not only for addressing trust issues within organizations but also for navigating regulatory challenges. According to research from Forrester, many business leaders express concerns over AI, particularly generative AI, which surged in popularity following the 2022 release of ChatGPT by OpenAI. “AI faces a trust issue,” explained Forrester analyst Brandon Purcell, underscoring the need for explainability to foster accountability. He highlighted that explainability helps stakeholders understand how AI systems generate their outputs. “Explainability builds trust,” Purcell stated at the Forrester Technology and Innovation Summit in Austin, Texas. “When employees trust AI systems, they’re more inclined to use them.”

Implementing explainable AI does more than encourage usage within an organization; it also helps mitigate regulatory risk, according to Purcell. Explainability is crucial for compliance, especially under regulations like the EU AI Act. Forrester analyst Alla Valente emphasized the importance of integrating accountability, trust, and security into AI efforts. “Don’t wait for regulators to set standards—ensure you’re already meeting them,” she advised at the summit. Purcell noted that explainable AI varies depending on whether the AI model is predictive, generative, or agentic.

Building an Explainable AI System

AI explainability encompasses several key elements: reproducibility, observability, transparency, interpretability, and traceability. For predictive models, transparency and interpretability are paramount. Transparency involves “glass-box modeling,” where users can see how the model analyzed the data and arrived at its predictions. This approach is likely to become a regulatory requirement, especially for high-risk applications.
Interpretability is another important technique, useful for lower-risk cases such as fraud detection or explaining loan decisions. Techniques like partial dependence plots show how specific inputs affect predictive model outcomes. “With predictive AI, explainability focuses on the model itself,” Purcell noted. “It’s one area where you can open the hood and examine how it works.”

In contrast, generative AI models are often more opaque, making explainability harder. Businesses can address this by documenting the entire system, a process known as traceability. For those using models from vendors like Google or OpenAI, tools like transparency indexes and model cards, which detail a model’s use case, limitations, and performance, are valuable resources.

Lastly, for agentic AI systems, which autonomously pursue goals, reproducibility is key. Businesses must ensure that the model’s outputs can be consistently replicated with similar inputs before deployment. These systems, like self-driving cars, will require extensive testing in controlled environments before being trusted in the real world. “Agentic systems will need to rack up millions of virtual miles before we let them loose,” Purcell concluded.
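The partial dependence idea mentioned above can be computed without special tooling: pin one input to a fixed value, average the model's predictions over the rest of the data, and repeat across a grid of values. The sketch below is a minimal, library-free illustration (real projects would typically use scikit-learn's `partial_dependence` utilities); the loan-scoring model, its coefficients, and the data are invented for the example.

```python
# Minimal partial dependence sketch: pin one feature to each grid value,
# average the model's output over the dataset, and read off the curve.
def partial_dependence(model, X, feature_idx, grid):
    curve = []
    for value in grid:
        # Replace the chosen feature with the grid value in every row.
        preds = [
            model([value if i == feature_idx else x for i, x in enumerate(row)])
            for row in X
        ]
        curve.append(sum(preds) / len(preds))
    return curve

# Hypothetical loan-approval score: higher income raises it, higher debt lowers it.
def loan_score(row):
    income, debt = row
    return 0.5 + 0.004 * income - 0.02 * debt

X = [[100, 10], [80, 30], [120, 5], [60, 40]]  # (income, debt) rows
curve = partial_dependence(loan_score, X, feature_idx=0, grid=[60, 90, 120])
print(curve)  # rises with income, holding the data's debt mix fixed
```

Reading the curve left to right shows how the prediction moves as income alone changes, which is exactly the kind of evidence a loan-decision explanation needs.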

AI in Networking

AI Tools in Networking: Tailoring Capabilities to Unique Needs

AI tools are becoming increasingly common across industries, offering a wide range of functionalities, but network engineers may not require every capability these tools provide. Each network has distinct requirements that align with specific business objectives, so network engineers and developers must select AI toolsets tailored to their networks’ needs. While network teams often want similar AI capabilities, they also encounter common challenges in integrating these tools into their systems.

The Rise of AI in Networking

Though AI is not a new concept, having existed for decades in the form of automated and expert systems, it is gaining unprecedented attention. According to Jim Frey, principal analyst for networking at TechTarget’s Enterprise Strategy Group, many organizations have not fully grasped AI’s potential in production environments over the past three years. “AI has been around for a long time, but the interesting thing is, only a minority—not even half—have really said they’re using it effectively in production for the last three years,” Frey noted.

Generative AI (GenAI) has contributed significantly to this renewed interest. Shamus McGillicuddy, vice president of research at Enterprise Management Associates, categorizes AI tools into two main types: GenAI and AIOps (AI for IT operations). “Generative AI, like ChatGPT, has recently surged in popularity, becoming a focal point of discussion among IT professionals,” McGillicuddy explained. “AIOps, on the other hand, encompasses machine learning, anomaly detection, and analytics.” The increasing complexity of networks is another factor driving the adoption of AI in networking: Frey highlighted that the demands of modern network environments are beyond human capability to manage manually, making AI engines a vital solution.
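The AIOps side McGillicuddy describes (machine learning, anomaly detection, and analytics) rests on a familiar statistical idea: flag measurements that sit far from the recent norm. The sketch below is a deliberately simple z-score detector over latency samples; the threshold and the data are illustrative assumptions, and production AIOps platforms use far richer models.

```python
import statistics

def flag_anomalies(samples, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean.

    The 2.0 cutoff is an illustrative assumption; real systems tune
    (or learn) this per metric.
    """
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) > threshold * stdev]

# Round-trip latencies in milliseconds; the 95 ms spike stands out.
latencies_ms = [10, 11, 9, 10, 12, 10, 11, 95]
print(flag_anomalies(latencies_ms))  # [95]
```

A detector like this identifies the issue; per the survey findings below, most teams still want a human to approve whatever fix is proposed.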
Essential AI Tool Capabilities for Networks

While individual network needs vary, many network engineers seek similar functionalities when integrating AI. According to McGillicuddy’s research, network optimization and automated troubleshooting are among the most popular use cases, though many professionals prefer to retain manual oversight of fixes. “Automated troubleshooting can identify and analyze issues, but typically, people want to approve the proposed fixes,” McGillicuddy stated.

Many of these capabilities are critical for enhancing security and mitigating threats. Frey emphasized that networking professionals increasingly view AI as a tool to improve organizational security, and analyst DeCarlo echoed this sentiment, noting that network managers share objectives with security professionals regarding proactive problem recognition. Frey also mentioned alternative use cases, such as documentation and change recommendations, which, while less popular, can offer significant value to network teams. Ultimately, the relevance of any AI capability hinges on its fit with the network environment and the team’s needs. “I don’t think you can prioritize one capability over another,” DeCarlo remarked. “It depends on the tools being used and their effectiveness.”

Generative AI: A New Frontier

Despite its recent emergence, GenAI has quickly become an asset in the networking field. McGillicuddy noted that over the past year and a half, network professionals have adopted GenAI tools, with ChatGPT among the most recognized examples. “One user reported that leveraging ChatGPT could reduce a task that typically takes four hours down to just 10 minutes,” McGillicuddy said. However, he cautioned that users must understand the limitations of GenAI, as mistakes can occur.
“There’s a risk of errors or ‘hallucinations’ with these tools, and having blind faith in their outputs can lead to significant network issues,” he warned. In addition to ChatGPT, vendors are developing GenAI interfaces for their own products, including virtual assistants, and McGillicuddy’s findings point to several common use cases for these vendor GenAI products. DeCarlo added that GenAI tools offer valuable training capabilities thanks to their rapid processing speeds and in-depth analysis, which can expedite knowledge acquisition within the network. Frey attributed GenAI’s rise to its ability to outperform older, less sophisticated systems. Nevertheless, the complexity of GenAI infrastructures has created demand for AIOps tools to manage these systems effectively. “We won’t be able to manage GenAI infrastructures without the support of AI tools, as human capabilities cannot keep pace with rapid changes,” Frey asserted.

Challenges in Implementing AI Tools

While AI tools present significant benefits for networks, network engineers and managers must navigate several challenges before integration.

Data Privacy, Collection, and Quality

Data usage remains a critical concern for organizations considering AIOps and GenAI tools. Frey noted that the diverse nature of network data, which combines operational information with personally identifiable information, heightens privacy concerns. For GenAI, McGillicuddy pointed out the importance of validating AI outputs and ensuring that high-quality data is used for training. “If you feed poor data to a generative AI tool, it will struggle to accurately understand your network,” he explained.

Complexity of AI Tools

Frey and McGillicuddy agreed that the complexity of both AI and network systems can hinder effective deployment. Frey mentioned that AI systems, especially GenAI, require careful tuning and strong recommendations to minimize inaccuracies.
McGillicuddy added that intricate network infrastructures, particularly those involving multiple vendors, can limit the effectiveness of AIOps components, which are often specialized for specific systems.

User Uptake and Skills Gaps

User adoption of AI tools poses a significant challenge: proper training is essential to realize the full benefits of AI in networking. Some network professionals may be resistant to using AI, while others may lack the knowledge to integrate these tools effectively. McGillicuddy noted that AIOps tools are often less intuitive than GenAI, so users need a certain level of expertise to extract value. “Understanding how tools function and identifying potential gaps can be challenging,” DeCarlo added. The learning curve can be steep, particularly for teams accustomed to longstanding tools.

Integration Issues

Integration challenges can further complicate user adoption. McGillicuddy highlighted two dimensions of this issue: tools and processes. On the tools side, concerns arise about harmonizing GenAI with existing systems. “On the process side, it’s crucial to ensure that teams utilize these tools effectively,” he said. DeCarlo cautioned that organizations might need to create in-house supplemental tools to bridge integration gaps, complicating the synchronization of vendor AI

Document Checklist in Salesforce Screen Flow

An effective way to build a document checklist in a Salesforce Screen Flow is to use the Document Matrix element in Discovery Framework-based OmniScripts. This approach streamlines the assessment process and helps ensure that the advisor uploads the correct documents.

Fully Formatted Facts

A recent discovery by programmer and inventor Michael Calvin Wood addresses a persistent challenge in AI: hallucinations. These false or misleading outputs, long considered an inherent flaw of large language models (LLMs), have posed a significant problem for developers. Wood’s breakthrough challenges that assumption, offering a solution that could transform how AI-powered applications are built and used.

The Importance of Wood’s Discovery for Developers

Wood’s findings have substantial implications for developers working with AI. By eliminating hallucinations, developers can ensure that AI-generated content is accurate and reliable, particularly in applications where precision is critical.

Understanding the Root Cause of Hallucinations

Contrary to popular belief, hallucinations are not primarily caused by insufficient training data or biased algorithms. Wood’s research reveals that the issue stems from how LLMs process and generate information along “noun-phrase routes.” LLMs organize information around noun phrases, and when they encounter semantically similar phrases, they may conflate or misinterpret them, leading to incorrect outputs.

The Noun-Phrase Dominance Model

Wood’s research led to the Noun-Phrase Dominance Model, which posits that neural networks in LLMs self-organize around noun phrases. This model is key to understanding and eliminating hallucinations because it explains how AI processes noun-phrase conflicts.

Fully-Formatted Facts (FFF): A Solution

Wood’s solution transforms input data into Fully-Formatted Facts (FFF): statements that are literally true, devoid of noun-phrase conflicts, and structured as simple, complete sentences. Presenting information in this format has led to significant improvements in AI accuracy, particularly in question-answering tasks.
How FFF Processing Works

While Wood has not provided a step-by-step guide to FFF processing, he hints that the process began with named-entity recognition using the Python spaCy library and evolved into using an LLM to reduce ambiguity while retaining the original writing style. His company’s REST API offers a wrapper around the GPT-4o and GPT-4o-mini models, transforming input text to remove ambiguity before processing it.

Current Methods vs. Wood’s Approach

Current approaches, like Retrieval-Augmented Generation (RAG), attempt to reduce hallucinations by adding more context. However, these methods often introduce additional noun-phrase conflicts. For instance, even with RAG, ChatGPT-3.5 Turbo exhibited a 23% hallucination rate when answering questions about Wikipedia articles. In contrast, Wood’s method focuses on eliminating noun-phrase conflicts entirely.

Results: RAG FF (Retrieval Augmented Generation with Formatted Facts)

Wood’s method has shown remarkable results, eliminating hallucinations in GPT-4 and GPT-3.5 Turbo during question-answering tasks on third-party datasets.

Real-World Example: Translation Error Elimination

Consider a simple translation example: the transformation eliminates hallucinations by removing the potential noun-phrase conflict.

Implications for the Future of AI

The Noun-Phrase Dominance Model and the use of Fully-Formatted Facts have far-reaching implications.

Roadmap for Future Development

Wood and his team plan to continue expanding their approach.

Conclusion: A New Era of Reliable AI

Wood’s discovery represents a significant leap forward in the pursuit of reliable AI. By aligning input data with how LLMs process information, he has unlocked the potential for accurate, trustworthy AI systems. As this technology continues to evolve, it could have profound implications for industries ranging from healthcare to legal services, where AI could become a consistent and reliable tool.
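Wood has not published the FFF algorithm itself, so the following is only a toy illustration of the underlying idea: rewrite ambiguous references into explicit noun phrases before the text ever reaches the model. The alias table and sentences are invented for this example; the real pipeline reportedly combines spaCy named-entity recognition with an LLM rewriting pass.

```python
# Toy sketch of the "fully formatted facts" idea: replace ambiguous
# references with explicit noun phrases so the model never has to guess.
# This alias table is hand-written for the example; Wood's pipeline
# derives referents automatically.
ALIASES = {
    "he": "Michael Faraday",
    "it": "the electric motor",
}

def to_fully_formatted(sentence: str) -> str:
    out = []
    for token in sentence.split():
        core = token.rstrip(".,")
        trailing = token[len(core):]  # keep any punctuation
        out.append(ALIASES.get(core.lower(), core) + trailing)
    return " ".join(out)

print(to_fully_formatted("He built it."))
# "Michael Faraday built the electric motor."
```

The rewritten sentence is literally true on its own, with no pronoun for a downstream model to resolve (or conflate) incorrectly.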
While there is still work to be done in expanding this method across all AI tasks, the foundation has been laid for a revolution in AI accuracy. Future developments will likely focus on refining and expanding these capabilities, enabling AI to serve as a trusted resource across a range of applications.

Experience RAGFix

For those looking to explore this technology, RAGFix offers an implementation of these concepts. Visit the official website at RAGFix.ai to access demos, explore REST API integration options, and stay updated on the latest advancements in hallucination-free AI.

AI Customer Service Agents Explained

AI customer service agents are advanced technologies designed to understand and respond to customer inquiries within defined guidelines. These agents can handle both simple and complex issues, such as answering frequently asked questions or managing product returns, all while offering a personalized, conversational experience. Research shows that 82% of service representatives report that customers ask for more than they used to. As a customer service leader, you’re likely facing increasing pressure to meet these growing expectations while simultaneously reducing costs, speeding up service, and providing personalized, round-the-clock support. This is where AI customer service agents can make a significant impact. Here’s a closer look at how AI agents can enhance your organization’s service operations, improve customer experience, and boost overall productivity and efficiency.

What Are AI Customer Service Agents?

AI customer service agents are virtual assistants designed to interact with customers and support service operations. Using machine learning and natural language processing (NLP), these agents can handle a broad range of tasks, from answering basic inquiries to resolving complex issues, even managing multiple tasks at once. Importantly, AI agents continuously improve through self-learning.

Why Are AI-Powered Customer Service Agents Important?

AI-powered customer service technology is becoming essential for several reasons.

Benefits of AI Customer Service Agents

AI customer service agents help service teams manage growing service demands by taking on routine tasks and providing essential support.

Why Choose Agentforce Service Agent?
If you’re considering adding AI customer service agents to your strategy, Agentforce Service Agent offers a comprehensive solution. By embracing AI customer service agents like Agentforce Service Agent, businesses can reduce costs, meet growing customer demands, and stay competitive in an ever-evolving global market.
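As a concrete (and deliberately tiny) illustration of the routine-task handling described above, the sketch below matches a customer question to a known FAQ by word overlap and escalates to a human otherwise. The FAQ entries and the overlap threshold are invented for the example; real agents such as Agentforce use NLP and LLMs rather than keyword matching.

```python
# Toy FAQ-matching "agent": pick the stored question sharing the most words
# with the customer's question; escalate when the overlap is too weak.
FAQS = {
    "What is your return policy?": "You can return items within 30 days.",
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
}

def answer(question: str) -> str:
    q_words = set(question.lower().split())
    best_reply, best_overlap = None, 0
    for faq, reply in FAQS.items():
        overlap = len(q_words & set(faq.lower().split()))
        if overlap > best_overlap:
            best_reply, best_overlap = reply, overlap
    # A minimum of 2 shared words is an arbitrary illustrative cutoff.
    return best_reply if best_overlap >= 2 else "Let me connect you with a human agent."
```

The escalation branch mirrors the human-in-the-loop pattern real deployments rely on: the agent resolves what it can and hands off the rest.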

LLMs and AI

Large Language Models (LLMs): Revolutionizing AI and Custom Solutions

Large Language Models (LLMs) are transforming artificial intelligence by enabling machines to generate and comprehend human-like text, making them indispensable across numerous industries. The global LLM market is experiencing explosive growth, projected to rise from $1.59 billion in 2023 to $259.8 billion by 2030. This surge is driven by increasing demand for automated content creation, advances in AI and natural language processing (NLP), the availability of large datasets, and the rising importance of seamless human-machine communication. Additionally, private LLMs are gaining traction as businesses seek more control over their data and greater customization: these private models provide tailored solutions, reduce dependency on third-party providers, and enhance data privacy. This guide walks through building your own private LLM, offering insights for both newcomers and seasoned professionals.

What Are Large Language Models?

Large Language Models are advanced AI systems that generate human-like text by processing vast amounts of data with sophisticated neural networks such as transformers. These models excel at content creation, language translation, question answering, and conversation, making them valuable across industries, from customer service to data analysis. LLMs learn language rules by analyzing vast text datasets, much as reading many books helps a person learn a language. Once trained, these models can generate content, answer questions, and engage in meaningful conversations.
For example, an LLM can write a story about a space mission based on knowledge gained from reading space adventure stories, or explain photosynthesis using information drawn from biology texts.

Building a Private LLM

Data Curation for LLMs

Recent LLMs, such as Llama 3 and GPT-4, are trained on massive datasets: Llama 3 on 15 trillion tokens and GPT-4 on a reported 6.5 trillion tokens. These datasets are drawn from diverse sources, including social media (140 trillion tokens), academic texts, and private data, with sizes ranging from hundreds of terabytes to multiple petabytes. This breadth of training enables LLMs to develop a deep understanding of language, covering diverse patterns, vocabularies, and contexts.

Data Preprocessing

After collection, the data must be cleaned and structured before training.

LLM Training Loop

The model is then trained in stages over the prepared data, learning to predict tokens from the patterns it has seen.

Evaluating Your LLM

After training, it is crucial to assess the LLM’s performance using industry-standard benchmarks. When fine-tuning LLMs for specific applications, tailor your evaluation metrics to the task; in healthcare, for instance, matching disease descriptions to the appropriate codes may be a top priority.

Conclusion

Building a private LLM provides unmatched customization, enhanced data privacy, and optimized performance. From data curation to model evaluation, this guide has outlined the essential steps to create an LLM tailored to your specific needs. Whether you’re just starting out or seeking to refine your skills, building a private LLM can empower your organization with state-of-the-art AI capabilities. For expert guidance, or to kickstart your LLM journey, feel free to contact us for a free consultation.
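The training loop described above operates on transformer networks at enormous scale, but its core objective, learning to predict the next token from data, can be shown with a toy bigram counter. This sketch is stdlib-only and purely illustrative; it shares only the prediction objective with gradient-based LLM training.

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in the training
# text, then predict the most frequent continuation. Real LLMs learn these
# statistics (and far deeper structure) with neural networks.
def train_bigram(text):
    counts = defaultdict(Counter)
    tokens = text.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model, token):
    if not model[token]:
        return None  # token never seen with a continuation
    return model[token].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # 'cat' follows 'the' most often here
```

Scaling this idea from counting adjacent words to predicting tokens over trillions of examples, with a transformer instead of a lookup table, is essentially what the data curation and training stages above are preparing for.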

AI Prompts to Accelerate Academic Reading

10 AI Prompts to Accelerate Academic Reading with ChatGPT and Claude AI

In the era of information overload, keeping pace with academic research can feel daunting. Tools like ChatGPT and Claude AI can streamline your reading and help you extract valuable insights from research papers quickly and efficiently. These AI assistants, when used ethically and responsibly, support your critical analysis by summarizing complex studies, highlighting key findings, and breaking down methodologies. While these prompts enhance efficiency, they should complement, never replace, your own critical thinking and thorough reading.

AI Prompts for Academic Reading

1. Elevator Pitch Summary. Prompt: “Summarize this paper in 3-5 sentences as if explaining it to a colleague during an elevator ride.” This prompt distills the essence of a paper, helping you quickly grasp the core idea and decide its relevance.

2. Key Findings Extraction. Prompt: “List the top 5 key findings or conclusions from this paper, with a brief explanation of each.” Cut through jargon to access the research’s core contributions in seconds.

3. Methodology Breakdown. Prompt: “Explain the study’s methodology in simple terms. What are its strengths and potential limitations?” Understand the foundation of the research and critically evaluate its validity.

4. Literature Review Assistant. Prompt: “Identify the key papers cited in the literature review and summarize each in one sentence, explaining its connection to the study.” A game-changer for understanding the context and building your own literature review.

5. Jargon Buster. Prompt: “List specialized terms or acronyms in this paper with definitions in plain language.” Create a personalized glossary to simplify dense academic language.

6. Visual Aid Interpreter. Prompt: “Explain the key takeaways from Figure X (or Table Y) and its significance to the study.” Unlock insights from charts and tables, ensuring no critical information is missed.

7. Implications Explorer. Prompt: “What are the potential real-world implications or applications of this research? Suggest 3-5 possible impacts.” Connect theory to practice by exploring broader outcomes and significance.

8. Cross-Disciplinary Connections. Prompt: “How might this paper’s findings or methods apply to [insert your field]? Suggest potential connections or applications.” Encourage interdisciplinary thinking by finding links between research areas.

9. Future Research Generator. Prompt: “Based on the limitations and unanswered questions, suggest 3-5 potential directions for future research.” Spark new ideas and identify gaps for exploration in your field.

10. The Devil’s Advocate. Prompt: “Play devil’s advocate: What criticisms or counterarguments could be made against the paper’s main claims? How might the authors respond?” Refine your critical thinking and prepare for discussions or reviews.

Additional Resources

Generative AI Prompts with Retrieval Augmented Generation
AI Agents and Tabular Data
AI Evolves With Agentforce and Atlas

Conclusion

Incorporating these prompts into your routine can help you process information faster, understand complex concepts, and uncover new insights. Remember, AI is here to assist, not replace, your research skills. Stay critical, adapt prompts to your needs, and maximize your academic productivity.
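To make prompts like those above repeatable, it can help to keep them as fill-in templates rather than retyping them for every paper. A minimal sketch follows; the template names and placeholders are our own invention, not part of any assistant's API, and the filled-in text is simply pasted into ChatGPT, Claude, or any other chat interface.

```python
# Store reading prompts as reusable templates; fill in paper-specific
# details (figure numbers, field names) before sending to an assistant.
PROMPTS = {
    "elevator_pitch": (
        "Summarize this paper in 3-5 sentences as if explaining it "
        "to a colleague during an elevator ride."
    ),
    "visual_aid": (
        "Explain the key takeaways from {figure} and its significance "
        "to the study."
    ),
}

def build_prompt(name, **fields):
    return PROMPTS[name].format(**fields)

print(build_prompt("visual_aid", figure="Figure 2"))
```

Keeping the templates in one place also makes it easy to refine wording over time as you learn which phrasings get the best summaries.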

Salesforce Flows and LeanData

Mastering Opportunity Routing in Salesforce Flows

While leads are essential at the top of the funnel, opportunities take center stage as the sales process advances. In Salesforce, the opportunity object acts as a container that can hold multiple contacts tied to a specific deal, making accurate opportunity routing crucial. Misrouting or delays at this stage can significantly impact revenue and forecasting, while manual processing risks incorrect assignments and uneven distribution. Leveraging Salesforce Flows for opportunity routing can help avoid these issues.

What Is Opportunity Routing?

Opportunity routing is the process of assigning open opportunities to the right sales rep based on specific criteria such as territory, deal size, industry, or product type. The goal is to ensure every opportunity reaches the right person quickly, maximizing the chance of closing the deal. Opportunity routing also helps prioritize high-potential deals, improving pipeline efficiency.

Challenges of Manual Routing

Manual opportunity routing can lead to several challenges, from misassignment to uneven workloads.

Benefits of Automating Routing with Salesforce Flows

Using Salesforce Flows for opportunity routing offers many benefits over manual assignment.

Setting Up Opportunity Routing in Salesforce Flows

Setting up opportunity routing in Salesforce follows a repeatable outline.

Managing Complex Salesforce Flows

Opportunity routing in Salesforce Flows is powerful, but managing complex sales environments can be challenging.

How LeanData Enhances Opportunity Routing

LeanData extends Salesforce routing capabilities with advanced, no-code automation and auditing features.

Salesforce Flows and LeanData

Whether using Salesforce Flows or LeanData, the goal is to optimize time to revenue. While Salesforce Flows offer a robust foundation, organizations without dedicated admins or developers may face challenges in making frequent updates.
LeanData provides greater flexibility and real-time automation, helping to streamline the routing process and drive revenue growth.
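The criteria-based, first-match assignment described above can be sketched in plain Python. This is a minimal illustration of the routing logic only; the rule criteria, team names, and round-robin fallback are hypothetical, and a real implementation would live in a Salesforce Flow decision element or Apex rather than external code:

```python
from itertools import cycle

# Hypothetical routing rules: first match wins. In a real org these
# criteria would be configured as decision outcomes in a Flow.
ROUTING_RULES = [
    (lambda opp: opp["amount"] >= 500_000, "enterprise_team"),
    (lambda opp: opp["territory"] == "EMEA", "emea_team"),
    (lambda opp: opp["industry"] == "Healthcare", "healthcare_team"),
]

# Round-robin fallback keeps distribution even when no rule matches.
_fallback = cycle(["rep_a", "rep_b", "rep_c"])

def route_opportunity(opp):
    """Return the owning team/rep for an opportunity record (a dict here)."""
    for matches, owner in ROUTING_RULES:
        if matches(opp):
            return owner
    return next(_fallback)
```

The ordering of the rules encodes priority, which is exactly the property that becomes hard to audit when routing logic is spread across many manually maintained Flow branches.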

Generative AI Energy Consumption Rises


Generative AI Energy Consumption Rises, but Impact on ROI Unclear

The energy costs associated with generative AI (GenAI) are often overlooked in enterprise financial planning. However, industry experts suggest that IT leaders should account for the power consumption that comes with adopting this technology. When building a business case for generative AI, some costs are evident, like large language model (LLM) fees and SaaS subscriptions. Other costs, such as preparing data, upgrading cloud infrastructure, and managing organizational changes, are less visible but significant.

One often overlooked cost is the energy consumption of generative AI. Training LLMs and responding to user requests—whether answering questions or generating images—demands considerable computing power. These tasks generate heat and necessitate sophisticated cooling systems in data centers, which, in turn, consume additional energy. Despite this, most enterprises have not focused on the energy requirements of GenAI. However, the issue is gaining attention at a broader level. The International Energy Agency (IEA), for instance, has forecast that electricity consumption from data centers, AI, and cryptocurrency could double by 2026. By that time, data centers’ electricity use could exceed 1,000 terawatt-hours, roughly equivalent to Japan’s total electricity consumption. Goldman Sachs has also flagged the growing energy demand, attributing it partly to AI; the firm projects that global data center electricity use could more than double by 2030, fueled by AI and other factors.

ROI Implications of Energy Costs

The extent to which rising energy consumption will affect GenAI’s return on investment (ROI) remains unclear. For now, the perceived benefits of GenAI seem to outweigh concerns about energy costs. Most businesses have not been directly impacted, as these costs tend to affect hyperscalers more.
For instance, Google reported a 13% increase in greenhouse gas emissions in 2023, largely due to AI-related energy demands in its data centers. Scott Likens, PwC’s global chief AI engineering officer, noted that while energy consumption isn’t a barrier to adoption, it should still be factored into long-term strategies. “You don’t take it for granted. There’s a cost somewhere for the enterprise,” he said.

Energy Costs: Hidden but Present

Although energy expenses may not appear on an enterprise’s invoice, they are still present. Generative AI’s energy consumption is tied to both model training and inference—each time a user makes a query, the system expends energy to generate a response. While the energy used for an individual query is minor, the cumulative effect across millions of users adds up. How these costs are passed on to customers is somewhat opaque. Licensing fees for enterprise versions of GenAI products likely include energy costs, spread across the user base. According to PwC’s Likens, the costs associated with training models are shared among many users, reducing the burden on individual enterprises. On the inference side, GenAI vendors charge for tokens, which correspond to computational work. Although increased token usage signals higher energy consumption, the financial impact on enterprises has so far been minimal, especially as token costs have decreased. The dynamic is similar to buying an EV to save on gas, only to spend hundreds of dollars and lose hours at charging stations.

Energy as an Indirect Concern

While energy costs haven’t been top of mind for GenAI adopters, organizations may address the issue indirectly by focusing on other deployment challenges, such as reducing latency and improving cost efficiency. Newer models, such as OpenAI’s GPT-4o mini, are more economical and have helped organizations scale GenAI without prohibitive costs. Organizations may also use smaller, fine-tuned models to decrease latency and energy consumption.
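The point that tiny per-query energy costs accumulate at scale can be illustrated with back-of-envelope arithmetic. The per-query figure and electricity price below are illustrative assumptions, not measured values for any particular model:

```python
# Back-of-envelope estimate of cumulative inference energy.
# ASSUMPTIONS (illustrative only): 0.3 Wh per query, $0.10 per kWh.
WH_PER_QUERY = 0.3
USD_PER_KWH = 0.10

def inference_energy_cost(queries_per_day, days=365):
    """Return (kWh, USD) of inference energy for a daily query volume."""
    kwh = queries_per_day * days * WH_PER_QUERY / 1000.0
    return kwh, kwh * USD_PER_KWH

# One user's handful of daily queries is negligible...
kwh_small, usd_small = inference_energy_cost(20)
# ...but millions of daily queries reach utility-scale consumption.
kwh_large, usd_large = inference_energy_cost(10_000_000)
```

Under these assumptions, 20 queries a day costs about 2 kWh a year, while 10 million queries a day exceeds a gigawatt-hour, which is why the burden falls on hyperscalers rather than individual enterprises.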
By adopting multimodel approaches, enterprises can choose models based on the complexity of a task, optimizing for both speed and energy efficiency.

The Data Center Dilemma

As enterprises consider GenAI’s energy demands, data centers face the challenge head-on, investing in more sophisticated cooling systems to handle the heat generated by AI workloads. According to the Dell’Oro Group, the data center physical infrastructure market grew in the second quarter of 2024, signaling the start of the “AI growth cycle” for infrastructure sales, particularly thermal management systems. Liquid cooling, more efficient than air cooling, is gaining traction as a way to manage the heat from high-performance computing. This method is expected to see rapid growth in the coming years as demand for AI workloads continues to increase.

Nuclear Power and AI Energy Demands

To meet AI’s growing energy demands, some hyperscalers are exploring nuclear energy for their data centers. AWS, Google, and Microsoft are among the companies exploring this option, with AWS acquiring a nuclear-powered data center campus earlier this year. Nuclear power could help these tech giants keep pace with AI’s energy requirements while also meeting sustainability goals. Tying AI’s growth to the construction of new nuclear plants, however, may prove controversial with some of the technology’s supporters.

As GenAI continues to evolve, both energy costs and efficiency are likely to play a greater role in decision-making. PwC has already begun including carbon impact in its GenAI value framework, which assesses the full scope of generative AI deployments. “The cost of carbon is in there, so we shouldn’t ignore it,” Likens said.

Amazon DynamoDB to Salesforce Data Cloud


Ingesting Data from Amazon DynamoDB to Salesforce Data Cloud

Salesforce Data Cloud serves as your organization’s digital command center, enabling real-time ingestion, unification, and activation of data from any source. By transforming scattered customer information into actionable insights, it empowers businesses to operate with unparalleled efficiency. Integrating Amazon DynamoDB with Salesforce Data Cloud exemplifies the platform’s capacity to unify and activate enterprise data seamlessly. Follow this step-by-step guide to ingest data from Amazon DynamoDB into Salesforce Data Cloud.

Prerequisites

Part 1: Amazon DynamoDB Setup
1. AWS Account Setup
2. Create a DynamoDB Table
3. Populate the Table with Data
4. Security Credentials

Part 2: Salesforce Data Cloud Configuration
1. Creating the Data Connection
2. Configuring Data Streams (create a new data stream, then configure the data model)
3. Data Modeling and Mapping (custom object creation)

Conclusion

After completing the setup, this integration underscores Salesforce Data Cloud’s role as a centralized hub, capable of harmonizing diverse data sources, ensuring real-time synchronization, and enabling actionable insights. By connecting Amazon DynamoDB, businesses can unlock the full potential of their data, driving better decision-making and customer experiences.
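Part 1 of the guide (creating and populating the DynamoDB table) can be sketched with boto3. The Customers table name, key schema, and sample item below are hypothetical placeholders, not values the guide prescribes:

```python
# Sketch of Part 1, steps 2-3: create and populate a DynamoDB table.
# The table name, key schema, and items are hypothetical examples.
def customers_table_spec():
    """Parameters for boto3's DynamoDB create_table call."""
    return {
        "TableName": "Customers",
        "KeySchema": [{"AttributeName": "customer_id", "KeyType": "HASH"}],
        "AttributeDefinitions": [
            {"AttributeName": "customer_id", "AttributeType": "S"}
        ],
        "BillingMode": "PAY_PER_REQUEST",
    }

# With AWS credentials configured (Part 1, step 4), the live calls would be:
#   import boto3
#   dynamodb = boto3.resource("dynamodb")
#   table = dynamodb.create_table(**customers_table_spec())
#   table.put_item(Item={"customer_id": "C-001", "email": "ada@example.com"})
```

The partition key defined here is what the Data Cloud data stream would later map to a unique identifier during the data modeling step.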

Life of a Salesforce Admin in the AI Era


The life of Salesforce admins is rapidly evolving as artificial intelligence (AI) becomes integral to business operations. By 2025, the Salesforce admin’s role will expand beyond managing CRM systems to include leveraging AI tools to enhance efficiency, boost productivity, and maintain security. While this future offers exciting opportunities, it also comes with new responsibilities that require admins to adapt and learn. So, what will Salesforce admins need to succeed in this AI-driven landscape?

The Salesforce Admin’s Role in 2025

In 2025, Salesforce admins will be at the forefront of digital transformation, helping organizations harness the full potential of the Salesforce ecosystem and AI-powered tools. These AI tools will automate processes, predict trends, and improve overall efficiency. Many professionals are already enrolling in Salesforce Administrator courses focused on AI and automation, equipping them with the essential skills to thrive in this new era.

Key Responsibilities in the AI Era

1. AI Integration and Optimization: Admins will be responsible for integrating AI tools like Salesforce Einstein AI into workflows, ensuring they’re properly configured and tailored to the organization’s needs.
2. Automating Processes with AI: AI will revolutionize automation, making complex workflows more efficient.
3. Data Management and Predictive Analytics: Admins will leverage AI to manage data and generate predictive insights.
4. Enhancing Security and Compliance: AI-powered security tools will help admins proactively protect systems.
5. Supporting AI-Driven Customer Experiences: Admins will deploy AI tools that enhance customer interactions.
6. Continuous Learning and Upskilling: As AI evolves, so too must Salesforce admins.
7. Collaboration with Cross-Functional Teams: Admins will work closely with IT, marketing, and sales teams to deploy AI solutions organization-wide.

Skills Required for Future Salesforce Admins

1. AI and Machine Learning Proficiency: Admins will need to understand how AI models like Einstein AI function and how to deploy them. While not requiring full data science expertise, a solid grasp of AI concepts—such as predictive analytics and machine learning—will be essential.
2. Advanced Data Management and Analysis: Managing large datasets and ensuring data accuracy will be critical as admins work with AI tools. Proficiency in data modeling, SQL, SOQL, and ETL processes will be vital for handling AI-powered data management.
3. Automation and Process Optimization: AI-enhanced automation will become a key responsibility. Admins must master tools like Salesforce Flow and Einstein Automate to build intelligent workflows and ensure smooth process automation.
4. Security and Compliance Expertise: With AI-driven security protocols, admins will need to stay updated on data privacy regulations and deploy tools that ensure compliance and prevent data breaches.
5. Collaboration and Leadership: Admins will lead the implementation of AI tools across departments, requiring strong collaboration and leadership skills to align AI-driven solutions with business objectives.

Advanced Certifications for AI-Era Admins

To stay competitive, Salesforce admins will need to pursue advanced certifications.

Tectonic’s Thoughts

The Salesforce admin role is transforming as AI becomes an essential part of the platform. By mastering AI tools, optimizing processes, ensuring security, and continuously upskilling, Salesforce admins can become pivotal players in driving digital transformation.
The future is bright for those who embrace the AI-powered Salesforce landscape and position themselves at the forefront of innovation.

Data Labeling


Data Labeling: Essential for Machine Learning and AI

Data labeling is the process of identifying and tagging data samples, essential for training machine learning (ML) models. While it can be done manually, software often assists in automating the process. Data labeling is critical for helping machine learning models make accurate predictions and is widely used in fields like computer vision, natural language processing (NLP), and speech recognition.

How Data Labeling Works

The process begins with collecting raw data, such as images or text, which is then annotated with specific labels to provide context for ML models. These labels need to be precise, informative, and independent to ensure high-quality model training. For instance, in computer vision, data labeling can tag images of animals so that the model can learn common features and correctly identify animals in new, unlabeled data. Similarly, in autonomous vehicles, labeling helps the AI differentiate between pedestrians, cars, and other objects, ensuring safe navigation.

Why Data Labeling Is Important

Data labeling is integral to supervised learning, a type of machine learning where models are trained on labeled data. Through labeled examples, the model learns the relationships between input data and the desired output, which improves its accuracy in real-world applications. For example, a machine learning algorithm trained on labeled emails can classify future emails as spam or not spam based on those labels. Labeling is also used in more advanced applications like self-driving cars, where the model needs to understand its surroundings by recognizing and labeling objects such as roads, signs, and obstacles.

Applications of Data Labeling

The Data Labeling Process

Data labeling involves several key steps. Errors in labeling can negatively affect the model’s performance, so many organizations adopt a human-in-the-loop approach, involving people in quality control to improve the accuracy of labels.

Data Labeling vs. Data Classification vs. Data Annotation

Types of Data Labeling

Benefits and Challenges

Methods of Data Labeling

Companies can label data through various methods. Each organization must choose a method that fits its needs, based on factors like data volume, staff expertise, and budget.

The Growing Importance of Data Labeling

As AI and ML become more pervasive, the need for high-quality data labeling increases. Data labeling not only helps train models but also creates new jobs in the AI ecosystem. Companies like Alibaba, Amazon, Facebook, Tesla, and Waymo all rely on data labeling for applications ranging from e-commerce recommendations to autonomous driving.

Looking Ahead

Data labeling tools are becoming more sophisticated, reducing the need for manual work while ensuring higher data quality. As data privacy regulations tighten, businesses must also ensure that labeling practices comply with local, state, and federal laws. In conclusion, labeling is a crucial step in building effective machine learning models, driving innovation, and ensuring that AI systems perform accurately across a wide range of applications.
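The spam-filter example of supervised learning can be made concrete with a toy classifier trained on a handful of hand-labeled emails. This is a minimal naive Bayes sketch in plain Python (the training sentences are invented), not a production spam filter:

```python
import math
from collections import Counter

# Tiny hand-labeled training set: this is the "data labeling" step.
LABELED_EMAILS = [
    ("win a free prize now", "spam"),
    ("free money claim your prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the project team", "ham"),
]

def train(examples):
    """Count word frequencies per label for naive Bayes."""
    words = {"spam": Counter(), "ham": Counter()}
    labels = Counter()
    for text, label in examples:
        labels[label] += 1
        words[label].update(text.split())
    return words, labels

def classify(text, words, labels):
    """Pick the label maximizing log P(label) + sum log P(word | label),
    with add-one smoothing so unseen words don't zero out a class."""
    vocab = set(words["spam"]) | set(words["ham"])
    best, best_score = None, -math.inf
    for label in labels:
        score = math.log(labels[label] / sum(labels.values()))
        total = sum(words[label].values())
        for w in text.split():
            score += math.log((words[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

words, labels = train(LABELED_EMAILS)
```

The quality of the labels directly bounds the quality of the model: mislabeling even one of the four training emails here would visibly skew the word counts, which is the small-scale version of why human-in-the-loop review matters.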

RIG and RAG


Imagine you’re a financial analyst tasked with comparing the GDP of France and Italy over the last five years. You query a language model, asking: “What are the current GDP figures of France and Italy, and how have they changed over the last five years?”

Using Retrieval-Augmented Generation (RAG), the model first retrieves relevant information from external sources, then generates this response: “France’s current GDP is approximately $2.9 trillion, while Italy’s is around $2.1 trillion. Over the past five years, France’s GDP has grown by an average of 1.5%, whereas Italy’s GDP has seen slower growth, averaging just 0.6%.”

In this case, RAG improves the model’s accuracy by incorporating real-world data through a single retrieval step. While effective, this method can struggle with more complex queries that require multiple, dynamic pieces of real-time data. Enter Retrieval Interleaved Generation (RIG).

Now, you submit a more complex query: “What are the GDP growth rates of France and Italy in the past five years, and how do these compare to their employment rates during the same period?”

With RIG, the model generates a partial response, drawing from its internal knowledge about GDP. However, it simultaneously retrieves relevant employment data in real time. For example: “France’s current GDP is $2.9 trillion, and Italy’s is $2.1 trillion. Over the past five years, France’s GDP has grown at an average rate of 1.5%, while Italy’s growth has been slower at 0.6%. Meanwhile, France’s employment rate increased by 2%, and Italy’s employment rate rose slightly by 0.5%.”

Here’s what happened: RIG allowed the model to interleave data retrieval with response generation, ensuring the information is up to date and comprehensive. It fetched employment statistics while continuing to generate GDP figures, ensuring the final output was both accurate and complete for a multi-faceted query.

What Is Retrieval Interleaved Generation (RIG)?
RIG is an advanced technique that integrates real-time data retrieval into the process of generating responses. Unlike RAG, which retrieves information once before generating the response, RIG continuously alternates between generating text and querying external data sources. This ensures each piece of the response is dynamically grounded in the most accurate, up-to-date information.

How RIG Works

For example, when asked for GDP figures of two countries, RIG first retrieves one country’s data while generating an initial response and simultaneously fetches the second country’s data for a complete comparison.

Why Use RIG?

Real-World Applications of RIG

RIG’s versatility makes it ideal for handling complex, real-time data across various sectors.

Challenges of RIG

While promising, RIG still faces a few challenges. Even so, as AI evolves, RIG is poised to become a foundational tool for complex, data-driven tasks, empowering industries with more accurate, real-time insights for decision-making.
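The generate-retrieve-generate loop that distinguishes RIG from RAG can be sketched in a few lines of Python. The retrieval function and fact table below are stand-ins for a real search API or statistics database, and the fixed "plan" stands in for the model's own decisions about when to pause and fetch:

```python
# Stand-in knowledge base for a real retrieval backend (search API, DB, etc.).
FACTS = {
    "France GDP": "$2.9 trillion",
    "Italy GDP": "$2.1 trillion",
    "France employment": "+2%",
    "Italy employment": "+0.5%",
}

def retrieve(query):
    """Placeholder retrieval step; a live system would query an external source."""
    return FACTS.get(query, "unknown")

def rig_answer(plan):
    """Interleave generation with retrieval: each step either emits text
    from 'internal knowledge' or pauses to fetch a fact, so every figure
    in the final answer is grounded at the moment it is produced."""
    parts = []
    for kind, value in plan:
        if kind == "text":
            parts.append(value)            # generated fragment
        elif kind == "lookup":
            parts.append(retrieve(value))  # interleaved retrieval call
    return " ".join(parts)

answer = rig_answer([
    ("text", "France's GDP is"), ("lookup", "France GDP"),
    ("text", "and its employment rate changed by"), ("lookup", "France employment"),
])
```

By contrast, a RAG pipeline would perform all its lookups once, up front, before any text is generated; the interleaving here is what lets a RIG-style system handle multi-part queries whose later fragments depend on fresh data.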

Ambient AI Enhances Patient-Provider Relationship


How Ambient AI is Enhancing the Patient-Provider Relationship

Ambient AI is transforming the patient-provider experience at Ochsner Health by enabling clinicians to focus more on their patients and less on their screens. While some view technology as a barrier to human interaction, Ochsner’s innovation officer, Dr. Jason Hill, believes ambient AI is doing the opposite by fostering stronger connections between patients and providers.

Researchers estimate that physicians spend over 40% of consultation time focused on electronic health records (EHRs), limiting face-to-face interactions. “We have highly skilled professionals spending time inputting data instead of caring for patients, and as a result, patients feel disconnected due to the screen barrier,” Hill said. Additionally, increased documentation demands related to quality reporting, patient satisfaction, and reimbursement are straining providers.

Ambient AI scribes help relieve this burden by automating clinical documentation, allowing providers to focus on their patients. Using machine learning, these AI tools generate clinical notes in seconds from recorded conversations. Clinicians then review and edit the drafts before finalizing the record. Ochsner began exploring ambient AI several years ago, but only with the advent of advanced language models like OpenAI’s GPT did the technology become scalable and cost-effective for large health systems. “Once the technology became affordable for large-scale deployment, we were immediately interested,” Hill explained.

Selecting the Right Vendor

Ochsner piloted two ambient AI tools before choosing DeepScribe for an enterprise-wide partnership. After the initial rollout to 60 physicians, the tool achieved a 75% adoption rate and improved patient satisfaction scores by 6%. What set DeepScribe apart were its customization features.
“We can create templates for different specialties, but individual doctors retain control over their note outputs based on specific clinical encounters,” Hill said. This flexibility was crucial in gaining physician buy-in. Ochsner also valued DeepScribe’s strong vendor support, which included tailored training modules and direct assistance to clinicians. One example of this support was the development of a software module that allowed Ochsner’s providers to see EHR reminders within the ambient AI app. “DeepScribe built a bridge to bring EHR data into the app, so clinicians could access important information right before the visit,” Hill noted.

Ensuring Documentation Quality

Ochsner has implemented several safeguards to maintain the accuracy of AI-generated clinical documentation. Providers undergo training before using the ambient AI system, with a focus on reviewing and finalizing all AI-generated notes. Notes created by the AI remain in a “pended” state until the provider signs off. Ochsner also tracks how much text is generated by the AI versus added by the provider, using this as a marker for the level of editing required.

Following the successful pilot, Ochsner plans to expand ambient AI to 600 clinicians by the end of the year, with the eventual goal of providing access to all 4,700 physicians. While Hill anticipates widespread adoption, he acknowledges that the technology may not be suitable for all providers. “Some clinicians have different documentation needs, but for the vast majority, this will likely become the standard way we document at Ochsner within a year,” he said.

Conclusion

By integrating ambient AI, Ochsner Health is not only improving operational efficiency but also strengthening the human connection between patients and providers. As the technology becomes more widespread, it holds the potential to reshape how clinical documentation is handled, freeing up time for more meaningful patient interactions.
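The safeguard of tracking how much of a final note was AI-generated versus provider-added can be approximated with a text-similarity ratio. This is an illustrative sketch using Python's standard library, not Ochsner's or DeepScribe's actual method, and the sample note text is invented:

```python
from difflib import SequenceMatcher

def provider_edit_share(ai_draft, final_note):
    """Fraction of the final note that differs from the AI draft,
    a rough proxy for how much editing the provider performed."""
    ratio = SequenceMatcher(None, ai_draft, final_note).ratio()
    return round(1.0 - ratio, 3)

draft = "Patient reports mild headache for two days. No fever."
final = "Patient reports mild headache for two days. No fever. Advised hydration."
share = provider_edit_share(draft, final)  # small value: light editing
```

A consistently high edit share for a given provider or specialty could flag notes that need closer quality review, which is the kind of signal the pended-until-signed workflow is designed to surface.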
