Machine Learning - gettectonic.com - Page 2
Salesforce and Mahindra Finance

Salesforce and Mahindra

The new loan origination system (LOS) will incorporate machine learning and automation to deliver real-time credit assessments, enabling faster loan processing and competitive interest rates, alongside improved credit risk insights. This strategic partnership underscores Mahindra Finance’s dedication to providing responsible financing solutions to India’s emerging MSME sector.

Read More
Being AI-Driven

Being AI-Driven

Imagine a company where every decision, strategy, customer interaction, and routine task is enhanced by AI. From predictive analytics uncovering market insights to intelligent automation streamlining operations, this AI-driven enterprise represents what a successful business could look like. Does this company exist? Not yet, but the building blocks for creating it are already here.

To envision a day in the life of such an AI enterprise, let’s fast forward to the year 2028 and visit Tectonic 5.0, a fictional 37-year-old mid-sized company in Oklahoma that provides home maintenance services. After years of steady sales and profit growth, the 2,300-employee company has hit a rough patch. Tectonic 5.0’s revenue grew just 3% last year, and its 8% operating margin is well below the industry benchmark. To jumpstart growth, Tectonic 5.0 has expanded its product portfolio and decided to break into the more lucrative commercial real estate market. But Tectonic 5.0 needs to act fast. The firm must quickly bring its new offerings to market while boosting profitability by eliminating inefficiencies and fostering collaboration across teams. To achieve these goals, Tectonic 5.0 is relying on artificial intelligence (AI). Here’s how each department at Tectonic 5.0 is using AI to reach these objectives.

Spot Inefficiencies with AI

With a renewed focus on cost-cutting, Tectonic 5.0 needed to identify and eliminate inefficiencies throughout the company. To assist in this effort, the company developed a tool called Jenny, an AI agent that’s automatically invited to all meetings. Always listening and analyzing, Jenny spots problems and inefficiencies that might otherwise go unnoticed. For example, Jenny compares internal data against industry benchmarks and historical data, identifying opportunities for optimization based on patterns in spending and resource allocation. Suggestions for cost-cutting can be offered in real time during meetings or shared later in a synthesized summary. AI can also analyze how meeting time is spent, revealing whether too much time is wasted on non-essential issues and suggesting ways to have more constructive meetings. It does this by comparing meeting summaries against the company’s broader objectives. Tectonic 5.0’s leaders hope that by highlighting inefficiencies and communication gaps with Jenny’s help, employees will be more inclined to take action. In fact, it has already shown considerable promise, with employees being five times more likely to consider cost-cutting measures suggested by Jenny.

Market More Effectively with AI

With cost management underway, Tectonic 5.0’s next step in its transformation is finding new revenue sources. The company has adopted a two-pronged approach: introducing a new lineup of products and services for homeowners, including smart home technology, sustainable living solutions like solar panels, and predictive maintenance on big-ticket systems like internet-connected HVACs; and expanding into commercial real estate maintenance. Smart home technology is exactly what homeowners are looking for, but Tectonic 5.0 needs to market it to the right customers, at the right time, and in the right way. A marketing platform with built-in AI capabilities is essential for spreading the word quickly and effectively about its new products.
To start, the company segments its audience using generative AI, allowing marketers to ask the system, in natural language, to identify tech-savvy homeowners between the ages of 30 and 60 who have spent a certain amount on home maintenance in the last 18 months. This enables more precise audience targeting and helps marketing teams bring products to market faster. Previously, segmentation using legacy systems could take weeks, with marketing teams relying on tech teams for an audience breakdown.

Now, Tectonic 5.0 is ready to reach out to its targeted customers. Using predictive AI, it can optimize personalized marketing campaigns. For example, it can determine which customers prefer to be contacted by text, email, or phone, the best time of day to reach out, and how often. The system also identifies which messaging—focused on cost savings, environmental impact, or preventative maintenance—will resonate most with each customer. This intelligence helps Tectonic 5.0 reach the optimal customer quickly in a way that speaks to their specific needs and concerns. AI also enables marketers to monitor campaign performance for red flags like decreasing open rates or click-through rates and take appropriate action.

Sell More, and Faster, with AI

With interested buyers lined up, it’s now up to the sales team to close deals. Generative AI for sales, integrated into CRM, can speed up and personalize the sales process for Tectonic 5.0 in several ways. First, it can generate email copy tailored to products and services that customers are interested in. Tectonic 5.0’s sales reps can prompt AI to draft solar panel prospecting emails. To maximize effectiveness, the system pulls customer info from the CRM, uncovering which emails have performed well in the past. Second, AI speeds up data analysis. Sales reps spend a significant amount of time generating, pulling, and analyzing data. Generative AI can act like a digital assistant, uncovering patterns and relationships in CRM data almost instantaneously, guiding Tectonic 5.0’s reps toward high-value deals most likely to close. Machine learning increases the accuracy of lead scoring, predicting which customers are most likely to buy based on historical data and predictive analytics.

Provide Better Customer Service with AI

Tectonic 5.0’s new initiatives are progressing well. Costs are starting to decrease, and sales of its new products are growing faster than expected. However, customer service calls are rising as well. Tectonic 5.0 is committed to maintaining excellent customer service, but smart home technology presents unique challenges. It’s more complex than analog systems, and customers often need help with setup and use, raising the stakes for Tectonic 5.0’s customer service team. The company knows that customers have many choices in home maintenance providers, and one bad experience could drive them to a competitor. Tectonic 5.0’s embedded AI-powered chatbots help deliver a consistent and delightful autonomous customer service experience across channels and touchpoints. Beyond answering common questions, these chatbots can greet customers, serve up knowledge articles, and even dispatch a field technician if needed. In the field, technicians can quickly diagnose and fix problems thanks to LLMs like xGen-Small, which
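As a rough illustration of the lead-scoring idea described above, here is a minimal scikit-learn sketch. The data is synthetic and the feature names are invented for illustration; a real implementation would score leads from actual CRM history.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a CRM export; feature names are hypothetical.
rng = np.random.default_rng(0)
n = 2000
leads = pd.DataFrame({
    "email_opens": rng.poisson(3, n),
    "site_visits": rng.poisson(5, n),
    "prior_service_spend": rng.gamma(2.0, 200.0, n),
})
# Fake ground truth: more engaged, higher-spend leads convert more often.
logit = 0.3 * leads["email_opens"] + 0.002 * leads["prior_service_spend"] - 2.0
leads["converted"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

features = ["email_opens", "site_visits", "prior_service_spend"]
X_train, X_test, y_train, y_test = train_test_split(
    leads[features], leads["converted"], test_size=0.25, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", round(model.score(X_test, y_test), 3))

# Score every lead so reps can work the highest-probability deals first.
leads["score"] = model.predict_proba(leads[features])[:, 1]
print(leads.sort_values("score", ascending=False).head())
```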

Read More
AI and Disability

AI and Disability

Dr. Johnathan Flowers of American University recently sparked a conversation on Bluesky regarding a statement from the organizers of NaNoWriMo, which endorsed the use of generative AI technologies, such as LLM chatbots, in this year’s event. Dr. Flowers expressed concern about the implication that AI assistance was necessary for accessibility, arguing that it could undermine the creativity and agency of individuals with disabilities. He believes that art often serves as a unique space where barriers imposed by disability can be transcended without relying on external help or engaging in forced intimacy. For Dr. Flowers, suggesting the need for AI support may inadvertently diminish the perceived capabilities of disabled and marginalized artists. Since the announcement, NaNoWriMo organizers have revised their stance in response to criticism, though much of the social media discussion has become unproductive.

In earlier discussions, the author has explored the implications of generative AI in art, focusing on the human connection that art typically fosters, which AI-generated content may not fully replicate. However, they now wish to address the role of AI as a tool for accessibility. Not being personally affected by physical disability, the author approaches this topic from a social scientific perspective. They acknowledge that the views expressed are personal and not representative of any particular community or organization.

Defining AI

In a recent presentation, the author offered a new definition of AI, drawing from contemporary regulatory and policy discussions: AI is the application of specific forms of machine learning to perform tasks that would otherwise require human labor. This definition is intentionally broad, encompassing not just generative AI but also other machine learning applications aimed at automating tasks.

AI as an Accessibility Tool

AI has the potential to enhance autonomy and independence for individuals with disabilities, paralleling technological advancements seen in fields like the Paris Paralympics. However, the author is keen to explore what unique benefits AI offers and what risks might arise.

Benefits and Risks

The author acknowledges that this overview touches only on some key issues related to AI and disability. It is crucial for those working in machine learning to be aware of these dynamics, striving to balance benefits with potential risks and ensuring equitable access to technological advancements.

Read More
Data Quality Management Process

Data Quality Management Process

Data quality is often paradoxical—simple in its fundamentals, yet challenging in its details. A solid data quality management program is essential for ensuring processes run smoothly.

What is Data Quality?

At its core, data quality means having accurate, consistent, complete, and up-to-date data. However, quality is also context-dependent. Different tasks or applications require different types of data and, consequently, different standards of quality. Data that works well for one purpose may not be suitable for another. For instance, a list of customer names and addresses might be ideal for a marketing campaign but insufficient for tracking customer sales history. There isn’t a universal quality standard. A data set of credit card transactions, filled with cancellations and verification errors, may seem messy for sales analysis—but that’s exactly the kind of data the fraud analysis team wants to see. The most accurate way to assess data quality is to ask, “Is the data fit for its current purpose?”

Steps to Build a Data Quality Management Process

The goal of data quality management is not perfection. Instead, it focuses on ensuring reliable, high-quality data across the organization. Here are five key steps in developing a robust data quality process:

Step 1: Data Quality Assessment. Begin by assessing the current state of data. All relevant parties—from business units to IT—should understand the current condition of the organization’s data. Check for errors, duplicates, or missing entries and evaluate accuracy, consistency, and completeness. Techniques like data profiling can help identify data issues. This step forms the foundation for the rest of the process.

Step 2: Develop a Data Quality Strategy. Next, develop a strategy to improve and maintain data quality. This blueprint should define the use cases for data, the required quality for each, and the rules for data collection, storage, and processing. Choose the right tools and outline how to handle errors or discrepancies. This strategic plan will guide the organization toward sustained data quality.

Step 3: Initial Data Cleansing. This is where you take action to improve your data. Clean, correct, and prepare the data based on the issues identified during the assessment. Remove duplicates, fill in missing information, and resolve inconsistencies. The goal is to establish a strong baseline for future data quality efforts. Remember, data quality isn’t about perfection—it’s about making data fit for purpose.

Step 4: Implement the Data Quality Strategy. Now, put the plan into action by integrating data quality standards into daily workflows. Train teams on new practices and modify existing processes to include data quality checks. If done correctly, data quality management becomes a continuous, self-correcting process.

Step 5: Monitor Data Quality. Finally, monitor the ongoing process. Data quality management is not a one-time event; it requires continuous tracking and review. Regular audits, reports, and dashboards help ensure that data standards are maintained over time.

In summary, an effective data quality process involves understanding current data, creating a plan for improvement, and consistently monitoring progress. The aim is not perfection, but ensuring data is fit for purpose.

The Impact of AI and Machine Learning on Data Quality

The rise of AI and machine learning (ML) brings new challenges to data quality management. For AI and ML, the quality of training data is crucial.
The performance of models depends on the accuracy, completeness, and bias of the data used. If the training data is flawed, the model will produce flawed outcomes. Volume is another challenge. AI and ML models require vast amounts of data, and ensuring the quality of such large datasets can be a significant task.

Organizations may need to prepare data specifically for AI and ML projects. This might involve collecting new data, transforming existing data, or augmenting it to meet the requirements of the models. Special attention must be paid to avoid bias and ensure diversity in the data. In some cases, existing data may not be sufficient or representative enough to meet future needs.

Implementing specific validation checks for AI and ML training data is essential. This includes checking for bias, ensuring diversity, and verifying that the data accurately represents the problem the model is designed to address. By applying these practices, organizations can tackle the evolving challenges of data quality in the age of AI and machine learning. Create a great Data Quality Management Process.
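As a rough illustration of the assessment, cleansing, and monitoring steps above, here is a minimal pandas sketch. The table, column names, and thresholds are hypothetical, not a prescribed standard.

```python
import numpy as np
import pandas as pd

# A tiny, deliberately messy customer table; columns and rules are illustrative.
df = pd.DataFrame({
    "email": ["a@x.com", "a@x.com", "b@y.com", None, "c@z.com"],
    "state": [" ok", "OK", "tx", "TX", None],
    "spend": [120.0, 120.0, np.nan, 80.0, 45.0],
})

# Step 1: profile the data - missing values and duplicates.
print(df.isna().mean())                                # share of missing values per column
print("duplicate emails:", df.duplicated(subset="email").sum())

# Step 3: initial cleansing - dedupe, standardize, fill what can be filled.
df = df.drop_duplicates(subset="email", keep="last")
df["state"] = df["state"].str.strip().str.upper()
df["spend"] = df["spend"].fillna(df["spend"].median())

# Step 5: a simple recurring check that could feed a monitoring dashboard.
completeness = 1 - df[["email", "state"]].isna().mean().mean()
print(f"completeness: {completeness:.0%}")
```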

Read More
AI FOMO

AI FOMO

Enterprise interest in artificial intelligence has surged in the past two years, with boardroom discussions centered on how to capitalize on AI advancements before competitors do. Generative AI has been a particular focus for executives since the launch of ChatGPT in November 2022, followed by other major product releases like Amazon’s Bedrock, Google’s Gemini, Meta’s Llama, and a host of SaaS tools incorporating the technology. However, the initial rush driven by fear of missing out (FOMO) is beginning to fade. Business and tech leaders are now shifting their attention from experimentation to more practical concerns: How can AI generate revenue? This question will grow in importance as pilot AI projects move into production, raising expectations for financial returns.

Using AI to Increase Revenue

AI’s potential to drive revenue will be a critical factor in determining how quickly organizations adopt the technology and how willing they are to invest further. Here are 10 ways businesses can harness AI to boost revenue:

1. Boost Sales. AI-powered virtual assistants and chatbots can help increase sales. For example, Ikea’s generative AI tool assists customers in designing their living spaces while shopping for furniture. Similarly, jewelry insurance company BriteCo launched a GenAI chatbot that reduced chat abandonment rates, leading to more successful customer interactions and potentially higher sales. A TechTarget survey revealed that AI-powered customer-facing tools like chatbots are among the top investments for IT leaders.

2. Reduce Customer Churn. AI helps businesses retain clients, reducing revenue loss and improving customer lifetime value. By analyzing historical data, AI can profile customer attributes and identify accounts at risk of leaving. AI can then assist in personalizing customer experiences, decreasing churn and fostering loyalty.

3. Enhance Recommendation Engines. AI algorithms can analyze customer data to offer personalized product recommendations. This drives cross-selling and upselling opportunities, boosting revenue. For instance, Meta’s AI-powered recommendation engine has increased user engagement across its platforms, attracting more advertisers.

4. Accelerate Marketing Strategies. While marketing doesn’t directly generate revenue, it fuels the sales pipeline. Generative AI can quickly produce personalized content, such as newsletters and ads, tailored to customer interests. Gartner predicts that by 2025, 30% of outbound marketing messages will be AI-generated, up from less than 2% in 2022.

5. Detect Fraud. AI is instrumental in detecting fraudulent activities, helping businesses preserve revenue. Financial firms like Capital One use machine learning to detect anomalies and prevent credit card fraud, while e-commerce companies leverage AI to flag fraudulent orders.

6. Reinvent Business Processes. AI can transform entire business processes, unlocking new revenue streams. For example, Accenture’s 2024 report highlighted an insurance company that expects a 10% revenue boost after retooling its underwriting workflow with AI. In healthcare, AI could streamline revenue cycle management, speeding up reimbursement processes.

7. Develop New Products and Services. AI accelerates product development, particularly in industries like pharmaceuticals, where it assists in drug discovery. AI tools also speed up the delivery of digital products, as seen with companies like Ally Financial and ServiceNow, which have reduced software development times by 20% or more.

8. Provide Predictive Maintenance. AI-driven predictive maintenance helps prevent costly equipment downtime in industries like manufacturing and fleet management. By identifying equipment on the brink of failure, AI allows companies to schedule repairs and avoid revenue loss from operational disruptions.

9. Improve Forecasting. AI’s predictive capabilities enhance planning and forecasting. By analyzing historical and real-time data, AI can predict product demand and customer behavior, enabling businesses to optimize inventory levels and ensure product availability for ready-to-buy customers.

10. Optimize Pricing. AI can dynamically adjust prices based on factors like demand shifts and competitor pricing. Reinforcement learning algorithms allow businesses to optimize pricing in real time, ensuring they maximize revenue even as market conditions change.

Keeping ROI in Focus

While AI offers numerous ways to generate new revenue streams, it also introduces costs in development, infrastructure, and operations—some of which may not be immediately apparent. For instance, research from McKinsey & Company shows that GenAI models account for only 15% of a project’s total cost, with additional expenses related to change management and data preparation often overlooked. To make the most of AI, organizations should prioritize use cases with a clear return on investment (ROI) and postpone those that don’t justify the expense. A focus on ROI ensures that AI deployments align with business goals and contribute to sustainable revenue growth.
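To make one of these items concrete, here is a toy sketch of the fraud-detection idea in item 5 using unsupervised anomaly detection in scikit-learn. The transactions are synthetic and the feature names are assumptions; it is a sketch of the technique, not a production fraud system.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Synthetic transactions; real systems would use far richer features.
rng = np.random.default_rng(7)
tx = pd.DataFrame({
    "amount": rng.gamma(2.0, 40.0, 5000),
    "hour_of_day": rng.integers(0, 24, 5000),
    "merchant_risk": rng.random(5000),
})
# Inject a handful of implausible transactions to stand in for fraud.
tx.loc[:9, ["amount", "merchant_risk"]] = [[5000.0, 0.99]] * 10

features = tx[["amount", "hour_of_day", "merchant_risk"]]

# Unsupervised anomaly detection: flag roughly the 1% most unusual rows.
model = IsolationForest(contamination=0.01, random_state=0).fit(features)
tx["flagged"] = model.predict(features) == -1
print(f"{int(tx['flagged'].sum())} transactions flagged for manual fraud review")
print(tx[tx["flagged"]].head(10))
```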

Read More
Artificial Intelligence and Sales Cloud

Artificial Intelligence and Sales Cloud

AI enhances the sales process at every stage, making it more efficient and effective. Salesforce’s AI technology—Einstein—streamlines data entry and offers predictive analysis, empowering sales teams to maximize every opportunity. Here is Artificial Intelligence and Sales Cloud explained.

Sales Cloud integrates several AI-driven features powered by Einstein and machine learning. To get the most out of these tools, review which features align with your needs and check the licensing requirements for each one.

Einstein and Data Usage in Sales Cloud

Einstein thrives on data. To fully leverage its capabilities within Sales Cloud, consult the data usage table to understand which types of data Einstein features rely on.

Setting Up Einstein Opportunity Scoring in Sales Cloud

Einstein Opportunity Scoring, part of the Sales Cloud Einstein suite, is available to eligible customers at no additional cost. Simply activate Einstein, and the system will handle the rest, offering predictive insights to improve your sales pipeline.

Managing Access to Einstein Features in Sales Cloud

Sales Cloud users can access Einstein Opportunity Scoring through the Sales Cloud Einstein For Everyone permission set. Ensure the right team members have access by reviewing the permissions, features included, and how to manage assignments.

Einstein Copilot Setup for Sales

Einstein Copilot helps sales teams stay organized by guiding them through deal management, closing strategies, customer communications, and sales forecasting. Each Copilot action corresponds to specific topics designed to optimize the sales process.
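As a rough illustration of auditing who holds that permission set, here is a hedged Python sketch against Salesforce's standard REST query endpoint. The instance URL, API version, and access token are placeholders, and the permission set label is taken from the text above; this is one way to check assignments, not the only one (Setup UI or reports work too).

```python
import requests

# Placeholders: supply your own My Domain URL and a valid OAuth access token.
INSTANCE = "https://yourInstance.my.salesforce.com"
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"

soql = (
    "SELECT Assignee.Name, Assignee.Username "
    "FROM PermissionSetAssignment "
    "WHERE PermissionSet.Label = 'Sales Cloud Einstein For Everyone'"
)
resp = requests.get(
    f"{INSTANCE}/services/data/v59.0/query",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"q": soql},
)
resp.raise_for_status()

# Print each user currently assigned the permission set.
for row in resp.json()["records"]:
    print(row["Assignee"]["Name"], row["Assignee"]["Username"])
```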

Read More
Python Alongside Salesforce

Python Losing the Crown

For years, Python has been synonymous with data science, thanks to its robust libraries like NumPy, Pandas, and scikit-learn. It’s long held the crown as the dominant programming language in the field. However, even the strongest kingdoms face threats. The whispers are growing louder: Is Python’s reign nearing its end? Before you fire up your Jupyter notebook to prove me wrong, let me clarify — Python is incredible and undeniably one of the greatest programming languages of all time. But no ruler is without flaws, and Python’s supremacy may not last forever. Here are five reasons why Python’s crown might be slipping.

1. Performance Bottlenecks: Python’s Achilles’ Heel

Let’s address the obvious: Python is slow. Its interpreted nature makes it inherently less efficient than compiled languages like C++ or Java. Sure, libraries like NumPy and tools like Cython help mitigate these issues, but at its core, Python can’t match the raw speed of newer, more performance-oriented languages. Enter Julia and Rust, which are optimized for numerical computing and high-performance tasks. When working with massive, real-time datasets, Python’s performance bottlenecks become harder to ignore, prompting some developers to offload critical tasks to faster alternatives.

2. Python’s Memory Challenges

Memory consumption is another area where Python struggles. Handling large datasets often pushes Python to its limits, especially in environments with constrained resources, such as edge computing or IoT. While tools like Dask can help manage memory more efficiently, these are often stopgap solutions rather than true fixes. Languages like Rust are gaining traction for their superior memory management, making them an attractive alternative for resource-limited scenarios. Picture running a Python-based machine learning model on a Raspberry Pi, only to have it crash due to memory overload. Frustrating, isn’t it?

3. The Rise of Domain-Specific Languages (DSLs)

Python’s versatility has been both its strength and its weakness. As industries mature, many are turning to domain-specific languages tailored to their specific needs. Python may be the “jack of all trades,” but as the saying goes, it risks being the “master of none” compared to these specialized tools.

4. Python’s Simplicity: A Double-Edged Sword

Python’s beginner-friendly syntax is one of its greatest strengths, but it can also create complacency. Its ease of use often means developers don’t delve into the deeper mechanics of algorithms or computing. Meanwhile, languages like Julia, designed for scientific computing, offer intuitive structures for advanced modeling while encouraging developers to engage with complex mathematical concepts. Python’s simplicity is like riding a bike with training wheels: it works, but it may not push you to grow as a developer.

5. AI-Specific Frameworks Are Gaining Ground

Python has been the go-to language for AI, powering frameworks like TensorFlow, PyTorch, and Keras. But new challengers are emerging, and as AI and machine learning evolve, these specialized frameworks could chip away at Python’s dominance.

The Verdict: Python Losing the Crown?

Python remains the Swiss Army knife of programming languages, especially in data science. However, its cracks are showing as new, specialized tools and faster languages emerge. The data science landscape is evolving, and Python must adapt or risk losing its crown. For now, Python is still king. But as history has shown, no throne is secure forever.
The future belongs to those who innovate, and Python’s ability to evolve will determine whether it remains at the top. The throne of code is only as stable as the next breakthrough.
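A small, self-contained timing sketch illustrates the kind of interpreter overhead described in point 1. Exact timings vary by machine; the point is the relative gap between a pure-Python loop and the same reduction pushed into NumPy's compiled code.

```python
import time
import numpy as np

data = np.random.rand(10_000_000)

# Pure-Python loop: every element is handled by the interpreter.
start = time.perf_counter()
total = 0.0
for x in data:
    total += x * x
loop_seconds = time.perf_counter() - start

# NumPy: the same sum of squares runs in compiled code.
start = time.perf_counter()
total_np = float(np.dot(data, data))
numpy_seconds = time.perf_counter() - start

print(f"python loop: {loop_seconds:.2f}s   numpy: {numpy_seconds:.4f}s")
print("same result (to rounding):", abs(total - total_np) / total_np < 1e-6)
```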

Read More
AI in Networking

AI in Networking

AI Tools in Networking: Tailoring Capabilities to Unique Needs

AI tools are becoming increasingly common across various industries, offering a wide range of functionalities. However, network engineers may not require every capability these tools provide. Each network has distinct requirements that align with specific business objectives, necessitating that network engineers and developers select AI toolsets tailored to their networks’ needs. While network teams often desire similar AI capabilities, they also encounter common challenges in integrating these tools into their systems.

The Rise of AI in Networking

Though AI is not a new concept—having existed for decades in the form of automated and expert systems—it is gaining unprecedented attention. According to Jim Frey, principal analyst for networking at TechTarget’s Enterprise Strategy Group, many organizations have not fully grasped AI’s potential in production environments over the past three years. “AI has been around for a long time, but the interesting thing is, only a minority—not even half—have really said they’re using it effectively in production for the last three years,” Frey noted. Generative AI (GenAI) has significantly contributed to this renewed interest in AI. Shamus McGillicuddy, vice president of research at Enterprise Management Associates, categorizes AI tools into two main types: GenAI and AIOps (AI for IT operations). “Generative AI, like ChatGPT, has recently surged in popularity, becoming a focal point of discussion among IT professionals,” McGillicuddy explained. “AIOps, on the other hand, encompasses machine learning, anomaly detection, and analytics.” The increasing complexity of networks is another factor driving the adoption of AI in networking. Frey highlighted that the demands of modern network environments are beyond human capability to manage manually, making AI engines a vital solution.

Essential AI Tool Capabilities for Networks

While individual network needs vary, many network engineers seek similar functionalities when integrating AI. According to McGillicuddy’s research, network optimization and automated troubleshooting are among the most popular use cases for AI. However, many professionals prefer to retain manual oversight in the fixing process. “Automated troubleshooting can identify and analyze issues, but typically, people want to approve the proposed fixes,” McGillicuddy stated. Many of these capabilities are critical for enhancing security and mitigating threats. Frey emphasized that networking professionals increasingly view AI as a tool to improve organizational security. DeCarlo echoed this sentiment, noting that network managers share similar objectives with security professionals regarding proactive problem recognition. Frey also mentioned alternative use cases for AI, such as documentation and change recommendations, which, while less popular, can offer significant value to network teams. Ultimately, the relevance of any AI capability hinges on its fit within the network environment and team needs. “I don’t think you can prioritize one capability over another,” DeCarlo remarked. “It depends on the tools being used and their effectiveness.”

Generative AI: A New Frontier

Despite its recent emergence, GenAI has quickly become an asset in the networking field. McGillicuddy noted that in the past year and a half, network professionals have adopted GenAI tools, with ChatGPT being one of the most recognized examples.
“One user reported that leveraging ChatGPT could reduce a task that typically takes four hours down to just 10 minutes,” McGillicuddy said. However, he cautioned that users must understand the limitations of GenAI, as mistakes can occur. “There’s a risk of errors or ‘hallucinations’ with these tools, and having blind faith in their outputs can lead to significant network issues,” he warned. In addition to ChatGPT, vendors are developing GenAI interfaces for their products, including virtual assistants, and McGillicuddy’s findings point to several common use cases for these vendor GenAI products. DeCarlo added that GenAI tools offer valuable training capabilities due to their rapid processing speeds and in-depth analysis, which can expedite knowledge acquisition within the network. Frey highlighted that GenAI’s rise is attributed to its ability to outperform older systems lacking sophistication. Nevertheless, the complexity of GenAI infrastructures has led to a demand for AIOps tools to manage these systems effectively. “We won’t be able to manage GenAI infrastructures without the support of AI tools, as human capabilities cannot keep pace with rapid changes,” Frey asserted.

Challenges in Implementing AI Tools

While AI tools present significant benefits for networks, network engineers and managers must navigate several challenges before integration.

Data Privacy, Collection, and Quality

Data usage remains a critical concern for organizations considering AIOps and GenAI tools. Frey noted that the diverse nature of network data—combining operational information with personally identifiable information—heightens data privacy concerns. For GenAI, McGillicuddy pointed out the importance of validating AI outputs and ensuring high-quality data is utilized for training. “If you feed poor data to a generative AI tool, it will struggle to accurately understand your network,” he explained.

Complexity of AI Tools

Frey and McGillicuddy agreed that the complexity of both AI and network systems could hinder effective deployment. Frey mentioned that AI systems, especially GenAI, require careful tuning and strong recommendations to minimize inaccuracies. McGillicuddy added that intricate network infrastructures, particularly those involving multiple vendors, could limit the effectiveness of AIOps components, which are often specialized for specific systems.

User Uptake and Skills Gaps

User adoption of AI tools poses a significant challenge. Proper training is essential to realize the full benefits of AI in networking. Some network professionals may be resistant to using AI, while others may lack the knowledge to integrate these tools effectively. McGillicuddy noted that AIOps tools are often less intuitive than GenAI, necessitating a certain level of expertise for users to extract value. “Understanding how tools function and identifying potential gaps can be challenging,” DeCarlo added. The learning curve can be steep, particularly for teams accustomed to longstanding tools.

Integration Issues

Integration challenges can further complicate user adoption. McGillicuddy highlighted two dimensions of this issue: tools and processes. On the tools side, concerns arise about harmonizing GenAI with existing systems. “On the process side, it’s crucial to ensure that teams utilize these tools effectively,” he said. DeCarlo cautioned that organizations might need to create in-house supplemental tools to bridge integration gaps, complicating the synchronization of vendor AI
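As a toy illustration of the anomaly-detection capability that AIOps tooling provides, here is a minimal rolling z-score check on latency samples. The data is simulated and the window and threshold are illustrative assumptions, not recommended operational values.

```python
import numpy as np

# Hypothetical per-minute latency samples (ms) from a monitored link.
rng = np.random.default_rng(1)
latency = rng.normal(20.0, 2.0, size=120)
latency[100:105] += 35.0  # simulated incident

window = 30  # minutes of recent history used as the baseline
for t in range(window, len(latency)):
    baseline = latency[t - window:t]
    z = (latency[t] - baseline.mean()) / (baseline.std() + 1e-9)
    if abs(z) > 4:  # illustrative threshold
        print(f"minute {t}: latency {latency[t]:.1f} ms looks anomalous (z={z:.1f})")
```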

Read More
Google on Google AI

Google on Google AI

As a leading cloud provider, Google Cloud is also a major player in the generative AI market, and Google on Google AI offers insight into how the company views this technology. In the past two years, Google has been in a competitive battle with AWS, Microsoft, and OpenAI to gain dominance in the generative AI space. Recently, Google introduced several generative artificial intelligence products, including its flagship large language model, Gemini, and the Vertex AI Model Garden. Last week, it also unveiled Audio Overview, a tool that transforms documents into audio discussions. Despite these advancements, Google has faced criticism for lagging in some areas, such as problems with its initial image generation tool, similar to issues seen with X’s Grok. However, the company remains committed to driving progress in generative AI. Google’s strategy focuses not only on delivering its proprietary models but also on offering a broad selection of third-party models through its Model Garden.

Google’s Thoughts on Google AI

Warren Barkley, head of product for Google Cloud’s Vertex AI, GenAI, and machine learning, emphasized this approach in a recent episode of the Targeting AI podcast. He noted that a key part of Google’s ongoing effort is ensuring users can easily transition to more advanced models. “A lot of what we did in the early days, and we continue to do now, is make it easy for people to move to the next generation,” Barkley said. “The models we built 18 months ago are a shadow of what we have today. So, providing pathways for people to upgrade and stay on the cutting edge is critical.” Google is also focused on helping users select the right AI models for specific applications. With over 100 closed and open models available in the Model Garden, evaluating them can be challenging for customers. To address this, Google introduced evaluation tools that allow users to test prompts and compare model responses. In addition, Google is exploring advancements in artificial intelligence reasoning, which it views as crucial to driving the future of generative AI.
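A rough sketch of the prompt-and-compare idea mentioned above, assuming the google-cloud-aiplatform SDK is installed and the caller is authenticated to a Google Cloud project; the project ID and model names are placeholders, and Google's own evaluation tooling goes well beyond this side-by-side comparison.

```python
# pip install google-cloud-aiplatform
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project", location="us-central1")

prompt = "Summarize the main benefits of a unified customer profile in three bullets."

# Run the same prompt against two Model Garden models and compare the answers.
for model_name in ["gemini-1.5-pro", "gemini-1.5-flash"]:
    model = GenerativeModel(model_name)
    response = model.generate_content(prompt)
    print(f"--- {model_name} ---")
    print(response.text)
```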

Read More
AI Customer Service Agents Explained

AI Customer Service Agents Explained

AI customer service agents are advanced technologies designed to understand and respond to customer inquiries within defined guidelines. These agents can handle both simple and complex issues, such as answering frequently asked questions or managing product returns, all while offering a personalized, conversational experience. Research shows that 82% of service representatives report that customers ask for more than they used to. As a customer service leader, you’re likely facing increasing pressure to meet these growing expectations while simultaneously reducing costs, speeding up service, and providing personalized, round-the-clock support. This is where AI customer service agents can make a significant impact. Here’s a closer look at how AI agents can enhance your organization’s service operations, improve customer experience, and boost overall productivity and efficiency.

What Are AI Customer Service Agents?

AI customer service agents are virtual assistants designed to interact with customers and support service operations. Utilizing machine learning and natural language processing (NLP), these agents are capable of handling a broad range of tasks, from answering basic inquiries to resolving complex issues — even managing multiple tasks at once. Importantly, AI agents continuously improve through self-learning.

Why Are AI-Powered Customer Service Agents Important?

AI-powered customer service technology is becoming essential for several reasons.

Benefits of AI Customer Service Agents

AI customer service agents help service teams manage growing service demands by taking on routine tasks and providing essential support.

Why Choose Agentforce Service Agent?

If you’re considering adding AI customer service agents to your strategy, Agentforce Service Agent offers a comprehensive solution. By embracing AI customer service agents like Agentforce Service Agent, businesses can reduce costs, meet growing customer demands, and stay competitive in an ever-evolving global market.
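As a toy illustration of the routing step such agents perform, here is a minimal sketch in which simple keyword matching stands in for the NLP layer; the intents, keywords, and fallback behavior are invented for illustration, and real agents use trained language models rather than keyword lists.

```python
# Toy intent router standing in for the NLP layer of a service agent.
# Intents, keywords, and the fallback are illustrative only.
INTENTS = {
    "order_status": ["where is my order", "tracking", "shipped"],
    "returns": ["return", "refund", "exchange"],
    "setup_help": ["install", "setup", "pair", "connect"],
}

def route(message: str) -> str:
    """Return the matched intent, or escalate when nothing matches."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "escalate_to_human"

print(route("Hi, I need help to install my new thermostat"))  # setup_help
print(route("My panel is sparking!"))                          # escalate_to_human
```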

Read More
Recent advancements in AI

Recent advancements in AI

Recent advancements in AI have been propelled by large language models (LLMs) containing billions to trillions of parameters. Parameters—variables used to train and fine-tune machine learning models—have played a key role in the development of generative AI. As the number of parameters grows, models like ChatGPT can generate human-like content that was unimaginable just a few years ago. Parameters are sometimes referred to as “features” or “feature counts.” While it’s tempting to equate the power of AI models with their parameter count, similar to how we think of horsepower in cars, more parameters aren’t always better. An increase in parameters can lead to additional computational overhead and even problems like overfitting. There are various ways to increase the number of parameters in AI models, but not all approaches yield the same improvements. For example, Google’s Switch Transformers scaled to trillions of parameters, but some of their smaller models outperformed them in certain use cases. Thus, other metrics should be considered when evaluating AI models.

The exact relationship between parameter count and intelligence is still debated. John Blankenbaker, principal data scientist at SSA & Company, notes that larger models tend to replicate their training data more accurately, but the belief that more parameters inherently lead to greater intelligence is often wishful thinking. He points out that while these models may sound knowledgeable, they don’t actually possess true understanding. One challenge is the misunderstanding of what a parameter is. It’s not a word, feature, or unit of data but rather a component within the model’s computation. Each parameter adjusts how the model processes inputs, much like turning a knob in a complex machine. In contrast to parameters in simpler models like linear regression, which have a clear interpretation, parameters in LLMs are opaque and offer no insight on their own.

Christine Livingston, managing director at Protiviti, explains that parameters act as weights that allow flexibility in the model. However, more parameters can lead to overfitting, where the model performs well on training data but struggles with new information. Adnan Masood, chief AI architect at UST, highlights that parameters influence precision, accuracy, and data management needs. However, due to the size of LLMs, it’s impractical to focus on individual parameters. Instead, developers assess models based on their intended purpose, performance metrics, and ethical considerations. Understanding the data sources and pre-processing steps becomes critical in evaluating the model’s transparency.

It’s important to differentiate between parameters, tokens, and words. A parameter is not a word; rather, it’s a value learned during training. Tokens are fragments of words, and LLMs are trained on these tokens, which are transformed into embeddings used by the model. The number of parameters influences a model’s complexity and capacity to learn. More parameters often lead to better performance, but they also increase computational demands. Larger models can be harder to train and operate, leading to slower response times and higher costs. In some cases, smaller models are preferred for domain-specific tasks because they generalize better and are easier to fine-tune. Transformer-based models like GPT-4 dwarf previous generations in parameter count. However, for edge-based applications where resources are limited, smaller models are preferred as they are more adaptable and efficient.
Fine-tuning large models for specific domains remains a challenge, often requiring extensive oversight to avoid problems like overfitting. There is also growing recognition that parameter count alone is not the best way to measure a model’s performance. Alternatives like Stanford’s HELM and benchmarks such as GLUE and SuperGLUE assess models across multiple factors, including fairness, efficiency, and bias.

Three trends are shaping how we think about parameters. First, AI developers are improving model performance without necessarily increasing parameters. A study of 231 models between 2012 and 2023 found that the computational power required for LLMs has halved every eight months, outpacing Moore’s Law. Second, new neural network approaches like Kolmogorov-Arnold Networks (KANs) show promise, achieving comparable results to traditional models with far fewer parameters. Lastly, agentic AI frameworks like Salesforce’s Agentforce offer a new architecture where domain-specific AI agents can outperform larger general-purpose models.

As AI continues to evolve, it’s clear that while parameter count is an important consideration, it’s just one of many factors in evaluating a model’s overall capabilities. To stay on the cutting edge of artificial intelligence, contact Tectonic today.
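To ground the distinction between parameters and tokens, here is a minimal PyTorch sketch that counts the parameters of a deliberately tiny network. The layer sizes are arbitrary; real LLMs differ in scale and architecture, not in how parameters are counted.

```python
import torch.nn as nn

# A deliberately tiny model: a token embedding followed by two linear layers.
model = nn.Sequential(
    nn.Embedding(num_embeddings=32_000, embedding_dim=256),  # token embeddings
    nn.Linear(256, 1024),
    nn.ReLU(),
    nn.Linear(1024, 32_000),
)

# Every weight and bias tensor contributes to the parameter count.
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {total:,}")
# 32000*256 + (256*1024 + 1024) + (1024*32000 + 32000) = 41,255,168, about 41 million
```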

Read More
Life of a Salesforce Admin in the AI Era

Life of a Salesforce Admin in the AI Era

The life of Salesforce admins is rapidly evolving as artificial intelligence (AI) becomes integral to business operations. Let’s examine the life of a Salesforce admin in the AI era. By 2025, the Salesforce admin’s role will expand beyond managing CRM systems to include leveraging AI tools to enhance efficiency, boost productivity, and maintain security. While this future offers exciting opportunities, it also comes with new responsibilities that require admins to adapt and learn. So, what will Salesforce admins need to succeed in this AI-driven landscape?

The Salesforce Admin’s Role in 2025

In 2025, Salesforce admins will be at the forefront of digital transformation, helping organizations harness the full potential of the Salesforce ecosystem and AI-powered tools. These AI tools will automate processes, predict trends, and improve overall efficiency. Many professionals are already enrolling in Salesforce Administrator courses focused on AI and automation, equipping them with the essential skills to thrive in this new era.

Key Responsibilities of a Salesforce Admin in the AI Era

1. AI Integration and Optimization. Admins will be responsible for integrating AI tools like Salesforce Einstein AI into workflows, ensuring they’re properly configured and tailored to the organization’s needs.

2. Automating Processes with AI. AI will revolutionize automation, making complex workflows more efficient.

3. Data Management and Predictive Analytics. Admins will leverage AI to manage data and generate predictive insights.

4. Enhancing Security and Compliance. AI-powered security tools will help admins proactively protect systems.

5. Supporting AI-Driven Customer Experiences. Admins will deploy AI tools that enhance customer interactions.

6. Continuous Learning and Upskilling. As AI evolves, so too must Salesforce admins.

7. Collaboration with Cross-Functional Teams. Admins will work closely with IT, marketing, and sales teams to deploy AI solutions organization-wide.

Skills Required for Future Salesforce Admins

1. AI and Machine Learning Proficiency. Admins will need to understand how AI models like Einstein AI function and how to deploy them. While not requiring full data science expertise, a solid grasp of AI concepts—such as predictive analytics and machine learning—will be essential.

2. Advanced Data Management and Analysis. Managing large datasets and ensuring data accuracy will be critical as admins work with AI tools. Proficiency in data modeling, SQL, SOQL, and ETL processes will be vital for handling AI-powered data management.

3. Automation and Process Optimization. AI-enhanced automation will become a key responsibility. Admins must master tools like Salesforce Flow and Einstein Automate to build intelligent workflows and ensure smooth process automation.

4. Security and Compliance Expertise. With AI-driven security protocols, admins will need to stay updated on data privacy regulations and deploy tools that ensure compliance and prevent data breaches.

5. Collaboration and Leadership. Admins will lead the implementation of AI tools across departments, requiring strong collaboration and leadership skills to align AI-driven solutions with business objectives.

Advanced Certifications for AI-Era Admins

To stay competitive, Salesforce admins will need to pursue advanced certifications.
Tectonic’s Thoughts

The Salesforce admin role is transforming as AI becomes an essential part of the platform. By mastering AI tools, optimizing processes, ensuring security, and continuously upskilling, Salesforce admins can become pivotal players in driving digital transformation. The future is bright for those who embrace the AI-powered Salesforce landscape and position themselves at the forefront of innovation.

Read More
Data Labeling

Data Labeling

Data Labeling: Essential for Machine Learning and AI

Data labeling is the process of identifying and tagging data samples, essential for training machine learning (ML) models. While it can be done manually, software often assists in automating the process. Data labeling is critical for helping machine learning models make accurate predictions and is widely used in fields like computer vision, natural language processing (NLP), and speech recognition.

How Data Labeling Works

The process begins with collecting raw data, such as images or text, which is then annotated with specific labels to provide context for ML models. These labels need to be precise, informative, and independent to ensure high-quality model training. For instance, in computer vision, data labeling can tag images of animals so that the model can learn common features and correctly identify animals in new, unlabeled data. Similarly, in autonomous vehicles, labeling helps the AI differentiate between pedestrians, cars, and other objects, ensuring safe navigation.

Why Data Labeling is Important

Data labeling is integral to supervised learning, a type of machine learning where models are trained on labeled data. Through labeled examples, the model learns the relationships between input data and the desired output, which improves its accuracy in real-world applications. For example, a machine learning algorithm trained on labeled emails can classify future emails as spam or not based on those labels. It’s also used in more advanced applications like self-driving cars, where the model needs to understand its surroundings by recognizing and labeling various objects like roads, signs, and obstacles.

The Data Labeling Process

Data labeling involves several key steps. Errors in labeling can negatively affect the model’s performance, so many organizations adopt a human-in-the-loop approach to involve people in quality control and improve the accuracy of labels.

Methods of Data Labeling

Companies can label data through various methods. Each organization must choose a method that fits its needs, based on factors like data volume, staff expertise, and budget.

The Growing Importance of Data Labeling

As AI and ML become more pervasive, the need for high-quality data labeling increases. Data labeling not only helps train models but also provides opportunities for new jobs in the AI ecosystem. For instance, companies like Alibaba, Amazon, Facebook, Tesla, and Waymo all rely on data labeling for applications ranging from e-commerce recommendations to autonomous driving.

Looking Ahead

Data tools are becoming more sophisticated, reducing the need for manual work while ensuring higher data quality. As data privacy regulations tighten, businesses must also ensure that labeling practices comply with local, state, and federal laws. In conclusion, labeling is a crucial step in building effective machine learning models, driving innovation, and ensuring that AI systems perform accurately across a wide range of applications.
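To make the spam-classification example above concrete, here is a minimal scikit-learn sketch that trains on a tiny hand-labeled dataset. The example emails and labels are invented for illustration; real projects need far more examples and a documented labeling guideline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny hand-labeled dataset: 1 = spam, 0 = not spam.
texts = [
    "WIN a FREE cruise, claim your prize now",
    "Lowest price on meds, limited offer",
    "Lunch at noon tomorrow?",
    "Here are the meeting notes from Tuesday",
]
labels = [1, 1, 0, 0]

# Turn the labeled text into word counts and fit a simple classifier.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = MultinomialNB().fit(X, labels)

# The model applies what it learned from the labels to unseen email.
new_email = ["Claim your free prize before midnight"]
print(model.predict(vectorizer.transform(new_email)))  # [1] -> labeled as spam
```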

Read More
Third Wave of AI at Salesforce

Third Wave of AI at Salesforce

The Third Wave of AI at Salesforce: How Agentforce is Transforming the Landscape

At Dreamforce 2024, Salesforce unveiled several exciting innovations, with Agentforce taking center stage. This post explores the key changes and enhancements designed to improve efficiency and elevate customer interactions.

Introducing Agentforce

Agentforce is a customizable AI agent builder that empowers organizations to create and manage autonomous agents for various business tasks. But what exactly is an agent? An agent is akin to a chatbot but goes beyond traditional capabilities. While typical chatbots are restricted to scripted responses and predefined questions, Agentforce agents leverage large language models (LLMs) and generative AI to comprehend customer inquiries contextually. This enables them to make independent decisions, whether processing requests or resolving issues using real-time data from your company’s customer relationship management (CRM) system.

The Role of Atlas

At the heart of Agentforce’s functionality lies the Atlas reasoning engine, which acts as the operational brain. Unlike standard assistive tools, Atlas is an agentic system with the autonomy to act on behalf of the user. Atlas formulates a plan based on necessary actions and can adjust that plan based on evaluations or new information. When it’s time to engage, Atlas knows which business processes to activate and connects with customers or employees via their preferred channels. This sophisticated approach allows Agentforce to significantly enhance operational efficiency. By automating routine inquiries, it frees up your team to focus on more complex tasks, delivering a smoother experience for both staff and customers.

Speed to Value

One of Agentforce’s standout features is its emphasis on rapid implementation. Many AI projects can be resource-intensive and take months or even years to launch. However, Agentforce enables quick deployment by leveraging existing Salesforce infrastructure, allowing organizations to implement solutions rapidly and with greater control. Salesforce also offers pre-built Agentforce agents tailored to specific business needs—such as Service Agent, Sales Development Representative Agent, Sales Coach, Personal Shopper Agent, and Campaign Agent—all customizable with the Agent Builder. Agentforce for Service and Sales will be generally available starting October 25, 2024, with certain elements of the Atlas Reasoning Engine rolling out in February 2025. Pricing begins at $2 per conversation, with volume discounts available.

Transforming Customer Insights with Data Cloud and Marketing Cloud

Dreamforce also highlighted enhancements to Data Cloud, Salesforce’s backbone for all cloud products. The platform now supports processing unstructured data, which constitutes up to 90% of company data often overlooked by traditional reporting systems. With new capabilities for analyzing various unstructured formats—like video, audio, sales demos, customer service calls, and voicemails—businesses can derive valuable insights and make informed decisions across Customer 360. Furthermore, Data Cloud One enables organizations to connect siloed Salesforce instances effortlessly, promoting seamless data sharing through a no-code, point-and-click setup.
The newly announced Marketing Cloud Advanced edition serves as the “big sister” to Marketing Cloud Growth, equipping larger marketing teams with enhanced features like Path Experiment, which tests different content strategies across channels, and Einstein Engagement Scoring for deeper insights into customer behavior. Together, these enhancements empower companies to engage customers more meaningfully and measurably across all touchpoints.

Empowering the Workforce Through Education

Salesforce is committed to making AI accessible for all. They recently announced free instructor-led courses and AI certifications available through 2025, aimed at equipping the Salesforce community with essential AI and data management skills. To support this initiative, Salesforce is establishing AI centers in major cities, starting with London, to provide hands-on training and resources, fostering AI expertise. They also launched a global Agentforce World Tour to promote understanding and adoption of the new capabilities introduced at Dreamforce, featuring repackaged sessions from the conference and opportunities for specialists to answer questions.

The Bottom Line

What does this mean for businesses? With the rollout of Agentforce, along with enhancements to Data Cloud and Marketing Cloud, organizations can operate more efficiently and connect with customers in more meaningful ways. Coupled with a focus on education through free courses and global outreach, getting on board has never been easier. If you’d like to discuss how we can help your business maximize its potential with Salesforce through data and AI, connect with us and schedule a meeting with our team.

Legacy systems can create significant gaps between operations and employee needs, slowing lead processes and resulting in siloed, out-of-sync data that hampers business efficiency. Responding to inquiries within five minutes offers a 75% chance of converting leads into customers, emphasizing the need for rapid, effective marketing responses. Salesforce aims to help customers strengthen relationships, enhance productivity, and boost margins through its premier AI CRM for sales, service, marketing, and commerce, while also achieving these goals internally. Recognizing the complexity of its decade-old processes, including lead assignment across three systems and 2 million lines of custom code, Salesforce took on the role of “customer zero,” leveraging Data Cloud to create a unified view of customers known as the “Customer 360 Truth Profile.” This consolidation of disparate data laid the groundwork for enterprise-wide AI and automation, improving marketing automation and reducing lead time by 98%. As Michael Andrew, SVP of Marketing Decision Science at Salesforce, noted, this initiative enabled the company to provide high-quality leads to its sales team with enriched data and AI scoring while accelerating time to market and enhancing data quality.

Embracing Customer Zero

“Almost exactly a year ago, we set out with a beginner’s mind to transform our lead automation process with a solution that would send the best leads to the right sales teams within minutes of capturing their data and support us for the next decade,” said Andrew. The initial success metric was “speed to lead,” aiming to reduce the handoff time from 20 minutes to less than one minute. The focus was also on integrating customer and lead data to develop a more comprehensive 360-degree profile for each prospect, enhancing lead assignment and sales rep productivity.
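The “speed to lead” idea, scoring and routing a lead within moments of capture rather than batching it, can be sketched in a few lines. The weights, fields, and queue names below are invented for illustration; this is not Salesforce’s internal lead-automation logic.

```python
# Hypothetical speed-to-lead sketch: score a freshly captured lead and assign
# it to a queue immediately. All thresholds and field names are illustrative.
from datetime import datetime, timezone


def score_lead(lead: dict) -> int:
    """Toy scoring stand-in for AI lead scoring on an enriched 360-degree profile."""
    score = 0
    if lead.get("company_size", 0) > 500:
        score += 40
    if lead.get("industry") in {"real_estate", "construction"}:
        score += 30
    if lead.get("engaged_with_campaign"):
        score += 30
    return score


def route_lead(lead: dict) -> str:
    """Route the lead as soon as it arrives instead of waiting for a batch job."""
    score = score_lead(lead)
    queue = "enterprise_sales" if score >= 70 else "smb_sales" if score >= 40 else "nurture"
    lead.update(score=score, queue=queue,
                routed_at=datetime.now(timezone.utc).isoformat())
    return queue


if __name__ == "__main__":
    lead = {"company_size": 800, "industry": "real_estate", "engaged_with_campaign": True}
    print(route_lead(lead), lead["score"])  # enterprise_sales 100
```

Keeping the scoring rules in one place is also what makes “assignment changes in days rather than weeks” plausible: routing behavior becomes configuration, not code scattered across systems.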
Another objective was to boost business agility by cutting the average time to implement assignment changes from four weeks to mere days. Accelerating Success with

Read More
Ambient AI Enhances Patient-Provider Relationship

How Ambient AI is Enhancing the Patient-Provider Relationship

Ambient AI is transforming the patient-provider experience at Ochsner Health by enabling clinicians to focus more on their patients and less on their screens. While some view technology as a barrier to human interaction, Ochsner’s innovation officer, Dr. Jason Hill, believes ambient AI is doing the opposite by fostering stronger connections between patients and providers. Researchers estimate that physicians spend over 40% of consultation time focused on electronic health records (EHRs), limiting face-to-face interactions. “We have highly skilled professionals spending time inputting data instead of caring for patients, and as a result, patients feel disconnected due to the screen barrier,” Hill said. Additionally, increased documentation demands related to quality reporting, patient satisfaction, and reimbursement are straining providers. Ambient AI scribes help relieve this burden by automating clinical documentation, allowing providers to focus on their patients. Using machine learning, these AI tools generate clinical notes in seconds from recorded conversations. Clinicians then review and edit the drafts before finalizing the record. Ochsner began exploring ambient AI several years ago, but only with the advent of advanced language models like OpenAI’s GPT did the technology become scalable and cost-effective for large health systems. “Once the technology became affordable for large-scale deployment, we were immediately interested,” Hill explained.

Selecting the Right Vendor

Ochsner piloted two ambient AI tools before choosing DeepScribe for an enterprise-wide partnership. After the initial rollout to 60 physicians, the tool achieved a 75% adoption rate and improved patient satisfaction scores by 6%. What set DeepScribe apart were its customization features. “We can create templates for different specialties, but individual doctors retain control over their note outputs based on specific clinical encounters,” Hill said. This flexibility was crucial in gaining physician buy-in. Ochsner also valued DeepScribe’s strong vendor support, which included tailored training modules and direct assistance to clinicians. One example of this support was the development of a software module that allowed Ochsner’s providers to see EHR reminders within the ambient AI app. “DeepScribe built a bridge to bring EHR data into the app, so clinicians could access important information right before the visit,” Hill noted.

Ensuring Documentation Quality

Ochsner has implemented several safeguards to maintain the accuracy of AI-generated clinical documentation. Providers undergo training before using the ambient AI system, with a focus on reviewing and finalizing all AI-generated notes. Notes created by the AI remain in a “pended” state until the provider signs off. Ochsner also tracks how much text is generated by the AI versus added by the provider, using this as a marker for the level of editing required. Following the successful pilot, Ochsner plans to expand ambient AI to 600 clinicians by the end of the year, with the eventual goal of providing access to all 4,700 physicians. While Hill anticipates widespread adoption, he acknowledges that the technology may not be suitable for all providers. “Some clinicians have different documentation needs, but for the vast majority, this will likely become the standard way we document at Ochsner within a year,” he said.
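The two safeguards described above, keeping notes “pended” until sign-off and tracking how much of the AI draft survives editing, can be illustrated with a small sketch. The field names, status values, and similarity measure are assumptions for illustration, not Ochsner’s or DeepScribe’s actual implementation.

```python
# Hypothetical sketch of a pended-note workflow plus an AI-vs-provider edit
# metric, based on the safeguards described above. Illustrative only.
from difflib import SequenceMatcher


def edit_retention_ratio(ai_draft: str, final_note: str) -> float:
    """Rough fraction of the AI draft retained in the signed note (0.0 to 1.0)."""
    return SequenceMatcher(None, ai_draft, final_note).ratio()


def sign_note(note: dict, final_text: str, clinician: str) -> dict:
    """Move a pended AI draft to signed status and record how much was edited."""
    if note["status"] != "pended":
        raise ValueError("only pended notes can be signed")
    note.update(
        status="signed",
        signed_by=clinician,
        final_text=final_text,
        ai_retention=round(edit_retention_ratio(note["ai_draft"], final_text), 2),
    )
    return note


if __name__ == "__main__":
    note = {"status": "pended",
            "ai_draft": "Patient reports mild knee pain for two weeks."}
    signed = sign_note(note,
                       "Patient reports mild left knee pain for two weeks.",
                       "Dr. Hill")
    print(signed["status"], signed["ai_retention"])
```

A low retention ratio flags notes (or specialties) where the AI drafts need heavy correction, which is useful both for quality review and for deciding where the tool is a poor fit.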
Conclusion

By integrating ambient AI, Ochsner Health is not only improving operational efficiency but also strengthening the human connection between patients and providers. As the technology becomes more widespread, it holds the potential to reshape how clinical documentation is handled, freeing up time for more meaningful patient interactions.

Read More