Large Language Model - gettectonic.com - Page 7

Impact of Generative AI on Workforce

The Impact of Generative AI on the Future of Work

Automation has long been a source of both concern and hope for the future of work. Now, generative AI is the latest technology fueling both fear and optimism.

AI’s Role in Job Augmentation and Replacement

While AI is expected to enhance many jobs, there’s a growing argument that job augmentation for some may lead to job replacement for others. For instance, if AI makes a worker’s tasks ten times easier, the roles created to support that job could become redundant. A June 2023 McKinsey report estimated that generative AI (GenAI) could automate 60% to 70% of employee workloads. In fact, AI has already begun replacing jobs, contributing to nearly 4,000 job cuts in May 2023 alone, according to Challenger, Gray & Christmas Inc. OpenAI, the creator of ChatGPT, estimates that 80% of the U.S. workforce could see at least 10% of their work tasks affected by large language models (LLMs).

Examples of AI Job Replacement

One notable example involves a writer at a tech startup who was let go without explanation, only to later discover references to her as “Olivia/ChatGPT” in internal communications, where managers had discussed how ChatGPT was a cheaper alternative to employing a writer. While never officially confirmed, this strongly suggested that AI had replaced her role. The Writers Guild of America also went on strike, seeking not only higher wages and more residuals from streaming platforms but also stricter regulation of AI. Research from the Frank Hawkins Kenan Institute of Private Enterprise indicates that GenAI might disproportionately affect women: 79% of working women hold positions susceptible to automation, compared with 58% of working men. Unlike past automation, which typically targeted repetitive tasks, GenAI is different: it automates creative work such as writing, coding, and even music production.
For example, Paul McCartney used AI to partially generate his late bandmate John Lennon’s voice to create a posthumous Beatles song. In this case, AI enhanced creativity, but the broader implications could be more complex.

Other Impacts of AI on Jobs

AI’s impact on jobs goes beyond replacement. Human-machine collaboration presents a more positive angle, where AI improves the work experience by automating repetitive tasks. This could lead to a rise in AI-related jobs and a growing demand for AI skills. AI systems require significant human feedback, particularly in training processes like reinforcement learning from human feedback, where models are fine-tuned based on human input. A May 2023 paper also warned about the risk of “model collapse,” in which LLMs deteriorate without a continuing supply of human-generated data. However, there is also a risk that AI collaboration could hinder productivity: generative AI might produce an overabundance of low-quality content, forcing editors to spend more time refining it and deprioritizing more original work.

Jobs Most Affected by AI

AI Legislation and Regulation

Despite the rapid advancement of AI, comprehensive federal regulation in the U.S. remains elusive. However, several states have introduced or passed AI-focused laws, and New York City has enacted regulations for AI in recruitment. On the global stage, the European Union has introduced the AI Act, setting a common legal framework for AI. Meanwhile, U.S. leaders, including Senate Majority Leader Chuck Schumer, have begun outlining plans for AI regulation, emphasizing the need to protect workers, national security, and intellectual property. In October 2023, President Joe Biden signed an executive order on AI aiming to protect consumer privacy, support workers, and advance equity and civil rights in the justice system. AI regulation is becoming increasingly urgent, and it is a question of when, not if, comprehensive laws will be enacted.
As AI continues to evolve, its impact on the workforce will be profound and multifaceted, requiring careful consideration and regulation to ensure it benefits society as a whole.

Related Posts

Salesforce OEM AppExchange: Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more

The Salesforce Story: In Marc Benioff’s own words, how did salesforce.com grow from a start-up in a rented apartment into the world’s… Read more

Salesforce Jigsaw: Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for… Read more

Health Cloud Brings Healthcare Transformation: Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series… Read more


Salesforce Data Cloud Pioneer

While many organizations are still building their data platforms, Salesforce has made a significant leap forward. By seamlessly incorporating metadata integration, Salesforce has transformed the modern data stack into a comprehensive application platform known as the Einstein 1 Platform. Led by Muralidhar Krishnaprasad, executive vice president of engineering at Salesforce, the Einstein 1 Platform is built on the company’s metadata framework. The platform harmonizes metadata and integrates it with AI and automation, marking a new era of data utilization.

The Einstein 1 Platform: Innovations and Capabilities

Salesforce’s goal with the Einstein 1 Platform is to empower all business users (salespeople, service engineers, marketers, and analysts) to access, use, and act on all their data, regardless of its location, according to Krishnaprasad. The open, extensible platform not only unlocks trapped data but also equips organizations with generative AI functionality, enabling personalized experiences for employees and customers.

“Analytics is very important to know how your business is doing, but you also want to make sure all that data and insights are actionable,” Krishnaprasad said. “Our goal is to blend AI, automation, and analytics together, with the metadata layer being the secret sauce.”

In a conversation with George Gilbert, senior analyst at theCUBE Research, Krishnaprasad discussed the platform’s metadata integration, open-API technology, and key features. They explored how its extensibility and interoperability enhance usability across various data formats and sources.

Metadata Integration: Accommodating Any IT Environment

The Einstein 1 Platform is built on Trino, the open-source federated query engine, and on Spark for data processing. It offers a rich set of connectors and an open, extensible environment, enabling organizations to share data between warehouses, lakehouses, and other systems.
“We use a hyper-engine for sub-second response times in Tableau and other data explorations,” Krishnaprasad explained. “This in-memory overlap engine ensures efficient data processing.”

The platform supports various machine learning options and allows users to integrate their own large language models. Whether using Salesforce Einstein, Databricks, Vertex, SageMaker, or other solutions, users can operate without copying data. The platform includes three levels of extensibility, enabling organizations to standardize and extend their customer journey models. Users can start with basic reference models, customize them, and then generate insights, including AI-driven insights. Finally, they can introduce their own functions or triggers to act on those insights. The platform continuously performs unification, allowing users to create different unified graphs based on their needs.

“We’re a multimodal system, considering your entire customer journey,” Krishnaprasad said. “We provide flexibility at all levels of the stack to create the right experience for your business.”

The Triad of AI, Automation, and Analytics

The platform’s foundation ingests, harmonizes, and unifies data, resulting in a standardized metadata model that offers a 360-degree view of customer interactions. This approach unlocks siloed data, much of which is in unstructured forms such as conversations, documents, emails, audio, and video.

“What we’ve done with this customer 360-degree model is to use unified data to generate insights and make these accessible across application surfaces, enabling reactions to these insights,” Krishnaprasad said. “This unlocks a comprehensive customer journey.”

For instance, when a customer views an ad and visits the website, salespeople know what they’re interested in, service personnel understand their concerns, and analysts have the information needed for business insights. These capabilities enhance customer engagement.
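As a rough illustration of the unification idea described above (this is not Salesforce’s implementation; the field names and the simple fill-in merge rule are invented for the example):

```python
from collections import defaultdict

def unify_records(records, key="email"):
    """Merge per-channel customer records sharing a key into one profile."""
    profiles = defaultdict(dict)
    for record in records:
        profile = profiles[record[key]]
        for field, value in record.items():
            # Earlier sources win; later sources only fill missing fields.
            profile.setdefault(field, value)
    return dict(profiles)

# Hypothetical records from three silos: web analytics, sales, and service.
records = [
    {"email": "ada@example.com", "last_ad_view": "spring-promo"},
    {"email": "ada@example.com", "owner": "sales-team-2"},
    {"email": "ada@example.com", "open_case": "CASE-1042"},
]

unified = unify_records(records)
print(unified["ada@example.com"])
```

In this toy version the unified profile gives each role its slice of the journey: the ad the customer saw, the account owner, and the open service case.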
“Couple this with generative AI, and we enable a lot of self-service,” Krishnaprasad added. “We aim to provide accurate answers, elevating data to create a unified model and powering a unified experience across the entire customer journey.”


Salesforce Workday Partnership

Salesforce and Workday Partner to Launch AI-Powered Employee Service Agent

Salesforce (NYSE: CRM), the leading AI CRM platform, and Workday, Inc. (NASDAQ: WDAY), a leader in enterprise cloud applications for finance and HR, today announced a strategic partnership to develop a new AI-powered employee service agent. This solution will enhance employee experiences by automating routine tasks, providing personalized support, and delivering data-driven insights across the Salesforce and Workday platforms.

A Unified Data Foundation for Enhanced Employee Services

The partnership will integrate HR and financial data from Workday with CRM data from Salesforce, creating a unified data foundation. This integration will enable the development of AI-driven use cases that increase productivity, reduce costs, and improve the employee experience. A key feature will be the seamless incorporation of Workday into Slack, allowing for AI-enhanced automation and collaboration around HR and financial records. The new AI employee service agent, built on Salesforce’s Agentforce platform and Einstein AI alongside Workday AI, will cater to various employee service needs, such as onboarding, health benefits management, and career development. The agent will draw on a company’s data to interact with employees in natural language, offering personalized support and executing tasks based on trusted business rules and permissions.

Enhancing Employee and Customer Success

“The AI opportunity lies in augmenting employees and delivering exceptional customer experiences.
Our collaboration with Workday will empower businesses to create remarkable experiences using generative and autonomous AI, allowing employees to efficiently find answers, learn new skills, solve problems, and take actions,” said Marc Benioff, Chair and CEO of Salesforce.

Carl Eschenbach, CEO of Workday, highlighted the integration’s benefits: “By combining our platforms, data, and AI capabilities, we empower customers to deliver unmatched AI-powered employee experiences, leading to happier customers and substantial business value.”

Sal Companieh, Chief Digital and Information Officer at Cushman & Wakefield, noted the strategic advantage: “The integration of Workday and Salesforce will streamline workflows and deliver more personalized, AI-powered employee experiences, significantly enhancing our operational efficiency.”

“The shared data foundation between Workday and Salesforce will enable these partners to deliver transformative AI capabilities, enhancing employee experiences and driving business performance,” said R “Ray” Wang, CEO of Constellation Research, Inc.

About Workday

Workday is a leading enterprise platform that helps organizations manage their most important assets: their people and their money. The Workday platform is built with AI at the core to help customers elevate people, supercharge work, and move their business forever forward. Workday is used by more than 10,500 organizations around the world and across industries, from medium-sized businesses to more than 60% of the Fortune 500. For more information about Workday, visit workday.com.


Einstein Code Generation and Amazon SageMaker

Salesforce and the Evolution of AI-Driven CRM Solutions

Salesforce, Inc., headquartered in San Francisco, California, is a leading American cloud-based software company specializing in customer relationship management (CRM) software and applications. Its offerings include sales, customer service, marketing automation, e-commerce, analytics, and application development. Salesforce is at the forefront of integrating artificial intelligence (AI) into its services, enhancing its flagship SaaS CRM platform with predictive and generative AI capabilities and advanced automation features.

Salesforce Einstein: Pioneering AI in Business Applications

Salesforce Einstein is a suite of AI technologies embedded within Salesforce’s Customer Success Platform, designed to enhance productivity and client engagement. With over 60 features available across different pricing tiers, Einstein’s capabilities are categorized into machine learning (ML), natural language processing (NLP), computer vision, and automatic speech recognition. These tools empower businesses to deliver personalized and predictive customer experiences across functions such as sales and customer service. Key components include out-of-the-box AI features like sales email generation in Sales Cloud and service replies in Service Cloud, along with tools like Copilot Builder, Prompt Builder, and Model Builder within Einstein 1 Studio for custom AI development.

The Salesforce Einstein AI Platform Team: Enhancing AI Capabilities

The Salesforce Einstein AI Platform team is responsible for the ongoing development and enhancement of Einstein’s AI applications. The team focuses on advancing large language models (LLMs) to support a wide range of business applications, aiming to provide cutting-edge NLP capabilities.
By partnering with leading technology providers and leveraging open-source communities and cloud services like AWS, the team ensures Salesforce customers have access to the latest AI technologies.

Optimizing LLM Performance with Amazon SageMaker

In early 2023, the Einstein team sought a solution to host CodeGen, Salesforce’s in-house open-source LLM for code understanding and generation. CodeGen translates natural language into programming languages such as Python and is tuned in particular for the Apex programming language, which is integral to Salesforce’s CRM functionality. The team required a hosting solution that could handle a high volume of inference requests and multiple concurrent sessions while meeting strict throughput and latency requirements for EinsteinGPT for Developers, a tool that aids in code generation and review. After evaluating various hosting solutions, the team selected Amazon SageMaker for its robust GPU access, scalability, flexibility, and performance-optimization features. SageMaker’s specialized deep learning containers (DLCs), including the Large Model Inference (LMI) containers, provided a comprehensive solution for efficient LLM hosting and deployment. Key features included advanced batching strategies, efficient request routing, and access to high-end GPUs, all of which significantly enhanced the model’s performance.

Key Achievements and Learnings

The integration of SageMaker dramatically improved the performance of the CodeGen model, boosting throughput by over 6,500% and significantly reducing latency. SageMaker’s tools and resources enabled the team to optimize their models, streamline deployment, and manage resource use effectively, setting a benchmark for future projects.

Conclusion and Future Directions

Salesforce’s experience with SageMaker highlights the importance of leveraging advanced tools and strategies in AI model optimization.
The successful collaboration underscores the need for continuous innovation and adaptation in AI technologies, ensuring that Salesforce remains at the cutting edge of CRM solutions. For those interested in deploying their own LLMs on SageMaker, Salesforce’s experience serves as a valuable case study, demonstrating the platform’s capabilities in enhancing AI performance and scalability. To begin hosting your own LLMs on SageMaker, consider exploring AWS’s detailed guides and resources.
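For a rough sense of what such a deployment involves, the shape of an LMI configuration can be sketched as a minimal `serving.properties` file for SageMaker’s DJL-Serving Large Model Inference container. The values below are illustrative assumptions only (the public CodeGen2.5 checkpoint stands in for the hosted model); they are not Salesforce’s actual production configuration:

```properties
# Illustrative LMI settings, not Salesforce's real configuration.
engine=MPI
option.model_id=Salesforce/codegen25-7b-mono
option.tensor_parallel_degree=4
option.rolling_batch=lmi-dist
option.max_rolling_batch_size=64
```

Continuous (rolling) batching and tensor parallelism are the kinds of batching and GPU-utilization features the article credits for the throughput gains.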


Salesforce Research Produces INDICT

Automating and assisting in coding holds tremendous promise for speeding up and enhancing software development. Yet ensuring that these advancements yield secure and effective code presents a significant challenge. Balancing functionality with safety is crucial, especially given the potential for malicious exploitation of generated code.

In practical applications, large language models (LLMs) often struggle with ambiguous or adversarial instructions, sometimes leading to unintended security vulnerabilities or facilitating harmful attacks. This isn’t merely theoretical: empirical studies, such as those on GitHub Copilot, have found that a substantial portion of generated programs (about 40%) contained vulnerabilities. Addressing these risks is vital for unlocking the full potential of LLMs in coding while safeguarding against potential threats. Current strategies to mitigate these risks include fine-tuning LLMs with safety-focused datasets and implementing rule-based detectors to identify insecure code patterns. However, fine-tuning alone may not suffice against sophisticated attack prompts, and creating high-quality safety-related data can be resource-intensive. Meanwhile, rule-based systems may not cover all vulnerability scenarios, leaving gaps that could be exploited.

To address these challenges, researchers at Salesforce Research have introduced the INDICT framework. INDICT employs a novel approach involving dual critics, one focused on safety and the other on helpfulness, to enhance the quality of LLM-generated code. The framework facilitates internal dialogues between the critics, leveraging external knowledge sources such as code snippets and web searches to provide informed critiques and iterative feedback. INDICT operates through two key stages: preemptive and post-hoc feedback.
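The dual-critic loop can be sketched in miniature. The stub functions below stand in for LLM calls and external knowledge retrieval, so this shows only the control flow of a critic-guided refinement loop, not the INDICT implementation itself:

```python
def dual_critic_refine(generate, safety_critic, helpful_critic, task, rounds=2):
    """Toy critic-guided loop: each round, two critics comment on the
    current code and the generator revises using their combined feedback."""
    code = generate(task, feedback="")
    for _ in range(rounds):
        feedback = safety_critic(code) + " " + helpful_critic(code)
        code = generate(task, feedback=feedback)
    return code

# Stubs standing in for a real LLM generator and LLM critics.
def generate(task, feedback):
    code = f"# task: {task}\npassword = input()"
    if "plaintext" in feedback:
        code += "  # TODO: hash before storing"
    return code

def safety_critic(code):
    return "Avoid handling the password in plaintext." if "password" in code else "Looks safe."

def helpful_critic(code):
    return "Code addresses the task."

result = dual_critic_refine(generate, safety_critic, helpful_critic, "read a password")
print(result)
```

Here the safety critic’s plaintext-password warning makes it into the next generation round, which is the core mechanism INDICT scales up with real LLM critics and retrieved knowledge.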
In the preemptive stage, the safety critic assesses potential risks during code generation, while the helpfulness critic ensures alignment with task requirements; external knowledge sources enrich their evaluations. In the post-hoc stage, after code execution, both critics review the outcomes to refine future outputs, ensuring continuous improvement.

Evaluation of INDICT across eight diverse tasks and programming languages demonstrated substantial enhancements in both safety and helpfulness metrics. The framework achieved a 10% absolute improvement in overall code quality. For instance, on the CyberSecEval-1 benchmark, INDICT enhanced code safety by up to 30%, with over 90% of outputs deemed secure. The helpfulness metric also showed significant gains, surpassing state-of-the-art baselines by up to 70%.

INDICT’s success lies in its ability to provide detailed, context-aware critiques that guide LLMs toward generating more secure and functional code. By integrating safety and helpfulness feedback, the framework sets new standards for responsible AI in coding, addressing critical concerns about functionality and security in automated software development.


Salesforce API Gen

Function-calling agent models, a significant advancement within large language models (LLMs), require high-quality, diverse, and verifiable datasets. These models interpret natural language instructions to execute API calls, which are crucial for real-time interactions with various digital services. However, existing datasets often lack comprehensive verification and diversity, resulting in inaccuracies and inefficiencies. Overcoming these challenges is critical for deploying function-calling agents reliably in real-world applications, such as retrieving stock market data or managing social media interactions.

Current approaches to training these agents rely on static datasets that lack thorough verification, hampering adaptability and performance when models encounter new or unseen APIs. For example, models trained on restaurant-booking APIs may struggle with tasks like stock market data retrieval due to insufficient relevant training data. To address these limitations, researchers from Salesforce AI Research propose APIGen, an automated pipeline designed to generate diverse and verifiable function-calling datasets. APIGen integrates a multi-stage verification process to ensure data reliability and correctness: format checking, actual function execution, and semantic verification, rigorously vetting each data point to produce high-quality datasets.

APIGen begins by sampling APIs and query-answer pairs from a library and formatting them into a standardized JSON format. The pipeline then progresses through a series of verification stages: format checking to validate the JSON structure, function-call execution to verify operational correctness, and semantic checking to align function calls, execution results, and query objectives.
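A toy version of that three-stage filter might look like the following. The field names and the exact-match semantic check are simplifications invented for this sketch; APIGen’s real semantic stage is far more sophisticated than comparing against a stored expected value:

```python
import json

def format_check(raw):
    """Stage 1: the sample must be valid JSON with the expected fields."""
    try:
        sample = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return sample if {"function", "arguments", "expected"} <= sample.keys() else None

def execution_check(sample, registry):
    """Stage 2: the function call must actually run without raising."""
    fn = registry.get(sample["function"])
    if fn is None:
        return None
    try:
        return fn(**sample["arguments"])
    except Exception:
        return None

def semantic_check(sample, result):
    """Stage 3 (simplified): the result must match the query's intent."""
    return result == sample["expected"]

registry = {"add": lambda a, b: a + b}
raw = '{"function": "add", "arguments": {"a": 2, "b": 3}, "expected": 5}'

sample = format_check(raw)
result = execution_check(sample, registry) if sample else None
keep = sample is not None and result is not None and semantic_check(sample, result)
print(keep)
```

Only samples that survive all three stages are kept, which is the pipeline property that makes the resulting dataset verifiable.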
This meticulous process results in a comprehensive dataset comprising 60,000 entries, covering 3,673 APIs across 21 categories, accessible via Hugging Face. The datasets generated by APIGen significantly enhance model performance, achieving state-of-the-art results on the Berkeley Function-Calling Benchmark. Models trained on these datasets outperform multiple GPT-4 models, demonstrating substantial improvements in accuracy and efficiency. For instance, a model with 7 billion parameters achieves an accuracy of 87.5%, surpassing previous benchmarks by a notable margin. These outcomes underscore the robustness and reliability of APIGen-generated datasets in advancing the capabilities of function-calling agents.

In conclusion, APIGen presents a novel framework for generating high-quality, diverse datasets for function-calling agents, addressing critical challenges in AI research. Its multi-stage verification process ensures data reliability, empowering even smaller models to achieve competitive results. APIGen opens avenues for developing efficient and powerful language models, underscoring the pivotal role of high-quality data in AI advancement.


Used YouTube to Train AI

As reported by siliconANGLE’s Duncan Riley, a new report released today reveals that companies such as Anthropic PBC, Nvidia Corp., Apple Inc., and Salesforce Inc. have used subtitles from YouTube videos to train their AI services without obtaining permission. This raises significant ethical questions about the use of publicly available material without consent.

According to Proof News, these companies allegedly utilized subtitles from 173,536 YouTube videos sourced from over 48,000 channels to enhance their AI models. Rather than scraping the content themselves, Anthropic, Nvidia, Apple, and Salesforce reportedly used a dataset provided by EleutherAI, a nonprofit AI research organization. EleutherAI, founded in 2020, focuses on the interpretability and alignment of large AI models. The organization aims to democratize access to advanced AI technologies by developing and releasing open-source AI models like GPT-Neo and GPT-J. EleutherAI also advocates for open-science norms in natural language processing, promoting transparency and ethical AI development.

The dataset in question, known as “YouTube Subtitles,” includes transcripts from educational and online-learning channels, as well as several media outlets and YouTube personalities. Notable YouTubers whose transcripts are included in the dataset are MrBeast, Marques Brownlee, PewDiePie, and left-wing political commentator David Pakman. Some creators whose content was used are outraged. Pakman, for example, argues that using his transcripts jeopardizes his livelihood and that of his staff. David Wiskus, CEO of the streaming service Nebula, has even called the use of the data “theft.” Although the data is publicly accessible, the controversy centers on the fact that large language models are being trained on it. This situation echoes recent legal actions regarding the use of publicly available data to train AI models. For instance, Microsoft Corp.
and OpenAI were sued in November over their use of nonfiction authors’ works for AI training. The class-action lawsuit, led by a New York Times reporter, claimed that OpenAI scraped the content of hundreds of thousands of nonfiction books to develop its AI models. Additionally, The New York Times accused OpenAI, Google LLC, and Meta Platforms Inc. in April of skirting legal boundaries in their use of AI training data.

While the legality of using such data for AI training remains a gray area, it has yet to be extensively tested in court. Should a case arise, the key issue will likely be whether publicly stated facts, including utterances, can be copyrighted. Relevant U.S. case law includes Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991), and International News Service v. Associated Press (1918). In both cases, the U.S. Supreme Court ruled that facts cannot be copyrighted.


Einstein Service Agent

Introducing Agentforce Service Agent: Salesforce’s Autonomous AI to Transform Chatbot Experiences

Accelerate case resolutions with an intelligent, conversational interface that uses natural language and is grounded in trusted customer and business data. Deploy in minutes with ready-made templates, Salesforce components, and a large language model (LLM) to autonomously engage customers across any channel, 24/7. Establish clear privacy and security guardrails to ensure trusted responses, and escalate complex cases to human agents as needed.

Editor’s Note: Einstein Service Agent is now known as Agentforce Service Agent.

Salesforce has launched Agentforce Service Agent, the company’s first fully autonomous AI agent, set to redefine customer service. Unlike traditional chatbots, which rely on preprogrammed responses and lack contextual understanding, Agentforce Service Agent is dynamic and capable of independently addressing a wide range of service issues, enhancing customer service efficiency. Built on the Einstein 1 Platform, Agentforce Service Agent uses large language models (LLMs) to analyze the context of customer messages and autonomously determine the appropriate actions. Using generative AI, it creates conversational responses grounded in trusted company data, such as Salesforce CRM data, and aligns them with the brand’s voice and tone. This reduces the burden of routine queries, allowing human agents to focus on more complex, high-value tasks. Customers, in turn, receive faster, more accurate responses without waiting for human intervention. Available 24/7, Agentforce Service Agent communicates naturally across self-service portals and messaging channels, performing tasks proactively while adhering to the company’s defined guardrails. When an issue requires human escalation, the transition is seamless, ensuring a smooth handoff.

Ease of Setup and Pilot Launch

Currently in pilot, Agentforce Service Agent will be generally available later this year.
It can be deployed in minutes using pre-built templates, low-code workflows, and user-friendly interfaces. “Salesforce is shaping the future where human and digital agents collaborate to elevate the customer experience,” said Kishan Chetan, General Manager of Service Cloud. “Agentforce Service Agent, our first fully autonomous AI agent, will revolutionize service teams by not only completing tasks autonomously but also augmenting human productivity. We are reimagining customer service for the AI era.”

Why It Matters

While most companies use chatbots today, 81% of customers say they would still prefer to speak to a live agent due to unsatisfactory chatbot experiences. However, 61% of customers prefer self-service options for simpler issues, indicating a need for more intelligent, autonomous agents like Agentforce Service Agent that are powered by generative AI.

The Future of AI-Driven Customer Service

Agentforce Service Agent can hold fluid, intelligent conversations with customers by analyzing the full context of inquiries. For instance, a customer reaching out to an online retailer for a return can have their issue fully processed by Agentforce, which autonomously handles tasks such as accessing purchase history, checking inventory, and sending follow-up satisfaction surveys. With trusted business data from Salesforce’s Data Cloud, Agentforce generates accurate and personalized responses. For example, a telecommunications customer looking for a new phone will receive tailored recommendations based on data such as purchase history and service interactions.

Advanced Guardrails and Quick Setup

Agentforce Service Agent leverages the Einstein Trust Layer to ensure data privacy and security, including the masking of personally identifiable information (PII).
It can be quickly activated with out-of-the-box templates and pre-existing Salesforce components, allowing companies to equip it with customized skills faster using natural language instructions.

Multimodal Innovation Across Channels

Agentforce Service Agent supports cross-channel communication, including messaging apps like WhatsApp, Facebook Messenger, and SMS, as well as self-service portals. It even understands and responds to images, video, and audio. For example, if a customer sends a photo of an issue, Agentforce can analyze it to provide troubleshooting steps or recommend replacement products.

Seamless Handoffs to Human Agents

If a customer’s inquiry requires human attention, Agentforce seamlessly transfers the conversation to a human agent who will have full context, avoiding the need for the customer to repeat information. For example, a life insurance company might program Agentforce to escalate conversations if a customer mentions sensitive topics like loss or death. Similarly, if a customer requests a return outside of the company’s policy window, Agentforce can recommend that a human agent make an exception.

Customer Perspective

“Agentforce Service Agent’s speed and accuracy in handling inquiries is promising. It responds like a human, adhering to our diverse, country-specific guidelines. I see it becoming a key part of our service team, freeing human agents to handle higher-value issues.” — George Pokorny, SVP of Global Customer Success, OpenTable

Content updated October 2024.

AI Confidence Scores

In this insight, the focus is on exploring the use of confidence scores available through the OpenAI API. The first section explains these scores and their significance using a custom chat interface. The second section demonstrates how to apply confidence scores programmatically in code.

Understanding Confidence Scores

To begin, it’s important to understand what an LLM (Large Language Model) is doing for each token in its response: it computes a probability distribution over every possible next token and then selects one, often by sampling rather than always taking the most likely option. However, it’s essential to clarify that the term “probabilities” here is somewhat misleading. While mathematically they qualify as a probability distribution (the values add up to one), they don’t necessarily reflect true confidence or likelihood in the way we might expect, so these values should be treated with caution. A useful way to think about them is as “confidence” scores, though it’s crucial to remember that, much like humans, LLMs can be confident and still be wrong. The values themselves are not inherently meaningful without additional context or validation.

Example: Using a Chat Interface

These confidence scores can be explored in a custom chat interface that displays the probability the model assigned to each token it produced. In one case, when asked to “pick a number,” the LLM chose the word “choose” despite it having only a 21% chance of being selected. This demonstrates that LLMs don’t always pick the most likely token unless configured to do so. The interface also shows how the model might struggle with questions that have no clear answer, offering insights into detecting possible hallucinations. For example, when asked to list famous people with an interpunct in their name, the model shows low confidence in its guesses. This behavior indicates uncertainty and can be an indicator of a forthcoming incorrect response.

Hallucinations and Confidence Scores

The discussion also touches on the question of whether low confidence scores can help detect hallucinations—cases where the model generates false information.
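To make the sampling behavior concrete, here is a minimal sketch of how a model turns raw scores into a probability distribution and then samples from it. The candidate tokens and logit values are invented for illustration and are not taken from any real model:

```python
import math
import random

# Invented logits for three candidate next tokens (illustration only).
logits = {"choose": 1.0, "pick": 2.3, "select": 1.9}

# Softmax: exponentiate and normalize so the values sum to one.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}
print({tok: round(p, 2) for tok, p in probs.items()})

# Sampling from the distribution (rather than taking the argmax) means a
# lower-probability token like "choose" is still selected some of the time.
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(token in logits)  # always True; which token varies run to run
```

Even though “pick” is the most likely token here, roughly one run in seven would emit “choose”, which matches the 21% example above in spirit.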
While low confidence often correlates with potential hallucinations, it’s not a foolproof indicator. Some hallucinations may come with high confidence, while low-confidence tokens might simply reflect natural variability in language. For instance, when asked about the capital of Kazakhstan, the model shows uncertainty due to the historical name changes between Astana and Nur-Sultan. The confidence scores reflect this inconsistency, highlighting how the model can still select an answer despite having conflicting information.

Using Confidence Scores in Code

The next part of the discussion covers how to leverage confidence scores programmatically. For simple yes/no questions, it’s possible to compress the response into a single token and calculate the confidence score using OpenAI’s API. The key settings limit the response to a single token and request log probabilities for it. With this setup, one can extract the model’s confidence in its response, converting log probabilities back into regular probabilities using math.exp.

Expanding to Real-World Applications

The post extends this concept to more complex scenarios, such as verifying whether an image of a driver’s license is valid. By analyzing the model’s confidence in its answer, developers can determine when to flag responses for human review based on predefined confidence thresholds. This technique can also be applied to multiple-choice questions, allowing developers to extract not only the top token but also the top 10 options, along with their confidence scores.

Conclusion

While confidence scores from LLMs aren’t a perfect solution for detecting accuracy or truthfulness, they can provide useful insights in certain scenarios. With careful application and evaluation, developers can make informed decisions about when to trust the model’s responses and when to intervene. The final takeaway is that confidence scores, while not foolproof, can play a role in improving the reliability of LLM outputs—especially when combined with thoughtful design and ongoing calibration.
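As a sketch of the programmatic approach, the helpers below convert a log probability into a regular probability with math.exp and apply a review threshold. The commented-out API call illustrates the kind of settings described (a single-token answer with log probabilities requested); the model name, prompt, and 0.9 threshold are assumptions for the example, not values from the original post:

```python
import math

# Hypothetical call with the OpenAI Python client (not executed here):
#
#   response = client.chat.completions.create(
#       model="gpt-4o-mini",                      # assumed model
#       messages=[{"role": "user",
#                  "content": "Answer yes or no: is this license valid?"}],
#       max_tokens=1,       # compress the answer into a single token
#       logprobs=True,      # return the log probability of that token
#       top_logprobs=10,    # also return the top 10 alternatives
#   )
#   answer = response.choices[0].logprobs.content[0]
#   # answer.token -> "yes"/"no", answer.logprob -> e.g. -0.105

def logprob_to_confidence(logprob: float) -> float:
    """Convert a log probability back into a regular probability."""
    return math.exp(logprob)

def needs_human_review(logprob: float, threshold: float = 0.9) -> bool:
    """Flag low-confidence answers for review (threshold is illustrative)."""
    return logprob_to_confidence(logprob) < threshold

print(round(logprob_to_confidence(-0.105), 2))  # a logprob of -0.105 is ~0.9
print(needs_human_review(-0.105))
```

The same pattern extends to multiple-choice questions: iterating over the returned top alternatives yields a confidence score for each candidate answer, not just the one the model emitted.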

Generative AI for Tableau

Tableau’s first generative AI assistant is now generally available. Generative AI for Tableau brings data prep to the masses.

Earlier this month, Tableau launched its second platform update of 2024, announcing that its first two GenAI assistants would be available by the end of July, with a third set for release in August. The first of these, Einstein Copilot for Tableau Prep, became generally available on July 10.

Tableau initially unveiled its plans to develop generative AI capabilities in May 2023 with the introduction of Tableau Pulse and Tableau GPT. Pulse, an insight generator that monitors data for metric changes and uses natural language to alert users, became generally available in February. Tableau GPT, now renamed Einstein Copilot for Tableau, moved into beta testing in April. Following Einstein Copilot for Tableau Prep, Einstein Copilot for Tableau Catalog is expected to be generally available before the end of July. Einstein Copilot for Tableau Web Authoring is set to follow by the end of August.

With these launches, Tableau joins other data management and analytics vendors like AWS, Domo, Microsoft, and MicroStrategy, which have already made generative AI assistants generally available. Other companies, such as Qlik, DBT Labs, and Alteryx, have announced similar plans but have not yet moved their products out of preview. Tableau’s generative AI capabilities are comparable to those of its competitors, according to Doug Henschen, an analyst at Constellation Research. In some areas, such as data cataloging, Tableau’s offerings are even more advanced. “Tableau is going GA later than some of its competitors. But capabilities are pretty much in line with or more extensive than what you’re seeing from others,” Henschen said.

In addition to the generative AI assistants, Tableau 2024.2 includes features such as embedding Pulse in applications. Based in Seattle and a subsidiary of Salesforce, Tableau has long been a prominent analytics vendor.
Its first 2024 platform update highlighted the launch of Pulse, while the final 2023 update introduced new embedded analytics capabilities.

Generative AI assistants are proliferating due to their potential to enable non-technical workers to work with data and increase efficiency for data experts. Historically, the complexity of analytics platforms, requiring coding and data literacy, has limited their widespread adoption. Studies indicate that only about one-quarter of employees regularly work with data. Vendors have attempted to overcome this barrier by introducing natural language processing (NLP) and low-code/no-code features. However, NLP features have been limited by small vocabularies requiring specific business phrasing, while low-code/no-code features only support basic tasks.

Generative AI has the potential to change this dynamic. Large language models like ChatGPT and Google Gemini offer extensive vocabularies and can interpret user intent, enabling true natural language interactions. This makes data exploration and analysis accessible to non-technical users and reduces coding requirements for data experts. In response to advancements in generative AI, many data management and analytics vendors, including Tableau, have made it a focal point of their product development. Tech giants like AWS, Google, and Microsoft, as well as specialized vendors, have heavily invested in generative AI.

Einstein Copilot for Tableau Prep, now generally available, allows users to describe calculations in natural language, which the tool interprets to create formulas for calculated fields in Tableau Prep. Previously, this required expertise in objects, fields, functions, and limitations. Einstein Copilot for Tableau Catalog, set for release later this month, will enable users to add descriptions for data sources, workbooks, and tables with one click.
In August, Einstein Copilot for Tableau Web Authoring will allow users to explore data in natural language directly from Tableau Cloud Web Authoring, producing visualizations, formulating calculations, and suggesting follow-up questions.

Tableau’s generative AI assistants are designed to enhance efficiency and productivity for both experts and generalists. The assistants streamline complex data modeling and predictive analysis, automate routine data prep tasks, and provide user-friendly interfaces for data visualization and analysis. “Whether for an expert or someone just getting started, the goal of Einstein Copilot is to boost efficiency and productivity,” said Mike Leone, an analyst at TechTarget’s Enterprise Strategy Group. The planned generative AI assistants for different parts of Tableau’s platform offer unique value in various stages of the data and AI lifecycle, according to Leone.

Doug Henschen noted that the generative AI assistants for Tableau Web Authoring and Tableau Prep are similar to those being introduced by other vendors. However, the addition of a generative AI assistant for data cataloging represents a unique differentiation for Tableau. “Einstein Copilot for Tableau Catalog is unique to Tableau among analytics and BI vendors,” Henschen said. “But it’s similar to GenAI implementations being done by a few data catalog vendors.”

Beyond the generative AI assistants, Tableau’s latest update also includes non-Copilot capabilities such as embeddable Pulse and multi-fact relationships. Making Pulse embeddable is particularly significant: extending generative AI capabilities to everyday work applications will make them more effective. “Embedding Pulse insights within day-to-day applications promises to open up new possibilities for making insights actionable for business users,” Henschen said. Multi-fact relationships are also noteworthy, enabling users to relate datasets with shared dimensions and informing applications that require large amounts of high-quality data.
“Multi-fact relationships are a fascinating area where Tableau is really just getting started,” Leone said. “Providing ways to improve accuracy, insights, and context goes a long way in building trust in GenAI and reducing hallucinations.”

While Tableau has launched its first generative AI assistant and will soon release more, the vendor has not yet disclosed pricing for the Copilots and related features. The generative AI assistants are available through a bundle named Tableau+, a premium Tableau Cloud offering introduced in June. Beyond the generative AI assistants, Tableau+ includes advanced management capabilities, simplified data governance, data discovery features, and integration with Salesforce Data Cloud. Generative AI is compute-intensive and costly, so it’s not surprising that Tableau customers will have to pay extra for these capabilities. Some vendors are offering generative AI capabilities for free to attract new users, but Henschen believes costs will eventually be incurred. “Customers will want to understand the cost implications of adding these new capabilities,” Henschen said.

MuleSoft Composability

MuleSoft: Enabling AI Integration with Composability Solutions

MuleSoft, a subsidiary of Salesforce, is enhancing its portfolio with new capabilities to help organizations build AI services that serve as the building blocks for more complex applications. The company announced a new AI-powered composability solution designed to assist organizations in constructing discrete AI services to form sophisticated systems and applications.

The Power of APIs in AI

“We believe the world of AI is really the world of APIs,” said Param Kahlon, Salesforce EVP and GM of Automation and Integration. “Accessing AI in the enterprise fundamentally involves the ability to call a model.” This applies whether the AI model is an internal large language model (LLM) or a foundational model built by a third party. “Using an LLM within the company or federating requests across multiple LLMs through LangChain involves API calls,” Kahlon added. “These API calls need to be managed and governed.”

AI Integration with MuleSoft

MuleSoft’s goal is to provide a platform that integrates AI, especially generative AI, with business processes. For instance, MuleSoft aims to manage API calls to external LLMs using its API management tools and enable APIs to act as actions for copilot conversational agents in the enterprise. This allows agents to execute backend actions using natural language, such as granting customer credit or escalating orders. The MuleSoft solution lets you connect data, automate workflows, and build an AI-ready foundation in a single unified platform. The pulse of innovation never stops, and neither does the pressure to serve employees and customers well. In fact, 84% of IT leaders say IT needs to step up its game and better address shifting customer expectations.
The MuleSoft Composability Solution

The MuleSoft composability solution comprises three main pillars:

- Anypoint Platform: used to define, design, build, and deploy APIs.
- API Management: manages the deployment of APIs throughout their lifecycle, whether built with Anypoint or other technologies.
- Automation: includes MuleSoft RPA and MuleSoft Intelligent Document Processing (IDP).

While these components are part of MuleSoft’s existing portfolio, the company introduced new features, such as support for AsyncAPI, to facilitate the adoption of event-driven architectures (EDAs).

AsyncAPI Support and Real-Time Communication

Currently in open beta, AsyncAPI support will be generally available later this year. It will enable systems to add real-time communication for processes with fluctuating data sets, like predictive maintenance, dynamic pricing, or fraud detection. For example, a bank could use AI models for fraud detection by analyzing transactional data and user behavior. This model can be transformed into a service callable by various applications.

Enhancing Security and Governance

Security and governance are crucial components of the composability solution. When applications make API calls to LLMs and other external models, it’s vital to ensure that valuable data is encrypted and/or masked. MuleSoft’s API gateways, Anypoint Flex Gateway and Mule Gateway, can act as LLM gateways with custom policies to secure and manage APIs. For example, a financial institution could use an API gateway to implement a custom policy checking for sensitive customer information before sharing data with a third-party LLM.

To increase internal collaboration and efficiency, IT leaders are leaning into automation and AI – but these initiatives are not here to replace the human touch, rather to liberate human potential. These technologies free up IT experts to dive into the more “human” aspects of their roles: innovation, communication, and collaboration.
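To illustrate what “transformed into a service callable by various applications” can look like, the sketch below wraps a toy fraud-scoring model in a tiny HTTP endpoint using only the Python standard library. The scoring rules, field names, and port are invented stand-ins, not MuleSoft’s or any bank’s actual implementation:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def fraud_score(txn: dict) -> float:
    """Toy model: large transfers from new accounts look risky."""
    score = 0.0
    if txn.get("amount", 0) > 10_000:
        score += 0.6
    if txn.get("account_age_days", 365) < 30:
        score += 0.4
    return min(score, 1.0)

class FraudHandler(BaseHTTPRequestHandler):
    """POST a JSON transaction, get back a JSON fraud score."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = {"fraud_score": fraud_score(json.loads(body))}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

# To expose the model, run:
#   HTTPServer(("localhost", 8000), FraudHandler).serve_forever()
```

In a MuleSoft-style deployment, an API gateway would sit in front of an endpoint like this to handle authentication, masking, and the custom policies described above.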
Picture it as IT superheroes, if you will, donning capes of automation.

MuleSoft is at the forefront of enabling AI integration and innovation in enterprise environments. By breaking down data silos and fostering interoperability, MuleSoft’s composability solution enhances the efficiency and effectiveness of AI applications, ensuring secure and seamless integration across business processes. MuleSoft’s goal is to empower everyone with AI. Salesforce announced AI-powered enhancements to its MuleSoft automation, integration, and API management solutions that help business users and developers improve productivity, simplify workflows, and accelerate time to value.

MuleSoft’s Intelligent Document Processing (IDP) helps teams quickly extract and organize data from diverse document formats, including PDFs and images. Unlike other automation solutions, MuleSoft’s IDP is natively integrated into Salesforce Flow, which provides customers with an end-to-end automation experience. Additionally, to speed up project delivery, MuleSoft has embedded Einstein, Salesforce’s predictive and generative AI assistant, in its pro-code and low-code tools. This empowers users to build integrations and automations using natural language prompts directly in IDP, Flow Builder, and Anypoint Code Builder.

AI Impact on Workforce

About a month ago, Jon Stewart did a segment on AI causing people to lose their jobs. He spoke against it. Well, his words were against it, but deep down, he’s for it—and so are you, whether you realize it or not. The AI impact on the workforce is real, but is it good or bad?

The fact that Jon Stewart can go on TV to discuss cutting-edge technology like large language models is because previous technology displaced jobs. Lots of jobs. What probably felt like most jobs. Remember, for most of human history, 80–90% of people were farmers. The few who weren’t had professions like blacksmithing, tailoring, or other essential trades. They didn’t have TV personalities, TV executives, or even TVs. Had you been born hundreds of years ago, chances are you would have been a farmer, too. You might have died from an infection. But as scientific and technological progress reduced the need for farmers, it also gave us doctors and scientists who discovered, manufactured, and distributed cures for diseases like the plague. Innovation begets innovation. Generative AI is just the current state of the art, leading the next cycle of change.

The Core Issue

This doesn’t mean everything will go smoothly. While many tech CEOs tout the positive impacts of AI, these benefits will take time. Consider the automobile: Carl Benz patented the motorized vehicle in 1886. Fifteen years later, there were only 8,000 cars in the US. By 1910, there were 500,000 cars. That’s 25 years, and even then, only about 0.5% of people in the US had a car. The first stop sign wasn’t used until 1915, giving society time to establish formal regulations and norms as the technology spread.

Lessons from History

Social media, however, saw negligible usage until 2008, when Facebook began to grow rapidly. In just four years, users soared from a few million to a billion. Social media has been linked to cyberbullying, self-esteem issues, depression, and misinformation.
The risks became apparent only after widespread adoption, unlike with cars, where risks were identified early and mitigated with regulations like stop signs and driver’s licenses. Nuclear weapons, developed in 1945, also illustrate this point. Initially, only a few countries possessed them, understanding the catastrophic risks and exercising restraint. However, if a terrorist cell obtained such weapons, the consequences could be dire. Similarly, if AI tools are misused, the outcomes could be harmful. Just this morning, a news channel was covering an AI bot that was making robocalls. Can you imagine the increase in telemarketing calls that could create? How about during an election year?

AI and Its Rapid Adoption

AI isn’t a nuclear weapon, but it is a powerful tool that can do harm. Unlike past technologies that took years or decades to adopt, AI adoption is happening much faster. We lack comprehensive safety warnings for AI because we don’t fully understand it yet. If in 1900, 50% of Americans had suddenly gained access to cars without regulations, the result would have been chaos. Similarly, rapid AI adoption without understanding its risks can lead to unintended consequences. The adoption rate, impact radius (the scope of influence), and learning curve (how quickly we understand its effects) are crucial. If the adoption rate surpasses our ability to understand and manage its impact, we face excessive risk.

Proceeding with Caution

Innovation should not be stifled, but it must be approached with caution. Consider historical examples like X-rays, which were once used in shoe stores without understanding their harmful effects, or the Industrial Revolution, which caused significant environmental degradation. Early regulation could have mitigated many negative impacts. AI is transformative, but until we fully understand its risks, we must proceed cautiously. The potential for harm isn’t a reason to avoid it altogether.
Like cars, which we accept despite their risks because we understand and manage them, we need to learn about AI’s risks. However, we don’t need to rush into widespread adoption without safeguards. It’s easier to loosen restrictions later than to impose them after damage has been done. Let’s innovate, but with foresight. Regulation doesn’t kill innovation; it can inspire it. We should learn from the past and ensure AI development is responsible and measured. We study history to avoid repeating mistakes—let’s apply that wisdom to AI.

Content updated July 2024.

DPD Salesforce AI Enhancements

DPD’s AI Integration: Enhancing Customer and Employee Experience

DPD has ambitious plans to integrate AI throughout its Salesforce platform, aiming to automate tasks and significantly enhance the experiences of both customers and employees. Adam Hooper, Head of Central Platforms at DPD, explains that with over 400 million parcels delivered annually, maintaining robust customer relationships is crucial. To this end, DPD leverages a range of Salesforce technologies, including Service Cloud, Sales Cloud, Marketing Cloud, and MuleSoft. Salesforce’s latest update on DPD highlights AI-powered customer service, financial and operational efficiency, and targeted marketing.

Spreadsheets to Salesforce

At the Salesforce World Tour event in London, Ben Pyne, Salesforce Platform Manager at DPD, elaborated on their current usage and future AI plans. Pyne’s team acts as internal consultants to optimize organizational workflows. As he explains: “My role is essentially to get people off spreadsheets and onto Salesforce!” He noted that about 40 departments and teams within DPD use Salesforce, far beyond the typical Sales and CRM applications. Custom applications within Salesforce personalize and enhance user experiences by focusing on relevant information. Using tools like Prompt Builder, Pyne’s team recently developed a project management app within Salesforce, streamlining tasks like writing acceptance criteria and user stories. Pyne emphasized: “I want our guys to focus on designing and building, less on the admin.”

AI Use Cases

When considering AI and generative AI, DPD sees significant potential to reduce operational tasks. Pyne highlighted case summarization as an obvious application, given the millions of customer service cases created each year.

Rolling Out Generative AI

DPD adopts a cautious approach to rolling out new technologies like generative AI. Pyne explained: “It’s starting small, finding the right teams to be able to do it.
But fundamentally, starting somewhere and making slow progressions into it to ensure we don’t scare everybody away.”

Ensuring Security and Trust

Security and trust are paramount for DPD. Pyne noted that their robust IT security team scrutinizes every implementation. Fortunately, Salesforce’s security measures, such as data anonymization and preventing LLMs (Large Language Models) from learning from their data, provide peace of mind. Pyne concluded: “We can focus on what we’re good at and not worry about the rest because Salesforce has thought of everything for us.”
