
Einstein Code Generation and Amazon SageMaker

Salesforce and the Evolution of AI-Driven CRM Solutions

Salesforce, Inc., headquartered in San Francisco, California, is a leading American cloud-based software company specializing in customer relationship management (CRM) software and applications. Its offerings span sales, customer service, marketing automation, e-commerce, analytics, and application development. Salesforce is at the forefront of integrating artificial intelligence (AI) into its services, enhancing its flagship SaaS CRM platform with predictive and generative AI capabilities and advanced automation features.

Salesforce Einstein: Pioneering AI in Business Applications

Salesforce Einstein is a suite of AI technologies embedded within Salesforce's Customer Success Platform, designed to enhance productivity and client engagement. With over 60 features available across different pricing tiers, Einstein's capabilities span machine learning (ML), natural language processing (NLP), computer vision, and automatic speech recognition. These tools help businesses deliver personalized and predictive customer experiences across functions such as sales and customer service. Key components include out-of-the-box AI features like sales email generation in Sales Cloud and service replies in Service Cloud, along with tools like Copilot Builder, Prompt Builder, and Model Builder within Einstein 1 Studio for custom AI development.

The Salesforce Einstein AI Platform Team: Enhancing AI Capabilities

The Salesforce Einstein AI Platform team is responsible for the ongoing development and enhancement of Einstein's AI applications. The team focuses on advancing large language models (LLMs) to support a wide range of business applications, aiming to provide cutting-edge NLP capabilities. By partnering with leading technology providers, open-source communities, and cloud services like AWS, the team ensures Salesforce customers have access to the latest AI technologies.

Optimizing LLM Performance with Amazon SageMaker

In early 2023, the Einstein team sought a solution to host CodeGen, Salesforce's in-house open-source LLM for code understanding and generation. CodeGen translates natural language into programming languages such as Python and is specifically tuned for Apex, the programming language integral to Salesforce's CRM functionality. The team required a hosting solution that could handle a high volume of inference requests and multiple concurrent sessions while meeting strict throughput and latency requirements for its EinsteinGPT for Developers tool, which assists with code generation and review. After evaluating various hosting solutions, the team selected Amazon SageMaker for its robust GPU access, scalability, flexibility, and performance optimization features. SageMaker's specialized deep learning containers (DLCs), including the Large Model Inference (LMI) containers, provided a comprehensive solution for efficient LLM hosting and deployment. Key features included advanced batching strategies, efficient request routing, and access to high-end GPUs, which significantly enhanced the model's performance.
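The original post does not include deployment code, but the sketch below shows roughly how an LLM such as CodeGen could be hosted on a SageMaker LMI container with the SageMaker Python SDK. The container framework string and version, the environment keys, the checkpoint name, the role ARN, and the instance type are all assumptions that vary by SDK version, region, and model size; check the SageMaker documentation before relying on any of them.

```python
# Minimal sketch: hosting a CodeGen-style LLM on a SageMaker LMI container.
# The framework/version strings, environment keys, checkpoint, role ARN, and
# instance type below are illustrative assumptions, not a verified recipe.
import sagemaker
from sagemaker import image_uris
from sagemaker.model import Model
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role ARN

# Retrieve a Large Model Inference (LMI) deep learning container image.
# The framework name and version must match a container available in your region.
image_uri = image_uris.retrieve(
    framework="djl-deepspeed", region=session.boto_region_name, version="0.25.0"
)

model = Model(
    image_uri=image_uri,
    role=role,
    env={
        "HF_MODEL_ID": "Salesforce/codegen-2B-mono",   # public CodeGen checkpoint
        "OPTION_ROLLING_BATCH": "lmi-dist",            # continuous batching
        "OPTION_MAX_ROLLING_BATCH_SIZE": "64",
        "OPTION_TENSOR_PARALLEL_DEGREE": "1",
    },
    sagemaker_session=session,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",          # choose per throughput/latency needs
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
)

print(predictor.predict({
    "inputs": "Write a Python function that returns the nth Fibonacci number.",
    "parameters": {"max_new_tokens": 128},
}))
```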
Key Achievements and Learnings

Integrating SageMaker dramatically improved the performance of the CodeGen model, boosting throughput by over 6,500% and significantly reducing latency. SageMaker's tools and resources enabled the team to optimize its models, streamline deployment, and manage resource use effectively, setting a benchmark for future projects.

Conclusion and Future Directions

Salesforce's experience with SageMaker highlights the importance of advanced tooling and optimization strategies when serving AI models. The collaboration underscores the need for continuous innovation and adaptation in AI technologies, keeping Salesforce at the cutting edge of CRM solutions. For anyone planning to deploy LLMs on SageMaker, Salesforce's experience serves as a valuable case study of the platform's capabilities for AI performance and scalability. To begin hosting your own LLMs on SageMaker, explore AWS's detailed guides and resources.


Back to the Office

Salesforce, San Francisco's largest private employer, is reportedly requiring many of its employees to return to the office. According to the San Francisco Standard, employees received a memo this week announcing a shift from remote work to hybrid and in-person arrangements starting this fall. Salesforce's headquarters is downtown on Mission Street in the Salesforce Tower.

When questioned about the memo, a Salesforce spokesperson told local news station KRON4 on Thursday, "Salesforce is a place where connection and relationships drive success. We believe being together in person deepens relationships, sparks innovation, fosters learning, and strengthens culture — ultimately, resulting in better business outcomes."

Salesforce will implement three hybrid work designations for different departments. The spokesperson added, "We have always had a hybrid approach, which provides flexibility to meet the evolving needs of the business and helps attract and retain world-class, diverse talent. Our hybrid work guidelines focus on in-person connection while recognizing the value of work away from the office."

This new in-person expectation coincides with Salesforce announcing another round of layoffs affecting hundreds of employees. The spokesperson commented, "We continuously assess whether we have the right structure in place to best serve our customers and fuel growth areas. In some cases, that leads to roles being eliminated."


Salesforce Research Produces INDICT

Automating and assisting in coding holds tremendous promise for speeding up and enhancing software development. Yet ensuring that these advancements yield secure and effective code presents a significant challenge. Balancing functionality with safety is crucial, especially given the potential for malicious exploitation of generated code.

In practical applications, large language models (LLMs) often struggle with ambiguous or adversarial instructions, sometimes leading to unintended security vulnerabilities or facilitating harmful attacks. This isn't merely theoretical; empirical studies, such as those on GitHub's Copilot, have found that a substantial portion of generated programs (about 40%) contained vulnerabilities. Addressing these risks is vital for unlocking the full potential of LLMs in coding while safeguarding against potential threats.

Current strategies to mitigate these risks include fine-tuning LLMs with safety-focused datasets and implementing rule-based detectors to identify insecure code patterns. However, fine-tuning alone may not suffice against sophisticated attack prompts, and creating high-quality safety-related data can be resource-intensive. Meanwhile, rule-based systems may not cover all vulnerability scenarios, leaving gaps that could be exploited.

To address these challenges, researchers at Salesforce Research have introduced the INDICT framework. INDICT employs two critics, one focused on safety and the other on helpfulness, to improve the quality of LLM-generated code. The framework facilitates an internal dialogue between the critics, drawing on external knowledge sources such as code snippets and web searches to provide informed critiques and iterative feedback.

INDICT operates in two stages: preemptive and post-hoc feedback. In the preemptive stage, the safety critic assesses potential risks during code generation while the helpfulness critic checks alignment with the task requirements, with external knowledge sources enriching both evaluations. In the post-hoc stage, after the code is executed, both critics review the outcome to refine future outputs, ensuring continuous improvement. A minimal code sketch of this dual-critic loop appears at the end of this post.

Evaluation of INDICT across eight diverse tasks and programming languages showed substantial improvements in both safety and helpfulness metrics. The framework achieved a 10% absolute improvement in overall code quality. In the CyberSecEval-1 benchmark, for instance, INDICT improved code safety by up to 30%, with over 90% of outputs deemed secure. The helpfulness metric also showed significant gains, surpassing state-of-the-art baselines by up to 70%.

INDICT's success lies in its ability to provide detailed, context-aware critiques that guide LLMs toward generating more secure and functional code. By integrating safety and helpfulness feedback, the framework sets new standards for responsible AI in coding, addressing critical concerns about functionality and security in automated software development.
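INDICT's implementation is not reproduced in the post; the sketch below only illustrates the two-stage, dual-critic loop described above. The generator, critics, and sandboxed runner are hypothetical stand-ins for LLM calls, external knowledge lookups, and code execution, not part of any published INDICT API.

```python
# Illustrative sketch of INDICT's dual-critic loop (preemptive + post-hoc).
# generate_code, safety_critic, helpfulness_critic, and run_sandboxed are
# hypothetical stubs standing in for LLM-backed components.
from dataclasses import dataclass

@dataclass
class Feedback:
    ok: bool
    notes: str = ""

def generate_code(task, feedback=()):
    # A real system would prompt an LLM here, conditioning on prior critiques.
    return f"# code for: {task}"

def safety_critic(task, code, execution=None):
    # Placeholder check; a real critic consults external security knowledge.
    return Feedback(ok="eval(" not in code, notes="flagged insecure pattern")

def helpfulness_critic(task, code, execution=None):
    # Placeholder check; a real critic verifies alignment with the task.
    return Feedback(ok=True)

def run_sandboxed(code):
    return "execution output (stub)"

def indict_generate(task: str, rounds: int = 3) -> str:
    """Two-stage dual-critic loop, as described for INDICT."""
    code = generate_code(task)
    # Preemptive stage: critique the draft before it is ever executed.
    for _ in range(rounds):
        safety, helpful = safety_critic(task, code), helpfulness_critic(task, code)
        if safety.ok and helpful.ok:
            break
        code = generate_code(task, feedback=(safety, helpful))
    # Post-hoc stage: execute, then let both critics review the observed outcome.
    result = run_sandboxed(code)
    safety = safety_critic(task, code, execution=result)
    helpful = helpfulness_critic(task, code, execution=result)
    if not (safety.ok and helpful.ok):
        code = generate_code(task, feedback=(safety, helpful))
    return code

print(indict_generate("parse a user-supplied JSON config safely"))
```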


Forecasting With Foundation Models

On Hugging Face, there are 20 models tagged "time series" at the time of writing. While this number is low compared to the 125,950 results for the "text-generation-inference" tag, time series forecasting with foundation models has attracted significant interest from major companies such as Amazon, IBM, and Salesforce, which have developed their own models: Chronos, TinyTimeMixer, and Moirai, respectively. Currently, one of the most popular time series models on Hugging Face is Lag-Llama, a univariate probabilistic model developed by Kashif Rasul, Arjun Ashok, and their co-authors. Open-sourced in February 2024, Lag-Llama is claimed by its authors to possess strong zero-shot generalization capabilities across various datasets and domains; once fine-tuned, they assert, it becomes the best general-purpose model of its kind.

In this insight, we share our experience fine-tuning Lag-Llama and test its capabilities against a more classical machine learning approach: an XGBoost model designed for univariate time series data. Gradient boosting algorithms like XGBoost are widely regarded as the pinnacle of classical (as opposed to deep learning) machine learning and perform exceptionally well with tabular data, so it is fitting to benchmark Lag-Llama against XGBoost to determine whether the foundation model lives up to its promises. The results, however, are not straightforward.

The data used for this exercise is a four-year series of hourly wave heights off the coast of Ribadesella, a town in the Spanish region of Asturias. The data, available from the Spanish ports authority data portal, spans June 18, 2020, to June 18, 2024. For the purposes of this study, the series is aggregated to a daily level by taking the maximum wave height recorded each day; this aggregation helps illustrate the concepts more clearly, as results become volatile at higher granularity. The target variable is the maximum height of the waves recorded each day, measured in meters.

Several reasons influenced the choice of this series. First, the Lag-Llama model was trained on some weather-related data, making this type of data slightly challenging yet manageable for the model. Second, while meteorological forecasts are typically produced with numerical weather models, statistical models can complement them, especially for long-range predictions. In the era of climate change, statistical models can provide a baseline expectation and highlight deviations from typical patterns.

The dataset is standard and requires minimal preprocessing, such as imputing a few missing values. After splitting the data into training, validation, and test sets, with the latter two covering five months each, the next step is to benchmark Lag-Llama against XGBoost on two univariate forecasting tasks: point forecasting and probabilistic forecasting. Point forecasting gives a specific prediction, while probabilistic forecasting provides a confidence interval. Lag-Llama was primarily trained for probabilistic forecasting, but point forecasts remain useful for illustrative purposes.

Forecasts involve several considerations, such as the forecast horizon, the last observations fed into the model, and how often the model is updated. This study uses a recursive multi-step forecast without updating the model, with a step size of seven days: the model produces batches of seven forecasts at a time, using the latest predictions to generate the next set without retraining.
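The post does not show code for this strategy; below is a plain NumPy/XGBoost illustration of the recursive seven-day procedure itself. The lag count, model settings, and synthetic series are placeholder assumptions (the actual study tunes these and uses Skforecast, sketched after the next paragraph).

```python
# Minimal sketch of a recursive multi-step forecast with a 7-day step size.
# Each 7-day batch is predicted one day at a time, feeding predictions back
# in as lags; the model is never retrained during the test period.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

def make_lag_matrix(y: pd.Series, n_lags: int):
    """Build a (lag-feature, target) pair; column 0 is the most recent lag."""
    X = np.column_stack([y.shift(i).to_numpy() for i in range(1, n_lags + 1)])
    mask = ~np.isnan(X).any(axis=1)
    return X[mask], y.to_numpy()[mask]

def recursive_forecast(model, history, horizon: int, n_lags: int):
    preds = []
    window = list(history[-n_lags:])          # chronological order, oldest first
    for _ in range(horizon):
        x = np.array(window[::-1]).reshape(1, -1)   # most recent lag first
        y_hat = float(model.predict(x)[0])
        preds.append(y_hat)
        window = window[1:] + [y_hat]               # feed the prediction back in
    return preds

# Toy usage with synthetic daily data standing in for the wave-height series.
rng = np.random.default_rng(0)
y = pd.Series(2 + np.sin(np.arange(1500) / 20) + rng.normal(0, 0.2, 1500))
X_train, y_train = make_lag_matrix(y[:-150], n_lags=21)
model = XGBRegressor(n_estimators=300, max_depth=4).fit(X_train, y_train)

forecast_week = recursive_forecast(model, history=y[:-150].tolist(), horizon=7, n_lags=21)
print(forecast_week)
```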
Point forecasting performance is measured using mean absolute error (MAE), while probabilistic forecasting is evaluated on its empirical coverage against a target coverage probability of 80%.

The XGBoost model is defined with Skforecast, a library that facilitates the development and testing of forecasters. The ForecasterAutoreg object is created with an XGBoost regressor, and the optimal number of lags is determined through Bayesian optimization. The resulting model uses 21 lags of the target variable, along with hyperparameters tuned by the search. The performance of the XGBoost forecaster is assessed through backtesting, which evaluates the model on a test set. The model's MAE is 0.64, meaning predictions are, on average, 64 cm off from the actual measurements; this beats a simple rule-based forecast, which has an MAE of 0.84. For probabilistic forecasting, Skforecast calculates prediction intervals using bootstrapped residuals. The intervals cover 84.67% of the test set values, slightly above the 80% target, with an interval area of 348.28.

Next, the zero-shot performance of Lag-Llama is examined. Using context lengths of 32, 64, and 128 tokens, the model's MAE ranges from 0.75 to 0.77, higher than the XGBoost forecaster's MAE. Probabilistic forecasting with Lag-Llama shows varying coverage and interval areas, with the 128-token model achieving 84.67% coverage and an interval area of 399.25, similar to XGBoost's performance. Fine-tuning Lag-Llama involves adjusting the context length and learning rate; despite trying various configurations, the fine-tuned model does not significantly outperform the zero-shot model in terms of MAE or coverage.

In conclusion, Lag-Llama's performance without training is comparable to that of an optimized traditional forecaster like XGBoost, and fine-tuning does not yield substantial improvements, suggesting that more training data might be necessary. When choosing between Lag-Llama and XGBoost, factors such as ease of use, deployment, maintenance, and inference costs should be considered, with XGBoost likely having the edge in these areas. The code used in this study is publicly available in a GitHub repository for further exploration.
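A sketch of the XGBoost baseline as described above, written against Skforecast's classic ForecasterAutoreg and backtesting_forecaster interface, might look like the following. Exact argument names and return types vary across skforecast versions, the CSV file name is a placeholder, and the interval step is simplified to a single long horizon, so treat this as illustrative rather than copy-paste ready.

```python
# Illustrative XGBoost-in-Skforecast baseline: backtested MAE plus empirical
# coverage of 80% prediction intervals from bootstrapped residuals.
import pandas as pd
from xgboost import XGBRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg
from skforecast.model_selection import backtesting_forecaster

# Daily maximum wave height, indexed by date (placeholder for the real series).
y = pd.read_csv("wave_heights_daily.csv", index_col="date", parse_dates=True)["max_height"]
test_size = 150  # roughly five months of daily observations

forecaster = ForecasterAutoreg(regressor=XGBRegressor(), lags=21)

mae, predictions = backtesting_forecaster(
    forecaster=forecaster,
    y=y,
    steps=7,                               # 7-day batches, recursive within each batch
    metric="mean_absolute_error",
    initial_train_size=len(y) - test_size,
    refit=False,                           # no retraining during the test period
)
print("Backtest MAE:", mae)

# 80% prediction intervals from bootstrapped residuals, then empirical coverage.
# (Simplified: one long horizon instead of rolling 7-day batches.)
forecaster.fit(y=y[:-test_size], store_in_sample_residuals=True)
intervals = forecaster.predict_interval(steps=test_size, interval=[10, 90], n_boot=500)
actual = y[-test_size:].to_numpy()
covered = ((actual >= intervals["lower_bound"].to_numpy()) &
           (actual <= intervals["upper_bound"].to_numpy())).mean()
print(f"Empirical coverage: {covered:.2%}")
```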


What is CrowdStrike?

Global Outage Linked to CrowdStrike: What You Need to Know

On Friday, a major global outage caused widespread disruptions, including flight cancellations, outages at hospitals and banks, and interruptions for broadcasters and businesses worldwide. Microsoft attributed the issue to a problem related to CrowdStrike, a cybersecurity and cloud technology firm.

About CrowdStrike

CrowdStrike, based in Austin, Texas, was founded in 2011 and offers a range of cybersecurity and IT tools. The company supports nearly 300 Fortune 500 firms and provides services to major companies such as Target, Salesforce, and T-Mobile.

What Happened?

The outage affected public and private sectors globally, including airlines, banks, railways, and hospitals. According to CrowdStrike's CEO, George Kurtz, the issue originated from a defect in a CrowdStrike content update for Windows hosts, not from a cyberattack. A fix has been deployed, but some Microsoft 365 apps and services may still experience issues.

Flight Disruptions

Due to the technical problems, American Airlines, United, and Delta requested a global ground stop for all of their flights on Friday morning. This led to the cancellation of at least 540 flights in the U.S. and significant delays at major airports, including Philadelphia International Airport.

Stock Market Impact

The outage affected the stock prices of both Microsoft and CrowdStrike. In premarket trading, Microsoft's stock (MSFT) dropped 2.9% to $427.70, while CrowdStrike shares (CRWD) fell nearly 19% to $279.50, according to the Wall Street Journal.

Other Effects

The outage impacted universities, hospitals, and other organizations that rely on Microsoft systems. Thousands of train services were canceled in the U.S. and Europe, and some broadcast stations went off the air. Hospitals, including Penn and Main Line Health in Philadelphia, canceled elective procedures due to technical difficulties.

Blue Screens of Death

Millions of Windows users encountered "blue screens of death" (BSOD), indicating a critical system error. The problem stemmed from the faulty CrowdStrike update, leaving many users unable to reboot their devices.

Next Steps for Users

Microsoft is rolling out an update to address the issue, and CrowdStrike advises affected users to monitor the company's customer support portal for further assistance. This incident highlights the significant impact that cybersecurity and software issues can have on global operations, underscoring the importance of robust IT solutions and rapid response strategies.


Can We Customize Manufacturing Cloud For Our Business?

Yes, Salesforce Manufacturing Cloud Can Be Customized to Meet Your Business Needs

Salesforce Manufacturing Cloud is designed to be highly customizable, allowing manufacturing organizations to tailor it to their unique business requirements. Whether it's adapting the platform to fit specific workflows, integrating with third-party systems, or enhancing reporting capabilities, Salesforce provides robust customization options to meet the specific needs of manufacturers. Here are key ways Salesforce Manufacturing Cloud can be customized:

1. Custom Data Models and Objects
Salesforce allows you to create custom objects and fields to track data beyond the standard model. This flexibility enables businesses to manage unique production metrics or product configurations seamlessly within the platform.

2. Sales Agreement Customization
Sales Agreements in Salesforce Manufacturing Cloud can be tailored to reflect your business's specific contract terms and pricing models. You can adjust agreement structures, including the customization of terms, conditions, and rebate tracking.

3. Custom Workflows and Automation
Salesforce offers tools like Flow Builder and Process Builder that allow manufacturers to automate routine tasks and create custom workflows that streamline operations.

4. Integration with Third-Party Systems
Salesforce Manufacturing Cloud can integrate seamlessly with ERP systems (such as SAP or Oracle), inventory management platforms, and IoT devices to ensure smooth data flow across departments.

5. Custom Reports and Dashboards
With Salesforce's robust reporting tools, you can create custom reports and dashboards that provide real-time insights into the key performance indicators (KPIs) relevant to your manufacturing operations.

6. Custom User Interfaces
Salesforce Lightning allows you to customize user interfaces to meet the needs of different roles within your organization, such as production managers or sales teams, ensuring users have quick access to relevant data.

Conclusion

Salesforce Manufacturing Cloud provides a wide range of customization options to suit the unique needs of your manufacturing business. Whether it's adjusting data models, automating processes, or integrating with external systems, Manufacturing Cloud can be tailored to meet your operational goals. By leveraging these customizations, manufacturers can optimize their operations, improve data accuracy, and gain real-time insights to boost efficiency. If you need help customizing Salesforce Manufacturing Cloud, Service Cloud, or Sales Cloud for your business, our Salesforce Manufacturing Cloud Services team is here to assist.


Salesforce APIGen

Function-calling agent models, a significant advancement within large language models (LLMs), face a core challenge: they require high-quality, diverse, and verifiable datasets. These models interpret natural language instructions to execute API calls, which is crucial for real-time interactions with various digital services. However, existing datasets often lack comprehensive verification and diversity, resulting in inaccuracies and inefficiencies. Overcoming these challenges is critical for deploying function-calling agents reliably in real-world applications, such as retrieving stock market data or managing social media interactions.

Current approaches to training these agents rely on static datasets that lack thorough verification, hampering adaptability and performance when the agent encounters new or unseen APIs. For example, models trained on restaurant booking APIs may struggle with tasks like stock market data retrieval because of insufficient relevant training data.

To address these limitations, researchers from Salesforce AI Research propose APIGen, an automated pipeline for generating diverse and verifiable function-calling datasets. APIGen integrates a multi-stage verification process to ensure data reliability and correctness: format checking, actual function execution, and semantic verification rigorously verify each data point to produce high-quality datasets.

APIGen begins by sampling APIs and query-answer pairs from a library and formatting them into a standardized JSON format. The pipeline then progresses through a series of verification stages: format checking to validate the JSON structure, function call execution to verify operational correctness, and semantic checking to confirm that the function calls and execution results align with the query's objective. This meticulous process results in a comprehensive dataset of 60,000 entries covering 3,673 APIs across 21 categories, accessible via Hugging Face. A sketch of what such a verification pipeline can look like is given at the end of this post.

The datasets generated by APIGen significantly improve model performance, achieving state-of-the-art results on the Berkeley Function-Calling Benchmark. Models trained on these datasets outperform multiple GPT-4 models, demonstrating substantial gains in accuracy and efficiency. For instance, a model with 7 billion parameters achieves an accuracy of 87.5%, surpassing previous benchmarks by a notable margin. These outcomes underscore the robustness and reliability of APIGen-generated datasets in advancing the capabilities of function-calling agents.

In conclusion, APIGen presents a novel framework for generating high-quality, diverse datasets for function-calling agents, addressing critical challenges in AI research. Its multi-stage verification process ensures data reliability, empowering even smaller models to achieve competitive results. APIGen opens avenues for developing efficient and powerful language models, emphasizing the pivotal role of high-quality data in AI advancements.
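The released APIGen code is not reproduced here; the sketch below only illustrates the three verification stages the post describes (format check, execution check, semantic check). The `API_REGISTRY` stub and the trivial `semantic_check` are hypothetical stand-ins for real executable APIs and an LLM-based judge.

```python
# Illustrative three-stage verification of a generated function-calling sample.
import json

API_REGISTRY = {
    "get_stock_price": lambda symbol: {"symbol": symbol, "price": 123.45},  # stub API
}

def format_check(raw: str):
    """Stage 1: the sample must be valid JSON with the expected keys."""
    try:
        sample = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return sample if {"query", "answers"} <= sample.keys() else None

def execution_check(sample: dict):
    """Stage 2: every generated call must run against a known API without errors."""
    results = []
    for call in sample["answers"]:
        fn = API_REGISTRY.get(call["name"])
        if fn is None:
            return None
        try:
            results.append(fn(**call["arguments"]))
        except Exception:
            return None
    return results

def semantic_check(sample: dict, results: list) -> bool:
    """Stage 3: APIGen is described as using a model to judge whether the
    execution results actually answer the query; a placeholder is used here."""
    return bool(results)

raw = json.dumps({
    "query": "What is the current price of CRM stock?",
    "answers": [{"name": "get_stock_price", "arguments": {"symbol": "CRM"}}],
})

sample = format_check(raw)
if sample and (results := execution_check(sample)) and semantic_check(sample, results):
    print("verified data point:", sample)
else:
    print("rejected sample")
```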


Verified First Expands Salesforce HR Capabilities

Expand the abilities of Salesforce as an HR platform with integrated background screening. Easily order background checks within Salesforce: with a few clicks, you can access Verified First's background screening solutions, including robust background and drug screen packages, all without leaving the platform. Report results are available on the Contact/Candidate page, so you never have to leave Salesforce.

Verified First has the only app on the Salesforce AppExchange that lets you background check and drug screen your applicants. The integration sets up in seconds and delivers report results directly into Salesforce. It also works seamlessly with other Salesforce apps such as Bullhorn for Salesforce, Bullhorn Jobscience, Sage, FinancialForce, and the Nonprofit Success Pack.

About Verified First

There are hundreds of background screening service providers to choose from, so what makes Verified First stand out? Compared to popular background screening companies, Verified First offers robust screening services with industry-leading customer care and cutting-edge technology. Unlike the big-box providers, Verified First is a privately owned Idaho company, so we can focus on the needs of our customers rather than the whims of shareholders. Experience the difference in service, technology, and client care: get to know Verified First.


Used YouTube to Train AI

As reported by siliconANGLE's Duncan Riley, a new report released today reveals that companies such as Anthropic PBC, Nvidia Corp., Apple Inc., and Salesforce Inc. have used subtitles from YouTube videos to train their AI services without obtaining permission. This raises significant ethical questions about the use of publicly available materials and facts without consent.

According to Proof News, these companies allegedly utilized subtitles from 173,536 YouTube videos sourced from over 48,000 channels to enhance their AI models. Rather than scraping the content themselves, Anthropic, Nvidia, Apple, and Salesforce reportedly used a dataset provided by EleutherAI, a nonprofit AI research organization. EleutherAI, founded in 2020, focuses on the interpretability and alignment of large AI models. The organization aims to democratize access to advanced AI technologies by developing and releasing open-source models such as GPT-Neo and GPT-J, and it advocates for open science norms in natural language processing, promoting transparency and ethical AI development.

The dataset in question, known as "YouTube Subtitles," includes transcripts from educational and online learning channels, as well as several media outlets and YouTube personalities. Notable YouTubers whose transcripts are included in the dataset are MrBeast, Marques Brownlee, PewDiePie, and left-wing political commentator David Pakman. Some creators whose content was used are outraged: Pakman, for example, argues that using his transcripts jeopardizes his livelihood and that of his staff, and David Wiskus, CEO of streaming service Nebula, has called the use of the data "theft."

Although the data is publicly accessible, the controversy centers on the fact that large language models are being trained on it. This situation echoes recent legal actions over the use of publicly available data to train AI models. Microsoft Corp. and OpenAI were sued in November over their use of nonfiction authors' works for AI training; the class-action lawsuit, led by a New York Times reporter, claimed that OpenAI scraped the content of hundreds of thousands of nonfiction books to develop its AI models. In April, The New York Times also accused OpenAI, Google LLC, and Meta Platforms Inc. of skirting legal boundaries in their use of AI training data.

The legality of using such data for AI training remains a gray area that has yet to be extensively tested in court. Should a case arise, the key issue will likely be whether publicly stated facts, including utterances, can be copyrighted. Relevant U.S. case law includes Feist Publications Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991) and International News Service v. Associated Press (1918), in both of which the U.S. Supreme Court held that facts cannot be copyrighted.


Democratizing CLM

IntelAgree, a leader in AI-powered contract lifecycle management (CLM) software, has announced the integration of generative AI functionality into its existing Salesforce platform. The new feature, Saige Assist: Contract Advice, enhances the contracting process by providing users with immediate answers to contract-related questions directly within the familiar Salesforce environment.

Available to IntelAgree users with AI-enabled subscriptions, Saige Assist: Contract Advice significantly enhances productivity and efficiency. Traditional inquiries to legal teams, which might take 48 to 72 hours because of staffing or prioritization constraints, are now addressed in seconds, enabling faster decision-making.

"Many of our clients draft and manage contracts through Salesforce. With this new feature, they won't need to leave Salesforce to get the answers they need," said Michael Schacter, Director of Product Management at IntelAgree. "They can ask questions right within the platform, making it an all-in-one solution for contract management."

"At IntelAgree, we aim to make contracting a team sport. A major part of this is meeting non-legal users where they work and how they prefer to work," said Kyle Myers, EVP of Product and Engineering at IntelAgree. "With this new Salesforce integration update, we're not just making contract management easier – we're democratizing it, making AI-powered contract insights available to anyone using Salesforce."

IntelAgree distinguishes itself with a user-first approach to contract management, addressing the evolving needs of modern businesses beyond just legal departments. Looking ahead, the company plans to expand Saige Assist's functionality to other native integrations. Along with the launch of Saige Assist: Contract Advice, IntelAgree has introduced an attributes tab to its Salesforce integration, giving users quick access to key attribute values such as arbitration, payment terms, and publicity restrictions. In a future release, users will also be able to complete smart forms within Salesforce, further minimizing the need to switch platforms.

About IntelAgree

IntelAgree is an AI-powered contract lifecycle management (CLM) platform that helps enterprise teams do impactful work, not busy work. The platform uses machine learning to identify, extract, and analyze text in agreements, making contract analytics more accessible. IntelAgree also uses intelligent automation to optimize every part of the contracting process, so teams can create, negotiate, sign, manage, and analyze contracts faster. IntelAgree is trusted by leading companies, ranging from major league sports teams to Fortune 500 companies, to automate the most painful, costly parts of the contracting process. For more information about IntelAgree, visit intelagree.com.


Einstein Service Agent

Introducing Agentforce Service Agent: Salesforce's Autonomous AI to Transform Chatbot Experiences

Accelerate case resolutions with an intelligent, conversational interface that uses natural language and is grounded in trusted customer and business data. Deploy in minutes with ready-made templates, Salesforce components, and a large language model (LLM) to autonomously engage customers across any channel, 24/7. Establish clear privacy and security guardrails to ensure trusted responses, and escalate complex cases to human agents as needed.

Editor's Note: Einstein Service Agent is now known as Agentforce Service Agent.

Salesforce has launched Agentforce Service Agent, the company's first fully autonomous AI agent, set to redefine customer service. Unlike traditional chatbots, which rely on preprogrammed responses and lack contextual understanding, Agentforce Service Agent is dynamic and capable of independently addressing a wide range of service issues, improving customer service efficiency.

Built on the Einstein 1 Platform, Agentforce Service Agent interacts with large language models (LLMs) to analyze the context of customer messages and autonomously determine the appropriate actions. Using generative AI, it creates conversational responses grounded in trusted company data, such as Salesforce CRM records, and aligns them with the brand's voice and tone. This reduces the burden of routine queries, allowing human agents to focus on more complex, high-value tasks, while customers receive faster, more accurate responses without waiting for human intervention. Available 24/7, Agentforce Service Agent communicates naturally across self-service portals and messaging channels, performing tasks proactively while adhering to the company's defined guardrails. When an issue requires human escalation, the transition is seamless, ensuring a smooth handoff.

Ease of Setup and Pilot Launch

Currently in pilot, Agentforce Service Agent will be generally available later this year. It can be deployed in minutes using pre-built templates, low-code workflows, and user-friendly interfaces. "Salesforce is shaping the future where human and digital agents collaborate to elevate the customer experience," said Kishan Chetan, General Manager of Service Cloud. "Agentforce Service Agent, our first fully autonomous AI agent, will revolutionize service teams by not only completing tasks autonomously but also augmenting human productivity. We are reimagining customer service for the AI era."

Why It Matters

While most companies use chatbots today, 81% of customers would still prefer to speak to a live agent because of unsatisfactory chatbot experiences. At the same time, 61% of customers say they prefer self-service options for simpler issues, indicating a need for more intelligent, autonomous agents like Agentforce Service Agent that are powered by generative AI.

The Future of AI-Driven Customer Service

Agentforce Service Agent can hold fluid, intelligent conversations with customers by analyzing the full context of inquiries. For instance, a customer contacting an online retailer about a return can have the issue fully processed by Agentforce, which autonomously handles tasks such as accessing purchase history, checking inventory, and sending follow-up satisfaction surveys. With trusted business data from Salesforce's Data Cloud, Agentforce generates accurate and personalized responses.
For example, a telecommunications customer looking for a new phone will receive tailored recommendations based on data such as purchase history and service interactions.

Advanced Guardrails and Quick Setup

Agentforce Service Agent leverages the Einstein Trust Layer to ensure data privacy and security, including the masking of personally identifiable information (PII). It can be activated quickly with out-of-the-box templates and pre-existing Salesforce components, and companies can equip it with customized skills using natural language instructions.

Multimodal Innovation Across Channels

Agentforce Service Agent supports cross-channel communication, including messaging apps like WhatsApp, Facebook Messenger, and SMS, as well as self-service portals. It also understands and responds to images, video, and audio: if a customer sends a photo of an issue, Agentforce can analyze it to provide troubleshooting steps or recommend replacement products.

Seamless Handoffs to Human Agents

If a customer's inquiry requires human attention, Agentforce seamlessly transfers the conversation to a human agent who has the full context, so the customer does not need to repeat information. A life insurance company might program Agentforce to escalate conversations if a customer mentions sensitive topics like loss or death; similarly, if a customer requests a return outside of the company's policy window, Agentforce can recommend that a human agent make an exception.

Customer Perspective

"Agentforce Service Agent's speed and accuracy in handling inquiries is promising. It responds like a human, adhering to our diverse, country-specific guidelines. I see it becoming a key part of our service team, freeing human agents to handle higher-value issues." — George Pokorny, SVP of Global Customer Success, OpenTable.

Content updated October 2024.


Summer 24 Salesforce Maps Release

Announcing the Salesforce Maps Summer '24 Release! We are thrilled to announce the availability of the Salesforce Maps Summer '24 release, designed to significantly enhance your experience and bring valuable benefits to your business.

Key Features and Enhancements

For a comprehensive overview of what's new, please refer to the Maps Summer '24 Release Notes. We encourage you to enable the new experience and provide your feedback to ensure it meets your needs and expectations. Note that the new experience will be auto-enabled in the Winter '25 release (October). Instructions on activating the new experience can be found here.


AI Confidence Scores

In this insight, the focus is on exploring the use of confidence scores available through the OpenAI API. The first section examines these scores and explains their significance using a custom chat interface; the second section demonstrates how to apply confidence scores programmatically in code.

Understanding Confidence Scores

To begin, it's important to understand what an LLM (large language model) is doing for each token in its response: it assigns a score to every candidate token in its vocabulary, normalizes those scores into a distribution, and selects the next token from that distribution. However, it's essential to clarify that the term "probabilities" here is somewhat misleading. While mathematically these values qualify as a probability distribution (they add up to one), they don't necessarily reflect true confidence or likelihood in the way we might expect, so they should be treated with caution. A useful way to think about these values is as "confidence" scores, though it's crucial to remember that, much like humans, LLMs can be confident and still be wrong. The values themselves are not inherently meaningful without additional context or validation.

Example: Using a Chat Interface

One way to explore these confidence scores is through a chat interface that surfaces the per-token values. In one case, when asked to "pick a number," the LLM chose the word "choose" despite it having only a 21% chance of being selected, demonstrating that LLMs don't always pick the most likely token unless configured to do so. Additionally, this interface shows how the model might struggle with questions that have no clear answer, offering insights into detecting possible hallucinations. For example, when asked to list famous people with an interpunct in their name, the model shows low confidence in its guesses. This behavior indicates uncertainty and can be a signal of a forthcoming incorrect response.

Hallucinations and Confidence Scores

A natural question is whether low confidence scores can help detect hallucinations, cases where the model generates false information. While low confidence often correlates with potential hallucinations, it is not a foolproof indicator: some hallucinations come with high confidence, while low-confidence tokens might simply reflect natural variability in language. For instance, when asked about the capital of Kazakhstan, the model shows uncertainty due to the historical changes between Astana and Nur-Sultan. The confidence scores reflect this inconsistency, highlighting how the model can still select an answer despite having conflicting information.

Using Confidence Scores in Code

The next part covers how to leverage confidence scores programmatically. For simple yes/no questions, it's possible to compress the response into a single token and calculate the confidence score using OpenAI's API. The key API settings are requesting log probabilities for the returned tokens and limiting the response to a single token. With this setup, one can extract the model's confidence in its response, converting log probabilities back into regular probabilities with math.exp.

Expanding to Real-World Applications

The same idea extends to more complex scenarios, such as verifying whether an image of a driver's license is valid. By analyzing the model's confidence in its answer, developers can decide when to flag responses for human review based on predefined confidence thresholds. The technique also applies to multiple-choice questions, allowing developers to extract not only the top token but also the top 10 options, along with their confidence scores.
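The post's own code is not reproduced here, but a minimal sketch of the single-token pattern with the OpenAI Python SDK might look like the following. The model name and the 0.9 review threshold are placeholder assumptions; the logprobs fields are standard parts of the chat completions response.

```python
# Minimal sketch: extract a confidence score for a single-token answer
# using the OpenAI chat completions API with logprobs enabled.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Answer with a single word, Yes or No: is Astana the capital of Kazakhstan?",
    }],
    max_tokens=1,        # compress the answer into one token
    logprobs=True,       # return log probabilities for the chosen token
    top_logprobs=10,     # also return the 10 most likely alternatives
    temperature=0,
)

token_info = response.choices[0].logprobs.content[0]
answer = token_info.token
confidence = math.exp(token_info.logprob)        # log probability -> probability
print(f"answer={answer!r}, confidence={confidence:.2%}")

# The alternatives behave like a ranked multiple-choice list.
for alt in token_info.top_logprobs:
    print(f"  {alt.token!r}: {math.exp(alt.logprob):.2%}")

# Flag low-confidence answers for human review (threshold is an assumption).
if confidence < 0.9:
    print("Below threshold; route to a human reviewer.")
```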
Conclusion

While confidence scores from LLMs aren't a perfect solution for detecting accuracy or truthfulness, they can provide useful insights in certain scenarios. With careful application and evaluation, developers can make informed decisions about when to trust the model's responses and when to intervene. The final takeaway is that confidence scores, while not foolproof, can play a role in improving the reliability of LLM outputs, especially when combined with thoughtful design and ongoing calibration.
