Small Language Models

Large language models (LLMs) like OpenAI's GPT-4 have gained acclaim for their versatility across a wide range of tasks, but they come with significant resource demands. In response, the AI industry is shifting its focus toward smaller, task-specific models designed to be more efficient, and Microsoft, alongside other tech giants, is investing in them.

Science often involves breaking complex systems down into their simplest forms to understand their behavior. This reductionist approach is now being applied to AI, with the goal of creating smaller models tailored for specific functions. Sébastien Bubeck, Microsoft's VP of generative AI, highlights this trend: "You have this miraculous object, but what exactly was needed for this miracle to happen; what are the basic ingredients that are necessary?"

In recent years, the proliferation of LLMs like ChatGPT, Gemini, and Claude has been remarkable. However, small language models (SLMs) are gaining traction as a more resource-efficient alternative. Despite their smaller size, SLMs promise substantial benefits to businesses.

Microsoft introduced Phi-1 in June 2023, a smaller model aimed at aiding Python coding. It was followed by Phi-2 and Phi-3, which, though larger than Phi-1, are still far smaller than the leading LLMs. For comparison, Phi-3-medium has 14 billion parameters, while GPT-4 is estimated to have 1.76 trillion, roughly 125 times more. Microsoft touts the Phi-3 models as "the most capable and cost-effective small language models available."

Microsoft's shift toward SLMs reflects a belief that the dominance of a few large models will give way to a more diverse ecosystem of smaller, specialized models. For instance, an SLM designed specifically for analyzing consumer behavior might be more effective for targeted advertising than a broad, general-purpose model trained on the entire internet.

SLMs excel because of their focused training on specific domains. "The whole fine-tuning process … is highly specialized for specific use-cases," explains Silvio Savarese, Chief Scientist at Salesforce, another company advancing SLMs. To illustrate: a specialized screwdriver is more practical for a home repair project than a multifunction tool that is more expensive and less focused.

This trend toward SLMs reflects a broader shift in the AI industry from hype to practical application. As Brian Yamada of VML notes, "As we move into the operationalization phase of this AI era, small will be the new big." Smaller, specialized models, or combinations of models, will address specific needs while saving time and resources.

Some voices express concern over the dominance of a few large models, with figures like Jack Dorsey advocating for a diverse marketplace of algorithms. Philippe Krakowsky of IPG likewise worries that relying on the same models might stifle creativity.

SLMs offer the advantage of lower costs, both in development and operation. Microsoft's Bubeck emphasizes that SLMs are "several orders of magnitude cheaper" than larger models. Typically, SLMs operate with around three to four billion parameters, making them feasible to deploy on devices like smartphones.

Smaller models do come with trade-offs, however: fewer parameters mean reduced capabilities. "You have to find the right balance between the intelligence that you need versus the cost," Bubeck acknowledges.

Salesforce's Savarese views SLMs as a step toward a new form of AI, characterized by "agents" capable of performing specific tasks and executing plans autonomously.
This vision of AI agents goes beyond today's chatbots, which can generate a travel itinerary but cannot take action on your behalf. Salesforce recently introduced a 1 billion-parameter SLM that reportedly outperforms some LLMs on targeted tasks. Salesforce CEO Marc Benioff celebrated the advancement, proclaiming, "On-device agentic AI is here!"
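To make the scale difference concrete, a model in the three-to-four-billion-parameter range can be loaded and run locally with commodity tooling. The snippet below is a minimal sketch, assuming the Hugging Face transformers library and the publicly released Phi-3-mini checkpoint; the model ID and prompt are illustrative examples, not anything prescribed by the article.

```python
# Minimal sketch: running a small language model locally with Hugging Face
# transformers. The model ID and prompt are illustrative; any similarly
# sized SLM checkpoint could be substituted.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # roughly 3.8B parameters
    device_map="auto",         # uses a GPU if present, otherwise CPU
    trust_remote_code=True,    # may be needed for some Phi-3 releases
)

prompt = "Summarize this support ticket in one sentence: customer cannot reset password."
output = generator(prompt, max_new_tokens=60, do_sample=False)
print(output[0]["generated_text"])
```

On a laptop-class machine this runs entirely on device, which is the practical point behind the "on-device agentic AI" claim above.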

Engagement Frequency Dashboard

The Einstein Engagement Frequency Dashboard provides a comprehensive overview of your contacts' email saturation levels. By analyzing this data, you can understand how your email sending frequency influences engagement metrics such as opens, clicks, and unsubscribes over time. The What-If Analyzer, a handy tool within the dashboard, lets you experiment with different sending frequencies to maximize the number of contacts in the On Target saturation range.

Accessing the Dashboard

To access the Einstein Engagement Frequency Dashboard, open it from within Salesforce Marketing Cloud. Once on the dashboard, you can click "View Details" at the top to check your data quality scores and get tips on how to improve them. This gives you an idea of how reliable your email or mobile engagement data is.

Note on Data Quality

If Einstein lacks sufficient data for certain contacts, it assigns frequency scores based on global model data. This can sometimes cause discrepancies between the Einstein Engagement Frequency dashboard and activity-level analytics in specific journeys.

What-If Analyzer

The What-If Analyzer lets you test different future saturation levels based on varying message frequencies. The goal is to increase the number of contacts in the "On Target" range for engagement. The analyzer provides a bar chart that predicts how adjusting your email frequency can shift contacts from "Saturated" or "Undersaturated" to "On Target," helping you fine-tune your communication strategy to optimize engagement across your contact base.
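The kind of comparison the What-If Analyzer performs can be illustrated with a small script. This is purely an illustrative sketch and not Einstein's actual model: the saturation thresholds and send frequencies below are invented, whereas the real ranges are derived from your own engagement data.

```python
# Illustrative sketch only: this is not Einstein's model. It mimics the kind
# of what-if comparison the dashboard performs, using invented saturation
# thresholds (emails per week) and example send frequencies.
UNDER_LIMIT = 2.0   # below this a contact counts as Undersaturated (assumed)
OVER_LIMIT = 6.0    # above this a contact counts as Saturated (assumed)

def classify(emails_per_week: float) -> str:
    """Bucket a contact by weekly send frequency."""
    if emails_per_week < UNDER_LIMIT:
        return "Undersaturated"
    if emails_per_week > OVER_LIMIT:
        return "Saturated"
    return "On Target"

def what_if(frequencies: list[float], delta: float) -> dict[str, int]:
    """Count contacts per bucket if every send frequency shifts by `delta`."""
    counts = {"Undersaturated": 0, "On Target": 0, "Saturated": 0}
    for freq in frequencies:
        counts[classify(max(freq + delta, 0.0))] += 1
    return counts

current = [1.0, 3.5, 7.2, 4.0, 8.5]   # example weekly send frequencies
print(what_if(current, 0.0))          # today's distribution
print(what_if(current, -1.5))         # projected distribution after sending less
```

The dashboard's bar chart is essentially the second printout compared against the first: how many contacts move into the On Target bucket if you change frequency.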

TEFCA could drive payer-provider interoperability

Bridging the Interoperability Gap: TEFCA's Role in Payer-Provider Data Exchange

Electronic health information exchange (HIE) between healthcare providers has grown significantly in recent years, but interoperability between providers and payers has lagged behind. The Trusted Exchange Framework and Common Agreement (TEFCA) aims to close this gap and enhance data interoperability across the healthcare ecosystem, with a little help from the world of technology.

TEFCA's Foundation and Evolution

TEFCA was established under the 21st Century Cures Act to improve health data interoperability through a "network of networks" approach. The Office of the National Coordinator for Health Information Technology (ONC) officially launched TEFCA in December 2023, designating five initial Qualified Health Information Networks (QHINs); by February 2024, two additional QHINs had been designated. The Sequoia Project, TEFCA's recognized coordinating entity, recently released several key documents for stakeholder feedback, including draft standard operating procedures (SOPs) for healthcare operations and payment under TEFCA. During the 2024 WEDI Spring Conference, leaders from three QHINs (eHealth Exchange, Epic Nexus, and Kno2) discussed TEFCA's future role in enhancing provider and payer interoperability.

ONC released Version 2.0 of the Common Agreement on April 22, 2024. It updates Common Agreement Version 1.1, published in November 2023, and adds a requirement to support Health Level Seven (HL7®) Fast Healthcare Interoperability Resources (FHIR®) based transactions. The Common Agreement includes an exhibit, the Participant and Subparticipant Terms of Participation (ToP), that sets forth the requirements each Participant and Subparticipant must agree to and comply with in order to participate in TEFCA. The Common Agreement and ToP incorporate all applicable SOPs and the Qualified Health Information Network Technical Framework (QTF). See the release notes for Common Agreement Version 2.0 for details.

TEFCA has three goals: (1) establish a universal governance, policy, and technical floor for nationwide interoperability; (2) simplify connectivity so organizations can securely exchange information to improve patient care, enhance the welfare of populations, and generate healthcare value; and (3) enable individuals to gather their healthcare information.

Challenges in Payer Data Exchange

Although the QHINs on the panel have made progress in facilitating payer HIE, they emphasized that TEFCA is not yet fully operational for large-scale payer data exchange. Ryan Bohochik, Vice President of Value-Based Care at Epic, highlighted the complexities of payer-provider data exchange. "We've focused on use cases that allow for real-time information sharing between care providers and insurance carriers," Bohochik said. "However, TEFCA isn't yet capable of supporting this at the scale required."

Bohochik also pointed out that payer data exchange is complicated by the involvement of third-party contractors. Health plans often partner with vendors for tasks like care management or quality measure calculation, which adds layers of complexity to the data exchange process.
Catherine Bingman, Vice President of Interoperability Adoption for eHealth Exchange, echoed these concerns, noting that member attribution and patient privacy are critical issues in payer data exchange. "Payers don't have the right to access everything a patient has paid for themselves," Bingman said. "This makes providers cautious about sharing data, impacting patient care." For instance, manual prior authorization processes frequently delay patient access to care: a 2023 AMA survey found that 42% of doctors reported care delays due to prior authorization, with 37% stating that these delays were common.

Building Trust Through Use Cases

Matt Becker, Vice President of Interoperability at Kno2, stressed the importance of developing specific use cases to establish trust in payer data exchange via TEFCA. "Payment and operations is a broad category that includes HEDIS measures, quality assurance, and provider monitoring," Becker said. "Each of these requires a high level of trust."

Bohochik agreed, emphasizing that narrowing the scope and focusing on specific, high-value use cases will be essential for TEFCA's adoption. "We can't solve everything at once," Bohochik said. "We need to focus on achieving successful outcomes in targeted areas, which will build momentum and community support." He also noted that while technical data standards are crucial, building trust in the data exchange process is equally important. "A network is only as good as the trust it inspires," Bohochik said. "If healthcare systems know that data requests for payment and operations are legitimate and secure, it will drive the scalability of TEFCA."

By focusing on targeted use cases, ensuring rigorous data standards, and building trust, TEFCA has the potential to significantly enhance interoperability between healthcare providers and payers, ultimately improving patient care and operational efficiency.
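Because Common Agreement Version 2.0 requires support for HL7 FHIR-based transactions, it helps to see what such an exchange looks like at the API level. The sketch below is a generic FHIR R4 REST search in Python; the base URL, access token, and patient reference are hypothetical placeholders, and real QHIN-to-QHIN exchange layers TEFCA-specific authorization and purpose-of-use rules on top of this that are not shown.

```python
# Minimal sketch of an HL7 FHIR R4 search over REST, the kind of FHIR-based
# transaction Common Agreement Version 2.0 requires support for. The base
# URL, token, and patient reference are hypothetical placeholders; real
# QHIN exchange adds TEFCA-specific authorization and purpose-of-use
# handling that is not shown here.
import requests

FHIR_BASE = "https://example-qhin.example.org/fhir"  # hypothetical endpoint
ACCESS_TOKEN = "..."                                  # obtained out of band

response = requests.get(
    f"{FHIR_BASE}/Coverage",
    params={"patient": "Patient/123", "_count": 10},  # Coverage resources for one patient
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/fhir+json",
    },
    timeout=30,
)
response.raise_for_status()

bundle = response.json()  # a FHIR "searchset" Bundle
for entry in bundle.get("entry", []):
    coverage = entry["resource"]
    print(coverage["id"], coverage.get("status"))
```

A payment-and-operations use case such as HEDIS reporting or prior authorization support would build on queries like this one, with the trust framework determining who may ask for what and why.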
