SearchGPT and Knowledge Cutoff


Tackling the Knowledge Cutoff Challenge in Generative AI

In the realm of generative AI, a significant hurdle has been the knowledge cutoff, where a large language model (LLM) only has information up until a specific date. This was an early concern with OpenAI's ChatGPT. For example, the GPT-4o model that currently powers ChatGPT has a knowledge cutoff of October 2023, while the older GPT-4 model had a cutoff of September 2021. Traditional search engines like Google don't face this limitation: Google continuously crawls the internet to keep its index up to date with the latest information. To address the knowledge cutoff in LLMs, multiple vendors, including OpenAI, are exploring search capabilities powered by generative AI (GenAI).

Introducing SearchGPT: OpenAI's GenAI Search Engine

SearchGPT is OpenAI's GenAI search engine, first announced on July 26, 2024. It aims to combine the strengths of a traditional search engine with the capabilities of GPT LLMs, eliminating the knowledge cutoff by drawing real-time data from the web. SearchGPT is currently a prototype, available to a limited group of test users, including individuals and publishers. OpenAI has invited publishers to ensure their content is accurately represented in search results. The service is positioned as a temporary offering to test and evaluate its performance. Once this evaluation phase is complete, OpenAI plans to integrate SearchGPT's functionality directly into the ChatGPT interface. As of August 2024, OpenAI has not announced when SearchGPT will be generally available or integrated into the main ChatGPT experience.

Key Features of SearchGPT

SearchGPT offers several features designed to enhance the capabilities of ChatGPT.

OpenAI's Challenge to Google Search

Google has long dominated the search engine landscape, a position that OpenAI aims to challenge with SearchGPT.
Answers, Not Links

Traditional search engines like Google act primarily as indexes, pointing users to other sources of information rather than directly providing answers. Google has introduced AI Overviews (formerly Search Generative Experience, or SGE) to offer AI-generated summaries, but it still relies heavily on linking to third-party websites. SearchGPT aims to change this by providing direct answers to user queries, summarizing the source material instead of merely pointing to it.

Contextual Continuity

In contrast to Google's point-in-time searches, where each query is independent, SearchGPT strives to maintain context across multiple queries, offering a more seamless and coherent search experience.

Search Accuracy

Google Search often depends on keyword matching, which can require users to sift through several pages to find relevant information. SearchGPT aims to combine real-time data with an LLM to deliver more contextually accurate and relevant information.

Ad-Free Experience

SearchGPT offers an ad-free interface, providing a cleaner and more user-friendly experience than Google, which includes ads in its search results.
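Mechanically, an "answers, not links" engine is usually built as retrieval-augmented generation: fetch fresh documents, rank them against the query, and have an LLM summarize the top passages with attribution. A minimal sketch under those assumptions; the corpus, word-overlap scoring, and concatenation "summary" are illustrative stand-ins, not OpenAI's actual implementation:

```python
# Toy retrieval-augmented search: rank fresh documents against the query and
# summarize the best passages with attribution. Overlap scoring stands in for
# the learned embeddings a production system would use.

def score(query: str, passage: str) -> int:
    """Rank by simple word overlap (real systems use learned rankers)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def search(query: str, corpus: dict) -> list:
    """Return source URLs ordered by relevance to the query."""
    return sorted(corpus, key=lambda url: score(query, corpus[url]), reverse=True)

def answer_with_sources(query: str, corpus: dict, k: int = 2) -> str:
    """Answer directly, citing sources, instead of only returning links."""
    top = search(query, corpus)[:k]
    summary = " ".join(corpus[url] for url in top)
    return f"{summary} [sources: {', '.join(top)}]"

corpus = {
    "example.com/a": "SearchGPT is a prototype GenAI search engine from OpenAI.",
    "example.com/b": "Traditional engines return ranked links to websites.",
}
print(answer_with_sources("what is SearchGPT", corpus))
```

In a real system the final summary comes from an LLM prompted with the retrieved passages; the index refresh, not the model's training data, supplies the current information.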
AI-Powered Search Engine Comparison

Here's a comparison of the AI-powered search engines available today:

- SearchGPT (OpenAI): standalone prototype; strong emphasis on publisher collaboration; ad-free; free (prototype stage)
- Google SGE: built on Google's infrastructure; SEO practices and content partnerships; includes ads; free
- Microsoft Bing AI/Copilot: built on Microsoft's infrastructure; SEO practices and content partnerships; includes ads; free
- Perplexity AI: standalone; basic source attribution; ad-free; free, with a $20/month premium tier
- You.com: AI assistant with various modes; basic source attribution; ad-free; free, with premium tiers available
- Brave Search: independent search index; basic source attribution; ad-free; free


Connected Care Technology

How Connected Care Technology Can Transform the Provider Experience

Northwell Health is leveraging advanced connected care technologies, including AI, to alleviate administrative burdens and foster meaningful interactions between providers and patients. While healthcare technology has revolutionized traditional care delivery models, it has also inadvertently created barriers, increasing the administrative workload and distancing providers from their patients. Dr. Michael Oppenheim, Senior Vice President of Clinical Digital Solutions at Northwell Health, highlighted this challenge during the Connected Health 2024 virtual summit, using a poignant illustration published a decade ago in the Journal of the American Medical Association. The image portrays a physician focused on a computer with their back to a patient and family, emphasizing how technology can inadvertently shift attention away from patient care.

Reimagining Technology to Enhance Provider-Patient Connections

To prevent technology from undermining the patient-provider relationship, healthcare organizations must reduce the administrative burden and enhance connectivity between patients and care teams. Northwell Health exemplifies this approach by implementing innovative solutions aimed at improving access, efficiency, and communication.

1. Expanding Access Without Overloading Providers

Connected healthcare technologies can dramatically improve patient access but may strain clinicians managing large patient panels. Dr. Oppenheim illustrated how physicians often need to review extensive patient histories for every interaction, consuming valuable time. Northwell Health addresses this challenge by employing mapping tools, propensity analyses, and matching algorithms to align patients with the most appropriate providers. By connecting patients to specialists who best meet their needs, providers can maximize their time and expertise while ensuring better patient outcomes.
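Patient-provider matching of this kind can be illustrated with a toy scoring function. The fields and weights below are hypothetical; Northwell's actual mapping tools and propensity models are not public:

```python
# Illustrative patient-provider matching score: reward specialty fit,
# penalize travel distance and an already-full patient panel.
# All fields and weights here are invented for illustration.

def match_score(patient: dict, provider: dict) -> float:
    """Higher is better."""
    specialty_fit = 1.0 if provider["specialty"] == patient["needed_specialty"] else 0.0
    distance_penalty = min(provider["distance_miles"] / 50.0, 1.0)
    load_penalty = provider["panel_size"] / provider["panel_capacity"]
    return 2.0 * specialty_fit - distance_penalty - load_penalty

def best_provider(patient: dict, providers: list) -> dict:
    """Pick the provider with the highest match score for this patient."""
    return max(providers, key=lambda p: match_score(patient, p))

patient = {"needed_specialty": "cardiology"}
providers = [
    {"name": "A", "specialty": "cardiology", "distance_miles": 30,
     "panel_size": 900, "panel_capacity": 1000},
    {"name": "B", "specialty": "dermatology", "distance_miles": 5,
     "panel_size": 200, "panel_capacity": 1000},
]
print(best_provider(patient, providers)["name"])
```

Even with a closer, less-loaded alternative, the specialty match dominates here, which mirrors the article's point: route the patient to the clinician whose expertise fits, so the visit uses the provider's time well.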
2. Leveraging Generative AI for Chart Summarization

Generative AI is proving transformative in managing the immense data volumes clinicians face. AI-driven tools help summarize patient records, extracting clinically relevant details tailored to the provider's specialty. For instance, in a pilot at Northwell Health, AI successfully summarized complex hospitalizations, capturing the critical elements of care transitions. This "just right" approach ensures providers receive actionable insights without unnecessary data overload. Additionally, ambient listening tools are being used to document clinical consultations seamlessly. By automatically summarizing interactions into structured notes, physicians can focus entirely on their patients during visits, improving care quality while reducing after-hours charting.

3. Streamlining Team-Based Care

Effective care delivery often involves a multidisciplinary team, including primary physicians, specialists, nurses, and social workers. Coordinating communication across these groups has historically been challenging. Northwell Health is addressing this issue by adopting EMR systems with integrated team chat functionality, enabling real-time collaboration among care teams. These tools facilitate better care planning and communication, ensuring patients receive coordinated and consistent treatment. Dr. Oppenheim emphasized the importance of not only uniting clinicians in decision-making but also involving patients in discussions. By presenting clear, viable options, providers can enhance patient engagement and shared decision-making.

The Path Forward: Balancing Technology with Provider Needs

As healthcare continues its digital transformation, connected care technologies must prioritize clinician satisfaction alongside patient outcomes. Tools that simplify workflows, enhance communication, and reduce administrative burdens are crucial for fostering provider buy-in and ensuring the success of health IT initiatives.
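The "just right" specialty-tailored summarization described above hinges on filtering a record down to what the reading clinician needs before any LLM summarization step. A minimal sketch, with invented specialty tags and chart events:

```python
# Sketch of specialty-tailored chart filtering: keep only events tagged as
# relevant to the reading clinician's specialty, in chart order, before any
# summarization. Tags and events below are invented illustrations.

SPECIALTY_TAGS = {
    "cardiology": {"ecg", "troponin", "echo"},
    "nephrology": {"creatinine", "dialysis"},
}

def relevant_events(events: list, specialty: str) -> list:
    """Return the text of events whose tag matches the given specialty."""
    tags = SPECIALTY_TAGS.get(specialty, set())
    return [e["text"] for e in events if e["tag"] in tags]

events = [
    {"tag": "ecg", "text": "ECG showed atrial fibrillation."},
    {"tag": "creatinine", "text": "Creatinine rose to 2.1."},
    {"tag": "troponin", "text": "Troponin negative x3."},
]
print(relevant_events(events, "cardiology"))
```

A production system would learn these relevance judgments rather than hard-code tags, and would pass the filtered events to a generative model; the point of the sketch is only that relevance filtering happens per specialty, so each clinician sees a different "just right" subset of the same chart.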
Northwell Health's efforts demonstrate how thoughtfully implemented technologies can empower clinicians, strengthen patient relationships, and create a truly connected healthcare experience. Tectonic is here to help your facility plan. Content updated November 2024.

Healthcare IT Lessons from CrowdStrike


Post-Outage Recovery and Lessons from the CrowdStrike Incident

Following the CrowdStrike outage on July 19, 2024, companies globally have been working to restore business continuity and enhance their resilience for future incidents. The outage, caused by a faulty content update, led to crashes on approximately 8.5 million Windows devices, affecting hospitals, airlines, and other businesses. Although less than 1% of all Windows machines were impacted, the incident caused significant disruptions, including appointment cancellations at hospitals. For instance, Mass General Brigham canceled all non-urgent visits on the day the outage began. Other healthcare organizations, such as Memorial Sloan Kettering Cancer Center, Cleveland Clinic, and Mount Sinai, also faced operational challenges.

The cause of the outage was a defective content configuration update to CrowdStrike's Falcon threat detection platform, not a cyberattack. A bug in the content validator allowed the faulty update to bypass validation, as noted in CrowdStrike's preliminary post-incident review. David Finn, Executive Vice President of Governance, Risk, and Compliance at First Health Advisory, shared with TechTarget Editorial, "The recovery is well underway, and most healthcare organizations are back up and running. While the scope was smaller compared to other recent incidents in healthcare, the response was effective. There are valuable lessons to be learned."

Preparing for Future Incidents

Finn, with 40 years of experience in health IT security, emphasized that incidents are inevitable. "The challenge is to plan, prepare, and be able to recover and stay resilient," he stated. Whether facing a major cyberattack like the February 2024 Change Healthcare incident or an IT outage without malicious intent, healthcare organizations must be ready for various cyber incidents affecting critical systems. He highlighted the importance of thorough due diligence and incident response planning.
Addressing potential operational challenges in advance and planning for cybersecurity events or IT failures will prove beneficial when an incident occurs. "We need to rethink how we deploy software," Finn added. "Human errors will always happen, and it's our job to protect against those mistakes."

Building Cyber-Resilience

Cyber-resilience is crucial for quickly recovering and resuming operations. Organizations should anticipate incidents and focus on building resilience. Finn noted, "While I still trust CrowdStrike, trust does not guarantee perfection. Resilience and redundancy are vital." Healthcare organizations responded swiftly to the CrowdStrike incident, with Mass General Brigham activating its incident command to manage the situation. The organization ensured that clinics and emergency departments remained open for urgent health concerns and resumed scheduled appointments and procedures by July 22.

Evaluating Risk and Updating Protocols

Erik Weinick, co-head of the privacy and cybersecurity practice at Otterbourg, urged organizations to use the CrowdStrike incident as an opportunity to reevaluate their risk management protocols. "Even if the incident was accidental, organizations should conduct information audits, penetration testing, update system mappings, and reinforce security practices like multifactor authentication and strong password policies."

Addressing Third-Party Risk

The outage underscored the importance of managing third-party risks. The interconnectedness of healthcare systems amplifies these risks, as evidenced by some of the largest healthcare data breaches in recent years originating from third-party vendors. Finn suggested that while organizations may conduct risk analyses on vendors like CrowdStrike, they should also inquire about the tools used in software development. "We need standards and certifications for software used in critical infrastructure sectors," he said.
In response to the incident, CrowdStrike committed to enhancing its software resilience by adding more validation checks and conducting independent third-party security code reviews. Weinick advised reviewing vendor agreements, updating business disruption insurance coverage, and conducting tabletop exercises to rehearse business continuity and recovery procedures for all potential disruptions. Overall, the CrowdStrike outage highlighted critical IT and security considerations, emphasizing the need for resilience, effective third-party risk management, and robust incident response and recovery plans.
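The remediation pattern described here, stricter validation plus staged deployment, can be sketched as a generic rollout gate. This is an illustrative pattern, not CrowdStrike's actual pipeline; the ring names, telemetry, and thresholds are invented:

```python
# Illustrative staged-rollout gate for content updates: validate the update,
# then deploy ring by ring, halting promotion if crash telemetry in any ring
# exceeds a threshold instead of pushing to every host at once.

def validate(update: dict) -> bool:
    """Reject structurally malformed updates before any deployment."""
    fields = update.get("fields")
    return bool(fields) and all(f is not None for f in fields)

def promote(update: dict, rings: list, crash_rate: dict,
            max_crash_rate: float = 0.001) -> list:
    """Return the rings actually deployed to; stop at the first bad ring."""
    if not validate(update):
        return []
    deployed = []
    for ring in rings:
        deployed.append(ring)
        if crash_rate.get(ring, 0.0) > max_crash_rate:
            break  # halt the rollout; later rings never receive the update
    return deployed

rings = ["internal", "canary", "broad", "global"]
bad_telemetry = {"canary": 0.9}  # canary hosts crash, so promotion stops there
print(promote({"fields": [1, 2, 3]}, rings, bad_telemetry))
```

The two gates fail independently: a malformed update never deploys at all, and a well-formed but harmful one is contained at the canary ring rather than reaching 8.5 million machines.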

Need AI Training


Workers in office jobs are missing out on essential AI training, crucial for adapting to the evolving labor market, according to a report from software giant Salesforce. Released Tuesday, the report is based on a series of anonymous surveys and reveals that approximately 70% of desk workers have not received training in generative AI. Furthermore, only 21% of respondents said their companies have clearly defined policies regarding approved AI tools and their usage, and 62% believe they lack the necessary skills to use the technology effectively.

Despite this lack of formal training and clear policies, many workers are taking the initiative to use AI tools independently. The report highlights that "workers aren't waiting for permission to use AI," with 55% of survey respondents using unapproved tools and 40% using AI tools explicitly banned by their employers. The report emphasizes the need for clear protocols and approved tools to address data security and ethical concerns.

Clara Shih, CEO of Salesforce AI, stresses the importance of continuous learning in response to the rapid changes in AI technology, stating, "The unprecedented pace of change in AI requires companies to upskill their entire workforce. This is not a 'one-and-done' exercise, but rather a continuous cycle of learning as AI evolves."

Some companies are stepping up to meet this challenge. While less than half of U.S. companies had initiated AI training for their workers by April, according to a LinkedIn study, businesses like JPMorgan Chase, Amazon, PricewaterhouseCoopers, AT&T, Verizon, Moderna, and General Motors are launching AI literacy initiatives for their employees and the broader workforce. For instance, Amazon aims to train 2 million people globally in generative AI by 2025.

ChatBots in Medical Diagnostics


Researchers from the National Institutes of Health (NIH) have demonstrated that a multimodal AI model can achieve high accuracy on a medical diagnostic quiz yet struggle to describe medical images and explain the reasoning behind its answers. Chatbots in medical diagnostics, in other words, may not be ready for prime time.

To evaluate AI's potential in clinical settings, the research team tasked Generative Pre-trained Transformer 4 with Vision (GPT-4V) with answering 207 questions from the New England Journal of Medicine (NEJM) Image Challenge. This challenge, designed to help healthcare professionals test their diagnostic abilities, prompts users to select a diagnosis from multiple-choice options after reviewing clinical images and a text-based description of patient symptoms. The researchers asked the AI to both answer the questions and provide a rationale for each answer, including a description of the image presented, a summary of current, relevant clinical knowledge, and step-by-step reasoning for how GPT-4V arrived at its answer.

Nine clinicians from various specialties were also tasked with answering the same questions, first in a closed-book environment with no access to external resources, then in an open-book setting where they could refer to external sources. The research team then provided the clinicians with the correct answers and the AI's responses, asking them to score GPT-4V's ability to describe the images, summarize medical knowledge, and provide step-by-step reasoning.

The analysis revealed that both the clinicians and the AI scored highly in choosing the correct diagnosis. In closed-book settings, the AI outperformed the clinicians, whereas the clinicians outperformed the model in open-book settings. Moreover, GPT-4V frequently made mistakes when explaining its reasoning and describing medical images, even in cases where it selected the correct answer.
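The study's headline comparison reduces to multiple-choice accuracy over the same set of questions. A toy version with invented answer sheets (not the study's data):

```python
# Toy scoring for the NEJM Image Challenge comparison: multiple-choice
# accuracy for the model and for clinicians against the same answer key.
# All answer sheets below are invented for illustration.

def accuracy(answers: list, key: list) -> float:
    """Fraction of questions answered correctly."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

key         = ["B", "A", "D", "C", "A"]
model       = ["B", "A", "D", "B", "A"]  # misses question 4 only
closed_book = ["B", "C", "D", "B", "A"]  # clinicians without references

print(accuracy(model, key), accuracy(closed_book, key))
```

The study's deeper finding is exactly what this metric cannot capture: a model can pick the right letter while its image description and reasoning are wrong, which is why the clinicians separately scored GPT-4V's rationales.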
Despite the study's small sample size, the researchers noted that their findings highlight how multimodal AI could be used to provide clinical decision support. "This technology has the potential to help clinicians augment their capabilities with data-driven insights that may lead to improved clinical decision-making," said Zhiyong Lu, Ph.D., corresponding author of the study and senior investigator at NIH's National Library of Medicine (NLM), in a press release. "Understanding the risks and limitations of this technology is essential to harnessing its potential in medicine."

However, the research team emphasized the importance of assessing AI-based clinical decision support tools. "Integration of AI into healthcare holds great promise as a tool to help medical professionals diagnose patients faster, allowing them to start treatment sooner," explained Stephen Sherry, Ph.D., NLM acting director. "However, as this study shows, AI is not advanced enough yet to replace human experience, which is crucial for accurate diagnosis."

Salesforce Model Tester


Salesforce is taking steps to ensure its AI models perform accurately, even with unexpected data. The company recently filed a patent for an "automated testing pipeline for neural network models." This technology helps developers predict whether their AI models will maintain accuracy when dealing with "unseen queries," using customer service bots as a primary example.

Typically, developers test their AI models using a subset of the original training data. However, Salesforce notes that this approach may not be ideal for smaller datasets or when real-time data differs significantly from the training set. To address this, Salesforce's system creates both easy and hard evaluation datasets from real-time customer data. The "hard" datasets contain queries significantly different from the training data, while the "easy" datasets are more similar.

The system begins by passing customer data through a "dependency parser," which filters out specific actions or verbs representing meaningful commands. Then, a pre-trained language model ranks the queries based on their similarity to the training data. A "bag of words" classifier removes queries that are too similar, ensuring the testing data is diverse. These curated datasets are used to evaluate the model's performance. The pipeline also includes a "human-in-the-loop" feedback mechanism to notify developers when a model isn't performing well, allowing for adjustments.

Salesforce's primary AI product, Einstein, enables customers to create generative AI experiences using their data. Unlike some companies that focus on building massive AI models, Salesforce aims to empower enterprise clients to develop their own models, according to Bob Rogers, Ph.D., co-founder of BeeKeeperAI and CEO of Oii.ai. This patent could enhance Salesforce's offerings by ensuring the AI models built on its platform function as intended. "I think Salesforce wants Einstein to generate more leads and faster.
And if that's not happening, it could be a miss for Salesforce," Rogers said.

The patent's emphasis on improving customer service chatbots suggests Salesforce is focusing on AI-driven customer interactions. This is in line with the company's recent unveiling of its fully autonomous Einstein Service Agent, highlighting where Salesforce believes the most traction for Einstein might be. Rogers noted that while creating tools for customers to build their own AI models is challenging, Salesforce's approach stands out in a market dominated by companies like Google, Microsoft, and OpenAI, which offer ready-to-use AI services. "At the end of the day, most AI utilization is still people saying, 'solve my problem for me,'" Rogers said.
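The pipeline's easy/hard split can be sketched with a simple similarity proxy. Word-overlap Jaccard similarity below stands in for both the dependency-parser filtering and the pretrained-model ranking the patent describes, and the cutoffs are invented:

```python
# Sketch of the patent's evaluation-set pipeline: rank live queries by
# similarity to the training set, drop near-duplicates, and split the rest
# into "easy" (similar) and "hard" (dissimilar) evaluation sets.

def bow(text: str) -> set:
    """Bag-of-words representation of a query."""
    return set(text.lower().split())

def similarity(query: str, training: list) -> float:
    """Best Jaccard overlap between the query and any training example."""
    q = bow(query)
    return max(len(q & bow(t)) / len(q | bow(t)) for t in training)

def build_eval_sets(queries: list, training: list,
                    dup_cutoff: float = 0.9, easy_cutoff: float = 0.3):
    """Return (easy, hard) evaluation sets built from real-time queries."""
    easy, hard = [], []
    for q in queries:
        s = similarity(q, training)
        if s >= dup_cutoff:
            continue  # near-duplicate of training data: no test value
        (easy if s >= easy_cutoff else hard).append(q)
    return easy, hard

training = ["reset my password", "cancel my order"]
queries = [
    "reset my password",          # duplicate of training: discarded
    "reset my account password",  # similar: lands in the easy set
    "why was my card declined",   # dissimilar: lands in the hard set
]
easy, hard = build_eval_sets(queries, training)
print(easy, hard)
```

Accuracy on the hard set then approximates how the model will behave on genuinely unseen queries, which is the question the patent's pipeline is built to answer before deployment.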

Impact of Generative AI on Workforce


The Impact of Generative AI on the Future of Work

Automation has long been a source of both concern and hope for the future of work. Now, generative AI is the latest technology fueling fear and optimism alike.

AI's Role in Job Augmentation and Replacement

While AI is expected to enhance many jobs, there's a growing argument that job augmentation for some might lead to job replacement for others. For instance, if AI makes a worker's tasks ten times easier, the roles created to support that job could become redundant. A June 2023 McKinsey report estimated that generative AI (GenAI) could automate 60% to 70% of employee workloads. In fact, AI has already begun replacing jobs, contributing to nearly 4,000 job cuts in May 2023 alone, according to Challenger, Gray & Christmas Inc. OpenAI, the creator of ChatGPT, estimates that 80% of the U.S. workforce could see at least 10% of their work tasks affected by large language models (LLMs).

Examples of AI Job Replacement

One notable example involves a writer at a tech startup who was let go without explanation, only to later discover references to her as "Olivia/ChatGPT" in internal communications. Managers had discussed how ChatGPT was a cheaper alternative to employing a writer. This scenario, while not officially confirmed, strongly suggested that AI had replaced her role. The Writers Guild of America also went on strike, seeking not only higher wages and more residuals from streaming platforms but also more regulation of AI. Research from the Frank Hawkins Kenan Institute of Private Enterprise indicates that GenAI might disproportionately affect women, with 79% of working women holding positions susceptible to automation, compared with 58% of working men. Unlike past automation, which typically targeted repetitive tasks, GenAI automates creative work such as writing, coding, and even music production.
For example, Paul McCartney used AI to partially recreate his late bandmate John Lennon's voice for a posthumous Beatles song. In this case, AI enhanced creativity, but the broader implications could be more complex.

Other Impacts of AI on Jobs

AI's impact on jobs goes beyond replacement. Human-machine collaboration presents a more positive angle, with AI improving the work experience by automating repetitive tasks. This could lead to a rise in AI-related jobs and a growing demand for AI skills. AI systems require significant human feedback, particularly in training processes like reinforcement learning, where models are fine-tuned based on human input. A May 2023 paper also warned about the risk of "model collapse," where LLMs deteriorate without a continuous supply of human-generated data. However, there's also the risk that AI collaboration could hinder productivity. For example, generative AI might produce an overabundance of low-quality content, forcing editors to spend more time refining it, which could deprioritize more original work.

Jobs Most Affected by AI

AI Legislation and Regulation

Despite the rapid advancement of AI, comprehensive federal regulation in the U.S. remains elusive. However, several states have introduced or passed AI-focused laws, and New York City has enacted regulations for AI in recruitment. On the global stage, the European Union has introduced the AI Act, setting a common legal framework for AI. Meanwhile, U.S. leaders, including Senate Majority Leader Chuck Schumer, have begun outlining plans for AI regulation, emphasizing the need to protect workers, national security, and intellectual property. In October 2023, President Joe Biden signed an executive order on AI, aiming to protect consumer privacy, support workers, and advance equity and civil rights in the justice system. AI regulation is becoming increasingly urgent, and it's a question of when, not if, comprehensive laws will be enacted.
As AI continues to evolve, its impact on the workforce will be profound and multifaceted, requiring careful consideration and regulation to ensure it benefits society as a whole.

Boosting Payer Patient Education with Technology


Data and Technology Strategies Elevate Payer-Driven Patient Education

Analytics platforms, omnichannel engagement, telehealth, and other technology and data innovations are transforming patient education initiatives within the payer space. Dr. Cathy Moffitt, a pediatrician with over 15 years of emergency department experience and now Chief Medical Officer at Aetna within CVS Health, emphasizes the crucial role of patient education in empowering individuals to navigate their healthcare journeys. "Education is empowerment; it's engagement. In my role with Aetna, I continue to see health education as fundamental," Moffitt explained on an episode of Healthcare Strategies.

Leveraging Data for Targeted Education

At large payers like Aetna, patient education starts with deep data insights. By analyzing member data, payers can identify key opportunities to deliver educational content precisely when members are most receptive. "People are more open to hearing and being educated when they need help right then," Moffitt said. Aetna's Next Best Action initiative, launched in 2018, is one such program that reaches out to members at optimal times, focusing on guiding individuals with specific conditions on the next best steps for their health. By sharing patient education materials in these key moments, Aetna aims to maximize the impact and relevance of its outreach.

Tailoring Education with Demographic Data

Data on member demographics, such as race, ethnicity, gender identity, and zip code, further customizes Aetna's educational efforts. By incorporating translation services and sensitivity training for customer representatives, Aetna ensures that all communication is accessible and relevant for members from diverse backgrounds. Additionally, an updated provider directory allows members to connect with healthcare professionals who understand their cultural and linguistic needs, increasing trust and the likelihood of engagement with educational resources.
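A "next best action" program of this kind can be thought of as event-triggered outreach rules: a member event signals a moment of need, and a matching rule selects the education to send. The rules and events below are hypothetical illustrations, not Aetna's actual logic:

```python
# Hypothetical event-triggered outreach: map member events to educational
# actions delivered at the moment of need. Rules and events are invented.

OUTREACH_RULES = {
    "new_diabetes_diagnosis": "Send A1C self-management guide",
    "er_visit": "Schedule follow-up and share urgent-care education",
    "missed_refill": "Send medication adherence reminder",
}

def next_best_action(member_events: list) -> list:
    """Return outreach actions for events that have a matching rule."""
    return [OUTREACH_RULES[e] for e in member_events if e in OUTREACH_RULES]

print(next_best_action(["er_visit", "missed_refill", "routine_claim"]))
```

Events without a rule (like the routine claim here) trigger nothing, which matches the article's framing: outreach is timed to moments when members are most receptive rather than sent on a fixed schedule.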
Technology's Role in Mental Health and Preventive Care Education

With over 20 years in healthcare, Moffitt observes that patient education has made significant strides in mental health and preventive care, areas where technology has had a transformative impact. In mental health, for example, education has helped reduce stigma, and telemedicine has expanded access. Preventive care education has raised awareness of screenings, vaccines, and wellness visits, with options like home health visits and retail clinics contributing to increased engagement among Aetna's members.

The Future of Customized, Omnichannel Engagement

Looking ahead, Moffitt envisions even more personalized and seamless engagement through omnichannel solutions, allowing members to receive educational materials via their preferred methods, whether email, text, or phone. "I can't predict exactly where we'll be in 10 years, but with the technological commitments we're making, we'll continue to meet evolving member demands," Moffitt added.

Confidential AI Computing in Health


Accelerating Healthcare AI Development with Confidential Computing

Can confidential computing accelerate the development of clinical algorithms by creating a secure, collaborative environment for data stewards and AI developers?

The potential of AI to transform healthcare is immense. However, data privacy concerns and high costs often slow down AI advancements in this sector, even as other industries experience rapid progress in algorithm development. Confidential computing has emerged as a promising solution to address these challenges, offering secure data handling during AI projects. Although its use in healthcare was previously limited to research, recent collaborations are bringing it to the forefront of clinical AI development.

In 2020, the University of California, San Francisco (UCSF) Center for Digital Health Innovation (CDHI), along with Fortanix, Intel, and Microsoft Azure, formed a partnership to create a privacy-preserving confidential computing platform. This collaboration, which later evolved into BeeKeeperAI, aimed to accelerate clinical algorithm development by providing a secure, zero-trust environment for healthcare data and intellectual property (IP) while facilitating streamlined workflows and collaboration. Mary Beth Chalk, co-founder and Chief Commercial Officer of BeeKeeperAI, shared insights with Healthtech Analytics on how confidential computing can address common hurdles in clinical AI development and how stakeholders can leverage this technology in real-world applications.

Overcoming Challenges in Clinical AI Development

Chalk highlighted the significant barriers that hinder AI development in healthcare: privacy, security, time, and cost. These challenges often prevent effective collaboration between the two key parties involved: data stewards, who manage patient data and privacy, and algorithm developers, who work to create healthcare AI solutions.
Even when these parties belong to the same organization, workflows often remain inefficient and fragmented. Before BeeKeeperAI spun out of UCSF, the team realized how time-consuming and costly the process of algorithm development was. Regulatory approvals, data access agreements, and other administrative tasks could take months to complete, delaying projects that could be finished in a matter of weeks. Chalk noted, “It was taking nine months to 18 months just to get approvals for what was essentially a two-month computing project.” This delay and inefficiency are unsustainable in a fast-moving technology environment, especially given that software innovation outpaces the development of medical devices or drugs. Confidential computing can address this challenge by helping clinical algorithm developers “move at the speed of software.” By offering encryption protection for data and IP during computation, confidential computing ensures privacy and security at every stage of the development process.

Confidential Computing: A New Frontier in Healthcare AI

Confidential computing protects sensitive data not only at rest and in transit but also during computation, which sets it apart from other privacy technologies such as federated learning. With federated learning, data and IP are protected during storage and transmission but remain exposed during computation, which raises significant privacy concerns during AI development. In contrast, confidential computing ensures end-to-end encrypted protection, safeguarding both data and intellectual property throughout the entire process. This enables stakeholders to collaborate securely while maintaining privacy and data sovereignty. Chalk emphasized that with confidential computing, stakeholders can ensure that patient privacy is protected and intellectual property remains secure, even when multiple parties are involved in the development process.
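To make the pattern concrete, here is a minimal, illustrative Python sketch of the enclave model that confidential computing relies on: records are encrypted before they leave the data steward's control, plaintext exists only inside a trusted execution environment, and only the agreed aggregate result is released. The `ToyEnclave` class and the XOR cipher are stand-ins for a hardware TEE and real cryptography, not any vendor's actual API.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR against a repeating key.
    # Illustration only; NOT cryptographically sound.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyEnclave:
    """Stands in for a hardware TEE: plaintext exists only inside run()."""

    def __init__(self, key: bytes):
        self._key = key  # key provisioned after (simulated) attestation

    def run(self, encrypted_records: list[bytes]) -> float:
        # Decrypt and compute inside the "enclave"; only the mean leaves.
        values = [float(xor_bytes(r, self._key).decode())
                  for r in encrypted_records]
        return sum(values) / len(values)

# Data steward side: values are encrypted before leaving their control.
key = secrets.token_bytes(16)
readings = (120.0, 135.5, 128.0)
records = [xor_bytes(str(v).encode(), key) for v in readings]

enclave = ToyEnclave(key)
print(enclave.run(records))  # only the aggregate is disclosed, not raw values
```

A real deployment would replace the XOR cipher with authenticated encryption and the class with an attested enclave (e.g., on confidential-computing hardware), but the control flow, where decryption happens only inside the protected boundary, is the same.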
As a result, confidential computing becomes an enabling core competency that facilitates faster and more efficient clinical AI development.

Streamlining Clinical AI Development with Confidential Computing

Confidential computing environments provide a secure, automated platform that facilitates the development process, reducing the need for manual intervention. Chalk described healthcare AI development as a “well-worn goat path,” where multiple stakeholders know the steps required but are often bogged down by time-consuming administrative tasks. BeeKeeperAI’s platform streamlines this process by allowing AI developers to upload project protocols, which are then shared with data stewards. The data steward can determine whether they have the necessary clinical data and curate it according to the AI developer’s specifications. This secure collaboration is built on automated workflows, but because the data and algorithms remain encrypted, privacy is never compromised. The BeeKeeperAI platform provides a collaborative, familiar interface for developers and data stewards, allowing them to work together in a secure environment. The software does not require extensive expertise in confidential computing, as BeeKeeperAI manages the infrastructure and ensures that the data never leaves the control of the data steward.

Real-World Applications of Confidential Computing

Confidential computing has the potential to revolutionize healthcare AI development, particularly by improving the precision of disease detection, predicting disease trajectories, and enabling personalized treatment recommendations. Chalk emphasized that the real promise of AI in healthcare lies in precision medicine—the ability to tailor interventions to individual patients, especially those on the “tails” of the bell curve who may respond differently to treatment.
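The protocol-to-results workflow described above can be sketched as a simple state machine. The stage names below are hypothetical, chosen to mirror the description in the text; they are not BeeKeeperAI's actual API.

```python
from enum import Enum, auto

class Stage(Enum):
    # Hypothetical stages mirroring the workflow described above.
    PROTOCOL_SUBMITTED = auto()   # developer uploads the project protocol
    DATA_CURATED = auto()         # steward curates data to the protocol spec
    SECURE_COMPUTE = auto()       # computation runs on encrypted data
    RESULTS_RELEASED = auto()     # only approved outputs leave the enclave

class Project:
    """Toy model of the developer / data-steward collaboration."""

    ORDER = list(Stage)

    def __init__(self):
        self.stage = Stage.PROTOCOL_SUBMITTED

    def advance(self) -> Stage:
        # Move to the next stage; the final stage is terminal.
        i = self.ORDER.index(self.stage)
        if i + 1 < len(self.ORDER):
            self.stage = self.ORDER[i + 1]
        return self.stage

p = Project()
p.advance()  # data steward curates data
p.advance()  # secure computation begins
print(p.stage.name)
```

The point of the sketch is that each hand-off is an explicit, auditable transition, which is what lets an automated platform replace months of ad hoc administrative coordination.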
For instance, confidential computing can facilitate research into precision medicine by enabling AI developers to analyze patient data securely, without risking exposure of sensitive personal information. Chalk explained, “With confidential computing, I can drill into those tails and see what was unique about those patients without exposing their identities.”

Currently, real-world data access remains a significant challenge for clinical AI development, especially as research moves from synthetic or de-identified data to high-quality, real-world clinical data. Chalk noted that for clinical AI to demonstrate efficacy, improve outcomes, or enhance safety, it must operate on real-world data. However, accessing this data while ensuring privacy has been a major obstacle for AI teams. Confidential computing can help bridge this “data cliff” by providing a secure environment for researchers to access and utilize real-world data without compromising privacy.

Conclusion

While the use of confidential computing in healthcare is still evolving, its potential is vast. By offering secure data handling throughout the development process, confidential computing enables AI developers and data stewards to collaborate more efficiently, overcome regulatory hurdles, and accelerate clinical AI advancements. This technology could help realize the promise of precision medicine, making personalized healthcare interventions safer, more effective, and more widely available. Chalk highlighted that many healthcare and life sciences organizations are exploring confidential computing use cases, particularly in neurology, oncology, mental health, and rare diseases—fields that require the use of…

Content Marketing Lessons


Content Marketing Lessons: Beyond Creativity

Content marketing requires more than just creativity; it demands a strategic approach rooted in collaboration, consistency, and data-driven insights. Salesforce, a leader in customer relationship management, exemplifies how to revolutionize content marketing to achieve meaningful business outcomes.

Centralize Content Strategy for Consistency

One of the key takeaways from Salesforce’s content marketing evolution is the power of centralization. Jessica Bergmann, Vice President of Content and Customer Marketing at Salesforce, led a shift that elevated content marketing to a strategic function within the company. By centralizing content operations, Salesforce ensured consistency in voice, tone, and messaging across all channels. This centralization wasn’t about controlling content but about creating a unified narrative that resonates with customers at every touchpoint.

Empower Teams with Strategic Roles

To bridge the gap between audience needs and Salesforce’s business objectives, Bergmann introduced two pivotal roles: content strategists and editorial leads. These roles are embedded within brand, persona, and industry teams, ensuring content aligns with business goals and is tailored to the specific needs of different customer segments. This approach underscores the importance of empowering teams with the right expertise and tools to deliver impactful content.

Leverage Technology for Seamless Operations

Salesforce’s centralized content operations team plays a crucial role in managing the company’s content ecosystem. By utilizing a central content operations tool, the team oversees real-time editorial calendars, workflows, and a global measurement dashboard. This technological foundation allows Salesforce to streamline content production and maintain a cohesive strategy across its global teams. For any organization aiming to scale content marketing efforts, investing in the right technology is essential.
Integrate Cross-Functional Collaboration

A key to Salesforce’s success is its emphasis on cross-functional collaboration. By working closely with product marketing, creative, and campaigns teams, the content marketing function at Salesforce is integral to the broader marketing strategy. This integrated approach ensures content is not created in isolation but as part of a larger, cohesive effort to educate customers and drive business growth.

Measure What Matters

In content marketing, measurement is everything. Salesforce’s content performance dashboard provides visibility into how content is performing across the organization. By tracking metrics like traffic, engagement, and progression, Salesforce ensures its content efforts align with business objectives. This focus on actionable metrics helps teams make informed decisions about optimizing, promoting, or cutting content.

Prioritize Strategic Initiatives

Salesforce’s ability to manage multiple high-impact projects, such as Dreamforce, Salesforce+, and the #TeamEarth campaign, demonstrates its strategic prioritization process. Using the V2MOM framework (vision, values, methods, obstacles, and measures), Salesforce aligns its content marketing efforts with the company’s broader goals. This structured approach allows Salesforce to allocate resources effectively and ensure content initiatives deliver maximum impact.

Focus on Audience-First Content

At the heart of Salesforce’s content marketing strategy is an unwavering focus on the audience. By adopting an “audience-first” mindset, Salesforce’s content teams strive to create content that addresses customer needs while earning the right to market to them. This approach is crucial in today’s content-saturated environment, where businesses must offer genuine value to stand out.

Develop Long-Range Content Plans

Content marketing isn’t just about quick wins; it’s about building long-term relationships with your audience.
Salesforce’s commitment to long-range content planning, integrating thought leadership, search, and editorial efforts, ensures the company remains top-of-mind for customers throughout their buying journey. This long-term focus is key to nurturing leads and converting them into loyal customers.

Invest in Content Marketing Talent

Hiring the right talent is vital for a successful content marketing strategy. Salesforce’s experience highlights the importance of bringing in content marketing experts who can execute the strategy effectively. These experts bring fresh ideas and ensure the content marketing function is respected and prioritized within the organization.

Show Early Wins to Build Momentum

Finally, one of the most important lessons from Salesforce’s content marketing journey is the value of showcasing early wins. By focusing on quick victories that demonstrate the impact of content marketing, Bergmann and her team built momentum and secured buy-in from senior leadership. This approach is essential for any content marketing team seeking to establish itself as a strategic function within the organization.

Conclusion

Salesforce’s content marketing transformation offers valuable insights for businesses at any stage of their content marketing journey. By centralizing content strategy, empowering teams with strategic roles, leveraging technology, and focusing on audience-first content, Salesforce has created a content marketing engine that drives real business results. For organizations looking to elevate their content marketing efforts, these lessons provide a clear roadmap to success.

Requirements Engineering


Every project needs clear requirements. No exceptions. Without them, a project turns into a group of people standing around, unsure of what to do, essentially making things up as they go. This scenario may sound familiar to anyone who has been involved in disorganized projects.

What are requirements? According to the Association for Project Management (APM), “Requirements are the wants and needs of stakeholders clearly defined with acceptance criteria.” Requirements engineering is the process for managing the entire lifecycle of these needs, and it involves five key stages: elicitation, analysis, documentation, validation, and management. Let’s dive deeper into each stage.

1. Requirements Elicitation

Sometimes the term “requirements capture” is used, as if stakeholders’ needs are floating around, waiting to be caught. However, requirements are not passively waiting; they must be actively elicited. Eliciting requirements involves interpreting genuine needs, not just compiling a wish list of requested features.

2. Requirements Analysis

Once you’ve gathered a set of requirements, it’s time for analysis to ensure they are comprehensive, feasible, and aligned with the project’s objectives. This phase is crucial because 80% of project errors occur during the requirements phase, yet it often receives less than 20% of a project’s time.

3. Requirements Documentation

After analyzing requirements, document them clearly to communicate with stakeholders and developers. One popular method for documenting requirements is through user stories, which frame requirements from the user’s perspective. User stories focus on meeting user needs rather than prescribing technical specifications.

4. Requirements Validation

The next step is validating your documented requirements. This ensures they accurately represent what users and stakeholders need, and that each requirement is complete, realistic, and verifiable.

5. Requirements Management

The final phase involves tracking and managing changes to requirements throughout the project. Agile frameworks often rely on iterative approaches, where product owners manage changes during sprint reviews and retrospectives.

Summary

Requirements engineering consists of five interdependent stages: elicitation, analysis, documentation, validation, and management. While these concepts may seem detailed, they offer a structured framework that’s essential for delivering high-quality solutions. By following this approach, even smaller, lower-risk digital projects can benefit from clear and actionable requirements.
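The user-story format discussed above ("As a <role>, I want <goal>, so that <benefit>", plus acceptance criteria) can be modeled as a small data structure, which also makes the link between documentation (stage 3) and validation (stage 4) explicit: a story without acceptance criteria cannot be validated. This is an illustrative sketch; the example story and field names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """A requirement framed from the user's perspective."""
    role: str
    goal: str
    benefit: str
    acceptance_criteria: list[str] = field(default_factory=list)

    def __str__(self) -> str:
        return f"As a {self.role}, I want {self.goal}, so that {self.benefit}."

    def is_testable(self) -> bool:
        # Validation (stage 4) needs acceptance criteria to check against.
        return bool(self.acceptance_criteria)

story = UserStory(
    role="claims analyst",
    goal="to filter claims by status",
    benefit="I can prioritize unresolved cases",
    acceptance_criteria=["Filter returns only claims matching the selected status"],
)
print(story)
print(story.is_testable())
```

Structuring stories this way also supports the management stage: a list of such objects can be diffed across sprints to track changes to requirements over time.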

TEFCA could drive payer-provider interoperability


Bridging the Interoperability Gap: TEFCA’s Role in Payer-Provider Data Exchange

Electronic health information exchange (HIE) between healthcare providers has seen significant growth in recent years. However, interoperability between healthcare providers and payers has lagged behind. The Trusted Exchange Framework and Common Agreement (TEFCA) aims to address this gap and enhance data interoperability across the healthcare ecosystem.

TEFCA’s Foundation and Evolution

TEFCA was established under the 21st Century Cures Act to improve health data interoperability through a “network of networks” approach. The Office of the National Coordinator for Health Information Technology (ONC) officially launched TEFCA in December 2023, designating five initial Qualified Health Information Networks (QHINs). By February 2024, two additional QHINs had been designated. The Sequoia Project, TEFCA’s recognized coordinating entity, recently released several key documents for stakeholder feedback, including draft standard operating procedures (SOPs) for healthcare operations and payment under TEFCA. During the 2024 WEDI Spring Conference, leaders from three QHINs—eHealth Exchange, Epic Nexus, and Kno2—discussed the future of TEFCA in enhancing provider and payer interoperability.

ONC released Version 2.0 of the Common Agreement on April 22, 2024. Version 2.0 updates Common Agreement Version 1.1, published in November 2023, and adds requirements to support Health Level Seven (HL7®) Fast Healthcare Interoperability Resources (FHIR®) based transactions. The Common Agreement includes an exhibit, the Participant and Subparticipant Terms of Participation (ToP), that sets forth the requirements each Participant and Subparticipant must agree to and comply with in order to participate in TEFCA.
The Common Agreement and ToP incorporate all applicable standard operating procedures (SOPs) and the Qualified Health Information Network Technical Framework (QTF). Release notes for Common Agreement Version 2.0 are available from ONC.

The Trusted Exchange Framework and Common Agreement™ (TEFCA™) has three goals: (1) establish a universal governance, policy, and technical floor for nationwide interoperability; (2) simplify connectivity for organizations to securely exchange information to improve patient care, enhance the welfare of populations, and generate healthcare value; and (3) enable individuals to gather their healthcare information.

Challenges in Payer Data Exchange

Although the QHINs on the panel have made progress in facilitating payer HIE, they emphasized that TEFCA is not yet fully operational for large-scale payer data exchange. Ryan Bohochik, Vice President of Value-Based Care at Epic, highlighted the complexities of payer-provider data exchange. “We’ve focused on use cases that allow for real-time information sharing between care providers and insurance carriers,” Bohochik said. “However, TEFCA isn’t yet capable of supporting this at the scale required.” Bohochik also pointed out that payer data exchange is complicated by the involvement of third-party contractors. For example, health plans often partner with vendors for tasks like care management or quality measure calculation, which adds layers of complexity to the data exchange process.

Catherine Bingman, Vice President of Interoperability Adoption for eHealth Exchange, echoed these concerns, noting that member attribution and patient privacy are critical issues in payer data exchange. “Payers don’t have the right to access everything a patient has paid for themselves,” Bingman said. “This makes providers cautious about sharing data, impacting patient care.” For instance, manual prior authorization processes frequently delay patient access to care.
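To make the FHIR requirement above concrete, here is a minimal sketch that builds and serializes a FHIR R4 Patient resource, the kind of payload FHIR-based transactions exchange. The field names follow the published FHIR specification; the identifier system URL and patient details are invented for illustration.

```python
import json

# Minimal FHIR R4 Patient resource. Field names follow the FHIR spec;
# the identifier system URL below is a hypothetical example issuer.
patient = {
    "resourceType": "Patient",
    "identifier": [{
        "system": "http://example.org/mrn",  # hypothetical MRN issuer
        "value": "12345",
    }],
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-04-01",
}

# FHIR resources are exchanged as JSON (or XML) over RESTful transactions.
payload = json.dumps(patient, indent=2)
print(payload)
```

In a real TEFCA exchange this payload would be posted to or returned by a FHIR endpoint; the point here is simply that requiring FHIR standardizes both the resource structure and the transaction format across networks.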
A 2023 AMA survey found that 42% of doctors reported care delays due to prior authorization, with 37% stating that these delays were common.

Building Trust Through Use Cases

Matt Becker, Vice President of Interoperability at Kno2, stressed the importance of developing specific use cases to establish trust in payer data exchange via TEFCA. “Payment and operations is a broad category that includes HEDIS measures, quality assurance, and provider monitoring,” Becker said. “Each of these requires a high level of trust.” Bohochik agreed, emphasizing that narrowing the scope and focusing on specific, high-value use cases will be essential for TEFCA’s adoption. “We can’t solve everything at once,” Bohochik said. “We need to focus on achieving successful outcomes in targeted areas, which will build momentum and community support.” He also noted that while technical data standards are crucial, building trust in the data exchange process is equally important. “A network is only as good as the trust it inspires,” Bohochik said. “If healthcare systems know that data requests for payment and operations are legitimate and secure, it will drive the scalability of TEFCA.”

By focusing on targeted use cases, ensuring rigorous data standards, and building trust, TEFCA has the potential to significantly enhance interoperability between healthcare providers and payers, ultimately improving patient care and operational efficiency.

Gen AI Role in Healthcare


Generative AI’s Growing Role in Healthcare: Potential and Challenges

The rapid advancements in large language models (LLMs) have introduced generative AI tools into nearly every business sector, including healthcare. As defined by the Government Accountability Office, generative AI is “a technology that can create content, including text, images, audio, or video, when prompted by a user.” These systems learn patterns and relationships from vast datasets, enabling them to generate new content that resembles but is not identical to the original training data. This capability is powered by machine learning algorithms and statistical models. In healthcare, generative AI is being utilized for various applications, including clinical documentation, patient communication, and clinical text summarization.

Streamlining Clinical Documentation

Excessive documentation is a leading cause of clinician burnout, as highlighted by a 2022 athenahealth survey conducted by the Harris Poll. Generative AI shows promise in easing these documentation burdens, potentially improving clinician satisfaction and reducing burnout. A 2024 study published in NEJM Catalyst explored the use of ambient AI scribes within The Permanente Medical Group (TPMG). This technology employs smartphone microphones and generative AI to transcribe patient encounters in real time, providing clinicians with draft documentation for review. In October 2023, TPMG deployed this ambient AI technology across various settings, reaching 10,000 physicians and staff. Physicians who used the ambient AI scribe reported positive outcomes, including more personal and meaningful patient interactions and reduced after-hours electronic health record (EHR) documentation. Early patient feedback was also favorable, with improved provider interactions noted. Additionally, ambient AI produced high-quality clinical documentation for clinician review.
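Transcription quality for ambient scribes of this kind is commonly evaluated with word error rate (WER): the number of word substitutions, insertions, and deletions needed to turn the transcript into the reference, divided by the reference length. A minimal sketch using Levenshtein distance over words (the example sentences are invented):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four reference words -> WER 0.25.
print(word_error_rate("patient denies chest pain", "patient the chest pain"))
```

A WER can be computed over all words or restricted to a subset of tokens, which is how a study can report one aggregate rate alongside a much higher rate for a specific class of sounds.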
However, a 2023 study in the Journal of the American Medical Informatics Association (JAMIA) cautioned that ambient AI might struggle with non-lexical conversational sounds (NLCSes), such as “mm-hm” or “uh-uh,” which can convey clinically relevant information. The study found that while the ambient AI tools had a word error rate of about 12% for all words, the error rate for NLCSes was significantly higher, reaching up to 98.7% for those conveying critical information. Misinterpretation of these sounds could lead to inaccuracies in clinical documentation and potential patient safety issues.

Enhancing Patient Communication

With the digital transformation in healthcare, patient portal messages have surged. A 2021 study in JAMIA reported a 157% increase in patient portal inbox messages since 2020. In response, some healthcare organizations are exploring the use of generative AI to draft replies to these messages. A 2024 study published in JAMA Network Open evaluated the adoption of AI-generated draft replies to patient messages at an academic medical center. After five weeks, clinicians used the AI-generated drafts 20% of the time, a notable rate considering the LLMs were not fine-tuned for patient communication. Clinicians reported reduced task load and emotional exhaustion, suggesting that AI-generated replies could help alleviate burnout. However, the study found no significant changes in reply time, read time, or write time between the pre-pilot and pilot periods. Despite this, clinicians expressed optimism about time savings, indicating that the cognitive ease of editing drafts rather than writing from scratch might not be fully captured by time metrics.

Summarizing Clinical Data

Summarizing information within patient records is a time-consuming task for clinicians, and errors in this process can negatively impact clinical decision support.
Generative AI has shown potential in this area, with a 2023 study finding that LLM-generated summaries could outperform human expert summaries in terms of conciseness, completeness, and correctness. However, using generative AI for clinical data summarization presents risks. A viewpoint in JAMA argued that LLMs performing summarization tasks might not fall under FDA medical device oversight, as they provide language-based outputs rather than disease predictions or numerical estimates. Without statutory changes, the FDA’s authority to regulate these LLMs remains unclear. The authors also noted that differences in summary length, organization, and tone could influence clinician interpretations and subsequent decision-making. Furthermore, LLMs might exhibit biases, such as sycophancy, where responses are tailored to user expectations. To address these concerns, the authors called for comprehensive standards for LLM-generated summaries, including testing for biases and errors, as well as clinical trials to quantify potential harms and benefits.

The Path Forward

Generative AI holds significant promise for transforming healthcare and reducing clinician burnout, but realizing this potential requires comprehensive standards and regulatory clarity. A 2024 study published in npj Digital Medicine emphasized the need for defined leadership, adoption incentives, and ongoing regulation to deliver on the promise of generative AI in healthcare. Leadership should focus on establishing guidelines for LLM performance and identifying optimal clinical settings for AI tool trials. The study suggested that a subcommittee within the FDA, comprising physicians, healthcare administrators, developers, and investors, could effectively lead this effort. Additionally, widespread deployment of generative AI will likely require payer incentives, as most providers view these tools as capital expenses.
With the right leadership, incentives, and regulatory framework, generative AI can be effectively implemented across the healthcare continuum to streamline clinical workflows and improve patient care.
