Accelerating Healthcare AI Development with Confidential Computing
Can confidential computing accelerate the development of clinical algorithms by creating a secure, collaborative environment for data stewards and AI developers?
The potential of AI to transform healthcare is immense. However, data privacy concerns and high costs often slow down AI advancements in this sector, even as other industries experience rapid progress in algorithm development. Confidential computing has emerged as a promising solution to address these challenges, offering secure data handling during AI projects. Although its use in healthcare was previously limited to research, recent collaborations are bringing it to the forefront of clinical AI development.
In 2020, the University of California, San Francisco (UCSF) Center for Digital Health Innovation (CDHI), along with Fortanix, Intel, and Microsoft Azure, formed a partnership to create a privacy-preserving confidential computing platform. This collaboration, which later evolved into BeeKeeperAI, aimed to accelerate clinical algorithm development by providing a secure, zero-trust environment for healthcare data and intellectual property (IP), while facilitating streamlined workflows and collaboration.
Mary Beth Chalk, co-founder and Chief Commercial Officer of BeeKeeperAI, shared insights with Healthtech Analytics on how confidential computing can address common hurdles in clinical AI development and how stakeholders can leverage this technology in real-world applications.
Overcoming Challenges in Clinical AI Development
Chalk highlighted the significant barriers that hinder AI development in healthcare: privacy, security, time, and cost. These challenges often prevent effective collaboration between the two key parties involved: data stewards, who manage patient data and privacy, and algorithm developers, who work to create healthcare AI solutions. Even when these parties belong to the same organization, workflows often remain inefficient and fragmented.
Before BeeKeeperAI spun out of UCSF, the team realized how time-consuming and costly the process of algorithm development was. Regulatory approvals, data access agreements, and other administrative tasks could take many months to complete. Chalk noted, “It was taking nine months to 18 months just to get approvals for what was essentially a two-month computing project.” Such delays are unsustainable in a fast-moving technology environment, especially since software innovation outpaces the development cycles of medical devices and drugs.
Confidential computing can address this challenge by helping clinical algorithm developers “move at the speed of software.” By offering encryption protection for data and IP during computation, confidential computing ensures privacy and security at every stage of the development process.
Confidential Computing: A New Frontier in Healthcare AI
Confidential computing protects sensitive data not only at rest and in transit but also during computation, which sets it apart from other privacy technologies like federated learning. With federated learning, data and IP are protected during storage and transmission but sit unencrypted in memory during computation, an exposure that raises significant privacy concerns during AI development.
In contrast, confidential computing ensures end-to-end encrypted protection, safeguarding both data and intellectual property throughout the entire process. This enables stakeholders to collaborate securely while maintaining privacy and data sovereignty.
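To make the distinction concrete, here is a toy Python sketch of the enclave pattern that confidential computing relies on. All names are illustrative, and the XOR "cipher" stands in for the real encryption and hardware-backed remote attestation a production platform would provide; the point is that plaintext exists only inside the simulated enclave function.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad XOR; a stand-in for real encryption."""
    return bytes(b ^ k for b, k in zip(data, key))

class DataSteward:
    """Holds a patient record and releases it only in encrypted form."""
    def __init__(self, record: bytes):
        self._key = secrets.token_bytes(len(record))
        self._ciphertext = xor_bytes(record, self._key)

    def encrypted_record(self) -> bytes:
        # Encrypted at rest and in transit -- safe to hand out.
        return self._ciphertext

    def provision_key(self) -> bytes:
        # A real platform releases the key only after the enclave
        # proves its identity via hardware-backed remote attestation.
        return self._key

def enclave_compute(ciphertext: bytes, key: bytes) -> int:
    """Simulates computation inside a trusted execution environment:
    the decrypted record exists only within this function's scope."""
    plaintext = xor_bytes(ciphertext, key)
    return plaintext.count(b"a")  # e.g. a simple analysis over the record

steward = DataSteward(b"patient record: diagnosis a, lab a")
result = enclave_compute(steward.encrypted_record(), steward.provision_key())
```

Outside `enclave_compute`, the record circulates only as ciphertext, which is the property that distinguishes this model from federated learning's in-memory exposure.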
Chalk emphasized that with confidential computing, stakeholders can ensure that patient privacy is protected and intellectual property remains secure, even when multiple parties are involved in the development process. As a result, confidential computing becomes an enabling core competency that facilitates faster and more efficient clinical AI development.
Streamlining Clinical AI Development with Confidential Computing
Confidential computing environments provide a secure, automated platform that facilitates the development process, reducing the need for manual intervention. Chalk described healthcare AI development as a “well-worn goat path,” where multiple stakeholders know the steps required but are often bogged down by time-consuming administrative tasks.
BeeKeeperAI’s platform streamlines this process by allowing AI developers to upload project protocols, which are then shared with data stewards. The data steward can determine if they have the necessary clinical data and curate it according to the AI developer’s specifications. This secure collaboration is built on automated workflows, but because the data and algorithms remain encrypted, privacy is never compromised.
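The workflow just described can be sketched as a simple state machine. The class and stage names below are hypothetical, not BeeKeeperAI's actual API; they only illustrate how protocol submission, data curation, and secure computation gate one another in an automated pipeline.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    PROTOCOL_SUBMITTED = auto()   # developer uploads the project protocol
    DATA_CURATED = auto()         # steward confirms and curates matching data
    RESULTS_RELEASED = auto()     # computation finished in the secure enclave

@dataclass
class Project:
    protocol: str                 # the AI developer's specification
    stage: Stage = Stage.PROTOCOL_SUBMITTED
    dataset: list[str] = field(default_factory=list)

    def steward_curates(self, records: list[str]) -> None:
        """Data steward curates records to the developer's spec.
        The records stay encrypted and under the steward's control."""
        self.dataset = records
        self.stage = Stage.DATA_CURATED

    def run_secure_compute(self) -> None:
        """Algorithm runs against the curated data inside an enclave;
        only the permitted results leave the secure environment."""
        if self.stage is not Stage.DATA_CURATED:
            raise RuntimeError("data must be curated before computation")
        self.stage = Stage.RESULTS_RELEASED

project = Project(protocol="detect condition X from lab values")
project.steward_curates(["record-001", "record-002"])
project.run_secure_compute()
```

Enforcing the stage transitions in code is what replaces the manual back-and-forth of approvals that Chalk describes as the "well-worn goat path."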
The BeeKeeperAI platform provides a familiar, collaborative interface that lets developers and data stewards work together in a secure environment. Users need no deep expertise in confidential computing: BeeKeeperAI manages the underlying infrastructure and ensures that the data never leaves the data steward's control.
Real-World Applications of Confidential Computing
Confidential computing has the potential to revolutionize healthcare AI development, particularly by improving the precision of disease detection, predicting disease trajectories, and enabling personalized treatment recommendations. Chalk emphasized that the real promise of AI in healthcare lies in precision medicine—the ability to tailor interventions to individual patients, especially those on the “tails” of the bell curve who may respond differently to treatment.
For instance, confidential computing can facilitate research into precision medicine by enabling AI developers to analyze patient data securely, without risking exposure of sensitive personal information. Chalk explained, “With confidential computing, I can drill into those tails and see what was unique about those patients without exposing their identities.”
Currently, real-world data access remains a significant challenge for clinical AI development, especially as research moves from synthetic or de-identified data to high-quality, real-world clinical data. Chalk noted that for clinical AI to demonstrate efficacy, improve outcomes, or enhance safety, it must operate on real-world data. However, accessing this data while ensuring privacy has been a major obstacle for AI teams. Confidential computing can help bridge this “data cliff” by providing a secure environment for researchers to access and utilize real-world data without compromising privacy.
Conclusion
While the use of confidential computing in healthcare is still evolving, its potential is vast. By offering secure data handling throughout the development process, confidential computing enables AI developers and data stewards to collaborate more efficiently, overcome regulatory hurdles, and accelerate clinical AI advancements. This technology could help realize the promise of precision medicine, making personalized healthcare interventions safer, more effective, and more widely available.
Chalk highlighted that many healthcare and life sciences organizations are exploring confidential computing use cases, particularly in neurology, oncology, mental health, and rare diseases—fields that require the use of real-world data and are highly sensitive to data privacy concerns. As healthcare AI continues to evolve, confidential computing will play a pivotal role in accelerating innovation while ensuring data privacy and security.