In late 2023, significant attention turned to building artificial intelligence (AI) algorithms to predict post-surgery complications, model surgical risk, and shape recovery pathways for patients with surgical needs. This naturally raised an appropriate debate: would using AI in this way lead hospitals and providers to prioritize revenue from automation over excellence in patient care? It also raised questions of responsibility and accountability among healthcare organizations and the technology providers that develop the AI models used in care-delivery pathways.
While a leap forward in the history of human innovation, AI presents unforeseen complications and challenges. Beyond the often-discussed “black box” of predictive algorithms lies the critical issue of autonomous deployment without requisite clinical oversight and governance frameworks. This concern is compounded by the pervasive risks of poor data quality, limited accessibility, and bias throughout the AI development process. Consequently, to harness the transformative potential of AI responsibly, it is essential to establish and implement robust oversight mechanisms, including regular wellness checks, that help ensure predictive models consistently prioritize patient well-being above all else.
Advancing Healthcare’s Transformation Through AI Integration
Healthcare is experiencing unprecedented change. While typically characterized by a cautious and gradual uptake of new technology, the industry is accelerating the adoption of large-scale digital transformations fueled by cloud, data democratization, and AI. Machine learning (ML), AI, and generative AI (GAI) are creating considerable excitement because of the technologies’ potential to revolutionize every facet of healthcare.
Paul Graham’s description of AI as “the exact opposite of a solution in search of a problem” applies to no industry more than healthcare. AI has the potential to impact everything from modernizing infrastructure to curb inefficiencies, to boosting clinical productivity through decision-support and diagnostic tools, to leveraging data-driven insights for research and development aimed at improving patient outcomes. Furthermore, the advent of GAI holds promise for automating mundane administrative tasks in clinical settings, enabling healthcare professionals to concentrate on more complex endeavors such as improving clinical outcomes.
AI has already catalyzed significant advancements in patient care methodologies and modalities. Models have evolved to a high degree of sophistication and are now trusted to provide preliminary diagnostic mammography screenings, lab screenings, and early detection of lung nodules in CT scans. Early use cases also indicate tremendous potential to leverage GAI in the advancement of precision medicine, especially in oncology. AI and GAI models are poised to transform healthcare operations as well. In fact, supply chain leaders are already harnessing these capabilities to improve the availability of critical healthcare commodities and services and to forecast potential disruptions. Throughout this trajectory of innovation, human expertise, creativity, and complex problem-solving remain indispensable to delivering high-quality patient care.
A pivotal question for us to ask as a society is not whether we embrace these technological advancements but how we can do so responsibly. Recent stories of AI “hallucinations,” wherein models present inaccurate or misleading information as facts or certainties, serve as poignant reminders of inherent flaws within AI models. This creates an urgency for us – as innovators and consumers – to develop thoughtful measures and mechanisms that can help nurture the technology toward maturity while minimizing undesirable impacts on societies and humanity.
Safeguarding AI Integrity in Healthcare: Critical Steps for Establishing Mature Governance
The first step to improve AI integrity is to address security, privacy, and data governance concerns and nurture the development of responsible, bias-free AI models. This can be challenging because humans may inadvertently inject their own subjective viewpoints into the models despite their best intentions. A fundamental measure to mitigate bias risk requires the implementation of robust data governance practices that help ensure the models are built with diverse and comprehensive data sets.
To achieve this objective, organizations must prioritize investing in a resilient data infrastructure encompassing data collection, storage, processing, and security. A robust data infrastructure is the cornerstone for safeguarding patient data, which is increasingly vulnerable to cyber threats. The pursuit of quality and quantity in data acquisition mandates the aggregation of diverse data sources, including enterprise resource planning platforms (ERPs), electronic health records (EHRs), and medical imaging repositories, to name a few, while concurrently ensuring data cleanliness and accessibility. Equally important is establishing data governance protocols to uphold privacy standards and regulatory compliance.
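As one concrete illustration of such a governance check, the balance of a training extract can be audited before any model is built. The sketch below is a minimal, hypothetical example in plain Python: the function name `representation_gaps`, the `age_band` field, and the 10% floor are all illustrative assumptions, not a prescribed standard.

```python
from collections import Counter

def representation_gaps(records, field, min_share=0.10):
    """Return groups whose share of the data set falls below min_share.

    A gap flags a cohort that is under-represented before training,
    prompting additional data collection rather than silent bias.
    """
    counts = Counter(r[field] for r in records if r.get(field) is not None)
    total = sum(counts.values())
    return {group: count / total
            for group, count in counts.items()
            if count / total < min_share}

# Toy example: age bands in a hypothetical training extract.
sample = ([{"age_band": "18-40"}] * 70
          + [{"age_band": "41-65"}] * 25
          + [{"age_band": "65+"}] * 5)
gaps = representation_gaps(sample, "age_band", min_share=0.10)
# The "65+" cohort is only 5% of records, below the 10% floor.
```

In practice a check like this would run inside the data pipeline, over many demographic and clinical fields, and a flagged gap would trigger review or targeted data acquisition rather than an automatic block.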
The second step is to build a strong coalition of futurists, researchers, experts, technologists, industry stakeholders, and governmental bodies that can collectively define what responsible AI is and outline operational strategies and policies to achieve that goal. We recently saw the European Union take a step in this direction with its AI Act, the world’s first law regulating high-risk AI, which balances safety, compliance, and fundamental human rights with the need to boost innovation. The law is expected to go into effect this month, with strict rules anticipated for GAI systems such as chatbots. By 2026, a more complete set of regulations will be enforced, including requirements for high-risk systems. The law is an important and much-awaited first step toward safeguarding against the hallucinations and biases inherent in GAI technology.
Adopting a Holistic Approach to Build Ethical and Effective AI for Healthcare
In the pursuit of integrating AI models into healthcare, a balanced and deliberate approach is essential as we gradually learn and understand this technology’s full capabilities and potential impact. While there may be an inherent appeal in being at the forefront of innovation, moving too quickly without adequate testing and validation can yield poorly performing models that may compromise user experience at best and potentially cause harm to individuals, communities, and businesses at worst. Adopting an iterative mindset is fundamental to fostering the gradual development and refinement of existing models while building new ones. Central to this approach is rigorous testing, refinement, and enhancement of every facet of the model, ensuring adherence to requisite standards and mitigating the risk of errors or “hallucinations” post-deployment. Additionally, robust quality control mechanisms must be ingrained within the software development lifecycle process to ascertain the readiness of AI solutions for widespread implementation and adoption.
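One way such a quality-control mechanism might look in practice is a threshold gate wired into the release pipeline, blocking promotion of any model that regresses on a required evaluation metric. This is only a sketch; `deployment_gate`, the metric names, and the threshold values are illustrative assumptions, not a clinical standard.

```python
def deployment_gate(metrics, thresholds):
    """Compare offline evaluation metrics against minimum thresholds.

    Returns (passed, failures) so a CI/CD pipeline can block promotion
    of a model that fell short on any required metric.
    """
    failures = {name: (value, thresholds[name])
                for name, value in metrics.items()
                if name in thresholds and value < thresholds[name]}
    return (not failures, failures)

# Hypothetical evaluation run for a screening model.
passed, failures = deployment_gate(
    {"sensitivity": 0.97, "specificity": 0.88, "auroc": 0.95},
    {"sensitivity": 0.95, "specificity": 0.90, "auroc": 0.93},
)
# Specificity of 0.88 misses the 0.90 floor, so promotion is blocked.
```

The design point is that the gate fails closed: a model that does not meet every documented threshold never reaches deployment, and the failure record becomes part of the audit trail.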
Fostering diversity within AI development teams to minimize the risk of bias in the algorithm has emerged as a critical best practice. Because humans carry both conscious and subconscious biases and prejudices, diverse teams serve as a safeguard, helping mitigate the potential for bias by offering varied perspectives and insights.
Additionally, data governance “wellness checks” must be consistently implemented as a standard practice to help ensure the AI model’s quality and accuracy, especially after retraining it with new data sets. A model may have previously exhibited optimal performance, but periodic evaluations are indispensable to assess any deviations in output resulting from updated data. These checks also encompass post-deployment evaluations and comprehensive validation using error analysis, compliance verification, and feedback injection. It is also critical to capture comprehensive documentation to support reliability, regulatory adherence, and knowledge sharing in AI development; neglecting this step can be extremely dangerous for fast-moving teams. By conducting these checks, organizations can improve the integrity of AI models in healthcare, bolstering trust and confidence in their effectiveness and ethical use over time.
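A periodic wellness check of this kind often includes a drift measure comparing the model's score distribution before and after retraining. The sketch below uses the Population Stability Index (PSI), a common drift statistic; the bin values here are toy numbers, and the roughly 0.25 "significant drift" reading is a widely used rule of thumb rather than a regulatory threshold.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned score distributions.

    Both inputs are lists of bin proportions summing to 1. Values above
    ~0.25 are commonly read as significant drift worth investigating.
    """
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]   # score bins at initial deployment
current = [0.10, 0.20, 0.30, 0.40]    # same bins after retraining
drift = psi(baseline, current)
# A PSI near 0.23 here would warrant a closer error analysis
# before the retrained model returns to production.
```

Scheduling this comparison after every retraining cycle, and logging the result, gives the documentation trail described above a quantitative anchor.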
The transformative potential of AI in healthcare is immense but comes with significant responsibility. As the integration of AI models continues to gain traction, it is crucial to prioritize patient well-being, uphold strong data governance standards, and champion responsible development practices. Collaboration between industry stakeholders, technology experts, and government entities must also accelerate. While the journey ahead may be fraught with challenges, concerted efforts to educate and raise awareness about AI literacy and bias remain a source of strength. Through meticulous planning and conscientious implementation, AI promises to revolutionize healthcare, ushering humanity into a new era of innovation and efficacy.
Content Courtesy – DATAVERSITY