There is often a gap between an emerging technology and its implementation. New technologies can improve our lives, but they can also change them, and those changes can initially cause fear and anxiety.
Emerging technologies are particularly complex in healthcare. There are many factors to consider, including patient preferences and government regulations.
At Kaiser Permanente, it’s our job to address these issues as we think about how new technologies can help us provide better care to our patients. Artificial intelligence is no exception.
We believe that our physicians and care teams can use AI to improve health outcomes for our members and the communities we serve. But we also know that nothing slows down the adoption of new technologies more than mistrust — or worse, technologies that could lead to patient harm.
That’s why we take a responsible approach to AI. It means we only use AI tools and solutions after we have thoroughly vetted them for quality, safety, reliability, and equity. With a focus on building trust, we only use AI when it furthers our mission of providing high-quality, affordable healthcare.
Our principles for responsible AI use
So how do we evaluate and use AI tools to make sure they meet our standards?
- We start with privacy. AI applications require large amounts of data. Ongoing monitoring, quality control, and security are necessary to protect the safety and privacy of our members and patients.
- We always check for reliability. What works today may not work a few years down the road as technology, care delivery, and patient preferences change. We choose AI tools built to remain reliable over time, and we reassess them as conditions change.
- We focus on results. If an AI tool doesn’t advance high-quality and affordable care, we don’t use it.
- We strive to use AI tools transparently. We alert patients and ask for their consent to use AI tools whenever appropriate. For our employees who use AI, we provide explanations of how our AI tools are designed, how they work, and what their limitations are.
- We promote equity. Both humans and algorithms (the instructions that AI tools follow) can introduce bias into AI tools, so ours are built to reduce it. We also know that AI can harness more data to help identify and address the causes of health inequities, and we are focusing on that capability as well (a minimal sketch of this kind of subgroup check appears after this list).
- We design for our users. In the case of AI, our users are the members, physicians, and staff who will use the tools, and their needs and preferences come first.
- We build trust. We know there is uncertainty about the success of AI. We select tools that deliver excellent safety and performance and that conform to industry standards and leading practices. We also build trust by continuously monitoring the tools we use and by investing in research that rigorously examines the impact of AI in clinical settings.
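To make the equity principle concrete, here is a minimal sketch of what a subgroup performance check might look like. This is an illustration only, not Kaiser Permanente’s actual tooling; the record format, the error-rate metric, and the tolerance threshold are all assumptions.

```python
# Minimal sketch of a subgroup performance check for an AI tool.
# Hypothetical: real equity reviews involve clinicians, statisticians,
# and far richer metrics than this illustration.

from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (subgroup, prediction, actual) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for subgroup, prediction, actual in records:
        totals[subgroup] += 1
        if prediction != actual:
            errors[subgroup] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparities(rates, tolerance=0.05):
    """Flag subgroups whose error rate exceeds the best-performing
    subgroup's by more than `tolerance` (an assumed threshold)."""
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > tolerance}

# Toy evaluation records for a hypothetical classifier.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
]
rates = subgroup_error_rates(records)
print(rates)                    # per-subgroup error rates
print(flag_disparities(rates))  # subgroups needing closer review
```

In practice, a flagged subgroup would trigger deeper review by clinicians and data scientists rather than an automatic verdict.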
Our principles in practice: Ambient clinical documentation
One example of how we’ve applied these principles is our use of an ambient clinical documentation tool. It helps our physicians and other clinicians focus on their patients and spend less time on administrative tasks.
The tool listens during patient visits, summarizes the clinical conversation, and generates a draft clinical note. The physician or clinician then reviews and edits the draft before entering it into the patient’s electronic health record.
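To make that workflow concrete, here is a minimal sketch of the human-in-the-loop pattern just described: the AI drafts, and nothing is filed until a clinician reviews it. All names and data shapes here are hypothetical, not the actual product’s API or Kaiser Permanente’s implementation.

```python
# Illustrative sketch of an ambient-documentation workflow with a
# mandatory human review step. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class DraftNote:
    summary: str
    reviewed: bool = False

def summarize(transcript: str) -> str:
    # Placeholder summarizer; a real system would use speech-to-text
    # and a summarization model here.
    return transcript[:200]

def generate_draft(transcript: str) -> DraftNote:
    # Stand-in for the AI step that turns the clinical
    # conversation into a draft note.
    return DraftNote(summary=summarize(transcript))

def commit_to_ehr(note: DraftNote) -> None:
    # The key safeguard: nothing reaches the electronic health record
    # until a clinician has reviewed and edited the draft.
    if not note.reviewed:
        raise PermissionError("Clinician review required before filing.")
    print("Note filed to EHR:", note.summary)

draft = generate_draft("Patient reports two weeks of intermittent cough...")
draft.summary += " (edited by clinician)"
draft.reviewed = True   # set only after the clinician signs off
commit_to_ehr(draft)
```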
When using the tool, called Abridge, we carefully applied each of our responsible AI principles. For example:
- The tool complies with state and federal privacy laws and stores patient data securely to protect privacy. We also obtain consent from each patient before using it; if a patient doesn’t want us to use it, we don’t.
- We require our physicians and other clinicians to review and edit the medical information the tool records. Our patients can trust that AI does not make medical decisions at Kaiser Permanente; our physicians and clinicians do.
- Before making the tool widely available at Kaiser Permanente, we put it through a rigorous quality assurance process. We made sure it worked for all patients, including our non-English-speaking patients. And we continue to collect feedback from our patients and clinicians about their experiences with the tool.
How policymakers can help
As we work to ensure that AI is used responsibly, policymakers can help by:
- Supporting the launch of large clinical trials. Healthcare organizations need more robust evidence to assess the safety and effectiveness of AI tools, and that evidence is important for building public trust.
- Developing reporting systems to evaluate AI tools used in medical care. Such systems would allow healthcare organizations to learn from one another’s experiences by sharing performance data, safety risks, and best practices.
- Supporting independent quality assurance testing of AI algorithms. Policymakers and regulators should work with healthcare organizations to establish a nationwide network for validating health AI, where developers can test their algorithms on diverse datasets and demonstrate the safety and effectiveness of AI tools across populations and regions. Many other industries, including electronics and automotive, rely on similar independent testing. Such testing would not replace our own validation of AI tools; it would complement it (a minimal sketch of this kind of cross-dataset check follows this list).
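As a rough illustration of that last point, here is a minimal sketch of cross-dataset validation: the same model is evaluated against datasets from different sites, and underperforming sites are flagged for review. The site names, toy data, and accuracy floor are assumptions for illustration, not part of any actual validation network.

```python
# Sketch of cross-dataset validation: run the same model against
# evaluation sets from different sites and compare performance.
# Site names and the accuracy floor are illustrative assumptions.

def accuracy(model, dataset):
    """dataset: list of (features, label); model: callable features -> label."""
    correct = sum(1 for features, label in dataset if model(features) == label)
    return correct / len(dataset)

def validate_across_sites(model, site_datasets, floor=0.90):
    """Report per-site accuracy and flag sites below an assumed floor."""
    results = {site: accuracy(model, data) for site, data in site_datasets.items()}
    failing = [site for site, score in results.items() if score < floor]
    return results, failing

# Toy model and toy per-site evaluation data.
model = lambda features: int(features["risk_score"] > 0.5)
site_datasets = {
    "site_northwest": [({"risk_score": 0.7}, 1), ({"risk_score": 0.2}, 0)],
    "site_southeast": [({"risk_score": 0.6}, 0), ({"risk_score": 0.4}, 0)],
}
results, failing = validate_across_sites(model, site_datasets)
print(results)   # e.g. {'site_northwest': 1.0, 'site_southeast': 0.5}
print(failing)   # sites whose performance needs investigation
```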
To realize the full potential of AI, we and all healthcare organizations must use it responsibly.
At Kaiser Permanente, we actively apply our responsible AI principles, and we work closely with policy leaders to support industry-wide efforts.