Last updated March 28, 2018 at 1:29 pm
AI will revolutionise healthcare, but there is a host of ethical issues most of us haven’t even thought of.
It’s hard not to be excited about artificial intelligence: it promises to revolutionise our lives in ways we can’t yet predict.
One of the most powerful ways it will change the future is in healthcare, crunching health data to improve diagnostics and help doctors make better decisions for their patients.
Already we’ve seen it at work reading X-rays, analysing eye scans and assessing cancer risk.
However, as with every monumental shift in technology, there are serious ethical considerations to weigh. Sometimes it is just as valuable to slow our furious pace and carefully examine these ethical risks before we go too far.
In a new paper, Stanford University researchers have identified some of these ethical issues. Without addressing them, we won’t be able to fully take advantage of the benefits of AI in healthcare.
“Because of the many potential benefits, there’s a strong desire in society to have these tools piloted and implemented into health care,” said Danton Char, who was the lead author on the paper. “But we have begun to notice, from implementations in non-health care areas, that there can be ethical problems with algorithmic learning when it’s deployed at a large scale.”
Concerns for the future
The authors identified several areas of ethical concern, in particular some serious worries revolving around bias.
In machine learning, computers are fed data from which they build algorithms to reach an outcome. Say, for example, you wanted to build a tool to diagnose macular degeneration: the algorithm would be trained on thousands of retinal images, from which the machine learns which patterns indicate disease and which don’t.
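To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of training step in Python. The data are synthetic stand-ins rather than real retinal scans, and none of it comes from the Stanford paper.

```python
# Illustrative only: a toy "diagnosis" classifier trained on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "retina image" is a small grayscale array flattened to a feature vector.
n_images, image_pixels = 2000, 32 * 32
images = rng.normal(size=(n_images, image_pixels))
labels = rng.integers(0, 2, size=n_images)  # 1 = signs of disease, 0 = healthy

# Inject a crude "disease pattern" so the model has something to learn.
images[labels == 1, :50] += 1.5

X_train, X_test, y_train, y_test = train_test_split(images, labels, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # the machine learns which patterns indicate disease
print("held-out accuracy:", model.score(X_test, y_test))
```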
However, that means that any bias in the data originally used will be reflected in the algorithms and in the clinical recommendations they generate.
This can occur through unconscious bias in how data is selected, or through deliberate attempts to skew results, depending on the motives of the programmers, companies or health care systems deploying the algorithms.
“You can easily imagine that the algorithms being built into the health care system might be reflective of different, conflicting interests,” said David Magnus, one of the authors from the Stanford Center for Bioethics. “What if the algorithm is designed around the goal of saving money? What if different treatment decisions about patients are made depending on insurance status or their ability to pay?”
There are also concerns about how doctors would use the output of AI systems. Rather than treating them simply as tools that produce an answer, the authors argue, doctors should become more invested in understanding how the systems work and what their limitations are, so that they can better weigh the information that emerges from an artificial intelligence system.
“Ethical guidelines can be created to catch up with the age of machine learning and artificial intelligence that is already upon us,” write the authors. “Physicians who use machine-learning systems can become more educated about their construction, the data sets they are built on and their limitations. Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes.”
Knowing how the systems were constructed would allow the end users to more accurately assess the system’s bias and take that into account during patient care.
By educating themselves about the systems they use and any biases those systems might contain, end users can also avoid becoming over-reliant on the technology and failing to apply their own critical judgement and training when interpreting the results.
“We need to be cautious about caring for people based on what algorithms are showing us,” Char said. “The one thing people can do that machines can’t do is step aside from our ideas and evaluate them critically.”
Keeping the human element in healthcare
Maintaining this combination of data-derived information and human experience is vital for ensuring appropriate health care, the authors say. Overreliance on machine guidance might lead to self-fulfilling prophecies.
For example, they said, if clinicians always withdraw care in patients with certain diagnoses, such as extreme prematurity or brain injury, machine-learning systems may learn that such diagnoses are always fatal. Conversely, machine-learning systems, properly deployed, may help resolve disparities in health care delivery by compensating for known biases or by identifying where more research is needed to balance the underlying data.
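To see how such a feedback loop could arise, here is a small, hypothetical simulation, not drawn from the paper: if care is always withdrawn whenever a particular diagnosis appears in the training records, a model trained on those records “learns” that the diagnosis is almost always fatal, regardless of the true underlying risk.

```python
# Hypothetical simulation of the self-fulfilling prophecy the authors warn about.
# All names and numbers are illustrative, not clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
has_diagnosis = rng.integers(0, 2, size=n)            # e.g. a severe diagnosis
baseline_death_risk = np.where(has_diagnosis == 1, 0.4, 0.1)

# Historical practice: care withdrawn whenever the diagnosis is present,
# so every such patient dies in the training records.
care_withdrawn = has_diagnosis == 1
died = np.where(care_withdrawn, 1, rng.random(n) < baseline_death_risk).astype(int)

model = LogisticRegression().fit(has_diagnosis.reshape(-1, 1), died)
print("predicted mortality with diagnosis:",
      model.predict_proba([[1]])[0, 1])   # close to 1.0, not the true 0.4
print("predicted mortality without diagnosis:",
      model.predict_proba([[0]])[0, 1])
```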
To investigate this effect, the researchers are currently running a study. So far, they have shown how collaboration between physicians and algorithm developers during the design phase can guard against data being misinterpreted when making care decisions.
Machine-learning-based algorithms essentially introduce a third party into the physician-patient relationship. Beyond balancing that new input against the biases of both the algorithm and the doctor, the authors point out that this added element will challenge the dynamics of responsibility in the relationship.
The integration of computerised systems will also change expectations of confidentiality.
According to the authors, “Once machine-learning-based decision support is integrated into clinical care, withholding information from electronic records will become increasingly difficult, since patients whose data aren’t recorded can’t benefit from machine-learning analyses.”
Essentially, because algorithms rely on data, there won’t be any way of benefiting from machine-learning-based tools without releasing data to the system.
They suggest we need to have a conversation about how data gathered on patient health, diagnostics and outcomes becomes part of the “collective knowledge” of published literature and of the information collected by health care systems, and about how this data might be used without regard for clinical experience and the human aspect of patient care.
“I think society has become very breathless in looking for quick answers,” said Char. “I think we need to be more thoughtful in implementing machine learning.”
The paper has been published in The New England Journal of Medicine.