IRBs Are Reviewing Artificial Intelligence Research, Outside Expertise Needed
Health-related artificial intelligence (AI) research raises unique and complex ethical issues, some of which fall outside the expertise of most IRBs. “One potential challenge is that a lot of AI tools and systems are coming from the software technology world, where ‘move fast and break things’ remains a common cultural theme,” says Brian R. Jackson, MD, MS, associate professor of pathology at the University of Utah.
Ethical concerns can be built into the tools themselves. A database or application could carry hidden biases, could have been developed using data that were not appropriately collected, or could put subjects’ privacy at risk. “Silicon Valley developers don’t apply the same level of ethical scrutiny to their R&D projects that IRBs do. Not even close,” says Jackson, medical director of support services, IT, and business development at ARUP Laboratories, a nonprofit enterprise of the University of Utah.
Another ethical concern is that some AI projects fall outside what regulations define as “research,” meaning they can bypass IRB review entirely. For example, one widely used algorithm designed to identify patients with complex health needs underestimated the future health needs of Black patients.1,2 “This app was implemented for hospital case management and, thus, lacked IRB oversight — though in theory, research groups could have used its data in research projects as well,” Jackson notes.
One solution is to ask a separate IRB to review the details of arrangements with outside entities, such as software vendors or biotech firms that analyze healthcare data. “Granted, IRBs have lots of experience dealing with external companies — in particular, pharmaceutical and device companies. The difference here is that many of the risks may be hidden inside a ‘black box.’ Researchers and IRB members may not know all the right questions to get inside that box,” Jackson explains.
Researchers and IRB members probably do not fully understand the bias and privacy risks associated with the AI tools they are using. “The companies who developed them may or may not be forthcoming about these issues,” Jackson adds.
IRBs needing expertise in this area could turn to the academic computer science community, which has been actively grappling with AI ethics for many years, although mainly in non-medical domains. “Still, there are many computer science experts, including on college campuses, who could potentially be recruited to participate in the IRB review of study protocols, at least ad hoc,” Jackson suggests.
In a recent paper, Margaret Levi, PhD, and colleagues argued that a new process is needed to assess the ethical implications and downstream consequences of technological and scientific discoveries, including AI and machine learning.3 “IRBs are established to consider effects on subjects and participants in a study. Indeed, they are legally prohibited from considering the ethical implications and downstream consequences of the research,” says Levi, director of the Center for Advanced Study in the Behavioral Sciences at Stanford University.
IRBs focus on how a clinical trial will affect human subjects, not society overall. In light of this limitation, Levi and colleagues proposed a separate “Ethics and Society Review” process. Under it, principal investigators provide a short statement about what they perceive as potential problems (e.g., the possibility that another entity will misuse the data) and how they will mitigate them. Trained staff triage the research proposals, and qualified faculty panelists assess which ones need deeper consideration. “Staff and panelists are required to have subject matter expertise, competence in making such ethical judgments, or both,” Levi says.
The proposals and short statements become the basis of a conversation between study investigators and panelists. “Ultimately, we will build a scaffolding that researchers can use as they design their studies and as they come upon unintended consequences. Over time, we hope the Ethics and Society Review will be needed for fewer cases,” Levi says.
Ethicists also offer specific recommendations on how IRBs might be adapted to provide ethics oversight of health-related AI research.4 “While we are most familiar with institutional IRBs that review research protocols prepared in academic settings, we had noticed an interesting trend in the technology sector,” says Phoebe Friesen, PhD, the paper’s lead author and an assistant professor in McGill University’s Biomedical Ethics Unit.
Companies like Facebook and Google’s DeepMind are developing their own IRBs. The companies adopted some features from institutional IRBs and left out others. “This compelled us to organize a workshop exploring the topic of IRBs for health research involving AI, and examining challenges arising at this intersection in both academia and in industry,” Friesen says.
Experts made recommendations on how to adapt ethics review to the challenges brought about by health research involving AI. “Many of these recommendations relate to features of research governance that IRBs or researchers have little control over,” Friesen notes.
IRBs review only study protocols that meet the regulatory definition of “research,” which is what triggers oversight. Friesen and colleagues recommended basing review on risk and uncertainty instead of on what constitutes “research.” Making those kinds of changes would require action at the regulatory level rather than at the level of individual IRBs. “There is some space for IRBs to be responsive to our recommendations, however,” Friesen says.
The authors recommended IRB members be given clear authority over research approval, with transparent decision-making. “These recommendations are especially relevant to emerging IRBs within the tech industry, many of which are advisory, internal, and hidden from public view,” Friesen says.
REFERENCES
1. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019;366:447-453.
2. Ledford H. Millions of Black people affected by racial bias in health-care algorithms. Nature 2019;574:608-609.
3. Bernstein MS, Levi M, Magnus D, et al. Ethics and society review: Ethics reflection as a precondition to research funding. Proc Natl Acad Sci USA 2021;118:e2117261118.
4. Friesen P, Douglas-Jones R, Marks M, et al. Governing AI-driven health research: Are IRBs up to the task? Ethics Hum Res 2021;43:35-42.