Ethical Input Needed for Digital Models and Simulations in Healthcare
Digital models and simulations are a quickly evolving technology that, like artificial intelligence (AI) tools, will change clinical practice and patient care. Kristin Kostick-Quenet, PhD, an assistant professor at Baylor College of Medicine’s Center for Medical Ethics and Health Policy, was lead author of a recent paper on how ethicists are integrating social determinants of health into digital models and simulations.1 Medical Ethics Advisor (MEA) spoke with Kostick-Quenet about ethical considerations with this technology.
MEA: Where do things stand currently with digital models and simulations in healthcare? What should clinicians and ethicists be aware of regarding this evolving technology?
Kostick-Quenet: Creating these types of virtual models is increasingly enabled by a growing variety of data collection devices. These could be wearables or other types of computer perception devices that can detect things about you or your surroundings and translate them into computational representations of your biological processes, behaviors, or — some even argue — your thoughts and feelings.
When you put enough of these relevant data together, along with scientific understandings of biological or psychological processes, you can potentially create virtual models that can predict the likelihood that certain clinical outcomes might happen.
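To make the idea concrete, the sketch below shows, in the simplest possible terms, how a handful of wearable-derived signals might feed a probabilistic prediction of a clinical outcome. The feature names, the synthetic data, and the choice of logistic regression are illustrative assumptions, not a description of any actual digital model described in the interview.

```python
# Minimal sketch (hypothetical): combining wearable-derived features into a
# simple probabilistic model of a clinical outcome. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated sensor features: resting heart rate, nightly sleep hours, daily steps.
n = 500
X = np.column_stack([
    rng.normal(70, 10, n),      # resting heart rate (bpm)
    rng.normal(6.5, 1.2, n),    # sleep duration (hours)
    rng.normal(6000, 2500, n),  # daily step count
])

# Synthetic outcome: higher heart rate and less sleep raise event probability.
logit = 0.05 * (X[:, 0] - 70) - 0.4 * (X[:, 1] - 6.5) - 0.0001 * (X[:, 2] - 6000)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(X, y)

# Predicted likelihood of the outcome for a new (synthetic) patient.
new_patient = np.array([[85, 5.0, 3000]])
print("Predicted risk:", model.predict_proba(new_patient)[0, 1])
```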
These types of virtual models were first created by NASA in the 1960s to predict how spacecraft would react to the extreme environments of spaceflight. After this pioneering step to create a “digital twin” of these mechanical and physical systems, virtual models progressed and were extrapolated to airplanes, transportation systems, and buildings in an effort to model how these systems might react to different conditions and physical pressures. Only recently have these models promised to play a role in healthcare and patient outcomes.
In healthcare, virtual models are still in the very early stages and are just beginning to take shape. Building virtual models for healthcare is a little trickier than building a virtual twin of a mechanical system.
We are still a long way from using virtual simulations as a replacement for clinical trials or from being able to use virtual models in clinical care with real patients. But because these are the early stages, we have an opportunity to lay the groundwork for what kinds of standards — both technical and ethical — we expect from these technologies.
MEA: What are unique ethical concerns when digital models and simulations are applied to human health?
Kostick-Quenet: For a physical or mechanical system, with the right kind of tools and background knowledge about causal properties from physics and engineering, you can probably anticipate outcomes pretty well. But health and illness are quite different. All kinds of social phenomena (collectively called “social determinants of health”) shape disease, illness states, and health outcomes. These could include anything from eating and exercise habits to where you live, cultural and social practices, and treatment-seeking behaviors. If you are building a system where you are trying to capture all the mechanisms that lead to a health outcome, this means you have to computationalize all of those things. And this is something that we still do not do very well at all. Often, we don’t even know the full range of health predictors or which are the most important for which types of conditions. Even just capturing a few of them has proven to be notoriously difficult.
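As a rough illustration of what “computationalizing” social determinants might involve, the sketch below encodes a few hypothetical social variables so they can sit alongside clinical measurements as model inputs. The variables, their categories, and the encoding choices are assumptions made for illustration only.

```python
# Minimal sketch (hypothetical): encoding a few social determinants of health
# as model-ready features next to clinical measurements. Variables are invented.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

patients = pd.DataFrame({
    "age": [54, 67, 43],
    "systolic_bp": [138, 150, 122],
    "housing_status": ["stable", "unstable", "stable"],
    "food_insecurity": ["no", "yes", "no"],
    "transport_access": ["car", "public", "none"],
})

# Numeric clinical features are scaled; categorical social variables are one-hot encoded.
encoder = ColumnTransformer([
    ("clinical", StandardScaler(), ["age", "systolic_bp"]),
    ("sdoh", OneHotEncoder(handle_unknown="ignore"),
     ["housing_status", "food_insecurity", "transport_access"]),
])

features = encoder.fit_transform(patients)
print(features.shape)  # scaled numeric columns plus one-hot columns, ready for a downstream model
```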
For a long time, the idea that sociocultural and politico-economic factors contribute to health was just kind of swept under the rug. Luckily, there are now a lot of funding mechanisms focused on identifying and addressing social determinants of health.
MEA: What is the central ethical concern with digital models or simulations that you see currently?
Kostick-Quenet: If you create a model that is the prototype model for a given patient, or maybe an average patient, it’s not going to be generalizable or applicable to all patients. You want a virtual model to be broadly customizable across different demographics, societies, and ethnic and cultural subgroups. Otherwise, you will have a model that is prone to error and nonapplicable across diverse patient populations.
This same conversation is being had in the wider context of AI. If you seed models with training data that are not diverse, the model will not be able to accommodate the wide range of heterogeneity that may be relevant to understanding or robustly predicting your outcome of interest.
The same is true for these virtual models. If you’re trying to model how a patient’s heart might function based on a simulation built from limited and nondiverse data, the model is not likely to perform well for a patient who falls outside the average characteristics of the training data. It would be unethical to use a virtual model in this way, because there is not a lot of wiggle room for inaccuracy in high-stakes health decisions.
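One way to surface this problem is to audit a model’s performance separately across subgroups. The sketch below is a minimal synthetic illustration: a model trained on data dominated by one group is evaluated on a second group whose feature-outcome relationship differs, and its discrimination degrades. The data, the groups, and the metric are illustrative assumptions, not a prescribed audit method.

```python
# Minimal sketch (hypothetical): checking whether a model trained mostly on one
# subgroup degrades on another. Real audits would use clinically meaningful strata.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def simulate(n, shift):
    """Synthetic cohort; `shift` changes the feature-outcome relationship."""
    X = rng.normal(0, 1, (n, 3))
    logit = X @ np.array([1.0, -0.5, 0.3]) + shift * X[:, 0] ** 2
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return X, y

# Training data drawn entirely from "group A".
X_a, y_a = simulate(2000, shift=0.0)
model = LogisticRegression().fit(X_a, y_a)

# Evaluate separately on held-out group A and on an underrepresented group B
# whose (synthetic) physiology differs from the training population.
X_a_test, y_a_test = simulate(500, shift=0.0)
X_b_test, y_b_test = simulate(500, shift=1.5)

print("AUC, group A:", roc_auc_score(y_a_test, model.predict_proba(X_a_test)[:, 1]))
print("AUC, group B:", roc_auc_score(y_b_test, model.predict_proba(X_b_test)[:, 1]))
```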
I would say that is the main ethical concern: Ensuring diversity in virtual models and simulations, not only in terms of clinical characteristics but also clinical etiologies.
MEA: What about ethicists whose skill set does not include technology? How can those ethicists play a role in ensuring digital models and simulations are ethically developed and implemented?
Kostick-Quenet: There’s a growing recognition among developers that you can’t wait until the deployment stages to bring in perspectives from clinicians and ethicists. You need to integrate these perspectives from the beginning. Developers are starting to do this in good faith. It’s not in their interests to operate with blinders on.
A lot of ethicists come from interdisciplinary backgrounds, with varying, and often minimal, levels of training in computational or other technical fields.
Now that AI is such a hot topic, there is a wider range of people who don’t have technical backgrounds contributing to the discourse on these topics. But to engage in some of the hairier questions that these virtual models raise, you do have to be willing to dive more into certain literatures that you might not be able to fully understand at first. It is a humbling process. But there has to be a willingness for anyone interested in chiming in from an ethical, regulatory, or philosophical perspective to engage with more of the technical side of things as well. And we see that happening, which is a good sign.
REFERENCE
- Kostick-Quenet K, Rahimzadeh V, Anandasabapathy S, et al. Integrating social determinants of health into ethical digital simulations. Am J Bioeth 2023;23:57-60.