A new article has offered a peek at the ethical decisions facing a nascent innovation in healthcare: the creation of digital twins (DTs) that can be used to simulate an individual’s health.
While DTs are not yet a reality beyond limited research studies, article authors Matthias Braun and Jenny Krutzinna argue that the pace of advances in the area makes discussion of the technology’s ethical impacts essential.
A simulation of a child’s health status
Braun and Krutzinna, researchers at Friedrich-Alexander-Universität Erlangen-Nürnberg and the University of Bergen respectively, focus their analysis on the role that DTs could play in raising the profile of children’s health.
“[We] argue that bringing a solid conceptual basis into the development process is of utmost importance for the effective protection of children’s rights and interests,” Braun and Krutzinna told Technology Networks.
Digital twinning is a broad technology. Put simply, it involves creating accurate simulations of real events or objects. By constantly updating the DT with real-time information, it can be used to track a system and even predict its future performance.
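That update-and-predict loop can be illustrated with a toy sketch. This is not a real clinical DT, just a minimal model of the idea: a twin that mirrors one vital sign streamed from a wearable and extrapolates its recent trend. All names and the forecasting rule are illustrative assumptions.

```python
from collections import deque

class DigitalTwin:
    """Toy digital twin: mirrors a single vital sign (e.g. heart rate)
    and extrapolates its short-term trend. Illustrative only."""

    def __init__(self, window: int = 5):
        self.readings = deque(maxlen=window)  # most recent sensor values

    def update(self, value: float) -> None:
        """Sync the twin with a new real-time measurement."""
        self.readings.append(value)

    def predict_next(self) -> float:
        """Naive forecast: last value plus the average recent change."""
        if len(self.readings) < 2:
            return self.readings[-1]
        values = list(self.readings)
        diffs = [b - a for a, b in zip(values, values[1:])]
        return values[-1] + sum(diffs) / len(diffs)

twin = DigitalTwin()
for hr in [72, 74, 75, 77]:   # streamed wearable readings
    twin.update(hr)
print(twin.predict_next())    # ≈ 78.67
```

Real health DTs would of course use far richer models, but the shape is the same: continuous synchronization with the physical counterpart, then simulation forward in time.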
The DTs considered by Braun and Krutzinna are likely to involve AI-driven simulations of individuals’ health statuses. The key element that separates a DT of an adult from that of a child, the duo argue, is children’s unique vulnerability.
Braun and Krutzinna see DTs as a technology that could both address the vulnerabilities children face in current healthcare systems and create new vulnerabilities of its own.
How do we protect children currently?
In current child protection systems, the authors say, numerous third parties come together to make decisions about how best to serve children’s interests. Perhaps a service receives a call from a teacher or relative concerned about the welfare of a child. That service must make relatively rapid decisions about how best to address the situation, decisions that are almost always based on incomplete information. “Very little health information is available about the child's actual condition,” say Braun and Krutzinna. “Knowing precisely the health status of a child and basing surrogate decisions on information that is as accurate as possible is a key condition for not inflicting new harm with surrogate decisions.”
Additionally, in the scenario above, the decision is made without any input from the child themselves. The authors argue that international conventions, such as the UN Convention on the Rights of the Child (CRC), which has been ratified by every country except the USA, should guarantee that decisions made about children focus on the child’s best interests and their right to participation. Clearly, this doesn’t always happen.
Failure to predict
Given these limitations, there is an obvious appeal to the idea of using technology to make decisions around child welfare more informed. One particular approach that Braun and Krutzinna discuss is the use of predictive algorithms to anticipate which children are likely to be put at risk before they end up in such situations. This well-intentioned approach is not without its own risks. The authors mention the case of the Allegheny Family Screening Tool (AFST), a predictive system that attempted to apply predictive risk scores to potential cases of child abuse or neglect that passed through Pennsylvania’s Allegheny County Office of Children, Youth and Families (CYF).
As has been detailed extensively before, the tool suffered from inherent bias because it incorporated measures of how likely a family was to be referred to authorities. This meant that Black and biracial families were disproportionately scored higher, as the wider community was three-and-a-half times more likely to report them to the County Office’s hotline than white families. Some subsequent analysis of the tool suggests it remains “less bad” than existing screener tools, but the case reveals the dangers of viewing predictive tools as somehow objective or free from bias.
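The mechanism behind that bias is easy to demonstrate. The sketch below is not the AFST’s actual model; the weights and inputs are hypothetical. It shows only the general point: a score that mixes underlying need with referral history will rank an over-reported family as riskier even when the underlying need is identical.

```python
# Hypothetical risk model (NOT the AFST): weights chosen only for illustration.
def risk_score(underlying_need: float, past_referrals: int) -> float:
    # Mixing referral counts into the score imports any bias in who gets reported.
    return 0.5 * underlying_need + 0.5 * past_referrals

need = 2.0  # two families with identical underlying need...

# ...but one community is reported roughly 3.5x as often, per the article.
family_a = risk_score(need, past_referrals=1)
family_b = risk_score(need, past_referrals=4)

print(family_a, family_b)  # the over-reported family scores higher
```

Nothing in the arithmetic is malicious; the bias enters through the data the model is fed, which is why "objective" predictive tools can still reproduce discriminatory patterns.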
The ideal digital twin
Can DTs avoid the traps that predictive technologies fall into? DTs are also designed to make predictions, but unlike the AFST, they would be constantly updated with real-time information about a given child.
A child’s DT could incorporate their medical data, genetic data and current metabolic data from sources such as wearables. The authors lay out what their ideal use of a DT would be – a one-to-one digital representation of a physical person, with verified, secure data that can be controlled by a guardian or by the child themselves. Braun and Krutzinna say that one of the most important aspects of an idealized DT would be its ability to involve the child in decision-making processes. “Since a central aspect of DT technology is the interaction between the digital twin and the simulated person (in our case the child), the decision-making processes could be designed in such a way that children themselves can, for example, decide to allow their DT to bring certain health information that is important to them into the decision-making process,” they say.
DTs could not only act to make up-to-date information on a potentially at-risk child more accessible, but to maximize the reach of any input from the child themselves. Rather than repeatedly sharing potentially traumatic events with multiple parties, any inputs shared by a child into a DT system could then be made available to the required third parties without forcing the child to repeatedly give testimony.
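The consent model the authors describe could be sketched as follows. This is a speculative illustration of the idea, not a design from their paper; the class and field names are hypothetical. The point is that the child (or a guardian) marks which records may leave the twin, and third parties only ever see the consented subset, without the child having to retell anything.

```python
# Hedged sketch of child-controlled data sharing; all names are hypothetical.
class ChildDT:
    def __init__(self):
        self.records = {}       # field -> value held by the twin
        self.shareable = set()  # fields the child has consented to share

    def add_record(self, field: str, value, share: bool = False) -> None:
        self.records[field] = value
        if share:
            self.shareable.add(field)

    def view_for_third_party(self) -> dict:
        """Only consented fields are released to decision-makers."""
        return {f: v for f, v in self.records.items() if f in self.shareable}

dt = ChildDT()
dt.add_record("heart_rate", 76, share=True)
dt.add_record("therapy_notes", "private", share=False)
print(dt.view_for_third_party())  # {'heart_rate': 76}
```

A real system would need verified identities, audit trails and revocable consent, but even this skeleton shows how participation rights could be built into the data flow itself.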
Threading an ethical needle
The authors ultimately conclude that an ideal DT system would have to thread an ethical needle. While the surveillance that would be required to make a DT function would inarguably be considered a potential privacy risk, current procedures, such as poorly informed investigations by child welfare authorities, are also likely to impact privacy.
Braun and Krutzinna sum up this ethical dilemma in their article: “The question that arises is whether there is an acceptable level of surveillance that, when applied and used correctly, leads to greater respect for the right to bodily integrity.”
Many ethical quandaries remain unanswered at this stage – the question of who would decide to have a DT created for a child, and whether the most vulnerable children would be in a position to access a DT, for example, is not fully discussed in Braun and Krutzinna’s analysis. But what the authors are most ardent about is the need to have these discussions urgently. “We have to learn how challenging, if not impossible, it is to think about the normative conditions and consequences only when a technology is already in use,” the duo write, pointing to the example of CRISPR, the gene-editing technology that took just six years to go from first being used in biological samples to editing the genomes of unborn children.
In short, if this is the first you have heard about DTs, be sure it won’t be the last – there are many ethical debates yet to come. Those debates will likely decide whether the technology will prove to be a tool to improve children’s health or to impose unnecessary surveillance on the next generation. As the authors conclude in their paper, “Thinking about how DTs can be developed from an ethical perspective in such a way that they eliminate existing grievances in proxy decision-making, and at the same time make the respective representatives more visible, is central to responsibly shaping the future.”