Can AI intervene in end-of-life care? Can machines decide matters of life and death?
A few months ago, a woman in her fifties named Sophie suffered a hemorrhagic stroke, a bleed in her brain. She underwent brain surgery, but her heart eventually stopped and had to be restarted, leaving her with severe brain damage.
She was unresponsive: she could not squeeze a hand or open her eyes when asked, and showed no reaction even when pinched. She needed a tracheostomy tube to breathe and, because she could not swallow, was fed directly into her stomach through a gastric feeding tube. What should be done about her medical care going forward?
As is typical in such cases, the difficult question fell to Sophie's family, recalled Holland Kaplan, an internist at Baylor College of Medicine who was involved in Sophie's treatment. But the family could not agree. Sophie's daughter was adamant that her mother would want to stop medical intervention and be allowed to die peacefully; another family member strongly disagreed, insisting that Sophie was "a fighter." The situation was deeply distressing for everyone involved, including Sophie's doctors.
End-of-life decisions like these can be extremely painful for proxies, notes David Wendler, a bioethicist at the US National Institutes of Health. Wendler and his colleagues are working on something that could make them easier: an artificial intelligence-based tool to help proxies predict how patients themselves would choose in specific situations. The tool has not yet been built, but Wendler plans to train it on a person's medical data, other personal information, and social media posts. He hopes it will not only infer the patient's own wishes more accurately, but also relieve some of the pressure and emotional burden on family members facing difficult decisions.
Wendler, together with Brian Earp, a bioethicist at the University of Oxford, and their colleagues, hopes to start building the tool as soon as they secure funding, possibly in the coming months. But getting it off the ground will not be easy. Critics question whether training such a tool on personal data can be ethically justified, and whether life-and-death decisions should ever be entrusted to artificial intelligence.
To be or not to be
An estimated 34% of patients are considered unable to make decisions about their own medical care, for a variety of reasons: they may be unconscious, for example, or unable to reason or communicate. The proportion is higher among older adults; one US study of people over 60 found that 70% of those facing important medical decisions lacked the capacity to make them on their own. "It's not just about making a lot of decisions," Wendler said, "but a lot of extremely important decisions, which basically determine whether a person will survive or die in the near future."
Performing cardiopulmonary resuscitation (CPR) on a patient whose heart has stopped may extend their life, but it can fracture the sternum and ribs, and even if the patient eventually wakes up, if they wake up at all, they may be left with severe brain damage. Machines can keep the heart and lungs working so that other organs continue to receive oxygenated blood, but recovery is far from guaranteed, and in the meantime the patient may develop infections. A terminally ill patient might want to keep trying whatever drugs and treatments the hospital can offer in the hope of living a few more weeks or months; another might forgo those interventions and prefer to spend their remaining time comfortably at home.

In the United States, only about one-third of adults have completed an advance directive, a legal document specifying the kind of end-of-life care they wish to receive. Wendler estimates that more than 90% of end-of-life decisions are ultimately made by someone other than the patient. The proxy's role is to make those decisions as the patient would have wanted, but people are generally not very good at such predictions. Studies suggest proxies accurately anticipate patients' end-of-life decisions only about 68% of the time.
These decisions can also take a heavy toll on the proxies themselves, Wendler points out. While some come away feeling they have supported a loved one, others buckle under the emotional burden and can feel guilty for months or even years afterward. Some worry they ended a loved one's life too soon; others fear they needlessly prolonged their suffering. "This is really very bad for many people," Wendler said. "People describe it as one of the worst things they have ever experienced."
Wendler has long wanted to develop a way to help proxies through these decisions. More than a decade ago, he proposed a tool built on a computer algorithm trained on surveys of the general population, which would predict a patient's preferences from characteristics such as age, gender, and insurance status. It may sound crude, but these characteristics do seem to influence people's attitudes toward medical care: a teenager, for example, is more likely than a 90-year-old to opt for aggressive treatment. And research suggests that predictions based on such averages can be more accurate than the guesses of family members.
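To make the population-based idea concrete, here is a minimal sketch in Python, assuming a hypothetical survey table with demographic columns and a recorded treatment preference; the column names, toy data, and choice of a logistic-regression model are illustrative assumptions, not Wendler's actual design.

```python
# Minimal sketch of a population-based preference predictor (illustrative only).
# Assumes a hypothetical survey table with demographics and a recorded preference;
# column names, toy data, and model choice are invented for this example.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in for survey responses: would the respondent want aggressive treatment?
survey = pd.DataFrame({
    "age":       [19, 34, 52, 67, 78, 90],
    "gender":    ["f", "m", "f", "m", "f", "m"],
    "insurance": ["private", "private", "none", "medicare", "medicare", "medicare"],
    "wants_aggressive_treatment": [1, 1, 1, 0, 0, 0],
})

model = Pipeline([
    ("encode", ColumnTransformer(
        [("categorical", OneHotEncoder(), ["gender", "insurance"])],
        remainder="passthrough",  # keep the numeric age column as-is
    )),
    ("classify", LogisticRegression()),
])
model.fit(
    survey.drop(columns="wants_aggressive_treatment"),
    survey["wants_aggressive_treatment"],
)

# Predict for a new incapacitated patient described only by the same demographics.
patient = pd.DataFrame([{"age": 85, "gender": "f", "insurance": "medicare"}])
print(model.predict_proba(patient)[0, 1])  # estimated probability of wanting aggressive care
```

In a real system the survey would cover far more respondents and attributes, but the structure is the same: demographic characteristics go in, an estimate of the patient's likely preference comes out.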
In 2007, Wendler and his colleagues built a preliminary version of the tool using a small amount of data. That simplified tool, Wendler said, was "at least as good as direct relatives in predicting what kind of care people want."
Now, Wendler, Earp, and their colleagues are working on a new idea. Rather than relying on crude demographic characteristics, the new tool would be personalized. The team proposes using artificial intelligence and machine learning to predict a patient's treatment preferences from individual data such as their medical history, along with emails, personal messages, web browsing history, social media posts, and even Facebook likes. The result would be a "digital psychological twin" of a person: a tool that doctors and family members could consult to guide their medical care. It is not yet clear what this would look like in practice, but the team hopes to build and test the tool before refining it further.

The researchers call their tool the personalized patient preference predictor, or P4 for short. In theory, if it works as intended, it could be more accurate than the earlier prototype, and perhaps even more accurate than human proxies, says Wendler. It might also reflect a patient's current thinking better than an advance directive, which may have been signed a decade earlier, says Earp.
A Better Choice?
Tools like P4 could also ease the emotional burden proxies carry when making such momentous life-and-death decisions for family members, a burden that can sometimes produce symptoms of post-traumatic stress disorder, says Jennifer Blumenthal-Barby, a medical ethicist at Baylor College of Medicine in Texas.
Some proxies experience "decision paralysis," Kaplan points out, and might welcome the option of using the tool to help guide them through the process. In such cases, P4 could relieve some of the burden proxies carry without handing them black-and-white answers. It might, for example, suggest that a patient is "very likely" or "very unlikely" to feel a certain way about a treatment, or give a percentage score indicating how confident the prediction is.
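To illustrate the kind of hedged output described above, here is a small sketch that maps a model's predicted probability to qualified wording and a percentage; the thresholds, phrasing, and function name are hypothetical, not part of any published P4 design.

```python
# Illustrative only: turn a model's predicted probability into hedged guidance
# rather than a black-and-white answer. Thresholds and wording are invented here.
def describe_preference(prob_wants_treatment: float) -> str:
    """Translate a probability into the kind of qualified statement a clinician might see."""
    if prob_wants_treatment >= 0.85:
        label = "very likely to want"
    elif prob_wants_treatment >= 0.60:
        label = "likely to want"
    elif prob_wants_treatment > 0.40:
        label = "uncertain about"
    elif prob_wants_treatment > 0.15:
        label = "unlikely to want"
    else:
        label = "very unlikely to want"
    return f"The patient is {label} this treatment (estimated {prob_wants_treatment:.0%})."


# Example: a hypothetical model output of 0.72 for continuing a given treatment.
print(describe_preference(0.72))  # -> "The patient is likely to want this treatment (estimated 72%)."
```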
Kaplan believes tools like P4 could help in situations like Sophie's, where family members disagree about someone's medical care: the tool could be offered to them and might, ideally, help them reach a consensus. It could also help guide care decisions for patients who have no one to advocate for them.

Kaplan is an internist at Houston's Ben Taub Hospital, a "safety net" hospital that treats patients whether or not they have health insurance. "Many of our patients are undocumented, incarcerated, or homeless," she said. "We take care of patients who basically cannot get treatment elsewhere."
When Kaplan sees these patients, they are usually in the late stages of disease and in very difficult situations. Many patients cannot personally participate in discussions about their treatment plans, and some do not have family members who can speak on their behalf. Kaplan said she can imagine a tool like P4 being used in such situations to help doctors better understand what patients might want. In these cases, it may be difficult to access patients' social media profiles, but other information could prove useful. "If certain factors can be predictive, I want them to be included in the model," Wendler said. "If people's hair color, the schools they attended, or the initials of their surnames can predict their wishes, then I want to include these factors in the model."
That kind of use is supported by preliminary research from Earp and his colleagues, who have begun surveying people about how they would feel about using P4. The research is ongoing, but early responses suggest people would be willing to try the model only if no human proxy were available. Earp says he feels the same way. And if P4 and a human proxy gave different predictions, he adds, "I might be inclined to trust the human who knows me, rather than the model."
Not Human
Earp's feelings reflect an intuition many share: that such weighty decisions are best made by humans. "The question is: how do we want to make end-of-life decisions, and who should make them?" said Georg Starke, a researcher at the Swiss Federal Institute of Technology in Lausanne. He worries that reaching for a technological solution could turn intimate, complex, and personal decisions into "an engineering problem."

When Bryanna Moore, an ethicist at the University of Rochester, first heard about P4, her initial reaction was: "Oh, no." Moore is a clinical ethicist who consults with patients, families, and staff at two hospitals. "A large part of our work is accompanying people who face difficult choices... they don't have good options," she said. "What surrogate decision-makers really need is someone who can listen to their story and support them through the decision... Honestly, I really don't know whether there is a need for this."
Moore concedes that surrogate decision-makers cannot always get it exactly right when deciding on care for their loved ones. Even when patients can be asked directly, their answers may change over time, a situation Moore calls the "past self and present self" problem.
Moore is not convinced that tools like P4 would solve this problem. Even when a person's wishes are spelled out in old handwritten notes, messages, or social media posts, it is very hard to know how they will feel when an actual medical crisis arrives. Kaplan recalls treating an 80-year-old man with osteoporosis who had firmly stated that he wanted chest compressions if his heart stopped. But when the moment came, his bones were too thin and brittle to withstand the compressions. Kaplan remembers hearing them snap like toothpicks and his sternum detaching from his ribs. "Then you wonder, what are we doing? Who are we helping? Would anyone really want this?" Kaplan said.
There are other concerns. For one, an AI trained on social media posts may not end up being much of a "psychological twin." "Anyone with social media knows that what we post often does not truly represent our beliefs, values, or desires," Blumenthal-Barby said. Even if it did, it is hard to know how such posts would reflect feelings about end-of-life care; many people find it difficult enough to discuss these issues with their families, let alone on a public platform.
And for now, artificial intelligence does not always respond well to the questions people put to it. Even a slight change in the prompt given to an AI model can produce a completely different answer. "Imagine this happening with a fine-tuned large language model that is supposed to tell you the patient's wishes at the end of life," Starke said. "It's terrifying."

Then again, humans make mistakes too. Vasiliki Rahimzadeh, a bioethicist at Baylor College of Medicine, thinks P4 is a good idea, provided it is rigorously tested. "We should not hold these technologies to a higher standard than we hold ourselves," she says.
Earp and Wendler acknowledge the enormous challenges they face. They hope to build a tool that captures useful information and reflects a person's wishes without violating their privacy, one that serves as a helpful guide patients and proxies can choose to use, rather than becoming the default way decisions about a patient's care are made.
Even if they get all of that right, they may not be able to control how such tools are ultimately used. Take a case like Sophie's: had P4 been used, its prediction might only have inflamed already strained family relationships. And if its output is treated as the closest thing to the patient's own wishes, Blumenthal-Barby points out, the patient's doctors might feel legally obligated to follow P4 rather than the family. "This could be very confusing and cause great distress for family members," she says.
"The thing I worry about the most is who controls it," Wendler says. He is concerned that hospitals might misuse tools like P4 to avoid performing costly procedures. "There could be various financial incentives," he says.
Everyone interviewed by MIT Technology Review agreed that using a tool like P4 should be voluntary, and that it will not suit everyone. "I think it might be helpful for some people," Earp says. "But there are also many people who would not want an artificial intelligence system involved in their decision-making in any way, because these decisions are so important."