
An Australian euthanasia campaigner has proposed an AI system to assess whether people seeking assisted dying have the mental capacity to choose it
Philip Nitschke, the inventor of the controversial Sarco suicide pod, stated that artificial intelligence might one day take the place of psychiatrists in evaluating whether those seeking assisted dying are mentally fit to make the decision, as reported by Euronews on Thursday.
The Sarco, which is short for sarcophagus, is a 3D-printed capsule that is designed for a single person to enter, lie down, and press a button. The device quickly reduces the oxygen level and fills the capsule with nitrogen, leading to death by hypoxia.
Nitschke, an Australian euthanasia campaigner and the creator of the pod, said that AI could determine who has the “mental capacity” to end their own life. He told the media that doctors shouldn’t be “going around giving you permission or not to die” and that the choice should be up to those “of sound mind.”
In countries where assisted dying is allowed, psychiatrists usually assess whether a person is mentally capable, though this practice is limited and highly controversial. Nitschke said that the process is often inconsistent.
“I’ve seen many cases where the same patient, seeing three different psychiatrists, gets four different answers,” he said.
He has proposed an AI system that uses a conversational avatar to evaluate capacity. Users would “sit there and discuss the issues” raised by the avatar, after which it would decide whether they may proceed. If the AI determines that a person is of sound mind, the Sarco pod would be activated, opening a 24-hour window in which to go ahead; after that, the assessment must be repeated. Nitschke said that early versions of the software are working, though they have not been independently verified.
The first and so far only use of the Sarco pod, in Switzerland in September 2024, caused international outrage. Swiss authorities arrested several people, including the president of the assisted dying group The Last Resort, and said that the device violated Swiss law, which permits assisted suicide only under strict conditions.
Nitschke’s proposal has reignited the debate about the role of AI in life-and-death decisions. Last year, OpenAI updated ChatGPT after an internal review found that over a million users had revealed suicidal thoughts to the chatbot. Psychiatrists have expressed concerns that prolonged AI interactions could contribute to delusions and paranoia, a phenomenon sometimes called “AI psychosis.”