Computing Medical Ethics
Building an Advisory System for Medical Dilemmas (1)
Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the difficult task of operationalizing the principles of beneficence, non-maleficence and patient autonomy, and describe how we selected suitable input parameters that we extracted from a training dataset of clinical cases. The first performance results are promising, but an algorithmic approach to ethics also comes with several weaknesses and limitations. Should one really entrust the sensitive domain of clinical ethics to machine intelligence?
Building an Advisory System for Medical Dilemmas (2)
Although machine intelligence is increasingly employed in healthcare, the realm of decision-making in medical ethics remains largely unexplored from a technical perspective. We propose an approach based on fuzzy cognitive maps (FCMs), which builds on Beauchamp and Childress’ prima-facie principles. The FCM’s weights are optimized using a genetic algorithm to provide recommendations regarding the initiation, continuation, or withdrawal of medical treatment. The resulting model approximates the answers provided by our team of medical ethicists fairly well and offers a high degree of interpretability. Possible applications of such a system include informal guidance on medical ethics dilemmas as well as educational purposes.
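The abstract above describes an FCM whose edge weights are optimized by a genetic algorithm to match expert recommendations. A minimal, purely illustrative sketch of that general technique follows; the concept names, parameters, and the toy evolutionary loop are assumptions for exposition, not the actual METHAD implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fcm_infer(weights, inputs, steps=10):
    """Iterate a fuzzy cognitive map for a fixed number of steps.

    weights[i, j] is the causal influence of concept i on concept j,
    constrained to [-1, 1]; activations stay in (0, 1) via the sigmoid.
    """
    a = np.asarray(inputs, dtype=float)
    for _ in range(steps):
        a = sigmoid(a + weights.T @ a)
    return a

def evolve_weights(cases, targets, n, out_idx,
                   pop_size=40, generations=60, rng=None):
    """Toy genetic algorithm: search for a weight matrix whose FCM
    output at concept out_idx matches the target recommendation
    (e.g. a treatment score in [0, 1]) for each training case."""
    rng = np.random.default_rng(rng)
    pop = rng.uniform(-1, 1, size=(pop_size, n, n))

    def fitness(w):
        preds = [fcm_infer(w, c)[out_idx] for c in cases]
        return -np.mean((np.array(preds) - np.asarray(targets)) ** 2)

    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Mutate the elite half to refill the population.
        children = elite + rng.normal(0, 0.1, size=elite.shape)
        pop = np.clip(np.concatenate([elite, children]), -1, 1)

    scores = np.array([fitness(w) for w in pop])
    return pop[int(np.argmax(scores))]
```

One design point this sketch makes tangible: because the learned artefact is a small weight matrix over named concepts, the trained model can be inspected edge by edge, which is the interpretability advantage the abstract claims over black-box approaches.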
How to make algorithms behave ethically
Machine intelligence is being employed for an increasing number of tasks in the healthcare sector. The more encompassing these tasks become, the likelier it is that, besides working with empirical data, algorithms will also come into contact with normative aspects. This is a consequence of the fact that doing medicine is intimately intertwined with the making of value judgments. Will the automation of healthcare therefore result in a new – this time algorithmic – form of medical paternalism? I argue that two design requirements could mitigate this tendency: embedding moral theories into medical AI and equipping it with input mechanisms for individual and situational preferences and values. I analyse how well consequentialist, deontological, and virtue-ethical theories are suited for algorithmic implementation and propose principlism as a compromise solution. Finally, I sketch how value input can be realised on this basis.
The Future of AI-Based Clinical Ethics Consultation
Hein, A.; Meier, L. J. (2026): Data Science in Clinical Ethics Consultation: Analytics, Chatbots, and Specialized Decision Support Systems. In: Handbook of Digital and Experimental Methods in Bioethics, edited by S. Salloch, K. B. Francis, and B. D. Earp. Berlin: Springer [forthcoming]
Beauchamp and Childress’ Principles in the Era of AI
In November 2022 ChatGPT rang in the age of generative AI. Since then, conversational AI bots have been developing at an enormous pace. Unless specifically prompted otherwise, the models often default to principlism when analysing cases in medical ethics. We reflect on the journey of Beauchamp and Childress’ influential framework from an ‘analogue’ methodology into the era of artificial intelligence, discuss why the approach lends itself so well to automation, and provide an outlook on what the digital future of principlism might look like.
Using AI for medical dilemmas: A Reply to Our Critics
Can machine intelligence do clinical ethics? And if so, would applying it to actual medical cases be desirable? In a recent target article (Meier et al. 2022), we described the piloting of our advisory algorithm METHAD. Here, we reply to commentaries published in response to our project. The commentaries fall into two broad categories: concrete criticism that concerns the development of METHAD; and the more general question as to whether one should employ decision-support systems of this kind—the debate we set out to ignite with our target article.
Are Large Language Models or Fuzzy Cognitive Maps Better at Doing Medical Ethics?
Meier, L. J. (2026): Large Language Models versus Fuzzy Cognitive Maps for Solving Moral Dilemmas. Croatian Journal of Philosophy [forthcoming]
Which is better at doing medical ethics: conversational artificial intelligence bots like ChatGPT or tools based on fuzzy cognitive maps? The article compares the performance of chatbots that rely on large language models to that of our own METHAD algorithm. While both tools approach dilemmas in medical ethics through the lens of Beauchamp and Childress’ mid-level principles, ChatGPT and METHAD differ considerably in the format of their inputs and outputs, in their interpretability, and in the kinds of mistakes that they make. An ideal advisory algorithm would combine their characteristics.
ChatGPT’s Mistakes When Confronted with Clinical Dilemmas
In their Target Article, Rahimzadeh et al. (2023) discuss the virtues and vices of employing ChatGPT in ethics education for healthcare professionals. To this end, they confront the chatbot with a moral dilemma and analyse its response. In interpreting the case, ChatGPT relies on Beauchamp and Childress’ four prima-facie principles: beneficence, non-maleficence, respect for patient autonomy, and justice. While the chatbot’s output appears admirable at first sight, it is worth taking a closer look: ChatGPT not only misses the point when applying the principle of non-maleficence; its response also fails, in several places, to honour patient autonomy – a flaw that should be taken seriously if large language models are to be employed in ethics education. I therefore subject ChatGPT’s reply to detailed scrutiny and point out where it went astray.
