ChatGPT & LLMs

Can large language models predict patient preferences?

Automatic Patient Preference Predictors are algorithms that use statistically predictive demographic characteristics to infer an individual’s treatment preferences in health care. Recently, this initial proposal has been expanded into a personalised version: instead of relying on demographic data, large language models are to extract values from a variety of sources generated by individual patients. I sketch two potential problems with the two arguably most transformative types of data source on which such an algorithm would be based: the difficulty of distinguishing between general and idiosyncratic statements in online activities, and the tension between incentivising users and the faithfulness of the preferences thus elicited. The accuracy of personalised predictor algorithms might be much lower than currently expected.


Associated Publication:

Meier, L. J. (2024): Predicting Patient Preferences with Artificial Intelligence: The Problem of the Data Source. The American Journal of Bioethics 24 (7): 48-50. DOI: 10.1080/15265161.2024.2353832

Can ChatGPT solve ethical dilemmas?

In their Target Article, Rahimzadeh et al. (2023) discuss the virtues and vices of employing ChatGPT in ethics education for healthcare professionals. To this end, they confront the chatbot with a moral dilemma and analyse its response. In interpreting the case, ChatGPT relies on Beauchamp and Childress’ four prima facie principles: beneficence, non-maleficence, respect for patient autonomy, and justice. While the chatbot’s output appears admirable at first sight, it is worth taking a closer look: ChatGPT not only misses the point when applying the principle of non-maleficence; its response also fails, in several places, to honour patient autonomy – a flaw that should be taken seriously if large language models are to be employed in ethics education. I therefore subject ChatGPT’s reply to detailed scrutiny and point out where it went astray.


Associated Publication:

Meier, L. J. (2023): ChatGPT’s Responses to Dilemmas in Medical Ethics: The Devil Is in the Details. The American Journal of Bioethics 23 (10): 63-65.