Opportunities and Limits of “Simulated Empathy”: LLMs in Surgical Informed Consent

Large language models (LLMs), such as ChatGPT, could potentially support the informed consent process for surgical interventions, although the full extent of their capabilities remains unclear. A recent study by TU Braunschweig and Medizinische Hochschule Hannover investigates whether advanced AI systems can make medical information more understandable while easing the workload on medical personnel.

However, the concept of “simulated empathy” – in which an AI appears to respond in a caring, human-like way without genuinely feeling anything – raises serious ethical questions. It is therefore crucial to examine whether such imitations of empathy might undermine patient trust or skew clinical decision-making.

The researchers propose a clear-cut solution: use LLMs solely to identify and flag patient anxieties, rather than to mimic genuine emotional involvement. Surgeons and healthcare teams can then engage in authentic, empathetic dialogue while still benefiting from AI-driven analytical strengths, such as clarifying medical terms or spotting signs of patient distress.

Ultimately, the study underscores that AI should not replace human interaction but rather complement it in the consent process. Patients must always be informed when an automated system is involved, to preserve transparency and trust.

View the full paper here:

Large language models for surgical informed consent: an ethical perspective on simulated empathy.