Beneficent dehumanization: Employing artificial intelligence and carebots to mitigate shame-induced barriers to medical care

Amitabha Palmer, David Schwan

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

As costs decline and technology inevitably improves, current trends suggest that artificial intelligence (AI) and a variety of “carebots” will increasingly be adopted in medical care. Medical ethicists have long expressed concerns that such technologies remove the human element from medicine, resulting in dehumanization and depersonalized care. However, we argue that where shame presents a barrier to medical care, it is sometimes ethically permissible and even desirable to deploy AI/carebots because (i) dehumanization in medicine is not always morally wrong, and (ii) dehumanization can sometimes better promote and protect important medical values. Shame is often a consequence of the human-to-human element of medical care and can prevent patients from seeking treatment and from disclosing important information to their healthcare provider. Conditions and treatments that are shame-inducing offer opportunities for introducing AI/carebots in a manner that removes the human element of medicine but does so ethically. We outline numerous examples of shame-inducing interactions and how they are overcome by implementing existing and expected developments of AI/carebot technology that remove the human element from care.

Original language: English (US)
Pages (from-to): 187-193
Number of pages: 7
Journal: Bioethics
Volume: 36
Issue number: 2
DOIs
State: Published - Feb 2022

ASJC Scopus subject areas

  • Health(social science)
  • Philosophy
  • Health Policy
