Currently submitted to: JMIR Mental Health
Date Submitted: Mar 15, 2024
Open Peer Review Period: Mar 15, 2024 - May 10, 2024
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Patient Perspectives on AI for Mental Health - With Great [Computing] Power, Comes Great Responsibility: A Cross-sectional Public Survey
ABSTRACT
Background:
The application of artificial intelligence (AI) to health and healthcare is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic domains, including radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health solutions, but broader perspectives on AI for mental health remain under-explored.
Objective:
To understand public perceptions regarding potential benefits of AI, concerns, comfort with AI accomplishing various tasks, and values related to AI, all pertaining to mental health.
Methods:
We conducted a one-time cross-sectional survey with a nationally representative sample of 500 United States-based adults. Participants provided structured responses on their perceived benefits of, concerns about, comfort with, and values regarding AI related to mental health. They could also add free-text responses to elaborate on their concerns and values.
Results:
A plurality of participants (49.3%) believed AI may be beneficial for mental healthcare, but this perspective differed by socio-demographic variables (p<0.05). Specifically, Black participants (OR=1.76) and those with lower health literacy (OR=2.16) perceived AI to be more beneficial, while female participants (OR=0.68) perceived AI to be less beneficial. Participants endorsed concerns about the use of AI for mental health regarding its accuracy, possible unintended consequences such as misdiagnosis, the confidentiality of their information, and loss of connection with their health professional. Over 80% of participants also valued being able to understand the individual factors driving their risk, confidentiality, and autonomy as these pertained to the use of AI for their mental health. When asked who was responsible for a misdiagnosis of a mental health condition made using AI, 81.6% of participants held the health professional responsible. Qualitative results revealed similar concerns related to the accuracy of AI and how its use may affect the confidentiality of their information.
Conclusions:
Future work involving the use of AI for mental health should investigate strategies for conveying the level of AI's accuracy, the factors that drive risk, and how data are kept confidential, so that patients may work with their health professionals to determine when AI may be beneficial. It will also be important in a mental health context to ensure the patient-health professional relationship is preserved when AI is utilized.
Clinical Trial: Not applicable
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.