
Users’ Perceptions and Trust in AI in Direct-to-Consumer mHealth: Qualitative Interview Study

JMIR Mhealth Uhealth. 2025 May 20;13:e64715. doi: 10.2196/64715.

ABSTRACT

BACKGROUND: The increasing use of direct-to-consumer artificial intelligence (AI)-enabled mobile health (AI-mHealth) apps presents an opportunity for more effective health management and monitoring, as well as expanded mobile health (mHealth) capabilities. However, AI’s early developmental stage has prompted concerns related to trust, privacy, informed consent, and bias, among others. While some of these concerns have been explored in early stakeholder research on AI-mHealth, the broader landscape of considerations that hold ethical significance for users remains underexplored.

OBJECTIVE: Our aim was to document and explore the perspectives of individuals who reported previous experience using mHealth apps, including their attitudes toward, and ethically salient considerations regarding, direct-to-consumer AI-mHealth apps.

METHODS: As part of a larger study, we conducted semistructured interviews via Zoom with self-reported users of mHealth apps (N=21). Interviews consisted of a series of open-ended questions concerning participants’ experiences, attitudes, and values relating to AI-mHealth apps and were conducted until topic saturation was reached. We collaboratively reviewed the interview transcripts and developed a codebook consisting of 37 codes describing recurring or otherwise noteworthy sentiments that inductively arose from the data. A single coder coded all transcripts, and the entire team contributed to conventional qualitative analysis.

RESULTS: Our qualitative analysis yielded 3 major categories and 9 subcategories encompassing participants’ perspectives. Participants described attitudes toward the impact of AI-mHealth on users’ health and personal data (ie, influences on health awareness and management, value for mental vs physical health use cases, and the inevitability of data sharing), influences on their trust in AI-mHealth (ie, endorsements and guidance from health professionals or health or regulatory organizations, attitudes toward technology companies, and reasonable but not necessarily explainable output), and their preferences relating to the amount and type of information that is shared by AI-mHealth apps (ie, the types of data that are collected, future uses of user data, and the accessibility of information).

CONCLUSIONS: This paper provides additional context for a number of concerns previously posited or identified in the AI-mHealth literature, including trust, explainability, and information sharing, and reveals considerations that have not been previously documented, namely, users’ differentiation between the value of AI-mHealth for physical and mental health use cases and their willingness to extend empathy to nonexplainable AI. To the best of our knowledge, this study is the first to apply an open-ended, qualitative descriptive approach to explore the perspectives of end users of direct-to-consumer AI-mHealth apps.

PMID:40392584 | DOI:10.2196/64715
