LGBTQ+ teenagers serving as co-designers identified key limitations in an AI-powered sexual health chatbot's inclusivity, tone, and interface, leading to targeted content updates and interface refinements that enhanced the tool for broader use.
Key Findings
Results
LGBTQ+ teenagers expressed both skepticism and curiosity about AI's role in delivering sexual health information.
A Youth Advisory Council comprising 15 LGBTQ+ teenagers independently explored and interacted with Roo, the AI-powered sexual education chatbot from Planned Parenthood Federation of America (PPFA).
Feedback was gathered through a week-long asynchronous discussion on a private Discord server.
The research team posed open-ended questions prompting participants to reflect on Roo's inclusivity, usability, and content priorities.
Participants offered critical insights on the chatbot's language, trustworthiness, and relevance.
Teenagers identified key limitations in Roo's inclusivity, tone, and interface.
Transgender-specific content was judged insufficient, conversational depth was limited, and stigma reduction was flagged as needing improvement.
These findings emerged from a rapid qualitative analysis organized around the study's research questions.
Participatory feedback from LGBTQ+ teenagers directly informed targeted updates to the Roo chatbot implemented by Planned Parenthood Federation of America.
Specific changes included expanding LGBTQ+ affirming content.
Language was revised to eliminate gendered assumptions.
Concrete statistics and contextualized examples were incorporated to reduce stigma.
Clearer disclosures about Roo's AI capabilities and limitations were added.
These updates were intended to enhance Roo for broader use beyond the study context.
Methods
The study was embedded within Sharing Health Education Resources (SHER), a larger hybrid effectiveness-implementation trial of a digital HIV prevention intervention for LGBTQ+ teenagers.
The researchers collaborated with PPFA to create a customized instance of Roo for integration into this study.
The Youth Advisory Council comprised 15 LGBTQ+ teenagers who served as co-designers rather than only as testers.
Conclusions
Academic and nonprofit collaborations using participatory methods offer a scalable model for integrating community voice into the development of inclusive AI-enabled health technologies.
The partnership between a nonprofit (PPFA) and an academic team is described as a model for real-world digital health tool enhancement.
LGBTQ+ teenagers served not only as testers but as co-designers, shaping the chatbot's evolution.
The study surfaced broader lessons about trust, AI literacy, and health equity.
The authors conclude that 'marginalized teenagers possess the critical insights needed to meaningfully shape AI-enabled health interventions when provided with structured opportunities for engagement.'
Background
Many existing digital sexual health tools fail to meet the needs of LGBTQ+ teenagers, who often lack access to inclusive, affirming resources.
AI-enabled chatbots have emerged as promising tools to address these gaps.
Concerns remain around bias, usability, and trustworthiness of AI tools, particularly for queer and transgender teenagers.
Few studies have systematically engaged LGBTQ+ teenagers in the co-design of AI-powered sexual health interventions prior to this work.
Participatory design approaches centered on marginalized teenagers are described as critical to ensuring tools are relevant, trustworthy, and equitable.
What This Means
This research describes a study in which 15 LGBTQ+ teenagers were recruited to a Youth Advisory Council to test and provide feedback on Roo, an AI-powered sexual health chatbot developed by Planned Parenthood. The teenagers interacted with the chatbot on their own and then shared their thoughts over a week-long online discussion. Researchers analyzed their feedback to identify what was working well and what needed improvement, focusing on how inclusive, trustworthy, and useful the chatbot was for LGBTQ+ youth.
The teenagers identified several important problems with the chatbot, including a lack of content specifically relevant to transgender people, language that assumed heterosexual or cisgender identities, limited conversational depth, and a need for better information to reduce stigma around topics like HIV. They also expressed mixed feelings about trusting an AI with sensitive health questions. Importantly, their feedback was not just collected and filed away: Planned Parenthood used it to make real changes to the chatbot, including adding more LGBTQ+-affirming content, removing gendered assumptions from the language, adding statistics and examples to reduce stigma, and being more transparent about what the AI can and cannot do.
This research suggests that involving marginalized young people directly in the design and improvement of digital health tools leads to more relevant and trustworthy products. The partnership between a nonprofit organization and an academic research team is presented as a model that other organizations could follow to ensure that AI health tools genuinely serve the communities they are intended to help, particularly communities that have historically been underserved or excluded from mainstream health education.
Liem W, Casline E, Lorenzo J, Gordon J, Avila A, Taylor A, et al. (2026). Enhancing LGBTQ+ Inclusivity in an AI-Powered Sexual Health Chatbot: User-Centered Design Approach Through a Nonprofit and Academic Partnership. Journal of Medical Internet Research. https://doi.org/10.2196/78621