
Explainable AI framework for improved Thalassemia mental health classification and feature selection.

TL;DR

The proposed AMSE-DFI framework, trained on SF-36 data from 356 Bangladeshi Thalassemia patients, outperformed conventional approaches to mental health classification, with LIME-based explainable AI providing interpretable insights into key predictors such as total SF score, role emotional, and physical health summary.

Key Findings

The AMSE-DFI framework identified total SF score, role emotional, and physical health summary as key predictors of mental health outcomes in Thalassemia patients.

  • Feature selection dynamically integrated mutual information, ensemble learning, and graph attention mechanisms.
  • These features collectively reflect both physical and psychological well-being.
  • The framework was designed to capture intricate feature interdependencies often missed by traditional approaches.
  • Data were derived from the SF-36 health survey administered to 356 Bangladeshi Thalassemia patients.
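The mutual-information component of the feature selection described above can be sketched in a few lines. This is a minimal illustration only: the actual AMSE-DFI framework dynamically combines mutual information with ensemble learning and graph attention, and the synthetic data and feature names below (`total_sf_score`, `role_emotional`, etc.) are stand-ins, not the study's dataset.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)

# Synthetic stand-in for SF-36 subscale scores; names are illustrative,
# not the study's actual feature set.
feature_names = ["total_sf_score", "role_emotional",
                 "physical_health_summary", "noise_1", "noise_2"]
X = rng.normal(size=(356, 5))
# Label depends on the first two features, so they should score highest.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

mi = mutual_info_classif(X, y, random_state=0)
ranked = sorted(zip(feature_names, mi), key=lambda t: -t[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

In a pipeline like the one described, such a ranking would be one signal among several; the framework's novelty lies in fusing it with ensemble and graph-attention signals rather than using it alone.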

The AMSE-DFI model outperformed conventional machine learning and statistical approaches in classifying mental health challenges in Thalassemia patients.

  • The model demonstrated 'strong predictive reliability and robust generalization' compared to traditional approaches.
  • Traditional statistical and machine learning approaches were noted to 'fail to capture the complex, nonlinear relationships between psychosocial and clinical variables.'
  • SMOTE (Synthetic Minority Over-sampling Technique) was applied to effectively address class imbalance in the clinical dataset.
  • The framework was evaluated on SF-36 health survey data from 356 Bangladeshi patients.

LIME-based explainable AI provided clear and interpretable insights into how key features affect individual patient outcomes in Thalassemia mental health classification.

  • Local Interpretable Model-Agnostic Explanations (LIME) was used as the explainability method.
  • LIME made the model 'more understandable and actionable for clinicians.'
  • The explainability component was described as offering 'clear, interpretable insights into how key features affect individual patient outcomes.'
  • This addressed a noted limitation of traditional approaches, which were said to 'limit both accuracy and interpretability.'
  • The framework was positioned as a 'practical, transparent tool to support early detection and personalized management of mental health challenges in Thalassemia care.'

Mental health challenges in Thalassemia patients are frequently overlooked despite their significant impact on quality of life.

  • The study focused on Bangladeshi patients (n=356) as the study population.
  • The SF-36 health survey was used to capture psychosocial and clinical variables.
  • Both psychosocial and clinical variables were identified as relevant to mental health outcomes in this population.
  • The authors noted that conventional approaches fail to capture 'complex, nonlinear relationships between psychosocial and clinical variables.'

The study used a dataset of 356 Bangladeshi Thalassemia patients assessed with the SF-36 health survey, with SMOTE applied to handle class imbalance.

  • Sample size was 356 Bangladeshi Thalassemia patients.
  • The SF-36 (Short Form-36) health survey instrument was used for data collection.
  • SMOTE was applied and described as 'effectively addressing class imbalance in the clinical data.'
  • The dataset included both physical and psychological well-being variables as captured by the SF-36.


Citation

Ayon S, Al Mamun A, Hossain M, Alamro W, Allawi Y, Prova N, et al. (2026). Explainable AI framework for improved Thalassemia mental health classification and feature selection. PLOS ONE. https://doi.org/10.1371/journal.pone.0341168