Synthetic Data Generation and Evaluation Techniques for Classifiers in Data Starved Medical Applications

Wan D. Bae, Shayma Alkobaisi, Matthew Horak, Siddheshwari Bankar, Sartaj Bhuvaji, Sungroul Kim, Choon Sik Park

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

With their ability to find solutions among complex relationships of variables, machine learning (ML) techniques are becoming more applicable to various fields, including health risk prediction. However, prediction models are sensitive to the size and distribution of the data they are trained on, and ML algorithms rely heavily on vast quantities of training data to make accurate predictions. Ideally, the dataset should have an equal number of samples for each label so that the model makes predictions based on the input data rather than on the distribution of the training data. In medical applications, class imbalance is a common issue because the occurrence of a disease or risk episode is often rare. This leads to training datasets in which healthy cases outnumber unhealthy ones, resulting in biased prediction models that struggle to detect the minority, unhealthy cases effectively. This paper addresses the problem of class imbalance under training-data scarcity by improving the quality of generated data. We propose an incremental synthetic data generation system that improves data quality over iterations by gradually adjusting to the data distribution, thus avoiding overfitting in classifiers. Through extensive experimental assessments on real asthma patients' datasets, we demonstrate the efficiency and applicability of our proposed system for individual-based health risk prediction models. Incremental SMOTE methods were compared to the original SMOTE variants as well as various architectures of autoencoders. Our incremental data generation system enhances selected state-of-the-art SMOTE methods, yielding sensitivity improvements for deep transfer learning (TL) classifiers ranging from 4.01% to 7.79%. Compared with the performance of TL without oversampling, the improvement achieved by the incremental SMOTE methods ranged from 27.18% to 40.97%. These results highlight the effectiveness of our technique in predicting asthma risk and its applicability to imbalanced, data-starved medical contexts.
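The oversampling family the abstract builds on can be illustrated with a minimal SMOTE-style sketch: each synthetic minority sample is an interpolation between a real minority sample and one of its k nearest minority neighbors. This is the classic SMOTE idea only, not the paper's incremental variant; the function name and toy data below are illustrative.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    between each chosen sample and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude each sample from its own neighbors
    k = min(k, n - 1)
    neighbors = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)              # pick a random minority sample
        j = neighbors[i, rng.integers(k)]  # pick one of its k nearest neighbors
        gap = rng.random()               # interpolation factor in [0, 1]
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.stack(synthetic)

# Toy imbalanced setting: 6 minority samples in 2-D.
X_min = np.array([[0., 0.], [1., 0.], [0., 1.],
                  [1., 1.], [0.5, 0.5], [0.2, 0.8]])
X_new = smote_oversample(X_min, n_new=10, k=3, rng=0)
print(X_new.shape)  # (10, 2)
```

Because every synthetic point lies on a segment between two real minority points, the generated samples stay inside the minority region rather than being drawn at random, which is what lets oversampling rebalance classes without injecting arbitrary noise.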

Original language: English
Pages (from-to): 16584-16602
Number of pages: 19
Journal: IEEE Access
Volume: 13
DOIs
Publication status: Published - 2025

Keywords

  • Autoencoders
  • class imbalance problem
  • control coefficient
  • data starved contexts
  • rare event prediction
  • synthetic minority oversampling technique
  • transfer learning

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering
