TY - JOUR
T1 - Evaluating segment anything model (SAM) on MRI scans of brain tumors
AU - Ali, Luqman
AU - Alnajjar, Fady
AU - Swavaf, Muhammad
AU - Elharrouss, Omar
AU - Abd-alrazaq, Alaa
AU - Damseh, Rafat
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/12
Y1 - 2024/12
N2 - Addressing the challenge of automatically segmenting anatomical structures from brain images has been a long-standing problem, attributed to subject- and image-based variations and constraints in available data annotations. The Segment Anything Model (SAM), developed by Meta, is a foundational model trained to provide zero-shot segmentation outputs with or without interactive user inputs, demonstrating notable performance on various objects and image domains without explicit prior training. This study evaluated SAM’s performance in brain tumor segmentation using two publicly available Magnetic Resonance Imaging (MRI) datasets. The study analyzed SAM’s standalone segmentation as well as its performance when provided user interaction through point prompts and bounding box inputs. SAM exhibited versatility across configurations and datasets, with the bounding box consistently outperforming others in achieving superior localized precision, with average Dice scores of 0.68 for TCGA and 0.56 for BRATS, along with average IoU values of 0.89 and 0.65, respectively, especially for tumors with low-to-medium curvature. Inconsistencies were observed, particularly in relation to variations in tumor size, shape, and textural features. The conclusion drawn from the study is that while SAM can automate medical image segmentation, further training and careful implementation are necessary for diagnostic purposes, especially with challenging cases such as MRI scans of brain tumors.
UR - http://www.scopus.com/inward/record.url?scp=85204300363&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85204300363&partnerID=8YFLogxK
U2 - 10.1038/s41598-024-72342-x
DO - 10.1038/s41598-024-72342-x
M3 - Article
C2 - 39289395
AN - SCOPUS:85204300363
SN - 2045-2322
VL - 14
JO - Scientific Reports
JF - Scientific Reports
IS - 1
M1 - 21659
ER -