MHAttnSurv: Multi-head attention for survival prediction using whole-slide pathology images

Link:

https://www.sciencedirect.com/science/article/pii/S0010482523003487

Abstract:

Whole slide image (WSI)-based survival prediction has attracted increasing interest in pathology. Despite this, extracting prognostic information from WSIs remains a challenging task due to their enormous size and the scarcity of pathologist annotations. Previous studies have utilized a multiple instance learning approach to combine information from several randomly sampled patches, but this approach may not be adequate, as different visual patterns may contribute unequally to prognosis prediction. In this study, we introduce a multi-head attention mechanism that allows each attention head to independently explore the utility of various visual patterns on a tumor slide, thereby enabling more comprehensive information extraction from WSIs. We evaluated our approach on four cancer types from The Cancer Genome Atlas database. Our model achieved an average c-index of 0.640, outperforming three existing state-of-the-art approaches for WSI-based survival prediction on these datasets. Visualization of attention maps reveals that the attention heads synergistically focus on different morphological patterns, providing additional evidence for the effectiveness of multi-head attention in survival prediction.
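
To illustrate the idea of multi-head attention pooling over WSI patches, here is a minimal, hypothetical PyTorch sketch: several independent attention heads each compute softmax weights over pre-extracted patch embeddings, and the concatenated per-head summaries are mapped to a single risk score. The feature dimension, head count, hidden size, and risk head below are illustrative assumptions, not the published MHAttnSurv configuration.

```python
# Hedged sketch of multi-head attention pooling for slide-level survival prediction.
# Assumes patch embeddings have already been extracted; all sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadAttentionPooling(nn.Module):
    """Aggregates a bag of patch embeddings with several independent attention heads."""

    def __init__(self, feat_dim=512, num_heads=4, hidden_dim=128):
        super().__init__()
        # One small scoring MLP per head, so each head can attend to different patterns.
        self.score_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1))
            for _ in range(num_heads)
        )
        # Map the concatenated per-head slide summaries to a single (Cox-style) risk score.
        self.risk_head = nn.Linear(feat_dim * num_heads, 1)

    def forward(self, patches):
        # patches: (num_patches, feat_dim) embeddings from one slide
        head_summaries = []
        for score_net in self.score_nets:
            weights = F.softmax(score_net(patches), dim=0)          # (num_patches, 1)
            head_summaries.append((weights * patches).sum(dim=0))   # (feat_dim,)
        slide_repr = torch.cat(head_summaries, dim=0)               # (feat_dim * num_heads,)
        return self.risk_head(slide_repr)                           # scalar risk score


# Example: one slide represented by 1000 patch embeddings of dimension 512.
model = MultiHeadAttentionPooling()
risk = model(torch.randn(1000, 512))
print(risk.shape)  # torch.Size([1])
```

Because each head has its own scoring network, the heads are free to specialize on different morphological patterns, which is the behavior the attention-map visualizations described in the abstract point to.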

Citation:

Shuai Jiang, Arief A. Suriawinata, Saeed Hassanpour, “MHAttnSurv: Multi-Head Attention for Survival Prediction using Whole-Slide Pathology Images”, Computers in Biology and Medicine, 158, 2023.
