Self-supervised Learning to Predict Ejection Fraction using Motion-mode Images
Open access
Date
2023-03-20
Type
- Conference Paper
ETH Bibliography
yes
Abstract
Data scarcity is a fundamental problem, since data lies at the heart of any machine-learning project. For most applications, annotation is an expensive task on top of data collection. The ability to learn from limited labeled data in a sample-efficient manner is therefore critical in data-limited domains such as healthcare. Self-supervised learning (SSL) learns meaningful representations by exploiting structure in unlabeled data, allowing a model to achieve high accuracy on various downstream tasks even with limited annotations. In this work, we extend contrastive learning, an efficient implementation of SSL, to cardiac imaging. We propose to use M(otion)-mode images generated from readily available B(rightness)-mode echocardiograms and design contrastive objectives with structure- and patient-awareness. Experiments on EchoNet-Dynamic show that our proposed model achieves an AUROC score of 0.85 by simply training a linear head on top of the learned representations, and remains robust when the amount of labeled data is reduced.
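The abstract describes deriving M-mode images by sampling scan lines from B-mode echocardiogram videos and using them as views for a structure- and patient-aware contrastive objective. The sketch below illustrates one plausible way to extract such an M-mode image from a cine clip; the generate_mmode function, the line placement through the image center, and the array shapes are illustrative assumptions, not the authors' exact construction.

import numpy as np

def generate_mmode(bmode_cine: np.ndarray, center: tuple, angle_deg: float,
                   half_length: int = 100) -> np.ndarray:
    """Sample one scan line (through `center`, at `angle_deg`) from every frame
    of a B-mode cine and stack the lines over time into an M-mode image.

    bmode_cine: grayscale array of shape (num_frames, height, width).
    Returns an array of shape (2 * half_length + 1, num_frames):
    rows = position along the line, columns = time.
    """
    num_frames, height, width = bmode_cine.shape
    theta = np.deg2rad(angle_deg)

    # Pixel coordinates along the chosen line, clipped to the image bounds.
    offsets = np.arange(-half_length, half_length + 1)
    ys = np.clip(np.round(center[0] + offsets * np.sin(theta)).astype(int), 0, height - 1)
    xs = np.clip(np.round(center[1] + offsets * np.cos(theta)).astype(int), 0, width - 1)

    # One column per frame: the intensity profile along the line at that time step.
    mmode = bmode_cine[:, ys, xs].T  # shape (line_length, num_frames)
    return mmode

if __name__ == "__main__":
    # Toy cine (random noise) standing in for an echocardiogram clip.
    rng = np.random.default_rng(0)
    cine = rng.random((32, 112, 112)).astype(np.float32)

    # Several M-mode images from the same clip (different line angles) could serve
    # as positive views for a contrastive objective.
    views = [generate_mmode(cine, center=(56, 56), angle_deg=a) for a in (0, 45, 90)]
    print([v.shape for v in views])  # [(201, 32), (201, 32), (201, 32)]

Under the paper's setup, views drawn from the same clip, or from clips of the same patient, would plausibly be treated as positives in the contrastive loss, which is where the structure- and patient-awareness mentioned in the abstract comes in.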
Permanent link
https://doi.org/10.3929/ethz-b-000648766
Publication status
published
External links
Publisher
OpenReview
Event
Subject
cardiac imaging; self-supervised learning; contrastive learning; motion-mode image
Organisational unit
09670 - Vogt, Julia / Vogt, Julia