MMC

Making Meta-Data Count

Machine learning has shown promising results in medical image diagnosis, at times with claims of expert-level performance. The availability of large public datasets has shifted the interest of the medical community towards high-performance algorithms, but little attention is paid to the quality of the data or annotations. Algorithms with high reported performance have been shown to suffer from overfitting or shortcuts, i.e., spurious correlations between artifacts in images and diagnostic labels. Examples include pen marks in skin lesion classification, patient position in detection of COVID-19, and chest drains in pneumothorax classification. Performance may appear high when training and evaluating on data containing a shortcut, but degrades once the shortcut is removed, because the algorithm has not learned the actual features related to the diagnosis.
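
To make this failure mode concrete, here is a minimal sketch of how stratified evaluation can expose a shortcut. Everything in it is hypothetical: the data is synthetic, a boolean "chest drain" flag stands in for real image meta-data, and a logistic regression stands in for an image classifier; this is an illustration, not the project's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for image features: column 0 encodes an artifact
# (a hypothetical "chest drain" flag) that co-occurs with the positive
# label far more often than chance; column 1 weakly encodes the disease.
n = 4000
y = rng.integers(0, 2, n)
drain = rng.random(n) < np.where(y == 1, 0.9, 0.1)
X = rng.normal(size=(n, 10))
X[:, 0] += 2.0 * drain
X[:, 1] += 0.5 * y

X_tr, y_tr = X[:2000], y[:2000]
X_te, y_te, d_te = X[2000:], y[2000:], drain[2000:]

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Overall AUC looks strong because the shortcut itself is predictive ...
print(f"overall AUC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.2f}")

# ... but stratifying the test set by the meta-data flag exposes it:
# within each stratum the artifact is constant, so only genuine disease
# features can separate the classes, and the AUC drops.
for flag in (True, False):
    m = d_te == flag
    auc = roc_auc_score(y_te[m], clf.predict_proba(X_te[m])[:, 1])
    print(f"drain={flag}: AUC={auc:.2f}")
```

The gap between the overall AUC and the within-stratum AUCs is exactly the kind of signal that meta-data makes it possible to measure.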

Our goal is to redefine how meta-data is used and thus improve the robustness of algorithms. We plan to:

  • investigate which kinds of shortcuts (based on demographics or image artifacts) might occur, and how they affect the performance and fairness of algorithms ⚖️.
  • investigate meta-data-aware methods that try to avoid learning biases or shortcuts ⚔️🛡 (see the sketch after this list).
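
As a first illustration of the second direction, the sketch below continues the synthetic setup from above and applies group-balanced resampling: one simple baseline among many possible meta-data-aware methods, not the project's method. The idea is to resample the training set so the artifact flag is decorrelated from the diagnostic label before the model sees the data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Same synthetic setup as in the previous sketch.
n = 4000
y = rng.integers(0, 2, n)
drain = rng.random(n) < np.where(y == 1, 0.9, 0.1)
X = rng.normal(size=(n, 10))
X[:, 0] += 2.0 * drain
X[:, 1] += 0.5 * y
X_tr, y_tr, d_tr = X[:2000], y[:2000], drain[:2000]
X_te, y_te, d_te = X[2000:], y[2000:], drain[2000:]

# Meta-data-aware training: resample so that every (label, artifact)
# cell is equally represented, breaking the label-artifact correlation.
cells = [np.flatnonzero((y_tr == a) & (d_tr == b))
         for a in (0, 1) for b in (False, True)]
size = min(len(c) for c in cells)  # downsample to the smallest cell
idx = np.concatenate([rng.choice(c, size, replace=False) for c in cells])

clf = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])

# The stratified AUCs are now much closer: the model can no longer
# lean on the artifact to separate the classes.
for flag in (True, False):
    m = d_te == flag
    auc = roc_auc_score(y_te[m], clf.predict_proba(X_te[m])[:, 1])
    print(f"drain={flag}: AUC={auc:.2f}")
```

Balancing by downsampling throws away data, which is why it is only a baseline; reweighting and other meta-data-aware objectives pursue the same decorrelation without discarding samples.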

People

Amelia Jiménez-Sánchez, Veronika Cheplygina.

Funding

DFF (Independent Research Fund Denmark) Inge Lehmann grant 1134-00017B

References


  1. Towards actionability for open medical imaging datasets: lessons from community-contributed platforms for data management and stewardship
    Amelia Jiménez-Sánchez, Natalia-Rozalia Avlona, Dovile Juodelyte, Théo Sourget, Caroline Vang-Larsen, and 2 more authors
    arXiv preprint arXiv:2402.06353, 2024