MMC
Making Meta-Data Count
Machine learning has shown promising results in medical image diagnosis, at times with claims of expert-level performance. The availability of large public datasets has shifted the interest of the medical community to high-performance algorithms. However, little attention is paid to the quality of the data or annotations. Algorithms with high reported performance have been shown to suffer from overfitting or shortcuts, i.e., spurious correlations between artifacts in images and diagnostic labels. Examples include pen marks in skin lesion classification, patient position in detection of COVID-19, and chest drains in pneumothorax classification. Performance may appear high when training and evaluating on data with shortcuts, but degrades when the shortcut is removed, because the algorithm has not learned to generalize from the image features actually related to the diagnosis.
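To make this concrete, shortcut reliance can be exposed by stratifying the test set on the suspected artifact. The sketch below is a minimal illustration, assuming a fitted scikit-learn-style binary classifier `model` and a hypothetical meta-data flag `has_drain` for the pneumothorax example; these names are placeholders, not part of the project.

```python
# Minimal sketch: measuring shortcut reliance via stratified evaluation.
# Assumes a fitted binary classifier `model`, arrays `X_test` and `y_test`,
# and a boolean meta-data flag `has_drain` marking images with chest
# drains -- all hypothetical names for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

scores = model.predict_proba(X_test)[:, 1]  # predicted pneumothorax probability

# Overall AUC can look strong when the shortcut is present in both
# training and test data...
auc_all = roc_auc_score(y_test, scores)

# ...but evaluating only on images *without* the artifact removes the
# shortcut and reveals how well the model uses genuine disease features.
mask = ~np.asarray(has_drain, dtype=bool)
auc_no_drain = roc_auc_score(np.asarray(y_test)[mask], scores[mask])

print(f"AUC (all images):      {auc_all:.3f}")
print(f"AUC (no chest drains): {auc_no_drain:.3f}")
```

A large gap between the two numbers suggests the model leans on the artifact rather than on disease-related features.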
Our goal is to redefine how meta-data is used and thus improve the robustness of algorithms. We plan to:
- investigate which shortcuts (based on demographics or image artifacts) may occur and how they affect the performance and fairness of algorithms ⚖️.
- investigate meta-data-aware methods to avoid learning biases or shortcuts ⚔️🛡 (one illustrative approach is sketched after this list).
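As one example of what a meta-data-aware method could look like, the sketch below uses group-balanced sampling: drawing training samples inversely to the frequency of each (label, meta-data group) pair so the model cannot rely on group-correlated shortcuts. This is a generic illustration under stated assumptions, not the project's method; `labels` and `groups` are hypothetical lists aligned with the training set.

```python
# Minimal sketch of a meta-data-aware strategy: group-balanced sampling.
# Sampling inversely to (label, meta-data group) frequency discourages the
# model from exploiting shortcuts correlated with the group variable
# (e.g., sex, scanner type, or presence of a chest drain).
# `labels` and `groups` are hypothetical lists, one entry per training image.
from collections import Counter

import torch
from torch.utils.data import WeightedRandomSampler

pairs = list(zip(labels, groups))   # one (label, group) pair per image
counts = Counter(pairs)             # frequency of each pair
weights = torch.tensor([1.0 / counts[p] for p in pairs], dtype=torch.double)

# Draw samples so every (label, group) pair is seen roughly equally often.
sampler = WeightedRandomSampler(weights, num_samples=len(weights),
                                replacement=True)
# train_loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
```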
Several students have contributed work to this project:
- Trine Naja Eriksen and Cathrine Damgaard developed a chest drain detector trained on their own non-expert annotations, which generalizes well to expert labels.
- Paula Victoria Menshikoff and Katarina Kraljevic investigated shortcut learning across different demographic attributes for chest X-ray classification.
People
Amelia Jiménez-Sánchez, Théo Sourget, Veronika Cheplygina.
Funding
DFF (Independent Research Fund Denmark) Inge Lehmann grant 1134-00017B