We focus on out-of-distribution (OOD) detection, a fundamental mechanism for ensuring the reliability of AI systems in medical imaging. Our goal is to develop and evaluate methods that enable models to reliably flag unknown or irregular inputs, which is crucial for patient safety.
Our approach proceeds in two steps. First, we create domain-specific benchmarks to systematically evaluate existing OOD detection methods. Second, building on these results, we refine promising approaches originally developed for classification and transfer them to image segmentation.
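To make the transfer step concrete, here is a minimal sketch of what carrying a classification OOD baseline over to segmentation can look like, using maximum softmax probability (MSP) applied per pixel and aggregated to an image-level score. The quantile-based aggregation, the function names, and the toy inputs are illustrative assumptions for this sketch, not the specific methods or results of the work described above.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability, a standard classification OOD baseline.
    Higher values indicate more in-distribution. logits: (..., num_classes)."""
    return softmax(logits).max(axis=-1)

def segmentation_ood_score(logits, quantile=0.05):
    """Per-pixel MSP aggregated to a single image-level score.
    logits: (H, W, num_classes). Using a low quantile (i.e., looking at the
    least-confident pixels) is one illustrative aggregation choice among many."""
    pixel_conf = msp_score(logits)  # (H, W) per-pixel confidence map
    return np.quantile(pixel_conf, quantile)

# Toy usage: random logits stand in for a segmentation network's output.
rng = np.random.default_rng(0)
logits = rng.normal(size=(128, 128, 4))  # hypothetical 4-class segmentation
score = segmentation_ood_score(logits)
print(f"image-level score (higher = more in-distribution): {score:.3f}")
```

In practice, an image would be flagged as OOD when its aggregated score falls below a threshold calibrated on in-distribution validation data; the choice of aggregation rule is itself a design decision a benchmark can evaluate.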