Articles

Oct 22, 2025

Development of AI Models for Detecting Label Errors in Radiographs


We have developed two deep learning models that detect and correct, with high accuracy, labeling errors in body-part and projection/orientation information in large-scale radiograph datasets.

Paper

Deep learning models for radiography body-part classification and chest radiograph projection/orientation classification: a multi-institutional study

European Radiology

10.1007/s00330-025-12053-7

Author's Comments

Label information attached to images is crucial in the development of medical AI and daily clinical practice. However, as datasets grow larger, the inclusion of incorrect labels due to human input errors becomes unavoidable. The technology we developed automatically and efficiently detects these errors. We believe that utilizing these models will maintain the quality of medical data and contribute to more reliable AI research and the construction of safer clinical environments.

Paper Overview

In recent years, research on artificial intelligence (AI) in the medical field has gained momentum, leading to increasingly large image datasets used for training. However, within these vast amounts of data, information (labels) regarding the imaged body part or image orientation is often registered incorrectly. Labeling errors not only reduce the training efficiency of AI but can also cause confusion in clinical settings. In this study, using large-scale data collected from multiple medical institutions, we developed two models that automatically identify the body part in radiographs and the projection or rotation of chest radiographs, and verified their utility.

Paper Details

In this study, we utilized a total of over 860,000 radiographs to build two deep learning models. The first model classifies images into seven body-part categories, such as "Head," "Neck," "Chest," "Abdomen," "Pelvis," and "Extremities." The second model determines, for chest radiographs, the projection direction (whether the image was taken from the back, front, or side) and the image rotation (orientation).
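The core use of such classifiers is comparing their predictions against the labels stored in the database. A minimal sketch of that idea, not the authors' implementation: any trained classifier can stand in for `predict`, and records whose stored label disagrees with the model's prediction are flagged for review.

```python
def flag_label_errors(records, predict):
    """records: list of (image, stored_label) pairs.
    predict: a function mapping an image to a predicted label
    (a stand-in here for the trained deep learning model).
    Returns indices of records flagged as possible label errors."""
    return [i for i, (image, label) in enumerate(records)
            if predict(image) != label]

# Toy usage: a stub predictor in which the image name encodes the true label.
records = [("chest_img", "Chest"), ("head_img", "Chest"), ("pelvis_img", "Pelvis")]
predict = lambda image: image.split("_")[0].capitalize()
print(flag_label_errors(records, predict))  # → [1]
```

Flagged records can then be corrected automatically or routed to a human reviewer, depending on the model's confidence.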

In validation on external institutional data not used for training, the body-part classification model achieved an accuracy of 98.5%. The chest radiograph model was similarly accurate, achieving 98.5% for projection classification and over 99.9% for rotation classification. The Area Under the Curve (AUC), a standard performance indicator, was 0.99 or higher for almost all categories. Because this technology allows incorrect information in large-scale databases to be corrected efficiently, it is expected to improve the quality of future medical research and clinical operations.
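For readers unfamiliar with the two metrics reported above, here is a minimal sketch (not the authors' evaluation code): accuracy is the fraction of correct predictions, and binary AUC can be computed directly from its definition as the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one.

```python
def accuracy(predicted, reference):
    # Fraction of predictions that match the reference labels.
    return sum(p == r for p, r in zip(predicted, reference)) / len(reference)

def binary_auc(scores, labels):
    # AUC = P(score of a random positive > score of a random negative),
    # counting ties as half a win.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(binary_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # → 1.0
```

In the multi-class setting of the paper, AUC is typically reported per category in one-vs-rest fashion, which reduces each category to the binary case sketched here.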