Sri Harsha Boppana, MBBS, MD1; Manaswitha Thota, MD2; Gautam Maddineni, MD3; Sachin Sravan Kumar Komati4; Sai Lakshmi Prasanna Komati, MBBS5; C. David Mintz, MD, PhD6

1Nassau University Medical Center, East Meadow, NY; 2Virginia Commonwealth University, Richmond, VA; 3Florida State University, Cape Coral, FL; 4Florida International University, FL; 5Government Medical College, Ongole, Andhra Pradesh, India; 6Johns Hopkins University School of Medicine, Baltimore, MD

Introduction: Early colorectal polyp detection via colonoscopy reduces cancer incidence, but privacy regulations hinder pooling endoscopic images. Federated learning (FL) enables collaborative training without centralizing raw data. We created an FL framework using the ERCPMPv5 dataset and simulated site‐specific subsets to develop a robust multicenter polyp classifier.

Methods: ERCPMPv5 comprises 796 RGB images and 21 videos from 191 patients (Olympus colonoscope, 368×256 pixels), annotated with Paris, Pit, and JNET classifications and histopathological labels (tubular, villous, tubulovillous, hyperplastic, serrated, inflammatory, adenocarcinoma). We randomly split the images into five nonoverlapping subsets (~160/site), ensuring proportional representation of subtypes and image quality. Each site fine‐tuned a ResNet‐50 backbone (pretrained on ImageNet) for five local epochs per communication round, using categorical cross‐entropy loss, the Adam optimizer (learning rate 1×10⁻⁴), and data augmentation (rotations ±10°, brightness ±15%, flips). Filenames encoded pathology and JNET class for label extraction. After local updates, encrypted weights were aggregated via Federated Averaging (FedAvg) over 50 communication rounds. The global model was then fine‐tuned on a validation set (ERCPMP supplementary: n=120; external: n=300). Final evaluation used a hold‐out test set (150 images/site; n=750), balanced by subtype and resolution.

Results: The federated model achieved an AUC of 0.95 (95% CI, 0.94–0.96) and 89.2% subtype accuracy.
Adenomatous sensitivity was 92.5% (CI, 90.7%–94.2%) and specificity 90.3% (CI, 88.5%–92.0%). Fine‐tuning raised the AUC to 0.96 (CI, 0.95–0.97), sensitivity to 94.1%, and specificity to 91.7%. Site‐specific AUCs ranged from 0.94 to 0.96, indicating consistent performance across sites.

Discussion: Our FL framework using ERCPMPv5 achieved performance comparable to centralized training while preserving privacy. Fine‐tuning on supplemental data enhanced generalizability. This approach enables multicenter deployment of AI tools for colorectal polyp detection under strict privacy constraints.
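The server-side aggregation step described in Methods can be illustrated with a minimal sketch of Federated Averaging (FedAvg): each site returns locally updated weights, and the server averages them weighted by site dataset size. This is an illustrative toy, not the study's implementation; weights are plain NumPy arrays standing in for ResNet‐50 parameter tensors, and the function and layer names (`fedavg`, `"fc"`) are hypothetical.

```python
# Sketch of one FedAvg aggregation round, assuming each site's model
# weights are a dict mapping layer name -> NumPy array. In the study
# these would be the full ResNet-50 parameter set sent (encrypted)
# after five local epochs.
import numpy as np

def fedavg(site_weights, site_sizes):
    """Size-weighted average of per-site model weights.

    site_weights: list of dicts, layer name -> np.ndarray
    site_sizes:   list of ints, training images per site
    """
    total = sum(site_sizes)
    global_weights = {}
    for layer in site_weights[0]:
        # Weight each site's contribution by its share of the data.
        global_weights[layer] = sum(
            (n / total) * w[layer]
            for w, n in zip(site_weights, site_sizes)
        )
    return global_weights

# Toy example: five simulated sites (~160 images each, as in the study),
# each contributing a single 2x2 "layer" for illustration.
rng = np.random.default_rng(0)
sites = [{"fc": rng.normal(size=(2, 2))} for _ in range(5)]
sizes = [160] * 5
global_w = fedavg(sites, sizes)
```

In the full protocol this averaging would repeat for 50 communication rounds, with the averaged weights broadcast back to each site before the next round of local fine‐tuning.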
Disclosures: Sri Harsha Boppana indicated no relevant financial relationships. Manaswitha Thota indicated no relevant financial relationships. Gautam Maddineni indicated no relevant financial relationships. Sachin Sravan Kumar Komati indicated no relevant financial relationships. Sai Lakshmi Prasanna Komati indicated no relevant financial relationships. C. David Mintz indicated no relevant financial relationships.
Sri Harsha Boppana, MBBS, MD1; Manaswitha Thota, MD2; Gautam Maddineni, MD3; Sachin Sravan Kumar Komati4; Sai Lakshmi Prasanna Komati, MBBS5; C. David Mintz, MD, PhD6. P4781 - Privacy‐Preserving Federated Deep Learning for Robust Multicenter Colorectal Polyp Classification. ACG 2025 Annual Scientific Meeting Abstracts. Phoenix, AZ: American College of Gastroenterology.