Specular Reflection Removal in Smart Colposcopy Images Using Deep Learning Models for Enhanced Grading of Cervical Cancer
Date
2024-04
Authors
Publisher
Avinashilingam
Abstract
Cervical cancer is a significant global health concern, ranking as the fourth most
common cancer among women. It is primarily caused by Human Papillomavirus (HPV)
infection affecting the cervix, the lower part of the uterus. Despite preventive
measures such as HPV vaccination and screening programs, many women hesitate to
undergo screening because of the invasiveness of conventional examinations. Smart
colposcopy, an advanced non-invasive approach, captures images of the cervix for
examination. However, white specular reflections caused by body moisture hinder
accurate analysis and can lead to misclassification of dysplasia regions. This
research aims to improve cervical cancer grading by identifying and removing
specular reflections from smart colposcopy images. The first phase focuses on
specular reflection identification, comparing the RGB and XYZ color spaces to
determine which supports more reliable detection. The proposed intensity-based
threshold method accurately identifies specular reflections in the XYZ color space,
overcoming challenges posed by vaginal discharge and acetowhite regions.
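A minimal sketch of intensity-based specular highlight detection in the XYZ color
space is given below; the luminance threshold and the morphological clean-up step
are illustrative assumptions, not the tuned parameters used in the thesis.

    # Minimal sketch of intensity-based specular highlight detection in the
    # XYZ color space. The luminance threshold and clean-up kernel size are
    # illustrative assumptions, not the thesis's tuned parameters.
    import cv2
    import numpy as np

    def detect_specular_mask(bgr_image, luminance_threshold=0.85):
        # Convert BGR (OpenCV default) to CIE XYZ; the Y channel carries luminance.
        xyz = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2XYZ)
        y = xyz[:, :, 1].astype(np.float32) / 255.0  # assumes an 8-bit image

        # Pixels above the luminance threshold are treated as candidate
        # specular highlights.
        mask = (y > luminance_threshold).astype(np.uint8) * 255

        # A small morphological opening removes isolated noise pixels.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)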
In the second phase, pixel-wise segmentation models, namely the Fully Convolutional
Network (FCN), SegNet, and UNet, are employed to delineate the detected reflection
regions. In a comparative analysis, the UNet model achieves the highest pixel
accuracy; however, its Intersection over Union (IoU) is lower because of overlapping
segmentation boundaries. To address this limitation, different versions of the UNet
model are compared, and UNet++ emerges as the most promising, achieving the highest
IoU. The UNet++ model is then fine-tuned to optimize its performance in segmenting
reflection regions.
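The two evaluation metrics can be made concrete with the short sketch below; it
assumes the predicted and ground-truth reflection masks are binary NumPy arrays of
the same shape.

    # Pixel accuracy and Intersection over Union (IoU) for binary reflection
    # masks; `pred` and `gt` are assumed to be NumPy arrays of 0s and 1s with
    # the same shape.
    import numpy as np

    def pixel_accuracy(pred, gt):
        return float(np.mean(pred == gt))

    def iou_score(pred, gt, eps=1e-7):
        intersection = np.logical_and(pred == 1, gt == 1).sum()
        union = np.logical_or(pred == 1, gt == 1).sum()
        return float((intersection + eps) / (union + eps))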
Once the reflections are segmented and removed, the resulting empty regions must be
filled from neighboring pixels to restore image quality. A novel Bilateral-based
Convolutional Inpainting model is proposed to fill these regions and outperforms
traditional inpainting methods, particularly in medical image applications, across
different masking ratios. The enhanced images, free of specular reflections, are
then graded using DenseNet121, VGG19, and EfficientNet.
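The filling step can be pictured with the stand-in below, which uses OpenCV's
classical Telea inpainting only to illustrate filling masked regions from
neighboring pixels; it is not the Bilateral-based Convolutional Inpainting model
proposed in this work.

    # Stand-in for the reflection-filling step: classical OpenCV inpainting is
    # used here only to illustrate filling masked regions from neighboring
    # pixels; it is NOT the Bilateral-based Convolutional Inpainting model
    # proposed in this thesis.
    import cv2

    def fill_reflection_regions(bgr_image, reflection_mask, radius=3):
        # reflection_mask: 8-bit single-channel mask, non-zero where specular
        # reflections were segmented out.
        return cv2.inpaint(bgr_image, reflection_mask, radius, cv2.INPAINT_TELEA)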
When trained on both enhanced and non-enhanced images, the classification models
achieve significantly higher prediction accuracy on the enhanced images,
underscoring the impact of the enhancement step on cancer-stage classification.
This research offers valuable insights into medical image analysis, presenting an
integrated pipeline for cervical cancer grading. The proposed methodologies show
promising results, laying the groundwork for further advances in women's health
and cancer diagnosis.
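As an illustration of the grading stage, the sketch below builds a DenseNet121
transfer-learning classifier in Keras; the number of grades, input size, and
training settings are assumptions for illustration rather than the configuration
used in the thesis.

    # Transfer-learning sketch for grading enhanced colposcopy images with
    # DenseNet121 (Keras). The number of grades, input size, and training
    # settings are illustrative assumptions, not the thesis's configuration.
    import tensorflow as tf

    NUM_GRADES = 3          # assumed number of cervical cancer grades
    INPUT_SHAPE = (224, 224, 3)

    base = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)
    base.trainable = False  # freeze the ImageNet features initially

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(NUM_GRADES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed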
Description
Keywords
Computer Science