WP 2
Artificial Intelligence in image analysis
Principal Investigator: Prof. Esmaeil S. Nadimi
WP 2 in a nutshell
WP2 focuses on developing algorithms for automated analysis of capsule investigations. At present, highly specialised health professionals spend considerable resources on manual video analysis; this work package seeks to automate that analysis. WP2 will not conduct independent trials but will build on investigations collected in the clinical trials.
Aim
To develop a new generation of intelligent dual-mode camera pills able to process image data in real time, on the go.
Relevance
- The clinical implementation of capsule investigations is hampered by high costs, partly due to the manual analysis of images.
- Objective assessment of bowel cleansing quality and of the completeness of the investigation has low reproducibility.
- The accuracy of polyp size estimation is poor.
- 50% of capsule investigations are followed by an OC; this proportion can be significantly reduced by real-time, in situ characterisation of polyps.
Design
To address these issues, we will develop a deep learning algorithm based on convolutional neural networks (CNNs) combined with region proposal networks for autonomous detection, localisation and semantic segmentation of colorectal polyps of any size or morphology. Several research groups have already proposed schemes for autonomously discriminating endoscopic images based on the presence of abnormalities such as polyps.
The task is to detect colorectal polyps, which typically do not share common morphological, size, texture or colour features, from a single patient's video. In addition, variable lighting and the infrequent occurrence of polyps in a given capsule endoscopy (CE) video make it very difficult to devise a robust, data-driven method for reliable detection and segmentation.
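To illustrate the kind of pipeline described above (a CNN combined with a region proposal network for detection, localisation and segmentation), the following is a minimal sketch assuming PyTorch and torchvision. It adapts an off-the-shelf Mask R-CNN to two classes (background and polyp); it is a generic stand-in, not the WP's actual architecture.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Off-the-shelf Mask R-CNN: a CNN backbone plus a region proposal network (RPN),
# pre-trained on COCO and adapted here to two classes (background, polyp).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box classification head for the binary polyp task.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Replace the mask head so the network also outputs per-pixel polyp masks.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, dim_reduced=256, num_classes=2)

# Inference on a single (placeholder) capsule frame: the model returns boxes,
# labels, confidence scores and segmentation masks for each detected region.
model.eval()
with torch.no_grad():
    frame = torch.rand(3, 512, 512)   # stand-in for one RGB frame scaled to [0, 1]
    prediction = model([frame])[0]    # keys: 'boxes', 'labels', 'scores', 'masks'
```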
As a pilot study and proof of concept of AI in practice, we developed a deep convolutional neural network (CNN) for autonomous detection of colorectal polyps, lesions with a considerable risk of malignant progression to colorectal cancer, in images captured during wireless colon capsule endoscopy.
Our CNN is an improved version of AlexNet and ZF-Net that uses a combination of transfer learning, pre-processing and data augmentation. We further deployed this CNN as the backbone of a Faster R-CNN to localise image regions containing colorectal polyps.
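The sketch below shows how such a transfer-learning setup with pre-processing and data augmentation could look in PyTorch, using a stock ImageNet-pretrained AlexNet whose final layer is retrained for two classes (normal mucosa vs. polyp). The augmentation values and the choice of frozen layers are illustrative assumptions, not the WP's actual configuration.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

# Pre-processing and data augmentation for capsule frames (illustrative choices);
# colour jitter mimics the variable lighting conditions inside the bowel.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=90),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Transfer learning: start from an ImageNet-pretrained AlexNet, freeze the
# convolutional feature extractor and retrain only the final classification
# layer for the binary task (normal mucosa vs. polyp).
model = torchvision.models.alexnet(weights="DEFAULT")
for param in model.features.parameters():
    param.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```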
We created an image database of approximately 11,300 capsule endoscopy images from a screening population, comprising colorectal polyps (of any size or morphology, N=4,800) and normal mucosa (N=6,500). Our CNN achieved an accuracy of 98.0%, a sensitivity of 98.1% and a specificity of 96.3%.
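For clarity, these figures follow the standard definitions of accuracy, sensitivity and specificity over binary confusion-matrix counts, as in this small helper (the variable names are generic, not taken from the study code):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),  # correct calls / all images
        "sensitivity": tp / (tp + fn),                # polyp images correctly flagged
        "specificity": tn / (tn + fp),                # normal mucosa correctly passed
    }
```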
In collaboration with WP7 on pan-enteric capsules, we will also develop and test autonomous analysis of capsule endoscopy of the small and large intestine in Crohn's disease.
Timeline
Full implementation of polyp recognition is achievable within 2 years. The other algorithms will be ready in 2-4 years.