
Publication

Layered deep learning for automatic mandibular segmentation in cone-beam computed tomography

Journal Contribution - Journal Article

OBJECTIVE: To develop and validate a layered deep learning algorithm that automatically creates three-dimensional (3D) surface models of the human mandible from cone-beam computed tomography (CBCT) imaging.

MATERIALS & METHODS: Two convolutional networks using a 3D U-Net architecture were combined and deployed in a cloud-based artificial intelligence (AI) model. The AI model was trained in two phases and iteratively improved to optimize the segmentation result, using 160 anonymized full-skull CBCT scans of orthognathic surgery patients (70 preoperative and 90 postoperative scans). The final AI model was then tested by assessing timing, consistency, and accuracy on a separate testing dataset of 15 preoperative and 15 postoperative full-skull CBCT scans. The AI model was compared to user-refined AI segmentation (RAI) and to semi-automatic segmentation (SA), the current clinical standard. The time needed for segmentation was measured in seconds. Intra- and inter-operator consistency were assessed to determine whether the segmentation protocols delivered reproducible results, using the following metrics: intersection over union (IoU), Dice similarity coefficient (DSC), Hausdorff distance (HD), absolute volume difference, and root mean square (RMS) distance. To evaluate how closely the AI and RAI results matched those of the SA method, their accuracy was measured using the same metrics: IoU, DSC, HD, absolute volume difference, and RMS distance.

RESULTS: On average, SA took 1218.4 s. RAI showed a significant drop (p < 0.0001) in timing to 456.5 s (a 2.7-fold decrease). The AI method took only 17 s (a 71.3-fold decrease). The average intra-operator IoU for RAI was 99.5%, compared to 96.9% for SA. For inter-operator consistency, RAI scored an IoU of 99.6%, compared to 94.6% for SA. The AI method, having no manual steps, was fully consistent by design. In both the intra- and inter-operator consistency assessments, RAI outperformed SA on all metrics, indicating better consistency. With SA as the ground truth, AI and RAI scored an IoU of 94.6% and 94.4%, respectively. All accuracy metrics were similar for AI and RAI, meaning that both methods produce 3D models that closely match those produced by SA.

CONCLUSION: A layered deep learning algorithm based on a 3D U-Net architecture, with or without additional user refinements, improves time-efficiency, reduces operator error, and provides excellent accuracy when benchmarked against the clinical standard.

CLINICAL SIGNIFICANCE: Semi-automatic segmentation in CBCT imaging is time-consuming and prone to user-induced errors. Layered convolutional neural networks using a 3D U-Net architecture allow direct segmentation of high-resolution CBCT images. This approach creates 3D mandibular models in a more time-efficient and consistent way, and it is accurate when benchmarked against semi-automatic segmentation.
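The consistency and accuracy measures reported above (IoU, DSC, HD, absolute volume difference, RMS distance) are standard overlap and surface-distance metrics for binary segmentation masks. The following sketch is an illustration only, not the authors' implementation; it assumes NumPy/SciPy, 3D binary masks on a voxel grid, and a hypothetical isotropic CBCT voxel spacing of 0.4 mm.

import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_distances(a, b, spacing):
    # Distances (in mm) from the surface voxels of mask a to the surface of mask b.
    a_surf = a & ~binary_erosion(a)
    b_surf = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~b_surf, sampling=spacing)
    return dist_to_b[a_surf]

def segmentation_metrics(pred, ref, spacing=(0.4, 0.4, 0.4)):
    # pred, ref: 3D binary masks; spacing: voxel size in mm (assumed, not from the study).
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    voxel_vol = float(np.prod(spacing))
    d_pr = surface_distances(pred, ref, spacing)
    d_rp = surface_distances(ref, pred, spacing)
    return {
        "IoU": inter / union,
        "DSC": 2 * inter / (pred.sum() + ref.sum()),
        "HD": max(d_pr.max(), d_rp.max()),                                      # Hausdorff distance, mm
        "abs_volume_diff": abs(int(pred.sum()) - int(ref.sum())) * voxel_vol,    # mm^3
        "RMS": float(np.sqrt(np.mean(np.concatenate([d_pr, d_rp]) ** 2))),       # RMS surface distance, mm
    }

The surface-based metrics (HD, RMS) are computed on the boundary voxels of the two masks and scaled by the voxel spacing, so they are reported in millimetres, while IoU and DSC are dimensionless overlap ratios.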
Journal: Journal of Dentistry
ISSN: 0300-5712
Volume: 114
Publication year: 2021
BOF-keylabel: yes
IOF-keylabel: yes
BOF-publication weight: 3
CSS-citation score: 1
Authors: International
Authors from: Higher Education
Accessibility: Open