OH S. J. (1), LEE G. (2), CHOI T. (2), SHIN S. H. (1)
(1) National Health Insurance Service Ilsan Hospital, Goyang, REPUBLIC OF KOREA; (2) MediWhale, Seoul, REPUBLIC OF KOREA
Near-infrared spectroscopy (NIRS) can detect coronary lipid-core plaque (LCP), which can lead to fatal future events. We aimed to reconstruct NIRS images from corresponding intravascular ultrasound (IVUS) images using deep learning methods, which have shown high sensitivity and specificity in medical image diagnosis.
METHODS AND RESULTS
We used an institutional image database of true vessel characterization (TVC, Infraredx Inc., USA) and developed deep learning algorithms for segmenting the lumen/adventitia and detecting LCP in coronary vessels. The TVC database of 199 arteries from 116 patients was used as the training dataset, and 17 arteries from 10 patients were used as the validation dataset. First, we segmented the lumen and adventitia of the vessel as the region of interest (ROI) for LCP by training a U-Net architecture integrated with residual blocks. After that, a VGG model was used to classify segmented vessel images into low (red) and high (yellow) probability of LCP, as labeled by the TVC images. The sensitivity and specificity of the algorithm for detecting high-probability LCP were computed on the halogram and chemogram. The training set consisted of 7,132 rectangles randomly chosen from 151,884 rectangles extracted from IVUS images. The test set was 176 rectangles, randomly chosen from 31,716 rectangles of 10 patients. The vessels' lumen and adventitia were segmented with 0.95 and 0.90 IoU (Intersection over Union), respectively. For detecting LCP on the chemogram, IVUS frame images from the validation dataset were classified with an AUC of 0.89 (95% CI, 0.88-0.90), with 85% sensitivity and 91% specificity. For detecting LCP on the halogram, rectangular patches extracted from validation IVUS images were classified with an AUC of 0.88 (95% CI, 0.87-0.89), with 0.80 sensitivity and 0.92 specificity.
The deep learning method achieved high IoU for segmentation of the vessels' lumen and adventitia, and high sensitivity and specificity for detecting LCP from IVUS images.
Deep Learning Is Effective for Classifying Non-referable versus Referable Eye Condition using Fundus Photographs
Fundus photography is the most common imaging modality for screening eye disease. This study aimed to determine whether deep learning could be used to distinguish referable eye disease (RED) from normal fundus photographs for general eye screening.
This study included fundus photographs and results from 98,816 anonymized eye examinations at a single tertiary center between March 2013 and June 2016.
Fundus photographs were automatically extracted from the health screening database and linked to the corresponding ophthalmologic examination reports. The deep learning algorithm was developed with 20,000 randomly selected non-RED images and a total of 13,877 RED images, so that the two classes were of similar size. RED included any macular disease, suspected glaucoma, media opacity, and other referable findings. Validation was performed using a random subset of 300 fundus photographs from 4,199 health screening records collected between January 2017 and March 2017.
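The class-balancing step described above, subsampling the majority (non-RED) class so that the two classes are of similar size, can be sketched as follows (a hypothetical helper with stand-in file names, not the authors' code):

```python
import random

def balance_by_subsampling(majority, minority, seed=0):
    """Randomly subsample the majority class down to the minority class size."""
    rng = random.Random(seed)
    return rng.sample(list(majority), len(minority)) + list(minority)

# Stand-in file lists; the study used ~20,000 non-RED and 13,877 RED images.
non_red = [f"non_red_{i}.jpg" for i in range(100)]
red = [f"red_{i}.jpg" for i in range(30)]

training_set = balance_by_subsampling(non_red, red)
print(len(training_set))  # 30 sampled non-RED + 30 RED = 60
```

Balancing the classes this way keeps the classifier from trivially favoring the far more common non-RED label during training.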
Main Outcome Measure: Accuracy and area under the receiver operating characteristic curve.
For detecting RED, we achieved an area under the ROC curve of 0.898 (95% CI, 0.863-0.933) on the validation set. Using a cut point with a high sensitivity of >90%, the validation study showed a sensitivity/specificity of 91.0% (95% confidence interval [CI], 84.4 to 95.4) / 73.0% (95% CI, 65.9 to 79.4).
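Selecting an operating point with sensitivity above 90%, as done above, amounts to sweeping the score threshold along the ROC curve. A minimal sketch with made-up scores (illustrative only, not the study's pipeline):

```python
import numpy as np

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of the rule `score >= threshold`."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Toy model scores and ground truth (1 = referable eye disease)
scores = np.array([0.95, 0.80, 0.60, 0.40, 0.30, 0.20, 0.10, 0.05])
labels = np.array([1,    1,    1,    0,    1,    0,    0,    0])

# Lower the threshold until sensitivity exceeds 0.90, then report the trade-off.
for t in sorted(scores, reverse=True):
    se, sp = sens_spec(scores, labels, t)
    if se > 0.90:
        print(f"threshold={t:.2f} sensitivity={se:.2f} specificity={sp:.2f}")
        break
```

The trade-off is explicit here: pushing sensitivity up by lowering the threshold admits more false positives, which is why the study's 91% sensitivity comes with 73% specificity.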
Our findings suggest that a deep learning-based algorithm could be used to detect referable eye disease at a high-sensitivity setting. Further clinical studies are needed to determine the feasibility of applying this algorithm.