Publication year: [2022]
Paper information |
|
Paper title |
[Vol.17, No.1] Fundus Photograph Discrimination Using Transfer Learning over Limited Computing Power Environment |
|
Authors |
Dongwook Song, Kwangmin Hyun, Sung-Phil Heo |
|
Abstract |
In this paper, we demonstrate a reliable and efficient approach to detecting eye-related diseases through automated fundus screening with a convolutional neural network (CNN) and transfer learning, which counteracts insufficient annotated data and image domain shifts. In transfer learning, the weights learned on one data set are used as the initial parameters of another neural network, and additional training is conducted on top of the pre-trained model. It is a particularly useful method when the annotated data are scarce and the computing power is limited. Four fundus image data sets with different image domains, such as the ethnicity of the subjects and the equipment with which the photographs were captured, were used for validation. The data sets were annotated by ophthalmologists as healthy, abnormal, or diabetic retinopathy. A ResNet-18 model, pre-trained on the ImageNet data set of 1.2 million images of 1,000 everyday objects, was used for transfer learning. The pre-trained model was modified and additionally trained to learn features from the fundus images, and was validated with separate test sets. Given the limited quantity of fundus photographs and the varied image domains, the deep learning models can yield robust performance in discriminating ocular pathologies. Despite its simplicity, this study illustrates the capability of transfer learning and suggests a pragmatic, practical approach for varied medical settings with fluctuating data maintenance and different image domains. |
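The transfer-learning setup described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration only, not the authors' code: the data directory, the three-class label mapping, the choice to freeze the pre-trained backbone, and all hyperparameters are assumptions made for the example.

# Minimal sketch (assumptions noted above): fine-tune an ImageNet-pretrained
# ResNet-18 for 3-class fundus classification (healthy / abnormal / diabetic
# retinopathy). Requires torchvision >= 0.13 for the "weights" argument.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing so inputs match the pre-trained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: fundus/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("fundus/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load ResNet-18 with ImageNet weights and replace the final fully connected
# layer with a 3-class head for the fundus labels.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():              # freeze the pre-trained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 3)  # only the new head is trainable
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Short fine-tuning loop; the epoch count is arbitrary for illustration.
for epoch in range(5):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

Freezing the backbone and training only the new classification head keeps the compute and memory footprint small, which matches the paper's limited-computing-power setting; unfreezing deeper layers for further fine-tuning is an alternative the abstract leaves open.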
|
Attached paper |
|
|
|
|
|