TY - GEN
T1 - Applying Deep Learning for the Detection of Abnormalities in Mammograms
AU - Wessels, Steven
AU - van der Haar, Dustin
N1 - Publisher Copyright:
© Springer Nature Singapore Pte Ltd 2020.
PY - 2020
Y1 - 2020
N2 - Medical imaging produces massive amounts of data. Computer-aided diagnosis (CAD) systems that use traditional machine learning algorithms to derive insights from this data struggle to achieve competent sensitivity while minimizing false positives. This paper looks at current methods used to improve CAD systems for breast cancer diagnosis from mammograms. It presents deep learning models based on Convolutional Neural Networks (CNNs) that identify abnormalities in mammographic studies and can serve as a tool for the diagnosis of breast cancer. We ran two experimental cases on two public mammogram databases, namely MIAS and the DDSM. First, abnormality severity was classified. Second, the combination of abnormality type and severity was evaluated as a multi-label classification task. Two CNN architectures, namely miniature versions of VGGNet and GoogLeNet, were also compared. We achieved a best AUC of 0.85 for the classification of abnormality severity on the DDSM data set and a best Hamming loss of 0.27 on the MIAS data set for the multi-label classification task.
AB - Medical imaging produces massive amounts of data. Computer-aided diagnosis (CAD) systems that use traditional machine learning algorithms to derive insights from this data struggle to achieve competent sensitivity while minimizing false positives. This paper looks at current methods used to improve CAD systems for breast cancer diagnosis from mammograms. It presents deep learning models based on Convolutional Neural Networks (CNNs) that identify abnormalities in mammographic studies and can serve as a tool for the diagnosis of breast cancer. We ran two experimental cases on two public mammogram databases, namely MIAS and the DDSM. First, abnormality severity was classified. Second, the combination of abnormality type and severity was evaluated as a multi-label classification task. Two CNN architectures, namely miniature versions of VGGNet and GoogLeNet, were also compared. We achieved a best AUC of 0.85 for the classification of abnormality severity on the DDSM data set and a best Hamming loss of 0.27 on the MIAS data set for the multi-label classification task.
KW - Convolutional neural networks
KW - Deep learning
KW - Medical imaging
UR - http://www.scopus.com/inward/record.url?scp=85077496881&partnerID=8YFLogxK
U2 - 10.1007/978-981-15-1465-4_21
DO - 10.1007/978-981-15-1465-4_21
M3 - Conference contribution
AN - SCOPUS:85077496881
SN - 9789811514647
T3 - Lecture Notes in Electrical Engineering
SP - 201
EP - 210
BT - Information Science and Applications, ICISA 2019
A2 - Kim, Kuinam J.
A2 - Kim, Hye-Young
PB - Springer
T2 - 10th International Conference on Information Science and Applications, ICISA 2019
Y2 - 16 December 2019 through 18 December 2019
ER -