Feed-forward networks using logistic regression and support vector machine for whole-slide breast cancer histopathology image classification

Aruna Devi Karuppasamy*, Abdelhamid Abdesselam, Rachid Hedjam, Hamza Zidoum, Maiya Al-Bahri

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The performance of image classification depends on the efficiency of the feature learning process. This process is a challenging task that traditionally requires prior knowledge from domain experts. Recently, representation learning was introduced to extract features directly from raw images without any prior knowledge. Deep learning using a Convolutional Neural Network (CNN) has gained massive attention for image classification, as it achieves remarkable accuracy that sometimes exceeds human performance. However, this type of network learns features using a back-propagation approach, which requires a huge amount of training data and suffers from the vanishing gradient problem that deteriorates feature learning. The forward-propagation approach instead uses predefined filters, or filters learned outside the model, applied in a feed-forward manner, and has been shown to achieve good results on small labeled datasets. In this work, we investigate the suitability of two feed-forward methods: the Convolutional Logistic Regression Network (CLR) and the Convolutional Support Vector Machine Network for Histopathology Images (CSVM-H). The experiments we conducted on two small breast cancer datasets (the Sultan Qaboos University Hospital (SQUH) dataset and the BreaKHis dataset) demonstrate the advantage of feed-forward approaches over traditional back-propagation ones. The proposed models CLR and CSVM-H were faster to train and achieved better classification performance than the traditional back-propagation methods (VGGNet-16 and ResNet-50) on the SQUH dataset. Importantly, CLR and CSVM-H efficiently learn representations from small amounts of breast cancer whole-slide images, achieving AUCs of 0.83 and 0.84, respectively, on the SQUH dataset. Moreover, the proposed models reduce the memory footprint of whole-slide histopathology image classification, and their training time is significantly reduced compared to traditional CNNs on the SQUH and BreaKHis datasets.
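The feed-forward idea described in the abstract can be illustrated with a minimal sketch: convolutional features are computed with a filter bank that is fixed in advance (here, random filters) rather than learned by back-propagation, and a logistic regression or linear SVM head is trained on the pooled responses. This is an assumption-based illustration using scikit-learn and toy data, not the paper's actual CLR/CSVM-H architecture; the filter sizes, pooling, and classifier settings are all illustrative choices.

```python
# Hedged sketch of a feed-forward pipeline: fixed (random) conv filters,
# ReLU + global average pooling, then a linear classifier head.
# NOT the paper's CLR/CSVM-H models; all parameters are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def conv_features(images, filters):
    """Valid 2-D convolution with each filter, ReLU, global average pooling."""
    k = filters.shape[1]
    feats = np.empty((len(images), len(filters)))
    for i, img in enumerate(images):
        # all k x k patches of the image, shape (H-k+1, W-k+1, k, k)
        windows = np.lib.stride_tricks.sliding_window_view(img, (k, k))
        for j, f in enumerate(filters):
            resp = np.einsum('xyij,ij->xy', windows, f)   # filter response map
            feats[i, j] = np.maximum(resp, 0).mean()      # ReLU + average pool
    return feats

# toy stand-in data: two classes of 16x16 "images" with different statistics
X0 = rng.normal(0.0, 1.0, (40, 16, 16))
X1 = rng.normal(0.8, 1.0, (40, 16, 16))
images = np.concatenate([X0, X1])
labels = np.array([0] * 40 + [1] * 40)

filters = rng.normal(0.0, 1.0, (8, 3, 3))   # fixed random 3x3 filter bank
features = conv_features(images, filters)   # no back-propagation involved

# linear classifier heads in the spirit of CLR and CSVM-H
clr = LogisticRegression().fit(features, labels)
csvm = LinearSVC().fit(features, labels)
print(clr.score(features, labels), csvm.score(features, labels))
```

Because the filters are never updated, training cost is just one pass of feature extraction plus fitting a linear model, which is the source of the speed and memory advantages the abstract reports for small datasets.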

Original language: English
Article number: 100126
Journal: Intelligence-Based Medicine
Volume: 9
DOIs
Publication status: Published - January 1 2024

ASJC Scopus subject areas

  • Medicine (miscellaneous)
  • Health Informatics
  • Computer Science Applications
  • Artificial Intelligence
