
    Architecture of Deep Learning Algorithms in Image Classification: Systematic Literature Review

    View/Open
Architecture of Deep Learning Algorithms in Image Classification.pdf (643.0 KB)
    Date
    2023
    Author
    Ochango, Vincent Mbandu
    Ndia, John Gichuki
    Abstract
The development of deep learning algorithms has led to major improvements in image classification, a key problem in computer vision. In this study, the researchers provide an in-depth analysis of the various deep learning architectures used for image classification. By efficiently learning hierarchical representations directly from raw image data, deep learning has delivered remarkable performance gains across a wide range of applications, revolutionizing the discipline. The objective was to review how different architectural choices affect the performance of deep learning models in image classification. Journals and papers published by IEEE Access, ACM, Springer, Google Scholar, and the Wiley Online Library between 2013 and 2023 were analyzed. Sixty-two publications were chosen from the search results on the basis of their titles. The results show that more complex designs usually achieve better accuracy, but they may also be prone to overfitting and so benefit from regularization methods. Typical architectural components in deep learning algorithms for image classification are convolutional layers for feature extraction, pooling layers for downsampling and reducing spatial dimensions, and fully connected layers for classification. Skip connections, common in residual networks, allow a more uniform gradient flow and make it possible to train deeper models. Attention mechanisms that help a model focus on the important regions of an image can improve its discriminative ability. In conclusion, regularization techniques such as batch normalization and dropout are often used to prevent overfitting, and the improved feature propagation and targeted learning enabled by skip connections and attention mechanisms greatly boost model performance.
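The components the abstract names can be seen together in a short, self-contained sketch. The following PyTorch example is illustrative only and is not drawn from the reviewed paper: the module names (ResidualBlock, SimpleClassifier), layer widths, and dropout rate are assumptions chosen to show how convolutional feature extraction, pooling, a skip connection, batch normalization, dropout, and a fully connected classification head fit together.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two convolutional layers with a skip connection and batch
    normalization, as in the residual networks the abstract mentions."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection: adds the input back in

class SimpleClassifier(nn.Module):
    """Illustrative image classifier: convolution for feature extraction,
    pooling for downsampling, dropout for regularization, and a fully
    connected layer for classification."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # feature extraction
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsampling
            ResidualBlock(32),
            nn.AdaptiveAvgPool2d(1),                     # collapse spatial dims
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                             # regularization
            nn.Linear(32, num_classes),                  # fully connected head
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example usage: classify a batch of two 32x32 RGB images.
model = SimpleClassifier(num_classes=10)
logits = model(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```

The `out + x` addition in the residual block is what the abstract calls a skip connection: it gives gradients a direct path through the network, which is why deeper models with such connections remain trainable.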
    URI
    http://repository.mut.ac.ke:8080/xmlui/handle/123456789/6519
    Collections
    • Journal Articles (CI) [118]

MUT Library copyright © 2017-2024  MUT Library Website
Contact Us | Send Feedback