
    Natural Language Processing with Transformer-Based Models: A Meta-Analysis

    View/Open
    Natural Language Processing with Transformer-Based Models A Meta-Analysis.pdf (1.529Mb)
    Date
    2025
    Author
    Munyao, Charles
    Ndia, John G.
    Abstract
    The natural language processing (NLP) domain has witnessed significant advancements with the emergence of transformer-based models, which have reshaped the text understanding and generation landscape. While their capabilities are well recognized, there remains a limited systematic synthesis of how these models perform across tasks, scale efficiently, adapt to domains, and address ethical challenges. Therefore, the aim of this paper was to analyze the performance of transformer-based models across various NLP tasks, their scalability, domain adaptation, and the ethical implications of such models. This meta-analysis paper synthesizes findings from 25 peer-reviewed studies on NLP transformer-based models, adhering to the PRISMA framework. Relevant papers were sourced from electronic databases, including IEEE Xplore, Springer, ACM Digital Library, Elsevier, PubMed, and Google Scholar. The findings highlight the superior performance of transformers over conventional approaches, attributed to self-attention mechanisms and pre-trained language representations. Despite these advantages, challenges such as high computational costs, data bias, and hallucination persist. The study provides new perspectives by underscoring the necessity for future research to optimize transformer architectures for efficiency, address ethical AI concerns, and enhance generalization across languages. This paper contributes valuable insights into the current trends, limitations, and potential improvements in transformer-based models for NLP.
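
    For reference only (this is not code from the paper under review), the scaled dot-product self-attention mechanism that the abstract credits for transformer performance can be sketched as follows; the function name, array shapes, and random toy data are illustrative assumptions.

    import numpy as np

    def softmax(x, axis=-1):
        # Numerically stable softmax over the chosen axis.
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, W_q, W_k, W_v):
        # X      : (seq_len, d_model) token embeddings
        # W_q/k/v: (d_model, d_k)     learned projection matrices
        # Returns: (seq_len, d_k)     context-mixed token representations
        Q, K, V = X @ W_q, X @ W_k, X @ W_v
        d_k = Q.shape[-1]
        # Every token attends to every other token; dividing by sqrt(d_k)
        # keeps the dot products in a range where softmax stays well-behaved.
        scores = Q @ K.T / np.sqrt(d_k)
        weights = softmax(scores, axis=-1)   # (seq_len, seq_len) attention weights
        return weights @ V

    # Toy example: 4 tokens, 8-dim embeddings, 4-dim projections (hypothetical sizes).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))
    W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
    print(self_attention(X, W_q, W_k, W_v).shape)   # -> (4, 4)

    In a full transformer this operation is repeated across multiple heads and layers and combined with pre-trained language representations, which is the combination the abstract identifies as driving the reported gains over conventional approaches.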
    URI
    10.32604/jai.2025.069226
    http://repository.mut.ac.ke:8080/xmlui/handle/123456789/6659
    Collections
    • Journal Articles (CI) [132]

    MUT Library copyright © 2017-2025  MUT Library Website
    Contact Us | Send Feedback