PERFORMANCE ANALYSIS OF NEURAL LANGUAGE MODELS FOR AUTOMATED TEXT SUMMARIZATION
*G. Krishnaveni, B. Sarushma, L. Samuel Raju, A. Rakesh, B. Sai Pavan
ABSTRACT
The rapid growth of digital text has created a pressing need for effective automated summarization systems, as manually summarizing large documents is time-consuming and often impractical. Recent advances in transformer-based neural language models have substantially improved abstractive summarization. In this work, we compare the performance of three neural language models (BART, T5, and GPT) on automatic text summarization under identical conditions, measuring compression ratio for conciseness and ROUGE scores for summary quality. We propose a combined performance metric that captures the trade-off between summary length and informativeness. Our experimental results show that encoder-decoder architectures outperform autoregressive architectures on summarization tasks, with BART producing the highest-quality summaries among the three models evaluated. The study offers an evaluation framework and model-selection guidance for the efficient development of practical summarization systems.
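The abstract does not specify the exact form of the proposed combined metric, so the following is only an illustrative sketch of one plausible formulation: a harmonic mean of a unigram-overlap ROUGE-1 F1 score (summary quality) and a conciseness score derived from the compression ratio. All function names and the harmonic-mean choice are assumptions for illustration, not the authors' published method.

```python
from collections import Counter

def rouge1_f1(reference: str, summary: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a reference and a candidate summary."""
    ref, hyp = reference.lower().split(), summary.lower().split()
    overlap = sum((Counter(ref) & Counter(hyp)).values())
    if not ref or not hyp or overlap == 0:
        return 0.0
    precision = overlap / len(hyp)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def compression_ratio(document: str, summary: str) -> float:
    """Fraction of the source document's length retained by the summary."""
    return len(summary.split()) / max(len(document.split()), 1)

def combined_score(document: str, reference: str, summary: str) -> float:
    """Hypothetical combined metric: harmonic mean of quality and conciseness.

    Conciseness is taken as 1 - compression_ratio, so shorter summaries
    score higher on that axis; the harmonic mean penalizes a summary that
    does well on only one of the two criteria.
    """
    quality = rouge1_f1(reference, summary)
    conciseness = 1.0 - compression_ratio(document, summary)
    if quality + conciseness <= 0:
        return 0.0
    return 2 * quality * conciseness / (quality + conciseness)
```

A summary that copies the whole document gets a conciseness of 0 and thus a combined score of 0, while an empty summary gets a quality of 0; only summaries that are both short and informative score well under this formulation.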
https://doi.org/10.5281/zenodo.20021540