SciELO - Scientific Electronic Library Online


Journal of applied research and technology

Online ISSN 2448-6736 · Print ISSN 1665-6423

Abstract

TRIPATHY, Saroj Anand and SHARMILA, A.. Abstractive method-based text summarization using bidirectional long short-term memory and pointer generator mode. J. appl. res. technol [online]. 2023, vol.21, n.1, pp.73-86. Epub May 23, 2023. ISSN 2448-6736. https://doi.org/10.22201/icat.24486736e.2023.21.1.1446.

With the rise of the Internet, we now have a vast amount of information at our disposal, and we are swamped by it from many sources: news, social media, and office emails, to name a few. This paper addresses the problem of reading through such extensive information by summarizing it with an abstractive text summarizer based on deep learning models, namely bidirectional Long Short-Term Memory (LSTM) networks and the Pointer Generator model. The LSTM model (a modification of the Recurrent Neural Network) is trained and tested on the Amazon Fine Food Review dataset using a Bahdanau attention decoder with ConceptNet Numberbatch embeddings, which are similar to, and in some respects better than, GloVe. The Pointer Generator model is trained and tested on the CNN/Daily Mail dataset, and it uses both decoder and attention inputs. Because of two major problems in the LSTM model, its inability to copy facts from the source and its tendency to repeat words, the second method, the Pointer Generator model, is used. This paper analyzes both models to provide a better understanding of how they work and to enable the creation of a strong text summarizer. The main purpose is to provide reliable summaries of datasets or uploaded files, depending on the user's choice; unnecessary sentences are rejected in order to retain the most important ones. Several parameters were used to measure the model's performance and analyze its efficiency: a cosine similarity between sentence and model output of 0.29277002; ROUGE-1, ROUGE-2 and ROUGE-L scores of 22.16, 38.76 and 39.12 at the 200th epoch; and a training accuracy of 0.9817 with a training loss of 0.0312 at the 200th epoch.
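As a rough sketch of the pointer-generator idea described in the abstract (not the authors' implementation, and with all variable names chosen here for illustration): at each decoding step, the final output distribution mixes a generation distribution over the fixed vocabulary with a copy distribution over the source tokens, weighted by a scalar p_gen. This is what lets the network copy facts, such as rare names, that a plain LSTM decoder cannot produce.

```python
import numpy as np

def pointer_generator_distribution(p_vocab, attention, src_ids, p_gen):
    """Mix a vocabulary distribution with a copy distribution.

    p_vocab   : (V,) softmax over the fixed vocabulary
    attention : (T,) attention weights over the T source tokens (sum to 1)
    src_ids   : (T,) vocabulary id of each source token
    p_gen     : scalar in [0, 1]; probability of generating vs. copying
    """
    # Start with the "generate" portion of the probability mass.
    final = p_gen * p_vocab
    # Scatter-add the "copy" portion onto each source token's vocabulary id,
    # accumulating when the same token appears at several source positions.
    np.add.at(final, src_ids, (1.0 - p_gen) * attention)
    return final
```

Because both input distributions sum to 1 and p_gen is a convex weight, the mixed distribution also sums to 1; when p_gen is small, the decoder mostly copies from the source.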
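The ROUGE scores reported in the abstract count n-gram overlap between a generated summary and a reference. A minimal illustrative ROUGE-1 (unigram) computation, not the official scorer and without its stemming or tokenization details, might look like:

```python
from collections import Counter

def rouge_1(candidate, reference):
    """Return (precision, recall, F1) of unigram overlap.

    A bare-bones sketch: whitespace tokenization, case-folded,
    clipped counts via Counter intersection.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0, 0.0, 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

ROUGE-2 and ROUGE-L follow the same idea with bigrams and longest common subsequences, respectively.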

Keywords : Text summarizer; deep learning; bidirectional long short-term memory; pointer generator model; Bahdanau attention model decoder; ConceptNet Numberbatch; GloVe.

        · text in English     · English ( pdf )