Advancing Multi-Perspective Summarization for Scientific Documents

In a short paper at SDP@COLING 2022, Sajad Sotudeh and Nazli Goharian of Georgetown University introduce a method for generating topic-aware multi-perspective summaries of scientific documents. They incorporate Neural Topic Modeling (NTM) into the summarization system, on the intuition that each topic captures a distinct perspective of the paper, and pair it with a two-step procedure that first extracts key sentences against each gold summary and then generates abstractive summaries. Together, these improve how diverse perspectives are represented in the final summary and support evaluating summarizers on how well they capture multiple viewpoints.



Presentation Transcript


  1. GUIR @ MuP 2022: Towards Generating Topic-aware Multi-perspective Summaries for Scientific Documents
Sajad Sotudeh and Nazli Goharian
IRLab, Department of Computer Science, Georgetown University
Short paper appearing at SDP@COLING 2022

  2. Scientific Document Summarization vs. Multi-perspective Summarization
[Diagram: standard summarization maps a scientific paper to a single summary; multi-perspective summarization maps the same paper to multiple summaries.]

  3. Multi-perspective Summarization
Why? For evaluating summarization systems: scoring a summary by how well it captures multiple perspectives of the paper.
How? Multiple gold summaries are needed per paper, rather than a single one.
Benchmarks? The MuP shared task provides a dataset of over 8k scientific papers, each paired with multiple gold summaries.
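Scoring against multiple gold summaries means comparing the system output with every reference for a paper. Below is a minimal sketch of one plausible protocol using the rouge-score package, keeping the best F1 per metric across references; the max-over-references aggregation is an illustrative assumption, not necessarily the shared task's official evaluation.

```python
# Multi-reference scoring sketch: compare one system summary against every
# gold summary and keep the best F1 per ROUGE variant.
# Requires: pip install rouge-score
from rouge_score import rouge_scorer

def multi_reference_rouge(prediction, references):
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                      use_stemmer=True)
    best = {}
    for ref in references:
        # rouge-score's signature is score(target, prediction)
        for name, score in scorer.score(ref, prediction).items():
            if name not in best or score.fmeasure > best[name]:
                best[name] = score.fmeasure
    return best

golds = [
    "The paper adds a neural topic model to a summarizer.",
    "A two-step extract-then-abstract pipeline is proposed.",
]
print(multi_reference_rouge("This paper combines a neural topic model "
                            "with an extract-then-abstract summarizer.", golds))
```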

  4. Method
Topic modeling. Intuition: each topic captures a specific perspective of the paper. Method: incorporating Neural Topic Modeling (NTM) into the summarization system.
Two-step summarization. Intuition: each perspective is discussed within specific sets of the paper's sentences. Method: extracting the top sentences given each gold summary, then generating abstractive summaries (training) and a single summary (inference).
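A common way to realize the extractive step is a greedy oracle: for each gold summary, iteratively pick the source sentence that most increases lexical overlap with that summary. The sketch below uses plain token recall as the overlap score; the paper's exact selection criterion is not reproduced here, so treat this as an assumption.

```python
# Greedy oracle extraction sketch (hedged): select up to k source sentences
# whose tokens best cover one gold summary. Token recall stands in for
# whatever scoring function the authors actually use.
def gold_coverage(gold_tokens, covered_tokens):
    return len(gold_tokens & covered_tokens) / max(len(gold_tokens), 1)

def extract_top_sentences(doc_sentences, gold_summary, k=5):
    gold = set(gold_summary.lower().split())
    sent_tokens = {i: set(s.lower().split()) for i, s in enumerate(doc_sentences)}
    selected, covered = [], set()
    while len(selected) < k:
        base = gold_coverage(gold, covered)
        remaining = [i for i in sent_tokens if i not in selected]
        if not remaining:
            break
        best = max(remaining,
                   key=lambda i: gold_coverage(gold, covered | sent_tokens[i]))
        if gold_coverage(gold, covered | sent_tokens[best]) <= base:
            break  # no remaining sentence adds coverage of the gold summary
        selected.append(best)
        covered |= sent_tokens[best]
    return [doc_sentences[i] for i in sorted(selected)]  # keep document order
```

Run once per gold summary at training time, this yields one extract-then-abstract pair per perspective; at inference the model produces a single summary from the full document.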

  5. [Model architecture diagram: an LED encoder-decoder with local and global attention; sentences are delimited by <s> (BOS) and </s> (EOS) tokens, with w_ij denoting the j-th word of the i-th sentence. A Neural Topic Model encodes the document bag-of-words X_bow, samples a hidden topic representation, and reconstructs X_bow. The sampled topic representation is concatenated with average-pooled sentence representations and passed through an information gating layer; a sentence scorer then assigns sentence extraction probabilities (e.g., 0.9, 0.2, 0.1, ...) used to select sentences against the multiple summaries.]
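To make the diagram concrete, here is a minimal PyTorch sketch of its two non-standard components: a VAE-style neural topic model that reconstructs the document's bag-of-words X_bow, and an information gating layer that fuses the sampled topic representation with the encoder's sentence representations. Layer sizes and fusion details below are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch (assumed dimensions) of the NTM and gating components in the diagram.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTopicModel(nn.Module):
    """VAE-style topic model: encode BoW -> sample z -> reconstruct BoW."""
    def __init__(self, vocab_size, num_topics, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, num_topics)
        self.logvar = nn.Linear(hidden, num_topics)
        self.decoder = nn.Linear(num_topics, vocab_size)

    def forward(self, x_bow):
        h = self.encoder(x_bow)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        recon = F.log_softmax(self.decoder(z), dim=-1)        # reconstructed BoW
        return z, recon, mu, logvar

class InformationGate(nn.Module):
    """Fuse a document-level topic vector into per-sentence representations."""
    def __init__(self, sent_dim, topic_dim):
        super().__init__()
        self.gate = nn.Linear(sent_dim + topic_dim, sent_dim)

    def forward(self, sent_reprs, topic):        # (B, S, D) and (B, T)
        topic = topic.unsqueeze(1).expand(-1, sent_reprs.size(1), -1)
        g = torch.sigmoid(self.gate(torch.cat([sent_reprs, topic], dim=-1)))
        return g * sent_reprs                    # gated sentence representations

ntm = NeuralTopicModel(vocab_size=5000, num_topics=50)
gate = InformationGate(sent_dim=768, topic_dim=50)
x_bow = torch.rand(2, 5000)          # document bag-of-words
sent_reprs = torch.rand(2, 12, 768)  # e.g., 12 <s> token states from the LED encoder
z, recon, mu, logvar = ntm(x_bow)
gated = gate(sent_reprs, z)          # would feed the sentence scorer
```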

  6. Dataset
The MuP dataset, provided by the shared task organizers: 8,734 (train), 1,060 (validation), and 1,052 (test). Summaries are 100.1 words long on average.

# Summaries per paper    # Papers
1                        2,276
2                        3,039
3                        2,867
4                        1,827
5                        225
>5                       257
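For reference, the distribution above can be computed with a few lines over the raw records; the paper_id field name below is a hypothetical stand-in for however the release keys its rows.

```python
# Count gold summaries per paper and bucket the tail, mirroring the table above.
# The "paper_id" key is hypothetical; adapt it to the dataset's actual schema.
from collections import Counter

def summaries_per_paper(records):
    per_paper = Counter(record["paper_id"] for record in records)
    buckets = Counter(min(n, 6) for n in per_paper.values())  # 6 stands for ">5"
    return {(">5" if k == 6 else str(k)): buckets[k] for k in sorted(buckets)}

rows = [{"paper_id": p} for p in ["a", "a", "b", "c", "c", "c"]]
print(summaries_per_paper(rows))  # {'1': 1, '2': 1, '3': 1}
```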

  7. Experimental results

  8. Analysis (15 papers from the validation set)
Outperformed cases:
47%: the topics detected by the NTM align with the gold topics; the summarizer is guided to pick up information around them.
66%: the two-step summarization contributes most by dropping unimportant sentences.
Underperformed cases:
72%: the extracted topics focus on topics frequently mentioned in the paper but absent from the gold summaries, and vice versa.

  9. Generated Sample
Summary (1): The paper presents a new saliency map interpretability method for the task of image classification. It considers the saliency map as a random variable and computes the posterior distribution over it. The likelihood measures the predictions of the classifier for an image and its perturbed counterpart. The prior encodes positive correlation among adjacent pixels. Variational approximation is used to approximate the posterior.
Generated Summary: This paper proposes variational saliency maps (VARSAL), a new interpretability method that considers a saliency map as a random variable and aims to calculate the posterior distribution over it. The likelihood function is designed to measure the distance between the classifier's predictive probability of an image and that of a perturbed counterpart. The prior distribution is modeled as a soft-TV Gaussian prior. The authors use a variational approximation to make the posterior behave as the distribution of explanation.
Summary (2): This paper proposed a method for generating saliency maps for image classifiers that are stochastic (instead of deterministic). The probabilistic model assumes a saliency map random variable that generates the data with a classifier. The inference is done by variational methods. The paper presents several qualitative examples and a comparison to previous work using the pixel perturbation benchmark.
Summary (3): This paper proposes a new interpretability method for image classification networks. It considers a saliency map as a random variable and aims to calculate the posterior distribution over the saliency map. The likelihood function and the prior distribution are then designed to make the posterior distribution over the saliency map explain the behavior of the classifier's prediction. Quantitative evaluation on the perturbation benchmark as well as qualitative results show the effectiveness of the proposed method over baselines.
Future work: Saliency-aware Topic Modeling

  10. Conclusion
Incorporating the NTM into the summarization system, combined with two-step summarization, yields promising performance on this task.
The analysis revealed that:
The extracted topics align with the gold topics (47%).
Unimportant information is dropped by the two-step approach (66%).
The extracted topics focus on frequently mentioned topics that are not in the gold summaries (72%).

  11. Thank You For Your Attention
