Coreference Resolution System Architecture and Inference Methods


This research focuses on coreference resolution within the OntoNotes-4.0 dataset, utilizing inference methods such as Best-Link and All-Link strategies. The study investigates the contributions of these methods and the impact of constraints on coreference resolution. Mention detection and system architecture play crucial roles in achieving high recall rates with a rule-based system, despite facing challenges with precision. The approach showcases innovations in coreference resolution techniques and system design.



Presentation Transcript


  1. Inference Protocols for Coreference Resolution. Kai-Wei Chang, Department of Computer Science, University of Illinois at Urbana-Champaign. Joint work with Rajhans Samdani, Alla Rozovskaya, Nick Rizzolo, Mark Sammons, and Dan Roth

  2. CoNLL-2011 Shared Task: coreference resolution on the OntoNotes-4.0 data set. Based on Bengtson and Roth (2008), our system is built using Learning Based Java (Rizzolo and Roth, 2010). We participated in the closed track of the shared task.

  3. Contributions: We investigate two inference methods, Best-Link and All-Link. We provide a flexible architecture for incorporating constraints. We compare and evaluate the two inference approaches and the contribution of constraints.

  4. System Architecture: Mention Detection → Coreference Resolution with Pairwise Coreference Model → Inference Procedure (Best-Link strategy / All-Link strategy) with Knowledge-based Constraints → Post-Processing

  5. System Architecture: Mention Detection → Coreference Resolution with Pairwise Coreference Model → Inference Procedure (Best-Link strategy / All-Link strategy) with Knowledge-based Constraints → Post-Processing

  6. Mention Detection: Singleton mentions are not annotated in OntoNotes-4.0, so a specific NP may correspond to a mention in one document but not in another; training a classifier might therefore not achieve good performance. We design a rule-based mention detection system with high recall (~90%) but low precision (~35%). At the post-processing step, we remove all predicted mentions that remain in a singleton cluster after the inference stage. The mention detection system achieves 64.88% F1 on the TEST set.
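The post-processing rule on this slide is simple enough to sketch directly (a minimal illustration; the function name and cluster representation are ours, not the system's actual interface):

```python
def remove_singletons(clusters):
    """Post-processing: drop predicted mentions that remain in a
    singleton cluster after the inference stage, since singleton
    mentions are not annotated in OntoNotes-4.0."""
    return [c for c in clusters if len(c) > 1]
```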

  7. System Architecture: Mention Detection → Coreference Resolution with Pairwise Coreference Model → Inference Procedure (Best-Link strategy / All-Link strategy) with Knowledge-based Constraints → Post-Processing

  8. Inference in Pairwise Coreference Model: Assume we have a pairwise mention score that indicates the compatibility of a pair of mentions. Inference procedure: Input: a set of pairwise mention scores over a document. Output: globally consistent cliques representing entities. We investigate two approaches: Best-Link and All-Link.

  9. Best-Link Inference: For each mention u, Best-Link considers the best mention on its left to connect to, and creates a link between them if the score is above some threshold (typically 0). [Figure: mention u with scored candidate antecedents to its left; the highest-scoring candidate m* is linked to u.] Best-Link inference is simple and effective (Bengtson and Roth, 2008).
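The Best-Link strategy can be sketched as a greedy left-to-right pass with union-find to collect the resulting cliques (a minimal sketch; `score`, the union-find representation, and all names are illustrative assumptions, not the system's actual interface):

```python
def best_link(mentions, score, threshold=0.0):
    """Best-Link clustering: each mention links to the single
    highest-scoring mention on its left, provided that score
    exceeds the threshold. Links are merged into entity clusters."""
    parent = list(range(len(mentions)))  # union-find parents

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for u in range(1, len(mentions)):
        # Pick the best-scoring preceding mention above the threshold
        best_v, best_s = None, threshold
        for v in range(u):
            s = score(mentions[u], mentions[v])
            if s > best_s:
                best_v, best_s = v, s
        if best_v is not None:
            parent[find(u)] = find(best_v)

    # Group mentions into cliques (entities) by their cluster root
    clusters = {}
    for i in range(len(mentions)):
        clusters.setdefault(find(i), []).append(mentions[i])
    return list(clusters.values())
```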

  10. All-Link Inference: It scores a clustering of mentions by including all possible pairwise links in the score. [Figure: a cluster whose pairwise links carry scores 1.5, 3.1, -0.5, and 1.5; cluster score = 1.5 + 3.1 - 0.5 + 1.5 = 5.6.] All-Link has previously been applied to coreference resolution (McCallum and Wellner, 2003; Finley and Joachims, 2005).
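The All-Link scoring rule on this slide amounts to summing the pairwise scores of every mention pair inside a proposed cluster (a minimal sketch; `score` and the cluster representation are illustrative assumptions):

```python
from itertools import combinations

def all_link_score(cluster, score):
    """All-Link objective for one cluster: sum the pairwise
    compatibility scores over all mention pairs in the cluster."""
    return sum(score(u, v) for u, v in combinations(cluster, 2))
```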

  11. Integer Linear Programming (ILP) Formulation for Inference: Both Best-Link and All-Link inference can be written as an ILP problem over binary link variables weighted by the pairwise mention scores. Best-Link: each mention connects to at most one preceding mention. All-Link: enforce the transitive closure of the clustering.
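The formulas lost from this slide can be reconstructed roughly as follows (a sketch consistent with standard formulations such as Finley and Joachims, 2005; the exact notation on the original slide may differ):

```latex
% Binary variable y_{uv} = 1 iff mentions u and v are linked;
% w_{uv} is the pairwise mention score. Both objectives maximize:
\max_{y} \sum_{u < v} w_{uv}\, y_{uv}, \qquad y_{uv} \in \{0, 1\}

% Best-Link: each mention v links to at most one preceding mention u.
\text{s.t.} \quad \sum_{u \,:\, u < v} y_{uv} \le 1 \quad \forall v

% All-Link: enforce the transitive closure of the clustering.
\text{s.t.} \quad y_{uw} \ge y_{uv} + y_{vw} - 1 \quad \forall\, u < v < w
```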

  12. Pairwise Mention Scoring: For each mention pair (u, v), generate a compatibility score from: a weight vector learned from training data applied to extracted features, a compatibility score given by constraints, and a threshold parameter (to be tuned).
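The scoring components listed above can be combined as in the following sketch (all argument names are illustrative assumptions; the original slide's equation was an image and is not recoverable exactly):

```python
def pairwise_score(features, weights, constraint_score, threshold):
    """Compatibility score for a mention pair: learned weights over
    extracted features, plus the score given by constraints, minus
    a tuned threshold parameter."""
    feature_score = sum(weights.get(f, 0.0) for f in features)
    return feature_score + constraint_score - threshold
```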

  13. Training Procedure: The choice of a learning strategy depends on the inference procedure. 1) Binary classification for the pairwise coreference model. Positive examples: mention pairs (u, v), where v is the closest preceding mention in u's equivalence class. Negative examples: mention pairs (u, v), where v is a preceding mention of u and u, v are in different classes. 2) Structured learning: a structured perceptron algorithm (similar to Finley and Joachims, 2005). We apply the mention detector to the training set and train the classifier on the union of gold and predicted mentions.
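The example-generation scheme for the binary classifier can be sketched as follows (a minimal illustration of the slide's positive/negative definitions; `entity_of` mapping each mention to its gold entity id is an assumed input, not the system's actual data format):

```python
def binary_training_pairs(mentions, entity_of):
    """Positive: (u, closest preceding mention in u's entity).
    Negative: (u, v) for every preceding v in a different entity."""
    pos, neg = [], []
    for i, u in enumerate(mentions):
        closest = None
        for j in range(i - 1, -1, -1):  # scan leftward from u
            v = mentions[j]
            if entity_of[v] == entity_of[u]:
                if closest is None:
                    closest = v  # closest preceding coreferent mention
            else:
                neg.append((u, v))
        if closest is not None:
            pos.append((u, closest))
    return pos, neg
```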

  14. Linguistic and Knowledge-based Constraints: Most mistakes are recall errors, where the system fails to link mentions that refer to the same entity. We consider three constraints that improve recall on NPs with a definite determiner and on mentions whose heads are named entities. For example, the following pairs are corrected by constraints: [Governor Bush] and [Bush]; [Sony itself] and [Sony]; [Farmers] and [Los Angeles-based Farmers].
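One of these constraints can be approximated by a head-match rule (a deliberately simplified sketch: it treats the last token as the NP head and covers examples like [Governor Bush]/[Bush], but not pronoun-like cases such as [Sony itself]/[Sony]; the paper's actual three constraints are richer):

```python
def head_match_constraint(m1, m2):
    """Link two mentions when their head words match, e.g.
    'Governor Bush' and 'Bush'. Head = last token (simplification)."""
    head1 = m1.split()[-1].lower()
    head2 = m2.split()[-1].lower()
    return head1 == head2
```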

  15. Results on DEV Set -- Predicted Mentions
      Method               MD     MUC    BCUB   CEAF   AVG
      Best-Link            64.70  55.67  69.21  43.78  56.22
      Best-Link w/ Const.  64.69  55.80  69.29  43.96  56.35
      All-Link             63.30  54.56  68.50  42.15  55.07
      All-Link w/ Const.   63.39  54.56  68.46  42.20  55.07

  16. Results on DEV Set -- Gold Mentions
      Method               MUC    BCUB   CEAF   AVG
      Best-Link            80.58  75.68  64.68  73.65
      Best-Link w/ Const.  80.56  75.02  64.24  73.27
      All-Link             77.72  73.65  59.17  70.18
      All-Link w/ Const.   77.94  73.43  59.47  70.28

  17. Official Scores on TEST Set
      Task                                                       MD     MUC    BCUB   CEAF   AVG
      Pred. Mentions w/ Pred. Boundaries (Best-Link w/ Const.)   64.88  57.15  67.14  41.94  55.96
      Pred. Mentions w/ Gold Boundaries (Best-Link w/ Const.)    67.92  59.75  68.65  41.42  56.62
      Gold Mentions (Best-Link)                                  -      82.55  73.70  65.24  73.83
      We train the system on both the TRAIN and DEV sets.

  18. Discussion and Conclusion: Best-Link outperforms All-Link. Our approach accommodates the infusion of knowledge via constraints, and constraints improve recall on a subset of mentions. Other common system errors might also be fixed by constraints. Future work: this approach can be used to incorporate additional knowledge sources. Thank you!
