Exploring Knowledge Base Construction and Commonsense Knowledge in Fiction


This talk covers research on knowledge base construction from fictional texts, taxonomies as backbones for KB construction, and extraction of commonsense knowledge from diverse sources, before turning to its main topic: assessing how much knowledge bases actually know, i.e., their recall. Challenges such as sparsity and semantics are addressed through comprehensive extraction and consolidation of multifaceted semantics.



Presentation Transcript


  1. What knowledge bases know (and what they don't) Simon Razniewski Max Planck Institute for Informatics

  2. My background: Max Planck Institute for Informatics (MPII). Max Planck Society: foundational research organization in Germany. MPII: 150 members, located in Saarbrücken (near the French border). Department 5: headed by Gerhard Weikum, ~25 members; themes: language, data, knowledge; notable projects: YAGO, WebChild.

  3. Myself: Senior Researcher at MPI for Informatics, Germany, heading the Knowledge Base Construction and Quality area of Department 5 (4 PhD students). Assistant professor at FU Bozen-Bolzano, Italy, 2014-2017; PhD from FU Bozen-Bolzano, 2014. Research stays at UCSD (2012), AT&T Labs-Research (2013), University of Queensland (2015). Research interests: 1. KB construction in fiction (1 slide), 2. Common-sense knowledge (1 slide), 3. KB recall assessment (remainder of talk).

  4. Research interests (1): KB construction in fiction. Fictional texts as archetypes of domain-specific low-resource universes (Lord of the Rings, Marvel superheroes; cf. Amazon titles and roles, French Army lingo, model railway terminology). Taxonomies as backbones for KBC: construction from noisy category systems, exploiting WordNet for abstract levels [WWW 19]. Entity types outside typical news/Wikipedia domains: reference type systems from related universes; typing by combining supervised, dependency-based, and lookup-based modules; consolidation using type correlation and taxonomical coherence [WSDM 20].

  5. Research interests (2): Commonsense knowledge. Properties of general world concepts instead of instances: elephants, submarines, pianos; not Seattle, Trump, Amazon. Challenges: sparsity; reporting bias (the web knows as many pink as grey elephants); semantics ("lions have manes" vs. "lions attack humans"). Our approach: comprehensive extraction from question data sources [CIKM 19]; multifaceted semantics, consolidation via taxonomy-based soft constraints [under review/arXiv 20].

  6. What knowledge bases know (and what they don't) Simon Razniewski Max Planck Institute for Informatics

  7. KB construction: Current state. General-world knowledge is an old dream of AI. Large general and domain-specific KBs exist at most major tech companies. Research progress is visible downstream: IBM Watson beats humans in a trivia game in 2011; entity linking systems are competitive with humans on popular news corpora; systems pass 8th-grade science tests in the AllenAI Science challenge in 2016. Intrinsic question: How good are these KBs?

  8. Intrinsic analysis Is what they know true? (precision or correctness) Do they know what is true? (recall or completeness) 8

  9. Recall awareness: Extrinsic relevance. Resource efficiency: directing extraction efforts towards incomplete regions. Truth consolidation: complete sources as evidence against spurious extractions. Question answering: integrity (say when you don't know); negation and counts rely on completeness.

  10. KB recall: Good? DBpedia: 167 out of 204 Nobel laureates in Physics. Wikidata: 2 out of 2 children of Obama. Google Knowledge Graph: 39 out of 48 Tarantino movies.
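A minimal sketch of how one such number could be checked against the live Wikidata SPARQL endpoint (Q76 is Barack Obama, P40 is the "child" property; the gold count of 2 is taken from the slide). This is an illustration, not the methodology used in the talk:

```python
# Sketch: compare the KB's fact count against a known gold count.
import requests

QUERY = """
SELECT (COUNT(?child) AS ?n) WHERE {
  wd:Q76 wdt:P40 ?child .
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "kb-recall-demo/0.1"},
    timeout=30,
)
kb_count = int(resp.json()["results"]["bindings"][0]["n"]["value"])
gold_count = 2  # known number of children, as stated on the slide
print(f"Wikidata knows {kb_count} of {gold_count} children "
      f"-> recall {kb_count / gold_count:.0%}")
```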

  11. KB recall: Bad? Wikidata knows only 15 employees of Amazon. DBpedia contains 6 out of 35 Dijkstra Prize winners. Google Knowledge Graph: "Points of Interest" completeness?

  12. What previous work says. [figure: KB1-KB5] "KB engineers have mainly tried to make KBs bigger. Another point, however, is to understand how much they know." [Marx, 1845] "There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know." [Rumsfeld, 2002]

  13. Outline Assessing KB recall 1. Logical foundations 2. Data mining 3. Information extraction 4. Comparative coverage 13

  14. Outline Assessing KB recall 1. Logical foundations 2. Data mining 3. Information extraction 4. Comparative coverage 14

  15. Closed- and open-world assumption. Example KB, relation won(name, award): Brad Pitt - Oscar; Einstein - Nobel Prize; Berners-Lee - Turing Award. Query won(Brad Pitt, Oscar)? Closed-world: Yes; open-world: Yes. Query won(Pitt, Nobel Prize)? Closed-world: No; open-world: Maybe. Databases traditionally employ the closed-world assumption; KBs (Semantic Web) necessarily operate under the open-world assumption.
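A minimal sketch of the contrast, using the toy `won` relation from the slide:

```python
# The same fact query answered under the closed-world assumption (CWA)
# and the open-world assumption (OWA).
won = {("Brad Pitt", "Oscar"),
       ("Einstein", "Nobel Prize"),
       ("Berners-Lee", "Turing Award")}

def answer_cwa(person, award):
    # CWA: anything not stated is false.
    return "Yes" if (person, award) in won else "No"

def answer_owa(person, award):
    # OWA: anything not stated is unknown.
    return "Yes" if (person, award) in won else "Maybe"

print(answer_cwa("Brad Pitt", "Oscar"), answer_owa("Brad Pitt", "Oscar"))              # Yes Yes
print(answer_cwa("Brad Pitt", "Nobel Prize"), answer_owa("Brad Pitt", "Nobel Prize"))  # No Maybe
```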

  16. Open-world assumption. Q: Game of Thrones directed by Shakespeare? KB: Maybe. Q: Brad Pitt works at Amazon? KB: Maybe. Q: Trump brother of Kim Jong Un? KB: Maybe.

  17. The logicians' way out: we need the power to express both "maybe" and "no" = partial closed-world assumption. Approach: completeness statements [Motro 1989], e.g., "wonAward is complete for Nobel Prizes". Same KB as before: won(Pitt, Oscar)? Yes. won(Pitt, Nobel)? No (covered by the completeness statement). won(Pitt, Turing)? Maybe. These statements are cool [VLDB 11, CIKM 12, SIGMOD 15].
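A sketch of how a completeness statement licenses a definite "No"; the `complete_for_award` set is an illustrative stand-in for Motro-style statements, not the formalism of the cited papers:

```python
# Partial closed-world answering: absence plus a completeness statement
# yields "No"; uncovered absent facts stay "Maybe".
won = {("Brad Pitt", "Oscar"),
       ("Einstein", "Nobel Prize"),
       ("Berners-Lee", "Turing Award")}

# Awards for which the won relation is asserted to be complete.
complete_for_award = {"Nobel Prize"}

def answer(person, award):
    if (person, award) in won:
        return "Yes"
    if award in complete_for_award:
        return "No"      # absence + completeness statement => definite negative
    return "Maybe"       # ordinary open-world answer

print(answer("Brad Pitt", "Oscar"))         # Yes
print(answer("Brad Pitt", "Nobel Prize"))   # No
print(answer("Brad Pitt", "Turing Award"))  # Maybe
```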

  18. Where would completeness statements come from? Data creators should pass them along as metadata, or editors should add them in curation steps. We developed COOL-WD (a completeness tool for Wikidata).

  19. [image-only slide]

  20. But: this requires human effort, editors are lazy, and automatically created KBs do not even have editors. Remainder of this talk: how to automatically acquire information about KB completeness/recall.

  21. Outline Assessing KB recall 1. Logical foundations 2. Data mining 3. Information extraction 4. Comparative coverage 21

  22. Data mining: Idea (1/2). Certain patterns in the data hint at completeness or incompleteness: people with a death date but no death place are incomplete for death place; people with fewer than two parents are incomplete for parents; movies with a producer are complete for directors.

  23. Data mining: Idea (2/2). The examples can be expressed as Horn rules: dateOfDeath(X, Y) ∧ lessThan1(X, placeOfDeath) ⇒ incomplete(X, placeOfDeath); lessThan2(X, hasParent) ⇒ incomplete(X, hasParent); movie(X) ∧ producer(X, Z) ⇒ complete(X, director). Can such patterns be discovered with association rule mining?

  24. Rule mining: Implementation. We extended the AMIE association rule mining system with meta-predicates for completeness/incompleteness, e.g. complete(X, director), and for object counts, e.g. lessThan2(X, hasParent). We then mined rules with complete/incomplete in the head for 20 YAGO/Wikidata relations. Result: (in-)completeness can be predicted with 46-100% F1 [WSDM 17].
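An illustrative sketch of how mined rules with such meta-predicates could be applied to flag (in)complete entity/property pairs; the toy KB and rules are made up here, and this is not the actual AMIE extension:

```python
# Apply Horn rules with cardinality meta-predicates like lessThanN(x, p)
# to predict (in)completeness of entity/property pairs in a toy KB.
kb = {
    "JohnDoe": {"dateOfDeath": ["1999"], "placeOfDeath": [], "hasParent": ["Jane"]},
    "MaryRoe": {"hasParent": ["Anna", "Bob"], "producer": ["Studio X"]},
}

def less_than(entity, prop, n):
    return len(kb[entity].get(prop, [])) < n

# Each rule: (body as a predicate over an entity, head as (status, property)).
rules = [
    (lambda e: kb[e].get("dateOfDeath") and less_than(e, "placeOfDeath", 1),
     ("incomplete", "placeOfDeath")),
    (lambda e: less_than(e, "hasParent", 2),
     ("incomplete", "hasParent")),
]

for entity in kb:
    for body, (status, prop) in rules:
        if body(entity):
            print(f"{entity} predicted {status} for {prop}")
# JohnDoe predicted incomplete for placeOfDeath
# JohnDoe predicted incomplete for hasParent
```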

  25. Data mining: Challenges. Consensus: human(x) ⇒ Complete(x, graduatedFrom), schoolteacher(x) ⇒ Incomplete(x, graduatedFrom), professor(x) ⇒ Complete(x, graduatedFrom); for John (human, schoolteacher, professor), is Complete(John, graduatedFrom)? Rare properties require very large training data, e.g., US presidents being complete for education: annotating ~3000 rows at 10 ct/row yielded 0 US presidents.

  26. Outline Assessing KB recall 1. Logical foundations 2. Data mining 3. Information extraction 4. Comparative coverage 26

  27. IE idea 1: Count information. "Barack and Michelle have two children": if the KB stores 0 children, recall is 0%; with 1 child, recall is 50%; with 2 children, recall is 100%.
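A minimal sketch of the resulting recall estimate, with the KB contents and the extracted count as assumptions:

```python
# An extracted cardinality ("... have two children") gives a denominator
# against which the KB's facts can be compared.
def recall_estimate(kb_facts, extracted_count):
    """Fraction of the stated number of objects that the KB actually holds."""
    if extracted_count == 0:
        return 1.0
    return min(len(kb_facts), extracted_count) / extracted_count

kb_children = {"Malia"}   # suppose the KB knows one child
stated_count = 2          # count extracted from text
print(f"estimated recall: {recall_estimate(kb_children, stated_count):.0%}")  # 50%
```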

  28. Count extraction: Implementation. We developed an LSTM-based classifier for identifying numbers that express relation cardinalities. It works for a variety of topics, such as family relations ("has 2 siblings"), geopolitics ("is composed of seven boroughs"), and artwork ("consists of three episodes"). Counts are sometimes the rule, not the exception: e.g., 178% more children appear in counts on Wikipedia than as facts in Wikidata [ACL 17, ISWC 18].

  29. Count extraction: Details. Cardinalities are frequently expressed non-numerically: nouns ("has twins", "is a trilogy"), indefinite articles ("They have a daughter"), negation/adjectives ("have no children", "is childless"); we therefore extend the candidate set. Extraction often requires reasoning ("He has 3 children from Ivana and one from Marla"), i.e., detecting compositional cues.
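An illustrative, lexicon-based normalization of a few of these non-numeric cues; the real system is a learned classifier, so the cue table below is only a stand-in:

```python
# Map non-numeric cardinality cues (nouns, indefinite articles, negation)
# to counts, falling back to explicit numbers.
import re

CUE_COUNTS = {
    "twins": 2, "trilogy": 3,           # nouns carrying a number
    "childless": 0, "no children": 0,   # negation / adjectives
    "a daughter": 1, "a son": 1,        # indefinite articles
}

def extract_count(sentence):
    s = sentence.lower()
    for cue, n in CUE_COUNTS.items():
        if cue in s:
            return n
    m = re.search(r"\b(\d+)\b", s)      # fall back to explicit numbers
    return int(m.group(1)) if m else None

print(extract_count("They have a daughter"))  # 1
print(extract_count("She is childless"))      # 0
print(extract_count("He has 3 children"))     # 3
```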

  30. Idea 2: Recall estimation during IE. Which sentence mentions all districts? Linguistic theory: quantity and relevance are context-dependent [Grice 1975]; the wording matters! Preliminary results: context-based coverage estimation is possible [EMNLP 19].

  31. Outline Assessing KB recall 1. Logical foundations 2. Data mining 3. Information extraction 4. Comparative coverage 31

  32. Comparative coverage: Idea. Single-valued properties (date of birth, author, genre, ...): having one value means the property is complete. No need for external metadata; looking at the data alone suffices!

  33. What are single-valued properties? The strictly single-valued case is extreme: there can be multiple citizenships, additional parents due to adoption, several Twitter accounts due to the presidency.

  34. All hope lost? Presence of a value is better than nothing. Even better: for multi-valued attributes, data is still frequently added in batches (all clubs Diego Maradona played for, all ministers of a new cabinet). Checking data presence is a common heuristic among Wikidata editors.
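A minimal sketch of the value-presence heuristic on a toy entity; property names and values are illustrative, and the heuristic can of course be wrong for partially filled batches:

```python
# Treat a property as probably complete once it has any value, since
# multi-valued data is often added in batches (all clubs, all ministers).
maradona = {
    "memberOfSportsTeam": ["Boca Juniors", "Barcelona", "Napoli", "Sevilla"],
    "awardReceived": [],
}

def probably_complete(entity, prop):
    return len(entity.get(prop, [])) > 0

for prop in maradona:
    status = "probably complete" if probably_complete(maradona, prop) else "unknown / likely incomplete"
    print(f"{prop}: {status}")
```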

  35. Value presence heuristic - example [https://www.wikidata.org/wiki/Wikidata:Wikivoyage/Lists/Embassies]

  36. Can we automate data presence assessment? 4.1: Which properties to look at? 4.2: How to quantify data presence? 36

  37. 4.1: Which properties to look at? (1/2) Coverage of Wikidata for Putin? There are more than 3000 properties one could assign to Putin. Are at least all relevant properties there? And what do you mean by relevant?

  38. 4.1: Which properties to look at? (2/2) Crowd-based property relevance task: the state of the art (itemset mining) gets 61% of high-agreement triples right; it mistakes frequency for interestingness. Our weakly supervised text model achieves 75% [ADMA 17].

  39. 4.2: How to quantify data presence? "We have values for 46 out of 77 relevant properties for Putin" is hard to interpret. Proposal: quantify based on comparison with other similar entities. Ingredients: a similarity metric (who is similar to Trump?) and data quantification (how much data is good/bad?). Deployed in Wikidata as the Relative Completeness Indicator (Recoin) [ESWC 17].
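A rough sketch of the comparative idea behind Recoin: score an entity by how many of the properties frequent among similar entities it already has. The peer set, property names, and support threshold are illustrative assumptions, not the deployed implementation:

```python
# Compare an entity's filled properties against how often those properties
# appear on similar entities (a toy peer group).
from collections import Counter

similar_entities = [
    {"dateOfBirth", "spouse", "educatedAt", "awardReceived"},
    {"dateOfBirth", "spouse", "educatedAt"},
    {"dateOfBirth", "educatedAt", "awardReceived"},
]

def relative_completeness(entity_props, peers, min_support=0.5):
    freq = Counter(p for peer in peers for p in peer)
    expected = {p for p, c in freq.items() if c / len(peers) >= min_support}
    missing = expected - entity_props
    score = 1 - len(missing) / len(expected)
    return score, missing

score, missing = relative_completeness({"dateOfBirth", "spouse"}, similar_entities)
print(f"relative completeness: {score:.0%}, missing: {sorted(missing)}")
# relative completeness: 50%, missing: ['awardReceived', 'educatedAt']
```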

  40. [image-only slide]

  41. [image-only slide]

  42. Outline Assessing KB recall 1. Logical foundations 2. Data mining 3. Information extraction 4. Comparative coverage 5. Summary 42

  43. Acknowledgement 43

  44. Summary (1/2). Increasing KB quality can be noticed downstream. Precision is easy to evaluate; recall is largely unknown.

  45. Summary (2/2). Proposal: make recall information a first-class citizen of KBs. Methods for obtaining recall information: 1. Supervised data mining, 2. Numeric or context-based text extraction, 3. Comparative data presence. Questions?

  46. Relevance (1/3): IE resource efficiency. Districts(NY) = {Manhattan, Bronx, Queens, Brooklyn, Staten Island}, coverage = high: stop further extraction. Districts(Hong Kong) = {Wan Chai, Kowloon City, Yau Tsim Mong}, coverage = low: explore more resources.

  47. Relevance (2/3): Adjust IE thresholds. Sentence: "HK consists of the districts Wan Chai, ..., ..., ..., and ....", estimated coverage 0.98. IE candidates: District(HK, Wan Chai), confidence 0.86: accept; District(HK, Kowloon City), confidence 0.93: accept; District(HK, Yau Tsim), confidence 0.74: reject; District(HK, Macao), confidence 0.67: reject.
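One possible reading of this slide as code: the estimated sentence coverage shifts the confidence threshold for accepting candidates. The threshold formula and the accept/reject outcome are illustrative assumptions, not the talk's actual policy:

```python
# Coverage-aware acceptance: when the sentence is estimated to name
# (nearly) all districts, demand a stricter extraction confidence.
def accept(candidates, coverage, base_threshold=0.5):
    threshold = base_threshold + 0.35 * coverage  # higher coverage -> stricter
    return [(c, conf) for c, conf in candidates if conf >= threshold]

candidates = [("Wan Chai", 0.86), ("Kowloon City", 0.93),
              ("Yau Tsim", 0.74), ("Macao", 0.67)]

print(accept(candidates, coverage=0.98))
# [('Wan Chai', 0.86), ('Kowloon City', 0.93)]
```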

  48. Relevance (3/3): QA negation and completeness. Which US presidents were married only once? Which countries participated in no UN mission? For which cities do we know all districts? Without coverage awareness, QA systems cannot answer these. Focus of our research [SIGMOD 15, WSDM 17, ACL 17, ISWC 18, ...].
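A minimal sketch of why such queries need completeness metadata: only entities whose relevant property is marked complete can safely contribute to a negation-style answer. Entity names and the `spouses_complete` flag are illustrative:

```python
# "Married only once" is only safe for entities with a known-complete
# spouse list; otherwise open-world silence could just be missing data.
people = {
    "A": {"spouses": ["X"],      "spouses_complete": True},
    "B": {"spouses": ["Y", "Z"], "spouses_complete": True},
    "C": {"spouses": ["W"],      "spouses_complete": False},
}

def married_only_once(db):
    answers, unknown = [], []
    for name, rec in db.items():
        if not rec["spouses_complete"]:
            unknown.append(name)          # cannot conclude anything
        elif len(rec["spouses"]) == 1:
            answers.append(name)
    return answers, unknown

print(married_only_once(people))  # (['A'], ['C'])
```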
