Evolution of Theory and Knowledge Refinement in Machine Learning


Early work in the 1990s combined machine learning and knowledge engineering to refine theories and improve learning from limited data. These techniques took human-engineered knowledge in the form of propositional or first-order rule bases and revised it to fit empirical data using symbolic, probabilistic, and neural-network methods. Rule refinement operations included rule deletion, generalization, addition, and specialization. Integrating inductive logic programming and probabilistic learning further extended refinement to first-order theories, uncertain rule bases, and Bayesian networks.



Presentation Transcript


  1. A Recap of Early Work on Theory and Knowledge Refinement
  Raymond J. Mooney, The University of Texas at Austin
  Jude W. Shavlik, The University of Wisconsin at Madison

  2. History of Combining Machine Learning and Knowledge Engineering
  In the early 1990s there was a sizable body of work on integrating ML and KE that exploited prior knowledge to improve learning from limited training data. A variety of techniques were developed for taking human-engineered knowledge in the form of propositional or first-order logical rule bases and revising it to fit empirical data using different ML methods:
  - Symbolic
  - Probabilistic
  - Neural-network
  Results were demonstrated on a range of applications.

  3. Learning curve results exploiting prior knowledge [figure]

  4. Symbolic Theory/Rule-base Refinement
  Various methods were used to revise propositional or first-order predicate logic rule bases to fit empirical training data. Different methods supported different types of rule refinement (sketched in code after this slide):
  - Rule deletion
  - Rule generalization (dropping antecedents)
  - Rule addition
  - Rule specialization (adding antecedents)
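These four operators are easy to state precisely. Below is a minimal Python sketch of them; the (head, antecedent-set) rule representation and the function names are illustrative assumptions, not the interface of EITHER or any other system surveyed here.

```python
# Propositional rules as (head, frozenset-of-antecedents) pairs; this
# representation and these function names are illustrative assumptions.

def delete_rule(rules, rule):
    """Rule deletion: drop an incorrect or over-general rule entirely."""
    return [r for r in rules if r != rule]

def generalize_rule(rule, antecedent):
    """Rule generalization: drop an antecedent so the rule covers more cases."""
    head, body = rule
    return (head, body - {antecedent})

def add_rule(rules, head, body):
    """Rule addition: introduce a new rule, e.g. induced from uncovered positives."""
    return rules + [(head, frozenset(body))]

def specialize_rule(rule, antecedent):
    """Rule specialization: add an antecedent so the rule covers fewer cases."""
    head, body = rule
    return (head, body | {antecedent})

# Example: generalize "promoter :- contact, conformation" by dropping a condition.
rule = ("promoter", frozenset({"contact", "conformation"}))
print(generalize_rule(rule, "conformation"))  # ('promoter', frozenset({'contact'}))
```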

  5. Propositional Rule Refinement Methods: EITHER (Explanation-based and Inductive THeory Extension and Revision) (Ourston & Mooney, 1990, 1994)

  6. EITHER Results: results on refining a biological theory for recognizing promoter sequences in DNA. [figure]

  7. First-order Rule Refinement Methods
  Many methods were based on early work in Inductive Logic Programming (ILP). FORTE (First-Order Revision of Theories from Examples) (Richards & Mooney, 1991, 1995) combined:
  - Rule specialization and learning using top-down induction (FOIL, First Order Inductive Learner) and relational pathfinding
  - Greedy rule and antecedent deletion (see the sketch after this slide)
  FORTE was applied to automatically debugging Prolog programs written by students learning logic programming in a programming-languages class.
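As a rough illustration of the deletion step, here is a hedged, propositional sketch of greedy antecedent deletion: repeatedly drop whichever antecedent most improves training accuracy, stopping when no deletion helps. It is in the spirit of FORTE's hill-climbing revision but is not its actual (relational) algorithm, and all names are invented for the example.

```python
# Greedy antecedent deletion, propositional stand-in for the relational case.

def covers(body, example):
    """An example (the set of facts true in it) satisfies a conjunctive
    body if it contains every antecedent."""
    return body <= example

def accuracy(body, positives, negatives):
    tp = sum(covers(body, e) for e in positives)
    tn = sum(not covers(body, e) for e in negatives)
    return (tp + tn) / (len(positives) + len(negatives))

def greedy_antecedent_deletion(body, positives, negatives):
    body = set(body)
    best = accuracy(body, positives, negatives)
    improved = True
    while improved and body:
        improved = False
        for a in sorted(body):
            score = accuracy(body - {a}, positives, negatives)
            if score > best:
                body, best, improved = body - {a}, score, True
                break
    return body

pos = [{"p", "q"}, {"p", "q", "r"}]  # positive examples (sets of true facts)
neg = [{"q"}]                        # negative example
print(greedy_antecedent_deletion({"p", "q", "r"}, pos, neg))  # {'p', 'q'}
```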

  8. Probabilistic Knowledge Refinement
  Use probabilistic learning methods to refine uncertain rule bases or Bayesian networks.
  RAPTURE (Revising Approximate Probabilistic Theories using Repositories of Examples) (Mahoney & Mooney, 1993) refined certainty-factor rule bases using:
  - Backpropagation to revise certainty-factor parameters
  - Rule induction to add new rules
  BANNER (Ramachandran & Mooney, 1996, 1998) refined Bayesian networks using:
  - Backpropagation to revise noisy-or and noisy-and parameters
  - Information gain to alter the graphical structure of the model
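To make the noisy-or piece concrete, the sketch below computes the standard noisy-or combination and takes one squared-error gradient step on a single cause probability. The combination rule is the usual noisy-or model; the update shown is a generic gradient step for illustration, not BANNER's published procedure.

```python
# Noisy-or: P(effect | active causes) = 1 - (1 - leak) * prod_i (1 - p_i),
# where p_i is the probability that active cause i alone produces the effect.

def noisy_or(active_probs, leak=0.0):
    prod = 1.0 - leak
    for p in active_probs:
        prod *= 1.0 - p
    return 1.0 - prod

def gradient_step(active_probs, target, i, lr=0.1, leak=0.0):
    """One illustrative gradient-descent step on squared error (P - target)^2
    with respect to the i-th cause probability (clamped to [0, 1))."""
    P = noisy_or(active_probs, leak)
    dP_dpi = (1.0 - P) / (1.0 - active_probs[i])  # since (1 - P) has factor (1 - p_i)
    grad = 2.0 * (P - target) * dP_dpi
    new = min(max(active_probs[i] - lr * grad, 0.0), 1.0 - 1e-9)
    return active_probs[:i] + [new] + active_probs[i + 1:]

probs = [0.6, 0.3]
print(noisy_or(probs))               # 1 - 0.4 * 0.7 = 0.72
print(gradient_step(probs, 1.0, 0))  # nudges p_0 upward toward the target
```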

  9. RAPTURE Results: results on a classic machine learning data set for diagnosing diseased soybean plants. [figure]

  10. Neural Network Knowledge Refinement
  Initialize a neural network with structure and parameters from a knowledge base rather than randomly. KBANN (Knowledge-Based Artificial Neural Networks) (Towell & Shavlik, 1990, 1994) revised neural networks by initializing them with propositional rule bases, then used backpropagation to adjust parameters on existing links and to learn parameters on new links added to fully connect adjacent layers (initialized with small random numbers).

  11. Mapping Rules to Neural Networks [figure]
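Since the figure is not reproduced in this transcript, the core of the mapping can be sketched in a few lines. The scheme below (weight +W or -W per antecedent, with a bias that thresholds the unit at just under the number of positive antecedents) follows the commonly described KBANN initialization; the weight magnitude of 4 and the function names are illustrative assumptions.

```python
import math

W = 4.0  # illustrative magnitude for rule-derived weights

def rule_to_unit(pos_antecedents, neg_antecedents):
    """Map a conjunctive rule to (weights, bias) for one sigmoid unit:
    +W per positive antecedent, -W per negated one, and a bias that
    thresholds the unit at (number of positive antecedents - 0.5)."""
    weights = {a: W for a in pos_antecedents}
    weights.update({a: -W for a in neg_antecedents})
    bias = -(len(pos_antecedents) - 0.5) * W
    return weights, bias

def activation(weights, bias, inputs):
    """Sigmoid output of the unit given 0/1 truth values for the inputs."""
    net = bias + sum(w * inputs.get(a, 0) for a, w in weights.items())
    return 1.0 / (1.0 + math.exp(-net))

# Rule: promoter :- contact, conformation.
w, b = rule_to_unit(["contact", "conformation"], [])
print(activation(w, b, {"contact": 1, "conformation": 1}))  # ~0.88 (unit on)
print(activation(w, b, {"contact": 1, "conformation": 0}))  # ~0.12 (unit off)
```

Initialized this way, each unit computes its source rule almost exactly, and backpropagation can then soften or revise those decisions as the training data demands.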

  12. Conclusions
  There is a rich history of work on combining knowledge engineering and machine learning from the 1990s. The paper reviews this early work with extensive citations to the relevant literature. Subsequent work added prior knowledge to Support-Vector Machines (SVMs), reinforcement learning (RL), etc. Many of the basic ideas from this early work could potentially be updated and adapted to current deep learning methods.
