Explaining Models for Predicting At-Risk Students

 
Explaining models for predicting at-risk students – Look under the bonnet
 
Jakub Kocvara, Dr. Martin Hlosta, Prof. Zdenek Zdrahal
 
What is OU Analyse?
 
Creating a predictive model that identifies students at risk of failing the module as early as possible, so that OU intervention is efficient and meaningful.
Tutor-facing dashboard (Early Alert Indicators)
Supporting around 250 modules at the moment
 
 
 
 
How does a machine learning model work?

Training data → machine learning algorithm → model
Prediction data → model → predicted outcome
 
Previous presentation (training data): demographic factors, previous study results, online activity, assignment result
Current presentation (prediction data): demographic factors, previous study results, online activity
Predicted outcome: the assignment result for the current presentation
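For concreteness, here is a minimal sketch of that training/prediction cycle in scikit-learn. The file names, feature columns and algorithm are hypothetical placeholders, not the actual OU Analyse pipeline.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    FEATURES = ["previous_credits", "best_previous_score", "vle_clicks_last_week"]  # hypothetical

    # Training data: the previous presentation, where assignment results are already known.
    previous = pd.read_csv("previous_presentation.csv")                             # hypothetical file
    X_train, y_train = previous[FEATURES], previous["assignment_submitted"]

    # The machine learning algorithm turns the training data into a model.
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Prediction data: the current presentation, where assignment results are not yet known.
    current = pd.read_csv("current_presentation.csv")                               # hypothetical file
    predicted_outcome = model.predict(current[FEATURES])                            # predicted assignment result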
 
What are some properties of a model?
 
Predictive power – e.g. accuracy
Training speed – training can be a very expensive operation
Interpretability – how much do we understand the inner workings of a model; often connected to complexity
(Chart: interpretability vs. complexity)
 
Interpretable models
 
Linear models, decision trees, …
(Example decision tree: splits on last assignment score > 60, last week's online activity and forum posts > 5, ending in Submit / Not Submit leaves)
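Written out as code, the tree above becomes a short chain of yes/no questions that a tutor could follow by hand; a minimal sketch, with the exact split order partly guessed from the slide:

    def predict_submission(last_assignment_score, online_activity_last_week, forum_posts):
        """Toy decision tree mirroring the slide; thresholds are illustrative only."""
        if last_assignment_score > 60:
            # Good last score: predicted to submit unless the student has gone quiet online.
            return "Submit" if online_activity_last_week else "Not Submit"
        # Weaker last score: predicted to submit only if still active on the forums.
        return "Submit" if forum_posts > 5 else "Not Submit"

    print(predict_submission(72, True, 1))    # Submit
    print(predict_submission(45, False, 2))   # Not Submit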
 
“Black box” models
 
More complex models can be more accurate, but are inherently harder to interpret
Large decision trees, ensemble methods, neural networks, …
 
Why is interpretability important?

Trust – providing tutors with the reasoning behind the model’s decisions
“If the users do not trust a model or a prediction, they will not use it.” – Ribeiro et al. (2016)
Error checking – assessing the validity of the model by analysing suspicious predictions
Can point to an incomplete or wrongly chosen training dataset
Ethical and legal reasons – GDPR’s “right to explanation”
 
 
Types of interpretability
 
Model vs. individual prediction
Does the interpretation method explain an individual prediction or the entire model behaviour?
Model-specific vs. model-agnostic
Are explanations dependent on knowledge of the underlying algorithm, or do they take only the model’s output into account?
Intrinsic vs. post-hoc
Is interpretability achieved by restricting the complexity of the machine learning model (intrinsic), or by applying methods that analyse the model after training (post-hoc)?
 
Explaining predictions in OU Analyse
 
Explanations generated by an independent Naïve Bayes model
Some intrinsic disadvantages:
Assumes independent variables
Explanations are usable only when Naïve Bayes agrees with the main model
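A minimal sketch (not the OU Analyse implementation) of this idea: the Naïve Bayes likelihood factorises into one additive term per feature, and those terms can be read as reasons, but only when the Naïve Bayes prediction matches the main model. Names such as X_train, y_train and FEATURES, and the assumption that all features are numeric, are carried over from the earlier hypothetical sketch.

    import numpy as np
    from scipy.stats import norm
    from sklearn.naive_bayes import GaussianNB

    # Independent explainer model, fitted on the same (hypothetical) training data.
    nb = GaussianNB().fit(X_train, y_train)

    def explain_with_nb(x, main_prediction):
        x = np.asarray(x, dtype=float)
        nb_prediction = nb.predict(x.reshape(1, -1))[0]
        if nb_prediction != main_prediction:
            return None  # the models disagree, so the explanation is not usable
        c = list(nb.classes_).index(nb_prediction)
        # The independence assumption makes log P(x | class) a sum of one term per
        # feature; larger terms lend more support to the predicted class.
        contributions = norm.logpdf(x, loc=nb.theta_[c], scale=np.sqrt(nb.var_[c]))
        return dict(zip(FEATURES, contributions))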
 
 
The importance of variables in the model
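As an illustration of how such a ranking can be produced, a global importance of each variable can be read directly from a tree-based main model such as the hypothetical random forest sketched earlier; other model families would need other techniques, e.g. permutation importance.

    import pandas as pd

    # Impurity-based importance of each variable in the fitted (hypothetical) random forest.
    importances = pd.Series(model.feature_importances_, index=FEATURES)
    print(importances.sort_values(ascending=False))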
 
LIME – Local Interpretable Model-agnostic Explanations

Takes an individual prediction and tweaks individual features to create new instances
Weighs them by proximity to the original
Tries to train a simple model (e.g. linear regression) that approximates the black-box model well in the vicinity of the original prediction
Evaluates which feature values confirm or contradict the black-box model’s hypothesis
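A minimal sketch of those steps using the open-source lime package, applied to the hypothetical model and data from the earlier sketches; the feature and class names are placeholders rather than the real OU Analyse setup.

    import numpy as np
    from lime.lime_tabular import LimeTabularExplainer

    explainer = LimeTabularExplainer(
        training_data=np.asarray(X_train, dtype=float),
        feature_names=FEATURES,
        class_names=[str(c) for c in model.classes_],
        mode="classification",
    )

    # Explain one student's prediction: LIME perturbs the feature values, weighs the
    # perturbed instances by proximity to the original, fits a local linear surrogate,
    # and reports which feature values support or contradict the black-box prediction.
    student = np.asarray(current[FEATURES], dtype=float)[0]
    explanation = explainer.explain_instance(student, model.predict_proba, num_features=3)
    for feature, weight in explanation.as_list():
        print(f"{feature}: {'supports' if weight > 0 else 'contradicts'} ({weight:+.3f})")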
 
LIME – Husky vs. wolf anecdote
 
Future of OU Analyse
 
Work in progress: incorporating a LIME module into the predictions pipeline
The goal is to have every student’s prediction explained and shown on the tutor’s dashboard
This would aid tutors in deciding whether or not to act on a prediction
Increased trust in our models would lead to increased adoption and, ultimately, retention
 
Sources
 
"Why Should I Trust You?": Explaining the Predictions of Any Classifier (2016) – 
Ribeiro, Singh,
Guestrin
Interpretable machine learning (2019) – 
Cristoph Molnar
A Unified Approach to Interpreting Model Predictions (2017) – 
Lundberg, Lee