Behavior Detection Models in Educational Settings

Week 3 Video 1: Behavior Detection

Welcome to Week 3
 
Over the last two weeks, we’ve discussed prediction models

This week, we focus on a type of prediction model called behavior detectors
 
Behavior Detectors
 
Automated models that can infer from interaction logs whether a student is behaving in a certain way

We discussed examples of this – SRL behaviors – in the Zhang et al. case study in Week 1
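
To make that concrete, here is a minimal sketch (mine, not from the lecture) of what a behavior detector can look like in code: each row summarizes a clip of student activity as features computed from the logs, the label is a human code for the behavior, and a classifier is cross-validated against those codes. All feature names and values below are invented for illustration.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score

# Hypothetical clips of student activity, summarized as log features,
# with a human-coded label (1 = gaming the system, 0 = not)
clips = pd.DataFrame({
    "mean_response_time": [1.2, 0.4, 3.1, 0.3, 2.2, 0.5],
    "pct_fast_errors":    [0.1, 0.7, 0.0, 0.8, 0.1, 0.6],
    "hint_requests":      [0,   5,   1,   6,   0,   4],
    "gaming":             [0,   1,   0,   1,   0,   1],
})

X = clips.drop(columns="gaming")
y = clips["gaming"]

# Cross-validated predictions, compared back to the human codes
preds = cross_val_predict(LogisticRegression(), X, y, cv=3)
print("Agreement with human codes (Kappa):", cohen_kappa_score(y, preds))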
 
 
The Goal
 
Infer meaningful (and complex) behaviors from logs or in real-time

So we can study those behaviors more deeply
How do they correlate with learning?
What are their antecedents?

And so we can identify when they occur
In order to intervene
 
Behaviors people have detected
 
Disengaged Behaviors
 
Gaming the System (Baker et al., 2004; Paquette & Baker, 2019; Xia et al., 2021; Almoubayyed & Fancsali, 2022; Levin et al., 2022)
Off-Task Behavior (Baker, 2007; Kovanovic et al., 2015; Gokhan et al., 2019)
Carelessness (San Pedro et al., 2011; Hershkovitz et al., 2011)
WTF/Inexplicable Behavior (Rowe et al., 2009; Wixon et al., 2012; Gobert et al., 2015)
Stopout (Botelho et al., 2019)
 
 
 
SRL Behaviors
 
Help Avoidance (Aleven et al., 2004, 2006; Almeda et al., 2017)
Persistence (Ventura et al., 2012; Botelho, 2018; Kai et al., 2018)
Wheel-Spinning (Beck & Gong, 2013; Matsuda et al., 2016; Kai et al., 2018; Botelho et al., 2019; Owen et al., 2019)
Unscaffolded Self-Explanation (Shih et al., 2008; Baker et al., 2013)
Computational Thinking Strategies (Rowe et al., 2021)
 
Teacher Strategic Behaviors
 
Curriculum Planning Behaviors (Maull et al., 2010)
Teacher “Proactive Remediation” Interventions (Miller et al., 2015)
Classroom Discussion Management Strategies (Suresh et al., 2022)
Asking Authentic Questions (Kelly et al., 2018; Dale et al., 2022)
Uptake of Student Ideas (Dale et al., 2022)
 
Related problem: Sensor-free affect detection
 
Not quite the same conceptually
 
But the methods turn out to be quite similar
 
Detecting
Boredom
Confusion
Frustration
Engaged Concentration
Delight
 
(D’Mello et al., 2008 and dozens of papers afterwards)

Related problem: LMS and MOOC Usage Analysis
 
Studying online learning behaviors
 
But typically not the same methods/process I’ll be discussing in the next lectures

More often, researchers in this area have looked to predict outcomes from relatively straightforward behaviors

Due to what’s visible in the log files (access to resources rather than thinking processes made visible through complex activities)
 
Related problem: LMS and MOOC Usage Analysis
 
That said, lots of great prediction modeling research in this area

Predicting and analyzing outcomes based on when and how much learners use videos, quizzes, labs, forums, and other resources

(Arnold, 2010; Breslow et al., 2013; Sharkey & Sanders, 2014; dozens of other examples)
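
A toy illustration of this kind of analysis (all column names and numbers invented): count each student's accesses to each resource type, then relate those counts to an outcome.

import pandas as pd
from sklearn.linear_model import LinearRegression

# Made-up clickstream: one row per resource access
events = pd.DataFrame({
    "student":  ["s1", "s1", "s2", "s2", "s2", "s3"],
    "resource": ["video", "quiz", "video", "forum", "video", "quiz"],
})
grades = pd.Series({"s1": 78, "s2": 91, "s3": 65}, name="grade")

# One row per student, one column per resource type, values = access counts
usage = pd.crosstab(events["student"], events["resource"])
model = LinearRegression().fit(usage, grades.loc[usage.index])
print(dict(zip(usage.columns, model.coef_.round(2))))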
 
Ground Truth
 
Where do you get the prediction labels?
 
Behavior Labels are Noisy
 
No perfect way to get indicators of student behavior
 
It’s not truth
It’s ground truth
 
Behavior Labels are Noisy
 
Another way to think of it
 
In some fields, there are “gold-standard” measures
As good as gold
 
With behavior detection, we have to work with “bronze-standard” measures
Gold’s less expensive cousin
 
Is this a problem?
 
Not really
 
It does limit how good we can realistically expect our models to be

If your training labels have inter-rater agreement of Kappa = 0.62

You probably should not expect (or want) your detector to have Kappa = 0.75
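
For reference, here is a quick way to compute that inter-rater agreement with scikit-learn; the two coders’ labels below are made up for the example.

from sklearn.metrics import cohen_kappa_score

# Two hypothetical coders labeling the same ten clips (1 = behavior present)
coder_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
coder_b = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]

# Kappa corrects raw agreement for agreement expected by chance
print(cohen_kappa_score(coder_a, coder_b))  # about 0.58 for these codes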
 
Some Sources of Ground Truth for Behavior
 
Self-report
Field observations
Hand-coding of logs (aka “text replays”)
Video coding
 
Advantages and Disadvantages
 
Somewhat outside the scope of this course
 
Important note: no matter which coding method you use, try to make sure the reliability/quality is as high as feasible
While noting that 1,000 codes with Kappa = 0.5 might be better than 100 codes with Kappa = 0.8! (see the sketch below)

Harder to get high reliability for video coding than for the other methods
A bit non-intuitive
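
Below is a toy simulation (assumptions entirely mine, not from the lecture) of that quantity-versus-quality tradeoff: detectors are trained on many noisier labels versus fewer cleaner ones, then compared on a large cleanly labeled test set. In runs like this the larger, noisier training set often comes out ahead, though the outcome depends on the setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

def make_data(n):
    # Synthetic "clips": 5 log features, true behavior depends on two of them
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

def corrupt(y, flip_rate):
    # Simulate coder noise by randomly flipping a fraction of the labels
    flips = rng.random(len(y)) < flip_rate
    return np.where(flips, 1 - y, y)

X_test, y_test = make_data(5000)

# Many noisier labels vs. fewer cleaner labels
for n, flip_rate in [(1000, 0.25), (100, 0.10)]:
    X_train, y_train = make_data(n)
    model = LogisticRegression().fit(X_train, corrupt(y_train, flip_rate))
    kappa = cohen_kappa_score(y_test, model.predict(X_test))
    print(f"n={n}, flip rate={flip_rate}: test Kappa = {kappa:.2f}")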
 
Next Lecture
 
Data Synchronization and Grain-Sizes