Understanding Learning in Digital Environments – A Framework for Benchmarking Student Models
How do Intelligent Learning Systems understand what students actually know? Modern educational technologies—such as Intelligent Tutoring Systems, adaptive learning platforms, and other forms of interactive learning media—aim to support learners in a personalized way. To do this, they rely on student models: computational models that estimate a learner’s knowledge state based on their interactions with a learning system.
Over the past few decades, many different approaches have been proposed for estimating students’ knowledge levels. Examples include Bayesian Knowledge Tracing, Item Response Theory, and Deep Knowledge Tracing. However, comparing these models remains difficult: different studies often use different datasets, preprocessing methods, feature representations, and evaluation metrics. This makes it challenging to determine which models best capture learning processes in real-world settings.
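To give a flavor of what such a student model computes, the following is a minimal sketch of the classic Bayesian Knowledge Tracing update: after each response, the estimated probability that the student has mastered the skill is revised. The parameter values here are illustrative defaults, not values from any particular study.

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """Return the updated mastery probability after observing one response."""
    if correct:
        # Correct answer: either the skill is known (and no slip), or it was a guess.
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess
        )
    else:
        # Incorrect answer: either a slip, or the skill is truly unknown.
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess)
        )
    # Account for the chance of learning between practice opportunities.
    return posterior + (1 - posterior) * p_transit

p_know = 0.3  # initial mastery estimate P(L0), illustrative
for correct in [True, True, False, True]:
    p_know = bkt_update(p_know, correct)
print(round(p_know, 3))
```

Other approaches such as Item Response Theory or Deep Knowledge Tracing estimate the same underlying quantity with very different machinery, which is precisely why a common benchmarking framework is needed.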
This project aims to develop a unified framework for benchmarking student models across datasets and methods. This framework will make it easier to explore how different computational models capture and predict human learning processes. It also offers students the opportunity to investigate how computational methods can be used to better understand and support learning in digital environments.
Objectives
- Design a standardized data format for representing student interactions in digital learning environments.
- Integrate multiple state-of-the-art student modeling approaches.
- Develop a benchmarking pipeline that enables consistent evaluation of models using well-defined metrics.
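As a concrete illustration of the first objective, a standardized interaction record might look like the sketch below. The field names are hypothetical choices for illustration; the actual format would be designed as part of the project.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One logged student response in a hypothetical standardized format."""
    student_id: str
    item_id: str
    skill_id: str      # knowledge component the item practices
    correct: bool      # whether the response was correct
    timestamp: float   # Unix time of the response

# Any source dataset would be converted into a list of such records
# before being passed to the benchmarking pipeline.
log = [
    Interaction("s1", "q17", "fractions", True, 1700000000.0),
    Interaction("s1", "q18", "fractions", False, 1700000060.0),
]
print(len(log))
```

A shared record format like this is what allows the same model implementations and evaluation metrics to run unchanged across different datasets.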
Required Knowledge
- Basic programming skills (Python or R)
- Interest in educational technology and learning analytics
- Familiarity with machine learning or data analysis (optional)
Students will have the opportunity to:
- Contribute to a research-oriented project at the intersection of AI, data science and education.
- Build experience that is valuable for a later bachelor's or master's thesis in AI or data science.