Paper Title

qDKT: Question-centric Deep Knowledge Tracing

Authors

Sonkar, Shashank, Waters, Andrew E., Lan, Andrew S., Grimaldi, Phillip J., Baraniuk, Richard G.

Abstract


Knowledge tracing (KT) models, e.g., the deep knowledge tracing (DKT) model, track an individual learner's acquisition of skills over time by examining the learner's performance on questions related to those skills. A practical limitation in most existing KT models is that all questions nested under a particular skill are treated as equivalent observations of a learner's ability, which is an inaccurate assumption in real-world educational scenarios. To overcome this limitation, we introduce qDKT, a variant of DKT that models every learner's success probability on individual questions over time. First, qDKT incorporates graph Laplacian regularization to smooth predictions under each skill, which is particularly useful when the number of questions in the dataset is large. Second, qDKT uses an initialization scheme inspired by the fastText algorithm, which has found success in a variety of language modeling tasks. Our experiments on several real-world datasets show that qDKT achieves state-of-the-art performance on predicting learner outcomes. Because of this, qDKT can serve as a simple, yet tough-to-beat, baseline for new question-centric KT models.
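The graph Laplacian regularizer mentioned in the abstract penalizes predictions that differ sharply between questions connected in a similarity graph (e.g., questions sharing a skill). A minimal sketch of such a penalty, assuming a simple skill-sharing adjacency matrix; the function and variable names here are illustrative, not the paper's actual implementation:

```python
import numpy as np

def laplacian_penalty(preds, adjacency):
    """Compute y^T L y, where L = D - A is the graph Laplacian.

    This equals the sum of (y_i - y_j)^2 over all edges (i, j),
    so it is small when connected questions get similar predictions.
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    return float(preds @ laplacian @ preds)

# Toy graph: questions q0 and q1 share a skill; q2 is isolated.
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)

smooth = laplacian_penalty(np.array([0.8, 0.8, 0.3]), A)  # connected preds agree -> 0.0
rough = laplacian_penalty(np.array([0.9, 0.1, 0.3]), A)   # (0.9 - 0.1)^2 = 0.64
```

In training, a term like this would be scaled by a hyperparameter and added to the prediction loss, pulling same-skill question predictions toward each other.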
