Paper Title
Perspective on Code Submission and Automated Evaluation Platforms for University Teaching
Paper Authors
Paper Abstract
We present a perspective on platforms for code submission and automated evaluation in the context of university teaching. Due to the COVID-19 pandemic, such platforms have become an essential asset for remote courses and a reasonable standard for structured code submission, given the increasing number of students in computer science. Utilizing automated code evaluation techniques has notable positive impacts for both students and teachers in terms of quality and scalability. We identify relevant technical and non-technical requirements for such platforms with regard to practical applicability and secure code submission environments. Furthermore, a survey among students was conducted to obtain empirical data on their general perception. We conclude that code submission and automated evaluation involve continuous maintenance effort, yet lower the required workload for teachers and provide better evaluation transparency for students.
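To give a concrete sense of the automated code evaluation the abstract refers to, the following is a minimal illustrative sketch (not the platform described in the paper): a submitted solution is written to an isolated working directory and run against instructor-provided unit tests with a time limit. The function names, test contents, and the use of pytest are assumptions for illustration only.

```python
import subprocess
import tempfile
from pathlib import Path

# Hypothetical instructor-provided tests for a toy exercise; the module
# name "submission" and the tested function "add" are illustrative.
TEST_CODE = """
from submission import add

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
"""

def evaluate_submission(student_code: str, time_limit: int = 5) -> bool:
    """Run reference unit tests against a submitted solution in an
    isolated temporary directory and report pass/fail."""
    with tempfile.TemporaryDirectory() as workdir:
        Path(workdir, "submission.py").write_text(student_code)
        Path(workdir, "test_submission.py").write_text(TEST_CODE)
        try:
            # Assumes pytest is installed in the grading environment.
            result = subprocess.run(
                ["python", "-m", "pytest", "-q", "test_submission.py"],
                cwd=workdir,
                capture_output=True,
                timeout=time_limit,  # guard against non-terminating submissions
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0

if __name__ == "__main__":
    print(evaluate_submission("def add(a, b):\n    return a + b\n"))
```

A production platform of the kind discussed in the paper would additionally need sandboxing, resource limits, and reporting back to students, which this sketch deliberately omits.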