[workpaper] Entwicklungspapier E-Assessment an der TU Graz #eAssessment

To address the topic of e-assessment at TU Graz in a structured way, we wrote a report last year, which we are happy to make available under an open license:

Due to the COVID-19 pandemic and the resulting closure of the universities, however, almost all teachers at TU Graz were forced to move their examinations into the virtual space from the 2020 summer semester onwards. This development paper is intended, on the one hand, to reflect the current state of research and, on the other hand, to give an overview of which strategies and tools are used at other (technical) universities internationally and nationally. It also shows which types of assessment have already been used at TU Graz and which tools are in use for them. Finally, it discusses the criteria to be considered and possible developments in the field of e-assessment.

[Entwicklungspapier @ ResearchGate]
[Entwicklungspapier @ TU Graz]

Citation: Edelsbrunner, S., Hohla, K., & Ebner, M. (2022). Entwicklungspapier E-Assessment an der TU Graz. Graz University of Technology. https://doi.org/10.3217/s2rpc-x5g66

[publication] Concepts for E-Assessments in STEM on the Example of Engineering Mechanics #eAssessment #STEMeducation #TEL

Our publication "Concepts for E-Assessments in STEM on the Example of Engineering Mechanics" has been published in the International Journal of Emerging Technologies in Learning (i-JET).

Abstract:
We discuss if and how it is possible to develop meaningful e-assessments in Engineering Mechanics. The focus is on complex example problems, resembling traditional paper-pencil exams. Moreover, the switch to e-assessments should be as transparent as possible for the students, i.e., it shouldn't lead to additional difficulties, while still maintaining sufficiently high discrimination indices for all questions. Example problems have been designed in such a way that it is possible to account for a great variety of inputs, ranging from graphical to numerical and algebraic as well as string input types. Thanks to the implementation of random variables it is even possible to create an individual set of initial values for every participant. Additionally, when dealing with complex example problems, errors carried forward have to be taken into account. Different approaches to do so are detailed and discussed, e.g., pre-defined paths for sub-questions, usage of students' previous inputs or decision trees. The main finding is that complex example problems in Engineering Mechanics can very well be used in e-assessments if the design of these questions is well structured into meaningful sub-questions and errors carried forward are accounted for.
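Two of the ideas from the abstract, individual random initial values per participant and grading that carries errors forward by using the student's previous input, could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; all names, the toy formula M = F·L and the tolerance are assumptions.

```python
import random

def make_problem(seed):
    """Generate an individual set of initial values per participant.
    The seed (e.g. derived from the student ID) makes the values
    reproducible for re-grading."""
    rng = random.Random(seed)
    force = rng.choice([10.0, 12.5, 15.0])   # F in kN
    length = rng.choice([2.0, 2.5, 3.0])     # L in m
    return {"F": force, "L": length}

def grade_subquestion(student_moment, previous_force, length, tol=1e-2):
    """Error-carried-forward grading: the expected moment is recomputed
    from the student's *previous* (possibly wrong) force input, so one
    early mistake is not penalized again in later sub-questions."""
    expected = previous_force * length  # toy relation M = F * L
    return abs(student_moment - expected) <= tol * max(1.0, abs(expected))
```

The same pattern extends to pre-defined paths or decision trees by choosing which expected value to recompute based on the student's earlier answers.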

[article @ ResearchGate]
[article @ Journal’s Homepage]

Reference: Orthaber, M., Stütz, D., Antretter, T., & Ebner, M. (2020). Concepts for E-Assessments in STEM on the Example of Engineering Mechanics. International Journal of Emerging Technologies in Learning (i-JET), 15(12), pp. 136-152.

[talk] Digital Tests, Examinations and Assessments – Chances and Challenges #fnma #webinar #eassessment

The next fnma talk, on "Digital Tests, Examinations and Assessments – Chances and Challenges", takes place on 6 July 2020 at 11:00. As always, participation is free of charge and the talk will be streamed online. Alexander Schulz will provide the input:

Alexander Schulz has coordinated the E-Learning and E-Examinations area at the Center für Digitale Systeme (CeDiS) of Freie Universität Berlin since 2017. The area includes the E-Examination Center (EEC) and, since 2019, the E-Examination Center 2 (EEC2), which, with a total of more than 330 examination workstations, are among the largest digital examination centers in the German-speaking world. He has been working on digital examinations since 2005 and is a long-standing speaker and author in this field.

fnma.at

[Link zum Webinar]

[publication] Development of a Quiz – Implementation of a (Self-) Assessment Tool and its Integration in Moodle #elearning #eassessment

Our publication "Development of a Quiz – Implementation of a (Self-) Assessment Tool and its Integration in Moodle" has been published in the new issue of the International Journal of Emerging Technologies in Learning (i-JET).

Abstract:
Technology Enhanced Learning has become more popular in recent times and many organizations and universities use it as a key instrument in various teaching and training scenarios. At Graz University of Technology, some courses require randomized quizzes where question variables can be assigned by arbitrary mathematical functions, and this feature is missing in the current solutions. This article describes the development of a quiz application that can be integrated into Moodle by utilization of the Learning Tools Interoperability protocol (LTI). The PHP application is built to support programmable questions that can contain JavaScript and HTML code. Teachers are able to build interactive, randomized quizzes, in which the random variables can be assigned with complex mathematical functions. Furthermore, the application provides a programmable grading mechanism. With this mechanism, it is possible for students to self-assess their performance, as well as for teachers to formally assess their students' learning success automatically and send the results back to Moodle (or other LTI compatible consumer applications).
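The core mechanism the abstract describes, random question variables assigned by mathematical functions plus a programmable grading function whose score is reported back to the LTI consumer, could be sketched as follows. This is a hypothetical illustration (the actual tool is a PHP application); the function names, the example formula and the tolerance are assumptions, and the real score transfer would go through the LTI Outcomes service rather than a return value.

```python
import math
import random

def build_question(seed):
    """Randomized question: variables are assigned by arbitrary
    mathematical functions, here a * sin(x) as a toy example."""
    rng = random.Random(seed)
    a = rng.uniform(1.0, 5.0)
    x = rng.uniform(0.0, math.pi)
    return {"prompt": f"Compute a*sin(x) for a={a:.2f} and x={x:.2f}",
            "solution": a * math.sin(x)}

def grade(question, answer, tol=1e-3):
    """Programmable grading: returns a score in [0, 1] that an LTI
    consumer such as Moodle could receive for the student's attempt."""
    return 1.0 if abs(answer - question["solution"]) <= tol else 0.0
```

Because the seed fully determines the variables, the same quiz definition yields an individual but reproducible variant for every student.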

[Homepage i-JET]

[Full article @ ResearchGate]
[Full article @ Journal’s Homepage]

Reference: Schweighofer, J., Taraghi, B., & Ebner, M. (2019). Development of a Quiz – Implementation of a (Self-) Assessment Tool and its Integration in Moodle. International Journal of Emerging Technologies in Learning (i-JET), 14(23), pp. 141-151.

[presentation] Mathematik am Papier – das war gestern! #eassessment #conference

As part of the E-Prüfungs-Symposium 2016 at Universität Graz, we were invited to report on the developments around the software neo-lernhilfen.at. The presentation slides can be found here:

[Slides @ slideshare.net]

[presentation] Can Confidence Assessment Enhance Traditional Multiple-Choice Testing?

My second talk at this year's ICL conference is about the use of a confidence parameter within a multiple-choice testing scenario, where I present the results of this experiment.
You will find my slides here:

Can Confidence Assessment Enhance Traditional Multiple-Choice Testing?