[publication] InfoFit and Beyond: AI Chatbots as EdTech Tools for Self-Regulated Learning in MOOCs #AIinEducation #research

Our publication „InfoFit and Beyond: AI Chatbots as EdTech Tools for Self-Regulated Learning in MOOCs“, presented at this year’s HCII conference, is now available.

Abstract:
Massive Open Online Courses (MOOCs) have become essential for the democratization of education by providing accessible learning opportunities to broad audiences. However, their asynchronous and open structure poses challenges for learning, especially in terms of maintaining engagement, making self-regulated learning (SRL) essential. This study investigates the integration of a retrieval-augmented generation (RAG) chatbot into a MOOC and uses generative AI (genAI) to enhance learners’ SRL processes. The chatbot is based on Zimmerman’s SRL framework and is tailored to the MOOC’s content, the basics of computer science. It is designed to support learners in the forethought, performance, and self-reflection phases by providing concise, context-specific responses. A mixed-method evaluation with 79 participants revealed high levels of satisfaction, with over 98% of respondents recommending the chatbot for future courses. The chatbot proved effective in supporting tasks such as summarization and concept clarification; however, its role in maintaining motivation emerged as a key area for further investigation. These findings underscore the transformative potential of AI chatbots in asynchronous learning environments, while also highlighting the importance of incorporating multimodal and motivational features to maximize educational technology (EdTech) impact.
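The abstract does not reproduce the implementation, but the core RAG idea can be sketched in a few lines. The following is an illustrative toy only: a bag-of-words similarity stands in for a real embedding model, and the course chunks, function names, and prompt wording are invented for this example.

```python
from collections import Counter
import math

def bow_vector(text):
    """Toy stand-in for an embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing keys
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Return the k course chunks most similar to the learner's question."""
    qv = bow_vector(query)
    return sorted(chunks, key=lambda c: cosine(qv, bow_vector(c)), reverse=True)[:k]

def build_prompt(query, chunks):
    """Ground the LLM answer in retrieved course material (the 'augmented' step of RAG)."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this course material:\n{context}\n\nQuestion: {query}"

course_chunks = [
    "A binary number uses only the digits 0 and 1.",
    "Sorting algorithms order elements of a list.",
    "A CPU executes machine instructions sequentially.",
]
prompt = build_prompt("How do binary numbers work?", course_chunks)
```

In a production system such as the one described in the paper, the bag-of-words step would be replaced by a vector database with learned embeddings, and the prompt would be sent to an LLM; only the retrieve-then-prompt structure is the point here.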

[full article @ publisher’s homepage]
[draft @ ResearchGate]

Reference: Brünner, B., Ebner, M. (2025). InfoFit and Beyond: AI Chatbots as EdTech Tools for Self-Regulated Learning in MOOCs. In: Smith, B.K., Borge, M. (eds) Learning and Collaboration Technologies. HCII 2025. Lecture Notes in Computer Science, vol 15807. Springer, Cham. https://doi.org/10.1007/978-3-031-93567-1_4

[publication] Digitaler Überblick statt Datenfragmentierung: Ein Studienfortschritts-Dashboard für die TU Graz

For the fnma-Magazin 02/25, we wrote a short article on „Digitaler Überblick statt Datenfragmentierung: Ein Studienfortschritts-Dashboard für die TU Graz“. The article, as well as the entire magazine, is available online.

Abstract:
Transparency, self-direction, and digital support are central factors for studyability in the digital age. In this article, we show how a data-based study progress dashboard is being developed at TU Graz together with students, one that creates orientation, eases planning, and provides new impetus for sustainable digital wellbeing across the student life cycle.

[fnma Magazin 02/25]
[article @ ResearchGate]

Reference: Brünner, B., Leitner, P., Pranter, P.-P. & Ebner, M. (2025) Digitaler Überblick statt Datenfragmentierung: Ein Studienfortschritts-Dashboard für die TU Graz, fnma-Magazin 02/25, pp. 16-19, https://www.fnma.at/content/download/3220/21290?version=2

[publication] Ensuring Quality in AI-Generated Multiple-Choice Questions for Higher Education with the QUEST Framework #AI #tugraz #research

Another research paper on the use of AI in education – „Ensuring Quality in AI-Generated Multiple-Choice Questions for Higher Education with the QUEST Framework“ – has been published.

Abstract:
With the rise of generative AI models, such as large language models (LLMs), in educational settings, there is a growing demand to ensure the quality of AI-generated multiple-choice questions (MCQs) used in higher education. Traditional quiz development methods fall short in addressing the unique challenges posed by AI-generated content, such as consistency, cognitive demand, and question uniqueness. This paper presents the QUEST framework, a structured approach designed specifically to evaluate the quality of LLM-generated MCQs across five dimensions: Quality, Uniqueness, Effort, Structure, and Transparency. Following an iterative research process, AI-generated questions were assessed and refined using QUEST, revealing that the framework effectively improves question clarity, relevance, and educational value. The findings suggest that QUEST is a viable tool for educators to maintain high-quality standards in AI-generated assessments, ensuring these resources meet the pedagogical needs of diverse learners in higher education.
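The five QUEST dimensions named in the abstract lend themselves to a simple review rubric. The sketch below is hypothetical: the paper's actual criteria, scales, and acceptance thresholds are not reproduced here, and the 0–2 scoring and the `passes` rule are illustrative assumptions.

```python
from dataclasses import dataclass

# The five QUEST dimensions from the paper; the 0-2 scale per dimension
# and the acceptance threshold below are illustrative, not the paper's.
DIMENSIONS = ("quality", "uniqueness", "effort", "structure", "transparency")

@dataclass
class MCQReview:
    question: str
    scores: dict  # dimension name -> illustrative score 0..2

    def total(self) -> int:
        """Sum the scores over all five QUEST dimensions."""
        return sum(self.scores.get(d, 0) for d in DIMENSIONS)

    def passes(self, threshold: int = 7) -> bool:
        """Illustrative decision rule: accept the AI-generated item or send it back for refinement."""
        return self.total() >= threshold

review = MCQReview(
    question="Which data structure offers O(1) average lookup by key?",
    scores={"quality": 2, "uniqueness": 1, "effort": 2, "structure": 2, "transparency": 1},
)
```

Such a rubric would support the iterative assess-and-refine loop the abstract describes: items below the threshold are revised (or regenerated) and re-scored until they pass.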

[draft @ ResearchGate]
[full article @ publisher’s website]

Reference: Ebner, M., Brünner, B., Forjan, N., Schön, S. (2025). Ensuring Quality in AI-Generated Multiple-Choice Questions for Higher Education with the QUEST Framework. In: Tomczyk, Ł. (eds) New Media Pedagogy: Research Trends, Methodological Challenges, and Successful Implementations. NMP 2024. Communications in Computer and Information Science, vol 2537. Springer, Cham. https://doi.org/10.1007/978-3-031-95627-0_20

[ijet, journal] Journal of Emerging Technologies in Learning Vol. 20 / No. 02 #ijet #research

Issue 20(02) of our journal on emerging technologies for learning has been published. As usual, enjoy the readings for free :-).

Table of Contents:

  • Evaluating User Experience in Learning Applications among University Students in Nigeria Using UEQ
  • Enhancing Language Learning Experience with Augmented Reality Games: A Systematic Review of Empirical Studies from 2019–2023
  • Clustering Students Based on Online Learning Interactions Using Social Network Analysis
  • Strengths and Challenges in Teaching and Learning in Education with the Use of Information and Communication Technologies
  • Navigating a 360-Degree Cued Virtual Classroom
  • Art Education in the Era of Artificial Intelligence: Advancing the Elimination of Technological Anxiety

[Link to Issue 20/02]

If you are interested in becoming a reviewer for the journal, please just contact me 🙂.

[publication] aicast: Combining AI-Generated and Instructor-Defined Content in Educational Podcasts #research #tugraz

At this year’s ED-Media conference, we introduced a new AI-based concept for educational podcasts: „aicast: Combining AI-Generated and Instructor-Defined Content in Educational Podcasts“.

Abstract:
Generative AI is transforming educational content. In this poster, we present aicast.fyi, an open-source platform for producing personalized educational podcasts that combine AI-generated and instructor-defined content. Each track contains dynamic and fixed segments. Instructors define guiding questions during track design, ensuring alignment with learning objectives. This hybrid approach addresses concerns about maintaining instructional integrity with automation and personalization. The poster provides an architectural overview and demonstrates how the system integrates text generation and voice synthesis to produce audio-based learning experiences. Instructional designers must play an active role in shaping AI-assisted educational tools. This poster invites educators, designers, and researchers to engage with aicast and collaborate on developing high-quality, responsible, and learner-centered audio content. By allowing learners to control content sequences and personalize segments based on their own inputs, aicast.fyi also supports self-regulated learning (SRL), empowering users to take ownership of their educational journey.
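The hybrid track structure described above (fixed, instructor-defined segments alongside dynamic, AI-generated ones driven by guiding questions) can be sketched as a data model. This is an illustrative sketch only; aicast.fyi's real schema, class names, and rendering logic are assumptions made for this example.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Segment:
    title: str
    kind: str                               # "fixed" (instructor-defined) or "dynamic" (AI-generated)
    guiding_question: Optional[str] = None  # set by the instructor at track design time

@dataclass
class Track:
    name: str
    segments: List[Segment] = field(default_factory=list)

    def render_script(self, learner_topic: str = "the lesson topic") -> str:
        """Assemble the episode script: fixed segments are used verbatim,
        while dynamic segments would be filled in by an LLM from the
        instructor's guiding question (stubbed here as a placeholder line)."""
        lines = []
        for seg in self.segments:
            if seg.kind == "fixed":
                lines.append(f"[fixed] {seg.title}")
            else:
                lines.append(f"[dynamic] {seg.guiding_question} (personalized for: {learner_topic})")
        return "\n".join(lines)

track = Track("Intro to Algorithms", [
    Segment("Welcome and course outline", "fixed"),
    Segment("Concept deep-dive", "dynamic", "What is Big-O notation and why does it matter?"),
    Segment("Closing remarks", "fixed"),
])
script = track.render_script(learner_topic="sorting algorithms")
```

Because the instructor fixes the guiding questions while learners supply the personalization input, the design keeps instructional integrity on the instructor's side and adaptivity on the learner's side, which is the hybrid split the abstract emphasizes.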

[draft @ ResearchGate]
[full article @ conference website]

Reference: Brünner, B., Leitner, P., Geier, G. & Ebner, M. (2025). aicast: Combining AI-Generated and Instructor-Defined Content in Educational Podcasts. In T. Bastiaens (Ed.), Proceedings of EdMedia + Innovate Learning (pp. 148-152). Barcelona, Spain: Association for the Advancement of Computing in Education (AACE). Retrieved June 4, 2025 from https://www.learntechlib.org/primary/p/226139/

[presentation] Implementing multilingual MOOCs in European University Alliances with the help of AI usage, LTI and open licenses: Technical & organizational challenges #tugraz #emoocs2025

We also presented a paper within the Experience Track of the EMOOCs 2025 conference in Paris. This time the presentation was on „Implementing multilingual MOOCs in European University Alliances with the help of AI usage, LTI and open licenses: Technical & organizational challenges“. Our slides are, of course, available online.

[Link to the slides]

[presentation] Synthetic Educators: Analyzing AI-Driven Avatars in Digital Learning Environments #HCII25

Our research on AI-Driven Avatars was presented at the 27th International Conference on Human-Computer Interaction in Gothenburg, Sweden.

Struger, P., Brünner, B., & Ebner, M. (2025, June 23). Presentation: Synthetic Educators: Analyzing AI-Driven Avatars in Digital Learning Environments. Graz University of Technology. https://doi.org/10.3217/bngt5-p2053

[publication] Understanding the Core of LLMs as genAI – CollectiveGPT and Human Intelligence #tugraz #workshop

At this year’s ED-Media conference, we also held a workshop on „Understanding the Core of LLMs as genAI – CollectiveGPT and Human Intelligence“. A short published summary about it is available.

Abstract:
This workshop provided an engaging and interactive exploration of Large Language Models (LLMs), with a focus on how they operate at a foundational level. Using CollectiveGPT, an educational chatbot developed by the Ed-Tech Research Community Graz, participants gained firsthand experience of the principles behind generative AI (genAI) systems such as ChatGPT. Designed for educators of all disciplines, the workshop demystified key LLM concepts such as probability-based word prediction, contextual understanding, and the role of training data. Attendees generated texts and compared their results with real-time outputs from ChatGPT, highlighting the differences between human reasoning and AI prediction. Key topics included prompting techniques, context framing, training bias, misinformation, and ethical considerations in AI use. Participants explored system prompt injection techniques and developed advanced prompting skills to optimize responses from AI systems. This workshop empowered educators with the knowledge to critically evaluate and responsibly use AI tools in their teaching. By fostering AI literacy, attendees gained a clear understanding of how LLMs work and how they can be leveraged to enhance learning experiences across diverse educational settings.
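The "probability-based word prediction" idea at the heart of the workshop can be demonstrated with a toy bigram model. To be clear, this is not how CollectiveGPT or ChatGPT work internally (they use neural networks over subword tokens); the tiny corpus below only illustrates the shared principle of predicting the next word from observed frequencies.

```python
from collections import Counter, defaultdict

# Tiny training corpus; all data here is invented for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most probable next word and its estimated probability."""
    counts = bigrams[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, p = predict("the")  # "the" is followed by cat (2x), mat (1x), fish (1x)
```

Here `predict("the")` returns `"cat"` with probability 0.5, because "cat" follows "the" in two of the four observed cases; scaling this frequency-counting intuition up to billions of parameters and contexts is the conceptual leap the workshop aims to make tangible.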

[publication @ conference website]
[draft @ ResearchGate]

Reference: Brünner, B. & Ebner, M. (2025). Understanding the Core of LLMs as genAI – CollectiveGPT and Human Intelligence. In T. Bastiaens (Ed.), Proceedings of EdMedia + Innovate Learning (pp. 153-154). Barcelona, Spain: Association for the Advancement of Computing in Education (AACE). Retrieved June 14, 2025 from https://www.learntechlib.org/primary/p/226329/.