Please use this identifier to cite or link to this item:
http://hdl.handle.net/1942/46328
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | THYS, Jarne | - |
dc.contributor.author | VANACKEN, Davy | - |
dc.contributor.author | ROVELO RUIZ, Gustavo | - |
dc.date.accessioned | 2025-07-01T08:36:03Z | - |
dc.date.available | 2025-07-01T08:36:03Z | - |
dc.date.issued | 2025 | - |
dc.date.submitted | 2025-06-16T08:30:53Z | - |
dc.identifier.citation | 3rd Workshop on Engineering Interactive Systems Embedding AI Technologies, Trier, Germany, 2025, June 24 | - |
dc.identifier.uri | http://hdl.handle.net/1942/46328 | - |
dc.description.abstract | LLMs have rapidly evolved into versatile "foundation models", repurposed (despite persistent gaps in reliability) for a variety of tasks such as legal document summarization, medical question answering, and text classification. In this paper, we propose an approach to engineer better text classification solutions for educational grading. We address this challenge with a solution that couples (i) a transformer cascade for rubric-level prediction with (ii) a transparent, traffic-light feedback interface powered by a Mixture-of-Agents LLM system. We compared our approach to a standard LLM and a single-transformer architecture on the ASAG dataset. Results show that, compared to a single transformer, our approach increases recall for incorrect answers by more than 50% and precision on fully correct answers by 20%. Finally, we describe a prototype that implements our approach as an end-to-end, minimally intrusive solution for semi-automatic grading, allowing teaching staff to review and revise the feedback that the Mixture-of-Agents LLM system generates from the grade classification. | - |
dc.description.sponsorship | This work was supported by the Special Research Fund (BOF) of Hasselt University (BOF24OWB28). This research was made possible with support from the MAXVR-INFRA project, a scalable and flexible infrastructure that facilitates the transition to digital-physical work environments. The MAXVR-INFRA project is funded by the European Union - NextGenerationEU and the Flemish Government. The authors would like to thank Ruben Swidzinski for providing us with Figure 3. | - |
dc.language.iso | en | - |
dc.subject | Computer Science - Human-Computer Interaction | - |
dc.subject | Computer Science - Artificial Intelligence | - |
dc.subject.other | Cascade Models | - |
dc.subject.other | AI-Augmented Workflows | - |
dc.subject.other | Automated Grading Systems | - |
dc.subject.other | AI Text Classification | - |
dc.title | Improving AI Text Classification: A Cascaded Approach | - |
dc.type | Conference Material | - |
local.bibliographicCitation.conferencedate | 2025, June 24 | - |
local.bibliographicCitation.conferencename | 3rd Workshop on Engineering Interactive Systems Embedding AI Technologies | - |
local.bibliographicCitation.conferenceplace | Trier, Germany | - |
local.bibliographicCitation.jcat | C2 | - |
local.type.refereed | Refereed | - |
local.type.specified | Conference Material | - |
local.bibliographicCitation.status | In press | - |
local.provider.type | - | - |
local.uhasselt.international | no | - |
item.fulltext | With Fulltext | - |
item.fullcitation | THYS, Jarne; VANACKEN, Davy & ROVELO RUIZ, Gustavo (2025) Improving AI Text Classification: A Cascaded Approach. In: 3rd Workshop on Engineering Interactive Systems Embedding AI Technologies, Trier, Germany, 2025, June 24. | - |
item.contributor | THYS, Jarne | - |
item.contributor | VANACKEN, Davy | - |
item.contributor | ROVELO RUIZ, Gustavo | - |
item.accessRights | Restricted Access | - |
Appears in Collections: | Research publications |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
EICS_workshop_2025_AI_assisted_grading.pdf (Restricted Access) | Conference material | 1.17 MB | Adobe PDF |
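The abstract describes the cascaded classification idea only at a high level, and this record carries no code. Purely as an illustration, the sketch below shows one way a two-stage transformer cascade for short-answer grading could be wired up with Hugging Face `transformers` pipelines. The checkpoint names (`stage1-incorrect-detector`, `stage2-partial-vs-full`), the label set, the input format, and the 0.5 threshold are all hypothetical assumptions, not artifacts of the paper.

```python
# Hypothetical sketch of a two-stage transformer cascade for short-answer
# grading, in the spirit of the abstract above. Checkpoint names, labels,
# and the threshold are illustrative assumptions, not the authors' models.
from transformers import pipeline

# Stage 1: a binary classifier assumed to be tuned for high recall on
# incorrect answers (the "red" cases the abstract emphasizes).
stage1 = pipeline("text-classification", model="stage1-incorrect-detector")  # hypothetical checkpoint

# Stage 2: invoked only when stage 1 does not flag the answer as incorrect;
# assumed to separate partially correct from fully correct answers.
stage2 = pipeline("text-classification", model="stage2-partial-vs-full")  # hypothetical checkpoint


def grade(question: str, reference: str, answer: str) -> str:
    """Return a traffic-light label: 'red', 'yellow', or 'green'."""
    # Assumed input encoding: question, reference answer, and student
    # answer concatenated with separator tokens.
    text = f"{question} [SEP] {reference} [SEP] {answer}"

    first = stage1(text)[0]
    if first["label"] == "INCORRECT" and first["score"] >= 0.5:
        return "red"  # incorrect: would be routed to feedback generation

    second = stage2(text)[0]
    return "green" if second["label"] == "FULLY_CORRECT" else "yellow"
```

The cascade shape mirrors the reported trade-off: a first stage specialized for catching incorrect answers can favor recall on that class, while a second, narrower classifier handles the remaining partial-versus-full distinction.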