Project

Automatic text analysis in comparative judgments: increasing the efficiency of judgments and the quality of feedback

Comparative judgment (CJ) is an alternative, innovative assessment method in which writing products are assessed reliably and validly through repeated pairwise comparisons by multiple assessors. However, two scientific challenges still hinder the efficient application of this assessment method. The first is the cold-start problem: at the start of the assessment process there is no information about the quality of the products, so the selection algorithm can only pair them at random. The PhD candidate investigates how automatic text analysis (text mining) can solve the cold-start problem in product scoring and pair selection. The second limitation of CJ is the limited informational value of its scores: they only reflect how much better (or worse) a particular product is than the others, not why. Giving feedback in CJ is therefore still very time-consuming and breaks the flow of the assessment process. Automatic text analysis techniques could be used to automate feedback within CJ. The PhD candidate will investigate which text features best explain the outcomes of the comparisons and can thus be used to automate feedback in CJ for students and assessors.
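To make the two challenges concrete, the sketch below (Python, illustrative only) shows one common way CJ scores are derived and one naive way a text feature could seed the cold start. CJ comparisons are often scored with a Bradley-Terry model, although the abstract does not specify which model the project uses; the `type_token_ratio` feature and all function names here are hypothetical placeholders for the richer text-mining features the PhD will investigate.

```python
# Minimal, self-contained sketch. Assumes a Bradley-Terry scoring model,
# which is commonly used for CJ; the project's actual model and text
# features are not specified in the abstract.

def bradley_terry(wins, products, iters=200):
    """Estimate product strengths with the standard MM updates.
    wins[(a, b)] = number of comparisons in which a beat b."""
    strength = {p: 1.0 for p in products}
    for _ in range(iters):
        updated = {}
        for i in products:
            w_i = sum(wins.get((i, j), 0) for j in products if j != i)
            denom = 0.0
            for j in products:
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    denom += n_ij / (strength[i] + strength[j])
            updated[i] = w_i / denom if denom else strength[i]
        total = sum(updated.values())
        # Normalise so the strengths stay on a comparable scale.
        strength = {p: v * len(products) / total for p, v in updated.items()}
    return strength

def type_token_ratio(text):
    """Toy stand-in for a text-mining quality predictor."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def cold_start_pairs(texts):
    """Instead of random first-round pairs, rank products by predicted
    quality and pair neighbours, so early comparisons are between
    products of similar estimated quality."""
    ranked = sorted(texts, key=lambda p: type_token_ratio(texts[p]))
    return list(zip(ranked, ranked[1:]))

if __name__ == "__main__":
    texts = {
        "A": "the cat sat on the mat the cat sat",
        "B": "a concise and well argued essay with varied vocabulary",
        "C": "words words words words words",
    }
    print(cold_start_pairs(texts))  # seeded first-round pairs
    wins = {("B", "A"): 3, ("A", "C"): 2, ("B", "C"): 4, ("C", "A"): 1}
    print(bradley_terry(wins, ["A", "B", "C"]))
```

Pairing products of similar predicted quality is only one plausible seeding strategy; the same feature-based predictions could also inform adaptive pair selection later in the process, and features that turn out to correlate with comparison outcomes are the raw material for the automated feedback the project aims at.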

Date: 1 Oct 2022 → Today
Keywords: Comparative judgments, Text Mining, Automatic text analysis, Natural Language Processing, Computational Linguistics, Educational Technology, Psychometrics, Artificial Intelligence
Disciplines: Psychometrics, Natural language processing, Educational technology, Computational linguistics
Project type: PhD project