iTELL Research

iTELL is driven by evidence-based studies and peer-reviewed academic research that advance innovation in education technology and learning methodologies.

Featured Research

Explore our latest peer-reviewed studies and academic publications

Automated Scoring of Constructed Response Items in Math Assessment Using Large Language Models
32 min read
Learning Technology

This study details a winning approach for automatically scoring math test responses using Large Language Models (LLMs). By balancing the training data and customizing the input for each question, the method achieved near-human levels of agreement with human scorers on nine out of ten test items.

By: Wesley Morris, Langdon Holmes, Joon Suh Choi, Scott Crossley
Read Study
Focus Time and Writing Performance in an Intelligent Textbook
32 min read
Focus Time

This study found that the more time students spend focused on an "intelligent textbook," the better they perform on subsequent writing assessments. This suggests that tracking reading focus time can be a useful metric for personalizing learning and understanding reading comprehension.

By: Joon Suh Choi, Wesley Morris, Langdon Holmes, Scott Crossley
Read Study
Using Intelligent Texts in a Computer Science Classroom: Findings from an iTELL Deployment
40 min read
AI Research

This study found that an "intelligent text" system, which uses LLMs to provide feedback on student responses, helped computer science students learn more effectively than traditional texts. Students using the system showed greater learning gains, which were predicted by factors such as their level of interaction and their performance on the system's exercises.

By: Scott Crossley, Joon Suh Choi, Wesley Morris, Langdon Holmes, David Joyner, Vaibhav Gupta
Read Study
Automatic Question Generation and Constructed Response Scoring in Intelligent Texts
38 min read
AI Research

This study developed a pipeline using large language models (LLMs) to automatically generate questions and score student-written answers for "intelligent texts." The authors found that GPT-3.5 effectively generated questions while other models scored the answers, and participants in a trial reported a positive experience with the system.

By: Wesley Morris, Joon Suh Choi, Langdon Holmes, Vaibhav Gupta, Scott Crossley
Read Study
iScore: Visual Analytics for Interpreting How Language Models Automatically Score Summaries
42 min read
UX Research

This study describes the creation of iScore, a visual analytics tool that helps learning engineers understand, evaluate, and improve the large language models (LLMs) they use to automatically score student summaries. By allowing engineers to visualize and interact with the models, the tool helped improve score accuracy and build trust in the AI systems.

By: Adam Coscia, Langdon Holmes, Wesley Morris, Joon Suh Choi, Scott Crossley, Alex Endert
Read Study
AI Enhanced Intelligent Texts and Learning Gains
38 min read
Learning Science

This study found that an interactive, AI-enhanced "intelligent textbook" led to greater learning gains for students in an introductory computer science class compared to a standard digital textbook. Higher-performing students, in particular, scored higher on a post-test after using the intelligent text.

By: Scott Crossley, Joon Suh Choi, Wesley Morris, Langdon Holmes, David Joyner
Read Study
Exploratory Assessment of Learning in an Intelligent Text Framework: iTELL RCT
45 min read
Learning Science

This study compared three types of digital texts and found that an interactive version with AI feedback led to the best results, including better-written summaries and more successful revisions. While users were satisfied with all versions, those in the interactive condition who spent more time reading showed stronger overall learning gains.

By: Scott Crossley, Wesley Morris, Joon Suh Choi, Langdon Holmes
Read Study

All Research Publications

32 min read
Learning Technology

Automated Scoring of Constructed Response Items in Math Assessment Using Large Language Models

This study details a winning approach for automatically scoring math test responses using Large Language Models (LLMs). By balancing the training data and customizing the input for each question, the method achieved near-human levels of agreement with human scorers on nine out of ten test items.

Authors: Wesley Morris, Langdon Holmes, Joon Suh Choi, Scott Crossley
Published in: Artificial Intelligence in Education
Read Study
32 min read
Focus Time

Focus Time and Writing Performance in an Intelligent Textbook

This study found that the more time students spend focused on an "intelligent textbook," the better they perform on subsequent writing assessments. This suggests that tracking reading focus time can be a useful metric for personalizing learning and understanding reading comprehension.

Authors: Joon Suh Choi, Wesley Morris, Langdon Holmes, Scott Crossley
Published in: Proceedings of 18th Educational Data Mining in Computer Science Education
Read Study
40 min read
AI Research

Using Intelligent Texts in a Computer Science Classroom: Findings from an iTELL Deployment

This study found that an "intelligent text" system, which uses LLMs to provide feedback on student responses, helped computer science students learn more effectively than traditional texts. Students using the system showed greater learning gains, which were predicted by factors such as their level of interaction and their performance on the system's exercises.

Authors: Scott Crossley, Joon Suh Choi, Wesley Morris, Langdon Holmes, David Joyner, Vaibhav Gupta
Published in: Proceedings of the 17th International Conference on Educational Data Mining
Read Study
38 min read
AI Research

Automatic Question Generation and Constructed Response Scoring in Intelligent Texts

This study developed a pipeline using large language models (LLMs) to automatically generate questions and score student-written answers for "intelligent texts." The authors found that GPT-3.5 effectively generated questions while other models scored the answers, and participants in a trial reported a positive experience with the system.

Authors: Wesley Morris, Joon Suh Choi, Langdon Holmes, Vaibhav Gupta, Scott Crossley
Published in: Proceedings of the 17th International Conference on Educational Data Mining
Read Study
39 min read
AI Research

Formative Feedback on Student-Authored Summaries in Intelligent Textbooks Using Large Language Models

This study created AI models to automatically evaluate the content and wording of student-written summaries for intelligent textbooks. The models explain a high degree of score variance and can provide real-time formative feedback to learners.

Authors: Wesley Morris, Scott Crossley, Langdon Holmes, Chaohua Ou, Mihai Dascalu, Danielle McNamara
Published in: International Journal of Artificial Intelligence in Education
Read Study
42 min read
UX Research

iScore: Visual Analytics for Interpreting How Language Models Automatically Score Summaries

This study describes the creation of iScore, a visual analytics tool that helps learning engineers understand, evaluate, and improve the large language models (LLMs) they use to automatically score student summaries. By allowing engineers to visualize and interact with the models, the tool helped improve score accuracy and build trust in the AI systems.

Authors: Adam Coscia, Langdon Holmes, Wesley Morris, Joon Suh Choi, Scott Crossley, Alex Endert
Published in: Proceedings of the 29th International Conference on Intelligent User Interfaces
Read Study
38 min read
Learning Science

AI Enhanced Intelligent Texts and Learning Gains

This study found that an interactive, AI-enhanced "intelligent textbook" led to greater learning gains for students in an introductory computer science class compared to a standard digital textbook. Higher-performing students, in particular, scored higher on a post-test after using the intelligent text.

Authors: Scott Crossley, Joon Suh Choi, Wesley Morris, Langdon Holmes, David Joyner
Published in: Proceedings of the Sixth International Workshop on Intelligent Textbooks 2025
Read Study
45 min read
Learning Science

Exploratory Assessment of Learning in an Intelligent Text Framework: iTELL RCT

This study compared three types of digital texts and found that an interactive version with AI feedback led to the best results, including better-written summaries and more successful revisions. While users were satisfied with all versions, those in the interactive condition who spent more time reading showed stronger overall learning gains.

Authors: Scott Crossley, Wesley Morris, Joon Suh Choi, Langdon Holmes
Published in: L@S '25: Proceedings of the Twelfth ACM Conference on Learning @ Scale
Read Study