Objective data from Dr. Mustafa Akben's business courses at Elon University: 88% voluntary adoption, 94% exam averages among regular users, and the strongest correlation with exam performance of any study method measured (r=0.32).
Dr. Mustafa Akben
Director of AI Integration, Elon University
In spring 2026, Elon University deployed TUEL AI across multiple undergraduate business courses taught by Dr. Mustafa Akben. The AI tutor was grounded in course-specific materials and made available as an optional study resource. This post reports the objective data collected across those classes over a full semester.
The results are drawn from student usage logs, closed-book exam scores, and a study habits survey administered at the end of the term. All data was collected under standard institutional research protocols.
Mustafa Akben is Director of AI Integration and Assistant Professor of Management at Elon University. He holds a PhD from Temple University and has published in the Journal of Applied Psychology. In 2025, Poets&Quants named him one of the Best Undergraduate Business Professors.
His research focuses on applied AI in higher education. He needed an AI tutor that could answer student questions using his own course materials, not generic web content. The TUEL-powered Elon AI tutor met that requirement by grounding every response in uploaded textbooks, lecture slides, and supplementary readings.
88% of students voluntarily used the AI tutor during the semester. Usage was never required or incentivized with extra credit. Onboarding framed the tool as a learning aid rather than an answer generator, with clear instructions on how to use it that way, and students adopted it on their own.
Across 202 users, the platform logged thousands of sessions. Peak usage occurred in the 48 hours before each exam, consistent with students treating the AI tutor as a targeted review tool.
"What stood out to me most is that 88% of my students voluntarily used the Elon AI Tutor as part of their study routine." -- Dr. Mustafa Akben, Director of AI Integration, Elon University
Students who engaged with the AI tutor for 1-3 hours per week averaged 94% on closed-book exams. These were standard course exams, not modified for the study. The AI tutor showed the strongest correlation with exam performance (r=0.32) compared to all other study methods measured in the survey.
A correlation of r=0.32 is a medium effect size in educational research. In a field where interventions often produce small effects (r=0.10-0.20), this number is worth attention. It means the AI tutor had a measurable, positive association with how students performed under test conditions.
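For readers who want a refresher on what r actually measures, here is a minimal Pearson correlation computation in Python. The `hours` and `scores` arrays are made-up illustrative numbers, not the Elon dataset:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired arrays."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Illustrative (synthetic) data: weekly tutor hours vs. exam scores.
hours  = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 0.0, 1.0, 2.0, 3.0]
scores = [82, 88, 85, 91, 90, 95, 80, 84, 93, 92]

print(f"r = {pearson_r(hours, scores):.2f}")
```

The same formula underlies the r=0.32 figure: it captures how consistently higher usage lines up with higher scores across students, on a scale from -1 to 1.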
The end-of-semester survey measured correlations between six study methods and closed-book exam scores. AI tutor usage had the highest individual correlation with exam performance.
Correlation between study method and exam score:
The negative correlation for textbook review (r=-0.32) does not mean reading the textbook harms performance. It likely reflects that students who relied heavily on passive re-reading as their primary strategy performed worse than students who used active methods. The regression model (R²=0.37) confirmed these patterns held when controlling for the combined effect of all study methods.
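To illustrate what an R² of this kind summarizes, here is a sketch of an ordinary-least-squares fit across several predictors at once. The data, the six-predictor setup, and the coefficients are synthetic stand-ins, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # synthetic sample; the study's actual n differs

# Hypothetical predictors: hours/week spent on six study methods.
X = rng.uniform(0, 3, size=(n, 6))
# Hypothetical exam scores driven partly by the first method plus noise.
y = 80 + 4 * X[:, 0] + rng.normal(0, 3, size=n)

# Add an intercept column and fit ordinary least squares.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: share of exam-score variance explained by all methods jointly.
resid = y - A @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(f"R^2 = {r2:.2f}")
```

The point of a model like this is the "controlling for" step: each method's coefficient reflects its association with scores after accounting for the other five, which is how a negative raw correlation for passive re-reading can coexist with an overall positive model.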
The study compared two groups. Top performers (n=11) averaged 94% on exams. Lower performers (n=6) averaged 84%. Both groups passed, but their study habits differed in observable ways.
How the two groups differed:
The gap between 94% and 84% is a full letter grade. The distinguishing factor was not time spent studying but how students studied. Active methods, led by AI tutoring combined with practice exams, outperformed passive methods.
A common concern about AI in education is that students will use it to bypass learning. The data from Elon tells a different story. Students used the AI tutor as a supplement to traditional methods, not a replacement for them. The highest-performing students combined AI tutoring with practice exams and study groups.
The tutor's design reinforced this pattern. It used a Socratic method, guiding students through reasoning rather than providing direct answers. When a student asked "What is the answer to question 5?", the tutor responded with clarifying questions that led the student to work through the concept. This approach turned each session into active learning rather than passive consumption.
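One way to picture the Socratic redirect described above is a simple guard placed in front of the model. This is a toy sketch of the design principle only; the `socratic_reply` function and its pattern are hypothetical, not TUEL's implementation:

```python
import re

# Detect direct answer requests ("what is the answer to question 5")
# and redirect with a guiding question instead of an answer.
ANSWER_REQUEST = re.compile(
    r"\b(answer|solution)\b.*\b(question|problem)\s*\d+", re.IGNORECASE
)

def socratic_reply(student_message: str) -> str:
    if ANSWER_REQUEST.search(student_message):
        return ("Let's work through it together. Which concept from the "
                "course materials do you think this question is testing?")
    return "PASS_TO_MODEL"  # hand other messages to the grounded tutor

print(socratic_reply("What is the answer to question 5?"))
```

A guard like this keeps the interaction in active-learning territory even when a student's first instinct is to ask for the answer outright.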
Pairing AI tutor usage with repeated practice exams produced the highest results in the dataset. The combination makes intuitive sense: the tutor helps students identify gaps in understanding, and practice exams let them test whether they have closed those gaps.
Several technical and pedagogical choices separated this deployment from generic AI chatbot experiments on other campuses.
Design decisions that shaped outcomes:
The course-grounding requirement is worth emphasizing. Every AI response drew from materials that Dr. Akben had uploaded and approved. Students could not receive information that contradicted the course curriculum. Faculty could verify any response by checking the cited source. This eliminated the hallucination risk that has made other institutions hesitant to adopt AI tutoring.
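The course-grounding constraint can be sketched as retrieval restricted to an instructor-approved corpus, with every returned passage carrying its source ID so a faculty member can check it. The `course_materials` dictionary and `retrieve` function below are hypothetical illustrations, not TUEL's actual pipeline:

```python
# Answers may only draw on instructor-uploaded passages, and every
# passage is returned alongside the source a reader can verify.
course_materials = {
    "textbook-ch3": "Net present value discounts future cash flows to today.",
    "lecture-05":   "A medium correlation is notable in education research.",
}

def retrieve(question: str) -> list[tuple[str, str]]:
    """Naive keyword-overlap retrieval over approved materials only."""
    q_words = set(question.lower().split())
    scored = []
    for source, text in course_materials.items():
        overlap = len(q_words & set(text.lower().split()))
        if overlap:
            scored.append((overlap, source, text))
    scored.sort(reverse=True)
    return [(src, txt) for _, src, txt in scored]

for source, passage in retrieve("How do you discount future cash flows?"):
    print(f"[{source}] {passage}")
```

Because the candidate pool is limited to uploaded materials, an answer can never cite a source the instructor has not approved; a production system would use embedding-based retrieval rather than keyword overlap, but the verification property is the same.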
Elon University is expanding the AI tutor to additional courses and departments for the fall 2026 semester. Dr. Akben is preparing a formal research publication based on the data collected during the spring deployment. The study will include a more detailed statistical analysis of learning outcomes across student demographics.
The TUEL platform that powers Elon AI is available to other institutions. Schools interested in running a similar pilot can start with a single course and scale based on their own data. Visit /pricing for pilot program details.
The Elon University data provides one of the first controlled looks at AI tutoring outcomes in higher education. 88% voluntary adoption, a medium-effect correlation with exam performance (r=0.32), and a clear pattern of active learning over passive study methods. These numbers come from real courses with real stakes, not a lab experiment.
For the full case study, including implementation details and faculty perspectives, visit /case-studies/elon-university. To explore how TUEL AI can support a pilot at your institution, see /pricing or request a demo.
Schedule a demo to see verified AI for learning in action—with your own course materials.