Watson Tutor creates a learning experience built around natural-language dialog. The conversation with the student is Socratic: the tutor guides the student through concepts via dialogue moves, which can include questions, hints, and other prompts.
The tutor implements an expectation-misconception discourse model of tutoring, consisting of a set of anticipated correct answers (expectations) and a set of invalid answers frequently expressed by students (misconceptions). A given main question might have several expectations (parts of the ideal answer) and misconceptions. The tutor begins with a deep-reasoning question and then provides hints to guide the student toward a response that matches each expectation.
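To make the expectation-misconception structure concrete, the sketch below shows one possible representation of a main question with its expectations, misconceptions, and hint cycle. This is an illustrative assumption, not IBM's actual data model; the question text, hints, and field names are invented.

```python
# Hypothetical sketch (not the tutor's actual implementation): a main question
# holds expectations (parts of the ideal answer) and misconceptions, and the
# hint cycle targets the first expectation the student has not yet covered.
from dataclasses import dataclass, field

@dataclass
class Expectation:
    text: str                 # one part of the ideal answer
    hints: list               # hints of increasing directness
    covered: bool = False     # set once the student expresses this idea

@dataclass
class MainQuestion:
    prompt: str                                         # the deep-reasoning question
    expectations: list = field(default_factory=list)
    misconceptions: list = field(default_factory=list)  # anticipated invalid answers

    def next_hint(self):
        """Return the next hint for the first uncovered expectation, or None."""
        for exp in self.expectations:
            if not exp.covered:
                return exp.hints[0] if exp.hints else exp.text
        return None  # all expectations covered; the main question is done

# Invented example content for illustration only.
q = MainQuestion(
    prompt="Why does an infant's brain develop so rapidly in the first year?",
    expectations=[
        Expectation("Synaptic connections multiply with stimulation",
                    hints=["What happens to neural connections when a baby is stimulated?"]),
    ],
    misconceptions=["The brain is fully formed at birth"],
)
print(q.next_hint())
```

Once every expectation is marked covered, `next_hint` returns `None` and the dialogue can move to the next main question.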
In an expectation-misconception tailored dialogue, student responses need to be classified to update the system's estimates of student knowledge and to drive the dialogue forward. The student response analyzer (SRA) classifies each student response into three categories: correct, incorrect, and partial. It also analyzes the gap between the expected answer and the student response. Since the SRA evaluates the student response in the context of the current dialogue, understanding and analyzing student responses involves challenges such as answer granularity mismatch, answer specificity mismatch, grammatical inaccuracies, and differences in vocabulary and presentation of the knowledge. As part of feature extraction, the SRA identifies important parts of the assertions using syntactic and semantic processing of sentences. A set of nonlinear classifiers is learned from labeled training data collected in similar dialogue settings as well as from domain-general corpora. Since different features and classification models capture different aspects of semantics, the final classification is obtained as an ensemble.
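The SRA's actual features and models are not detailed here, so the following is only a toy illustration of the ensemble idea: several stand-in "classifiers" each map a (response, expectation) pair to one of the three labels, and a majority vote produces the final classification, falling back to "partial" on ties. All thresholds and heuristics below are invented for illustration.

```python
# Illustrative sketch only: toy stand-ins for the SRA's learned classifiers,
# combined by majority vote. Real features would come from syntactic and
# semantic processing; these crude lexical proxies are assumptions.
from collections import Counter

def overlap_classifier(response, expectation):
    # Lexical overlap: a crude proxy for the vocabulary-difference challenge.
    r, e = set(response.lower().split()), set(expectation.lower().split())
    ratio = len(r & e) / max(len(e), 1)
    return "correct" if ratio > 0.7 else "partial" if ratio > 0.3 else "incorrect"

def length_classifier(response, expectation):
    # Granularity proxy: a very short answer likely covers only part of the idea.
    ratio = len(response.split()) / max(len(expectation.split()), 1)
    return "correct" if ratio > 0.8 else "partial" if ratio > 0.4 else "incorrect"

def ensemble(response, expectation, classifiers):
    """Majority vote over the classifiers; ties resolve to 'partial'."""
    votes = Counter(c(response, expectation) for c in classifiers)
    top, count = votes.most_common(1)[0]
    if list(votes.values()).count(count) > 1:
        return "partial"  # mixed evidence: treat as partially correct
    return top

label = ensemble("connections grow with stimulation",
                 "synaptic connections multiply with stimulation",
                 [overlap_classifier, length_classifier])
print(label)  # -> partial
```

In the real system the component models are trained nonlinear classifiers rather than hand-written rules, but the combination step is the same in spirit: disagreeing models are reconciled into a single three-way label.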
The learner model developed in the tutor estimates a student's degree of mastery of each topic. It borrows a technique used to estimate player ranks in competitive games: each assessment is assigned its own mastery rating, as if it were a player. We considered a student correctly answering an assertion a “win” against that assertion, an incorrect answer a “loss,” and a partial answer a “draw,” updating both the student's and the question's mastery ratings accordingly. This enabled the model to converge quickly to reasonable estimates of learner skill and question difficulty without requiring large datasets.
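The text describes the rating technique only generically; a classic Elo-style update with draw support is one standard instantiation, sketched below. The K-factor, base rating, and scale are conventional chess values, assumed here for illustration.

```python
# Elo-style mastery update (an assumption: one standard rating scheme that
# matches the win/loss/draw description; the tutor's exact variant may differ).
def expected_score(r_student, r_question):
    """Probability the student 'beats' (correctly answers) the question."""
    return 1.0 / (1.0 + 10 ** ((r_question - r_student) / 400.0))

def update(r_student, r_question, outcome, k=32.0):
    """outcome: 1.0 = correct (win), 0.5 = partial (draw), 0.0 = incorrect (loss)."""
    e = expected_score(r_student, r_question)
    delta = k * (outcome - e)
    # Zero-sum update: the question's rating moves opposite to the student's,
    # so question difficulty and student skill are estimated jointly.
    return r_student + delta, r_question - delta

s, q = 1200.0, 1200.0      # both start at a conventional baseline
s, q = update(s, q, 1.0)   # a correct answer raises the student, lowers the question
print(round(s), round(q))  # -> 1216 1184
```

Because each answer immediately shifts both ratings toward consistency with the observed outcome, the estimates stabilize after relatively few interactions, which is the fast-convergence property the text highlights.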
The tutor also provides personalized question recommendations based on the input question and the student's current mastery level, using collaborative filtering techniques.
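The recommender itself is not described in detail, so the sketch below shows one common collaborative-filtering approach that fits the description: item-item cosine similarity over per-student performance vectors, with the student's mastery level deciding whether similar (reinforcing) or dissimilar (novel) questions are surfaced first. The data, threshold, and ranking policy are all invented assumptions.

```python
# Hedged sketch of item-based collaborative filtering for question
# recommendation; the performance matrix and mastery policy are made up.
import math

# Rows: per-student observed performance (0..1) on each question.
performance = {
    "q1": [1.0, 0.5, 0.0, 1.0],
    "q2": [1.0, 0.5, 0.5, 1.0],
    "q3": [0.0, 1.0, 1.0, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(input_q, mastery, threshold=0.6):
    """Rank other questions by similarity to input_q. Low mastery: most
    similar first (reinforce); high mastery: least similar first (stretch)."""
    sims = {q: cosine(performance[input_q], v)
            for q, v in performance.items() if q != input_q}
    similar_first = mastery < threshold
    return sorted(sims, key=sims.get, reverse=similar_first)

print(recommend("q1", mastery=0.3))  # novice: reinforcing questions first
```

Questions answered similarly by the same students end up with similar performance vectors, so the ranking reflects how the student population actually experienced the questions rather than hand-authored metadata.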
Current State and Future Plans
The tutor is being developed at IBM on textbooks provided by Pearson. It was trained on the textbook “Discovering the Life Span” by R. Feldman and released as a pilot in October 2017; the pilot is being used by 255 students across 11 schools. The plan is to release it to production by the end of 2018, covering more than 10 textbooks and reaching more than one million students.