2020
Conversational Optimization of Cognitive Models
Martin Hirzel, Harold L. Ossher, David J. Piorkowski, John T. Richards
Abstract
Systems and methods to generate a cognitive model are described. A particular example of a system includes a memory including program code having an application programming interface and a user interface, and a processor configured to access the memory and to execute the program code to generate a cognitive model, to run analysis on the cognitive model to determine a factor that is impacting the performance of the cognitive model, to determine an action based on the factor, to report at least one of the factor and the action to a user, and to use the action to generate a second cognitive model.
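To make the described loop concrete, here is a minimal Python sketch of the flow the abstract outlines (generate a model, analyze a performance-limiting factor, determine and report an action, then regenerate). All class and function names, such as `generate_model`, `analyze`, and `Analysis`, are illustrative assumptions, not the patented system's API.

```python
# Minimal sketch of the conversational optimization loop; all names are
# illustrative assumptions, not the patent's actual interface.
from dataclasses import dataclass

@dataclass
class Analysis:
    factor: str    # factor impacting model performance
    action: str    # suggested remediation

def analyze(model):
    # Placeholder analysis: flag class imbalance as the limiting factor.
    return Analysis(factor="class imbalance in training data",
                    action="re-weight minority classes")

def report(analysis):
    # Report the factor and proposed action back to the user.
    print(f"Factor: {analysis.factor}\nProposed action: {analysis.action}")

def generate_model(training_config):
    # Stand-in for training a cognitive model under a given configuration.
    return {"config": dict(training_config)}

def optimize_once(training_config):
    model = generate_model(training_config)   # first cognitive model
    analysis = analyze(model)                 # find the limiting factor
    report(analysis)                          # surface factor and action
    training_config["applied_action"] = analysis.action
    return generate_model(training_config)    # second, improved model

if __name__ == "__main__":
    improved = optimize_once({"algorithm": "intent-classifier"})
    print(improved)
```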
2019
Framework of Proactive and/or Reactive Strategies for Improving Labeling Consistency and Efficiency
Evelyn Duesterwald, Austin Z. Henley, David J. Piorkowski, John T. Richards
Abstract
Performance of a computer implementing a machine learning system is improved by providing, via a graphical user interface, to an annotator, unlabeled corpus data to be labeled; obtaining, via the interface, labels for the unlabeled corpus data; and detecting, with a consistency calculation routine, concurrent with the labeling, internal inconsistency and/or external inconsistency in the labeling. Responsive to the detection, a reactive intervention subsystem intervenes in the labeling until the inconsistency is addressed. The labeling is completed subsequent to the intervention; the system is trained, based on results of the completed labeling, to provide a trained machine learning system; and classification of new data is carried out with the trained system. Proactive intervention schemes are also provided.
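As a rough illustration of the reactive path, the following Python sketch assumes that "internal inconsistency" means an annotator assigning different labels to identical items; the names `internally_inconsistent` and `reactive_intervention` are hypothetical, not the patented framework's interface.

```python
# Rough sketch under the stated assumption; not the patented framework's API.

def internally_inconsistent(labeled):
    # labeled: list of (item_text, label) pairs from one annotator.
    seen = {}
    for item, label in labeled:
        if item in seen and seen[item] != label:
            return True          # same item labeled two different ways
        seen[item] = label
    return False

def reactive_intervention(item):
    # Stand-in for pausing the labeling UI and asking the annotator
    # to resolve the conflict before continuing.
    print(f"Please re-check your label for: {item!r}")

def label_corpus(corpus, annotate):
    labeled = []
    for item in corpus:
        labeled.append((item, annotate(item)))
        if internally_inconsistent(labeled):
            reactive_intervention(item)
    return labeled

if __name__ == "__main__":
    corpus = ["reset my password", "order a pizza", "reset my password"]
    # Toy annotator that contradicts itself on the repeated item.
    answers = iter(["account", "food", "security"])
    print(label_corpus(corpus, lambda _: next(answers)))
```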
Cognitive Virtual Detector
Guillaume A. Baudart, Julian T. Dolby, Evelyn Duesterwald, David J. Piorkowski
Abstract
Aspects of the present invention disclose a method, computer program product, and system for detecting and mitigating adversarial virtual interactions. The method includes one or more processors detecting a user communication that is interacting with a virtual agent. The method further includes one or more processors determining a risk level associated with the detected user communication based on one or more actions performed by the detected user while interacting with the virtual agent. The method further includes one or more processors, in response to determining that the determined risk level associated with the detected user communication exceeds a risk level threshold, initiating a mitigation protocol on interactions between the detected user and the virtual agent, where the mitigation protocol is based on the actions performed by the detected user while interacting with the virtual agent.
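For illustration, a minimal Python sketch of the risk-threshold flow follows; the per-action risk weights, the threshold value, and the mitigation choices are assumptions made here, not values from the patent.

```python
# Illustrative sketch only; weights, threshold, and mitigations are assumed.

ACTION_RISK = {
    "rapid_fire_messages": 0.4,
    "prompt_injection_attempt": 0.5,
    "normal_question": 0.0,
}

RISK_THRESHOLD = 0.7

def risk_level(actions):
    # Aggregate per-action risk, capped at 1.0.
    return min(1.0, sum(ACTION_RISK.get(a, 0.1) for a in actions))

def mitigate(actions):
    # Stand-in mitigation protocol chosen from the observed actions,
    # e.g. hand off to a human or rate-limit the session.
    if "prompt_injection_attempt" in actions:
        return "escalate to human agent"
    return "rate-limit the session"

def handle_user_session(actions):
    if risk_level(actions) > RISK_THRESHOLD:
        return mitigate(actions)
    return "continue normal conversation"

if __name__ == "__main__":
    print(handle_user_session(["normal_question"]))
    print(handle_user_session(["rapid_fire_messages",
                               "prompt_injection_attempt"]))
```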
Filter for Harmful Training Samples in Online Learning Systems
Evelyn Duesterwald, Yiyun Chen, Michael Desmond, Harold L. Ossher, David J. Piorkowski
Abstract
A computing method receives a first labeled sample from an annotator. The method may determine a plurality of reference model risk scores for the first labeled sample, where each reference model risk score corresponds to an amount of risk associated with adding the first labeled sample to a respective reference model of a plurality of reference models. The method may determine an overall risk score for the first labeled sample based on the plurality of reference model risk scores. The method may further determine a probe for confirmation of the first labeled sample and a trust score for the annotator by sending the probe to one or more annotators. In response to determining the trust score for the annotator, the method may add the first labeled sample to a ground truth or reject the first labeled sample.
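The following Python sketch illustrates the described filtering flow under simple assumptions: risk is measured as disagreement with each reference model's prediction, and the annotator's trust score is the fraction of probe responses that confirm the label. The helper names and thresholds are hypothetical, not the patent's.

```python
# Minimal sketch of the filtering flow; risk heuristic, probing step, and
# thresholds are assumptions made for illustration.
import statistics

def reference_risk(sample, reference_model):
    # Risk of adding the sample to one reference model: how strongly the
    # sample's label disagrees with that model's prediction (0 or 1 here).
    return 0.0 if reference_model(sample["text"]) == sample["label"] else 1.0

def overall_risk(sample, reference_models):
    # Combine per-reference-model risk scores into one overall score.
    return statistics.mean(reference_risk(sample, m) for m in reference_models)

def probe_annotators(sample, annotators):
    # Send the sample out as a probe; the trust score is the fraction of
    # probe responses that agree with the original label.
    agreements = [a(sample["text"]) == sample["label"] for a in annotators]
    return sum(agreements) / len(agreements)

def filter_sample(sample, reference_models, annotators, ground_truth,
                  risk_threshold=0.5, trust_threshold=0.5):
    if overall_risk(sample, reference_models) <= risk_threshold:
        ground_truth.append(sample)          # low risk: accept directly
        return "accepted"
    trust = probe_annotators(sample, annotators)
    if trust >= trust_threshold:
        ground_truth.append(sample)          # probes confirm the label
        return "accepted after probe"
    return "rejected"

if __name__ == "__main__":
    ref_models = [lambda t: "spam" if "win" in t else "ham"]
    annotators = [lambda t: "ham", lambda t: "ham"]
    ground_truth = []
    sample = {"text": "win a free prize", "label": "ham"}
    print(filter_sample(sample, ref_models, annotators, ground_truth))
```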