IBM Research India - Neuro-Symbolic AI
Undoubtedly, Deep Learning (DL) based models are the dominant and crucial components behind the remarkable success that AI has earned in the past few years - be it Foundation Models for Natural Language (NL) text processing, Convolutional Neural Networks for image/video processing, or Graph Neural Networks for structured data processing. The secret sauce behind the success of DL models is their ability to extract information from raw data as statistical patterns, a capability that took several decades of research to mature. Despite their unprecedented ability to represent input objects in a noise-robust manner, they suffer from several limitations that prevent their use in mission-critical applications: i) their black-box nature, ii) hunger for data and compute, iii) poor domain generalization, iv) inability to ingest domain knowledge, and v) absence of explicit logical reasoning.
On the other hand, the classical field of Symbolic AI stands in complete contrast to Deep Learning. Symbolic AI methods i) require much less data, ii) generalize well across domains, iii) are interpretable, and iv) perform human-like formal reasoning. However, these methods are not as good at handling noise and variation in the input data. They also require substantial hand-tuning, which makes them hard to use for complex problems. Recognizing the complementary strengths and weaknesses of these two fields - Symbolic AI and Deep Learning - a recent trend has been to fuse ideas from both. The resulting field is popularly referred to as "Neuro-Symbolic AI (NSAI)".
Neuro-Symbolic AI aims at developing models and techniques that combine deep networks, which are good at extracting statistical patterns, with symbolic representation and logical reasoning. In other words, NSAI techniques extract features from data using DL approaches and then manipulate these features using classical Symbolic AI approaches. The good news is that such models require much less data and can also provide step-by-step reasoning behind their decisions/conclusions at inference time. Another goal of NSAI techniques is to ingest domain knowledge supplied in different forms such as knowledge graphs, relational databases, domain-specific rules, etc. This enables them to generalize to unseen domains with little or no further training, i.e., zero- or few-shot domain transfer.
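The extract-then-reason pattern above can be illustrated with a minimal, self-contained sketch. This is purely a toy example, not any particular IBM system: the "neural" component is stood in by a simple character-overlap scorer that links a noisy surface mention to a catalog entity, and the symbolic component is a forward-chaining rule engine that derives new facts explicitly, so every conclusion carries an inspectable derivation.

```python
# Toy neuro-symbolic pipeline (illustrative only):
#  1. a "neural" linker maps a noisy mention to a known entity
#     (here a stand-in bag-of-characters similarity, not a real model);
#  2. a symbolic engine forward-chains Horn-style rules over the
#     extracted facts until no new fact can be derived.

def neural_link(mention, entities):
    """Stand-in for a learned entity linker: pick the catalog entity
    whose character set best overlaps the mention (Jaccard score)."""
    def sim(a, b):
        sa, sb = set(a.lower()), set(b.lower())
        return len(sa & sb) / len(sa | sb)
    return max(entities, key=lambda e: sim(mention, e))

def forward_chain(facts, rules):
    """Symbolic component: apply rules of the form
    (p1, x, y) AND (p2, y, z) => (q, x, z) until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (p1, p2), q in rules:
            for (r1, x, y) in list(facts):
                for (r2, y2, z) in list(facts):
                    if r1 == p1 and r2 == p2 and y == y2:
                        new_fact = (q, x, z)
                        if new_fact not in facts:
                            facts.add(new_fact)
                            changed = True
    return facts

# Neural step: resolve the noisy surface form "alicia" to an entity.
entities = ["Alice", "Bob", "Carol"]
linked = neural_link("alicia", entities)

# Symbolic step: reason over the linked facts with an explicit rule,
# parent(x, y) AND parent(y, z) => grandparent(x, z).
facts = {("parent", linked, "Bob"), ("parent", "Bob", "Carol")}
rules = [(("parent", "parent"), "grandparent")]
derived = forward_chain(facts, rules)
```

Because the reasoning step is an explicit rule application rather than an opaque forward pass, one can read off exactly which facts and which rule produced each derived conclusion - the interpretability property the paragraph above describes.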
At IBM Research, we have a world-class global team researching this exciting emerging field of NSAI, continuously pushing its boundaries and setting new records. IBM Research India is a key player in this space, with an outstanding team exploring the full breadth of NSAI. Driven by the specific needs of our business and customers, we focus mainly on two kinds of input data: i) natural language text documents and ii) source code in programming languages. Our long-term vision is to build a suite of NSAI techniques that allow machines to understand as well as generate/translate not only natural languages but also programming languages. For each of these two data modalities, we are interested in building models that can do high-quality semantic parsing and generation. We also require our solutions to ingest domain knowledge/constraints and perform explicit logical reasoning when shifting to new domains. This helps in adapting solutions to a new domain with very minimal (few-shot) or no (zero-shot) fine-tuning.
As far as code is concerned, an emerging field called AI4Code aims at solving problems such as automatic code completion, code retrieval, source-to-source code translation, code summarization, etc. Some of the work in our group directly advances the state of the art in several of these AI4Code tasks. On the NL side, we are developing NSAI solutions for important NLP problems such as Entity and Relation Linking, Schema Linking, Reading Comprehension Question Answering, Knowledge Graph Question Answering, Table Question Answering, etc. For these problems, we are building NSAI solutions that match the performance of state-of-the-art DL models with, say, one-tenth of the training data, adapt to unseen domains with minimal or no fine-tuning, and remain modular and interpretable by design.
The central question at the core of almost all these problems is: "what is the right representation of my data as well as the given domain knowledge?" To answer this, another major effort in our group is to revisit, from first principles, the problem of building symbolic and distributed representations of data and knowledge. Some of our recent attempts in this direction have leveraged quantum logic theory to build distributed representations of symbolic knowledge, in order to carry out soft reasoning during question answering over knowledge graphs.