Artificial Intelligence - Seminal Contributions to AI


IBM has been a leader in AI research since the field's early days in the 1950s, when Arthur Samuel developed a checkers player that learned from experience. This work was one of the earliest and most influential examples of machine learning. Forty years later, IBM Research's chess-playing program Deep Blue made history when it beat Garry Kasparov, becoming the first chess-playing program to defeat a reigning world champion. We continue to take on new challenges, including Jeopardy! and Go.


Arthur Samuel's checkers player (1950s)

Samuel wrote what appears to be the first self-learning program, inventing several seminal techniques in rote learning and generalization learning. He also used what is now known as alpha-beta pruning, which avoids exploring moves that can be proven suboptimal. The technique was independently reinvented by John McCarthy, Allen Newell and Herbert Simon, Alexander Brudno, and others.
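The pruning idea can be shown in a minimal sketch. The toy game tree, leaf scores, and the `alphabeta` function below are illustrative assumptions, not Samuel's actual checkers code:

```python
# A minimal sketch of alpha-beta pruning on a hand-built game tree.
# Inner lists are internal nodes; numbers are static leaf evaluations.

def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that
    provably cannot affect the final decision."""
    if isinstance(node, (int, float)):      # leaf: a static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # remaining siblings are suboptimal
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 3
```

Once the first branch guarantees the maximizer a value of 3, the remaining branches are cut off as soon as their minimizing player can force a value of 3 or less.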

"He [Samuel] completed the first checker program on the 701, and when it was about to be demonstrated, Thomas J. Watson Sr., the founder and President of IBM, remarked that the demonstration would raise the price of IBM stock 15 points. It did.

In 1961, when Ed Feigenbaum and Julian Feldman were putting together the first AI anthology, Computers and Thought, they asked Samuel to give them, as an appendix to his splendid paper on his checker player, the best game the program had ever played. Samuel used that request as an opportunity to challenge the Connecticut state checker champion, the number four ranked player in the nation. Samuel's program won. The champion provided annotation and commentary to the game when it was included in the volume."

From In Memoriam: Arthur Samuel: Pioneer in Machine Learning, by John McCarthy and Edward A. Feigenbaum

Reference: Samuel, Arthur L. Some Studies in Machine Learning Using the Game of Checkers, IBM Journal of Research and Development 3 (3): 210-229, 1959.

IBM RS 1 Robotic System (1980s)

In 1973, IBM researchers built their first robot. By the following year they had programmed it to assemble 22 of the 25 pieces that form the rail support in a then-current model of IBM typewriter. By 1983, IBM had already publicly demonstrated and announced two robots: the low-cost IBM 7535, adapted from a Japanese-made device, and the advanced RS 1, which was born at the company's Thomas J. Watson Research Center in Yorktown Heights, N.Y. The RS 1 had six degrees of freedom; its arm could move at speeds of up to 40 inches per second, performing a variety of precision assembly, parts-insertion, and other intricate manufacturing operations. The software permitted the RS 1 to respond moment-by-moment to changes in its work environment. For example, it could automatically realign a misfed part in order to complete a task.

Reference: Charles A. Pratt. Robotics at IBM, SIMULATION 39: 60-63, 1982.

Deep Blue -- Computer Chess (1997)

In 1958, Herb Simon, AI pioneer and future Nobel Prize winner, predicted that "within ten years, a computer will be the world's chess champion." It was not until 1997, almost forty years after this prediction, that the IBM chess machine Deep Blue defeated World Chess Champion Garry Kasparov in a six-game match. The delay in achieving Simon's prediction was in part due to the need for a sufficiently powerful machine to help deal with the combinatorial complexity of chess. Deep Blue employed 480 single-chip chess search engines that could, in parallel, search more than 100 million chess positions per second. But speed alone was not enough. The development of algorithms that focused this computing power in an intelligent way, combined with a complex positional evaluation function, enabled Deep Blue to be successful. While it is now routine for super-human chess programs to train and assist human Grandmasters, Deep Blue's 1997 victory was a milestone in the field of Artificial Intelligence. --Murray Campbell
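The notion of a positional evaluation function that search then amplifies can be illustrated with a toy sketch. The weights, features, and the `evaluate` function below are illustrative assumptions; Deep Blue's real evaluator combined thousands of hand-tuned positional features:

```python
# Toy sketch of a weighted positional evaluation of the kind a chess
# search combines with alpha-beta. Deep Blue's actual evaluation was
# vastly more complex; this only shows the shape of the idea.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(white_pieces, black_pieces, white_mobility, black_mobility):
    """Score a position from White's point of view: material balance
    plus a small bonus for mobility (number of legal moves)."""
    material = sum(PIECE_VALUES[p] for p in white_pieces) \
             - sum(PIECE_VALUES[p] for p in black_pieces)
    mobility = 0.1 * (white_mobility - black_mobility)
    return material + mobility

# White is up a knight but is slightly more cramped:
print(evaluate("PPPPNNBR", "PPPPNBR", 20, 25))
```

At the leaves of its search tree, such a function converts a board into a single number that the minimax machinery can compare and propagate.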

Reference: M. Campbell, A. Hoane, and F. Hsu. Deep Blue. Artificial Intelligence, 134:57-83, 2002.


TD-Gammon -- Reinforcement Learning (early 1990s)

Through his pioneering work on a self-teaching backgammon program called TD-Gammon in the early 1990s, IBM Researcher Gerry Tesauro demonstrated that reinforcement learning (RL), hitherto regarded as a mere theoretical curiosity, could achieve spectacular success in complex real-world problems. The ensuing intense interest led to RL becoming one of the most important areas of machine learning research, particularly for tasks requiring automated decision-making. Using "temporal difference" RL combined with a neural network, TD-Gammon played millions of games against itself, in the process developing a level of play on par with world-champion human backgammon players. Considering that it started from a completely random initial strategy, used only the raw board state (with no hand-crafted features), and used only the binary win/loss signal at the end of the game to guide its learning, this result shocked the machine learning world. It inspired other researchers to develop a vastly improved theory of RL combined with function approximation, as well as dozens of subsequent applications of RL in real-world domains including elevator control, production scheduling, network routing, financial trading, spoken dialog systems, power plant control, and video game AI. TD-Gammon also rocked the backgammon world: a former top-10 player called it "the biggest breakthrough in the history of backgammon". The program's innovative style of play sparked a revolution in the concepts and strategies used by human expert backgammon players, and ushered in a new generation of very strong computer backgammon players. Today, 15 years and over 2000 paper citations later, TD-Gammon is widely taught in the AI academic curriculum, and is prominently featured in the leading AI textbooks. --Jeff Kephart
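The core temporal-difference idea, which TD-Gammon applied through a neural network over millions of self-play games, can be sketched in miniature: after each transition, nudge the value estimate of the previous state toward the estimate of the next state, or toward the final win/loss signal. The random-walk task, the function name `td0_random_walk`, and all parameters below are illustrative assumptions, not TD-Gammon's actual setup:

```python
import random

# Tabular TD(0) on a 5-state random walk: states 1..5, with terminal
# states 0 and 6; terminating on the right yields reward 1, else 0.
# TD-Gammon used the same bootstrapping idea, but with TD(lambda) and
# a neural network over raw backgammon board features.

def td0_random_walk(episodes=5000, alpha=0.1, seed=0):
    rng = random.Random(seed)
    V = [0.0] * 7                       # value estimates for states 0..6
    for _ in range(episodes):
        s = 3                           # every episode starts in the middle
        while s not in (0, 6):
            s2 = s + rng.choice((-1, 1))
            reward = 1.0 if s2 == 6 else 0.0
            target = reward if s2 in (0, 6) else V[s2]
            V[s] += alpha * (target - V[s])   # the TD(0) update
            s = s2
    return V[1:6]

values = td0_random_walk()
print([round(v, 2) for v in values])  # roughly [0.17, 0.33, 0.5, 0.67, 0.83]
```

The learned values approach the true win probabilities 1/6 through 5/6, using only the terminal win/loss signal -- the same kind of sparse feedback TD-Gammon learned from.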

Listen to Gerry's invited talk at the Multidisciplinary Symposium on Reinforcement Learning workshop at ICML-2009, "50 years of RL in games".

Read Rich Sutton's summary of TD-Gammon.


Selected references:

  • Gerry Tesauro. Temporal difference learning and TD-Gammon, Communications of the ACM, Vol. 38, No. 3, 1995.
  • Gerry Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play, Neural Computation, MIT Press, 1994.

Infomax Principle for Neural Network Learning

Ralph Linsker proposed and developed the infomax principle for unsupervised learning in neural networks, starting in 1987. The principle prescribes that each processing stage should learn a function that maximizes the mutual information between input and output patterns in the presence of noise and design constraints. It was motivated by his discovery that a standard (Hebbian) learning rule, combined with locally correlated random activity, causes a model visual-system network to automatically form "neurons" that respond selectively to light-dark edges with a preferred orientation, and to organize a layer of these neurons in a particular way. That work answered a 25-year-old mystery: how the orientation-selective response patterns that Hubel and Wiesel had found experimentally in mammalian visual cortex in the 1960s could arise.
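The "structure from correlated random activity" mechanism can be sketched in miniature. The sketch below uses Oja's normalized variant of the Hebbian rule on a two-input toy problem; the function name and all parameters are illustrative assumptions, and Linsker's multilayer model is far richer:

```python
import random

# A plain Hebbian rule (Oja's normalized variant, to keep the weights
# bounded) driven by nothing but correlated random inputs. The weight
# vector converges toward the inputs' principal correlation direction --
# the linear filter that maximizes output variance, which for Gaussian
# inputs is also the infomax solution for a single linear unit.

def hebbian_pca(steps=20000, eta=0.01, seed=1):
    rng = random.Random(seed)
    w = [0.5, 0.5]
    for _ in range(steps):
        # Two input channels sharing a common source: correlated noise.
        common = rng.gauss(0, 1)
        x = [common + 0.3 * rng.gauss(0, 1),
             common + 0.3 * rng.gauss(0, 1)]
        y = w[0] * x[0] + w[1] * x[1]           # linear "neuron" output
        # Oja's rule: Hebbian term y*x plus a decay that normalizes w.
        w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

w = hebbian_pca()
# With equally correlated inputs, the learned filter settles near the
# symmetric unit vector (1/sqrt(2), 1/sqrt(2)).
print([round(wi, 2) for wi in w])
```

No teacher and no structured stimuli are involved: the correlations in the random input alone shape the learned filter, which is the core of Linsker's observation.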

The infomax principle addresses a general feature of biological information processing -- the brain's ability to learn automatically to recognize visual, auditory, and other features present in the environment. During the past two decades, it has been influential in the field of neural computation, contributing to a range of advances by researchers in neuroscience and in artificial pattern recognition, and to new signal processing techniques including infomax-based independent components analysis (ICA).

Selected references:

  • Ralph Linsker. Perceptual Neural Organization: Some Approaches Based on Network Models and Information Theory, Annual Review of Neuroscience, Vol. 13: 257-281, 1990.
  • Ralph Linsker. Self-Organization in a Perceptual Network, Computer, Vol. 21, No. 3, pp. 105-117, March 1988.

Watch Ralph's video lecture at the Almaden Institute for Cognitive Computing, 2006.

Automatic Question Answering (DeepQA)

Cognitive Computing

Reference: Rajagopal Ananthanarayanan, Steven K. Esser, Horst D. Simon, and Dharmendra S. Modha. The Cat is Out of The Bag: Cortical Simulations with 10^9 neurons and 10^13 synapses, Supercomputing 09: Proceedings of the ACM/IEEE SC2009 Conference on High Performance Networking and Computing, 2009. Winner of the ACM Gordon Bell Prize.