In 2011, a computer (Watson) outplayed two human Jeopardy champions. In 1997, the chess computer Deep Blue defeated world chess champion Garry Kasparov. In both cases, the computer "solved" the game, finding the right questions or good moves, in a very different way than humans do. Defeating humans in these domains took years of research and programming by teams of engineers, and even then computers could compete with far more limited humans only by exploiting huge advantages in speed, efficiency, memory, and precision.
What allows human experts to match wits with custom-designed computers equipped with tremendous processing power? Chess players cannot evaluate all of the possible moves, the responses to those moves, the responses to the responses, etc. Even if they could evaluate all of the possible alternatives several moves deep, they would still need to remember which moves they had evaluated, which ones led to the best outcomes, and so on. Computers expend no effort remembering possibilities they have already rejected or revisiting options that proved unfruitful.
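To get a rough sense of why exhaustive look-ahead is out of reach, here is a back-of-the-envelope sketch in Python (a toy illustration; the figure of roughly 35 legal moves per position is a commonly cited approximation, assumed here rather than taken from the studies discussed below):

```python
# Toy illustration: how fast the chess game tree grows with search depth.
# The branching factor of roughly 35 legal moves per position is an assumed,
# commonly cited approximation, used here for illustration only.

BRANCHING_FACTOR = 35

def positions_at_depth(depth: int, branching_factor: int = BRANCHING_FACTOR) -> int:
    """Positions an exhaustive search would visit when looking `depth` half-moves ahead."""
    return branching_factor ** depth

for plies in range(1, 7):
    print(f"{plies} half-moves ahead: ~{positions_at_depth(plies):,} positions")
```

Even six half-moves ahead, an exhaustive search would face well over a billion positions, which is why both human and machine play depend on ruling most of them out.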
This question, how chess experts evaluate positions to find the best move, has been studied for decades, dating back to the groundbreaking work of Adriaan de Groot and later to work by William Chase and Herbert Simon. De Groot interviewed several chess players as they evaluated positions, and he argued that experts and weaker players tended to "look" about the same number of moves ahead and to evaluate similar numbers of moves with roughly similar speed. The relatively small differences between experts and novices suggested that their advantages came not from brute-force calculation ability but from something else: knowledge. According to de Groot, the core of chess expertise is the ability to recognize a huge number of chess positions (or parts of positions) and to derive moves from them. In short, experts' greater efficiency came not from evaluating more outcomes, but from considering only the better options. [Note: Some of the details of de Groot's claims, which he made before the appropriate statistical tests were in widespread use, did not hold up to later scrutiny; experts do consider somewhat more options, look a bit deeper, and process positions faster than less expert players (Holding, 1992). But de Groot was right about the limited nature of expert search and the importance of knowledge and pattern recognition in expert performance.]
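As a loose way to picture de Groot's claim, the sketch below treats expert knowledge as a lookup from recognized configurations to a short list of candidate moves. The patterns and moves are invented placeholders, not real chess theory and not anything from de Groot's interviews.

```python
# Toy sketch of the knowledge-based account described above: a familiar
# pattern is recognized and immediately suggests a handful of candidate
# moves, so only those need to be evaluated. Patterns and moves below are
# invented placeholders for illustration.

pattern_to_candidates = {
    "isolated queen pawn": ["push the d-pawn", "double rooks on the d-file"],
    "weak back rank": ["seize the open file", "line up queen and rook"],
}

def candidate_moves(recognized_patterns, all_legal_moves):
    """Return only moves suggested by recognized patterns; fall back to
    every legal move when nothing familiar is recognized."""
    suggestions = [move
                   for pattern in recognized_patterns
                   for move in pattern_to_candidates.get(pattern, [])]
    return suggestions or all_legal_moves

# A player who recognizes one pattern weighs 2 candidate moves; a player who
# recognizes nothing is left to sift through all ~35 legal moves.
print(candidate_moves(["weak back rank"],
                      all_legal_moves=[f"move {i}" for i in range(35)]))
```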
In de Groot's most famous demonstration, he showed several players images of chess positions for a few seconds each and asked the players to reconstruct the positions from memory. The experts made relatively few mistakes even though they had seen each position only briefly. Years later, Chase and Simon replicated de Groot's finding with another expert (a master-level player) as well as an amateur and a novice. They also added a critical control: The players viewed both real chess positions and scrambled positions that included pieces in implausible and even impossible locations. The expert excelled with the real positions but performed no better than the amateur and the novice with the scrambled positions (later studies showed that experts can perform slightly better than novices for random positions too if given enough time; Gobet & Simon, 1996). The expert advantage apparently comes from familiarity with real chess positions, which allows more efficient encoding or retrieval of those positions.
Chase and Simon recorded their expert performing the chess reconstruction task and found that he placed the pieces on the board in spatially contiguous chunks, with pauses of a couple of seconds after he reproduced each chunk. This finding has become part of the canon of cognitive psychology: People can increase their effective working memory capacity by grouping otherwise discrete items into larger units in memory. In that way, we can encode more information into the same limited number of memory slots.
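Here is a toy sketch of that idea, with groupings invented purely for illustration (not taken from Chase and Simon's recordings): the same ten piece placements cost a novice ten separate memory items but an expert only two familiar chunks.

```python
# Toy illustration of chunking: the same ten piece placements, stored either
# as ten separate items (novice) or as two familiar configurations (expert).
# The groupings are invented for illustration.

novice_items = [
    "white king g1", "white rook f1", "white pawn f2", "white pawn g2",
    "white pawn h2", "black king g8", "black rook f8", "black pawn f7",
    "black pawn g7", "black pawn h7",
]

expert_chunks = {
    "white castled kingside": ["white king g1", "white rook f1",
                               "white pawn f2", "white pawn g2", "white pawn h2"],
    "black castled kingside": ["black king g8", "black rook f8",
                               "black pawn f7", "black pawn g7", "black pawn h7"],
}

print(f"Novice holds {len(novice_items)} separate items in memory")
print(f"Expert holds {len(expert_chunks)} chunks, each unpacking to several pieces")
```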
In 1998, Chris Chabris and I invited two-time US Champion and International Grandmaster Patrick Wolff (a friend of Chris’s) to the lab and asked him to do the chess position reconstruction task. Wolff viewed each position (on a printed index card) for five seconds and then immediately reconstructed it on a chess board. After he was satisfied with his work, we gave him the next card. At the end of the study, after he had recalled five real positions and five scrambled positions, we asked him to describe how he did the task.
The video below shows his performance and his explanations (Chris is the one handing him the cards and holding the stopwatch—I was behind the camera). Like other experts who have been tested, Wolff rarely made mistakes in reconstructing positions, and when he did, the errors were trivial—they did not alter the fundamental meaning or structure of the position. Watch for the interesting comments at the end when Wolff describes why he was focused on some aspects of a position but not others.
HT to Chris Chabris for comments on a draft of this post
Sources cited:
For an extended discussion of chess expertise and the nature of expert memory, see Christopher Chabris’s dissertation: Chabris, C. F. (1999). Cognitive and neuropsychological mechanisms of expertise: Studies with chess masters. Doctoral Dissertation, Harvard University. http://en.scientificcommons.org/43254650
Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55–81.
de Groot, A. D. (1946). Het denken van de schaker [The thought of the chess player]. Amsterdam: North-Holland. (Updated translation published as Thought and choice in chess, Mouton, The Hague, 1965; corrected second edition published in 1978.)
Holding, D. H. (1992). Theories of chess skill. Psychological Research, 54(1), 10–16.
Gobet, F., & Simon, H. A. (1996). Recall of rapidly presented random chess positions is a function of skill. Psychonomic Bulletin & Review, 3(2), 159–163.