Research Specialization

 

Analogical Reasoning

 

The LISA Model

A major focus of the Reasoning Lab is the role of analogy in thinking. Holyoak and his colleagues developed a method for experimentally studying the use of cross-domain analogies in problem solving, suitable for people ranging in age from preschool children to adults. Our research showed that different types of similarity have differential impact on retrieval versus mapping, and that analogical transfer enhances learning of new abstract concepts. Other work has demonstrated how analogy can be used to teach complex scientific topics by allowing transfer across different knowledge domains, and how it can serve as a powerful tool for persuasion in areas such as foreign policy. A detailed neural-network model of relational thought, LISA (Learning and Inference with Schemas and Analogies; Hummel & Holyoak, 2005), was developed in our lab. With analogy as its primary example, the LISA model has been used to simulate both normal human thinking and its breakdown in cases of brain damage.
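To give a concrete sense of what "mapping" involves, the sketch below poses the problem in a few lines of Python: given two tiny invented domains, it scores candidate object-to-object correspondences by how much relational structure they preserve. It is only a toy statement of the mapping problem, not the LISA model itself.

    # Toy illustration of analogical mapping (not the LISA model): propositions
    # are (relation, agent, patient) triples, and a candidate mapping is scored
    # by how many source propositions it places in correspondence with a target
    # proposition sharing the same relation. All domain content is invented.
    from itertools import permutations

    source = [("cause", "virus", "illness"), ("treat", "doctor", "illness")]
    target = [("cause", "bug", "crash"), ("treat", "programmer", "crash")]

    def mapping_score(pairing):
        """Count source/target proposition pairs whose relation matches and
        whose arguments correspond under the given object pairing."""
        return sum(
            1
            for rel_s, a_s, b_s in source
            for rel_t, a_t, b_t in target
            if rel_s == rel_t and pairing[a_s] == a_t and pairing[b_s] == b_t
        )

    source_objects = ["virus", "doctor", "illness"]
    target_objects = ["bug", "programmer", "crash"]

    best = max(
        (dict(zip(source_objects, perm)) for perm in permutations(target_objects)),
        key=mapping_score,
    )
    print(best)  # {'virus': 'bug', 'doctor': 'programmer', 'illness': 'crash'}

Exhaustive search over correspondences works only at this tiny scale; part of what a psychological model of analogy must explain is how people find good mappings without anything like brute-force search.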


Causal Induction

Immanuel Kant, 1724-1804

Historically, causality has been the domain of philosophers, from Aristotle through to Hume and Kant. The fundamental challenge since Hume is that causality per se is not directly given in the input. Nothing in the evidence available to our sensory system can assure us of a causal relation between, say, flicking a switch and the hallway light turning on. Causal knowledge has to emerge from noncausal input. Yet we routinely have strong convictions about causality.

David Hume, 1711-1776
(portrait by Allan Ramsay)

Cheng and her colleagues have developed a theory of how people (and non-human animals) discover new causal relations. Her power PC theory (short for a causal power theory of the probabilistic contrast model) starts with the Humean constraint that causality can only be inferred, using observable evidence as input to the reasoning process. She combines that constraint with Kant's postulate that reasoners have a priori notions that types of causal relations exist in the universe. This unification can best be illustrated with an analogy. According to Cheng, the relation between a causal relation and a covariation is like the relation between a scientific theory and a model. Scientists postulate theories (involving unobservable entities) to explain models (i.e., observed regularities or laws); the kinetic theory of gases, for instance, is used to explain Boyle's law. Likewise, a causal relation is the unobservable entity that reasoners strive to infer in order to explain observable regularities between events.
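To make this concrete: in the power PC theory, the generative power of a candidate cause c for an effect e is estimated from two observable quantities, P(e|c) and P(e|not-c), as their contrast divided by 1 - P(e|not-c). The sketch below, a minimal Python illustration rather than code from the lab, computes these estimates under the assumptions on which the theory's derivation rests (for example, that the candidate cause occurs independently of the unobserved background causes).

    # Sketch of the causal-power estimates associated with the power PC theory.
    # Assumes the candidate cause and the unobserved background causes occur
    # independently, and that the background only produces (never prevents) e.

    def generative_power(p_e_given_c, p_e_given_not_c):
        """Estimated probability that the candidate, when present, produces e."""
        delta_p = p_e_given_c - p_e_given_not_c       # probabilistic contrast
        return delta_p / (1.0 - p_e_given_not_c)      # correct for the base rate of e

    def preventive_power(p_e_given_c, p_e_given_not_c):
        """Estimated probability that the candidate, when present, prevents e."""
        return (p_e_given_not_c - p_e_given_c) / p_e_given_not_c

    # The same contrast of 0.5 implies different causal powers, depending on how
    # often the effect already occurs without the candidate cause.
    print(generative_power(0.75, 0.25))   # about 0.67
    print(generative_power(0.50, 0.00))   # 0.5

Note that when the effect already occurs every time without the candidate (P(e|not-c) = 1), the generative estimate is undefined; the theory treats this ceiling case as one in which no conclusion about a generative cause can be drawn.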

The power PC theory explains many phenomena observed in both causal reasoning studies with humans and classical conditioning experiments with animals. The theory provides a unified account of a wide range of causal judgments. The Reasoning Lab is currently extending the theory to explain how causal knowledge influences the formation of new categories.


Decision Making

The Reasoning Lab is investigating the mental processes that people use to make decisions that require the integration of multiple inferences. Tasks such as judging complex legal cases, deciding which job offer to accept, or choosing which candidate to support in an election involve sets of inferences that tend to be ambiguous, contradictory, and complex. Holyoak and Professor Dan Simon of the School of Law, University of Southern California, have examined decision making in a laboratory analog of judicial decision making. They found that the decision-making process was accompanied by a systematic shift in the evaluation of the inferences toward a pattern of coherence with the emerging decision. Assessments of the inferences increasingly spread apart, with those supporting the chosen option growing stronger while those supporting the rejected alternative weakened. Holyoak and Simon interpreted their findings in terms of a decision-making model in which options and inferences are represented in a connectionist network that operates by parallel constraint satisfaction.
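The sketch below illustrates the general behavior of such a parallel constraint satisfaction network in Python; the particular nodes, weights, and update rule are illustrative assumptions, not the published model. Nodes stand for two decision options and three supporting inferences, symmetric links encode support and conflict, and all activations are updated in parallel until the network settles.

    # Minimal sketch of a parallel constraint satisfaction network (illustrative
    # assumptions only). Positive links encode mutual support; the negative link
    # makes the two decision options compete.
    import numpy as np

    nodes = ["option_A", "option_B", "inference_1", "inference_2", "inference_3"]
    n = len(nodes)

    W = np.zeros((n, n))
    def link(i, j, w):
        W[i, j] = W[j, i] = w          # constraints are symmetric

    link(0, 1, -1.0)                   # the two options are mutually exclusive
    link(0, 2, +0.5)                   # inferences 1 and 2 support option A
    link(0, 3, +0.5)
    link(1, 4, +0.5)                   # inference 3 supports option B

    a = np.full(n, 0.01)               # weak initial activations
    a[2] = 0.3                         # slightly stronger initial evidence for A

    # Repeatedly update every activation from its linked neighbors, with decay,
    # keeping values in [-1, 1], until the network settles.
    for _ in range(200):
        net = W @ a
        a = np.clip(a + 0.1 * net - 0.05 * a, -1.0, 1.0)

    print({name: round(float(v), 2) for name, v in zip(nodes, a)})

As the network settles, the activations spread apart in just the way described above: the favored option and the inferences linked to it end up near the ceiling, while the rejected option and its supporting inference are driven toward the floor.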
